3 August 2017

What I learned from .019: “Growing Object-Oriented Software, Guided by Tests” by Steve Freeman and Nat Pryce.


I am in the middle of mastering TDD. It gives me confidence in my code and lets me change code without being surprised when things unexpectedly break. It does slow down writing code, and you need to keep the number of tests in balance and avoid duplicate tests, because those make changes time-consuming. There are still aspects of TDD that confuse me, so I needed a book that shows TDD from the start of a project through the stage where you just keep adding new features and improving existing ones.
Guess what? There is a book for that. It is called “Growing Object-Oriented Software, Guided by Tests” by Steve Freeman and Nat Pryce.
This book is great if you know the basics of TDD and want to understand how to use the TDD methodology from project start to the maintenance phase, based on a real-world example.
It helps you shape your TDD skill set. I learnt a lot, but below you can find my favourite takeaways from this book:

  1. Test data builder. This produces more focused tests with less code. We can name the builder after the features that are common, and the domain objects after their differences. We could add a factory method that returns a copy of the builder with its current state:
    1. Order orderWithDiscount = hatAndCape.but().withDiscount(0.10).build();
       Order orderWithGiftVoucher = hatAndCape.but().withGiftVoucher("abc").build();
For complex setups, the safest option is to make the “with” methods functional and have each one return a new copy of the builder instead of itself.
I really like this pattern in larger projects because it helps me create the context I need for a test without noise (such as setting up things the test never uses).
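The pattern can be sketched in plain Java. The Order and OrderBuilder below are my own hypothetical reconstruction of the idea, not the book's code:

```java
// Hypothetical domain object for illustration.
class Order {
    final String item;
    final double discount;
    final String giftVoucher;

    Order(String item, double discount, String giftVoucher) {
        this.item = item;
        this.discount = discount;
        this.giftVoucher = giftVoucher;
    }
}

// Test data builder with sensible defaults. Each "with" method is
// functional: it returns a new copy of the builder instead of mutating
// this one, and but() makes the copying explicit at the call site.
class OrderBuilder {
    private final String item;
    private final double discount;
    private final String giftVoucher;

    OrderBuilder() { this("hat and cape", 0.0, null); }

    private OrderBuilder(String item, double discount, String giftVoucher) {
        this.item = item;
        this.discount = discount;
        this.giftVoucher = giftVoucher;
    }

    OrderBuilder but() { return new OrderBuilder(item, discount, giftVoucher); }
    OrderBuilder withDiscount(double discount) { return new OrderBuilder(item, discount, giftVoucher); }
    OrderBuilder withGiftVoucher(String voucher) { return new OrderBuilder(item, discount, voucher); }
    Order build() { return new Order(item, discount, giftVoucher); }
}
```

Because every “with” method returns a fresh builder, a shared fixture like hatAndCape can never be corrupted by one test's customisations.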
  1. Logging.
    1. Support logging (error and warn) is part of the user interface of the application. These messages are intended for the support staff who handle incidents, and should be used to diagnose a failure or monitor the progress of the running system.
    2. Diagnostic logging (info, debug and trace) is infrastructure for programmers. These messages should not be turned on in production because they’re intended to help the programmers understand what’s going on inside the system they’re developing.
It is an interesting approach which I would like to try in one of my future projects. Proper logging is critical for diagnosing and fixing problems.
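One way to realise that split with java.util.logging (the logger names and the PaymentService example are my own sketch, not the book's):

```java
import java.util.logging.Logger;

// Hypothetical service showing the two kinds of logging side by side.
class PaymentService {
    // Support log: part of the application's user interface, read by support staff.
    private static final Logger SUPPORT = Logger.getLogger("support.payments");
    // Diagnostic log: for developers; not meant to be enabled in production.
    private static final Logger DIAGNOSTIC = Logger.getLogger("diagnostic.payments");

    boolean charge(String account, double amount) {
        DIAGNOSTIC.fine(() -> "charging " + account + ", amount=" + amount);
        if (amount <= 0) {
            SUPPORT.warning("rejected non-positive charge for account " + account);
            return false;
        }
        // ... perform the charge ...
        return true;
    }
}
```

Keeping the two under separate logger name prefixes means production configuration can enable support.* while silencing diagnostic.* entirely.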
  1. Interface and Protocol. An interface describes whether two components will fit together, while a protocol describes whether they will work together.
  2. public static void main(String... args) throws Exception {
Main main = new Main();
XMPPAuctionHouse auctionHouse =
I like the concept of using named constants to describe the elements of args[].
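The idea is that indexes into args[] get meaningful names. A minimal sketch (the constant names follow the book's Main class, but the method around them is my own invention):

```java
// Named indexes document what each command-line argument means,
// instead of scattering bare 0, 1, 2, 3 through the code.
class SniperArgs {
    static final int ARG_HOSTNAME = 0;
    static final int ARG_USERNAME = 1;
    static final int ARG_PASSWORD = 2;
    static final int ARG_ITEM_ID  = 3;

    // Hypothetical helper: turn the raw args into a readable description.
    static String describe(String[] args) {
        return "connect to " + args[ARG_HOSTNAME]
                + " as " + args[ARG_USERNAME]
                + " for item " + args[ARG_ITEM_ID];
    }
}
```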

  1. Keep the code as simple as possible, so it’s easier to understand and modify. Developers spend far more time reading code than writing it, so readability is what we should optimise for. Write the Test That You’d Want to Read.
Easier said than done, but I am getting better and better at this, and it is a mantra which all developers should repeat over and over again.
  1. “Train wreck” code is bad because it lowers readability. For example: master.getModelisable().getDockablePanel().getCustomizer().getSaveItem().setEnabled(Boolean.FALSE.booleanValue()); should instead be master.allowSavingOfCustomisations();
  2. Values and Objects. It’s important to distinguish between values that model unchanging quantities or measurements, and objects that have an identity. Values are immutable, so they’re simpler and have no meaningful identity; objects have state, so they have identity and relationships with each other.
    1. Values are immutable instances that model fixed quantities. They have no individual identity, so two value instances are effectively the same if they have the same state. This means it makes no sense to compare the identity of two values; doing so can cause subtle bugs. Think of the different ways of comparing two copies of new Integer(999): that’s why we’re taught to use string1.equals(string2) in Java rather than string1 == string2.
    2. Objects, on the other hand, use the mutable state to model their behaviour over time.
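The Integer comparison mentioned above is easy to demonstrate (a minimal sketch; new Integer(...) is deprecated in modern Java but makes the point clearly):

```java
class ValueIdentityDemo {
    public static void main(String[] args) {
        // Two distinct instances carrying the same value.
        Integer a = new Integer(999);
        Integer b = new Integer(999);

        System.out.println(a == b);      // false: compares object identity
        System.out.println(a.equals(b)); // true: compares the values' state
    }
}
```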
  3. As the code scales up, the only way we can continue to understand and maintain it is by structuring the functionality into objects, objects into packages, packages into programs, and programs into systems. We use two principal heuristics to guide this structuring:
    1. Separation of concerns. When we have to change the behaviour of a system, we want to change as little code as possible. If all the relevant changes are in one area of code, we don’t have to hunt around the system to get the job done. Because we cannot predict when we will have to change any particular part of the system, we gather together code that will change for the same reason.
    2. Higher levels of abstraction. The only way for humans to deal with complexity is to avoid it, by working at higher levels of abstraction. We can get more done if we program by combining components of useful functionality rather than manipulating variables and control flow; that’s why most people order food from a menu in terms of dishes, rather than detail the recipes used to create them.
  4. UNUSED_CHAT is a meaningful name for a constant that is defined as null.
  5. Keep the Code Compiling. Try to minimise the time when we have code that does not compile by keeping changes incremental. When we have compilation failures, we can’t be quite sure where the boundaries of our changes are, since the compiler can’t tell us. This, in turn, means that we can’t check into our source repository, which we like to do often. The more code we have open, the more we have to keep in our heads which, ironically, usually means we move more slowly. TDD shows how fine-grained our development steps can be.
  6. Where do we start when we have to write a new class or feature? With “the simplest thing that could possibly work”, though simple should not be interpreted as simplistic. We prefer to start by testing the simplest success case. Once that’s working, we’ll have a better idea of the real structure of the solution and can prioritise between handling any possible failures we noticed along the way and further success cases.
  7. From the authors’ experience, when code is difficult to test, the most likely cause is that the design needs improving. The authors value code that is easy to maintain over code that is easy to write.
  8. Encapsulation vs Information hiding.
  9. Encapsulation ensures that the behaviour of an object can only be affected through its API. Information hiding conceals how an object implements its functionality behind the abstraction of its API.
  10. Many object-oriented languages support encapsulation by providing control over the visibility of an object’s features to other objects, but that’s not enough. Objects can break encapsulation by sharing references to mutable objects, an effect known as aliasing. Aliasing is essential for conventional object-oriented systems (otherwise no two objects would be able to communicate), but accidental aliasing can couple unrelated parts of a system so it behaves mysteriously and is inflexible to change. We follow standard practices to maintain encapsulation when coding: define immutable value types, avoid global variables and singletons, copy collections and mutable values when passing them between objects, and so on.
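Copying collections at the boundary, one of the practices listed above, looks like this in Java (the Auction class and its bidders list are hypothetical names of mine):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

class Auction {
    private final List<String> bidders;

    // Copy on the way in, so the caller keeps no alias to our internal state.
    Auction(List<String> bidders) {
        this.bidders = new ArrayList<>(bidders);
    }

    // Unmodifiable view on the way out, for the same reason.
    List<String> bidders() {
        return Collections.unmodifiableList(bidders);
    }
}
```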
  11. No And’s, Or’s, or But’s. Every object should have a single, clearly defined responsibility; this is the “single responsibility” principle. When we’re adding behaviour to a system, this principle helps us decide whether to extend an existing object or create a new service for an object to call.
  12. Impl Classes Are Meaningless. Sometimes we see code with classes named by adding “Impl” to the single interface they implement. This is better than leaving the class name unchanged and prefixing an “I” to the interface, but not by much. A name like BookingImpl is duplication; it says exactly the same as implements Booking, which is a “code smell.” We would not be happy with such obvious duplication elsewhere in our code, so we ought to refactor it away. It might just be a naming problem.
  13. I agreed with that, but I found it quirky too, because my own answer to this has been to use a default implementation; it really is context-based.
  14. It comes in handy to have a RuntimeException like SomethingWentHorribleWrongException or Defect for when the code reaches a condition that could only be caused by a programming error, rather than a failure in the runtime environment.
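A minimal version of such an exception; the name Defect appears in my notes above, while the OrderState usage site is my own invention:

```java
// Thrown when execution reaches a state that can only mean a programming
// error, never an environmental failure. Nothing should catch and handle it.
class Defect extends RuntimeException {
    Defect(String message) { super(message); }
}

class OrderState {
    static String describe(int code) {
        switch (code) {
            case 0:  return "new";
            case 1:  return "paid";
            default: throw new Defect("unknown order state code: " + code);
        }
    }
}
```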
  15. We like to start by writing a test as if its implementation already exists, and then filling in whatever is needed to make it work—what Abelson and Sussman call “programming by wishful thinking”.


  1. How do we start a new project using the TDD approach? We want to start by building a “walking skeleton” that we can deploy into a production-like environment, and then run the tests through the deployed system. Including the deployment step in the testing process is critical for two reasons. First, deployment is the sort of error-prone activity that should not be done by hand, so we want our scripts to have been thoroughly exercised by the time we have to deploy for real. Second, the development team bumps into the rest of the organisation and has to learn how it currently operates. If it’s going to take six weeks and four signatures to set up a database, we want to know now, not two weeks before delivery. To design the initial structure, we have to have some understanding of the purpose of the system, and we need a high-level view of the client’s requirements, both functional and non-functional, to guide our choices. The tools we build to implement the “walking skeleton” are there to support this learning process. Deploying and testing right from the start of a project forces the team to understand how their system fits into the world. It flushes out the “unknown unknown” technical and organisational risks so they can be addressed while there’s still time. Understand the problem → Automate Build, Deployment, End-to-End Tests → TDD cycle.
  2. The “walking skeleton” is an implementation of the thinnest possible slice of real functionality that we can automatically build, deploy, and test end-to-end. For example, for a database-backed web application, a skeleton would show a flat web page with fields from the database.
  3. Iteration Zero. In most Agile projects, there’s a first stage where the team is doing initial analysis, setting up its physical and technical environments, and otherwise getting started. The team isn’t adding much visible functionality since almost all the work is infrastructure so it might not make sense to count this as a conventional iteration for scheduling purposes. A common practice is to call this step iteration zero: “iteration” because the team still needs to time-box its activities and “zero” because it’s before functional development starts in iteration one. One important task for iteration zero is to use the walking skeleton to test-drive the initial architecture.

  1. As we develop the system, we use TDD to give us feedback on the quality of both its implementation (“Does it work?”) and design (“Is it well structured?”).
  2. Writing tests:
    1. makes us clarify the acceptance criteria for the next piece of work—we have to ask ourselves how we can tell when we’re done (design);
    2. encourages us to write loosely coupled components, so they can easily be tested in isolation and, at higher levels, combined together (design);
    3. adds an executable description of what the code does (design);
    4. detects errors while the context is fresh in our mind (implementation);
  3. The Golden Rule of Test-Driven Development: Never write new functionality without a failing test.

  1. Levels of Testing. We build a hierarchy of tests to gain confidence at various levels:
    1. Acceptance: Does the whole system work?
    2. Integration: Does our code work against code we can't change?
    3. Unit: Do our objects do the right thing, are they convenient to work with?
  2. Watch the Test Fail. We always watch the test fail before writing the code to make it pass, and check the diagnostic message. If the test fails in a way we didn’t expect, we know we’ve misunderstood something or the code is incomplete, so we fix that. As we write the production code, we keep running the test to see our progress and to check the error diagnostics as the system is built up behind the test. Where necessary, we extend or modify the support code to ensure the error messages are always clear and relevant.
  3. One common mistake is thinking about testing methods. A test called testBidAccepted() tells us what it does, but not what it’s for.
  4. The end-to-end test shows us the end points of that process, so we can explore our way through the space in the middle.
  5. Common causes of test brittleness include:
    1. The tests are too tightly coupled to unrelated parts of the system or unrelated behaviour of the object(s) they’re testing;
    2. The tests overspecify the expected behaviour of the target code, constraining it more than necessary;
    3. There is duplication when multiple tests exercise the same production code behaviour.
  8. Test brittleness is not just an attribute of how the tests are written; it’s also related to the design of the system. If an object is difficult to decouple from its environment because it has many dependencies or its dependencies are hidden,  its tests will fail when distant parts of the system change.
  9. We use mock objects to manage expectations and stubbing for the test. The essential structure of a test is:
    1. Create any required mock objects.
    2. Create any real objects, including the target object.
    3. Specify how you expect the mock objects to be called by the target object.
    4. Call the triggering method(s) on the target object.
    5. Assert that any resulting values are valid and that all the expected calls have been made.
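The book drives this structure with jMock; the same five steps can be shown with a tiny hand-rolled mock (the Auction/Sniper pair below is my own simplification of the book's domain):

```java
// The collaborator the target object talks to.
interface Auction {
    void bid(int amount);
}

// Target object: when told the current price, bids price plus increment.
class Sniper {
    private final Auction auction;
    Sniper(Auction auction) { this.auction = auction; }
    void currentPrice(int price, int increment) {
        auction.bid(price + increment);
    }
}

// Hand-rolled mock that records how it was called.
class MockAuction implements Auction {
    int lastBid = -1;
    public void bid(int amount) { lastBid = amount; }
}

class SniperTest {
    public static void main(String[] args) {
        MockAuction auction = new MockAuction();   // 1. create mock objects
        Sniper sniper = new Sniper(auction);       // 2. create the target object
        // 3. expectation: the sniper should call auction.bid(1005)
        sniper.currentPrice(1000, 5);              // 4. trigger the behaviour
        if (auction.lastBid != 1005)               // 5. assert the expected call was made
            throw new AssertionError("expected bid of 1005, got " + auction.lastBid);
    }
}
```

A mocking library replaces the hand-written MockAuction and turns step 5 into an automatic verification, but the shape of the test is the same.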
  10. We start work on a new feature by writing failing acceptance tests that demonstrate that the system does not yet have the feature we’re about to write, and that track our progress towards completion of the feature. We write the acceptance test using only terminology from the application’s domain. This helps us understand what the system should do, without tying us to any of our initial assumptions about the implementation or underlying technologies.
  11. Small, Focused, Well-Named Tests. The easiest way to improve diagnostics is to keep each test small and focused and to give tests readable names. If a test is small, its name should tell us most of what we need to know about what has gone wrong.
  12. The point of a test is not to pass but to fail. We want the production code to pass its tests, but we also want the tests to detect and report any errors that do exist. A “failing” test has actually succeeded at the job it was designed to do.
  13. Even unexpected test failures, in an area unrelated to where we are working, can be valuable because they reveal implicit relationships in the code that we hadn’t noticed.

I highly recommend this book.