Building software is expensive. I’m not talking about creating software, I mean taking software as written (source code) and running it through compilers and linkers and post-processors and packagers and obfuscators and installer-generators. It might not seem so, but look under the covers and you will find a wealth of costs and potential savings…
Lifecycle of the Developer
The developer has a concept he needs to translate into software. He (or she) does not sit and meditate until it comes to him, then stream it effortlessly into the computer. Rather, he tries something, tries something else, writes some conditions (tests) to limit the scope of his options, and cycles over and over again through four main activities: creating -> building -> executing tests -> discovering. Having discovered and learned something (found the bug or identified a future direction), the developer then wraps around and begins to create again.
If you break this down, there are two states – active and waiting – that the developer is in at any point. He is active when he is learning and he is active when he is creating. He is waiting when he is building and executing tests. So the developer’s capacity for further learning and creative work is limited by how long he has to wait for the software to build and the tests to execute.
None of the above would be interesting except for the fact that the developer goes through this cycle many times per day. How many times? Well, now we get to the math.
Large Builds = Fewer Builds
Most software I’ve seen in my career has been heavy, bloated software: lots of deep interlinking of code, with lots of tests that run from the front of the system all the way to the back. On a typical build machine, this has often translated into 5-20 minutes to build and execute even a subset of the automated tests.
When the build is this large, developers will tend to optimize their behaviour around the limitation. If I, as a developer, have to wait 5-20 minutes between changes, I will start to group changes into larger and larger batches so that the total cost of building, as a percentage of my time, goes down. This may mean that I work on several things and then rebuild. It is easy to lose track of all the balls being juggled, but worse – this behaviour encourages a reduction in test execution. I might also decide to only build and not execute my tests, because the tests are taking, say, 75% of the total wait time.
More changes + fewer tests = larger difficulties if the code is not exactly perfect or if I didn’t think of everything.
Each time I build, then, I risk breaking more: any defect introduced by a small change will be harder to find within the larger batch, and may have wider effects. Because I have lost the feedback from the tests, I am also less likely to see the problem for many of these cycles. And the longer an error persists, the more likely it is to compound with other errors.
Reduced Value from Experienced Developers
This is not an absolute, but what I have found is that the longer the build, the less relative value you are likely to obtain from an experienced developer vs. his less experienced counterparts. This is simply because less of the day is spent doing the real tasks of development – creating and learning – and more time is spent waiting. Any monkey can wait, but high-value developers are high-cost, and making them wait squanders the very productivity you are paying a premium for.
“Hold on a minute, Bub,” you might say, “an experienced developer will not stand for that delay, and will take steps to improve his conditions…”
Which rather makes the point. The more experienced the developer, the less likely he is to stomach the delay, as he feels the tangible loss of his value to the development effort. Often such developers will either batch work as discussed above, with all its issues, or will work to shorten build times.
Shortening Build Times – The Path to Developer Productivity
The shorter you can make build times, the greater the percentage of time that can be spent on creative and learning tasks, without the accompanying batching problems. The shorter the builds, the more frequently you can execute tests. The more you shorten builds, the less waste in your developer’s lifecycle.
But how?
Spend the money on better hardware
I’m not kidding. Paying $3000 per developer for better equipment (just to take a number out of the air) – yes, for EACH developer – is a remarkably small cost when you consider the whole picture. Say a developer costs $50,000/year (it doesn’t matter whether that’s high or low; it’s simple math, so adjust for your market) and 10-minute builds are slowing him down by 20% (again, a conservative slowdown for builds that long). That means you’re throwing away $10,000/year on wasted developer productivity – even more for higher-paid developers. Hardware is almost always cheaper than developer time, and if your developers are using equipment they complain is too slow, upgrading it is a very small investment for a large payoff.
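To make the arithmetic concrete, here is a minimal sketch using the illustrative numbers above; the salary, slowdown, and hardware cost are assumptions you should replace with your own figures:

```java
// Back-of-the-envelope estimate using the illustrative numbers from the text.
public class BuildWasteEstimate {
    public static void main(String[] args) {
        double annualCost    = 50_000.0; // assumed fully-loaded cost of a developer per year
        double slowdown      = 0.20;     // assumed 20% of the day lost waiting on builds/tests
        double hardwareSpend = 3_000.0;  // assumed one-time spend on faster equipment

        double annualWaste   = annualCost * slowdown;              // $10,000 per year
        double paybackMonths = hardwareSpend / annualWaste * 12.0; // roughly 3.6 months

        System.out.printf("Wasted productivity: $%,.0f per year%n", annualWaste);
        System.out.printf("Hardware pays for itself in about %.1f months%n", paybackMonths);
    }
}
```

On those numbers, the upgrade pays for itself in under four months; even if the real slowdown is half that, it pays back within the year.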
Break the projects into smaller sub-projects
If you have a large project containing, say, 6000 classes, 21,000 lines of code, lots of interdependencies, plus tests, that’s a hefty build. But if I change one class in the middle, not every last thing has to be completely rebuilt. Yes, javac (if you use it) or other change-aware compilers and build systems can speed that up somewhat, but you still end up running all the tests, because, given the dependencies, you don’t inherently know which tests are irrelevant to the change.
If you break the project into smaller sub-projects, use a central dependency-management system such as Maven, and add a continuous integration system that understands the dependencies, then only the sub-projects that changed need to be rebuilt, along with their down-stream dependents.
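As a rough illustration of such a split (the module names here are hypothetical), a Maven parent POM might look something like this, with each sub-project building and testing independently:

```xml
<!-- Hypothetical parent POM: each module is a small sub-project with its own tests,
     so a change to one module only forces rebuilds of that module and its dependents. -->
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example.app</groupId>
  <artifactId>app-parent</artifactId>
  <version>1.0.0</version>
  <packaging>pom</packaging>

  <modules>
    <module>app-domain</module>      <!-- core model, no internal dependencies -->
    <module>app-persistence</module> <!-- depends on app-domain -->
    <module>app-services</module>    <!-- depends on app-domain and app-persistence -->
    <module>app-web</module>         <!-- depends on app-services -->
  </modules>
</project>
```

A continuous integration server that understands these inter-module dependencies can then rebuild only, say, app-services and app-web when app-services changes, leaving the rest alone.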
Splitting the project this way has many side-benefits. For one, smaller sub-projects mean fewer circular dependencies, which improves the reusability of those units. Furthermore, smaller units are easier to test, and developers are less intimidated when writing tests for them. This “virtuous cycle” means developers write more tests, which improves quality, which in turn wastes less developer time on later defects.
Using such systems also provides more isolation between these units, which reduces contention when many developers are working on a system simultaneously: there are fewer places where developers have to integrate their changes directly with one another’s, and those places become easier to track and manage.
Encourage more unit testing
This is tricky, because it’s easy for a developer to think that writing a single set of system tests that exercise the system end-to-end is faster than writing a larger volume of isolated unit tests. Below the surface, you begin to find that executing 5-15 long (1-2 minute) integration tests can very quickly result in 5-30 minute builds. These tests then either get run, delaying development, or don’t get run at all by the developer until much later. The latter case, alluded to above, is the bane of software quality, because it means that large volumes of change are being introduced without any regression tests to ensure that errors haven’t crept in. If integration tests are run only once per day (and you don’t have enough isolated unit tests covering the same code), then you can have a day’s worth of several developers’ errors colliding, which can cause hours of work teasing apart the problems.
Unit tests should execute in under 1 second, in my experience (perhaps 2-5 seconds for tests that involve multithreading). You should be able to have hundreds of unit tests, and a good test execution framework should run many of them in parallel to speed up test execution. (Maven’s Surefire plugin doesn’t do this per se, but it does support TestNG, which certainly supports parallel execution of tests.) Furthermore, isolated unit tests work with hard-coded test data, which cannot be broken by this or that environment change caused by other teams or infrastructure groups. They are portable tests: they should work in any environment, with rare exceptions.
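As a rough sketch of what such a test looks like – the PriceCalculator class and its discount rule are invented purely for illustration – an isolated TestNG unit test with hard-coded data needs no database, network, or shared environment, and runs in milliseconds:

```java
import org.testng.annotations.Test;
import static org.testng.Assert.assertEquals;

// Hypothetical class under test: pure logic, no environment needed.
class PriceCalculator {
    private final double discountThreshold;

    PriceCalculator(double discountThreshold) {
        this.discountThreshold = discountThreshold;
    }

    double priceFor(double orderTotal) {
        // 10% discount on orders at or above the threshold
        return orderTotal >= discountThreshold ? orderTotal * 0.9 : orderTotal;
    }
}

// An isolated unit test with hard-coded data: it runs the same everywhere,
// in milliseconds, and is safe to run in parallel with other tests.
public class PriceCalculatorTest {

    @Test
    public void appliesTenPercentDiscountOverThreshold() {
        assertEquals(new PriceCalculator(100.00).priceFor(200.00), 180.00, 0.001);
    }

    @Test
    public void leavesSmallOrdersUndiscounted() {
        assertEquals(new PriceCalculator(100.00).priceFor(50.00), 50.00, 0.001);
    }
}
```

In TestNG, running such tests in parallel is then a matter of suite configuration (for example, parallel="methods" with a thread count in testng.xml) rather than anything in the test code itself.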
Additionally, isolated testing encourages a stronger “code-to-contract” or “strong API” approach to design, which has long been understood to improve quality. Such approaches mean that you can have more complete code and branch coverage, but probably more importantly, more case or scenario coverage.
The Bottom Line
Ultimately, software is about delivering as much value as possible, sustainably, as early as possible. This is true in both commercial and open-source contexts, but especially so in a commercial one. Every delay in developer productivity is a delay in realizing that value, which often equates to a delay in gaining revenue. By reducing the time developers spend waiting for builds and test runs you can achieve at least large fractional gains, and sometimes the combined factors mentioned above add up to an order-of-magnitude gain in productivity. If you start to get the feeling that “it costs too much,” or “writing all those tests takes time,” or “re-organizing the projects will be too disruptive,” make sure you’re considering the whole cost. Don’t sub-optimize one part of the system at the cost of delivering value to your customer, sustainably, as early as possible. Choosing any path that slows this value stream down is throwing money away, even if it seems to save cost or time in a localized way. “Penny-wise, pound(dollar)-foolish” has never been more true.