About Christian Gruber

An agile coach, management consultant, software architect, and developer, Christian teaches and consults on issues of software development process and practice, both deep in and high above the coding level. His background as a former Sun contractor and Deloitte consultant has given him a focus on service and value that fits well with his lean and agile philosophies. He maintains his own blog at http://www.israfil.net/blog/geekinasuit/ and works with Berteig Consulting as an affiliate consultant through his own consulting firm, Israfil Consulting Services Corporation.

ANN: June Agile Software Engineering Practices

Isráfíl Consulting Services is pleased to announce our upcoming:

Agile Software Engineering Practices Courses (2 day)

Software Developers, Technical Architects and Lead Developers for teams that currently use or are intending to use Agile methods such as Scrum, Extreme Programming or OpenAgile will benefit from attending this course.
After completing this course you will:

  • increase your development productivity
  • be familiar with basic disciplines to create well-tested, defect-free code
  • be able to integrate successfully into Agile teams
  • understand what makes healthy, maintainable code
  • receive a Certificate of Attendance
  • receive a $100.00 discount on a 3-day Scrum training and certification course offered by our partners

Available Classes:

  • 2009-06-22: 2-day Agile Software Engineering Practices – Ottawa, ON $1,450.00 CAD [16 spaces]
    • Register by June 1 and get the early-bird price of $1,250.00.
  • 2009-06-25: 2-day Agile Software Engineering Practices – Markham, ON $1,450.00 CAD [16 spaces]
    • Register by June 1 and get the early-bird price of $1,250.00.


Register!: http://www.israfil.net/publictraining/register
Course Details: http://www.israfil.net/publictraining/courses
Class Schedules: http://www.israfil.net/publictraining/schedule
For more information, please e-mail us at: training@israfil.net


Report average velocity and fail 50% of the time

The question of “expected velocity” and long-term planning has come up at more than one client, and a recent client conversation got me thinking about how to interpret velocity when estimating and plotting a roadmap against a current backlog of features. Assume, for a moment, a backlog of story-pointed features and 10 good iterations (consistent team, no odd occurrences that would affect velocity). Mathematically, the average velocity (well, a mean, really) is a 50/50 proposition for any subsequent iteration. Some organizations don’t find that level of confidence acceptable. So what velocity should be reported as “expected” for iteration/sprint planning and roadmap forecasting, and how should it be used?

Context

Interpreting velocity, before anything else, requires some context. An agile organization that treats estimates as hypothetical will find this article of less use. In fact, a good question is whether estimation is even a value-added activity at all, but for this post assume an organization that sees strong value in estimation and planning.

Culture

The biggest piece of context is the organizational culture. It matters in two respects, and both cultural factors matter because they shape how velocity is understood within the organization.

What is Failure?

First is the meaning of failure in the organization. Is failing to deliver what was committed to by the planned date considered a failure of the team, or is it simply a fact to be understood and accounted for in future planning? Even in Agile organizations the former is often true, and it is a hard habit to break. If not delivering to expectations is considered failure and has negative consequences, then estimation is being treated not as estimation but as prediction and contract. Velocity is then a commitment, and should be used conservatively.

Consistency or Speed?

The second item to know is whether consistency and predictability of delivery is of higher strategic value than the actual rate of delivery. This is often unstated. People usually want delivery that is both fast and consistent, but in truth you can have consistent software development, or fast software development, or a balance between the two. A lack of trust, or a history of quality problems, is usually a strong motivation to favour consistency over speed. In this case as well, velocity is more of a boundary than an indicator.

Emotional Loading in Estimation (or why not Low-ball?)

If estimation is seen as binding, contractual, or limiting, then additional emotional weight gets loaded onto it. Trust, promise, and betrayal are words used in such organizational cultures. Distrust is usually a strong factor, especially between silos (business vs. technology, company vs. project management vs. customer, etc.). So when people are asked to give estimates, even using agile-friendly mechanisms such as story points, the estimate tends to get cemented into an accountability model, and estimates start to become conservative. People are then accused of low-balling, others are accused of irrational expectations… we’ve all seen this. The language becomes one of contention and blame; even the term “low-balling” is an outright pejorative for estimating too conservatively.

This doesn’t happen only in agile environments; project managers in traditional PMBOK frameworks have long factored risk into “contingency budgets”. Interestingly, however, if a project manager factors risk into the task estimates, they’re “low-balling capacity”, yet if they factor it out and layer it on top of the project work, it’s “contingency budgeting” (at least in a few experiences I’ve had). Either way, someone is adding a factor for uncertainty, based on the need to predict conservatively, liberally, or somewhere in between.

That’s the point of the article: how can Agile projects use velocity to estimate as conservatively (or liberally) as is appropriate?

An average is a 50% chance to succeed (or fail)

Velocity is not a constant. It’s a series of instantaneous values on a curve, one value per iteration. That means it varies, and is therefore only meaningful statistically. So how do you reasonably use velocity statistically, and improve confidence? One way is to stop delivering against “average” velocity.

A lot of coaches use average velocity over the previous N iterations. If estimation is a commitment, this is unhelpful for all sorts of reasons. By definition, an average (well, actually a mean, but they’re close) is a 50/50 proposition: if you report the average team velocity (assuming it’s accurate), then statistically the team will come in under it about half the time and over it about half the time. Taken at any given instance, an average is a crap shoot; it is only good in the long run. For that long run to work, it has to include permission to fail and a lot of trust: teams need to be able to miss dates sometimes and exceed them other times, and it should all wash out in the end. In organizations such as I’m describing, that trust isn’t there. Additionally, if the language of commitment is about meeting instantaneous iteration commitments (as opposed to delivering high-quality customer value as quickly as is sustainably possible), then you aren’t playing the long game, you’re playing a very short game.

Simulate Velocity, not work

In a PMI training course I took while at Sun Microsystems, we were nicely informed that two-point estimates of tasks are a perfect way to fail half the time, per the above logic. One-point estimates are just idiotic. Three-point estimates were better: we simulated with a Monte Carlo algorithm, found a curve and a distribution, and then determined a confidence level, yadda yadda. Now, we’re trying to avoid wasting a lot of time estimating up-front, but one way to start representing velocity properly is to do the same kind of statistical modelling done in traditional project management, only simulate velocity, not work items.

In this approach, you take the last N iterations (say 10). Determine the maximum velocity (optimistic), the minimum velocity (pessimistic), and the mode (the velocity value that occurs most frequently). Then you run a Monte Carlo simulation to get a statistical pattern. Now you actually can determine an answer based on confidence: if you want to be right with 80% confidence, you pick a velocity that 80% of the simulated runs met or exceeded. (Note: there is a paucity of Excel templates to do this math automatically, and those that exist are often for sale. It would be nice to have a few functions with arbitrary distributions based on min-max-mode to help this along.)
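
For the curious, here is a minimal sketch of that simulation in Java. The triangular distribution and the sample numbers are my assumptions for illustration; any distribution fitted to your min/mode/max history would do.

import java.util.Arrays;
import java.util.Random;

public class VelocitySimulator {

    private static final Random RANDOM = new Random();

    // Draw one velocity from a triangular(min, mode, max) distribution.
    static double sampleVelocity(double min, double mode, double max) {
        double u = RANDOM.nextDouble();
        double cut = (mode - min) / (max - min);
        if (u < cut) {
            return min + Math.sqrt(u * (max - min) * (mode - min));
        }
        return max - Math.sqrt((1 - u) * (max - min) * (max - mode));
    }

    // The velocity met or exceeded in (confidence * 100)% of simulated runs.
    static double velocityAtConfidence(double min, double mode, double max,
                                       double confidence, int runs) {
        double[] samples = new double[runs];
        for (int i = 0; i < runs; i++) {
            samples[i] = sampleVelocity(min, mode, max);
        }
        Arrays.sort(samples);
        // 80% confidence is simply the 20th percentile of the sorted samples.
        return samples[(int) Math.floor((1.0 - confidence) * runs)];
    }

    public static void main(String[] args) {
        // Illustrative history: velocities ranged 14..26, most often near 20.
        System.out.printf("80%% confidence velocity: %.1f points%n",
                velocityAtConfidence(14, 20, 26, 0.80, 100000));
    }
}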

It’s not perfect, and it’s potentially a huge amount of administrative overhead. Elsewhere I’ve referenced blogs that oppose any estimation at all, but if you are going to estimate, then working statistically with simulation is the only way to make small sample sizes meaningful.

Commitment Velocity: Low-Balling as a Policy

Another approach, perhaps controversial but taught by some Scrum trainers, is to pick the lowest historical delivered velocity. This is a commitment-based approach, built on the assumption that trust around consistent delivery is critical to sound relationships, in which product owners and teams can safely state their needs and get things done with a minimum of contractual behaviour. By taking the minimum, you force a low-ball capacity, which means you can have high confidence of success after a few iterations. After a while you will likely have some spare time on your hands, and teams can then choose to pull more work in (without adjusting their commitment velocity), work on “technical debt”, improve their skills, etc. A team could raise its commitment velocity at certain inflection points in the project: say a new team member provides a necessary skill not previously available, and after a few iterations the team is consistently hitting a higher number. But this is a careful process to ensure that they are still committing, and if they don’t make their new number, it drops back to what they actually accomplished. A hypothetical sketch of this policy appears below.
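
Here is that sketch in Java; the class and method names are mine, purely illustrative, and not from any actual Scrum tool.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class CommitmentVelocity {

    private final List<Integer> history = new ArrayList<Integer>();
    private Integer raisedTo = null; // set only at a deliberate inflection point

    public void recordIteration(int actualPoints) {
        history.add(actualPoints);
        // Miss a raised commitment and it drops back to what was
        // actually accomplished.
        if (raisedTo != null && actualPoints < raisedTo) {
            raisedTo = actualPoints;
        }
    }

    // Raise deliberately, only after consistently hitting a higher number
    // (e.g. a new team member added a needed skill).
    public void raiseTo(int newCommitment) {
        raisedTo = newCommitment;
    }

    // The number to commit to: the lowest historical velocity, unless
    // deliberately raised. Assumes at least one recorded iteration.
    public int committedVelocity() {
        return raisedTo != null ? raisedTo.intValue() : Collections.min(history);
    }
}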

Indemnify teams’ learning

An arguably healthier option, if you have built enough trust, is to simply indemnify a team against failing to meet the estimate. Since you’re doing mathematics on actuals to generate an expected future number, everyone can acknowledge that past behaviour is no guarantee of future behaviour, and simply use the number for capacity planning. In this case, estimation is actually estimation, not commitment or contract. The team is expected to be ahead sometimes and behind sometimes. The upside is that a lot of extra time isn’t spent playing with fictional numbers: teams spend their efforts on delivering as quickly yet sustainably as they can, and the organization treats them as trusted professionals in this. The temptation to assume you can predict the future is seen as folly, and the estimates are used to guide overall direction, not to make outward customer commitments.

Don’t be mindless

There may well be other approaches; the agile community is certainly not short of people who love this topic and can talk for hours on “proper” estimation. The point of this post is merely to lay out some options and to ask you to look at your organizational culture, team culture, and customer culture, and at the meaning of terms like commitment, failure, success, consistency, and speed. As you understand the culture, balance consistency vs. speed, trust, and other factors to choose a method of estimation that meets your goals. Don’t estimate from your own internal cultural assumptions: you may have developed or been taught techniques that were useful when and where you learned them, but they may no longer be, and maybe they weren’t so useful then either. Regardless, because estimation cuts to the heart of the dialogue between producer and consumer, and establishes the parameters for that discussion, it’s critical that you think your choice through.

[Christian also blogs at http://www.geekinasuit.com/]


ANN: Agile Software Engineering Practices training by Isráfíl Consulting

Isráfíl Consulting is finally prepared for the first series of its Agile Software Engineering Practices training courses. This series is offered in partnership with Berteig Consulting, who are graciously hosting the registration process. Their team has also helped greatly in shaping the presentation style and structure of the course. The initial run will be in Ottawa, Toronto (Markham), and Kitchener/Waterloo.

Topics covered will include Test Driven Development (TDD), testability, supportive infrastructure such as build and continuous integration, team metrics, incremental design and evolutionary architecture, dependency injection, and much more. (This course doesn’t present the planning side of XP, but covers many other aspects common to XP projects.) It makes a great complement to training in Agile processes such as XP, Scrum, or OpenAgile. The overview slide presentation is available for free download from the Isráfíl web site.

The courses are scheduled for:

The course is $1250 CAD per student, and participants receive a transferable discount of $100 CAD on other training with Berteig Consulting as part of our ongoing partnership. I initially prototyped this course in Ottawa this December, and am very excited to see it through in several locales. Class size is limited to 15 so we can keep the instruction style more involved. The above schedules are linked to Berteig Consulting’s course system and have registration links at the bottom of the description. Locations are TBD, but will be updated at the above links as soon as they’re finalized.

A further series is planned for several US cities in March, and we’ll be sure to announce them as well.


Yet another misunderstanding of TDD, testing, and code coverage

I was vaguely annoyed to see this blog article featured in JavaLobby’s recent mailout. Not because Kevin Pang doesn’t make some good points about the limits of code coverage, but because his title is needlessly controversial, and because JavaLobby engaged in some agile-baiting by publishing it without editorial restraint.

In asking the question, “Is code coverage all that useful?”, he asserts at the beginning of his article that Test Driven Development (TDD) proponents “often tend to push code coverage as a useful metric for gauging how well tested an application is.” That statement is true, but the remainder of the post takes apart code coverage as a valid “one true metric”, a claim that TDD proponents don’t make, except in Kevin’s interpretation.

He further asserts that “100% code coverage has long been the ultimate goal of testing fanatics.” This isn’t true. High code coverage is a desired attribute of a well-tested system, but the goal is a fully and sufficiently tested system. Code coverage is indicative, but not proof, of a well-tested system. How do I mean that? Any team that has taken the time to test its system thoroughly enough to reach > 95% coverage has likely (in my experience) thought through how to test the system so as to fully express its happy paths, edge cases, etc. The coverage here is a symptom, not a cause, of a well-tested system. And the metric can be gamed; in fact, when imposed as a management quality criterion, it usually is. Good metrics should confirm a result obtained by other means, or provide leading indicators. Few numeric measurements are subtle enough to really drive system development.

Having said that, I have used code-coverage in this way, but in context, as I’ll mention later in this post.

Kevin provides example code similar to the following:

String foo(boolean condition) {
    if (condition)
        return "true";
    else
        return "false";
}

… and talks about how, if the unit tests only exercise the true path, this is only 50% coverage. Good so far. But then he goes on to say that “code coverage only tells us what was executed by our unit tests, not what executed correctly.” He is carefully telling us that a unit test executing a line doesn’t guarantee that the line is working as intended. Um… that’s obvious. And if the tests didn’t pass, then the line should not be considered covered. It seems there are some unclear assumptions about how testing needs to work, so let me get some assertions out of the way…

  1. Code coverage is only meaningful in the context of well-written tests. It doesn’t save you from crappy tests.
  2. Code coverage should only be measured on a line/branch if the covering tests are passing.
  3. Code coverage suggests insufficiency, but doesn’t guarantee sufficiency.
  4. Test-driven code will likely have the symptom of nearly perfect coverage.
  5. Test-driven code will be sufficiently tested, because the author wrote all the tests that form, in full, the requirements/spec of that code.
  6. Perfectly covered code will not necessarily be sufficiently tested.

What I’m driving at is that Kevin is arguing against something entirely different from what TDD proponents argue. He’s arguing against a common misunderstanding of how TDD works. On point 1 he and I are in agreement. Many of his commenters mention #3 (and he states it in various ways himself). His description of what code coverage doesn’t give you is absurd once you take #2 into account (a line of code is only covered if the covering test is passing). But most importantly, “TDD proponents” would, in my experience, find this whole line of argument rather irrelevant: it argues against code coverage as a single metric for code quality, whereas they would achieve code quality through thoroughness of testing by driving the development through tests. TDD is a design methodology, not a testing methodology; you get tests as side-effect artifacts of the approach. Useful in their own right? Sure, but that’s only sort of the point. It isn’t just writing the tests first.

In other words – TDD implies high or perfect coverage. But the inverse is not necessarily true.

How do you achieve thoroughness by driving your development with tests? You imagine the functionality you need next (your next increment of useful change), and you write or modify your tests to “require” the new piece of functionality. Then you write it, and you go green. Code coverage doesn’t enter into it, because by implication you should have near-perfect coverage at all times: every new piece of functionality is preceded by tests which exercise its main paths, error states, upper and lower bounds, etc. Code coverage in this model is a great way to notice that you screwed up and missed something, but nothing else. A minimal sketch of that rhythm follows.
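
Here is what that looks like in JUnit terms, using Kevin’s foo example from earlier. This is my illustration, not Kevin’s code: each branch exists only because a test demanded it first, so full coverage falls out as a side effect rather than being chased as a goal.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class FooTest {

    @Test
    public void returnsTrueStringWhenConditionHolds() {
        assertEquals("true", foo(true));   // written first; fails until foo handles true
    }

    @Test
    public void returnsFalseStringOtherwise() {
        assertEquals("false", foo(false)); // added next; drives out the else branch
    }

    static String foo(boolean condition) {
        if (condition)
            return "true";
        else
            return "false";
    }
}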

So, is code coverage useful? Heck yeah! I’ve used coverage to discover lots of waste in my systems. I’ve removed whole sets of “just in case I need them” APIs, because they had become rote (lots of accessors/mutators that are never called in normal operation). Is code coverage the only way I would find them? No. If I’m dealing with a system that wasn’t driven with tests, or was poorly tested in general, I may use coverage as a quick health meter, but probably not. Going from zero to 90% on legacy code is likely to be less valuable than just re-writing whole subsystems using TDD… and often more expensive.

Regardless, while Kevin is formally asking “is code coverage useful?”, he’s really asking (rhetorically) whether it is reasonable to worship code coverage as the primary metric. But if no one is asserting the positive, why is he questioning it? He may be dealing with a lot of people who misunderstand how TDD works. He could be dealing with metrics bigots, or with management-imposed metrics initiatives, which often fail. It might be a pet peeve, or he’s annoyed with TDD and this is a great way to do some agile-baiting of his own. I don’t know him, so I can’t say; his comments seem reasonable, so I assume no ill intent. But the answer to his rhetorical question is “yes, but in context”, which is not surprising, since most rhetorical questions can be answered that way. Hopefully it’s now a bit clearer where coverage is useful, and where (and how) it’s not.

(This article is a cross-post from “Geek in a Suit”)


Finally – a solid metric for code quality.

Robert C. Martin (Uncle Bob to you and me) suggested, in his “quintessence” keynote at the Agile2008 conference, that he had the perfect metric for code quality. Cyclomatic complexity and the other usual measures were interesting mostly to those who invented them. His answer was brilliant, and is easily measured during code reviews:

WTFs per minute

I love it. All you need is a counter and a stop-watch. Start the code review, start the watch, and click every time you see code that makes you say or think “What the F???”. This dovetails nicely with his recommendation for a new statement in the agile manifesto: “Craftsmanship over Crap”.


Dependency Injection on J2ME/CLDC devices.

This post is a little geeky, technical, and product-related for AgileAdvice, and is shameless self-promotion. Nevertheless, since testability, test-driven development, and incremental design are non-exclusive sub-topics of Agile, I thought I’d report it here anyway.

Many developers use the Dependency Injection and Inversion of Control (IoC) patterns through IoC containers such as Spring, HiveMind, PicoContainer, and others. These have all sorts of benefits for testability, flexibility, etc. that I won’t repeat here, but that can be read about here, here, and here. A great summary of the history of “IoC” can be found here. J2ME developers, however, especially those on limited devices that use the CLDC configuration of J2ME, cannot use the substantial number of IoC/DI containers out there, because nearly all of them rely on reflection. They also often make use of APIs not present in the CLDC, APIs which could not easily be added. Lastly, there’s a tendency among developers of “embedded software” to be very suspicious of complexity.

In working out some examples of DI as part of a testability workshop at one of my clients, I whipped up a quick DI container, and being the freak that I am, hardened it until it was suitable for production, because I hate half-finished products. So allow me to introduce the Israfil Micro Container. (That is, the Container from the Israfil Micro project). As I mention in the docs, “FemtoContainer” just was too ridiculous, and this container is smaller than pico-container. The project is BSD licensed, and hosted on googlecode, so source is freely available and there’s an issue/feature tracker, yadda yadda.

Essentially I believe that people working on cell phones and set-top boxes shouldn’t be shut out of basic software design approaches; you just have to bend the design approach to fit the environment. So hopefully this is of use to more than one of my clients. It currently supports auto-wiring registration and delayed object creation (until first need), with basic lifecycle support and a few other niceties forthcoming. It does not use reflection (you use a little adapter for object creation instead), and it performs faster than PicoContainer, with low, low overhead. It’s also fewer than 10 classes and interfaces (including the two classes in the util project). It’s built with Maven 2, so you can use it in any Maven2-built project with ease, but of course you can always just download the jar (and the required util jar too). Enjoy…
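
To give a flavour of the adapter style described above, here is a hypothetical, stripped-down sketch of a reflection-free container. The names are mine and purely illustrative; this is not the actual Israfil Micro Container API. Note the raw Hashtable and the absence of generics and reflection, which keeps it CLDC-friendly.

import java.util.Hashtable;

public final class TinyContainer {

    // The adapter: object creation is hand-written, so no reflection is needed.
    public interface Factory {
        Object create(TinyContainer container);
    }

    private final Hashtable factories = new Hashtable(); // key -> Factory
    private final Hashtable instances = new Hashtable(); // key -> cached object

    public void register(String key, Factory factory) {
        factories.put(key, factory);
    }

    // Delayed creation: the object is built on first request, then cached.
    public Object get(String key) {
        Object instance = instances.get(key);
        if (instance == null) {
            Factory factory = (Factory) factories.get(key);
            if (factory == null) {
                throw new RuntimeException("No factory registered for " + key);
            }
            instance = factory.create(this); // factories pull their own dependencies
            instances.put(key, instance);
        }
        return instance;
    }
}

A component’s factory wires its dependencies by pulling them from the container, which is how auto-wiring can work without reflection.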

P.S. There are a few other bits on googlecode that I’m working on in the micro-zone: some minimalist backports of parts of java.util.concurrent (just the locks), as well as some of the java.util collections stuff. Not finished, but also part of the googlecode project.


Quality is not an attribute, it’s a mindset

This was actually cribbed from a Bruce Schneier blog post about security…

Security engineers see the world differently than other engineers. Instead of focusing on how systems work, they focus on how systems fail, how they can be made to fail, and how to prevent–or protect against–those failures. Most software vulnerabilities don’t ever appear in normal operations, only when an attacker deliberately exploits them. So security engineers need to think like attackers.

People without the mindset sometimes think they can design security products, but they can’t. And you see the results all over society–in snake-oil cryptography, software, Internet protocols, voting machines, and fare card and other payment systems. Many of these systems had someone in charge of “security” on their teams, but it wasn’t someone who thought like an attacker.

There’s an interesting parallel between this statement and how most software quality is handled. Quality and security are similar; in fact, I see security as a very specific subset of quality-mindedness. Both require the same mindset: rather than thinking merely “how will this work?”, a quality-focused person will also, or perhaps instead, think “how might this be breakable?” From this simple change in thinking flow several important approaches:

  • Constraint-based thinking (as opposed to solution based thinking): allows an architect/developer to conceive of the set of possible solutions, rather than an enumeration of solutions. By looking at constraints, a developer implements the lean principle of deciding as late as possible, with as full information as possible.
  • Test-First: As one thinks of how it might break, scenarios emerge that can form the basis of test cases. These cases form a sort of executable acceptance criteria.
  • Lateral Thinking: The constraint+test approach starts to get people into a very different mode, where vastly different kinds of solutions show up. The creative exercise of trying to break something provides insights that can change the whole approach of the system.

Schneier goes on to ponder:

This mindset is difficult to teach, and may be something you’re born with or not. But in order to train people possessing the mindset, they need to search for and find security vulnerabilities–again and again and again. And this is true regardless of the domain. Good cryptographers discover vulnerabilities in others’ algorithms and protocols. Good software security experts find vulnerabilities in others’ code. Good airport security designers figure out new ways to subvert airport security. And so on.  

Here again, I think it’s possible to help people acquire a mindset about quality, but some do seem to have a knack. It’s important to have some of these people on your teams, as they’ll disturb the waters and identify potential failure modes. These are the people who want to “mistake proof” (to borrow Toyota’s phrase) the system by writing more unit tests and other executable proofs of the system. Most importantly (and I can personally testify to this), it is critical that people just write more tests. It is a learned skill to ask “how might this fail?” until it becomes a background mental thread, always popping up risk models.

A related concept is Deming’s “systems thinking”, which, applied to software quality, causes one to start looking at whole ecosystems of error states. This is when fearless refactoring starts to pay off, because the elimination of duplication allows one to catch classes of error in fewer and fewer locations, where they’re easier to fix.

There are many and multifarious spin-off effects of this inverted questioning and the mindset it generates. Try it yourself: when you’re writing code, ask yourself how you might break it. What inputs, external state, etc. might cause it to fail, crash, or behave in odd ways? This starts to show you where you might have state leaking into the wild, or side-effects from excessively complex interactions in your code. So a quality focus can start to improve not only the external perception of your product, but also its fitness to new requirements, by making it more resilient and less brittle. Cleaner interactions and less duplication allow for much faster implementation of new features.

I could go on, but I just wanted to convey this sense of “attitude” or “mindset” over mere technique. Technique can help you get to a certain level, but you have to let it “click”, and the powerful questions can sometimes help.
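
As a small, purely illustrative example of the inverted question at work (my code, in JUnit terms): take a trivial percentage calculation and ask how it might break before asking how it works.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class PercentOfTest {

    // Returns what percent 'part' is of 'whole'.
    static double percentOf(double part, double whole) {
        if (whole == 0.0) {
            throw new IllegalArgumentException("whole must be non-zero");
        }
        return part / whole * 100.0;
    }

    @Test
    public void happyPath() {
        assertEquals(50.0, percentOf(1, 2), 0.0001);
    }

    @Test(expected = IllegalArgumentException.class)
    public void zeroWholeIsTheBreakingCase() {
        percentOf(1, 0); // "how might this fail?" made executable
    }

    @Test
    public void negativeInputsStillBehave() {
        assertEquals(-50.0, percentOf(-1, 2), 0.0001);
    }
}

The happy path alone would never have suggested the zero or negative cases; the breaking question did.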


Stonecutters, Paycheck Earners, or Cathedral Builders?

All credit for this is due to Mary Poppendieck as this is entirely cribbed from her Agile2007 talk on agile leadership.

A man walks into a quarry and sees three people with pickaxes. He walks up to the first one and asks, “What are you doing?” The first quarry worker irritably replies, “I’m cutting stone, what does it look like? I cut stone today, I cut stone yesterday, and I will cut stone tomorrow!” The man asks the same of the second person who replies, “I’m making a living for my family.” The man turns to the third person and asks him, “so what are you doing here?” The third worker looks up for a moment, looks back at the man with a proud expression and says, “I’m building a Cathedral!”

The moral of the parable is likely clear, but it bears applying to organizational dynamics. Consider that everyone gets annoyed with aspects of their job; the question is one of response. If a person is annoyed with his job, does he:

  • Complain? He is probably a stonecutter.
  • Ignore it? He is probably a paycheque earner.
  • Fix it? He is a cathedral builder.

Cathedral builders are absolutely critical to a healthy organization. They push the organization towards a vision, often propagating the high-level vision throughout all levels of the organization. Unfortunately, they are also the people who annoy the stonecutters and paycheque earners, because they won’t participate in the complaints, and they agitate for changes that make it hard to ignore things and just “do the job.” But your success will rely on them… find them, shelter them, and grow them. And whatever you do, don’t “promote” them into positions where they aren’t effective. Empower them, and if you need to add salary and title that’s fine, but let them find their own area of maximal contribution. I guarantee that you, Mr. Business Owner, aren’t smart enough to see what that is.

Organizations that fail to see this remain mediocre, or fail. Organizations that find ways of harnessing their workforce and coaxing people into the next level of engagement succeed.


Systems thinking… another view

One of my favourite books in the world is Systemantics, by John Gall. This irreverent look at systems and how they fail has a lot to teach a community that is attempting to re-work the systems of software development. Much of it justifies the “simple set of principles, applied” approach that most Agile methods use. It should also provide good insights for anyone trying to develop and architect complex software systems. The best kind of parody is the kind you can’t quite tell is parody, because it’s so insightful.

The book is available from the author at the Systems Bible page.

A decent quick look at the kind of material found inside can be found here: Commentary on the principles of “Systemantics”, by Anthony Judge

And this article goes into more discussion about Gall’s laws of systemantics: Bart Stewart on Systemantics

UPDATED: Best quote yet: “Admittedly, it’s not easy to imagine what a self-organizing car engine would look like, but maybe it’s time someone tried.” -Bart Stewart


The cost of building

Building software is expensive. I’m not talking about creating software, I mean taking software as written (source code) and running it through compilers and linkers and post-processors and packagers and obfuscators and installer-generators. It might not seem so, but look under the covers and you will find a wealth of costs and potential savings…

Lifecycle of the Developer

The developer has a concept he needs to translate into software. He (or she) does not sit and meditate until it comes to him, then stream it effortlessly into the computer. Rather, he tries something, tries something else, writes some conditions (tests) to limit the scope of his options, and cycles over and over again between four main activities: creating -> building -> executing tests -> discovering. The developer then wraps around, having discovered and learned (found the bug or identified a future direction), and begins to create again.

If you break this down, the developer is in one of two states at any point: active or waiting. He is active when he is learning, and he is active when he is creating. He is waiting while building and while executing tests. So the developer’s ability to do further learning and creative work depends on how long he has to wait for the software to build and the tests to run.
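
To make the arithmetic concrete (the numbers are purely illustrative, not from any measurement): if each cycle is two minutes of creating and learning plus three minutes of waiting on the build and tests, the developer spends 60% of the day waiting, and an eight-hour day yields 480 / 5 = 96 cycles. Cut the build-and-test wait from three minutes to one, and the same day yields 480 / 3 = 160 cycles, roughly a two-thirds increase in learning opportunities without anyone working harder.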


Agile Offshoring References

Many people are trying out Agile Work in software development. The current industry climate has focused business stakeholders’ attention on re-examining their core priorities. Where Agile optimizes for time to market, the offshoring “over-the-wall” approach to software development seeks to optimize for raw dollar cost.

What do you do if your organization is attempting both? There are some good resources on-line about this already, though I have yet to see good case studies with published figures. Also, much of what’s out there comes in the form of mini-articles that are no more than promotional ads for some proprietary “agile-offshore” solution. Regardless, some of the following may help you avoid the worst problems of integrating two quite different approaches.


Just-In-Time Value Delivery and Waste Elimination

Lean Manufacturing

If I am manufacturing computers and I receive a large batch of CPUs… say… Intel Pentium IIIs, I put them in inventory. There is a recurring monetary cost associated with keeping this inventory. There is, however, also a hidden cost. If I use up half my inventory to build computers, and then I inventory those computers, and I ship half of those computers, I now have 3/4 of my initial chips waiting around, earning me no money at all.

Then Intel ships the new Pentium 4. What happens now? Well, I basically have to get rid of my Pentium III stock (either by making low-cost, reduced-margin computers just to use up the chips, or by trying to sell them wholesale at cut rates). I also have to either slash prices on my remaining stock of computers, or re-fit them with the new processors. All of this is very expensive, and may eliminate most or all of the profit I was expecting to make on the original purchase. It is a scenario rife with waste.

Most manufacturing industry players have figured this out and moved to a just-in-time model of production. Toyota pioneered much of this in the 1970s, and inventory is now seen as a liability rather than an asset. This waste-free approach is a centrepiece of the Lean manufacturing revolution.

Waste and Inventory in Software Development

Unfortunately, there is a parallel in software development that has been rather poorly understood by businesses that need custom software. Waste can be considered anything that is unfinished and/or unused. In software development terms, this applies to documents that will never be read or that might be unnecessary, and other such things. It is also true of unfinished software.

Traditional Software Development Example

Let us suppose that we ask someone to develop a piece of software with thirty features, and that this software will take about twelve months to produce. Let us further suppose that ten of these are business-critical features, and that all of the features’ definitions are highly market-driven.

So a traditional software project starts to develop them. First there is a requirements analysis phase, then a design phase, with lots of approval stages, sign-offs, etc. throughout. Then some team codes up the software, starting somewhere around month five. Coding proceeds for about three months, after which the software goes through a testing process in month nine. In theory, at the end of this testing process (month twelve), we are supposed to receive the software for use, and we should be able to realize the value it was supposed to offer. However, defects are found in testing, requiring some re-work; the software is delayed by a further three months, and we finally receive it. By the time we do, our competitors have defined new features, and we have to submit new feature requests, whose results will not be seen for several more months.

So for the entire development process, we received no business value and continued to pay: fifteen months from first investment until we started to achieve results that could mean revenue from the system. At this point, the software is already partially obsolete. This is quite similar to the manufacturing scenario above.

Lean and Agile Software Development

Agile and Lean software development practices change this process by delivering business value a little at a time. In an Agile software project, the business specifies what the most important features are (in our case, the ten business-critical ones). The team then works in small time-boxes, say two to four weeks. In each time-box, the team works on a small but well-defined list of features taken from the top of the prioritized feature list. At the end of the time-box, the features worked on are delivered to a degree of quality suitable for use by the customer. The whole might not be ready, but any feature worked on is complete and production-ready.

If the team can average about three features per month (roughly the rate in the example above), then by month four the team could deliver a piece of software with all ten business-critical features ready for use. The customers could then decide whether those ten features make the software, in its current state, worth packaging up and deploying. Quite simply, the software may not be “done”, but it’s “done enough”. The team can then continue to roll out less valuable features from the prioritized list.

Market and customer needs volatility

This becomes even more important when we consider the competition’s changes. In the traditional example above, additional features were required after fifteen months. They may have been discovered in month six, but traditionally they could only be considered in month sixteen. In Agile and Lean software development, work on these changes could start in month seven or sooner. So the business received value early, or “just in time”, and it could get high-value changes “just in time” as well.

Quality

Because testing is built into the process, the customer is able to validate the product at the end of every time-box, so there are no large batches of re-work. The only time large batches of re-work are necessary is when large changes to the basic requirements are requested. And even then, Agile does not resist the customer’s desire to change, but recognizes it as an essential part of software delivery and adapts to the changing conditions. As a result, Agile and Lean methods are optimized to produce a quality product in small increments, so quality doesn’t suffer when customers or markets change their minds.

Summary

Agile and Lean principles are very powerful, and allow for business value to be delivered sooner to end-customers. This allows for better quality and risk management. It also allows strategic and tactical decision-making by executives to be undertaken when such decisions are most needed – not six months too late.

Firms that embrace Lean and Agile development principles are beginning to see the competitive differentiation that Toyota saw vis-a-vis the rest of the automotive industry throughout the ’70s and ’80s. It is only in the last two decades that, after adopting Lean practices themselves, Toyota’s competitors have begun to close the gap. Agile software shops are beginning to achieve the same competitive gains. Those who don’t adapt to just-in-time value delivery, who don’t eliminate waste in their processes, will feel the results as their competitors take products and services to market far faster, and with more responsiveness to their customers.

“If you’re not on the steam-roller, you’re part of the asphalt.” – Lighthouse Design, circa 1996.


Value redux (more on business value added)

In a previous article, Mishkin described three kinds of added value that can be considered when analyzing a value stream: Customer Value Added (CVA), Business Value Added (BVA), and Non-Value Added (NVA). I recommend peeking at that article before reading this one.

The second flavour mentioned is Business Value Added. This is, to quote the above, “activity that is required to operate the business but the customer is unwilling to pay for.” I wanted to look at this a bit more closely.

Why would we separate out this kind of value? On one hand, if the customer wouldn’t pay for it, is it really value? Management and executives would say “yes”, since it amounts to operational infrastructure or overhead necessary to the business. On this view, without certain “horizontal” functions, the entire enterprise to which the lean analysis is being applied would not be possible; therefore they are of immense value to the customer, however indirect. On the other hand, if the customer cannot see the value, why is it there at all? This is a tough question.

A fanatical Lean interpretation would be to say that this kind of value doesn’t exist: either a reasonable customer would be willing to pay for it, explicitly if indirectly, or it simply is not value added. In essence, Business Value Added can be seen as something of an allowance, a departure from the sharp scalpel of lean value-stream analysis. Its purpose is to grant some explicit value to those who perform these “BVA” functions, since they are often the people whom lean analysts must convince of the accuracy of their analyses. By this interpretation, BVA is strictly a cop-out: it doesn’t add value to the product or service produced, and is therefore a target for waste elimination.

There are very good reasons to look at BVA from either side: the discipline of waste elimination on one hand, and the validation of necessary if unprofitable work on the other. As in most things, a balanced view is wisest here, and perhaps BVA is merely shorthand for such a view. BVA, like NVA, is waste. However, some wastes are necessary and unavoidable.

Traffic lights are on all night, even when no cars are around, because no one has figured out an efficient way of turning them off until cars appear. It’s necessary infrastructure, but it is waste: power being consumed without result.

A full highway is a jammed highway. Keeping about 15-20% of capacity unused actually tends to optimize the on-ramp-to-off-ramp cycle time. Here is unused roadway onto which one could clearly fit more cars; yet, according to queueing theory, this localized “waste” is necessary in that it optimizes the whole system. Such system-supporting overhead is counterintuitive, but it bears itself out in many natural systems.
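
A standard queueing-theory result (my addition here, not part of the original argument) makes the 15-20% figure plausible. For a simple M/M/1 queue with arrival rate λ, service rate μ, and utilization ρ = λ/μ, the expected time in the system is

    W = 1 / (μ − λ) = (1/μ) / (1 − ρ)

so total time is five times the bare service time at ρ = 0.8, and twenty times at ρ = 0.95: delay explodes as the road, or the team, approaches full utilization.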

The process of tracking project progress does not improve the final product, but it aids the organization of the corporation or entity that is providing the funding, and satisfies necessary requirements that do not come directly from the customer. It may result in better resource allocation, for example. The daily meeting is time not spent on the product, but those 5-15 minutes can unjam horrible project roadblocks. Eliminating this “NVA” or “BVA” step would be catastrophic for the whole system.

Because BVA, like NVA, is waste, it should always be examined for reduction and elimination. Unlike other NVA, however, I would argue that BVA is the minimum NVA activity that cannot be trimmed without unduly sacrificing the effort as a whole. By calling it NVA we keep it in perspective; perhaps it might better be called “unavoidable non-value added” (UNVA) activity. Its value is not direct, and it is effort and resources taken away from the customer. Where possible, it should be eliminated as NVA. However, those who perform these functions can rest assured that, once the process is leaned out, what they are doing is unavoidable and necessary work.

BVA (UNVA) can be a very useful tool for clarifying process, so long as the analyst doesn’t get sloppy about treating it as waste.


Why Should Business-People Care About Continuous Integration?

Continuous integration, and for that matter TDD, FDD, and other Agile practices and methods, can be obscure to someone who hasn’t run across them before. Since some Agile practices relate more directly to engineers than to their managers and executives, I have been asked why business people should care about them.

The question might be examined from the other side: how do things that are important to the business side relate to things happening on the technology side? Put yet another way, is the whole organization in sync and harmonious, or is the left hand interfering with the right? Finding consistency of vision and values across disciplines within an organization can be very difficult, but there are good examples of business values being reflected in engineering practices.

Kaizen, for example, is an increasingly common business watchword. Its philosophies of waste reduction, orderliness, and continuous improvement have radically shaped Toyota’s much-vaunted production system, and these principles have influenced Total Quality Management, Lean analysis, Six Sigma, and other value-oriented business management practices. Kaizen is often translated as continuous improvement in small increments, and in practice it is almost a micro quality control. The idea that a single defect can bring a production line to a halt, so that the defect is caught early and fixed while it is cheap, springs from this approach. (Kaizen is much more than this, and often also refers to a short, high-value problem-solving session that uses related principles. Kaizen, in this context, refers to the process philosophy and the practice thereof, cf. Kaizen.)

Continuous integration is a software engineering practice that is quite similar. When combined with test-driven development, it forms a kind of Toyota-style production line. Continuous integration basically means that every change to the software is integrated with the rest of the software as soon as the developer submits it to the central repository. At that point, or at very near intervals, the whole system is run through unit and integration tests to verify that it is still healthy. If a defect arises, either through a mistaken submission or a submission that breaks something else, the system alerts the developer, and possibly relevant managers. The build “goes red”, as the jargon goes, and the developer rapidly fixes whatever is out of sync. This is quite analogous to Toyota’s “stop the production line” approach to quality management.
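
In case the mechanics are unfamiliar, the loop is conceptually this simple. This is a naive, purely illustrative sketch: real CI servers such as CruiseControl do this robustly, and the two commands here are placeholders for whatever your project actually uses.

public class NaiveCiLoop {

    public static void main(String[] args) throws Exception {
        while (true) {
            run("svn update");                    // pull the latest submissions
            boolean green = run("mvn test") == 0; // build and run all the tests
            System.out.println(green ? "BUILD GREEN" : "BUILD RED - fix it now!");
            Thread.sleep(5 * 60 * 1000);          // poll again in five minutes
        }
    }

    static int run(String command) throws Exception {
        Process p = Runtime.getRuntime().exec(command);
        return p.waitFor(); // the command's exit code; 0 means success
    }
}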

Assigning power so close to the ground can be frightening to executives and technical employees alike. This is understandable: people used to controlling everything often find it hard to delegate to the “shop floor”, and people used to possessing little power are often afraid to wield it once it is granted. Both technology and business, however, have established through volumes of evidence that defects caught early are orders of magnitude cheaper to fix. Toyota leapt ahead in its reputation for quality very soon after implementing such a system. Businesses that use Kaizen or other related approaches to quality management and process evaluation on the business side can see the same principles at work in their software development organizations.

As with most good things, simple principles broadly applied to specific disciplines work to the overall benefit of the organization. They provide confidence and a common vision and values across disparate specialties. Business stakeholders who make these high-level decisions can then have increased confidence that what they’re defining, marketing, and selling won’t fail them in execution, in the customers’ hands.


Big Visible Charts

Ron Jeffries recently posted something to the Scrum developers list in which he made reference to his article on Big Visible Charts. It’s a good article, and it looks at a variety of ways of presenting metrics in an agile project. It took me back to Edward Tufte’s Envisioning Information. (His other good books on the subject include Visual Explanations: Images and Quantities, Evidence and Narrative and The Visual Display of Quantitative Information.) It’s often very hard for agile workers to communicate status to people without too much reliance on “inside” jargon, and some of these principles and presentation mechanisms are quite helpful. Also of note is Marty Andrews’s site, also on Big Visible Charts, which contains many other useful presentation approaches.

As practitioners and advocates of agile approaches, or as people considering the use of agile, presenting the real status of an effort is crucial. The more presentation approaches we have, and the more creative they are, the better we can communicate the real business value that the stakeholders are receiving.
