The Rules of Scrum: I do not track my hours or my “actual” time on tasks

Scrum considers tracking the time individuals spend on individual tasks to be wasteful effort.  Scrum is only concerned with time when it comes to the time-boxes of the Sprint and the Sprint meetings.  Scrum also supports sustainable development, which implies working sustainable hours.  When it comes to the completion of tasks, the team is committed to delivering value.  The tasks in the Sprint Backlog represent the remaining work the team has identified for delivering on its committed increment of potentially shippable value for the current Sprint.  The tasks themselves have no value and therefore should not be tracked: Scrum is concerned only with burning down to value delivered, and time spent on individual tasks is irrelevant.

What happens if I do track my hours or my “actual” time on tasks?  Tracking promotes a culture of “bums in seats”: the notion that it is more important for people to be busy at any cost than to produce valuable, high-quality results.  It also promotes bad habits such as forcing work into billable hours even when it is not billable.  Overall, time tracking in a Scrum environment does much harm and can undermine the entire framework.

What are Ways to Measure Productivity?

One of the most difficult aspects of doing Agile is finding reasonable, doable, agreeable ways to measure productivity.  Many organizations don’t measure productivity at all, or use only very indirect indicators.

What do you do to measure productivity and how is it working for you?  If you don’t measure productivity, why not?

Report average velocity and fail 50% of the time

The question of “expected velocity” and long-term planning has come up at more than one client. A recent client conversation, however, got me thinking about how to interpret velocity when estimating and plotting a roadmap based on a current backlog of features. Assume, for a moment, a backlog of story-pointed features and ten good iterations (consistent team, no odd occurrences that would affect velocity). Mathematically, the average velocity (well, a mean really) is a 50/50 proposition for any subsequent iteration. Some organizations don’t find that level of confidence acceptable. So what velocity should be reported as “expected” for iteration/sprint planning and roadmap forecasting, and how should it be used?

Context

Interpreting velocity, before anything else, requires some context. An agile organization that treats estimates as hypothetical will find this article of less use; in fact, a good question is whether estimation is a value-added activity at all. For this post, assume an organization that sees strong value in estimation and planning.

Culture

The biggest piece of context is the organizational culture. Two cultural factors matter here, and both are important because they shape how velocity is understood within the organization.

What is Failure?

First is the meaning of failure in the organization. Is failing to deliver what was committed to by the planned date considered a failure of the team, or is it simply a fact to be understood and accounted for in future planning? Even in Agile organizations, the former is often true and a hard habit to break. If not delivering to expectations is considered failure and has negative consequences, then estimation is being treated not as estimation but as prediction and contract. Velocity then becomes a commitment, and should be used conservatively.

Consistency or Speed?

The second item to know is whether consistency and predictability of delivery are of higher strategic value than the actual rate of delivery. This is often unstated. Usually people want delivery that is both fast and consistent, but in truth you can have consistent software development, or fast software development, or a balance between the two. A lack of trust, or a history of quality problems, is usually a strong motivation to favour consistency over speed. In this case as well, velocity is more of a boundary than an indicator.

Emotional Loading in Estimation (or why not Low-ball?)

If estimation is seen as binding, contractual, or limiting, then it gets loaded with additional emotions. Trust, promise, and betrayal are words used in such organizational cultures. Distrust is usually a strong factor, especially between silos (business vs. technology, company vs. project management vs. customer, etc.). So when people are asked to give estimates, even using agile-friendly mechanisms such as story points, the estimates usually get cemented into an accountability model, and so they start to become conservative. Some people are then accused of low-balling, others of irrational expectations… we’ve all seen this. The language becomes one of contention and blame, and “low-balling” itself is often an outright pejorative for estimating too conservatively.

This doesn’t happen only in agile environments; project managers in traditional PMBOK frameworks have long factored risk into “contingency budgets”. Interestingly, though, if a project manager factors risk into the task estimates, they’re “low-balling capacity,” yet if they factor it out and layer it on top of the project work, it’s “contingency budgeting” (at least in a few experiences I’ve had). Either way, someone is adding a factor for uncertainty, based on the need to predict conservatively, liberally, or somewhere in between.

That’s the point of this article: how can agile projects use velocity to estimate as conservatively (or liberally) as is appropriate?

An average is a 50% chance to succeed (or fail)

Velocity is not a constant. It’s a set of instantaneous values on a curve, with the instances being iterations. That means it varies, and is therefore only meaningful statistically. So how do you use velocity statistically in a reasonable way, and improve confidence? One way is to stop delivering against “average” velocity.

A lot of coaches use the average velocity over the previous N iterations. If estimation is a commitment, this is unhelpful for all sorts of reasons. By definition, an average (well, actually a mean, but they’re close) is a 50/50 proposition: if you report the average team velocity (assuming it’s accurate), then statistically the team will come in under it about half the time and over it about half the time. Taken in any given instance, an average is a crap shoot; it is only good in the long run. For that to work, the long haul has to include permission to fail and a lot of trust: teams need to be able to miss dates sometimes and exceed them at other times, and it should all wash out in the end. In the organizations I’m describing, that trust isn’t there. Additionally, if the language of commitment is about meeting each instantaneous iteration commitment (as opposed to delivering high-quality customer value as quickly as is sustainably possible), then you aren’t playing the long game; you’re playing a very short game.

Simulate Velocity, not work

In a PMI training course I took when I was at Sun Microsystems, we were pointedly informed that two-point task estimates are a perfect way to fail half the time, per the logic above. One-point estimates are just idiotic. Three-point estimates were better: we ran a Monte Carlo simulation, got a curve and a distribution, and then determined a confidence level, yadda yadda. In agile we’re trying to avoid wasting a lot of time estimating up front, but one way to start representing velocity properly is to do the same kind of statistical modelling done in traditional project management, only simulating velocity, not work items.

In this approach, you take the last N iterations (say 10), determine the maximum velocity (optimistic), the minimum velocity (pessimistic), and the mode (the velocity value that occurs most frequently). Then you run a Monte Carlo simulation to get a statistical pattern. Now you actually can determine an answer based on confidence: if you want to be right with 80% confidence, you pick a velocity that 80% of the simulated runs met or exceeded. (Note: there is a paucity of Excel templates to do this math automatically, and those that exist are often for sale. It would be nice to have a few functions with arbitrary distributions based on min-mode-max to help this along.)
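
In the absence of such templates, here is a minimal sketch of the idea in Java. It assumes a triangular distribution as the simplest min/mode/max shape; the class and method names and the sample velocities are hypothetical.

import java.util.Arrays;
import java.util.Random;

public class VelocitySimulator {

    private static final Random RANDOM = new Random();

    // Inverse-CDF sampling from a triangular distribution defined by the
    // pessimistic (min), most frequent (mode), and optimistic (max) velocities.
    static double sampleVelocity(double min, double mode, double max) {
        double u = RANDOM.nextDouble();
        double cut = (mode - min) / (max - min);
        if (u < cut) {
            return min + Math.sqrt(u * (max - min) * (mode - min));
        }
        return max - Math.sqrt((1 - u) * (max - min) * (max - mode));
    }

    // The velocity met or exceeded in `confidence` (e.g. 0.80) of simulated runs.
    static double velocityAtConfidence(double min, double mode, double max,
                                       double confidence, int runs) {
        double[] samples = new double[runs];
        for (int i = 0; i < runs; i++) {
            samples[i] = sampleVelocity(min, mode, max);
        }
        Arrays.sort(samples);
        // 80% of runs meet or exceed the sample at the 20th percentile.
        return samples[(int) ((1.0 - confidence) * runs)];
    }

    public static void main(String[] args) {
        // Hypothetical history: min 18, mode 25, max 31 points per iteration.
        System.out.printf("80%% confidence velocity: %.1f%n",
                velocityAtConfidence(18, 25, 31, 0.80, 100_000));
    }
}

With a history of min 18, mode 25, and max 31, this reports roughly 22 points: noticeably below the mean, which is exactly the point of planning at 80% confidence rather than 50%.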

It’s not perfect, and it’s a potentially huge amount of administrative overhead. Elsewhere I’ve referenced blogs that oppose any estimation at all, but if you are going to estimate, then working statistically with simulation is the only way to make small sample sizes meaningful.

Commitment Velocity: Low-Ball as a Policy

Another approach, perhaps controversial but taught by some Scrum trainers, is to pick the lowest historical delivered velocity. This is a commitment-based approach, built on the assumption that trust around consistent delivery is critical to building sound relationships in which product owners and teams can safely state their needs and get things done with a minimum of contractual behaviour. By taking the minimum, you force a low-ball capacity, which means you can have high confidence of success after a few iterations. After a while, you will likely have some spare time on your hands; teams can then choose to pull more work in (without adjusting their commitment velocity), work on “technical debt”, improve their skills, and so on. A team can raise its commitment velocity at certain inflection points in the project: say a new team member adds a necessary skill not previously available, and after a few iterations the team is consistently hitting a higher number. But raising the number is a careful process to ensure the team is truly committing, and if they don’t make the new number, it goes back down to what they actually accomplished.
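
A sketch of this policy in code is almost trivially short; the class name and velocity history below are hypothetical.

import java.util.Arrays;

public class CommitmentVelocity {

    // Commit to the lowest velocity actually delivered in recent history.
    static int commitmentVelocity(int[] recentVelocities) {
        return Arrays.stream(recentVelocities).min().getAsInt();
    }

    public static void main(String[] args) {
        int[] lastTen = {22, 25, 19, 27, 24, 26, 21, 25, 23, 28};
        // Prints 19: anything delivered beyond that is slack the team can use
        // for pulling in extra work, paying down technical debt, or learning.
        System.out.println(commitmentVelocity(lastTen));
    }
}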

Indemnify teams’ learning

An arguably healthier option, if you have built enough trust, is simply to indemnify the team against failing to meet the estimate. Since you’re doing mathematics on actuals to generate an expected future number, everyone can acknowledge that past behaviour is no guarantee of future behaviour, and simply use the number for capacity planning. In this case, estimation is actually estimation, not commitment or contract: the team is expected to be ahead sometimes and behind sometimes. The upside is that a lot of extra time isn’t spent playing with fictional numbers. Teams spend their effort on delivering as quickly yet sustainably as they can, and the organization treats them as trusted professionals in this. The temptation to assume you can predict the future is seen as folly, and the estimates are used to guide overall direction, not to make outward customer commitments.

Don’t be mindless

There may be other approaches, I’m sure; the agile community is certainly not short of people who love this topic and can talk for hours about “proper” estimation. The point of this post is merely to lay out some options and to ask you to look at your organizational culture, team culture, and customer culture, and at the meaning of terms like commitment, failure, success, consistency, and speed. As you come to understand the culture, balance consistency vs. speed, trust, and other factors to choose a method of estimation that meets your goals. Don’t estimate based on your own internal cultural assumptions: you may have developed or been taught techniques that were useful when and where you learned them, but they may no longer be so. Or maybe they weren’t so useful then, either. Regardless, because estimation cuts to the heart of the dialogue between producer and consumer, and establishes the parameters of that discussion, it’s critical that you think your choice through.

[Christian also blogs at http://www.geekinasuit.com/]

ANN: Agile Software Engineering Practices training by Isráfíl Consulting

Isráfíl Consulting is finally prepared for the first series of its Agile Software Engineering Practices training courses. This series is offered in partnership with Berteig Consulting, who are graciously hosting the registration process. Their team has also helped greatly in shaping the presentation style and structure of the course. The initial run will be in Ottawa, Toronto (Markham), and Kitchener/Waterloo.

Topics covered will include Test-Driven Development (TDD), testability, supportive infrastructure such as build and continuous integration, team metrics, incremental design and evolutionary architecture, dependency injection, and much more. (This course won’t present the planning side of XP, but it covers many other aspects common to XP projects.) It makes a great complement to training in agile processes such as XP, Scrum, or OpenAgile. The overview slide presentation is available for free download from the Isráfíl web site.

The courses are scheduled for:

The course is $1250 CAD per student, and participants receive a transferable $100 CAD discount on other training with Berteig Consulting as part of our ongoing partnership. I initially prototyped this course in Ottawa this December, and am very excited to see it through in several locales. Class size is limited to 15 so that the instruction style can stay hands-on. The schedule entries above link to Berteig Consulting’s course system and include registration links at the bottom of each description. Locations are TBD, but will be updated at the above links as soon as they’re finalized.

A further series is planned for several US cities in March, and we’ll be sure to announce them as well.

Yet another misunderstanding of TDD, testing, and code coverage

I was vaguely annoyed to see this blog article featured in JavaLobby’s recent mailout. Not because Kevin Pang doesn’t make some good points about the limits of code coverage, but because his title is needlessly controversial, and because JavaLobby is engaging in some agile-baiting by publishing it without editorial restraint.

In asking the question “Is code coverage all that useful?”, he asserts at the beginning of his article that Test-Driven Development (TDD) proponents “often tend to push code coverage as a useful metric for gauging how well tested an application is.” That statement is true, but the remainder of the post takes apart code coverage as a valid “one true metric,” a claim that TDD proponents don’t make, except in Kevin’s interpretation.

He further asserts that “100% code coverage has long been the ultimate goal of testing fanatics.” This isn’t true. High code coverage is a desired attribute of a well-tested system, but the goal is a fully and sufficiently tested system; coverage is indicative, not proof, of that. What do I mean? Any team that has tested its system thoroughly enough to exceed 95% coverage has likely (in my experience) thought through how to express the system’s happy paths, edge cases, and so on. The coverage here is a symptom, not a cause, of a well-tested system. And the metric can be gamed; in fact, when imposed as a management quality criterion, it usually is. Good metrics should confirm a result obtained by other means, or provide leading indicators. Few numeric measurements are subtle enough to really drive system development.

Having said that, I have used code-coverage in this way, but in context, as I’ll mention later in this post.

Kevin provides example code similar to the following:

String foo(boolean condition) {
    if (condition)
        return "true";
    else
        return "false";
}

… and talks about how, if the unit tests exercise only the true path, the method has only 50% coverage. Good so far. But then he goes on to say that “code coverage only tells us what was executed by our unit tests, not what executed correctly.” He is carefully telling us that a unit test executing a line doesn’t guarantee that the line works as intended. Um… that’s obvious. And if the tests don’t pass, the line shouldn’t be considered covered at all. There seem to be some unclear assumptions about how testing needs to work, so let me get some assertions out of the way…

  1. Code coverage is only meaningful in the context of well-written tests. It doesn’t save you from crappy tests.
  2. Code coverage should only be measured on a line/branch if the covering tests are passing.
  3. Code coverage suggests insufficiency, but doesn’t guarantee sufficiency.
  4. Test-driven code will likely have the symptom of nearly perfect coverage.
  5. Test-driven code will be sufficiently tested, because the author wrote all the tests that form, in full, the requirements/spec of that code.
  6. Perfectly covered code will not necessarily be sufficiently tested.

What I’m driving at is that Kevin is arguing against something entirely different from what TDD proponents actually argue; he’s arguing against a common misunderstanding of how TDD works. On point 1 he and I are in agreement. Many of his commenters mention #3 (and he states it in various ways himself). His description of what code coverage doesn’t give you is absurd once you take #2 into account (a line of covered code is only covered if the covering test is passing). But most importantly, “TDD proponents” would, in my experience, find this whole line of argument irrelevant: it argues against code coverage as a single metric for code quality, whereas they would achieve code quality through thoroughness of testing by driving the development through tests. TDD is a design methodology, not a testing methodology; you get tests as side-effect artifacts of the approach. Useful in their own right? Sure, but that’s only sort of the point. TDD isn’t just writing the tests first.

In other words – TDD implies high or perfect coverage. But the inverse is not necessarily true.

How do you achieve thoroughness by driving your development with tests? You imagine the functionality you need next (your next increment of useful change) and write or modify your tests to “require” the new piece of functionality. Then you write the code, and you go green. Code coverage doesn’t enter into it, because you have near-perfect coverage at all times by implication: every new piece of functionality you develop is preceded by tests covering its main paths, error states, upper and lower bounds, and so on. Code coverage in this model is a great way to notice that you screwed up and missed something, but nothing more.
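
As a minimal illustration of that rhythm, here is what test-first looks like for the foo() example above. JUnit 5 is assumed, and the test class is hypothetical; between them, the two tests specify the behaviour and happen to cover both branches.

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class FooTest {

    // Written before foo() itself: each test "requires" one behaviour.
    @Test
    void returnsTrueStringForTrueCondition() {
        assertEquals("true", foo(true));
    }

    @Test
    void returnsFalseStringForFalseCondition() {
        assertEquals("false", foo(false));
    }

    // The method under test, inlined from the example above so that this
    // sketch is self-contained.
    static String foo(boolean condition) {
        if (condition)
            return "true";
        else
            return "false";
    }
}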

So, is code coverage useful? Heck yeah! I’ve used coverage to discover lots of waste in my systems. I’ve removed whole sets of “just in case I need them” APIs when coverage showed they had become rote (lots of accessors/mutators never called in normal operations). Is code coverage the only way I would have found them? No. If I’m dealing with a system that wasn’t driven with tests, or was poorly tested in general, I might use coverage as a quick health meter, but probably not. Going from zero to 90% coverage on legacy code is likely to be less valuable than simply rewriting whole subsystems using TDD… and often more expensive.

Regardless, while Kevin formally asks “Is code coverage useful?”, he’s really asking (rhetorically) whether it is reasonable to worship code coverage as the primary metric. But if no one is asserting the positive, why question it? He may be dealing with a lot of people who misunderstand how TDD works. He could be dealing with metrics bigots, or with management-imposed metrics initiatives, which often fail. It might be a pet peeve, or he’s annoyed with TDD and this is a great way to do some agile-baiting of his own. I don’t know him, so I can’t say; his comments seem reasonable, so I assume no ill intent. But the answer to his rhetorical question is “yes, in context,” which is not surprising, since most rhetorical questions can be answered that way. Hopefully it’s now a bit clearer where coverage is useful, and where (and how) it’s not.

(This article is a cross-post from “Geek in a Suit”)

Finally – a solid metric for code quality.

Robert C. Martin (Uncle Bob to you and me) suggested, in his “quintessence” keynote at the Agile2008 conference, that he had the perfect metric for code quality. Cyclomatic complexity and other such measures, he noted, were interesting mostly to those who invented them. His answer was brilliant, and is easily measured during code reviews:

WTFs per minute

I love it. All you need is a counter and a stopwatch. Start the code review, start the watch, and click every time you see code that makes you say or think “What the F???”. This dovetails nicely with his original recommendation for a new statement in the Agile Manifesto: “Craftsmanship over Crap”.

Measuring Process Improvements – Cycle Time?

One of the challenges with agile methods is getting a clear perspective on how to measure process improvements. I recently had a brief discussion about this with a C-level executive at a small organization. His concern was that cycle time was meaningless because it depends so much on the size of the work package. So how do we use cycle time as a meaningful measurement? What else can we use to measure process improvement?

Let’s look at the difference in measuring cycle time in an agile vs. non-agile environment. Then we’ll get to other measurements.

Cycle Time, Waterfall and Agile

First, let’s define cycle time. From iSixSigma we have:

Cycle time is the total time from the beginning to the end of your process, as defined by you and your customer. Cycle time includes process time, during which a unit is acted upon to bring it closer to an output, and delay time, during which a unit of work is spent waiting to take the next action.

This definition is important because it gives us a clue about the potential difference between waterfall and agile methods of delivering value. Let’s imagine the typical process used in a waterfall environment. The following are the high-level steps:

  1. Customer / User / Stakeholder sees a need, validates it and submits a request to have that need fulfilled. This is when we start the clock on cycle time.
  2. The fulfillment organization (IT, Product Development, R&D) puts the request in a queue, backlog or requirements management system.
  3. Along with other requests, the fulfillment organization schedules the work on the request, usually by creating a project to fulfill it and other related requests. The project is estimated at a high level, the current status of in-flight projects is noted, and the new project is prioritized relative to other projects.
  4. At some point, based on the schedule and the reality of the work on other projects, the project containing our customer’s request is started. Here, “started” means that detailed requirements are gathered.
  5. After sufficient requirements are gathered, a detailed technical analysis is done including architecture, high-level design, risk analysis, etc.
  6. Development begins. (Note: many people mistakenly start measuring cycle time here.)
  7. Developers and testers work to validate the results of development and fix any problems discovered.
  8. Final acceptance testing is done.
  9. The results of the project are deployed to users, sold to the client, or in some other way passed back to the original requestor. This is when we stop the clock on cycle time.

So our true cycle time runs from the formal submission of the customer’s request to the delivery of its fulfillment. There are a few important things to note here. First, there is a queue of work requested but not yet scheduled, and another queue of work scheduled but not yet started; if we can reduce the size of these queues, we can improve cycle time in a general sense. Second, most organizations of any significant size have different queues based on the urgency of the request. For example, a high-severity bug discovered in the production system of a company’s largest client will be treated differently from a wish-list item for a small not-yet-client: these two requests won’t even go into the same queue, since the high-priority problem will be quickly escalated to a support or development team that can work on it immediately. Third, it is tempting for the development group to measure its local cycle time. This is a Really Bad Idea, since it leads to sub-optimizing behaviours. For example, it is easy for the development team to improve its cycle time by sacrificing quality, but that just increases the QA cycle time, and the overall (true) cycle time probably suffers more than the development group’s local improvement gains.

Now let’s look at the steps that occur in an ideal agile environment:

  1. As before, the Customer / User / Stakeholder sees a need, validates it and submits a request to have that need fulfilled.
  2. That request is immediately placed in a ready state for the next iteration (cycle, sprint) of a delivery team. Elapsed time: maximum one month.
  3. Team completes the request including all work to actually deliver/deploy and work is delivered to the stakeholder at the end of the iteration. Elapsed time: maximum two months.

So the ideal agile method has a maximum cycle time of two months from the time a request is made to delivery… how many teams are doing this? Not many.

The ideal is extremely difficult to accomplish. Getting there requires that the development organization catch up to the business side so that there are zero pending requests at the start of each iteration. It also requires that business-side users and stakeholders be able to articulate their requests so that they are small and appropriately detailed for the team doing the work.

A realistic agile implementation is actually a lot messier. Depending on the type of request, the cycle time for a piece of work can vary widely, and some low-priority items may take years even in an agile environment: a low-priority request is made and approved but never quite makes it into a project… and then, once in a project, never quite makes it to the top of the team’s product backlog. This is interesting to look at, but it points out another important aspect of measuring cycle time: mostly we care about the average cycle time (or some other statistically interesting aggregate measure).

The predominant factor in most organizations’ cycle time is the number and size of the queues their work passes through. In most organizations there are several queues, and most of them contain large numbers of requests or bits of work in process. Queues represent huge amounts of waste. It is easy to see that queue size and cycle time are closely related: the more items in a queue, the longer the cycle time.
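
This relationship can be made precise: it is Little’s Law from queueing theory, which says that average cycle time equals the number of items in the system divided by the average completion rate. As a hypothetical example, a queue of 30 requests worked off at 10 requests per month implies an average cycle time of three months; cut the queue to 10 requests at the same completion rate and the average cycle time drops to one month.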

This leads to a simple conclusion: regardless of lifecycle approach, reducing the size of an organization’s queues is one of the easiest ways to reduce cycle time. What are some common queues? There are often queues of projects, queues of enhancement requests, queues of defects to be fixed, queues of features, queues of tasks, queues of email (large inboxes), queues of approval requests, queues of production database changes. The number of queues increases the more an organization is oriented around functional groups, and the number of queues decreases the more an organization arranges work to be handled by cross-functional teams.

Cycle Time and Work Package Size

This is where queueing theory and agile methods intersect really well. Cycle time is related to the load on your system, and in particular on your units of work processing; in most organizations, teams are created to handle work, and the more work given to a team simultaneously, the higher its utilization level. Many organizations like high utilization levels because it guarantees that people are doing valuable work all the time they are paid to work. This is a completely false benefit, and in fact extremely destructive to overall productivity. From queueing theory we know that the cycle time for a piece of work grows non-linearly with the utilization level, exploding as utilization approaches 100%. We see this whenever we overload a server… yet for some reason we fail to see it when we overload a person, a team, or an organization, even though it still happens.
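
To see how sharply cycle time responds to utilization, here is a small sketch based on the classic single-server (M/M/1) queueing model; the model choice and the rates are illustrative assumptions, not measurements.

public class UtilizationDemo {
    public static void main(String[] args) {
        // Hypothetical service rate: the team finishes 10 work items per week.
        double mu = 10.0;
        for (double rho : new double[] {0.50, 0.80, 0.90, 0.95}) {
            double lambda = rho * mu;       // arrival rate at this utilization
            // M/M/1 average time in system: W = 1 / (mu - lambda).
            double w = 1.0 / (mu - lambda);
            System.out.printf("utilization %.0f%% -> average cycle time %.1f weeks%n",
                    rho * 100, w);
        }
    }
}

Running this shows cycle time doubling from 1.0 to 2.0 weeks as utilization climbs from 90% to 95%, even though the team is only five percentage points “busier”.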

Cycle time is also related to the variability in the size of the work packages. Low variability means the load-related growth factor is small; high variability means it is large. In other words, if you have a highway that only allows motorbikes, you can sustain a very high load without bad traffic jams. If you have a highway that allows anything on it, you get traffic jams even at low levels of load. This is why HOV or commuter lanes, and the left lane of multi-lane highways, don’t typically allow transport trucks and buses. This result from queueing theory is not intuitively obvious, so it is even harder for us to apply to software development.

But if we wish to create a high-performance development organization, we must apply these two ideas: load and work-size variability. The simplest way to do this is to have a single team work on a single project at a time, and to use iterations to ensure that the work being done is always exactly the same size: the size of the iteration.

Improving Cycle Time

It is possible to have very short iterations and still have a long cycle time, and many organizations make a few common mistakes with agile that cause exactly this. If the work done inside each iteration is restricted to pure development work and everything else is done outside the iterations, then cycle time likely stays long. A common example is having the QA folks remain separate from the development team and do their work after the development team releases its work.

There is really only one way to avoid this: have a comprehensive definition of “done” that is met by the team every single iteration. This ensures that all work for a given customer request, from idea to release, is done inside a single iteration. A side effect is that all the pieces of work need to be small. It also gets rid of all the queues except one: the queue of ideas approved for delivery. With a single queue to manage, it becomes easy to measure cycle time, and therefore easy to improve it.

Improving cycle time can now be done in a few ways:

  1. Put a cap on the number of items in the work queue. Since cycle time is directly related to the size of the queues in a system, this is a sure way of putting a maximum on cycle time.
  2. Go through all existing requests and throw away as many as possible. This can be tough to do, but if you are able to do a cost-benefit analysis, you will typically find that older items in the queue are no longer worthwhile.
  3. Provide more stringent gating functions for allowing requests onto the queue. The fewer items added, the faster the size of the queue is reduced.
  4. And of course, increase the performance of your team(s) so that they go through items on the queue more quickly.

Productivity and Cycle Time

Once you have control of cycle time, it is possible to make reasonable measurements of productivity, and two more metrics become extremely important (not that they weren’t important before, but they are easier to work with now): the first is Return on Investment (ROI) and the second is customer satisfaction.

ROI, in its simplest form, is a measure of how much benefit there is in doing something compared to the cost of doing it. It takes into account the importance of time and timing, the other options you may have, and, hopefully, the business reality of your work. It also takes costs into account.

In software development, the primary cost is the cost of the staff doing the work, and the time factor is your cycle time (ah, that’s where we use it!). If you have a consistent team working on iterations that are always the same size, and little or no work being done outside of the iterations, it is very easy to calculate ROI in a useful way: simply measure how much value a given iteration’s worth of work will generate and divide by the cost of the team for an iteration (and if the team is not yet doing work as it comes in, take into account the time value of money, since the work might not be done for several iterations). Productivity is then simply a measure of the return for each team-iteration. Dollars per iteration. Simple. (A worked example follows the list below.) If the team’s productivity goes down, you can ask some really simple questions:

  • Did the expected return of the work go down? If so, is there more valuable work the team should be doing? This becomes an opportunity for product improvement.
  • If not, what caused the team to get less done? Was the work harder than expected? Was there a skill gap? Was there an organizational obstacle that was revealed? Was someone sick? This becomes an opportunity for process and team improvement.
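
As a worked example with hypothetical numbers: a team that costs $100,000 per iteration and delivers work expected to return $150,000 has a productivity of $150,000 per team-iteration and a simple ROI of 1.5. If the next iteration’s work is expected to return only $110,000, the two questions above tell you where to look: either the work itself was worth less (a product problem), or something slowed the team down (a process or team problem).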

Customer satisfaction can be measured in many ways. If you have already started using agile practices, there is a good chance that your customers are already more satisfied than they were before; this will show up informally through word of mouth. However, it is good to have a more systematic way of measuring customer satisfaction, and one of the simplest and most commonly used methods is the Net Promoter Score. From Wikipedia:

Companies obtain their Net Promoter Score by asking customers a single question (usually, “How likely is it that you would recommend us to a friend or colleague?”). Based on their responses, customers can be categorized into one of three groups: Promoters, Passives, and Detractors. In the net promoter framework, Promoters are viewed as valuable assets that drive profitable growth because of their repeat/increased purchases, longevity and referrals, while Detractors are seen as liabilities that destroy profitable growth because of their complaints, reduced purchases/defection and negative word-of-mouth. Companies calculate their Net Promoter Score by subtracting their % Detractors from their % Promoters.
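
As a worked example with hypothetical numbers: survey 200 customers and find 90 promoters (45%), 70 passives (35%), and 40 detractors (20%); the Net Promoter Score is 45 - 20 = 25.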

The Net Promoter Score is closely linked to quality, including the hard-to-measure parts of quality like responsiveness, ease of use, and fitness for purpose.

Cycle time also affects customer satisfaction. The faster you can respond to requests from customers, users, or other stakeholders, the more likely they are to be satisfied. This happens for two reasons: fast response means that solutions are more likely to still be useful and correct when actually delivered, and it gives more opportunities for feedback.

In fact, these three measures (cycle time, ROI, and customer satisfaction) form a mutually supporting and cross-checking system for ensuring productivity and effectiveness. Measuring anything else muddies the waters and can cause sub-optimal behaviours. The real challenge for most teams is realizing that all their local measures of performance and effectiveness may actually be causing unintentional harm, because they draw the team’s attention away from the three organizationally important measures.

Cycle time is the measure that is most closely related to process improvements, but ROI and customer satisfaction should also be used to ensure that process improvements don’t accidentally harm the organization.

Agile Benefits: Early Return on Investment

We wouldn’t do agile if we didn’t think it was better in some way. More and more, I am seeing the adoption of agile methods being driven by business management (rather than engineering). There is a clear reason for this: agile methods offer the possibility of early return on investment compared to other ways of working. This benefit is only one of five essential benefits of agile, but it is one of the most practical and easiest to measure. It is therefore important to clearly understand how agile methods can deliver this benefit… and how they can fail to do so!

Agile Management Tools

Ah, tools…

For agile training and consulting, I always recommend starting with the standard physical tools: note cards on a wall, plus a whiteboard and a digital camera. However, I am still interested in what tools people have used when they found that note cards were not sufficient. Here is a list of the tools I am aware of, as well as a link to a survey…

Agile Estimation and Pairing

I just read a recent article on PMHut called “Schedule Questions: Pair Programming and the PNR Curve“. There is much in this article that is important for agile project management… and much that should be avoided at all costs!

Strategic Plan

Okay, this is only marginally related to agile, but I thought it was interesting nevertheless: How to Write a Detailed Strategic Plan. The main connection to Agile Work is that you need to have a clear performance goal in mind towards which you are working. This may be a great way to clarify your thoughts about such a goal.

First Iteration Ending

My first iteration using Agile Work for my business development has come to a close. Here is what I did for a “demo” and retrospective.

Technical Debt

Last night I was thinking more about the analogy of technical debt. In this analogy, design and quality flaws in a team’s work become a “debt” that must eventually be paid back. This analogy is fantastic because it can be taken just a little bit further to understand just how bad defects are…

The Case for Context Switching

Recently, Dimitri Zimine wrote an excellent little story about context switching. Joel Spolsky writes in “From the ‘You Call this Agile’ Department“:

Dmitri is only looking at one side of the cost/benefit equation. He’s laid out a very convincing argument why Sarah should not interrupt her carefully planned two week iteration, but he hasn’t even mentioned arguments for the other side: the important sale that will be lost.

Okay… I’ll bite.

The Seven Core Practices of Agile Work

Agile Work consists of seven core practices. These practices form a solid starting point for any person, team or community that wishes to follow the Middle Way to Excellence.
