
Towards a Culture of Leadership: 10 Things Real Leaders Do (and So Can You)

This article is adapted from a session proposal to Toronto Agile Conference 2018.

Leadership occurs as conscious choice carried out as actions.

Everyone has the ability to carry out acts of leadership. Therefore, everyone is a potential leader.

For leadership to be appropriate and effective, acts of leadership need to be tuned to the receptivity of those whose behaviour the aspiring leader seeks to influence. Tuning leadership requires the ability to perceive and discern meaningful signals from people and, more importantly, the system and environment in which they work.

As leaders, the choices we make and the actions we carry out are organic with our environment. That is, leaders are influenced by their environments (often in ways that are not easily perceived), and on the other hand influence their environments in ways that can have a powerful impact on business performance, organizational structures and the well-being of people. Leaders who are conscious of this bidirectional dynamic can greatly improve their ability to sense and respond to the needs of their customers, their organizations and the people with whom they interact in their work. The following list is one way of describing the set of capabilities that such leaders can develop over time.

  1. Create Identity: Real leaders understand that identity rules. They work with the reality that “Who?” comes first (“Who are we?”), then “Why?” (“Why do we do what we do?”).
  2. Focus on Customers: Real leaders help everyone in their organization focus on understanding and fulfilling the needs of customers. This is, ultimately, how “Why?” is answered.
  3. Cultivate a Service Orientation: Real leaders design and evolve transparent systems for serving the needs of customers. A leader's effectiveness in this dimension can be gauged both by the degree of customer satisfaction with deliverables and by the extent to which those working in the system are able to self-organize around the work.
  4. Limit Work-In-Progress: Real leaders know the limits of the capacity of systems and never allow them to become overburdened. They understand that overburdened systems also mean overburdened people and dissatisfied customers.
  5. Manage Flow: Real leaders leverage transparency and sustainability to manage the flow of customer-recognizable value through the stages of knowledge discovery of their services. The services facilitated by such leaders are populated with work items whose value is easily recognizable by their customers, and the delivery capability of those services is timely and predictable (trustworthy).
  6. Let People Self-Organize: As per #3 above, when people doing the work of providing value to customers can be observed as self-organizing, this is a strong indication that there is a real leader doing actions 1-5 (above).
  7. Measure the Fitness of Services (Never People): Real leaders never measure the performance of people, whether individuals, teams or any other organization structure. Rather, real leaders, practicing actions 1-6 (above) understand that the only true metrics are those that provide signals about customers’ purposes and the fitness of services for such purposes. Performance evaluation of people is a management disease that real leaders avoid like the plague.
  8. Foster a Culture of Learning: Once a real leader has established all of the above, people involved in the work no longer need be concerned with “safe boundaries”. They understand the nature of the enterprise and the risks it takes in order to pursue certain rewards. With this understanding and the transparency and clear limits of the system in which they work, they are able to take initiative, run experiments and carry out their own acts of leadership for the benefit of customers, the organization and the people working in it. Fear of failure finds no place in environments cultivated by real leaders. Rather, systematic cycles of learning take shape in which all can participate and contribute. Feedback loop cadences enable organic organizational structures to evolve naturally towards continuous improvement of fitness for purpose.
  9. Encourage Others to Act as Leaders: Perhaps the highest degree of leadership is when other people working with the “real leader” begin to emerge as real leaders themselves. At this level, it can be said that the culture of learning has naturally evolved into a culture of leadership.
  10. Stay Humble: Real leaders never think that they have it all figured out or that they have reached some higher state of consciousness that somehow makes them superior to others in any way. They are open and receptive to the contributions of others and always seek ways to improve themselves. Such humility also protects them from the inevitable manipulations of charlatans who will, from time to time, present them with mechanical formulas, magic potions, palm readings and crystal ball predictions. Real leaders keep both feet on the ground and are not susceptible to the stroking of their egos.

If you live in Toronto, and you would like to join a group of people who are thinking together about these ideas, please feel welcome to join the KanbanTO Meetup.

Register here for a LeanKanban University accredited leadership class with Travis.



Kanban: Real Scaled Agility for Your Enterprise

Your business is an ecosystem of interdependent services, a complex adaptive system.

Many organizations I know started their journey of increasing their agility with Scrum. Scrum alone didn't solve all of their problems. Kanban enables organizations to evolve their service delivery systems towards mature business agility.

As addressed in How Kanban Saved Agile, pure Scrum is extremely rare. Scrumbut (the disparaging label spawned from so many organizations reporting that they do Scrum, but…), on the other hand, is extremely common.

In order to not be Scrumbut, you need the following:
  • Cross-functional development team of 7 +/- 2 people—ALL skills needed to ship product are present on the team—there are no dependencies external to the team;
  • One source of demand with no capacity constraints—the Product Owner is the customer AND full-time member of the team;
  • Sprints are one month or less; they begin by taking on new demand from the Product Owner and end with the delivery of a potentially shippable Product Increment, followed by a retrospective about how to do better next Sprint;
  • “Potentially Shippable” means that the decision about whether to actually ship is purely a business decision. All the technical work is done;
  • If all of the technical work required in order to ship isn’t done, then the Sprint is a failed Sprint;
  • The Scrum Master is a servant-leader and Scrum educator to the entire organization.

How many organizations do you know of with Scrum teams that meet all of the requirements above? I don’t know one.

So, what's the solution? Give up on Scrum? Are we still getting benefits from Scrumbut? Alright, let's stop it with the Scrumbut already. Let's acknowledge what's really going on with real teams in the real world and call that Scrum. Let's refer to the above checklist as "Ideal Scrum".

Agile scaling methods have become a popular risk hedging tactic for all the loose ends dangling around the real teams in the real world.

Here are some of the reasons for adding layers of scaling around Agile teams:

  • Teams are not fully cross-functional;
  • Teams have external and opaque dependencies;
  • Many of these dependencies are shared services with multiple sources of demand and constrained capacity—often overburdened;
  • External dependencies can be other teams—demand from other teams shows up in their backlogs, prioritized by their own Product Owners;
  • Many dependencies don’t play by the same rules at all—some reside in a different part of the organization, with different structures and political forces;
  • The Product Owners are proxies for multiple stakeholders and customers and the Product Backlogs represent an array of multiple sources of demand, with different service level expectations, strategic origins, degrees of clarity, urgency and political forces pushing them into the delivery organization;
  • The Product Backlogs are made up primarily of solutions defined by stakeholders and translated by the pseudo-Product Owners as pseudo-user stories—how they get there is opaque, the “fuzzy front end”—and somewhere in here a fuzzy delivery commitment was already made;
  • The work of a Sprint includes all of the work that the non-cross-functional teams can get done—then whatever the teams get “done” is “delivered” (handed-off) to a subsequent set of teams or process steps (usually fairly well defined at an organizational level but outside of the teams’ influence);
  • Delivery decisions are made based on constraints imposed by legacy technology, systems and their gatekeepers (for historically good reasons);
  • The teams are “done” at the end of each Sprint, yet much work is still to be done before their “done” work is potentially shippable;
  • The Scrum Masters are held collectively accountable for the collective deliverables of the teams and for cross-team coordination and integration—accountability by committee translates into no one actually being responsible.
  • Middle managers are scrambling to pick up the pieces because they are actually accountable for delivered results.

Generally speaking, the aim of Agile scaling methods is to apply larger Agile wrappers around clusters of Agile teams in order to re-establish some kind of hierarchical structure needed to manage the interdependencies described above. Whether it's a Release Train or a Nexus, or whatever else, the idea is that there is an "Agile Team of Teams" managing the interdependencies of multiple, smaller teams. As long as the total number of people doesn't grow beyond the Dunbar number (~150), the Dunbar-sized group is dedicated and cross-functional, there is a team managing the interdependencies within the Dunbar, there are no dependencies outside of the Dunbar and there is some cadence (1-3 months) of integrated delivery—it's still "Agile". All of this scaling out as far as a Dunbar (and only that far) allows the enterprise to still "be Agile"—Scaled Agile.

This is all supposed to be somehow more realistic than Ideal Scrum (with perhaps an overlay of Scrum of Scrums and a Scrum of Scrum of Scrums). It's not. How many organizations do you know of that can afford to have ~150 people 100% dedicated to a single product? Perhaps today there is enough cash lying around, but soon enough the economic impact will be untenable, if not unsustainable.

How does Kanban address this problem? Your business is a complex adaptive system. You introduce a Kanban system into it such that it is likely that the complex adaptive system is stimulated to improve. The Systems Thinking Approach to Introducing Kanban—STATIK—is how you can make such a transition more successful (@az1):
  1. Understand the purpose of the system—explicitly identify the services you provide, to whom you provide them and why;
  2. Understand the things about the delivery of the service that people are not happy about today—both those whose needs are addressed by the service and those doing the work of delivering the service;
  3. Define the sources of demand—what your customers care about and why;
  4. Describe the capability of your system to satisfy these demands;
  5. Map the workflow of items of customer-recognizable value (@fer_cuenca), beginning with a known customer need and ending with the need being met through stages of primary knowledge discovery (Scrum teams somewhere in the middle, towards the end)—focus on activities, not looping value streams;
  6. Discover classes of service—there are patterns to how different kinds of work flow through your system (they are not arbitrarily decided by pseudo-Product Owners), what are they? Group them, they are classes of service and knowing them enables powerful risk management;
  7. With all of the above as an input, design the Kanban system for the service;
  8. Learn how to do steps 2-7 and start applying it directly to your own context in a Kanban System Design class;
  9. Socialize and rollout (learn how by participating in a Kanban Coaching Professional Masterclass);
  10. Implement feedback-loop Cadences for continuous evolution—learn the 7 Kanban Cadences and begin applying them to your own context in a Kanban Management Professional class;
  11. Repeat with all of the interdependent services in your organization—every “dependency” is a service—Kanban all of them with STATIK and begin implementing the Cadences.

Don't get hung up on teams, roles, your latest reorg, how people will respond to another "change", who's in, who's out, etc. These are all part of the service as it is now—your current capability. Initially, no changes are required at this level. The Kanban system will operate at a higher level of scale. Through feedback-loop cadences, it will evolve to be fit for the purpose of your customers without a traumatic and expensive reorg. Who is responsible for this? Identify this person. If you are the one thinking about this problem, there is a good chance that it's you. Whoever it is, this is the manager of the service; take responsibility, do the work and make life better for everyone.

For more information about LeanKanban University Certified Kanban courses provided by Berteig, please go to www.worldmindware.com/kanban. Some spots are still available for our classes in Toronto, June 12-16.

For Agilists who have read this far and still don’t get it, start here:

14 Things Every Agilist Should Know About Kanban

The story below may be familiar to some:

Our IT group started with Scrum. Scores of people got trained. Most of our Project Managers became "Certified" Scrum Masters. Most of our Business Analysts became "Certified" Product Owners. We purged some people who we knew would never make the transition. We reorganized everyone else into cross-functional teams – mostly developers and testers. But now they are Scrum Developers. We tried to send them for "Certified" Scrum Developer training but hardly any of them signed up. So they are Just Scrum Developers. But we still call them developers and testers. Because that's still how they mostly function—silos within "cross functional teams", many tales of two cities rather than just one.

After the Scrum teams had been up and running for a while and we were able to establish some metrics to show the business how Agile we were (since they didn’t believe us based on business results), we had a really great dashboard showing us how many Scrum teams we had, how many Kanban teams, how many DevOps, how many people had been trained. We even knew the average story point velocity of each team.

The business didn’t get it. They were worried that Agile wasn’t going to solve their problem of making commitments to customers and looking bad because we still weren’t able to deliver “on time”.

As IT leadership, we were really in the hot seat. We started to talk about why the transformation wasn't going as it should. We knew better than to bring the Agile coaches into the boardroom. They were part of the problem and needed to be kept at arm's length from those of us who were making important decisions. Besides, their Zen talk about "why?" was really getting old fast. Some thought it was because we didn't have the right technology. Others were convinced it was because we didn't have the right people. After all, we didn't go out and hire experienced (above-average) Scrum Masters and Product Owners, instead we just retrained our own people. Clearly that wasn't working.

We started with improving the Scrum Masters. We went out and hired a few with impressive resumes. We developed some Scrum Master KPIs (HR jumped all over this one). Then one day we had a collective flash of brilliance—since the Scrum Masters are the servant leaders of teams, we will make them responsible for collecting and reporting team metrics, and this will tell us how well the teams are doing and how they need to improve. This surely would be the key to improving the performance of IT and getting us on a better footing with the business.

But we didn't get the response we were hoping for. The Scrum Masters soon complained that the metrics of the teams were impacted by dependencies that they had no influence over. When we pushed harder and shamed them publicly for failing to produce meaningful metrics, they tried harder, but they weren't able to do it. Some began to disengage. "This is not the job I signed up for" became their new mantra. This was puzzling. We were empowering them and they were recoiling. Maybe we didn't get the right Scrum Masters after all. Maybe we needed to go out and find people who could think and act effectively beyond the confines of their own teams. Or…



Agile Transformation Metrics

TL;DR

When asked to provide metrics to assess "how well" an Agile transformation is going, re-frame the discussion around measuring changes in the impact the IT organization is having (or not) on its Business environment, and define a small set of "fitness for purpose" metrics.

The Inevitable Question about Agile Transformation Metrics

Sooner or later, as an IT organization embarks on a transformation towards an Agile mindset and practices, someone will be asked to provide "hard evidence" that the effort is paying off, and the conclusion will be that metrics are the vehicle to satisfy that request. What are your Agile transformation metrics?

It's been my experience that this request usually leads to a discussion about measuring the specific Agile initiatives the IT organization has launched. In organizations where the emphasis has been around engineering disciplines, such metrics might be things like unit test code coverage, or integration build times. If the focus was around teams and process, then counting the number of teams "converted" to Scrum, or people sent to Scrum Master training, may appear to be the choice.

While those measurements might be useful indicators in some contexts, they have two problems. First, they are akin to measuring the performance of the car engine without looking outside the window; the engine might be performing well, but if the car doesn't have the wheels attached, we're going nowhere. More importantly, though, these figures are usually meaningless for Business stakeholders, who are the ones usually asking for them in the first place. Agile transformation metrics need to be meaningful to the Business.

Re-framing the Agile Transformation Metrics Question

Agile transformation efforts can be very costly exercises, therefore it is legitimate to ask about the results of such an endeavour. The important thing to realize, though, is that this question is really equivalent to another question: "is the IT organization improving its impact on its Business environment?" Another way to put it, borrowing from the terminology used by the Kanban community, is: "is the IT organization becoming more and more fit for purpose?" Answering this question, of course, requires a clear understanding of what it is that the Business expects from its interactions with IT.

The IT organization can be seen as providing various services to customers. Arguably, if IT has decided to “transform” in some way (perhaps by moving towards an Agile mindset), it’s doing so to improve its impact on those customers, so this is what needs to be measured to know “how the transformation” is going.

Some of those customers are different areas of the organization (like Finance or HR). But it doesn't stop there, because the Business' engagement with IT doesn't have value for its own sake. Ultimately, the Business is using IT as a way to optimize its operations so that it can provide external customers with more effective products and services. Moreover, IT is these days the direct channel through which those products and services are delivered to external customers (for example, through web sites and mobile applications). Therefore, the concept of "fitness for purpose" of the IT organization can be extended to consider the fitness for purpose of the Business with respect to the external customers it intends to serve.

Defining the “Agile” Transformation Metrics

Measuring “agile transformation success” really means measuring the success of the exchanges between IT and the Business, and between the Business and its external customers.  Measuring the internal processes and practices that IT puts in place as part of that “transformation” is beside the point. This implies starting with a careful definition of the boundaries that delineate the exchanges to be measured. There might be more to external customer fitness for purpose than IT operations, for example, and that needs to be considered when defining Agile transformation metrics, especially if we’re later going to be drawing causation conclusions.

Defining Agile transformation metrics will be, of course, a highly contextual exercise because every business organization is different. We can, however, draw again from the Kanban community for some general guidelines on what to look for. Their thought leaders talk about classifying metrics into three categories: fitness for purpose metrics, health indicators and improvement drivers. Using this framework, when talking about "agile transformation metrics" we are referring mainly to the first category, and perhaps a bit to the second. Based on those, improvement initiatives can be put in place, and perhaps driven with metrics belonging to the third category.

A fitness for purpose metric (also known as KPI) is an indicator of something a customer will care about. This is a key distinction: if the metric is not easily recognizable and meaningful for the customer, then it’s not a KPI. Another key characteristic is that a minimum threshold for its value can be defined: if the metric goes below the threshold, the Business is putting the relation with its customers at risk (perhaps they will walk away, initiate legal actions, etc.). In other words, the Business is no longer “fit for purpose”. We can then measure the effectiveness of the “agile transformation” by analyzing how KPI values over time compare to their respective thresholds. A typical KPI is delivery time, measured from the moment a customer request is accepted and committed to, until the moment it’s delivered to production.  This is usually a good Agile transformation metric.
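To make that concrete, here is a minimal sketch in Java of what tracking such a KPI could look like. The field names and the 30-day threshold are invented for illustration; a real threshold would come from what your customers actually tolerate:

import java.time.Duration;
import java.time.Instant;
import java.util.List;

// Hypothetical sketch: delivery time as a fitness-for-purpose KPI.
class DeliveryTimeKpi {

    // A request is "committed" when accepted, "delivered" when in production.
    record Request(Instant committed, Instant delivered) {}

    // Fraction of requests delivered within the customer's tolerance threshold.
    static double fractionWithinThreshold(List<Request> requests, Duration threshold) {
        long within = requests.stream()
                .filter(r -> Duration.between(r.committed(), r.delivered())
                                     .compareTo(threshold) <= 0)
                .count();
        return (double) within / requests.size();
    }

    public static void main(String[] args) {
        Instant now = Instant.now();
        List<Request> sample = List.of(
                new Request(now.minus(Duration.ofDays(40)), now.minus(Duration.ofDays(5))),  // 35 days
                new Request(now.minus(Duration.ofDays(20)), now.minus(Duration.ofDays(2)))); // 18 days
        System.out.printf("Delivered within 30 days: %.0f%%%n",
                100 * fractionWithinThreshold(sample, Duration.ofDays(30)));
    }
}

How that fraction trends over time, relative to the agreed threshold, says far more about the transformation than counting trained Scrum Masters does.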

Health indicators are metrics that are inwards facing. Customers don't really care about them (or even understand them), but they indicate how a given aspect of the system is operating. The key characteristic is that they are not directly actionable; they only provide information that needs to be analyzed and put in context. As the value of a health indicator changes, we can draw some conclusions about how the system works, or explain why something is happening (or not), but it doesn't necessarily lead to concrete action. Defect count is an example of this. Customers will certainly care about quality of the product, and we can make inferences about that quality by looking at how many defects we have, but the absolute number of defects will not necessarily make the product more or less fit for purpose. It may happen that customers consider the current quality to be "good enough", irrespective of the number of defects.

Finally, improvement driver metrics are metrics put in place to influence behaviour towards a particular change. Their key characteristic is that they are temporary: we set a target on them and once the target is achieved, the metric is no longer necessary. The reason for this is related to the unintended behaviours that a metric might encourage in people, which may lead to locally optimizing the metric at the expense of other aspects, leading to global sub-optimization of the system. An example is unit testing code coverage: if we have determined that a given service is not fit for purpose and the cause is related to poor unit test coverage, then establishing a target for minimum coverage may influence developers to work on adding tests to reverse the situation.



When the team says, “we need detailed requirements before we can estimate”…

The next time the team says “we can’t estimate without better requirements” what they actually mean is, “this is crazy, but hey…if you think you can accurately predict all the exact requirements and you can guarantee that nobody in this company will change their minds about those requirements between this moment and forever, then we’ll give you an estimate to hang your hat/noose on.”

Every group responsible for the creation and delivery of software (or any complex/creative product for that matter) will experience dissonance between the need to plan and the need to obey the laws of nature which prevent us from travelling through time and future-telling.  Business leaders have to finance the development of product; creative and technical leaders have to solve complex problems amidst dynamic, unpredictable, circumstances.  These conditions manifest as a dichotomy which is difficult to mediate (at best) and/or downright toxic (at worst).

On one hand, a common sentiment among project managers is: “The problem I have with the release planning stage is that without clear requirements, the developers don’t like to give estimates, even with story points.”

On the other hand, a common sentiment among developers is: “Stakeholders don’t understand what they’re asking for, if they knew the complexity of our technology they wouldn’t be asking those questions.”

If developers don't like to provide estimates, it is likely because others in the organization have used their estimates as though they were accurate predictions of the future. Thus, when said estimates turn out to be inaccurate, they are used as punitive metrics in conversations about "commitment" and "velocity" and "accountability".

Point of order: NOBODY CAN PREDICT THE FUTURE.



Measurements Towards Continuous Delivery

I was asked yesterday what measurements a team could start to take to track their progress towards continuous delivery. Here are some initial thoughts.

Lead time per work item to production

Lead time starts the moment we have enough information that we could start the work (i.e., it's "ready"). Most teams that measure lead time will stop the clock when an item reaches the team's definition of "done", which may or may not mean that the work is in production. In this case, we want to explicitly keep tracking the time until it really is in production.
Note that when we’re talking about continuous delivery, we make the distinction between deploy and release. Deploy is when we’ve pushed it to the production environment and release is when we turn it on. This measurement stops at the end of deploy.

Cycle time to “done”

If the lead time above is excessively long then we might want to track just cycle time. Cycle time starts when we begin working on the item and stops when we reach “done”.
When teams are first starting their journey to continuous delivery, lead times to production are often measured in months and it can be hard to get sufficient feedback with cycles that long. Measuring cycle time to “done” can be a good intermediate measurement while we work on reducing lead time to production.
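As a minimal sketch of how a team could record the two measurements described above (the record and field names here are assumptions for illustration, not a standard):

import java.time.Duration;
import java.time.Instant;

// Hypothetical sketch: one record per work item, carrying the timestamps
// needed for both lead time and cycle time.
class FlowTimes {

    record WorkItem(Instant ready, Instant started, Instant done, Instant deployed) {}

    // Lead time: from "ready" until deployed to production (deploy, not release).
    static Duration leadTimeToProduction(WorkItem item) {
        return Duration.between(item.ready(), item.deployed());
    }

    // Cycle time: from work started until the team's "done".
    static Duration cycleTimeToDone(WorkItem item) {
        return Duration.between(item.started(), item.done());
    }
}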

Escaped defects

If a bug is discovered after the team said the work was done then we want to track that. Prior to hitting “done”, it’s not really a bug – it’s just unfinished work.
Shipping buggy code is bad and this should be obvious. Continuously delivering buggy code is worse. Let’s get the code in good shape before we start pushing deploys out regularly.

Defect fix times

How old is the oldest reported bug? I’ve seen teams that had bug lists that went on for pages and where the oldest were measured in years. Really successful teams fix bugs as fast as they appear.

Total regression test time

Track the total time it takes to do a full regression test. This includes both manual and automated tests. Teams that have primarily manual tests will measure this in weeks or months. Teams that have primarily automated tests will measure this in minutes or hours.
This is important because we would like to do a full regression test prior to any production deploy. Not doing that regression test introduces risk to the deployment. We can’t turn on continuous delivery if the risk is too high.

Time the build can be broken

How long can your continuous integration build be broken before it’s fixed? We all make mistakes. Sometimes something gets checked in that breaks the build. The question is how important is it to the team to get that build fixed? Does the team drop everything else to get it fixed or do they let it stay broken for days at a time?

Continuous delivery isn’t possible with a broken build.

Number of branches in version control

By the time you're ready to turn on continuous delivery, you'll only have one branch. Measuring how many you have now and tracking that over time will give you some indication of where you stand.

If your code isn’t in version control at all then stop taking measurements and just fix that one right now. I’m aware of teams in 2015 that still aren’t using version control and you’ll never get to continuous delivery that way.

Production outages during deployment

If your production deployments require taking the system offline then measure how much time it’s offline. If you achieve zero-downtime deploys then stop measuring this one.  Some applications such as batch processes may never require zero-downtime deploys. Interactive applications like webapps absolutely do.

I don’t suggest starting with everything at once. Pick one or two measurements and start there.



The Rules of Scrum: I do not track my hours or my “actual” time on tasks

Scrum considers tracking the time individuals spend on individual tasks to be wasteful effort.  Scrum is only concerned about time when it comes to the time boxes of the Sprint and Sprint meetings.  Scrum also supports sustainable development, which implies working sustainable hours.  When it comes to the completion of tasks, the team is committed to delivering value.  The tasks in the Sprint Backlog represent the remaining tasks that the team has identified for delivering on its committed increment of potentially shippable value for the current Sprint.  The tasks themselves have no value and therefore should not be tracked.  Scrum is only concerned about burning down to value delivered.  Time spent on individual tasks is irrelevant.  What if I track my hours or my “actual” time on tasks?  This promotes a culture of “bums in seats” – that it is more important for people to be busy at any cost instead of getting valuable, high-quality results.  This also promotes bad habits such as forcing work into billable hours even though it is not.  Overall, time tracking in a Scrum environment does much harm and can undermine the entire framework.



What are Ways to Measure Productivity?

One of the most difficult aspects of doing Agile is to find reasonable, do-able, agreeable ways to measure productivity. Many organizations don't measure productivity or only use very indirect indicators.

What do you do to measure productivity and how is it working for you?  If you don’t measure productivity, why not?



Report average velocity and fail 50% of the time

The question of "expected velocity" and long-term planning has come up at more than one client. A recent client conversation got me thinking, however, about how to interpret velocity when estimating and plotting a roadmap based on a current backlog of features. Assume, for a moment, a backlog of story-pointed features, and 10 good iterations (consistent team, no odd occurrences that would affect velocity). Mathematically, average velocity (well, a mean really) is a 50/50 proposition for any subsequent iteration. Some organizations don't find this level of confidence acceptable. What velocity should be reported as expected for iteration/sprint planning and roadmap forecasting, and how should it be used?

Context

Interpreting velocity, before anything else, requires some context. An agile organization that sees estimates as hypothetical might find this article of less use. In fact, a good question is whether estimation is even a value-added activity. For this post, assume an organization that sees strong value in estimation and planning.

Culture

The biggest piece of context is to know the organizational culture. This matters in two respects, and both cultural factors are important because they impact how Velocity is understood within the organization.

What is Failure?

First is the meaning of failure in the organization. Is failure to deliver what was committed to by the planned date considered a failure of the team, or is it simply a fact to be understood and accounted for in future planning? Even in Agile organizations, the former is often true and a hard habit to break. If not delivering to expectations is considered failure and has negative consequences, then estimation is being treated not as estimation, but as prediction and contract. Velocity becomes a commitment, and should therefore be used conservatively.

Consistency or Speed?

The second item to know is whether consistency and predictability of delivery is of a higher strategic value than the actual rate of delivery. This is often unstated. Usually people want fast and consistent delivery. The truth is that you can get consistent software development, or fast software development, or a balance between the two. Lack of trust, or a history of quality problems, is usually a strong motivation to encourage consistency over speed. In this case, as well, Velocity is more of a boundary than an indicator.

Emotional Loading in Estimation (or why not Low-ball?)

If estimation is seen as binding, contractual, or limiting, then it becomes emotionally loaded. Trust, promise, and betrayal are words used in such organizational cultures. Distrust is usually a strong factor, especially between silos (business vs. technology, company vs. project management vs. customer, etc.). So when people are asked to give estimates, even using agile-friendly mechanisms such as story points, there is usually a process of cementing that estimate into a part of an accountability model, so estimates start to get conservative. People are then accused of low-balling, others are accused of irrational expectations… we've all seen this. The language clearly becomes one of contention and blame. Even the term low-balling is an outright pejorative for estimating too conservatively.

This doesn't happen only in agile environments, and project managers in traditional PMBOK frameworks have long factored risk into "contingency budgets". Interestingly, however, if a Project Manager were to factor risk into the task estimates, they'd be "low-balling capacity," yet if they were to factor it out and layer it on top of the project work, it's "contingency budgeting" (at least in a few experiences I've had). Either way, someone's adding a factor for uncertainty, based on the need to predict conservatively or liberally or somewhere in between.

That’s the point of the article: how can Agile projects use velocity to estimate as conservatively (or liberally) as is appropriate?

An average is a 50% chance to succeed (or fail)

Velocity is not a constant. It’s a set of instantaneous values on a curve, with instances being iterations. That means that it varies, and is therefore only meaningful statistically. So how do you reasonably use velocity statistically, and improve confidence? One way is to stop delivering against “average” velocity.

A lot of coaches use average velocity over the previous N iterations. This is not helpful, for all sorts of reasons, if estimation is a commitment. By definition, an average (well, actually a mean, but they're close) is a 50/50 proposition. If you report the average team velocity (assuming it's accurate), then about half the time the team will be under and about half the time the team will be over, statistically. So basically an average is a crap shoot when taken in any given instance. It can only be good in the long run. For this to work, the long haul has to include permission to fail and a lot of trust. Teams need to be able to miss dates sometimes and exceed them at other times, and it should all wash out in the end. In organizations such as I'm describing, that trust isn't there. Additionally, if the language of commitment is around meeting instantaneous iteration commitments (as opposed to delivering high-quality customer value as quickly as is sustainably possible) then you aren't playing the long game, you're playing a very short game.

Simulate Velocity, not work

In a PMI training course I took when I was at Sun Microsystems, we were nicely informed that two-point estimates of tasks are a perfect way to fail half the time, per the above logic. One-point estimates are just idiotic. Three-point estimates were better. We simulated with a Monte Carlo algorithm and found a curve and a distribution, and then determined a confidence level, yadda yadda. Well, we're trying to avoid wasting a lot of time estimating up-front, but one way to start representing velocity properly is to do the same kind of statistical modelling done in traditional product management, only simulating velocity, not work items.

In this approach, you take the last N iterations (say 10). Determine the maximum velocity (optimistic), the minimum velocity (pessimistic), and the mode (the velocity value that occurs most frequently). Then you run a Monte Carlo simulation to get a statistical pattern. Now you actually can determine an answer based on confidence. If you want to be right with 80% confidence, you pick a velocity where 80% of the simulated runs were successful. (Note – there is a paucity of Excel templates to do this math automatically, and often they are for sale. It would be nice to have a few functions with arbitrary distributions based on min-max-mode to help this along.)
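In that spirit, here is a rough sketch of such a simulation in Java. The triangular min/mode/max distribution and all of the numbers are my illustrative assumptions, not a prescribed formula:

import java.util.Arrays;
import java.util.Random;

// Hypothetical sketch: Monte Carlo simulation of velocity using a
// triangular (min/mode/max) distribution fitted to recent iterations.
class VelocitySimulation {

    // Inverse-CDF sampling from a triangular distribution.
    static double sampleTriangular(Random rng, double min, double mode, double max) {
        double u = rng.nextDouble();
        double f = (mode - min) / (max - min);
        return u < f
                ? min + Math.sqrt(u * (max - min) * (mode - min))
                : max - Math.sqrt((1 - u) * (max - min) * (max - mode));
    }

    public static void main(String[] args) {
        double min = 14, mode = 22, max = 30; // from the last ~10 iterations
        int iterationsLeft = 6, trials = 10_000;
        Random rng = new Random();
        double[] totals = new double[trials];
        for (int t = 0; t < trials; t++) {
            double sum = 0;
            for (int i = 0; i < iterationsLeft; i++) {
                sum += sampleTriangular(rng, min, mode, max);
            }
            totals[t] = sum;
        }
        Arrays.sort(totals);
        // 80% of the simulated futures delivered at least this many points.
        System.out.printf("80%% confidence: %.0f points over %d iterations%n",
                totals[(int) (trials * 0.2)], iterationsLeft);
    }
}

Planning against the 20th percentile of the simulated totals, rather than the mean, is what moves you from a coin flip to 80% confidence.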

It's not perfect, and it's a potentially huge amount of administrative overhead. Elsewhere I've referenced blogs that entirely oppose any estimation at all, but if you are going to estimate, then working statistically with simulation is the only way to make small sample sizes meaningful.

Commitment Velocity: Low-Ball as a policy.

Another approach, perhaps controversial but taught by some Scrum trainers, is to pick the lowest historical delivered velocity. This is a commitment-based approach, on the assumption that building trust around consistent delivery is critical to building sound relationships where product owners and teams can safely state their needs and get things done with a minimum of contractual behaviour. By taking the minimum, you force a low-ball capacity, which means you can have high confidence of success after a few iterations. You have, likely, after a while, some spare time on your hands. Teams can then choose to pull more work in (without adjusting their commitment velocity), work on "technical debt", improve their skills, etc. A team could raise its commitment velocity at certain inflection points in the project: say a new team member is added who provides a necessary skill not previously available, and after a few iterations the team is consistently hitting a higher number. But this is a careful process to ensure that they are committing, and if they don't make their new number, it goes back down to what they actually accomplished.

Indemnify teams’ learning

An arguably healthier option, if you have built enough trust, is to simply indemnify a team from failing to meet the estimate. Since you're doing mathematics on actuals to generate an expected future number, everyone can acknowledge that past behaviour is no guarantee of future behaviour, and simply use it for capacity planning. In this case, estimation is actually estimation, not commitment or contract. The team is expected to be ahead sometimes, and behind sometimes. The upside of this is that a lot of extra time isn't spent playing with fictional numbers. Teams are spending their efforts on delivering as quickly yet sustainably as they can, and the organization treats them as trusted professionals in this. The temptation to assume you can predict the future is seen as folly, and the estimates are used to guide overall direction, not to make outward customer commitments.

Don’t be mindless

There may be other approaches, I'm sure. The agile community is certainly not short of people who love this topic and can talk for hours on "proper" estimation. The point of this post is merely to point out some options, and ask you to look at your organizational culture, team culture, customer culture, and the meaning of terms like commitment, failure, success, consistency, and speed. As you understand the culture, balance consistency vs. speed, trust, and other factors to choose a method of estimation that meets your goals. Don't do estimation based on your own internal cultural assumptions, as you may have developed or been taught techniques that were useful when and where they were taught, but may no longer be so. Or maybe they weren't so useful then either. Regardless, because estimation cuts to the heart of the dialogue between producer and consumer and establishes parameters for that discussion, it's critical that you think your choice through.

[Christian also blogs at http://www.geekinasuit.com/]



ANN: Agile Software Engineering Practices training by Isráfíl Consulting

Isráfíl Consulting is finally prepared for the first series of its Agile Software Engineering Practices training courses. This series is offered in partnership with Berteig Consulting who are graciously hosting the registration process. Their team has also helped greatly in shaping the presentation style and structure of the course. The initial run will be in Ottawa, Toronto (Markham), and Kitchener/Waterloo.   

Topics covered will include Test Driven Development (TDD), testability, supportive infrastructure such as build and continuous integration, team metrics, incremental design and evolutionary architecture, dependency injection, and so much more. (This course won't present the planning side of XP, but covers many other aspects common to XP projects.) It makes a great complement for training in Agile Processes such as XP, Scrum, or OpenAgile. The overview slide presentation is available for free download from the Isráfíl web site.

The courses are scheduled for:

The course is $1250 CAD per student, and participants receive a transferable discount of $100 CAD for other training with Berteig Consulting as a part of our ongoing partnership. I initially prototyped this course in Ottawa this December, and am very excited to see this through in several locales. Class size is limited to 15, so we can keep the instruction style more involved. The above schedules are linked to Berteig Consulting's course system and have registration links at the bottom of the description. Locations are TBD, but will be updated at the above links as soon as they're finalized.

A further series is planned for several US cities in March, and we’ll be sure to announce them as well.



Yet another misunderstanding of TDD, testing, and code coverage

I was vaguely annoyed to see this blog article featured in JavaLobby's recent mailout. Not because Kevin Pang doesn't make some good points about the limits of code coverage, but because his title is needlessly controversial, and because JavaLobby is engaging in some agile-baiting by publishing it without editorial restraint.

In asking the question, "Is code coverage all that useful?", he asserts at the beginning of his article that Test Driven Development (TDD) proponents "often tend to push code coverage as a useful metric for gauging how well tested an application is." This statement is true, but the remainder of the blog post takes apart code coverage as a valid "one true metric," a claim that TDD proponents don't make, except in Kevin's interpretation.

He further asserts that "100% code coverage has long been the ultimate goal of testing fanatics." This isn't true. High code coverage is a desired attribute of a well tested system, but the goal is to have a fully and sufficiently tested system. Code coverage is indicative, but not proof, of a well-tested system. How do I mean that? In my experience, the authors of any system tested thoroughly enough to reach more than 95% coverage have usually thought through how to test their system so as to fully express its happy paths, edge cases, and so on. However, the code coverage here is a symptom, not a cause, of a well-tested system. And the metric can be gamed. Actually, when imposed as a management quality criterion, it usually is gamed. Good metrics should confirm a result obtained by other means, or provide leading indicators. Few numeric measurements are subtle enough to really drive system development.

Having said that, I have used code-coverage in this way, but in context, as I’ll mention later in this post.

Kevin provides example code similar to the following:

String foo(boolean condition) {
    if (condition)
        return "true";
    else
        return "false";
}

… and talks about how if the unit tests are only testing the true path, then we are only getting 50% coverage. Good so far. But then he goes on to express that "code coverage only tells us what was executed by our unit tests, not what executed correctly." He is carefully telling us that a unit test executing a line doesn't guarantee that the line is working as intended. Um… that's obvious. And if the tests didn't pass correctly, then the line should not be considered covered. It seems there are some unclear assumptions on how testing needs to work, so let me get some assertions out of the way…

  1. Code coverage is only meaningful in the context of well-written tests. It doesn’t save you from crappy tests.
  2. Code coverage should only be measured on a line/branch if the covering tests are passing.
  3. Code coverage suggests insufficiency, but doesn’t guarantee sufficiency.
  4. Test-driven code will likely have the symptom of nearly perfect coverage.
  5. Test-driven code will be sufficiently tested, because the author wrote all the tests that form, in full, the requirements/spec of that code.
  6. Perfectly covered code will not necessarily be sufficiently tested.
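To make points 1 and 2 concrete, here is a small JUnit 5 illustration (my example, not Kevin's): both tests execute every line of foo, but only the second would ever fail if the code regressed:

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class FooCoverageTest {

    static String foo(boolean condition) {
        return condition ? "true" : "false";
    }

    // Executes both branches, asserts nothing: 100% coverage, zero protection.
    @Test
    void gamedCoverage() {
        foo(true);
        foo(false);
    }

    // Executes the same lines, but actually pins down the behaviour.
    @Test
    void realTest() {
        assertEquals("true", foo(true));
        assertEquals("false", foo(false));
    }
}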

What I'm driving at is that Kevin is arguing against something entirely different from what TDD proponents argue. He's arguing against a common misunderstanding of how TDD works. On point 1 he and I are in agreement. Many of his commentators mention #3 (and he states it in various ways himself). His description of what code coverage doesn't give you is absurd when you take #2 into account (we assume that a line of covered code is only covered if the covering test is passing). But most importantly – "TDD proponents" would, in my experience, find this whole line of explanation rather irrelevant, as it is an argument against code coverage as a single metric for code quality, and they would attempt to achieve code quality through thoroughness of testing by driving the development through tests. TDD is a design methodology, not a testing methodology. You just get tests as side-effect artifacts of the approach. Useful in their own right? Sure, but that's only sort of the point. It isn't just writing the tests first.

In other words – TDD implies high or perfect coverage. But the inverse is not necessarily true.

How do you achieve thoroughness by driving your development with tests? You imagine the functionality you need next (your next increment of useful change), and you write or modify your tests to "require" the new piece of functionality. Then you write it, and you go green. Code coverage doesn't enter into it, because you should have near perfect coverage at all times by implication, because every new piece of functionality you develop is preceded by tests which test its main paths and error states, upper and lower bounds, etc. Code coverage in this model is a great way to notice that you screwed up and missed something, but nothing else.
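As a tiny red-green sketch of that loop (the "unknown" requirement is invented purely for illustration): suppose the next increment is that foo must accept a nullable Boolean. The test is written first and fails, then the implementation is extended until it passes:

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class FooNextIncrementTest {

    // Red: this test is written first, "requiring" the new behaviour.
    @Test
    void nullMeansUnknown() {
        assertEquals("unknown", foo(null));
    }

    // Green: the implementation is then extended until the test passes.
    static String foo(Boolean condition) {
        if (condition == null) {
            return "unknown";
        }
        return condition ? "true" : "false";
    }
}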

So, is code-coverage useful? Heck yeah! I’ve used coverage to discover lots of waste in my system. I’ve removed whole sets of APIs that were “just in case I need them” APIs, because they become rote (lots of accessors/mutators that are not called in normal operations). Is code coverage the only way I would find them? No. If I’m dealing with a system that wasn’t driven with tests, or was poorly tested in general, I may use coverage as a quick health meter, but probably not. Going from zero to 90% on legacy code is likely to be less valuable than just re-writing whole subsystems using TDD… and often more expensive.

Regardless, while Kevin is formally asking "is code coverage useful?", he's really asking (rhetorically) whether it is reasonable to worship code coverage as the primary metric. But if no one's asserting the positive, why is he questioning it? He may be dealing with a lot of people with misunderstandings of how TDD works. He could be dealing with metrics bigots. He could be dealing with management-imposed-metrics initiatives, which often fail. It might be a pet peeve, or he's annoyed with TDD and this is a great way to do some agile-baiting of his own. I don't know him, so I can't say. His comments seem reasonable, so I assume no ill intent. But the answer to his rhetorical question is "yes, but in context." Not surprising, since most rhetorically asked questions are answerable in this fashion. Hopefully it's a bit clearer where it's useful (and where/how it's not).

(This article is a cross-post from “Geek in a Suit”)



Finally – a solid metric for code quality.

Robert C. Martin (Uncle Bob to you and me) suggested, in his "quintessence" keynote at the Agile2008 conference, that he had the perfect metric for code quality. Cyclomatic complexity and the other usual metrics were, he noted, interesting mostly to those who invented them. His answer was brilliant, and is easily measured during code reviews:

WTFs per minute

I love it. All you need is a counter and a stop-watch. Start the code review, start the watch, and click any time you see code that makes you say or think "What the F???". This dovetails nicely with his original recommendation for a new statement in the agile manifesto: "Craftsmanship over Crap".



Measuring Process Improvements – Cycle Time?

One of the challenges with agile methods is to get a clear perspective on how to measure process improvements. I recently had a brief discussion with a C-level executive at a small organization about this. His concern was that cycle time was meaningless because it depended so much upon the size of the work package. So how do we use cycle time as a meaningful measurement? What else can we use to measure process improvement?

Let’s look at the difference in measuring cycle time in an agile vs. non-agile environment. Then we’ll get to other measurements.

Cycle Time, Waterfall and Agile

First, let’s define cycle time. From iSixSigma we have:

Cycle time is the total time from the beginning to the end of your process, as defined by you and your customer. Cycle time includes process time, during which a unit is acted upon to bring it closer to an output, and delay time, during which a unit of work is spent waiting to take the next action.

This definition is important because it gives us a clue about the potential difference between a waterfall vs. agile method of delivering value. Let’s imagine the typical process used in a waterfall environment. The following are the high-level steps:

  1. Customer / User / Stakeholder sees a need, validates it and submits a request to have that need fulfilled. This is when we start the clock on cycle time.
  2. The fulfillment organization (IT, Product Development, R&D) puts the request in a queue, backlog or requirements management system.
  3. Along with other requests, the fulfillment organization schedules the work on the request, usually by creating a project to fulfill it and other related requests. The project is estimated at a high level, the current status of in-flight projects is noted, and the new project is prioritized relative to other projects.
  4. At some point, based on the schedule and the reality of the work on other projects, the project containing our customer’s request is started. Here, “started” means that detailed requirements are gathered.
  5. After sufficient requirements are gathered, a detailed technical analysis is done including architecture, high-level design, risk analysis, etc.
  6. Development begins. (Note: many people mistakenly start measuring cycle time here.)
  7. Developers and testers work to validate the results of development and fix any problems discovered.
  8. Final acceptance testing is done.
  9. The results of the project are deployed to users, sold to the client, or in some other way passed back to the original requestor. This is when we stop the clock on cycle time.

Our true cycle time, then, runs from the formal submission of the customer request to the fulfillment of that request. There are a few important things to note here. First, there is a queue of work based on requests made but not yet scheduled. There is another queue for work scheduled but not yet started. We know that if we can reduce the size of these queues, we can improve cycle time in a general sense. Second, we know that most organizations of any significant size will have different queues based on the urgency of the request. For example, a high severity bug discovered in the production system of a company's largest client will be treated differently than a wish list item for a small not-yet-client. These two requests won't even go in the same queue: the high priority problem will be quickly escalated to a support or development team that can work on it immediately. Third, it is tempting for the development group to measure their local cycle time. This is a Really Bad Idea since it leads to sub-optimizing behaviors. For example, it is easy for the development team to improve their cycle time by sacrificing quality… but this just causes the QA cycle time to increase, and the overall (true) cycle time probably degrades by more than the development group's local improvement.

Now let’s look at the steps that occur in an ideal agile environment:

  1. As before, the Customer / User / Stakeholder sees a need, validates it and submits a request to have that need fulfilled.
  2. That request is immediately placed in a ready state for the next iteration (cycle, sprint) of a delivery team. Elapsed time: maximum one month.
  3. The team completes the request, including all work needed to actually deliver/deploy, and the work is delivered to the stakeholder at the end of the iteration. Elapsed time: maximum two months.

So the ideal agile approach has a maximum cycle time of two months from the time a request is made to the time it is delivered… how many teams are doing this? Not many.

The ideal is extremely difficult to accomplish. Getting to that state requires that the development organization catch up to the business side so that there are zero pending requests at the start of each iteration. It also requires that business-side users and stakeholders are able to articulate their requests so that they are small and appropriately detailed for the team doing the work.

A realistic agile implementation is actually a lot messier. Depending on the type of request, the cycle time for a piece of work can vary widely. Some low-priority items may take years even in an agile environment. A low-priority request is made and approved but never quite makes it into a project… and then once in a project never quite makes it to the top of the team’s product backlog. This is interesting to look at sometimes, but it points out another important aspect of measuring cycle time: mostly we care about average cycle time (or some other statistically interesting aggregate measure).

The predominant factor in most organizations’ cycle time is the number and size of the queues they use as work is processed. In most organizations there are several queues and most of them contain large numbers of requests or bits of work in process. Queues represent huge amounts of waste. It is easy to see that queue size and cycle time are closely related: the more items in a queue, the longer the cycle time.
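This relationship is not just intuition; it is stated formally by Little’s Law: average cycle time equals average work-in-progress (everything in the queues plus everything being worked on) divided by average throughput. Here is a minimal sketch in Python; the team and all of its numbers are hypothetical:

```python
# A minimal illustration of Little's Law with hypothetical numbers:
# average cycle time = average work-in-progress / average throughput.

def average_cycle_time(avg_wip: float, avg_throughput: float) -> float:
    """Average cycle time implied by Little's Law.

    avg_wip: average number of items in the system (all queues + in progress)
    avg_throughput: average number of items completed per week
    """
    return avg_wip / avg_throughput

# A hypothetical service with 30 items in its queues and in progress,
# completing 5 items per week, averages 6 weeks of cycle time.
print(average_cycle_time(30, 5))  # 6.0

# Halve the queues and cycle time halves, without anyone working faster.
print(average_cycle_time(15, 5))  # 3.0
```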

This leads to a simple conclusion: regardless of lifecycle approach, reducing the size of an organization’s queues is one of the easiest ways to reduce cycle time. What are some common queues? There are often queues of projects, queues of enhancement requests, queues of defects to be fixed, queues of features, queues of tasks, queues of email (large inboxes), queues of approval requests, queues of production database changes. The number of queues increases the more an organization is oriented around functional groups, and the number of queues decreases the more an organization arranges work to be handled by cross-functional teams.

Cycle Time and Work Package Size

This is where queueing theory and agile methods intersect really well. Cycle time is related to the load on your system; in particular, to the utilization of the people and teams that process the work. In most organizations, teams are created to handle work. The more work given to a team simultaneously, the higher its utilization level. Many organizations like high utilization levels because it guarantees that people are doing valuable work all the time they are paid to work. This is a completely false benefit and in fact is extremely destructive to overall productivity. From queueing theory we know that the cycle time for a piece of work grows non-linearly with the utilization level, exploding as utilization approaches 100%. We see this whenever we overload a server… but for some reason we fail to see it when we overload a person, a team or an organization, even though it still happens.
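For readers who want to see the shape of that curve, the simplest textbook model, an M/M/1 queue, gives an average time in system of 1 / (service rate − arrival rate). The sketch below uses hypothetical numbers; real development work is messier than this model, but the blow-up near full utilization is the point:

```python
# Average cycle time versus utilization in an M/M/1 queue (hypothetical
# numbers). Time in system W = 1 / (service_rate - arrival_rate).

def mm1_cycle_time(service_rate: float, utilization: float) -> float:
    """Average time in system for an M/M/1 queue.

    service_rate: items the team can complete per week when fully busy
    utilization: fraction of capacity in use (must be < 1)
    """
    arrival_rate = utilization * service_rate
    return 1.0 / (service_rate - arrival_rate)

# A team that can complete 5 items per week:
for u in (0.50, 0.80, 0.90, 0.95, 0.99):
    print(f"{u:.0%} utilized: {mm1_cycle_time(5, u):.1f} weeks per item")
# 50% -> 0.4 weeks; 95% -> 4.0 weeks; 99% -> 20.0 weeks
```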

Cycle time is also related to the variability in the size of the work packages. Low variability dampens the effect of load on cycle time; high variability amplifies it. In other words, if you have a highway that only allows motorbikes, you can have a very high load without getting bad traffic jams. On the other hand, if you have a highway that allows anything on it, you get traffic jams even at low levels of load. This is why HOV or commuter lanes and the left lane of multi-lane highways typically don’t allow transport trucks and buses. This result from queueing theory is not intuitively obvious, so it is even harder for us to apply to software development.
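One standard way to express this effect is Kingman’s approximation for a single-server queue, which multiplies a load factor by a variability factor: the same load can produce wildly different waits depending on how uneven arrivals and work-package sizes are. The numbers below are made up purely for illustration:

```python
# Kingman's approximation for average wait in queue:
#   wait ~= (u / (1 - u)) * ((ca**2 + cs**2) / 2) * mean_service_time
# where ca and cs are the coefficients of variation of arrival times and
# of work-package sizes. Hypothetical numbers throughout.

def kingman_wait(utilization: float, ca: float, cs: float,
                 mean_service_time: float) -> float:
    load_factor = utilization / (1.0 - utilization)
    variability_factor = (ca ** 2 + cs ** 2) / 2.0
    return load_factor * variability_factor * mean_service_time

# Same 85% load, 1-week average work package, different size variability:
print(kingman_wait(0.85, ca=1.0, cs=0.2, mean_service_time=1.0))  # ~2.9 weeks
print(kingman_wait(0.85, ca=1.0, cs=2.0, mean_service_time=1.0))  # ~14.2 weeks
```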

But if we wish to create a high-performance development organization, we must apply these two ideas: load and work-package-size variability. The simplest way to do this is to have a single team work on a single project at a time and use iterations to ensure that the work being done is always exactly the same size – the size of the iteration.

Improving Cycle Time

It is possible to have very short iterations and still have a long cycle time. Many organizations make a few common mistakes with agile that cause this. If the work done inside each iteration is restricted to pure development work and everything else is done outside the iterations, then cycle time likely stays long. A common example is keeping the QA folks separate from the development team so that they do their work only after the development team releases its work.

There is really only one way to avoid this: have a comprehensive definition of “done” that is met by the team every single iteration. This ensures that all work from idea to release for a given customer request is done inside a single iteration. A side effect of this is that all the pieces of work need to be small. It also gets rid of all the queues except one: the queue of ideas approved for delivery. With a single queue to manage, it becomes easy to measure cycle time, and therefore easy to improve it.

Improving cycle time can now be done in a few ways:

  1. Put a cap on the number of items in the work queue. Since cycle time is directly related to the size of the queues in a system, this is a sure way of putting a maximum on cycle time (see the sketch after this list).
  2. Go through all existing requests and throw as many away as possible. This can be tough to do, but if you are able to do a cost-benefit analysis, you will typically find that older items in the queue are no longer worthwhile.
  3. Provide more stringent gating functions for allowing requests onto the queue. The fewer items added, the faster the size of the queue is reduced.
  4. And of course, increase the performance of your team(s) so that they go through items on the queue more quickly.
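As mentioned in item 1, the reason a cap works follows directly from Little’s Law (sketched earlier): if the number of items in the system cannot exceed the cap, average cycle time cannot exceed the cap divided by throughput. A minimal sketch, with hypothetical caps and throughput:

```python
# Why capping the queue bounds cycle time (hypothetical numbers).
# By Little's Law, average cycle time = average WIP / throughput, and
# average WIP can never exceed the caps.

def max_average_cycle_time(queue_cap: int, in_progress_cap: int,
                           throughput_per_week: float) -> float:
    """Upper bound on average cycle time with capped queue and WIP."""
    return (queue_cap + in_progress_cap) / throughput_per_week

# Ready queue capped at 10 items, 5 items in progress, 5 finished per week:
print(max_average_cycle_time(10, 5, 5))  # 3.0 weeks at most, on average
```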

Productivity and Cycle Time

Once you have control of cycle time, it is possible to make reasonable measurements of productivity and two more metrics become extremely important (not that they weren’t important before, but they are easier to work with now). The first is Return on Investment (ROI) and the second is customer satisfaction.

ROI is, in its simplest form, a measure of how much benefit there is to doing something compared to the cost of doing it. It takes into account the importance of time and timing, the other options you may have, and, hopefully, the business reality of your work.

In software development, the primary cost is the cost of the staff doing the work, and the time factor is your cycle time (ah! that’s where we use it). If you have a consistent team working on iterations that are always the same size, and little or no work being done outside the iterations, it is very easy to calculate ROI in a useful way. Simply measure how much value a given iteration’s worth of work will generate and divide by the cost of the team for an iteration (and if the team is not yet doing work as it comes in, take into account the time value of money, since the work might not be done for several iterations). Now productivity is simply a measure of the Return for each Team-Iteration. Dollars per iteration. Simple. (A small sketch of this arithmetic follows the questions below.) If the team’s productivity goes down, you can ask some really simple questions:

  • Did the expected return of the work go down? If so, is there more valuable work the team should be doing? This becomes an opportunity for product improvement.
  • If not, what caused the team to get less done? Was the work harder than expected? Was there a skill gap? Was there an organizational obstacle that was revealed? Was someone sick? This becomes an opportunity for process and team improvement.
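To make the dollars-per-iteration arithmetic concrete, here is a minimal sketch. The team cost, expected value and discount rate are hypothetical placeholders, not recommendations:

```python
# Return per team-iteration, as described above (hypothetical figures).

def iteration_roi(expected_value: float, team_cost_per_iteration: float,
                  iterations_until_delivery: int = 0,
                  discount_rate_per_iteration: float = 0.0) -> float:
    """Dollars of return per dollar of team cost for one iteration.

    If the work will not ship for several iterations, discount its value
    to account for the time value of money.
    """
    discounted_value = expected_value / (
        (1.0 + discount_rate_per_iteration) ** iterations_until_delivery)
    return discounted_value / team_cost_per_iteration

# An iteration costing C$60,000 that delivers C$90,000 of value:
print(iteration_roi(90_000, 60_000))            # 1.5
# The same work, shipping 6 iterations from now at 1% per iteration:
print(iteration_roi(90_000, 60_000, 6, 0.01))   # ~1.41
```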

Customer satisfaction can be measured in many ways. If you have already started using agile practices, there is a good chance that your customers are already more satisfied than they were before. This will show up informally through word-of-mouth. However, it is good to have a more systematic way of measuring customer satisfaction. One of the simplest and most commonly used methods is the Net Promoter Score. From Wikipedia:

Companies obtain their Net Promoter Score by asking customers a single question (usually, “How likely is it that you would recommend us to a friend or colleague?”). Based on their responses, customers can be categorized into one of three groups: Promoters, Passives, and Detractors. In the net promoter framework, Promoters are viewed as valuable assets that drive profitable growth because of their repeat/increased purchases, longevity and referrals, while Detractors are seen as liabilities that destroy profitable growth because of their complaints, reduced purchases/defection and negative word-of-mouth. Companies calculate their Net Promoter Score by subtracting their % Detractors from their % Promoters.
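The calculation itself is trivial to automate. A minimal sketch using the standard 0-10 response scale (9-10 counts as a Promoter, 7-8 as a Passive, 0-6 as a Detractor); the survey responses are invented:

```python
# Net Promoter Score from 0-10 survey responses, per the definition quoted
# above. Standard thresholds: 9-10 Promoter, 7-8 Passive, 0-6 Detractor.

def net_promoter_score(responses):
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return 100.0 * (promoters - detractors) / len(responses)

# A hypothetical survey of ten customers:
print(net_promoter_score([10, 9, 9, 8, 8, 7, 6, 5, 9, 10]))  # 30.0
```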

The Net Promoter Score is closely linked to quality including the hard-to-measure parts of quality like responsiveness, ease of use, and fitness for purpose.

Cycle time also affects customer satisfaction. The faster you can respond to requests from customers, users or other stakeholders, the more likely they are to be satisfied. This happens for two reasons: a fast response means that solutions are more likely to still be useful and correct when actually delivered, and it also creates more opportunities for feedback.

In fact, if we look at these three measures, cycle time, ROI and customer satisfaction, we see that they form a mutually supporting and cross-checking system of ensuring productivity and effectiveness. Measuring anything else muddies the waters and can cause sub-optimal behaviors. The real challenge for most teams is realizing that all their local measures of performance and effectiveness may actually be causing harm (unintentionally) because they draw the team’s attention away from the three organizationally important measures.

Cycle time is the measure that is most closely related to process improvements, but ROI and customer satisfaction should also be used to ensure that process improvements don’t accidentally harm the organization.



Agile Benefits: Early Return on Investment

We wouldn’t do agile if we didn’t think it was better in some way. More and more, I am seeing the adoption of agile methods being driven by business management (rather than engineering). There is a clear reason for this: agile methods offer the possibility of early return on investment compared to other ways of working. This benefit is only one of five essential benefits of agile, but it is one of the most practical and easiest to measure. Therefore it is important to clearly understand how agile methods can deliver this benefit… and how they can fail to do so!

Continue reading Agile Benefits: Early Return on Investment



Agile Management Tools

Ah, tools…

For agile training and consulting, I always recommend starting with the standard physical tools: note cards on a wall, a whiteboard and a digital camera. However, I am still interested in what tools people have used when they find that note cards are not sufficient. Here is a list of the tools I am aware of, as well as a link to a survey…

Continue reading Agile Management Tools



Agile Estimation and Pairing

I just read a recent article on PMHut called “Schedule Questions: Pair Programming and the PNR Curve”. There is much in this article that is important for agile project management… and much that should be avoided at all costs!

Estimating Projects

The bulk of the article is an explanation of the PNR curve (Putnam Norden Rayleigh curve). The explanation is sufficient and can be checked against other web articles about software estimation. The basic idea of the PNR curve is to find an ideal number of people with which to staff a project based on an estimate of overall project size (e.g. 72 man-months).

Take a moment to go read Fantasy Estimation (the comments there provide good additional material).

It should be clear now that any initial sizing estimate based on units such as man-months is ridiculous. So what is our alternative? What about an agile approach to estimation and planning?

Agile Estimation

Well, at a minimum, in an agile approach to estimation, we need to resolve some of the problems identified in “Fantasy Estimation”. So we:

  1. Get the whole team that will be doing the work to do the estimation
  2. Weight estimates based on team maturity, knowledge of tools, knowledge of the business domain and the physical proximity of team members to each other
  3. Adjust our estimates of work remaining based on our results for work completed

The great thing about an agile approach is that it allows all these things to be done, and done well. Scrum has a specific set of “Drag Factors” that can be used to weight the team’s estimates. And comparing a team’s past estimates against “actuals” is possible on an iteration-by-iteration basis… because every iteration the team is doing the “same thing”: delivering an increment of valuable results (working software, edited video, organizational change, whatever is contributing to the project goals).
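Point 3 in the list above, adjusting estimates of remaining work from results for completed work, is easy to make concrete. A minimal sketch, assuming story-point estimates; the backlog size and velocities are hypothetical:

```python
# Forecast remaining iterations from observed velocity (hypothetical numbers).

def iterations_remaining(points_remaining: float, completed_per_iteration):
    """Remaining iterations implied by the team's average observed velocity."""
    avg_velocity = sum(completed_per_iteration) / len(completed_per_iteration)
    return points_remaining / avg_velocity

# 120 points left; the team finished 18, 22 and 20 points in its first
# three iterations, so average velocity is 20 -> about 6 iterations left.
print(iterations_remaining(120, [18, 22, 20]))  # 6.0
```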

The other thing that agile methods allow is to make a distinction between estimation for planning and estimation for commitment.

So. It’s cool to learn about the PNR Staffing curve and the thinking behind it. It’s a little scary to see it being applied in an agile context when we have tools that are much better.

Pairing and Estimation

At the end of the PMHut article, the author, Mike, asks a question:

What about pair programming? When using pairs on a team we could either:

  1. Continue as is, use the model to determine optimal team size and then encourage pairing to increase efficiency.
  2. Treat a pair as one hyper-effective person, so count pairs not individuals and increase team sizes accordingly.

This second option seems counter intuitive to agile, we strive for small teams to reduce communication channels, so I’m not convinced by this idea. I would be very interested to hear people’s thoughts on agile staffing curves. So, lots more questions than answers in this post, please let me know if you have any research or thoughts on the matter.

I wrote in response:

Mike, I feel like the options you present in your question at the end actually miss the true point of pairing, which has to do with communication. I have never seen pairing done in such a way that two people always work together as a pair. Rather, pairing is promiscuous: people switch pairs frequently throughout a day, iteration or project. This switching has two communication effects: 1) the human interaction and gradual diffusion of information as people switch pairs, and 2) helping everyone understand all parts of the work as a result of frequently working on many different things. From an estimation standpoint, I expect that neither of your two options is quite right. Rather, the third option is to continue as is with existing estimation and then encourage pairing to increase communication. Increased communication shows up in a number of different ways, not just efficiency: risk mitigation, accelerated individual and team learning, etc.

I would like to expand on this just a little. To me, pair programming or pair work of any kind has always been able to provide a number of benefits to projects. Problem avoidance is one benefit (lower defect rates, better code quality through constant inspection, spotting each other). Better communication is another benefit. Increasing the Truck Factor is yet another benefit.

None of these benefits has any direct bearing on estimating a project at the start, particularly since most projects sacrifice quality for schedule. From an agile perspective, since a planning estimate is not a commitment, it actually doesn’t even really matter! Get a conservative estimate for your project using whatever method you like, then start working, establish a commitment velocity, and see what the team can really do.

