Tag Archives: agile

Agile Transformation Metrics

TL;DR

When asked to provide metrics to assess “how well” an Agile transformation is going, re-frame the discussion around measuring changes in the impact the IT organization is having (or not) on its Business environment, and define a small set of “fitness for purpose” metrics.

The Inevitable Question about Agile Transformation Metrics

Sooner or later, as an IT organization embarks on a transformation towards an Agile mindset and practices, someone will be asked to provide “hard evidence” that the effort is paying off, and the conclusion will be that metrics are the vehicle to satisfy that request. What are your Agile transformation metrics?

It’s been my experience that this request usually leads to a discussion about measuring the specific Agile initiatives the IT organization has launched. In organizations where the emphasis has been on engineering disciplines, such metrics might be things like unit test code coverage, or integration build times. If the focus was on teams and process, then counting the number of teams “converted” to Scrum, or the number of people sent to Scrum Master training, may appear to be the choice.

While those measurements might be useful indicators in some contexts, they have two problems. First, they are akin to measuring the performance of the car engine without looking outside the window; the engine might be performing well, but if the car doesn’t have the wheels attached, we’re going nowhere. More importantly, though, these figures are usually meaningless for Business stakeholders, who are the ones usually asking for them in the first place. Agile transformation metrics need to be meaningful to the Business.

Re-framing the Agile Transformation Metrics Question

Agile transformation efforts can be very costly exercises, so it is legitimate to ask about the results of such an endeavour. The important thing to realize, though, is that this question is really equivalent to another question: “is the IT organization improving its impact on its Business environment?” Another way to put it, borrowing from the terminology used by the Kanban community, is: “is the IT organization becoming more and more fit for purpose?” Answering this question, of course, requires a clear understanding of what it is that the Business expects from its interactions with IT.

The IT organization can be seen as providing various services to customers. Arguably, if IT has decided to “transform” in some way (perhaps by moving towards an Agile mindset), it’s doing so to improve its impact on those customers, so this is what needs to be measured to know “how the transformation” is going.

Some of those customers are different areas of the organization (like Finance, or HR.) But it doesn’t stop there, because the Business’ engagement with IT doesn’t have value for its own sake. Ultimately, the Business is using IT as a way to optimize its operations so that it can provide external customers with more effective products and services. Moreover, these days IT is the direct channel through which those products and services are delivered to external customers (for example, through web sites and mobile applications.) Therefore, the concept of “fitness for purpose” of the IT organization can be extended to consider the fitness for purpose of the Business with respect to the external customers it intends to serve.

Defining the “Agile” Transformation Metrics

Measuring “agile transformation success” really means measuring the success of the exchanges between IT and the Business, and between the Business and its external customers.  Measuring the internal processes and practices that IT puts in place as part of that “transformation” is beside the point. This implies starting with a careful definition of the boundaries that delineate the exchanges to be measured. There might be more to external customer fitness for purpose than IT operations, for example, and that needs to be considered when defining Agile transformation metrics, especially if we’re later going to be drawing causation conclusions.

Defining Agile transformation metrics will be, of course, a highly contextual exercise, because every business organization is different. We can, however, draw again from the Kanban community for some general guidelines on what to look for. Their thought leaders talk about classifying metrics into three categories: fitness for purpose metrics, health indicators and improvement drivers. Using this framework, when talking about “agile transformation metrics” we are referring mainly to the first category, and perhaps a bit to the second. Based on those, improvement initiatives can be put in place, and perhaps driven with metrics belonging to the third category.

A fitness for purpose metric (also known as a KPI) is an indicator of something a customer will care about. This is a key distinction: if the metric is not easily recognizable and meaningful for the customer, then it’s not a KPI. Another key characteristic is that a minimum threshold for its value can be defined: if the metric goes below the threshold, the Business is putting the relationship with its customers at risk (perhaps they will walk away, initiate legal actions, etc.). In other words, the Business is no longer “fit for purpose”. We can then measure the effectiveness of the “agile transformation” by analyzing how KPI values over time compare to their respective thresholds. A typical KPI is delivery time, measured from the moment a customer request is accepted and committed to, until the moment it’s delivered to production. This is usually a good Agile transformation metric.
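
For illustration, a delivery-time KPI check might look like the following sketch. The 30-day threshold and 85% target are made-up numbers; any real values would come from a conversation with the customer:

```python
from datetime import datetime

# Hypothetical values: the Business considers delivery times over 30 days,
# for more than 15% of requests, a fitness-for-purpose risk.
THRESHOLD_DAYS = 30
TARGET_RATIO = 0.85

def delivery_time_days(committed_at: datetime, delivered_at: datetime) -> float:
    """Delivery time: from commitment to a customer request until production."""
    return (delivered_at - committed_at).total_seconds() / 86400

def is_fit_for_purpose(delivery_times_days: list[float]) -> bool:
    """Fit for purpose if enough requests beat the agreed threshold."""
    if not delivery_times_days:
        return False
    within = sum(1 for t in delivery_times_days if t <= THRESHOLD_DAYS)
    return within / len(delivery_times_days) >= TARGET_RATIO
```

Tracking this ratio over time shows whether the transformation is moving the Business toward or away from its customers’ expectations.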

Health indicators are metrics that are inwards facing. Customers don’t really care about them (or even understand them), but they indicate how a given aspect of the system is operating. The key characteristic is that they are not directly actionable; they only provide information that needs to be analyzed and put in context. As the value of a health indicator changes, we can draw some conclusions about how the system works, or explain why something is happening (or not), but it doesn’t necessarily lead to concrete action. Defect count is an example of this. Customers will certainly care about the quality of the product, and we can make inferences about that quality by looking at how many defects we have, but the absolute number of defects will not necessarily make the product more or less fit for purpose. It may happen that customers consider the current quality to be “good enough”, irrespective of the number of defects.

Finally, improvement driver metrics are metrics put in place to influence behaviour towards a particular change. Their key characteristic is that they are temporary: we set a target on them, and once the target is achieved, the metric is no longer necessary. The reason for this is related to the unintended behaviours that a metric might encourage in people: they may locally optimize the metric at the expense of other aspects, leading to global sub-optimization of the system. An example is unit test code coverage: if we have determined that a given service is not fit for purpose and the cause is related to poor unit test coverage, then establishing a target for minimum coverage may influence developers to work on adding tests to reverse the situation.
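
A minimal sketch of such a temporary gate, assuming an agreed 80% minimum (coverage tools like coverage.py can enforce a threshold natively; this stand-alone version just makes the idea concrete):

```python
def coverage_gate(covered_lines: int, total_lines: int, minimum: float = 0.80) -> bool:
    """Temporary improvement driver: fail the check while unit test
    coverage is below the agreed minimum. Once the service is fit for
    purpose again, retire the gate so the number doesn't get gamed."""
    return (covered_lines / total_lines) >= minimum
```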

Link: About Development Managers

My colleague and friend Mike Caspar has posted another insightful article on his blog: Do not create unnecessary fear and animosity with Development Managers.  From the article:

As teams grow, they build confidence and seem to become self-contained, and almost insular to some. This is normal and is not a sign of dysfunction. This simply indicates that the team is starting to have self identity. They are starting to act as a team. This is a good sign…. As the teams become more effective, the Development Manager feels more loss.

Design Debt & UX Debt is Technical Debt

Hey! Let’s all work together, please.

Technical Debt is a term which captures sloppy code, unmaintainable architecture, clumsy user experience, cluttered visual layout, bloated feature-sets, etc.  My stance is that the term, Technical Debt, includes all the problems which occur when people defer professional discipline — regarding any/every technical practice such as product management, visual and UX design, or code.

I assert that the change we need to catalyze in the business community is larger than any one discipline, and I am worried by the increase in blog articles in recent years about concepts like “Design Debt”, “UX Debt” and “Experience Debt” — these articles unfortunately are not helping and have served only to divide the community. They are divisive, not because we shouldn’t be discussing the discrete facets in which Technical Debt can manifest, but because authors often take a decidedly combative approach in their writing. Take these phrases for example:

  • “Product Design Debt Versus Technical Debt” written by Andrew Chen
  • “User Experience Debt: Technical debt is only half the battle” written by Clinton Christian
  • “Design debt is more dangerous because…” written by James Engwall.

I agree with Andrew Chen that Product Design Debt is a problem — I just don’t like that he chose to impose a dichotomy where there is none.  Why must he argue one “versus” another?  Clinton Christian has implied that we’re in a “battle”.  James Engwall has compared the “danger” of Design Debt relative to Technical Debt.  These words are damaging, I argue, because they divert attention to symptoms and away from root causes.

The root cause of Technical Debt is that people forget this simple principle of the Agile Manifesto: “Continuous attention to technical excellence and good design enhances agility.”

The root solution to Technical Debt — all of its forms — is to help business leaders realize there is a difference between “incremental” development and “iterative” development so they may understand the ROI of refactoring. No technical expert should ever have to justify the business case for feature-pruning, refreshing a user interface, refactoring code, or prioritizing defects. Every business leader should trust that their technical staff are disciplined and excellent.

Yes, please blog about UX Debt and Product Development Debt, etc.  But please do so in a way that encourages cohesion and unity within the Product Development community.

When the team says, “we need detailed requirements before we can estimate”…

The next time the team says “we can’t estimate without better requirements” what they actually mean is, “this is crazy, but hey…if you think you can accurately predict all the exact requirements and you can guarantee that nobody in this company will change their minds about those requirements between this moment and forever, then we’ll give you an estimate to hang your hat/noose on.”

Every group responsible for the creation and delivery of software (or any complex/creative product, for that matter) will experience dissonance between the need to plan and the need to obey the laws of nature, which prevent us from travelling through time and telling the future. Business leaders have to finance the development of product; creative and technical leaders have to solve complex problems amidst dynamic, unpredictable circumstances. These conditions manifest as a dichotomy which is difficult to mediate (at best) and/or downright toxic (at worst).

On one hand, a common sentiment among project managers is: “The problem I have with the release planning stage is that without clear requirements, the developers don’t like to give estimates, even with story points.”

On the other hand, a common sentiment among developers is: “Stakeholders don’t understand what they’re asking for, if they knew the complexity of our technology they wouldn’t be asking those questions.”

If developers don’t like to provide estimates, it is likely because others in the organization have used their estimates as though they were accurate predictions of the future. Thus, when said estimates turn out to be inaccurate, they are used as punitive metrics in conversations about “commitment” and “velocity” and “accountability”.

Point of order: NOBODY CAN PREDICT THE FUTURE.

Another Agile Article on Slashdot – Andy Hunt has Failed, not Agile

For reference, here is the link to the article on Slashdot called Is Agile Development a Failing Concept?

This article will generate lots of great discussion, but most of it will be ignorant.  My biggest problem with this is that one of the authors of the Agile Manifesto, Andy Hunt, has asserted that Agile just isn’t working out.  My opinion: Andy has failed to have the necessary patience for a decades-long cultural change.  This is a lot like a leader at Toyota saying that lean has failed because 50 years after they started doing it, not everyone is doing it properly yet.  One organization that I know of has been working on changing to Agile for over 10 years and they still aren’t “done”.  That’s actually okay.

All Team Quality Standards Should Be Part of the Definition of “Done”

The other day a technology leader was asking questions as if he didn’t agree that things like pair programming and code review should be part of the Definition of “Done” because they are activities that don’t show up in a tangible way in the end product. But if these things are part of your quality standards, they should be included in the definition of “Done” because they inform the “right way” of getting things done. In other words, the Definition of “Done” is not merely a description of the “Done” state but also the way(s) of getting to “Done” – the “how” in terms of quality standards. In fact, if you look at almost any team’s definition of “Done”, a lot of it is QA activity, carried out either as a practice or as an automated operation. Every agile engineering practice is essentially a quality standard, and as these practices become part of a team’s way of working, they should be included as part of the definition of “Done”.

The leader’s question was: “if we’re done and we didn’t do pair programming, and pair programming is part of our definition of ‘Done’, then does that mean we’re not done?” That is a backwards question, because if you are saying you’re done and you haven’t done pair programming, then by definition pair programming isn’t part of your definition of done. But there are teams out there who would never imagine themselves to be done without pair programming, because pair programming is a standard that they see as essential to delivering a quality product.

Everything that a Scrum Development Team does to ensure quality should be part of their definition of “Done”. The definition of “Done” isn’t just a description of the final “Done” state of an increment of product. If it were, we should be asking why anything is part of the definition of “Done” at all: the team could just say that they are done whenever they say they are done, and never actually identify better ways of getting to done or establish better standards. We could just say (and we did, and still do), “there’s the software, it’s done,” the software itself being its own definition of “Done”. This is the whole problem that this artifact solves.

On the contrary, the definition of “Done” is what it means for something to be done properly. In other words, it is the artifact that captures the “better ways of developing software” that the team has uncovered and established as practice, because their practices reflect their belief that “Continuous attention to technical excellence and good design enhances agility” (Manifesto for Agile Software Development). The definition of “Done” is essentially about integrity—what is done every Sprint in order to be Agile and get things done better. When we say that testing is part of our definition of “Done”, that is our way of saying that, as a team, we have a shared understanding that it is better to test something before we say that it is done than to say that it’s done without testing it, because without testing it we are not confident that it is done to our standards of quality. Otherwise, we would be content in just writing a bunch of code, seeing that it “works” on a workstation or in the development environment, and pushing it into production as a “Done” feature with a high chance that there are a bunch of bugs or that it may even break the build.

This is similar to saying a building is “Done” without an inspection (activity/practice) confirming that it meets certain safety standards, or a surgeon saying that he or she has done a good enough job of performing a surgical operation without monitoring the vital signs of the patient (partly automated, partly a human activity). Of course, neither claim can be taken seriously. The same logic holds true when we add other activities (automated or otherwise) that reflect more stringent quality standards around our products. The definition of “Done”, therefore, is partly made up of a set of activities that make up the standard quality practices of a team.

Professions have standards. For example, it is a standard practice for a surgeon to wash his or her hands between performing surgical operations. At one time it wasn’t. Much like TDD or pair programming, it was discovered as a better way to get the job done. In this day and age, we would not say that a surgeon had done a good job if he or she failed to carry out this standard practice. It would be considered preposterous for someone to say that they don’t care whether or not surgeons wash their hands between operations as long as the results are good. If a dying patient said to a surgeon, “don’t waste time washing your hands, just cut me open and get right to it,” of course this would be dismissed as an untenable request. Regardless of whether or not the results of the surgery were satisfactory to the patient, we would consider it preposterous for a surgeon not to wash his or her hands, because we know that this is statistically extremely risky, even criminal behaviour. We just know better. Hand washing was discovered, recognized as a better way of working, formalized as a standard, and is now understood by even the least knowledgeable members of society as an obvious part of the definition of “Done” of surgery. Similarly, there are some teams that would not push anything to production without TDD and automated tests. This is a quality standard and is therefore part of their definition of “Done”, because they understand that manual testing alone is extremely risky. And then there are some teams with standards that would make it unthinkable for them to push a feature that has not been developed with pair programming. For these teams, pair programming is a quality standard practice and therefore part of their definition of “Done”.

“As Scrum Teams mature,” reads the Scrum Guide, “it is expected that their definitions of “Done” will expand to include more stringent criteria for higher quality.” What else is pair programming, or any other agile engineering practice, if it is not part of a team’s criteria for higher quality? Is pair programming not a more stringent criterion than, say, traditional code review? Therefore, any standard, be it a practice or an automated operation, that exists as part of the criteria for higher quality should be included as part of the definition of “Done”. If it’s not part of what it means for an increment of product to be “Done”—that is, “done right”—then why are you doing it?

Measurements Towards Continuous Delivery

I was asked yesterday what measurements a team could start to take to track their progress towards continuous delivery. Here are some initial thoughts.

Lead time per work item to production

Lead time starts the moment we have enough information that we could start the work (i.e., it’s “ready”). Most teams that measure lead time will stop the clock when an item reaches the team’s definition of “done”, which may or may not mean that the work is in production. In this case, we want to explicitly keep tracking the time until it really is in production.
Note that when we’re talking about continuous delivery, we make the distinction between deploy and release. Deploy is when we’ve pushed it to the production environment and release is when we turn it on. This measurement stops at the end of deploy.
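
As a rough sketch, with hypothetical timestamp names (real ones would come from your tracker and deployment pipeline), the measurement could look like this:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class WorkItem:
    ready_at: datetime     # enough information to start: the item is "ready"
    started_at: datetime   # work actually began
    done_at: datetime      # reached the team's definition of "done"
    deployed_at: datetime  # pushed to production (deploy, not release)

def lead_time_days(item: WorkItem) -> float:
    """Lead time per work item: the clock starts at "ready" and
    stops at deploy, even if "done" was declared much earlier."""
    return (item.deployed_at - item.ready_at).total_seconds() / 86400
```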

Cycle time to “done”

If the lead time above is excessively long then we might want to track just cycle time. Cycle time starts when we begin working on the item and stops when we reach “done”.
When teams are first starting their journey to continuous delivery, lead times to production are often measured in months and it can be hard to get sufficient feedback with cycles that long. Measuring cycle time to “done” can be a good intermediate measurement while we work on reducing lead time to production.
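
Reusing the hypothetical WorkItem sketch above, the only change is where the clock starts and stops:

```python
def cycle_time_days(item: WorkItem) -> float:
    """Cycle time: from the moment work begins until the team's
    definition of "done" - a useful intermediate measurement while
    lead time to production is still measured in months."""
    return (item.done_at - item.started_at).total_seconds() / 86400
```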

Escaped defects

If a bug is discovered after the team said the work was done then we want to track that. Prior to hitting “done”, it’s not really a bug – it’s just unfinished work.
Shipping buggy code is bad and this should be obvious. Continuously delivering buggy code is worse. Let’s get the code in good shape before we start pushing deploys out regularly.
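
Continuing the same sketch, the classification rule is simply a comparison against the moment the team declared the item done:

```python
def is_escaped_defect(item: WorkItem, bug_reported_at: datetime) -> bool:
    # Only bugs found after "done" count as escaped defects;
    # anything found before that is just unfinished work.
    return bug_reported_at > item.done_at
```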

Defect fix times

How old is the oldest reported bug? I’ve seen teams that had bug lists that went on for pages and where the oldest were measured in years. Really successful teams fix bugs as fast as they appear.
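
A sketch of that measurement, assuming we have the report timestamps of all currently open bugs:

```python
from datetime import datetime

def oldest_bug_age_days(open_bug_reported_at: list[datetime], now: datetime) -> float:
    """Age of the oldest still-open bug; really successful teams
    keep this close to zero by fixing bugs as fast as they appear."""
    if not open_bug_reported_at:
        return 0.0
    return (now - min(open_bug_reported_at)).total_seconds() / 86400
```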

Total regression test time

Track the total time it takes to do a full regression test. This includes both manual and automated tests. Teams that have primarily manual tests will measure this in weeks or months. Teams that have primarily automated tests will measure this in minutes or hours.
This is important because we would like to do a full regression test prior to any production deploy. Not doing that regression test introduces risk to the deployment. We can’t turn on continuous delivery if the risk is too high.

Time the build can be broken

How long can your continuous integration build be broken before it’s fixed? We all make mistakes. Sometimes something gets checked in that breaks the build. The question is how important is it to the team to get that build fixed? Does the team drop everything else to get it fixed or do they let it stay broken for days at a time?

Continuous delivery isn’t possible with a broken build.
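
One way to make this visible, assuming the CI server can report when a build broke and when it went green again:

```python
from datetime import datetime

def broken_build_hours(broke_at: datetime, fixed_at: datetime) -> float:
    # Time the build stayed red. Teams serious about continuous
    # delivery drive this toward minutes; days is a warning sign.
    return (fixed_at - broke_at).total_seconds() / 3600
```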

Number of branches in version control

By the time you’re ready to turn on continuous delivery, you’ll only have one branch. Measuring how many you have now and tracking that number over time will give you some indication of where you stand.
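
If the code lives in git, a quick way to take that measurement (counting remote branches and skipping the symbolic HEAD entry) might be:

```python
import subprocess

def remote_branch_count() -> int:
    """Counts remote branches; the trend should be toward a single
    trunk as the team approaches continuous delivery."""
    out = subprocess.run(
        ["git", "branch", "-r"],
        capture_output=True, text=True, check=True,
    ).stdout
    return sum(1 for line in out.splitlines()
               if line.strip() and "->" not in line)
```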

If your code isn’t in version control at all then stop taking measurements and just fix that one right now. I’m aware of teams in 2015 that still aren’t using version control and you’ll never get to continuous delivery that way.

Production outages during deployment

If your production deployments require taking the system offline then measure how much time it’s offline. If you achieve zero-downtime deploys then stop measuring this one.  Some applications such as batch processes may never require zero-downtime deploys. Interactive applications like webapps absolutely do.

I don’t suggest starting with everything at once. Pick one or two measurements and start there.

An Evolution of the Scrum of Scrums

This is the story of how the Scrum of Scrums has evolved for a large program I’m helping out with at one of our clients.

Continue reading: An Evolution of the Scrum of Scrums

New Video: Kanban in One Minute – Do It Right

Another short video for your viewing pleasure!  Kanban in One Minute – Do It Right by Michael Badali.

https://www.youtube.com/watch?v=iDYJWKsdhcs

Updated: The Best Agile Practices to Implement Now

This article which lists three high-ROI practices has been updated with new images and a few other minor fixes:

The Best Agile Practices to Implement Now

Updated: Full-Day Product Owner Simulation

The Product Owner Simulation that I shared last summer has some minor updates based on a stronger emphasis on product vision. In particular, two 5-minute exercises, before and after the Product Box exercise, help to frame the concept of product vision and make it stronger.

Cool: Scrum Your Wedding

Scrum has really come far!!! Check out “Scrum Your Wedding”. I love the ScrumMaster and Product Owner tips. Here’s a good one:

SCRUM MASTER TIP – The stand-up is overkill in most wedding planning scenarios, but we do think it’s useful to ask the questions at least a few times per sprint, perhaps over email. It’s your job to make sure the questions are asked and answered.

They have taken the core ideas of Scrum and made some intelligent modifications to make it suitable for a fixed deadline event planning scenario.  Honestly, I think that the ideas presented here could be a great approach to doing Scrum on other similar fixed deadline projects.  Thanks to Hannah Kane and Julia Smith for creating a unique and useful resource!

Link: Short Article About Pair Programming

Pair Programming Economics by Olaf Lewitz describes three activities in programming: typing, problem-solving and reading code. How does pair programming help? By making the balance between those three activities better.

Definition of DevOps and the Definition of “Done” – Quick Links

I was poking around for a good definition of DevOps and found a thoughtful article written by Ernest Mueller called What is DevOps? Highly recommended reading, as it includes lots of insight about the relationship between Agile and DevOps. FWIW, I feel that the concept of the Definition of “Done” is Scrum’s own original take on the same class of ideas: breaking down silos in an organization to get stuff into the marketplace faster and faster. I even talked about operationalizing software development back in 2004 and 2005 as a counterpoint to the project management approach which puts everyone in silos and pushes work through phase gates.
