Tag Archives: quality

Question: Product Owner and Technical Debt

Question from Meredith:

As a product owner, what are the best ways to record technical debt and what are some approaches to prioritizing that work amid the continuous delivery of working software?


Hi Meredith! This is an interesting question. I’ll start by answering the second part of your question first. The two most common ways of handling technical, quality and legacy debt are:

  1. Fix as you go. The Scrum Team works on new PBIs every Sprint, but every time a PBI touches a technical, quality or legacy debt area, the team fixes “just enough” to make the PBI implementation have no debt.  This means that refactoring and the creation of automated tests (usually through TDD) are done on the parts of the product/system that have the problems.
  2. Allocate a percentage. In this scenario, the Scrum Team reduces its velocity (sometimes significantly) to allow time to deal with technical, quality and legacy issues. This reduction could be adjusted every Sprint, but is usually kept consistent for several Sprints in a row. (A small worked example follows this list.)
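
For the second approach, the arithmetic is worth seeing once. Here is a minimal sketch in Python; the velocity and percentage are made-up numbers for illustration, not a recommendation:

    # Sketch: "allocate a percentage" of Sprint capacity to debt paydown.
    # The figures below are illustrative assumptions only.
    velocity = 30            # average story points completed per Sprint
    debt_allocation = 0.20   # fraction of capacity reserved for debt work

    debt_capacity = velocity * debt_allocation    # 6 points: refactoring, tests
    feature_capacity = velocity - debt_capacity   # 24 points: new PBIs

    print(f"Debt work this Sprint: {debt_capacity:.0f} points")
    print(f"New feature work this Sprint: {feature_capacity:.0f} points")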

In both approaches, the business is paying for the debt accumulated, and the cost includes an “interest” fee. In other words, the sooner you fix technical, quality and legacy debt, the less it costs. This way of thinking about your product/system is essential for long-term sustainability. One organization I worked with spent three years cleaning up their system without being able to add a single new feature! Don’t let your system get to that point.

Now to the first part of your question…

As a Product Owner, you shouldn’t really be making decisions about this cleanup work. Your authority is limited to the Product Backlog, which should not include technical items. The only grey area is defects, which may be hard to classify as either fully business or fully technical. But technical design, duplication of code, technical defects, and legacy code are all under the full authority of the Scrum Development Team. Practically, this means that every Sprint the team has the authority to take on as few PBIs as they feel is responsible given the technical state of the product/system. We trust and respect the team to make wise decisions.

Therefore, your main job as a Product Owner is to provide the team with as much information as possible about the business consequences of the work they are doing.  With strong communication and collaboration about this aspect of their work, the technical members of your team can make good trade-off decisions, and balance the need for new features with the need to clean up previous compromises in quality.

A final note: in order for this to work well, it is critical that the team not be pushed to further sacrifice quality and that they are given the support to learn the techniques and skills to create debt-free code.  (You might consider sending someone to our CSD training to learn these techniques and skills.)

Using these techniques, I have been able to help teams get very close to defect-free software deliveries (defect rates of one or two in production per year!).

Let me know in the comments if you would like any further clarification.

Agile Transformation Metrics

When asked to provide metrics to assess “how well” an Agile transformation is going, re-frame the discussion around measuring changes in the impact the IT organization is having (or not) on its Business environment, and define a small set of “fitness for purpose” metrics.

The Inevitable Question about Agile Transformation Metrics

Sooner or later, as an IT organization embarks on a transformation towards an Agile mindset and practices, someone will be asked to provide “hard evidence” that the effort is paying off, and the conclusion will be that metrics are the vehicle to satisfy that request. What are your Agile transformation metrics?

It’s been my experience that this request usually leads to a discussion about measuring the specific Agile initiatives the IT organization has launched. In organizations where the emphasis has been on engineering disciplines, such metrics might be things like unit test code coverage or integration build times. If the focus was on teams and process, then counting the number of teams “converted” to Scrum, or people sent to Scrum Master training, may appear to be the natural choice.

While those measurements might be useful indicators in some contexts, they have two problems. First, they are akin to measuring the performance of the car engine without looking out the window; the engine might be performing well, but if the car doesn’t have its wheels attached, we’re going nowhere. More importantly, though, these figures are usually meaningless to Business stakeholders, who are the ones asking for them in the first place. Agile transformation metrics need to be meaningful to the Business.

Re-framing the Agile Transformation Metrics Question

Agile transformation efforts can be very costly exercises, so it is legitimate to ask about the results of such an endeavour. The important thing to realize, though, is that this question is really equivalent to another: “is the IT organization improving its impact on its Business environment?” Another way to put it, borrowing from the terminology used by the Kanban community, is: “is the IT organization becoming more and more fit for purpose?” Answering this question, of course, requires a clear understanding of what it is that the Business expects from its interactions with IT.

The IT organization can be seen as providing various services to customers. Arguably, if IT has decided to “transform” in some way (perhaps by moving towards an Agile mindset), it is doing so to improve its impact on those customers, so this is what needs to be measured to know how the “transformation” is going.

Some of those customers are other areas of the organization (like Finance or HR). But it doesn’t stop there, because the Business’ engagement with IT doesn’t have value for its own sake. Ultimately, the Business is using IT to optimize its operations so that it can provide external customers with more effective products and services. Moreover, IT is these days the direct channel through which those products and services are delivered to external customers (for example, through web sites and mobile applications). Therefore, the concept of “fitness for purpose” of the IT organization can be extended to consider the fitness for purpose of the Business with respect to the external customers it intends to serve.

Defining the “Agile” Transformation Metrics

Measuring “agile transformation success” really means measuring the success of the exchanges between IT and the Business, and between the Business and its external customers. Measuring the internal processes and practices that IT puts in place as part of that “transformation” is beside the point. This implies starting with a careful definition of the boundaries that delineate the exchanges to be measured. There might be more to external customer fitness for purpose than IT operations, for example, and that needs to be considered when defining Agile transformation metrics, especially if we’re later going to draw conclusions about causation.

Defining Agile transformation metrics will be, of course, a highly contextual exercise, because every business organization is different. We can, however, draw again from the Kanban community for some general guidelines on what to look for. Their thought leaders talk about classifying metrics into three categories: fitness for purpose metrics, health indicators and improvement drivers. Using this framework, when talking about “agile transformation metrics” we are referring mainly to the first category, and perhaps a bit to the second. Based on those, improvement initiatives can be put in place, and perhaps driven with metrics belonging to the third category.

A fitness for purpose metric (also known as a KPI) is an indicator of something a customer will care about. This is a key distinction: if the metric is not easily recognizable and meaningful to the customer, then it’s not a KPI. Another key characteristic is that a threshold for its value can be defined: if the metric crosses that threshold, the Business is putting its relationship with customers at risk (perhaps they will walk away, initiate legal action, etc.). In other words, the Business is no longer “fit for purpose”. We can then measure the effectiveness of the “agile transformation” by analyzing how KPI values over time compare to their respective thresholds. A typical KPI is delivery time, measured from the moment a customer request is accepted and committed to, until the moment it is delivered to production. This is usually a good Agile transformation metric.
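
As a sketch of how a team might track such a KPI, here is a small Python example that compares delivery times against an agreed threshold; the data and the threshold are assumptions for illustration:

    # Sketch: a "fitness for purpose" check on delivery time.
    # Sample data and threshold are illustrative assumptions.
    delivery_times_days = [12, 18, 9, 25, 14, 31, 11, 16]  # days per request
    threshold_days = 30  # agreed with customers: beyond this, we're not fit

    def fitness_ratio(times, threshold):
        """Fraction of customer requests delivered within the threshold."""
        on_time = sum(1 for t in times if t <= threshold)
        return on_time / len(times)

    ratio = fitness_ratio(delivery_times_days, threshold_days)
    print(f"{ratio:.0%} of requests delivered within {threshold_days} days")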

Health indicators are metrics that are inward facing. Customers don’t really care about them (or even understand them), but they indicate how a given aspect of the system is operating. The key characteristic is that they are not directly actionable; they only provide information that needs to be analyzed and put in context. As the value of a health indicator changes, we can draw some conclusions about how the system works, or explain why something is happening (or not), but it doesn’t necessarily lead to concrete action. Defect count is an example of this. Customers certainly care about the quality of the product, and we can make inferences about that quality by looking at how many defects we have, but the absolute number of defects will not necessarily make the product more or less fit for purpose. It may be that customers consider the current quality “good enough”, irrespective of the number of defects.

Finally, improvement driver metrics are metrics put in place to influence behaviour towards a particular change. Their key characteristic is that they are temporary: we set a target on them and once the target is achieved, the metric is no longer necessary. The reason for this is related to the unintended behaviours that a metric might encourage in people, which may lead to locally optimizing the metric at the expense of other aspects, leading to global sub-optimization of the system. An example is unit testing code coverage: if we have determined that a given service is not fit for purpose and the cause is related to poor unit test coverage, then establishing a target for minimum coverage may influence developers to work on adding tests to reverse the situation.

Design Debt & UX Debt is Technical Debt

Hey! Let’s all work together, please.

Technical Debt is a term which captures sloppy code, unmaintainable architecture, clumsy user experience, cluttered visual layout, bloated feature-sets, etc. My stance is that the term Technical Debt includes all the problems which occur when people defer professional discipline — in any and every technical practice, be it product management, visual and UX design, or code.

I assert that the change we need to catalyze in the business community is larger than any one discipline, and I am worried by the increase in recent years of blog articles about concepts like “Design Debt”, “UX Debt” and “Experience Debt” — articles which unfortunately are not helping and have served only to divide the community. They are divisive, not because we shouldn’t be discussing the discrete facets in which Technical Debt can manifest, but because authors often take a decidedly combative approach in their writing. Take these phrases for example:

  • “Product Design Debt Versus Technical Debt” written by Andrew Chen
  • “User Experience Debt: Technical debt is only half the battle” written by Clinton Christian
  • “Design debt is more dangerous because…” written by James Engwall.

I agree with Andrew Chen that Product Design Debt is a problem — I just don’t like that he chose to impose a dichotomy where there is none.  Why must he argue one “versus” another?  Clinton Christian has implied that we’re in a “battle”.  James Engwall has compared the “danger” of Design Debt relative to Technical Debt.  These words are damaging, I argue, because they divert attention to symptoms and away from root causes.

The root cause of Technical Debt is that people forget this simple principle of the Agile Manifesto: “Continuous attention to technical excellence and good design enhances agility.”

The root solution to Technical Debt — in all of its forms — is to help business leaders realize there is a difference between “incremental” development and “iterative” development so they may understand the ROI of refactoring. No technical expert should ever have to justify the business case for feature-pruning, refreshing a user interface, refactoring code, or prioritizing defects. Every business leader should trust that their technical staff are disciplined and excellent.

Yes, please blog about UX Debt and Product Development Debt, etc.  But please do so in a way that encourages cohesion and unity within the Product Development community.

All Team Quality Standards Should Be Part of the Definition of “Done”

The other day a technology leader was asking questions as if he didn’t agree that things like pair programming and code review should be part of the Definition of “Done”, because they are activities that don’t show up in a tangible way in the end product. But if these things are part of your quality standards, they should be included in the Definition of “Done” because they inform the “right way” of getting things done. In other words, the Definition of “Done” is not merely a description of the “Done” state but also the way(s) of getting to “Done” – the “how” in terms of quality standards. In fact, if you look at almost any team’s Definition of “Done”, a lot of it is QA activity, carried out either as a practice or as an automated operation. Every agile engineering practice is essentially a quality standard and, as it becomes part of a team’s practice, should be included in the Definition of “Done”. The leader’s question was: “if we’re done and we didn’t do pair programming, and pair programming is part of our Definition of ‘Done’, then does that mean we’re not done?” This is a backwards question, because if you are saying you’re done and you haven’t done pair programming, then by definition pair programming isn’t part of your Definition of “Done”. But there are teams out there who would never imagine themselves to be done without pair programming, because pair programming is a standard they see as essential to delivering a quality product.

Everything that a Scrum Development Team does to ensure quality should be part of its Definition of “Done”. The Definition of “Done” isn’t just a description of the final “Done” state of an increment of product. If that were true, we should be asking why anything is part of the Definition of “Done” at all; the team could simply declare themselves done whenever they say they are done, and never actually identify better ways of getting to done or establish better standards. We could just say (and we did, and still do), “there’s the software, it’s done,” the software itself being its own definition of “Done”. This is the whole problem that this artifact solves.

On the contrary, the Definition of “Done” is what it means for something to be done properly. In other words, it is the artifact that captures the “better ways of developing software” that the team has uncovered and established as practice, because their practices reflect their belief that “Continuous attention to technical excellence and good design enhances agility” (Manifesto for Agile Software Development). The Definition of “Done” is essentially about integrity — what is done every Sprint in order to be Agile and get things done better. When we say that testing is part of our Definition of “Done”, that is our way of saying that, as a team, we share the understanding that it is better to test something before we say it is done than to declare it done without testing, because without testing we are not confident that it meets our standards of quality. Otherwise, we would be content to write a bunch of code, see that it “works” on a workstation or in the development environment, and push it into production as a “Done” feature with a high chance that it contains a bunch of bugs or may even break the build.

This is like saying a building is “Done” without an inspection (an activity/practice) confirming that it meets certain safety standards, or like a surgeon claiming to have done a good enough job of a surgical operation without monitoring the vital signs of the patient (partly automated, partly a human activity). Of course, neither claim is credible. The same logic holds when we add other activities (automated or otherwise) that reflect more stringent quality standards for our products. The Definition of “Done”, therefore, is partly made up of the set of activities that constitute a team’s standard quality practices.

Professions have standards. For example, it is standard practice for a surgeon to wash his or her hands between surgical operations. At one time it wasn’t. Much like TDD or pair programming, it was discovered as a better way to get the job done. In this day and age, we would not say that a surgeon had done a good job if he or she failed to carry out this standard practice. It would be considered preposterous for someone to say that they don’t care whether surgeons wash their hands between operations as long as the results are good. If a dying patient said to a surgeon, “don’t waste time washing your hands, just cut me open and get right to it,” this would of course be dismissed as an untenable request. Regardless of whether the results of the surgery were satisfactory to the patient, we would consider it preposterous for a surgeon not to wash his or her hands, because we know this is statistically extremely risky, even criminal, behaviour. We just know better. Hand washing was discovered, recognized as a better way of working, formalized as a standard, and is now understood by even the least knowledgeable members of society as an obvious part of the definition of “Done” of surgery. Similarly, there are some teams that would not push anything to production without TDD and automated tests. This is a quality standard and is therefore part of their Definition of “Done”, because they understand that manual testing alone is extremely risky. And there are some teams with standards that would make it unthinkable to push a feature that has not been developed with pair programming. For these teams, pair programming is a standard quality practice and therefore part of their Definition of “Done”.

“As Scrum Teams mature,” reads the Scrum Guide, “it is expected that their definitions of “Done” will expand to include more stringent criteria for higher quality.” What is pair programming, or any other agile engineering practice, if not part of a team’s criteria for higher quality? Is pair programming not a more stringent criterion than, say, traditional code review? Any standard, be it a practice or an automated operation, that exists as part of the criteria for higher quality should therefore be included in the Definition of “Done”. If it’s not part of what it means for an increment of product to be “Done” — that is, “done right” — then why are you doing it?
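
One lightweight way to make this concrete is to write the Definition of “Done” down as an explicit checklist and verify it for every increment. A minimal sketch in Python; the items are examples only, since each team owns its own standards:

    # Sketch: the Definition of "Done" as an explicit, checkable list.
    # The items below are examples; every team defines its own.
    definition_of_done = {
        "pair programmed or code reviewed": True,
        "unit tests written first (TDD)": True,
        "automated regression tests pass": True,
        "deployed to the staging environment": False,
    }

    def is_done(checklist):
        """An increment is 'Done' only when every quality standard is met."""
        return all(checklist.values())

    if not is_done(definition_of_done):
        unmet = [item for item, met in definition_of_done.items() if not met]
        print("Not done. Unmet standards:", ", ".join(unmet))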

Measurements Towards Continuous Delivery

I was asked yesterday what measurements a team could start to take to track their progress towards continuous delivery. Here are some initial thoughts.

Lead time per work item to production

Lead time starts the moment we have enough information that we could start the work (i.e. it’s “ready”). Most teams that measure lead time stop the clock when the item reaches the team’s definition of “done”, which may or may not mean that the work is in production. In this case, we want to explicitly keep tracking the time until the work really is in production.
Note that when we’re talking about continuous delivery, we make the distinction between deploy and release. Deploy is when we’ve pushed it to the production environment and release is when we turn it on. This measurement stops at the end of deploy.
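
That distinction is commonly implemented with a feature toggle: the code is deployed dark, and “release” is flipping the flag. A minimal sketch, with an in-memory flag store purely for illustration:

    # Sketch: deploy vs. release using a feature toggle.
    # Deployed code is in production; "released" means the flag is on.
    # A real system would read flags from configuration, not a dict.
    feature_flags = {"new_checkout_flow": False}  # deployed, not yet released

    def handle_checkout(order):
        if feature_flags["new_checkout_flow"]:
            return "new checkout"   # behaviour after release
        return "old checkout"       # what users see until the flag flips

    print(handle_checkout({"id": 1}))          # old checkout
    feature_flags["new_checkout_flow"] = True  # "release": no deploy needed
    print(handle_checkout({"id": 1}))          # new checkout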

Cycle time to “done”

If the lead time above is excessively long then we might want to track just cycle time. Cycle time starts when we begin working on the item and stops when we reach “done”.
When teams are first starting their journey to continuous delivery, lead times to production are often measured in months and it can be hard to get sufficient feedback with cycles that long. Measuring cycle time to “done” can be a good intermediate measurement while we work on reducing lead time to production.
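
A minimal sketch of both clocks, assuming each work item records when it became ready, was started, reached “done” and was deployed (the field names and dates are hypothetical):

    from datetime import datetime

    # Sketch: cycle time (started -> done) vs. lead time (ready -> deployed).
    # Field names and timestamps are illustrative assumptions.
    item = {
        "ready":    datetime(2015, 3, 2),   # enough information to start
        "started":  datetime(2015, 3, 9),   # work actually begins
        "done":     datetime(2015, 3, 20),  # team's definition of "done"
        "deployed": datetime(2015, 4, 28),  # really in production
    }

    cycle_time = item["done"] - item["started"]   # clock stops at "done"
    lead_time = item["deployed"] - item["ready"]  # clock runs to production

    print(f"Cycle time to done: {cycle_time.days} days")
    print(f"Lead time to production: {lead_time.days} days")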

Escaped defects

If a bug is discovered after the team said the work was done then we want to track that. Prior to hitting “done”, it’s not really a bug – it’s just unfinished work.
Shipping buggy code is bad and this should be obvious. Continuously delivering buggy code is worse. Let’s get the code in good shape before we start pushing deploys out regularly.

Defect fix times

How old is the oldest reported bug? I’ve seen teams whose bug lists went on for pages, and where the age of the oldest bugs was measured in years. Really successful teams fix bugs as fast as they appear.
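
Both defect measurements fall out of a simple bug-tracker export; a sketch, with hypothetical fields and dates:

    from datetime import date

    # Sketch: escaped-defect count and the age of the oldest open bug.
    # The bug list and its fields are illustrative assumptions.
    bugs = [
        {"opened": date(2014, 11, 3), "closed": None, "escaped": True},
        {"opened": date(2015, 2, 10), "closed": date(2015, 2, 12), "escaped": True},
        {"opened": date(2015, 3, 1), "closed": None, "escaped": False},
    ]

    escaped_count = sum(1 for bug in bugs if bug["escaped"])

    open_bugs = [bug for bug in bugs if bug["closed"] is None]
    oldest_days = max((date.today() - bug["opened"]).days for bug in open_bugs)

    print(f"Escaped defects: {escaped_count}")
    print(f"Oldest open bug: {oldest_days} days old")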

Total regression test time

Track the total time it takes to do a full regression test. This includes both manual and automated tests. Teams that have primarily manual tests will measure this in weeks or months. Teams that have primarily automated tests will measure this in minutes or hours.
This is important because we would like to do a full regression test prior to any production deploy. Not doing that regression test introduces risk to the deployment. We can’t turn on continuous delivery if the risk is too high.

Time the build can be broken

How long can your continuous integration build be broken before it’s fixed? We all make mistakes. Sometimes something gets checked in that breaks the build. The question is how important is it to the team to get that build fixed? Does the team drop everything else to get it fixed or do they let it stay broken for days at a time?

Continuous delivery isn’t possible with a broken build.

Number of branches in version control

By the time you’re ready to turn on continuous delivery, you’ll have only one branch. Measuring how many you have now, and tracking that number over time, will give you some indication of where you stand.

If your code isn’t in version control at all then stop taking measurements and just fix that one right now. I’m aware of teams in 2015 that still aren’t using version control and you’ll never get to continuous delivery that way.
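
Counting branches is easy to script against the repository itself; a sketch using a standard git command, run from inside a working copy:

    import subprocess

    # Sketch: count local and remote-tracking branches in a git repository.
    # Run from inside a working copy; requires git on the PATH.
    result = subprocess.run(
        ["git", "for-each-ref", "--format=%(refname:short)",
         "refs/heads", "refs/remotes"],
        capture_output=True, text=True, check=True,
    )
    branches = result.stdout.splitlines()
    print(f"Branches (local + remote-tracking): {len(branches)}")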

Production outages during deployment

If your production deployments require taking the system offline then measure how much time it’s offline. If you achieve zero-downtime deploys then stop measuring this one.  Some applications such as batch processes may never require zero-downtime deploys. Interactive applications like webapps absolutely do.

I don’t suggest starting with everything at once. Pick one or two measurements and start there.

Announcing: The Real Agility Program

The Real Agility Program is an Enterprise Agile change program to help organizations develop high-performance teams, deliver amazing products, dramatically improve time to market and quality, and create work environments that are awesome for employees.

This article is a written summary of the Executive Briefing presentation available upon request from the Real Agility Program web site.  If you obtain the executive briefing, you can follow along with the article below and use it to present Real Agility to your enterprise stakeholders.

The Problem

At Berteig Consulting we have been working for 10 years to learn how to help organizations transform people, process and culture.  The problem is simple to state: there is a huge amount of opportunity waste and process waste in most normal enterprise-scale organizations.  If you have more than a couple hundred people in your organization, this almost certainly affects you.

We like to call this problem “the Bureaucratic Beast”.  The Bureaucratic Beast is a self-serving monster that seems to grow and grow and grow.  As it grows, this Beast makes it progressively more difficult for business leaders to innovate, respond to changes in the market, satisfy existing customers, and retain great employees.

Real Agility, a system to tame the Bureaucratic Beast, comes from our experience working with numerous enterprise Agile adoptions. This experience, in turn, rests on the shoulders of giants like John Kotter (“Leading Change”), Edgar Schein (“The Corporate Culture Survival Guide”), Jim Collins (“Good to Great” and “Built to Last”), Mary Poppendieck (“Lean Software Development”), Jon Katzenbach (“The Wisdom of Teams”) and Frederick Brooks (“The Mythical Man-Month”). Real Agility is designed to tame all the behaviours of the Bureaucratic Beast: inefficiency, disengaged staff, poor quality and slow time-to-market.

Studies have shown that Agile methods work in IT. In 2012, the Standish Group observed that 42% of Agile projects succeed vs. just 14% of projects done with traditional “Bureaucratic Beast” methods. And Agile and associated techniques aren’t just for IT: there is growing use of these same techniques in non-technology environments such as marketing, operations, sales, education, healthcare, and even heavy industry like mining.

Real Agility Basics: Agile + Lean

Real Agility is a combination of Agile and Lean; both systems used harmoniously throughout an enterprise.  Real Agility affects delivery processes by taking long-term goals and dividing them into short cycles of work that deliver valuable results rapidly while providing fast feedback on scope, quality and most importantly value.  Real Agility affects management processes by finding and eliminating wasteful activities with a system view.  And Real Agility affects human resources (people!) by creating “Delivery Teams” which have clear goals, are composed of multi-skilled people who self-organize, and are stable in membership over long periods of time.

There are many radical differences between Real Agility and the traditional management that led to the Bureaucratic Beast in the first place. Real Agility prioritizes work by value instead of critical path, and favours self-organization over command-and-control management, a team focus over a project focus, evolving requirements over frozen requirements, skills-based interaction over role-based interaction, and continuous learning over crisis management, among many other differences.

Real Agility is built on a rich Agile and Lean ecosystem of values, principles and tools.  Examples include the Agile Manifesto, the “Stop the Line” practice, various retrospective techniques, methods and frameworks such as Scrum and OpenAgile, and various thinking tools compatible with the Agile – Lean ecosystem such as those developed by Edward de Bono (“Lateral Thinking”) and Genrich Altshuller (“TRIZ”).

Real Agility acknowledges that there are various approaches to Agile adoption at the enterprise level: Ad Hoc (not usually successful – Nortel tried this), Grassroots (e.g. Yahoo!), Pragmatic (SAFe and DAD fall into this category), Transformative (the best balance of speed of change and risk reduction – this is where the Real Agility Program falls), and Big-Bang (only used in situations of true desperation).

Why Choose Transformative?

One way to think about these five approaches to Agile adoption is to compare the magnitude of actual business results. This is certainly the all-important bottom line. But most businesses also consider risk (or certainty of results). Ad-hoc approaches to Agile adoption have poor business results and a very high level of risk. Big-bang approaches (changing a whole enterprise to Agile literally overnight) often have truly stunning business results, but are also extremely high risk. Grassroots approaches, where leaders give staff a great deal of choice about how and when to adopt Agile, are a bit better in that the risk is lower, but the business results often take quite a while to manifest. Pragmatic approaches tend to be very low risk because they often accommodate the Bureaucratic Beast, but that also limits their business results to merely “good”, not great. Transformative approaches, which systematically address organizational culture, are just a bit riskier than Pragmatic approaches, but the business results are generally outstanding.

More specifically, Pragmatic approaches such as SAFe (Scaled Agile Framework) are popular because they are designed to fit in with existing middle management structures (where the Bureaucratic Beast is most often found). As a result, change is slow and incremental, and typically has to be driven top-down from leadership. Initial results are good, but modest. And the long term? These techniques haven’t been around long enough to know, but in theory it will take a long time to get to full organizational Agility. The bottom line is that Pragmatic approaches are low risk, but the results are modest.

Transformative approaches such as the Real Agility Program (there are others too) are less popular because there is significantly more disruption: the Bureaucratic Beast has to be completely tamed to serve a new master: business leadership!  Transformative approaches require top-to-bottom organizational and structural change.  They include a change in power relationships to allow for grassroots-driven change that is empowered by servant leaders.  Transformative approaches are moderate in some ways: they are systematic and they don’t require all change to be done overnight. Nevertheless, often great business results are obtained relatively quickly.  There is a moderate risk that the change won’t deliver the great results, but that moderate risk is usually worth taking.

Regardless of adoption strategy (Transformative or otherwise) there are a few critical success factors.  Truthfulness is the foundation because without it, it is impossible to see the whole picture including organizational culture.  And love is the strongest driver of change because cultural and behavioural change requires emotional commitment on the part of everyone.

Culture change is often challenging.  There are unexpected problems.  Two steps forward are often followed by one step back.  Some roadblocks to culture change will be surprisingly persistent.  Leaders need patience and persistence… and a systematic change program.

The Real Agility Program

The Real Agility Program has four tracks or lines of action (links take you to the Real Agility Program web site):

  1. Recommendations: consultants assess an organization and create a playbook that customizes the other tracks of the Real Agility Program as well as dealing with any important outliers.
  2. Execution: coaches help to launch project, product and operational Delivery Teams and Delivery Groups that learn the techniques of grassroots-driven continuous improvement.
  3. Accompaniment: trainer/coaches help you develop key staff into in-house Real Agility Coaches who learn to manage Delivery Groups for sustainable long-term efforts such as a product or line of business.
  4. Leadership: coaches help your executive team to drive strategic change for long-term results with an approach that helps executives lead by example for enterprise culture change.

Structurally, an enterprise using Real Agility is organized into Delivery Groups. A Delivery Group is composed of one or more Delivery Teams (up to 150 people) who work together to produce business results. Key roles include a Business leader, a People leader and a Technology leader, all of whom become Real Agility Coaches and take the place of traditional functional management. Coordination across multiple Delivery Teams within a Delivery Group is done using an organized list of “Value Drivers” maintained by the Business leader and a supporting Business Leadership Group. Cross-team support is handled by a People and Technology Support Group co-led by the People and Technology leaders. Depending on need, there may also be a number of communities of practice to help Delivery Team members spread learning.

At an organizational or enterprise level, the Leadership Team includes top executives from business, finance, technology, HR, operations and any other critical parts of the organization.  This Leadership Team communicates the importance of the changes that the Delivery Groups are going through.  They lead by example using techniques from Real Agility to execute organizational changes.  And, of course, they manage the accountability of the various Delivery Groups throughout the enterprise.

The results of using the Real Agility Program are usually exceptional.  Typical results include:

  • 20x improvement in quality
  • 10x improvement in speed to market
  • 5x improvement in process efficiency
  • and 60% improvement in employee retention.

Of course, these results depend on baseline measures, and on key risk factors being properly managed by the Leadership Team.

Your Organization

Not every organization needs (or is ready for) the Real Agility Program.  Your organization is likely a good candidate if three or more of the following problems are true for your organization:

  • high operating costs
  • late project deliveries
  • poor quality in products or services
  • low stakeholder satisfaction
  • managers overworked
  • organizational mis-alignment
  • slow time-to-market
  • low staff morale
  • excessive overtime
  • you need to tame the Bureaucratic Beast

Consider that list carefully, and if you feel you have enough of the above problems, please contact us at tame.the.beast@berteigconsulting.com or read more about the Real Agility Program for Enterprise Agility on the website.

Book List for Enterprise Agile Transformations

Leaders of Agile Transformations for the Enterprise need to have good sources of information, concepts and techniques that will guide and assist them.  This short list of twelve books (yes, books) is what I consider critical reading for any executive, leader or enterprise change agent.  Of course, there are many books that might also belong on this list, so if you have suggestions, please make them in the comments.

I want to be clear about the focus of this list: it is for leaders who need to make a deep and complete change of culture throughout their entire organization. It is not a list for people who want to run Agile pilot projects in the hope that lots of people will eventually use Agile. It is about urgency and need, and about a recognition that Agile is better than not-Agile. If you aren’t in that situation, this is not the book list for you.


These books all help you to understand and work with the deeper aspects of corporate behaviour which are rooted in culture.  Becoming aware of culture and learning to work with it is probably the most difficult part of any deep transformation in an organization.

The Corporate Culture Survival Guide – Edgar Schein

Beyond the Culture of Contest – Michael Karlberg

The Heart of Change – John Kotter


This set of books gets a bit more specific: it is the “how” of managing and leading in high-change environments.  These books all touch on culture in various ways, and build on the ideas in the books about culture.  For leaders of an organization, there are dozens of critical, specific, management concepts that often challenge deeply held beliefs and behaviours about the role of management.

Good to Great – Jim Collins

The Leader’s Guide to Radical Management – Steve Denning

The Mythical Man-Month – Frederick Brooks

Agile at Scale

These books discuss how to get large numbers of people working together effectively. They also start to get a bit technical and definitely assume that you are working in technology or IT. However, they are focused on management, organization and process rather than the technical details of software development. I highly recommend these books even if you have a non-technical background. There will be parts where it may be a bit more difficult to follow along with some examples, but the core concepts will be easily translated into almost any type of work that requires problem-solving and creativity.

Scaling Lean and Agile Development – Bas Vodde, Craig Larman

Scaling Software Agility – Dean Leffingwell

Lean Software Development – Mary and Tom Poppendieck


These books (including some free online books) are related to some of the key supporting ideas that are part of any good enterprise Agile transformation.

Toyota Talent: Developing Your People the Toyota Way – Jeffrey Liker, David Meier

Agile Retrospectives – Esther Derby, Diana Larsen

Continuous Delivery – Jez Humble, David Farley

The Scrum Guide – Ken Schwaber, Jeff Sutherland, et al.

The OpenAgile Primer – Mishkin Berteig, et al.

Priming Kanban – Jesper Boeg

Technical Push-Back – When is it Okay? When is it Bad?

Whenever I run a Certified Scrum Product Owner training session, one concept stands out as critical for participants: the relationship of the Product Owner to the technical demands of the work being done by the Scrum team.

The Product Owner is responsible for prioritizing the Product Backlog. This responsibility is, of course, matched by the authority to do so. When the Product Owner collaborates with the team in the process of prioritization, there may be times when the team “pushes back”. There are two possible reasons for push-back. One is good, one is bad.

Bad Technical Push-Back

The team may look at a product backlog item or a user story and say, “Oh gosh! There’s a lot there to think about! We have to build this fully-architected infrastructure before we can implement that story.” This is old waterfall thinking. It is bad. The team should always be thinking (and doing) YAGNI (“You Aren’t Gonna Need It”) and KISS (“Keep It Simple, Stupid”). Technical challenges should be solved in the simplest responsible way. Features should be implemented with the simplest technical solution that actually works.

As a Product Owner, one technique you can use to help teams with this is to aggressively keep the user story as simple as possible when the team asks questions. Those questions may lead to the creation of new stories, or to splitting the existing story. Here is an example…

Suppose the story is “As a job seeker I can post my resume to the web site…” If the technical team makes certain assumptions, they may create a complex system that allows resumes to be uploaded in multiple formats with automatic keyword extraction, and beyond that, they may anticipate that the code needs to be ready for edge cases like WordPerfect format. The technical team might also assume that the system needs a database schema that includes users, login credentials, one-to-many relationships with resumes, and detailed structures for jobs, organizations, positions, dates, educational institutions, etc. The team might insist that creating a login screen in the UI is an essential prerequisite to allowing a user to upload their resume. And as for business logic, they might decide that in order to implement all this, they need some sort of standard intermediate XML format that all resumes will be translated into, so that searching features are easier to implement in the future.

It’s all CRAP, bloat and gold-plating.

Because that’s not what the Product Owner asked for.  The thing that’s really difficult for a team of techies to get with Scrum is that software is to be built incrementally.  The very first feature built is built in the simplest responsible way without assuming anything about future features.  In other words, build it like it is the last feature you will build, not the first.  In the Agile Manifesto this is described as:

Simplicity, the art of maximizing the amount of work not done, is essential.
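
To make the contrast concrete, here is a sketch of what the “simplest responsible” version of that story might look like: plain Python, one file on disk, no schema, no login, no format conversion. All the names are hypothetical:

    from pathlib import Path

    # Sketch: "As a job seeker I can post my resume to the web site" --
    # built as if it were the last feature we will ever build.
    # No database schema, no login system, no XML pipeline.
    RESUME_DIR = Path("resumes")

    def post_resume(job_seeker_email: str, resume_bytes: bytes) -> Path:
        """Store the uploaded resume exactly as received. Nothing more."""
        RESUME_DIR.mkdir(exist_ok=True)
        target = RESUME_DIR / f"{job_seeker_email}.resume"
        target.write_bytes(resume_bytes)
        return target

    # Whatever web layer exists hands us the raw upload:
    post_resume("job.seeker@example.com", b"...resume file contents...")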

The second feature the team builds should only add exactly what the Product Owner asks for.  Again, as if it was going to be the last feature built.  Every single feature (User Story / Product Backlog Item) is treated the same way.  Whenever the team starts to anticipate the business in any of these three ways, the team is wrong:

  1. Building a feature because the team thinks the Product Owner will want it.
  2. Building a feature because the Product Owner has put it later on the Product Backlog.
  3. Building a technical aspect of the system to support either of the first types of anticipation, even if the team doesn’t actually build the feature they are anticipating.

Okay, but what about architecture?  Fire your architects.  No kidding.¹

Good Technical Push-Back

Sometimes stuff gets non-simple: complicated, messy, hard to understand, hard to change. This happens despite us techies all being super-smart. Sometimes, in order to implement a new feature, we have to clean up what is already there. The Product Owner might ask the Scrum Team to build a Product Backlog Item next and the team says something like: “yes, but it will take twice as long as we initially estimated, because we have to clean things up.” This can be greatly disappointing for the Product Owner. But this is actually the kind of push-back a Product Owner wants. Why? To avoid destroying your business! (Yup, it’s that serious.)

This is called “Refactoring” and it is one of the critical Agile Engineering practices. Martin Fowler wrote a great book about it about 15 years ago. Refactoring is, simply, improving the design of your system without changing its business behaviour. A simple example is changing a set of 3 radio buttons in the UI to a drop-down box with 3 options… so that later, the Product Owner can add 27 more options. Refactoring at the level of code is often described as removing duplication. But some types of refactoring are large: replacing a relational database with a NoSQL database, moving from Java to Python for a significant component of your system, doing a full UX re-design on your web application. All of these are changes to the technical attributes of your system that are driven by an immediate need to add a new feature (or feature set) that is not supported by the current technology.
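
At the code level, here is a toy Python illustration of refactoring as removing duplication; the business behaviour is identical before and after, which is the whole point:

    # Before: the same calculation duplicated in two places (toy example).
    def total_with_tax_ontario(amount):
        return amount + amount * 0.13

    def total_with_tax_alberta(amount):
        return amount + amount * 0.05

    # After: duplication removed; behaviour unchanged, so existing tests
    # should still pass without modification.
    TAX_RATES = {"ontario": 0.13, "alberta": 0.05}

    def total_with_tax(amount, province):
        return amount + amount * TAX_RATES[province]

    assert total_with_tax(100, "ontario") == total_with_tax_ontario(100)
    assert total_with_tax(100, "alberta") == total_with_tax_alberta(100)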

The Product Owner has asked for a new feature, now, and the team has decided that in order to build it, the existing system needs refactoring.  To be clear: the team is not anticipating that the Product Owner wants some feature in the future; it’s the very next feature that the team needs to build.

This all relates to another two principles from the Agile Manifesto:

Continuous attention to technical excellence and good design enhances agility.


The best architectures, requirements, and designs emerge from self-organizing teams.

In this case, the responsibilities of the team for technical excellence and for creating the best system possible override the short-term (and short-sighted) desire of the business to trade off quality for speed. That trade-off always bites you in the end! Why? Because the cost of fixing quality problems increases exponentially as time passes from when they were introduced.

Refactoring is not a bad word.

Keep your code clean.

Let your team keep its code clean.

Oh.  And fire your architects.

Update Sep. 8, 2015: Check out this YouTube video on the closely related topic of who has authority over the Product Backlog and why developers should not set the order of PBIs:

¹ I used to be a senior architect reporting directly to the CTO of Charles Schwab.  Effectively, I fired myself and launched an incredibly successful enterprise architecture re-write project… with no up-front architecture plan.  Really… fire your architects.  Everything they do is pure waste and overhead.  Someday I’ll write that article :-)

Great Article about TDD by J. B. Rainsberger

I just finished reading “Test-Driven Development as Pragmatic Deliberate Practice”. Fantastic article. I highly recommend it to anyone who is actively coding. It strongly reflects my understanding of TDD as a fundamental technique in any Software Development Professional’s toolkit.

The Rules of Scrum: I work with all the team members to expand the Definition of “Done”

The Definition of “Done” for a Scrum Team makes transparent how close the team’s work is coming to being shippable at the end of every Sprint.  Expanding the Definition of “Done” until the team is able to ship their product increment every Sprint is a process that every Team Member helps advance.  Team Members expand the Definition of “Done” by learning new skills, developing trust and gaining authority to do work, automating repetitive activities, and finding and eliminating wasteful activities.  When every Team Member is systematically expanding the Definition of “Done”, the team builds its capacity to satisfy business needs without relying on outside people, groups or resources.  If Team Members are not actively working on this, then many of the obstacles to becoming a high-performance team will not be discovered.

The Rules of Scrum: I am truthful about the internal quality of my product (technical debt)

Scrum relies on the truthfulness of Team Members to allow for transparency about the internal quality of the product. Internal quality is primarily related to the technical aspects of the product: its design, its architecture, the lack of duplication in the code, and the level of coverage of the product with automated tests. Scrum relies on the professionalism of Team Members for the proper implementation of this rule. Being upfront, transparent and truthful about the internal quality of the product allows the Product Owner to understand how much time and effort the Team will allocate to improving internal quality and how much will be allocated to new features. It also gives the ScrumMaster an opportunity to connect with stakeholders who may be able to help remove technical debt and waste that would otherwise continue to exist. If Team Members are not truthful about the internal quality of the product, over time the system will become more cumbersome, more complex, and more painful to improve. This leads to a culture of hiding problems, which is diametrically opposite to the intended use of Scrum: to uncover problems and allow us to solve them. Another downside is that morale will decrease as Team Members come to care less about the quality of their work. This, of course, ultimately leads to external quality problems that leave customers unhappy and looking for someone else to work with.

ANN: Agile Software Engineering Practices training by Isráfíl Consulting

Isráfíl Consulting is finally prepared for the first series of its Agile Software Engineering Practices training courses. This series is offered in partnership with Berteig Consulting who are graciously hosting the registration process. Their team has also helped greatly in shaping the presentation style and structure of the course. The initial run will be in Ottawa, Toronto (Markham), and Kitchener/Waterloo.   

Topics covered will include Test Driven Development (TDD), testability, supportive infrastructure such as build and continuous integration, team metrics, incremental design and evolutionary architecture, dependency injection, and much more. (This course won’t present the planning side of XP, but covers many other aspects common to XP projects.) It makes a great complement to training in Agile processes such as XP, Scrum, or OpenAgile. The overview slide presentation is available for free download from the Isráfíl web site.

The courses are scheduled for:

The course is $1250 CAD per student, and participants receive a transferable discount of $100 CAD for other training with Berteig Consulting as a part of our ongoing partnership. I initially prototyped this course in Ottawa this December, and am very excited to see this through in several locales. Class size is limited to 15, so we can keep the instruction style more involved. The above schedules are linked to Berteig Consulting’s course system and have registration links at the bottom of the description. Locations are TBD, but will be updated at the above links as soon as they’re finalized.

A further series is planned for several US cities in March, and we’ll be sure to announce them as well.

Finally – a solid metric for code quality.

Robert C. Martin (Uncle Bob to you and me) suggested, in his “quintessence” keynote at the Agile 2008 conference, that he had the perfect metric for code quality; cyclomatic complexity and other such measures, he quipped, are interesting mostly to those who invented them. His answer was brilliant, and is easily measured during code reviews:

WTFs per minute

I love it. All you need is a counter and a stop-watch: start the code review, start the watch, and click every time you see code that makes you say or think “What the F???” This dovetails nicely with his earlier recommendation for a new statement in the Agile Manifesto: “Craftsmanship over Crap”.
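
The whole metric fits in a few lines; a sketch, with a made-up review session:

    # Sketch: WTFs per minute. A counter and a stop-watch is all it takes.
    # The session length and count below are made up for illustration.
    review_minutes = 20   # length of the code review
    wtf_count = 7         # audible or silent "What the F???" moments

    wtfs_per_minute = wtf_count / review_minutes
    print(f"Code quality: {wtfs_per_minute:.2f} WTFs per minute")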
