Category Archives: Agile Engineering

Customers Don’t Pay Me to Write Tests: The Importance of Tests

As a technical agile coach and trainer, I help teams discover effective ways of testing. Some teams ignore tests altogether, while others write every test imaginable, wasting valuable time and failing to deliver at a good pace.

My first question is always this: How much does the customer pay for tests?

Nothing.

That’s right! Not a dime. I don’t even ship my product to them with any tests; the tests aren’t even compiled into the bytecode they receive. Customers are not going to pick up my application and open a debugger to make sure I’ve written tests that pass. They don’t care how many tests I’ve written or what my code coverage ratio is. They don’t care about unit, integration and acceptance tests, or how much time I spent on mocking and stubbing to isolate my functions. They only pay for working software.

So why write them?

I don’t write tests for the user. I don’t write tests for management. I write tests for me. I write tests for my future self. I write tests for my team members and any other developer that will need to change my code.

I write tests to prove that what I have written is what I have intended. I write tests to make my code manageable, to help me refactor when, inevitably, a new feature or change request arrives. I write tests so that I can fearlessly alter a system and know what I will break, and to find and repair bugs quickly before they are pushed into production.
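
To make that concrete, here is a minimal sketch (in Python, with invented names and an invented business rule) of the kind of test I mean: it pins down what the code is intended to do, so that I, my future self, or a teammate can refactor the implementation and know immediately whether the intent still holds.

```python
# test_discount.py -- a hypothetical example: the test documents the intent
# "orders of $100 or more get a 10% discount" so the code can be refactored safely.
import unittest


def apply_discount(order_total):
    """Return the payable amount after any volume discount."""
    if order_total >= 100:
        return round(order_total * 0.90, 2)
    return order_total


class ApplyDiscountTest(unittest.TestCase):
    def test_small_orders_pay_full_price(self):
        self.assertEqual(apply_discount(99.99), 99.99)

    def test_orders_of_100_or_more_get_ten_percent_off(self):
        self.assertEqual(apply_discount(100.00), 90.00)
        self.assertEqual(apply_discount(250.00), 225.00)


if __name__ == "__main__":
    unittest.main()
```

The customer never sees this file, but the next person who changes apply_discount finds out within seconds whether the intended behaviour still holds.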

I write tests so that my team members can feel a sense of code ownership, so that they too can alter, improve and remove my code and be able to predict the outcome. I write tests so that they become a form of documentation of the capabilities of the system.

Certainly they take time to write, but they save all kinds of time when it comes to changing things later. They allow me to do the one thing that software needs to do in the rapidly evolving market: adapt. I can adapt quickly to the needs of my customers to deliver quality features rapidly.

I write as many tests as it takes to make myself and my team feel confident that we can continuously develop a quality product at a sustainable pace, responding to changes in the market and the needs of our customers. I write enough tests that I am releasing nearly bug-free code, and only as many tests as needed to satisfy just that!

In my Agile Software Developer training events, I help developers learn ways of writing tests to improve the quality of their work, so that they are able to spend more time developing new features rather than debugging old ones.



Sabine’s Principle of Cumulative Quality Advantage

Many organizations won’t survive the next decade. Of those that survive, even they are likely to be extinct before century’s end — especially the largest of contemporary organizations.

I was thinking today of a few essential adaptations that enterprises must make immediately in order to stave off their own almost-inevitable death.

With Regard to Business Strategy

  • Measure value delivered and make decisions empirically based on those data.
  • Strive toward a single profit-and-loss statement. Understand which value streams contribute to profit, yes, but minimize fine-grained inspection of cost.
  • Direct-to-consumer, small-batch delivery is winning. It will continue to win.

With Regard to People

  • Heed Conway’s Law. Understand that patterns of communication between workers directly affect the design and structure of their results. Organize staff flexibly and in a way which resembles future states or ‘desired next-states’ so those people produce the future or desired next-architectures. This implies that functional business units and structures based on shared services must be disassembled; instead, organize people around products and then finance the work as long-term initiatives instead of finite projects.
  • Distribute all decision-making to people closest to the market and assess their effectiveness by their results; ensure they interact directly with end users and measure (primarily) trailing indicators of value delivered. Influence decision-making with guiding principles, not policies.
  • The words ‘manager’ and ‘management’ are derogatory terms and not to be used anymore.
  • Teams are the performance unit, not individuals. Get over it.

With Regard to Technology

  • Technical excellence must be known by all to be the enabler of agility.
  • Technical excellence cannot be purchased — it is an aspect of organizational culture.

For example, in the realm of software delivery, extremely high levels of quality are found in organizations with the shortest median times-to-market and the most code deployments per minute. The topic of Continuous Delivery is so important currently because reports show a direct correlation between (a) the frequency of deployment and (b) quality.

That is, as teams learn to deploy more frequently, their time-to-market (lead time), recovery rates, and success rates all change for the better — dramatically!

I have a theory which is exemplified in the following graph.

Sabine’s Principle of Cumulative Quality Advantage Explained

As the intervals between deployments decrease (blue/descending line)

…quality increases (gold/ascending line)

…and the amount & cost of technical debt decreases (red area)

…and competitive advantage accumulates (green area).

Note: The cusp between the red and green areas represents the turning point an organization makes from responding to defects to preventing them.


This is a repost from David’s original article at tumblr.davesabine.com.


Do Great Work & Don’t Ship Defects

Teams are often pressured by business stakeholders to “go faster” and deliver new features quickly at the expense of quality. This pressure leads to technical debt unless the team stands strong. Engineers and developers, you have a responsibility to your teams, your profession, and to yourselves to uphold high standards. Yes, learn and incorporate techniques which enable you to deliver frequently but take care to also ensure that your code meets or exceeds your definition of done.



Design Debt & UX Debt is Technical Debt

Hey! Let’s all work together, please.

Technical Debt is a term which captures sloppy code, unmaintainable architecture, clumsy user experience, cluttered visual layout, bloated feature-sets, etc.  My stance is that the term, Technical Debt, includes all the problems which occur when people defer professional discipline — regarding any/every technical practice such as product management, visual and UX design, or code.

I assert that the change we need to catalyze in the business community is larger than any one discipline and I am worried that I have seen an increase in blog articles in recent years about concepts like “Design Debt”, “UX Debt”, “Experience Debt” — these articles unfortunately are not helping and have served only to divide the community. They are divisive, not because we shouldn’t be discussing the discrete facets in which Technical Debt can manifest, but because authors often take a decidedly combative approach in their writing.  Take these phrases, for example:

  • “Product Design Debt Versus Technical Debt” written by Andrew Chen
  • “User Experience Debt: Technical debt is only half the battle” written by Clinton Christian
  • “Design debt is more dangerous because…” written by James Engwall.

I agree with Andrew Chen that Product Design Debt is a problem — I just don’t like that he chose to impose a dichotomy where there is none.  Why must he argue one “versus” another?  Clinton Christian has implied that we’re in a “battle”.  James Engwall has compared the “danger” of Design Debt relative to Technical Debt.  These words are damaging, I argue, because they divert attention to symptoms and away from root causes.

The root cause of Technical Debt is that people forget this simple principle of the Agile Manifesto: “Continuous attention to technical excellence and good design enhances agility.”

The root solution to Technical Debt — all of its forms — is to help business leaders realize there is a difference between “incremental” development and “iterative” development so they may understand the ROI of refactoring.  No technical expert should ever have to justify the business case for feature-pruning, refreshing a user interface, refactoring code, or prioritizing defects.  Every business leader should trust that their technical staff are disciplined and excellent.

Yes, please blog about UX Debt and Product Development Debt, etc.  But please do so in a way that encourages cohesion and unity within the Product Development community.



All Team Quality Standards Should Be Part of the Definition of “Done”

The other day a technology leader was asking questions as if he didn’t agree that things like pair programming and code review should be part of the Definition of “Done”, because they are activities that don’t show up in a tangible way in the end product. But if these things are part of your quality standards, they should be included in the definition of “Done” because they inform the “right way” of getting things done. In other words, the Definition of “Done” is not merely a description of the “Done” state but also the way(s) of getting to “Done” – the “how” in terms of quality standards. In fact, if you look at almost any team’s definition of “Done”, a lot of it is QA activity, carried out either as a practice or as an automated operation. Every agile engineering practice is essentially a quality standard and, as it becomes part of a team’s practice, should be included as part of the definition of “Done”. The leader’s question was: “If we’re done and we didn’t do pair programming, and pair programming is part of our definition of ‘Done’, does that mean we’re not done?” That is a backwards question, because if you are saying you’re done and you haven’t done pair programming, then by definition pair programming isn’t part of your definition of done. But there are teams out there who would never imagine themselves to be done without pair programming, because pair programming is a standard that they see as essential to delivering a quality product.

Everything that a Scrum Development Team does to ensure quality should be part of their definition of “Done”. The definition of “Done” isn’t just a description of the final “Done” state of an increment of product. If that were all it was, then we should be asking why anything is part of the definition of “Done” at all: the team could just say that they are done whenever they say they are done, and never actually identify better ways of getting to done or establish better standards. We could just say (and we did, and still do), “there’s the software, it’s done,” the software itself being its own definition of “Done”. That is the whole problem this artifact solves.

On the contrary, the definition of “Done” is what it means for something to be done properly. In other words, it is the artifact that captures the “better ways of developing software” that the team has uncovered and established as practice, because their practices reflect their belief that “Continuous attention to technical excellence and good design enhances agility” (Manifesto for Agile Software Development). The definition of “Done” is essentially about integrity—what is done every Sprint in order to be Agile and get things done better. When we say that testing is part of our definition of “Done”, that is our way of saying that, as a team, we have a shared understanding that it is better to test something before we say it is done than to declare it done without testing it, because without testing we are not confident that it is done to our standards of quality. Otherwise, we would be content to just write a bunch of code, see that it “works” on a workstation or in the development environment, and push it into production as a “Done” feature with a high chance that it contains a bunch of bugs or may even break the build.

This is similar to saying a building is “Done” without an inspection (activity/practice) confirming that it meets certain safety standards, or to a surgeon saying that he or she has done a good enough job of performing a surgical operation without monitoring the vital signs of the patient (partly automated, partly a human activity). Of course, this is false. The same logic holds true when we add other activities (automated or otherwise) that reflect more stringent quality standards around our products. The definition of “Done”, therefore, is partly made up of a set of activities that make up the standard quality practices of a team.

Professions have standards. For example, it is a standard practice for a surgeon to wash his or her hands between performing surgical operations. At one time it wasn’t. Much like TDD or pair programming, it was discovered as a better way to get a job done. In this day and age, we would not say that a surgeon had done a good job if he or she failed to carry out this standard practice. It would be considered preposterous for someone to say that they don’t care whether or not surgeons wash their hands between operations as long as the results are good. If a dying patient said to a surgeon, “don’t waste time washing your hands, just cut me open and get right to it,” of course this would be dismissed as an untenable request. Regardless of whether or not the results of the surgery were satisfactory to the patient, we would consider it preposterous that a surgeon would not wash his or her hands because we know that this is statistically extremely risky, even criminal behaviour. We just know better. Hand washing was discovered, recognized as a better way of working, formalized as a standard and is now understood by even the least knowledgeable members of society as an obvious part of the definition of “Done” of surgery. Similarly, there are some teams that would not push anything to production without TDD and automated tests. This is a quality standard and is therefore part of their definition of “Done”, because they understand that manual testing alone is extremely risky. And then there are some teams with standards that would make it unthinkable for them to push a feature that has not been developed with pair programming. For these teams, pair programming is a quality standard practice and therefore part of their definition of “Done”.

“As Scrum Teams mature,” reads the Scrum Guide, “it is expected that their definitions of “Done” will expand to include more stringent criteria for higher quality.” What else is pair programming, or any other agile engineering practice, if it is not a part of a team’s criteria for higher quality? Is pair programming not a more stringent criterion than, say, traditional code review? Therefore, any standard, be it a practice or an automated operation, that exists as part of the criteria for higher quality should be included as part of the definition of “Done”. If it’s not part of what it means for an increment of product to be “Done”—that is “done right”—then why are you doing it?



Measurements Towards Continuous Delivery

I was asked yesterday what measurements a team could start to take to track their progress towards continuous delivery. Here are some initial thoughts.

Lead time per work item to production

Lead time starts the moment we have enough information that we could start the work (i.e. it’s “ready”). Most teams that measure lead time will stop the clock when that item reaches the team’s definition of “done”, which may or may not mean that the work is in production. In this case, we want to explicitly keep tracking the time until it really is in production.
Note that when we’re talking about continuous delivery, we make the distinction between deploy and release. Deploy is when we’ve pushed it to the production environment and release is when we turn it on. This measurement stops at the end of deploy.

Cycle time to “done”

If the lead time above is excessively long then we might want to track just cycle time. Cycle time starts when we begin working on the item and stops when we reach “done”.
When teams are first starting their journey to continuous delivery, lead times to production are often measured in months and it can be hard to get sufficient feedback with cycles that long. Measuring cycle time to “done” can be a good intermediate measurement while we work on reducing lead time to production.
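
As a rough illustration, here is a small Python sketch of how these two measurements can be computed from work-item timestamps; the field names and dates are invented, so substitute whatever your tracking tool actually records.

```python
# Sketch: lead time (ready -> deployed) and cycle time (started -> done) per item.
# Field names and dates are invented; use whatever your tracking tool exports.
from datetime import datetime
from statistics import median

work_items = [
    {"id": "A-1", "ready": "2015-03-02", "started": "2015-03-04",
     "done": "2015-03-10", "deployed": "2015-04-15"},
    {"id": "A-2", "ready": "2015-03-05", "started": "2015-03-09",
     "done": "2015-03-13", "deployed": "2015-04-15"},
]


def days_between(earlier, later):
    fmt = "%Y-%m-%d"
    return (datetime.strptime(later, fmt) - datetime.strptime(earlier, fmt)).days


lead_times = [days_between(i["ready"], i["deployed"]) for i in work_items]
cycle_times = [days_between(i["started"], i["done"]) for i in work_items]

print("median lead time to production:", median(lead_times), "days")
print("median cycle time to 'done':   ", median(cycle_times), "days")
```

Tracking the median rather than the average keeps a few unusually old items from hiding the typical experience.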

Escaped defects

If a bug is discovered after the team said the work was done then we want to track that. Prior to hitting “done”, it’s not really a bug – it’s just unfinished work.
Shipping buggy code is bad and this should be obvious. Continuously delivering buggy code is worse. Let’s get the code in good shape before we start pushing deploys out regularly.

Defect fix times

How old is the oldest reported bug? I’ve seen teams that had bug lists that went on for pages and where the oldest were measured in years. Really successful teams fix bugs as fast as they appear.

Total regression test time

Track the total time it takes to do a full regression test. This includes both manual and automated tests. Teams that have primarily manual tests will measure this in weeks or months. Teams that have primarily automated tests will measure this in minutes or hours.
This is important because we would like to do a full regression test prior to any production deploy. Not doing that regression test introduces risk to the deployment. We can’t turn on continuous delivery if the risk is too high.

Time the build can be broken

How long can your continuous integration build be broken before it’s fixed? We all make mistakes. Sometimes something gets checked in that breaks the build. The question is how important is it to the team to get that build fixed? Does the team drop everything else to get it fixed or do they let it stay broken for days at a time?

Continuous delivery isn’t possible with a broken build.

Number of branches in version control

By the time you’re ready to turn on continuous delivery, you’ll only have one branch. Measuring how many you have now and tracking that over time will give you some indication of where you stand.
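
If the code lives in git, one rough way to track this number over time is to count local and remote-tracking branches. The small Python sketch below assumes git is installed and that the script is run inside the repository’s working tree.

```python
# Sketch: count branches (local + remote-tracking) as a trend metric.
# Assumes git is installed and the script runs inside the repository.
import subprocess


def branch_count():
    result = subprocess.run(
        ["git", "branch", "--all", "--format=%(refname:short)"],
        capture_output=True, text=True, check=True,
    )
    return len([line for line in result.stdout.splitlines() if line.strip()])


if __name__ == "__main__":
    print("branches:", branch_count())
```

The absolute number matters less than the trend toward one.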

If your code isn’t in version control at all then stop taking measurements and just fix that one right now. I’m aware of teams in 2015 that still aren’t using version control and you’ll never get to continuous delivery that way.

Production outages during deployment

If your production deployments require taking the system offline then measure how much time it’s offline. If you achieve zero-downtime deploys then stop measuring this one.  Some applications such as batch processes may never require zero-downtime deploys. Interactive applications like webapps absolutely do.

I don’t suggest starting with everything at once. Pick one or two measurements and start there.



Summary of User Stories: The Three “C”s and INVEST

User Stories

Learning Objectives

Becoming familiar with the “User Story” approach to formulating Product Backlog Items and how it can be implemented to improve the communication of user value and the overall quality of the product by facilitating a user-centric approach to development.

Consider the following

User stories trace their origins to eXtreme Programming, another Agile method with many similarities to Scrum. Scrum teams often employ aspects of eXtreme Programming, including user stories as well as engineering practices such as refactoring, test-driven development (TDD) and pair programming to name a few. In future modules of this program, you will have the opportunity to become familiar enough with some of these practices in order to understand their importance in delivering quality products and how you can encourage your team to develop them. For now, we will concentrate on the capability of writing good user stories.

The Three ‘C’s [i]

A User Story has three primary components, each of which begins with the letter ‘C’:

Card

  • The Card, or written text of the User Story, is best understood as an invitation to conversation. This is an important concept, as it fosters the understanding that in Scrum, you don’t have to have all of the Product Backlog Items written out perfectly “up front”, before you bring them to the team. It acknowledges that the customer and the team will be discovering the underlying business/system needs as they are working on it. This discovery occurs through conversation and collaboration around user stories.
  • The Card usually follows a format similar to the one below:

As a <user role> of the product,

I can <action>

So that <benefit>.

  • In other words, the written text of the story, the invitation to a conversation, must address the “who”, “what” and “why” of the story.
  • Note that there are two schools of thought on who the <benefit> should be for. Interaction design specialists (like Alan Cooper) tell us that everything needs to be geared towards not only the user but a user Persona with a name, photo, bio, etc. Other experts who are more focused on the testability of the business solution (like Gojko Adzic) say that the benefit should directly address an explicit business goal. Imagine if you could do both at once! You can, and this will be discussed further in more advanced modules.

Conversation

  • The collaborative conversation facilitated by the Product Owner which involves all stakeholders and the team.
  • As much as possible, this is an in-person conversation.
  • The conversation is where the real value of the story lies and the written Card should be adjusted to reflect the current shared understanding of this conversation.
  • This conversation is mostly verbal but most often supported by documentation and ideally automated tests of various sorts (e.g. Acceptance Tests).

Confirmation

  • The Product Owner must confirm that the story is complete before it can be considered “done”
  • The team and the Product Owner check the “doneness” of each story in light of the Team’s current definition of “done”
  • Specific acceptance criteria that are different from the current definition of “done” can be established for individual stories, but the current criteria must be well understood and agreed to by the Team.  All associated acceptance tests should be in a passing state (a minimal example is sketched below).
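
For illustration only, here is a minimal sketch (in Python’s unittest, with an invented story and an invented PasswordService class) of what an automated acceptance check confirming a single story might look like.

```python
# Sketch: an automated acceptance check for the invented story
# "As a registered user, I can reset my password so that I can regain access."
# PasswordService is a stand-in written only for this example.
import unittest


class PasswordService:
    def __init__(self):
        self._users = {"casey@example.com": "old-secret"}

    def request_reset(self, email):
        # A real system would send a reset token by email.
        return email in self._users

    def reset(self, email, new_password):
        if email not in self._users:
            raise KeyError(email)
        self._users[email] = new_password

    def can_log_in(self, email, password):
        return self._users.get(email) == password


class PasswordResetAcceptanceTest(unittest.TestCase):
    def test_registered_user_can_reset_and_log_in_again(self):
        service = PasswordService()
        self.assertTrue(service.request_reset("casey@example.com"))
        service.reset("casey@example.com", "new-secret")
        self.assertTrue(service.can_log_in("casey@example.com", "new-secret"))


if __name__ == "__main__":
    unittest.main()
```

Whether a check like this belongs to one story’s acceptance criteria or to the shared definition of “done” is exactly the kind of thing the Confirmation conversation settles.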

INVEST [ii]

The test for determining whether or not a story is well understood and ready for the team to begin working on it is the INVEST acronym:

I – Independent

  • The solution can be implemented by the team independently of other stories.  The team should be expected to break technical dependencies as often as possible – this may take some creative thinking and problem solving as well as the Agile technical practices such as refactoring.

N – Negotiable

  • The scope of work should have some flex and not be pinned down like a traditional requirements specification.  As well, the solution for the story is not prescribed by the story and is open to discussion and collaboration, with the final decision for technical implementation being reserved for the Development Team.

V – Valuable

  • The business value of the story, the “why”, should be clearly understood by all. Note that the “why” does not necessarily need to be from the perspective of the user. “Why” can address a business need of the customer without necessarily providing a direct, valuable result to the end user. All stories should be connected to clear business goals.  This does not mean that a single user story needs to be a marketable feature on its own.

E – Estimable

  • The team should understand the story well enough to be able to estimate the complexity of the work and the effort required to deliver the story as a potentially shippable increment of functionality.  This does not mean that the team needs to understand all the details of implementation in order to estimate the user story.

S – Small

  • The item should be small enough that the team can deliver a potentially shippable increment of functionality within a single Sprint. In fact, this should be considered as the maximum size allowable for any Product Backlog Item as it gets close to the top of the Product Backlog.  This is part of the concept of Product Backlog refinement that is an ongoing aspect of the work of the Scrum Team.

T – Testable

  • Everyone should understand and agree on how the completion of the story will be verified. The definition of “done” is one way of establishing this. If everyone agrees that the story can be implemented in a way that satisfies the current definition of “done” in a single Sprint and this definition of “done” includes some kind of user acceptance test, then the story can be considered testable.

Note: The INVEST criteria can be applied to any Product Backlog Item, even those that aren’t written as User Stories.

Splitting Stories:

Sometimes a user story is too big to fit into a Sprint. Some ways of splitting a story include:

  • Split by process step
  • Split by I/O channel
  • Split by user options
  • Split by role/persona
  • Split by data range

WARNING: Do not split stories by system, component, architectural layer or development process, as this will conflict with the team’s definition of “done” and undermine the ability of the team to deliver potentially shippable software every Sprint.

Personas

Like User Stories, Personas are a tool for interaction design. The purpose of personas is to develop a precise description of our user, so that we can develop stories that describe what that user wishes to accomplish. In other words, a persona is a much more developed and specific “who” for our stories. The more specific we make our personas, the more effective they are as design tools. [iii]

Each of our fictional but specific users should have the following information:

  • Name
  • Occupation
  • Relationship to product
  • Interest & personality
  • Photo

Only one persona should be the primary persona and we should always build for the primary persona. User story cards using personas replace the user role with the persona:

<persona>

can <action>

so that <benefit>.

 


[i] The Card, Conversation, Confirmation model was first proposed by Ron Jeffries in 2001.

[ii] INVEST in Good Stories, and SMART Tasks. Bill Wake. http://xp123.com/articles/invest-in-good-stories-and-smart-tasks/

[iii] The Inmates are Running the Asylum. Alan Cooper. Sams Publishing. 1999. pp. 123-128.



Scrum Data Warehouse Project

Many people have concerns about the possibility of using Scrum or other Agile methods on large projects that don’t directly involve software development.  Data warehousing projects are commonly brought up as examples where, just maybe, Scrum wouldn’t work.
I have worked as a coach on a couple of such projects.  Here is a brief description of how it worked (both the good and the bad) on one such project:
The project was a data warehouse migration from Oracle to Teradata.  The organization had about 30 people allocated to the project.  Before adopting Scrum, they had done a bunch of up-front analysis work.  This analysis work resulted in a dependency map among approximately 25,000 tables, views and ETL scripts.  The dependency map was stored in an MS Access DB (!).  When I arrived as the coach, there was an expectation that the work would be done according to dependencies and that the “team” would just follow that sequence.
I learned about this all in the first week as I was doing boot-camp style training on Scrum and Agile with the team and helping them to prepare for their first Sprint.
I decided to challenge the assumption about working based on dependencies.  I spoke with the Product Owner about the possible ways to order the work based on value.  We spoke about a few factors including:
  • retiring Oracle data warehouse licenses / servers,
  • retiring disk space / hardware,
  • and saving CPU time with new hardware
The Product Owner started to work on getting metrics for these three factors.  He was able to find that the data was available through some instrumentation that could be implemented quickly so we did this.  It took about a week to get initial data from the instrumentation.
In the meantime, the Scrum teams (4 of them) started their Sprints working on the basis of the dependency analysis.  I “fought” with them to address the technical challenges of allowing the Product Owner to work on the migration in an order based more on value – to break the dependencies with a technical solution.  We discussed the underlying technologies for the ETL which included bash scripts, AbInitio and a few other technologies.  We also worked on problems related to deploying every Sprint, including getting approval from the organization’s architectural review board on a Sprint-by-Sprint basis.  We also had the teams moved a few times until an ideal team workspace was found.
After the Product Owner found the data, we sorted (ordered) the MS Access DB by business value.  This involved a fairly simple calculation based primarily on disk space and CPU time associated with each item in the DB.  This database of 25,000 items became the Product Backlog.  I started to insist to the teams that they work based on this order, but there was extreme resistance from the technical leads.  This led to a few weeks of arguing around whiteboards about the underlying data warehouse ETL technology.  Fundamentally, I wanted the teams to treat the data warehouse tables as the PBIs and have both Oracle and Teradata running simultaneously (in production) with updates every Sprint for migrating data between the two platforms.  The technical team kept insisting this was impossible.  I didn’t believe them.  Frankly, I rarely believe a technical team when they claim “technical dependencies” as a reason for doing things in a particular order.
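
The original value calculation isn’t recorded here, but the ordering idea can be sketched roughly like this (the weights and sample rows below are invented for illustration):

```python
# Sketch: ordering migration items by a simple value score.
# Weights and sample rows are invented; the project's real formula is not shown here.
backlog = [
    {"table": "CUSTOMER_FACT",   "disk_gb": 800,  "cpu_hours_month": 120},
    {"table": "CLICKSTREAM_RAW", "disk_gb": 4200, "cpu_hours_month": 15},
    {"table": "PRODUCT_DIM",     "disk_gb": 12,   "cpu_hours_month": 2},
]


def value_score(item, disk_weight=1.0, cpu_weight=5.0):
    """Higher score = more disk and CPU freed by migrating this item sooner."""
    return disk_weight * item["disk_gb"] + cpu_weight * item["cpu_hours_month"]


for item in sorted(backlog, key=value_score, reverse=True):
    print(f'{item["table"]:16} score={value_score(item):8.1f}')
```

With a score like this, the Product Backlog order becomes an explicit, arguable business decision rather than a by-product of technical dependencies.
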
Finally, after a total of 4 Sprints of 3 weeks each, we finally had a breakthrough.  In a one-on-one meeting, the most senior tech lead admitted to me that what I was arguing was actually possible, but that the technical people didn’t want to do it that way because it would require them to touch many of the ETL scripts multiple times – they wanted to avoid re-work.  I was (internally) furious due to the wasted time, but I controlled my feelings and asked if it would be okay if I brought the Product Owner into the discussion.  The tech lead allowed it and we had the conversation again with the PO present.  The tech lead admitted that breaking the dependencies was possible and explained how it could lead to the teams touching ETL scripts more than once.  The PO basically said: “awesome!  Next Sprint we’re doing tables ordered by business value.”
A couple Sprints later, the first of 5 Oracle licenses was retired, and the 2-year $20M project was a success, with nearly every Sprint going into production and with Oracle and Teradata running simultaneously until the last Oracle license was retired.  Although I don’t remember the financial details anymore, the savings were huge due to the early delivery of value.  The apprentice coach there went on to become a well-known coach at this organization and still is a huge Agile advocate 10 years later!


Link: Short Article About Pair Programming

Pair Programming Economics by Olaf Lewitz describes three activities in programming: typing, problem-solving and reading code. How does pair programming help? By making the balance between those three activities better.



Definition of DevOps and the Definition of “Done” – Quick Links

I was poking around for a good definition of DevOps and found a thoughtful article written by Ernest Mueller called What is DevOps?  Highly recommended reading as it includes lots of insight about the relationship between Agile and DevOps.  FWIW, I feel that the concept of the Definition of “Done” is Scrum’s own original take on the same class of ideas: breaking down silos in an organization to get stuff into the marketplace faster and faster.  I even talked about operationalizing software development back in 2004 and 2005 as a counterpoint to the project management approach which puts everyone in silos and pushes work through phase gates.



Technical Push-Back – When is it Okay? When is it Bad?

Whenever I run a Certified Scrum Product Owner training session, one concept stands out as critical for participants: the relationship of the Product Owner to the technical demands of the work being done by the Scrum team.

The Product Owner is responsible for prioritizing the Product Backlog. This responsibility is, of course, also matched by their authority to do so. When the Product Owner collaborates with the team in the process of prioritization, there may be ways in which the team “pushes back”. There are two possible reasons for push-back. One is good, one is bad.

Bad Technical Push-Back

The team may look at a product backlog item or a user story and say “Oh gosh! There’s a lot there to think about! We have to build this fully-architected infrastructure before we can implement that story.” This is old waterfall thinking. It is bad. The team should always be thinking (and doing) YAGNI and KISS. Technical challenges should be solved in the simplest responsible way. Features should be implemented with the simplest technical solution that actually works.

As a Product Owner, one technique you can use to help teams with this is to aggressively keep the user story as simple as possible when the team asks questions. The questions that are asked may lead to the creation of new stories, or to splitting the existing story. Here is an example…

Suppose the story is “As a job seeker I can post my resume to the web site…” If the technical team makes certain assumptions, they may create a complex system that allows resumes to be uploaded in multiple formats with automatic keyword extraction, and even beyond that, they may anticipate that the code needs to be ready for edge cases like WordPerfect format.  The technical team might also assume that the system needs a database schema that includes users, login credentials, one-to-many relationships with resumes, detailed structures about jobs, organizations, positions, dates, educational institutions, etc. The team might insist that creating a login screen in the UI is an essential prerequisite to allowing a user to upload their resume.  And as for business logic, they might decide that in order to implement all this, they need some sort of standard intermediate XML format that all resumes will be translated into so that searching features are easier to implement in the future.

It’s all CRAP, bloat and gold-plating.

Because that’s not what the Product Owner asked for.  The thing that’s really difficult for a team of techies to get with Scrum is that software is to be built incrementally.  The very first feature built is built in the simplest responsible way without assuming anything about future features.  In other words, build it like it is the last feature you will build, not the first.  In the Agile Manifesto this is described as:

Simplicity, the art of maximizing the amount of work not done, is essential.

The second feature the team builds should only add exactly what the Product Owner asks for.  Again, as if it was going to be the last feature built.  Every single feature (User Story / Product Backlog Item) is treated the same way.  Whenever the team starts to anticipate the business in any of these three ways, the team is wrong:

  1. Building a feature because the team thinks the Product Owner will want it.
  2. Building a feature because the Product Owner has put it later on the Product Backlog.
  3. Building a technical aspect of the system to support either of the first types of anticipation, even if the team doesn’t actually build the feature they are anticipating.

Okay, but what about architecture?  Fire your architects.  No kidding.¹

Good Technical Push-Back

[Image: a Rube Goldberg “self-operating napkin” machine]

Sometimes stuff gets non-simple: complicated, messy, hard to understand, hard to change.  This happens despite us techies all being super-smart.  Sometimes, in order to implement a new feature, we have to clean up what is already there.  The Product Owner might ask the Scrum Team to build this Product Backlog Item next and the team says something like: “yes, but it will take twice as long as we initially estimated, because we have to clean things up.”  This can be greatly disappointing for the Product Owner.  But, this is actually the kind of push-back a Product Owner wants.  Why?  In order to avoid destroying your business!  (Yup, that serious.)

This is called “Refactoring” and it is one of the critical Agile Engineering practices.  Martin Fowler wrote a great book about this about 15 years ago.  Refactoring is, simply, improving the design of your system without changing its business behaviour.  A simple example is changing a set of 3 radio buttons in the UI to a drop-down box with 3 options… so that later, the Product Owner can add 27 more options.  Refactoring at the level of code is often described as removing duplication.  But some types of refactoring are large: replacing a relational database with a NoSQL database, moving from Java to Python for a significant component of your system, doing a full UX re-design on your web application.  All of these are changes to the technical attributes of your system that are driven by an immediate need to add a new feature (or feature set) that is not supported by the current technology.
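
As a small, invented illustration of the “removing duplication” kind of refactoring: the behaviour stays identical while the shared rule moves to one place.

```python
# Sketch: a tiny "remove duplication" refactoring. Behaviour is unchanged;
# the names and the tax rule are invented for illustration.

# Before: the same subtotal-plus-tax logic is repeated in two places.
def invoice_total_before(items):
    return round(sum(i["price"] * i["qty"] for i in items) * 1.13, 2)


def quote_total_before(items):
    return round(sum(i["price"] * i["qty"] for i in items) * 1.13, 2)


# After: the shared rule lives in one place, so a tax change touches one line.
TAX_RATE = 0.13


def _total_with_tax(items):
    subtotal = sum(i["price"] * i["qty"] for i in items)
    return round(subtotal * (1 + TAX_RATE), 2)


def invoice_total(items):
    return _total_with_tax(items)


def quote_total(items):
    return _total_with_tax(items)


if __name__ == "__main__":
    sample = [{"price": 10.00, "qty": 3}, {"price": 2.50, "qty": 4}]
    assert invoice_total_before(sample) == invoice_total(sample) == 45.2
```

The larger refactorings mentioned above follow the same pattern, just at the scale of databases, languages, or whole user interfaces.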

The Product Owner has asked for a new feature, now, and the team has decided that in order to build it, the existing system needs refactoring.  To be clear: the team is not anticipating that the Product Owner wants some feature in the future; it’s the very next feature that the team needs to build.

This all relates to another two principles from the Agile Manifesto:

Continuous attention to technical excellence and good design enhances agility.

and

The best architectures, requirements, and designs emerge from self-organizing teams.

In this case, the responsibilities of the team for technical excellence and creating the best system possible override the short-term (and short-sighted) desire of the business to trade off quality in order to get speed.  That trade-off always bites you in the end!  Why? Because the cost of fixing quality problems increases exponentially as time passes from when they were introduced.


Refactoring is not a bad word.

Keep your code clean.

Let your team keep its code clean.

Oh.  And fire your architects.

Update Sep. 8, 2015: Check out this YouTube video on the closely related topic of who has authority over the Product Backlog and why developers should not set the order of PBIs:

¹ I used to be a senior architect reporting directly to the CTO of Charles Schwab.  Effectively, I fired myself and launched an incredibly successful enterprise architecture re-write project… with no up-front architecture plan.  Really… fire your architects.  Everything they do is pure waste and overhead.  Someday I’ll write that article 🙂



A great example of agile style teamwork in a non-software environment

I am regularly asked for examples of where Agile Practices could be used that are not related to software development.

I recently came across this video and just had to share the link to it.

The company encourages all team members to participate, keeps things time-boxed and makes appropriate use of subject matter experts.

It’s awesome to see what can be done in 5 days.

http://www.youtube.com/watch?v=M66ZU2PCIcM

 



Agile 2010 Session Proposal: TDD for iPhone Development

I’ve just submitted a proposed session to the Agile 2010 web site:

Test-Driven Development for the iPhone, iPod and iPad

Please go to the site and let me know in the comments there if you have any suggestions about the proposal!



ANN: Agile Software Engineering Practices training by Isráfíl Consulting

Isráfíl Consulting is finally prepared for the first series of its Agile Software Engineering Practices training courses. This series is offered in partnership with Berteig Consulting who are graciously hosting the registration process. Their team has also helped greatly in shaping the presentation style and structure of the course. The initial run will be in Ottawa, Toronto (Markham), and Kitchener/Waterloo.   

Topics covered will include Test Driven Development (TDD), testability, supportive infrastructure such as build and continuous integration, team metrics, incremental design and evolutionary architecture, dependency injection, and so much more. (This course won’t present the planning side of XP, but covers many other aspects common to XP projects.) It makes a great complement for training in Agile Processes such as XP, Scrum, or OpenAgile. The overview slide presentation is available for free download from the Isráfíl web site.

The courses are scheduled for:

The course is $1250 CAD per student, and participants receive a transferable discount of $100 CAD for other training with Berteig Consulting as a part of our ongoing partnership. I initially prototyped this course in Ottawa this December, and am very excited to see this through in several locales. Class size is limited to 15, so we can keep the instruction style more involved. The above schedules are linked to Berteig Consulting’s course system and have registration links at the bottom of the description. Locations are TBD, but will be updated at the above links as soon as they’re finalized.

A further series is planned for several US cities in March, and we’ll be sure to announce them as well.

