Lisa Crispin has written an excellent article about defect tracking in agile environments. The article is a couple of months old now, but if you haven’t read it yet, you definitely should! I particularly like the perspective that Mary Poppendieck offers in the article…
Since Quality is Not Negotiable, and since defects in both XP and Scrum are meant to be dealt with immediately, there is a good argument to be made for doing away with any type of automated tool-based defect tracking.
One team I coached really struggled with this. They were working on the first-ever agile project in their organization, and none of the team members had any prior experience with agile or with test-driven development. It was a small project, but management wanted to do things “right”: they had a team room, a cross-functional team, short iterations, and so on.
One member of the team was the organization’s Quality Assurance expert. This person was extremely capable, writing excellent test scripts (in Excel) and thinking carefully about quality. However, the organization was not set up to support an agile approach to quality: QA folks were measured on how many defects they found and logged in their defect tracking system.
Since the team had decided to use test-driven development, they soon ran into an odd question: when does incomplete work become defective work that needs to be logged? The tests are written first and start out failing, and this is normal for every single feature, exception condition, and even individual method, so tracking all of these failures as “defects” would be meaningless. So how do you decide that something is a defect rather than test-driven work that just isn’t making its tests pass yet?
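The red-then-green rhythm behind that question can be sketched in a few lines (the function and test names here are hypothetical, not from the team’s actual project):

```python
# A minimal TDD sketch. The test is written first; until the feature
# is implemented, it fails -- which in TDD is the normal, expected
# starting state, not a defect worth logging.

def test_apply_discount():
    # Written BEFORE apply_discount exists. At that point the test
    # fails with a NameError -- the "red" phase.
    assert apply_discount(100.0, 0.10) == 90.0

# Only afterwards is the implementation written to make the test pass
# (the "green" phase):
def apply_discount(price: float, rate: float) -> float:
    """Return the price reduced by the given fractional rate."""
    return price * (1.0 - rate)

test_apply_discount()  # passes once the implementation is in place
```

A failing test here signals work in progress, not a bug; it only starts to look like a defect when the iteration ends and the test is still red.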
After a great deal of discussion, confusion, and consternation, we finally agreed on a simple and effective practice: within an iteration, any issues or problems that come up are written on the whiteboard. The whiteboard needs to be clear by the end of the iteration; any items still on it are logged into the defect tracking system and become top priority for the next iteration. Likewise, any items with non-passing tests at the end of the iteration are logged into the defect tracking system and become top priority for the next iteration.
This helped: we ended up with at most one or two loggable items at the end of each iteration, and often there were no known defects at all.
This was legitimately painful for the QA person, whose defect-logging rate went way down. Fortunately, since this was a pilot project, it was easy to explain to management what was going on. It is a very good example of how a management practice that makes perfect sense in a non-agile environment becomes a huge obstacle in an agile one.