I recently had the pleasure of coaching a team through the use of agile methods to run an IT infrastructure upgrade project. As a coach, I was there to observe and guide, to inspect and adapt. The goal of the project was to upgrade a critical piece of software that was at the end of its support life, and at the same time upgrade another piece of software to a more recent minor version. As well, new hardware would be used to run the updated software. All of this is in the context of a mission-critical system used by a major US financial services corporation. Let’s call this project the “System Renewal” project or SR.
Before the decision to use agile methods, the SR project had been defined by two non-agile requirements documents that listed in detail what the project shall and shall not deliver: one for the “business” requirements and one for the “system” requirements. For example:
Description: The SR upgrade shall maintain the same or better level of performance.
Requirement Priority: Essential
Source/References: Joe Somebody
Comments: The SR version x.01 shall be upgraded to x.02 as a part of this project.
I worked with some key people to review the project requirements before beginning. We decided to hold a workshop with the team to accomplish three things: introduce the team to agile methods, provide some initial team-building opportunities, and introduce and launch the project with the team. The team decided to use two-week iterations and to shoot for a single release after four iterations. Since this was much faster than the time estimated using the old waterfall methodology, there was a large margin of safety in case a second release was necessary.
During this workshop, the team re-organized the requirements into larger “user stories” and smaller tasks. This process of re-organization allowed the team members to collaborate and to get to know the nature of the project in more depth. There was a challenge here: the work to be done had nothing to do with end users. The team, with some guidance from me and other coaches, decided that the “user stories” would be very similar to the top level of a traditional work breakdown structure (WBS) and the tasks would be the second level. As much as possible, dependencies among the user stories were avoided so that they could be re-organized from iteration to iteration as circumstances required.
Once the project launch workshop was complete, the team immediately moved into the first iteration planning meeting. In this meeting, the team estimated the size of the tasks and selected the user stories to be completed in the first iteration. The basic idea was to accomplish one main goal: to upgrade the software on a small number of systems (the total number of systems to be upgraded was about 15). A secondary goal was to make some progress on automating a set of system tests to include in a regression suite. Prior to this project, all the tests were done manually. No new tests would be created as the functionality of the system would not be changed.
The first iteration went fairly well: from our burndown, it appeared that not much was accomplished in the first six or seven days of the ten-day iteration. The last few days were extremely productive and the main goal of the iteration was accomplished. Some progress was made on automating tests, but that work was not finished, so manual tests were used to confirm the operation of the upgraded systems. During this first iteration I acted as the team’s coach by facilitating the daily status meetings, maintaining the burndown charts, and trying to resolve impediments to the team’s progress. At this point, impediments were mostly related to the physical and technical environment: lack of phones, whiteboards, system access, etc.
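For readers unfamiliar with burndown charts, the tracking described above amounts to plotting remaining estimated work per day against an ideal straight-line trend. Here is a minimal sketch in Python; the hour figures are hypothetical, chosen only to illustrate the “slow start, productive finish” shape the team saw, not the project’s actual data.

```python
def burndown(remaining_by_day):
    """Pair each day's remaining work with the ideal linear trend.

    remaining_by_day: remaining estimated hours at the start of each day,
    index 0 being the iteration start. Returns (actual, ideal) pairs.
    """
    total = remaining_by_day[0]
    days = len(remaining_by_day) - 1
    ideal = [total - total * d / days for d in range(days + 1)]
    return list(zip(remaining_by_day, ideal))

# Hypothetical ten-day iteration: little visible progress early,
# then a very productive final stretch.
actual = [80, 78, 76, 75, 74, 72, 70, 55, 30, 10, 0]
for day, (remaining, ideal) in enumerate(burndown(actual)):
    print(f"day {day:2d}: remaining={remaining:3d}  ideal={ideal:5.1f}")
```

Days where the actual line sits well above the ideal line (days 1 through 7 here) are exactly the stretch where “it appeared that not much was accomplished”, which is why the chart is useful as an information radiator: the gap is visible to everyone daily.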
The team had a half-day iteration reflection meeting that focused on questions about estimation and tracking of task status. The customer representative accepted the work that had been completed although an actual demonstration was not performed.
The second iteration was planned in the same way as the first. A few days in, the second challenge was encountered: the existing manual testing revealed that the new version of the software had modified some interfaces with other systems. As a result, the software did not work. The team temporarily halted its work in order to examine the problem and try to determine a solution. The team quickly got in touch with the vendor of the software, and a simple fix was planned that would be deployed near the end of the second iteration. Some re-organization of the iteration’s user stories and tasks was done in order to accommodate this problem. This was done in collaboration with the customer representative.
At this point it should be noted that this was a first for the organization: early testing revealed a problem that normally would be discovered at the very end of the project rather than at the beginning. This allowed the team to re-organize their work to remain productive, so there was no change to the schedule. This is one of the main benefits of combining iterative delivery with test-driven work: the early discovery of problems and the ability to work around them without substantial schedule or scope impact.
The second iteration concluded successfully. The third and fourth iterations were relatively uneventful, and the team deployed the upgraded systems more than two months early. No new software was created. The key practices used either fully or partially included: self-steering team, iterative delivery, adaptive planning, test-driven work, and maximized communication (use of both a co-located team and information radiators). The question of appropriate metrics did not come up, except insofar as the team was substantially exempted from the corporate procedural compliance measurements and was operating in an environment where time-to-market was considered the most important factor.