Regardless of the agile methodology you are using to drive your software development efforts, you should have an explicit definition of done. I say explicit, because you will always have one – whether you define it or not. Even if you are following a Waterfall-based process, the level of quality you demand (or allow) your software to reach before you ship it is your definition of done.
Explicitly defining what Done means to your organization helps communication and collaboration, and allows you to set the bar for quality, as well as drive your process improvement efforts.
In this post I will provide some guidance on how to use your Definition of Done to drive collaboration efforts between your developers and the operations engineers, in an organization that is trying to adopt a DevOps mindset.
Microsoft’s very own Donovan Brown gave us what I view as the best definition of DevOps. Even if you’ve heard it before, I believe it bears repeating, because it drives the point of this post:
DevOps is the union of people, processes, and product to enable continuous delivery of value to the users
An organization trying to adopt a DevOps mentality should have every single member’s job be defined by that statement.
The Definition of Done
When a Product Backlog item or an Increment is described as "Done", everyone must understand what "Done" means. Although this may vary significantly per Scrum Team, members must have a shared understanding of what it means for work to be complete, to ensure transparency.
The emphasis on a shared understanding is my own.
What most development teams end up doing is coming up with a laundry list of quality demands like the following:
- Code complete
- Unit test code coverage is <insert some number here>% or higher
- Automated build
- QA Tested
Yours may have a few others, or be missing some, or have minor variations on the same theme. The result is often disjointed: some items may be nice, lofty aspirations, but not achievable by the organization with the resources and knowledge it currently has.
Done for DevOps
What I prefer to do is use directed questions to drive the team’s Definition of Done. These questions are, of course, asked with the aforementioned DevOps definition in mind.
The following are some examples of shared questions that drive this point and (hopefully) start driving the change in mentality required for success as a DevOps organization.
Start a conversation with the delivery team (developers, testers, ops, the business unit – anyone responsible for the delivery of the product) during your next retrospective, post-mortem, or whenever you discuss process improvement options, and ask one or more of these questions.
What must we do to ensure continuous delivery of these features?
The answers to this question are as much the responsibility of the developers as of the testers and the operations engineers. Deploying in small batches of changes, architecting the solution in a way that makes (automated) deployment easier, using development practices such as feature flags and test automation, setting up infrastructure as code, and using deployment patterns such as blue/green deployment all increase the ease and likelihood of successful deployment.
Note that some of the aforementioned practices are in the hands of the developers, some in the hands of testers, and others in the hands of operations.
Collaboration is key.
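To make one of these developer-side practices concrete, here is a minimal sketch of a feature flag: new code is deployed but stays dark until a flag is switched on, which is what lets you deploy in small batches safely. All names here (the flags, the checkout functions) are hypothetical, for illustration only; real systems typically use a flag service or configuration store rather than an in-code dictionary.

```python
# Toy feature-flag store. Unknown flags default to "off" so
# unfinished code paths stay dark in production.
FLAGS = {
    "new-checkout-flow": False,   # deployed, but not yet released
    "search-autocomplete": True,  # released to everyone
}

def is_enabled(flag_name: str) -> bool:
    """Return True only if the flag exists and is switched on."""
    return FLAGS.get(flag_name, False)

def legacy_checkout(cart):
    # Current, proven implementation.
    return {"flow": "legacy", "total": sum(cart)}

def new_checkout(cart):
    # New implementation, shipped behind the flag.
    return {"flow": "new", "total": sum(cart)}

def checkout(cart):
    # The flag, not a redeployment, decides which path runs.
    if is_enabled("new-checkout-flow"):
        return new_checkout(cart)
    return legacy_checkout(cart)
```

Flipping `new-checkout-flow` to `True` releases the feature without a deployment, and flipping it back is your rollback.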
What must we do to ensure that features delivered are valuable?
Let’s face it – ask 10 development teams why they are building their current feature, and at least 9 of them will answer “because it’s in the backlog”, “because my manager said so”, or “huh?”
The fact that our stakeholders, sponsors, product owner, project manager, team leader, or CEO requested or even demanded a feature may be a good enough reason to do something, but they are not all-knowing. We don’t know that the feature will in fact be valuable to our business.
For an internal app, we will probably want to know if and how the new change affects time to complete a transaction or error rates. What measures could be put in place to prove that?
For commercial apps, we will probably want to know if and how the new change affects conversions, sales, consumption, etc.
For public sector apps, we might want to know how the new change affects consumption, speed of use, efficiency, backend costs, etc.
The business unit should provide these metrics. Developers must build the features with hooks that enable measuring them. Testers must verify that the metrics produce the expected data, and operations must monitor them.
What must we do to ensure that features are working properly?
Yes, of course you need to test your system, ideally with automated tests, topped off by some manual exploratory tests by your testers. Unfortunately, that is not enough.
You want to be able to know that your system is working in production. You want to be able to identify problems before they affect your users.
Put the right measures and performance counters in place in production, monitoring not only disk space, memory consumption, and other hardware concerns, but also how your business scenarios are performing.
Make sure your definition of done asks questions such as “What do I need to measure in order to identify problems before our users are affected?”
Defects will escape your delivery team’s quality processes. That’s a fact. Make sure they do not repeat themselves by asking yourself something like “What can I monitor to guarantee that this problem doesn’t show up again?”
Be sure not only to ask these questions; make sure you also answer them, and implement the hooks that will allow you to monitor them.
These are but a few conversation starters. Asking them is important. Answering them is crucial. Following up with an implementation will guarantee that your definition of done is in line with your business goals and your organization’s endeavor to adopt a DevOps mindset.
Keep Calm and DevOps!