Outcomes over outputs
A short summary of “Outcomes over Output”
The concept of focusing on outcomes over outputs caught my attention early in my career. It’s such an obvious and seemingly easy concept, yet it’s still hard to implement properly, and at times it produces challenging situations when working with people who are used to a more “command and control” style of environment.
The book
The book I summarize here today is a short one titled “Outcomes over Output”. It starts by stating the obvious: delivering many “things” in terms of features (our outputs) does not necessarily deliver any value for the business in the form of outcomes or impact. It then aims to answer the question of how focusing on outcomes can get us there instead.
The book defines outcomes as “the human behaviors that drive business results .. happen when you deliver the right features” (and ideally by delivering as few features as possible).
One interesting point is how to navigate the different levels properly. The book points out that there’s a natural chain of causality between Resources, Activities, Outputs, Outcomes, and Impact. Managing by outputs is dangerous because there’s no guarantee of any outcomes. Managing by impact alone, however, is a common misconception and tends not to work either, because impact goals are usually too high-level to be actionable (think of “we need to make more revenue”). Outcomes are the sweet spot: a focus on changing customer behavior in a way that drives a concrete business result.
One key problem is that it’s often not entirely clear whether a certain activity actually leads to the behavioral change (outcome) we want. That’s where experiments come into play: they let us learn quickly whether a given activity works as intended.
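To make that experiment loop concrete, here is a minimal sketch (my own illustration, not from the book) of how a team might evaluate such an experiment: expose the new feature to a variant group, keep a control group, and compare how often the target behavior occurs in each. All names and numbers are made up.

```python
from math import sqrt

def behavior_change_significant(control_hits, control_size,
                                variant_hits, variant_size,
                                z_threshold=1.96):
    """Two-proportion z-test (normal approximation): did the variant
    group show the target behavior more often than the control group?"""
    p_control = control_hits / control_size
    p_variant = variant_hits / variant_size
    # Pooled proportion under the null hypothesis of "no behavior change".
    p_pooled = (control_hits + variant_hits) / (control_size + variant_size)
    se = sqrt(p_pooled * (1 - p_pooled) * (1 / control_size + 1 / variant_size))
    z = (p_variant - p_control) / se
    return p_control, p_variant, abs(z) > z_threshold

# Hypothetical experiment: does the new onboarding checklist (the activity)
# lead more trial users to invite a teammate (the behavior, i.e. the outcome)?
p_old, p_new, significant = behavior_change_significant(
    control_hits=120, control_size=2000,   # 6.0% invited a teammate
    variant_hits=168, variant_size=2000,   # 8.4% invited a teammate
)
print(f"control={p_old:.1%}, variant={p_new:.1%}, significant={significant}")
```

The point is not the statistics but the framing: the experiment measures a behavior, not the fact that a feature was shipped.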
Finding the right outcomes
The key starting question is: “What are the customer behaviors that drive business results?” So we are looking for things that people do, which have the nice property of being observable and measurable. And we are looking for leading indicators on the way to our intended business result. Normally, there will be a lot of uncertainty in the outcomes we come up with. That’s why we should call them hypotheses and phrase them in the form of a) what we believe, and b) the evidence we’re seeking to know whether we’re right or wrong.
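As a small illustration of that phrasing (again my own sketch; the field names are not from the book), an outcome hypothesis could be captured in a structured form like this:

```python
from dataclasses import dataclass

@dataclass
class OutcomeHypothesis:
    """Captures a) what we believe and b) the evidence we're seeking."""
    belief: str             # the behavior change we expect, and why
    evidence: str           # the observable, measurable signal we'll watch
    leading_indicator: str  # metric that should move before the business result does

# Hypothetical example for an e-commerce checkout change.
hypothesis = OutcomeHypothesis(
    belief="If we let users save their cart, more of them will return "
           "within a week and complete the purchase.",
    evidence="Share of carts saved, and 7-day return-to-purchase rate "
             "of users who saved a cart vs. those who didn't.",
    leading_indicator="7-day return-to-purchase rate",
)
print(hypothesis.belief)
```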
Joshua Seiden puts forward three “magic questions” to ask when creating outcomes:
“What are the user and customer behaviors that drive business results? (This is the outcome we want to create.)
How can we get people to do more of these behaviors? (These are the features, policy changes, promotions, etc. that we’ll do to try to create the outcomes.)
How do we know that we’re right? (This uncovers the dynamics of the system, as well as the tests and metrics we’ll use to measure our progress.)”
A word on technology initiatives
The author points out something I can confirm from my own experience: internal tech initiatives are often very bad at defining outcomes. Refactorings and similar efforts are often measured by how many subsystems have already been touched or completed. It would be much better to measure them in terms of behavior, e.g. how many users are on the old vs. the new system, or how many developers now have access to the new capabilities.
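To illustrate the difference (a hypothetical sketch, not from the book): instead of counting migrated subsystems, a refactoring initiative could report the share of users already served by the new system. The event fields below are made up.

```python
from collections import Counter

# Hypothetical usage events, e.g. pulled from request logs or analytics.
events = [
    {"user": "alice", "system": "new"},
    {"user": "bob",   "system": "old"},
    {"user": "carol", "system": "new"},
    {"user": "alice", "system": "new"},
]

def share_of_users_on_new_system(events):
    """Outcome-style metric: how many distinct users already behave
    differently (are served by the new system), as opposed to an
    output-style metric like 'number of subsystems migrated'."""
    latest_system = {e["user"]: e["system"] for e in events}  # last event per user wins
    counts = Counter(latest_system.values())
    total = sum(counts.values())
    return counts["new"] / total if total else 0.0

print(f"{share_of_users_on_new_system(events):.0%} of users are on the new system")
```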
Output-based planning is the root of the problem
I already covered the idea of better planning and roadmap creation in my article on roadmaps (link). The book puts the problem nicely:
“Roadmaps fail when they present a picture of the future that is at odds with what we know about the future. If we were setting out to cross an uncharted desert - one we cannot see from the air, and that was of unknown size - it would be crazy to predict that we could cross it in a few hours. [..] you’d be reckless to make a prediction. Instead, you’d probably present your journey (if you chose to make it at all!) as an exploration.”
The parallel to product development is that there are often many unknowns and uncertainties involved there as well. So instead of outputs, we want to plan for themes of work, problems to solve, outcomes to deliver, or a certain customer story.
A big challenge to overcome here is stakeholder expectations. Many stakeholders want a fixed date and a fixed feature set, and it might feel frustrating that the best answer we can give is “we stop working on something when we’ve made enough progress to feel satisfied.” One way to deal with this is to establish clear hypotheses and measures of success upfront with stakeholders, and to regularly review performance against them. But it definitely also requires a cultural change towards trust, and a culture that accepts failure and learning and is ready to talk about it. The way to get skeptical stakeholders on board is to show what’s in it for them: actual impact on the business, versus ticking off a list of outputs.