Innovation Accounting at Scurri.com – Part 2
In my last post in this two-post series I introduced the concept of Innovation Accounting, which our team has implemented at Scurri.com. I outlined the background to its introduction and the use of the MVP (Minimum Viable Product) to establish a baseline, so that you can understand whether you are making progress toward your stated ideal.
(If you haven’t read the first part of this article, I would suggest doing so, so that the next bit makes sense.)
To recap, we focused on specific steps to implement Innovation Accounting, and we put processes in place to achieve three learning milestones:
- Building a Minimum Viable Product (MVP) to establish real data on the assumption that needs to be tested. This is your baseline.
- Using experiments to tune your “engine” from the baseline to the stated ideal.
- Reviewing progress toward the stated ideal and deciding to either pivot (change direction) or persevere.
In part one we looked at step one, establishing a baseline. Now I will outline the remaining two steps required for Innovation Accounting.
Tuning the Engine
So once we know the gap we need to close, the team (and relevant advisors) brainstorm the various initiatives that we think can move the needle and bring us from the established baseline to the ideal state. Each of these is examined, and the hypothesis is carefully checked to ensure that we can measure the impact of the initiative. This is the case whether it is a product-development, marketing or any other type of initiative. If we can’t measure the impact of our change, we eliminate the experiment, because we cannot validate whether it makes any impact. We also double-check that it seems logical that the action will improve the key driver of growth we are focusing on. This step is important, as it can be easy to come up with tactical initiatives that seem logical in isolation but actually work against the overall growth strategy. An extreme example to illustrate the point is trying to build an annual subscription service for a one-time event like a wedding: the repeat-purchase model is completely different to the one-time purchase or event.
Once we are certain that we can measure, and that the hypothesis generally seems logical and the right thing to pursue, we build or do something that will test it. When designing experiments we use the MVP principle: what is the minimum amount of resources we need to use to validate or disprove the assumption? Once we work that out, we put the task into our task bucket for our next prioritisation meeting, and it goes onto the Kanban board to be implemented. (More about Kanban another time.)
Of course, if the experiment works, we will see the key metric we are watching rise from the baseline. If not, we test the next thing, and so on. Poor quantitative results force us to declare failure and drive more qualitative research: getting out to interview customers, finding out what is wrong with our execution, and working through the build-measure-learn loop again.
You don’t need a lot of users to test these hypotheses; we worked on a stream of circa 200 unique visitors a day. It’s important that the metrics you choose for your Innovation Accounting adhere to Eric Ries’s “Three A’s” rule:
The metrics must be:
- Actionable – demonstrate clear cause and effect
- Accessible – easily understood and easy for everyone to obtain
- Auditable – not overly complex and, importantly, accurate
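With traffic in the order of a couple of hundred unique visitors a day, it is worth checking whether a metric's movement away from the baseline is real or just noise before declaring an experiment a success. As a minimal sketch (the function name and the numbers are purely illustrative, not our actual tooling), a two-proportion z-test comparing an experiment's conversion rate against the baseline might look like this:

```python
import math

def conversion_lift(base_conv, base_visitors, exp_conv, exp_visitors):
    """Compare an experiment's conversion rate against the baseline
    using a two-proportion z-test (normal approximation)."""
    p_base = base_conv / base_visitors
    p_exp = exp_conv / exp_visitors
    # Pooled conversion rate under the null hypothesis of "no lift"
    pooled = (base_conv + exp_conv) / (base_visitors + exp_visitors)
    se = math.sqrt(pooled * (1 - pooled)
                   * (1 / base_visitors + 1 / exp_visitors))
    z = (p_exp - p_base) / se
    return p_base, p_exp, z

# e.g. a week of baseline traffic vs a week of experiment traffic,
# each at roughly 200 unique visitors a day (illustrative figures)
base_rate, exp_rate, z = conversion_lift(42, 1400, 70, 1400)
print(f"baseline {base_rate:.1%} -> experiment {exp_rate:.1%}, z = {z:.2f}")
# As a rule of thumb, |z| > 1.96 suggests the movement is signal, not noise.
```

A check like this keeps the metric auditable: anyone on the team can rerun the numbers and see whether the needle genuinely moved.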
Of course, if none of the activities make any difference, this points to a possible pivot. A pivot is a change in direction based on the previous learning; it’s best to do it quickly and to make bold changes, as the opportunity for learning is greater. Iterating fast means you get the maximum learning completed and increases your odds of working out the business model before your cash runs out.
Our lessons learned:
- Putting clearly defined processes in place, and training everyone on the team in why and how they work, makes the execution of Innovation Accounting manageable and workable. It’s advisable to allocate the implementation of the process to one individual to drive it.
- Using a tool like the strategy canvas makes it easier to understand and visualise the important elements of your hypothesis. Checking whether the individual elements make sense and work together logically makes it easier to prioritise the experiments you need to run.
- It can be tempting to test the latest “good idea” without checking its relevance to the overriding assumption. It’s really worth running it through the strategy canvas with a group to check whether the idea is genuinely relevant. This process can eliminate a lot of waste and unnecessary work.