The Introduction of Innovation Accounting – Part 2

January 27, 2012

Innovation Accounting at Scurri.com – Part 2

In my last post in this two-post series I introduced the concept of Innovation Accounting, which our team has implemented at Scurri.com. I outlined the background to its introduction and the use of the MVP (Minimum Viable Product) to establish a baseline, so that you can tell whether you are making progress toward your stated ideal.

(If you haven’t read the first part of the article I would suggest doing so, so that the next bit makes sense.)

To recap, there were specific steps we focused on to implement Innovation Accounting, and we put processes in place to achieve three learning milestones:

  1. Building a Minimum Viable Product (MVP) to establish real data on the assumption you need to test. This is your baseline.
  2. Using experiments to tune your “engine” from the baseline toward the stated ideal.
  3. Reviewing progress toward the stated ideal and deciding to either pivot (change direction) or persevere.

In part one we looked at step 1, establishing a baseline; now I will outline the remaining two steps required for Innovation Accounting.

Tuning the Engine

So once we know the gap we need to close, the team (and relevant advisors) brainstorm the various initiatives that we think can move the needle and bring us from the established baseline to the ideal state. Each of these is examined and the hypothesis is carefully checked to ensure that we can measure the impact of the initiative. This is the case whether it’s a product development, marketing or any other type of initiative. If we can’t measure the impact of our change, we eliminate the experiment, because we cannot validate whether it makes any impact. We also double-check that it seems logical that the action will improve the key driver of growth we are focusing on. This step is important, as it can be easy to come up with tactical initiatives that seem logical in isolation but actually work against the overall growth strategy. An extreme example to illustrate the point is trying to build an annual subscription service for a one-time event like a wedding: the repeat-purchase model is completely different to the one-time purchase or event.

Once we are certain that we can measure, and the hypothesis generally seems logical and the right thing to do, we build or do something that will test it. When designing experiments we use the MVP principle: what is the minimal amount of resources we need to use to validate or disprove the assumption? Once we work that out, we put the task into our task bucket for our next prioritisation meeting and it goes onto the Kanban board to be implemented (more about Kanban another time).
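To make the measurable-hypothesis check concrete, here is a minimal sketch (in Python) of how an experiment record could be captured before it goes into the task bucket. The class, field names and example values are illustrative assumptions for this post, not our actual tooling:

    from dataclasses import dataclass

    @dataclass
    class Experiment:
        """One tune-the-engine experiment: a hypothesis tied to a measurable metric."""
        hypothesis: str    # what we believe will move the needle
        metric: str        # the key driver of growth it should affect
        baseline: float    # value established by the MVP
        target: float      # the stated ideal we are closing the gap toward
        minimal_test: str  # the cheapest way to validate or disprove it

        def is_testable(self) -> bool:
            # If we cannot name a metric, we cannot measure impact, so we drop it.
            return bool(self.metric) and self.target != self.baseline

    # Hypothetical example: a sign-up flow experiment
    exp = Experiment(
        hypothesis="Shortening the sign-up form increases activation",
        metric="activation rate",
        baseline=0.12,
        target=0.20,
        minimal_test="Remove two optional fields for half of visitors",
    )
    assert exp.is_testable()

Writing the experiment down in this shape forces the measurability question to be answered up front, before any build work starts.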

Reviewing Progress

Of course, if the experiment works we will see the key metric we are watching rise from the baseline. If not, we test the next thing, and so on. Poor quantitative results force us to declare failure and drive more qualitative research: getting out and interviewing customers, finding out what is wrong with our execution, and working through the Build-Measure-Learn loop again.
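With only a few hundred visitors a day it is easy to mistake noise for a rise from baseline, so a simple significance check helps before declaring an experiment a success. This is a sketch using invented visitor and conversion numbers, not a description of our actual analysis:

    import math

    def z_test_two_proportions(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
        """Two-proportion z-test: did the experiment's conversion rate
        really rise above the baseline, or is the lift just noise?"""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        pooled = (conv_a + conv_b) / (n_a + n_b)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        return (p_b - p_a) / se

    # Hypothetical week of data at roughly 200 unique visitors a day:
    # baseline cohort: 24 conversions from 700 visitors
    # experiment cohort: 42 conversions from 700 visitors
    z = z_test_two_proportions(24, 700, 42, 700)
    print(f"z = {z:.2f}")  # z above ~1.96 suggests a real lift at the 95% level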

You don’t need a lot of users to test these hypotheses; we worked on a stream of circa 200 unique visitors a day. It’s important that the metrics you choose for your Innovation Accounting adhere to Eric Ries’s “Three A’s” rule (see the sketch after the list below):

The metrics must be:

  • Actionable – demonstrate clear cause and effect
  • Accessible – easily understood and easy for everyone to obtain
  • Auditable – not too complex, and verifiably accurate
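As an illustration of “actionable”, here is a sketch that reports conversion per weekly cohort instead of a gross running total. The figures and the cohort split are made-up examples, not our actual reporting:

    # Cohort conversion rates make cause and effect visible: each week's
    # visitors are judged against what was live for them, so a product
    # change shows up in the cohort it actually affected.
    weekly_cohorts = {
        "week 1 (baseline)":   {"visitors": 1400, "signups": 48},
        "week 2 (experiment)": {"visitors": 1400, "signups": 84},
    }

    for label, cohort in weekly_cohorts.items():
        rate = cohort["signups"] / cohort["visitors"]
        print(f"{label}: {rate:.1%} conversion")

    # A gross total ("132 sign-ups so far!") would hide whether the
    # change made in week 2 altered behaviour at all.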

Of course, if none of the activities are making any difference, this leads to a possible pivot. The pivot is a change in direction based on the previous learning; it’s best to do it quickly and to make bold changes, as the opportunity for learning is greater. Iterating fast maximises the learning you complete and increases your odds of working out the business model before your cash runs out.

Our lessons learned:

  • Putting in clearly defined processes, and training everyone on the team in why and how it works, makes the execution of Innovation Accounting manageable and workable. It’s advisable to allocate the implementation of the process to one individual to drive it.
  • Using a tool like the strategy canvas makes it easier to understand and visualise the important elements of your hypothesis; checking that the individual elements make sense and work together logically makes it easier to prioritise the experiments you need to run.
  • It can be tempting to test the latest “good idea” without checking its relevance to the overriding assumption. It’s really worth running it through the strategy canvas with a group to check whether the idea is really that relevant. This process can eliminate a lot of waste and unnecessary work.

  • http://smartsoftwaremarketing.co.uk/ Giles Farrow

    Rory,

    Thank you for sharing your experiences. It is fantastically useful to see Eric’s theories being put into practice, especially on this side of the Atlantic.

    Congratulations

    • http://www.scurri.com Rory

      Giles,

      Thanks. We were lucky enough to have Eric coach us on an accelerator programme; he really made a big impact on us and influenced us a great deal.

      Rory
