
Innovation Metrics: Input From Intel, Sara Lee, Grundfos and J&J

by Stefan Lindegaard
January 22, 2010

I have often shunned the idea of metrics for innovation because it has been very difficult to find companies that are good at this.

However, I believe it is important to work this out in order to raise innovation productivity, and in this post I share input from several large corporations based on a discussion on LinkedIn last year.

The discussion was started by Jimm Feldborg, who is R&D Manager at Grundfos in China. Jimm pointed out that a good start is to understand whether your indicator is a:

• Lag indicator. The results lag by weeks, months or years and cannot be changed. Examples include rewards and the number of patents.

• Current indicator. The results happen right now, giving you the possibility to act and thus affect future results. Examples include the number of ideas generated and ongoing projects.

• Lead indicator. The results are predictive of the future. You can make radical changes in your approach and thus affect the results. Good examples are hard to give here.

Jeff Murphy, an Executive Director at Johnson & Johnson, suggests that innovation metrics (and metrics for any deployment like this) need to be dynamic by design. He continues:

1) Initially, metrics should focus on engagement, training and participation of individuals.

2) Then, as you begin to build a critical mass of capable individuals, the focus of your metrics shifts to your innovation pipeline (active projects by stage, flow of projects through concept, development, launch or kill…) and early wins. This is in addition to item 1 metrics above.

3) Finally, as your organization’s initiative begins to mature, your focus shifts to the end goals – return on investment, successful new products or services launched, revenue from new launches, etc. as well as optimizing your development and commercialization process. This is in addition to the item 2 metrics.

If an organization gets ahead of itself in the metrics area, it can create unrealistic expectations during the early stages. On the other hand, if it falls behind on implementing the appropriate metrics and delays getting to #3, the result is underperformance and activity without business results.

Jeff emphasizes that the key is to match your selected metrics with your deployment life cycle. He also notes that there are literally hundreds of metrics to choose from, but 8-15 metrics at any one time will usually be enough for senior management and/or the effective management of the innovation deployment. If more granularity is needed, it should be handled at the functional level.

Lessons from Intel

Personally, I keep coming back to a visit to Intel a few years ago. It was interesting to get an inside view of this Silicon Valley giant, but it was also strange to sense how it was driven by control rather than creativity. I did miss a more creative atmosphere, but I take my hat off to their ability to measure their innovation initiatives, in which they track the following information:

• Number of innovation-related rewards and recognitions

• Numbers from various feedback mechanisms, showing employee acceptance and understanding of the initiative

• Results from the innovation self-assessment capability maturity framework (a survey measuring five levels of maturity related to innovation behavior)

• Percentage of the budget dedicated to innovation, research, and exploration of emerging technologies

• Shareholder value created from innovation activities (Shareholder Value = IT Efficiencies + Business Value provided to the IT customers)

• Number of ideas generated in specific innovation harvesting campaigns

• Number of ideas harvested from the campaigns and turned into implementable projects

• Number of invention disclosure filings (IDFs)

• Number of Intel patent submissions

• Number of white papers published

You can read further in the paper, Developing Systemic Innovation in an IT organization, which provides an overview of how Intel works to foster and encourage innovation through its IT organization.

Paul Chaudury, VP of Innovation at Sara Lee, mentions a KPI creation and reporting process that he followed successfully in a prior job:

1. Net sales from new products. Reported monthly.
% of net sales from new products at year 1 and the next three years

2. Projected value of the pipeline. Reported quarterly.
Risk adjusted year 2 sales value of all ideas and projects in pipeline.
Different probability rates for each phase in stage gate.

3. Average time to market from concept to launch date. Reported monthly.
# of weeks from exit of concept phase to launch date

4. Aggregate portfolio net present value. Reported quarterly.
Risk adjusted NPV of cash flow of all projects in pipeline from feasibility to
launch phase in stage gate process.

Paul mentions that most of the calculations were automated and that results were reported to senior management. The KPIs were part of the key annual objectives and performance reviews for everyone involved with innovation. In the year after implementation, the process was continuously improved to fit the company's needs, and it became part of the company culture.
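The risk-adjusted calculations behind KPIs 2 and 4 above can be sketched in a few lines of code. This is a minimal illustration only: the phase names, probability rates and discount rate are hypothetical, since the post does not give the actual figures used.

```python
# Sketch of KPI 2 (risk-adjusted pipeline value) and KPI 4 (aggregate
# portfolio NPV). All probabilities and rates are hypothetical examples.

# Probability that a project in a given stage-gate phase reaches launch
PHASE_PROBABILITY = {"concept": 0.2, "feasibility": 0.4,
                     "development": 0.7, "launch": 1.0}
DISCOUNT_RATE = 0.10  # hypothetical annual discount rate for the NPV


def pipeline_value(projects):
    """KPI 2: risk-adjusted year-2 sales value of all ideas and projects."""
    return sum(p["year2_sales"] * PHASE_PROBABILITY[p["phase"]]
               for p in projects)


def portfolio_npv(projects):
    """KPI 4: risk-adjusted NPV of projected cash flows, feasibility to launch."""
    total = 0.0
    for p in projects:
        if p["phase"] == "concept":
            continue  # KPI 4 only counts feasibility through launch
        prob = PHASE_PROBABILITY[p["phase"]]
        # Discount each year's projected cash flow back to today
        npv = sum(cf / (1 + DISCOUNT_RATE) ** (year + 1)
                  for year, cf in enumerate(p["cash_flows"]))
        total += prob * npv
    return total


# Two illustrative projects (sales and cash flows in, say, $M)
projects = [
    {"phase": "concept", "year2_sales": 5.0, "cash_flows": [1.0, 2.0]},
    {"phase": "development", "year2_sales": 3.0, "cash_flows": [0.5, 1.5, 2.0]},
]
print(round(pipeline_value(projects), 2))  # 5.0*0.2 + 3.0*0.7 = 3.1
print(round(portfolio_npv(projects), 2))
```

The point of the risk adjustment is that an early-stage idea with large projected sales contributes far less to the reported pipeline value than a late-stage project with modest projections, which keeps the headline number honest.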

I also touched on the topic in a recent blog post, Increasing The Innovation Productivity, in which you can also find a few metrics used by P&G.

Francois Couture also started a new discussion related to this in my Leadership+Innovation group at LinkedIn.

Your input is highly appreciated.

Currently there are 5 comments on this article:

  1. Stefan:

    In my experience, J&J has it right. The first focus on metrics needs to be about building skills, excitement and activities around innovation. The second focus as the innovation program matures should be on activities and the pipeline. Finally, as ideas come to fruition as new products and services, the metrics need to be about return, share or differentiation.

    The challenge is that most firms are comfortable measuring only ROI (the quantifiable) and are impatient to get there. They start measuring quantifiable things when there is really not much to measure yet and ignore the qualitative concepts. Then they are upset when there is no measurable output.

    It takes time – often years – for a good idea to be realized as a new product or service. If the innovation program is new, ROI may not be realized for a year or more in many industries, so measure the excitement, activities and engagement until the ideas can develop.

  2. Thanks for sharing more!!!

  3. Ravi Rao says:

    I really like the idea of dynamic metrics. What a great concept! Thanks for sharing.

  4. Hello,

    thanks for sharing your insights.

    I don't have a lot of experience with metrics; I just believe they are extremely important, if not fundamental, to the growth of any organization. I've recently been studying the subject a bit and came across quite an interesting book I'd like to share with you. It proposes a dynamic approach to performance measurement, referred to as the "socialization of measurements". This is claimed to be a breakthrough in the field.

    Here are the details:

    Transforming Performance Measurement: Rethinking the Way We Measure and Drive Organizational Success.

    By Dean R. Spitzer


    Floriano Bonfigli.

  5. […] revenue from products introduced within the past 3 years). Stefan Lindegaard has also written a very good post on this subject, with examples from Johnson & Johnson and […]