One of the common metaphors used to describe innovation is that of a journey – a complex, fitful travel through uncertain territory involving false starts, wrong directions, blind alleys, and unexpected problems. Successful innovation implies the completion of this risky adventure and – through widespread adoption and diffusion of the new idea as a product, service, or process – a happy ending with valuable returns on the original investment. But it also provides an opportunity to reflect on the journey and to take stock of the knowledge acquired through an often difficult experience. It’s worth doing this because the knowledge gained through such reflection can provide a powerful resource to help with the next innovation journey.
Not all innovation is, of course, successful – but the opportunities for learning from failure are also considerable. Understanding what doesn’t work on a technological level, or recognizing the difficulties in a particular marketplace that led to nonadoption, is useful information to take stock of and use when planning the next expedition. Experience is an excellent teacher – but its lessons will only be of value if there is a systematic and committed attempt to learn them.
This chapter reviews the ways in which learning can be captured from the innovation experience.
It will be useful to briefly take stock of the key themes we have been covering in the book. We can summarize these as follows:
We have also argued that innovation management is not a matter of doing one or two things well, but about good all-round performance. There is no single, simple magic bullet but a set of learned behaviors. In particular, we have identified four clusters of behavior, which we feel represent particularly important routines. Successful innovation:
In the strategy domain, there are no simple recipes for success but a capacity to learn from experience and analysis is essential. Research and experience point to three essential ingredients in innovation strategy:
Within the area of linkages, developing close and rich interaction with markets, with suppliers of technology and other organizational players, is of critical importance. Linkages offer opportunities for learning – from tough customers and lead users, from competitors, from strategic alliances, and from alternative perspectives. The theme of “open innovation” is increasingly becoming recognized as relevant to an era in which networking and open collective innovation are the dominant mode.
In order to succeed, organizations also need effective implementation mechanisms to move innovations from idea or opportunity through to reality. This process involves systematic problem-solving and works best within a clear decision-making framework, which should help the organization to stop projects as well as to progress development if things are going wrong. It also requires skills in project management and control under uncertainty and parallel development of both the market and the technology streams. And it needs to pay attention to managing the change process itself, including anticipating and addressing the concerns of those who might be affected by the change.
Finally, innovation depends on having a supporting organizational context in which creative ideas can emerge and be effectively deployed. Building and maintaining such organizational conditions are a critical part of innovation management and involve working with structures, work organization arrangements, training and development, reward and recognition systems, and communication arrangements. Above all, the requirement is to create the conditions within which a learning organization can begin to operate, with shared problem identification and solving and with the ability to capture and accumulate learning about technology and about management of the innovation process.
Throughout the book, we have tried to consider the implications of managing innovation as a generic process but also to look at the ways in which approaches need to take into account two key challenges in the twenty-first century – those of managing “beyond the steady state” and “beyond boundaries.” The same basic recipe still applies, but there is a need to configure established approaches and to learn to develop new approaches to deal with these challenges.
To build dynamic capability, we need to focus on two dimensions of learning.
First, there is the acquisition of new knowledge to add to the stock of knowledge resources that the organization possesses. These can be technological or market knowledge, understanding of regulatory and competitive contexts, and so on. As we’ve seen throughout the book, innovation represents a key strategy for developing and sustaining competitiveness in what are increasingly “knowledge economies” – but being able to deploy this strategy depends on continuing accumulation, assimilation, and deployment of new knowledge. Firms that exhibit competitive advantage – the ability to win and to do so continuously – demonstrate “timely responsiveness and rapid product innovation, coupled with the management capability to effectively co-ordinate and redeploy internal and external competencies” [1].
And second, there is knowledge about the innovation process itself – the ways in which it can be organized and managed, the bundle of routines that enable us to plan and execute the innovation journey. Figure 15.1 reminds us of the model we have been using as an explanatory framework, and “innovation capability” refers to our ability to create and operate such a framework in our organizations.
But in a constantly changing environment, that capability may not be enough – faced with moving targets along several dimensions (markets, technologies, sources of competition, regulatory rules of the game), we have to be able to adapt and change our framework. This process of constant modification and development of our innovation capability – adding new elements, reinforcing existing ones, and sometimes letting go of older and no longer appropriate ones – is the essence of what is called “dynamic capability” [1].
The lack of such capability can explain many failures, even among large and well-established organizations. For example, the problem of:
The costs of not managing learning – of lacking dynamic capability – can be high. At the least, it implies a blunting of competitive edge, a slipping against previously strong performance. In some cases, the fall accelerates and eventually leads to terminal decline – as the fate of companies such as Digital, Polaroid, or Swissair, once feted for their innovative prowess, indicates. In others – such as IBM – there is a complete rethink and reinvention of the business, radically changing the operating routines and allowing new models to emerge. For others – such as Nokia – the process of reinvention continues, having moved from being a sprawling conglomerate linked to timber and paper to being dominant in mobile phone handsets to now playing a key role in providing the network infrastructure for the digital world.
So we need to look hard at the ways in which organizations can learn – and how they do so in conscious and strategic fashion. In other words, how do they learn to learn? This is why routines play such an important role in managing innovation – they represent the firm-specific patterns of behaviors that enable a firm to solve particular problems [6]. They embody what an organization (and the individuals within it) has captured from their experience about how to learn.
We can think of the innovation process shown in Figure 15.1 as a learning loop – picking up signals that trigger a response. As we’ve suggested, organizations should undertake some form of review of innovation projects in order to help them develop both technological and managerial capabilities [7]. One way of representing the learning process that can take place in organizations is to use a simple model of a learning cycle based on the work of David Kolb (Figure 15.2).
Here learning is seen as requiring the following [8]:
Effective learning from and about innovation management depends on establishing a learning cycle around these themes. In that sense, it is an “adaptive” learning system, helping the organization survive and grow within its environment. But making sure that this adaptive system works well also requires a second learning loop, one that can “reprogram” the system to tune it better to a changing environment and as a result of lessons learned about how well it works. (It’s a little like a central heating or air-conditioning system – there is an adaptive loop that responds when the temperature gets hotter or colder in the room by modifying the output of the heater or air-conditioning unit. But we also need someone to think about – and reset – the thermostat to suit the changing conditions.) This kind of “double loop” or generative learning is at the heart of the innovation management challenge [9–11]. How can we periodically step back and review how well the overall system is working and adapt it to new circumstances? This is the challenge of building “dynamic capability.”
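The thermostat analogy above can be sketched in a few lines of code. This is a minimal illustrative sketch, not an implementation from the text – the function names, temperatures, and the two-degree adjustment rule are all invented for the example. The point is the separation of the two loops: the adaptive loop corrects deviations from the current target, while the generative loop periodically questions and resets the target itself.

```python
# A minimal sketch of single- vs double-loop learning, using the chapter's
# thermostat analogy. All names and figures here are illustrative.

def adaptive_loop(room_temp: float, setpoint: float) -> float:
    """Single-loop (adaptive) learning: correct deviations from the
    current target without questioning the target itself."""
    return setpoint - room_temp  # positive -> turn heating up, negative -> down

def generative_loop(setpoint: float, season_avg_temp: float) -> float:
    """Double-loop (generative) learning: periodically step back and
    reset the target itself to suit changed conditions."""
    # e.g. in a warmer season, a lower heating setpoint is appropriate
    return setpoint - 2.0 if season_avg_temp > 15.0 else setpoint

setpoint = 21.0
print(adaptive_loop(19.0, setpoint))        # → 2.0 (turn heating up)
setpoint = generative_loop(setpoint, 18.0)  # conditions changed: reset target
print(adaptive_loop(19.0, setpoint))        # → 0.0 (no correction needed)
```

The organizational parallel is that most firms run the adaptive loop well; building "dynamic capability" means also scheduling the generative loop – the periodic, deliberate review of whether the routines themselves still fit the environment.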
We should also recognize the problem of unlearning. Not only is learning to learn a matter of acquiring and reinforcing new patterns of behavior – it is often about forgetting old ones [12]. Letting go in this way is by no means easy, and there is a strong tendency to return to the status quo or equilibrium position – which helps account for the otherwise surprising number of existing players in an industry who find themselves upstaged by new entrants taking advantage of new technologies, emerging markets, or new business models. Managing discontinuous innovation requires the capacity to cannibalize one’s own offerings and to anticipate the ways in which other players will try to bring about “creative destruction” of the rules of the game. Jack Welch, former CEO of General Electric, is famous for having sent out a memo to his senior managers asking them to tell him how they were planning to destroy their businesses! The intention was not, of course, to execute these plans, but rather to use the challenge as a way of focusing on the need to be prepared to let go and rethink – to unlearn [13]. In his studies of the shipbuilding division of Hyundai, Linsu Kim talks about the powerful approach of “constructed crisis” – creating a sense of urgency and challenge, which allows for both learning and unlearning to take place [14]. And Dorothy Leonard warns against the complacency that comes when “core competencies” become “core rigidities” – and block the organization from seeing or acting on urgent signals for change [15].
No organization or individual starts with a fully developed version of the model shown in Figure 15.1. We learn and adapt our approach, building capability through a process of trial and error, gradually improving our skills as we find what works for us. These “behavioral routines” become embedded in “the way we do things around here”; they reflect our approach to managing innovation.
We need to recognize the importance of failure in this. Innovation is all about trying new things out – and they may not always work. Experimentation and testing, prototyping and pivoting are all part and parcel of the innovation story, and it is through this process that we gradually build capability.
Case Study 15.1 looks at the role of failure as a support for learning.
Most smart innovators recognize that failure comes with the innovation territory. “You can’t make an omelet without breaking eggs” is as good a motto as any to describe a process that by its very nature involves experimentation and learning. Typically, organizations work on the assumption that of 100 new product ideas, only a handful will make it through to success in the marketplace, and they are comfortable with that because the process of failing provides them with rich new insights, which help them refocus and sharpen their next efforts.
Entrepreneurs face the same challenge in starting up a new venture. It’s impossible to predict how a market will react, how technologies will behave, how new business models will gain acceptance, and so the approach is one of experimentation around a core idea. Feedback from carefully designed experiments allows the venture to pivot, to move around the core focus to get closer to the viable idea, which will work.
The problem is not with failure – innovations will often fail since they are experiments, steps into the unknown. It’s with failing to learn from those experiences.
Failure is important in at least three ways in innovation:
Experienced innovators know this and use failure as a rich source of learning. Most of what we’ve learned from innovation research has come from studying and analyzing what went wrong and how we might do it better next time – Robert Cooper’s work on stage gates, NASA’s development of project management tools, Toyota’s understanding of the minute trial-and-error learning loops on which its kaizen system depends and which have made it the world’s most productive carmaker [16,17]. Google’s philosophy is all about “perpetual beta” – not aiming for perfection but allowing for learning from its innovations. And IDEO, the successful design consultancy, has a slogan that underlines the key role learning through prototyping plays in their projects – “fail often, to succeed sooner!” Failure is also built into models of “agile innovation”; here the challenge is in making sure the experimental loops and learning capture are part of a system of “intelligent failure” [18–20].
So rather than seeing failure in innovation as a problem, we should see it as an important resource – as long as we learn from it.
If we are to extract useful learning from successful – or unsuccessful – innovation activities, then we need to look at the range of tools that might help us with the task. In the following section, we’ll briefly look at some of the possible approaches to this task.
Postproject reviews (PPRs) are structured attempts to capture learning at the end of an innovation project – for example, in a project debrief. This is an optional stage, and many organizations fail to carry out any kind of review, simply moving on to the next project and running the risk of repeating the mistakes made in the previous projects. Others do operate some form of structured review or postproject audit; however, this does not of itself guarantee learning since emphasis may be more on avoiding blame and trying to cover up mistakes.
On the positive side, they work well when there is a structured framework against which to examine the project, exploring the degree to which objectives were met, the things that went well and those that could be improved, the specific learning points raised, and the ways in which they can be captured and codified into procedures that will move the organization forward in terms of managing technology in future [21].
But such reviews depend on establishing a climate in which people can honestly and objectively explore issues that the project raises. For example, if things have gone badly, the natural tendency is to cover up mistakes or try and pass the blame around. Meetings can often degenerate into critical sessions with little being captured or codified for use in future projects.
The other weakness of PPRs is that they are best suited to distinct projects – for example, developing a new product or service or implementing a new process [22]. They are not so useful for the smaller-scale, regular incremental innovation, which is often the core of day-to-day improvement activity. Instead, we need some form of systematic capture. Variations on the standard operating procedures approach can be powerful ways of capturing learning – particularly in translating it from tacit and experiential domains to more codified forms for use by others [23]. They can be simple – for example, in many Japanese plants working on “total productive maintenance” programs, operators are encouraged to document the operating sequence for their machinery. This is usually a step-by-step guide, often illustrated with photographs and containing information about “know-why” as well as “know-how.” This information is usually contained on a single sheet of paper and displayed next to the machine. It is constantly being revised as a result of continuous improvement activities, but it represents the formalization of all the little tricks and ideas that the operators have come up with to make that particular step in the process more effective [24].
On a larger scale, capturing knowledge into procedures also provides a structured framework within which to operate more effectively. Increasingly, organizations are being required by outside agencies and customers to document their processes and how they are managed, controlled, and improved – for example, in the quality area under ISO 9000, in the environmental area under ISO 14000, and in an increasing number of customer/supplier initiatives such as Ford’s QS9000.
Once again, there are strengths and weaknesses in using procedures as a way of capturing learning. On the plus side, there is much value in systematically trying to reflect on and capture knowledge derived from experience – it is the essence of the learning cycle. But it only works if there is commitment to learning and a belief in the value of the procedures and their subsequent use. Otherwise, the organization simply creates procedures that people know about but do not always observe or use. There is also the risk that, having established procedures, the organization then becomes resistant to changing them – in other words, it blocks out further learning opportunities.
Benchmarking is the general name given to a range of techniques that involve comparisons – for example, between two variants of the same process or two similar products – so as to provide opportunities for learning [25–27]. Benchmarking can, for example, be used to compare how different companies manage the product development processes; where one is faster than the other, there are learning opportunities in trying to understand how they achieve this [28].
Benchmarking works in two ways to facilitate learning. First, it provides a powerful motivator since comparison often highlights gaps, which – if they are not closed – might well lead to problems in competitiveness later. In this sense, it offers a structured methodology for learning and is widely used by external agencies who see it as a lever with which to motivate particularly smaller enterprises to learn and change [29]. It provides a powerful focus for the operation of “learning networks” (described in Chapter 7), since it offers a framework around which shared learning can be targeted and monitored and across which experiences can be exchanged [30].
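The "gap highlighting" role of benchmarking can be illustrated with a small sketch. This is an invented example, not a method from the text – the metric names, figures, and the simple better/worse rule are all assumptions made for illustration. The idea is simply that a structured comparison turns vague unease into a concrete list of areas where a partner outperforms us, each a candidate for learning.

```python
# Hypothetical benchmarking sketch: comparing our product development
# metrics against a benchmark partner to surface learning opportunities.
# Metric names and figures are invented for illustration.

own = {"lead_time_weeks": 52, "defect_rate_pct": 4.0, "on_time_pct": 70}
benchmark = {"lead_time_weeks": 30, "defect_rate_pct": 1.5, "on_time_pct": 92}
lower_is_better = {"lead_time_weeks", "defect_rate_pct"}

def gaps(own, benchmark):
    """Return the metrics where the benchmark outperforms us, with gap size."""
    out = {}
    for metric, ours in own.items():
        theirs = benchmark[metric]
        behind = ours > theirs if metric in lower_is_better else ours < theirs
        if behind:
            out[metric] = abs(ours - theirs)
    return out

for metric, gap in gaps(own, benchmark).items():
    print(f"{metric}: behind benchmark by {gap}")
```

In a learning network, the same comparison run across member firms gives the shared framework around which experiences can be exchanged.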
But benchmarking also provides a structured way of looking at new concepts and ideas. It can take several forms: comparisons between similar activities within the same organization, between different organizations in the same sector, or between organizations in quite different sectors.
The last group is often the most challenging since it brings completely new perspectives. By looking at, for example, how a supermarket manages its supply chain, a manufacturer can gain new insights into logistics. Seeing how an engineering shop rapidly sets up and changes over between different products can help a hospital use its expensive operating theaters more effectively.
For example, Southwest Airlines achieved an enviable record for its turnaround speed at airport terminals. It drew inspiration from watching how industry carried out rapid changeover of complex machinery between tasks – and, in turn, those industries learned from watching activities such as pit-stop procedures in the Grand Prix motor racing world. In a similar fashion, dramatic productivity and quality improvements have been made in the health-care sector, drawing on lessons originating in inventory management systems in manufacturing and retailing [31].
Building on the success of benchmarking as an organizational development tool, there has been increasing use of capability maturity models [32]. The origin of the term came from software projects, where it became clear that success – in terms of delivering regularly on time, within budget, and with low error rates – was not an accident; it resulted from a learned and developed capability. In such models, the auditing and reviewing process in benchmarking is done against ideal-type or normative models of good practice. Such an approach found particular expression during the “quality revolution” of the 1990s, where frameworks such as the Malcolm Baldrige Award in the United States, the Deming Prize in Japan, and the European Quality Award all used sophisticated benchmarking frameworks [33]. The approach has been extended to a number of other domains – for example, software development processes, project management, IT implementation, and new product development [32]. It has been used by policymakers aiming to upgrade performance in key sectors – for example, in the United Kingdom, a framework for benchmarking and auditing manufacturing performance was developed and offered as a national service, with special emphasis on assisting smaller firms improve their performance [34,35].
Agile innovation methods also make extensive use of a formal learning cycle. Whether in projects within established organizations or as part of the “lean start-up” approach, the core idea is controlled experimentation. Hypotheses are developed and tested, and the resulting feedback used to help learn how to target and manage the innovation development, using concepts such as pivoting to support the approach [19,20].
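The controlled-experimentation loop described above can be sketched very simply. This is an illustrative example only – the hypothesis, the 5% conversion threshold, and the feedback figures are all invented, and real experiments would of course need proper sample sizes and statistical care. The structure, though, is the core of the lean-startup idea: a testable hypothesis, feedback, and a pivot-or-persevere decision.

```python
# Sketch of the lean-startup experiment loop the text describes: state a
# hypothesis, test it, and pivot or persevere based on the feedback.
# The hypothesis, threshold, and data here are invented for illustration.

def run_experiment(conversion_rate: float, threshold: float = 0.05) -> str:
    """Hypothesis (example): 'at least 5% of trial users will convert
    to paying customers'. The feedback tells us whether to persevere
    with the current approach or pivot around the core idea."""
    return "persevere" if conversion_rate >= threshold else "pivot"

# Feedback from two hypothetical experiments around the core idea:
print(run_experiment(0.02))  # → pivot (rethink pricing or target segment)
print(run_experiment(0.08))  # → persevere (scale the current approach)
```

Each pivot is not a failure of the venture but a deliberate move around the core focus, using what the experiment taught us to get closer to a viable idea.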
In thinking about innovation management, we can draw an analogy with financial auditing where the health of the company and its various operations can be seen through auditing its books. The principle is simple: using what we know about successful and unsuccessful innovation and the conditions that bring it about, we can construct a checklist of questions to ask of the organization. We can then score its performance against some model of “best practice” and identify where things could be improved.
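The audit principle – score the organization against a checklist and flag where things could be improved – can be sketched as follows. This is a hedged illustration, not an actual audit instrument: the four dimension names loosely echo the book's clusters, but the statements behind the scores, the 1–7 scale, and the threshold of 4 are all invented for the example.

```python
# Illustrative innovation-audit scoring sketch: score checklist statements
# 1-7 against "best practice", average each dimension, and flag the weakest
# dimensions as improvement priorities. All scores here are invented.

from statistics import mean

audit = {
    "strategy":     [6, 5, 6],  # e.g. "we have a clear innovation strategy..."
    "linkages":     [3, 2, 4],  # e.g. "we learn from lead users..."
    "processes":    [5, 5, 4],  # e.g. "we have effective stage gates..."
    "organization": [2, 3, 3],  # e.g. "our climate supports new ideas..."
}

def weakest(audit, threshold=4.0):
    """Average each dimension and return those below the threshold,
    worst first - the priorities for organizational development."""
    scores = {dim: mean(vals) for dim, vals in audit.items()}
    return sorted((d for d, s in scores.items() if s < threshold),
                  key=lambda d: scores[d])

print(weakest(audit))  # → ['organization', 'linkages']
```

As the text stresses, the value lies less in the exact numbers than in the structured reflection and discussion the scoring process provokes.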
This auditing approach has considerable potential relevance for the practice of innovation management, and a number of frameworks have been developed to support it. Back in the 1980s, the UK National Economic Development Office developed an “innovation management tool kit,” which has been updated and adapted for use as part of a European program aimed at developing better innovation management among small- and medium-sized enterprises (SMEs). Another framework, originally developed at London Business School, was promoted by the UK Department of Trade and Industry. Others include the “living innovation” model, which was jointly promoted with the Design Council [34,36], and various innovation frameworks promoted by trade and business associations. Francis offers an overview of a number of these [37]. This tradition has continued with the work of NESTA in the United Kingdom, which has commissioned a variety of studies to help develop an “Innovation Index,” offering a measurement framework for both practice and performance in innovation [38].
Other frameworks that cover particular aspects of innovation management, such as creative climate, continuous improvement, and product development, have been developed [39–41]. With the increasing use of the Internet have come a number of sites that offer interactive frameworks for assessing innovation management performance as a first step toward organization development.
In each case, the purpose of such auditing is not to score points or win prizes but to enable the operation of an effective learning cycle through adding the dimension of structured reflection. It is the process of regular review and discussion that is important, rather than the detailed information or the exactness of the scores. The point is not simply to collect data but to use these measures to drive improvement of the innovation process and the ways in which it is managed. As the quality guru W. Edwards Deming pointed out, “If you don’t measure it, you can’t improve it!”
There are typically two dimensions of interest in carrying out such an “innovation audit”:
Figure 15.3 indicates the range of measures that we might put in place, covering the inputs and outputs of the process together with our core interest, how the process itself is organized and managed. An overview of such approaches is given by Richard Adams and colleagues [42].
Two sets of measures represent things we could count and evaluate as indicators of innovation – how much we put in (time, money, skilled resources, etc.) and what the outputs from the process are.
Inputs to the innovation process are important – if we don’t spend any time or money, or invest in skilled staff and their further development, then we are unlikely to be able to operate a systematic process to generate ideas and translate them into innovations that create value. Possible indicators here might include spending on R&D or market research, investment in training and development, or the percentage of skilled scientists and engineers on the staff. More subtle but potentially interesting measures might include the amount spent on open-ended or “blue-sky” exploration compared with “mainstream” innovation activities, or the diversity of the backgrounds of staff recruited to help with the process.
In reviewing outputs – innovative performance – we can again look at a number of possible measures and indicators. For example, we could count the number and range of patents and scientific papers as indicators of knowledge produced or the number of new products introduced (and percentage of sales and/or profits derived from them) as indicators of product innovation success [43]. And we could use measures of operational or process elements, such as customer satisfaction surveys to measure and track improvements in quality or flexibility [29,44]. We can also try to assess the strategic impact where the overall business performance is improved in some way and where at least some of the benefits can be attributed directly or indirectly to innovation – for example, growth in revenue or market share, improved profitability, higher value added [45].
Interestingly, recent attempts to develop different output measures of innovation performance have highlighted the previously “hidden” innovation potential in sectors such as the creative industries, professional services, or advertising [46,47].
We could also consider a number of more specific performance measures of the internal workings of the innovation process or particular elements within it. For example, we could monitor the number of new ideas (product/service/process) generated at the start of the innovation system, failure rates (in the development process or in the marketplace), or the number or percentage of overruns on development time and cost budgets. In process innovation, we might look at the average lead time for introduction, or use measures of continuous improvement – suggestions per employee, number of problem-solving teams, savings accruing per worker, cumulative savings, and so on.
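Several of the indicators mentioned in this discussion are simple ratios that are easy to compute once the underlying data is collected. The sketch below shows three of them; all of the input figures are invented purely for illustration.

```python
# A hedged sketch of simple innovation performance indicators of the kind
# discussed in the text - share of sales from new products, suggestions
# per employee, cumulative savings. All input figures are invented.

sales = {"new_products": 12.5, "total": 50.0}  # $m, products < 3 years old
suggestions, employees = 640, 200              # improvement suggestions/year
annual_savings = [40_000, 55_000, 70_000]      # $ saved per year

new_product_sales_pct = 100 * sales["new_products"] / sales["total"]
suggestions_per_employee = suggestions / employees
cumulative_savings = sum(annual_savings)

print(f"{new_product_sales_pct:.0f}% of sales from new products")  # 25%
print(f"{suggestions_per_employee:.1f} suggestions per employee")  # 3.2
print(f"${cumulative_savings:,} cumulative savings")               # $165,000
```

Tracked over time, even crude indicators like these make trends visible and give the review discussions described below something concrete to work from.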
In reviewing how well our innovation operates, we could look at the ways in which the process itself is organized and managed. The core questions in our process model are relevant here:
There are various measures that we could apply to support reflection and analysis around these questions. In each chapter of the book, we have tried to present checklists and frameworks for thinking about these questions – for example, how good the “creative climate” of the organization is, or how well strategy is deployed and communicated [40]. It’s also important to use such frameworks as a starting point for more focused exploration. Throughout the book, we have stressed that while the challenge in innovation management is generic, there are specific issues around which specific responses need to be configured.
We might, for example, look at the case of service innovation and focus our audit questions around themes that might be particularly relevant in thinking about managing such innovation. See Box 15.1 for a discussion of five components involved in measuring service innovation.
Similarly, we have been arguing that there are conditions – beyond the steady state – where we need to take a different approach to managing innovation and to introduce new or at least complementary routines to those helpful in dealing with “steady-state” innovation. Again we can develop specific audit questions to help facilitate this kind of reflection, and the website has an example of such a framework. Or we could consider different stages in the life cycle of the organization – for example, there is a tool to aid reflection around key questions for start-up entrepreneurs on the website.
We can also develop audits for particular aspects of the innovation process – for example, is there a “creative climate” within which ideas can flourish and be built upon? Or are there structures and processes in place to enable high involvement of employees in the innovation process?
Table 15.1 summarizes some structured frameworks around these themes.
TABLE 15.1 Audit Frameworks to Support Capability Development
Key Questions and Issues in Managing Innovation | Reflection and Development Aids Available on Website |
How well do we manage innovation? | Innovation audit |
How well do we manage service innovation? | Service innovation (STARS) framework |
Start-up phase for new ventures | Entrepreneurs checklist |
Do we engage our employees fully in innovation? | High-involvement innovation audit |
How well do we manage discontinuous innovation? | Discontinuous innovation audit |
How widely do we search in an open-innovation world? | Search strategies audit |
Do we have a creative climate for innovation? | Creative climate review |
Can we make the most of external knowledge for innovation? | Absorptive capacity review |
How effective are our selection processes for innovation? | Selection audit |
Do we have a clear innovation strategy – and is it communicated and deployed? | Innovation strategy audit |
In this section, we give some examples of how an organization might reflect on its innovation process.
There are many approaches that an organization could take to managing the challenge of finding opportunities to trigger the innovation process. How well it does it is another matter – but one way we could tell might be to listen to the things people said in describing “the way we do things around here” – in other words, the pattern of behavior and beliefs that creates the climate for innovation.
And if we walked around the organization, we’d expect to hear people talking about the methods they actually use. We should hear things such as around here…
Of course, part of the search question is about picking up rather weak signals about emerging – and sometimes radically different – triggers for innovation. So to deal with the unexpected, people in smart firms might also say things such as around here…
If we visited a smart organization, we’d expect to find that people we approached would tell us things such as around here…
And when it comes to just “getting it done,” we would expect to hear things such as around here…
We’d also expect them to have some provision for the wilder and more radical kind of project, which might need to go on a rather different route in making its journey. People might say about things such as around here…
Statements we’d expect to hear around such a strategically focused and led organization might include around here…
And we’d also expect some stretching strategic leadership, getting the organization to think well outside its box and anticipate very different challenges for the future – expressed in statements such as around here…
If we visited such an organization, we'd find evidence of these approaches being used widely, and people would say things such as "around here…"
We'd also find a recognition that one size doesn't fit all and that innovative organizations need the capacity – and the supporting structures and mechanisms – to think and do very different things from time to time. So we'd also expect to find people saying things such as "around here…"
If we were to visit a successful innovative player, we'd get a sense of how far they had developed these capabilities for networking by asking around. People would typically say things such as "around here…"
And there would be some evidence of their increasing efforts to create wide-ranging "open-innovation"-type links – with statements such as "around here…"
Smart firms actively manage their learning – and in such organizations people might say things such as "around here…"
A great deal of research effort has been devoted to the questions of what and how to measure in innovation. The risk is that we become so concerned with these questions that we lose sight of the practical objective, which is to reflect upon and improve the management of the process. The format of any particular audit tool is not important; what is needed is the ability to use it to make a wide-ranging review of the factors affecting innovation success and failure and of how management of the process might be improved. A good audit framework offers:
So, for example, an organization with no clear innovation strategy, limited technological resources and no plans for acquiring more, weak project management, poor external links, and a rigid and unsupportive organizational structure would be unlikely to succeed in innovation. By contrast, one that was focused on clear strategic goals, had developed long-term links to support technological development, had a clear project management process that was well supported by senior management, and operated in an innovative organizational climate would have a much better chance of success.
Figure 15.4 gives an example of a framework for thinking about developing innovation management capability.
Of course, no organization starts with a perfectly developed capability to organize and manage innovation. It undertakes the process of trial-and-error learning, slowly finding out which behaviors work and which do not and gradually repeating and reinforcing them into a pattern of “routines.” Developing innovation capability involves establishing and reinforcing those routines and reviewing and checking that they are still appropriate or whether they need replacing or modifying. View 15.1 gives some examples of these reflection points. Some useful key questions are as follows:
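The capability reviews described above typically ask respondents to score a set of statements for each dimension of innovation management and then compare the resulting profile against a target. As a purely illustrative sketch of the arithmetic behind such a tool – the dimension names, the 1–7 scale, and the threshold here are hypothetical, not taken from the chapter's own templates – the scoring step might look like this:

```python
# Illustrative innovation-audit scorer. Dimensions, the 1-7 scale, and
# the 4.0 threshold are assumptions for the sketch, not the book's tool.

def audit_profile(responses):
    """Average the raw statement scores (1-7) for each capability dimension."""
    return {dim: sum(scores) / len(scores) for dim, scores in responses.items()}

def weakest_dimensions(profile, threshold=4.0):
    """Return dimensions scoring below the threshold, weakest first."""
    return sorted((d for d, s in profile.items() if s < threshold),
                  key=lambda d: profile[d])

# Hypothetical responses from one review workshop
responses = {
    "strategy":     [6, 5, 6],
    "processes":    [3, 4, 2],
    "organization": [5, 5, 4],
    "linkages":     [2, 3, 3],
    "learning":     [4, 4, 5],
}

profile = audit_profile(responses)
print(weakest_dimensions(profile))  # -> ['linkages', 'processes']
```

The point of such a tool is not the numbers themselves but the conversation they provoke: a low-scoring dimension is a prompt to ask which routines are missing or no longer appropriate.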
We have repeatedly said that innovation is complex, uncertain, and almost (but not quite) impossible to manage. That being so, we can be sure that there is no such thing as the perfect organization for innovation management; there will always be opportunities for experimentation and continuous improvement. As we have suggested throughout the book, the challenge is to constantly review and reconfigure in the light of changing circumstances – whether in discontinuous "beyond the steady state" innovation or in the context of "open innovation," where the challenge is working beyond the boundaries. In the end, innovation management is not an exact or predictable science but a craft, a reflective practice in which the key skill lies in reviewing and reconfiguring to develop dynamic capability.
Throughout the book, we have tried to consider the implications of managing innovation as a generic process but also to look at the ways in which approaches need to take into account two key challenges in the twenty-first century – those of managing “beyond the steady state” and “beyond boundaries.” The same basic recipe still applies, but there is a need to configure established approaches and to learn to develop new approaches to deal with these challenges.
In this chapter, we have looked at the ways in which organizations can capture learning and build capability in innovation management. The major requirement is a commitment to undertake such learning, but it can also be enabled by the use of tools and reflection aids. In particular, the chapter has looked at various approaches to innovation auditing and offered some templates for reviewing and developing capability, both across the process as a whole and in particular key areas.
A wide range of books and online reviews of innovation now offer some form of audit framework, including the Pentathlon model from Cranfield University [48] and Bettina von Stamm's "Innovation wave" model – see [49–52] for other examples. Commercial organizations such as IMP3rove (www.improve-innovation.eu) offer a benchmarking and review framework, and the International Organization for Standardization is now exploring establishing an international framework.
Websites include www.innovationforgrowth.co.uk, http://www.bobcooper.ca, http://innovationexcellence.com/, http://www.cambridgeaudits.com/. AIM Practice also has a variety of audit tools around innovation, and NESTA (https://www.nesta.org.uk/) has a number of reports linked to its major Innovation Index project.
You can find a number of additional downloadable case studies on the companion website, including the following: