Chapter 7. Conclusion

Many organizations have begun their journey toward adopting AI: they’ve built data science teams, launched projects, and so on. Of course, not all of these initiatives have delivered, and there’s a noticeable gap between the “haves” and “have nots” as AI becomes a matter of competitive advantage. In this report, we promised that you’d learn how to operationalize AI: that is, the important steps to develop a comprehensive understanding across your organization of how to build AI solutions in a repeatable, timely manner to shorten time-to-market (TTM), lower overhead, and reduce risks. Think of this as a recipe, if you will. The end goal of operationalizing AI is to establish an AI Center of Excellence and move forward as a unified organization on that journey. Let’s recap these steps.

Summary of Key Points

In the current view of AI practices and challenges in industry, we’ve identified key priorities for business and technical stakeholders:

  • Priority from a business stakeholder’s perspective: find a proper balance between abstracting away another team’s contributions and obstructing that team’s work.

  • Priority from a technical stakeholder’s perspective: the metric that puts almost all of these points into context is time-to-market—the time required for a project to progress from proof of concept to production.

Other key points from the first few sections:

  • Enumerate the personas—the descriptions of people in specific roles needed for operationalizing AI—that become guides for the conversations during design thinking and are needed for staffing, training, project planning, collaboration, and so on.

  • Leverage design thinking as a process for the entire organization to view AI applications as a shared problem. This is done through the understand-explore-model loop and by building a shared language for collaborative problem solving at speed and at scale.

  • The five stages of the AI life cycle—Scope, Understand, Build, Deploy, and Manage and Trust—bring together the personas into a collaborative process.

Pulling these points together, we introduced the concept of an AI Factory to avoid the all-too-frequent antipattern of “one-off” AI applications that are difficult to sustain. Key points to recall include:

  • Prioritize a road map, based on the current state and future directions, for which AI solutions the data science team will focus on and what the other departments will need to support.

  • Focus on setting up the three most important components: people, process, and platform.

Once you have the road map and components in place, integrate with the other supporting departments so that they can provide long-term support for the factory to establish an AI Center of Excellence (CoE).

The Red Bull case study described, in “Transition in Practice,” how its team developed an important decision process to distinguish among “sandcastle,” “log cabin,” and “castle” categories for projects:1

Once the proof of concept has been positively evaluated, we then transition all, none, or parts of the components to a more stable and scalable business applications environment.

Another key point from Red Bull was about its integration with the rest of the organization:

All in all, this pragmatic and jointly developed approach is working well and crucially allows data science and IT to work together constructively and on an equal footing. Importantly, this collaboration framework has also built the crucial basis for jointly discussing further extensions on tech requirements and roles needed for the data science initiative to flourish in our non-digital-native organization.

The excerpts from the Capital One presentation during OSCON 2019 include an exemplary description of an AI CoE in operation. It highlights the importance of trusted AI:

How can business processes and platforms be designed to reduce cognitive load and increase mutual trust, while allowing these teams to interoperate effectively?

Capital One described an important element of its process for balancing abstraction against obstruction, mentioned in “Capital One: Model as a Service for Real-Time Decisioning”:

Define a separation of concerns for effective interactions between the kinds of teams involved. Then they can avoid requiring an overwhelming amount of synchronization, which would increase costs and risks.

The notion of a separation of concerns should be familiar to almost everyone who’s studied computer science: it’s the motivation for developing abstraction layers, and business stakeholders should take note of the approach. Capital One also highlighted time-to-market as a key performance indicator for its AI solutions; in other words, a measure of how affordable the deployment of AI applications is within an enterprise environment.
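To make the separation-of-concerns idea concrete, here is a minimal sketch of an abstraction layer between a data science team and a platform team. This is a hypothetical illustration, not Capital One’s actual implementation; the `Model` interface, `ChurnModel`, and `serve` names are invented for this example.

```python
from abc import ABC, abstractmethod

class Model(ABC):
    """The contract both teams agree on: the data science team
    implements it; the platform team consumes only this interface."""

    @abstractmethod
    def predict(self, features: dict) -> float:
        """Score a single request."""

class ChurnModel(Model):
    # Stand-in for a trained model owned by the data science team.
    # Its internals can change freely without touching serving code.
    def predict(self, features: dict) -> float:
        return 0.8 if features.get("days_inactive", 0) > 30 else 0.1

def serve(model: Model, request: dict) -> dict:
    # Platform-side code: depends only on the Model abstraction,
    # so either team can evolve its side without constant synchronization.
    return {"score": model.predict(request)}

print(serve(ChurnModel(), {"days_inactive": 45}))  # {'score': 0.8}
```

Because the serving layer depends only on the shared interface, neither team needs to coordinate releases beyond keeping that contract stable, which is exactly the reduction in synchronization cost the Capital One excerpt describes.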

Finally, the Wunderman Thompson case study illustrated a full implementation of the AI CoE, bringing together most of the approaches outlined in this report. It cited the importance, circa 2020, of these approaches for its response to uncertain markets. Michael Murray, president and chief product officer at Wunderman Thompson, described it in “Expanding to Other Operating Companies”:

We now have an AI Factory that allows us to tap into the full power of Wunderman’s Identity Network and provide solutions at scale and speed to our clients. The Factory allows us to go from a new business idea to an AI solution in the matter of weeks, not months.

Looking Forward

As organizations move away from science experiments to AI at scale, we expect that the field of operationalizing AI will continue to evolve. While we do not expect the roles (personas) we listed to change dramatically in the near future, we do expect an acceleration of people shifting into these roles from existing roles, partly assisted by the democratization of AI/ML techniques. We are likely to see further advances in platforms—at algorithmic, functional, and infrastructure levels—to improve performance, capture domain knowledge better, and lower the threshold of entry. A growing sense of social responsibility and an increasing set of regulatory requirements will drive further demand for trusted AI. We expect this demand to be so widespread that the default process steps for operationalizing AI will be refined to support trust and governance across all stages of the AI life cycle. Against this backdrop, leaders should make sure that they first lay the foundations to operationalize AI, then observe and adapt as appropriate for their business.

1 Red Bull’s corporate headquarters is not far from Salzburg, Austria, which is overlooked by a castle called Hohensalzburg, one of the largest medieval castles in Europe. It’s also one that was rarely attacked throughout its long history. In other words, Red Bull knows about impressive, enduring castles!