CHAPTER 7

Assembling Your AI Operations Team

by Terence Tse, Mark Esposito, Takaaki Mizuno, and Danny Goh

Here is a common story of how companies trying to adopt AI fail. They work closely with a promising technology vendor. They invest the time, money, and effort necessary to achieve a resounding success with their proof of concept and demonstrate how the use of artificial intelligence will improve their business. Then everything comes to a screeching halt: the company finds itself stuck at a dead end, its outstanding proof of concept mothballed and its team frustrated.

What explains the disappointing end? It is hard, in fact very hard, to integrate AI models into a company's overall technology architecture. Doing so requires properly embedding the new technology into the larger IT systems and infrastructure; a top-notch AI model won't do you any good if you can't connect it to your existing systems. Yet while companies pour time and resources into the AI models themselves, they often fail to consider how to make those models actually work with the systems they already have.

The missing component here is AI operations, or "AIOps" for short: the practice of building, integrating, testing, releasing, deploying, and managing the systems that turn the results from AI models into the insights desired by end users. At its most basic, AIOps boils down to having not just the right hardware and software but also the right team: developers and engineers with the skills and knowledge to integrate AI into existing company processes and systems. An evolution of DevOps, the software engineering practice that aims to integrate software development and software operations, AIOps is the key to converting the work of AI engines into real business offerings and to achieving AI at a large, reliable scale.

Start with the Right Environment

Only a fraction of the code in many AI-powered businesses is devoted to AI functionality. Actual AI models are, in reality, a small part of a much larger system, and how users interface with them matters as much as the models themselves. To unlock the value of AI, you need to start with a well-designed production environment (the developers' term for the real-world setting where code meets users). Thinking about this design from the beginning will help you manage your project, from probing whether the AI solution can be developed and integrated into the client's IT environment to integrating and deploying the algorithm in the client's operating systems. You want a setting in which software and hardware work together seamlessly, so the business can rely on it to run its real-time daily commercial operations.

A good production environment must meet three criteria:

Dependability

Right now, AI technologies are fraught with technical issues. For example, AI-driven systems and models will stop functioning when fed wrong or malformed data. Furthermore, their speed is bound to diminish when they must ingest large amounts of data. These problems will, at best, slow the entire system down and, at worst, bring it to its knees.

Avoiding data bottlenecks is important to creating a dependable environment. Putting well-considered processing and storage architectures in place can overcome throughput and latency issues. Furthermore, anticipation is key. A good AIOps team will consider ways to prevent the environment from crashing and prepare contingency plans for when things do go wrong.
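The defensive posture described above can be sketched in code. The following is a minimal illustration, not drawn from the chapter: the record fields and the fallback score are hypothetical, but the pattern, rejecting malformed inputs and falling back to a safe default rather than letting bad data crash the model, is one way an AIOps team might build in that contingency.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Record:
    """A single model input; the field names here are illustrative."""
    customer_id: str
    amount: float

def validate(raw: dict) -> Optional[Record]:
    """Reject malformed rows instead of letting them reach the model."""
    try:
        rec = Record(customer_id=str(raw["customer_id"]),
                     amount=float(raw["amount"]))
    except (KeyError, TypeError, ValueError):
        return None  # quarantine candidates: missing or unparseable fields
    if rec.amount < 0:
        return None  # business-rule check: negative amounts are invalid
    return rec

def score_batch(rows, model_score, fallback=0.0):
    """Score valid rows; use a safe fallback score for bad ones,
    so one malformed record never halts the whole batch."""
    results = []
    for raw in rows:
        rec = validate(raw)
        results.append(model_score(rec) if rec else fallback)
    return results
```

In a real pipeline the rejected rows would be logged and quarantined for inspection rather than silently replaced, but the shape of the contingency plan is the same: the system degrades gracefully instead of crashing.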

Flexibility

Business objectives, and the supporting flows and processes within the overall system, change on an ongoing basis. At the same time, everything needs to run like clockwork at a system level for the AI models to deliver their promised benefits: Data imports must happen at regular intervals according to fixed rules, reporting mechanisms must be continuously updated, and data must be refreshed frequently so it never goes stale.

To meet ever-evolving business requirements, a production environment needs to be flexible enough for quick and smooth system reconfiguration and data synchronization without compromising running efficiency. Think through how best to build a flexible architecture by breaking the system into manageable chunks, like Lego blocks that can subsequently be added, replaced, or removed.
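One hypothetical way to picture those "Lego blocks" in code is a pipeline whose stages are independent, named functions that can be added, replaced, or removed without disturbing the rest of the system. The class and stage names below are illustrative, not from the chapter:

```python
from typing import Any, Callable, List, Tuple

# A stage is simply a function from data to data; the pipeline is an
# ordered list of named stages, each of which can be swapped out on
# its own, the way a single Lego block can be replaced.
Stage = Callable[[Any], Any]

class Pipeline:
    def __init__(self) -> None:
        self.stages: List[Tuple[str, Stage]] = []

    def add(self, name: str, stage: Stage) -> "Pipeline":
        """Append a new block to the end of the pipeline."""
        self.stages.append((name, stage))
        return self

    def replace(self, name: str, stage: Stage) -> None:
        """Swap one block for another without touching its neighbors."""
        self.stages = [(n, stage if n == name else s)
                       for n, s in self.stages]

    def run(self, data: Any) -> Any:
        """Pass the data through every stage in order."""
        for _, stage in self.stages:
            data = stage(data)
        return data

# Usage: reconfigure one stage without rebuilding the rest.
p = Pipeline().add("clean", str.strip).add("normalize", str.lower)
p.run("  Hello ")            # lowercased, stripped output
p.replace("normalize", str.upper)
p.run("  Hello ")            # same pipeline, one block swapped
```

Real orchestration tools offer far richer versions of this idea (scheduling, retries, monitoring), but the design principle is the one the paragraph describes: isolate each step so reconfiguration is cheap.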

Scalability and extendibility

When businesses expand, the “plumbing” within the infrastructure inevitably has to adapt. This can involve scaling up existing capabilities and extending into new competencies. Yet an inescapable fact is that different IT systems often carry different performance, scalability, and extendibility characteristics. The result: Many problems will likely arise when businesses try to cross system boundaries.

Being able to retain "business as usual" while simultaneously embedding upgraded AI models is critical to business expansion. Success depends greatly on the team's ability to constantly adjust, tinker with, and test the existing system against each newly proposed solution, reaching an equilibrium in which old and new systems function together.

Good Systems Come from Good Teams

The question, therefore, isn't whether you need an AIOps team; it's what kind of AIOps team makes the most sense for your business. For most businesses, the most important decision is whether to build the team in house or contract it out. There are advantages to both, but here's what the trade-offs look like:

Do it yourself

On the plus side, creating your own team to build and maintain a production environment gives you full control over the entire setup. It can also save a lot of the potential management and contractual hassles that come with working with external suppliers. This applies both to large companies, which may want to verticalize the AIOps team, and to small- and medium-sized enterprises that may want to expand their IT team's competencies to deal with the production environment directly.

That said, DIY is no small undertaking. It involves significant administrative and organizational burdens, not to mention overhead, and companies need to develop AIOps expertise and knowledge in house. The upfront economic impact is also likely to be huge: High initial cash outlays are tied up in depreciating assets like storage hardware and servers. Even with cloud infrastructure, the trial-and-error setup activities will likely push installation costs up.

Plug and play

An alternative is to partner with an AIOps vendor. A good vendor will be able to work closely with its client, offering the required expertise to construct and run a production environment that sits well within the client’s IT infrastructure and can support AI models, be they self-developed or supplied by third parties. With such a service, enterprises can access a robust production environment and a trustworthy AIOps team while freeing up the enormous resources otherwise necessary to run their own AIOps.

However, for many businesses, this may mean giving up ownership of a proprietary system and full say in the running of AIOps. It may come across as a compromise between financial constraints and access to a solid, robust AI architecture: not as bespoke as a native AIOps project, but good enough to help the firm digitize its production.

For any business wanting to leverage the benefits of AI, what truly matters is not the AI models themselves; it's the well-oiled machine, powered by AI, that takes the company from where it is today to where it wants to be in the future. Ideas and one-time projects won't get it there. AIOps is therefore not an afterthought; it's a competitive necessity.

__________

Terence Tse is a cofounder of the AI solutions provider Nexus FrontierTech and professor in entrepreneurship at ESCP Business School. Mark Esposito is a cofounder and the Chief Learning Officer at Nexus FrontierTech. He has worked as a professor of economics at Hult International Business School and Arizona State University’s Thunderbird and served as an institute council coleader for the MOC Program at Harvard Business School. Takaaki Mizuno is a cofounder and the CTO at Nexus FrontierTech and the author of numerous publications, including Web API: The Good Parts, which became a bestseller on Amazon Japan. Danny Goh is a cofounder and the CEO at Nexus FrontierTech and an entrepreneurship expert at the Saïd Business School, University of Oxford.


Adapted from “The Dumb Reason Your AI Project Will Fail,” on hbr.org, June 8, 2020 (product #H05O4O).
