AI Hero: Cathy

Some time ago, my son Guillermo told me something that made me think. He said he wanted to learn computer programming, which probably made me the happiest parent on earth, but he had one condition: I had to be with him the entire time. My kids don’t usually ask their parents for help with homework, so why was Guillermo suddenly asking for it? “I’m afraid I may create an AI that turns against me,” was his answer.

Setting aside what that says about my ability to instill confidence in AI in my own family, Guillermo’s comment was a symptom of something bigger. If society doesn’t trust AI, how can AI have any positive impact on society?

AI has huge potential to help solve society’s most pressing challenges. Great people are making others’ lives better by dedicating their own to research into new cures for diseases, inclusion of people with disabilities, environmental protection, refugee assistance, and natural disaster intervention, among many other examples.

Empowering those great people with AI can help humanity to take a new look at the problems we are facing. Every single aspect of our lives can potentially be augmented and improved with AI. The positive transformation that AI can create in society is just beginning, and new innovative applications are being introduced every day.

But for positive transformation to happen, AI has to be trusted by society. Every single technological disruption in history has required a careful balance between the development of technology and its safety. For example, many breakthroughs had to be made for aviation to be used safely at scale and enable the global impact it has today, and we had to learn how to transport electricity safely before it could power our cities and homes.

In the same way, we need to develop safe AI and balance its accelerated pace of development with protections against its potential risks. AI development is leading to challenging questions: What are the risks of AI to user privacy? How do we make sure AI algorithms don’t learn unfair or biased behaviors from real-world biases? Who is accountable for the decisions made by AI?

These questions and many others require a different breed of hero. Previous transformations have required heroes focused on the technology. Aviation, electricity, and software made progress because of genius innovators who took the technology a little bit further in every iteration. AI is different. It connects with humans at a deeper level, and it has the potential to participate in impactful decisions affecting many areas of our lives as it proliferates in health care, financial services, transportation, and education. Therefore, it requires heroes who can combine technological innovation with a humanistic approach, analytical thinking with social skills, left brain with right brain.

Cathy has worked at EY, a global leader in assurance, tax, transaction, and advisory services, for 26 years. She is a social innovator who moves comfortably between these opposing dynamics every day. Her left brain–right brain balance is embedded in her DNA, and in conjunction with her innate curiosity, it has motivated Cathy’s balanced approach to AI.

Being from a small town in rural Saskatchewan (Canada) with limited scholastic resources didn’t hold Cathy back from pursuing a lifelong love of learning, just as bullying, blindness, poverty, or family illness didn’t stop the other AI heroes in this book. When her high school didn’t offer physics, Cathy took it by correspondence and re-created lab experiments at home. With no local museums or even a movie theatre, Cathy leveraged her small public library to learn about the rest of the world through books.

Having a father who was an early adopter of technology gave Cathy access to computers at a very young age. It was while playing a video game for the Commodore 64 that another of Cathy’s passions surfaced. In the classic Lemonade Stand game, players make decisions about different aspects of running a lemonade stand, like pricing, stock, and advertising, based on external factors like the weather and competitors. The uncertainty of the results and the delicate balance required to maximize growth while keeping risk low fascinated Cathy—so much so that she enrolled in the University of Waterloo’s Master of Accounting program and joined EY as part of its co-op program.

Cathy’s atypical combination of interests spanning technology, accounting, and social issues didn’t take long to show up. In her time working at EY she has made use of her accounting background and CPA designation to navigate the balance between business and societal interests, not only with new technologies but also broader social issues including climate change, sustainability, social inclusion, and diversity. Throughout her career she has acted as a bridge between opposing interests, trying to find common ground.

From the mainframe to ERP systems to ecommerce, Cathy has experienced the evolution of technology firsthand and observed the growing trust challenges created by each new technological innovation. She managed the risks of the first distributed computers automating tasks that were previously performed by humans. She evaluated the sufficiency of mechanisms to ensure data protection and privacy as financial systems became web-enabled, and she had to think about how to avoid fraud, cyber-crime, and money laundering as digitization and compute power increased. In a nutshell, Cathy’s role is to always ask, “What can go wrong?”

When AI started to sound like the next step in the technology journey, Cathy was already there. Driven by her personal interests, she was thinking about the potential risks of scaling AI at the enterprise level even before AI was in the enterprise. She started to share her opinions on the topic with EY colleagues, customers, LinkedIn followers, and even her book club. Cathy lives by EY’s motto: “The better the question. The better the answer. The better the world works.”—and AI had plenty of questions waiting to be answered.

Based on this early interest in managing the risks of AI, Cathy was asked to take on the role of EY’s Global Trusted AI Advisory Leader in 2018. Since then, she’s had the huge challenge of setting the strategy at EY to help customers across the most regulated and mission-critical industries to responsibly embrace artificial intelligence.

To meet this challenge Cathy used a similar approach to the one you will learn about in the next chapter. Instead of starting with processes or technologies, she focused on identifying the outcome. What does responsible AI look like? What is needed for users to trust in AI?

To answer these questions, Cathy worked with a global team of individuals at EY with diverse backgrounds to define the unique attributes for their organization, aligned with the company’s values—the first step in any responsible AI journey. After three months of close partnership between representatives from EY’s Technology Risk, Data and Analytics, Innovation, Assurance, and IT Advisory teams, the company released its list of trusted AI attributes: performance, transparency, explainability, resiliency, and unbiased. (We will look in detail at Microsoft’s equivalent attributes, which are strongly aligned with these, in the next chapter.)

Cathy’s contribution didn’t end with the development of EY’s trusted AI attributes, though; in fact, that was just the beginning. She quickly recognized the importance of fitting the ethical and social considerations for AI into the existing enterprise governance. Trust in AI can be achieved only if the trust attributes are embedded across the organization and throughout the AI development lifecycle.

EY already had such a lifecycle in place (MLOps, which you learned about in Chapter 5), but it needed to be redefined to make sure the trust attributes were enforced. The result was the launch of EY’s Trusted AI Framework, which mapped the trust attributes to the AI development lifecycle. This framework includes a three-step process to promote trust in AI initiatives in the company:

Purposeful design
Design and build systems that purposefully integrate the right balance of robotic, intelligent, and autonomous capabilities to advance well-defined business goals, mindful of context, constraints, readiness, and risks.
Agile governance
Track emergent issues across social, regulatory, reputational, and ethical domains to inform processes that govern the integrity of a system, its uses, architecture and embedded components, data sourcing and management, model training, and monitoring.
Vigilant supervision
Continuously fine-tune, curate, and monitor systems to ensure reliability in performance, identify and remediate bias, and promote transparency and inclusiveness.
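To make that third step a little more concrete, here is a minimal sketch of the kind of automated check a vigilant supervision process might run to identify bias. It is purely illustrative: the function name, the demographic parity metric, the data, and the alert threshold are my assumptions, not part of EY’s framework or tooling.

```python
# Illustrative sketch only: a simple demographic parity check of the
# kind a "vigilant supervision" step might run. Names, data, and the
# alert threshold are hypothetical.

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rates between any
    two groups (0.0 means perfectly balanced)."""
    counts = {}  # group -> (observations, positive predictions)
    for pred, group in zip(predictions, groups):
        seen, positives = counts.get(group, (0, 0))
        counts[group] = (seen + 1, positives + (1 if pred == 1 else 0))
    rates = [positives / seen for seen, positives in counts.values()]
    return max(rates) - min(rates)

# Example: outputs of a hypothetical loan-approval model, grouped by a
# sensitive attribute.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_gap(preds, groups)
if gap > 0.2:  # illustrative alert threshold
    print(f"Bias alert: demographic parity gap of {gap:.2f}")
```

In a real deployment, a check like this would run continuously on production predictions, with its alerts feeding back into the agile governance process described above.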

Changing the entire development lifecycle in an organization to take into account the AI trust principles is not an easy task. As Cathy went through this journey, she identified best practices that she now recommends to EY’s customers. One of these is creating an AI advisory board that can work with the organization to determine which use cases to apply AI to and bring in broader ethical and social considerations. It’s important for the AI advisory board to ask not only “Can we use AI?” but also “Should we?”

Other practices recommended by EY include awareness training for executives and developers, as well as design standards for the development of AI that incorporate a detailed risk and control framework. We’ll discuss these and other governance practices in more detail in the next chapter.

Cathy compares these practices to the way she taught her twin daughters how to ride a bike: it took patience and the appropriate safeguards, in the form of training wheels, parental supervision, and a controlled environment, to avoid early falls. And even after they’d learned the basics it was important to stay vigilant, as her daughters could still suffer a mishap as they moved into new terrain, rode faster, or tried new things like going hands-free. AI is very similar. It should be trained and operated with appropriate safeguards that match its capabilities and limitations, and continuous monitoring mechanisms need to be put in place to provide early warning if it fails or begins to show suboptimal performance in new terrains or capabilities.
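To stay with the analogy, here is a minimal sketch of what one such continuous monitoring mechanism could look like: a rolling check that raises an early warning when a model’s live accuracy drifts below its validated baseline. The class name, window size, and tolerance are illustrative assumptions, not a reference to any particular monitoring product.

```python
# Minimal sketch of an early-warning monitor for a deployed model.
# Everything here (names, window size, tolerance) is a hypothetical
# example, not a specific product's API.
from collections import deque

class PerformanceMonitor:
    def __init__(self, baseline_accuracy, tolerance=0.05, window=100):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)  # recent correct/incorrect flags

    def record(self, prediction, actual):
        """Log whether the latest prediction matched the ground truth."""
        self.outcomes.append(prediction == actual)

    def check(self):
        """Return a warning string if rolling accuracy has drifted
        below the validated baseline, else None."""
        if len(self.outcomes) < self.outcomes.maxlen:
            return None  # not enough recent data to judge yet
        accuracy = sum(self.outcomes) / len(self.outcomes)
        if accuracy < self.baseline - self.tolerance:
            return (f"Warning: rolling accuracy {accuracy:.2f} is below "
                    f"baseline {self.baseline:.2f}")
        return None

# Usage: monitor = PerformanceMonitor(baseline_accuracy=0.90)
# After each labeled outcome: monitor.record(pred, actual), then
# act on any message returned by monitor.check().
```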

Cathy’s twin daughters, like any other children, will always push to test their limits. Companies will similarly want to push the limits of how they can utilize new technologies like AI. Your role in both cases is to make sure that happens in a safe environment, understanding the limitations at each step and putting the appropriate safeguards in place. As a business or technical leader, you will play a big role in the sustainable development of AI in your company. Establishing a strong culture of responsibility in your organization will be critical to develop AI that can be trusted by your employees and your customers, and ultimately to contribute to the societal impact of AI.

I won’t cover how to ride a bike in this book, but in the next chapter you will learn how to set up the AI training wheels for your company. First, I will cover some key principles you should consider for AI development. Then, you will learn key practices to put those principles to work in your governance processes.
