Chapter 1. Contemporary Model Governance

Complex systems tend to drift toward unsafe conditions unless constant vigilance is maintained.

 — Closing the AI Accountability Gap, Google Research

Building the best AI system starts with cultural competencies and business processes. Along with a case that illustrates what happens when an AI system is built without proper rigor, this chapter presents numerous cultural and procedural approaches you can use to improve AI performance and safeguard your organization’s AI against real-world safety and performance problems. The primary goal of the methodologies discussed in this chapter is to create better AI systems. This might mean improved in silico performance on test data, but it really means training models that perform as expected once deployed in vivo, so you don’t lose money, hurt people, or cause other harms.

This chapter begins with a discussion of basic legal standards, to inform system developers of their fundamental obligations when it comes to safety and performance. Because those who do not study history are doomed to repeat it, the chapter then highlights AI incidents and discusses why understanding them is important for proper safety and performance in AI systems. Since many AI safety concerns require thinking beyond technical specifications, the chapter then blends model risk management (MRM) and information technology (IT) security best practices to put forward numerous ideas for improving AI safety culture and processes within organizations. The chapter closes with a case study focusing on safety culture, legal ramifications, and AI incidents.

Basic Legal Obligations

As makers of consumer products, data scientists and ML engineers have a fundamental obligation to create safe systems. To quote a recent Brookings Institution report, Products liability law as a way to address AI harms, “Manufacturers have an obligation to make products that will be safe when used in reasonably foreseeable ways. If an AI system is used in a foreseeable way and yet becomes a source of harm, a plaintiff could assert that the manufacturer was negligent in not recognizing the possibility of that outcome.” Just like car or power tool manufacturers, makers of AI systems are subject to broad legal standards for negligence and safety. Product safety has been the subject of a large amount of legal and economic analysis, but this subsection will focus on one of the first and simplest standards for negligence: the Hand Rule. Named after Judge Learned Hand and articulated in 1947, the rule provides a viable framework for AI product makers to think about negligence and due diligence. The Hand Rule says that a product maker takes on a burden of care, and that such care should always be greater than the probability that an incident involving the product occurs multiplied by the expected loss related to an incident. Stated algebraically:

B > P × L

where B is the burden of care taken by the product maker, P is the probability of an incident, and L is the expected loss related to an incident.

In plainer terms, organizations are expected to apply care (i.e., time, resources, or money) at a level commensurate with the cost associated with a risk. Otherwise, legal liability can ensue.

In Figure 1-1, Burden is the parabolically increasing curve, and risk, or Probability multiplied by Loss, is the parabolically decreasing curve. While these curves are not tied to a specific measurement, their shape is meant to reflect the last-mile problem of removing all AI system risk: beyond a reasonable threshold, applying additional care yields diminishing returns in risk reduction.

Figure 1-1. An illustration of the Hand Rule. Adapted from Economic Analysis of Alternative Standards of Liability in Accident Law.

While it’s probably too resource-intensive to calculate the quantities in the Hand Rule exactly, it is important to think about these concepts of negligence and liability when designing an AI system. For a given AI system, if the probability of an incident is high, if the monetary or other loss associated with an incident is large, or both, your organization needs to spend extra resources on ensuring safety for that system. Moreover, your organization should document, to the best of its ability, that its due diligence exceeds the estimated failure probabilities multiplied by the estimated losses.
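
To make the comparison concrete, here is a minimal sketch of a Hand Rule check in Python. The dollar figures and the probability are hypothetical placeholders, not estimates from any real system; the point is simply comparing the burden of care against probability times loss.

```python
# A minimal sketch of a Hand Rule check for an AI system, using
# hypothetical estimates that only illustrate the comparison B > P * L.

def hand_rule_gap(burden_of_care: float,
                  incident_probability: float,
                  expected_loss: float) -> float:
    """Return the margin between care applied and expected incident cost.

    A negative margin means the burden of care falls below P * L,
    i.e., a potential negligence concern under the Hand Rule.
    """
    return burden_of_care - incident_probability * expected_loss


# Hypothetical example: $50,000 per year spent on testing and monitoring,
# a 2% annual chance of a serious incident, and a $5M estimated loss.
margin = hand_rule_gap(burden_of_care=50_000,
                       incident_probability=0.02,
                       expected_loss=5_000_000)

if margin < 0:
    print(f"Care falls short of expected incident cost by ${-margin:,.0f}")
else:
    print(f"Care exceeds expected incident cost by ${margin:,.0f}")
```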

Of course, there are legal considerations beyond product liability. The United States (US) Federal Trade Commission’s (FTC) recent rounds of AI guidance are also important for safety and performance. The FTC is urging organizations deploying AI to prioritize fairness, transparency, accountability, and mathematical soundness. While many of those subjects are better suited for other chapters, accountability is crucial for safety and performance. In this context, accountability often means holding yourself to independent standards and allowing for independent oversight. The FTC has also been crystal clear about deceptive practices: AI cannot be used to deceive consumers, or you may face serious enforcement activities. When the FTC found that the photo-sharing app Everalbum was being used to collect training data for a facial recognition system operating under a parallel line of business named Paravision, it forced the deletion of the facial recognition system and levied orders to prevent revenue generation based on the deceptive practices.

Much like the EU General Data Protection Regulation (GDPR) has changed the way companies handle data in the US, any EU AI regulations will likely have an outsized impact on US AI deployments. The EU has, in fact, recently proposed sweeping and wide-ranging AI regulations. These regulations cover nearly every aspect of the commercial use of AI, and for safety and performance they mandate risk-tiering, with differing levels of system documentation, quality management, and monitoring based on the risk determination. The remainder of this chapter and much of Chapter 15 provide helpful information on addressing these requirements and more.

AI Incidents

In many ways, the fundamental goal of the AI safety processes and related model debugging discussed in Chapter 15 is to prevent and mitigate AI incidents. Here, we’ll loosely define AI incidents as any outcome of the system that could cause harm. Using the Hand Rule as a guide, the severity of an AI incident is increased by the loss the incident causes and decreased by the care taken by the operators to mitigate those losses.

Figure 1-2. A basic taxonomy of AI incidents. Adapted from What to Do When AI Fails.

Because complex systems drift toward failure, there is no shortage of AI incidents to discuss as examples. AI incidents can range from annoying to deadly: from mall security robots falling down stairs, to self-driving cars killing pedestrians, to mass-scale diversion of healthcare resources away from those who need them most. As pictured in Figure 1-2, AI incidents can be roughly divided into three buckets.

  • Abuses: AI can be used for nefarious purposes, apart from specific hacks and attacks on AI systems. The day may already have come when hackers use AI to increase the efficiency and potency of their more general attacks. What the future could hold is even more frightening: specters like autonomous drone attacks and ethnicity profiling by authoritarian regimes are already on the horizon.

  • Attacks: Examples of all major types of attacks (confidentiality, integrity, and availability attacks; see Chapter 13 for more information) have been published by researchers. Confidentiality attacks involve the exfiltration of training data or model logic from AI system endpoints. Integrity attacks include adversarial manipulation of training data or model outcomes, whether through adversarial examples, evasion, impersonation, or poisoning. Availability attacks can be conducted through more standard denial-of-service approaches, or via algorithmic discrimination induced by an adversary to deny system services to certain groups of users.

  • Failures: AI system failures tend to involve algorithmic discrimination, safety and performance lapses, data privacy violations, inadequate transparency, or problems in third-party system components.

AI incidents are a reality. And like the systems from which they arise, AI incidents can be complex. AI incidents have multiple causes: failures, attacks, and abuses. They also tend to blend traditional notions of computer security with concerns like data privacy and algorithmic discrimination.

The 2016 Tay chatbot incident is an informative example. Tay was a state-of-the-art chatbot trained by some of the world’s leading experts at Microsoft Research for the purpose of interacting with people on Twitter to increase awareness about AI. Sixteen hours after its release, and 96,000 tweets later, Tay had spiraled into a neo-Nazi pornographer and had to be shut down. What happened? Twitter users quickly learned that Tay’s adaptive learning system could easily be poisoned. Racist and sexual content tweeted at the bot was quickly incorporated into its training data, and just as quickly resulted in offensive output. Data poisoning is an integrity attack, but due to the context in which it was carried out, this attack resulted in algorithmic discrimination. It’s also important to note that Tay’s designers, being world-class experts at an extremely well-funded research center, seem to have put some guardrails in place: Tay would respond to certain hot-button issues with pre-canned responses. But that was not enough, and Tay devolved into a public security and algorithmic discrimination incident for Microsoft Research.

Likely because of nothing more than silly hype, Tay was released without countermeasures for attacks, and due to the complexity of its operating environment this security breach morphed into a large-scale algorithmic discrimination incident. Think this was a one-off incident? Wrong. Just recently, again due to hype and a failure to think through performance, safety, and security risks systematically, many of Tay’s most obvious failures were repeated in ScatterLab’s release of its Lee Luda chatbot. When designing AI systems, plans should be compared to past known incidents in the hope of preventing similar future incidents. This is precisely the point of recent AI incident database efforts and associated publications.

AI incidents can also be an apolitical motivator for responsible technology development. For better or worse, cultural and political viewpoints on topics like algorithmic discrimination and data privacy can vary widely. Getting a team to agree on ethical considerations can be very difficult. It might be easier to get them working to prevent embarrassing and potentially costly or dangerous incidents, which should be a baseline goal of any serious data science team. The notion of AI incidents is central to understanding AI safety, and a central theme of this chapter is the cultural competencies and business processes that can be used to prevent and mitigate AI incidents. We’ll dig into those mitigants in the next sections and take a deep dive into a real incident to close the chapter.

Organizational and Cultural Competencies for Responsible AI

An organization’s culture is an essential aspect of responsible AI. This section discusses cultural competencies like accountability, drinking your own champagne, domain expertise, and the stale adage, “go fast and break things.”

Accountability

A key to the successful mitigation of AI risks is real accountability within organizations for AI incidents. If no one’s job is at stake when an AI system fails, gets attacked, or is abused for nefarious purposes, then it’s entirely possible that no one in that organization really cares about AI safety and performance. In addition to developers who think through risks and apply software quality assurance (QA) techniques and model debugging methods, organizations need individuals or teams who validate AI system technology and audit associated processes. Organizations also need someone to be responsible for AI incident response plans. All of this is why leading financial institutions, whose use of predictive modeling has been regulated for decades, employ a practice known as model risk management (MRM). MRM is patterned off the Federal Reserve’s SR 11-7 model risk management guidance, which arose out of the financial crisis during the Great Recession. Notably, implementation of MRM often involves accountable executives and several teams that are responsible for the safety and performance of models and AI systems.

Leadership and Teams: Chief Model Risk Officer and the Three Lines of Defense

Implementation of MRM standards usually requires several different teams and executive leadership. The following key tenets form the cultural backbone of MRM:

  • Effective Challenge: Effective challenge dictates that personnel who did not build an AI system perform validation and auditing of such systems. MRM practices typically distribute effective challenge across three “lines of defense,” where system developers make up the first line of defense and independent technical validators and process auditors make up the second and third lines, respectively.

  • Accountable Leadership: A specific executive within an organization should be accountable for ensuring that AI incidents do not happen. This position is often referred to as the chief model risk officer (CMRO). It’s also not uncommon for the CMRO’s terms of employment and compensation structure to be linked to AI system performance. The role of CMRO offers a very straightforward cultural check on AI safety and performance: when your boss really cares about AI system safety and performance, you start to care too.

  • Incentives: Data science staff and management must be incentivized to implement AI responsibly. Often, compressed product timelines incentivize the creation of a minimum viable product first, with rigorous testing and remediation relegated to the end of the model life cycle, immediately before deployment to production. Moreover, AI testing and validation teams are often evaluated by the same criteria as AI development teams, leading to a fundamental misalignment in which testers and validators are encouraged to move quickly rather than assure quality. Aligning timeline, performance evaluation, and pay incentives with team function helps solidify a culture of responsible AI and risk mitigation.

Of course, small or young organizations may not be able to spare an entire full-time employee to monitor AI system risk. But it’s important to have an individual or group responsible, and held accountable, if AI systems cause incidents. If an organization assumes everyone is accountable for ML risk and AI incidents, the reality is that no one is accountable.

Cultural effective challenge

Whether or not your organization is ready to adopt full-blown MRM practices, you can still benefit from certain aspects of MRM. In particular, the cultural competency of effective challenge can be applied outside of the MRM context. At its core, effective challenge means actively challenging and questioning steps in the development of AI systems. An organizational culture that encourages serious questioning of AI system designs will be more likely to develop effective AI systems or products, and to catch problems before they explode into harmful incidents. Note that effective challenge cannot be abusive, and it must apply equally to all personnel developing an AI system, especially so-called “rockstar” engineers and data scientists. Effective challenge should also be structured, for example as weekly meetings where current design thinking is questioned and alternative design choices are considered.

Drinking Your Own Champagne

Also known as “eating your own dog food,” the practice of drinking your own champagne refers to using your own software or products inside your own organization. Often a form of pre-alpha or pre-beta testing, drinking your own champagne can identify problems that emerge from the complexity of real-world deployment environments before bugs and failures affect customers, users, or the general public. Because serious issues like concept drift, algorithmic discrimination, shortcut learning, or underspecification are notoriously difficult to identify using standard ML development processes, drinking your own champagne provides a limited and controlled, but also realistic, test bed for AI systems. Of course, when organizations employ demographically and professionally diverse teams, including domain experts in the field where the AI system will be deployed, drinking your own champagne is more likely to catch a wider variety of problems. Drinking your own champagne also brings the classic Golden Rule into AI: if you’re not comfortable using a system on yourself or your own organization, then you probably shouldn’t deploy that system.

Diverse and Experienced Teams

Diverse teams can bring wider and uncorrelated perspectives to bear on designing, developing, and testing AI systems. Non-diverse teams often do not. Many have documented the unfortunate outcomes that can arise when data scientists fail to consider demographic diversity in the training data or results of AI systems. A potential solution to these kinds of oversights is increasing demographic diversity on AI teams from its current woeful levels. Business or other domain experience is also important when building teams. Domain experts are instrumental in feature selection and engineering, and in the testing of system outputs. In the mad rush to develop AI systems, domain expert participation can also serve as a safety check. Generalist data scientists often lack the experience necessary to deal with domain-specific data and results. Misunderstanding the meaning of input data or output results is a recipe for disaster that can lead to AI incidents when a system is deployed. Unfortunately, the social sciences deserve a special emphasis when it comes to data scientists forgetting or ignoring the importance of domain expertise. In a trend referred to as “tech’s quiet colonization of the social sciences,” several organizations have pursued regrettable AI projects that seek to replace decisions that should be made by trained social scientists, or that simply ignore the collective wisdom of social science domain expertise altogether.

“Going Fast and Breaking Things”

The mantra “go fast and break things” is almost a religious belief for many “rockstar” engineers and data scientists. Sadly, these top practitioners also seem to forget that when they go fast and break things, things get broken. As AI systems make more high-impact decisions involving autonomous vehicles, credit, employment, grades and university attendance, medical diagnoses and resource allocation, mortgages, pretrial bail, parole, and more, breaking things means more than buggy apps. It can mean that a small group of data scientists and engineers causes real harm at scale to many people. Participating in the design and implementation of high-impact AI systems requires a mindset change to prevent egregious performance and safety problems: practitioners must shift from prioritizing the number of software features they can push, or the test data accuracy of an ML model, to recognizing the implications and downstream risks of their work.

Organizational Processes for Responsible AI

Organizational processes play a key role in assuring AI systems are safe and performant. Like the cultural competencies discussed in the previous section, organizational processes are a key non-technical determinant of reliability in AI systems. This section on processes starts out by urging practitioners to consider, document, and attempt to mitigate any known or foreseeable failure modes for their AI systems. This section then discusses a mature and tested process framework for governing predictive models known as model risk management (MRM). While the culture section focused on the people and mindsets necessary to make MRM a success, this section will outline the different processes MRM uses to mitigate risks in advanced predictive modeling and ML systems. While MRM is an incredible process standard to which we can all aspire, there are additional important process controls that are not typically part of MRM. We’ll look beyond traditional MRM in this section and highlight crucial processes for change management, pair or double programming, and security permission requirements for code deployment. This section will close with a discussion of AI incident response. Nearly all powerful commercial technologies suffer incidents. AI is no different. No matter how hard we work to minimize harms while designing and implementing an AI system, we still have to prepare for failures and attacks.

Forecasting Failure Modes

AI safety and ethics experts roughly agree on the importance of thinking through, documenting, and attempting to mitigate foreseeable failure modes for AI systems. Unfortunately they also mostly agree that this is a nontrivial task. Happily, new resources and scholarship on this topic have emerged in recent years that can help AI system designers forecast incidents in more systematic ways. If holistic categories of potential failures can be identified, it makes hardening AI systems for better real-world performance and safety a more pro-active and efficient task. In this subsection, we’ll discuss one such strategy, along with a few additional processes for brainstorming future incidents in AI systems.

Known Past Failures

As discussed in Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database, one of the most efficient ways to mitigate potential AI incidents in your AI systems is to compare your system design to past failed designs. Much like transportation professionals, who investigate and catalog incidents and use the findings to prevent related incidents and to test new technologies, several AI researchers, commentators, and trade organizations have begun to collect and analyze AI incidents in hopes of preventing repeated and related failures. Likely the most high-profile and mature AI incident repository is the Partnership on AI’s AI Incident Database. This searchable and interactive resource allows registered users to search a visual database with keywords and locate different types of information about publicly recorded incidents. Others have begun collecting AI incidents in simpler GitHub repositories.

Consult these resources while developing your AI systems. If you see something that looks familiar, stop and think about what you’re doing. If a system similar to the one you’re designing, implementing, or deploying has caused an incident in the past, this is one of the strongest indicators that your new system could cause an incident.
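
As a rough illustration of this kind of screening, the sketch below matches a new system’s keywords against a small, hypothetical set of past incident records. The records and keywords are invented for the example, not drawn from the AI Incident Database or any other real repository.

```python
# A minimal sketch of screening a new system description against a
# locally collected list of past incident summaries. The records here
# are hypothetical placeholders, not entries from any actual database.

past_incidents = [
    {"id": 1, "keywords": {"chatbot", "adaptive learning", "social media"}},
    {"id": 2, "keywords": {"facial recognition", "consumer photos"}},
    {"id": 3, "keywords": {"autonomous vehicle", "pedestrian detection"}},
]

def related_incidents(system_keywords: set) -> list:
    """Return IDs of past incidents sharing any keyword with the new system."""
    return [rec["id"] for rec in past_incidents if rec["keywords"] & system_keywords]

print(related_incidents({"chatbot", "customer support"}))  # [1]
```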

Failures of Imagination

Imagining the future with context and detail is never easy, and it’s often the context in which AI systems operate, accompanied by unforeseen or unknowable details, that leads to AI incidents. In a recent workshop paper, the authors of Overcoming Failures of Imagination in AI Infused System Development and Deployment put forward some structured approaches to hypothesize about those hard-to-imagine future risks. In addition to deliberating on the who (e.g., investors, customers, vulnerable non-users), what (e.g., well-being, opportunities, dignity), when (e.g., immediately, frequently, over long periods of time), and how (e.g., taking an action, altering beliefs) of AI incidents, they also urge system designers to consider:

  • Assumptions that the impact of the system will be only beneficial, and admitting when uncertainty about system impacts exists.

  • The problem domain and applied use cases of the system, as opposed to just the math and technology.

  • Any unexpected or surprising results, user interactions, and responses to the system.

Causing AI incidents is embarrassing, if not costly or illegal, for organizations. AI incidents can also hurt consumers and the general public. Yet, with some foresight, many of the currently known AI incidents could have been mitigated, if not wholly avoided. It’s also possible that in performing the due diligence of researching and conceptualizing AI failures, you find that your design or system must be completely reworked. If this is the case, take comfort that a delay in system implementation or deployment is likely less costly than the harms your organization or the public could experience if the flawed system were released.

Model Risk Management

The process aspects of MRM mandate thorough documentation of modeling systems, human review of systems, and ongoing monitoring of systems. These processes represent the bulk of the governance burden of the Federal Reserve’s SR 11-7 MRM guidance, which is overseen by the Federal Reserve and the Office of the Comptroller of the Currency (OCC) for predictive models deployed in material consumer finance applications. MRM represents the culmination of decades of predictive modeling governance in consumer finance, and of lessons learned from incidents during those same years. While only large organizations will be able to fully embrace all that MRM has to offer, any serious AI practitioner can learn something from the discipline. The subsections below break MRM processes down into smaller components so that you can start thinking through using aspects of MRM in your organization.

Risk-tiering

As outlined in the opening of this chapter, the product of the probability of a harm occurring and the likely loss resulting from that harm is an accepted way to rate the risk of a given AI system deployment. This product of probability and loss has a more formal name in the context of MRM: materiality. Materiality is a powerful concept that enables organizations to assign realistic risk levels to AI systems. More importantly, this risk-tiering allows for the efficient allocation of limited development, validation, and audit resources. Of course, the highest-materiality applications should receive the greatest human attention and review, while the lowest-materiality applications could potentially be handled by automatic machine learning (AutoML) systems and undergo minimal validation. Because risk mitigation for AI systems is an ongoing task, proper resource allocation between high-, medium-, and low-risk systems is a must for effective governance.
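
A minimal sketch of materiality-based risk tiering appears below. The systems, probabilities, losses, and tier thresholds are all hypothetical illustrations, not values prescribed by MRM guidance.

```python
# A minimal sketch of materiality-based risk tiering, assuming an
# organization can roughly estimate incident probability and loss for
# each system. Thresholds are arbitrary illustrations.

def materiality_tier(incident_probability: float, expected_loss: float) -> str:
    """Assign a coarse risk tier from the product of probability and loss."""
    materiality = incident_probability * expected_loss
    if materiality >= 1_000_000:
        return "high"      # full documentation, validation, and audit
    if materiality >= 100_000:
        return "medium"    # standard documentation and validation
    return "low"           # lighter-touch review and monitoring

# Hypothetical systems: (incident probability, expected loss in dollars)
systems = {
    "credit_underwriting_model": (0.05, 50_000_000),
    "marketing_churn_model": (0.10, 2_000_000),
    "internal_doc_search": (0.20, 10_000),
}

for name, (p, loss) in systems.items():
    print(name, materiality_tier(p, loss))
```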

Model Documentation

MRM standards also require that systems be thoroughly documented. In the traditional MRM setting, documentation covers:

  • Stakeholder contact information

  • System business justification

  • Mathematical assumptions and system usage limitations

  • Input data dictionary

  • Preprocessing and algorithm description

  • Discussion of evaluated alternative approaches

  • Output data dictionary

  • Completed testing and validation

  • Down- and upstream dependencies

  • Plans for ongoing improvements, testing, validation, and monitoring

Of course, these documents can be hundreds of pages long, especially for high-materiality systems. If that sounds impossible for your organization today, then two simpler requirements might work instead. First, documentation should enable accountability for system stakeholders, ongoing system maintenance, and a degree of incident response. Second, documentation must be standardized across systems, to allow for the most efficient audit and review processes. The proposed datasheet and model card standards may also be helpful for smaller or younger organizations.
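
One lightweight way to standardize documentation is to capture the fields listed above in a structured record. The sketch below is an illustrative assumption about how such a record might look, with invented example content; it is not a formal documentation standard.

```python
# A minimal sketch of a standardized documentation record, loosely
# following the MRM-style fields listed above. Field names and example
# content are illustrative assumptions, not a formal standard.

from dataclasses import dataclass, asdict
import json

@dataclass
class ModelDocumentation:
    stakeholders: dict            # name -> contact information
    business_justification: str
    assumptions_and_limitations: str
    input_data_dictionary: dict   # column -> description
    methodology: str              # preprocessing and algorithm description
    alternatives_considered: list
    output_data_dictionary: dict
    testing_and_validation: str
    dependencies: list            # down- and upstream dependencies
    monitoring_plan: str

doc = ModelDocumentation(
    stakeholders={"model owner": "owner@example.com"},
    business_justification="Prioritize manual review of incoming claims.",
    assumptions_and_limitations="Trained on 2020-2022 claims; US only.",
    input_data_dictionary={"claim_amount": "USD, float"},
    methodology="Gradient boosting on tabular claims features.",
    alternatives_considered=["logistic regression", "rules-based triage"],
    output_data_dictionary={"review_score": "0-1 probability of manual review"},
    testing_and_validation="Holdout AUC, stability, and bias testing.",
    dependencies=["claims intake API", "manual review queue"],
    monitoring_plan="Monthly drift and performance reports.",
)

print(json.dumps(asdict(doc), indent=2))
```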

Model Monitoring

A primary tenet of AI safety is that AI system performance in the real world is hard to predict, and therefore performance must be monitored. Hence, deployed system performance should be monitored frequently until a system is decommissioned. Systems can be monitored for any number of problematic conditions, the most common being input drift. While AI system training data encodes information about a system’s operating environment in a static snapshot, the world is anything but static. Competitors can enter markets, new regulations can be passed, consumer tastes can change, and pandemics or other disasters can happen. Any of these can shift the live data entering your AI system away from the characteristics of its training data, resulting in decreased, or even dangerous, system performance. To avoid such unpleasant surprises, the best AI systems are monitored both for drifting input and output distributions and for decaying quality, often known as model decay. While performance quality is the most common quantity to monitor, AI systems can also be monitored for anomalous inputs or predictions, specific attacks and hacks, and drifting fairness characteristics.
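
As a concrete example of input drift monitoring, the sketch below computes the population stability index (PSI), one common drift metric, for a single simulated feature. The data are invented, and the 0.10/0.25 alert thresholds are conventional rules of thumb used here for illustration only.

```python
# A minimal sketch of input drift monitoring using the population
# stability index (PSI). Simulated data and thresholds are illustrative.

import numpy as np

def psi(train_values: np.ndarray, live_values: np.ndarray, bins: int = 10) -> float:
    """Compare the live distribution of a feature to its training distribution."""
    edges = np.histogram_bin_edges(train_values, bins=bins)
    expected, _ = np.histogram(train_values, bins=edges)
    actual, _ = np.histogram(live_values, bins=edges)
    # Convert counts to proportions, avoiding log-of-zero issues.
    expected = np.clip(expected / expected.sum(), 1e-6, None)
    actual = np.clip(actual / actual.sum(), 1e-6, None)
    return float(np.sum((actual - expected) * np.log(actual / expected)))

rng = np.random.default_rng(0)
train_income = rng.lognormal(mean=10.5, sigma=0.4, size=10_000)
live_income = rng.lognormal(mean=10.7, sigma=0.5, size=2_000)  # drifted

score = psi(train_income, live_income)
if score > 0.25:
    print(f"PSI={score:.3f}: significant drift, investigate before trusting outputs")
elif score > 0.10:
    print(f"PSI={score:.3f}: moderate drift, monitor closely")
else:
    print(f"PSI={score:.3f}: stable")
```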

Model Inventories

Any organization that is deploying AI should be able to answer straightforward questions like:

  • How many AI systems are currently deployed?

  • How many customers or users do these systems affect?

  • Who are the accountable stakeholders for each system?

MRM achieves this goal through the use of model inventories. A model inventory is a curated and up-to-date database of all of an organization’s AI systems. Model inventories can serve as a repository for crucial documentation, but should also link to monitoring plans and results, auditing plans and results, important past and upcoming system maintenance and changes, and plans for incident response.
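
A minimal sketch of a model inventory, and of answering the questions above from it, might look like the following. The systems, user counts, and owners are hypothetical; a production inventory would typically live in a database with links to documentation, monitoring, and audit artifacts.

```python
# A minimal sketch of a model inventory as a simple table. All entries
# are hypothetical placeholders.

import pandas as pd

inventory = pd.DataFrame([
    {"system": "credit_underwriting_model", "status": "deployed",
     "users_affected": 250_000, "accountable_owner": "VP, Credit Risk"},
    {"system": "marketing_churn_model", "status": "deployed",
     "users_affected": 1_200_000, "accountable_owner": "Director, Marketing"},
    {"system": "fraud_prototype", "status": "development",
     "users_affected": 0, "accountable_owner": "Lead Data Scientist, Fraud"},
])

deployed = inventory[inventory["status"] == "deployed"]
print("Deployed systems:", len(deployed))
print("Users affected:", deployed["users_affected"].sum())
print(deployed[["system", "accountable_owner"]].to_string(index=False))
```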

System Validation and Auditing

Under traditional MRM practices, an AI system undergoes two primary reviews before its release. The first review is a technical validation of the system, in which skilled validators, not uncommonly PhD data scientists, attempt to poke holes in the system design and implementation and work with system developers to fix any discovered problems. The second review investigates processes. Audit and compliance personnel carefully analyze the system design, development, and deployment, along with documentation and future plans, to ensure that all regulatory and internal process requirements are met. Only after these two reviews does executive sign-off for deployment take place. Moreover, because AI systems change and drift over time, reviews must take place whenever a system undergoes a major update, or at an agreed-upon future cadence.

You may be thinking (again) that your organization doesn’t have the resources for such heavy-handed reviews. Of course, that is a reality for many small or young organizations. The keys to validation and auditing, which should work at nearly any organization, are having technicians who did not develop the system test it, having a function to review nontechnical internal and external obligations, and having sign-off oversight for important AI system deployments.

Beyond Model Risk Management

MRM is not the only place to draw inspiration for improved AI safety and performance processes. There are also lots of lessons to be learned from software development best practices and from IT security. This subsection will shine a light on pair programming, least privilege, change management and incident response from an AI safety and performance perspective.

Pair and Double Programming

Because they tend to be complex and stochastic, it’s hard to know whether any given ML algorithm implementation is correct. This is why some leading AI organizations implement ML algorithms twice as a quality assurance (QA) mechanism. Such double implementation is usually achieved by one of two methods: pair programming or double programming. In the pair programming approach, two technical experts code an algorithm without collaborating, then join forces and work out any discrepancies between their implementations. In double programming, the same practitioner implements the same algorithm twice, but in very different programming languages, such as Python (object-oriented) and SAS (procedural), and must then reconcile any differences between the two implementations. Either approach tends to catch numerous bugs that would otherwise go unnoticed until the system was deployed. Pair and double programming can also align with the more standard workflow of data scientists prototyping algorithms while dedicated engineers harden them for deployment. However, for this to work, engineers must be free to challenge and test data science prototypes, not relegated to simply recoding prototypes.
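
The sketch below illustrates the reconciliation step of double implementation with a deliberately simple example: ordinary least squares implemented once by hand with NumPy and once with scikit-learn, then compared. The simulated data and the tolerance are illustrative choices; real reconciliation would cover the full pipeline, not just coefficients.

```python
# A minimal sketch of reconciling two independent implementations of the
# same algorithm. Any material disagreement is a signal to debug before
# deployment.

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 3))
y = X @ np.array([1.5, -2.0, 0.7]) + rng.normal(scale=0.1, size=500)

# Implementation 1: closed-form least squares with an explicit intercept column.
X1 = np.column_stack([np.ones(len(X)), X])
coefs_manual, *_ = np.linalg.lstsq(X1, y, rcond=None)

# Implementation 2: library implementation.
model = LinearRegression().fit(X, y)
coefs_library = np.concatenate([[model.intercept_], model.coef_])

assert np.allclose(coefs_manual, coefs_library, atol=1e-6), \
    "Implementations disagree; investigate before release"
print("Implementations agree:", np.round(coefs_library, 3))
```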

Security Permissions for Code Deployment

The concept of least privilege from IT security states that no system user should ever have more permissions than they need. Least privilege is a fundamental process control that, likely because AI systems touch so many other IT systems, tends to be thrown out the window for AI build-outs and for so-called “rockstar” data scientists. Unfortunately, this is an AI safety and performance anti-pattern. Outside the world of over-hyped AI and rockstar data science, it’s long been understood that engineers cannot adequately test their own code and that others in a product organization, such as product managers, attorneys, or executives, should make the final call as to when software is released.

For these reasons, the IT permissions necessary to deploy an AI system should be distributed across several teams within an IT organization. During development sprints, data scientists and engineers certainly must retain full control over their development environments. But as important releases or reviews approach, the IT permissions to push fixes, enhancements, or new features to user-facing products should be transferred away from data scientists and engineers to product managers, legal, executives, or others. Such process controls provide a gate that prevents unapproved code from being deployed.
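
As a rough sketch of such a gate, the function below refuses deployment unless sign-offs from a set of non-developer roles are present. The role names and approval flow are hypothetical, not a reference to any particular CI/CD product or policy standard.

```python
# A minimal sketch of a deployment gate that enforces separation of
# duties. Roles and approval logic are illustrative assumptions.

REQUIRED_APPROVER_ROLES = {"product_manager", "legal", "model_validator"}

def can_deploy(approvals: dict, requested_by: str) -> bool:
    """approvals maps approver name -> role; requested_by is the deploying engineer."""
    # Ignore any self-approval by the person requesting the deployment.
    roles = {role for name, role in approvals.items() if name != requested_by}
    return REQUIRED_APPROVER_ROLES.issubset(roles)

approvals = {"pm_jane": "product_manager", "counsel_raj": "legal"}
print(can_deploy(approvals, requested_by="ds_alex"))  # False: validator sign-off missing

approvals["validator_kim"] = "model_validator"
print(can_deploy(approvals, requested_by="ds_alex"))  # True
```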

Change Management

Like all complex software applications, AI systems tend to have a large number of different components. From backend ML code, to application programming interfaces (APIs), to user interfaces, changes in any component of the system can cause side-effects in other components. Add in issues like data drift, emergent data privacy and anti-discrimination regulations, and complex dependencies on third-party software, and change management in AI systems becomes a serious concern. There are many frameworks and project management approaches for change management. If you’re in the planning or design phase of a mission-critical AI system, you’ll likely need to make change management a first-class process control. Without explicit planning and resources for change management, process or technical mistakes that arise through the evolution of the system, like using data without consent or API mismatches, are very difficult to prevent. Furthermore, without change management, such problems might not even be detected until they cause an incident.

AI Incident Response

According to the vaunted SR 11-7 guidance, “even with skilled modeling and robust validation, model risk cannot be eliminated.” If risks from AI systems and ML models cannot be eliminated, then such risks will eventually lead to incidents. Incident response is already a mature practice in the field of computer security. Venerable institutions like NIST and SANS have published computer security incident response guidelines for years. Given that AI is a less mature and higher-risk technology than general-purpose enterprise computing, formal AI incident response plans and practices are a must for high-impact or mission-critical AI systems.

Formal AI incident response plans enable organizations to respond more quickly and effectively to inevitable incidents. Incident response also plays into the Hand Rule discussed at the beginning of this chapter: with rehearsed incident response plans in place, organizations may be able to identify, contain, and eradicate AI incidents before they spiral into costly or dangerous public spectacles. Although mandated by regulation in only a few specific verticals as of today, AI incident response plans are one of the most basic and universal ways to mitigate AI-related risks. Before a system is deployed, incident response plans should be drafted and tested. For young or small organizations that cannot fully implement model risk management, AI incident response is a primary and potent AI risk control to consider. Borrowing from computer incident response, AI incident response can be thought of in six phases:

Phase 1: Preparation

In addition to clearly defining an AI incident for your organization, preparation for AI incidents includes personnel, logistical, and technology plans for when an incident occurs. Budget must be set aside for response, communication strategies must be put in place, and technical safeguards for standardizing and preserving model documentation, maintaining out-of-band communications, and shutting down AI systems must be implemented. One of the best ways to prepare and rehearse for AI incidents is tabletop discussion exercises, where key organizational personnel work through a realistic incident. Good starter questions for an AI incident tabletop exercise include:

  • Who has the organizational budget and authority to respond to an AI incident?

  • Can the AI system in question be taken offline? By whom? At what cost? What upstream processes will be affected?

  • Which regulators or law enforcement agencies need to be contacted? Who will contact them?

  • Which external law firms, insurance agencies, or public relations firms need to be contacted? Who will contact them?

  • Who will manage communications? Internally, between responders? Externally, with customers or users?

Phase 2: Identification

Identification is when organizations spot AI failures, attacks, or abuses. In practice, this tends to involve more general attack identification approaches, like network intrusion monitoring, and more specialized monitoring for AI system failures, like monitoring for concept drift or algorithmic discrimination. Identification also means staying vigilant for AI-related abuses. Often the last step of the identification phase is to notify management, incident responders, and others specified in incident response plans.

Phase 3: Containment

Containment refers to mitigating the incident’s immediate harms. Keep in mind that harms are rarely limited to the system where the incident began. Like more general computer incidents, AI incidents can have network effects that spread throughout an organization’s and its customers’ technologies. Actual containment strategies will vary depending on whether the incident stemmed from an external adversary, an internal failure, or an off-label use or abuse of an AI system. If necessary, containment is also a good time to start communicating with the public.

Phase 4: Eradication

Eradication involves remediating any affected systems: for example, sealing off attacked systems from vectors of infiltration or exfiltration, or shutting down a discriminatory AI system and temporarily replacing it with a trusted rule-based system. After eradication, there should be no new harms caused by the incident.

Phase 5: Recovery

Recovery means ensuring all affected systems are back to normal and that controls are in place to prevent similar incidents in the future. Recovery often means re-training or re-implementing AI systems, and testing that they are performing at documented pre-incident levels. Recovery can also require careful analysis of technical or security protocols for personnel, especially in the case of an accidental failure or insider attack.

Phase 6: Lessons Learned

Lessons learned refers to corrections or improvements of AI incident response plans based on the successes and challenges encountered while responding to the current incident. Response plan improvements can be process- or technology-oriented.

For a sneak peek at a free and open AI incident response plan, see the Sample Incident Response Plan provided by the specialty law firm bnh.ai.

Case Study: Death by Autonomous Vehicle

On the night of March 18, 2018, Elaine Herzberg was walking a bicycle across a wide intersection in Tempe, Arizona. In what has become one of the most high-profile AI incidents, she was struck by an autonomous Uber test vehicle traveling at roughly 40 mph. According to the National Transportation Safety Board (NTSB), the test vehicle driver, who was obligated to take control of the vehicle in emergency situations, was distracted by a smartphone. The self-driving AI system also failed to save Ms. Herzberg: the system did not identify her until 1.2 seconds before impact, too late to prevent a brutal crash.

Fallout

Autonomous vehicles are thought to offer safety benefits over today’s status quo of human-operated vehicles, and self-driving test vehicles have logged millions of miles. Yet the NTSB’s report states that Uber’s “system design did not include a consideration for jaywalking pedestrians,” and criticizes lax risk assessments and an immature safety culture at the company. Furthermore, an Uber employee had raised serious concerns about 37 crashes in the previous 18 months and common problems with test vehicle drivers just days before the Tempe incident. As a result of the Tempe crash, Uber’s autonomous vehicle testing was stopped in four other cities, and governments around the US and Canada began re-examining safety protocols for self-driving vehicle tests. The driver was charged with negligent homicide. Uber was excused from criminal liability but came to a monetary settlement with the deceased’s family. The city of Tempe and the state of Arizona were also sued by Ms. Herzberg’s family for $10 million each.

An Unprepared Legal System

It must be noted that the legal system in the US is not yet prepared for the reality of AI incidents, leaving employees, consumers, and the general public largely unprotected from the unique dangers of AI systems operating in our midst. The EU Parliament has put forward a liability regime for AI systems that would mostly prevent large technology companies from escaping their share of the consequences in future incidents. In the US, any plans for federal AI product safety regulations are still in a highly preliminary phase. In the interim, individual cases of AI safety incidents will likely be decided by lower courts with little education and experience in handling AI incidents, enabling Big Tech and other AI system operators to bring vastly asymmetric legal resources to bear against individuals caught up in incidents related to complex AI systems. Even for companies and AI system operators, this legal limbo is not ideal. While the lack of regulation seems to benefit those with the most resources and expertise, it makes risk management and predicting the outcomes of AI incidents more difficult. Regardless, future generations may judge us harshly for allowing the criminal liability for one of the first deadly AI incidents, involving many data scientists and other highly paid professionals, to be pinned solely on the test driver of a supposedly automated vehicle.

Lessons Learned

What lessons from this chapter could be applied to this case?

  • Lesson 1: Culture is important. A mature safety culture is a broad risk control, bringing safety to the forefront of design and implementation work and picking up the slack in corner cases that processes and technology miss. Drawing on lessons from the last generation of life-changing commercial technologies, like commercial aviation and nuclear power, a more mature safety culture at Uber could have prevented this incident, especially since an employee raised serious concerns in the days before the crash.

  • Lesson 2: Mitigate foreseeable failure modes. The NTSB concluded that Uber’s software did not specifically consider jaywalking pedestrians as a failure mode. For anyone who’s driven a car with pedestrians around, this should have been an easily foreseeable problem for which any self-driving car should be prepared. AI systems generally are not prepared for incidents unless their human engineers make them prepared. This incident shows us what happens when those preparations are not made in advance.

  • Lesson 3: Test AI systems in their operating domain. After the crash, Uber stopped and reset its self-driving car program. After improvements, the company was able to show via simulation that its new software would have started braking four seconds before impact. Why wasn’t the easily foreseeable reality of jaywalking pedestrians tested with these same in-domain simulations before the March 2018 crash? The public may never know. But enumerating failure modes and testing them in realistic scenarios could prevent you or your organization from having to answer these kinds of unpleasant questions.

A potential bonus lesson here is to consider not only accidental failures, like the Uber crash, but also malicious hacks against AI systems and the abuse of AI systems to commit violence. Terrorists have turned motor vehicles into deadly weapons before, so this is a known failure mode. Precautions must be taken in autonomous vehicles, and in driving assistance features, to prevent hacking and violent outcomes in these systems. Regardless of whether it’s an accident or a malicious attack, AI incidents will certainly kill more people. Our hope is that governments and other organizations will take AI safety seriously and minimize the number of these somber incidents in the future.
