5
AI Ethics Statements That Actually Do Something

In chapter 1 we saw how ethics can be concrete: it’s not mushy, touchy-feely opinion at which everyone can shrug their shoulders and move on. Ethical beliefs are about the world, specifically, about what is right and wrong, good and bad, permissible and impermissible, and organizations can get ethics right or wrong. In chapters 2 through 4, we dug into the details of three complex ethical issues in AI: bias, explainability, and privacy. Through an understanding of those issues, we learned a variety of Structure from Content Lessons.

All of this and more must ultimately coalesce into an AI ethical risk program: an articulation of how a Structure in your organization gets created, scaled, and maintained to systematically and comprehensively identify and manage the ethical, reputational, regulatory, and legal risks of AI. But if your organization is like most, you (justifiably) see this as a big lift. You’re unlikely to put this book down and immediately get to work creating this program and the organizational and cultural change it entails.

Where to start, then? The standard first step in industry, among nonprofits, and even among countries, is to articulate ethical standards for AI development and deployment—a set of ethical values or principles. Sometimes they’re referred to as “AI Ethics Principles,” though increasingly, they’re considered the organization’s approach to “Responsible” or “Trustworthy” AI. Hundreds of organizations have put out such statements, and it’s a fine place to start. Articulating ethical standards is obviously important if you want guidance for the board, the C-suite, product owners and developers, and so on. You’ve got to start somewhere.

It isn’t necessary to dive into the details of any one particular AI ethics statement, for while the lists vary, just about all of them list a bunch of the following as their “principles” or “values”:

  • Fairness
  • Antidiscrimination
  • Transparency
  • Explainability
  • Respect
  • Accountability
  • Accuracy
  • Security
  • Reliability
  • Safety
  • Privacy
  • Beneficence
  • Human in the loop, human control, human oversight, human-centric design

The lists are usually accompanied with a sentence or two about what the organization means when it says it values these things. Here are some real-world examples:

  • Diversity, nondiscrimination, and fairness. The BMW Group respects human dignity and therefore sets out to build fair AI applications. This includes preventing noncompliance by AI applications.1
  • Transparent. We are transparent when a customer communicates with an AI and regarding our use of customer data.2
  • Data protection and privacy. Data protection and privacy are a corporate requirement and at the core of every product and service.3 We communicate clearly how, why, where, and when customer and anonymized user data is used in our AI software.4
  • Accountability. We will design AI systems that provide appropriate opportunities for feedback, relevant explanations, and appeal. Our AI technologies will be subject to appropriate human direction and control.5

I said that this is a good place to start. But here as elsewhere, execution is everything, and in my estimation, this is all a bit weak.

The problem isn’t that anything here is false or ethically problematic. It’s that such statements aren’t particularly helpful to an organization that is serious about integrating AI ethical risk mitigation standards into its AI ecosystem. I’ve had multiple companies reach out to me after they have “strong internal agreement on a set of AI ethics principles,” which always look something like the above, but have hit a wall when it comes to putting those principles into practice. How do you get from a statement to answering questions like, “Do we need an ethics committee?” “Should we tell our product developers to engage in ‘AI ethics by design’?” and ultimately, and always, “How do we turn this into key performance indicators (KPIs)?!”

But moving straight to talk of structure, process, and practice is premature. These companies are running into trouble because these kinds of lists—sometimes couched in the comforting corporate language of “frameworks”—suffer from a number of problems that prevent them from being helpful.

Four Problems with Standard AI Ethics Statements

1. They lump together Content and Structure

The distinction between Content and Structure roughly maps to the distinction between goals and strategies or tactics, and if you can’t distinguish between goals and strategies, not only are you confused, but you’re going to make bad decisions that lead to bad outcomes. You might, for instance, sacrifice a goal for a strategy because you don’t have a firm grip on these things to begin with.

Take “accountability,” for instance. It appears on almost everyone’s list. But being accountable or, rather, ensuring that particular people in product development and deployment are accountable for your AI’s impacts isn’t an ethical goal in itself. It’s a strategy to reach a different goal, which is to increase the probability you’ll deploy ethical AI. Making particular people accountable—by, say, assigning role-specific responsibilities—is a way to decrease the probability that things will fall through the cracks. And if you had some method that stopped things from falling through the cracks without holding anyone accountable, then you could stop holding people accountable. (Of course, there probably is no such method, but the thought experiment demonstrates that accountability is a strategy, not a goal.)

2. They lump together ethical and nonethical values

When you’re talking AI ethics, the values and principles you stand behind should be ethical in nature. But these lists include nonethical values.

Take, for instance, security and accuracy. The former refers to preventing and defending against various kinds of cyberattacks. The latter refers to the goals of the engineers training the AI models.

Including these things is of a piece with the slide I noted earlier, from talking about “AI ethics” to “Responsible” or “Trustworthy” AI. In principle, this is fine. In practice, it tends to push ethical concerns into the back seat. Here’s how a lot of conversations that I’ve had go:

“Do you have an AI ethical risk program?”

“Oh yes, we take responsible AI very seriously.”

“That’s great! What are you doing around it?”

“Well, we do a lot of testing and monitoring of our models, checking for data drift, overfitting, and so on. We’ve built security into the product development process, and we make sure our model outputs are explainable.”

“I see. So when it comes to AI ethics in particular, what are you doing?”

“AI ethics specifically? Um … I guess it’s just the explainability stuff, really.”

“Got it. So it sounds like your Responsible AI Program is primarily about how well your model functions from an engineering and security perspective, without a particular focus on, say, discriminatory outputs, privacy violations, and various ethical and reputational risks that can be realized in particular use cases with AI.”

“Yeah, I guess that’s right. We’re not really sure what to do with that stuff.”

The problem isn’t with a concept like “Responsible AI” or “Trustworthy AI.” It is absolutely essential, if organizations are to earn trust, that they protect their AI models, the data those models train on, and the data they output from cyberattacks, and that their developers create accurate models and reliable products. The problem is that lumping ethical, cybersecurity, and engineering issues into one pile of responsible development and deployment affects how people think and make decisions about where they are and where they need to be in their AI ethical risk journey. If we treat them instead as distinct issues that each need to be addressed, people will address each of them, rather than failing to notice that a high score on their Responsible AI metrics is primarily the result of overperforming on engineering metrics, not of (underperforming) ethical risk mitigation efforts.

Another pragmatic difficulty here is that, insofar as an AI Ethics Statement is the start of a more general AI ethical risk program, and it’s deployed effectively (more on this later), some senior member of the organization will have to own this program. That senior member will have authority over those people who ensure that the values are being operationalized. But it is rare for a single senior leader to own AI ethics and AI cybersecurity and AI engineering or product development. As a result, it makes for a more functional document if it speaks to a cohesive program for which a senior leader can be responsible. It should go without saying—or rather, it should be said elsewhere—that a senior leader like a chief information and security officer should work to protect their AI infrastructure against attack and that the chief data officer should work to ensure accurate and reliable models.

3. They lump together instrumental and noninstrumental values

When something is of instrumental value, it’s good because it gets you something else that’s good. Pens are of instrumental value because they help you to write things down, which is good for remembering things, which is good for getting things done you want to get done. Being famous has instrumental value because it allows you to get reservations at fancy restaurants. Voting has instrumental value because it’s a good way for the will of the people to be expressed and have a significant impact on how they will be governed. And so on.

Noninstrumental value is … well, it’s what it sounds like. Things that are of noninstrumental value are good in themselves, good for their own sakes, intrinsically good, and so on. What is noninstrumentally valuable? Ethicists disagree, but pleasure and pain are good candidates (“Why is pleasure good? Why is pain bad? I don’t know … they just are!”), as are happiness, a meaningful life, justice, and autonomy.

The relationship between instrumental and noninstrumental values is pretty clear: the noninstrumental values explain why these other things are of instrumental value. For example, we instrumentally value exercise because it is conducive to health. Health itself is also of instrumental value because it helps us to avoid sickness, which is painful. Why do we want to avoid the pain of sickness? Well, it’s not because it gets us some further thing. Rather, it’s just something we noninstrumentally disvalue.6 That said, some things are both instrumentally and noninstrumentally valuable. For example, living a great life is noninstrumentally valuable, and it’s instrumentally valuable because it inspires other people to lead great lives.

The distinction is important if we’re going to think clearly about the content of our AI ethical values. In particular, we don’t want to sacrifice what’s of noninstrumental value for something of instrumental value; that would usually be both ethically problematic and just kind of a boneheaded thing to do. Unfortunately, I’m afraid those standard AI ethics statements do it quite often. Consider, for instance:

  • Human in the loop, human-centric design, human oversight
  • Transparency
  • Explainability

Human in the loop. Having a human in the loop means that you have a human decision-maker standing between the outputs of an AI and the impactful decision that gets made. For instance, if the AI says, “Fire (on) that person,” you want a human taking that into consideration in deciding what to do and you don’t want the AI to automatically fire (on) that person. But why do you want a human in the loop? Presumably because you think having that kind of human (emotionally) intelligent, experienced oversight is important for preventing really bad outcomes. In other words, that person plays a certain function—to stop some bad things from happening when the AI goes awry. They are of instrumental value, at least in some cases, and having a human in the loop is not a goal all by itself. So, it would be odd to say that one of your values is having a human in the loop.

Think about what would happen if we found a method of AI oversight that was better than a human in the loop. Suppose, for instance, you had one AI that checks the outputs of another AI. And suppose the overseer AI were better at overseeing than a human (because, for instance, the human is too slow to keep up with the outputs of the AI being overseen). It would be counterproductive to your goal of ensuring, as best you can, ethically safe outputs to stick with the human over the superior AI. The danger in claiming that having a human in the loop is one of your values is that you might make that decision because you’ve confused a goal—ethical safety—with a strategy for attaining that goal, having a human in the loop.

Transparency. Transparency is about how openly and honestly you communicate with others, including, say, how clearly you communicate to users of your AI what data it collects, what the organization might do with that data, and the very fact that they’re engaging with an AI in the first place. Is being transparent an instrumental or noninstrumental value? Does it lead to good outcomes or is it good in itself?

As I see things, transparency is of instrumental value.

Transparency is part of a strategy for gaining trust by being trustworthy. Since acting ethically earns trust, and acting unethically destroys it, ensuring that your AI is deployed ethically is crucial to an overall effort to earn trust.

Your goal cannot simply be to get trust. Con men are very good at getting people’s trust. Being highly manipulative is compatible with getting people’s trust. But the fact that you can get people’s trust through nefarious means doesn’t make you trustworthy. What’s more, it’s a highly unstable situation; even if you can get away with it for some time, eventually you’ll get found out. Grifters are nomadic for a reason.

Notice that being trustworthy is not enough to get trust. You also need to communicate to people what it is you are doing or they won’t know that what you’re doing makes you trustworthy. In other words, first get your ethical house in order. That will make you trustworthy. Now tell everyone about your house. That will get you the trust, but not because you’ve been manipulative. Transparency is good because it builds trust—it’s a means to the end of building trust—on the condition that you’re behaving in a way that warrants or earns trust.7

Explainability. Sometimes explainability matters, as we saw in chapter 3, because it’s required for expressing respect. That’s a case of regarding an explanation as an expression of a noninstrumental value. In other cases, however, it’s ethically permissible either to not provide any explanation at all (e.g., in cases of informed consent to using a black box) or to provide an explanation solely because it’s useful to do so (e.g., in cases of identifying bias or because it makes the product usable by consumers).

While it makes sense to list “respect” as a value and, in some cases, to fill it in with the conditions under which explanations are required to express that respect, it doesn’t make a whole lot of sense to list “explainability” as a value. After all, if your statement articulates values or principles you don’t want to breach, but explainability is only sometimes important, either you’ll have to live up to silly standards of making everything explainable or you’ll have to be in breach of your stated principles whenever you reasonably don’t prioritize explainability for a given model. In short, it would be silly to include it in a list of values, potentially dangerous insofar as you sacrifice some other goal on its behalf unnecessarily (e.g., sacrificing near-perfect accuracy of a cancer-diagnosing AI), and also tremendously inefficient, because you’ll spend many hours explaining things that don’t need an explanation.

4. They describe overly abstract values

The fourth problem with the standard approach is its most significant: these principles don’t actually tell anyone what to do.

My favorite example of this is the value of fairness. It’s on everyone’s list. No one wants to have unfair AI. And that, really, is the problem. The value of “fairness” is so broad that even the Ku Klux Klan subscribes to it. Ask a Klan member, “Hey, are you for fairness? Do you value justice?” You’ll hear in reply, “Absolutely!” Of course, their conception of what counts as fair and just widely differs from what your organization counts as fair and just (one hopes). But the fact that everyone can subscribe to it is enough to show that it doesn’t actually indicate anything about what to do.

The same can be said of most other values. (Google, for instance, proudly proclaims as one of its AI ethics principles “being socially beneficial.” What benefits should be conferred to whom, to what extent, and for how long or how often is, to an ethicist’s eye, a conspicuous absence.) And we can see how trivial these values are when we see what sorts of questions remain. Here are just a few.

  • What counts as a violation of privacy?

    – Maybe my AI gathers a ton of data from hundreds of thousands of people, including Bob, anonymizes that data, and then trains an ML model in a way that Bob never knows his data was used. Further, no individual person knows anything about Bob in particular, since it’s all in a massive data set. Have I violated Bob’s privacy? What if Bob clicked “I accept” on the banner that links to Terms and Conditions that explain all of this in legalese? Is that consent, from an ethical perspective?

  • What does respecting someone look like in product design?

    – Should we go with “buyer beware” or “we’ve got your back”? Is it a matter of assuming that people are autonomous, rational agents capable of free choice even in the face of enticements? Or is it a matter of thinking that some product design choices create a kind of enticement that undermines autonomy, and so we should forbid those design choices?

  • When does something count as discriminatory?

    – In chapter 2, we saw the way in which this question is complicated and gives rise to a host of others: When is differential impact across subpopulations ethically acceptable? How do we assess which of the various metrics for fairness is appropriate in a given use case? How do we approach these questions given current antidiscrimination law? (A small numeric sketch of how two standard fairness metrics can pull apart follows this list.)
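
To see how quickly the abstraction runs out, consider a small, purely illustrative sketch in Python (mine, not any organization’s): two standard statistical fairness metrics, demographic parity and equal opportunity, applied to the same made-up predictions, deliver different verdicts. Every number and group label below is invented for the example.

```python
# Illustrative only: two standard fairness metrics applied to the same
# made-up predictions can disagree, which is why "we value fairness"
# doesn't by itself tell a team what to measure or optimize.

def selection_rate(preds, groups, g):
    """Share of positive predictions for group g (demographic parity input)."""
    idx = [i for i, grp in enumerate(groups) if grp == g]
    return sum(preds[i] for i in idx) / len(idx)

def true_positive_rate(preds, labels, groups, g):
    """True-positive rate for group g (equal-opportunity input)."""
    idx = [i for i, grp in enumerate(groups) if grp == g and labels[i] == 1]
    return sum(preds[i] for i in idx) / len(idx)

# Hypothetical loan decisions: 1 = approve, with true repayment labels.
preds  = [1, 1, 0, 0, 1, 0, 0, 1]
labels = [1, 1, 0, 0, 1, 1, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

dp_gap = abs(selection_rate(preds, groups, "a") - selection_rate(preds, groups, "b"))
eo_gap = abs(true_positive_rate(preds, labels, groups, "a")
             - true_positive_rate(preds, labels, groups, "b"))
print(f"Demographic parity gap: {dp_gap:.2f}")  # 0.00 -- looks "fair"
print(f"Equal opportunity gap:  {eo_gap:.2f}")  # 0.67 -- looks unfair
```

Which gap matters, and how large a gap is tolerable, is exactly the kind of question a bare appeal to “fairness” leaves unanswered.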

Put plainly: if you want your values to be nontrivial and action guiding, you’ll have to do more than mouth words like “fairness,” “privacy,” and “respect.”

Better Content Guides Actions

We want to integrate ethical risk mitigation standards into AI product development and deployment, and we need an ethical North Star, some place to start the journey of nestling ethical standards into operations. Typically, people articulate a set of values like the ones listed above and ask, “How do we operationalize this?” Then, they hit a dead end because of a lack of clarity (the three lumps) and a lack of substance (overly abstract values). We can fix this. We need to keep Content and Structure separate, keep our ethics distinct from our nonethics, and articulate our values in a concrete way.

Here are four steps you can take to accomplish exactly that.

Step 1: State your values by thinking about your ethical nightmares

Let’s remember that we’re engaged in AI ethical risk mitigation. We are not, at least not first and foremost, in the business of striving toward some rosy ideal. That said, sometimes the best defense is a good offense, and it makes sense to articulate your goals in a positive light: we express the values we’re striving for instead of the disvalues we’re trying to avoid. But you can articulate those values in light of the ethical nightmares you want to avoid.

Your ethical nightmares are partly informed by the industry you’re in, the particular kind of organization you are, and the kinds of relationships you need to have with your clients, customers, and other stakeholders for things to go well. Take three examples to see how this might go:

  • If you’re a health-care provider that, among other things, uses AI to make treatment recommendations to doctors and nurses, and widespread false positives and false negatives in testing for (life-threatening) illnesses are your ethical nightmare, then doing no harm is your value.
  • If you’re a financial services company that uses AI to give investment recommendations and clients being (and feeling) taken advantage of is one of your ethical nightmares, then clear, honest, comprehensive communication is one of your values.
  • If you’re a social media platform that facilitates communications of various sorts among hundreds of millions of people around the globe and misinformation and lies spreading in a way that potentially undermines democracy is one of your ethical nightmares, then the communication of (reasonably thought) true claims is one of your values.

Notice how specific nightmares give rise to values that are more clearly defined than highly abstract values like respect, fairness, and transparency. In the case of financial services, we could have simply said, “Respect our clients.” But ethical nightmares involving a breach of respect bring things into focus. Those nightmares highlight the ways in which an organization may fail to respect someone. And by giving substance to the ways in which you might fail to respect them, you can say more about what, for your organization, respect is. Respect is, at least in part, engaging in clear, honest, and comprehensive communication about one’s recommendations. It’s about telling the truth, the whole truth, and nothing but the truth.

Step 2: Explain why you value what you do in a way that connects to your organization’s mission or purpose

If you can’t do this, then your ethical goals or nightmares will seem like something bolted onto an already finished product. If you’re going to weave ethical risk mitigation standards throughout your AI strategy and product life cycle, all of which is done for the sake of the organization’s mission, then you need to show how your ethical values are part and parcel of achieving that mission. If you can’t do that, then employees will regard the program as a nice-to-have, not a need-to-have, and you’ll see the AI ethics program fall by the wayside and your ethical risks realized. Here’s what it might look like to weld your mission to your ethical values:

  • We are, first and foremost, health-care providers. We are entrusted with one of the most sacred things on this planet: human life. More specifically, each and every one of our patients trusts us to take the best care of them that we possibly can. While speed and scale are important, they can never come at the cost of decreased quality of care.
  • We are financial professionals. People give us their money so that we can protect and grow their wealth. This is money they or their loved ones have worked hard to acquire. This is money for their children’s school and daycare. This is money for their retirement. This is money for life-saving surgeries. This is money for family vacations or a once-in-a-lifetime world tour. Our clients have given us control over the necessary means to live a life they find worth living. The last thing we can do is betray that trust. They must never feel they are being taken advantage of by people with more knowledge than them about how the complicated world of finance works. We must be diligent, not only in our investment recommendations for them, but in our communications with them about those recommendations.
  • We are a platform that facilitates connections and conversations among hundreds of millions of people. Sometimes those communications are wonderful or at least benign. Other times, they are propaganda, lies, and other deceptions that can have disastrous consequences. Insofar as we enable those kinds of communications, we play a crucial role in what people come to believe about the world around them, which informs what they do. We cannot pretend we have nothing to do with these things because the communications are not coming out of our mouths. We are putting those communications in front of people. It’s our responsibility not to put lies in front of them. Not only do we owe our individual users that protection, but we also owe society as a whole that we will not play a role in its deterioration. We have already seen what can happen when disinformation routinely goes viral, and we cannot abide those consequences.

Step 3: Connect your values to what you take to be ethically impermissible

It’s one thing to say you value something. But if that statement is to have substance, it must be connected to an articulation of what courses of action are off the table. Values provide, at a minimum, the guardrails of ethical permissibility. You need to say what those guardrails are in as concrete a way as you can. For example:

  • Doing no harm. We will never use an AI to make recommendations that do not consistently outperform our best doctors.
  • Clear, honest, comprehensive communication. We will always communicate in a way that is clear and easily digestible. This means, for instance, that we will not communicate information that a reasonable person may deem important through, for instance, long text-filled documents written with a lot of jargon. We will ensure that people will know what they need to know when they need to know it, and we’ll even remind them of that information either when appropriate or at regular intervals. In some cases, we’ll go so far as to give our clients quizzes to ensure that they’ve understood what we’ve told them.
  • Communication of (reasonably thought) true claims. We will flag all posts that appear to be going viral. By “appear to be going viral,” we mean any post that is shared or viewed at a rate of x shares or views per minute. When a post is flagged, we will cap the rate at which it can be shared at y shares or views per minute, and it will be reviewed by at least two people to determine whether it contains misinformation and, if so, the probability of people being wronged should that misinformation be believed by z percent of its viewers. If that probability is high enough, we will freeze sharing of the content or take down the post. Further, if a person posts such content more than x times per week, that person’s account will be frozen for y weeks. A second offense will lead to a z-year ban, and a third offense will lead to a permanent ban. (A sketch of how a rule like this might be written out explicitly follows this list.)
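
To see what an action-guiding guardrail looks like once it leaves the realm of slogans, here is a minimal, purely hypothetical sketch of the virality rule above expressed as explicit, auditable logic. The class name, parameters, and numbers are my illustrative assumptions standing in for the x, y, and z placeholders; nothing here is a recommendation of particular values.

```python
# Hypothetical sketch only: the escalation policy above expressed as explicit,
# reviewable rules. Every threshold stands in for the x, y, and z placeholders;
# a real platform would have to choose and defend its own values.
from dataclasses import dataclass

@dataclass
class ViralityPolicy:
    flag_rate: int         # "x": shares/views per minute that triggers a flag
    capped_rate: int       # "y": shares/views per minute allowed during review
    harm_threshold: float  # "z": fraction of viewers plausibly wronged if believed

    def post_action(self, shares_per_minute: int, reviewed: bool,
                    is_misinformation: bool, estimated_harm: float) -> str:
        """What the policy commits the platform to doing with a single post."""
        if shares_per_minute < self.flag_rate:
            return "allow"
        if not reviewed:
            return f"flag and cap sharing at {self.capped_rate}/minute pending review"
        if is_misinformation and estimated_harm >= self.harm_threshold:
            return "freeze sharing or take down"
        return "restore normal sharing"

    def account_action(self, prior_offenses: int) -> str:
        """Escalating penalties for accounts that repeatedly post such content."""
        if prior_offenses == 0:
            return "temporary freeze"
        if prior_offenses == 1:
            return "multiyear ban"
        return "permanent ban"

# Example with made-up numbers:
policy = ViralityPolicy(flag_rate=500, capped_rate=50, harm_threshold=0.10)
print(policy.post_action(shares_per_minute=800, reviewed=True,
                         is_misinformation=True, estimated_harm=0.25))
# -> "freeze sharing or take down"
```

Even this toy version forces choices about thresholds, reviewers, and escalation that a one-word value never surfaces.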

Step 4: Articulate how you will realize your ethical goals or avoid your ethical nightmares

Now that we know what you value, how it ties into your organizational mission, and what things are off-limits, you need to say something about how you’re going to make all of this happen. The goal here is not to be exhaustive. It is to make your best attempt at articulating the Structure you’ll put in place. For example:

  • Accountable. We will take concrete steps to build organizational awareness around these issues, for instance, by educating our people when onboarding new employees, with seminars, workshops, and other educational and upskilling tools. We will also assign role-specific responsibilities, the discharging of which is relevant to bonuses, raises, and promotions, to all employees involved in the development, procurement, or deployment of AI products, whether those products are used internally or in the service of our clients. A senior executive will be responsible for growing our AI ethics program, including tracking progress against our goals using publicly available KPIs.
  • Due diligence process. We will systematically engage in rigorous ethical risk analyses throughout production and procurement of AI.
  • Monitoring. We will monitor the impacts of our products with an eye toward discovering their unintended consequences.

This is still fairly high level, of course. We haven’t said anything about the content of those role-specific responsibilities, which senior executive will drive the program and what KPIs they may use to track progress, who will engage in the due diligence and monitoring procedures, and so on. Still, we know in broad outline the kinds of things the organization has already committed to.

We can now see a subset of an AI ethics statement for each of the organizations I’ve given as examples. Let’s put it all together for one of them—the financial company—so we can see what things look like.

[Your value]: Clear, honest, and comprehensive communication

  • Why [you have that value]. We are financial professionals. People give us their money so that we can protect and grow their wealth. This is money they or their loved ones have worked hard to acquire. This is money for their children’s school and daycare. This is money for their retirement. This is money for life-saving surgeries. This is money for family vacations or a once-in-a-lifetime world tour. Our clients have given us control over the necessary means to live a life they find worth living. The last thing we can do is betray that trust. They must never feel they are being taken advantage of by people with more knowledge than them about how the complicated world of finance works. We must be diligent, not only in our investment recommendations for them, but in our communications with them about those recommendations.
  • What [you do because you value this]. We will always communicate in a way that is clear and easily digestible. This means, for instance, that we will not communicate information that a reasonable person may deem important through, for instance, long text-filled documents written with a lot of jargon. We will ensure that people will know what they need to know when they need to know it, and we’ll even remind them of that information either when appropriate or at regular intervals. In some cases, we’ll go so far as to give our clients quizzes to ensure that they’ve understood what we’ve told them.
  • How [you ensure you will do what you say you will do]. We will take concrete steps to build organizational awareness around this issue, for instance, by educating our people when onboarding new employees, with seminars, workshops, and other educational and upskilling tools. We will also assign role-specific responsibilities, the discharging of which is relevant to bonuses, raises, and promotions, to all employees involved in the development, procurement, or deployment of AI products, whether those products are used internally or in the service of our clients. A senior executive will be responsible for growing our AI ethics program, including tracking progress against our goals using publicly available KPIs.

The value, the why, and the what are all specific to the ethical nightmare of this particular financial services company. The how isn’t specific to it, but it’s nonetheless crucial to specify to internal and external stakeholders that there is thought behind how your goals are going to be achieved.

Advantages to Creating Your Ethical North Star This Way

When your AI ethics statement, principles, values, framework, or whatever you want to call it dives deep on Content in this way, you create numerous advantages.

First, you’ve defined goals and strategies, which enables you to talk tactics and, in some cases, to take action. You’ve articulated the Content in a way that connects it to what is ethically off-limits. And you already have some idea about how to operationalize this. It’s certainly no great mystery, as it would be had you said, “We respect our clients and we always act with integrity.” No clear marching orders there. But “we will always communicate in a way that is clear and easily digestible” gives you something to do: before sending out communications, and when programming your AI to communicate information to your clients, check to ensure that it is easily intelligible and digestible. When should you do this and exactly how should you do it? That will depend on the particulars of your organization and will come out when we start building the customized requisite Structure for achieving your ethical goals. But at least now we know what we’re trying to achieve.

In fact—brace yourself—KPIs are beginning to take shape. Suppose you have a survey that asks your end users how intelligible and digestible they find your communications (though don’t use that language). You can ask test groups the same questions. Suppose you use an AI to check the reading proficiency grade level required to understand the text. All of these give you what people love: numbers to track. And we got there by thinking seriously about Content.
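
To make that concrete, here is one illustrative sketch in Python of a trackable number: an estimate of the U.S. school grade level needed to read a client communication, using the widely used Flesch-Kincaid grade formula. The syllable counter is a crude heuristic, and the 8.0 target is a made-up assumption, not a standard I’m endorsing.

```python
# Illustrative KPI sketch: estimate the reading grade level of a client
# communication with the Flesch-Kincaid grade formula and compare it to a
# target your organization would have to choose (8.0 here is made up).
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count groups of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

disclosure = ("Your portfolio's management fee is one percent of assets per year. "
              "We deduct it from your account every three months.")
grade = flesch_kincaid_grade(disclosure)
print(f"Estimated grade level: {grade:.1f}")
print("Meets clarity target" if grade <= 8.0 else "Rewrite for clarity")
```

The particular formula isn’t the point; the point is that a concretely stated value gives you something you can measure.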

Second, now that you’ve specified your values, you can perform a gap analysis of where your company is relative to where you want it to be. This includes a review of current infrastructure, policies, processes, and people.

Third, if the process by which these values are articulated sufficiently includes members from across the organization—not just cross-functionally across the C-suite but also more junior members of the organization—you will create organizational awareness and get insights from a diverse array of people with different skill sets, knowledge bases, interests, experiences, and demographics. What is particularly important here is that when you ask people for their help, listen to them, and make changes in light of their feedback, you get their justified buy-in. This is far better than compliance with a set of principles cast down from atop Mount Olympus. These are people who rightly feel they’ve played a part in shaping these values and so feel ownership of the AI ethics program.

Fourth, by articulating what is ethically impermissible and explaining why it is impermissible, you’ve given people a crucial tool for thinking about the ethically tough cases: where it’s not quite clear whether some decision or action or product contravenes the organization’s AI ethical values. We’ll talk more about this when we discuss ethics committees, but for now we only need to notice that explanations for why some things are impermissible and why the organization does X are helpful in deciding cases where it’s difficult to discern the right thing to do.

Last, while this document can be used internally as an AI ethics North Star, you can also use it as a public-facing document for the purposes of branding and public relations. Insofar as the document is far more specific than generic statements of values, it’s more credible. Be warned, though: writing this document and sharing it with your organization or anyone outside the organization is a big commitment. It’s a promise. And if you break that promise, there’s no telling how much trust you’ll lose.

Intent on keeping that promise? We’ll see how in the next chapter.

Recap

  • Standard approaches to creating AI ethical North Stars suffer from four problems:

    – Lumping together Content and Structure

    – Lumping together ethical with nonethical values

    – Lumping together instrumental and noninstrumental values

    – Articulating values too abstractly to guide action

  • A better approach consists of four steps:

    – Step 1: State your values by thinking about your ethical nightmares, where those nightmares are partly informed by the industry you’re in, the particular kind of organization you are, and the kinds of relationships you need to have with your clients, customers, and other stakeholders for things to go well.

    – Step 2: Explain why you value what you do in a way that connects it to your organization’s mission or purpose.

    – Step 3: Connect your values to what you take to be ethically impermissible.

    – Step 4: Articulate, at a high level, how you will realize your ethical goals or avoid your ethical nightmares.

  • That better approach has five advantages:

    – It provides you with clearly defined goals and strategies, and does so in a way that determining KPIs is not far off.

    – You can now perform a gap analysis of your organization in light of these ethical values.

    – If done right, you’ll get insights from across the organization, create organizational awareness, and earn organizational buy-in.

    – You’ll have created a tool that will help you think through the tough ethical cases.

    – You’ve created a credible document for branding and PR purposes.
