CONCLUSION
Two Surprises

If you told me you were going to a conference on AI ethics, and that at a cocktail reception, you walked up to a group of attendees already discussing the topic, I could confidently tell you what was about to happen.

First, you’re going to hear a lot of familiar phrases and buzzwords. You’ll hear Accountability. Transparency. Explainability. Fairness. Surveillance. Governance. Trustworthy. Responsible. Stakeholders. Framework. Someone’s going to say “black box” at some point.

Then, there’ll be hand-wringing over the threats posed by AI gone wild. Biased data sets! Unexplainable algorithms! Invasions of privacy! Self-driving cars killing people!

Finally, the group will land on a healthy skepticism. “You can’t really define AI ethics,” you might hear. And “You can’t plan for everything.” Of course, also, “It’s just people’s own personal view of what’s right and wrong.” Or, more practically, “Look, how do you even operationalize ethical principles?” One person is going to say something about KPIs.

At the end of all this, there will be shrugs. That's because, for the most part, people are raising issues whose underpinnings are not (well) understood, and then, after putting together a conference in which they sling buzzwords back and forth at each other, they declare AI ethics essential, of course, but also really, really difficult.

But not you. Having gotten to this point in the book, you can see those underpinnings. You see what’s at issue, from both a business and an ethical perspective. And now that you understand it, you understand that AI ethics isn’t so difficult after all.

If a colleague tells you, “We need to say something about AI ethics,” you know what a meaningful document looks like and what’s superficial PR.

If your colleague tells you, “This AI ethics is for the AI folks. It’s technical. Tell them to get on it,” you know how impoverished an understanding that is.

Maybe a company comes to you and says, “We have your solution for responsible AI” or, less ambitiously, “We have your solution for bias in AI.” That piece of software, you now see, can’t solve all those problems by itself.

You know that and so much more about, for example, the role of people in creating appropriate metrics for fairness. And about when you need to know what the AIs are doing and when it doesn't matter so much. You know there are levels of privacy, and it's not just about anonymity. You know how to build real AI ethics statements, and that just saying you value AI ethics doesn't mean your employees will take AI ethics seriously. You know you need Structure to make that happen. Most of all, you know software alone can neither handle the substantive ethical issues nor effect the kind of organizational change you need to systematically and comprehensively identify and mitigate AI's ethical risks. In short, provided the software can do its job, you see the ecosystem in which the software needs to be embedded.

You see the AI ethics landscape. Now, you can navigate it.

And now that you’re here, I’m going to let you in on a secret. Two, actually.

The first secret is that there's another book in this book. Pretend chapters 1 and 5 through 7 are a single book and delete the word "AI" whenever I said "AI ethics." What you'll get, with some minor tweaks here and there, is a book on how to articulate and operationalize the ethical values of your organization. I don't care if you're developing AI, putting microchips in people's hands, or just selling bottled coffee: the way to create, scale, and maintain an ethically sound organization is already contained in those pages. If your aim is not only to create ethically sound AI, but to create an organization that takes ethical standards seriously, go back and reread those chapters with that in mind.

The second secret is that this book is about AI ethics, but it’s not only about AI ethics. It’s about the value of ethical inquiry. It’s about the power and importance of philosophical investigation.

The many exercises you’ve gone through to understand AI ethics—pulling apart Structure from Content; distinguishing instrumental from noninstrumental values and ethical from nonethical values; understanding the differences between harming and wronging; distinguishing machine explanations from people explanations and assessing when and why each is important; critically examining the idea that ethics is subjective; analyzing what constitutes a good explanation; identifying the ethical levels of privacy and which levels are appropriate and when; drawing out the tensions between benevolence and respect for autonomy; breaking down the ethically salient questions in product development—all of this is philosophy in action. If you’ve found these distinctions, concepts, and analyses helpful in shedding light on the AI landscape, a landscape you didn’t quite grasp before, then you’ve found philosophical analysis helpful. In understanding, internalizing, and thinking with these concepts, you’re doing philosophy.

The main contention of this book is philosophical in nature. Structure—what to do and how to do it—flows from an understanding of Content, of the ethical risks and how they arise. Ethics looks squishy and subjective, and it’s utterly unclear how to avoid disaster, until you dig deep to understand Content. It’s not enough to understand AI. It’s not even enough to understand risk and compliance. Engaging in robust and effective AI ethical risk mitigation requires understanding ethics at a level that goes well beyond “bias is bad” or “black boxes are scary.”

Despite eye rolls and claims of irrelevance, philosophy turns out to be essential for the kind of progress we all ought to value.
