The past decade has witnessed a wide adoption of artificial intelligence and machine learning (AI/ML) technologies. However, their widespread implementation has often proceeded without proper oversight, resulting in harmful outcomes that could have been avoided. Before we can realize AI/ML's true benefit, practitioners must understand how to mitigate its risks. This book describes responsible AI, a holistic approach for improving AI/ML technology, business processes, and cultural competencies that builds on best practices in risk management, cybersecurity, data privacy, and applied social science.

Responsible AI is an ambitious undertaking that requires a diverse set of talents, experiences, and perspectives. Data scientists and nontechnical oversight professionals alike need to be recruited and empowered to audit and evaluate high-impact AI/ML systems. Authors Patrick Hall and Rumman Chowdhury created this guide for a new generation of auditors and assessors who want to make AI systems better for organizations, consumers, and the public at large.

  • Learn how to create a successful and impactful responsible AI practice
  • Get a guide to existing standards, laws, and assessments for adopting AI technologies
  • Look at how existing roles at companies are evolving to incorporate responsible AI
  • Examine business best practices and recommendations for implementing responsible AI
  • Learn technical approaches for responsible AI at all stages of system development

Table of Contents

  1. Preface
    1. Who Should Read This Book
    2. What Readers Will Learn
    3. Preliminary Book Outline
    4. Bringing It All Together
    5. Conventions Used in This Book
    6. Using Code Examples
    7. O’Reilly Online Learning
    8. How to Contact Us
    9. Acknowledgments
  2. 1. Contemporary Model Governance
    1. Basic Legal Obligations
    2. AI Incidents
    3. Organizational and Cultural Competencies for Responsible AI
    4. Accountability
    5. Drinking Your Own Champagne
    6. Diverse and Experienced Teams
    7. “Going Fast and Breaking Things”
    8. Organizational Processes for Responsible AI
    9. Forecasting Failure Modes
    10. Model Risk Management
    11. Beyond Model Risk Management
    12. Case Study: Death by Autonomous Vehicle
    13. Fallout
    14. An Unprepared Legal System
    15. Lessons Learned
  3. 2. How to Red-Team AI Systems
    1. Security Basics
    2. The Adversarial Mindset
    3. CIA Triad
    4. Best Practices for Data Scientists
    5. Machine Learning Attacks
    6. Integrity Attacks: Manipulated Machine Learning Outputs
    7. Confidentiality Attacks: Extracted Information
    8. General AI Security Concerns
    9. Countermeasures
    10. Model Debugging for Security
    11. Model Monitoring for Security
    12. Privacy-Enhancing Technologies
    13. Robust Machine Learning
    14. General Countermeasures
    15. Case Study: Real-World Evasion Attacks
    16. Lessons Learned
    17. Resources
  4. 3. Debugging AI Systems for Safety and Performance
    1. Training
    2. Reproducibility
    3. Data Quality and Feature Engineering
    4. Model Specification
    5. Model Debugging
    6. Software Testing
    7. Traditional Model Assessment
    8. Residual Analysis for Machine Learning
    9. Sensitivity Analysis
    10. Benchmark Models
    11. Machine Learning Bugs
    12. Remediation: Fixing Bugs
    13. Deployment
    14. Domain Safety
    15. Model Monitoring
    16. Case Study: Remediating the Strawman
    17. Resources