- In this repo are resources for learning how to implement ethical AI with machine learning.
- The repo focuses mostly on neural networks; it includes books, papers, talks, videos, and tools.
- Also, a major source of abuse is personal data harvesting.
- The non-profit Center for Humane Technology was founded by ex-tech engineers to find ethical ways forward. They also have a list of actionable steps you can take to reduce this practice --> https://www.humanetech.com/take-control
- "What is Ethical AI" (good overview; the images shown above are from this website) --> https://devopedia.org/ethical-ai
- Related article by Ajitesh Kumar --> https://dzone.com/articles/ethical-ai-lessons-from-google-ai-principles
Listed (and some shown) below. Also see the academic papers in the /papers folder of this repo.
- 📖 Book: "Weapons of Math Destruction" by Cathy O'Neil - link
- 📖 Book: "The Alignment Problem: Machine Learning and Human Values" by Brian Christian - link
- 📖 Book: "Hello World - How to be Human in the Age of the Machine" by Hannah Fry - link
- 📖 Book: "Calling Bullshit - The Art of Skepticism in a Data-Driven World" by Bergstrom/West - link
- 📖 Book: "Black and White Thinking: The Burden of a Binary Brain in a Complex World" by Kevin Dutton - link
- 📖 Book: "Raising Heretics - Teaching Kids to Change the World" by Dr. Linda McIver - link
- 📖 Book: "Invisible Women" by Caroline Criado Perez - link
- 📖 Book: "The Address Book: What Street Addresses Reveal About Identity, Race, Wealth, and Power" by Deirdre Mask - link (https://us.macmillan.com/books/9781250134769/theaddressbook)
- 📖 Book: "Practical Fairness: Achieving Fair and Secure Data Models" by Aileen Nielsen - link
- 📚 Papers: Collection of Timnit Gebru's published papers - link
Most major cloud vendors provide guidance and best practices for implementing ethical AI. I am most familiar with Google's guidance.
- ✨ Google's "People + AI Patterns" Guidebook - link
- ✨ Google's "Responsible AI Practices" - link
- ✨ AWS "Fairness and Explainability in AI" - link
- 📚 Site: Coalition for Health AI (CHAI) - link
Authors of the books and papers listed above have also given talks on the focus of their writing. I prefer to read the book first, then watch the talk.
- 🗣️ Talk: "Weapons of Math Destruction" by Cathy O'Neil in 2016 / 58 min. - link
- 🗣️ Talk: "How I am fighting bias in AI" by Joy Buolamwini in 2017 / 9 min. - link
- 🗣️ Presentation: "Fairness and Explainability in Machine Learning" by AWS (shows SageMaker Clarify tool) in 2021 / 27 min. - link
- 📺 YouTube talk: "Ethical ML: Who's Afraid of the Black Box Models? • Prayson Daniel • GOTO 2021" / 38 min. - link
- 🎥 Documentary: "The Social Dilemma" on Netflix in 2020 / 1 hr 30 min. - link
ML Collective was born from Deep Collective, a research group founded by Jason Yosinski and Rosanne Liu at Uber AI Labs in 2017.
- They founded that group to foster open research collaboration and free sharing of ideas, and in 2020 they moved the group outside Uber and renamed it MLC.
- Over the years they have aimed to build a culture of open, cross-institutional research collaboration among researchers of diverse and non-traditional backgrounds.
- Their weekly paper reading group, Deep Learning: Classics and Trends, has been running since 2018 and is open to the whole community.
ML Collective includes a 'Lab'. Experienced researchers looking to dedicate time to mentoring projects and advising newcomers should consider joining the Lab, with a light commitment of attending the regular research meetings where research updates are presented.
- 🔬 More info about ML Collective Lab --> https://mlcollective.org/community/#lab
- 📺 YouTube channel for ML Collective --> https://www.youtube.com/c/MLCollective/videos
Google offers an extensive set of tools for evaluating bias in the data used to train AI models. Many of these tools focus on data destined for TensorFlow models.
- 🔍 Google's Responsible AI - tools and practices - link
- 🔍 Data Card example - link
- ✏️ Datasheet Template - link
- 🔍 Know Your Data tool example (celebrity faces) - link
- 🔍 TensorFlow Data Validation tools (skew, drift, more...) - link
- 🔍 PAIR Explorables, Measuring Diversity example - link
- 🔍 PAIR Explorables, Hidden Bias example - link
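To make the idea behind these data-bias tools concrete, here is a minimal, stdlib-only sketch (not part of Google's tooling) that computes one common fairness check: the demographic parity difference, i.e. the gap in positive-outcome rates between groups. The function names and the toy loan-approval data are entirely hypothetical.

```python
from collections import defaultdict

def positive_rate_by_group(records, group_key, label_key):
    """Return the fraction of positive labels for each group value."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for r in records:
        counts[r[group_key]][0] += int(r[label_key] == 1)
        counts[r[group_key]][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

def demographic_parity_difference(records, group_key, label_key):
    """Largest gap in positive-outcome rate across groups (0 = balanced)."""
    rates = positive_rate_by_group(records, group_key, label_key)
    return max(rates.values()) - min(rates.values())

# Toy, entirely synthetic data: a loan-approval label split by group.
data = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 0}, {"group": "A", "approved": 1},
    {"group": "B", "approved": 1}, {"group": "B", "approved": 0},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
]

print(positive_rate_by_group(data, "group", "approved"))        # A: 0.75, B: 0.25
print(demographic_parity_difference(data, "group", "approved"))  # 0.5
```

Tools like Know Your Data and TFDV surface this kind of skew automatically and at scale; the point of the sketch is only to show what "bias in the data" can mean as a number.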
Google's model evaluation tools center around models built with TensorFlow. Other vendors also support open source tools for model evaluation and more.
- 🔎 Using MinDiff to do model remediation for TensorFlow - link
- 🔎 Model Card tool (provides context and transparency into a model's development and performance) - link
- 🔎 Example Model Card for face detection - link
- 🔎 Open-source library 'InterpretML' to explain blackbox systems - link
- 🔎 Open-source 'Responsible AI Toolbox' (from Microsoft) - link
- 🔎 Google's 'What If' tool for model understanding, faces examples - link - example image shown below.
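As a rough illustration of what a model card captures, here is a tiny stdlib-only sketch. This is not the Model Card Toolkit's actual API or schema; the fields, names, and metric values are all hypothetical, chosen only to show the kind of structured, human-readable metadata a model card records.

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class ModelCard:
    # Hypothetical schema, loosely inspired by the model-card idea;
    # not the real Model Card Toolkit format.
    name: str
    version: str
    intended_use: str
    limitations: list = field(default_factory=list)
    metrics: dict = field(default_factory=dict)

card = ModelCard(
    name="face-detector-demo",
    version="0.1.0",
    intended_use="Illustration only; not for deployment.",
    limitations=["Evaluated on a small synthetic sample only."],
    metrics={"precision": 0.91, "recall": 0.88},  # made-up numbers
)

# Serialize to JSON so the card can be published alongside the model.
print(json.dumps(asdict(card), indent=2))
```

The real tools linked above go further, generating formatted reports and plots, but the core value is the same: a declared record of what a model is for, how it was evaluated, and where it falls short.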