ARTIFICIAL INTELLIGENCE IS NOT THE WILD WEST

Companies like Google, Microsoft, and IBM have developed rules and principles to keep AI ethical, honest, trustworthy, and fair. These rules and principles are similar in spirit to Isaac Asimov's Three Laws of Robotics (Sharikov, 2018). In case you're like me and have forgotten what those three laws are, here's a reminder: "Law 1. A robot may not harm a human being, or allow, by inaction, a human being to come to harm. Law 2. A robot must obey orders given by a human being, unless these orders are in conflict with Law 1. Law 3. A robot must protect its own existence, unless doing so is in conflict with Law 1 or Law 2" (Kaminka, Spokoini-Stern, Amir, Agmon, & Bachelet, 2017). These principles and rules lay the groundwork for how AI is researched, designed, developed, implemented, and used.

GOOGLE'S PRINCIPLES ON AI

AI should respect and protect people's privacy, benefit all people, and be equitable, secure, and accountable for its outcomes (Rossi, 2018).

MICROSOFT'S SIX RULES OF AI

  1. AI must be designed to assist humanity.
  2. AI must be transparent, so that people are aware of how the technology works and what its rules are.
  3. AI must maximize effectiveness without destroying human dignity.
  4. AI must be designed for intelligent privacy.
  5. AI must have algorithmic accountability so that humans can undo unintended harm.
  6. AI must avoid bias by ensuring adequate and representative research so that an erroneous heuristic cannot be used to discriminate.

Humans should also be able to coexist with AI: by relating to and seeing things from the AI's perspective, learning about AI, thinking outside the box when creating AI, and accepting what AI produces and the decisions it makes.

-Taken from an interview with Satya Nadella, CEO of Microsoft
  (Garcia, Nunez-Valdez, Garcia-Diaz, G-Bustelo, & Lovelle, 2019).

IBM'S PRINCIPLES ON AI

AI should enhance human intelligence, not replace it. Trust is the central requirement for AI, and AI systems should be transparent about how they work if they are to be used effectively (Rossi, 2018).

CONTRIBUTIONS TO THE AI COMMUNITY

IBM has released an open-source toolkit called AI Fairness 360 to the AI development community. The toolkit offers "guidelines, datasets, tutorials, metrics and algorithms" that help developers detect and reduce bias in the AI they build. IBM has also released a set of trust and transparency capabilities for AI covering "explainability, fairness, and traceability", and is developing an "AI factsheet" that would let developers log design decisions and product testing across algorithms, machine learning models, and accountability tools. In addition, IBM has published a pamphlet entitled "Everyday Ethics for Artificial Intelligence" that helps developers, designers, and engineers recognize bias in AI development and find answers to trust-related issues that might arise in their work (Rossi, 2018).
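
To give a flavor of what the toolkit's "metrics and algorithms" look like in practice, here is a minimal sketch using the open-source aif360 Python package. It assumes the package is installed (pip install aif360) and that the UCI Adult income data files have been downloaded where aif360's AdultDataset wrapper expects them; the choice of "sex" as the protected attribute and the reweighing pre-processing step are illustrative examples, not IBM's prescribed workflow.

    # Minimal sketch with IBM's open-source AI Fairness 360 (aif360) toolkit.
    # Assumes: `pip install aif360` and the UCI Adult data files placed where
    # aif360's AdultDataset wrapper expects them (it prints instructions if missing).
    from aif360.datasets import AdultDataset
    from aif360.metrics import BinaryLabelDatasetMetric
    from aif360.algorithms.preprocessing import Reweighing

    # Illustrative choice: treat "sex" as the protected attribute
    # (1 = privileged group, 0 = unprivileged group in this dataset wrapper).
    privileged = [{"sex": 1}]
    unprivileged = [{"sex": 0}]

    # Load the dataset and measure bias before any mitigation.
    data = AdultDataset()
    before = BinaryLabelDatasetMetric(
        data, unprivileged_groups=unprivileged, privileged_groups=privileged
    )
    print("Disparate impact before mitigation:", before.disparate_impact())

    # Apply one of the toolkit's pre-processing algorithms (Reweighing),
    # which adjusts instance weights to balance outcomes across groups.
    reweigher = Reweighing(
        unprivileged_groups=unprivileged, privileged_groups=privileged
    )
    data_rw = reweigher.fit_transform(data)

    # Measure the same fairness metric after mitigation.
    after = BinaryLabelDatasetMetric(
        data_rw, unprivileged_groups=unprivileged, privileged_groups=privileged
    )
    print("Disparate impact after mitigation:", after.disparate_impact())

A disparate-impact value near 1.0 indicates that favorable outcomes occur at comparable rates for both groups; the reweighing step nudges the training data toward that target before any model is trained.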