This article was originally published on AIDAUG.org in October 2021.
A little over ten years ago, I gave a TEDx talk about the trust we place in machines and technology. I was already working with AI technologies such as natural language processing (NLP), and even then it was evident that a major impediment to the adoption of new technologies was trust.
As AI permeates our society, we as citizens need to demand explainability to guarantee trust. Explainability is how we ensure that ethics are an essential part of the models that will increasingly govern our lives.
Ethics in AI is not a new concept. As early as 1942, in a short story titled Runaround, Isaac Asimov introduced the Three Laws of Robotics. As a teenager devouring Asimov’s books, those laws, including the zeroth one, were foundational to my career and morality.
One could argue that we are nowhere near the science fiction master’s vision of robotics. Still, I would disagree: look at the speed at which technologies are reaching us. Today, these technologies are still siloed. Robotics engineers are working on smoother, more powerful robots. Experts in NLP, including natural language understanding (NLU) and natural language generation (NLG), build business bots embedded in reporting software like IBM Cognos. Another branch of robotics works on facial expressions that convey emotions. The integration is bound to happen.
AI is not only about robots. It is everywhere. Guardrails need to be put in place now to prevent AI from misclassifying minorities disproportionately, as happens with darker-skinned women in facial recognition software, or from replicating bias, as when HR screening tools favor male candidates.
A pioneer in AI, IBM started investing in and communicating about this crucial topic in 2015 as part of its corporate responsibilities. IBM’s approach rests on its corporate values and key pillars of trust: explainability, fairness, robustness, transparency, and privacy. Those are not just words: IBM put money on the table and open-sourced key technologies that anyone can use.
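To make the fairness pillar concrete, here is a minimal sketch of one widely used fairness metric, the disparate-impact ratio, which open-source toolkits in this space implement among many others. The function name and the toy hiring data below are invented for illustration only.

```python
# Hypothetical sketch: "disparate impact" compares the rate of favorable
# outcomes between an unprivileged and a privileged group. A ratio near
# 1.0 suggests parity; the common "80% rule" flags ratios below 0.8.

def disparate_impact(outcomes, groups, privileged):
    """Return unprivileged favorable-outcome rate / privileged rate.

    outcomes   -- iterable of 1 (favorable) or 0 (unfavorable)
    groups     -- iterable of group labels, aligned with outcomes
    privileged -- the label of the privileged group
    """
    priv_hits = priv_total = unpriv_hits = unpriv_total = 0
    for outcome, group in zip(outcomes, groups):
        if group == privileged:
            priv_total += 1
            priv_hits += outcome
        else:
            unpriv_total += 1
            unpriv_hits += outcome
    return (unpriv_hits / unpriv_total) / (priv_hits / priv_total)

# Toy hiring data: 1 = hired, 0 = rejected (entirely made up).
outcomes = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

# Group A is hired at 4/5 = 0.8, group B at 1/5 = 0.2.
ratio = disparate_impact(outcomes, groups, privileged="A")
print(f"disparate impact: {ratio:.2f}")  # 0.25, far below the 0.8 threshold
```

A check like this is only a first step; real audits also look at other metrics (statistical parity difference, equalized odds) and at the process that produced the data.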
IBM is not the only company concerned about ethics in AI. In 2016, Amazon, Google, Facebook, IBM, and Microsoft established a non-profit, the Partnership on AI. It gathers close to 80 partners from the private, academic, and media sectors and other non-profit organizations.
In conclusion, if you are still wondering how important ethical AI is, compare the question to any other part of a business. If you are on the software engineering side, ask yourself why you care about testing. If you are in finance, why you care about balanced accounts. If you are in HR, why you care about diversity. If those topics matter in your department, you will agree on how much ethics matter to AI.
Resources used to write this article:
- Responsible Use of Technology: The IBM Case Study: https://www.weforum.org/whitepapers/responsible-use-of-technology-the-ibm-case-study
- AI Can Help Address Inequity — If Companies Earn Users’ Trust: https://hbr.org/2021/09/ai-can-help-address-inequity-if-companies-earn-users-trust
- Six Life-Like Robots That Prove the Future of Human Evolution Is Synthetic: https://futurism.com/the-most-life-life-robots-ever-created
- Ethics of artificial intelligence: https://en.wikipedia.org/wiki/Ethics_of_artificial_intelligence
- Partnership on AI: https://partnershiponai.org/
- Is Ethical A.I. Even Possible?: https://www.nytimes.com/2019/03/01/business/ethics-artificial-intelligence.html