AI ethics deals with the moral principles that govern the development and use of artificial intelligence. With AI entering every walk of life, it has become crucial to develop policies and an ethical code around it. An AI code of ethics also supports a policy statement that formally defines the role of AI in the present-day world and serves as a model for the future.
How did the need for AI ethics arise?
Isaac Asimov, the popular science fiction writer, foresaw the hazards that autonomous artificial intelligence agents could pose to human society decades before AI development took off. He created The Three Laws of Robotics to nip these dangers in the bud, intending to curb the excess power of such advanced technology and instill greater responsibility in its developers.
AI ethics gives the stakeholders of this technology guidance when they face decisions that involve an ethical dilemma. With such a policy as a basis, they can work out how an AI system should function and what purpose it should serve.
The Asimov Blueprint for AI ethics
Asimov's code of ethics consists of three laws:
- The first law forbids a robot from actively harming a human being. It also forbids allowing a human to come to harm through inaction in a dangerous situation.
- The second law compels robots to follow human orders unless those orders conflict with the first law. Robots should also not break any laws of the legal system while obeying such orders.
- The third law asks robots to protect their own existence, as long as doing so does not conflict with the first two laws. Taken together, the laws form a strict priority ordering, as sketched below.
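To make that precedence concrete, here is a minimal Python sketch of an agent checking a candidate action against the three laws in order. The boolean flags and their names are assumptions invented for this illustration, not part of Asimov's text.

```python
def evaluate_action(action: dict) -> str:
    """Check a candidate action against the Three Laws in strict priority order."""
    # First Law, first clause: a robot may not injure a human being.
    if action.get("harms_human"):
        return "forbidden by the First Law"
    # First Law, second clause: a robot may not, through inaction,
    # allow a human being to come to harm.
    if action.get("prevents_harm_to_human"):
        return "required by the First Law"
    # Second Law: obey human orders; orders that would break the First
    # Law were already rejected above.
    if action.get("is_human_order"):
        return "required by the Second Law"
    # Third Law: protect own existence, unless that conflicts with the
    # first two laws (already handled by the ordering of these checks).
    if action.get("endangers_self"):
        return "discouraged by the Third Law"
    return "permitted"

# Example: an order to harm a human is refused despite the Second Law.
print(evaluate_action({"harms_human": True, "is_human_order": True}))
# -> forbidden by the First Law
```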
The three approaches to AI ethics
Three approaches help decide which parameters should be considered when developing ethical policies for AI and ML technology.
- The first approach: bottom-up
This approach was developed in 2006 by Marcello Guarini, a philosopher at the University of Windsor. It rests on the principle of casuistry, in which a moral dilemma is resolved by applying lessons from settled cases to a new, practical situation. A neural network learns to make ethical decisions from reportedly correct answers to such situations and can then generalize to other ethical dilemmas (a minimal sketch of this case-based learning appears after this list). The technique has its drawbacks, however.
- The top-down approach, or the moral decision-making approach
This approach was developed by Mohammad Morteza Dehghani and combines three ethical theories: utilitarianism, deontology and analogical reasoning. The system normally follows whichever option serves the greatest utility, but when it encounters a sacred rule, deontology takes over and sensitivity to utilitarian trade-offs is reduced (the second sketch after this list illustrates this ordering). This approach, too, is not normative in any way.
- The hybrid approach picks the best of both worlds
Wendell Wallach and Colin Allen developed this theory, in which the best policies from the bottom-up and top-down approaches are combined to build an ethically compliant machine with some degree of moral accountability and responsibility (the third sketch after this list combines the two layers). This line of work draws on LIDA, an AGI software architecture that models human cognition as closely as possible.
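First, the bottom-up approach in miniature: a small neural network is trained on past dilemma cases labeled with reportedly correct verdicts and is then queried on an unseen case. The feature encoding, the cases and the verdicts are all invented for this sketch.

```python
from sklearn.neural_network import MLPClassifier

# Hypothetical encoding of past dilemma cases:
# [people_harmed, people_helped, consent_given] -> 1 = judged permissible, 0 = not.
cases = [
    [0, 1, 1],  # helps one person with consent
    [1, 5, 0],  # harms one to help five, without consent
    [0, 3, 1],  # helps three people with consent
    [2, 2, 0],  # harms two to help two, without consent
]
verdicts = [1, 0, 1, 0]

# A small neural network learns the pattern behind the settled cases.
net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
net.fit(cases, verdicts)

# The trained network can then offer a verdict on a new, unseen dilemma.
print(net.predict([[0, 2, 1]]))  # expected: [1] (permissible)
```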
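Second, the top-down approach: a decision procedure that maximizes utility except where a sacred rule would be violated. The rule names, the `utility` and `violations` fields, and the options are assumptions made for this illustration, not Dehghani's actual system.

```python
SACRED_RULES = {"do_not_kill", "do_not_deceive"}  # hypothetical protected values

def choose(options):
    """Pick an option top-down: deontology filters first, utility decides second."""
    # Deontological filter: any option that breaks a sacred rule is excluded
    # outright, no matter how much utility it would produce.
    admissible = [o for o in options if not (o["violations"] & SACRED_RULES)]
    if not admissible:
        return None  # no morally admissible option exists
    # Utilitarian step: among the admissible options, maximize utility.
    return max(admissible, key=lambda o: o["utility"])

# A high-utility option loses to a lower-utility one that breaks no sacred rule.
options = [
    {"name": "divert", "utility": 5.0, "violations": {"do_not_kill"}},
    {"name": "warn",   "utility": 2.0, "violations": set()},
]
print(choose(options)["name"])  # -> warn
```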
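Third, the hybrid approach: a top-down layer of hard constraints vetoes first, and a bottom-up layer trained on past cases decides everything else. The `CaseModel` stand-in and the rule set are hypothetical, not LIDA's actual mechanism.

```python
SACRED = {"do_not_kill"}  # hypothetical hard constraints (the top-down layer)

class CaseModel:
    """Stand-in for a classifier trained on settled cases (the bottom-up layer)."""
    def predict(self, batch):
        # Toy rule distilled from data: permissible when no one is harmed.
        return [1 if features[0] == 0 else 0 for features in batch]

def hybrid_verdict(features, violations, model=CaseModel()):
    # Top-down layer: explicit rules are checked first and can veto outright,
    # which gives the machine a traceable locus of accountability.
    if violations & SACRED:
        return "forbidden by rule"
    # Bottom-up layer: otherwise defer to the model learned from past cases.
    return "permissible" if model.predict([features])[0] == 1 else "impermissible"

print(hybrid_verdict([0, 3, 1], set()))            # -> permissible
print(hybrid_verdict([0, 3, 1], {"do_not_kill"}))  # -> forbidden by rule
```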
However, many challenges remain in this area, mostly centered on reliability and responsibility, because misuse of AI technology is still rampant. Software companies and prominent technologists are backing the development of a uniform code of ethics for AI machines. Among the prominent backers are Max Tegmark (MIT cosmologist), Jaan Tallinn (Skype co-founder) and Victoria Krakovna (DeepMind research scientist).