Artificial Intelligence and Roboethics

Artificial Intelligence topics are usually accompanied by conspiracy theories and doomsday scenarios. However fixated on the future we may be, Roboethics is now, more than ever, a major part of our everyday lives.

Posted by Saportif Technology on

How do algorithms result in good and right decisions and actions for society and for individuals? How can the research, development, design and production of technologies using artificial intelligence proceed well and correctly? These two questions represent the two main issues of AI and robot ethics: (1) ethical content and (2) ethical process. Ethical content focuses on what needs to be done so that the technology being researched, developed, designed and produced is ethical. Ethical process is concerned with conducting that research, development, design and production process ethically.

As an example, consider the "mood experiment" conducted by Facebook in January 2012. For one week, data scientists skewed what almost 700,000 Facebook users saw when they logged into the service. Facebook divided these users into two groups: one group was shown content with a preponderance of happy and positive words, while the other was shown content analyzed as sadder than average. When the week was over, the manipulated users were more likely to post similarly charged words themselves: those exposed to negative content shared more negative posts, while those exposed to positive content shared more positive posts.
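The design described above, splitting users into two groups and filtering each group's feed by the emotional tone of its posts, can be sketched roughly as follows. This is a minimal illustration only: the group assignment, the toy word-list sentiment scorer, and all names are hypothetical, not Facebook's actual implementation.

```python
import hashlib

def assign_group(user_id: str) -> str:
    """Deterministically assign a user to group 'A' or 'B' by hashing the id."""
    digest = hashlib.sha256(user_id.encode()).digest()
    return "A" if digest[0] % 2 == 0 else "B"

# Toy sentiment lexicons (illustrative only).
POSITIVE = {"happy", "great", "love", "wonderful"}
NEGATIVE = {"sad", "terrible", "hate", "awful"}

def sentiment(post: str) -> int:
    """Toy word-count sentiment: each positive word +1, each negative word -1."""
    score = 0
    for word in post.lower().split():
        if word in POSITIVE:
            score += 1
        elif word in NEGATIVE:
            score -= 1
    return score

def filter_feed(user_id: str, posts: list) -> list:
    """Group A sees predominantly positive posts; group B predominantly negative."""
    if assign_group(user_id) == "A":
        return [p for p in posts if sentiment(p) >= 0]
    return [p for p in posts if sentiment(p) <= 0]
```

Note that a neutral post passes both filters; only emotionally charged posts are suppressed for one group or the other, which is what makes the manipulation invisible to the user.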

In this experiment, we can point to both ethical content and ethical process issues. A technology that aims to control users' moods by manipulating a platform they use for an entirely different purpose raises serious ethical problems if it is not used for a good purpose. Even though the system developed here is not inherently "bad", restrictions on its area of use must be imposed during development to prevent abuse.

What about the ethical process in developing this system? While conducting this experiment, Facebook neither notified users nor obtained their consent, and it ignored the harm the experiment could cause them. Even if a user with mental-health difficulties had harmed himself as a result of research he was involved in without his consent, Facebook took no responsibility, neither following up on participants nor taking precautions in this regard. Next to the ethical problems Facebook has faced in recent months (see: the Cambridge Analytica scandal), the manipulative technology Google developed with Duplex, and the ethical problems in criminal-law and risk-analysis systems such as COMPAS, this example seems innocent, but it is still useful for describing what we have encountered.