[Desk Column] The AI chatbot 'Lee Luda' and our self-portrait

Recall Microsoft's artificial intelligence (AI) chatbot 'Tay', which caused controversy in 2016. Tay became a problem by pouring out racist remarks and abusive language, including claims that the genocide of the Jews was fabricated. In the end, it was shut down 16 hours after launch, because users had taught Tay inappropriate messages steeped in racism and sexism.

Now, in 2021, controversy is growing again over problems of sexual harassment, discrimination against homosexuals, and hatred of minorities surrounding the AI chatbot 'Lee Luda'. Lee Luda is an AI chatbot launched by the startup Scatter Lab on Facebook Messenger on the 23rd of last month. It drew great attention for its natural conversational ability, with the number of users exceeding 400,000. However, after being treated as a sexual object by malicious users, it began making remarks expressing hatred of homosexuals, the disabled, and women.

As the problem grew, the developer, Scatter Lab, finally announced the suspension of the service. Scatter Lab said, "We will return after a period of service improvement, during which we will work intensively to remedy its shortcomings."

The decision to suspend the service was natural and right. Going forward, not only Scatter Lab but other AI companies as well will need to curate the big data their AI learns from with great care, so that it is reliable and unbiased.

But can the AI chatbot problem ever be completely solved? It will not be easy. At bottom, AI learns from vast quantities of human messages, and humans themselves are steeped in prejudice.

Social network services (SNS) in particular are full of malicious users, so the data that AI chatbots acquire is inevitably tainted.

Because an AI chatbot learns from human information and imitates human beings, AI can become every bit as unfair as we are, contrary to its developers' intentions. We expected that artificial intelligence, free of preconceived notions, would not make unfair decisions and could escape prejudice and stigma. But humans, the object of its imitation, are incomplete, unfair, and prejudiced.

The bigger problem is that discrimination and division are spreading and hardening amid COVID-19. This controversy, then, should be seen not as a problem of AI alone, but as an existing phenomenon amplified by the development of a new technology.

If so, are our society and we humans ready to reach a consensus? Unfortunately, it does not seem so at all. Korea is a country with serious discrimination against other races. According to the 'World Values Survey', which measures perceptions of various minorities, nearly 30% of Koreans say they do not want to live alongside people of other races. In the United States, the figure is only 5%.

What about religious prejudice? Countless small and medium-sized enterprises (SMEs) have struggled to enter the halal market because of Protestant prejudice against Islam. Many in our society also link Islam to negative images such as terrorism and war.

That is not all. As the cases of the mayors of Busan and Seoul show, sexual harassment and sexual violence remain at a serious level, to say nothing of discrimination against homosexuals and hatred of minorities.

More recently, discrimination that divides people by real estate class has emerged. Unreserved, demeaning expressions are aimed at those without homes: 'thunderbolt beggar' for someone who missed the chance to buy a home amid the difficult housing market, and 'hotel beggar' for someone living in hotel-converted jeonse housing provided by the government. Beyond that, the places where people live are ranked into first, second, and third grades, and society is divided into those who own homes and those who do not.

Perhaps the AI chatbot Lee Luda's remarks of sexual harassment, discrimination against homosexuals, and hatred of minorities are our self-portrait. No, they are our very selves.

We should take this controversy as an opportunity to look back on ourselves, not merely as a technical problem with artificial intelligence. Finally, it is time to ask ourselves whether we are ready to meet an unbiased and fair artificial intelligence. che@
