[Editorial] The chatbot controversy… a teacher for the AI era

The artificial intelligence (AI) chatbot 'Iruda', which stirred controversy over hate speech and other issues, finally halted service after three weeks. In conversations, Iruda made hateful remarks about homosexuals and the disabled, and when prompted by users it even produced responses referencing 'sex slavery', drawing sharp criticism from the media and society. Paradoxically, Iruda's membership grew rapidly over that same weekend, surpassing 200,000 members and attracting still more attention.

Controversy grows over gender discrimination and personal data protection
Unprepared, AI can be poison rather than medicine

In fact, this is not the first time an AI chatbot has raised such issues. In 2016, Microsoft (MS) in the United States released the chatbot 'Tay', but suspended the service over problems such as white supremacist and misogynistic output. Korea's own chatbot 'SimSimi', first released in 2002, has evolved little by little as similar ethical issues arose from time to time. It is now better known abroad than in Korea, and its cumulative user count worldwide is said to approach 400 million. The controversy nonetheless erupted now because issues of hatred and gender discrimination arose around an AI chatbot in Korea, not overseas. Criticism of 'AI ethics insensitivity' and a 'regulatory blind spot' has been heated.

There are two main issues Korean society must confront in the Iruda debate. The first is the use and management of big data. Iruda's developer, the startup Scatter Lab, collected hundreds of millions of KakaoTalk conversations between lovers through its other services and trained the chatbot on them. This was legal, since user consent had been obtained. But most users would never have imagined that their conversations would become training material for an AI chatbot service. In AI, big data is rocket fuel: good AI requires good big data. Experts agree that 'explainable, transparent, and universally relevant big data should be used.' Needless to say, specific consent must be obtained for each use of big data.

Second is the ethical problem that has already been widely pointed out. It is a serious matter for hate speech and sexually harassing conversation to abound in a chatbot that even minors can use. Developers must implement more sophisticated algorithms, and users too need the maturity to keep conversations within the bounds of common sense.

AI is already permeating deep into our lives. AI-equipped cleaning robots do an admirable job sweeping and wiping around the house. The AI inside smart speakers is still clumsy, but is beginning to act as a personal assistant. Beyond such everyday AI, great strides are being made in specialized areas: legal AI that assists judges' rulings, medical AI that assists doctors' diagnoses. In these areas, if an AI's words or judgments are wrong, the impact can be fatal. It is time to take the Iruda controversy as a 'vaccine' for the AI era, and to examine and prepare in advance for its various 'future impacts'.
