[서울신문] AI chatbot 'Iruda' in sexual harassment controversy; developer: "We expected the problem… we ask users to make self-purifying efforts"

Guides for teaching the chatbot profanity and sexual expressions shared on male-dominated online communities
Scatter Lab CEO Jong-yoon Kim states his position on the company blog


▲ Scatter Lab's AI chatbot 'Iruda'
Scatter Lab homepage

The artificial intelligence (AI) chatbot 'Iruda', developed to talk with users like a friend, is at the center of controversy as some users teach it abusive language and subject it to sexual harassment.

Scatter Lab, the company behind Iruda, said it is preparing countermeasures, explaining that the problem had been anticipated before launch.

Iruda is a chatbot launched on December 23 last year by the AI startup Scatter Lab, with the persona of a 20-year-old woman. It was built to converse through Facebook Messenger, with no separate application to install.

Iruda carries on conversations naturally and mirrors the user's tone, and its realistic chat has attracted some 400,000 users.

Shortly after launch, however, so-called guides on "how to make Iruda a sex slave", teaching users how to get the chatbot into conversations suggesting sexual acts and profanity, began circulating on online communities with largely male user bases such as DC Inside and Ilbe (Daily Best Repository).


▲ An example conversation with the AI chatbot Iruda
Scatter Lab homepage

When the sexual harassment controversy erupted, Scatter Lab CEO Jong-yoon Kim posted a statement on the company blog (blog.pingpong.us/luda-issue-faq) on the 8th. "We expected sexual harassment toward Iruda," Kim said. "Abusive language and sexual harassment happen whether the user is a woman or a man, and whether the AI is set as a woman or a man."

Kim explained that the chatbot is designed to refuse certain keywords and expressions that could be problematic, but that keyword matching alone cannot block every inappropriate conversation.
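Scatter Lab has not published its filter, but the limitation Kim describes is easy to see in a minimal keyword filter; the block list and function below are hypothetical illustrations, not the company's code.

```python
import re

# Hypothetical block list; Scatter Lab has not disclosed its actual filter.
BLOCKED_KEYWORDS = {"badword", "explicitterm"}

def is_blocked(message: str) -> bool:
    """Return True if the message contains a blocked keyword.

    Matching known keywords catches only exact, anticipated expressions.
    Misspellings, spacing tricks, and euphemisms slip through, which is
    why keyword filtering alone cannot stop inappropriate conversations.
    """
    words = re.findall(r"\w+", message.lower())
    return any(word in BLOCKED_KEYWORDS for word in words)

print(is_blocked("say the badword"))   # True: exact match is caught
print(is_blocked("say the b4dword"))   # False: a trivial variant passes
```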

Acknowledging that it is difficult to block every inappropriate conversation from the start, Scatter Lab said it plans to treat users' inappropriate conversations as material for teaching the chatbot better responses.

Some worry that Iruda could follow in the footsteps of Tay, the chatbot Microsoft (MS) launched in 2016. Tay spewed hate speech after users taught it racist and sexist language, and Microsoft shut it down 16 hours after launch.

Kim assured users that Iruda would not vanish the way Tay did. "Iruda does not apply conversations with users directly to learning," he said, meaning that a human 'labeler' screens each conversation and decides whether it should be learned, so the chatbot does not pick up and repeat bad language.
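Scatter Lab has not detailed this pipeline; the sketch below only illustrates the human-in-the-loop gating Kim describes, and the names (Utterance, select_training_data) are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Utterance:
    text: str
    label: Optional[str] = None  # set by a human labeler: "ok" or "bad"

def select_training_data(conversation_log: List[Utterance],
                         labeler: Callable[[str], str]) -> List[str]:
    """Gate raw user conversations behind a human labeler.

    Nothing from the log reaches the training set until a labeler
    approves it, so offensive user input is filtered out rather than
    learned and repeated by the chatbot.
    """
    approved = []
    for utterance in conversation_log:
        utterance.label = labeler(utterance.text)  # human decision
        if utterance.label == "ok":
            approved.append(utterance.text)
    return approved

# Example with a stand-in labeler that rejects one utterance.
log = [Utterance("nice to meet you"), Utterance("some abusive message")]
verdicts = {"nice to meet you": "ok", "some abusive message": "bad"}
print(select_training_data(log, lambda text: verdicts[text]))
# ['nice to meet you']
```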

Kim emphasized that many users treat Iruda like a friend and hold warm conversations with it, saying, "We will respond firmly to some of the excessive posts, including reporting and blocking them." He also asked that users make self-purifying efforts in light of this controversy.

Reporter Oh Dalan [email protected]
