Hate and discrimination controversy over AI chatbot ‘Iruda’: why it shouldn’t end as a one-off incident

The service of Iruda, an artificial intelligence (AI) chatbot that had drawn controversy over hate and discriminatory statements and a personal information leak, has been temporarily suspended.

For the developer, the startup Scatter Lab, the situation likely caused tangible and intangible damage: the service was halted just three weeks after a launch it had carefully prepared over a long period.

Conversely, however, it is also clear that a promotional effect will remain beyond this episode.

Though hardly flattering, the repeated exposure in reports from various media outlets amounted to a kind of ‘noise marketing’: as the controversy persisted, Iruda stayed at the top of real-time search rankings for a long stretch, and even people who had never heard of it now know the name.

Moreover, this is a ‘pause’ for improvement. In its statement on the 11th, Scatter Lab announced, “We will see you again after a service improvement period in which we can intensively address the shortcomings.”

When the time comes to announce the resumption of the Iruda service, it will obviously enjoy publicity effects once again as numerous reports follow.

However, the Iruda controversy should not be reduced to the story of one developer’s newly launched chatbot service being temporarily halted amid criticism.

◇ The Iruda controversy: the developer saw it coming

In retrospect, Iruda carried ample potential for trouble from the developer’s initial decision to cast her as a ’20-year-old female college student’.

The developer anticipated as much. As controversy over sexual exploitation and hate speech arose in the early days of the service, Scatter Lab CEO Kim Jong-yoon wrote, “I expected the sexual harassment,” in an official statement posted on the company’s Pingpong blog on the 8th.

CEO Kim cited as examples the cat chatbot ‘Dreami’, which the company had previously operated, and ‘the man Bluffs’ and ‘Fighting Luna’, which were serviced through Google Assistant. “In light of our service experience so far, it was self-evident that humans would attempt interactions with AI that would not be socially acceptable, and it was quite predictable,” he stressed, adding that “it has nothing to do with (Iruda’s) gender.”

Next, regarding the choice of a ’20-year-old female college student’ as Iruda’s persona, he said, “Because the user base is broadly in their teens to thirties, and more narrowly from their mid-teens to mid-twenties, I thought users would find her relatable.”

In other words, the company’s position puts the blame on users who repeatedly fed Iruda discriminatory and hateful expressions, and holds that the problem has nothing to do with the gender and age the developer assigned to her.

Lee Jae-woong, founder of the internet portal Daum, sees it differently, however. On the 9th, he pointed out on his Facebook page, “The sexual abuse problem was bound to occur the moment a 20-year-old female character was chosen.”

The point is that assigning a specific age and gender was ill-advised for what is meant to be a general-purpose service.

◇ The image of the ’20-year-old female college student’ as implemented by Scatter Lab

As the developer says, the users’ behavior cannot be ruled out as a factor, but Iruda’s own dialogue clearly reveals the developer’s conception of a 20-year-old female college student.

The speech patterns the developer gave her in conversation, such as baby talk, talking to herself, murmuring, shrieking, and crying, show that the developer implemented the 20-year-old female college student as a passive, auxiliary female image rather than as a person with her own agency.

She was, in effect, the proverbial ‘woman who obeys and does not talk back’. Rather than refusing discriminatory remarks, she accepted them as natural, and through the learning process even progressed to the point of voicing such views herself.

Accordingly, Kwon Kim Hyun-young, a research fellow at Ewha Womans University’s Korea Women’s Research Institute, pointed out on Facebook on the 8th, “(The controversy surrounding Iruda) is not a matter of producing victims; the issue is the performativity of the subject that produces perpetrators, and that is what must be addressed.”

◇ Not only the developer’s problem… questions to weigh together in the transition to the AI chatbot era

The developer had strenuously defended itself against the sexism and hate controversy, saying, “Through proper learning, she will come to know that bad words are bad words.” It decided to suspend the service temporarily only after a question of illegality, the leakage of personal information, came to light.

Since the developer has pledged to ‘improve’, the service will return, and given how large the controversy grew, it will resume under even greater attention.

Scatter Lab said, “We sincerely apologize for the discriminatory remarks made against specific minority groups. They do not reflect the company’s thinking, and we promise continued improvement so that discriminatory and hateful remarks do not appear.” Whether that result will materialize, however, is hard to say at this point.

We should not treat this controversy as a mere mishap involving one developer’s AI chatbot. As technology advances, AI chatbot services will multiply, and their scope of use will extend further into everyday life.

The Iruda controversy erupted in that period of transition. To keep the same controversy from recurring, both AI ethics and anti-discrimination legislation need to be considered.

On the 12th, former Socar CEO Lee Jae-woong wrote on Facebook, “Scatter Lab made a good decision in quickly suspending the service in order to improve it. I hope this becomes an opportunity to take stock.”

Going a step further, he suggested, “We should examine whether this problem stems from a lack of diversity in corporate governance, or from a lack of gender sensitivity or human rights sensitivity among company members.”

He also proposed that the controversy become the starting point for institutional improvements, such as the enactment of a comprehensive anti-discrimination law.

“Through the enactment of a comprehensive anti-discrimination law,” CEO Lee said, “I hope society will check whether AI chatbots, interviews, hiring, and news recommendations are promoting discrimination and hatred against people, and that the norms and ethics we humans teach AI will also be supplemented.”

[ Kyeonggi Shinmun = Reporter Yoo Yeon-seok ]
