AI chatbot 'Iruda' temporarily suspended… "Sorry for not notifying users about the use of KakaoTalk conversations"

(Photo = Iruda Facebook page)

ScatterLab, the developer of the artificial intelligence (AI) chatbot 'Iruda', recently announced that it has temporarily suspended the service, apologizing for hateful and discriminatory conversations found in the chatbot and for the misuse of data provided by users.

Iruda, which was officially released on the 22nd of last month, attracted nearly 750,000 users in about two weeks and drew a positive response, but cases of hateful and discriminatory remarks were recently discovered, and public criticism has mounted.

Moreover, it emerged that users' KakaoTalk conversations collected through 'Science of Love', another service operated by ScatterLab, were used for chatbot development without users being specifically notified, and there were indications that this data was not properly refined, raising additional questions and suspicions.

On the 11th, ScatterLab announced its position, saying, "We sincerely apologize for the discriminatory remarks made against specific minority groups. The data was used within the scope agreed to in the terms of service, but we sincerely apologize and take responsibility for not communicating clearly enough for 'Science of Love' users to recognize this point."

The company added, "ScatterLab will take a period of time to improve the service, and we hope to return with a better Iruda."

First, regarding the provision of inappropriate conversations involving hate and discrimination, the company said that such content was filtered during last year's six-month beta test and that it will continue to improve the filtering.

ScatterLab said, "Derogatory terms or hate expressions directed at specific groups were filtered separately as soon as they were discovered during the beta test period, and the previously reported cases have already been addressed. We are continuously improving the service so that no discriminatory or hateful remarks appear."

In an online community, the Iruda channel, users attempting to sexually harass Iruda were found. (Photo = Iruda channel capture)

In addition, according to reports from other media outlets such as the Seoul Shinmun, there were cases in which personal information, such as addresses and unit numbers, was exposed as-is. Regarding this, the company made clear that de-identification had been carried out in advance, but said it could not yet give an official answer as to whether numbers had been replaced with different ones to add realism. It again emphasized that numeric information, and English text of the kind that could be included in email addresses, had been deleted in advance.

In its statement, the company said, "When using the data, specific personal information such as users' nicknames, names, and email addresses had already been removed. All numeric information, including phone numbers and addresses, and English text that could be included in email addresses, etc., was deleted. With strengthened de-identification and anonymization measures, no information that could identify individuals has been leaked."

It added, "In the future, we will clarify the data-use consent process and continue to improve the algorithm for content that may appear sensitive, even when it contains no identifying information."

There were also calls for ScatterLab to stop the service entirely and relaunch it.

Announcing the suspension of the service, the company said, "We are a young startup providing an artificial intelligence (AI) chatbot built on Korean natural language understanding technology."


The company also said, "Iruda is a child-like AI that has only just started talking with people, and there is still much for it to learn. Rather than simply having it learn conversations as they are, we will teach it to judge what an appropriate answer is."

It added, "We plan to open up the biased-dialogue detection model created through this learning so that everyone can use it. We hope it can be useful for the development of AI ethics, AI products, and Korean AI dialogue research."




