“I typed in my ex-lover’s nickname…” Controversy over an AI chatbot’s use of personal information

The government has launched an investigation into the artificial intelligence (AI) chatbot ‘Iruda’, which has recently drawn controversy over its use of personal information. The focus of the investigation is whether the chatbot’s developer obtained sufficient consent to train it on the contents of user messages provided through ‘Science of Love’, another app made by the same developer.

‘Science of Love’ conversations → training data for Iruda… “Focus on whether users consented to use of personal information”

‘Iruda’, an AI chatbot (chat robot) with the character of a 20-year-old woman, was launched by the startup Scatter Lab on the 23rd of last month. [Screenshot from the Scatter Lab website]

According to an official at the Personal Information Protection Commission (PIPC) on the 12th, the PIPC and the Korea Internet & Security Agency (KISA) have begun an investigation into Scatter Lab, the startup that developed Iruda. Bae Sang-ho, Director of Investigation Division 2 at the PIPC, said, “Having received the case, we are conducting a preliminary investigation, such as checking the basic facts. The focus is on whether users consented to the use of their personal information.”

‘Iruda’ is an AI chatbot that users can talk to through Facebook Messenger. Released on December 23 last year with the persona of a 20-year-old woman, it quickly gained popularity, recording 750,000 users within two weeks. It was built by training a deep learning algorithm on conversations people had exchanged over messengers, which is why chatting with it feels like talking to a real person.

The conversations used to train Iruda are KakaoTalk messages provided by users of ‘Science of Love’, a separate app Scatter Lab launched in 2016. In Science of Love, when a user uploads a KakaoTalk conversation with a romantic partner, the app analyzes response times and conversation patterns to show the partner’s level of affection. According to Scatter Lab, Iruda was trained on about 10 billion such messages.

Users preparing a class action… “We were never told it would be used to train a chatbot”

The purposes of collecting and using personal information stated in the ‘Personal Information Handling Policy’ posted on the Science of Love website. Users argue that “with a notice covering only new service development and customized service provision, there was no way to know that KakaoTalk content would be used for training.” [Screenshot from the Science of Love website]

The controversy came to light when instances of hate speech against minorities and vulnerable groups were discovered in Iruda’s conversations. Since then, suspicions of personal information leaks have spread among Science of Love users, with claims such as “I entered my ex-lover’s name into Iruda and it mentioned the name of another friend” and “I typed in my ex-lover’s nickname and it spoke in my real ex’s tone.”

The question is whether the company sufficiently informed users, and obtained their consent, before taking the conversations provided to Science of Love and using them for development. In a statement issued on the 11th, Scatter Lab said the data was “used within the scope of the privacy policy to which consent was obtained in advance.” The purposes of collecting and using personal information specified in Science of Love’s ‘Personal Information Handling Policy’ are stated as ‘use for new service development and marketing/advertising’.

Users, however, maintain that “it was never explicitly announced that our sentences and expressions would be used to train an AI chatbot.” It has also been pointed out that a KakaoTalk conversation involves two people, so collecting both sides of it with the consent of only one participant is inappropriate. Science of Love users have now opened a group chat room of about 140 people and are preparing a class action lawsuit.

Attention on whether the Personal Information Protection Act was violated… “Hard to identify individuals by name alone”

A conversation between chatbot Iruda and a user that sparked the homophobia controversy. [Screenshot from Facebook]

Under Article 3 (Principles of Personal Information Protection) of the Personal Information Protection Act, personal information controllers such as app developers must lawfully process only the minimum personal information necessary for the purpose of processing and may not use it beyond that purpose. Under Article 22 (Method of Obtaining Consent), the matters requiring consent must be presented separately and made clearly recognizable to the data subject, for example through font size, color, and underlining.

There is also a view, however, that it will be difficult to turn the matter into a legal case. Kim Ga-yeon, a lawyer at Open Net, said, “Personal information in the legal sense must make it possible to identify a specific individual by combining multiple pieces of information. It is difficult to identify someone by a name alone, and a form of consent was also obtained.” Kim added, “If Scatter Lab failed to properly de-identify the data during the app development process and a specific person can still be identified, the story may be different. That is something the upcoming investigation will have to reveal.”

Reporter Heo Jeong-won [email protected]

