
ScatterLab, the developer of the artificial intelligence (AI) chatbot 'Iruda', which drew controversy over the misuse of KakaoTalk conversations and the leakage of personal information, will discard the database and conversation model used in Iruda.
During the roughly three weeks the chatbot operated after its official launch on the 22nd of last month, Iruda's responses were found to contain what appeared to be personal information, sparking the controversy.
In a statement on the 11th, ScatterLab acknowledged that it had used KakaoTalk conversations collected through its other service, 'Science of Love', to develop the chatbot, and announced that it would suspend the service.
On the 15th, three days after the statement, as suspicions of personal information leakage continued to grow, ScatterLab announced that it plans to discard the DB and the conversation model as soon as the investigation ends.
The Personal Information Protection Commission and the Korea Internet & Security Agency (KISA) are currently investigating the matter. In addition, users of Science of Love are reportedly preparing a lawsuit, sharing cases of personal information leakage through KakaoTalk open chat rooms. They have also argued that ScatterLab should delete the KakaoTalk conversation data altogether.
ScatterLab said, "In consideration of users' anxiety, we have decided to discard the entire DB and deep-learning conversation model of the AI 'Iruda'. When a user who no longer wishes to participate submits a request, we will delete all of that user's data, and it will not be used in future deep-learning conversation models."
The company added, "Going forward, we plan to strengthen the procedures for collecting personal information and obtaining consent when users sign up for and use our services. Related follow-up measures will be announced through notices in each application."

The company claims the data was used within the scope of Science of Love's privacy policy, but the policy did not specifically state that the data would be used for chatbot development; beyond a sentence saying it could be used for the development of new services, nothing was disclosed, prompting a backlash from users. There are also suspicions of personal information leakage, since real names and addresses that were not properly de-identified were found in the data.
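To illustrate why de-identification can fail in the way alleged here, the sketch below shows a hypothetical rule-based masker (not ScatterLab's actual pipeline). Pattern-based rules reliably catch formatted data such as phone numbers and email addresses, but free-text names and addresses have no fixed shape, so they can slip through:

```python
import re

def deidentify(text: str) -> str:
    """Mask common formatted PII with placeholders.

    Hypothetical example: rules like these catch structured data,
    but unstructured names and street addresses do not match any
    fixed pattern and would pass through untouched.
    """
    # Korean-style mobile numbers, e.g. 010-1234-5678
    text = re.sub(r"\b01[016789]-\d{3,4}-\d{4}\b", "[PHONE]", text)
    # Email addresses
    text = re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "[EMAIL]", text)
    return text

print(deidentify("Call me at 010-1234-5678 or mail kim@example.com"))
# A real name like "Kim Minsu lives near City Hall" would be left intact.
```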
ScatterLab argued that the Iruda DB consists of individual, independent sentences produced through a de-identification procedure and therefore does not contain personally identifiable data. The company also emphasized that there is no risk of personal information leakage because the model only learns conversation patterns from clearly de-identified data, and the AI stores the data as vector values.
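The "vector values" claim refers to the way neural models encode text numerically. A minimal, hypothetical sketch of one such encoding, a hashed bag-of-words far simpler than any real chatbot embedding, shows what it means for a vector not to store the original wording directly:

```python
import hashlib

def embed(sentence: str, dim: int = 8) -> list[float]:
    """Toy hashed bag-of-words embedding (illustrative only).

    Each word increments one of `dim` buckets chosen by hashing the
    word, so the vector records counts rather than the original text.
    Note this property of a single vector says nothing, by itself,
    about whether a generative model trained on the data can still
    reproduce memorized strings in its output.
    """
    vec = [0.0] * dim
    for word in sentence.lower().split():
        bucket = int(hashlib.md5(word.encode()).hexdigest(), 16) % dim
        vec[bucket] += 1.0
    return vec

print(embed("hello world"))  # 8 numbers; the words themselves are gone
```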