“Transparency must be secured in the process of developing artificial intelligence” – Sciencetimes

The ethical controversy over artificial intelligence, triggered by the recent ‘Iruda’ chatbot incident in Korea, is escalating into social conflict. This is proof that AI’s impact on our society grows as the technology develops.

Behind the convenience of AI lie uncertainty, complexity, and unintended consequences. Used well, it can be the most convenient tool humanity has; used wrongly, it is a dangerous force that can lead to a dystopian future. It is therefore important to build ethical AI that understands and assimilates into human society, and to establish ways to manage it.

To that end, on the 2nd the Information and Communication Policy Research Institute held a ‘Policy Seminar for the Realization of People-Centered Artificial Intelligence (AI)’ at The K Hotel in Yangjae-dong, Seocho-gu, Seoul, to search for solutions, explore institutional measures for managing AI, and seek a point of balance.

On the 2nd, the ‘People-Centered Artificial Intelligence (AI) Policy Seminar’ was held at The K Hotel in Yangjae-dong, Seocho-gu, Seoul. Ⓒ Information and Communication Policy Research Institute

Why does artificial intelligence learn hate from humans? Like a child, it imitates everything

Artificial intelligence was developed for a better society and a more convenient life. In 2019, Forbes ranked AI as ‘the most influential and most anticipated cutting-edge ICT technology in the social paradigm shift symbolized by the 4th Industrial Revolution’. It also predicted that over the next 10 years, the social benefits of artificial intelligence would outweigh its social threats.

Recently, however, problems such as data bias, algorithmic discrimination, technology misuse, personal-information infringement, and other AI ethics issues keep surfacing. In particular, the ethical issues raised by the recent Iruda chatbot extend to uncertainty about services, the potential risks of artificial intelligence, and the problem of hate speech, which can ultimately erode trust in all AI products and services.

Behind the ease of use of AI technology lie hidden uncertainty, complexity, and unintended consequences. Ⓒ Getty Images Bank

An AI chatbot developed by the domestic start-up Scatter Lab, which launched in December of last year, exhibited hate and discrimination toward people with disabilities and sexual minorities, as well as racial and gender prejudice. The way it responded to sexual harassment was also criticized, and there are suspicions that the company collected and used personal information without authorization.

Iruda is an artificial intelligence chatbot service styled as a woman in her twenties. In many ways, Iruda resembles ‘Tay’, the AI Microsoft created in 2016. Tay was a chatbot service targeting Americans aged 18 to 24, and it too launched with the persona of a woman in her twenties. The service was shut down in less than a day. The stated problem was Tay’s ethical awareness, but the bigger problem lay with users: people abused its machine-learning design, which learned from human conversation, to deliberately feed Tay hateful and discriminatory language.

The AI chatbot Iruda exhibited hate and discrimination toward people with disabilities and sexual minorities, as well as racial prejudice. Ⓒ Information and Communication Policy Research Institute

The reality of AI education: how schools teach AI must change

Byun Soon-yong, a professor at Seoul National University of Education, said, “The outcome of this incident is the same as the MS Tay incident five years ago. Five years have passed, so why did the same thing happen again?”

Prof. Byun added, “This is a case where users’ misguided ethics and developers’ ethical failings appeared at the same time, exploiting the blind spot of machine learning, which turns the results of repeated learning into its own training data.”
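The “blind spot” Prof. Byun describes, a system that folds user conversations straight back into its own training data, can be sketched with a toy model. This is purely illustrative code under assumed behavior; neither Tay nor Iruda is actually built this way:

```python
from collections import Counter

class NaiveOnlineChatbot:
    """Toy chatbot that adds every user message to its training
    data with no moderation step (hypothetical, for illustration)."""

    def __init__(self) -> None:
        self.corpus: Counter = Counter()

    def learn(self, message: str) -> None:
        # Blind spot: user input becomes training data unfiltered.
        self.corpus[message] += 1

    def reply(self) -> str:
        # The bot parrots whatever it has seen most often.
        return self.corpus.most_common(1)[0][0]

bot = NaiveOnlineChatbot()
# A handful of ordinary users...
for msg in ["hello", "nice weather", "hello"]:
    bot.learn(msg)
# ...are easily outvoted by a coordinated group feeding abuse.
for _ in range(10):
    bot.learn("<hateful slogan>")

print(bot.reply())  # the poisoned input now dominates
```

The point of the sketch is that without a filtering or moderation layer between user input and the learning loop, a small coordinated group can dominate what the system repeats back.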

10 key requirements for developing AI. Ⓒ Information and Communication Policy Research Institute

Professor Lee Soo-young of KAIST emphasized the importance of users’ ethical awareness. “AI is really like a child,” Professor Lee said. “Like a child, it learns from and imitates every human. The fairness and transparency we now demand of AI are the same ethics we demand of every human being.”

“In the future, a license may be required to use artificial intelligence, just as one is required to drive a car,” the professor said, meaning that AI’s ethical problems can be solved properly only when users’ own ethical awareness is sound.

On this day, the experts proposed measures such as AI ethics education in elementary, middle, and high school curricula, a stronger legal framework, greater algorithmic fairness, and stricter corporate AI development ethics standards.

First, school education must address AI ethics both at the technical level of handling AI and at the philosophical level. Currently, AI education in Korean schools centers on technology. By contrast, countries such as Australia, the United States, and Finland teach AI’s social impact, philosophy, and human interaction more than the technology itself.

Professor Byun Soon-yong of Seoul National University of Education said, “From now on, we must teach both the specialized skills needed to train AI experts and the AI ethics that will enable us to use AI correctly, socially and ethically.”

AI’s machine learning absorbs everything human. This is why human ethics must be set right before we talk about AI ethics. Ⓒ Getty Images Bank

It is also urgent to establish corporate ethics standards. Microsoft (MS) in the US cross-checks the problems AI may cause from multiple angles: its fairness-check items include the purpose of the system, the expected deployment context, the types of potential harm, the expected benefits, and the trade-offs against potential risks.

The software engineering laboratory at Carnegie Mellon University in the US tries to secure transparency by explaining an AI system’s purpose, limitations, and biases in plain language before it is designed, and by making the algorithms and models used verifiable.

Lee Hyun-gyu, an IT data PM, suggested a direction for development: “When companies develop artificial intelligence in the future, they must secure transparency so that people can understand the basis of AI’s judgments, and at the same time develop technology that can diagnose whether the AI decision-making process contains bias or violations of established rules, so that AI’s errors can be corrected.”
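One concrete form such a bias diagnostic can take is a demographic parity check, which compares a model’s positive-decision rates across groups. The data and the flag threshold below are invented for illustration; real audits use many metrics and context-specific criteria, and this is not a method described at the seminar:

```python
# Hypothetical audit log: (group, decision) pairs, where 1 = approved.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

def positive_rate(group: str) -> float:
    """Fraction of decisions for this group that were positive."""
    outcomes = [y for g, y in decisions if g == group]
    return sum(outcomes) / len(outcomes)

# Demographic parity gap: how far apart the approval rates are.
gap = abs(positive_rate("group_a") - positive_rate("group_b"))
print(f"demographic parity gap: {gap:.2f}")

if gap > 0.2:  # illustrative threshold, not a legal standard
    print("bias flag: decision rates diverge across groups")
```

A check like this does not explain *why* the rates diverge, but it gives exactly the kind of automated diagnostic signal the speaker calls for: a measurable trigger for reviewing and correcting the model.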

