Artificial Intelligence in South Korea

Regulatory guidance / voluntary codes in South Korea

Various governmental authorities, such as the Ministry of Science and ICT (MSIT), the Personal Information Protection Commission (PIPC) and the Korea Communications Commission (KCC), have been issuing regulatory guidance. For general guidance on AI, the following can be considered:

The 'National Guidelines for AI Ethics' were prepared by MSIT in 2020 to provide comprehensive standards to be followed by all members of society in implementing 'human-centered AI'. The National Guidelines for AI Ethics highlight three basic principles that should be considered during the development and utilisation of AI to achieve 'AI for humanity':

  1. Respect for human dignity;
  2. The common good of society; and
  3. Proper use of technology.

They also list ten key requirements that should be met throughout the AI system lifecycle to abide by the three basic principles, including safeguarding human rights, protecting privacy, preventing harm, transparency, respecting diversity, and accountability.

'AI Ethics Self-Checklists' were also prepared by MSIT and the Korea Information Society Development Institute (KISDI) in 2023 to help AI actors examine their adherence to the National Guidelines for AI Ethics in practice. They cover philosophical and social considerations, including ethical considerations concerning the development and utilisation of AI, as well as the social norms and values to be pursued. The AI Ethics Self-Checklists provide both a general-purpose checklist and field-specific checklists that can be used by different AI actors, with the latter covering the fields of AI chatbots, AI writing, and AI image recognition systems.

'Guidebooks for Development of Trustworthy AI' were prepared by MSIT and the Telecommunications Technology Association (TTA) in 2023 and 2024, providing development requirements and verification items to serve as reference materials for ensuring trustworthiness when developing AI products and services. The eight sector-specific versions of the Guidebooks build on the requirements and assessment questions of the general version with specialised, sector-specific use cases to enhance practical application, covering the medical, autonomous driving, public and social, general AI, smart security, and hiring sectors. They recommend selecting the appropriate sector-specific requirements and assessment questions, in light of the characteristics of the AI service, when carrying out AI trustworthiness assurance activities.

A 'Strategy to Realize Artificial Intelligence Trustworthy for Everyone' was also announced by MSIT in 2021. It seeks to realise trustworthy AI for everyone by applying three pillars (technology, system and ethics) across ten action plans.
