Artificial Intelligence in South Korea
Law / proposed law in South Korea
On December 26, 2024, the 'Framework Act on the Development of Artificial Intelligence and the Establishment of Foundation for Reliability' (AI Act) passed the plenary session of the National Assembly. The AI Act is expected to take effect around January 2026.
The AI Act is designed to advance AI development and promote self-regulation by establishing a framework of the following initiatives: (i) formulating a master plan for AI by the Minister of the Ministry of Science and ICT (MSIT), creating the National AI Committee (NAIC) under the President’s office, establishing the AI Policy Center, and establishing the legal foundation for the AI Safety Institute’s operations; (ii) supporting industries related to the development and promotion of AI technology, including establishing standards for AI technology; and (iii) enacting and announcing the “AI Ethics Principles” to support self-verification and certification by AI-related organisations, thereby ensuring the safety and reliability of AI, and establishing the legal basis for autonomous ethics committees in the private sector.
Furthermore, the AI Act stipulates various obligations for AI business operators, such as operators involved with high-impact AI, businesses offering generative AI products or services, and operators whose cumulative AI training compute usage surpasses a threshold to be designated by Presidential Decree. It also requires operators lacking a domicile or business location within Korea to appoint a domestic agent to comply with the regulatory framework, and it empowers the Minister of MSIT to conduct fact-finding inspections and to issue suspension or corrective orders where necessary.
The AI Act is the first statute in Korea to impose legal requirements specific to AI technologies and products.
Please note that AI in South Korea remains subject to existing rules governing personal information, copyright, and telecommunications. The obligations and requirements under these laws and regulations will therefore continue to apply to AI-related businesses and services.
Regulatory guidance / voluntary codes in South Korea
Various governmental authorities, such as MSIT, the Personal Information Protection Commission (PIPC) and the Korea Communications Commission (KCC), are developing regulatory guidance. For general guidance on AI, the following can be considered:
'National Guidelines for AI Ethics' were prepared by MSIT in 2020 to provide comprehensive standards that should be followed by all members of society to implement 'human-centered AI'. The National Guidelines for AI Ethics highlight three basic principles that should be considered during the development and utilisation of AI to achieve 'AI for humanity':
- Respect for human dignity;
- The common good of society; and
- Proper use of technology.
They also list ten key requirements that should be met throughout the AI system lifecycle to abide by the three basic principles, including safeguarding human rights, protection of privacy, prevention of harm, transparency, respect for diversity, and accountability, among others.
'AI Ethics Self-Checklists' were also prepared by MSIT and the Korea Information Society Development Institute (KISDI) in 2023 to help AI actors examine their adherence to the National Guidelines for AI Ethics in practice. They cover philosophical and social discussions, including ethical considerations concerning the development and utilisation of AI, as well as the social norms and values to be pursued. The AI Ethics Self-Checklists provide both a general-purpose checklist and field-specific checklists that can be used by different AI actors, with the latter covering AI chatbots, AI writing, and AI image recognition systems.
'Guidebooks for Development of Trustworthy AI' were prepared by MSIT and the Telecommunications Technology Association (TTA) in 2023 and 2024, providing development requirements and verification items to serve as reference materials for ensuring trustworthiness in the development of AI products and services. The eight sector-specific versions of the Guidebooks provide specialised use cases built on the requirements and assessment questions of the general version to enhance practical use. They recommend selecting appropriate sector-specific requirements and assessment questions in light of the characteristics of the AI service during trustworthiness assurance activities, and cover the medical, autonomous driving, public and social, general AI, smart security, and hiring sectors.
A 'Strategy to Realize Artificial Intelligence Trustworthy for Everyone' was also announced by MSIT in 2021. It seeks to realise trustworthy AI for everyone through three pillars (technology, system and ethics) applied across ten action plans.
Appointed supervisory authority in South Korea
There is currently no primary or general supervisory authority under the AI Act in Korea. It is expected that regulatory agencies will handle issues within their respective domains: for example, the PIPC will handle personal information-related issues, the KCC will handle communications regulation issues, and the Korea Fair Trade Commission (KFTC) will handle fair trade issues.
Definitions in South Korea
Under the AI Act, the term “artificial intelligence” or “AI” is defined as the electronic manifestation of human intellectual capabilities such as learning, inference, perception, judgement, and language comprehension.
“AI system,” the principal subject of regulation, is defined as an AI-based system with varying degrees of autonomy and adaptability, capable of influencing physical or virtual environments through its predictions, recommendations, and decisions.
Furthermore, the AI Act applies to AI business operators, defined as individuals or entities engaged in AI-related activities. These operators are categorised into two groups (Article 2, Item 7): (i) 'AI Developers' (corporations, organisations, individuals, and national institutions involved in the development and provision of AI); and (ii) 'AI User Businesses' (corporations, organisations, individuals, and national institutions that offer AI products or services using AI developed by others).
Prohibited activities in South Korea
The AI Act does not specifically enumerate or stipulate any prohibited actions (in contrast to the treatment of prohibited AI practices under the EU AI Act). However, actions that are already prohibited under existing laws and regulations, such as infringement of copyright or privacy and the distribution and publication of illegal information and content, may still be problematic in relation to AI-related services.
High-risk AI in South Korea
The AI Act outlines several key obligations for AI business operators who aim to provide high-impact AI systems or products or services utilising such technology.
- High-Impact AI Definition: "High-Impact AI" systems are those that significantly influence or pose risks to the safety and fundamental rights of individuals. These are typically employed in critical decision-making or assessments with substantial impact on someone’s rights and responsibilities. Examples include applications in medical device development, recruitment processes, loan assessments, and educational evaluations (Article 2, Item 4).
- Preliminary Review Obligation: AI business operators must assess whether their AI technology qualifies as high-impact before deployment. They may seek confirmation from the Minister of MSIT if there is uncertainty regarding the classification of their AI system (Article 33). Non-compliance may result in an administrative fine of up to KRW 30 million (Article 43, Paragraph (1), Item 1).
- Advance Notification Obligation: AI business operators intending to deploy products or services using high-impact AI are obligated to inform users in advance (Article 31, Paragraph (1)). Non-compliance may result in an administrative fine of up to KRW 30 million (Article 43, Paragraph (1), Item 1).
- Safety and Reliability Measures: A comprehensive framework of safety and reliability measures must be implemented by operators offering high-impact AI systems to ensure these systems operate as intended without undue risk (Article 34).
- Impact Assessment Obligation: AI business operators are expected to proactively assess the potential impact of their high-impact AI on individuals’ fundamental rights. Public institutions, including national and local government entities, must prioritise AI solutions that have undergone such assessments (Article 35).
- Right to Explanation: Individuals affected by AI systems, including high-impact AI, have the right to request clear explanations of the logic and principles behind AI-generated outcomes, to the extent that this is technically and reasonably feasible (Article 3, Paragraph (2)); an illustrative sketch of such an explanation follows this list.
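By way of illustration only, the following Python sketch shows one way a service might assemble such a plain-language explanation from the main factors behind a decision. The AI Act does not prescribe any explanation format or interface; the DecisionRecord class, the explain_decision function, and the sample loan-assessment factors are all hypothetical.

```python
# Illustrative sketch only: the AI Act does not prescribe any particular
# explanation format or API. All names and values below are hypothetical.
from dataclasses import dataclass, field


@dataclass
class DecisionRecord:
    """A single AI-assisted decision, e.g. a loan assessment outcome."""
    outcome: str                                               # e.g. "declined"
    factors: dict[str, float] = field(default_factory=dict)    # factor -> weight


def explain_decision(record: DecisionRecord, top_n: int = 3) -> str:
    """Summarise the main factors behind an outcome in plain language,
    to the extent technically feasible (cf. Article 3, Paragraph (2))."""
    ranked = sorted(record.factors.items(), key=lambda kv: abs(kv[1]), reverse=True)
    main = ", ".join(name for name, _ in ranked[:top_n])
    return (f"Outcome: {record.outcome}. "
            f"The factors that most influenced this outcome were: {main}.")


# Example: a declined loan application with three weighted (hypothetical) factors.
print(explain_decision(DecisionRecord(
    outcome="declined",
    factors={"debt-to-income ratio": 0.62,
             "credit history length": -0.21,
             "recent delinquencies": 0.45},
)))
```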
Controls on generative AI in South Korea
The AI Act mandates several obligations on AI business operators that intend to offer products or services utilising generative AI.
- Definition of Generative AI: This term refers to AI systems that produce content such as text, audio, images, and other outputs by mimicking the structure of input data (Article 2, Item 5).
- Advance Notification Obligation: AI business operators must notify users in advance that their products or services are powered by generative AI (Article 31, Paragraph (1)). Non-compliance may result in an administrative fine of up to KRW 30 million (Article 43, Paragraph (1), Item 1).
- Labelling Obligation: Content created by generative AI products or services must be clearly labelled as such (Article 31, Paragraph (2)); see the illustrative sketch after this list.
- Deepfake Content: AI business operators providing virtual outputs that may be mistaken for real (often referred to as “deepfakes”) must ensure these are clearly labelled. If labelled content qualifies as artistic or creative expression, the manner of labelling should not hinder its appreciation (Article 31, Paragraph (3)).
- Compliance Guidance: The specifics of notification and labelling, including potential exceptions, will be detailed in a forthcoming Presidential Decree (Article 31, Paragraph (4)).
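As a purely illustrative sketch of the notification and labelling obligations above, the Python snippet below attaches disclosure text to a generative AI output. Because the manner of notification and labelling will only be fixed by the forthcoming Presidential Decree, the label wording, the deepfake_risk flag, and the label_generative_output helper are all hypothetical placeholders.

```python
# Illustrative sketch only: Article 31 leaves the manner of notification and
# labelling to a forthcoming Presidential Decree, so the label text, metadata
# keys, and function below are hypothetical placeholders.

AI_LABEL = "This content was generated by artificial intelligence."


def label_generative_output(text: str, deepfake_risk: bool = False) -> dict:
    """Wrap a generative AI text output with the disclosure metadata a
    service might attach under Article 31, Paragraphs (2)-(3)."""
    labels = [AI_LABEL]
    if deepfake_risk:
        # Outputs that could be mistaken for real require clear labelling
        # (Article 31, Paragraph (3)); the exact wording here is a placeholder.
        labels.append("This content depicts events that did not occur.")
    return {"content": text, "disclosures": labels}


# Example: labelling an AI-written product description.
result = label_generative_output("A sunny review of our new widget...")
print(result["disclosures"][0])
```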
Enforcement / fines in South Korea
The AI Act authorises the Minister of MSIT to initiate investigations where MSIT learns of an actual or potential violation of the following obligations under the AI Act, or receives a report or civil complaint of such a violation:
- the obligation to label content created using generative AI (Article 31, Paragraph (2));
- the obligation to provide notice to viewers of, or to label, “deepfakes” (Article 31, Paragraph (3));
- the obligation to secure safety when the cumulative compute usage in AI system training surpasses a designated threshold, and the duty to report on the measures taken by the service provider to secure such safety (Article 32, Paragraphs (1) and (2)); and
- the obligation to secure safety and reliability for high-impact AI (Article 34, Paragraph (1)).
Upon finding any of the violations listed above, the Minister of MSIT may issue an order against the violator to suspend or correct the violating action (Article 40, Paragraph (3)).
Furthermore, administrative fines may be imposed as follows:
- failure to appoint a domestic agent may result in an administrative fine of up to KRW 30 million (Article 43, Paragraph (1), Item 2);
- failure to comply with the advance notification obligations for high-impact AI or generative AI may result in an administrative fine of up to KRW 30 million (Article 43, Paragraph (1), Item 1); and
- failure to comply with corrective orders may result in an administrative fine of up to KRW 30 million (Article 43, Paragraph (1), Item 3).
User transparency in South Korea
Certain notification, labelling and/or explanation obligations apply to high-impact AI and generative AI, as discussed above.
Fairness / unlawful bias in South Korea
The AI Act does not expressly address fairness or unlawful bias at present, but these topics are covered as recommendations in the above-mentioned National Guidelines for AI Ethics and similar guidance documents.
Human oversight in South Korea
The AI Act does not specifically mandate human oversight as an obligation. However, it does require certain safety and reliability measures in relation to high-impact AI. For AI systems where the cumulative compute used for training surpasses a certain threshold (to be provided under the Presidential Decree), the AI Act requires AI business operators to identify, assess, and mitigate risks throughout the AI life cycle and to establish a risk management system (Article 32). While the specific details of these safety and reliability measures and risk management systems have not yet been stipulated, a certain level of human oversight and monitoring may be introduced in the future through Presidential Decrees or separate regulations.
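For illustration, the sketch below shows how an operator might track cumulative training compute against a regulatory threshold of the kind contemplated by Article 32. The threshold value and its unit (FLOPs) are placeholders, as the actual figure will be set by Presidential Decree; the TrainingComputeTracker class and the sample figures are hypothetical.

```python
# Illustrative sketch only: the compute threshold that triggers Article 32
# obligations will be set by Presidential Decree and is not yet known; the
# threshold value, units, and class below are hypothetical placeholders.

class TrainingComputeTracker:
    """Accumulates training compute so an operator can tell whether a
    training run is approaching a regulatory threshold."""

    def __init__(self, threshold_flops: float = 1e25):  # placeholder value
        self.threshold_flops = threshold_flops
        self.cumulative_flops = 0.0

    def record_phase(self, phase_flops: float) -> None:
        """Add the compute consumed by one training phase to the total."""
        self.cumulative_flops += phase_flops

    def exceeds_threshold(self) -> bool:
        return self.cumulative_flops >= self.threshold_flops


tracker = TrainingComputeTracker()
tracker.record_phase(3.2e24)   # e.g. an initial pre-training phase
tracker.record_phase(8.1e24)   # e.g. a continued-training phase
if tracker.exceeds_threshold():
    # At this point the Article 32 duties (risk identification, assessment,
    # mitigation, and reporting) would need to be considered.
    print(f"Cumulative compute {tracker.cumulative_flops:.2e} FLOPs exceeds "
          "the assumed threshold; trigger risk-management review.")
```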