With deepfakes and the creation and spread of sexual images of minors through generative artificial intelligence (AI) services, including Grok, emerging as a global problem, international data protection authorities have launched a joint response.
The Personal Information Protection Commission (PIPC) said on the 23rd that it took part in adopting a joint statement on AI-generated content and data protection at the Global Privacy Assembly (GPA).
The joint statement sets out four core principles that organizations developing and using AI systems must follow: implementing safeguards to prevent the misuse of personal information and the creation of sexual content without consent; ensuring transparency about the scope of what AI systems can be used for; establishing effective redress procedures for prompt reporting and removal; and providing strengthened protections for children and adolescents, such as age-appropriate information.
The authorities in each country also agreed to actively share their experiences in policy, enforcement, and education and to strengthen solidarity to realize the shared value of "trustworthy AI innovation."
The statement was prepared under the leadership of the GPA's International Enforcement Cooperation Working Group, in which the PIPC participates. According to the PIPC, the statement drew broad support across the international community: recognizing the urgency of the matter, data protection authorities from more than 50 member countries took part in the signing.
Song Gyeong-hee, Chairperson of the PIPC, said, "We will work with the international community to respond to the risks of personal information infringement caused by the misuse of AI content generation technologies such as deepfakes," adding, "We will continue to lead the creation of a trust-based environment for the use of AI at home and abroad."