"These days, children reportedly turn to the 'artificial intelligence (AI) chatbot' whenever they run into problems they can't solve. I'm worried that as the AI answers everything from math reasoning problems to Korean language logic questions, the process of thinking for themselves is disappearing."

Kim, 45, a parent in Gangseo District, Seoul, with a fourth-grade elementary school child, was recently surprised while reviewing her child's homework. "I asked how she wrote such mature and logical answers to the descriptive questions, and the child hesitated before saying, 'The AI chatbot told me,'" Kim said. "These days, children turn to AI first when they have questions, and I worry they might lose their ability to think and express themselves."

Illustration: ChatGPT

As generative AI advances, it is permeating children's daily lives. Global corporations, eager to capture future customers, are changing policies or launching dedicated services to let children use AI. AI literacy education, however, has not kept pace with this development. Children are growing accustomed to asking AI for answers instead of thinking through problems themselves, and experts warn that using AI before critical thinking has fully developed may harm their learning habits and overall cognitive development.

According to the IT industry on the 22nd, Elon Musk, founder of the AI company xAI, announced on X on the 19th (local time) that he would create an AI chatbot specialized for children called 'Baby Grok.' Grok, the interactive AI chatbot service launched by xAI, has faced continuous controversy: the recently released Grok 4 has been criticized since launch for producing hateful and sexually objectifying content. Musk appears to have announced the separate children's version of Grok in response. However, he did not disclose a launch date for Baby Grok.

Other AI corporations are changing their policies to offer generative AI services to minors. The fastest to move was Anthropic, which in May of last year allowed teenage users to use apps powered by its AI models. Anthropic also laid out safety measures that developers building AI-based apps for minors must implement, including age verification systems, content moderation and filtering, and educational materials on AI use for minors.

Google likewise changed its policy in May to allow children under 13 to use 'Gemini.' Children with accounts managed through 'Family Link,' a service that helps parents create Gmail accounts for their children and manage their use of platforms such as YouTube, can now access Gemini. Google has set up separate safeguards in Gemini to prevent children from generating inappropriate content, and has stated that children's data will not be used for AI training.

With AI corporations relaxing age restrictions, the number of children using generative AI is increasing. According to a survey conducted last year by the Pew Research Center, about 26% of teenagers aged 13 to 17 reported having used ChatGPT for school assignments, roughly double the previous year's figure (13%). In Korea, generative AI use among teenagers has also become commonplace: a survey conducted by the National Youth Policy Institute (NYPI) from May to July last year found that 67.9% of respondents had experience using generative AI.

However, experts warn that early exposure to AI may hinder the development of children's critical thinking and judgment. Kim Myung-joo, a professor of information security at Seoul Women's University and director of the AI Safety Research Institute, said, "Childhood is the stage when critical thinking and judgment develop, and exposure to AI could diminish those abilities. Traditional educational methods, such as thinking for themselves and interacting with peers, are essential." She added, "When children lack judgment skills, there is a risk they will accept AI hallucinations as reality."

Countries around the world are mandating AI literacy education. In the United States, the White House has emphasized AI literacy as a key focus area through the National Artificial Intelligence Advisory Committee, which proposed implementing foundational AI education programs for students and creating and distributing online lectures explaining AI concepts in simple terms. China is making AI education compulsory in primary and secondary curricula: starting from the fall semester of 2025, primary and secondary schools in major cities, including Beijing, must offer at least eight hours of AI education annually. China's Ministry of Education also plans to publish a '2025 AI Education White Paper' outlining AI education strategies and long-term goals.

Korea, by contrast, has seen little such discussion. In last year's survey by the National Youth Policy Institute (NYPI), respondents reported little experience with generative AI education, rated on a four-point scale (1 = never experienced, 4 = frequently experienced). Average scores were ▲ understanding how generative AI works (2.25 points) ▲ education on how to use generative AI (2.24 points) ▲ education on privacy and copyright infringement (2.33 points) ▲ education on information errors and bias (2.19 points).

※ This article has been translated by AI.