Ameca, an artificial intelligence (AI) humanoid robot, answers visitors' questions at the Waze exhibition booth at CES 2025, the world's largest IT show, held in Las Vegas, USA, in January this year. /Courtesy of News1

Set in 2035, the science fiction novel "A Thousand Blues" by author Cheon Seon-ran features the humanoid jockey robot "Colly." Endowed with human-like intelligence after a researcher's chip malfunction, Colly learns words and concepts and communicates with its partner racehorse, "Today." "A Thousand Blues" won the long-form prize at the Korea Science Fiction Literature Awards in 2019 and proved popular enough to be adapted into a play.

◇ Human-like AI could emerge within 4 years

The rapidly advancing artificial intelligence (AI) industry is racing toward artificial general intelligence (AGI). AGI refers to AI that can understand or learn any task a human performs in daily life. Sam Altman, CEO of OpenAI, defined AGI as "AI that can accomplish tasks that a highly skilled person can do in significant roles." By this definition, the jockey robot Colly could also be regarded as AGI.

Recently, CEO Altman predicted that AGI will appear during the term of U.S. President Donald Trump. AI is commonly classified into three stages: artificial narrow intelligence (ANI), AGI, and artificial super intelligence (ASI). OpenAI's ChatGPT and Google's Gemini rely on trained data and algorithms, placing them in the ANI category. AGI denotes human-level AI that learns and evolves through experience rather than relying solely on input data. ASI refers to AI that surpasses human intelligence, though opinions on its precise definition differ.

◇ Global big techs race toward AGI

The company that first showcases a large language model (LLM) close to AGI is likely to seize the next-generation AI leadership. Global big tech corporations are swiftly unveiling new technologies, with OpenAI at the center. CEO Altman wrote on his blog, "OpenAI is now confident that it knows how to build artificial general intelligence (AGI)."

OpenAI unveiled a new AI model named "o3" with advanced reasoning capabilities at the end of last year. It is also developing a next-generation AI model known by the codename "Orion," whose computational power is reported to be more than 100 times that of GPT-4. However, OpenAI has repeatedly delayed its release, saying that Orion's performance has not met expectations.

With OpenAI threatening to run away with the lead, Google also introduced a new AI model, releasing Gemini 2.0 in December of last year. According to Google, Gemini 2.0 uses additional computing cycles to verify its work and can therefore handle more complex queries. A query is a request for desired information from a database or search engine.

Google mentioned that it might head straight toward ASI development beyond AGI. Logan Kilpatrick, product lead of Google AI Studio, stated on his social media that "the possibility of a direct path to ASI is increasing every month." However, IT industry insiders believe that Gemini 2.0 is not yet developed enough for practical application.

Some believe that AGI has already arrived. Elon Musk, CEO of Tesla, has claimed that OpenAI's GPT-4 qualifies as AGI, though OpenAI denies this. When GPT-4 itself was asked, "Are you AGI?" it answered that it has "specific limitations and cannot learn or acquire new knowledge independently," declaring itself "not" AGI.

A developer built an AI-controlled automatic rifle that operates via ChatGPT; OpenAI cut off the developer's access. /Courtesy of Reddit capture

◇ Can humanity handle AGI? Safety and ethics issues remain

In the IT industry, there are concerns that AI ethics cannot keep pace with the speed of the technology. The "superalignment" team, which handled safety at OpenAI, was disbanded last year. Jan Leike, who led the team, criticized the company, saying that in recent years AI safety has been pushed to the back burner behind successful products, and that OpenAI must prioritize preparing for the risks of AI by focusing much of its capacity on security, monitoring, and safety. Only then, he argued, will AGI be able to benefit humanity.

Beyond the threats AI itself may pose to humanity, a social consensus is also needed on the ethics of how humans use AI. Recently, a developer caused controversy by building an "AI rifle" using ChatGPT. In a released video, the AI responds to the developer's voice commands by autonomously aiming and firing at a target. OpenAI stated that it immediately blocked the developer's access, but the incident has reignited the AI ethics debate.

At this month's Consumer Electronics Show (CES 2025), humanoid robots were a hot topic, suggesting that the day we meet a "Colly" is not far off. In the novel, Colly sacrifices itself during a race to save the weakened Today. Judged strictly as a product, Colly is a failure for disobeying human commands. Yet in the near future, a debate may arise over which is closer to humanity: humans who abandon their humanity, or an AI that tries to reclaim it.

※ This article has been translated by AI.