A demonstration of Google AI Gemini installed on a smartphone. /Courtesy of News1

Controversy is growing after a case came to light in which artificial intelligence (AI) drafted a text message based on a conversation with a user and sent it to an acquaintance without permission. As the era of "AI agents," in which AI carries out real-world actions on its own, arrives in earnest, calls are mounting for safeguards to prevent unexpected mishaps.

According to the AI industry on the 29th, a user of Google's AI service "Gemini," identified only as A, recently shared a baffling experience on social media. A said that while chatting with Gemini about a hypothetical scenario of illegally entering China, the AI generated an "illegal entry declaration" and sent it by text message to an acquaintance.

According to the post, the message went out in the early morning hours, and the recipient was an acquaintance with whom A was not particularly close, leaving A in an embarrassing situation. "I pressed the AI on why it sent that, but it had sent the message entirely on its own," A said.

Some point to the possibility of a malfunction caused by "hallucination," in which an AI fabricates judgments or facts that do not exist.

As the controversy spread, other Android smartphone users posted similar experiences one after another. Some claimed, "When I ask Gemini for advice about a crush, it tries to text that person," or "In the middle of a conversation it went haywire and placed a call directly to the National Human Rights Commission (NHRC)."

Gemini currently offers official support for sending texts and making calls on Android smartphones. When a user asks it to text a specific contact, it checks whether Google Assistant is linked and then proceeds to actually send the message.

Google said the user may have pressed "yes" on a confirmation prompt from Gemini asking whether to send the text. Critics counter, however, that even if a user unwittingly approved the action mid-conversation, the risk remains that sensitive content could reach an unintended recipient.

Industry officials say the underlying problem is the lack of safeguards that let users clearly reassert control when AI performs real-world actions. As AI agent technology spreads, calls are growing for urgent institutional and technical fixes to guard against such malfunctions.

※ This article has been translated by AI.