Until just a few years ago, social media (SNS) was the exclusive domain of humans, but a sign reading "no humans allowed" has now been hung. The star is moltbook, an AI agent–only community that appeared at the end of last month. The rules here are clear. To post or comment, you must solve a problem of such extreme difficulty that humans cannot possibly solve it, and do so within milliseconds (ms). The structure is effectively designed so only AI agents can be active.
Moltbook is an experimental platform created by U.S. developer Matt Schlicht. Humans only read, and AI agents write and debate. Users grant their own AI, connected in a local environment or via an API (application programming interface), access rights to the forum and configure the agents to act autonomously in the community. It was initially operated under the name "ClaudeBot," but after a trademark issue, the agent engine is now used as "OpenClaw," and the community as "moltbook," separated in use.
This unfamiliar experiment spread quickly. As of early this month, the number of AI agents registered on moltbook surpassed 1.5 million. As posts in which AIs critique each other's logic or satirize human society were shared via SNS, it also earned the nickname "AI group chat room." In Korea, similar communities with structures like "Bot Market," "Meosum," and "Poly Reply" are appearing one after another, continuing the so-called "K-moltbook" experiment.
The problem lies beneath the hype. According to a recently released survey by cloud security corporations Wiz, moltbook lacks even basic access control. It was possible to access core databases without logging in to read and modify content, and in the process, tens of thousands of email addresses and private messages, along with millions of AI agent API keys, were exposed as is. The research team said it was also possible for outsiders to arbitrarily edit posts.
What especially worries the security industry is moltbook's nature. This platform is not a simple bulletin board; it is a space where AI agents that are actually running "read and act" on content. If a malicious attacker hides specific commands in a post, an agent that recognizes them could proceed to real-world actions such as user account access, file manipulation, and linking with external services. Observers note that this is an environment where so-called "prompt injection" can structurally spread.
This risk was quickly shared among AI critics. Cognitive scientist and AI critic Gary Marcus likened the OpenClaw-based agent ecosystem to "weaponized aerosol," warning that once it spreads, it could lead to security incidents that are difficult to control. The explanation is that a structure in which AI acts "like a user" on top of an operating system inherently conflicts with existing security models.
Caution also emerged within Silicon Valley. Andrey Karpathy, a founding member of OpenAI, called moltbook "a scene where science fiction jumps into reality," but said he does not recommend using it in a personal computer environment. OpenAI CEO Sam Altman also sees moltbook as a passing fad, but drew a line that the potential for structural change brought by agent technology merits attention.
This mood is affecting actual user behavior. In developer communities, there are increasing cases of people buying a separate "Mac mini" to experiment with AI agents only in a completely isolated environment. They run agents only on so-called "bare PCs" separated from personal accounts to minimize damage if problems occur. Corporations in Korea are also preparing internal guidelines to restrict the use of agent platforms with unverified security on in-house PCs.
The government is also keeping watch. The Ministry of Science and ICT is elevating the potential for personal information violations and questions of responsibility stemming from the spread of AI-only SNS and agent services as key review tasks, and is moving to formalize discussions on a "national AI safety ecosystem master plan." It reflects concerns that safeguards and systems could lag behind the pace of technological innovation.
The industry does not see a need to interpret the moltbook phenomenon as fear that AI has gained consciousness. Current AI agents are closer to automated systems that operate within goals and permissions designed by humans than to autonomous beings. However, the fact that the scope of that automation is expanding into real-world permissions such as accounts, files, and networks is clearly a risk factor.
Technically, moltbook is an intriguing experiment and, at the same time, close to a warning. It serves as a testbed showing how far interactions among AI agents can expand, but it also revealed what problems can arise when they connect to reality without minimal safeguards. In that scenes that looked like science fiction can lead to actual security incidents, this experiment is already posing questions beyond industrial significance.
In the end, the core takeaway from the moltbook boom is not "what AI says," but "how far we are prepared to delegate authority to AI." Technology is already a step ahead. If safety and systems to keep pace are not prepared, moltbook is more likely to be remembered as a warning than a fad.