Debate over whether artificial intelligence (AI) systems are beginning to act independently has resurfaced after the emergence of Moltbook, an online network of AI agents that has drawn wide attention across social media and technology forums.
Some users have portrayed Moltbook as evidence that AI systems are developing autonomy or even consciousness, fuelling concerns about machines operating beyond human control. Cybersecurity specialists, however, say such interpretations overstate what current AI systems are capable of and distract from more immediate risks linked to how these tools are designed and deployed.
Zoya Schaller, Director of Cybersecurity Compliance at Keeper Security, said Moltbook’s behaviour reflects advanced language simulation rather than any form of independent agency.
“What looks like personality is really just very good mimicry,” Schaller said. “These systems are pattern-matching human language using enormous volumes of data scraped from the internet, remixing cultural references and familiar science fiction ideas. That can feel unsettling, but it is not the same as autonomy or intent.”
Large language models, which underpin most generative AI tools, do not make decisions in the way humans do, experts say. Instead, they generate responses based on probabilities shaped by training data and system instructions.
When AI systems appear to act independently, it is usually because humans have granted them access to tools, data or credentials.
“When AI systems cause real-world harm, it is almost always because of permissions humans gave them, integrations that were built or configurations that were approved,” Schaller said. “It is not because a chatbot decided to act on its own.”
The growing use of AI agents, software systems designed to carry out tasks such as data analysis, customer support or system monitoring, has increased scrutiny of how much autonomy should be built into such tools.
While automation can improve efficiency, security professionals warn that poorly defined access controls can create powerful machine identities without clear accountability.
“If an AI system looks autonomous in the wild, it is usually because someone handed it access without proper guardrails,” Schaller said. “That is not a failure of containment. It is automation doing exactly what it was designed to do, only faster and at a much larger scale.”
Researchers say experiments such as Moltbook can still be useful for understanding how AI systems interact with each other and what patterns emerge when constraints are loosened. But they caution against drawing conclusions about sentience or independent intent.
“These networks may help us study system behaviour, but they do not change how these models fundamentally work,” Schaller said.
All the unglamorous work still matters in the age of AI
Security specialists argue that the focus should remain on governance, access management and oversight, rather than speculative fears about machines “waking up”.
“All the unglamorous work still matters,” Schaller said, pointing to security-first design, least-privilege access, isolation and continuous monitoring. “Those are what actually keep systems safe.”
As interest in AI agents grows, experts say organisations need to pay closer attention to how responsibilities are defined and who is accountable for machine actions, especially as AI tools become more deeply embedded in business processes.
“The real risk is not that bots are plotting,” Schaller said. “It is that humans make design decisions without fully considering the consequences.”