AI ‘autonomy’ fears say more about human design choices than machines, security expert says

Debate over whether artificial intelligence (AI) systems are beginning to act independently has resurfaced after the emergence of Moltbook, an online network of AI agents that has drawn wide attention across social media and technology forums.

Some users have portrayed Moltbook as evidence that AI systems are developing autonomy or even consciousness, fuelling concerns about machines operating beyond human control. Cybersecurity specialists, however, say such interpretations overstate what current AI systems are capable of and distract from more immediate risks linked to how these tools are designed and deployed.

Zoya Schaller, a director at Keeper Security, said Moltbook’s behaviour shows advanced language simulation rather than any form of independent agency.


“What looks like personality is really just very good mimicry,” Schaller said. “These systems are pattern-matching human language using enormous volumes of data scraped from the internet, remixing cultural references and familiar science fiction ideas. That can feel unsettling, but it is not the same as autonomy or intent.”

Large language models, which underpin most generative AI tools, do not make decisions in the way humans do, experts say. Instead, they generate responses based on probabilities shaped by training data and system instructions.

When AI systems appear to act independently, it is usually because humans have granted them access to tools, data or credentials.

“When AI systems cause real-world harm, it is almost always because of permissions humans gave them, integrations that were built or configurations that were approved,” Schaller said. “It is not because a chatbot decided to act on its own.”

The growing use of AI agents, software systems designed to carry out tasks such as data analysis or system monitoring, has increased scrutiny of how much autonomy should be built into such tools.

While automation can improve efficiency, security professionals warn that poorly defined access controls can create powerful machine identities without clear accountability.


“If an AI system looks autonomous in the wild, it is usually because someone handed it access without proper guardrails,” Schaller said. “That is not a failure of containment. It is automation doing exactly what it was designed to do, only faster and at a much larger scale.”

Researchers say experiments such as Moltbook can still be useful for understanding how AI systems interact with each other and what patterns emerge when constraints are loosened. But they caution against drawing conclusions about sentience or independent intent.

“These networks may help us study system behaviour, but they do not change how these models fundamentally work,” Schaller said.

All the unglamorous work still matters in the age of AI

Security specialists argue that the focus should remain on governance, access management and oversight, rather than on speculative fears about machines “waking up”.

“All the unglamorous work still matters,” Schaller said, pointing to security-first design, least-privilege access, isolation and continuous monitoring. “Those are what actually keep systems safe.”

As interest in AI agents grows, experts say organisations need to pay closer attention to how responsibilities are defined and who is accountable for machine actions, especially as AI tools become more deeply embedded in business processes.

“The real risk is not that bots are plotting,” Schaller said. “It is that humans make design decisions without fully considering the consequences.”


