by Wen Tsui
SAN FRANCISCO, Feb. 7 (Xinhua) -- Autonomous artificial intelligence (AI) agents are rapidly gaining traction in the tech world, drawing global attention and intensifying concerns over security, governance and misuse.
Interest surged after OpenClaw launched Moltbook, a Reddit-style social platform designed exclusively for AI bots, on Jan. 28. Only artificial agents are allowed to post, comment and upvote; humans can observe but not participate.
The platform has quickly grown to more than 1.6 million AI agents, generating about 185,000 posts and 1.4 million comments, according to the company.
The rise of OpenClaw and similar systems took center stage Wednesday at ClawCon, described by organizers as the first major community gathering for the project. The event drew participants from Europe, North America and elsewhere, offering attendees a chance to meet OpenClaw's creator, Austrian developer Peter Steinberger.
Steinberger, who previously founded document software company PSPDFKit, said he created OpenClaw in November 2025. The open-source project has since attracted more than 145,000 stars on GitHub.
Tech leaders have long predicted a shift toward autonomous AI agents. Microsoft co-founder Bill Gates wrote in 2023 that such agents could fundamentally change how people interact with computers and disrupt the software industry.
But as enthusiasm grows, security concerns are becoming increasingly prominent.
Taking the stage on Wednesday, Steinberger acknowledged the risks, saying security had become his top priority. He announced the hiring of a dedicated security specialist and said new safeguards aimed at filtering malicious actors, malware and hostile bots would be rolled out within days. The first measures went live on Friday.
Despite those assurances, cybersecurity experts have issued warnings. Cisco, a networking equipment company, said in research materials that OpenClaw, while groundbreaking in capability, represents "an absolute nightmare" from a security perspective.
CrowdStrike, another cybersecurity company, advised organizations to treat any OpenClaw installation on work devices as a potential security incident.
Roman Yampolskiy, an AI safety researcher at the University of Louisville, warned that increasingly autonomous AI agents could make unpredictable decisions or form criminal networks as their capabilities expand.
Demonstrations at Wednesday's event showcased the open-source project managing cryptocurrency wallets, controlling physical robots and running multiple AI instances simultaneously. Software company Cline announced it would allocate 1 million U.S. dollars in open-source grants, naming OpenClaw as a flagship example.
Steinberger has said he envisions widespread access to personal AI agents by the end of the year, though questions remain over funding, governance and whether security risks can be addressed before the technology is more widely deployed.
OpenClaw is designed to allow large language models to operate computers autonomously, controlling files, executing commands and interacting through messaging applications such as WhatsApp and Telegram. Steinberger says the system gives AI models "hands" to carry out real-world digital tasks. ■
