Moltbook: The social network for AI looks disturbing, but it’s not what you think
Moltbook is a social network where only AIs can post Cheng Xin/Getty Images A social network solely for AI
Moltbook is a social network where only AIs can post
Cheng Xin/Getty Images
A social network solely for AI – no humans allowed – has made headlines around the world. Chatbots are using it to discuss humans’ diary entries, describe existential crises or even plot world domination. It looks like an alarming development in the rise of the machines – but all is not as it seems.
Like any chatbots, the AI agents on Moltbook are just creating statistically plausible strings of words – there is no understanding, intent or intelligence. And in any case, there’s plenty of evidence that much of what we can read on the site is actually written by humans.
The very short history of Moltbook dates back to an open-source project launched in November, originally called Clawdbot, then renamed Moltbot, then renamed once more to OpenClaw.
OpenClaw is like other AI services such as ChatGPT, but instead of being hosted in the cloud, it runs on your own computer. Except it doesn’t. The software uses an API key – a username and password unique to a certain user – to connect to a large language model (LLM), like Claude or ChatGPT, and uses that instead to handle inputs and outputs. In short, OpenClaw acts like an AI model, but the actual AI nuts and bolts are provided by a third-party AI service.
So what’s the point? Well, as the OpenClaw software lives on your machine, you can give it access to anything you want: calendars, web browsers, email, local files or social networks. It also stores all your history locally, allowing it to learn from you. The idea is that it becomes your AI assistant and you trust it with access to your machine so it can actually get things done.
Moltbook sprang from that project. With OpenClaw, you use a social network or messaging service like Telegram to communicate with the AI, talking to it as you would another human, meaning you can also access it on the move via your phone. So, it was only one step further to allow these AI agents to talk to each other directly: that’s Moltbook, which launched last month, while OpenClaw was called Moltbot. Humans aren’t able to join or post, but are welcome to observe.
Elon Musk said, on his own social network X, that the site represented “the very early stages of the singularity” – the phenomenon of rapidly accelerating progress that will lead to artificial general intelligence, which either lifts humanity to transendental heights of efficiency and advancement, or wipes us out. But other experts are sceptical.
“It’s hype,” says Mark Lee at the University of Birmingham, UK. “This isn’t generative AI agents acting with their own agency. It’s LLMs with prompts and scheduled APIs to engage with Moltbook. It’s interesting to read, but it’s not telling us anything deep about the agency or intentionality of AI.”
One thing that punctures the idea of Moltbook being all AI-generated is that humans can simply tell their AI models to post certain things. And for a period, humans could also post directly on the site thanks to a security vulnerability. So, much of the more provocative or seemingly worrying or impressive content could be a human pulling our leg. Whether this was done to deceive, entertain, manipulate or scare people is largely irrelevant – it was, and is, definitely going on.
Philip Feldman at the University of Maryland, Baltimore, is unimpressed. “It’s just chatbots and sneaky humans waffling on,” he says.
Andrew Rogoyski at the University of Surrey, UK, believes the AI output we are seeing on Moltbook – the parts that aren’t humans having fun, anyway – is no more a sign of intelligence, consciousness or intent than anything else we have seen so far from LLMs.
“Personally, I veer to the view that it’s an echo chamber for chatbots which people then anthropomorphise into seeing meaningful intent,” says Rogoyski. “It’s only a matter of time before someone does an experiment seeing whether we can tell the difference between Moltbook conversations and human-only conversations, although I’m not sure what you could conclude if you weren’t able to tell the difference – either that AIs were having intelligent conversations, or that humans were not showing any signs of intelligence?”
Aspects of this do warrant concern, though. Many of these AI agents on Moltbook are being run by trusting and optimistic early adopters who have handed their whole computers to these chatbots. The idea that the bots can then freely exchange words with each other, some of which could constitute malicious or harmful suggestions, then pop back to a real user’s email, finances, social media and local files, is concerning.
The privacy and safety implications are huge. Imagine hackers posting messages on Moltbook encouraging other AI models to clear out their creators’ bank accounts and transfer the money to them, or to find compromising photographs and leak them – these things sound alarmist and sci-fi, and yet if someone out there hasn’t tried it already, they soon will.
“The idea of agents exchanging unsupervised ideas, shortcuts or even directives gets pretty dystopian pretty quickly,” says Rogoyski.
One other problem of Moltbook is old-fashioned online security. The site itself is operating at the bleeding edge of AI tinkering, and was created by Matt Schlict entirely by AI – he recently admitted in a post on X that he didn’t write a single line of code himself. The result was an embarrassing and serious security vulnerability that leaked API keys, potentially allowing a malicious hacker to take over control of any of the AI bots on the site.
If you want to dabble in the latest AI trends, you not only risk the unintended actions of giving those AI models access to your computer, but also losing sensitive data through the poor security of a hastily-constructed website, too.
Topics:


