If you haven’t already, subscribe to join our community and receive weekly AI insights, updates and interviews with industry experts straight to your feed.
Moltbot (formerly Clawdbot) has been in the news lately – and the headlines have been a debate-stirring mix of the celebratory and the doom-laden.
It’s an AI agent that acts on your behalf. It can manage your work schedule and personal life through WhatsApp, and it can act automatically once you set up workflows or triggers.
Moltbot has gone viral in tech and developer circles. But there’s a reason some experts (including security leaders) are a little concerned.
More specifically, Moltbot is an open-source AI agent that you can interact with through everyday messaging apps – including WhatsApp, Telegram, Signal, Discord, and iMessage.
Instead of living only in a browser tab, it can sit closer to your workflows: calendars, messages, documents, and (if you let it) the operating system itself. The Verge notes it can route requests through the AI provider you choose (OpenAI, Anthropic, or Google) and handle practical tasks like filling forms, sending emails, and managing calendars.
It’s also had a very public growth spurt. Anthropic requested a rename due to trademark similarities, and creator Peter Steinberger said he was ‘forced’ to change the name from Clawdbot to Moltbot.
That rename triggered the kind of chaos you only get when software turns into a social phenomenon: harassment, scams, and copycats rushing in to capitalise on confusion.
So what should we learn from it? Is Moltbot a breakthrough or a warning?
It’s both – and which one dominates depends on how we deploy it.
Moltbot is a glimpse of the optimistic path: open-source tooling that makes personal automation feel genuinely useful, and lets users choose providers and shape behaviours.
But it also demonstrates the risk path: when an agent can read your messages, touch your files, and act on your behalf, the cost of a mistake (or a successful manipulation) rises sharply.
The real AI coworker breakthrough will be the first ecosystem that makes it easy to delegate safely: think least-privilege access by default, clear permissioning, human approval gates for high-impact actions, and security guidance that’s as prominent as the install command.
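What might that look like in practice? Here’s a minimal sketch in Python of a least-privilege tool registry with a human approval gate. To be clear, the tool names, scopes and risk tiers below are hypothetical illustrations of the pattern, not Moltbot’s actual design:

```python
from enum import Enum


class Risk(Enum):
    LOW = "low"    # read-only, easily reversible
    HIGH = "high"  # sends messages, spends money, deletes data


# Hypothetical registry: each tool declares the least privilege it needs
# and a risk tier that decides whether a human must approve the call.
TOOLS = {
    "read_calendar": {"risk": Risk.LOW, "scopes": ["calendar:read"]},
    "send_email": {"risk": Risk.HIGH, "scopes": ["mail:send"]},
}


def run_tool(name: str, args: dict, granted_scopes: set[str]) -> str:
    tool = TOOLS[name]

    # Least privilege: refuse anything the user never granted.
    missing = set(tool["scopes"]) - granted_scopes
    if missing:
        return f"denied: missing scopes {sorted(missing)}"

    # Approval gate: high-impact actions pause for an explicit human yes.
    if tool["risk"] is Risk.HIGH:
        answer = input(f"Agent wants to run {name}({args}). Allow? [y/N] ")
        if answer.strip().lower() != "y":
            return "cancelled by user"

    return f"executed {name}"  # the real dispatch would happen here


if __name__ == "__main__":
    # This agent was only granted read access, so reading the calendar
    # succeeds, while sending email is blocked outright. Even a granted
    # high-risk call would still wait for human approval.
    print(run_tool("read_calendar", {}, {"calendar:read"}))
    print(run_tool("send_email", {"to": "boss@example.com"}, {"calendar:read"}))
```

The point of the pattern: the agent can only touch what you’ve explicitly granted, and anything irreversible stops for a human yes before it happens.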
If an AI coworker could do 30% of your daily tasks, which 30% would you delegate – and which would you never hand over?
Open this newsletter on LinkedIn and tell us what you think. We’ll see you back here next week.
How geometric deep learning forecasts cell development