The pros and cons of an advanced AI coworker

The pros and cons of an advanced AI coworker

If you haven’t already, subscribe and join our community in receiving weekly AI insights, updates and interviews with industry experts straight to your feed.


DeepDive 

Your weekly immersion in AI 

Moltbot (formerly Clawdbot) has been in the news lately – and the headlines have been a debate-stirring mix of celebratory and doom-laiden. 

It’s an AI agent that does what you need it to do. It can manage your work schedule and personal life through WhatsApp; it can act automatically once workflows or triggers are set.

Moltbot has gone viral in tech and developer circles. But there’s a reason some experts (including security leaders) are a little concerned. 

What is Moltbot, exactly? 

It’s an open-source AI agent that you can interact with through everyday messaging apps – including WhatsApp, Telegram, Signal, Discord, and iMessage.

Instead of living only in a browser tab, it can sit closer to your workflows: calendars, messages, documents, and (if you let it) the operating system itself. The Verge notes it can route requests through the AI provider you choose (OpenAI, Anthropic, or Google) and handle practical tasks like filling forms, sending emails, and managing calendars.

It’s also had a very public growth spurt. Anthropic requested a rename due to trademark similarities, and creator Peter Steinberger said he was ‘forced’ to change the name from Clawdbot to Moltbot.

That rename triggered the kind of chaos you only get when software turns into a social phenomenon: harassment, scams, and copycats rushing in to capitalise on confusion.

So what should we learn from it? 

3 lessons from Moltbot: the AI coworker trade-off

  • Lesson 1: The UX breakthrough is where it lives, not just what it says.
    The jump from chatbot to coworker isn’t only about smarter models. It’s about presence. Moltbot lives in the channels where work already happens – the message thread, the calendar ping, the “can you just…” request that usually steals 20 minutes. That’s why the idea of ‘AI that actually does things’ is resonating with people: it reduces the distance between intent and action. For teams, this hints at a future where work is less about juggling apps and more about delegating outcomes – with humans staying responsible for judgement calls.
  • Lesson 2: Every new capability is also a new attack surface.
    Moltbot can be granted broad permissions – you can ask it to read and write files, run shell commands, and execute scripts, for example. This is the difference between a helpful assistant and a privileged operator. Security experts have been clear about the risk of prompt injection – where an attacker manipulates an AI via malicious text embedded in a message, file, or webpage. The Verge quotes security CEO Rachel Tobac, warning that if an autonomous agent has admin access and can be reached via direct messages, an attacker may attempt hijacking via a simple DM. The broader industry pattern is already familiar: powerful integrations arrive, then security catches up later. And it’s a pattern we need to change.
  • Lesson 3: ‘Open’ wins adoption – but governance must ship with it.
    VentureBeat argues that Moltbot’s virality collided with a wider ecosystem issue around Model Context Protocol (MCP), warning that optional authentication defaults can become “effectively no authentication” in real deployments. VentureBeat reports scans finding 1,862 exposed MCP servers with no authentication, and warns that “anything [the agent] can automate, attackers can weaponise.” The direction of travel is that agents compress workflows – and also compress time-to-incident when people deploy them quickly.

So… are agents like Moltbot good or bad?

It’s both. It depends on how we deploy. 

Moltbot is a glimpse of the optimistic path: open-source tooling that makes personal automation feel genuinely useful, and lets users choose providers and shape behaviours.

But it also demonstrates the risk path: when an agent can read your messages, touch your files, and act on your behalf, the cost of a mistake (or a successful manipulation) rises sharply.

The real AI coworker breakthrough will be the first ecosystem that makes it easy to delegate safely: think least-privilege access by default, clear permissioning, human approval gates for high-impact actions, and security guidance that’s as prominent as the install command.

We want to know what you think 

If an AI coworker could do 30% of your daily tasks, which 30% would you delegate – and which would you never hand over?

Open this newsletter on LinkedIn and tell us what you think. We’ll see you back here next week. 

Related
articles