Moltbot AI assistant is everywhere—but security worries aren't going away
Moltbot, an open-source AI assistant from Austrian developer Peter Steinberger, is blowing up online with over 30,000 GitHub stars since its rebrand from Clawdbot.
It helps you clear your inbox, manage calendars, and check flights right from apps like WhatsApp or Discord.
But as more people try it out, security concerns are making some users pause.
What makes Moltbot different?
Unlike most AI assistants that run on the cloud, Moltbot works directly on your own device and keeps all its memory in simple Markdown files you can edit.
If it doesn't know how to do something yet, it can actually write new skills for itself—no coding needed.
Plus, you can instantly switch between language models like Claude or ChatGPT.
Why are people worried about using it?
Because Moltbot gets deep access to your system so it can help with tasks across different apps, it's also at risk for prompt injection attacks through messaging platforms.
Right now, the safest way to use it means running it on a separate computer with a throwaway account—which kind of defeats the point for most people.
Users have been pretty vocal about wanting better safeguards before they fully trust it.