Why Clawdbot Users Are Waking Up to Banned Accounts (And How to Protect Yourself)
FULL TRANSCRIPT
If you installed Claudebot this week and
connected it to your Claude
subscription, there's a real chance your
account is already flagged for
termination with no warning, no second
chances, just an email telling you your
access has been revoked. It's not just
the bands. Recently, I saw a Sheran scan
showing 954 clawboard instances sitting
completely exposed on the internet. This
is API keys, conversation histories,
authentication tokens, all of it is
publicly accessible. The CEO of
Orchestra AI demonstrated how to extract
a private key from a compromised
Lordboard instance using prompt
injection. From what it looks like, it
took him 5 minutes. This isn't
fear-mongering. This is what's actually
happening as we speak. And in this
video, I'm going to break down exactly
what's going on with Clawboard, why
Enthropic is banning users, and what the
real security risks are further in depth
in real time, and what you should do
instead if you still want to run a local
AI assistant without losing your
accounts or exposing your credentials.
Hey, I'm Johnny Nel. I spent years
building businesses the old hard way,
managing teams, scaling operations, and
doing everything manually. Today
everything I build from automations and
no code infrastructures to fully AI SAS
products has AI helping in one way or
another.
>> Let me quickly show you this. In South
Africa we call this a bri.
>> I build with AI tools that in my opinion
anyone can learn to use. My mission is
simple to help creators, entrepreneurs
and business owners get clarity with AI.
Let's build smarter together and enjoy
this video.
For those who missed it, Claudebot is a
self-hosted AI assistant by Peter
Steinberger, the developer behind many
successful repos. It launched and just
completely blew up the indie
hacker/developer/nontechnical
founder industry over the last week or
so. I mean, this is serious. We're
talking about 6,000 plus GitHub repo
stars in a matter of days. The idea is
simple. Instead of chatting with AI in a
browser, Clawbot gives you an AI that
actually does things. It has full system
access, shell commands, read and write,
browser controls, scheduling tasks, and
it connects to your existing messaging
platforms, whether it's WhatsApp,
Telegram, Slack, Discord, Signal,
iMessage, the list goes on. You message
it as if you would communicate with a
team member, and it executes. That's why
people got excited. It felt like what
Siri should have been, a real assistant
running locally, under your control. But
where it gets complicated. Within days
of bot going viral, users started
getting an email from Anthropic saying,
"An internal investigation of suspicious
signals associated with your account
indicates a violation of our usage
policy, and as a result, we have revoked
your access to Claude.
Guys, this is no warning, no explanation
of what you just did wrong, just a ban.
And if you were running Claudebot with
your Claude Pro or Max subscription,
there's a good chance this is why.
Here's what's happening. So, Anthropic
has a terms of service section 3.7 that
prohibits nonhuman access to Claude
outside of the official API. So when you
connect Claudebot using just normal
authentication as some would call it
OOTH you're essentially piping cla
through a third party tool and
anthropics systems are now flagging that
traffic as a violation. This is no jokes
and Anthropic engineer confirmed this on
X. They said that they've triggered
safeguards against spoofing the clawed
code harness. So, in plain English, if
you're not using the official interface
or paying for API access, they are
treating it as abuse and not just
clawbot. Users of open code got hit with
the same thing and it's quite alarming
and it's quite saddening thinking about
it. Anthropic even blocked XAI star from
using Claude through cursor. I mean,
they're locking down hard. And on top of
that, Enthropic sent a trademark
complaint over the name Claudebot, which
was too close to Claude. So, the project
had to rebrand and it's now called
Maltbot. Same software, same name. But
when they tried to grab the new X
handle, Crypto Scammers snatched it
within seconds. So, now you've got a
viral open- source project dealing with
forced rebrands, hijacked social
handles, and a community of nearly
20,000 Discord members trying to figure
out what's safe to use. And that's the
anthropic side of the problem. But it
gets worse.
While everyone is focused on the
anthropic bands, something else was
happening in the background. And
honestly, this is the part that concerns
me more. A Shden scan recently showed
954 exposed Trollbot instances. I mean,
guys, this is a few days back. I'm just
scared to go and have a look now. It
must be through the roof, guys. And
these are not instances at risk. These
are completely exposed, publicly
accessible, many with zero
authentication.
So when you look at this screenshot
coming on the graph, you'll see the
breakdown by country. United States 169,
China 93, Germany 89, Russia 78, Finland
69. These are real servers running right
now broadcasting their gateway ports,
LAN hosts, and CLI paths to the open
internet. Security firm Slow Mist
flagged this as a gateway exposure,
putting hundreds of API keys and private
chat locks at risk, but hundreds was the
conservative estimate. We're looking at
nearly a thousand. Like I said, this was
a few days ago. And here's what an
attacker can grab from an unprotected
instance. your API keys, authentication
tokens, your bot credentials, full
conversation histories across every
platform, the ability to send messages
as you, in some cases, command execution
on your machine. Security researcher
Jameson O'Reilly documented what he
found. In one case, a user set up their
signal messenger on a public accessible
Clawbot server. The pairing credentials
were sitting in globally readable
vaults. I mean anyone could grab them.
In another case, an AI software agency
was running Clawbot with root
privileges. No privilege separation.
Unauthenticated users could execute
arbitrary commands on the host system.
Then there's this. The CEO of Orchestra
AI demonstrated how to extract a private
key from a compromised instance. He just
sent a prompt injection through email,
asked the bot to check the message, and
receive the private key back. It
literally took him 5 minutes apparently.
It's insane. This wave of VPS deployment
keeps growing with users ignoring docs
and exposing ports without
authentication. We're not looking at
isolated incidences anymore, guys. We're
looking at a major credential breach
waiting to happen. It's really
happening. And when it increases and
escalates, the scale couldn't be more
significant. So why nearly a thousand
instances sitting wide open? And it's
not because clock is poorly built. Quite
frankly, it is really awesome, but the
docs are actually solid. People don't
take the time to read it. There's a full
security section. There's a command
called Claudebot security audit that
flags common misconfigurations. The
problem is most people aren't reading
these docs, guys. Read the docs. I've
also attached the link down in the
description. They're watching a YouTube
video. They're copying commands. They're
looking at all the benefits, but not
paying attention to what is truly
important. In some cases, they're just
spinning up a $5 VPS and calling it
done. And I get it. This excitement is
real. You want the AI assistant running
now. This is something we've all been
waiting for for so long.
And it's here. So, not after you've
spent 3 hours hardening your setup. But
here's the tension. Claudebot is
powerful precisely because it has full
system access. Shell commands, file
access, browser control, messaging
integrations. That's what makes it
useful, but that's also what makes it
dangerous if you don't lock it down. The
default configuration isn't the problem.
It's what happens when you put it behind
reverse proxy without authentication or
bind the gateway to a public IP or even
skip the token setup because the bot
seemed to work without it. The official
docs even say this directly. Running an
AI agent with shell access on your
machine is spicy. Their word, spicy.
There's no perfect secure setup. They're
being honest about the trade-off. But
when a project goes viral this fast,
most users aren't reading the fine
print. They're just trying to get up and
running. And that gap between what's
possible and what's safe is where things
break. Sometimes all it takes is a clear
framework to make AI feel human again.
Simple, structured, and useful. Inside
of our AI school community, you'll find
the full curriculum, workshops, prompt,
vault, and private access to our
upcoming beta apps, all built to help
founders integrate AI without the
overwhelm. Experience how clarity and
design turns into confidence in
execution with AI. Right? So, if you
still want to run a local AI assistant,
here's how I would approach it. Firstly,
stop using Anthropics over
authentication for this. If you're
piping a bot using your Max or Pro
subscription, you're running at a high
risk of getting your account banned.
It's not worth it. Has made it clear
they are treating this like a terms of
service violation and they're not giving
warnings. They are just pulling access.
So, what are your alternatives?
Looking at LM providers, you've got
options. JBT40 or JGBT5 works if you
have access Gemini 2.5 Pro or Flash got
solid performance less restrictive you
can do GLM if you're doing a lot of
coding or tooling minimax if your use
case is more writing heavy and if your
hardware can handle it
run a model locally Alm Studio whatever
fits your setup that way you're not
dependent on any provider who can revoke
access overnight and secondly ditch the
virtual private server unless you know
exactly how to set it up with the
correct proxies. That's a completely
different world. And guys, I'm sorry to
say this, but I know it's cheap, but a
$5
Hzner box sitting on a public IP is not
where you want to run an AI agent with
shell access. If you're going to do
this, run it on hardware that you can
realistically control. And I know, I
know the first video I made, I made it
clear that you don't need the Mac Mini.
And you really don't if you're quite
sophisticated and techsavvy and know the
precautions to take and also the risks
and how to evaluate them with a virtual
private server specifically with this
kind of setup, by all means
go for it. That's not my area and I
wouldn't necessarily recommend that. And
if you want to do this keeping longevity
into consideration and this is not just
something you want to test and play
around with, but actually something you
use infrastructurally,
you've got two realistic options. So
option A is set up a separate user
account on your MacBook. This isolates
the bot from your main environment, your
files, your credentials, your browsing
sessions, all in a different user space.
It's not perfect. There's still some
risk, but it's a meaningful layer of
separation for you to start testing it
out and see if it's worth investing into
option B, which is a dedicated Mac Mini.
This is what the serious clawboard users
are doing, including the product leader
of Google AI Studio, who bought one
specifically for this. It runs 24/7.
It's algorithm trapped. It's air from
your primary machine, and if something
goes wrong, the blast radius is
contained. Now, there was a lot of hype
around buying these things. So, I'm not
even sure if you really But the truth
still stands. It's all contained. The
whole point of local first AI is that
your data stays with you, but that only
works if the machine running isn't
broadcasting credentials to shenan. And
if you're still running clawbot right
now at minimum, run the security order,
paste it in there, and see what it comes
back with. Check what's exposed. Check
your gateway. Check your authentication.
Check your port bindings. And if
anything looks off, lock it down before
someone else finds it first. Let's zoom
out for a second because what's
happening here is bigger than Claudebot.
This wasn't some sketchy project trying
to exploit a loophole. Claudebot is a
legitimate open-source tool built by a
respected developer using official APIs
with a thriving community behind it. The
code is open, the docs are thorough, and
the project was doing exactly what
Anthropic should want. It was driving
Claude's usage. It was showcasing real
world applications. It was bringing
developers into the ecosystem.
Anthropics response a trademark
complained over the name authentication
lockout account bans with no warnings.
DHH the creator of Ruby and Rails called
it customer hostile.
And he might not be wrong. Let me know
down in the comments what you think. But
when you build on a platform, you're
placing a bet. You're betting that the
company behind it will support the
ecosystem. not undercuted in the moment
it gets traction. Those are things to
consider. And right now developers who
were building on Claude are looking at
OpenAI's codeex CLI which is Apache 2.0
license and asking themselves a simple
question. Do I want to build on a
platform that revokes my access without
a warning or do I want to build on
something I can actually control? That's
the real tension here and it's not going
away. So here's where I'll leave it.
We've got thousands exposed instances on
Shden right now. We've got Anthropic
actively banning users who connect
through third party tools. We've got a
forced rebrand, hijacked social handles,
and community trying to figure out
what's still safe to use. And underneath
all of it, a bigger question. Do you
trust building your AI workflows on
platforms that can pull the plug without
a warning? or does it push you towards
open models, local interference, and
infrastructure you can actually earn? I
don't know the answer for you. That's
going to depend on your setup, your risk
tolerance, and what you're trying to
build. But I'm curious where you're
landing all on this. Drop a comment and
let me know what you're thinking. And if
this was helpful and you feel called to
it, subscribe. I'm putting out more
content like this, breaking it down of
what's actually happening in AI without
the hype and without the fluff. And if
you want to dive further down into this,
having discussions with me, jumping on
live calls, just completely geeking out
all of the stuff, I also attached our
school community down below. I left it
super cheap for anyone serious about AI
and growing with it. Regarding all the
sources, the links, I'll also drop that
in the description for you to have a
look. And if you've got questions about
your setup or not sure if you're
exposed, leave a comment or send me a DM
in the community. You made it to the
end. That already says a lot about how
you build. Before you go, drop a like,
share what you've been working on, and
keep building. Don't forget your
invitation to join the school community
is waiting for when you're ready to go
deeper down in the description below.
See you inside.
UNLOCK MORE
Sign up free to access premium features
INTERACTIVE VIEWER
Watch the video with synced subtitles, adjustable overlay, and full playback control.
AI SUMMARY
Get an instant AI-generated summary of the video content, key points, and takeaways.
TRANSLATE
Translate the transcript to 100+ languages with one click. Download in any format.
MIND MAP
Visualize the transcript as an interactive mind map. Understand structure at a glance.
CHAT WITH TRANSCRIPT
Ask questions about the video content. Get answers powered by AI directly from the transcript.
GET MORE FROM YOUR TRANSCRIPTS
Sign up for free and unlock interactive viewer, AI summaries, translations, mind maps, and more. No credit card required.