TRANSCRIPT (English)

Why Clawdbot Users Are Waking Up to Banned Accounts (And How to Protect Yourself)

16m 2s · 2,566 words · 402 segments · English

FULL TRANSCRIPT

0:00

If you installed Clawdbot this week and connected it to your Claude subscription, there's a real chance your account is already flagged for termination. No warning, no second chances, just an email telling you your access has been revoked. And it's not just the bans. Recently, I saw a Shodan scan showing 954 Clawdbot instances sitting completely exposed on the internet: API keys, conversation histories, authentication tokens, all of it publicly accessible. The CEO of Orchestra AI demonstrated how to extract a private key from a compromised Clawdbot instance using prompt injection. From what it looks like, it took him five minutes. This isn't fear-mongering; this is what's actually happening as we speak. In this video, I'm going to break down exactly what's going on with Clawdbot, why Anthropic is banning users, what the real security risks are, and what you should do instead if you still want to run a local AI assistant without losing your account or exposing your credentials.

1:07

Hey, I'm Johnny Nel. I spent years building businesses the old, hard way: managing teams, scaling operations, doing everything manually. Today, everything I build, from automations and no-code infrastructure to full AI SaaS products, has AI helping in one way or another.

1:25

>> Let me quickly show you this. In South Africa, we call this a braai.

1:29

>> I build with AI tools that, in my opinion, anyone can learn to use. My mission is simple: to help creators, entrepreneurs, and business owners get clarity with AI. Let's build smarter together. Enjoy this video.

1:43

For those who missed it, Clawdbot is a self-hosted AI assistant by Peter Steinberger, the developer behind many successful repos. It launched and completely blew up the indie hacker, developer, and non-technical founder scene over the last week or so. I mean, this is serious: we're talking about 6,000-plus GitHub stars in a matter of days. The idea is simple. Instead of chatting with AI in a browser, Clawdbot gives you an AI that actually does things. It has full system access: shell commands, file read and write, browser control, scheduled tasks. And it connects to your existing messaging platforms, whether that's WhatsApp, Telegram, Slack, Discord, Signal, iMessage, the list goes on. You message it as you would a team member, and it executes. That's why people got excited. It felt like what Siri should have been: a real assistant running locally, under your control.

2:44

But here's where it gets complicated. Within days of Clawdbot going viral, users started getting an email from Anthropic saying, "An internal investigation of suspicious signals associated with your account indicates a violation of our usage policy, and as a result, we have revoked your access to Claude."

3:08

Guys, this is no warning, no explanation of what you did wrong, just a ban. And if you were running Clawdbot with your Claude Pro or Max subscription, there's a good chance this is why.

3:20

Here's what's happening. Anthropic's terms of service has a section, 3.7, that prohibits non-human access to Claude outside of the official API. So when you connect Clawdbot using normal consumer authentication, what some would call OAuth, you're essentially piping Claude through a third-party tool, and Anthropic's systems are now flagging that traffic as a violation. This is no joke; an Anthropic engineer confirmed it on X, saying they've triggered safeguards against spoofing the Claude Code harness. In plain English: if you're not using the official interface or paying for API access, they're treating it as abuse. And it's not just Clawdbot. Users of opencode got hit with the same thing, which is quite alarming, and quite saddening when you think about it. Anthropic even blocked xAI staff from using Claude through Cursor. They're locking down hard. On top of that, Anthropic sent a trademark complaint over the name Clawdbot, which was too close to Claude. So the project had to rebrand, and it's now called Moltbot. Same software, new name. But when they tried to grab the new X handle, crypto scammers snatched it within seconds. So now you've got a viral open-source project dealing with a forced rebrand, hijacked social handles, and a community of nearly 20,000 Discord members trying to figure out what's safe to use. And that's the Anthropic side of the problem. But it gets worse.

4:51

While everyone was focused on the Anthropic bans, something else was happening in the background, and honestly, this is the part that concerns me more. A Shodan scan recently showed 954 exposed Clawdbot instances. Guys, that was a few days back; I'm scared to go and look now. It must be through the roof. And these are not instances merely at risk. These are completely exposed, publicly accessible, many with zero authentication.

5:20

When you look at the screenshot coming up on screen, you'll see the breakdown by country: United States 169, China 93, Germany 89, Russia 78, Finland 69. These are real servers running right now, broadcasting their gateway ports, LAN hosts, and CLI paths to the open internet. Security firm SlowMist flagged this as a gateway exposure putting hundreds of API keys and private chat logs at risk, but "hundreds" was the conservative estimate; we're looking at nearly a thousand, and like I said, that was a few days ago. Here's what an attacker can grab from an unprotected instance: your API keys, authentication tokens, your bot credentials, full conversation histories across every platform, the ability to send messages as you, and in some cases, command execution on your machine. Security researcher Jamieson O'Reilly documented what he found. In one case, a user had set up their Signal messenger on a publicly accessible Clawdbot server, and the pairing credentials were sitting in globally readable files. Anyone could grab them.
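That particular failure mode comes down to file permissions, and it's cheap to check. As a quick illustration (my own sketch, not part of Clawdbot), a few lines of Python can flag credential files that every local user on the machine can read:

```python
import os
import stat

def world_readable(path):
    """Return True if any user on the machine can read this file."""
    mode = os.stat(path).st_mode
    return bool(mode & stat.S_IROTH)

def audit_secrets(paths):
    """Yield paths whose permissions expose them to every local user."""
    for p in paths:
        if os.path.exists(p) and world_readable(p):
            yield p
```

Anything flagged should be tightened with `os.chmod(path, 0o600)` (or `chmod 600` in the shell) so only the owning user can read it.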

6:34

In another case, an AI software agency was running Clawdbot with root privileges. No privilege separation: unauthenticated users could execute arbitrary commands on the host system.
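Running an agent as root is the kind of mistake a tiny startup guard catches. A hedged sketch (illustrative only, not Clawdbot code; Unix-specific since it relies on effective user IDs):

```python
import os

def assert_not_root(euid=None):
    """Refuse to start if we're running with root privileges.

    euid defaults to the current process's effective UID; it can be
    passed explicitly for testing.
    """
    euid = os.geteuid() if euid is None else euid
    if euid == 0:
        raise RuntimeError(
            "Refusing to run as root: create a dedicated low-privilege user instead."
        )
```

Calling this once at startup turns "agency quietly runs an internet-facing agent as root" into a loud, immediate error.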

6:47

Then there's this. The CEO of Orchestra AI demonstrated how to extract a private key from a compromised instance. He just sent a prompt injection through email, asked the bot to check the message, and received the private key back. It apparently took him five minutes.
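There's no complete defense against prompt injection yet, but egress filtering at least blunts this specific attack: scan anything the agent is about to send out for secret-shaped strings before it leaves the machine. A rough sketch (my illustration; the patterns are assumptions, not an exhaustive list):

```python
import re

# Patterns for common secret formats; extend this for your own providers.
SECRET_PATTERNS = [
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),  # PEM private keys
    re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),             # API-key-style tokens
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),                # AWS access key IDs
]

def looks_like_secret(text):
    """Return True if outgoing text appears to contain a credential."""
    return any(p.search(text) for p in SECRET_PATTERNS)

def safe_to_send(text):
    """Block outbound messages that would leak secret-shaped strings."""
    if looks_like_secret(text):
        raise ValueError("Blocked: outgoing message appears to contain a secret.")
    return text
```

A filter like this would not have stopped the injection itself, but it would have stopped the private key from walking out the door in a reply.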

7:04

It's insane. And this wave of VPS deployments keeps growing, with users ignoring the docs and exposing ports without authentication. We're not looking at isolated incidents anymore, guys. We're looking at a major credential breach waiting to happen, and as it escalates, the scale could not be more significant.

7:23

So why are nearly a thousand instances sitting wide open? It's not because Clawdbot is poorly built. Quite frankly, it's really well done, and the docs are actually solid; people just don't take the time to read them. There's a full security section. There's a command, clawdbot security audit, that flags common misconfigurations. The problem is that most people aren't reading those docs. Read the docs, guys; I've attached the link down in the description. Instead, they're watching a YouTube video, copying commands, looking at all the benefits, and not paying attention to what's truly important. In some cases, they're just spinning up a $5 VPS and calling it done. And I get it. The excitement is real. You want the AI assistant running now. This is something we've all been waiting for for so long, and it's here, now, not after you've spent three hours hardening your setup.

8:18

But here's the tension. Clawdbot is powerful precisely because it has full system access: shell commands, file access, browser control, messaging integrations. That's what makes it useful, and it's also what makes it dangerous if you don't lock it down. The default configuration isn't the problem. The problem is what happens when you put it behind a reverse proxy without authentication, or bind the gateway to a public IP, or skip the token setup because the bot seemed to work without it. The official docs even say this directly: running an AI agent with shell access on your machine is "spicy." Their word, spicy.
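The difference between "local assistant" and "Shodan entry" is often a single bind address. A minimal sketch of the distinction, using plain Python sockets (generic illustration; this is not Clawdbot's actual gateway code):

```python
import socket

def open_gateway(bind_addr="127.0.0.1", port=0):
    """Bind a listening socket.

    127.0.0.1 keeps the service loopback-only; 0.0.0.0 announces it on
    every network interface the host is attached to.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((bind_addr, port))
    s.listen()
    return s

def is_publicly_bound(sock):
    """True if this socket would accept connections from other machines."""
    host, _port = sock.getsockname()
    return host in ("0.0.0.0", "::")
```

If a service must be reachable remotely, the usual pattern is to keep it loopback-bound and front it with an authenticated reverse proxy or an SSH tunnel, rather than binding it to a public interface directly.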

8:58

There's no perfectly secure setup, and they're being honest about the trade-off. But when a project goes viral this fast, most users aren't reading the fine print; they're just trying to get up and running. And that gap between what's possible and what's safe is where things break.

9:17

Sometimes all it takes is a clear framework to make AI feel human again: simple, structured, and useful. Inside our AI school community, you'll find the full curriculum, workshops, a prompt vault, and private access to our upcoming beta apps, all built to help founders integrate AI without the overwhelm. Experience how clarity in design turns into confidence in execution with AI.

9:43

So, if you still want to run a local AI assistant, here's how I would approach it. First, stop using Anthropic's OAuth authentication for this. If you're piping a bot through your Max or Pro subscription, you're running a high risk of getting your account banned, and it's not worth it. Anthropic has made it clear they're treating this as a terms-of-service violation, and they're not giving warnings; they're just pulling access. So, what are your alternatives?

10:12

Looking at LLM providers, you've got options. GPT-4o or GPT-5 works if you have access. Gemini 2.5 Pro or Flash offer solid performance and are less restrictive. There's GLM if you're doing a lot of coding or tooling, and MiniMax if your use case is more writing-heavy. And if your hardware can handle it, run a model locally, with LM Studio or whatever fits your setup. That way you're not dependent on any provider who can revoke access overnight.

10:45

Second, ditch the virtual private server unless you know exactly how to set it up behind the correct proxies. That's a completely different world. And guys, I'm sorry to say this, I know it's cheap, but a $5 Hetzner box sitting on a public IP is not where you want to run an AI agent with shell access. If you're going to do this, run it on hardware you can realistically control. And I know, in the first video I made, I made it clear that you don't need a Mac Mini. And you really don't, if you're sophisticated and tech-savvy, know the precautions to take, and can evaluate the risks of a virtual private server with this kind of setup; by all means, go for it. But that's not my area, and it's not what I'd recommend. If you're doing this with longevity in mind, not just something to test and play around with but something you'll actually use as infrastructure, you've got two realistic options.

11:48

Option A is to set up a separate user account on your MacBook. This isolates the bot from your main environment: your files, your credentials, your browsing sessions all live in a different user space. It's not perfect, there's still some risk, but it's a meaningful layer of separation while you test things out and decide whether it's worth investing in option B, which is a dedicated Mac Mini. That's what the serious Clawdbot users are doing, including the product lead of Google AI Studio, who bought one specifically for this. It runs 24/7, it's effectively air-gapped from your primary machine, and if something goes wrong, the blast radius is contained. Now, there was a lot of hype around buying these things, so I'm not even sure you really need one, but the point still stands: it's all contained. The whole point of local-first AI is that your data stays with you, and that only works if the machine running it isn't broadcasting credentials to Shodan.

12:44

And if you're still running Clawdbot right now, at minimum, run the security audit and see what it comes back with. Check what's exposed. Check your gateway. Check your authentication. Check your port bindings. And if anything looks off, lock it down before someone else finds it first.
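If you want to sanity-check a setup by hand, the same rule of thumb can be applied to a list of listeners. A toy audit over (service, host, port, has_auth) tuples (my own illustration; the field names are assumptions, not the real audit command's output format):

```python
def audit_bindings(bindings):
    """Flag services reachable from other machines with no authentication.

    bindings: iterable of (service, host, port, has_auth) tuples.
    Returns a list of human-readable findings.
    """
    findings = []
    for service, host, port, has_auth in bindings:
        public = host not in ("127.0.0.1", "::1", "localhost")
        if public and not has_auth:
            findings.append(f"{service} on {host}:{port} is public with no auth")
    return findings
```

Anything it flags is exactly the combination the Shodan results above are made of: a public bind plus a skipped token.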

13:01

Let's zoom out for a second, because what's happening here is bigger than Clawdbot. This wasn't some sketchy project trying to exploit a loophole. Clawdbot is a legitimate open-source tool built by a respected developer using official APIs, with a thriving community behind it. The code is open, the docs are thorough, and the project was doing exactly what Anthropic should want: driving Claude usage, showcasing real-world applications, bringing developers into the ecosystem. Anthropic's response? A trademark complaint over the name, an authentication lockout, and account bans with no warnings. DHH, the creator of Ruby on Rails, called it customer-hostile.

13:44

And he might not be wrong. Let me know down in the comments what you think. But when you build on a platform, you're placing a bet. You're betting that the company behind it will support the ecosystem, not undercut it the moment it gets traction. Those are things to consider. And right now, developers who were building on Claude are looking at OpenAI's Codex CLI, which is Apache 2.0 licensed, and asking themselves a simple question: do I want to build on a platform that revokes my access without warning, or do I want to build on something I can actually control? That's the real tension here, and it's not going away. So, here's where I'll leave it.

14:23

We've got nearly a thousand exposed instances on Shodan right now. We've got Anthropic actively banning users who connect through third-party tools. We've got a forced rebrand, hijacked social handles, and a community trying to figure out what's still safe to use. And underneath all of it, a bigger question: do you trust building your AI workflows on platforms that can pull the plug without warning? Or does this push you toward open models, local inference, and infrastructure you can actually own? I don't know the answer for you. That's going to depend on your setup, your risk tolerance, and what you're trying to build. But I'm curious where you're landing on this. Drop a comment and let me know what you're thinking. And if this was helpful and you feel called to it, subscribe; I'm putting out more content like this, breaking down what's actually happening in AI without the hype and without the fluff. If you want to dive deeper into this, have discussions with me, jump on live calls, and completely geek out over all of this stuff, I've also attached our school community down below. I've kept it super cheap for anyone serious about AI and growing with it. As for all the sources and links, I'll drop those in the description for you to have a look at. And if you've got questions about your setup, or you're not sure whether you're exposed, leave a comment or send me a DM in the community.

15:40

You made it to the end; that already says a lot about how you build. Before you go, drop a like, share what you've been working on, and keep building. And don't forget, your invitation to join the school community is waiting in the description below for when you're ready to go deeper. See you inside.
