TRANSCRIPT (English)

18,000 Users from College Student to Minors Caught in #lovable #vibecoding #vibehacking

6m 19s · 1,260 words · 199 segments · English

FULL TRANSCRIPT

0:00

18,000 students and their teachers just had their personal data exposed. Names, emails, grades, credit balances, all of it, wide open. Not because of some sophisticated hack, but because an AI wrote code with a logic error so basic that any junior developer would have caught it in their sleep. Welcome to vibe hacking. So, here's what happened.

0:19

Last week, The Register published findings from security researcher Taimur Khan. He'd been probing apps featured on Lovable, the vibe coding platform that went from $1 million ARR to $100 million ARR in eight months. He found one of their showcased edtech apps, an exam question platform with over 100,000 views on Lovable's own Discover page, and it was a disaster: 16 vulnerabilities, six of them critical, and over 18,000 user records, teachers and students from UC Berkeley, UC Davis, and K-12 schools with minors on the platform, completely exposed. Anyone could view all their user data, delete accounts, change credit balances, send bulk emails, and access grade submissions with no login required. And here's the kicker. The core bug was in the AI-vibe-coded Supabase backend, which handles auth, file storage, and database connections.

1:05

And I love Supabase, so I'm not criticizing them here, but the AI implemented authentication logic that was literally inverted. It blocked authenticated users and allowed access to unauthenticated users. According to The Register, the intent was to block non-admins, but the AI's implementation blocked all logged-in users instead, and this logic inversion was repeated across multiple critical functions. Khan called it out directly: a classic logic inversion that a human security reviewer would have caught in seconds. But the AI code generator, optimizing for code that "works," produced it and deployed it to production. And this isn't a one-off.
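The bug described above, an access check whose condition is flipped, is easy to sketch. This is a hypothetical reconstruction in TypeScript, not the app's actual code; the `User` type and function names are illustrative.

```typescript
// Hypothetical sketch of the reported logic inversion. The intended rule:
// only admins may use the endpoint. The generated guard inverted the check,
// letting unauthenticated visitors through and blocking logged-in users.

type User = { id: string; role: "admin" | "teacher" | "student" } | null;

// Buggy guard (the shape of the reported bug): the condition is inverted,
// so only visitors who are NOT logged in pass the check.
function canAccessAdminBuggy(user: User): boolean {
  return user === null;
}

// Corrected guard: deny by default, allow only authenticated admins.
function canAccessAdminFixed(user: User): boolean {
  return user !== null && user.role === "admin";
}
```

Deny-by-default with an explicit role check is what a reviewer expects to see; the inverted version can pass casual testing because anonymous requests still "work."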

1:37

Security firm Escape Tech scanned 1,645 Lovable-built apps from the platform's Discover page and found 170 with critical data exposure flaws, more than 10%. A separate Veracode study found 45% of AI-generated code contains security flaws. And CodeRabbit's December 2025 analysis of 470 real-world GitHub pull requests, published in their State of AI vs. Human Code Generation report, found AI-generated pull requests contain 1.7 times more issues overall, with cross-site scripting vulnerabilities appearing at 2.74 times the rate of human-written code and logic errors 75% more frequently. And so Khan coined a term for this, vibe hacking, and I think it's going to stick.

2:14

The idea is simple. If vibe coding means you describe what you want and AI builds it without you reading the code, then vibe hacking means you exploit AI-generated code knowing it was never properly reviewed. You're not looking for sophisticated zero-days. You're looking for the dumb stuff: broken access controls, exposed API keys, missing authentication, because you know the builder probably never checked.
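That first pass can be sketched in a few lines: request each endpoint with no credentials and flag anything that answers 200 instead of 401 or 403. A hypothetical TypeScript sketch; the endpoint paths and injected probe function are made up for illustration.

```typescript
// Hypothetical sketch of a vibe hacker's first pass: call each endpoint
// with NO credentials and flag any that return data anyway. The HTTP call
// is injected as a function so the logic stays testable; the paths and
// Probe signature are illustrative, not from the report.

type Probe = (path: string) => { status: number };

function findOpenEndpoints(paths: string[], probe: Probe): string[] {
  // A 200 without any auth header is a red flag: a protected resource
  // should answer 401 (not logged in) or 403 (logged in, not allowed).
  return paths.filter((path) => probe(path).status === 200);
}

// Example with a stub server that leaks its user list:
const stubServer: Probe = (path) =>
  path === "/api/users" ? { status: 200 } : { status: 401 };

const exposed = findOpenEndpoints(["/api/users", "/api/grades"], stubServer);
// exposed is ["/api/users"]
```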

2:32

Think about it. According to a Microsoft case study, 75% of Replit's enterprise users aren't even software engineers.

2:38

And their CEO says the AI agent has built over 2 million apps without users writing a single line of code. Y Combinator's managing partner Jared Friedman told TechCrunch that 25% of their Winter 2025 startups had codebases that were 95% AI-generated. That's a lot of code that nobody has actually read.

2:54

And the damage isn't just to individual apps. The open source ecosystem is getting wrecked too. Daniel Stenberg, the guy who maintains curl, you know, the command line tool that basically powers half of the internet, shut down his bug bounty program in January after AI-generated submissions overwhelmed his team. He said about 20% of reports were AI-generated, and the rate of valid vulnerabilities dropped from roughly 1 in 6 to 1 in 20. That's from his FOSDEM 2026 talk, covered by The New Stack.

3:16

Tailwind CSS tells an even scarier story. This is Tailwind CSS. I love Tailwind. I use it everywhere.

3:23

Creator Adam Wathan revealed in a January GitHub comment, on a thread about making the documentation more accessible to LLMs, that despite Tailwind being more popular than ever, 75 million monthly downloads, documentation traffic fell 40% from early 2023 and revenue dropped nearly 80%. He had to lay off three of his four engineers. People are using the tools, but nobody's reading the docs, filing real bugs, or contributing back. An academic paper literally titled "Vibe Coding Kills Open Source" argues that this creates a death spiral: less engagement, less maintenance, worse software for everyone.

3:54

Okay, so here's the thing. I could just leave you scared and tell you to subscribe, but that's not what we do on this channel. The answer to insecure AI-generated code isn't just "stop vibe coding." That genie is out of the bottle. There is no going back.

4:03

The answer is: use AI to audit AI. Have your AI act as a security auditor. Point it at your codebase, whether it's a Lovable project, something from Cursor, or whatever, and have it do a structured security review. This doesn't replace a human, but it can help cover a lot of ground. Have it check for the exact types of vulnerabilities that AI code is most likely to produce: broken access controls like the Lovable bug, exposed API keys and secrets, missing authentication on endpoints, insecure data handling, SQL injection and cross-site scripting vulnerabilities, and logic errors in permission workflows.
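One of those checks, exposed keys and secrets, can even be approximated without an LLM. A minimal TypeScript sketch, assuming a few illustrative regex patterns; real secret scanners (and a good AI reviewer) ship far larger rule sets.

```typescript
// Minimal sketch of one audit step: grep source text for patterns that
// look like hardcoded secrets. The patterns are illustrative examples,
// not an exhaustive or production-grade rule set.

const SECRET_PATTERNS: [string, RegExp][] = [
  ["AWS access key", /AKIA[0-9A-Z]{16}/],
  ["Generic API key assignment", /api[_-]?key\s*[:=]\s*['"][A-Za-z0-9_-]{16,}['"]/i],
  ["Hardcoded password", /password\s*[:=]\s*['"][^'"]{4,}['"]/i],
];

// Returns the names of every pattern that matches the given source text.
function scanForSecrets(source: string): string[] {
  return SECRET_PATTERNS
    .filter(([, pattern]) => pattern.test(source))
    .map(([name]) => name);
}
```

Running a check like this before every deploy catches the lowest-hanging fruit; the structured AI review covers the logic-level problems a regex can't see.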

4:35

Don't just have it flag problems. Have it explain them in plain English and suggest fixes. Because if you're vibe coding, you don't want to read a SAST report. You want someone to tell you what's wrong and how to fix it. Because here's the thing: the 80/20 problem in vibe coding isn't going away. AI gets you 80% of a working app in minutes. But that last 20%, security, edge cases, production readiness, that's where things break.

4:55

And right now, most people are shipping the 80% and hoping for the best. Now, to be clear, this strategy is a starting point. It's not a full security audit or a replacement for a person doing the actual review for you. It's going to catch the obvious stuff: the exposed keys, the broken auth, the logic inversions like the Lovable bug. For vibe coders shipping side projects and MVPs, it does close a massive gap. But if you're building something that handles real user data at scale, you need professional help. You can use this strategy to get from "completely unreviewed" to "caught the basics." And that alone would have prevented this bug that affected 18,000 users.

5:29

So here's the bottom line. There's a difference between building fast and building reckless. And right now, the vibe coding ecosystem is leaning reckless. I'm not anti-vibe-coding. Speed is a feature.

5:38

But shipping code nobody's reviewed to users who trust you with their data? That's not speed. That's negligence. If you're building anything that touches user data, you need a security review in your workflow. Period. Whether that's an agent, a manual audit, or something else, just don't ship what you haven't checked.

5:53

We are building agentic infrastructure at Zorus that handles exactly this kind of problem: AI agents that work alongside your development workflow to catch what humans miss and what AI introduces. Security review is just one piece. If that sounds interesting, check the links below and hit us up. And if you want to see me actually vibe hack a Lovable app to show you how easy it is, let me know in the comments. That might be the next video. Thanks everyone. I'll see you soon. Cheers.
