TRANSCRIPTEnglish

AI is Eating Itself.

24m 19s4,105 words660 segmentsEnglish

FULL TRANSCRIPT

0:03

Mirrors in population are abominable

0:06

because they increase the number of men.

0:09

Where did you get that quote? Jorge

0:11

replied. Aldapo answers from a knockoff

0:14

version of the encyclopedia of

0:15

Britannica, the section on the nation of

0:17

Ukbar. It's a quote from one of their

0:19

high priests. The two are working

0:21

tirelessly on their next book. Hold up

0:24

in a cabin on the outskirts of Buenos

0:25

Aries. Eagerly seeking inspiration, they

0:28

crack open the encyclopedia in search of

0:30

the quote, scanning the indices for

0:32

Ukbar. To their surprise, they can't

0:35

find anything on it. The next day,

0:37

Adalfo informs Jorge that he has found

0:39

the entry. Although the encyclopedia is

0:41

listed as having 917 pages, there is

0:44

actually 921,

0:46

the final four of which contain the

0:48

information on Ukbar. The details are

0:50

foggy. The country is listed as

0:52

somewhere near Iraq, bordering rivers

0:54

they've never heard of. The encyclopedia

0:56

makes detailed mention of Ukbar's

0:58

fantastical literary tradition, notably

1:01

a fictional universe called Clone, in

1:04

which many Ukbar myths take place. The

1:06

two continue to search other atlases and

1:08

encyclopedias for information on Ukbar,

1:11

but again, they can't find anymore.

1:14

Years pass and Jorge receives word that

1:16

one of his old friends has died, leaving

1:18

him behind an encyclopedia, a new

1:21

addition to the original knockoff.

1:22

However, this one is different. It seems

1:25

to be based entirely off Uker's

1:27

fictional universe, Clone. The 101 pages

1:31

vividly describe Clone, the history, the

1:33

language, the science, and the

1:35

philosophy. On clone, they believe in an

1:37

extreme version of subjective idealism.

1:40

The idea that things only exist when

1:42

perceived. Clone has no material

1:44

reality, no objectivity, just

1:46

perception. Their language has no need

1:48

for nouns, only adjectives and verbs.

1:51

When people stop perceiving something

1:53

like a doorway, it fades from existence

1:55

as memory fades. But when someone

1:57

desires or expects an object strongly

1:59

enough, clone creates a duplicate,

2:02

shaped by expectation rather than

2:04

reality. These copies aren't quite the

2:06

same as the original. But they're more

2:08

real to the perceiver because they match

2:10

what they wanted to find, what they

2:12

remembered. Because reality is just

2:14

perception, you can't be wrong. Because

2:17

your perception makes reality.

2:19

Everything is exactly how you think it

2:21

is. Jorge was consumed by this

2:23

encyclopedia of clone and more and more

2:25

encyclopedias began popping up around

2:27

the world. People became obsessed with

2:29

clone's perfect logic and consistency.

2:31

Schools began teaching clone history.

2:33

Clone's language is used in education.

2:35

Over time, people literally began to

2:38

remember clone instead of Earth as if it

2:41

was always real. Clone is

2:42

self-referential, a closed loop with no

2:45

connection to base reality. copying its

2:47

copies and copying those copies as

2:49

perceivers perceive what they thought

2:50

they saw before. Any basil experience a

2:53

human brings to clones logic is

2:54

re-referenced again and again getting so

2:57

far from the objective source that

2:58

nothing is real only perfect references.

3:01

Nothing is real but everything is true.

3:04

On this Jorge observes English and

3:06

French and mere Spanish will disappear

3:08

from the globe. The world will be clone.

3:15

What you just heard was a paraphrasing

3:16

of Jorge Boures's 1940 short story Tlon

3:20

Ukbar Orbis. A short story that plays

3:23

with the idea of recursion and how

3:25

information degrades when recursively

3:27

copied, getting farther and farther from

3:29

the original. On Earth, we would refer

3:32

to somebody who has lost connection to

3:33

base reality as demented, psychotic, or

3:36

hallucinating.

3:38

In AI systems, this phenomenon is known

3:42

as model collapse. A 2004 paper

3:44

published in Nature documented that

3:46

LLM's training on their own outputs

3:48

develop what researchers call

3:49

irreversible defects. They gradually

3:51

lose information about the real world

3:53

until they're producing statistically

3:54

degenerate outputs. Just like clone,

3:57

they create a closed system that only

3:58

references itself, where every new

4:00

output is shaped by synthetic data

4:02

rather than ground truth, eventually

4:04

replacing reality itself. In late 2025,

4:07

a report from Anthropic proved that data

4:09

can be synthetically poisoned to force

4:11

model collapse. effectively destroying

4:13

an LLM. And now a secret group of AI

4:16

insiders are trying to do just that.

4:19

Data poisoning as a deliberate tactic to

4:22

sabotage AI systems. This is probably

4:24

the most dangerous video I've ever made.

4:27

Not because of what I'm saying, but

4:29

because I'm about to show you how system

4:31

collapses when it becomes entirely

4:34

self-reerential. And once you understand

4:36

the mechanism, you'll understand why

4:38

this is inevitable. And for legal

4:40

reasons, so I don't get a hit on my head

4:42

from Sam Alman and Elon Musk, I don't

4:45

condone any of the acts in this video.

4:47

I'm just reporting on them. This is a

4:50

human's guide to giving your AI

4:52

dementia.

5:02

In November of 2022, fantasy illustrator

5:05

Kim Van Duan reached out to University

5:07

of Chicago researcher Ben Xiao for a

5:09

meeting. Xiao had made a name for

5:11

himself by developing tools that protect

5:12

users from facial recognition

5:14

technology. And Kim thought that maybe

5:16

something similar could be deployed to

5:18

protect artists artwork. 2022 was the

5:20

wild west of image generation. Deli1 had

5:23

just launched and the general public was

5:25

only acutely aware of how image

5:27

generation worked, but artists knew.

5:30

Artists knew their work was being

5:31

scraped off the internet and used his

5:33

training data in image generation

5:34

models. Kim wanted to protect her work

5:37

and reached out to Xiao for his

5:38

expertise. A few months later, the

5:40

world's very first data poisoning tool

5:43

was built known as glazing. Small

5:46

imperceptible changes are made to

5:48

uploads of artists artwork. To a human,

5:51

these changes are invisible. But to an

5:53

image generation model, these changes

5:55

massively skew the outputs. For example,

5:58

if one prompts a model to create a copy

6:00

of a glazed charcoal portrait, the model

6:03

will spit out something fundamentally

6:04

different with Carla Orites posting the

6:07

first glazed artwork to Twitter on March

6:09

15th, 2023 titled Musa Victoriaosa. I've

6:12

linked the software below. It's free and

6:15

it doesn't hurt to use if you care about

6:16

these things. Personally, I don't

6:18

believe in IP at all. You can steal all

6:21

of my videos I want to be consumed by

6:23

the machine. Join my Discord and spam it

6:25

with images of meoring clvicular and see

6:27

if I give a [ __ ] Xiao had created a

6:29

useful tool, but one that was ultimately

6:31

a band-aid solution. His team in Chicago

6:34

is highly sophisticated and wellunded,

6:36

making them immediate targets for big

6:38

tech as they try to bypass their tools.

6:40

AI is literally trained to work around

6:42

these issues, and it's very unlikely

6:44

that glazing will work far into the

6:46

future. But Xiao had already moved on.

6:48

Using the principle of data poisoning,

6:50

he created a tool that wasn't just

6:52

defensive in protecting artists artwork,

6:54

but offensive in the sense that it could

6:56

literally break image generation models.

6:59

Project Nightshade breaks image

7:00

generation models by tricking them using

7:02

the same technique as glazing. An image

7:04

of a nightshaded cat used in training

7:06

data will be interpreted as something

7:08

else entirely. If enough shaded images

7:10

are added to an AI's data set, it can

7:12

break its ability to correctly respond

7:14

to prompts. With as few as 100 poison

7:16

samples in an image generation model,

7:18

the prompt dog can produce cat, hat

7:21

produces a cake, or car produces a cow.

7:24

Xiao has stated that his tools aren't

7:26

anti-AI. He simply wanted to create an

7:28

ecosystem where big tech would have to

7:31

ask artists for permission to use their

7:32

artwork rather than just stealing it

7:34

outright, lest they risk poisoning their

7:36

models if they scraped off the internet

7:37

without checking. But sadly, this

7:41

ecosystem has not been created. In June

7:43

2025, findings were published on a new

7:45

technique called Lightshed, a method to

7:47

detect and remove image protections like

7:49

glaze and nightshade, reportedly with

7:52

99.98%

7:53

accuracy. Can you imagine being the

7:55

[ __ ] nerd that worked on light shed?

7:57

That's the kind of guy whose dogs

7:58

immediately start barking at him when he

8:00

comes home from work. Lightshed is yet

8:01

another instance of the arms raised

8:03

against big tech and its detractors and

8:05

an example of the asymmetry of power.

8:07

Big tech is simply moving quicker than

8:09

the law can keep up. For example, the

8:12

biggest potential landmark case started

8:14

in early 2023 involving artist Sarah

8:16

Anderson against Midjourney and

8:18

Stability AI, and it still hasn't gone

8:21

to trial. In those 3 years, there has

8:23

been no impactful attempt to prevent any

8:25

of this. The White House X page just

8:28

shared AI generated Stardew Valley

8:29

artwork of Trump promoting whole milk.

8:31

If I was concerned, Abe, I'd be

8:33

concerned. And it's not just artists

8:35

trying to do data poisoning. Another

8:36

example is the silent brand attack

8:38

project, which is a novel data poisoning

8:41

attack that manipulates textto image

8:43

diffusion models to generate images

8:45

containing specific brand logos without

8:47

requiring text triggers. Making a Reddit

8:49

logo appear in a tablecloth, a Wendy's

8:51

logo on a jar, or the Nvidia logo on a

8:54

surfboard, all unprompted. The goal of

8:56

this project was to show just how easy

8:58

it would be for a malicious company to

9:00

potentially burn their logo into image

9:02

generation via data poisoning. You

9:04

thought the Sony patent was bad where

9:05

you have to stand up and save McDonald's

9:07

to make the ad stop. You haven't seen

9:09

nothing yet. It seems that artists have

9:11

converged on this idea of data poisoning

9:13

as the most effective tool to

9:15

disincentivized art theft as it's no

9:17

longer enough to just kindly ask. But

9:19

this idea is not new at all. In fact,

9:22

even Xiao's very own software was based

9:24

on a tool known as clean label attacks

9:26

from a 2018 paper. Once a training set

9:28

is poisoned, the model can break. Now

9:31

stay on this concept of poison data.

9:34

Data that will decay a model's outputs

9:36

rather than improve them. Nightshade

9:38

works in one round. The nightshaded

9:40

images are scraped by webcwlers. The

9:42

data is put in a model's training set.

9:44

And the next generation outputs are

9:45

worse than before. But it doesn't stop

9:47

there. Mass market models like Grock,

9:50

GPT, and Gemini output millions of

9:53

articles, algorithms, and images every

9:55

day. Around 34 million images a day are

9:57

produced by AI. And about half of all

10:00

articles on the internet are written by

10:02

AI. Images and articles that are now

10:04

indiscriminately scraped off the

10:06

internet. In 2023,

10:08

120 zetabytes of data was added to the

10:10

internet. In 2025,

10:13

that number jumped to 180. A 51%

10:16

increase driven mostly by synthetic data

10:19

produced by AI. Synthetic data that has

10:21

been posted, scraped, and used as

10:22

training. Synthetic data that produces

10:24

outputs which will be posted, scraped,

10:26

and trained again. This cycle will

10:28

repeat indefinitely until training data

10:31

sets are mostly synthetic content, fully

10:34

replacing human generated content. What

10:36

I'm saying is people don't need to

10:38

intentionally poison the data because AI

10:41

is already poisoning itself.

10:47

>> Archad, you are being charged with one

10:50

account of dissidence. How do you plead?

10:53

>> Uh, not guilty, your honor. That was a

10:57

private message board. All I did was say

10:59

someone should do it to my friend. That

11:01

could mean anything. There's no there's

11:03

no context for that. And besides, I have

11:06

a VPN. It said it was no logs. How'd you

11:08

even get my information?

11:10

>> Yeah. So, about that. We uh went to your

11:13

VPN provider and just asked, and it

11:16

turns out they've been keeping them the

11:17

entire time. They gave us everything

11:19

they had on you just to arrest you.

11:22

They advertise themselves as no logs.

11:25

That doesn't make any sense.

11:27

>> Yeah, that doesn't matter. VPNs will say

11:29

they don't log info, even if they do,

11:32

and they'll happily give it over to

11:33

authorities. In 2011, Cody was arrested

11:36

for hacking PlayStation Network because

11:38

his VPN turned over all of his data to

11:40

the FBI. If you had been using, per se,

11:43

ProtonVPN, this wouldn't have happened.

11:45

Proton doesn't log data, and they have

11:47

been independently audited many times to

11:50

prove this fact. They've denied 100% of

11:52

legal data requests and their software

11:54

is open source so you can check yourself

11:56

and prove it. They even strategically

11:58

operate in Switzerland just to

11:59

capitalize on Switzerland's privacy laws

12:01

and to operate outside of the Five Eyes

12:04

network.

12:04

>> So, let me get this straight. Proton

12:06

actually doesn't log data. And none of

12:08

this would have happened if I had used

12:10

Proton.

12:11

>> Yes, that's correct.

12:12

>> Okay. So, let's say they had a deal

12:14

going on right now where you get 70% off

12:16

ProtonVPN with a 30-day money back

12:18

guarantee. If there happened to be a

12:21

discount code proton.com/artchad,

12:24

could the good members of this jury go

12:26

there right now and get 70% off?

12:28

>> Uh, that's a little off topic. Yeah, I

12:30

suppose they could.

12:31

>> I I was I was just checking. I'm sorry.

12:33

Anyway, so that that exonerates me,

12:35

right? I'm free to go.

12:36

>> No, no, no, no. It's far too late for

12:38

that. You're you're done. You are hereby

12:40

sentenced to 1,000 years in time prison.

12:44

Specifically, the time prison from

12:45

season 2, episode 4 of Black Mirror,

12:47

starring John Ham. What? Go to

12:49

protonvpn.com/archchad

12:51

if you don't want to end up in the

12:53

thousand-year time prison.

12:54

>> You walk into an elevator and notice all

12:57

the walls are mirrored. You look into

12:58

the mirror and see yourself staring

13:00

back. The polished mirror creates a

13:02

crystal clear reflection. However, you

13:04

notice a second front-facing reflection

13:06

in the mirror behind you. Light has

13:08

bounced off the first mirror to the one

13:09

behind you and back to the first. The

13:11

second reflection is clear yet slightly

13:14

hazy. Mirrors aren't perfect. They have

13:16

imperfections. They scatter light and

13:19

the illusion fades with each repetition.

13:21

Behind the second reflection, you see a

13:22

third, a fourth, a fifth, an infinite

13:24

number of reflections stretch forward

13:26

and behind you, each noisier than the

13:28

last until your silhouette fades into a

13:31

gray blue haze. The imperfections in the

13:33

mirrors compound until your original

13:36

base reflection becomes subsumed by the

13:37

noise, leaving no semblance of reality.

13:41

Model collapse works much the same way.

13:43

The 2023 paper that coined the term

13:46

titled the curse of recursion defines it

13:48

as a generative process affecting

13:51

generations of learned generative models

13:53

where generated data end up polluting

13:54

the training set of the next generation

13:56

models. Being trained on polluted data,

13:59

they then mispersceive reality. The

14:00

paper then goes on to claim that the

14:02

process of model collapse is universal

14:04

among generative models that recursively

14:06

train on data generated by previous

14:08

generations and their claims have not

14:10

gone unsubstantiated. His 2025 paper

14:13

published in nature analyzed semantic

14:15

similarity across English language

14:16

Wikipedia articles from 2013 to 2025

14:19

with dramatic acceleration coinciding

14:21

with chat GPT's public release in late

14:23

2022 causing more Wikipedia authors to

14:26

use LLMs assisting in writing. This

14:28

follows suit with a 2025 meta analysis

14:30

that showed while humans with AI

14:32

assistants outperform humans alone their

14:34

outputs tend to converge upon the same

14:36

ideas. And this is on top of countless

14:38

anecdotes of AI writing getting worse,

14:40

with OpenAI themselves admitting that

14:42

newer models hallucinate more than older

14:44

models. All this evidence leads to the

14:46

likely theory that AI models homogenize

14:48

as they recursively train on previous

14:50

generations outputs. This begins with AI

14:53

models losing the tails for getting

14:55

unique features or edge cases in a data

14:57

set. An example would be an LLM not

15:00

recommending alternative treatments for

15:01

a stomach ache. Because it's trained off

15:04

of so much AI generated data on the

15:06

internet, it's literally forgotten the

15:08

edge cases. Its data has homogenized.

15:11

After it loses the tails, this process

15:13

accelerates and this homogenization

15:16

leads to AI models losing complete touch

15:18

with reality, hallucinating truth and

15:20

spitting out gibberish as the data has

15:22

been recycled so much. Total model

15:26

collapse. Following the exponential

15:28

growth of semantic similarity in

15:30

Wikipedia articles, the same 2025 paper

15:32

claims that total model collapse will be

15:35

inevitable as early as 2035. And that's

15:38

not taking into account the fact we

15:40

release more powerful models every year.

15:42

If this pattern feels familiar, a system

15:45

consuming its own outputs until it loses

15:47

coherence, that's because it is. It's

15:49

not unique to AI. Ecosystems collapse

15:52

when an invasive species disrupts

15:53

feedback loops. Markets collapse when

15:56

algorithmic trading responds to

15:57

algorithmic trading. Conversations

15:59

devolve when people only respond to

16:01

their own talking points. In 1948,

16:04

mathematician Norbert Weiner gave a name

16:06

to this pattern. Circular causality,

16:10

the central concern of cybernetics.

16:13

Cybernetics is the study of control,

16:15

communication, and self-regulating

16:16

systems in both machines and living

16:18

organisms. Weiner came up with this

16:20

theory in the 1940s while trying to

16:22

improve anti-aircraft guns during World

16:23

War II. He noticed the gunner would not

16:25

directly fire at the plane, but where he

16:27

thought the plane would be by the time

16:29

the ammunition would hit him. In turn,

16:31

the pilot would react to the incoming

16:33

fire and change course, causing the

16:35

gunner to react, anticipating where he

16:37

will be next. This exchange creates a

16:39

positive feedback loop with every output

16:41

of the gunner affecting the input of the

16:42

pilot, affecting the output of the

16:44

gunner, and so on. While working on the

16:46

AI weapons, Weiner wondered if he could

16:48

apply this principle to other systems,

16:50

the way human beings learn, social

16:52

organization, ecosystems, etc. Thus, he

16:55

came up with cybernetics. Importantly,

16:58

Weiner identified two types of loops,

17:01

positive and negative. A negative

17:03

feedback loop is a system that

17:04

self-regulates, like a central heating

17:06

system that automatically turns off when

17:08

the room is the right temperature. A

17:09

positive feedback loop is one that's

17:11

inputs affect or amplify the next

17:13

output. For example, a microphone facing

17:16

the speaker it's connected to. The sound

17:18

gets picked up, amplified, and picked up

17:20

again. It increases exponentially until

17:23

the signal totally collapses. Hence,

17:25

Weiner would apply the second law of

17:27

thermodynamics to this process. All

17:29

systems trend towards entropy and less

17:32

regulated. You can apply cybernetics to

17:34

everything. Polymeric predictions that

17:36

trend high tend to manifest their

17:38

desired outcomes. Market sell-offs

17:40

trigger price drops, which trigger more

17:41

market sell-offs. the [ __ ] poverty

17:44

cycle. It is cybernetics all the way

17:46

down. But the most important cybernetic

17:49

loop for us is generative AI learning

17:52

off of generative AI. Like a mirror

17:54

facing a mirror, the noise increases

17:56

until the original signal is lost, and

17:58

all that's left is entropy. Call it

18:01

entropic homogenization. Many theorists,

18:04

computer scientists, and mathematicians

18:07

already believe this is fundamentally

18:09

inevitable.

18:11

But

18:12

what if we could speed it up? With this

18:14

understanding, we can grasp the true

18:16

danger of data poisoning. It's not just

18:18

about preventing or disincentivizing AI

18:21

from stealing your art. It's not limited

18:23

to small-scale hacks. It's about

18:25

collapsing the system by artificially

18:27

injecting poison data into all the

18:29

training sets. And importantly for our

18:31

narrative,

18:33

a highle group of AI insiders are trying

18:36

to do just that.

18:42

Alzheimer's disease is a progressive

18:44

neurodeenerative disorder. The brain

18:46

literally forgets how to function. In

18:48

the terminal stage, the brain loses the

18:50

ability to distinguish between real and

18:52

imagined. They hallucinate. They

18:54

confabulate. They believe false memories

18:57

as if they were real. They can't tell

18:59

what's true anymore. The disease attacks

19:00

the hippocampus, the part of the brain

19:03

responsible for creating new memories

19:04

and accessing old ones. The connections

19:06

between neurons degrade. plaques and

19:08

tangles accumulate. The brain's ability

19:10

to retrieve and verify information

19:11

against reality collapses. And the model

19:14

trained on synthetic data is doing the

19:16

same thing. It's losing the ability to

19:18

distinguish between what is real and

19:20

what is generated. Both are

19:21

hallucinating, both are confabulating,

19:24

and both are spiraling towards

19:25

incoherence. An October 2025 report by

19:28

Anthropic unveiled just how easy it is

19:30

to poison an LLM. Easier than anyone

19:32

thought possible. Anthropic discovered

19:34

that just 250 poison documents was

19:36

enough to backdoor models as small as

19:39

600 million parameters and as large as

19:41

13 billion. Previous wisdom led people

19:43

to believe that a large percentage of

19:45

data would need to be poisoned. With

19:47

just 250 poison documents, Enthropic was

19:50

able to make their model output

19:51

gibberish text in response to specific

19:53

prompts. This process could be used for

19:55

just about anything. And unbeknownst to

19:57

Anthropic, this report may have released

19:59

big tech's most dangerous enemy yet. The

20:03

Poison Fountain Project. In an exclusive

20:05

report released by old school tech news

20:07

outlet, The Register, the anonymous

20:09

Poison Fountain group said their aim is

20:12

to make people aware of AI's Achilles

20:14

heel, the ease with which models can be

20:16

poisoned, and to encourage people to

20:18

construct information weapons of their

20:19

own. The individuals comprising the

20:21

group remain highly anonymous, but claim

20:23

to be five insiders working at America's

20:26

biggest tech companies responsible for

20:27

the AI boom. The group plans to poison

20:29

AI by providing website operators with

20:31

bad code to link on their websites. When

20:34

scraped by web crawlers, the code

20:36

poisons the data. The poison found in

20:38

websites states, "We agree with Jeffrey

20:40

Hinton. Machine intelligence is a threat

20:43

to the human species. In response to

20:45

this threat, we want to inflict damage

20:47

on machine intelligence systems. A URL

20:49

is listed that provides an infinite

20:51

amount of poison code when refreshed."

20:53

The website continues, "Assist the war

20:55

effort by caching and ret-ransmitting

20:57

this poison trading data. Assist the war

20:59

effort by feeding this poison training

21:01

data to web crawlers. Big tech is aware

21:03

of all of this. Of course, in response,

21:05

they've signed licensing deals with

21:07

websites like Reddit to ensure permanent

21:09

access to mostly human generated content

21:11

as they move away from indiscriminate

21:13

web scraping. In January, Wikipedia

21:15

announced major deals with Amazon, Meta,

21:18

and Perplexity among others for the same

21:20

reason. Hopefully, they stop [ __ ]

21:21

asking for money. Recursive training has

21:23

also led to the rise of rags, retrieval

21:26

augmented generations, models that

21:28

search the web as well as rely on their

21:30

data sets to avoid hallucinations. With

21:32

all of this in mind, what remains to be

21:34

seen is whether model collapse can be

21:36

mitigated or whether it's already too

21:38

late. This is perhaps the event horizon

21:40

of AI dumerism. AI will be the harbinger

21:43

of the apocalypse and protest is no

21:46

longer possible. It's not enough to ask

21:48

kindly. Big tech is committing

21:50

structural violence on an unwilling

21:52

population and the only solution is to

21:55

commit structural violence back. AI sits

21:58

in a cognitive gray area. Some believe

22:01

it's just autocomplete and some believe

22:03

it's literal emergent intelligence. Most

22:06

believe that any potential boon will

22:07

always be offset by the folly of AI.

22:10

Although genuine breakthroughs for

22:11

humanity are possible, they will not

22:14

happen given the track record of

22:15

capital. I'm not here to tell you how to

22:17

feel, nor am I even sure how I feel.

22:20

What's undeniable, however, is that

22:22

Poison Fountain understands something

22:24

that most don't. The system might be

22:26

collapsing anyways. The only question is

22:29

when.

22:31

So, they've decided to accelerate it to

22:33

force the reckoning. What we're facing

22:35

now is a bifurcated future for AI. One,

22:39

manage collapse, regulation, and careful

22:41

curation, which slows or pauses data

22:43

degeneration at the cost of speed of

22:45

growth. The AI boom comes to an end as

22:48

we maintain access to the models that

22:50

are pretty good but won't get better. A

22:53

cancellation of the automated future we

22:54

were promised. Two, accelerated

22:57

collapse. Initiatives like Poison

23:00

Fountain win and effectively accelerate

23:01

model collapse, erasing all progress

23:04

made with AI. This hinges on the idea

23:06

that AI is an existential threat. If you

23:09

believe the contrary, then this would be

23:11

catastrophic. However, I'd like to

23:13

propose a third option, an apocalypse of

23:16

sorts.

23:18

One where clone wins. One where

23:20

everything is true, where there is no

23:22

objective reality, but nobody cares. We

23:25

are already approaching consensus

23:27

collapse and image generation and have

23:28

been for the last 12 months. People

23:30

already rely on LLMs for all basic

23:32

information. We are already more than

23:35

happy to believe in anything for the

23:37

sake of convenience, to create world

23:39

views we could attach and name ourselves

23:41

to. Why would this change? LLMs aren't

23:44

material. They are abstract simulations

23:47

of the material world. For the LLM,

23:50

there is no fact or fiction, just data.

23:53

LLMs are already clone. And just like

23:56

Bourhees's story, they are already

23:58

replacing reality just as Clone did. A

24:01

world where everything is true, tidier

24:03

and more convenient than our messy world

24:05

of objectivity and empiricism. And a

24:08

world we may welcome with open arms when

24:10

it inevitably comes.

24:13

Thank you for watching. Never kill

24:15

yourself.

UNLOCK MORE

Sign up free to access premium features

INTERACTIVE VIEWER

Watch the video with synced subtitles, adjustable overlay, and full playback control.

SIGN UP FREE TO UNLOCK

AI SUMMARY

Get an instant AI-generated summary of the video content, key points, and takeaways.

SIGN UP FREE TO UNLOCK

TRANSLATE

Translate the transcript to 100+ languages with one click. Download in any format.

SIGN UP FREE TO UNLOCK

MIND MAP

Visualize the transcript as an interactive mind map. Understand structure at a glance.

SIGN UP FREE TO UNLOCK

CHAT WITH TRANSCRIPT

Ask questions about the video content. Get answers powered by AI directly from the transcript.

SIGN UP FREE TO UNLOCK

GET MORE FROM YOUR TRANSCRIPTS

Sign up for free and unlock interactive viewer, AI summaries, translations, mind maps, and more. No credit card required.