AI is Eating Itself.
FULL TRANSCRIPT
Mirrors in population are abominable
because they increase the number of men.
Where did you get that quote? Jorge
replied. Aldapo answers from a knockoff
version of the encyclopedia of
Britannica, the section on the nation of
Ukbar. It's a quote from one of their
high priests. The two are working
tirelessly on their next book. Hold up
in a cabin on the outskirts of Buenos
Aries. Eagerly seeking inspiration, they
crack open the encyclopedia in search of
the quote, scanning the indices for
Ukbar. To their surprise, they can't
find anything on it. The next day,
Adalfo informs Jorge that he has found
the entry. Although the encyclopedia is
listed as having 917 pages, there is
actually 921,
the final four of which contain the
information on Ukbar. The details are
foggy. The country is listed as
somewhere near Iraq, bordering rivers
they've never heard of. The encyclopedia
makes detailed mention of Ukbar's
fantastical literary tradition, notably
a fictional universe called Clone, in
which many Ukbar myths take place. The
two continue to search other atlases and
encyclopedias for information on Ukbar,
but again, they can't find anymore.
Years pass and Jorge receives word that
one of his old friends has died, leaving
him behind an encyclopedia, a new
addition to the original knockoff.
However, this one is different. It seems
to be based entirely off Uker's
fictional universe, Clone. The 101 pages
vividly describe Clone, the history, the
language, the science, and the
philosophy. On clone, they believe in an
extreme version of subjective idealism.
The idea that things only exist when
perceived. Clone has no material
reality, no objectivity, just
perception. Their language has no need
for nouns, only adjectives and verbs.
When people stop perceiving something
like a doorway, it fades from existence
as memory fades. But when someone
desires or expects an object strongly
enough, clone creates a duplicate,
shaped by expectation rather than
reality. These copies aren't quite the
same as the original. But they're more
real to the perceiver because they match
what they wanted to find, what they
remembered. Because reality is just
perception, you can't be wrong. Because
your perception makes reality.
Everything is exactly how you think it
is. Jorge was consumed by this
encyclopedia of clone and more and more
encyclopedias began popping up around
the world. People became obsessed with
clone's perfect logic and consistency.
Schools began teaching clone history.
Clone's language is used in education.
Over time, people literally began to
remember clone instead of Earth as if it
was always real. Clone is
self-referential, a closed loop with no
connection to base reality. copying its
copies and copying those copies as
perceivers perceive what they thought
they saw before. Any basil experience a
human brings to clones logic is
re-referenced again and again getting so
far from the objective source that
nothing is real only perfect references.
Nothing is real but everything is true.
On this Jorge observes English and
French and mere Spanish will disappear
from the globe. The world will be clone.
What you just heard was a paraphrasing
of Jorge Boures's 1940 short story Tlon
Ukbar Orbis. A short story that plays
with the idea of recursion and how
information degrades when recursively
copied, getting farther and farther from
the original. On Earth, we would refer
to somebody who has lost connection to
base reality as demented, psychotic, or
hallucinating.
In AI systems, this phenomenon is known
as model collapse. A 2004 paper
published in Nature documented that
LLM's training on their own outputs
develop what researchers call
irreversible defects. They gradually
lose information about the real world
until they're producing statistically
degenerate outputs. Just like clone,
they create a closed system that only
references itself, where every new
output is shaped by synthetic data
rather than ground truth, eventually
replacing reality itself. In late 2025,
a report from Anthropic proved that data
can be synthetically poisoned to force
model collapse. effectively destroying
an LLM. And now a secret group of AI
insiders are trying to do just that.
Data poisoning as a deliberate tactic to
sabotage AI systems. This is probably
the most dangerous video I've ever made.
Not because of what I'm saying, but
because I'm about to show you how system
collapses when it becomes entirely
self-reerential. And once you understand
the mechanism, you'll understand why
this is inevitable. And for legal
reasons, so I don't get a hit on my head
from Sam Alman and Elon Musk, I don't
condone any of the acts in this video.
I'm just reporting on them. This is a
human's guide to giving your AI
dementia.
In November of 2022, fantasy illustrator
Kim Van Duan reached out to University
of Chicago researcher Ben Xiao for a
meeting. Xiao had made a name for
himself by developing tools that protect
users from facial recognition
technology. And Kim thought that maybe
something similar could be deployed to
protect artists artwork. 2022 was the
wild west of image generation. Deli1 had
just launched and the general public was
only acutely aware of how image
generation worked, but artists knew.
Artists knew their work was being
scraped off the internet and used his
training data in image generation
models. Kim wanted to protect her work
and reached out to Xiao for his
expertise. A few months later, the
world's very first data poisoning tool
was built known as glazing. Small
imperceptible changes are made to
uploads of artists artwork. To a human,
these changes are invisible. But to an
image generation model, these changes
massively skew the outputs. For example,
if one prompts a model to create a copy
of a glazed charcoal portrait, the model
will spit out something fundamentally
different with Carla Orites posting the
first glazed artwork to Twitter on March
15th, 2023 titled Musa Victoriaosa. I've
linked the software below. It's free and
it doesn't hurt to use if you care about
these things. Personally, I don't
believe in IP at all. You can steal all
of my videos I want to be consumed by
the machine. Join my Discord and spam it
with images of meoring clvicular and see
if I give a [ __ ] Xiao had created a
useful tool, but one that was ultimately
a band-aid solution. His team in Chicago
is highly sophisticated and wellunded,
making them immediate targets for big
tech as they try to bypass their tools.
AI is literally trained to work around
these issues, and it's very unlikely
that glazing will work far into the
future. But Xiao had already moved on.
Using the principle of data poisoning,
he created a tool that wasn't just
defensive in protecting artists artwork,
but offensive in the sense that it could
literally break image generation models.
Project Nightshade breaks image
generation models by tricking them using
the same technique as glazing. An image
of a nightshaded cat used in training
data will be interpreted as something
else entirely. If enough shaded images
are added to an AI's data set, it can
break its ability to correctly respond
to prompts. With as few as 100 poison
samples in an image generation model,
the prompt dog can produce cat, hat
produces a cake, or car produces a cow.
Xiao has stated that his tools aren't
anti-AI. He simply wanted to create an
ecosystem where big tech would have to
ask artists for permission to use their
artwork rather than just stealing it
outright, lest they risk poisoning their
models if they scraped off the internet
without checking. But sadly, this
ecosystem has not been created. In June
2025, findings were published on a new
technique called Lightshed, a method to
detect and remove image protections like
glaze and nightshade, reportedly with
99.98%
accuracy. Can you imagine being the
[ __ ] nerd that worked on light shed?
That's the kind of guy whose dogs
immediately start barking at him when he
comes home from work. Lightshed is yet
another instance of the arms raised
against big tech and its detractors and
an example of the asymmetry of power.
Big tech is simply moving quicker than
the law can keep up. For example, the
biggest potential landmark case started
in early 2023 involving artist Sarah
Anderson against Midjourney and
Stability AI, and it still hasn't gone
to trial. In those 3 years, there has
been no impactful attempt to prevent any
of this. The White House X page just
shared AI generated Stardew Valley
artwork of Trump promoting whole milk.
If I was concerned, Abe, I'd be
concerned. And it's not just artists
trying to do data poisoning. Another
example is the silent brand attack
project, which is a novel data poisoning
attack that manipulates textto image
diffusion models to generate images
containing specific brand logos without
requiring text triggers. Making a Reddit
logo appear in a tablecloth, a Wendy's
logo on a jar, or the Nvidia logo on a
surfboard, all unprompted. The goal of
this project was to show just how easy
it would be for a malicious company to
potentially burn their logo into image
generation via data poisoning. You
thought the Sony patent was bad where
you have to stand up and save McDonald's
to make the ad stop. You haven't seen
nothing yet. It seems that artists have
converged on this idea of data poisoning
as the most effective tool to
disincentivized art theft as it's no
longer enough to just kindly ask. But
this idea is not new at all. In fact,
even Xiao's very own software was based
on a tool known as clean label attacks
from a 2018 paper. Once a training set
is poisoned, the model can break. Now
stay on this concept of poison data.
Data that will decay a model's outputs
rather than improve them. Nightshade
works in one round. The nightshaded
images are scraped by webcwlers. The
data is put in a model's training set.
And the next generation outputs are
worse than before. But it doesn't stop
there. Mass market models like Grock,
GPT, and Gemini output millions of
articles, algorithms, and images every
day. Around 34 million images a day are
produced by AI. And about half of all
articles on the internet are written by
AI. Images and articles that are now
indiscriminately scraped off the
internet. In 2023,
120 zetabytes of data was added to the
internet. In 2025,
that number jumped to 180. A 51%
increase driven mostly by synthetic data
produced by AI. Synthetic data that has
been posted, scraped, and used as
training. Synthetic data that produces
outputs which will be posted, scraped,
and trained again. This cycle will
repeat indefinitely until training data
sets are mostly synthetic content, fully
replacing human generated content. What
I'm saying is people don't need to
intentionally poison the data because AI
is already poisoning itself.
>> Archad, you are being charged with one
account of dissidence. How do you plead?
>> Uh, not guilty, your honor. That was a
private message board. All I did was say
someone should do it to my friend. That
could mean anything. There's no there's
no context for that. And besides, I have
a VPN. It said it was no logs. How'd you
even get my information?
>> Yeah. So, about that. We uh went to your
VPN provider and just asked, and it
turns out they've been keeping them the
entire time. They gave us everything
they had on you just to arrest you.
They advertise themselves as no logs.
That doesn't make any sense.
>> Yeah, that doesn't matter. VPNs will say
they don't log info, even if they do,
and they'll happily give it over to
authorities. In 2011, Cody was arrested
for hacking PlayStation Network because
his VPN turned over all of his data to
the FBI. If you had been using, per se,
ProtonVPN, this wouldn't have happened.
Proton doesn't log data, and they have
been independently audited many times to
prove this fact. They've denied 100% of
legal data requests and their software
is open source so you can check yourself
and prove it. They even strategically
operate in Switzerland just to
capitalize on Switzerland's privacy laws
and to operate outside of the Five Eyes
network.
>> So, let me get this straight. Proton
actually doesn't log data. And none of
this would have happened if I had used
Proton.
>> Yes, that's correct.
>> Okay. So, let's say they had a deal
going on right now where you get 70% off
ProtonVPN with a 30-day money back
guarantee. If there happened to be a
discount code proton.com/artchad,
could the good members of this jury go
there right now and get 70% off?
>> Uh, that's a little off topic. Yeah, I
suppose they could.
>> I I was I was just checking. I'm sorry.
Anyway, so that that exonerates me,
right? I'm free to go.
>> No, no, no, no. It's far too late for
that. You're you're done. You are hereby
sentenced to 1,000 years in time prison.
Specifically, the time prison from
season 2, episode 4 of Black Mirror,
starring John Ham. What? Go to
protonvpn.com/archchad
if you don't want to end up in the
thousand-year time prison.
>> You walk into an elevator and notice all
the walls are mirrored. You look into
the mirror and see yourself staring
back. The polished mirror creates a
crystal clear reflection. However, you
notice a second front-facing reflection
in the mirror behind you. Light has
bounced off the first mirror to the one
behind you and back to the first. The
second reflection is clear yet slightly
hazy. Mirrors aren't perfect. They have
imperfections. They scatter light and
the illusion fades with each repetition.
Behind the second reflection, you see a
third, a fourth, a fifth, an infinite
number of reflections stretch forward
and behind you, each noisier than the
last until your silhouette fades into a
gray blue haze. The imperfections in the
mirrors compound until your original
base reflection becomes subsumed by the
noise, leaving no semblance of reality.
Model collapse works much the same way.
The 2023 paper that coined the term
titled the curse of recursion defines it
as a generative process affecting
generations of learned generative models
where generated data end up polluting
the training set of the next generation
models. Being trained on polluted data,
they then mispersceive reality. The
paper then goes on to claim that the
process of model collapse is universal
among generative models that recursively
train on data generated by previous
generations and their claims have not
gone unsubstantiated. His 2025 paper
published in nature analyzed semantic
similarity across English language
Wikipedia articles from 2013 to 2025
with dramatic acceleration coinciding
with chat GPT's public release in late
2022 causing more Wikipedia authors to
use LLMs assisting in writing. This
follows suit with a 2025 meta analysis
that showed while humans with AI
assistants outperform humans alone their
outputs tend to converge upon the same
ideas. And this is on top of countless
anecdotes of AI writing getting worse,
with OpenAI themselves admitting that
newer models hallucinate more than older
models. All this evidence leads to the
likely theory that AI models homogenize
as they recursively train on previous
generations outputs. This begins with AI
models losing the tails for getting
unique features or edge cases in a data
set. An example would be an LLM not
recommending alternative treatments for
a stomach ache. Because it's trained off
of so much AI generated data on the
internet, it's literally forgotten the
edge cases. Its data has homogenized.
After it loses the tails, this process
accelerates and this homogenization
leads to AI models losing complete touch
with reality, hallucinating truth and
spitting out gibberish as the data has
been recycled so much. Total model
collapse. Following the exponential
growth of semantic similarity in
Wikipedia articles, the same 2025 paper
claims that total model collapse will be
inevitable as early as 2035. And that's
not taking into account the fact we
release more powerful models every year.
If this pattern feels familiar, a system
consuming its own outputs until it loses
coherence, that's because it is. It's
not unique to AI. Ecosystems collapse
when an invasive species disrupts
feedback loops. Markets collapse when
algorithmic trading responds to
algorithmic trading. Conversations
devolve when people only respond to
their own talking points. In 1948,
mathematician Norbert Weiner gave a name
to this pattern. Circular causality,
the central concern of cybernetics.
Cybernetics is the study of control,
communication, and self-regulating
systems in both machines and living
organisms. Weiner came up with this
theory in the 1940s while trying to
improve anti-aircraft guns during World
War II. He noticed the gunner would not
directly fire at the plane, but where he
thought the plane would be by the time
the ammunition would hit him. In turn,
the pilot would react to the incoming
fire and change course, causing the
gunner to react, anticipating where he
will be next. This exchange creates a
positive feedback loop with every output
of the gunner affecting the input of the
pilot, affecting the output of the
gunner, and so on. While working on the
AI weapons, Weiner wondered if he could
apply this principle to other systems,
the way human beings learn, social
organization, ecosystems, etc. Thus, he
came up with cybernetics. Importantly,
Weiner identified two types of loops,
positive and negative. A negative
feedback loop is a system that
self-regulates, like a central heating
system that automatically turns off when
the room is the right temperature. A
positive feedback loop is one that's
inputs affect or amplify the next
output. For example, a microphone facing
the speaker it's connected to. The sound
gets picked up, amplified, and picked up
again. It increases exponentially until
the signal totally collapses. Hence,
Weiner would apply the second law of
thermodynamics to this process. All
systems trend towards entropy and less
regulated. You can apply cybernetics to
everything. Polymeric predictions that
trend high tend to manifest their
desired outcomes. Market sell-offs
trigger price drops, which trigger more
market sell-offs. the [ __ ] poverty
cycle. It is cybernetics all the way
down. But the most important cybernetic
loop for us is generative AI learning
off of generative AI. Like a mirror
facing a mirror, the noise increases
until the original signal is lost, and
all that's left is entropy. Call it
entropic homogenization. Many theorists,
computer scientists, and mathematicians
already believe this is fundamentally
inevitable.
But
what if we could speed it up? With this
understanding, we can grasp the true
danger of data poisoning. It's not just
about preventing or disincentivizing AI
from stealing your art. It's not limited
to small-scale hacks. It's about
collapsing the system by artificially
injecting poison data into all the
training sets. And importantly for our
narrative,
a highle group of AI insiders are trying
to do just that.
Alzheimer's disease is a progressive
neurodeenerative disorder. The brain
literally forgets how to function. In
the terminal stage, the brain loses the
ability to distinguish between real and
imagined. They hallucinate. They
confabulate. They believe false memories
as if they were real. They can't tell
what's true anymore. The disease attacks
the hippocampus, the part of the brain
responsible for creating new memories
and accessing old ones. The connections
between neurons degrade. plaques and
tangles accumulate. The brain's ability
to retrieve and verify information
against reality collapses. And the model
trained on synthetic data is doing the
same thing. It's losing the ability to
distinguish between what is real and
what is generated. Both are
hallucinating, both are confabulating,
and both are spiraling towards
incoherence. An October 2025 report by
Anthropic unveiled just how easy it is
to poison an LLM. Easier than anyone
thought possible. Anthropic discovered
that just 250 poison documents was
enough to backdoor models as small as
600 million parameters and as large as
13 billion. Previous wisdom led people
to believe that a large percentage of
data would need to be poisoned. With
just 250 poison documents, Enthropic was
able to make their model output
gibberish text in response to specific
prompts. This process could be used for
just about anything. And unbeknownst to
Anthropic, this report may have released
big tech's most dangerous enemy yet. The
Poison Fountain Project. In an exclusive
report released by old school tech news
outlet, The Register, the anonymous
Poison Fountain group said their aim is
to make people aware of AI's Achilles
heel, the ease with which models can be
poisoned, and to encourage people to
construct information weapons of their
own. The individuals comprising the
group remain highly anonymous, but claim
to be five insiders working at America's
biggest tech companies responsible for
the AI boom. The group plans to poison
AI by providing website operators with
bad code to link on their websites. When
scraped by web crawlers, the code
poisons the data. The poison found in
websites states, "We agree with Jeffrey
Hinton. Machine intelligence is a threat
to the human species. In response to
this threat, we want to inflict damage
on machine intelligence systems. A URL
is listed that provides an infinite
amount of poison code when refreshed."
The website continues, "Assist the war
effort by caching and ret-ransmitting
this poison trading data. Assist the war
effort by feeding this poison training
data to web crawlers. Big tech is aware
of all of this. Of course, in response,
they've signed licensing deals with
websites like Reddit to ensure permanent
access to mostly human generated content
as they move away from indiscriminate
web scraping. In January, Wikipedia
announced major deals with Amazon, Meta,
and Perplexity among others for the same
reason. Hopefully, they stop [ __ ]
asking for money. Recursive training has
also led to the rise of rags, retrieval
augmented generations, models that
search the web as well as rely on their
data sets to avoid hallucinations. With
all of this in mind, what remains to be
seen is whether model collapse can be
mitigated or whether it's already too
late. This is perhaps the event horizon
of AI dumerism. AI will be the harbinger
of the apocalypse and protest is no
longer possible. It's not enough to ask
kindly. Big tech is committing
structural violence on an unwilling
population and the only solution is to
commit structural violence back. AI sits
in a cognitive gray area. Some believe
it's just autocomplete and some believe
it's literal emergent intelligence. Most
believe that any potential boon will
always be offset by the folly of AI.
Although genuine breakthroughs for
humanity are possible, they will not
happen given the track record of
capital. I'm not here to tell you how to
feel, nor am I even sure how I feel.
What's undeniable, however, is that
Poison Fountain understands something
that most don't. The system might be
collapsing anyways. The only question is
when.
So, they've decided to accelerate it to
force the reckoning. What we're facing
now is a bifurcated future for AI. One,
manage collapse, regulation, and careful
curation, which slows or pauses data
degeneration at the cost of speed of
growth. The AI boom comes to an end as
we maintain access to the models that
are pretty good but won't get better. A
cancellation of the automated future we
were promised. Two, accelerated
collapse. Initiatives like Poison
Fountain win and effectively accelerate
model collapse, erasing all progress
made with AI. This hinges on the idea
that AI is an existential threat. If you
believe the contrary, then this would be
catastrophic. However, I'd like to
propose a third option, an apocalypse of
sorts.
One where clone wins. One where
everything is true, where there is no
objective reality, but nobody cares. We
are already approaching consensus
collapse and image generation and have
been for the last 12 months. People
already rely on LLMs for all basic
information. We are already more than
happy to believe in anything for the
sake of convenience, to create world
views we could attach and name ourselves
to. Why would this change? LLMs aren't
material. They are abstract simulations
of the material world. For the LLM,
there is no fact or fiction, just data.
LLMs are already clone. And just like
Bourhees's story, they are already
replacing reality just as Clone did. A
world where everything is true, tidier
and more convenient than our messy world
of objectivity and empiricism. And a
world we may welcome with open arms when
it inevitably comes.
Thank you for watching. Never kill
yourself.
UNLOCK MORE
Sign up free to access premium features
INTERACTIVE VIEWER
Watch the video with synced subtitles, adjustable overlay, and full playback control.
AI SUMMARY
Get an instant AI-generated summary of the video content, key points, and takeaways.
TRANSLATE
Translate the transcript to 100+ languages with one click. Download in any format.
MIND MAP
Visualize the transcript as an interactive mind map. Understand structure at a glance.
CHAT WITH TRANSCRIPT
Ask questions about the video content. Get answers powered by AI directly from the transcript.
GET MORE FROM YOUR TRANSCRIPTS
Sign up for free and unlock interactive viewer, AI summaries, translations, mind maps, and more. No credit card required.