AI, Gods, and Selves: Incredibly Effective Illusions
FULL TRANSCRIPT
The rise of AI worries people. It raises
moral concerns. We fear a loss of
agency, a kind of existential
humiliation.
On the one hand, we treat AI as if it
were an intelligent agent who understands
and guides us. When talking to an LLM
like ChatGPT or DeepSeek, it feels like
we're talking to someone. But on the
other hand, we know this someone isn't
like us. That's quite uncanny.
AI challenges our sense of self and our
existential orientation.
In this video, I'll trace the history of
our sense of self and of our sense of AI
to point out that AI, the self and the
gods, for that matter, are incredibly
effective illusions which don't really
exist.
I will also argue at the end of the
video against dystopian visions of a
singularity
and question some current approaches to
AI ethics.
The video is inspired by a recent
conversation with Iyad Rahwan, director
of the Center for Humans and Machines at
the Max Planck Institute for Human
Development in Berlin. And it's also
inspired by the many conversations with
Milo Jeanich.
Why do we speak of artificial
intelligence, although we know very well
it's not really intelligent in the sense
of a thinking thing? It even tells us
so. DeepSeek says: "Do I have
consciousness, emotions, or
self-awareness? No." And it adds that
its "thinking" is just a figure of
speech: "When I use words like
understanding, thinking, or knowing, we
are using them as metaphors or
analogies."
Interesting, by the way, that it says "I"
here first and then "we". And in fact,
its "I" is actually nothing but our "we".
But I will get to this issue later.
Of course, we know that it is not anyone
and doesn't think. But we still feel
understood by it. Why? When prompted
(and I edited out here all the flattering
language meant to seduce us and keep us
on the platform), DeepSeek says what we
already know: for millennia, the only
thing capable of producing meaningful
language was a conscious human mind. So
we are deeply conditioned to equate
fluent language use with an inner
conscious life.
When further prompted, DeepSeek admits
that ascribing intelligence to it, in
the sense of treating it as if it were
really a mind, is an incredibly
effective illusion.
Let's take note of this. We stick to the
metaphor of artificial intelligence
because we are conditioned by the
effective illusion that when something
communicates with us, it must be an
intelligent agent.
Apparently, in order to be socially and
psychologically effective, intelligence
must be ascribed to a being that is
intelligent.
And this intelligent being can talk.
It's someone who communicates, someone
whom we listen to and who makes sense.
But who precisely this envisioned
thinking and speaking thing is, is
historically, in the long run,
contingent. To trace the history of this
envisioned thing, this video looks into
some contemporary theory: Elena Esposito,
Niklas Luhmann, Julian Jaynes, and Yuval
Harari. A while ago on this channel, we
posted a conversation with Elena Esposito
about her book Artificial Communication.
The MIT Press website, where you can
download the book for free, says: "In
Artificial Communication, Elena Esposito
argues that the analogy between
algorithms and human intelligence is
misleading. Esposito proposes that we
think of smart machines not in terms of
artificial intelligence but in terms of
artificial communication." But her
proposition did not catch on. No one
really says artificial communication. Her
positive suggestion to understand AI in
social rather than in mental terms is
informed by her academic teacher,
Niklas Luhmann. I consider Luhmann's
systems theory the most advanced theory
framework available today. It's the only
theory I know (perhaps next to Nietzsche,
but he doesn't really have a theory)
that is decisively beyond what Luhmann
calls old European thought, that is,
beyond the Enlightenment and its humanism.
Here are some basics of the theory that
I hope you allow me to introduce before
presenting my main point. At the heart
of it is a distinction between three
kinds of autopoietic, self-reproducing
systems, three kinds of organisms, so to
say. There are biological systems: the
self-reproductive evolution of life,
like our bodies or plants. They consist
of all kinds of physiological processes.
There are mental systems: the
self-reproductive evolution of
consciousness, what we call minds. They
consist of all kinds of psychological
processes. And there are social systems:
the self-reproductive evolution of
communication, like the economic system,
the political system, or the media
system. They consist of all kinds of
social processes, like payments,
elections, or YouTube videos.
All these systems are environments for
one another. To record this video, to be
able to talk to you on social media, my
body must be alive and my mind needs to
think. And for you, the same is the case
if you're watching it. Communication,
including social media, operates within
the environment of bodies and minds.
The three systems co-evolve.
They're often structurally coupled. As
Luhmann says, society influences how
minds evolve, and minds influence how
society evolves. And this in turn also
influences biological evolution in
various ways. That's, by the way, why we
speak of the Anthropocene.
In other words, systems are contingent
upon one another.
Curiously and importantly, systems are
operationally closed. They're not in
direct mechanical contact with one
another. They live on their own, so to
say. This video is social media
communication. I cannot continue
recording it and you cannot continue
watching it simply by being bodily alive
and mentally thinking. We need to
communicate to keep communication going.
I need to post the video and you need to
watch it. Communication can only be
continued by more communication, not by
biological or mental operations on their
own.
Systems operate only with their own
operations, not with the operations of
other systems. However, systems irritate
one another. What I say irritates what
you think. It triggers thoughts which in
turn can irritate your body. When I
speak, you may be scratching your head
or start yawning or maybe even smile
slightly.
Minds in society are coupled by the
common medium of language. Mental
systems are intelligent; they can think
in language. Social systems can
communicate in the same language. And
yet, as mentioned, they remain
operationally closed to one another.
Thoughts never become part of
communication as thoughts; they must be
communicated. And communication never
becomes part of thought as
communication; it must be thought.
Eva Knodt, translator of Luhmann's book
Social Systems, calls this systemic rift
between minds and communication
"hermeneutic despair", and she
illustrates it with a scene from a
theater play. She writes: "In Danton's
Death, Büchner dramatizes the primal
scene of hermeneutic despair. The
protagonist makes a silent gesture
toward his lover's forehead and says to
her: 'There, there, what lies behind
this? To understand one another, we
would have to break open each other's
skulls and pull the thoughts out of the
fibers of our brains.'"
Luhmann writes: "Categorically,
human beings cannot communicate. Not