Demis Hassabis: Why AGI is Bigger than the Industrial Revolution & Where Are The Bottlenecks in AI
FULL TRANSCRIPT
I would say about 90% of the breakthroughs that underpin the modern
AI industry were done by one of our groups, either Google Brain or
Google Research or DeepMind. The returns are still very substantial,
although they're a bit less than they were, obviously, at the start
of all of this scaling. We have amazing guests on the show, but very
few, honestly, would be considered in the same realm as Newton,
Turing, Einstein. Our guest today is one of the greatest minds on
the planet, and I consider myself incredibly lucky to have had the
chance to sit down with him. Those labs that have the capability to
invent new algorithmic ideas are going to start having a bigger
advantage over the next few years, as all the juice is being wrung
out of the last set of ideas.
>> This is a truly special one, and one that I'll remember for a
very long time.
>> I think we could probably get 30 to 40% more efficiency out of
our national grids.
>> Enjoy the episode, and I so appreciate the time we had with a
very special human being.
>> I sometimes quantify the coming of AGI as 10 times the Industrial
Revolution at 10 times the speed. Thrilled to welcome Demis Hassabis
of DeepMind. Ready to go.
>> Demis, I'm so excited to be doing this. Thank you so much for
joining me today.
>> Great to be here.
>> Now, there are many places that we could have started, but I was
watching the documentary that you did, which was fantastic, and I
actually wanted to start on AGI. Mhm.
>> Definitions vary a lot. You've been very thoughtful about what it
means to you.
>> And so I wanted to start: can you explain to me how you think
about it today, so we get that as a kind of common ground?
>> Yeah. Well, we've always been very consistent in how we define
AGI: basically, a system that exhibits all the cognitive
capabilities the human mind has. And that's important because the
brain is the only existence proof we have, that we know of, maybe
in the universe, that general intelligence is possible. So that,
for me, is the bar for what AGI should be.
>> It's the worst question: how close are we? Everyone says
different things, and it's very difficult when you have, you know,
very prominent figures saying it could be as early as 2026, 2027.
>> Yeah, I mean, look, I've got a probability distribution around
the timings, but I would say there's a very good chance of it being
within the next 5 years. So that's not long at all.
>> Is that closer than you thought? Has
that changed over time?
>> Not really. Actually, it's funny: my co-founder Shane Legg, who's
chief scientist here, used to write blog posts predicting when AGI
would happen, back when we started DeepMind in 2010. And bear in
mind, in 2010 when we started, almost nobody was working on AI, and
everyone thought it was a dead end. But those posts are still there
on the internet for people to check. We used to do this
extrapolation of compute and algorithmic progress, and basically we
predicted it would take around 20 years from when we started out,
and I think we're pretty much on track.
>> What are the biggest bottlenecks when you look today? You know,
in the documentary you said you just never have enough compute.
What are the biggest bottlenecks when you look at where we are
today?
>> I think compute is the big one. Not just for the obvious reason
of scaling up your ideas and your systems, as the scaling laws, as
they're called, have you keep on building bigger and bigger
architectures with more and more parameters, and as you do that you
get more intelligent systems. The other thing you need a lot of
compute for is doing experiments. The computer, the cloud, is our
workbench, basically. So if you have a new algorithmic idea and you
want to test it, you've got to test it at a reasonable scale,
otherwise it won't hold when you actually put it into the main
system. So you need quite a lot of compute if you have a lot of
researchers with lots of new ideas.
>> You mentioned the words scaling laws.
>> A lot of people suggest that we're hitting the limits of the
scaling laws, and we're starting to see a plateauing effect.
>> Yeah.
>> Do you think that's true?
>> No, I don't think so. I think it's a bit more nuanced than that.
Of course, when the leading companies all started building these
large language models, you were getting enormous jumps with each
generation of new system; you know, maybe they were almost doubling
in performance. At some point that had to slow down, so it's not
continuing to be exponential, but that doesn't mean there aren't
still great returns for scaling the existing systems up further.
We, and the other frontier labs, are getting a lot of great returns
on that kind of compute expansion. So I would say the returns are
still very substantial, although they're a bit less than they were,
obviously, at the start of all of this scaling.
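[Editor's aside: the pattern Hassabis describes, real but shrinking
returns from each scale-up, matches the empirical power-law form of
neural scaling laws reported in the literature (e.g. Kaplan et al.,
2020). A minimal sketch, with made-up illustrative constants rather
than any lab's actual fit:]

```python
# Sketch of a power-law scaling curve: loss falls as
#   L(N) = (N_c / N) ** alpha
# for parameter count N. N_C and ALPHA below are hypothetical
# illustration values, not fitted constants from any real model.

N_C = 8.8e13   # hypothetical scale constant
ALPHA = 0.076  # hypothetical power-law exponent

def loss(n_params: float) -> float:
    """Predicted loss for a model with n_params parameters."""
    return (N_C / n_params) ** ALPHA

# Each 10x scale-up still lowers loss, but by a shrinking amount:
sizes = [1e9, 1e10, 1e11, 1e12]
losses = [loss(n) for n in sizes]
gains = [a - b for a, b in zip(losses, losses[1:])]
print(losses)  # strictly decreasing
print(gains)   # each gain positive, but smaller than the last
```

The point of the sketch is the shape, not the numbers: on a power
law, scaling never stops helping, it just helps less per doubling,
which is consistent with "substantial but smaller returns" rather
than a hard plateau.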
>> Where are we behind where you thought we
would be?
>> I think actually in most areas we are ahead of where I thought we
would be. If you think about things like the video models, or even
now with our newest systems like Genie, which are interactive world
models, I think that's kind of incredible if you step back and
think about it. If you'd shown me that 5 or 10 years ago, I would
have been pretty amazed. So I think in most domains we are ahead of
where the field thought. There are still some big things missing,
though, like continual learning. These systems don't learn after
you finish training them, after you put them out into the world.
You know, they're not very good at learning further things. And I
think some critical capabilities are missing.
>> I'm sorry to ask blunt and basic questions. Why do we not have
continual learning today?
>> Well, people haven't quite figured it out yet, and all the
leading labs are working on this: how to integrate new learning
into the existing systems that, you know, you spent months
training. Of course, the brain does this very elegantly, right?
Probably through things like sleep and reinforcement learning. You
get what's called consolidation in the brain, where your memories
during the day are replayed, and then some of that information is
elegantly incorporated into your existing knowledge base. And I've
thought for a while that maybe we need something like that to
incorporate new information along with the existing information
base.
>> You mentioned video models, you mentioned kind of media and
image. It seems that DeepMind has progressed very quickly and
caught up with, slash overtaken, other providers.
>> I think I tweeted, and I think you liked it, but I basically
tweeted what I used and how it's changed over time, and DeepMind
now is my number one for research for new shows.
>> It wasn't that way before. What has led to the acceleration and
progression of DeepMind in a way that maybe wasn't there 2 to 3
years ago?
>> Yeah. Well, we made some organizational
changes. So, I think we've always had
the deepest and broadest research bench