TRANSCRIPT

Demis Hassabis: Why AGI is Bigger than the Industrial Revolution & Where Are The Bottlenecks in AI

32m 15s · 6,611 words · 953 segments · English

FULL TRANSCRIPT

0:00

>> I would say about 90% of the breakthroughs that underpin the modern AI industry were done either by Google Brain or Google Research or DeepMind, so one of our groups. The returns are kind of still very substantial, although they're a bit less than they were, obviously, at the start of all of this scaling.

0:13

>> We have amazing guests on the show, but very few, honestly, will be considered in the same realm as Newton, Turing, Einstein. Our guest today is one of the greatest minds on the planet, and I consider myself incredibly lucky to have had the chance to sit down with him.

0:27

>> Those labs that have the capability to invent new algorithmic ideas are going to start having a bigger advantage over the next few years, as the last set of ideas has all the juice wrung out of them.

0:37

>> This is a truly special one, and one that I'll remember for a very long time.

0:41

>> I think we could probably get 30 to 40% more efficiency out of our national grids.

0:45

>> Enjoy the episode, and I so appreciate the time we had with a very special human being.

0:49

>> I sometimes quantify the coming of AGI as 10 times the Industrial Revolution at 10 times the speed.

0:53

>> Thrilled to welcome Demis Hassabis of DeepMind. Ready to go.

1:11

>> Demis, I'm so excited to be doing this. Thank you so much for joining me today.

1:15

>> Great to be here.

1:15

>> Now, there are many places we could have started, but I was actually watching the documentary that you did, which was fantastic, and I wanted to start on AGI.

1:22

>> Mhm.

1:24

>> Definitions vary a lot, and you've been very thoughtful about what it means to you. So I wanted to start there: can you explain how you think about it today, so we have that as a kind of ground center?

1:35

>> Yeah. Well, we've always been very consistent in how we define AGI: basically, a system that exhibits all the cognitive capabilities the human mind has. And that's important because the brain is the only existence proof we have, that we know of, maybe in the universe, that general intelligence is possible. So that, for me, is the bar for what AGI should be.

1:56

>> It's the worst question: how close are we? Everyone says different things, and it's very difficult when you have very prominent figures saying it could be as early as, you know, 2026, 2027.

2:09

>> Yeah, I mean, look, I've got a probability distribution around the timings, but I would say there's a very good chance of it being within the next five years. So that's not long at all.

2:19

>> Is that closer than you thought? Has that changed over time?

2:22

>> Not really. Actually, it's funny: my co-founder Shane Legg, who's chief scientist here, used to write blog posts predicting when AGI would happen, back when we started DeepMind in 2010. And bear in mind that in 2010, when we started, almost nobody was working in AI; everyone thought AI was a dead end. But those posts are still there on the internet for people to check. We used to do this extrapolation of compute and algorithmic progress, and basically we predicted it would take around 20 years from when we started out, and I think we're pretty much on track.

2:58

>> What are the biggest bottlenecks when you look at where we are today? You know, in the documentary you said you just never have enough compute. What are the biggest bottlenecks when you look at where we are today?

3:08

>> I think compute is the big one. Not just for the obvious reason of scaling up your ideas and your systems, the scaling laws as they're called: keeping on building bigger and bigger architectures with more and more parameters, and as you do that you get more intelligent systems. The other thing you need a lot of compute for is doing experiments. The computers, the cloud, are basically our workbench. So if you have a new algorithmic idea and you want to test it, you've got to test it at a reasonable scale; otherwise it won't hold when you actually put it into the main system. So you need quite a lot of compute if you have a lot of researchers with lots of new ideas.

3:48

>> You mentioned the phrase scaling laws. A lot of people suggest that we're hitting the limits of the scaling laws and we're starting to see that plateauing effect.

3:55

>> Yeah.

3:55

>> Do you think that's true?

3:57

>> No, I don't think so. I think it's a bit more nuanced than that. Of course, when the leading companies all started building these large language models, you were getting enormous jumps with each generation of new system, maybe almost doubling in performance. At some point that had to slow down, so it doesn't keep being exponential. But that doesn't mean there aren't still great returns for scaling the existing systems up further. We and the other frontier labs are getting a lot of great returns on that kind of compute expansion. So I would say the returns are still very substantial, although they're a bit less than they were, obviously, at the start of all of this scaling.
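[Editor's note] The diminishing-but-substantial returns described here are commonly modeled as a power law in parameters and training tokens. The sketch below uses the functional form and constants from the published Chinchilla fit (Hoffmann et al., 2022) purely as an illustration; it is not DeepMind's internal model, and the specific numbers matter less than the shape of the curve.

```python
def scaling_loss(n_params: float, n_tokens: float,
                 e: float = 1.69, a: float = 406.4, b: float = 410.7,
                 alpha: float = 0.34, beta: float = 0.28) -> float:
    """Predicted loss L(N, D) = E + A/N^alpha + B/D^beta.

    Loss keeps falling as model size N and token count D grow, but each
    doubling buys less than the last: "substantial but smaller" returns.
    Constants are the published Chinchilla estimates, used illustratively.
    """
    return e + a / n_params**alpha + b / n_tokens**beta

# Doubling both model size and data three times: every step improves loss,
# but the improvement shrinks with each doubling.
l1 = scaling_loss(1e9, 2e10)
l2 = scaling_loss(2e9, 4e10)
l3 = scaling_loss(4e9, 8e10)
assert l1 > l2 > l3                 # returns are still positive...
assert (l1 - l2) > (l2 - l3)        # ...but diminishing
```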

4:40

>> Where are we behind where you thought we would be?

4:44

>> I think actually in most areas we are ahead of where I thought we would be. If you think about things like the video models, or even now our newest systems like Genie, which are interactive world models, I think that's kind of incredible if you step back and think about it. If you'd shown me that five or ten years ago, I would have been pretty amazed. So I think in most domains we are ahead of where the field thought we'd be. There are still some big things missing, though, like continual learning. These systems don't learn after you finish training them and put them out into the world; they're not very good at learning further things. And I think some critical capabilities...

5:24

>> Sorry to ask blunt and basic questions: why do we not have continual learning today?

5:30

>> Well, people haven't quite figured it out yet, and all the leading labs are working on this: how to integrate new learning into the existing systems that you've spent months training. Of course, the brain does this very elegantly, probably through things like sleep and reinforcement learning. You get what's called consolidation in the brain, where your memories from the day are replayed and some of that information is elegantly incorporated into your existing knowledge base. And I've thought for a while that maybe we need something like that to incorporate new information alongside the existing information base.

6:09

>> You mentioned video models, you mentioned media and image. It seems that DeepMind has progressed very quickly and caught up with, slash overtaken, other providers. I think I tweeted, and I think you liked it, what I used and how it's changed over time, and DeepMind is now my number one for research for new shows. It wasn't that way before. What has led to the acceleration and progression of DeepMind in a way that maybe it wasn't there two to three years ago?

6:40

>> Yeah. Well, we made some organizational changes. So I think we've always had the deepest and broadest research bench...
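[Editor's note] The sleep-consolidation mechanism Hassabis describes earlier, where the day's memories are replayed and folded into existing knowledge, has a rough engineering analogue in the experience-replay buffers used in deep reinforcement learning since DQN. A minimal sketch of that idea, not a description of any DeepMind system:

```python
import random
from collections import deque

class ReplayBuffer:
    """Store recent experiences and sample mixed batches, so that new
    information is interleaved ("replayed") with older memories during
    training instead of overwriting them: a crude analogue of
    consolidation in the brain."""

    def __init__(self, capacity: int = 10_000):
        # Oldest experiences fall out first once capacity is reached.
        self.buffer = deque(maxlen=capacity)

    def add(self, experience) -> None:
        self.buffer.append(experience)

    def sample(self, batch_size: int) -> list:
        # Uniform sampling mixes fresh and old experiences in each batch.
        return random.sample(list(self.buffer),
                             min(batch_size, len(self.buffer)))

# Usage: a capacity-5 buffer fed 8 experiences keeps only the 5 newest.
buf = ReplayBuffer(capacity=5)
for step in range(8):
    buf.add(("state", step))
batch = buf.sample(3)
assert len(batch) == 3
assert all(exp[1] >= 3 for exp in batch)  # steps 0-2 were evicted
```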
