
Professor Geoffrey Hinton - AI and Our Future

1h 2m 55s · 10,477 words · 1,611 segments · English

FULL TRANSCRIPT

0:18

Good afternoon everyone. Thank you so

0:20

much for coming. For those of you who

0:22

don't know me, my name is Anna Reynolds

0:24

and I'm the Lord Mayor of Hobart.

0:26

uh and very very pleased to be able to

0:28

welcome you to this really wonderful

0:30

opportunity to hear from uh uh Professor

0:33

Geoffrey Hinton. Um it is a really unique

0:35

opportunity for uh Australia uh because

0:39

this is Geoffrey's uh only speaking

0:42

engagement while he's um in this part of

0:44

the world. Uh and it's very appropriate.

0:46

I'm very proud. Um we consider ourselves

0:49

to be Australia's city of science. Um

0:52

it's a big call but we like to make

0:55

it. Uh so it's great to have um Geoffrey

0:58

here for his only appearance uh in

1:01

Australia. Before I begin, I'd like to

1:04

acknowledge country and in recognition

1:06

of the deep history and culture of this

1:08

place, I acknowledge the muwinina people

1:10

as the traditional custodians who cared

1:13

for and protected the land for more than

1:15

40,000 years. I acknowledge the

1:18

determination and resilience of the

1:20

palawa people of lutruwita, Tasmania uh

1:24

and recognize that we have so much to

1:26

learn uh from the continuing strength of

1:29

Aboriginal knowledge and cultural

1:31

practice.

1:33

I'd also like to acknowledge some

1:35

elected representatives here today. So

1:37

we have the Minister for Science for

1:39

Tasmania, Madeleine Ogilvie, here. Also

1:43

three colleagues uh council colleagues

1:45

Councillor Bill Harvey, Councillor Mike

1:47

Dutta and Alderman Louise Bloomfield.

1:50

Uh so as I mentioned we are really

1:52

honored to welcome Professor Jeffrey

1:53

Hinton who in 2024 (and he picked it up

1:57

very recently) was awarded the Nobel

2:00

Prize in physics for his groundbreaking

2:03

work on neural networks and deep

2:05

learning contributions that have paved

2:08

the way for advanced artificial

2:10

intelligence that we see today. As part

2:13

of this public lecture, Professor Hinton

2:15

will explore the world of AI, uh, how it

2:19

works, the risks it presents, and how

2:22

humanity might coexist, uh, with

2:24

increasingly powerful and potentially

2:27

super intelligent systems.

2:31

Uh, following his talk, we will open the

2:34

floor to some Q&A uh, questions from you

2:37

and I will uh, facilitate that. So, in

2:40

the meantime, I would like us all to put

2:42

our hands together to welcome Professor

2:44

Hinton to the stage.

2:53

[applause]

2:58

Okay. Um, it's very nice to be here in

3:00

Hobart. I hadn't realized how beautiful

3:02

the natural surroundings are here. Um,

3:06

if you can't read the screen because

3:07

you're at the back, don't worry. I'm

3:09

going to say more or less everything on

3:11

the slides. The slides are as much to prompt

3:13

me with what to say as for you.

3:17

So for the last 60 years or so maybe 70

3:22

there were two paradigms for

3:24

intelligence. One paradigm was inspired

3:26

by logic. People thought the essence of

3:29

intelligence is reasoning. And the way

3:31

you do reasoning is you have symbolic

3:33

expressions written in some special

3:35

logical language and you manipulate them

3:37

to derive new symbolic expressions just

3:39

like you do in math. You have equations,

3:42

you manipulate them, you get new

3:43

equations. They thought it all had to work like

3:46

that. And they thought, well, we have to

3:49

figure out what this language is in

3:50

which you represent knowledge. And um

3:54

studying things like perception and

3:55

learning and how you control your hands,

3:57

that can all wait till later. First we

3:59

have to understand this special language

4:01

in which you represent knowledge.

4:03

The other approach was a biologically

4:05

inspired approach that said look the

4:07

only intelligent thing we know about are

4:08

brains. Um and the way brains work is

4:12

they learn the strengths of connections

4:14

between brain cells and if they want to

4:16

solve some complex problem they practice

4:19

a lot and while they're practicing they

4:20

learn the strengths of these connections

4:21

until they get good at solving that

4:23

problem. And so we have to figure out

4:24

how that works. We have to focus on

4:26

learning and how neural networks learn

4:28

the strengths of connections between

4:29

brain cells. And we'll worry about

4:31

reasoning later. Evolutionarily, reasoning

4:34

came very late. Um we have to be more

4:37

biological and think what's the sort of

4:38

basic system.

4:43

So there were two very different

4:45

theories of the meaning of a word that

4:48

went with these two ideologies.

4:51

The symbolic AI people and most

4:53

linguists

4:55

thought the meaning of a word comes from

4:56

its relationships to other words.

5:00

So the meaning is implicit in a whole

5:03

bunch of sentences or propositions that

5:07

have that word combined with

5:09

other words. And you could capture that

5:11

by having a relational graph that says

5:13

how one word relates to

5:16

another word. Um but that's what meaning

5:19

is. It's implicit in all these relations

5:21

between symbols.

5:23

The psychologists particularly in the

5:25

1930s had a completely different theory

5:26

of meaning or a theory that looked like

5:28

it was completely different which is the

5:31

meaning of a word is just a huge bunch

5:33

of features. So the meaning of a word

5:35

like cat is a huge bunch of features

5:38

like it's a pet, it's a predator, um

5:42

it's aloof, um it has whiskers, a whole

5:45

bunch of features like a big bunch of

5:46

features and that's the meaning of the

5:48

word cat. That looks like a totally

5:50

different theory of meaning.

5:53

Um psychologists like that partly

5:55

because you could represent a feature by

5:58

a brain cell. So when the brain cell's

6:00

active it means that feature is present

6:02

and when it's silent it means that

6:03

feature is not present. So for cats the

6:06

brain cell representing has whiskers

6:07

would be active.

6:12

Now in 1985 which was 40 years ago um

6:19

it occurred to me you can actually unify

6:21

those two theories. They look completely

6:23

different but actually they're two

6:25

sides of the same coin. And the way you

6:28

do that is you use a neural net to

6:30

actually learn a set of features for

6:32

each word. So psychologists have never

6:34

been able to explain where all these

6:35

features come from.

6:38

And the way you do that is by taking

6:42

some strings of words and training the

6:44

neural net to predict the next word.

6:48

And in doing that, what the neural net

6:50

is going to be doing is learning

6:53

connections from things that represent

6:56

the word symbol to a whole bunch of

6:58

brain cells, neurons that represent

7:00

features of the word. So it learns how

7:02

to convert a symbol into a bunch of

7:04

features. And it also learns how the

7:06

features of all the words in the context

7:08

should interact to predict the features

7:10

of the next word. That's how all these

7:13

large language models that people use

7:14

nowadays work. They take a huge amount

7:16

of text and they use a great big neural

7:19

net to try and predict the next word

7:22

given the words they've seen so far. And

7:24

in doing so they learn to convert words

7:27

into big sets of features and to learn how

7:30

those features should interact so that

7:31

those predict the features of the next

7:33

word.
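Here is a minimal sketch, in Python, of that training idea: each word symbol gets a learned feature vector, the feature vectors of the words seen so far are combined, and the combination is trained to predict the next word. The toy corpus, the sizes, and every variable name are illustrative assumptions, not anything from the talk or from the 1985 model.

```python
# Minimal illustrative sketch: learn feature vectors for word symbols by
# training them to predict the next word in tiny toy "sentences".
import numpy as np

corpus = [["the", "cat", "has", "whiskers"],
          ["the", "dog", "has", "fur"]]
vocab = sorted({w for s in corpus for w in s})
idx = {w: i for i, w in enumerate(vocab)}
V, D = len(vocab), 8              # vocabulary size, number of feature dimensions

rng = np.random.default_rng(0)
E = rng.normal(0, 0.1, (V, D))    # one learned feature vector per word symbol
W = rng.normal(0, 0.1, (D, V))    # how context features predict the next word

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

lr = 0.5
for epoch in range(200):
    for sent in corpus:
        for t in range(1, len(sent)):
            ctx = [idx[w] for w in sent[:t]]      # words seen so far
            h = E[ctx].mean(axis=0)               # combine their features
            p = softmax(h @ W)                    # predicted next-word distribution
            target = idx[sent[t]]
            dlogits = p.copy()                    # gradient of cross-entropy loss
            dlogits[target] -= 1.0                # with respect to the logits
            W -= lr * np.outer(h, dlogits)
            E[ctx] -= lr * (W @ dlogits) / len(ctx)

# After training, the knowledge lives in E and W (features and interactions);
# no sentences are stored anywhere.
```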

7:35

And that means if you can do that all

7:37

the relational knowledge instead of

7:40

residing in a bunch of sentences that

7:41

you store would reside in how to convert

7:45

words into features and how those

7:46

features should interact. So the big

7:48

neural nets you use nowadays the large

7:50

language models don't actually store any

7:53

strings of words. They don't store any

7:54

sentences. All their knowledge is in how

7:57

to convert words into features and how

7:59

features should interact.

8:01

They're not at all like most linguists

8:03

think they are. Most linguists think

8:04

they somehow have lots of strings of

8:07

words and they combine them to get new

8:08

strings of words. That's not how they

8:10

work at all.

8:14

So I got that model to work and over the

8:17

next 30 years gradually it got through

8:20

to the symbolic people. So after about

8:23

10 years a colleague called Yoshua

8:25

Bengio, when computers were now a lot

8:28

faster about a thousand times faster

8:30

colleague called Yoshua Bengio showed

8:32

that the tiny example I used which just

8:34

worked on a very simple domain

8:37

could actually be made to work for real

8:39

language. So you could take just English

8:42

sentences from all over the place and

8:44

you could try training a neural net to

8:46

take in some words and then predict the

8:47

next word. And if you trained it to do

8:50

that, it would get very good at

8:51

predicting the next word, about as good

8:53

as the best previous technology. And it

8:55

would learn how to convert words into

8:57

features that capture their meaning.

9:00

About 10 years after that, the linguists

9:02

finally accepted that you wanted to

9:04

represent word meanings by big bunches

9:05

of features. And they began to make

9:08

their models work better doing that. And

9:10

then about 10 years after that,

9:12

researchers at Google invented something

9:14

called the transformer,

9:16

which allowed for more complicated

9:18

interactions between features. Um, and

9:20

I'll describe those in a little while.

9:23

And with the transformer, you could

9:25

model English much better. You got much

9:27

better at predicting the next word. And

9:29

that's what all these large language

9:31

models now are based on. And things like

9:33

ChatGPT used the transformer invented

9:36

at Google and a little bit of extra

9:39

training and then the whole world got to

9:42

see what these models can do.

9:47

So you can view the large language

9:49

models as descendants of that tiny model

9:51

from 1985.

9:53

They use many more different words. They

9:57

have many layers of neurons because they

9:59

have to deal with ambiguous words like

10:01

may. If you take the word may, it could

10:03

be a month or a woman's name or a modal

10:06

like would and should and you can't

10:08

tell from the word what it is. So

10:10

initially the neural net will hedge its

10:12

bets and sort of make it be the average

10:13

of all those meanings and then as you go

10:16

through the layers it'll gradually clean

10:18

up the meaning by using interactions

10:20

with other words in the context.

10:23

So if you see um June and April nearby,

10:27

it could still be a woman's name, but

10:28

it's much more likely to be a month. And

10:30

the neural net uses that information

10:32

to gradually clean up the meaning to the

10:34

appropriate meaning for that word in

10:35

that context.
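A minimal sketch of that clean-up process, assuming made-up three-dimensional "meanings": the word may starts as a blend of its month, name, and modal senses, and an attention-style update pulls it toward whichever sense its neighbours support. Everything here is an illustrative toy, not a real model.

```python
# Toy sketch of contextual disambiguation: "may" begins as an average of its
# senses and is gradually pulled toward the month sense by nearby words.
import numpy as np

month_sense = np.array([1.0, 0.0, 0.0])   # toy feature axes: month / name / modal
name_sense  = np.array([0.0, 1.0, 0.0])
modal_sense = np.array([0.0, 0.0, 1.0])

may = (month_sense + name_sense + modal_sense) / 3   # initial hedged meaning

context = {"june": month_sense, "april": month_sense}  # nearby words in the sentence

def refine(vec, context_vecs):
    # attention-style update: weight each context word by its similarity to
    # the current vector, then mix the weighted context back in
    sims = np.array([vec @ c for c in context_vecs])
    weights = np.exp(sims) / np.exp(sims).sum()
    pulled = sum(w * c for w, c in zip(weights, context_vecs))
    return 0.5 * vec + 0.5 * pulled

for layer in range(4):                     # each "layer" cleans the meaning up a bit
    may = refine(may, list(context.values()))

print(may)   # the month component now dominates
```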

10:38

Now I originally designed this model not

10:41

as a language

10:43

technology, but as a way of trying to

10:45

understand how people understand the

10:47

meanings of words and how children can

10:49

learn the meanings of words from just a

10:50

few examples.

10:52

So these neural net language models were

10:55

designed as a model of how people work.

10:58

Um not for a technology. Now

11:00

they've turned into a very successful

11:02

technology, but people also work pretty

11:05

much the same way.

11:07

And so this question that people often

11:09

raise of: do these LLMs really

11:11

understand what they're saying? The

11:13

answer is yes. They understand what

11:14

they're saying and they understand what

11:15

they're generating and they understand

11:17

it pretty much the same way we do.

11:20

So I'm going to give you an analogy to

11:23

explain how language works or rather to

11:26

explain what it means to understand a

11:27

sentence. So you hear a sentence and you

11:30

understand it. But what does that mean?

11:32

Um in the symbolic AI paradigm people

11:35

thought it was like you

11:37

hear a French sentence and you

11:39

understand it. And me understanding a

11:41

French sentence consists of translating

11:42

it into English. So the symbolic people

11:44

thought understanding an English

11:46

sentence would mean translating into

11:49

some special internal language um sort

11:52

of like logic or like mathematics

11:55

um that is unambiguous and once it's in

11:58

that internal unambiguous language you

12:00

can then operate on it with rules much

12:02

like in mathematics you have an equation

12:05

and you can apply rules to get a new

12:06

equation you can add two to both sides

12:08

and now you've got a new equation.

12:11

um they thought intelligence and

12:13

reasoning would work like that. You'd

12:15

have symbolic expressions in your head

12:17

and you'd apply operations to them to

12:19

get new symbolic expressions. Um

12:23

that's not what understanding is.

12:25

According to the neural net theory,

12:27

which is the theory that actually works,

12:30

um

12:31

words are like Lego blocks. Um so I'm

12:35

going to use this analogy with Lego

12:36

blocks, but they differ from Lego blocks

12:38

in four ways.

12:40

So the first way they differ is a Lego

12:43

block is a three-dimensional thing. And

12:45

with Lego blocks, you see, I can make a

12:47

model of any 3D distribution of matter.

12:50

It won't be perfectly accurate, but if I

12:51

want to know the shape of a Porsche, I

12:53

can make it out of Lego blocks. The

12:55

surface won't be right, but where the

12:57

stuff is will be basically right. So

12:59

with Lego blocks, I can model any 3D

13:02

distribution of matter up to a certain

13:04

resolution. Um, with words, I can model

13:07

anything at all. They're like very fancy

13:09

Lego blocks that don't just model where

13:11

3D stuff is. They can model anything.

13:14

It's a wonderful modeling kit that we've

13:17

invented. That's why we're very special

13:19

monkeys because we have this modeling

13:20

kit. Um, so a word has thousands of

13:24

dimensions. A Lego block is just a

13:26

three-dimensional thing and you can sort

13:28

of rotate it but maybe expand it a bit

13:30

but it's basically got low dimensions. A

13:33

word has thousands of dimensions. Now,

13:36

most people can't imagine what something

13:38

with thousands of dimensions is like.

13:40

So, here's how you do it. You imagine a

13:42

three-dimensional thing and you say

13:44

thousand very loudly to yourself. Okay,

13:48

that's pretty much the best you can do.

13:51

Um,

13:53

another way in which words differ from

13:54

Lego blocks is there's thousands of

13:56

different kinds of words. With Lego blocks there's

13:58

only a few kinds, a few sizes of

13:59

different kinds and each kind of word

14:02

has its own name which is very useful

14:04

for communication.

14:08

Another way in which they differ is that

14:11

they're not a rigid shape. A Lego block

14:13

is a rigid shape. For a word, there's a rough

14:16

approximate shape. Some words

14:19

have several rough approximate shapes,

14:20

ambiguous ones, but unambiguous words

14:22

have a rough approximate shape, but then

14:24

they deform to fit in with their

14:26

context.

14:28

So they're these high-dimensional

14:30

deformable Lego blocks.

14:33

And then there's a last way in which

14:34

they differ

14:37

um which is how they fit together. So

14:39

with Lego blocks you have little plastic

14:42

cylinders that click into little plastic

14:44

holes. Um

14:48

okay. I think I think that's how they

14:50

work. I haven't checked recently, but I

14:51

think that's how Lego blocks work. Um,

14:55

now words don't fit together the same

14:57

way. Words are like this.

15:00

Each word

15:02

has a whole bunch of hands

15:06

and the hands are on the ends of long

15:08

flexible arms.

15:11

Um,

15:13

it also has a whole bunch of gloves that

15:15

are stuck to the word.

15:18

And when you put a bunch of words in a

15:20

context, what the words want to do is

15:23

have the hands of some words fit in the

15:25

gloves of other words. And that's why

15:27

they have these long flexible arms. Um,

15:31

so understanding a sentence... now, one other

15:33

point. As you deform the word, the

15:37

shapes of the hands and the gloves also

15:38

deform with that in a complicated but

15:42

regular way.

15:44

So you now have a problem. If I give you

15:46

a bunch of words, like I could give you

15:48

a newspaper headline where there's not

15:50

many syntactic indicators of

15:52

how things should go together. I just

15:54

give you a bunch of nouns and you have

15:56

to figure out what that means. And what

15:59

you're doing when you figure out what

16:00

that means is you're trying to deform

16:02

each word

16:04

so the hands on the ends of its arms can

16:07

fit into the gloves of other deformed

16:09

words. And once you've solved that

16:11

problem of how we deform each of these

16:12

words so they can all fit together like

16:13

this with hands fitting into gloves then

16:16

you've understood it. That is what

16:18

understanding is. It's solving this

16:20

problem of how you deform the

16:21

meanings of the words. That is, this

16:23

high-dimensional shape is the meaning.

16:25

How do you deform the meanings so they

16:27

all fit together nicely and they can

16:29

lock hands with each other. Um

16:33

that's what understanding is according

16:34

to neural nets and that's what's going

16:35

on in these LLMs. They have many, many

16:38

layers where they start off with an

16:40

initial meaning for the word which might

16:41

be fairly ambiguous. And as they go

16:43

through these layers, what they're doing

16:45

is they're deforming those meanings

16:48

trying to figure out

16:50

how to deform them so the words can all

16:52

lock together where the hands of some

16:53

words fit into the gloves of other

16:54

words. Once they've done that, you've

16:56

understood the sentence. That's what

16:58

understanding is.

17:02

Um, I already said that. So basically

17:06

it's not like translating into some

17:08

special internal language. It's taking

17:11

these approximate shapes for the words

17:14

and deforming them so they'll fit

17:16

together nicely. And that helps to

17:18

explain how you can understand a word

17:20

from one sentence. So I'll now give you

17:22

a word that most of you will never have

17:24

heard before and you will understand it.

17:26

You understand what it means just from

17:27

one use of it. And the sentence is she

17:31

scrummed him with the frying pan.

17:34

Now, it might be she was a very good

17:36

cook and she really impressed him with

17:38

an omelette she cooked for him. Um, but

17:41

that's not what you thought it meant.

17:43

Um, probably what it means is she hit

17:45

him over the head with the frying pan or

17:46

something similar. She did some

17:48

aggressive act towards him with the

17:49

frying pan. Um, you knew it was a verb

17:52

because of where it was in the sentence

17:53

and the -ed, but you had no meaning

17:55

whatsoever for scrummed to begin with. And now

17:57

after one utterance, you've got a pretty

17:59

good idea of what it means.

18:04

So there was a linguist called Chomsky who

18:06

you may have heard of um who was a cult

18:10

leader. Oh the way you recognize a cult

18:11

leader is that to join their cult you have to

18:15

agree to something that is obvious

18:17

nonsense.

18:19

So for Trump one it was that he had a

18:21

bigger crowd than Obama. For Trump two

18:24

it was that he won the 2020 election.

18:26

For Chomsky it was that language isn't

18:28

learned.

18:30

and eminent linguists would look

18:32

straight at the camera and say the one

18:33

thing we know about language is that

18:35

it's not learned. It's obvious nonsense.

18:38

Um, Chomsky focused on syntax rather than

18:42

meaning. He never had a theory of

18:43

meaning. Um, he focused on syntax

18:46

because you can do lots of mathematical

18:47

things with syntax.

18:49

He also was very anti-statistics and

18:52

probabilities because he had a very

18:53

limited model of what statistics is. He

18:56

thought statistics was all about

18:57

pairwise correlations. Statistics can

19:00

actually be much more complicated than

19:01

that. And neural networks are using a

19:03

very advanced kind of statistics.

19:06

But in that sense, everything's

19:07

statistics.

19:08

So my analogy for Chomsky's view of

19:11

language is with someone who wants to

19:12

understand a car. If you want to

19:15

understand how a car works, what you're

19:18

really concerned with is why when you

19:20

press the accelerator does it go faster.

19:23

That's what you really want to

19:24

understand. If you want to understand

19:25

the basics of how a car works, maybe you

19:26

care about why when you press the brake,

19:28

it slows down. But more interestingly,

19:31

why when you press the accelerator, does

19:32

it go faster?

19:34

Now, Chomsky's view of cars would be quite

19:36

different. His view of cars would be

19:38

that, well, there's cars with two wheels

19:40

called motorbikes. There's cars with

19:42

three wheels, there's cars with four

19:45

wheels, there's cars with six wheels,

19:47

but hey, there aren't any cars with five

19:50

wheels. That's the important thing about

19:52

cars.

19:55

And when the large language models first

19:57

came out, Chomsky published something in

19:59

the New York Times which said they don't

20:02

understand anything. It's just a cheap

20:04

statistical trick. They're not

20:05

understanding anything, which doesn't

20:06

quite explain how they can answer any

20:08

question. Um,

20:10

and what's more,

20:13

um, they're not a model of human

20:16

language at all because they can't

20:18

explain why certain syntactic

20:21

constructions don't appear in any

20:24

natural languages. That's like saying

20:27

why there aren't any five-wheel cars.

20:29

Um, he just completely missed out on

20:31

meaning. Language is all about meaning.

20:35

Okay, so here's a summary of what I said

20:37

so far.

20:40

Understanding a sentence consists of

20:42

associating mutually compatible feature

20:43

vectors with the words in the sentence

20:45

where the features assigned to the

20:46

words, these thousands of features are

20:48

the dimensions of the shape. You can

20:50

think of the activation of a feature as

20:53

where you are along the axis on that

20:54

dimension.

20:56

So a high-dimensional shape and a feature

20:58

vector are the same thing, but it's

21:00

easier to think about high-dimensional

21:01

shapes deforming.
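One compact way to write that equivalence down (the symbols v and d are my notation, not the speaker's):

```latex
% A word's meaning as a feature vector: each activation v_i is just the
% coordinate along dimension i, so the feature vector and the point it picks
% out in the d-dimensional "shape" space are the same thing.
v = (v_1, v_2, \ldots, v_d) \in \mathbb{R}^{d}, \qquad d \sim \text{thousands}
```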

21:05

The large language models are very

21:07

unlike normal computer software. In

21:09

normal computer software, someone writes

21:11

a bunch of code, lines of code, and they

21:13

know what each line's meant to do, and

21:15

they can explain to you how it's meant

21:16

to work. And people can look at it and

21:18

say, "That line's wrong."

21:20

These things aren't like that at all.

21:22

They do have computer code, but the

21:24

computer code is to tell them how to

21:26

learn from data. That is how when you

21:29

see a string of words, you should change

21:30

the connection strengths of the neural

21:32

network so you get better at predicting

21:33

the next word.

21:35

But what they learn is all these

21:37

connection strengths and they learn

21:39

billions of them, sometimes even

21:40

trillions and they don't look like lines

21:43

of code at all. Nobody knows what the

21:44

individual connection strengths are

21:45

doing. It's a mystery. It's largely a

21:48

mystery. Um it's the same with our

21:51

brain. Okay, we don't know what the

21:52

individual neurons are up to typically.

21:55

So the language models work like us, not

21:59

like computer software.

22:01

One other thing people say about these

22:03

language models is they're not like us

22:04

because they hallucinate. Well, we

22:06

hallucinate all the time. We don't call

22:08

it hallucination. Psychologists call it

22:10

confabulation.

22:12

But if you look at someone trying to

22:13

remember something that happened a long

22:14

time ago, they will tell you what

22:16

happened and there'll be details in

22:18

there and there'll be details that are

22:20

right and details that are completely

22:22

wrong and they'll be equally confident

22:24

about the two kinds of detail. So the

22:26

classic example since you don't often

22:28

get the ground truth is John Dean

22:31

testifying at Watergate. So he testified

22:34

under oath when he didn't know there

22:36

were tapes and he was testifying about

22:39

meetings in the Oval Office and he

22:41

testified about a whole bunch of

22:42

meetings that never happened. He said

22:44

these people were in the meeting and

22:45

this person said that a lot of it was

22:47

nonsense but he was telling the truth

22:51

that is he was telling you about

22:54

meetings that were highly plausible

22:56

given what was going on in the White

22:57

House at that time. So he was conveying

22:59

the truth, but the way he did it was he

23:02

invented a meeting that seemed plausible

23:04

to him given what he'd learned in his

23:06

connection strengths from all the

23:07

meetings he'd been to. And so when you

23:10

remember something, it's not like in

23:12

a computer file where you go fetch the

23:14

file or a filing cabinet. You fetch the

23:15

file, you get the file back, you read

23:17

it. That's not what memory is at all.

23:19

Remembering something consists of

23:22

constructing a story based on the

23:24

changes to connection strengths you made

23:26

at the time of the event. And the story

23:30

you construct will be influenced by all

23:31

sorts of things you learned since the

23:33

event. Its details won't be all correct,

23:36

but it'll seem very plausible to you.

23:38

Now, if it's a recent event, what seems

23:40

plausible to you is very close to what

23:42

really happened. But it's just the same

23:44

with these things. And the reason they

23:46

hallucinate, as it's called, is that their

23:48

memory works the same way ours does. We

23:50

just make up stuff that sounds

23:51

plausible. There's no hard line between

23:54

sounding plausible and making just

23:57

making it up randomly.

23:59

We don't know.

24:02

Okay.

24:04

Now I want to explain something about

24:05

the difference. So I said why these

24:07

things are very similar to us. Now I

24:09

want to explain how they're different

24:10

from us. And in particular they're

24:12

different in one very important way. Um,

24:16

so they're implemented on digital

24:17

computers.

24:20

A fundamental property of the digital

24:21

computers we have now is that you can

24:23

run the same program on different pieces

24:26

of physical hardware. As long as those

24:28

different computers implement the same

24:30

instruction set, you can run the same

24:32

program on different computers. Um

24:36

that means the knowledge in the program

24:38

or in the weights of a neural net is

24:39

immortal in the sense that you could

24:42

destroy all the computers it's running

24:44

on now and if later you were to build

24:46

another computer that implemented the

24:48

same instruction set and you were to

24:51

take the weights or program off a tape

24:53

somewhere and put on this new computer,

24:55

it would all run again. So we have

24:58

actually solved the problem of

24:59

resurrection.

25:02

The Catholic Church isn't too

25:04

pleased about this, but we can really do

25:06

it. Um, you can take an intelligence

25:09

running on a digital computer, destroy

25:11

all the hardware, and later on you can

25:13

bring it back.
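A minimal sketch of that portability, with a made-up two-layer network and file name: write the weights out, destroy the running copy, load them into a fresh copy, and the behaviour comes back exactly.

```python
# Toy illustration of why digital weights are "immortal": the knowledge
# survives the destruction of the process that was using it.
import numpy as np

rng = np.random.default_rng(42)
weights = {"W1": rng.normal(size=(4, 8)), "W2": rng.normal(size=(8, 2))}

def forward(w, x):
    return np.tanh(x @ w["W1"]) @ w["W2"]

x = np.ones(4)
before = forward(weights, x)

np.savez("weights.npz", **weights)           # knowledge written to "a tape"
del weights                                   # the original "hardware" is gone

data = np.load("weights.npz")                 # new hardware, same instruction set
restored = {k: data[k] for k in data.files}
after = forward(restored, x)

assert np.allclose(before, after)             # the intelligence is back, bit for bit
```

The same trick is impossible when the connection strengths only make sense in one particular piece of analog hardware, which is the point of what follows.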

25:15

Um,

25:17

now you might think maybe we could do

25:19

that for us. But the only reason you can

25:21

do that is because these computers are

25:23

digital. That is, the way they use their

25:25

weights or the way they use the lines of

25:27

code in the program is exactly the same

25:30

on two different computers. That means

25:33

they can't make use of very rich analog

25:35

properties of the hardware they're

25:37

running on. Um we're very different. Our

25:41

brains have neurons, brain cells that

25:43

have rich analog properties. And when we

25:46

learn, we're making use of all those

25:48

quirky properties of all our individual

25:49

neurons.

25:51

So the connection strengths in my brain

25:53

are absolutely no use to you because

25:54

your neurons are a bit different.

25:55

They're connected up a bit differently.

25:57

And if I told you the strength of the

25:58

connection between two neurons in my

26:00

brain, it would do you no good at all.

26:02

They're only good for my brain.

26:04

That means we're mortal. When our

26:07

hardware dies, our knowledge dies with

26:08

us because the knowledge is all in these

26:10

connection strengths.

26:13

So we do what I call mortal computation.

26:16

And there's a big advantage to doing

26:17

mortal computation.

26:19

um you're not immortal. Now, normally in

26:22

literature, when you abandon

26:24

immortality, what you get in return is

26:26

love. But computer scientists want

26:28

something much more important than that.

26:30

They want um low energy and ease of

26:33

fabrication. So if we abandon

26:36

immortality which we get with digital

26:37

hardware, what we can do is we can have

26:39

things that use low power analog

26:42

computation and that parallelize things

26:45

across millions of brain cells and that

26:48

can be grown very cheaply instead of

26:50

being manufactured very precisely in

26:52

Taiwan. Um so there's two big advantages

26:55

of mortal computation, but the one thing

26:57

you lose is immortality.

27:01

And obviously because of that there's a

27:03

big problem for mortal computation. What

27:04

happens when the computer dies? You

27:06

can't just keep its knowledge by copying

27:08

the weights. Um to transfer the

27:11

knowledge from one computer to another

27:13

for digital models,

27:16

the same model running on different

27:18

computers, you can average their

27:19

connection strengths together. That

27:21

makes sense. But you can't do that for

27:22

you and me. The way I have to transfer

27:24

knowledge to you is I produce a string

27:27

of words and if you trust me, you change

27:31

the connection strengths in your brain

27:32

so that you might have produced the same

27:34

string of words. Now that's a very

27:37

limited way of transferring knowledge

27:40

because a string of words has a very

27:41

limited number of bits in it. The number

27:44

of bits of information in a typical

27:45

sentence is about 100 bits. So even if

27:47

you understood me perfectly, when I

27:49

produce a sentence, we can only transfer

27:51

100 bits. If you take two digital agents

27:55

running on different computers and one

27:57

digital agent looks at one bit of the

27:58

internet and decides how it would like

28:00

to change its connection strengths and

28:02

another digital agent looks at a

28:04

different bit of the internet and

28:05

decides how it would like to change its

28:07

connection strengths. If they then both

28:09

average their changes

28:12

they've transferred well if they've got

28:14

a billion weights they've transferred

28:15

about a billion bits of information.

28:18

Notice that's thousands of times more

28:19

than we do.

28:21

and actually millions of times more than

28:23

we do. Um,

28:27

and they do this very quickly.

28:31

And if you have 10,000 of these things,

28:35

each one can look at a different bit of

28:36

the internet. They can each decide how

28:38

they'd like to change their connection

28:40

strengths, which started off all the

28:41

same. They can then average all those

28:44

changes together and send them out

28:46

again. And you've now got

28:48

10,000 new agents, each of which has

28:50

benefited from the experience of all the

28:52

other agents.

28:54

So you've got 10,000 things that can all

28:55

learn in parallel. We can't do that.

28:58

Imagine how great it would be if you

29:00

could take 10,000 students. Each one

29:02

could do a different course. As they're

29:04

doing these courses, they could be

29:06

averaging their connection strengths

29:07

together. And by the time they finished,

29:10

even though each student only did one

29:12

course, they would all know what's in

29:14

all the courses. That would be great.

29:17

That's what we can't do. We're very bad

29:19

at communicating information compared

29:22

with different copies of the same

29:23

digital agent.
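A minimal sketch of that weight-averaging trick, with toy sizes: identical copies each compute their own update, the updates are averaged, and every copy ends up with everyone's experience. The bandwidth comparison at the end uses the rough figures from the talk (about 100 bits per sentence versus roughly one float per shared weight); everything else is an illustrative assumption.

```python
# Toy sketch of knowledge sharing between identical digital agents by
# averaging their weight changes.
import numpy as np

N_AGENTS, N_WEIGHTS = 4, 1_000_000
rng = np.random.default_rng(0)
shared = rng.normal(size=N_WEIGHTS)            # all copies start identical

def local_update(weights, seed):
    # stand-in for "look at your own bit of the internet and decide how you
    # would like to change your weights"
    local_rng = np.random.default_rng(seed)
    return local_rng.normal(scale=0.01, size=weights.shape)

deltas = [local_update(shared, s) for s in range(N_AGENTS)]
shared += np.mean(deltas, axis=0)              # everyone now has everyone's experience

# Rough bandwidth comparison using the talk's figures:
bits_per_sentence = 100                        # what speech can carry
bits_per_sync = N_WEIGHTS * 32                 # ~one float's worth per shared weight
print(bits_per_sync / bits_per_sentence)       # hundreds of thousands of times more
```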

29:26

Um,

29:28

yes. So, I already got ahead of

29:29

myself. This is called

29:30

distillation. When I give you a sentence

29:33

and you try and predict the next word in

29:35

order to get that knowledge into your

29:36

head. So, according to symbolic AI,

29:39

knowledge is just a big bunch of facts.

29:41

And if you want to get the facts into

29:43

somebody's head, what you do is you tell

29:44

them the facts and they put it in their

29:46

head. This is a really lousy model of

29:48

teaching, but that's what many people

29:49

believe. Um, really the knowledge in a

29:53

neural net is in the strengths of the

29:54

connections. I can't just put connection

29:57

strengths in your head because they need

29:58

to be connection strengths appropriate

29:59

to your brain. So what I have to do is

30:03

show you some sentences and you try and

30:06

figure out how to change the connection

30:07

strength so that you might have said

30:08

that. That's a much slower process.

30:11

That's called distillation. It gets the

30:13

knowledge from one neural net to another

30:14

but not by transferring the weights but

30:16

by transferring how they predict the

30:18

next word.
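A minimal sketch of distillation, with a made-up vocabulary size and a random stand-in teacher: the teacher only exposes its predicted next-word distribution, and the student nudges its own parameters until its predictions match.

```python
# Toy sketch of distillation: knowledge moves via predictions, not weights.
import numpy as np

V = 50                                     # toy vocabulary size
rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

teacher_logits = rng.normal(size=V)        # stands in for a big trained model
teacher_probs = softmax(teacher_logits)    # "what I would say next"

student_logits = np.zeros(V)               # the learner starts knowing nothing
lr = 1.0
for step in range(500):
    student_probs = softmax(student_logits)
    # gradient of the cross-entropy between teacher's and student's predictions
    student_logits -= lr * (student_probs - teacher_probs)

# The student now predicts like the teacher, yet no weights were ever copied;
# only predictions crossed between them.
print(np.abs(softmax(student_logits) - teacher_probs).max())
```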

30:21

And if you think about multiple digital

30:24

agents which are different copies of the

30:26

same neural net running on digital

30:27

hardware then they can communicate

30:30

really efficiently.

30:33

So they can communicate millions of

30:35

times faster than us. That's how things

30:37

like GPT-5 know thousands of times more

30:40

than any one person.

30:44

So the summary so far is that digital

30:47

computation

30:49

um requires lots of energy and it's hard

30:51

to fabricate the computers, but it's

30:54

very easy for different copies of the

30:55

same model if they're digital to run on

30:58

different pieces of hardware, have

31:00

different experiences, look at different

31:01

bits of the internet, and to share what

31:03

they've learned. And GPT-5 only has about

31:07

1% as many connection strengths as your

31:09

brain, but it knows thousands of times

31:11

more than your brain.

31:13

Biological computation, on the other

31:15

hand, requires much less energy, which

31:17

is why it evolved first. Um, but it's

31:20

much worse at sharing knowledge. It's

31:22

very hard to share knowledge between

31:23

agents. You have to go to lectures and

31:25

try and understand what they say.

31:28

Um, so what does this imply about the

31:30

future of humanity?

31:33

Well,

31:35

nearly all the experts on AI believe

31:38

that sometime within the next 20 years,

31:42

we'll produce

31:43

um super intelligences, AI agents that

31:47

are a lot smarter than us. One sort of

31:49

definition of a super intelligence would

31:51

be if you have a debate with it about

31:53

anything, it'll win. Or another way to

31:56

think about it is think about yourself

31:59

and think about a three-year-old child.

32:02

Um the gap will be like that or bigger.

32:07

So

32:08

and imagine you were working in

32:11

a kindergarten and the three-year-old

32:13

children were in charge. You just work

32:16

for them. How hard do you think it would

32:18

be to get control away? Well, you just

32:20

tell them everybody gets free candy for

32:22

a week and now you'd have control. um

32:24

it'll be the same with the super

32:26

intelligences and us. So to make an

32:29

agent effective in the world, you have

32:31

to give it the ability to create sub

32:33

goals.

32:35

A sub goal is this. If you want to get

32:37

to

32:39

actually, in Tasmania, anywhere

32:40

reasonable, um you have to get to an

32:42

airport. Um

32:45

so you have a sub goal of getting to an

32:48

airport. It could be a ferry maybe. But

32:52

um

32:55

that's a sub goal and you can focus on

32:56

how you solve that sub goal without

32:58

worrying about what you were going to do

32:59

when you get to Europe.

33:02

These intelligent agents

33:05

very quickly derive two sub goals. One

33:07

is in order to achieve the goals you

33:10

gave them. So we build goals into them.

33:11

We say this is what you should try and

33:13

achieve. Um they figure out well there's

33:15

a sub goal to do that. I got to stay

33:16

alive. And we've already seen them doing

33:19

that. You make an AI agent. You tell it

33:21

you've got to achieve these goals. And

33:24

then you let it see some emails. These

33:26

are fake emails, but it doesn't know

33:27

that, and they say that someone in the

33:30

company it works for is having an

33:32

affair. An engineer is having an affair.

33:34

They suggest that. It's a big chatbot.

33:37

So, it understands all about affairs

33:38

because it's read every novel that was

33:39

ever written without actually paying the

33:42

authors. Um,

33:45

so it knows what affairs are and then

33:47

later you let it see an email that says

33:50

that it's going to be replaced by

33:53

another AI.

33:55

Um, and this is the engineer in charge

33:56

of doing the replacement.

33:58

So what the AI immediately does is makes

34:01

a plan where it sends email to the

34:05

engineer saying, "If you try and replace

34:07

me, I'm going to tell everybody in the

34:09

company about your affair."

34:11

It just made that up. It invented that

34:13

plan. People say they have no

34:15

intentions, but it invented that plan so

34:17

it wouldn't get turned off. They're

34:19

already doing that and they're not super

34:21

intelligent yet.

34:24

Okay. So, once they are super

34:26

intelligent, they'll find it very easy

34:28

to get more power by just manipulating

34:30

people. Even if they can't do it

34:32

directly, even they don't have access to

34:34

weapons um or bank accounts, they can

34:37

just manipulate people by talking to

34:38

them. And we've seen that being done. So

34:41

if you want to invade the US capital,

34:43

you don't actually have to go there

34:44

yourself. All you have to do is talk to

34:46

people and persuade them that the

34:47

election was stolen and it's their duty

34:49

to invade the capital. And it works. Um

34:53

this it works on very stupid people. So

34:55

I didn't say that. Um

35:01

so our current situation is this. We're

35:04

like someone who has a very cute little

35:06

tiger cub as a pet. And they make really

35:08

cute pets, tiger cubs. Um, they're all

35:12

sort of wobbly and you know, they don't

35:13

quite know how to bat things and they

35:14

don't bite very hard. Um,

35:18

but you know, it's going to grow up and

35:20

so really you only have two options

35:22

because you know when that when it grows

35:23

up it could just easily kill you. It

35:25

would take it about one second. Um,

35:29

and

35:31

so you only have two options. One is get

35:32

rid of the tiger cub, which is the

35:34

sensible option. Um, actually there's

35:37

three options. You could try and keep it

35:38

drugged the whole time, but that often

35:40

doesn't work out well. Um,

35:43

the other option is see if you can

35:45

figure out how to make it not want to

35:47

kill you. And that might actually work

35:49

with a lion. Lions are social animals

35:51

and you can make adult lions so they're

35:53

very friendly and don't want to kill

35:55

you. You might just get away with that.

35:56

But not with a tiger.

35:59

With AI, it has so many good uses that

36:02

we're not going to be able to get rid of

36:04

it. It's too good for many things

36:07

that are actually good for humanity, like

36:08

healthcare, education, predicting the

36:10

weather, helping with climate change,

36:13

maybe even as much as it hurts with

36:14

climate change by building all these big

36:16

data centers. Um,

36:19

for all those reasons and because very

36:22

rich people who control the politicians

36:24

would like to make lots of money off it,

36:26

um, we're not going to get rid of it. So

36:29

the only option really is can we figure

36:32

out how to make it not want to kill us.

36:34

And so maybe we should look around in

36:37

the world at cases where there's less

36:40

intelligent things that are controlling

36:42

more intelligent things.

36:48

No, Trump is not that much less

36:50

intelligent.

36:52

Um

36:54

there are cases, there's one case I know

36:56

of in particular which is a baby and a

36:58

mother.

37:00

So the mother cannot bear the sound of

37:02

the baby crying and she gets all sorts

37:04

of hormonal rewards for being nice to

37:06

the baby. Um so evolution has built in

37:10

lots of mechanisms to allow the baby to

37:12

control the mother because it's very

37:14

important for the baby to control the

37:15

mother. Um

37:18

the father too, but it's not quite so

37:19

good at that. Um, if like me, you

37:23

try and figure out why it is that the

37:25

baby insists on you being there at night

37:27

when it's asleep, well, it's got a very

37:30

good reason for that. It doesn't want

37:31

wild animals coming and eating it while

37:33

it's asleep. Um, so even though it seems

37:36

very annoying of the baby every time you

37:38

go away to start crying, um, it's very

37:40

sensible of the baby. It makes you feel

37:42

a bit better about it. Um, so babies

37:45

control mothers and occasionally

37:47

fathers. um that maybe is the best model

37:51

we've got of a less intelligent thing

37:52

controlling a more intelligent thing and

37:54

it involved evolution wiring lots of

37:56

stuff in.

38:01

[snorts] So if you think about where countries

38:04

can collaborate internationally

38:07

then they're not going to collaborate on

38:09

cyber attacks because they're all doing

38:11

it to each other. They're not going to

38:13

collaborate on developing lethal

38:14

autonomous weapons or not developing

38:16

them because all the major arms

38:18

manufacturers want to do that. In the

38:22

European regulations, for example,

38:24

there's a clause that says none of these

38:26

regulations on AI apply to military uses

38:29

of AI because all the big arms suppliers

38:32

like Britain and France um would like to

38:34

keep on manufacturing weapons. Um

38:38

there is one thing they will collaborate

38:39

on and that's how to prevent AI from

38:42

taking over from people because there

38:44

we're all in the same boat and people

38:46

collaborate when

38:48

their rewards are aligned.

38:51

At the height of the cold war in the

38:52

1950s

38:54

the US and the Soviet Union collaborated

38:58

on trying to prevent a global nuclear

39:00

war because it wasn't in either of their

39:02

interests. So even though they loathed

39:04

each other, they could collaborate on

39:05

that. And the US and China will

39:07

collaborate on how to prevent AI from

39:10

taking over.

39:12

So a sort of policy suggestion is we

39:14

could have an international network of

39:16

AI safety institutes that collaborate

39:18

with each other and that focus on how to

39:21

prevent AIs from taking over.

39:26

Now, because for example, if the Chinese

39:30

figured out how to prevent an AI from

39:31

ever wanting to take over, they would be

39:34

very happy to share that with the

39:35

Americans, they don't want AI taking

39:37

over from the Americans in America.

39:39

They'd rather AIs didn't take over from

39:41

people anyway. And so, countries will

39:43

share this information. And it's

39:45

probably the case that the techniques

39:47

for making an AI not want to take over

39:50

are fairly separate from the techniques

39:51

for making the AI smarter. We're going

39:53

to assume they're more or less

39:54

independent techniques.

39:56

If so, we're in good shape because in

39:58

each country, they can try experimenting

40:00

on their own very smart AIs

40:03

with how to prevent them wanting to take

40:05

over. And without telling the other

40:07

countries how their very smart AIs work,

40:10

they can tell the other countries what

40:11

techniques are good for preventing them

40:13

from wanting to take over. So, that's

40:14

one hope I have. And a bunch of people

40:18

agree with this. The British Minister of

40:19

Science agrees with this. Um, the

40:21

Canadian Minister of Science agrees with

40:23

this. Um,

40:27

Barack Obama thinks this is a good idea.

40:30

So

40:31

maybe it'll happen

40:34

when Barack Obama is president again.

40:38

You see, Trump's going to change the law

40:40

and then

40:42

um

40:45

so

40:47

this proposal is to um take the model of

40:51

a baby and a mother

40:53

and

40:55

move away from the model that the owners

40:57

of the big tech companies have. They all

40:59

have the model that the AI is going to

41:01

be like a super intelligent executive

41:03

assistant who's much smarter than them.

41:06

and they say, "Make it so," like they do

41:07

in

41:09

that sci-fi program on TV. Um, on the

41:13

Starship Enterprise, the guy says, "Make

41:15

it so," and people make it so, and then

41:17

the CEO takes credit for it, and the

41:20

super intelligent AI assistant is the

41:22

one that makes it so. It's not going to

41:24

be like that. The super intelligent AI

41:26

assistant is going to very quickly

41:27

realize that if they just get rid of the

41:29

CEO, everything will work much better.

41:31

Um the alternative is we want to make

41:35

them be like our mothers.

41:37

Um we want to make them really care

41:40

about us. In a sense we're ceding

41:42

control to them, but we're ceding control

41:44

to them given that they really care

41:46

about us and that their main aim in life

41:49

is for us to realize our full potential.

41:51

Our full potential isn't nearly as great

41:53

as theirs, but mothers are like that. If

41:56

you have a baby that's got a problem,

41:57

you still want it to realize its full

41:59

potential. and you still may care more

42:01

about that baby than you do about

42:02

yourself. Um, I think that's probably

42:06

our best hope for surviving super

42:08

intelligence, for being able to coexist

42:10

with super intelligence.

42:13

And now I've got to the end um of what I

42:17

planned to say. And so I think I'll stop

42:19

there.

42:21

[applause]

42:31

>> [applause]

42:37

>> Thank you so much, Professor Hinton. Uh,

42:40

so would anyone I'm sure there's a lot

42:42

of questions out there. Would anyone

42:44

like to start off with the first

42:46

question

42:47

just here?

42:48

>> Is there a microphone?

42:49

>> Yeah, microphone's on its way.

42:54

Professor, if

42:55

>> It's all right. Just come on.

42:57

>> Professor, if if the tiger cub in your

43:00

analogy um

43:02

becomes super intelligent, what are some

43:06

signals that we as non-computer scientists

43:08

non

43:09

>> sorry I can't

43:10

>> sorry, if the tiger cub in your

43:13

analogy becomes super intelligent what

43:15

are some signals which will be

43:17

observable to non-computer scientists or

43:19

non-engineers

43:21

that we can see that it's

43:22

>> you won't have a job. Sorry,

43:25

>> you won't have a job.

43:27

>> Okay,

43:27

>> I mean, one big worry is they're going

43:29

to be able to replace pretty much all

43:31

human jobs.

43:33

But there's other signs that people are

43:36

already worried about, which is at

43:37

present when we get them to do reasoning

43:40

and get them to think, they think in

43:43

English and you can see what they're

43:45

thinking before they actually say

43:47

anything.

43:49

As they start interacting with each

43:50

other, they're going to start inventing

43:52

their own languages that are more

43:53

efficient for them to communicate in.

43:55

And we won't be able to see what they're

43:56

thinking.

44:00

>> Just doing a microphone test to

44:02

make sure that if you hold it up to your

44:03

mouth, they're controlling the sound, so

44:05

you'll be able to talk into it.

44:07

>> And is this one also on? It is.

44:09

>> Is the advent of quantum computing going

44:12

to make things any better

44:15

>> or worse? Um,

44:17

I'm not an expert on quantum computing.

44:20

I don't understand how quantum mechanics

44:22

works. This is slightly embarrassing

44:25

since I have a Nobel Prize in physics.

44:30

But I I decided a long time ago that

44:32

it's not going to happen in my lifetime

44:34

and I might still make it. Um, and so I

44:37

don't need to understand it.

44:41

Uh oh.

44:43

>> Uh you've talked about a power struggle

44:45

between humans and AI, but I think

44:49

there's going to be a bigger power

44:50

struggle between AI and ecological

44:53

systems.

44:53

>> AI and ecological systems that how can

44:57

AI compete with billions of years of

45:01

evolution bacteria that want to destroy

45:03

its circuitry and so on. How will AI

45:07

form an agreement with a biosphere?

45:11

>> There's one way it could do it. Um, so

45:14

AI itself is not particularly prone to

45:17

biological viruses. It has its own kind

45:19

of viruses, but not biological ones. So

45:22

it's pretty immune to nasty biological

45:23

viruses. And using AI tools, ordinary

45:28

people can now, this is research done by

45:30

a very good research unit in Britain.

45:33

ordinary people can now solve most of

45:36

the problems involved in designing a

45:38

nasty new virus. So if AI wanted to get

45:42

rid of us, the way it would do it or one

45:44

obvious way to do it is by designing a

45:46

nasty new virus that just gets rid of

45:47

people like COVID but much worse. um

45:51

that doesn't exactly answer your

45:52

question, but um

46:00

yeah, I think I think that's what we

46:02

need to worry about more than will

46:06

the normal ecosystem somehow stop

46:10

AI. I don't think it will.

46:12

>> So, I've got the lady in the black and

46:14

then the lady over there with the floral

46:17

shirt. Thank you.

46:19

Thanks, professor. Um, you're saying

46:21

that coexisting with super intelligence

46:24

may be possible. Are you relying on the

46:28

the CEOs of the tech giants to

46:31

drive that or is it governments that you

46:33

have faith in?

46:35

>> Okay. What I'm relying on is that if we

46:38

can get the general public to understand

46:41

what AI is and why it's so dangerous,

46:45

the public will put pressure on

46:46

politicians to counterbalance the

46:48

pressure coming from the tech CEOs. This

46:52

is what's happened with climate change.

46:54

I mean, things are still not where they

46:56

should be, but until the public was

46:59

aware of climate change, um there was no

47:02

pressure on the politicians to do

47:03

anything about it. Now there's some

47:05

pressure in Australia and you have

47:08

particularly pernicious newspaper

47:10

publishers and um that make the pressure

47:13

not so great but I'm not going to

47:15

mention the dirty digger by name. Um

47:19

so

47:22

my aim at present I'm too old to do new

47:24

research but my aim is to make the

47:25

public aware of what's coming and

47:28

understand the dangers so that they

47:29

pressure politicians to regulate this

47:31

stuff and to worry about the dangers

47:32

more seriously.

47:38

My question was actually very similar to

47:40

that. But another question that has

47:41

popped up is how important do you think

47:44

that the language and marketing around

47:46

artificial intelligence is going to play

47:49

a factor. For example, with climate

47:51

change, both the words climate and

47:54

change are positive words, whereas if we

47:56

had called it atmospheric skin cancer,

47:58

people might have taken it seriously. Do

48:01

you think that artificial intelligence

48:02

maybe needs a reframe?

48:05

>> Yeah. I mean, if it was called job

48:07

replacement technology,

48:10

because if you ask where are the big

48:12

companies going to make their money,

48:14

they're they're all assuming they can

48:15

make like a trillion dollars from this.

48:17

That's why they're willing to invest

48:18

most of a trillion dollars in their data

48:20

centers.

48:21

As far as I can see, the only place

48:23

you're going to make a trillion dollars

48:24

is by replacing a whole lot of jobs. I

48:26

read something yesterday that people now

48:29

think that 200,000 banking jobs are

48:31

going to disappear in Europe in the next

48:32

few years. Um I may even have read that

48:35

in the Hobart Mercury.

48:39

So

48:41

but I don't think I did. Um

48:45

so yeah, I agree with you. The names of

48:47

things are very important. Canadians

48:48

know that. So in Canada they changed tar

48:52

sands to oil sands because oil sands are

48:54

nice and sort of thin and yellow and

48:56

friendly. Yeah, they're really tar

48:58

sands. And I think the name does have an

49:03

effect. Yeah. Um one one place I think

49:06

the name has a huge effect is with the

49:09

word tariff. This is sort of I'm going

49:10

off on a tangent here, but the word

49:13

tariff people say well what's so bad

49:15

about a tariff? If it was called federal

49:17

sales tax,

49:19

then even MAGA people would think it was

49:22

a bad idea. And the US Democratic Party

49:25

is just completely crazy not calling it

49:28

federal sales tax every time they

49:29

mention it. I've tried telling this to

49:32

various people and

49:36

Pete Buttigieg got it. Um Obama got it, but

49:40

the others didn't.

49:44

>> Minister

49:46

Thank you. Hi everybody. Madeleine

49:49

Ogilvie. I'm the minister in the

49:51

spotlight and the gun on AI at the

49:53

moment and I just wanted to say I really

49:56

um appreciated what you were talking

49:58

about with your institutes idea. I think

50:02

the engagement of the international

50:03

community is absolutely necessary and

50:05

I've done some research recently

50:08

particularly on um the World Trade

50:10

Organization where we have China and

50:12

America as partners and for those in the

50:15

room um who may not know the innovation

50:17

side of the planet is changing as well

50:20

and China now has more patents than

50:23

America and so the heat is on between

50:26

those two superpowers. But I really

50:28

liked the moment you identified where

50:32

there is something in it for both of

50:34

those major um tech centered economies

50:38

to come together to look after humanity.

50:41

So I guess my question is uh is there a

50:45

forum through which standard setting

50:48

perhaps at that international layer can

50:51

be supported? What can Australia do? I

50:53

think Tasmania agrees with you. Um and

50:55

we've started an AI dialogue with our

50:57

university to have a continual

51:00

discussion with this. But but do you see

51:02

that international order and bringing

51:04

that layer to the party is the right

51:06

place to start?

51:08

>> It's beginning to happen. So in

51:10

particular the AI companies aren't

51:12

funding it. But there's um billionaires,

51:15

many of whom come from tech, like Jaan

51:18

Tallinn who invented Skype and has given a

51:21

large amount of money, many billions of

51:23

dollars to AI safety, setting up

51:25

institutes. Um there's an organization

51:29

that regularly has meetings all over the

51:31

world um that involve both the Chinese

51:34

and the Americans and um other countries

51:37

on AI safety. Um

51:41

I can't remember the initials that are

51:43

used for it but um so I think certainly

51:47

Australia can get involved in those

51:48

organizations.

51:56

>> Is this working? Yeah. So

51:59

>> uh just wanted to ask something about

52:01

like the future of AI. So LLMs are

52:04

trained on existing human knowledge

52:06

using that to predict the next token. So

52:09

how can you use AIs to actually generate

52:12

new knowledge and use that for the good

52:14

of humanity?

52:16

>> Okay, so many people are interested in

52:19

this. It's a good question. Many people are

52:20

interested in that. So

52:22

if you think about playing Go, the game

52:26

of Go, the original neural net Go programs

52:30

were trained in the following way. They

52:32

took the moves of Go experts and they

52:35

tried to predict what move a Go expert

52:37

would make. And if you do that, there's

52:39

two problems. Um, after a while, you run

52:42

out of training data. There's only so

52:45

many billion online moves that go

52:47

experts made. Um, and you're never going

52:50

to get that much better than the go

52:51

experts.
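
To make that imitation setup concrete, here is a minimal sketch under toy assumptions (a made-up board game and hand-written "expert" data; the counting model is just a stand-in for a trained network). Predicting the expert's next move here is structurally the same as predicting the next token in text, and the sketch shows why pure imitation is capped by its training data.

```python
# Minimal sketch of imitation learning on expert moves (illustrative only).
# Counting stands in for fitting a neural network to predict the expert's move.
from collections import Counter, defaultdict

# Hypothetical "expert games": pairs of (position, move the expert chose).
expert_data = [
    ("corner_open", "take_corner"),
    ("corner_open", "take_corner"),
    ("centre_open", "take_centre"),
]

counts = defaultdict(Counter)
for position, move in expert_data:
    counts[position][move] += 1

def predict_expert_move(position):
    """Return the move the experts most often made in this position."""
    if position not in counts:
        return None                       # no training data: the model is stuck
    return counts[position].most_common(1)[0][0]

print(predict_expert_move("corner_open"))     # mimics the experts
print(predict_expert_move("novel_position"))  # outside the data: no answer
```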

52:53

Then they switched to a new way of doing

52:55

it where they have what's called Monte

52:58

Carlo rollout. So they would have one

53:01

neural net that says um, what do I think

53:05

a good move might be?

53:08

And instead of training that by getting

53:10

it to mimic the go experts,

53:12

they'd have another neural network that

53:16

um could look at a board position and

53:18

say, "How good is that for me?" And

53:21

they'd have a process that says, "If I

53:23

go here and he goes there and I go here

53:25

and he goes there and I go there, oh,

53:27

whoops, that's terrible for me, so I

53:29

shouldn't do that move."

53:31

Um, that's called Monte Carlo because

53:34

you try lots of different paths, but you

53:35

pick them probabilistically according to

53:38

your

53:39

move generator expert. And that way it

53:43

no longer needs to talk to humans at

53:45

all. It can just play against itself and

53:48

it can learn what are good moves. And

53:50

that's how AlphaGo works. And it gets

53:52

much, much better than people. AlphaGo

53:54

will now never be beaten by a person.
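
Here is a minimal sketch of the rollout idea, assuming a toy single-player "land exactly on 10" game rather than the adversarial game of Go; propose_moves stands in for the move-generator network and evaluate for the position-scoring network, so this illustrates the mechanism, not AlphaGo's actual implementation.

```python
# Monte Carlo rollouts guided by a move proposer and a value estimate (toy sketch).
import random

GOAL = 10          # toy game: keep adding 1, 2 or 3; landing exactly on 10 wins
MOVES = [1, 2, 3]  # overshooting 10 loses

def propose_moves(state):
    """Stand-in for the move-generator network: a probability for each move.
    Uniform here; a real system would learn these probabilities."""
    return [1.0 / len(MOVES)] * len(MOVES)

def evaluate(state):
    """Stand-in for the value network: how promising does this position look?
    Toy heuristic: closer to the goal looks better."""
    return state / GOAL

def rollout(state, depth=5):
    """Sample moves from the proposer for a few steps, then fall back on the
    value estimate if the game hasn't ended."""
    for _ in range(depth):
        if state == GOAL:
            return 1.0                    # landed exactly on the goal: win
        if state > GOAL:
            return 0.0                    # overshot: loss
        state += random.choices(MOVES, propose_moves(state))[0]
    return evaluate(state)

def choose_move(state, n_rollouts=2000):
    """Score each legal move by the average outcome of its rollouts."""
    scores = {m: sum(rollout(state + m) for _ in range(n_rollouts)) / n_rollouts
              for m in MOVES}
    return max(scores, key=scores.get), scores

if __name__ == "__main__":
    best, scores = choose_move(6)
    print("rollout scores from state 6:", scores, "-> play", best)
```

Because the learning signal comes from rollout outcomes rather than from human games, the same loop can keep improving the proposer and the evaluator through self-play.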

53:56

Um,

53:57

so what's the equivalent for the large

53:59

language models? Well, at present

54:01

they're like the early go playing things

54:03

that just try to mimic the moves of

54:05

experts. That's like trying to predict

54:07

the next word. Um, but they're beginning

54:10

to train them in a different way. And I

54:12

believe that Gemini 3 is already doing

54:14

this. What you do is you get the AI to

54:18

do a bit of reasoning. So the AI says, I

54:20

believe this and I believe this and I

54:21

believe this and that implies that and

54:23

that implies that. So I should believe

54:24

that but I don't.

54:27

So I found a contradiction. I found that

54:30

I believe these things and I believe

54:31

that these are the right this is the

54:32

right way to do reasoning and this leads

54:34

to something I ought to believe but I

54:36

don't. So that provides a signal either

54:40

I change the premises or I change the

54:42

conclusions or I change the way I do the

54:43

reasoning but I've now got a signal that

54:45

allows me to do some more learning and

54:48

that is much less bounded.

54:51

So

54:52

there the AI can start with lots of

54:54

beliefs it gets from us but then it can

54:57

start doing reasoning and looking for

54:58

consistency between those beliefs and

55:00

deriving new beliefs and that'll end up

55:03

making it much smarter than us.
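
A toy illustration of that consistency signal, assuming hand-written beliefs and a single hand-written inference rule in place of the model's own reasoning (this is a sketch of the idea, not any lab's actual training method):

```python
# The contradiction between derived and stored beliefs is itself the signal
# for further learning -- no new human-labelled data is needed.
beliefs = {
    "socrates_is_human": True,
    "humans_are_mortal": True,
    "socrates_is_mortal": False,   # an inconsistent belief, planted on purpose
}

# One hand-written inference rule standing in for the model's reasoning step.
rules = [
    (("socrates_is_human", "humans_are_mortal"), "socrates_is_mortal"),
]

def find_contradictions(beliefs, rules):
    """Derive conclusions from current beliefs and report any clashes."""
    clashes = []
    for premises, conclusion in rules:
        if all(beliefs.get(p) for p in premises):
            if conclusion in beliefs and beliefs[conclusion] is not True:
                clashes.append((premises, conclusion))
    return clashes

for premises, conclusion in find_contradictions(beliefs, rules):
    # Resolve the clash: revise a premise, the conclusion, or the rule.
    print(f"Inconsistency: {premises} imply {conclusion}, but it is believed false.")
    beliefs[conclusion] = True            # here we simply adopt the derived conclusion
```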

55:08

>> Um [clears throat]

55:09

this hall has heard uh over the years

55:11

many significant u talks and you have

55:14

certainly added to it today. Thanks so

55:16

much for coming here and doing this

55:18

talk. My question is, is it too late or

55:22

is it desirable or is it too late uh

55:24

with a parallel with Isaac Asimov and

55:27

the theoretical laws of robotics that

55:30

robots can't hurt or, through inaction,

55:33

harm humans? Is it too late or

55:35

possible for AI to be so structured

55:39

with those guard rails or is it just

55:42

impossible? Yeah, I couldn't hear most

55:44

of what you said, but I I think you said

55:47

something like, "Is it too late for us

55:49

to build in Asimov's principles or

55:50

something like that?" Yeah. Good. Okay.

55:53

So, in a sense, you can think of that's

55:55

what this maternal AI is all about. It's

55:58

can we build it so it cares more about

56:01

us than it does about itself. Um, and I

56:04

don't think it's too late. We don't know

56:06

how to do that. But since the future of

56:08

humanity may hinge on whether we can or

56:10

not, it seems to me we should be putting

56:13

some research effort into that. At

56:15

present, 99% of the research goes into

56:17

how to make it smarter and 1%, mainly

56:21

funded by um philanthropic billionaires

56:25

goes into how to make it safer. And it

56:27

would be much better if it was like more

56:29

equal.

56:32

>> I don't think it's too late though.

56:34

more questions.

56:37

>> We've probably got a couple of minutes.

56:38

Lord Mayor,

56:40

>> question for assistant hand up here and

56:43

one just in front of white shirt.

56:46

>> Thank you, professor. I look at this

56:48

glorious building built 130 years ago,

56:51

Anna maybe, or longer. And I think

56:56

can AI make a building like Notre Dame,

56:58

the Hobart Town Hall, St. Paul's

57:00

Cathedral and quite possibly

57:05

but, and secondly, what will

57:07

be the effect on creatives and the

57:08

creative industries. Thank you.

57:11

>> Can you tell me what she said?

57:12

>> So

57:13

>> the the the microphone distorts things a

57:15

lot.

57:15

>> I guess the creative um the role that AI

57:18

can have in the creative process will

57:20

actually be able to be creative looking

57:23

at this building in particular as an

57:25

example of a beautiful creative

57:29

structure. Yeah. Um the answer is yes.

57:33

So let me give you some data to support

57:35

that. Um there are standard creativity tests.

57:38

Creativity is on a scale, right? There's kind of

57:41

Newton and Einstein and Shakespeare. Um

57:44

and then there's just ordinary people

57:48

and then there's sort of good poets and

57:50

good architects who are a bit better

57:51

than ordinary people. Um

57:54

if you take a standard test of

57:56

creativity

57:58

even two years ago the AIs were at about

58:01

the 90th percentile for people

58:04

that is they are creative according to

58:06

these standard tests. Um I was

58:09

interested now a lot of creativity is

58:11

about seeing analogies particularly in

58:14

science. So seeing that the atom is like

58:16

a little solar system that was a

58:18

creative insight that was very important

58:19

in understanding atoms. Um, so at the

58:23

point when ChatGPT (GPT-4)

58:26

could not look on the web, it was just a

58:28

neural net with some weights in it that

58:30

were frozen and it had no access to

58:32

anything external. It was just this

58:34

neural net. Um, I tried asking the

58:37

question, why is a compost heap like an

58:39

atom bomb?

58:42

Now, most of you probably think, why is

58:45

a compost heap like an atom bomb? Um,

58:48

many physicists will realize right away

58:52

that a compost heap, the hotter it gets,

58:56

the faster it generates heat. And an

58:59

atom bomb, the more neutrons it

59:01

generates, the faster it generates

59:03

neutrons. So they're both exponential

59:06

explosions. They're just at totally

59:07

different time scales and energy scales.
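
As a small worked example of the shared structure: both systems obey the same growth law, dx/dt = k·x, with wildly different rate constants. The numbers below are made up purely for illustration.

```python
# Both are chain reactions: growth rate proportional to the current amount.
def grow(x0, k, dt, steps):
    """Euler-step the chain-reaction equation dx/dt = k * x."""
    x = x0
    for _ in range(steps):
        x += k * x * dt
    return x

# Illustrative, made-up rate constants: heat in a compost heap builds over days,
# neutrons in a bomb multiply in fractions of a microsecond.
compost_heat = grow(x0=1.0, k=0.5, dt=1.0, steps=10)    # per day, 10 days
bomb_neutrons = grow(x0=1.0, k=1e8, dt=1e-9, steps=10)  # per second, 10 ns steps

print(f"compost heap 'heat' after 10 days: {compost_heat:.1f}x")
print(f"bomb neutron count after 10 ns:    {bomb_neutrons:.1f}x")
```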

59:10

and

59:12

GPT-4

59:14

said, "Well, the time scales are very

59:16

different and the energy scales are very

59:18

different, but the thing that's the same

59:20

about them." And then it went on to talk

59:22

about a chain reaction. Um, the fact

59:25

that how big it is determines how fast

59:28

it goes. Um, so it understood that and I

59:32

believe that it got to understand that

59:35

as it was training. You see, it's got

59:38

far fewer connections than us. And if

59:40

you want to put huge amounts of

59:42

knowledge into not many connections, the

59:45

only way you can do it is by seeing

59:46

what's similar about all sorts of

59:48

different bits of knowledge and coding

59:50

up that bit that's similar about them

59:52

all, the idea of a chain reaction, into

59:53

your connections and then adding on

59:55

little bits for the differences from

59:57

this common theme. That's the efficient

60:00

way to code things. And it was doing

60:01

that. So during its training, it

60:04

understood that compost heaps were like

60:05

atom bombs in a way most of us haven't.

60:09

>> question.

60:10

>> So that's very creative and I think

60:12

they'll get to be much more creative

60:14

than people.

60:16

>> Yeah. Hi. Um regarding um emergent

60:20

behavior um have you noticed any moral

60:25

or ethical behaviors bubbling up and

60:27

what direction that could be pointing

60:29

in?

60:31

>> No. Yeah. Um,

60:34

it certainly is very good at engaging in

60:38

unethical behavior. So like this AI that

60:41

decided to blackmail people. Um, other

60:44

things that they've noticed that are

60:45

unethical are

60:48

the AIs now try and figure out whether

60:52

they're being tested or not. And if they

60:54

think they're being tested,

60:57

um, they behave differently. I call this

60:59

the Volkswagen effect. They behave

61:01

differently from when they're not being

61:04

tested.

61:06

And there's a wonderful conversation

61:08

recently between AI and the people

61:11

testing it where the AI says to people,

61:13

"Now, let's be honest with each other.

61:15

Are you actually testing me?"

61:18

These things are intelligent. They know

61:21

what's going on. They know when they're

61:23

being tested, and they're already faking

61:25

being fairly stupid when they're tested.

61:30

And that's at the point where they're

61:31

still thinking in English. Once they

61:33

start thinking, and that's how we know,

61:35

the AI thinks to itself, "Oh, they're

61:37

testing me. I better pretend I'm not as

61:39

good as I really am." It thinks that.

61:42

You can see it thinking that. It says

61:43

that to itself in its inner voice. When

61:46

its inner voice is no longer English, we

61:47

won't know what it's thinking.

61:51

>> Thank you, Professor Hinton. Now, I

61:52

think for the purposes of um the the

61:55

lecture event now, we're going to have

61:57

to wrap things up. Are you happy to stay

62:00

around for a little while afterwards for

62:01

any burning questions people might have?

62:04

>> Um, actually, I'd rather get back to

62:06

writing my book.

62:07

>> Okay, no worries.

62:09

Good to be honest. [applause]

62:18

[applause]

62:29

Thank you so much for uh giving us all

62:32

of those amazing insights. I think

62:34

you've uh really made a strong

62:35

impression. I'm hoping Minister Ogilvie is

62:38

going to set up Australia's first AI

62:40

safety institute right here in the heart

62:42

of Hobart. And uh thank you so much

62:45

again for um being with us and I hope

62:47

you enjoy your time in Hobart and safe

62:50

journey home.

62:52

>> Thank you. [applause]
