
MIT 6.S087: Foundation Models & Generative AI. PANEL


FULL TRANSCRIPT

0:04
I'm going to ask each of you a more targeted question at first, and you should also think about what you want to ask our panelists today. So, Professor, first question to you. Rard touched on this topic, and you are an expert in computational biology, probably exposed a lot to evolution and its mechanics, how it worked. I want to get your opinion, your perspective, on the dilemma that exists right now in terms of centralization versus decentralization, in terms of alignment versus more risk and diversity. So let me pass this to you.

0:46
Perfect.

Yeah, in terms of alignment versus more risks and diversity, specifically meaning that, well, we as humans are very diverse. We have diverse cultures: you lived in Greece and France, Rard lived in Sweden, I lived in Ukraine. We were bred in different environments, and evolution might have been pushed by our differences. But now we have a very defragmented AI, defragmented organizations, a defragmented society in terms of who's pushing AI forward. They are implementing their own AI alignment systems; they're reducing the diversity, but potentially also reducing biases and stereotypes that have already existed in society. So we kind of have a dilemma between high risks but more opportunity for innovation, or lower risks and lower opportunity for innovation. What's your perspective on that, coming from biology, evolution, and things around that?

1:51

1:51
Beautiful, fantastic question: extremely rich, extremely deep, broad-reaching, etc. So let me start with biology a little bit. Basically, as you mentioned, humans are forced to be diverse; we don't have a choice. We basically have genetic variation that modifies every aspect of our brain, of our body, of our behavior, of our inclinations, and so forth. I have three children; they are completely different from each other, and they were completely different when they first came out, and they're still completely different now. And as much as we as parents would love to think that nurture matters a lot, it's only about 50%, and the other 50% is just nature, and there's very little you can do about that. That's, I think, part of the beauty of humanity: the fact that, whether we like it or not, we're all programmed to actually think differently, to interpret things differently, etc. And that's just the nurture component... sorry, that's just the nature component. The nurture component also gives us extraordinary diversity in where we grew up and the things that we saw as cultural references at different points in our lives: as you mentioned, different cultures, different families. Even on the same street block, you can have kids growing up with completely different perspectives on life. And I think that's what makes MIT work, that's what makes any team work: the fact that we think differently and we can bounce ideas off each other, with mutual respect but also completely different perspectives, and that shapes the ideas very interestingly.

3:30
So I think one way to achieve that with AI, even with a single underlying large language model, is to instill different personalities in a set of agents that are interacting together in the same system, so that forces the agents to actually process ideas in different ways. If you want the most creative solutions, you don't want a single AI that's going to give some average; you want a lot of different AIs that are going to be bouncing off each other, each with its own personality. And you can encode that; you can give them personalities. You can basically say, you know, "You are a professor who grew up in Iran and who has these kinds of backgrounds; you are a waiter who grew up in, I don't know, Scandinavia and has this background," etc. And then, based on these personalities, you can build a life story and a set of attributes for each of the agents, and then push them towards more creativity.
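[Editor's note] The persona-agent recipe described above (one shared model, different persona system prompts, agents bouncing ideas off each other) can be sketched in a few lines. This is a hypothetical illustration, not any specific framework: `fake_generate` stands in for a call to a real LLM, and the names, persona strings, and function signatures are assumptions made for the sketch.

```python
# One shared "model" (generate function), many personas: each agent is the
# same underlying model steered by a different persona prompt, and the
# agents respond in rounds while seeing what the others have said.

from dataclasses import dataclass

@dataclass
class PersonaAgent:
    name: str
    system_prompt: str  # the persona's "life story and attributes"

    def respond(self, topic, transcript, generate):
        # Only the persona prompt differs between agents; the underlying
        # generate() call is shared, which forces different framings of
        # the same topic rather than one averaged answer.
        prompt = (f"{self.system_prompt}\n"
                  f"Topic: {topic}\n"
                  f"Discussion so far: {transcript}")
        return f"{self.name}: {generate(prompt)}"

def brainstorm(topic, agents, generate, rounds=2):
    """Round-robin discussion: each agent sees the running transcript."""
    transcript = []
    for _ in range(rounds):
        for agent in agents:
            transcript.append(agent.respond(topic, transcript, generate))
    return transcript

# Deterministic stub standing in for a real LLM call, so the sketch runs.
def fake_generate(prompt):
    return f"idea shaped by persona ({len(prompt)} chars of context)"

agents = [
    PersonaAgent("A", "You are a professor who grew up in Iran."),
    PersonaAgent("B", "You are a waiter who grew up in Scandinavia."),
]
log = brainstorm("reducing bias in AI systems", agents, fake_generate)
for line in log:
    print(line)
```

With a real chat-completion backend in place of `fake_generate`, the persona string would go into the system message and the transcript into the user messages; the round-robin structure is what makes the agents "bounce off each other."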

4:26
In terms of bias: we all worry so much that AI will be biased, but I have to say that humans have a terrible track record on bias. We are horrible when it comes to bias, and I see AI as a hope for being able to not just debias but anti-bias our thoughts: to be able to artificially tag on different biases with different attributes, to push us out of our comfort zone in terms of expectations, and to have the AI push itself out of its comfort zone. So you can basically create, again, these personalities with very different stereotypes, and with mismatches of these stereotypes, and have the AI interact with those and actually learn how to uncouple those biases. So that's on the bias, and a little bit on the diversity.

5:15
And in terms of the centralization: again, I think the scenario of Skynet in Terminator is exactly one of centralization; it's basically us versus the AI. And I think that the forces of the market are such that as centralization happens in one direction, you will have forces pushing against it in the other direction. There are laws against monopoly; there are antitrust laws that are going to kick in if we see that, in fact, centralization is pushing too far. And I think that's healthy; I think the forces of the market are healthy. I think the best way to combat the Skynet scenario of the AI apocalypse is not to pause AI; it's to double down, to expand out, to democratize, and to provide opportunities for many others to build on the same architectures, on the same hardware, on the same software, and sometimes even on OpenAI, to basically create diverse agents on top of it. And that's what we saw a few months ago with the ChatGPTs: the fact that everyone can program their own AI, and even if there's an underlying architecture, you can still have diversity in the utilizations and in the outcomes.

6:36
Okay, so thank you, that's very interesting. I do think that saying you have one single big AI that you incorporate different personalities into sounds, if you take the biology and evolution analogy, like all of humankind sharing a single brain that would just be prompted differently. And that raises the question: why doesn't humanity have a single brain? Because it would be very fragile; if it screws up, we're all screwed. And also, I think, if you have such a big thing, even if it's less biased, it's going to be biased systemically in exactly the same way for all of those users, right? Whereas a human being is very biased, exactly, but differently so, which I think is much more in line with nature and evolution, which I think are great guiding stars. So, since you work with this, I also wonder about the last thing you pointed at, that we should just push through, right? What can we learn in terms of innovation and change from nature? Well, most change is bad, and the way we understand that is by the passing of time. If you push things very, very quickly, how will systems like evolution understand which bad innovation is going to kill us and which is not, if you don't give them enough time to see the effects? Does that make sense?

7:56
No, absolutely, these are great ideas. So basically, on the first comment, of the single brain with many, many personalities: even if you have a single giant LLM, it has 5 billion parameters; if you look at the human brain, the way that thought happens, it's
