Transcript · English

MIT 6.S087: Foundation Models & Generative AI. ETHICS

55m 44s · 10,622 words · 1,517 segments · English

Full transcript

0:01

all right let's get

0:03

started so uh today should be very fun

0:07

because we're going to talk about ethics

0:08

and regulations so first I'll provide a

0:11

kind of a very high-level lecture covering

0:15

a lot and then we're going to have a

0:16

panel where Manolis comes back and

0:20

you get a chance to ask a lot of

0:21

questions we can discuss this uh more in

0:25

detail right so I mean there's

0:27

little doubt that Foundation models and

0:29

generative AI are becoming very impactful

0:31

as a technology right so then the

0:33

question is how do we develop this

0:35

safely and responsibly and what kind of

0:38

AI do we want and what can we affect

0:40

and also should we regulate it and how can

0:42

we balance innovation and regulation is

0:45

what we kind of try to uh cover today so

0:48

I'm actually going to open up by asking

0:50

you a question who would you blame if an

0:54

AI hurt

0:56

you I think this is a key

0:58

question and so there's a lot of

1:01

different

1:02

alternatives maybe you want to right

1:06

let's say ChatGPT does something should you

1:07

blame the company that developed it open

1:10

AI or is it perhaps institutions like

1:13

government institutions, agencies, that are

1:15

supposed to regulate things and protect

1:16

us is it maybe you know the AI itself is

1:21

an entity or a person that is responsible

1:23

like you know we see AI as an

1:25

individual with its own aims and

1:27

desires is it the data that it's been

1:29

trained on

1:30

is that the problem or is it maybe you

1:33

know the best kind of tool that we have

1:35

is the people behind it right the

1:36

stakeholders behind this AI

1:40

um and I think

1:45

anthropomorphizing

1:47

AI is very dangerous

1:49

because it makes us feel that AI has

1:52

agency and responsibility and it can

1:54

allow stakeholders to hide behind AI and

1:58

relieve themselves of responsibilities

2:00

uh but at the end of the day still right

2:03

even this type of AI that we're seeing

2:04

right now that's very powerful there's

2:06

always people behind the AI that are

2:09

benefiting from it right and that's the

2:11

ones that somehow probably

2:13

should be held responsible I think

2:15

that's a very useful perspective

2:17

because they're motivated and

2:18

they have something to gain or lose so

2:20

you can try to understand what AI is

2:21

doing by looking at the stakeholders

2:23

behind it so right if you ask ChatGPT itself

2:28

or GPT-4 to create an image of itself

2:30

you get this you know person-like entity

2:34

that media also likes to portray which I

2:36

think is a dangerous and false

2:37

representation that allows uh the

2:40

stakeholders behind them to hide behind

2:42

the problem of the AI rather than the

2:44

problem of the stakeholders right that

2:46

want to build it and benefit from

2:49

it okay so let's take ChatGPT

2:52

for example and OpenAI what are the

2:53

stakeholders well you know there are the

2:55

people behind the company, the

2:58

leadership, the management, etc., right, we have

3:00

my picture here then we also know that

3:03

Microsoft has bought a 50% stake in

3:05

openai so then suddenly they're also kind

3:07

of stakeholders and have a lot of

3:08

motivation here uh and then on top of

3:11

that right suddenly the CEO of OpenAI is

3:14

pushed out and then he returns

3:17

right so this you know when we

3:18

talk about transparency and responsibility

3:20

in terms of AI models there's a huge

3:22

lack of transparency in the stakeholders

3:25

behind these AI models and I think

3:28

it's kind of a you know great challenge

3:31

and kind of hypocritical that these

3:33

people are supposed to build transparent

3:35

AI and that's what we care about when

3:36

the organization is completely you know

3:39

non-transparent and they can't keep

3:41

their things together right and that

3:42

should make us worry this is something

3:44

that's not related to AI at all but the

3:46

problem already starts there right when

3:47

you try to establish who the

3:51

stakeholders are okay so

3:54

um right so we're going to start off

3:56

covering a lot of different topics

3:59

basically the first part will be you

4:01

know jumping between topics of AI and

4:04

ethics and then we're going to say like

4:06

well these

4:07

are complicated issues and then we're

4:08

going to see how government and

4:10

institutions try to regulate and address

4:12

this issue and then we're going to talk

4:13

about potential future

4:16

directions and I think also I don't want

4:18

to like pretend there's a mainstream

4:21

perspective on these issues right these

4:23

are very nuanced and complicated I don't want

4:25

to hide behind some you know

4:28

mainstream or say that these are you

4:31

know solved problems and there are

4:33

clear answers so just to be completely

4:36

you know transparent uh this is you know

4:40

I'm sharing these uh thoughts and

4:42

perspectives and really it's just to

4:45

get you guys to think about these

4:47

problems also just make you ask more

4:50

questions really uh so I'm going to

4:52

cover a lot of different things and jump

4:54

back and forth not going to cover

4:55

everything just going to try to make you

4:57

guys think and and be excited and try to

5:01

make up your own opinion right I don't

5:02

want to lecture you and tell you what's good or

5:04

bad and moral that's up for you I

5:06

guess to decide I want to provide you

5:07

with certain aspects and problems that

5:09

we're seeing in

5:11

AI okay so before we jump in I just want

5:15

to emphasize certain perspectives so one

5:19

perspective I think is important is that

5:20

is AI categorically different from other

5:23

technologies or other problem areas that

5:25

we've been seeing in our history right

5:28

maybe AI is just part of a type of

5:31

development or evolution we've been

5:32

seeing for a long time maybe AI is just

5:34

you know part of a continuum rather than

5:36

something that's very different and

5:38

maybe AI is not the problem in this

5:40

trajectory maybe we went wrong somewhere

5:42

earlier maybe when we started using

5:44

electricity maybe that's where we really

5:46

went off right and did something bad so

5:48

also like is AI really that different

5:50

from what we've seen before I think it's an

5:52

important perspective and

5:54

actually is this not where the real

5:56

challenge is, that it's not AI but something else

6:00

okay another important point I think is

6:01

that ethics is a lot about how things

6:04

should be how the world should work

6:06

right philosophy and ethics love to talk

6:08

about the shoulds I think it's a big

6:10

difference between how the world should

6:12

work and how the world

6:14

actually works right so for

6:16

me I think there's less utility in building

6:17

up a utopia of how we would like the

6:20

world to work I think it's more

6:21

interesting to talk about how we think the

6:23

world will work uh and you know realize

6:26

there's a difference

6:27

here okay and last well I mean so these

6:31

are you know real problems that are uh

6:35

high-stakes problems right and now

6:38

there are definitely you know people using these

6:40

technologies to boost their nation's

6:43

military security and it's becoming an

6:45

arms race and I think it's also you know

6:47

that makes the stakes higher and certain

6:50

things become like a necessity to

6:51

protect your nation and typically when

6:54

there's an arms race there's you

6:56

know much less "should" but a

6:58

lot of competition

7:00

uh and things move very very quickly um

7:03

yeah so I think that's also good to

7:06

keep in mind that these things are not

7:07

just affecting ourselves but they're

7:09

affecting this you know bigger picture

7:11

uh right everything is fair in love

7:13

and war and I think that's

7:15

historically quite

7:18

true okay so we're going to start off

7:21

walking through some different threats

7:23

that we see from

7:24

AI and well there are a lot of them right

7:27

we have misinformation and manipulation

7:29

we have deep fakes we have privacy bias

7:32

and discrimination lack of transparency

7:34

and accountability centralization job

7:37

displacement unnatural living conditions

7:39

and the unintended unknown so let's get

