TRANSCRIPT (English)

AI and Education: What's Changing, What Kids Must Learn, and the Path Forward | SXSW

54m 7s · 9,566 words · 1,381 segments · English

FULL TRANSCRIPT

0:00

I think we need to move away from

0:01

preparing kids for jobs because jobs are

0:04

going to change. The way students are

0:06

taught and what they're taught to

0:08

optimize for is very much at odds with where

0:11

we're at right now with AI. The safest

0:14

assumption we have to make in this

0:16

moment is that kids are going to be

0:18

using artificial intelligence at home.

0:20

So, the classroom really has to be a

0:22

place where the deep learning is

0:23

happening and where we're raising the

0:25

bar on knowledge. When you outsource

0:27

your cognitive work to an AI, you

0:30

actually become cognitively weaker. This

0:32

is potentially really detrimental to

0:34

society more broadly that we become so

0:36

reliant on these technologies, we stop

0:39

believing in our own ability to make

0:41

decisions. In the age of advanced

0:43

technologies, kids need to read more and

0:46

we'll have to make school harder. The

0:48

most important skills for the future

0:50

have nothing to do with technology.

0:55

What should kids be studying and

0:57

learning today for a future that's going

0:58

to be transformed by artificial

1:00

intelligence? And how does the

1:02

institution of education need to evolve

1:04

to meet the moment? In the spirit of

1:06

South by Southwest, I'm going to share a

1:07

talk that I delivered on this topic

1:09

where I go into detail on some of these

1:12

ideas from where we can expect this

1:14

technology to take us to the skills kids

1:16

in school today need to be cultivating

1:18

and the systemwide change the

1:21

institution of education is going to

1:22

need to undergo to prepare kids properly

1:24

for this world. I'm Sinead Bovell and

1:27

this is I've Got Questions. Um, super

1:30

delighted to be moderating this

1:32

conversation with the fabulous Sinead

1:34

Bovell. Sinead, tell us what it is to be

1:37

a strategic foresight adviser and your

1:40

lens on AI and the future of education.

1:43

>> Yeah. Um, you know, education is the

1:46

bedrock for a healthy democracy and for

1:50

a functioning society. uh and it's not

1:53

just essential for things like economic

1:56

mobility and economic security but

1:59

fairness as well uh and for well-being.

2:02

I believe there's no such thing as a

2:04

state that overinvests in children's

2:08

future. An investment in children is an

2:10

investment in national interest. Right?

2:13

So, you want to foster uh an informed

2:16

and adaptive citizenry that can not just

2:19

safeguard the future, but thrive in it.

2:22

Especially a future that's going to be

2:23

as complex as the one children in school

2:26

today are entering into, which will be

2:28

shaped by quantum computing, genetic

2:30

engineering, artificial intelligence,

2:32

commuting back and forth to space. This

2:35

is an incredibly complex world they'll

2:37

be entering into. Uh so the more that

2:39

they can understand it, the more we can

2:41

support them in that journey, the better

2:43

equipped they are. And that's an

2:44

investment in a country's economic

2:46

security, in our collective health and

2:48

well-being, and our overall security and

2:50

national security interests. Goodness.

2:52

So couldn't be a more pressing topic.

2:56

Uh and before we dive into um some of

2:59

the kind of finer points, where would

3:01

you say taking a step back like where

3:04

are we at this moment with AI?

3:09

So if I were to say where we are, I mean

3:11

it's very very early. Maybe it's 1992.

3:15

The internet has dropped. Companies are

3:17

experimenting. We know it's maybe going

3:19

to be a big deal, but there's still

3:21

a lot of doubt. Some people are also

3:23

kind of playing around on it. But we

3:26

have yet to fully comprehend the way it

3:28

is going to fundamentally transform our

3:31

world. The Googles, the Apples, the

3:34

Amazons of the future, they have yet to

3:36

be invented, but they're coming. And

3:39

artificial intelligence, it's also a

3:40

general purpose technology. So, similar

3:43

to something like electricity. Think of

3:46

how pervasive electricity is, we don't

3:48

even think about it at all. It's so

3:50

foundational that it's moved into the

3:52

background. So, we will soon be

3:54

streaming artificial intelligence the

3:57

way we stream electricity. That is going

3:59

to be a fundamentally different society

4:02

to live in. Uh and these general purpose

4:04

technologies they take time to get so

4:07

entrenched in society. But you know when

4:10

it's reached that point because when

4:12

people can't access general purpose

4:14

technologies whether that's at a country

4:16

level or in certain neighborhoods, we deem

4:18

that wildly unethical. Who doesn't get

4:21

fair access to the internet? Who doesn't

4:23

get access to electricity? That is the

4:25

path artificial intelligence is on.

4:28

So if artificial intelligence is going

4:30

to be this general purpose technology

4:32

and fade into the background,

4:35

what does AI and education

4:38

look like now? Like how should educators

4:41

be considering AI and education given

4:44

that it will be in the background but

4:46

today we're at this very early phase.

4:49

Yes. So I think that there are kind of

4:51

three pillars that are related but

4:56

distinct in terms of how we should be

4:58

thinking about AI in education. The

5:01

first pillar is safe adoption for kids

5:05

and for learners. So this means

5:07

equipping kids with the tools to

5:10

navigate artificial intelligence because

5:12

they're going to be on these tools at

5:14

home regardless. They have

5:15

supercomputers in their pockets,

5:17

supercomputers on their iPads. So giving

5:20

uh kids the skills to utilize these

5:22

tools safely. So that's conversations

5:24

like AI isn't your friend, right? Your

5:26

chatbot isn't something that you tell

5:28

secrets to. This is what we do or don't

5:30

share with artificial intelligence. And

5:32

this is also how you ask it good

5:34

questions and validate its answers. So

5:36

that's pillar one. Pillar two is how do

5:40

we more urgently adjust what we are

5:43

teaching in school or just the formula

5:46

for what happens in the classroom versus

5:48

what happens at home knowing that kids

5:50

are going to be leaning into these

5:52

technologies at home to do homework and

5:55

to complete assignments. The third

5:57

pillar and this is where I think we are

5:59

rushing into but this is actually the

6:01

long-term game is how do we

6:03

fundamentally redesign the entire system

6:07

of education for the age of artificial

6:10

intelligence. But what seems to be

6:12

happening in this moment is we are kind

6:14

of merging all of those pillars in a

6:17

sense of urgency. And this leads us to

6:20

deploy AI in schools for the sake of

6:23

feeling like we need to meet the moment

6:25

by bringing AI into the classroom. And

6:27

there are a lot of technologies that

6:29

aren't ready. So I think we focus on

6:32

pillar one, giving kids the tools to use

6:34

these tools safely if they're going to

6:36

be using them on their phones. We

6:37

slightly adjust what we're teaching to

6:39

account for cheating in homework. But

6:41

it's much more at the departments of

6:43

education, the ministries of education

6:45

level to take this longer term lens and

6:47

fundamentally redesign school for

6:50

the age of artificial intelligence. So I

6:54

think we've heard this week a number of

6:56

different ways that educators are

6:57

experimenting with AI, different pilots,

7:01

different ways of going about it. Like

7:03

is now the time? It sounds like we

7:06

should be teaching it and educating

7:08

about it. Is now the time to be

7:11

experimenting with it in deeper ways.

7:14

>> Yeah. Yeah. So, you know, teaching it:

7:17

AI is more of a hard skill. So,

7:19

that should be happening. Uh

7:21

experimenting with it. Yes. We need to

7:23

be running these pilots. We need to be

7:25

gathering the data as to what's working

7:26

and what's not. But it has to be in very

7:29

very intentional ways. Um and not just

7:32

assuming that we can just throw in an AI

7:34

tutor somewhere arbitrarily and that's

7:36

going to be sufficient. um and making

7:39

sure that we're not running social

7:41

experiments that jeopardize learning

7:43

outcomes uh for the sake of just feeling

7:46

like we need to quickly meet the moment.

7:48

Uh so yes, I think that these

7:49

experiments, they're vital. They should

7:50

be happening uh but they need to be very

7:52

very intentional uh and very very

7:54

controlled.

7:56

I know that you're working with

7:58

Fortune 50 companies in this space and

8:00

advising them on how to navigate AI and

8:04

education. what are some of the data

8:07

points and the advice that you've

8:09

been giving them?

8:10

>> Yeah, so it's been really really

8:12

interesting to look at some of the data

8:14

that's coming through and of course

8:16

we're still very, very early in

8:18

the age of using artificial intelligence

8:20

in education. Uh but there's one clear

8:22

trend that stands out. So, I'm going to

8:24

walk us through a study that I find

8:26

particularly helpful. And this was done

8:28

uh by the Wharton School at the

8:30

University of Pennsylvania, together with

8:33

the Budapest British International

8:36

School and it implemented artificial

8:38

intelligence in math classes. Uh so

8:41

there were a few math classes in the

8:43

high school and they broke the class up

8:45

into three groups. The control group

8:47

which is just your traditional doing

8:48

homework problems with your textbook. uh

8:51

the GPT base group and this is the

8:53

students that got uninhibited access to

8:56

artificial intelligence. Uh and then the

8:58

GPT tutor group. So these are students

9:00

that got access to an AI that has been

9:02

designed to just guide them through

9:04

problems, give them hints but not the

9:07

answers. All students got the base

9:09

lesson for math together and then they

9:12

broke out into their respective groups

9:14

and their respective tiers of AI access

9:16

or not. So the study showed that when it

9:19

came to the practice problems, the

9:21

children that got uninhibited access to

9:23

AI did 48% better on the practice

9:26

problems than the control group. The

9:28

students that got access to the GPT

9:30

tutor did 127% better than the control

9:34

group. But when it came time to

9:37

actually test students without access to

9:39

AI and do the final post-unit test, the

9:43

kids that got the uninhibited access

9:45

performed 17% worse. So AI harmed the

9:49

learning outcome. And the children that

9:50

got access to the AI tutor performed at

9:53

the same level as the children that

9:56

didn't get access to any artificial

9:57

intelligence. And so the conclusion of

10:00

the study was that generative AI harms

10:02

learning outcomes. But then there was a

10:05

second study that happened at Harvard.

10:07

And of course we have to control for the

10:09

fact that self-directed learning is a

10:11

little bit different at a university

10:12

level. Um and clearly if you're getting

10:14

into Harvard, there's also some kind of

10:16

higher order thinking that you're able

10:17

to do. Um but that aside, it was a

10:20

physics class and they broke the physics

10:22

class into two groups. The control

10:24

group, which is the students that went

10:26

to the traditional lecture with

10:28

the professor. Uh then they broke out

10:30

into peer groups and worked with

10:32

one another and had

10:33

instructor-led guidance on solving

10:35

problems. The second group had no

10:38

in-class lessons at all. The entire

10:41

process was done with AI, but they

10:44

specifically designed the AI to be

10:46

self-paced, going with the student's

10:49

needs to provide immediate feedback

10:51

whether the student was on the right

10:53

track or off track

10:55

while they're doing the problems to

10:57

provide motivation and to really take

10:59

all learning best practices and

11:01

implement it into that system and

11:03

continue to adapt how it tested the

11:04

child based on how they were

11:06

evolving and doing the problems. When

11:09

they did the general test after those

11:12

two experiments, the kids that went the

11:14

pathway of artificial intelligence

11:16

performed twice as well as the peers

11:19

that didn't get access to AI and they

11:21

were more motivated and more engaged.

11:24

And so what we can learn from just those

11:27

two kind of isolated studies is that you

11:30

have to adapt the entire ecosystem,

11:34

right? It's akin to inventing

11:36

electricity but only swapping out where

11:39

the steam engine was and putting a light

11:42

switch there. Not

11:44

building out the entire assembly line

11:46

and rethinking how we design the

11:48

system. That's step one. Step two,

11:52

immediate feedback is absolutely vital

11:56

in AI learning outcomes. Gone are the

11:59

days if we're going to incorporate

12:01

artificial intelligence where we wait

12:03

for the unit test or midterms or the end

12:06

of-year exam to see where students are.

12:08

Artificial intelligence needs to be able

12:11

to extract the data in

12:12

real time. This is how somebody's

12:14

adapting or this is how they're falling

12:16

behind. And AI needs to provide that

12:18

feedback, or else we lose

12:20

visibility into how well things are

12:21

happening. Self-paced learning is also

12:24

vital. So if you go back to the high

12:26

school, everybody had an hour and a half

12:29

to learn the math problem. So whether

12:30

you were working with your textbook or

12:32

working with AI, that hurt people that

12:34

were doing the AI method and it helped

12:36

people doing the traditional method. So

12:38

kids need to learn at their own pace. Uh

12:40

and the system needs to be able to adapt

12:42

in real time. So those are just a few of

12:44

the key takeaways uh when we think about

12:47

AI and education. Um, but that's why

12:49

this is a longer term redesign and that

12:52

redesign should not fall on teachers.

12:56

Uh, and that's one thing that I think we

12:57

need to really make clear that this

13:00

moment shouldn't fall on teachers. They

13:02

already have way too much on their

13:04

plate. Uh, this is something for

13:06

government departments, department

13:08

heads, and those designing curricula more

13:10

broadly and I think we're getting that

13:13

wrong. Right. Yes.

13:18

Um I had um the opportunity to sit on a

13:20

few panels yesterday and there was

13:21

definitely um you know a very

13:23

understandably irate educator who was like, we

13:26

had COVID, we're in an

13:28

underprivileged area we've got so many

13:30

pressures and now we have AI to learn

13:33

and we have to do this two-day course

13:35

and then the pilot ends and then we've

13:37

got to go and do another two-day course

13:38

and AI is the last thing that we need.

13:41

Um, so it sounds like there is a right

13:43

way to do it and there are definitely

13:45

ways um that can uh be harmful and some

13:50

of those right ways involve a lot of

13:53

structural redesign of how

13:58

students actually engage with AI.

13:59

>> Yeah, this is like the institution of

14:01

education. We need to approach

14:03

differently. Uh it's akin to asking the

14:05

accountant to redesign the concrete

14:08

and the bricks. uh that's not what we

14:10

should be doing. And I know

14:12

we had talked about a school

14:14

that you had been tracking uh so I think

14:16

it could be helpful to share some of the

14:17

insights there

14:18

>> yeah absolutely um people familiar with

14:21

Alpha School in the room? A few people. Um,

14:24

what it is, it's a 2-hour

14:27

learning process where like all the hard

14:29

skills all the knowledge that you need

14:30

to learn at school happens in a 2-hour

14:32

period with an AI tutor and the

14:36

experience is entirely personalized and

14:39

adapted to where that student is at. So

14:41

if you walk around uh the classroom,

14:44

you'll see different students working on

14:47

completely different math problems,

14:49

let's say, and as uh it's understood

14:53

what that student is interested in, the

14:55

math problems become kind of

14:57

contextualized within topics that they

14:59

love. Um, and critically, and again

15:05

sort of going back to one of the um best

15:06

practices or you know um mandatories in

15:09

AI being successful um in schools

15:13

there's real-time feedback

15:16

on the performance of that

15:18

child. They can see their own

15:19

performance and actually start to own

15:22

that journey for themselves and uh they

15:26

get everyone into the 99th percentile no

15:29

matter where they've started

15:31

in their journey. So I think that's a

15:33

fascinating way um to look at it and I

15:35

think that uh besides those two hours so

15:38

the point of getting you know all of

15:40

that work done in those two hours and

15:42

some

15:44

students are able to accomplish

15:46

double in those two hours and some on

15:48

the higher performing end five times

15:51

more. Um, but the critical part is

15:53

freeing those students to focus on life

15:57

skills, on EQ skills, on kind of

16:00

developing their own human ingenuity.

16:03

And so I feel like that is, you know,

16:05

from all the research that you've

16:07

discussed, just an amazing example of of

16:09

what's happening today.

16:10

>> Yeah.

16:10

>> Yeah. Totally. Uh, and and that's kind

16:12

of the moment we're in, right? We're in

16:14

pilots, in experimentation, uh, and

16:17

innovation. Uh, really a a redesign,

16:20

zoom out, wide lens, um, and some in

16:23

some ways taking risks, but it should

16:25

never hurt the learning outcome and it

16:28

should never be a burden to teachers and

16:31

both of those things need to be true.

16:33

>> Absolutely. Um so did want to kind of

16:35

like dig in a little bit more into some

16:37

of the current challenges with AI that

16:40

you know educators, students, parents

16:43

are experiencing today which is around

16:46

AI and cheating

16:48

um and using, you know, ChatGPT to get

16:52

to the answer uh right away and um what

16:56

impact that might be having on the

16:59

learner experience and kind of the point

17:01

of being at school.

17:02

>> Yes. Oh, absolutely. And I think that

17:04

that we're in a bit of a crisis in

17:06

this moment when it comes to artificial

17:08

intelligence and cheating. And we can

17:10

talk about what happens when you kind of

17:11

short-circuit that thinking. But I

17:14

think that the safest assumption we have

17:16

to make in this moment is that kids are

17:18

going to be using artificial

17:19

intelligence at home. So whatever

17:21

happens past 3:00 p.m., expect that to be

17:25

powered by a supercomputer in some way. So

17:28

we have to start there. That means we

17:31

have to change what we are doing in the

17:33

classroom. And so in some instances that

17:36

means maybe we flip what happens at home

17:37

happens in the classroom. Uh but in

17:39

other ways maybe for example you teach

17:41

history. You give children the

17:44

research portion and they can go home

17:46

and do all the research with ChatGPT that

17:49

they want but the higher order critical

17:52

thinking deep learning and discussion

17:55

all of that happens in the classroom. So

17:57

the classroom really has to be a place

17:59

where the deep learning is happening,

18:01

where the testing is happening, uh, and

18:03

where we're raising the bar on

18:04

knowledge, but we do have to assume

18:06

everything past 4:00 is likely going

18:08

to be done, co-created by, or outsourced

18:11

to an AI system. And then again, the

18:15

broader goal and the longer term goal is

18:17

that we've entirely redesigned the

18:19

curriculum to account for the fact that

18:22

kids can lean into supercomputers

18:25

because that is the actual goal. In the

18:26

end, kids in school today are going to

18:28

step out into a world with advanced

18:30

robots, with supercomputers that are

18:33

polymaths. We want them to know how to

18:36

engage with these with these tools and

18:37

these systems, how to utilize them, how

18:39

to invent with them. And we'll have to

18:41

redesign education to account for that.

18:44

And we'll have to make school harder

18:47

because you do get access to these

18:48

supercomputers. And so in a kind of

18:50

superficial way, maybe that means people

18:52

are learning about quantum computing at

18:54

seven years old because that learning is

18:56

facilitated by a teacher and by a

18:58

supercomputer. But that part is going to

19:00

take time. So the more urgent kind of

19:02

redesign is flipping what happens in

19:05

school versus what

19:08

happens at home. uh what happens if we

19:11

don't do that and I think it's quite

19:12

obvious, right? We end up just

19:14

short-circuiting the thinking. So there

19:17

shouldn't be anything to cheat on

19:19

because what happens at home isn't what

19:21

we are evaluating uh and that is I think

19:24

the baseline that we need

19:26

to move towards and that's I think what

19:29

we should be doing more urgently

19:31

so many places to go from everything

19:33

that you just said there um but I'd say

19:36

you know, so, um, maybe a positive example

19:38

of how outside of hard learning and

19:41

maybe an AI tutor helping you learn the

19:43

things that you need to learn at your

19:44

own pace in the most individualized and

19:47

sort of data-driven manner. Well, how can

19:50

you use AI outside of that core learning

19:54

uh in a way that helps children

19:58

become more human right so like what

20:01

does human flourishing kind of look like

20:03

and does AI have a role in that? I think

20:05

we need to learn how they work and learn

20:07

how to use them but also what should we

20:10

as humans be focused on because we want

20:12

to collaborate with AI. So what are the

20:15

sorts of uh subjects and skills that are

20:19

deeply human that we can focus on and

20:22

something I've been thinking about quite

20:23

a bit recently is like different types

20:25

of knowing. Uh there's a cognitive

20:28

scientist called John Vervaeke and he plots

20:30

out different types of knowing and kind

20:32

of the most sort of like academic or

20:34

kind of research-based and fact- and

20:36

knowledge-based learning is

20:38

procedural knowing. Um and that's the

20:42

kind of knowing that AI is really good

20:44

at and is getting increasingly good at.

20:47

But what AI doesn't have is lived

20:49

experience and deep insights that change

20:53

you as you experience them in the world,

20:56

change your perception of the world, and

20:58

then change how you connect with others.

21:01

And so I've been really interested to to

21:04

hear some of the talks this week about

21:07

experiential learning environments.

21:09

Um, I I learned about Thinkery, which is

21:11

here in Austin, and it's kind of this

21:14

interactive learning museum environment.

21:18

And it's just really interesting to

21:20

think about what are those deeply human

21:22

skills that we can be focused on

21:24

teaching students while they're learning

21:28

what AI is and the best ways to

21:30

collaborate with AI.

21:31

>> Yeah, I think that that's vital. And I

21:33

just wanted to quickly go back to the

21:35

cheating. Another thing we'll probably

21:36

need to do in the short term is

21:38

introduce more pop quizzes and surprise

21:40

tests and they don't need to count

21:42

towards grades but to see where students

21:45

are as we are in this kind of new

21:47

territory. So a lot of times we don't

21:49

know how much they're using AI, how much

21:51

they're cheating with it, so insert more

21:53

chances for assessment and be

21:55

tracking that data because that's

21:57

another thing. we don't have a lot of

21:58

visibility into how this experiment is

22:00

going in terms of what AI can do and

22:03

what AI can't do and therefore what

22:05

should we be teaching kids in particular

22:07

and what skills we should be fostering. Um

22:09

my philosophy is we should never assume

22:13

AI will never be able to do something

22:15

and the reality is we cannot predict

22:19

the future, what jobs will be there, how

22:21

advanced AI is going to get, and how

22:23

quickly. That means we have to prepare

22:26

kids for absolutely anything: whichever

22:29

way the future evolves however quickly

22:32

we get to the moon or start genetic

22:34

engineering, kids can pivot, adapt, and

22:37

think critically about the world around

22:40

them. And most of those skills don't

22:43

actually have anything to do with

22:45

technology. They require

22:48

deeper thinking. Critical thinking is

22:51

absolutely vital. In the age of advanced

22:53

technologies, kids need to read more.

22:56

Read for the sake of reading and read in

22:58

a way that they can come back to school

23:00

or with their parents and discuss the

23:02

ideas and have those ideas challenged.

23:05

Kids need to play more in the age of

23:07

advanced technologies. The future Steve

23:10

Jobs of the world, they're not going to

23:12

come from a corporate cubicle. They're

23:14

going to come from people that have

23:15

imagination that can play freely,

23:17

experiment, work collaboratively,

23:20

and think long-term. So, getting kids to

23:23

think beyond the immediate horizon and

23:25

beyond just this unit test in

23:27

chemistry or in math, but how could this

23:29

impact things in five, 10 years to come?

23:32

uh and even cross-disciplinary thinking.

23:35

Kids in school today are likely to hold

23:38

17 jobs across five different

23:40

industries. They won't be doing just one

23:43

thing. So, we have to get them to think,

23:45

how does math connect to what I just

23:47

learned in history, which might connect

23:49

to what I do in in philosophy or in

23:52

English. So, all of these new and

23:54

they're not even new skills, but it's

23:56

just about centering these types of

23:58

skills. The

24:00

most important skills for the future are

24:03

ones we can foster for free and that's

24:05

what I think we can sometimes miss in

24:07

these moments where we feel like we

24:09

have to lean into technology for the

24:11

sake of it uh but it's actually the

24:13

other skills that we need to make sure

24:15

we are doubling down on in the age of

24:18

advanced technologies uh and one kind of

24:21

lived example that I do with my

24:23

nieces and nephews constantly since the

24:25

age of about six or seven I

24:27

theoretically introduce them to

24:29

technology and this is what can happen

24:30

in the classroom as well. I explain

24:32

concepts in age appropriate ways like

24:35

genetic engineering and I ask them to

24:38

interpret what that would mean for their

24:40

world and their sense of ethics. So if

24:42

we could theoretically make sure nobody

24:45

gets sick in the world with these

24:46

technologies should we do that? But what

24:49

if, to my nephew, it meant all that

24:51

basketball practice you do somebody

24:54

didn't have to do because that same

24:56

technology allows them to suddenly be

24:58

really good at basketball. How should we

25:00

think about that? They engage in the

25:02

higher order thinking. They're exposed

25:04

to the longerterm concepts of technology

25:07

without actually having to play around

25:10

passively on an iPad. Uh so these are

25:13

the types of deep conversations and

25:15

higher order thinking that can happen

25:16

in the classroom that teachers are

25:19

uniquely positioned to deliver and to

25:21

facilitate. I mean when you think about

25:23

a teacher, they don't get enough

25:24

credit for all of the things that they

25:26

do. I mean the curriculum is one small

25:28

part of it. They are social workers,

25:30

they are therapists. They know their

25:32

children inside and out. So being able

25:34

to go deep into these types of

25:35

conversations, that's what we also need

25:37

to be focusing on. And I know it

25:39

sometimes feels counterintuitive because

25:41

I'm a futurist and I spend most of my

25:42

days in patents and technologies, talking

25:44

about robots, uploading, brain interfaces,

25:47

yet the most important skills for the

25:49

future have nothing to do with

25:50

technology.

25:53

>> Great. And I want to go back to uh

25:54

something that you said. So technology

25:57

for the sake of technology is absolutely

26:00

not the right way to go about things but

26:03

learning for the sake of learning is.

26:06

And some really interesting insights uh

26:09

this week about how schools and

26:11

test-based learning does not set students

26:15

up to enjoy or take pride in just the

26:18

sort of act of learning. And you

26:22

know, students are sort of encouraged and optimized to

26:24

find the answer, get the answer right.

26:27

And then even in a critical uh thinking

26:30

class that uh cognitive scientist uh

26:32

Christine Legare talked about yesterday,

26:35

even in that class where there is no

26:37

right answer, what the students wanted

26:40

is they wanted the rubric to get there.

26:44

And what she said to them is like, well,

26:46

you know, do you think you're

26:47

going to when you have a job in the real

26:49

world, do you think that there is a way

26:51

like, you know, you're not going to be

26:52

asked to, you know, discover the

26:55

answer, or where we should go, or the

26:57

right path, I should say, um, by being

27:00

given the rubric. So it sort of seemed

27:04

like we're at this kind of acute moment

27:07

where

27:08

the way students are taught and what

27:11

they're taught to optimize for is very

27:15

much at odds with where we're at right now

27:18

with AI and the fact that it is designed

27:21

to give you the answer. And I actually

27:23

talked to a teacher who said uh teaching

27:26

16 to 18 year olds and she was like okay

27:28

so what I do to try and kind of

27:31

circumvent the use of AI in writing is

27:33

I have my students write in class and

27:37

you know I do give them a bit of a

27:38

rubric like this is you know a good

27:40

structure for an essay and then when it

27:43

comes time to actually submit the essay

27:46

they go home and type it up. In some

27:48

cases, more than a few, that student

27:51

had kind of ripped up their essay and

27:54

basically just completely generated a

27:56

new one in ChatGPT. And what that said

28:00

to me was, I mean, there was no

28:02

time saved or cognitive load saved in

28:06

doing that. What that says to me is that

28:08

we're in a confidence

28:10

crisis. Yeah. And this is

28:13

potentially really detrimental to

28:14

society more broadly, not just kids, but

28:17

all of us that we become so reliant on

28:20

these technologies that we stop believing in

28:23

our own ability to make decisions. And

28:26

no matter how good technology gets at

28:28

something, there will be times when we

28:30

have to deviate from the technology's

28:32

advice. And we have to make sure we are

28:35

ready for all of those moments. And you

28:37

might even hear people talk about

28:39

optimizing every aspect of your life

28:41

with artificial intelligence. And I

28:43

somewhat take issue with that because if

28:45

writing that email is the one

28:48

time in the day where you think deeply,

28:50

you move through your ideas, you

28:52

have to structure what you want to say

28:55

and you pass that to an AI, unless you

28:57

were replacing that time and that

28:59

thinking with something else, that's a

29:02

dicey bridge to be walking down. And

29:05

there was a recent study, I believe it

29:06

was Microsoft and Carnegie Mellon that

29:09

joined forces for this study, and it

29:11

did show that over-reliance on

29:13

artificial intelligence can reduce our

29:16

ability to think critically. Uh so we

29:19

need to make sure we are strengthening

29:21

these skills as we start to move and

29:23

work alongside artificial intelligence.

29:25

And there was another study that was

29:27

really helpful that showed this in real

29:28

life in the workforce where

29:30

entrepreneurs were given access to AI

29:33

systems to help with their small

29:35

businesses. The high-performing

29:37

entrepreneurs that had deep critical

29:39

thinking skills, AI supercharged their

29:43

performance because they knew the right

29:45

questions to ask of the AI and they knew

29:48

how to apply the AI's answer to their

29:50

business. When the lower performing

29:52

entrepreneurs asked AI questions, they

29:54

ended up doing worse and it hurt the

29:56

company because they asked the wrong

29:58

questions, they gave up on asking

29:59

the hard questions and they didn't know

30:01

how to apply the material to their

30:03

actual startup. So, we don't want to

30:06

build societies where we are 100%

30:08

reliant on these systems. Uh, and that's

30:10

something that we have to really think

30:12

carefully about, at an adult age and

30:15

at a child age. And I think we're

30:16

already seeing it in terms of our

30:18

attention spans, spelling. I'm sure

30:20

there's a lot of people in this room,

30:21

myself included, who feel like, I

30:23

spelled that word last week and now I

30:25

have no idea how to spell it this week.

30:27

Uh we want to make sure we're not

30:28

short-circuiting the thinking in this

30:30

age. So again, really centering deep

30:33

problem solving, critical thinking, um

30:35

and deep learning.

30:37

Yeah, there have been a number of studies,

30:38

like the Carnegie Mellon-Microsoft one,

30:40

that show that when you outsource your

30:43

cognitive work to an AI, you actually

30:46

become cognitively weaker. And that

30:49

seems extremely critical at an age,

30:52

you know, in a period of time where, you

30:54

know, students are supposed to be honing

30:56

their cognitive abilities. But then

31:01

it's like, well, how can

31:04

you engage with that AI, you know, in a

31:07

way to actually benefit from it? And

31:11

knowing that if you outsource the

31:14

cognitive load and you're not doing the

31:16

cognitive work yourself, not only are

31:18

you missing that moment, but you're

31:20

missing the insight kind of living

31:24

within you and kind of settling within

31:26

you and kind of becoming who you are and

31:28

increasing your body of knowledge and

31:29

your resilience and your strength and

31:32

your expertise. And it seems like in

31:34

this day and age where it's so uncertain

31:37

what jobs will look like, what the

31:40

future will look like, kind of radical

31:43

self-dependence is something that we

31:45

should be teaching. And it would

31:49

be great to kind of hear a little bit

31:51

about where we think that kind of

31:53

responsibility

31:55

lies in that respect.

31:56

>> Yeah. And I always hesitate, when

31:59

I think about responsibility

32:01

to bring in parents, because everybody

32:04

is coming from a different place um, and

32:06

we can't really control what happens in

32:08

the home. That's an entire

32:11

other week of South by, making sure that

32:13

all homes are equal and have access

32:15

to the same things. Uh, but in school, I

32:18

think we really need to think about

32:19

building confidence as a skill for kids

32:22

so they can continue to trust the

32:25

questions that they're asking, and

32:27

their own ability to generate answers.

32:29

And again, it doesn't mean, of course,

32:31

in a world where AI is a master of

32:33

quantum computing, we want kids to be

32:35

able to ask AI questions, but we help

32:37

them think more deeply about the

32:38

questions that they're asking, and

32:42

they have a broad understanding of the

32:44

answers that AI can give them. And

32:46

again, that is a fundamentally different

32:48

society, right? Where we go from what is

32:50

the answer to what is the question. And

32:52

that's why that is part of that bigger

32:54

systemwide redesign. Uh but I think

32:57

centering confidence, encouraging kids

32:59

to speak in front of

33:01

classmates, engage in conversation,

33:03

because that is also the interface of

33:05

the future. Conversing with

33:07

these AI systems is absolutely

33:09

critical. And then, you asked

33:10

what are the jobs of the future, nobody

33:12

can really predict them. We can predict

33:14

the jobs that are going to be automated.

33:16

That's much easier to see. Um, but the

33:18

same way nobody 20 years ago could have

33:19

predicted a social media manager was

33:21

going to be vital to a company's

33:22

existence, most of the jobs we can't

33:25

really see. We know that there's going

33:26

to be some convergence of synthetic

33:28

biology and artificial intelligence in

33:30

space. Uh, but again, it's about

33:32

preparing kids for anything.

33:34

I think we need to move away

33:37

from preparing kids for jobs because

33:40

jobs are going to change and that much

33:43

we can guarantee and that also means

33:45

moving away from coupling identity to

33:50

jobs. We have to move away from that

33:52

entire philosophy, right? That that idea

33:54

that we learn, we work, we retire,

33:57

that's all changing. So instead, we

33:59

encourage kids to lean into the

34:02

problems that they want to solve, the

34:03

skills that they want to adopt and

34:07

the amazing ways that they want to

34:09

change the world. I mean, tell kids

34:10

about the robots and the AI systems that

34:12

they'll be living with and ask them what

34:14

they want to do with it versus coupling

34:17

identity to jobs because that is just

34:19

going to end up in a crisis, and we're

34:20

moving into an entirely

34:22

different type of world. And so some of

34:25

the skills that we can teach children to

34:27

kind of prepare for this new future,

34:31

people sort of use a term like

34:32

metacognition, right? So, how to think. And

34:36

it was interesting, in a talk

34:38

yesterday, one educator was

34:42

saying you know well you can't

34:43

necessarily stop

34:46

students from using ChatGPT, but

34:48

something that he does is like, okay, you

34:49

used it? Show me your prompts, show me the

34:52

questions that you asked it, show me how

34:55

you pushed ChatGPT. Because if you can

34:58

ask good questions and if you can become

35:00

a good communicator,

35:02

and you kind of know where you

35:05

want the answer to go and you can prompt

35:07

in that direction, then

35:10

that's a skill for

35:12

today and for the future.

35:16

another skill that sort of came up as

35:18

kind of an experimental sort of skill,

35:21

the New York Times recently covered a

35:24

story about using this term like a vibe

35:26

engineer, which is this idea that kind

35:29

of almost anyone with the will and the

35:32

passion can do it. And that's

35:33

something that I think, you know, we

35:34

need to double down on encouraging in

35:37

every individual, but anybody with the

35:39

will and the desire to create an app can

35:42

basically do that now. And it's this

35:45

thing called like vibe engineering. So

35:47

a lot of people are creating apps

35:49

for themselves or apps for just a few

35:52

people. And so one of the kind of

35:54

emerging skills that was discussed

35:57

was around human-centered design.

36:00

So if anybody can design products

36:04

for others like how do we get into you

36:07

know well what would be good for others

36:09

and so that felt like another territory

36:12

that felt rich.

36:13

>> Yeah. Yeah, I think centering the human

36:16

experience in an age of advanced

36:17

technologies um is an investment that we

36:20

should definitely uh be doubling down

36:22

on. And yeah, and again that does mean

36:24

introducing kids to these ideas, to

36:26

these technologies um but then bringing

36:28

it back to the human, to just kind of

36:31

the core fundamentals. I mean I think

36:34

history, ethics, philosophy, these are

36:36

subjects that become more important the

36:38

more advanced and technical our

36:40

societies get. And like you

36:42

mentioned earlier, uh you know, the

36:45

computer scientists learning today are

36:49

going to be the tech tycoons

36:54

of tomorrow. And so what can we be

36:57

teaching them to create more ethical AI

37:00

and, you know, exponential technologies

37:03

that are good for people that are

37:04

designed in a way that is good for

37:07

society. And so I think that's a really

37:09

hopeful message that we are in that

37:11

moment now where that next generation of

37:15

builders is coming up, and we have that

37:17

opportunity to kind of coach them and um

37:20

help them ask the right questions and

37:23

design for the good of society.

37:26

>> Yeah. I don't think I could have said it

37:27

better myself.

37:28

>> Yeah. So actually a bit of a segue

37:31

into

37:32

the question of just ethics in this

37:35

space more broadly. And actually

37:39

maybe before we kind of like dive into

37:41

some of those areas, what do we

37:44

think of kind of the role of the

37:47

educator in all of this, and how

37:50

does that shift? So let's say, you

37:53

know, in a great situation where

37:56

you've got an AI tutor that is, you

37:59

know, the entire reimagined

38:02

approach that you mentioned, where you

38:06

have an AI tutor that's giving you

38:07

adaptive, personalized learning and all

38:09

of that. Um what is the role of the

38:12

educator in all of that?

38:14

>> I think that's going to evolve

38:17

the more these pilots and

38:19

studies come through, as will the

38:21

positioning that the educator takes. So

38:23

whether that's deep expertise in some

38:26

areas, which will be vital, or whether

38:28

that's facilitating the right

38:31

questions to ask, the right way to think

38:33

about material and the right way to

38:35

think about learning. Uh I think the

38:37

role of the educator stays deeply

38:39

coupled with kids understanding and

38:42

knowing how to learn. Uh and that is

38:45

what education was supposed to be for:

38:48

learning. And so I think it goes back

38:50

to that. We've

38:53

redesigned education to prepare people

38:55

for work. Um and I think we need to move

38:58

towards preparing people for life. Um

39:00

but the the educator still stays central

39:02

to that process. I mean I don't think

39:04

many people would want to send their

39:05

kids to a school with 95 robots and no

39:08

people. I don't think that's the future

39:10

that uh we're all aiming for.

39:12

>> Right. So I guess in some of these

39:16

kind of very innovative models like

39:18

Alpha School where it's two hours of

39:19

intensive personalized learning with an

39:22

AI tutor, the rest of the day is all

39:24

about human connection with teachers and

39:27

instructors and guides that help

39:29

kind of uncover the passion of that

39:32

student and help to nurture it and um

39:35

help them to have the confidence to kind

39:37

of deliver on that. But with that

39:40

I wanted to touch on, you know, the ethics

39:43

of this space a bit more.

39:45

>> Yeah and this is something we have to

39:46

really think carefully about:

39:48

artificial intelligence, data, and

39:51

children. That's already a deeply

39:56

questionable intersection and ethics

39:58

appears in a few ways. So the first is

40:00

what data are these AI systems going to

40:02

be collecting when it comes to children?

40:05

Are parents aware, and did they give

40:07

consent or are we just kind of rushing

40:09

AI tools into class and what can be

40:11

interpreted from the data that gets

40:13

collected on children. So we want to

40:15

know where their stamina is on math. Uh

40:18

we don't want to interpret other

40:21

emotional cues unless we have figured

40:23

out how to do that safely with parent

40:25

consent. Uh so that's one area that I

40:28

think we need to really understand.

40:30

The second is the strange way bias shows

40:33

up in AI systems. We often think about

40:36

facial recognition and the cases where

40:38

we know it uh more intimately. But there

40:41

are unique ways that AI can make

40:43

predictions about you when you interact

40:46

with it and then change the level of

40:49

advice that it gives you or how well it

40:51

performs for you based on what it knows

40:53

about you. So there was a study done um

40:56

using most of the most famous AI systems

40:58

uh and it showed that when you asked the

41:00

AI systems about African-Americans, it

41:02

gave all great positive reviews. When

41:04

you gave the AI system uh an example of

41:07

text that had more traditional

41:09

African-American English in it and asked

41:11

the AI systems questions about that

41:12

user, the AI system would say, "Oh, this

41:14

person's never going to go anywhere. Uh

41:16

I can't even imagine a job for them.

41:17

They'll be in low-wage jobs." Picture

41:19

this in education: the AI system detects

41:22

somebody has this kind of ethnic

41:24

background or is this gender and then

41:26

gives the teacher worse feedback on

41:28

that student uh in terms of assessments

41:31

or gives the student worse advice in

41:33

problem solving because it has already

41:35

made a prediction that that student is

41:37

not going to go anywhere in life. So

41:39

these are the more subtle ways we have

41:40

to apply foresight to ethics or ethics

41:43

and foresight in academia. And

41:46

I'd say the final thing that we're going

41:47

to have to watch out for, and we saw

41:49

this with social media after the fact,

41:52

is the relationships kids are going to

41:54

build with these systems. We are now

41:57

giving kids access to an infinite,

42:00

neverending opportunity to engage with

42:03

an imaginary friend, something that is

42:05

always on, can answer all of their

42:07

questions. That is a recipe for a new

42:09

type of addiction. And we have to really

42:12

be looking out for this. We kind of

42:14

missed the boat on smartphones and now

42:16

we're all trying to get them back out of

42:17

the classrooms. We can see this line of

42:20

sight directly with AI systems and chat

42:23

bots. Uh and this isn't of course all on

42:25

educators. This has to come back to, you know,

42:27

tech companies, how we design these

42:28

systems, age-gating them. But something to

42:31

look out for is this kind of new

42:33

addiction that might form between kids

42:35

and chat bots and that is not going to

42:37

end up well, so we have to do our best to bring

42:40

parents on board with that. So even if

42:42

that's at parent-teacher interviews,

42:44

just casually saying, "Look out for the

42:46

amount of time your kid spends chatting

42:48

with a chatbot. I noticed they were a

42:50

little bit more disengaged in class, and

42:52

that could be why." So this is another

42:54

area that we have to apply foresight to, but we can

42:56

see that line of sight happening quite

42:58

clearly if we don't intervene

43:01

>> Yeah, in a similar way that we've been

43:03

talking about you know parents and

43:05

learners having that visibility into

43:07

their own data and kind of um their

43:10

performance and how engaged they are

43:12

with their work. Should there be a case

43:14

where everybody has that visibility into

43:17

the relationships with these chat bots?

43:20

Where do you think that line can be

43:22

drawn? But I feel like if there is that

43:24

visibility, then people can be a little

43:26

bit more relaxed. But then is that

43:27

>> Yeah, I would say that question needs to

43:30

be answered by a psychiatrist and a

43:31

psychologist. That is why these are

43:33

multidisciplinary conversations. We need

43:35

to bring everybody to the table. An

43:38

addiction or a relationship with

43:40

a chatbot shouldn't be something that

43:42

kids download in the app store. Um so

43:44

psychologists, psychiatrists, doctors, I

43:47

welcome you to this conversation because

43:48

we need your voice in it. It can't just

43:50

be happening out of Silicon Valley. It

43:52

can't just be left to parents to deal

43:53

with on their own. Everybody needs to

43:55

come to the table. We saw what happened

43:57

with social media. We don't have to do

43:59

that social experiment again.

44:01

So well said. Okay, we're going to take

44:04

some questions here. This one's from

44:06

Rob. How do you see AI increasing the

44:09

digital divide especially in underserved

44:11

communities in developing nations and

44:13

how do we as leaders stop this cycle?

44:17

>> And we can see that general-purpose

45:18

technologies build on each other, right,

44:20

so the communities that didn't get equal

44:21

access to electricity they're the

44:23

communities that are

44:25

struggling with the digital divide and

44:26

then there will be an AI divide. That

44:28

is why that first pillar that I

44:30

discussed, AI as a hard skill, teaching

44:33

kids how to use artificial intelligence

44:35

how to prompt it, how to use it safely

44:37

is vital because that may be the only

44:40

opportunity kids get to access these AI

44:43

systems. So that's why it's not pushing

44:45

AI out of schools. Um it's being very

44:47

careful about adjusting how kids learn

44:49

with AI, but making sure we build AI as

44:52

a hard skill is absolutely vital in

44:55

schools and in education. When it comes

44:57

to the broader world, uh this is a

45:00

question that nation-states are facing

45:02

urgently: making sure there are

45:04

things like sovereign AI that every

45:06

country gets access to computing

45:08

power, the opportunity to build the STEM

45:10

skills within their population uh to

45:12

adopt these technologies. Uh that is a

45:14

global conversation that's also

45:17

happening against a very geopolitically

45:19

uncertain time. Um but it's a really

45:22

important question and unfortunately I

45:23

wouldn't be able to answer it in

45:25

30 seconds.

45:26

>> Yeah. And just to add something

45:28

small to that: in a way, AI could be

45:31

introduced to everyone, because

45:34

everyone's got a smartphone regardless

45:35

of their socioeconomic situation. But if

45:39

students aren't taught how to use it

45:41

and just over-rely on it, you know,

45:44

that could put some at a disadvantage.

45:47

Uh let's take another question. Um,

45:50

we're aware that AI cannot replace

45:53

in-person instructors, but will it and

45:56

should it replace the online

45:58

asynchronous instructors

46:01

in higher ed?

46:03

>> I'm not exactly sure what is

46:06

meant by this question.

46:08

>> Yeah, I guess how I interpret this

46:10

question is: we know the value of

46:13

in-person instruction and the need for

46:16

that human connection. There are other

46:18

modalities of learning. Some is kind of

46:20

like on demand learning and then you've

46:22

got some which is

46:25

sort of synchronous but digital. Um

46:30

my thought on that is, I think, when

46:34

content is pre-recorded maybe that's not

46:36

the best use of a teacher's time to have

46:39

sat in front of a camera and kind of

46:40

read through all of that content

46:42

themselves. Maybe that is a scenario

46:44

where you can outsource that to an

46:46

avatar or an AI in a different format

46:48

that is proven to be more personalized

46:52

and adaptive. And I would imagine that

46:55

any human to human interaction that's

46:58

focused on human connection is good

47:00

whether that's in person or has to be

47:03

online. And I think that there's also

47:05

something interesting here and we

47:07

actually don't know the answer to

47:09

this question but if you're taking say a

47:12

physics class online what now happens

47:14

when the physics teacher is also now

47:16

powered by these supercomputers and how

47:18

their perspective on physics changes and

47:20

how they see the world and then getting

47:22

access to that person in addition to the

47:24

AI. So I think the

47:27

jury is still out on how that would

47:28

unfold specifically as it relates to

47:30

online learning.

47:32

>> Yeah, absolutely. And, you know, I

47:35

work with a company that

47:38

creates AI twins for experts and what's

47:41

going to happen next is that experts

47:43

have expertise that they trade on, they own

47:45

their expertise but they're going to be

47:46

able to enrich that expertise with

47:49

real-time data that they choose to bring in.

47:52

And so you know would you speak to that

47:55

real expert or would you speak to that

47:57

expert's AI twin? Well, in some cases it

48:01

might be advantageous to speak to the AI

48:03

twin. Even though with the expert,

48:05

the real in-person experience, you can

48:07

have, you know, much more creative

48:08

conversations. There might be scenarios

48:10

where the AI twin is actually more

48:12

valuable for certain contexts. I

48:16

think tackling the last one is

48:17

interesting. What are the pros and cons

48:19

of developing skills for prompts when

48:22

using AI? It is becoming critical for a

48:25

career. How will it impact social

48:27

skills? Um, so the pros are the more you

48:30

understand how to direct an artificial

48:33

intelligence system, the better the response

48:37

and access to how the AI

48:39

processes that data you'll get. Um, so

48:42

that I think is very helpful. Another

48:44

pro is teaching people how to process

48:47

what is in their mind and formulate that

48:50

into a question that can lead to some

48:52

response. The con I see is that we end

48:56

up refining all of our ideas and

48:58

knowledge and optimizing it for

49:01

algorithms. We

49:05

will become optimization engines for

49:08

algorithms and I don't think that's the

49:11

world that we want to get into. I think

49:13

there are unique advantages that

49:15

artificial intelligence provides in how

49:16

it interprets data and there are unique

49:19

advantages to how humans approach data

49:21

and we don't want to make our approach

49:23

to thinking optimized for artificial

49:26

intelligence. We want AI to be optimized

49:28

for us. And so I think that

49:31

would be the con. I think this

49:32

is going to be only a temporary

49:34

challenge, as we're seeing the kind

49:36

of nature and the science of prompting

49:38

is continuously evolving and eventually

49:41

it will become much more

49:42

conversational. So the way you talk to

49:44

your colleague or you talk to your

49:45

teacher or your friend, you'll be able

49:47

to engage with AI in that way. But that

49:49

still means communication is absolutely

49:51

vital. Understanding how to share your

49:54

ideas and that isn't something that we

49:56

always center in education. But being

49:59

able to vocalize your ideas and refine

50:02

the knowledge that you have

50:04

in a way that's easy to understand

50:06

and to interpret for the general

50:08

public, and not just for AI, will

50:10

be vital in the future.

50:12

>> Yeah. And I think as AIs become better

50:15

at prompting themselves and all of that,

50:17

you know, well, where does the human go?

50:18

The human needs to go deeper. They need

50:20

to get more creative. Like, what

50:22

are these prompts even about? What is it

50:24

that I'm trying to achieve? What could I

50:26

achieve? And so I think that trajectory

50:29

is a positive one for humans. Like how

50:31

do you dig deeper into your human

50:34

ingenuity because all of these things

50:36

can be handled for you. And so I think

50:39

that's a net positive for, you know,

50:41

using AI in the right way. I

50:44

know there's a question that's

50:45

received the most likes and I wonder

50:48

why. What occurs when the US Department

50:50

of Education is demolished

50:54

and how do we move forward to make sure

50:56

all states receive equal AI education?

50:59

And I think this goes back to the first

51:03

question.

51:04

Investing in children's future

51:07

is an investment in national interest.

51:11

They are fundamentally coupled. So if

51:13

you want to talk economic strength,

51:16

economic security and national security,

51:18

you are inherently talking about the

51:20

success of the next generation. Uh so I

51:24

am not involved in how this is being

51:27

dismantled, but I really hope we are

51:31

prioritizing and centering children and

51:35

their ability to self-actualize and

51:38

reach the maximum capabilities that they

51:40

can in the decisions that are made

51:43

because that is going to be deeply

51:44

coupled with the longevity and state

51:48

continuity. So they can't

51:50

be decoupled. And that's why I say

51:52

education is a national security issue.

51:54

They need to be in the same room.

52:03

So these are fantastic questions. Um I

52:05

did want to leave just a couple of

52:07

minutes for Chenade to share just

52:10

some final rounding thoughts on this

52:13

last day of South by Southwest edu on AI

52:16

and the future of education.

52:19

Well, first of all, just a major

52:21

shout out to teachers because this is an

52:23

incredibly complex time and they are

52:26

dealing with the most prized asset

52:30

on the planet, which is children. So

52:32

I mean, I think they don't

52:33

get enough credit for the moment that

52:35

they're navigating.

52:39

And I think something to

52:41

remember: we're going to continue to

52:44

hear about advanced artificial

52:45

intelligence systems, quantum computing,

52:47

space, all of these deeply technical

52:50

advancements, but some of the most

52:53

important skills have nothing to do with

52:55

the technology. And even for

52:58

parents, it's not being able to navigate

53:01

an iPad passively at five that will

53:04

dictate whether your child will do well

53:06

in the future. If you said, you know, my

53:08

child doesn't really like working on the

53:10

iPad, but she's reading four books a

53:12

day. She loves her sports teams. She

53:14

wants to spend too much time at the

53:16

park. I would say that child is going to

53:19

thrive in the future. So even though

53:21

there's a lot of pressure to adapt to

53:23

this moment, remember it is the

53:25

non-technical skills that we need to be

53:27

centering because we are preparing kids

53:29

for a future we cannot see, which means

53:31

we have to prepare them for anything

53:33

regardless of the way technology

53:35

evolves.

53:37

And on that note, I think we will close.

53:39

Thank you for being an absolutely

53:41

fantastic audience. What impact do you

53:43

think AI will have on the workforce? And

53:45

do you think we are headed for an

53:47

identity crisis?

53:48

>> And this is the question that's

53:49

fascinating about AI. What else can I

53:51

become? Very few people have the courage

53:53

to ask that question. Why? Because they

53:54

look in the mirror in the morning and

53:56

they see an engineer or a doctor. They

53:57

don't see a person.

53:58

>> If they're not looking at artificial

54:00

intelligence and asking, "What are we

54:01

going to become with this technology?"

54:02

Would you say it's the beginning of the

54:04

end for
