AI and Education: What's Changing, What Kids Must Learn, and the Path Forward | SXSW
FULL TRANSCRIPT
I think we need to move away from
preparing kids for jobs because jobs are
going to change. The way students are
taught and what they're taught to
optimize for is very at odds with where
we're at right now with AI. The safest
assumption we have to make in this
moment is that kids are going to be
using artificial intelligence at home.
So, the classroom really has to be a
place where the deep learning is
happening and where we're raising the
bar on knowledge. When you outsource
your cognitive work to an AI, you
actually become cognitively weaker. This
is potentially really detrimental to
society more broadly that we become so
reliant on these technologies, we stop
believing in our own ability to make
decisions. In the age of advanced
technologies, kids need to read more and
we'll have to make school harder. The
most important skills for the future
have nothing to do with technology.
What should kids be studying and
learning today for a future that's going
to be transformed by artificial
intelligence? And how does the
institution of education need to evolve
to meet the moment? In the spirit of
South by Southwest, I'm going to share a
talk that I delivered on this topic
where I go into detail on some of these
ideas from where we can expect this
technology to take us to the skills kids
in school today need to be cultivating
and the systemwide change the
institution of education is going to
need to undergo to prepare kids properly
for this world. I'm Sinead Bovell and
this is I've Got Questions. Um, super
delighted to be moderating this
conversation with the fabulous Sinead
Bovell. Sinead, tell us what it is to be
a strategic foresight adviser and your
lens on AI and the future of education.
>> Yeah. Um, you know, education is the
bedrock for a healthy democracy and for
a functioning society. And it's not
just essential for things like economic
mobility and economic security, but for
fairness as well, and for well-being.
I believe there's no such thing as a
state that overinvests in children's
future. An investment in children is an
investment in national interest. Right?
So, you want to foster uh an informed
and adaptive citizenry that can not just
safeguard the future, but thrive in it.
Especially a future that's going to be
as complex as the one children in school
today are entering into, which will be
shaped by quantum computing, genetic
engineering, artificial intelligence,
commuting back and forth to space. This
is an incredibly complex world they'll
be entering into. Uh so the more that
they can understand it, the more we can
support them in that journey, the better
equipped they are. And that's an
investment in a country's economic
security, in our collective health and
well-being, and our overall security and
national security interests. Goodness.
So, couldn't be a more pressing topic.
And before we dive into some of
the kind of finer points, where would
you say taking a step back like where
are we at this moment with AI?
So if I were to say where we are, I mean
it's very very early. Maybe it's 1992.
The internet has dropped. Companies are
experimenting. We know it's maybe going
to be a big deal, but there's still
a lot of doubt. Some people are also
kind of playing around on it. But we
have yet to fully comprehend the way it
is going to fundamentally transform our
world. The Googles, the Apples, the
Amazons of the future, they have yet to
be invented, but they're coming. And
artificial intelligence, it's also a
general purpose technology. So, similar
to something like electricity. Think of
how pervasive electricity is; we don't
even think about it at all. It's so
foundational that it's moved into the
background. So, we will soon be
streaming artificial intelligence the
way we stream electricity. That is going
to be a fundamentally different society
to live in. And these general purpose
technologies take time to get so
entrenched in society. But you know when
it's reached that point: when
people can't access general purpose
technologies, whether at a country
level or in certain neighborhoods, we deem
that wildly unethical. Who doesn't get
fair access to the internet? Who doesn't
get access to electricity? That is the
path artificial intelligence is on.
So if artificial intelligence is going
to be this general purpose technology
and fade into the background,
what does AI and education
look like now? Like how should educators
be considering AI and education given
that it will be in the background but
today we're at this very early phase.
Yes. So I think that there are kind of
three pillars that are related but
distinct in terms of how we should be
thinking about AI in education. The
first pillar is safe adoption for kids
and for learners. So this means
equipping kids with the tools to
navigate artificial intelligence because
they're going to be on these tools at
home regardless. They have
supercomputers in their pockets,
supercomputers on their iPads. So giving
kids the skills to utilize these
tools safely. So that's conversations
like AI isn't your friend, right? Your
chatbot isn't something that you tell
secrets to. This is what we do or don't
share with artificial intelligence. And
this is also how you ask it good
questions and validate its answers. So
that's pillar one. Pillar two is how do
we more urgently adjust what we are
teaching in school or just the formula
for what happens in the classroom versus
what happens at home knowing that kids
are going to be leaning into these
technologies at home to do homework and
to complete assignments. The third
pillar, and this is what I think we are
rushing into, but this is actually the
long-term game, is how do we
fundamentally redesign the entire system
of education for the age of artificial
intelligence. But what seems to be
happening in this moment is we are kind
of merging all of those pillars in a
sense of urgency. And this leads us to
deploy AI in schools for the sake of
feeling like we need to meet the moment
by bringing AI into the classroom. And
there are a lot of technologies that
aren't ready. So I think we focus on
pillar one, giving kids the skills to use
these tools safely if they're going to
be using them on their phones. We
slightly adjust what we're teaching to
account for cheating in homework. But
it's much more at the departments of
education, the ministries of education
level to take this longer term lens and
fundamentally redesign school for
the age of artificial intelligence. So I
think we've heard this week a number of
different ways that educators are
experimenting with AI, different pilots,
different ways of going about it. It
sounds like we should be teaching it and
educating about it, but is now the time
to be experimenting with it in deeper
ways?
>> Yeah. Yeah. So, you know, teaching it,
it's more so that AI is a hard skill. So,
that should be happening. Uh
experimenting with it. Yes. We need to
be running these pilots. We need to be
gathering the data as to what's working
and what's not. But it has to be in very
very intentional ways. Um and not just
assuming that we can just throw in an AI
tutor somewhere arbitrarily and that's
going to be sufficient. um and making
sure that we're not running social
experiments that jeopardize learning
outcomes uh for the sake of just feeling
like we need to quickly meet the moment.
Uh so yes, I think that these
experiments, they're vital. They should
be happening uh but they need to be very
very intentional uh and very very
controlled.
I know that you're working with
Fortune 50 companies in this space and
advising them on how to navigate AI and
education. What are some of the data
points and the advice that you've
been giving them?
>> Yeah, so it's been really really
interesting to look at some of the data
that's coming through and of course
we're still very, very early in
the age of using artificial intelligence
in education. Uh but there's one clear
trend that stands out. So, I'm going to
walk us through a study that I find
particularly helpful. And this was done
by the Wharton School at the University
of Pennsylvania together with, I believe,
the British International School of
Budapest, and it implemented artificial
intelligence in math classes. So
there were a few math classes in the
high school and they broke the class up
into three groups. The control group
which is just your traditional doing
homework problems with your textbook.
The GPT Base group, the students
that got uninhibited access to
artificial intelligence. Uh and then the
GPT tutor group. So these are students
that got access to an AI that has been
designed to just guide them through
problems, give them hints but not the
answers. All students got the base
lesson for math together and then they
broke out into their respective groups
and their respective tiers of AI access
or not. So the study showed that when it
came to the practice problems, the
children that got uninhibited access to
AI did 48% better on the practice
problems than the control group. The
students that got access to the GPT
tutor did 127% better than the control
group. But when it came time to
actually test students without access to
AI on the final post-unit test, the
kids that got the uninhibited access
performed 17% worse. So AI harmed the
learning outcome. And the children that
got access to the AI tutor performed at
the same level as the children that
didn't get access to any artificial
intelligence. And so the conclusion of
the study was that generative AI harms
learning outcomes. But then there was a
second study that happened at Harvard.
And of course we have to control for the
fact that self-directed learning is a
little bit different at a university
level. Um and clearly if you're getting
into Harvard, there's also some kind of
higher order thinking that you're able
to do. Um but that aside, it was a
physics class and they broke the physics
class into two groups. The control
group, which is the students that went
to the traditional lecture with the with
the professor. Uh then they broke out
into peer groups and and worked uh with
one another and their peers and had
instructor-led guidance on solving
problems. The second group had no
in-class lessons at all. The entire
process was done with AI, but they
specifically designed the AI to be
self-paced, adapting to the student's
needs, providing immediate feedback on
whether the student was on the right
track or off track
while doing the problems, providing
motivation, and really taking
all learning best practices and
implementing them into that system, and
continuing to adapt how it tested the
student based on how they were
evolving while doing the problems. When
they did the general test after those
two experiments, the kids that went the
pathway of artificial intelligence
performed twice as well as the peers
that didn't get access to AI and they
were more motivated and more engaged.
And so what we can learn from just those
two kind of isolated studies is that you
have to adapt the entire ecosystem,
right? It's akin to inventing
electricity but only swapping out
the steam engine and putting a light
switch in its place, not
rebuilding the entire assembly line
and rethinking how we design the
system. That's step one. Step two,
immediate feedback is absolutely vital
in AI learning outcomes. If we're going
to incorporate artificial intelligence,
gone are the days where we wait
for the unit test or midterms or the
end-of-year exam to see where students are.
Artificial intelligence needs to be able
to extract the data in
real time: this is how somebody's
adapting, or this is how they're falling
behind. And AI needs to provide that
feedback, or else we lose
visibility into how well things are
going. Self-paced learning is also
vital. So if you go back to the high
school study, everybody had an hour and
a half to work on the math problems. So whether
you were working with your textbook or
working with AI, that hurt people that
were doing the AI method and it helped
people doing the traditional method. So
kids need to learn at their own pace. Uh
and the system needs to be able to adapt
in real time. So those are just a few of
the key takeaways uh when we think about
AI and education. Um, but that's why
this is a longer term redesign and that
redesign should not fall on teachers.
Uh, and that's one thing that I think we
need to really make clear that this
moment shouldn't fall on teachers. They
already have way too much on their
plate. This is something for
government departments, department
heads, and those designing curricula more
broadly, and I think we're getting that
wrong. Right. Yes.
Um I had um the opportunity to sit on a
few panels yesterday and there was
definitely, um, you know, a very
understandably irate educator who was like, we
had COVID, you know, we're in an
underprivileged area, we've got so many
pressures, and now we have AI to learn,
and we have to do this two-day course,
and then the pilot ends, and then we've
got to go and do another two-day course,
and AI is the last thing that we need.
Um, so it sounds like there is a right
way to do it and there are definitely
ways um that can uh be harmful and some
of those right ways involve a lot of
structural redesign of how um of of how
students actually engage with AI.
>> Yeah, this is the institution of
education we need to approach
differently. It's akin to asking the
accountant to redesign the concrete
and the bricks. That's not what we
should be doing. And I know we had
talked about a school that you had been
tracking, so I think it could be helpful
to share some of the insights there.
>> Yeah, absolutely. Um, people familiar with
Alpha School in the room? A few people. Um,
what it is: it's a two-hour
learning model where all the hard
skills, all the knowledge that you need
to learn at school, happens in a two-hour
period with an AI tutor, and the
experience is entirely personalized and
adapted to where that student is at. So
if you walk around the classroom,
you'll see different students working on
completely different math problems,
let's say, and as it's understood
what that student is interested in, the
math problems become kind of
contextualized within topics that they
love. And critically, and again
sort of going back to one of the best
practices, or, you know, mandatories for
AI being successful in schools,
there's real-time feedback on
the performance of that
child. They can see their own
performance and actually start to own
that journey for themselves, and they
get everyone into the 99th percentile no
matter where they've started
in their journey. So I think that's a
fascinating way um to look at it and I
think that, besides those two hours, so
the point is getting, you know, all of
that work done in those two hours, and
some students are able to accomplish
double in those two hours, and some on
the higher-performing end five times
more. Um, but the critical part is
freeing those students to focus on life
skills, on EQ skills, on kind of
developing their own human ingenuity.
And so I feel like that is, you know,
from all the research that you've
discussed, just an amazing example of of
what's happening today.
>> Yeah.
>> Yeah. Totally. And that's kind
of the moment we're in, right? We're in
pilots, in experimentation, and
innovation. Really a redesign:
zoom out, wide lens, and in
some ways taking risks. But it should
never hurt the learning outcome, and it
should never be a burden to teachers, and
both of those things need to be true.
>> Absolutely. Um so did want to kind of
like dig in a little bit more into some
of the current challenges with AI that
you know educators, students, parents
are experiencing today which is around
AI and cheating
um and using, you know, ChatGPT to get
to the answer uh right away and um what
impact that might be having on the
learner experience and kind of the point
of being at school.
>> Yes. Oh, absolutely. And I think that
we're in a bit of a crisis in
this moment when it comes to artificial
intelligence and cheating. And we can
talk about what happens when you kind of
short-circuit that thinking. But I
think that the safest assumption we have
to make in this moment is that kids are
going to be using artificial
intelligence at home. So whatever
happens past 3:00 p.m., expect that to be
powered by a supercomputer in some way. So
we have to start there. That means we
have to change what we are doing in the
classroom. And so in some instances that
means maybe we flip what happens at home
happens in the classroom. Uh but in
other ways, maybe, for example, you teach
history. You give children the
research portion, and they can go home
and do all the research with ChatGPT that
they want, but the higher order critical
thinking deep learning and discussion
all of that happens in the classroom. So
the classroom really has to be a place
where the deep learning is happening,
where the testing is happening, uh, and
where we're raising the bar on
knowledge, but we do have to assume
everything past four is likely going
to be done, co-created by, or outsourced
to an AI system. And then again, the
broader goal and the longer term goal is
that we've entirely redesigned the
curriculum to account for the fact that
kids can lean into supercomputers
because that is the actual goal. In the
end, kids in school today are going to
step out into a world with advanced
robots, with supercomputers that are
polymaths. We want them to know how to
engage with these with these tools and
these systems, how to utilize them, how
to invent with them. And we'll have to
redesign education to account for that.
And we'll have to make school harder
because you do get access to these
supercomputers. And so in a kind of
superficial way, maybe that means people
are learning about quantum computing at
seven years old because that learning is
facilitated by a teacher and by a
supercomputer. But that part is going to
take time. So the more urgent kind of
redesign is flipping what happens in
school versus what
happens at home. What happens if we
don't do that? I think it's quite
obvious, right? We end up just
short-circuiting the thinking. So there
shouldn't be anything to cheat on,
because what happens at home isn't what
we are evaluating. And that is, I think,
the baseline that we need
to move towards, and that's, I think, what
we should be doing more urgently.
So many places to go from everything
that you just said there. But I'd say,
you know, so maybe a positive example
of how, outside of hard learning, and
maybe an AI tutor helping you learn the
things that you need to learn at your
own pace in the most individualized and
sort of data-driven manner, well, how can
you use AI outside of that core learning
in a way that helps children
become more human, right? So what
does human flourishing kind of look like,
and does AI have a role in that? I think
we need to learn how they work and learn
how to use them but also what should we
as humans be focused on because we want
to collaborate with AI. So what are the
sorts of uh subjects and skills that are
deeply human that we can focus on and
something I've been thinking about quite
a bit recently is like different types
of knowing. There's a cognitive
scientist called John Vervaeke, and he plots
out different types of knowing, and kind
of the most sort of academic or
research-based and fact- and
knowledge-based learning is
propositional knowing. And that's the
kind of knowing that AI is really good
at and is getting increasingly good at.
But what AI doesn't have is lived
experience and deep insights that change
you as you experience them in the world,
change your perception of the world, and
then change how you connect with others.
And so I've been really interested to to
hear some of the talks this week about
experiential learning environments.
Um, I I learned about Thinkery, which is
here in Austin, and it's kind of this
interactive learning museum environment.
And it's just really interesting to
think about what are those deeply human
skills that we can be focused on
teaching students while they're learning
what AI is and the best ways to
collaborate with AI.
>> Yeah, I think that that's vital. And I
just wanted to quickly go back to the
cheating. Another thing we'll probably
need to do in the short term is
introduce more pop quizzes and surprise
tests. They don't need to count
towards grades, but they let us see where students
are as we are in this kind of new
territory. A lot of times we don't
know how much they're using AI, how much
they're cheating with it, so we should insert
more chances for assessment and be
tracking that data. Because that's
another thing: we don't have a lot of
visibility into how this experiment is
going in terms of what AI can do and
what AI can't do, and therefore what
we should be teaching kids in particular
and what skills we should be fostering. Um,
my philosophy is we should never assume
AI will never be able to do something
and the reality is we cannot predict
the future: what jobs will be there, how
advanced AI is going to get, and how
quickly. That means we have to prepare
kids for absolutely anything. Whichever
way the future evolves, however quickly
we get to the moon or start genetic
engineering, kids can pivot, adapt, and
think critically about the world around
them. And most of those skills don't
actually have anything to do with
technology; they require
deeper thinking. Critical thinking is
absolutely vital. In the age of advanced
technologies, kids need to read more.
Read for the sake of reading and read in
a way that they can come back to school
or with their parents and discuss the
ideas and have those ideas challenged.
Kids need to play more in the age of
advanced technologies. The future Steve
Jobs of the world, they're not going to
come from a corporate cubicle. They're
going to come from people that have
imagination that can play freely,
experiment, work collaboratively.
Long-term thinking: getting kids to
think beyond the immediate horizon and
beyond just this unit test in
chemistry or in math, to how this could
impact things in five, 10 years to come.
And even cross-disciplinary thinking.
Kids in school today are likely to hold
17 jobs across five different
industries. They won't be doing just one
thing. So, we have to get them to think,
how does math connect to what I just
learned in history, which might connect
to what I do in in philosophy or in
English. So, all of these, and
they're not even new skills, but it's
just about centering these types of
skills. The most important skills
for the future are
ones we can foster for free, and that's
what I think we can sometimes miss in
these moments, where we feel like we
have to lean into technology for the
sake of it. But it's actually the
other skills that we need to make sure
we are doubling down on in the age of
advanced technologies. And one kind of
lived example that I do with my
nieces and nephews constantly: since the
age of about six or seven, I've
theoretically introduced them to
technology, and this is what can happen
in the classroom as well. I explain
concepts in age appropriate ways like
genetic engineering and I ask them to
interpret what that would mean for their
world and their sense of ethics. So if
we could theoretically make sure nobody
gets sick in the world with these
technologies, should we do that? But what
if, to my nephew, it meant that all that
basketball practice he does, somebody else
didn't have to do, because that same
technology allows them to suddenly be
really good at basketball? How should we
think about that? They engage in the
higher order thinking. They're exposed
to the longer-term concepts of technology
without actually having to play around
passively on an iPad. So these are
the types of deep conversations and
higher order thinking that can happen
in the classroom that teachers are
uniquely positioned to deliver and to
facilitate. I mean, when you think about
a teacher, they don't get enough
credit for all of the things that they
do. I mean the curriculum is one small
part of it. They are social workers,
they are therapists. They know their
children inside and out. So being able
to go deep into these types of
conversations, that's what we also need
to be focusing on. And I know it
sometimes feels counterintuitive because
I'm a futurist and I spend most of my
days in patents and technologies, talking
about robots and brain interfaces,
yet the most important skills for the
future have nothing to do with
technology.
>> Great. And I want to go back to uh
something that you said. So technology
for the sake of technology is absolutely
not the right way to go about things but
learning for the sake of learning is.
And some really interesting insights
this week about how schools and
test-based learning do not set students
up to enjoy or take pride in just the
sort of act of learning. And, you know,
students are sort of encouraged and optimized to
find the answer, get the answer right.
And then even in a critical thinking
class that the cognitive scientist
Cristine Legare talked about yesterday,
even in that class where there is no
right answer, what the students wanted
is they wanted the rubric to get there.
And what she said to them is, like, well,
you know, when you have a job in the real
world, you're not going to be
asked to, you know, discover the
answer, or where we should go, or the
right path, I should say,
by being given the rubric. So it sort of
seemed like we're at this kind of acute
moment where
the way students are taught and what
they're taught to optimize for is very
at odds with where we're at right now
with AI and the fact that it is designed
to give you the answer. And I actually
talked to a teacher who said, teaching
16- to 18-year-olds, she was like, okay,
so what I do to try and kind of
circumvent the use of AI in writing is
I have my students write in class, and,
you know, I do give them a bit of a
rubric, like this is, you know, a good
structure for an essay, and then when it
comes time to actually submit the essay,
they go home and type it up. In some
cases, well, in a few cases, a student
had kind of ripped up their essay and
basically just completely generated a
new one in ChatGPT. And what that said
to me was, I mean, there was no
time saved or cognitive load saved in
doing that. What that says to me is that
we're in a confidence
crisis. Yeah. And this is
potentially really detrimental to
society more broadly, not just kids, but
all of us that we become so reliant on
these technologies. We stop believing in
our own ability to make decisions. And
no matter how good technology gets at
something, there will be times when we
have to deviate from the technologies
advice. And we have to make sure we are
ready for all of those moments. And you
might even hear people talk about
optimizing every aspect of your life
with artificial intelligence. And I
somewhat take issue with that, because if
writing that email is the one
time in the day where you think deeply,
you move through your ideas, you
have to structure what you want to say,
and you pass that to an AI, unless you
are replacing that time and that
thinking with something else, that's a
dicey bridge to be walking down. And
there was a recent study, I believe it
was Microsoft and Carnegie Mellon that
joined forces for this study, and it
did show that overreliance on
artificial intelligence can reduce our
ability to think critically. Uh so we
need to make sure we are strengthening
these skills as we start to move and
work alongside artificial intelligence.
And there was another study that was
really helpful that showed this in real
life in the workforce where
entrepreneurs were given access to AI
systems to help with their small
businesses. The high-performing
entrepreneurs that had deep critical
thinking skills, AI supercharged their
performance because they knew the right
questions to ask of the AI and they knew
how to apply the AI's answer to their
business. When the lower performing
entrepreneurs asked AI questions, they
ended up doing worse and it hurt the
company because they asked the wrong
questions, they just gave up and asked
the hard questions and they didn't know
how to apply the material to their
actual startup. So, we don't want to
build societies where we are 100%
reliant on these systems. Uh, and that's
something that we have to really think
carefully about at an adult age and
at a child age. And I think we're
already seeing it in terms of our
attention spans, spelling. I'm sure
there's a lot of people in this room,
myself included, who feel like, I
spelled that word last week and now I
have no idea how to spell it this week.
Uh we want to make sure we're not
short-circuiting the thinking in this
age. So again, really centering deep
problem solving, critical thinking, um
and deep learning.
Yeah, there's been a number of studies
like the Carnegie Mellon Microsoft one
that shows when you outsource your
cognitive work to an AI, you actually
become cognitively weaker. And that
seems like extremely critical at an age,
you know, in a period in time where, you
know, students are supposed to be honing
their cognitive abilities. But then
it's like, well, how can
you engage with that AI, you know, in a
way to actually benefit from it? And
knowing that if you outsource the
cognitive load and you're not doing the
cognitive work yourself, not only are
you missing that moment, but you're
missing the insight kind of living
within you and kind of settling within
you and kind of becoming who you are and
increasing your body of knowledge and
your resilience and your strength and
your expertise. And it seems like in
this day and age where it's so uncertain
what jobs will look like, what the
future will look like, kind of radical
self-dependence is something that we
should be teaching. Um, and uh, it would
be great to kind of hear a little bit
about where we think that kind of
responsibility
lies in that respect.
>> Yeah. Um, and I always hesitate, when
I think about responsibility,
to bring in parents, because everybody
is coming from a different place, and
we can't really control what happens in
the home. That's an entire
other week of South by Southwest, making sure
that all homes are equal and have access
to the same things. But in school, I
think we really need to think about
building confidence as a skill for kids
so they can continue to trust the
questions that they're asking, and
their own ability to generate answers.
And again, of course, in a world
where AI is a master of
quantum computing, we want kids to be
able to ask AI questions, but we help
them think more deeply about the
questions that they're asking, and
they have a broad understanding of the
answers that AI can give them. And
again, that is a fundamentally different
society, right? Where we go from what is
the answer to what is the question. And
that's why that is part of that bigger
systemwide redesign. But I think
centering confidence, encouraging kids
to speak in front of classmates,
to engage in conversation,
because that is also the interface of
the future: conversing with
these AI systems is absolutely
critical. And then in terms of, you asked,
what are the jobs of the future, nobody
can really predict them. We can predict
the jobs that are going to be automated.
That's much easier to see. Um, but the same way nobody 20 years ago could have predicted that a social media manager was going to be vital to a company's existence, most of the jobs we can't
really see. We know that there's going
to be some convergence of synthetic
biology and artificial intelligence in
space. Uh, but again, it's about preparing kids for anything. I think we need to move away from preparing kids for jobs, because jobs are going to change, and that much we can guarantee. And that also means moving away from coupling identity to jobs. We have to move away from that entire philosophy, right? That idea that we learn, we work, we retire, that's all changing. So instead, we
encourage kids to lean into the problems that they want to solve, the skills that they want to adopt, and the amazing ways that they want to change the world. I mean, tell kids about the robots and the AI systems that they'll be living with and ask them what they want to do with it, versus coupling identity to jobs, because that is just going to end up in a crisis, and we're moving into an entirely different type of world. And so some of
the skills that we can teach children to kind of prepare for this new future, um, people sort of use a term like metacognition, right, so how to think. And it was interesting, in a talk yesterday, one educator was saying, you know, well, you can't necessarily stop students from using ChatGPT, but something that he does is say, okay, you used it, show me your prompts, show me the questions that you asked it, show me how you pushed ChatGPT. Because if you can ask good questions, and if you can become a good communicator, and you kind of know where you want the answer to go and can prompt in that direction, then that's a skill for today and for the future. Um,
another skill that sort of came up as kind of an experimental skill: the New York Times recently covered a story using this term, a vibe engineer, which is this idea that almost anyone with the will and the passion to do it, um, and that's something that I think we need to double down on encouraging in every individual, but anybody with the will and the desire to create an app can basically do that now. And it's this thing called vibe engineering. So, um, a lot of people are creating apps for themselves, or apps for just a few people. And so one of the emerging skills that was discussed was around human-centered design. So if anybody can design products for others, how do we get into, you know, what would be good for others? And so that felt like another territory that felt rich.
>> Yeah. Yeah, I think centering the human
experience in an age of advanced
technologies um is an investment that we
should definitely uh be doubling down
on. And yeah, and again that does mean
introducing kids to these ideas, to
these technologies, um, but then bringing it back to the human, to just kind of the core fundamentals. I mean, I think
history, ethics, philosophy, these are
subjects that become more important the
more advanced and technical our
societies get. And like you mentioned earlier, uh, you know, the computer scientists learning today are going to be the tech tycoons of tomorrow. And so what can we be teaching them to create more ethical AI and, you know, exponential technologies that are good for people, that are designed in a way that is good for society? And so I think that's a really hopeful message, that we are in that moment now with that next generation of builders, where we have that opportunity to coach them and help them ask the right questions and design for the good of society.
>> Yeah. I don't think I could have said it
better myself.
>> Yeah. Um, so actually, a bit of a segue into the question of just ethics in this space more broadly. Um, and actually, maybe before we dive into some of those areas, what do we think of the role of the educator in all of this, and how does that shift? So let's say, in a great situation, you've got that entire reimagined approach that you mentioned, where you have an AI tutor that's giving you adaptive, personalized learning and all of that. Um, what is the role of the educator in all of that?
>> Uh, I think that's going to evolve as more of these pilots and studies come through, uh, the different positioning that the educator takes. So whether that's deep expertise in some areas, which will be vital, um, whether that's facilitating the right questions to ask, the right way to think about material, and the right way to think about learning. Uh, I think the role of the educator stays deeply coupled with kids understanding and knowing how to learn. And that is what education was supposed to be for: learning. And so I think it goes back to that. Uh, we've redesigned education to prepare people for work, um, and I think we need to move towards preparing people for life. Um, but the educator still stays central to that process. I mean, I don't think many people would want to send their kids to a school with 95 robots and no people. I don't think that's the future that we're all aiming for.
>> Right. So I guess in some of these very innovative models, like Alpha School, where it's two hours of intensive personalized learning with an AI tutor, the rest of the day is all about human connection, with teachers and instructors and guides that help uncover the passion of that student, help to nurture it, and help them have the confidence to deliver on that. But with that, I wanted to touch on, you know, the ethics of this space a bit more.
>> Yeah, and this is something we have to really think carefully about: artificial intelligence, data, and children. Uh, that's already a deeply questionable intersection, and ethics appears in a few ways. So the first is, what data are these AI systems going to be collecting when it comes to children? Are parents aware, and did they give consent, or are we just rushing AI tools into class? And what can be interpreted from the data that gets collected on children? So we want to know where their stamina is on math; uh, we don't want to interpret other emotional cues unless we have figured out how to do that safely, with parent consent. Uh, so that's one area that I think we need to really understand.
The second is the strange way bias shows
up in AI systems. We often think about
facial recognition and the cases where
we know it uh more intimately. But there
are unique ways that AI can make
predictions about you when you interact
with it and then change the level of of
advice that it gives you or how well it
performs for you based on what it knows
about you. So there was a study done, um, using most of the most famous AI systems, and it showed that when you asked the AI systems about African-Americans, they gave all great, positive reviews. But when you gave the AI system an example of text that had more traditional African-American English in it and asked the AI systems questions about that user, the AI system would say, "Oh, this person's never going to go anywhere. Uh, I can't even imagine a job for them. They'll be in low-wage jobs." Picture
this in education: the AI system detects somebody has this kind of ethnic background or is this gender, and then gives the teacher worse feedback on that student in terms of assessments, or gives the student worse advice in problem solving, because it has already made a prediction that that student is not going to go anywhere in life. So these are the more subtle ways we have to apply foresight to ethics, or ethics and foresight, in academia. And
I'd say the final thing that we're going
to have to watch out for, and we saw
this with social media after the fact,
is the relationships kids are going to
build with these systems. We are now
giving kids access to an infinite, never-ending opportunity to engage with an imaginary friend, something that is always on and can answer all of their questions. That is a recipe for a new type of addiction, and we have to really be looking out for this. Uh, we kind of missed the boat on smartphones, and now we're all trying to get them back out of the classrooms. We can see this line of sight directly with AI systems and chatbots. Uh, and this isn't, of course, all on educators. This also falls to, you know, tech companies: how we design these systems, age-gating them. But something to look out for is this kind of new addiction that might form between kids and chatbots, because that is not going to end up well, and we should do our best to bring parents on board with that. So even if that's at parent-teacher interviews, just casually saying, "Look out for the amount of time your kid spends chatting with a chatbot. I noticed they were a little bit more disengaged in class; that could be why." Uh, so this is another area that we have to apply foresight to, but we can see that line of sight happening quite clearly if we don't intervene.
>> Yeah, in a similar way that we've been talking about, you know, parents and learners having that visibility into their own data and, kind of, their performance and how engaged they are with their work, should there be a case where everybody has that visibility into the relationships with these chatbots? Where do you think that line can be drawn? I feel like if there is that visibility, then people can be a little bit more relaxed. But then, is that
>> Yeah, I would say that question needs to be answered by a psychiatrist and a psychologist. That is why these are multidisciplinary conversations. We need to bring everybody to the table. Um, an addiction to, or a relationship with, a chatbot shouldn't be something that kids download in the app store. Um, so
psychologists, psychiatrists, doctors, I
welcome you to this conversation because
we need your voice in it. It can't just
be happening out of Silicon Valley. It
can't just be left to parents to deal
with on their own. Everybody needs to
come to the table. We saw what happened
with social media. We don't have to do
that social experiment again.
So well said. Okay, we're going to take
some questions here. This one's from
Rob. How do you see AI increasing the digital divide, especially in underserved communities in developing nations, and how do we as leaders stop this cycle?
>> We can see that general purpose technologies build on each other, right? So the communities that didn't get equal access to electricity are the communities that are struggling with the digital divide, and then there will be an AI divide. Uh, that is why that first pillar that I
discussed AI as a hard skill teaching
kids how to use artificial intelligence
how to prompt it, how to use it safely
is vital because that may be the only
opportunity kids get access to these AI
systems. So that's why it's not about pushing AI out of schools. Um, it's being very careful about adjusting how kids learn with AI, but making sure we build AI as a hard skill is absolutely vital in schools and in education. When it comes to the broader world, uh, this is a question that nation states are facing urgently: um, making sure there are things like sovereign AI, that every country gets access to computing power and the opportunity to build the STEM skills within their population to adopt these technologies. Uh, that is a
global conversation um that's also
happening against a very geopolitically
uncertain time. Um but it's a really
important question and unfortunately I
wouldn't be able to answer it in in in
30 seconds.
>> Yeah. And just to add something small to that: in a way, AI is introduced to everyone, because everyone's got a smartphone regardless of their socioeconomic situation. But if students aren't taught how to use it and just over-rely on it, you know, that could put some at a disadvantage.
Uh, let's take another question. Um, we're aware that AI cannot replace in-person instructors, but will it, and should it, replace the online asynchronous instructors in higher ed?
>> I'm not exactly sure what is meant by this question.
>> Yeah, I guess, um, how I interpret this question is: so we know the value of in-person instruction and the need for that human connection. There are other modalities of learning. Some is kind of on-demand learning, and then you've got some which is live, sort of synchronous but digital. Um, my thought on that is, I think when content is pre-recorded, maybe that's not the best use of a teacher's time to have sat in front of a camera and read through all of that content themselves. Maybe that is a scenario where you can outsource that to an avatar or an AI in a different format that is proven to be more personalized and adaptive. And I would imagine that any human-to-human interaction that's focused on human connection is good, whether that's in person or has to be, um,
online. And I think there's also something interesting here, and we actually don't know the answer to this question. But if you're taking, say, a physics class online, what happens when the physics teacher is also now powered by these supercomputers, and their perspective on physics changes, and how they see the world, and then you're getting access to that person in addition to the AI? So I think the jury is still out on how that would unfold, specifically as it relates to online learning.
>> Yeah, absolutely. And, you know, I work with a company that creates AI twins for experts, and what's going to happen next is that experts trade on their expertise, they own their expertise, but they're going to be able to enrich that expertise with real-time data that they choose to bring in. And so, you know, would you speak to that real expert, or would you speak to that expert's AI twin? Well, in some cases it might be advantageous to speak to the AI twin. Even though with the expert, you know, the real in-person experience, you can have much more creative conversations, there might be scenarios where the AI twin is actually more valuable for certain contexts. I
think tackling the last one is
interesting. What are the pros and cons
of developing skills for prompts when
using AI? It is becoming critical for a
career. How will it impact social
skills? Um, so the pros are: the more you understand how to direct an artificial intelligence system, the better the response, and the better access you'll get to how the AI processes that data. Um, so that I think is very helpful. Another pro is teaching people how to process what is in their mind and formulate that into a question that can lead to some response. The con I see is that we end up refining all of our ideas and knowledge and optimizing it for algorithms. So we will become optimization engines for algorithms, and I don't think that's the world that we want to get into. I think there are unique advantages that
artificial intelligence provides in how it interprets data, and there are unique advantages to how humans approach data, and we don't want to make our approach to thinking optimized for artificial intelligence. We want AI to be optimized for us. Um, and so I think that would be the con. I think this is going to be only a temporary challenge, uh, as we're seeing the nature and the science of prompting is continuously evolving, and eventually it will turn much more conversational. So the way you talk to your colleague or your teacher or your friend, you'll be able to engage with AI in that way. But that still means communication is absolutely vital: understanding how to share your ideas, and that isn't something that we always center in education. Uh, but being able to vocalize your ideas and refine the knowledge that you have, in a way that's easy to understand and to interpret for the general public and not just for AI, will be vital in the future.
>> Yeah. And I think as AIs become better at prompting themselves and all of that, you know, well, where does the human go? The human needs to go deeper. They need to get more creative. Like, what are these prompts even about? What is it that I'm trying to achieve? What could I achieve? And so I think that trajectory is a positive one for humans. Like, how do you dig deeper into your human ingenuity, because all of these things can be handled for you? And so I think that's a net positive for, you know, using AI in the right way. I know there's a question that's received the most likes, and I wonder
why. What occurs when the US Department
of Education is demolished, and how do we move forward to make sure all states receive equal AI education?
>> I think this goes back to the first question. Investing in children's future is an investment in the national interest.
They are fundamentally coupled. So if you want to talk economic strength, economic security, and national security, you are inherently talking about the success of the next generation. Uh, so I am not involved in how this is being dismantled, but I really hope, um, we are prioritizing and centering children, and their ability to self-actualize and reach the maximum capabilities that they can, in the decisions that are made, because that is going to be deeply coupled with longevity and state continuity. So they can't be decoupled. And that's why I say education is a national security issue. They need to be in the same room.
So these are fantastic questions. Um I
did want to leave just a couple of
minutes uh for Chenade to share just
some final rounding thoughts on this
last day of South by Southwest edu on AI
and the future of education.
Um, well, first of all, just a major shout out to teachers, because this is an incredibly complex time, and they are dealing with the most prized asset on the planet, which is children. Uh, so I mean, I think they don't get enough credit for the moment that they're navigating.
Uh, and I think something to remember: we're going to continue to
hear about advanced artificial
intelligence systems, quantum computing,
space, all of these deeply technical
advancements, but some of the most
important skills have nothing to do with
the technology. Um, and even for parents, it's not being able to navigate an iPad passively at five that will dictate whether your child will do well in the future. If you said, you know, "My child doesn't really like working on the iPad, but she's reading four books a day. She loves her sports teams. She wants to spend too much time at the park," I would say that child is going to
thrive in the future. So even though
there's a lot of pressure to adapt to
this moment, remember it is the
non-technical skills that we need to be
centering because we are preparing kids
for a future we cannot see, which means
we have to prepare them for anything
regardless of the way technology
evolves.
And on that note, I think we will close.
Thank you for being an absolutely
fantastic audience. What impact do you
think AI will have on the workforce? And
do you think we are headed for an
identity crisis?
>> And this is the question that's
fascinating about AI. What else can I
become? Very few people have the courage
to ask that question. Why? Because they
look in the mirror in the morning and
they see an engineer or a doctor. They
don't see a person.
>> If they're not looking at artificial
intelligence and asking, "What are we
going to become with this technology?"
Would you say it's the beginning of the
end for