TRANSCRIPT (English)

From Writing Code to Managing Agents. Most Engineers Aren't Ready | Stanford University, Mihail Eric

14m 7s · 3,298 words · 470 segments · English

FULL TRANSCRIPT

0:00

There is an emergence of a new class of engineer: the AI-native engineer, and AI is that new language. This particular generation of junior developers, of junior engineers, the people now entering the workforce, will, I think, be the first generation of that new shift. A single developer becomes a manager of agents. Adding more agents doesn't always make for a better system; in fact, it can make for a much worse system if you just let them go and do whatever they want. So knowing how to properly handle multiple agents is like the last boss in a game: if you can do that really well, you are literally in the top 0.1% of users, even today. I'm Mihail. I lead AI at an early-stage startup here in San Francisco. I also teach a class at Stanford; the title of the class is The Modern Software Developer. It's definitely the first class where the focus is AI across the SDLC. Within a few hours of the class being announced and opened for enrollment, it filled up, with over 100 students trying to get into the class.

1:06

>> What is happening to junior software engineers?

1:11

There was this huge momentum around the sense that something crazy is happening in software development: AI is really starting to make its way into every single part of how software is done, and clearly something was changing. I've heard some pretty scary anecdotes. I was talking to someone who had just graduated from Berkeley, and they said they had applied to about a thousand places and heard back from only two. Not even interviews where they had gone through the pipeline; they just heard back. So the reality is that for a lot of junior engineers, it's very difficult to get some of these roles. It's an interesting time in the software ecosystem, where basically three things came together in a kind of perfect storm. The first was that around 2021 there was a huge surge of hiring; soon after COVID, a bunch of companies felt they needed to increase their employee count, and then a lot of companies realized they had overhired. So there were massive layoffs, where all these companies that had hired a ton of people realized they could reduce their workforce by 20–30% and still be okay. That was combined with the fact that the CS major, nationally and internationally, has grown tremendously in the last 10 to 15 years. When I was graduating, there was some number of graduates, and since then I think it has doubled or maybe tripled in terms of how many CS students graduate every year. So you have a huge workforce of people who have essentially been laid off, and an overwhelming new generation of engineers who want jobs. The third thing that contributed to all this was that AI became popular; people started really paying attention to it. So a lot of employers started considering: do I need to hire more people to fill my gaps, or can I just hire fewer people who are AI native and cover my hiring needs that way? And so this particular generation of junior developers, of junior engineers, the people now entering the workforce, will, I think, be the first generation of that new shift, where they have to both have good fundamentals and know how to be fully AI native.

3:13

>> How top 1% AI-native engineers orchestrate agents

At its core, I think the AI-native engineer is one who has a strong backing and foundation in traditional programming, system design, and algorithmic thinking, but is also very competent at using agentic workflows. I always teach them: build it up piecemeal. Boris from Claude Code said he runs ten agents at once, so I should start doing ten agents at once? That's the wrong takeaway. I would build it up one at a time. I would say: I'm really good at running one agent workflow quite well, and I can build a complex piece of software with one agent, but I know I have to do this other thing, which is maybe a small change. Think about your tasks as things that are isolated and that can be done with confidence by a second or third agent. So you add a second agent to fix the logo, and you're like, well, this agent is fixing the logo. Another agent could also update the copy on the header of the website; again, this is an isolated change that has nothing to do with what the second agent was doing. So the way I would think about it is: iteratively add more work for the agents. Make sure you first understand what has to be done, and then know where the lines are between those items of work. When you're feeling good about how one agent is doing something, add a second one. If the second one is doing well and you're feeling confident, add a third. I would build it up step by step rather than ten agents at once. The second thing that I think is really important is knowing how to context switch.
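The incremental build-up described above can be sketched in toy Python form. Nothing here is a real agent API: `run_agent` is a hypothetical stand-in for however you kick off a coding agent (a CLI or API call), and the point is only the shape of the workflow, namely isolated tasks and a pool you widen one step at a time as your confidence grows.

```python
from concurrent.futures import ThreadPoolExecutor

def run_agent(task: str) -> str:
    """Hypothetical stand-in for kicking off a coding agent
    (e.g. a CLI or API call); here it just echoes the task."""
    return f"done: {task}"

# Each task is isolated: no task touches the files another task touches.
tasks = [
    "fix the logo asset path",
    "update the header copy",
    "bump the footer year",
]

# Start at 1; raise this only once the single-agent workflow feels solid.
max_agents = 2

with ThreadPoolExecutor(max_workers=max_agents) as pool:
    results = dict(zip(tasks, pool.map(run_agent, tasks)))

for task, outcome in results.items():
    print(f"{task!r} -> {outcome}")
```

The design choice mirrors the advice in the transcript: the parallelism knob (`max_agents`) is separate from the task list, so scaling up is a deliberate decision rather than a default.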

4:40

In practice, what you're doing is kicking off these very eager, savvy interns, these agents. They're doing a thing, and you're watching them in the terminal or in the IDE, seeing them do work; they're contributing code, and it's getting written somewhere, but sometimes they get stuck. How do you go from one to another, understanding that agent one was working on this particular task, agent two on another task, agent three on another task, while constantly switching back and forth? It's a very difficult thing to do even as a human: to remember what the last thing you were working on was, but still have enough context to meaningfully push each task forward. That switching, I think, is probably one of the core skills of getting multi-agent workflows to work really well. What I've described is basically what makes a good manager, a good human manager; it has nothing to do with agents specifically. If you can do that task really well, then you'll also be a good human manager in general. And so the people I've seen who are best at this are the ones who have also been managers of humans, of human developers, and have learned how to do that context switching, and then apply similar principles to agents.

There's a concept that I'm calling an agent-friendly codebase, or an agent-friendly development ecosystem. What I mean is: if an agent were released into your codebase, would it know how to understand what's happening there? When you release an agent to go and build in the context of your codebase, the way you ensure it's not going to break something, and that whatever it contributes will work, is that it tests its work against your tests, which are basically contracts that define the correctness of the software. You need to define these contracts. If you don't have enough test coverage, then you don't have contracts for your software, and agents can only operate on explicitly defined contracts. Any developer who's been in the industry knows that READMEs get out of date with what's happening in the code almost immediately. So you have two descriptions of the same thing: the code says one thing, but the README says something completely different. If your code has that kind of situation, the agent will read the README and maybe the code, and ask: which is the right interpretation? Should I follow what the README says or what the codebase says? So make sure they're consistent; it's a simple thing. When you get spaghetti code, it's typically when an agent has gone and built something over multiple iterations, maybe multiple features, and started going off the rails a little bit. One bad thing agents are really good at is compounding errors very quickly. If an agent has one misunderstanding in the code, and then it sees the misunderstanding it created in step one, it can double down and create another error in step two; it will magnify it. The most important thing is making sure that the first thing the agent sees is completely robust and completely airtight in terms of design, testing, and the build, a lot of these core parts of the codebase itself, before you even think about the agent.
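The "tests as contracts" idea can be made concrete with a minimal sketch: a small function plus executable assertions that pin its behavior, so any agent-written change either honors the contract or fails loudly. The function and cases are invented for illustration, not taken from the speaker's codebase.

```python
def slugify(title: str) -> str:
    """Turn a page title into a URL slug.
    The test below is the contract an agent must keep satisfying."""
    return "-".join(title.lower().split())

def test_slugify_contract():
    # Contract: explicit, executable expectations about behavior.
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Spaces   Collapse  ") == "spaces-collapse"
    assert slugify("already-lower") == "already-lower"

test_slugify_contract()
print("contract holds")
```

With a contract like this in place, an agent that "improves" `slugify` in a way that changes observable behavior gets caught immediately, which is exactly the gate the transcript is arguing for.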

7:25

So again: make sure that the first version of your code an agent sees is self-consistent; make sure it's well tested; make sure you have linting and style checking in place so your codebase is consistently formatted. A lot of these things will ensure that your agent always adheres to the rules of your codebase that you've already defined. The last thing I'll add, just to give another example of agent-friendly, agent-first codebases: are you consistent about design patterns in your code? What I mean is: if there's one part of your codebase where, to create a certain kind of object, you use one API, and another part of the codebase where you create the same object using a different API, then when an agent has to develop in your codebase, which of the two should it use? API one or API two? And if someone's agent goes and picks the wrong API, well, a human would also have been confused. If I walked into your codebase and saw the two different ways of doing it, I would also ask myself: should I do one or two? I don't know; I see both. And I would probably end up asking a teammate: hey, which of these are we actually supposed to use? Consistent design patterns, and programmatic patterns generally, are, I think, also something that the best agent-friendly codebases I've seen use.
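The "two APIs for the same object" problem can be avoided by exposing a single blessed construction path. A toy sketch, with invented names, just to show the shape:

```python
from dataclasses import dataclass

@dataclass
class User:
    name: str
    email: str

def make_user(name: str, email: str) -> User:
    """The one canonical way to build a User. Agents (and humans)
    never have to choose between competing constructors scattered
    across the codebase; validation lives in exactly one place."""
    if "@" not in email:
        raise ValueError(f"invalid email: {email!r}")
    return User(name=name, email=email.lower())

u = make_user("Ada", "ADA@example.com")
print(u)
```

If direct `User(...)` construction is also discouraged by convention (or lint rules), an agent reading the codebase sees only one pattern to imitate, which is the consistency being argued for above.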

8:34

>> Functional software versus incredible software

8:38

There are a few things that separate functional software from incredible software. One version of the answer is just taste: what is good software taste? And genuinely, there are people who have taste and people who don't, or really, the people with taste are the ones who spent more time developing it. When I look at the students in my class: we had some requirements, like you have to build five different flows or something. You can create those flows, but if you want to push yourself by doing the bonus, and then the extra credit, that is, I think, where the difference starts to arise. It's when someone says: I know I've already hit 100% on this, or got most of the credit for the assignment or the project, but I'm invested in building the most complex thing, because I want to solve a problem more than just get the grade. The taste-building happens in that last mile, where you go and do the extra work to expand the feature, make it more robust, make more things possible in the application. The students who I think did the best are the ones who are now literally building startups around their projects, because they see that there's something there and they're rolling with it. The class ended, but they're still working on the exact same thing, because they think there's more to build there. And that, I think, is the way the top engineers think.

Experimentation is the name of the game in becoming an AI-native software developer. One example that comes to mind is when Boris from Claude Code came to speak. Someone like Boris, even a team like Claude Code at Anthropic that is building such an amazing piece of software, basically rewrites Claude Code every week or two, using Claude. So they are constantly rewriting their own piece of software with software they've built. And so they themselves are also figuring things out as they go: they're building their system, but they're experimenting and constantly iterating based on feedback from their users. Even if they seem like they have all the answers, they don't; they themselves are also discovering what works and what doesn't. So the more important thing is to build experimentation into your own workflows. What I tried to reinforce in the students was: look, I can come here and give you suggestions; I can say, you should try this tool, here's what I think is good about it. But at the end of the day, you have to beat your head against the wall a little bit yourself. You have to experiment, see what works for you and what doesn't, and make that a part of the new way of doing software development: experimentation, hacking, making it part of your workflow.

10:57

>> Why the world still needs junior software engineers

11:00

Senior developers historically tend to be a little resistant to AI tools, because they're so ingrained in their own way of doing things; they've been developing for 20 years and they say, "Oh, the only way to do this is the way that I've done it." The senior developer is sometimes going to be the most stubborn. But someone who is coming into the industry for the first time is like a sponge: everything is possible to them; they're learning things for the first time. All the things that are difficult about the world and society and industries and verticals, they haven't internalized yet. They're not scarred by how hard healthcare is. They just see: oh, I see a problem; why don't I go try and do it? So there's a good naiveté to how young people think, which is perfect for a startup founder: they're going to be brave enough to go and tackle the thing. In those situations, they end up being the people who have best adopted the skill set that everyone is now asking for. Even if there is concern that it's becoming harder to get employed, I think the people learning these skills for the first time end up being the most nimble, and the fastest at applying those skills. So I actually think they can still succeed in ways that senior developers cannot.

Fundamentally, what you're teaching with software is how to think about building a complex system using digital means, and how to use algorithms to solve that system. This is almost more like math than it is CS; you're learning math skills, almost. And I think that's just teaching someone how to think, because so much of the CS profession is breaking things up, seeing how things work, fixing things, expanding on things, and iterating on things. So I think the people who are developers by trade are a lot more willing to customize things, a lot more willing to fix things when they don't work, a lot more willing to say, "Hey, why did this happen? Let me see if I can get into the internals a little bit," in ways that other people are more like: the system doesn't work, okay, I guess I need to move away from it. It's almost like arrogance: the arrogance of a developer who sees any problem and thinks software is the solution to it. It's the confidence to say: hey, I'm going to try and fix this in a way that I know, using the tools I know how to use, and let's see if we can make this work. And that, I think, is one of the most powerful properties of CS developers.

13:10

>> So you're like: Claude, make me something. Codex, make me something. And then: let's add this other feature. And then: let's do another one. And a month goes by and you've built the most beautiful piece of software, crazy overengineered, and then you launch and nobody wants it. Hi, my name is Rem Koning. I'm a professor at Harvard Business School, and I study entrepreneurship and AI. I think we're in a world where, increasingly, what matters is your ability to allocate intelligence. The key to being AI native is that you're not just using AI to do the work; you're embedding it in the product so that the AI can directly do the work with the customer. You want to take you, the human, out of the loop. That's the key to building AI-native organizations. What happens when the AIs start talking to one another? What happens when the AIs start collaborating? What do they need from one another? That, I think, is a big, interesting open question. It's a little provocative to think that way, but I think it's one where there will probably be some trillion-dollar companies that come out of answering that question.
