TRANSCRIPT (English)

You’re Not Behind (Yet): How to Learn AI in 17 Minutes

17m 23s · 2,748 words · 405 segments · English

FULL TRANSCRIPT

0:00

Most people using AI are doing it wrong,

0:02

which is why it's surprisingly easy to

0:04

get ahead of 99% of them. I have spent

0:07

over 20 years in tech and AI as a CEO,

0:10

board member, investor, building

0:12

billion-dollar companies. And here's what

0:15

I'm seeing. The gap between people who

0:17

understand AI and those who don't is

0:19

getting wider faster. In this video,

0:22

I'll give you a clear seven-step roadmap

0:24

to master AI like the top 1%. And the

0:28

best part is you can actually do it in

0:30

just 30 days, even if you're a total

0:32

beginner. Let's dive in. Week one starts

0:35

with learning what I call machine

0:37

English. Most people talk to AI like

0:40

it's a person. And that's a huge

0:42

mistake. Why? Because the generative AI

0:44

systems like ChatGPT don't actually

0:47

understand our language. They predict

0:48

it. And that's where most people get

0:51

stuck. If I said, "Humpty Dumpty sat on a..."

0:55

your brain's going to fire "wall." You

0:58

knew what was coming. Your brain

0:59

predicted it. You could have said Humpty

1:01

Dumpty sat on a roof. Now it's accurate,

1:04

but you knew wall was more likely based

1:07

on what you've seen before. Think about

1:09

Google search. It does autocomplete the

1:12

same way. Why? Because it has seen so

1:15

many search queries before. It has

1:17

learned from it and now is giving you

1:19

the most likely option. AI models like

1:21

ChatGPT or Gemini work in a similar

1:23

fashion, but they're different than

1:25

search engines because they don't store

1:27

any pre-baked answers. They

1:29

generate the answer on the fly. How do

1:32

they generate it? Like at a very high

1:33

level, AI breaks your text into smaller

1:36

parts called tokens. Each token is a

1:40

word or sometimes a part of a word.

1:42

Humpty is probably one token. Dumpty

1:45

could be another token. Sat another

1:47

token. Wall another token.
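At a toy level, the token-and-prediction idea above can be sketched in a few lines of Python. This is a deliberately crude illustration: real models use learned subword tokenizers (such as BPE) and deep neural networks, not whitespace splits, word counts, and hand-made 2-D vectors.

```python
from collections import Counter, defaultdict

# A tiny corpus to "learn" from.
corpus = [
    "humpty dumpty sat on a wall",
    "humpty dumpty had a great fall",
    "the cat sat on a mat",
]

def tokenize(text):
    # Whitespace "tokenizer"; real tokenizers split words into subword pieces.
    return text.lower().split()

# Count which token follows which across the corpus.
follows = defaultdict(Counter)
for line in corpus:
    toks = tokenize(line)
    for cur, nxt in zip(toks, toks[1:]):
        follows[cur][nxt] += 1

def predict_next(token):
    # "Most likely next token" = the most frequent follower we have seen.
    seen = follows[token]
    return seen.most_common(1)[0][0] if seen else None

print(predict_next("sat"))  # prints 'on' -- both corpus uses of "sat" precede "on"

# The "embedding space" intuition: similar words get nearby vectors.
# These 2-D vectors are invented for illustration; real embeddings have
# hundreds or thousands of learned dimensions.
embedding = {
    "humpty": (0.9, 0.8),
    "egg": (0.85, 0.75),
    "wall": (0.7, 0.9),
    "chocolate": (-0.8, 0.1),
}

def distance(a, b):
    # Plain Euclidean distance between two 2-D points.
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

# "humpty" sits far closer to "egg" than to "chocolate" in this toy space.
print(distance(embedding["humpty"], embedding["egg"]) <
      distance(embedding["humpty"], embedding["chocolate"]))  # prints True
```

The point of the sketch is only the mechanism: tokens in, proximity and frequency out.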

1:49

Then AI converts each token into a list

1:52

of numbers, also known as

1:54

multi-dimensional vectors. Those numbers

1:56

are placed inside a massive mathematical

2:00

space called an embedding space. And in

2:03

that massive space, similar ideas tend

2:06

to live closer together. The system has

2:09

learned from previous experiences. So,

2:11

it knows that the word Humpty, egg,

2:14

wall, and fall will be closer,

2:16

but they're going to be far from words

2:18

like motorcycle or chocolate. Now, when

2:21

it's time to generate the answer, AI

2:23

looks at the context and predicts the

2:26

most likely next token. So, when it sees

2:29

Humpty Dumpty had a great, it weighs all

2:32

the options. Humpty Dumpty had a great

2:34

party. Humpty Dumpty had a great day.

2:36

Humpty Dumpty had a great chocolate. And

2:39

it sees that the word fall is the most

2:41

likely outcome. So the line is generated

2:44

and finished not from memory, not from

2:47

stored facts, but from probability and

2:50

proximity. That's why AI can feel so

2:54

smart, but also so alien. Now,

2:56

I'm skipping a lot of

2:58

details here, but the important takeaway

3:00

here is that when your prompt is vague,

3:03

this guessing machine called

3:04

ChatGPT or Gemini will produce guesses

3:08

that are also vague. And if your prompt

3:11

is sharp and targeted, AI will come back

3:14

to you with sharp and targeted guesses.

3:17

That's what I call machine English. It

3:19

helps AI to compute your intent, not

3:22

just try to comprehend it. So, what does

3:25

a sharper prompt look like? I call it

3:28

AIM. A is for actor. Tell the model who

3:31

it's acting as. I is for input. Give it

3:34

the context and data it needs. And M is for

3:36

mission. What do you want it to do?
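The AIM structure is easy to keep honest with a small template helper. This is a sketch: the function name and layout are my own, and any format that keeps the three parts explicit works just as well.

```python
def aim_prompt(actor: str, inputs: str, mission: str) -> str:
    # Assemble the three AIM parts -- actor, input, mission -- into one prompt.
    return (
        f"You are {actor}.\n\n"
        f"Context:\n{inputs}\n\n"
        f"Mission: {mission}"
    )

# Example use, mirroring the résumé prompt described in this talk.
prompt = aim_prompt(
    actor="the world's most sought-after résumé editor and business writer",
    inputs="My résumé and the job description for a senior product manager "
           "role at a fintech company (attached).",
    mission="Review the résumé and give me a bullet list of 10 specific "
            "ideas to improve clarity, measurable impact, and alignment "
            "with the role.",
)
print(prompt)
```

Paste the resulting string into whichever chat model you use; the structure, not the helper, is what matters.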

3:38

Instead of typing, let's say, fix my

3:40

resume, try typing,

3:42

Hey, ChatGPT, you are the world's most

3:45

sought-after résumé editor and business

3:47

writer. You've reviewed thousands of

3:50

résumés that led to interviews at top

3:52

tech companies. You've told the AI what

3:55

its persona is, what it's acting

3:58

as. A second line, I'm attaching my

4:02

resume and the job description for a

4:04

senior product manager role at a fintech

4:07

company. That's your input. Third,

4:09

mission. Review it and give me a bullet

4:12

list of 10 specific ideas on how

4:15

to improve clarity, measurable impact,

4:18

and alignment with the role. Your mission is to

4:21

help me build the best resume that gets

4:24

me hired. That's how you take aim. It

4:27

turns a prompt into a structure the

4:30

model can understand, compute, and

4:32

reason with. You can use this three-part

4:35

structure in almost all prompts. And

4:38

from now on, you will start seeing

4:40

results at least five or ten times

4:42

better than before. Only when you learn

4:44

its language does AI finally start

4:47

working for you. Now that you understand

4:50

how to speak to AI, we're going to pick

4:52

your instrument. Here's the thing. Most

4:54

people start their AI journey the wrong

4:56

way. They Google top 50 AI tools. They

4:59

pick 10 and they jump from one to the

5:02

other. They skim through all of them.

5:04

That's a recipe for failure because

5:07

there's so much out there. My

5:09

recommendation, pick one, go deep. Think

5:12

of learning AI the same way you would

5:15

learn an instrument. You know, there is

5:17

a study in Frontiers in Psychology that

5:19

found that drummers pick up guitar

5:21

faster than complete beginners.

5:23

Drumming is not even about melody and it

5:26

requires very different physical skills.

5:29

But I personally had the same

5:30

experience. I spent tens of thousands of

5:33

hours as a drummer. And when I

5:36

picked up guitar, it wasn't easy, but it

5:39

wasn't uncomfortable because I already

5:41

knew how to practice and my

5:43

brain was trained to see structures and

5:46

patterns. The deeper you dig

5:48

into one foundational model, the faster

5:50

you will find the rhythm of all the

5:52

others. So, which one do you pick? If

5:54

you want the most mature one, pick

6:00

ChatGPT. If you're deep into the Google

6:03

stack and Google's ecosystem, try Gemini. If

6:03

you want more business- and project-based

6:06

AI, go with Claude. But really, it

6:09

doesn't matter what you pick. In the

6:11

first week, spend time with one of them

6:13

and learn its personality, its

6:15

cadence, its limits, its strengths. The

6:19

goal is to start feeling the

6:22

rhythm. Once you get comfortable, try

6:24

using the AIM framework that we

6:26

talked about. By the end of week one,

6:29

you should be able to write a structured

6:31

prompt without thinking. All right, so

6:33

we've started using AI. Now, let's talk

6:36

about what actually makes your outputs

6:39

smart, and that's context. The

6:41

world's smartest AI will sound clueless

6:43

unless you feed it context. Every answer

6:46

AI gives depends on how it understands

6:49

the question. If you don't give it

6:51

context, it has no grounding. Remember

6:53

that inside these AI models, there is

6:56

nothing but a crazy mathematical space

6:59

filled with billions of numbers.

7:01

Context is the map that helps you

7:04

navigate that space to tell AI where to

7:07

look and what matters. And the best way

7:10

to build that map is with an acronym I

7:13

call MAP. M is for memory: the

7:17

conversation history or the notes that

7:19

carry over from previous chat sessions

7:21

that you've had with the AI. Now, you

7:23

can repaste the thread or ask the model

7:26

to summarize before starting again.
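That summarize-then-continue trick can be sketched as a two-step loop. The `call_model` function below is a placeholder, not a real API; substitute whichever ChatGPT, Gemini, or Claude client you actually use.

```python
def call_model(prompt: str) -> str:
    # Placeholder only -- swap in a real chat-model client here.
    return "(model response to: " + prompt[:40] + "...)"

def continue_with_memory(old_thread: str, new_question: str) -> str:
    # Step 1: compress the previous session into a short summary ("memory").
    summary = call_model(
        "Summarize the key facts and decisions from this conversation "
        "in five bullets:\n\n" + old_thread
    )
    # Step 2: carry that summary into the new session as context.
    return call_model(
        "Context from an earlier conversation (summarized):\n"
        + summary
        + "\n\nNew question: " + new_question
    )

reply = continue_with_memory(
    "...previous chat transcript...",
    "Given all that, what should I do next?",
)
```

Whether you do this by hand (re-pasting the summary) or in code, the effect is the same: the model starts the new session already grounded in the old one.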

7:28

That's how you'll start building

7:30

continuity in your

7:31

conversations. A is for assets. The

7:34

files, data, and resources that

7:37

you attach or copy paste in your prompt.

7:40

These assets help you ground the model

7:44

in reality. The second A is for actions. Now

7:48

these are the tools that the model can

7:50

call to do work. The action could be

7:52

search the web or scan your drive or

7:56

write this code, or create a Notion doc.

7:59

And P is for the prompt, which is

8:01

the instruction itself. So the better

8:04

you get with memory assets and external

8:07

actions, the better context you'll give

8:10

AI in the prompt. And the richer the

8:13

context, the better the AI reasoning and

8:16

response. Once you start using these

8:18

frameworks like AIM and MAP, you have

8:21

joined the top 10% of AI users. But if

8:25

you want to hit that absolute expert

8:27

level, there is one more thing that you

8:29

really need. Debug your thinking, which

8:32

is step four. When you're not getting

8:34

the right answer, the problem is not the

8:35

AI, it's your thinking. I

8:38

remember the first time I ever prompted

8:40

an AI. It was one of those earliest

8:43

models from OpenAI and I spent an entire

8:47

day trying to make sense of it and by

8:50

the end of it I was super frustrated

8:52

because it was random. It was

8:54

unpredictable. But back then no one

8:57

understood. The phrase prompt

8:59

engineering didn't even exist yet

9:02

because prompting isn't typing. It's

9:04

iterating. When the output is weak, I

9:07

assume the fault is mine because it is.

9:12

Did I give it the right persona? Did I

9:15

provide the right context? Did I give it

9:17

the right goal? And sometimes I even ask

9:19

the model itself, what did you do? And

9:21

why did you choose that answer?

9:23

It will explain its logic. It'll explain

9:25

its chain. And that's when the magic

9:28

starts. You're not just using AI, you're

9:31

learning how it thinks. There are three

9:34

cheat codes I use for that. The first is

9:36

the chain of thought pattern. When the

9:39

answer seems off, I would say think step

9:42

by step. Show your reasoning. Then give

9:45

me the final concise answer. The second

9:47

is the verifier pattern. I would say to

9:50

the AI, ask me three questions that

9:52

would clarify my intent to you. Ask them

9:55

one at a time and then combine

9:57

what you've learned and try again. And

10:00

the third is the refinement pattern

10:03

where you're refining your input itself.

10:06

Before answering, propose two sharper

10:08

versions of my question. Ask which one I

10:10

prefer. So AI will tell me how to ask

10:13

the right way. And then we

10:15

continue. And you have to keep iterating

10:17

with these patterns because these loops

10:20

can teach the model how to understand

10:22

you and teach you how to

10:24

understand the model. Test, tweak, tune

10:27

up, push until you can tell why

10:30

something is working and why something

10:32

is off. That's when it clicks. You're

10:35

not talking at AI anymore. You're having

10:38

an ongoing conversation. You and AI are

10:41

learning together from each other. But

10:44

here's the thing, it's not enough to

10:46

just debug your mind. If your post

10:49

sounds like every other LinkedIn post I

10:52

see that's pasted from ChatGPT,

10:54

you still have a problem. And that's why

10:56

step five is to steer to experts. When

11:01

you ask ChatGPT a question,

11:03

you're not searching a database of

11:05

answers. You're sampling from millions

11:08

of probable ideas that AI has learned

11:11

over time

11:12

and is storing as billions of numbers.

11:15

Some are brilliant, some are average,

11:18

some are completely made up, and

11:20

some are flat out wrong. If you prompt

11:23

vaguely, like explain how to make a team

11:27

more innovative, the model will give you

11:29

a superficial, generic answer full

11:32

of buzzwords. And you'll read it and

11:35

think, "Yeah, I already knew that." So,

11:38

how do you [music] fix that? You direct

11:40

the model away from the middle and

11:42

toward the sharper edges of its brain.
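In practice, that steering is mostly additive: the same question, plus named experts, frameworks, and sources. A small sketch (the helper function is my own; the sources listed are the ones used as examples in this talk):

```python
def steer(question: str, sources: list[str]) -> str:
    # Append named experts/frameworks so the model samples from the
    # sharper edges of what it has learned, not the generic middle.
    return (
        question.rstrip(".")
        + ", drawing specifically on "
        + ", ".join(sources)
        + ". Say which idea comes from which source."
    )

vague = "Explain how to make a team more innovative."
steered = steer(vague, [
    "Pixar's Braintrust process",
    "Satya Nadella's strategy at Microsoft",
    "published Harvard research on innovative teams",
])
print(steered)
```

The vague string gets the middle-of-the-distribution answer; the steered one names where the model should look.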

11:45

So instead of that vague prompt,

11:47

you can say this. Explain how to make a

11:49

team more innovative using ideas from

11:52

Pixar's Braintrust, Satya Nadella's strategy,

11:55

and Harvard's research. Now you

11:58

pull the model from mediocrity into

12:02

mastery by navigating it toward experts,

12:05

frameworks, and depth. What if you want to

12:09

learn about black holes and you don't

12:11

know who the experts are? No

12:13

problem. Ask AI first. List the top

12:17

experts, researchers, and

12:19

research papers and current thinking on

12:22

black holes. Then feed the same thing

12:25

back to the model and prompt:

12:27

"Using these experts and sources,

12:30

synthesize an original framework that

12:33

fills the current gap in the science of

12:35

black holes," or whatever it is that

12:37

you're after. That's the way you

12:39

make sure AI is not an echo chamber

12:42

anymore. But remember, you're going to

12:43

need to verify what you get. That's our

12:46

step six. Sometimes AI will tell you

12:49

things like 68% of Americans are getting

12:51

divorced. I mean, you know, it's not

12:53

true. But the scary part is AI will

12:56

sound just as confident when it's wrong

12:59

as when it's right. So, you can tell AI

13:02

100 times, stop making stuff up.

13:06

But all models are essentially

13:09

generative by design. Making

13:11

things up is why they exist. So, what do

13:14

you do about that? You simply verify.

13:17

Don't just consume. Critique. There are

13:20

five ways to separate intelligence from

13:23

illusion. Assumptions, sources, counter

13:27

evidence, auditing, and cross model

13:29

verification. Let's take one at a

13:31

time. Assumptions. Ask: List

13:34

every assumption you made and rank them

13:37

each by confidence. Second is sources.

13:40

Ask: Cite two independent

13:41

sources for each major claim that you

13:43

just made. Include title,

13:46

URL, and a one-line quote. Now you can

13:48

check it yourself. That's the

13:50

scaffolding behind the answer. Counter

13:52

evidence. Push it. Find one credible

13:55

source that disagrees with your

13:57

answer. Explain the discrepancies. That's

14:00

where real reasoning lives. Auditing is

14:02

the fourth one. Ask:

14:04

Recompute every figure. Show your math

14:07

or code. You'll be shocked how often the

14:10

numbers change once you make it slow

14:13

down and start auditing. And

14:15

finally, cross-model verification. This

14:18

one's my favorite. I run the same prompt

14:21

in ChatGPT, Gemini, and Claude.

14:23

I take the output from one model

14:26

and ask another to critique it. Or

14:28

I feed the claims of one model

14:30

into the other and say, "Verify this."
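Cross-model verification can be sketched as a short loop. The `ask_chatgpt` and `ask_gemini` functions below are placeholders, not real APIs; wire in whatever clients you actually have.

```python
def ask_chatgpt(prompt: str) -> str:
    return "(ChatGPT answer to: " + prompt[:30] + "...)"  # placeholder

def ask_gemini(prompt: str) -> str:
    return "(Gemini answer to: " + prompt[:30] + "...)"   # placeholder

CRITIQUE = (
    "Verify the following answer to the question '{q}'. List each claim, "
    "mark it supported or unsupported, and cite a checkable source:\n{a}"
)

def cross_check(question: str) -> dict:
    # Run the same prompt through both models, then have each critique
    # the other's output -- the disagreements are where to dig.
    a = ask_chatgpt(question)
    b = ask_gemini(question)
    return {
        "chatgpt": a,
        "gemini": b,
        "gemini_critiques_chatgpt": ask_gemini(CRITIQUE.format(q=question, a=a)),
        "chatgpt_critiques_gemini": ask_chatgpt(CRITIQUE.format(q=question, a=b)),
    }

report = cross_check("What share of US marriages end in divorce?")
```

Where both models agree and cite the same sources, you have something; where they contradict each other, you have your verification work cut out for you.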

14:33

That's how you separate noise

14:34

from knowledge. By the end of your third

14:37

week, you'll start feeling more

14:39

in control of your output. But here's

14:42

the problem: the best AI outputs aren't

14:45

the ones that sound the most original,

14:47

they're the ones that sound like

14:49

you. That's why step seven is about

14:52

developing taste. Most people use AI

14:55

like a vending machine. They push a

14:58

button, grab the same junk food output

15:01

everyone else gets, and call it a day.

15:03

If you do that, most people will know

15:05

you just copy-pasted it. But you are

15:08

past that now, right? It's your fourth

15:10

week. It's time to step into the ring.

15:12

Treat AI like your sparring partner.

15:15

Argue with it. Push back. Sharpen your

15:18

thinking. Sharpen its thinking. That's

15:21

where the OCEAN framework comes in. It's

15:23

how you turn generic answers into

15:26

tasteful insights. Something that sounds

15:29

like you. O is for original. Look at the

15:32

response. Is there a non-obvious idea in

15:35

it? If not, push it. Ask: Give

15:39

me three angles no one else has thought

15:41

about. Label one as risky and recommend

15:44

the one that you like the most. C is for

15:46

concrete. Are there names, examples, and

15:49

numbers that make sense? If not, ask.

15:52

Back every claim with one real example.

15:55

E is for evident. Is the reasoning

15:58

visible? Is there enough evidence? If

16:00

not, ask. Show your logic in three

16:02

bullets. Provide evidence before

16:04

you provide the final answer. A is for assertive.

16:08

Does it take a stance you could agree

16:10

or disagree with. If not, push it again.

16:13

Don't tell me what I want to hear. Pick

16:15

a side. State your thesis, defend

16:17

it, and then address the best

16:19

counterpoint. N is for narrative.

16:22

What's the story? Does it flow?

16:24

Is it tight? Guide it. Write it like a

16:26

story. Hook, problem, insight, proof,

16:28

actions, whatever you want in that

16:30

story. So, that's the OCEAN framework to

16:33

add taste to your output. Now, as you

16:36

apply this over 30 days, you will start

16:39

noticing something deeper. Every

16:42

prompt you write, every revision you

16:45

push, every judgment you make, you're

16:49

not just training the model, you

16:51

are training you. AI is coming whether

16:54

we like it or not. To some, it

16:57

might be triggering lots of deep fears,

17:00

but I remain a perpetual optimist.

17:04

I think AI is not here to

17:07

replace human work. It's here to restore

17:10

human worth. If you like this video,

17:13

don't forget to subscribe and

17:15

check out my most recent video here.

17:18

Thank you and I love
