
omfg | This changes EVERYTHING in AI [NVDA / Groq]

29m 10s · 4,766 words · 696 segments · English

FULL TRANSCRIPT

0:00

This changes absolutely everything for Nvidia. This is remarkable, especially with Nvidia trading at just a 1.67 PEG, suggesting that Nvidia could be worth over $300 per share.
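The PEG arithmetic behind a claim like that fits in a few lines. This is a rough sketch: the forward EPS, growth rate, and target multiple below are illustrative assumptions, not figures from the video — only the 1.67 PEG and the $300 figure come from the claim above.

```python
# PEG = (P/E) / expected EPS growth (in %). A lower PEG reads as
# "cheap relative to growth." All inputs here are illustrative assumptions.

def peg_ratio(price: float, eps: float, growth_pct: float) -> float:
    """PEG ratio from share price, earnings per share, and growth rate."""
    return (price / eps) / growth_pct

def price_at_peg(eps: float, growth_pct: float, target_peg: float) -> float:
    """Share price at which the stock would trade at `target_peg`."""
    return target_peg * growth_pct * eps

eps = 6.00        # hypothetical forward EPS ($)
growth = 18.0     # hypothetical EPS growth (%)
price = 180.0     # hypothetical current share price ($)

print(round(peg_ratio(price, eps, growth), 2))    # 1.67 -- "cheap vs growth"
print(round(price_at_peg(eps, growth, 2.78), 2))  # ~300 at a richer PEG multiple
```

The argument only works if the growth estimate holds: run the same arithmetic with a lower growth number and the identical share price looks expensive instead of cheap.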

0:16

Now, in this video, I'm going to break down and reconcile why it would make sense not to hold Nvidia — what the big downsides are, especially after this massive $20 billion acquisition. But why did Nvidia actually make this acquisition? What's the play here? What are they getting at? And importantly, what does it mean for companies like CoreWeave, Nebius, xAI? What does it mean for these other companies that have been hoarding Nvidia chips? And better yet, did Nvidia hint that this was exactly what was going to happen, by themselves getting out of the data center rental or leasing business? A lot to break down in this video. Let's get into it.

1:00

A quick note: obviously, I'm not in my studio. I saw the snow in Tahoe and I'm like, Jack, let's go skiing. I asked Max, too, but he didn't want to come. But anyway, we're gonna be skiing this weekend again. [laughter]

1:16

All right, so let's get into it. So, obviously, Groq — some people pronounce it "grog," some people say "Groq"; I'm just going to pronounce it "Groq" — is not to be confused with xAI's Grok, Elon Musk's chatbot. This is a separate company, also called Groq but spelled G-R-O-Q, that is a chip designer and, more importantly, a software designer for those chips. Keep that in mind.

1:46

So this is a $20 billion deal. I wrote down some notes to keep this smooth for us. They were valued at $6.9 billion in September; now they have what is essentially a $20 billion buyout deal going on. Except it's not a buyout — it's a licensing deal. They're doing that as regulatory arbitrage, to get around merger-and-acquisition antitrust laws, blockades, and two-year approval processes. So Nvidia is basically saying: nah, you guys keep doing your data centers; we're just going to take your top talent and your intellectual property, and we'll give the company $20 billion. That's good for the shareholders of Groq, obviously, but it gives Nvidia exactly what it wants. And you'll see in just a moment why this is so important.

2:32

I'm personally very excited about this. When we had our course-member livestream this morning, I said: this is bullish for Nvidia. I don't know how long it's going to take for the market to digest it, but this is overall bullish for Nvidia. That was part of our alpha report this morning. If you're not a member of that yet, go to mekevin.com — you can see that analysis and, of course, get my top 10 stocks to buy for the next 10 years.

3:00

Now, Groq chips. Here's the key. A lot of people are going to say this is a play around memory: Nvidia's GPUs utilize high-bandwidth memory, which is really just stacked-up DRAM ("DRAM memory" is kind of redundant, since the M in DRAM is memory — but anyway, just to keep it simple). Groq chips use a different style of memory. It's more expensive, and you actually need substantially less of it — megabytes compared to gigabytes — to process what Groq does well.

3:36

Now, before I really get into the technical weeds, I think it's worth understanding what Jensen said they're planning on doing, and what the CEO of Groq, the founder of the company, said their intentions are. Jonathan Ross, the founder of Groq, worked on TPUs at Google — he was a contributor to the invention of TPUs at Google in 2016 — and then moved on. He argues that these chips are really designed to be the low-margin inference computers for the artificial intelligence wave. And he made this pitch, possibly in a way designed not to piss off Nvidia, by saying, "Oh, we actually help Nvidia." There are many interviews of him saying: we help Nvidia, because Nvidia can sell their high-margin GPUs, and we'll just take the inference market while they take training — and inference is the low-margin business. Now, I'm going to get to why this was a perfect setup right now. See, the CEO of Groq never said anything bad about Nvidia; instead, he sold his product as something that could actually help Nvidia. And I think that is exactly what set up the acquisition. Now, why is this acquisition so brilliant by Nvidia? And then we can get into some more of the technical weeds.

5:08

Here's a simple way to compare the two chips. This is the simple part; we'll leave the detailed weeds for a little later in the segment. Overview: GPUs are great at training models — think memorizing the entire internet and trying to generate some kind of worldview that we can then ask questions of and generate new answers, new ideas, or new images from. GenAI. TPUs from Google have been an attempt to do something similar at a cheaper cost — basically cutting out Nvidia's margins. That's why Google likes their own TPUs. That's why Amazon is making their own chips. There's an incentive for companies to make their own chips to cut out those 75% gross margins that Nvidia has.

6:01

Now, Groq is really interesting, because Groq doesn't purport to say, "Hey, we're going to help you train this worldview model." Instead, Groq comes in and says, "Hey, we're going to help you where people actually utilize artificial intelligence" — and that's in communicating with AI bots, customer service agents, live translations through, like, Meta glasses. And what you'll find today is that people meme AI chatbots for how slow they are. If you've ever gone on TikTok and seen people joke about talking to GPT in real life, it goes: "Oh, uh, hey GPT. Uh, can you let me know where the closest pizza restaurant is?" ... "Ah, cool. You want to know where the closest pizza restaurant is? Are you looking for Chicago style, or more of an Italian style?" You know, the latency is the awkward part, and people joke about that. Pilots have made jokes about having AI as a co-pilot, and you'll get something like: "GPT, gear down." ... "Ah, I see. So you're requesting that I put the gear down. Is there a reason you want to put the gear down now compared to later?" Just put the damn gear down! The captain then freaks out, right? That's the sort of joke. But that joke really helps us understand where the problem in AI sits today.

7:36

Now, I'm going to inject my opinion here for a moment. I've been providing, in my opinion, facts and data, kind of trying to build this story up, because it's a lot to digest — so we're going to take this elephant in small bites together. That's how I like to teach. So I want you to think for a moment: overall, when you ask a chatbot a question, you're looking for a rapid response, but you don't really mind waiting for a moment, because it's a chatbot — it's in your app. But what happens when you're on the phone and you're selling customer service that seems humanlike, but it has a massive delay? That's where the humor above really highlights the problem with delays. And that gets to my opinion, and it has to do with the commoditization of AI. See, watch this.

8:39

I don't believe that generative AI, or any of the AI we have today, is going to lead to artificial general intelligence. So to understand my conclusion here, you have to understand that my bias is that AGI is bull crap. It is a moonshot idea that helps people raise money for their startups, but artificial general intelligence is not where we're going. That's not where the money is going to be made. So you have to understand that first. Two: not only do I not believe we're going to have artificial general intelligence, but I also believe — and I've been saying this for years — that large language models are going to be commodities. Nothing in the future will be unique about using Gemini, or Twitter's Grok — you know, Elon's Grok — or Llama, or Anthropic's Claude, whatever. None of that will matter. They'll all give you, probably within one percentage point, the same answer. In which case, if they commoditize, then we already have a risk that we've overbuilt our GPU capacity, right? This is why I have the opinion that in the long term, Oracle, CoreWeave, Nebius — these companies are, in the long term, untouchable, because they're bag holders of GPUs that Nvidia wants to sell, not hold. This is partly why I think Nvidia got out of the chip-holding business.

10:06

Now, they've made some creative deals with CoreWeave to lease some of those back, and that creates some circular questions, but ignore that for a moment. So: no artificial general intelligence, commoditized chatbots. Got it? So then the companies that really win are the companies that utilize the chatbots, along with their own custom MLs, the best. Those could be aggregators — aggregators like UiPath for healthcare and finance. They could be aggregators like the ontological setup where you structure data over at Palantir and then aggregate which LLMs you're going to use to create or decipher conclusions from that big data — basically helping big corporations get into big data, right? Where UiPath might appeal to sort of everyone, Palantir appeals, I should say, to the richest companies and the rich government. They've really niched into the expensive customer cohort, whereas UiPath is maybe more of the discount-version player — the "we're here for everybody, to set up these AI agents for everybody," if you will. Salesforce is in that boat as well. So then you have aggregators.

11:19

Okay. So: artificial general intelligence — I don't see it happening. LLMs are just commoditized encyclopedias — think about it like Wikipedia in your eyeballs, okay? You've got Wiki right in front of your face all the time. And then on the flip side, we have aggregators that can make money, or people who figure out how to actually make the AI technology we have today productive for mass industry. This is where, obviously, I'm biased, but I think it's a great example: I like that we are creating real estate AI that does not rely on artificial general intelligence. It actually relies on our own machine learning training and tuning, partly in combination with LLMs, which are commoditized. We love that they're commoditized, because that reduces the expense of using LLMs. But our ML is what gives us proprietary data and skill and technology at House Hack and Reinvest, right? You know that already. We have a fundraise for that closing at the end of the year. If you want to buy the AI product, which releases next week, you can do that at reinvest.co or househack.com — it's the same company. This video is not a solicitation. Yes, we are ending the 5% yield starting December 31st; now, if you get in before that, you still get the 5% yield through conversion. Obviously, it's a real-estate-backed company, and we're launching our AI based on the skills and the MLs that we've been building since 2018. Before the AI boom, we were working on our MLs, and so we're so excited to bring these to mass market, for agents and otherwise.

12:56

So, you have to understand this baseline. Now, where does Groq come into all of this? Where Groq comes into all of this is speed.

13:05

They eliminate the latency of live AI language translations. This is one of the reasons why I think Duolingo has kind of been tanking a little bit: yes, you can learn a language in a fantastic manner with Duolingo, but why do you need to learn a language if you could just put on AI glasses, or use your phone, or use AirPods like what Apple is doing now? Simple: two people put them in, and what do you get? Live translations. Any form of latency kills the functionality of that. Groq's product solves that, and they solve it by using SRAM on the actual chips. Now, that does end up leading to a larger footprint for each chip, which isn't ideal, but it solves the problem by embedding the memory where the compute is occurring. The reason they can do this — and do it with so much smaller a set of memory — well, yes, SRAM is much more expensive than DRAM; it's like replacing copper with gold. It's very expensive, but you need so much less of it. Why? Software.
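The speed claim can be made concrete with a back-of-the-envelope model: token generation is largely memory-bound, so time per token is roughly bytes of weights read divided by memory bandwidth. The bandwidth and capacity numbers below are rough public ballparks, not figures from the video, and perfect scaling across chips is an idealization.

```python
# Toy memory-bound latency model: ms per token = model bytes / bandwidth.
# Figures are rough ballpark assumptions; real systems add interconnect,
# compute, and batching effects this sketch ignores.

GB, TB = 1e9, 1e12

def ms_per_token(model_bytes: float, bandwidth_bps: float) -> float:
    """Time to stream the full weights once, in milliseconds."""
    return model_bytes / bandwidth_bps * 1000

model_bytes = 16 * GB        # assumed 8B-parameter model at 16-bit weights

hbm_bw = 3.35 * TB           # ballpark HBM bandwidth of one high-end GPU
sram_bw = 80 * TB            # ballpark on-chip SRAM bandwidth of one chip
sram_capacity = 230e6        # ballpark on-chip SRAM per chip (bytes)

# Low capacity is the catch: the model must be sharded across many chips.
chips = -(-int(model_bytes) // int(sram_capacity))   # ceiling division

print(round(ms_per_token(model_bytes, hbm_bw), 2))        # 4.78 -- single GPU
print(round(ms_per_token(model_bytes, sram_bw * chips), 4))  # sharded SRAM
print(chips)   # 70 -- the "string lots of these together" tradeoff
```

The gap in per-token time is the "expensive gold, but you need less of it" tradeoff from the paragraph above: each chip holds only megabytes, so you buy many chips, but the aggregate on-chip bandwidth crushes the latency.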

14:17

And this is the 4D chess move from Nvidia. What I've just built out for you are my baseline opinions, the acquisition deal that happened, and how Groq's product is faster even though the underlying memory, SRAM, is more expensive. This is a faster product. But why is Nvidia doing this? What is the 4D chess move? Well, this is where I wrote down that I believe this is a brilliant move by Jensen. Even though people talk about this being a memory hedge — a hedge against rising DRAM prices — to me, this is actually a hedge against artificial general intelligence, as in AGI never happening. Because if AGI doesn't happen and LLMs get commoditized, then GPUs aren't actually that valuable. And guess who's left holding the bag? Not Nvidia. No — Oracle, Nebius, IREN, and the other plays like, frankly, xAI, or any GPU hoarder. They're the bag holders. See, AGI would need a lot of GPU compute, but commoditized LLMs do not. Commoditized LLMs, in my opinion, need speed. We could run a deep research and sit around for 45 minutes waiting for it to finish — or we could get it instantaneously. That's where the money is.

15:41

Now, one of the reasons, practically, that Groq is able to pull this off is that they utilize a software infrastructure to predetermine how the chips are going to operate. I'm not going to get into the deep weeds of how the transistors and the capacitors and the refresh rates and all that stuff work inside the actual chips; we're going to keep this a little more basic. But what I am going to say is that this speed is critical, and the speed is created by software.

16:12

Now, if you remember: what has Nvidia's moat always been? It's not necessarily that they have TSMC supply chains on lock — that is very true — and it's not necessarily that they have SK Hynix's, or Micron's, or Samsung's supply chains on lock to make sure Nvidia gets what Nvidia needs. And it's not to say that Nvidia's GPUs won't be valuable. But it is to say that what has made Nvidia's GPUs even more valuable, and has enabled them to get a lock on the industry, is their CUDA software. But their CUDA software today doesn't actually integrate the same software that Groq uses to speed up inference output. That's what this play is about.

17:07

If you read between the lines of what Jensen shared in a leaked email, he said, quote: "We plan to integrate Grock's low latency processes into the NVIDIA AI factory architecture, extending the platform to serve an even broader range of AI inference and realtime workloads." Ah — realtime workloads. In other words, they haven't actually been able to output realtime workloads for the AI factories — the digital twins that Jensen always pitches — because they don't have as fast an inference technology. Even their Rubin chip, which was supposed to be faster at inference, still relies so heavily on GPUs that, because of the way the GPUs themselves are architected — with built-in traffic cops — they are slower. There is underutilization in the GPUs, and you're just not going to get answers as quickly.

18:08

Now, this is where we can finally put the pieces of the puzzle together to understand the 4D chess move here. And I want to be clear: I'm not trying to shill Nvidia. I think there are big risks with Nvidia — we're going to talk about that here. But this is the 4D chess move. Nvidia buys Groq's IP. They integrate the software technology — using software to be the traffic cop, rather than the chip, to organize in a deterministic way how the chips are going to operate. In English: make it damn fast through software.
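That "software as traffic cop" idea is, at heart, static scheduling: if a compiler knows every operation's duration in advance, it can emit a fixed timetable, and latency stops being a random variable. Here's a minimal sketch of the contrast — purely illustrative, not Groq's actual compiler.

```python
# Sketch: compiler-fixed timetable (deterministic) vs runtime arbitration
# (jittery). Ops and stall bounds are made up for illustration.

import random

ops = [4, 2, 7, 3, 5]   # cycle counts for a toy sequence of tensor ops

def static_schedule(ops):
    """Compiler pre-computes every start cycle; total latency is exact."""
    start, timetable = 0, []
    for duration in ops:
        timetable.append((start, start + duration))
        start += duration
    return timetable, start          # makespan known before running anything

def dynamic_run(ops, rng):
    """Runtime arbitration: each op may stall 0-3 cycles waiting its turn."""
    t = 0
    for duration in ops:
        t += rng.randint(0, 3) + duration   # stall, then execute
    return t

timetable, makespan = static_schedule(ops)
print(makespan)                      # 21 -- identical on every single run

rng = random.Random(0)
print({dynamic_run(ops, rng) for _ in range(5)})  # varies from run to run
```

The static path's answer never changes, which is exactly the property you want when the output is a live translation or a spoken reply rather than a batch job.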

18:40

They're going to integrate that software intellectual property into CUDA. Now CUDA goes from behemoth to the mega-behemoth of the future. See, I think we go from a future of AGI — which ain't happening anytime soon, so nowhere, right? — from this worldview language-model or world-model generation through GPUs, to just: give us answers fast, and give us real-time answers fast. And that will be useful in piloting, in factories, in machinery, in healthcare, in surgeries — anywhere you need realtime solutions. Now, Nvidia builds that software into CUDA, making CUDA even more of a behemoth for the next generation of artificial intelligence. And ironically, the next generation of artificial intelligence isn't actually any different from what we use today. The only difference is that it's faster. It's the same crap delivered to you faster.

19:44

Nvidia moats it into CUDA. And when Nvidia moats it into CUDA, guess who [clears throat] is going to be building the chips for these inference purposes — chips the next bag holders can then buy from Nvidia, at not low margins but at high Nvidia margins? Well, the people who are going to buy them are the next generation of data centers. Whether they're data centers in space — probably not — or just Nebius 2.0, CoreWeave 2.0, Oracle 2.0, whatever, those new bag holders are then going to buy these SRAM-centric chips that speed up our artificial intelligence inference output — so, the answers that we get from AI. So the next generation of bag holders will buy those chips. The first generation — again, Oracle, Nebius, CoreWeave, whatever, xAI — they'll suffer. Gen two will then meme up until the next innovation comes, and then those become the next bag holders. But guess who wins every time, every generation of bag holders, continuously? The company that wins is the one selling the product at the highest possible margin, because they've locked the industry down with CUDA for worldview training and now CUDA for inference output. It's Nvidia.

21:11

It's brilliant. Let me take a look at some of my other notes, because that was all kind of off the top of my head — I've just been absorbing so much of this. I love this; I think it's very exciting. I think Jensen is not only the most brilliant salesman in the world for selling his stock and his company and his product — next to maybe what you see at Palantir — but he actually delivers: these margins are incredible. So let's take a look at some of these other notes. Let's see. Okay — there's this thesis that you spend money on training and you make money on inference. I kind of agree with that. I think people don't really care that you're training a model; they want the output. It's kind of like us with my little AI startup disrupting real estate, right? We're trying to democratize wedge deals across the entire country. Nobody knows wedge deals more than me — we created the wedge deal. Basically, it's like buying Warren Buffett-style real estate with a margin of safety. And if we can put that into an app and sell it and help people make tens to hundreds of thousands of dollars — knock on wood — hopefully the value of that app will be massive. And the point there is not, again, to pitch House Hack. It's simply to say that when you can utilize AI in places it hasn't been used before — like this old-school industry, real estate, that most tech bros know nothing about today — that is where money, actual money, can be made. So I couldn't be more excited about that.

22:43

So, the founder of Groq also talks about how much latency matters. I love this — this was from a 2024 interview. He says: if you show a human A or B and ask which is faster, they won't know. It's imperceptible these days to tell which is 100 milliseconds faster, A or B. But if, let's say, B is 100 milliseconds faster, conversion with that 100-millisecond improvement will increase by 8% on desktop and 30% on mobile. In other words: it matters.
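Reading those figures as relative lifts (an assumption — the quoted interview line doesn't specify), the arithmetic on a hypothetical funnel looks like this:

```python
# Hypothetical funnel: 2,000 baseline conversions, with the quoted
# ~8% (desktop) and ~30% (mobile) relative lifts from a 100 ms speedup.
# The baseline number is made up; only the lift percentages are quoted.

def with_lift(base_conversions: float, relative_lift: float) -> float:
    """Conversions after applying a relative lift (0.08 = +8%)."""
    return base_conversions * (1 + relative_lift)

baseline = 2000   # assumed conversions before the speedup

print(round(with_lift(baseline, 0.08)))  # 2160 -- desktop, +160 conversions
print(round(with_lift(baseline, 0.30)))  # 2600 -- mobile, +600 conversions
```

A tenth of a second nobody consciously perceives still moves hundreds of conversions on this toy funnel — which is the founder's whole point.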

23:12

Speed matters. That's what he's trying to say. But we already know this. I mean, do you want a humanoid robot that's walking around like... booop? No. [laughter] Even Jack knows he doesn't want that — he's laughing about it over there — because nobody wants a booop robot. We want fast answers, baby. So this is very exciting.

23:31

Now, Groq LPUs — language processing units; that's just the fancy name they have for them. Great for voice AI, instant text. They rely on static random-access memory — that's SRAM. It's fast, it's expensive, it's low capacity. The bottleneck is how much capacity you can have, but you can string hundreds of these together if you want — which is exactly what Nvidia loves to hear. Think about that: Nvidia hearing that somebody wants to string hundreds of these together? We'd love to sell that product. It's kind of brilliant. They also kill logic control on the actual chip, which lets the chip just do what it wants to do, which is focus on speed. The analogy I came up with on this: think about the GPU kind of like a semi-truck. Massive volume, efficiency of volume, not very nimble, unstoppable once it's going — it can move a lot of product across the United States really well. But if you're like, "Bro, I just need this letter from Kevin's house to the post office, fast," you take the Tesla Roadster, baby; you take the roof off, you put the jetpack on the back, you get airborne, and you go deliver that piece of mail. That's the difference between Groq's product and a high-bandwidth-memory GPU semi-truck.

24:47

So, some people say Nvidia is just hedging against a DRAM supply chain collapse or some kind of inefficiency coming in the future. Maybe — but I think that's basic-level analysis. Like, okay, sure. I think you can also say that Groq has had a lot of deals with the Saudis, and has raised money from the Saudis. I think there could be an opportunity there for Nvidia, also, to get deeper into the door of the Middle East, because for Nvidia to really keep growing, they kind of need to start selling to rich countries like Singapore — which they already do to some extent — but more to the Middle East, the oil-rich countries, right? Nvidia gets the brains and the IP, and they're leaving the cloud business intact — again, because Nvidia doesn't want that; they want to sell to the cloud business. If anything, the leftover Groq business ends up being a buyer of these Nvidia chips — these now Nvidia-Groq chips. It's the ultimate irony: you bought the IP of the software and the technology and the chip from Groq, and then you turn around and sell to what's left of Groq — now Nvidia-branded Groq chips. It's so brilliant. This, to me, is offense on commoditization and defense on commoditization. You're basically making a bet that everything's going to commoditize, so you're going on offense on that; but then you're also defending against the decline of revenue that's probably going to happen on GPUs — which is bad for AMD, right? Then again, AMD will catch up; you know, I'm not trying to be bearish on AMD.

26:21

Let's see. Oh, yeah. Okay. And then I did want to say: yes, I've got a $300 price target for Nvidia, but what gives me the biggest concern for Nvidia? Okay. Biggest concern for Nvidia: Sam Altman. And this deal actually reduces some of that concern. I've continuously believed that Sam Altman, with his $1.4 trillion of spend, is a ridiculous, ludicrous joke, and it's going to fail. It's not a bet I want to make. So I've been escaping Nvidia because of that. This actually flips that, because over the next — not necessarily short term, but over the next two to four years — this actually makes Nvidia a crazy powerhouse. Now, that doesn't mean there can't be a correction in the short term, especially if we actually do end up going in the commoditization direction and Sam Altman falters. Who knows — maybe OpenAI goes bankrupt. They stop getting funding. Private credit creates problems. Labor market stress leads the market to fall. Whatever. If the market falls, then spending amongst the upper 40 to 60% of American spenders falls. That leads the economy into a recession. There'll be a better buying opportunity for Nvidia, and then that $300 price target will come true in maybe a decade, right? But if Sam Altman stays up and running, then hey, maybe it'll happen sooner. Those are my concerns with Nvidia. So I can't say, "Oh my gosh, YOLO calls on Nvidia," because there are road bumps in the path.

27:49

But if you are a very long-term hodler here, this is a moat builder. They just deepened their moat — against Google, against anyone, it doesn't matter — and they're staying away from the bag holders, which are buying the chips from Nvidia at 75% gross margins, 55% net margins. This is absolutely brilliant. 4D chess. Like, man, I think it's so freaking awesome. This is the kind of stuff I want to emulate with my startup, House Hack. Sorry, I always go back to that, but it's my baby, you know? I want to do this kind of stuff over the next 60 years. I think this is so cool. Knock on wood, maybe even longer. But, yeah — very excited.

28:30

So, if you want to learn more — remember, this video can't be a solicitation — just go to househack.com or reinvest.co and learn more. If your browser for some reason doesn't work, there's one little ISP somewhere in the country — I think it's Spectrum — where you'll need to use your cell phone, and then the website should work. But like 99.9% of you have access with no issues at all. And if you want to actually buy the AI product, you can do that. We'll be raising the price on December 29th, and we'll have a big coupon expiration, because the product will be releasing — and the coupon code is "release the app." [laughter] Anyway, thanks so much for watching. We'll see you in the next one. Goodbye and good luck out there.
