omfg | This changes EVERYTHING in AI [NVDA / Groq]
FULL TRANSCRIPT
This changes absolutely everything for
Nvidia. This is remarkable, especially
with Nvidia trading at a PEG ratio of
just 1.67, suggesting that Nvidia could be
worth over $300 per share. Now, in this
video, I'm going to break down and
reconcile why it might make sense not to
hold Nvidia, what the big downsides are,
especially after this massive $20
billion acquisition. But why did Nvidia
actually make this acquisition? What's
the play here? What are they getting at?
And importantly, what does it mean for
companies like CoreWeave, Nebius, xAI?
What does it mean for these other
companies that have been hoarding Nvidia
chips? And better yet, did Nvidia
hint that this was exactly what was
going to happen by themselves getting
out of the data center rental or leasing
business? Ah, a lot to break down in
this video. Let's get into it. A quick
note, obviously, I'm not in my studio.
Uh, I saw the snow in Tahoe and I'm
like, Jack, let's go skiing. I asked
Max, too, but he didn't want to come.
But anyway, we're gonna be skiing this
weekend again. [laughter] All right, so
let's get into it. So, obviously, Groq.
Some people pronounce it grok, some
people say grog. I'm just going to
pronounce it Groq. That's not to be
confused with xAI's Grok, Elon Musk's
chatbot. This is a separate company,
also called Groq, spelled G-R-O-Q,
that is a chip designer and more
importantly a software designer for
those chips. Keep that in mind. So this
is a $20 billion deal. I wrote down some
notes to keep this smooth for us. They
were valued at $6.9 billion in
September. Now they have what is
essentially a $20 billion buyout deal
going on. Except it's not technically a
buyout. It's a licensing deal. They're
doing that as regulatory arbitrage, to
get around merger-and-acquisition
antitrust laws, blockades, and two-year
approval processes. So Nvidia is just
basically like, nah, you guys keep doing
your data centers. We're just going to
take your top talent and your
intellectual property, and we'll give
the company $20 billion. So that's good
for the shareholders of Groq, obviously, but it
gives Nvidia exactly what they want. And
you'll see in just a moment why this is
so important. I'm personally very
excited about this. When we had our
course member livestream this morning,
I said, this is bullish for Nvidia. I
don't know how long it's going to take
for the market to digest it, but
this is overall bullish for Nvidia. That
was part of our alpha report this
morning. If you're not a member of that
yet, go to meetkevin.com. You can see
that analysis and of course
get my top 10 stocks to buy for the next
10 years. Now, Groq chips. Here's
the key. A lot of people are going to
say this is a play around memory: that
Nvidia's GPUs
utilize high-bandwidth memory, which is
really just stacked-up DRAM (saying
"DRAM memory" is redundant, since the M
in DRAM already stands for memory, but
anyway, just to make it simple to
understand). Groq chips use a different
style of memory. It's more expensive,
and you actually need substantially less
of it, megabytes rather than gigabytes,
to process what Groq does well.
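To make that concrete, here's a rough back-of-envelope sketch in Python. All the numbers are illustrative assumptions, not actual Groq or Nvidia specs; the point is just that token generation is roughly memory-bandwidth bound, so a memory technology with far higher bandwidth can serve tokens faster even if it holds far less data.

```python
# Back-of-envelope: LLM token generation is roughly memory-bandwidth bound,
# because each new token requires streaming the model weights through compute.
# ALL NUMBERS BELOW ARE ILLUSTRATIVE ASSUMPTIONS, not vendor specifications.

def tokens_per_second(bandwidth_gb_s: float, weights_gb: float) -> float:
    """Upper bound on decode speed: bandwidth divided by bytes read per token."""
    return bandwidth_gb_s / weights_gb

weights_gb = 70.0  # hypothetical 70B-parameter model at 1 byte per weight

# Off-chip stacked DRAM (HBM-style) vs. on-chip SRAM aggregated across many
# chips: SRAM capacity per chip is tiny, but its aggregate bandwidth is huge.
hbm_speed  = tokens_per_second(bandwidth_gb_s=3_000.0,  weights_gb=weights_gb)
sram_speed = tokens_per_second(bandwidth_gb_s=80_000.0, weights_gb=weights_gb)

print(f"HBM-style bound:  ~{hbm_speed:.0f} tokens/sec")
print(f"SRAM-style bound: ~{sram_speed:.0f} tokens/sec")
```

The capacity numbers flip the other way, which is why the SRAM approach needs many chips strung together, a tradeoff the video comes back to later.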
Now, before I really get into the
technical weeds, I think it's worth
understanding what Jensen said they're
planning on doing and what the CEO of
Groq, the founder of the company, said
their intentions are. So, Jonathan Ross,
the founder of Groq, worked on TPUs
at Google and was a contributor to the
invention of the TPU there before moving
on in 2016. He argues
that these chips are really designed to
be the low margin inference computers
for the artificial intelligence wave.
And he made this pitch and this argument
possibly in a way not to piss off Nvidia
to say, "Oh, we actually help Nvidia."
And there are many interviews of him
saying, we help Nvidia, because Nvidia
can sell their high-margin GPUs and take
training, while we just take the
inference market, and inference is the
low-margin business. Now, I'm going to
get to why this was a perfect setup
right now. See, the CEO of Groq never
said anything bad about Nvidia; instead,
he sold his product as something that
could actually help Nvidia. And I think
that is exactly what set up the acquisition.
Now, why is this acquisition so
brilliant by Nvidia? And then we can get
into some more of the technical weeds.
Here's a simple way to compare the two
chips. This is the simple part; we'll
leave the detailed weeds for a little
later in the segment. Overview: GPUs are
great at training models. Think of
memorizing the entire internet and
trying to generate some kind of world
view that we can then ask questions of
and generate new answers or new
ideas or new images from. Gen AI. TPUs
from Google have been an attempt to do
something similar at a cheaper cost.
Basically, cutting out Nvidia's margins.
That's why Google likes their own TPUs.
That's why Amazon is making their own
chips. There's an incentive for
companies to make their own chips to
really cut out those 75% gross margins
that Nvidia has. Now, Groq is really
interesting, because Groq doesn't
purport to say, hey, we're going to help
you train this worldview model. Instead,
Groq comes in and says, hey, we're going
to help you where people actually
utilize artificial intelligence:
communicating with AI bots, customer
service agents, live translations
through things like Meta glasses.
And what you'll find today is that
people meme AI chatbots for how slow
they are. If you've ever gone on TikTok
and seen people joke about talking to
GPT in real life, people will be like,
"Oh, uh, hey GPT.
Uh, can you let me know where the
closest pizza restaurant is?"
Ah, cool. You want to know where the
closest pizza restaurant is? Are you
looking for Chicago style or more of an
Italian style? You know, it's it's you
know, the latency is the awkward part
and people joke about that. Pilots have
made jokes about having AI as a co-pilot
and you'll get something like:
"GPT, gear down." [pause]
Ah, I see. So, you're requesting that
I put the gear down. Is there a
reason you want to put the gear down now
compared to later? And the pilot's like,
just put the damn gear down. You know,
the captain freaks out, right? That's the sort
of joke. But that joke really helps us
understand where the problem in AI sits
today. Now, I'm going to inject my
opinion here for a moment. So far I've
been providing facts and data, trying to
build this story up, because it's a lot
to digest, so we're going to take this
elephant in small bites together. That's
how I like to teach. So I want you to
think for a moment: when you ask a
chatbot a question, you're looking for a
rapid response, but you don't really
mind waiting for a moment, because it's
a chatbot in your app. But what happens
when you're on the phone, calling
customer service that seems humanlike,
but it has a massive delay? That's where
the humor really highlights the problem
with delays. And that gets to my
opinion, and it has to do with the
commoditization of AI. See, watch this.
I don't believe that generative AI, or
any of the AI we have today, is going to
lead to artificial general intelligence.
So to understand my conclusion here, you
have to understand that my bias is that
AGI is bull crap. It is this moonshot
idea that helps people raise money for
their startups, but artificial general
intelligence is not where we're going.
That's not where the money is going to
be made. So you have to understand that
first. Two, not only do I not believe
we're going to have artificial general
intelligence, but I also believe, and
I've been saying this for years, that
large language models are going to be
commodities. In the future, nothing will
be unique about using Gemini, or X's
Grok, you know, Elon's Grok, or Llama,
or Anthropic's Claude, whatever.
None of that will matter.
They'll all give you, probably within one
percentage point, the same answer. In
which case, if they commoditize, then we
already have a risk that we've overbuilt
our GPU capacity, right? This is why I
have the opinion that in the long term
Oracle, CoreWeave, NBIS, these are
companies I won't touch, because
they're bag holders of GPUs that Nvidia
wants to sell and not hold. This is
partly why I think Nvidia got out of the
chip-holding business.
Now, they've made some creative deals
with CoreWeave to lease some of those back,
and that creates some circular
questions, but ignore that for a moment.
So, no artificial general intelligence,
commoditized chatbots. Got it? So then,
really, the companies that win are the
companies that utilize the chatbots,
along with their own custom MLs, the
best. Those could be aggregators like
UiPath for healthcare and finance. They
could be aggregators like Palantir,
where you've got the ontological setup
in which you structure data and then
aggregate which LLMs you're going to
use to create or decipher conclusions
from that big data, basically helping
big corporations get into big data,
right? Where UiPath might appeal to
sort of everyone, Palantir appeals to
the richest companies and the rich
government. Okay? So they've really
niched into the expensive customer
cohort, whereas UiPath is maybe more of
the discount player, the
we're-here-for-everybody,
set-up-AI-agents-for-everybody play, if
you will. Salesforce is in that boat as
well. Okay. So then you have
aggregators. Okay. So: artificial
general intelligence, I don't see it
happening. LLMs are just commoditized
encyclopedias. Think about it like
Wikipedia in your eyeballs, okay? You've
got Wiki right in front of your face all
the time. And then, on the flip side,
we have aggregators that can make money
or people who figure out how to actually
make the AI technology we have today
productive for mass industry. This is
where obviously I'm biased, but I think
it's a great example. I like that we are
creating real estate AI that does not
rely on artificial general intelligence.
It actually relies on our own machine
learning training and tuning partly in
combination with LLMs which are
commoditized. We love that they're
commoditized because they reduce the
expense of using LLMs. But our ML is
what gives us proprietary data and skill
and technology at House Hack and
Reinvest, right? You know that already.
We have a fundraise for that closing at
the end of the year. If you want to buy
the AI product, which releases next
week, you can do that at reinvest.co or
househack.com. It's the same company.
This video is not a solicitation. Yes,
we are ending the 5% yield starting
December 31st. Now, if you get in before
that, you still get the 5% yield through
conversion. Obviously, it's a real
estate backed company, and we're
launching our AI based on the skills and
the MLs that we've been building since
2018. Since before the AI boom, we have
been working on our MLs, and so we're
excited to bring these to mass market
for agents and otherwise. So, you have to
understand this baseline. Now, where
does Groq come into all of this? Speed.
They eliminate the latency of live AI
language translation. This is one of the
reasons I think Duolingo has kind of
been tanking a little bit, because yes,
you can learn a language in a fantastic
manner with Duolingo. But why do you
need to learn a language if you could
just put on AI glasses, or use your
phone, or use AirPods like what Apple is
doing now? These are, you know, simple:
two people put them in, and what do you
get? Live translations. Any form of
latency kills the functionality of that.
Groq's product solves that, and they
solve it by using SRAM on the actual
chips. Now, that does end up leading to
a larger footprint for each chip, which
isn't ideal, but it solves the problem
by embedding the memory where the
compute is occurring. And how can they
do this with so much smaller a set of
memory? Yes, SRAM is much more expensive
than DRAM. It's like replacing copper
with gold. It's very expensive, but you
need so much less of it. Why?
Software. And this is the 4D chess move
from Nvidia. What I've just built out
for you are my baseline opinions, the
deal that happened, and how Groq's
product is faster even though the
underlying memory, SRAM, is more
expensive. This is a faster
product. But why is Nvidia doing this?
What is the 4D chess move? Well, this is
where I wrote down that I believe this
is a brilliant move by Jensen. Even
though people talk about this being a
memory hedge to hedge against rising
DRAM prices, to me, this is actually a
hedge against artificial general
intelligence, as in AGI never happening.
Because if AGI doesn't happen and LLMs
get commoditized, then GPUs aren't
actually that valuable. And guess who's
left holding the bag? Not Nvidia. No:
Oracle, Nebius, IREN, and the other
plays like, frankly, xAI, or any GPU
hoarder. They're the bag holders. See,
AGI would need a lot of GPUs and compute.
But commoditized LLMs do not.
Commoditized LLMs, in my opinion, need
speed. We could do a deep research and
sit around for 45 minutes waiting for it
to run, or we could get it
instantaneously. Instantaneous is where
the money is. Now, one of the reasons,
practically, that Groq is able to pull
this off is that they utilize software
infrastructure to predetermine how the
chips are going to operate. Now,
I'm not going to get into the deep weeds
of how the transistors and the
capacitors and the refresh rates and all
that stuff works inside of the actual
chips. We're going to keep this a little
bit more basic. But what I am going to
say is that this speed is critical and
the speed is created by software. Now,
if you remember, what has Nvidia's moat
always been? It's not just that
they have TSMC supply chains on lock.
That is very true. It's not just
that they have SK Hynix's, or Micron's,
or Samsung's supply chains on lock to
make sure Nvidia gets what Nvidia needs.
And it's not to say that Nvidia's GPUs
won't be valuable. But what has made
Nvidia's GPUs even more valuable, and
has enabled them to get a lock on the
industry, is their CUDA software. But
their CUDA software today doesn't
actually integrate the same kind of
software that Groq uses to speed up
inference output.
That's what this is a play about. If you
read between the lines with what Jensen
shared in a leaked email, he said,
quote, "We plan to integrate Grock's low
latency processes into the NVIDIA AI
factory architecture, extending the
platform to serve an even broader range
of AI inference and realtime workloads."
Ah, real-time workloads. In other words,
they haven't actually been able to
output real-time workloads for the AI
factories and the digital twins that
Jensen always pitches, because they
don't have fast enough inference
technology. Even their Rubin chip, which
was supposed to be faster at inference,
still relies heavily on GPUs, and the
way GPUs are architected, with built-in
hardware traffic cops, they are slower.
There is underutilization in the GPUs,
and you're just not going to get answers
as quickly. Now, this is where
we could finally put the pieces of the
puzzle together to understand the 4D
chess move here. And I want to be clear,
I'm not trying to shill Nvidia. Uh like
I think there are big risks with Nvidia.
We're going to talk about that here. But
this is the 4D chess move. Nvidia buys
Groq's IP. They integrate the technique
of using software, rather than the chip,
to be the traffic cop that organizes, in
a deterministic way, how the chips are
going to operate. In English:
make it damn fast through software.
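Since the video stays out of the hardware weeds, here's a toy Python sketch of that idea. This is purely illustrative and not Groq's actual compiler: the point is only that when software fixes the entire execution schedule ahead of time, there is no runtime arbitration and no jitter, so every run takes exactly the same steps.

```python
# Toy illustration of deterministic, software-scheduled execution.
# NOT Groq's real compiler -- just the idea that when software pre-plans
# every cycle, the hardware needs no runtime "traffic cop."

def compile_schedule(program):
    """'Compiler': flatten a program into a fixed, cycle-by-cycle plan."""
    schedule = []
    for op, repeats in program:
        schedule.extend([op] * repeats)
    return schedule

def run(schedule, units):
    """Execute the precomputed plan; each cycle's op was decided in advance."""
    return [(cycle, op, units[op]()) for cycle, op in enumerate(schedule)]

units = {
    "load": lambda: "weights already in on-chip SRAM",
    "mac":  lambda: "multiply-accumulate",
    "emit": lambda: "token out",
}

program = [("load", 1), ("mac", 3), ("emit", 1)]
schedule = compile_schedule(program)  # ['load', 'mac', 'mac', 'mac', 'emit']

# Determinism: two runs produce identical traces, cycle for cycle.
assert run(schedule, units) == run(schedule, units)
print(run(schedule, units)[0])  # cycle 0 executes 'load'
```

Contrast that with a dynamic dispatcher that arbitrates among competing requests at runtime: throughput can be great, but per-request latency varies, which is exactly the problem for real-time inference.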
They're going to integrate that software
intellectual property into CUDA. Now
CUDA goes from behemoth to mega behemoth
of the future. See, I don't think we go
to a future of AGI, which ain't
happening anytime soon. I think we go
from this worldview language model, or
world model, generation through GPUs, to
just give us answers fast, real-time
answers, fast.
And that will be useful in piloting, in
factories, in machinery, in healthcare,
and in surgeries, anywhere you need
real-time
solutions. Now, Nvidia builds that
software into CUDA, making CUDA even
more of a behemoth for the next
generation of artificial intelligence.
And ironically, the next generation of
artificial intelligence isn't actually
any different than what we use today.
The only difference is that it's faster.
It's the same crap delivered to you faster.
Nvidia moats it into CUDA. When Nvidia
moats it into CUDA, guess who
[clears throat]
is going to be building the chips for
these inference purposes, the chips the
next bag holders can buy from Nvidia at,
no, not low margins, but at high Nvidia
margins? Well, the people who are going
to buy it are going to be the next
generation of data centers. Whether
they're data centers in space, probably
not, or just NBIS 2.0, CoreWeave 2.0,
Oracle 2.0, whatever. Those new bag
holders are then going to buy these
SRAM-centric chips that speed up our
artificial intelligence inference
output, so, the answers that we get from
AI. So the next generation of bag
holders will buy those chips. The first
generation, again Oracle, Nebius,
CoreWeave, whatever, xAI, they'll
suffer. Gen two will then meme up until
the next innovation comes and then those
become the next bag holders. But guess
who wins every time? Through every
generation of bag holders, the company
that continuously wins is the one
selling the product at the highest
possible margin, because they've locked
the industry down with CUDA for
worldview training and now CUDA for
inference output. It's Nvidia.
It's brilliant. Let me take a look at
some of my other notes, because that was
all kind of off the top of my head. But
I've just been absorbing so much of
this. I love this. This, I think, is
very exciting. I think Jensen is not
only the most brilliant salesman in the
world for selling his stock, his
company, and his product, next to maybe
what you see at Palantir, but he
actually delivers: these margins are
incredible. So let's take a look at some
of these other notes here. Let's see.
Okay, there's this thesis that you
spend money on training and you make
money on inference. I kind of agree
with that. I think people don't really
care that you're training a model. They
want the output. It's kind of like us
with my little AI startup disrupting
real estate, right? We're trying to
democratize wedge deals across the
entire country. Nobody knows wedge deals
more than me. We created the wedge deal.
Basically, it's like buying real estate
Warren Buffett style, with a margin of
safety. And if we can put that into an
app and sell it and help people make
tens to hundreds of thousands of
dollars, knock on wood, hopefully the
value of that app will be massive. And
the point there is not, again, to pitch
House Hack. It's simply to say that when
you can utilize AI in places where it
hasn't been used before, like an
old-school industry like real estate
that most tech bros know nothing about
today, that is where money, actual
money, can be made.
So I couldn't be more excited about
that. The founder of Groq also talks
about how much latency matters. I love
this. This was from a 2024 interview. He
says, if you show a human A or B and ask
which is faster, they won't know. It's
imperceptible to tell which is 100
milliseconds faster, A or B. But let's
say B is 100 milliseconds faster:
conversion with that 100-millisecond
improvement will increase by 8% on
desktop and 30% on mobile. In other
words, it matters.
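The arithmetic behind that quote is simple. Here's a sketch with a made-up baseline conversion rate; the 2% figure is purely an assumption, and only the 8% and 30% uplifts come from the quote.

```python
# Worked arithmetic for the quoted claim: a 100 ms latency improvement
# lifts conversion ~8% on desktop and ~30% on mobile. The 2% baseline
# conversion rate is a made-up assumption just to show the math.

def lifted_rate(baseline: float, relative_uplift: float) -> float:
    """Apply a relative uplift to a baseline conversion rate."""
    return baseline * (1.0 + relative_uplift)

baseline = 0.02  # assumed 2% baseline conversion rate

desktop = lifted_rate(baseline, 0.08)  # 2.00% -> 2.16%
mobile  = lifted_rate(baseline, 0.30)  # 2.00% -> 2.60%

print(f"desktop: {desktop:.2%}, mobile: {mobile:.2%}")
```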
Speed matters. That's what he's trying
to say. But we already know this. I
mean, do you want a humanoid robot
that's walking around like... oop?
No. [laughter]
Even Jack knows he doesn't want that.
He's laughing about it over there,
because nobody wants an "oop"
robot. We want fast answers, baby. So,
this is very exciting. Now, Groq LPUs:
language processing units. This is just
sort of the fancy name they have for
them. Great for voice AI. Instant text.
They rely on static random access
memory. That's SRAM. It's fast. It's
expensive. It's low capacity. The
bottleneck is how much capacity you can
have, but you can string hundreds of
these together if you want, which is
exactly what Nvidia loves to hear. Like,
think about that: Nvidia hearing that
somebody wants to string hundreds of
these together? We'd love to sell that
product. It's kind of brilliant. Okay,
they also kill
logic control on the actual chip, which
lets the chip just do what it wants to
do, which is focus on speed. The analogy
that I came up with on this is: think
about the GPU kind of like a semi-truck.
Massive volume, efficiency of volume,
not very nimble, unstoppable once it's
going; it can move a lot of product
across the United States really well.
But if you're like, "Bro, I just need
this letter from Kevin's house to the
post office fast," you take the Tesla
Roadster, baby, you take the roof off,
you put the jetpack on the back, you get
airborne, and you go deliver that piece
of mail. That's the difference between
Groq's product and a high-bandwidth-memory
GPU semi-truck.
So, you know, some people say Nvidia is
just hedging against a DRAM supply chain
collapse or some kind of inefficiency
coming in the future. Maybe. I think
that's basic-level analysis; like, okay,
sure. I think you can also say that Groq
has had a lot of deals with the Saudis
and has raised money from the Saudis. I
think there could be an opportunity
there for Nvidia to get deeper in the
door of the Middle East, because for
Nvidia to really keep growing, they kind
of need to start selling to rich
countries like Singapore, which they
already do to some extent, but more to
the Middle East, the oil-rich countries,
right? Nvidia gets the brains and the
IP, and they're leaving the cloud
business intact, because, again, Nvidia
doesn't want that. They want to sell to
the cloud
business. If anything, the leftover Groq
business ends up being a buyer of these
Nvidia chips, right? These now-Nvidia
Groq chips. It's the ultimate irony. You
bought the IP of the software and the
technology and the chip from Groq, and
then you turn around and sell to what's
left of Groq: now Nvidia-branded Groq
chips. It's so brilliant. This, to me,
is offense on commoditization and
defense on commoditization. You're
basically making a bet that everything's
going to commoditize, so you're going on
offense on that, but you're also
defending against the decline of revenue
that's probably going to happen on GPUs,
which is bad for AMD, right?
Then again, AMD will catch up. You
know, I'm not trying to be bearish on
AMD. Let's see. Oh, yeah. Okay. And then
I did want to say: yes, I've got a $300
price target for Nvidia, but what gives
me the biggest concern for Nvidia? Okay.
Biggest concern for Nvidia:
Sam Altman. And this actually reduces
some of that concern. I've continuously
believed that Sam Altman, with his $1.4
trillion of spend, is a ridiculous,
ludicrous joke, and it's going to fail.
It's not a bet I want to make. So I've
been staying away from Nvidia because of
that. This actually flips that, because
over the next, not necessarily short
term, but over the next two to four
years, this actually makes Nvidia a
crazy powerhouse. Now, that doesn't mean
there can't be a correction in the short
term, especially if, you know, we
actually do end up going in the
commoditization direction and Sam Altman
falters. Who knows, maybe OpenAI goes
bankrupt.
They stop getting funding. Private
credit creates problems. Labor market
stress leads the market to fall.
Whatever. If the market falls, then
spending among the upper 40 to 60% of
American spenders falls. That leads the
economy into a recession. There'll be a
better opportunity to buy Nvidia, and
then that $300 price target will come
true in maybe a decade, right? But if
Sam Altman stays up and running, then
hey, maybe it'll happen sooner. But
that's my concern with Nvidia. So I
can't say, oh my gosh, YOLO calls on
Nvidia, because there are road bumps in
the path.
But if you are a very long-term hodler
here, this is a moat builder. They just
deepened their moat against AMD, against
Google, it doesn't matter. They're
deepening their moat against anyone, and
they're staying away from the bag
holders, which are buying the chips from
Nvidia at 75% gross margins, 55% net
margins. This is absolutely brilliant.
4D chess. Like, man, I think it's so
freaking awesome.
This is the kind of stuff I want to
emulate with my startup, House Hack.
Sorry, I always go back to that, but
it's my baby, you know? I want to do
this kind of stuff over the next 60
years. I think this is so cool. Knock on
wood, maybe even longer. But, yeah, very
excited. So, if you want to learn more,
remember, none of this, this video can't
be a solicitation. Just go to
househack.com or reinvest.co and learn
more. If your browser for some reason
doesn't work, there's one little ISP
somewhere in the country, I think it's
Spectrum; use your cell phone and then
the website should work. But like 99.9%
of you have access with no issues at
all. And if you want to actually buy the
AI product, you can do that. We'll be
raising the price on December 29th, and
we'll have a big coupon expiration
because the product will be releasing,
and the coupon code is release the app.
[laughter] Anyway, thanks so much for
watching. We'll see you in the next one.
Goodbye and good luck out there.