TRANSCRIPT (English)

Warning: Nvidia Stock vs Apple Stock [Buy One, SELL One].

33m 30s · 5,638 words · 799 segments · English

FULL TRANSCRIPT

0:00

well don't sue me bro but in this video

0:02

you're going to learn some really big

0:04

and key differences between two very

0:07

massive investment opportunities we're

0:08

talking about $3 trillion investment

0:11

opportunities Apple versus Nvidia and so

0:14

in this video I'm going to give you my

0:16

thoughts in terms of what I'm personally

0:18

doing with my portfolio when it comes to

0:20

Nvidia and maybe some things that you

0:22

ought to consider between the two

0:24

companies Nvidia and apple we'll also

0:25

talk valuation first we'll start a

0:27

little broad with what's going on

0:29

between the AI spaces but I want

0:33

to give you a quick disclaimer don't

0:37

don't sue me bro this is just my opinion

0:39

this is not personalized Financial

0:41

advice and frankly it could be wrong but

0:43

I do also want to start by saying I've

0:45

been very bullish Nvidia I loaded up on

0:48

Nvidia originally back in November of

0:52

2022 as we sort of bottomed out I added

0:55

a bunch in the 400s obviously I wish I

0:58

uh had more exposure leading up to that

1:01

run oh well I took some gains then I got

1:04

back in a little higher but I'm glad I

1:06

did because obviously the company's done

1:07

extremely well the question now though

1:10

is what is a better opportunity going

1:12

forward hindsight analysis only does so

1:15

much and in this video I'll also go

1:17

through what Apple just transformed in

1:21

the AI space so this is really important

1:24

okay now something that we have to also

1:27

cover before we start is dang yes I did

1:30

make some money on an Apple trade today

1:33

I had a sweet sweet set of call options

1:36

on Apple today after that event

1:38

yesterday and yes we made 100K if you

1:41

want all my Buy sell alerts in the

1:43

stocks and psychology of money group

1:45

make sure you click that link down below

1:47

join me in the stocks and psych group

1:48

you get my course member live stream

1:50

analysis all my deep dive analyses that

1:52

I'm doing uh before we post about them

1:54

here if we even post them here we do a

1:56

lot of analysis there that doesn't make

1:58

it to the main Channel and as I said

2:00

all my Buy sell alerts whether I make

2:01

money or I lose money you get it all

2:03

last week I had a rough week this week

2:05

let's just say it's starting out really

2:08

good I actually tweeted about the

2:09

options that I had and you could see I

2:11

had about $80,000 of exposure to to

2:14

Apple today so the options did really

2:17

really well so uh and that's after the

2:19

gain I mean this one was up

2:21

103% that one was up 76% I go in and out

2:24

of them uh but anyway as always remember

2:27

past performance doesn't guarantee

2:28

future results and it's not clear that if

2:30

you join you'll make money I just want

2:32

to be very clear about that the goal is

2:33

teaching and education so what's going

2:36

on with Nvidia well first we need to

2:38

know what this significant paper is that

2:41

just came out regarding Nvidia because

2:44

this is a risk factor for NVIDIA and all

2:48

of what we had presented by Apple is a

2:50

risk factor as well that is going to

2:52

compound a risk factor for NVIDIA let's

2:55

just be very clear though Nvidia is the

2:57

leader of the pack nobody stands in the

2:59

way of Nvidia's moat Nvidia is a

3:02

company that I want long-term exposure

3:05

to I want Nvidia exposure not just

3:08

because of the innovation of the GPU

3:10

cycle that is we're going from the h100s

3:12

to the next levels to the Blackwells to

3:14

the next levels thereafter which they're

3:15

already announcing getting on sort of

3:17

this one-year product refresh cycle we

3:19

know we're going from your sort of

3:21

typical dumb compute to AI compute

3:23

servers we already know all of that we

3:25

know we're on a new innovative S-curve

3:28

the question now is at what point does

3:31

that S-curve peters out and does it still

3:35

make sense to buy Nvidia now post-split

3:38

especially since the post-split

3:40

performance of stocks historically is

3:44

good but in recent history has actually

3:47

not been that good consider over the

3:50

last 4ish years after Google stock split

3:54

uh on July 18th 2022 the stock was up 3%

3:58

over the next 3 days but then fell 10%

4:01

after Amazon stock split in 2022 it fell

4:05

21% after Apple stock split in August of

4:08

2020 it was up 10% at first and then

4:11

plummeted 20% and after Tesla stock

4:14

split well that that one has just pretty

4:17

much been straight down but anyway the

4:18

six days thereafter it was down 12% so

4:20

stock splits don't necessarily mean in

4:23

the short term the stock is definitely

4:25

going to go up of course great companies

4:27

split their stock their earnings grow

4:28

and they keep going up in the longer

4:30

term so do I really want to bet against

4:32

Nvidia for the very long term no but

4:35

would I be willing to make a bet or a

4:38

trade between Nvidia and apple now

4:41

absolutely and the question that comes

4:44

down to everything for me is how much

4:47

actual GPU compute do we really need

4:51

picture this for a moment I want to draw

4:53

this for you before we get into this

4:55

study because I think it's going to be

4:57

um enlightening let's just say

5:01

if in March of

5:03

2023 to probably the 3 months thereafter

5:07

you have this bucket I'll call this the

5:09

uh

5:11

March to let's call it July 2023 AI

5:16

the class of AI okay that's a good I

5:19

like it sort of like Navy Seals but it's

5:20

the class of AI okay and let's say you

5:23

have 1,000 people in the class of AI the

5:27

question that I have for you now is how

5:29

many people or how many companies rather

5:31

do we think is in the class of AI not

5:35

necessarily today but in the 2025 class

5:39

well my guess is we're probably going to

5:41

go down to somewhere between 50 to 100

5:46

in the class of AI which means if we're

5:49

building out server compute for the

5:53

class of a thousand assuming that

5:55

they're all going to maximize all their

5:57

compute because you and myself and

5:59

businesses instead of just trying the

6:01

one AI that we need we try five

6:03

different AIs so we're actually creating

6:06

five times the demand that we actually

6:09

ultimately need for AI well then what

6:11

happens is everybody buys AI chips

6:14

everybody overpays for the AI chips and

6:16

Nvidia goes to the moon and companies

6:18

that have nominal AI exposure like Dell

6:22

which is just a ripoff giving away

6:24

server stacks doubles its stock valuation

6:27

uh AMD uh Vertiv

6:30

other companies like Super Micro

6:31

Computer all the adjacents also end up

6:33

doing really well but is it actually

6:36

based on a fugazi set of demand now of

6:40

course not everybody got into AI day one

6:45

so of course that class theoretically

6:48

should grow to let's say 1,200 as more

6:51

people slowly trickle into the class

6:54

they show up late basically but does

6:57

that just push us to maybe a 60 to 120

7:00

class right maybe 20% more I don't know

7:03

this is speculation obviously but the point

7:06

is that let's say the compute we need

7:08

for this basket over here is X okay so X

7:12

compute well the compute you need over

7:15

here in a more winner-take-most

7:18

environment you could even say that AI

7:20

will just boil down to five to six

7:22

different AI companies in the future

7:23

there'll still be a set amount of AI

7:25

demand but the point is you won't have

7:27

the duplicative training and the

7:29

duplicative inference as people test out

7:32

different AIs you're probably looking at

7:34

AI demand that's something like x

7:38

divided by

7:39

10 and then maybe you grow that by

7:42

like 10% a year right so you multiply it

7:45

by 1.1 now that's not supposed to look

7:47

like and remind you of your Calculus

7:49

class or your algebra class it's just

7:51

simply to say that demand is probably

7:55

substantially smaller than what we saw

7:58

in 2023 for most

8:01

AI uh demands right now now a lot of

8:05

companies are still trying to build out

8:06

llms but even llm demand might not use

8:10

as much GPU compute as we think to train

8:13

new llms and this is going to get really

8:15

important in just a moment but the point

8:17

is as you go from the many to the few

8:21

you're going to reduce the amount of

8:23

demand that you have and then you'll

8:25

grow that reduced uh set because you

8:28

have people consolidating the

8:31

products they're using they're probably

8:32

the best products of the bunch which are

8:34

the most efficient products and the

8:36

most efficient products won't need uh

8:39

frankly as much compute power that's why

8:41

they're the best that's why they're the

8:43

most efficient products okay so now

8:46

we've reduced our overall demand for

8:49

artificial intelligence is there any

8:51

evidence to back up that that has

8:54

happened well yes there actually is

8:57

Tesla delayed its delivery of h100s

9:02

because they say they don't have enough

9:04

room for the server Stacks but if they

9:07

really needed them they would have been

9:10

ready for the shipment reality is these

9:12

chips at Nvidia are so good that Tesla

9:17

as Elon himself says is no longer compute

9:19

constrained and we don't need as many

9:22

chips especially since the Blackwells

9:24

are supposed to be 4X as fast as the

9:26

h100s you need 4X fewer

9:30

chips so if they introduced the h100

9:34

back in 2023 with the same strength as

9:36

the Blackwell today you would need 4X

9:40

fewer of them which means you'd have

9:43

potentially 1/4 of the AI chip revenue

9:46

for NVIDIA see where I'm going with this
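
His back-of-the-envelope math — duplicative demand collapsing roughly tenfold as the class consolidates, regrowing about 10% a year, and 4X-faster chips needing 4X fewer units — can be sketched in a few lines. Every number here is the video's illustrative figure, not a forecast:

```python
# Toy model of the argument above. All numbers are the video's
# illustrative figures (the /10 consolidation, 10% regrowth, 4X chips),
# not forecasts of anything real.
X = 1000.0                       # 2023 "class of AI" compute build-out

consolidated = X / 10            # duplicative training/inference drops out
grown = consolidated * 1.1       # survivors then grow ~10% a year

# If each new-generation chip does 4X the work, unit demand falls 4X:
units_needed = round(grown / 4, 1)

print(round(consolidated, 1))    # 100.0
print(round(grown, 1))           # 110.0
print(units_needed)              # 27.5
```

Even with the 10% regrowth layered back on, the modeled unit demand lands an order of magnitude below the 2023 build-out — which is the bear case he's gesturing at.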

9:49

as the competitors in the AI space

9:52

shrink demand falls by virtue of a lack

9:56

of duplicative work yes it's still going

9:58

to grow AI demand is going to grow don't

10:00

worry about that that's what I'm saying

10:01

times 10% you're still going to get that

10:04

growth but this is something that is

10:07

made

10:08

worse by three different things one of

10:12

these is Matrix removal one of them is

10:16

caching one of them is

10:19

Apple matrix removal matrices are

10:25

used in uh large language models they

10:28

are used in conjunction with bias

10:32

weights and they're used inside of

10:34

neural Nets to compute uh a basically

10:39

likely outcome for something whether in

10:41

training or inference or

10:44

whatever there is a paper that just came

10:47

out uh from uh the University of

10:49

California Santa Cruz uh in conjunction

10:51

with Davis Davis is where my daughter

10:54

summer was by the way she's back home

10:56

really grateful for that she's beautiful

10:57

she was just out there smiling at me

10:59

she's so cute I love love walking out

11:00

there giving a big smooch uh anyway

11:04

scalable MatMul-free basically

11:08

language modeling so here's basically a

11:11

paper suggesting can we get away from

11:13

using

11:14

matrices in uh large language models

11:19

and uh here's what they find in their

11:20

abstract and we'll look at the

11:21

conclusion this is very very important

11:23

if you're either an Nvidia or an apple

11:26

investor because you're going to see

11:27

something that's going on here

11:29

and I'm wearing don't sue me bro because

11:31

I know there are a lot of people who see

11:32

this stuff that are like oh Kevin you

11:34

know you're just biased against a video

11:36

or whatever it's like no I'm I'm long

11:38

exposed to Nvidia but yeah I am hedging

11:42

myself in the short term and I'll show

11:43

you why take a look at this in this work

11:46

we showed that the MatMul which is

11:48

matrix multiplication uh

11:50

basically typically use most of the GPU

11:54

performance so let's say you have 100

11:57

watts of power being used let's say the

12:00

matrices use like 90 watts of that power
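
For context, the way that paper avoids matrix multiplication (as I read it) is by constraining weights to ternary values {-1, 0, +1}, so every multiply collapses into an add or a subtract. A rough NumPy sketch with made-up numbers:

```python
import numpy as np

# Sketch of the "MatMul-free" idea: with weights constrained to
# {-1, 0, +1}, a matrix-vector product needs no multiplications --
# each output is just sums and differences of input elements.
# The tiny matrix and vector below are made up for illustration.
W = np.array([[ 1, 0, -1],
              [-1, 1,  0]])      # ternary weight matrix
x = np.array([2.0, 3.0, 5.0])    # input activations

# Ordinary matmul (what a GPU's tensor cores are built for):
y_matmul = W @ x

# Multiplication-free equivalent: add where w=+1, subtract where w=-1.
y_addsub = np.array([x[row == 1].sum() - x[row == -1].sum()
                     for row in W])

print(y_matmul)                         # [-3.  1.]
print(bool(np.allclose(y_matmul, y_addsub)))  # True
```

Same answer, no multiplies — which is why such models map onto cheap low-power hardware instead of needing a GPU's multiply units.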

12:03

so they're making the argument that a

12:05

lot of power of as a percentage uh

12:09

compute power actual energy that has

12:11

different implications as well goes into

12:14

MatMuls matrix

12:16

multiplications okay interesting so what

12:20

experiment did they run well they ran an

12:21

experiment that says our experiments

12:23

show that a MatMul-free so no

12:27

matrices model achieves performance on

12:31

par with state-of-the-art Transformers

12:34

that require far more memory during the

12:37

inference scale up in addition to that

12:40

we also provide a GPU efficient

12:43

implementation of this model which

12:45

reduces memory usage by up to 61% over

12:49

an unoptimized Baseline during

12:52

training in other words we can create

12:56

large language models using field

12:58

programmable

12:59

gate arrays which are

13:02

quick for low power operations

13:04

especially mobile devices basically we

13:06

could use

13:07

fpgas instead of gpus and get quick

13:10

results with less energy consumption

13:14

which I want you to think about this for

13:15

a moment when somebody uses a computer

13:19

when you use GPT or Siri do you want a

13:24

damn pinwheel every time you're getting

13:26

an

13:27

answer of course not it's like when you

13:30

shoot a gun when you're playing Rust and

13:33

and you're raiding your enemies do you

13:35

want a big lag latency time of course

13:37

not you want fast you want as little

13:40

latency as possible to maximize your

13:43

ability to make money or to succeed at

13:45

whatever it is you're doing sometimes

13:47

that could just be having a video load

13:48

while you're all alone I don't

13:51

know point is

13:54

regarding this as a competitor or

13:57

something to Nvidia is it's a way of

14:01

saying hey if people can find more

14:03

efficient ways of running large language

14:06

models and the great big innovation

14:08

that's really come out of the latest AI

14:10

revolution has been frankly glorified

14:12

chat Bots that are really good don't get

14:14

me wrong they're great great sales

14:15

assistants and otherwise but outside of

14:17

generative AI these large language

14:20

models if you could run them without GPUs

14:23

with less power less latency and less

14:26

cost then yeah people are going to do

14:29

that especially if the results are

14:30

similar in fact you could jump over to

14:33

the conclusion and you can see they say

14:35

by prioritizing the development and the

14:37

deployment of MatMul-free

14:39

architectures as in architectures that

14:41

don't require as much of a GPU lift the

14:45

future of large language models will

14:47

only become more accessible efficient

14:49

and sustainable in other words as soon

14:51

as we get gpus out of our

14:53

life oh we can actually have more llms

14:56

and more competition which more

14:58

competition in llms puts even less or or

15:02

I should say puts companies like open AI

15:04

at risk uh for more competition and

15:07

let's just say I'm making this extreme

15:09

argument because obviously open AI could

15:11

use these MatMul-free models as well

15:13

but let's just for Giggles say that uh

15:16

GPT is going to be 100% GPU based and a

15:22

competitor comes along and it's 0% GPU

15:26

and we're just going to say comp over

15:27

here and let's say this one is faster

15:30

and just as good as GPU-based GPT but you

15:32

don't have any pinwheels well then GPT

15:34

will go bankrupt and the competitor will

15:37

succeed the competitor won't be using or

15:39

relying on those gpus which is bad for

15:42

NVIDIA interesting this is something

15:45

maybe we haven't thought of before now

15:47

obviously it's likely that GPT you know

15:49

open AI would adapt and use these

15:51

MatMul-free models as well it's also

15:52

possible this paper is wrong but you may

15:55

not have thought yet that oh damn I

15:59

thought AI meant GPU which meant Nvidia

16:02

man that's as far as I got that

16:05

that may have been your thesis but now

16:08

we actually have to go oh wait a minute

16:11

we can have GPU

16:14

free

16:16

AI messes the thesis up a little Kevin

16:19

what are you doing oh just waiting

16:21

until I get to the Apple part we're just

16:23

getting started okay what about caching

16:26

risk that's risk number two the caching

16:29

risk caching is basically a way of

16:31

saying hey for the image generation

16:33

or video generation or just

16:35

quite frankly text generation that

16:37

people need a lot of that we don't

16:39

actually need to run to a GPU server a

16:41

GPU based server every time to generate a

16:45

result instead if a million people every

16:49

single year ask why George Washington

16:51

had wooden teeth let's just run that

16:53

calculation one time and then the other

16:57

999,999 times somebody asked about

17:00

George Washington's teeth we're just

17:02

going to feed them the same answer from

17:06

basically downloaded memory like a hard

17:08

drive like imagine downloading on a

17:10

future iPhone 50 GB of a GPT

17:14

encyclopedia that has cached answers text

17:17

based answers for everything even

17:19

potentially canned uh uh image

17:21

generation or video

17:23

generations and then the only portion

17:26

you have to send to the GPU based cloud

17:30

is just what's different in the question

17:33

so then you can provide an answer that's

17:35

partly encyclopedia based in storage or

17:37

memory and the other that is based on

17:39

actual Cloud

17:41

compute okay that's another risk caching

17:45

that's not like the cash in your pocket

17:48

that is c-a-c-h-e cache often

17:53

people think weapons cache um obviously

17:57

cash is what I'm trying to teach

17:59

everybody how to get more of that's why

18:01

I teach how to build wealth in the

18:03

courses on building your wealth link

18:04

down below we've got an expiring coupon

18:06

code tomorrow uh tomorrow is June 12th

18:09

it is CPI day and Fed day big expiring

18:11

coupon code really hope that uh I could

18:14

keep positive performance going uh we

18:17

are up year to date quite a bit this is

18:19

the p&l today you saw the trade sizes on

18:21

these two uh they weren't uh you know

18:24

they were less than this so we did

18:25

really well uh coinbase was a little

18:27

larger but that was also not a

18:29

derivative play that was a um uh a share

18:31

position but uh but anyway check it out

18:35

link down below coupon expires tomorrow

18:37

again cannot guarantee uh or uh imply

18:40

that you're definitely going to make

18:41

money but the goal is to learn how to

18:43

make money and as much as possible see

18:44

we got a little disclaimer Banner

18:46

now okay
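
The caching idea from a couple of minutes back — run the expensive inference once, then serve the stored answer from memory — is the same memoization trick Python ships in its standard library. A hypothetical sketch, with the GPU model call faked out:

```python
from functools import lru_cache

gpu_calls = {"count": 0}

@lru_cache(maxsize=None)
def answer(question: str) -> str:
    # Stand-in for an expensive GPU inference round-trip; with the
    # cache in front, it only runs once per distinct question.
    gpu_calls["count"] += 1
    return f"(model output for: {question})"

# A million people asking the same thing costs one computation:
for _ in range(1_000_000):
    answer("why did George Washington have wooden teeth")

print(gpu_calls["count"])  # 1
```

Serving the other 999,999 requests becomes a dictionary lookup, which is exactly why cached answers never need to touch a GPU server.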

18:49

so if you have questions by the way

18:51

email us at staff kevin.com so now the

18:53

question becomes all right we have risks

18:56

to Nvidia we have matrix removal which

18:59

is a GPU risk we have caching risk which

19:02

is an Nvidia risk we have Nvidia that

19:04

trades at about a 3.5 PEG right now

19:06

which is a price to earnings growth

19:08

level which basically suggests that

19:10

their growth is going to slow to

19:12

somewhere around 13 to 15% over the next

19:14

four years on average per year that

19:17

you'll go from basically this doubling

19:19

to to a slowing to potentially even

19:21

negative so you get this average 13 to

19:23

14% growth and there's a real risk that

19:26

those growth expectations even in those

19:28

years are too high that those growth

19:31

expectations might actually go or the

19:33

actual growth numbers might actually go

19:35

negative way earlier which would make

19:37

nvidia's valuation even higher which is

19:41

somewhat scary because it's already kind

19:43

of rich now it is a cash generating

19:46

Behemoth it makes a lot of

19:50

money but what about Apple and what is

19:54

Apple doing that could potentially hurt

19:58

Nvidia a lot

20:00

introducing on-device artificial

20:05

intelligence now you might think to

20:07

yourself okay on iPhone who cares but

20:09

wait a second you don't like latency we

20:12

already talked about latency and you're

20:14

not going to like latency on your phone

20:16

I hate when GPT pinwheels or Siri has

20:18

to think it should be fast and the

20:21

fastest highest quality product is going

20:23

to be a product that can answer my

20:25

question with an on-device neural net or

20:29

large language model or

20:31

whatever basically AI on the device if

20:35

Apple can perfect it which we don't

20:37

know that they will be able to but if

20:38

they can perfect on device artificial

20:40

intelligence forget about upgrade cycle

20:43

for a moment for apple and what that

20:45

means for introducing new iPads laptops

20:48

desktops and phones but what I want you

20:51

to think of for a moment is if Apple

20:54

does on device guess who's going to do

20:57

on device next

21:00

Android will because do you want to buy

21:03

an Android that always sends your data

21:05

to the cloud that's not good for privacy

21:07

that's not good for operational security

21:09

opsec and it's certainly not good for

21:12

efficiency because you're waiting for

21:13

the pinwheel so if Apple does on device

21:17

then Android will probably do on device

21:21

which means probably all Hardware in the

21:23

future will do on device which means in

21:27

the future iPads will do all the on

21:31

device work this is the iPad by the way

21:33

I got because I had uh I I made a bet on

21:37

uh X on Twitter follow me there if you

21:38

don't already realMeetKevin but I made a

21:40

bet if I made trading profits on a

21:42

certain trade I'd buy a new iPad even

21:44

though I don't need it then I bought it

21:46

and then I realized that classic apple

21:47

move the apple pencil uh doesn't

21:50

actually work with uh with the new one

21:53

the apple pencil Pro you have to get the

21:54

new Apple pencil Pro and I'm like of

21:57

course of course but anyway so the

22:01

iPhone the Android transitions the iPad

22:03

transitions what's going to transition

22:05

next well of

22:07

course the laptops are going to

22:10

transition next to on device artificial

22:12

intelligence which if the laptops at

22:15

Apple transition then the competitors

22:17

will transition which means the PCs may

22:19

also

22:21

transition now PCs won't use Apple's M1

22:25

chips where you have sort of the

22:26

built-in Graphics into you know the M1

22:28

chip uh and the neural net built into

22:31

there PCs might still use AMD you know

22:35

4090s or whatever I mean these are

22:37

these are fantastic chips let's be real

22:40

okay I'm not going to play video games

22:42

on uh a Mac it's just not going to

22:45

happen I'm going to play video games on

22:48

a delicious PC because well after all

22:51

they're really good in fact I love these

22:54

things so much you can actually see

22:56

right on screen here that I got my 4090

23:00

on this computer that I use when I show

23:03

a PC so I've got a lot of these I think

23:06

I've got three or four computers that I

23:07

own that have the 4090s I love GeForce

23:10

experience I love Nvidia I think they're

23:12

a great company they have great products

23:14

and they're just good they're best of

23:17

class but that's for video gaming we're

23:19

talking AI here and what is Apple

23:21

potentially going to be able to pull off

23:23

well first of all if we don't need

23:25

gpus to create

23:29

large language models because of the

23:30

matrix risk if that ends up working out

23:33

and then Apple uses on device artificial

23:36

intelligence to do something which

23:38

honestly I was blown away by I almost

23:39

had tears in my eyes during the presser

23:41

of the Apple event which was oh we're

23:44

going to let you take themes from your

23:47

memories or your pictures or video or

23:50

whatever that you took on a vacation and

23:53

then ask the phone to basically create a

23:55

custom video out of it we'll throw music

23:57

on for it we'll edit it together for you

23:59

and you could show off this movie that

24:01

was custom made for you using your own

24:04

pictures and video on device so it's not

24:06

leaving your device using the compute

24:09

power on your

24:11

phone that's insane like I wish I had

24:14

the patience to go through and make

24:16

collages of all my vacations because I

24:19

would have many of them that would be

24:21

really fun and I'd be very grateful for

24:23

that I love children and I love

24:25

vacations but if the phone can do that

24:28

on device then that's a huge competitive

24:31

advantage over me creating a generic

24:33

piece of imagery or video that I don't

24:37

really care about it's not personal now

24:40

obviously the on-device aspects that we

24:42

learned about from Apple some of them

24:44

were great as well such as artificial

24:46

intelligence to let you know which of

24:49

the uh notifications you get are

24:51

pressing right now while Kevin is

24:53

filming a video versus not pressing

24:56

while Kevin is busy filming a video you

24:58

know little things like TLDRs or

25:00

summaries or the calculator app or

25:02

the notes and being able to make

25:04

modifications these are great these are

25:05

really functional useful applications uh

25:08

making Siri better finally these are

25:10

great applications of AI but we have to

25:13

go Way Beyond this idea of a product

25:15

refresh cycle for a moment and how

25:17

useful some of these services are going

25:18

to be as disappointed as I was in the event

25:21

in the first half the second half was

25:23

actually pretty

25:24

good we actually have to look and say

25:27

wait a minute

25:28

is there a chance that the next AI play

25:32

is not Nvidia that the next AI play is

25:35

actually Apple now Apple also has a PEG

25:38

of about three and a half so their

25:39

valuation is ironically similar but if

25:42

nvidia's valuation is based on growth

25:44

rates that are too high then their PEG

25:47

ratio is going to go up as their growth

25:49

rates come down and if apple is being

25:53

thought of as a dead company that

25:55

doesn't innovate then their growth rates

25:57

are way too low

25:58

which means their PEG ratio is probably

26:00

going to come down so in my estimates

26:03

Nvidia probably has a PEG ratio now of

26:05

five and apple probably has a PEG ratio

26:09

of closer to 1 and a half to

26:12

two in other words people expect this

26:15

for NVIDIA and that for Apple well that

26:20

changed yesterday thanks to WWDC 2024
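
For reference, PEG as used throughout is just the P/E ratio divided by the expected annual earnings growth rate (in percent), and you can invert it to back out what growth a given valuation assumes. The inputs below are round hypothetical numbers, not live market data:

```python
def peg(pe: float, growth_pct: float) -> float:
    """Price/earnings-to-growth ratio: P/E divided by expected growth (%)."""
    return pe / growth_pct

def implied_growth(pe: float, peg_ratio: float) -> float:
    """Invert PEG: what annual earnings growth (%) a valuation assumes."""
    return pe / peg_ratio

# Round hypothetical inputs, not live quotes:
print(peg(70, 20))               # 3.5  -> a 70 P/E priced for 20% growth
print(implied_growth(70, 5.0))   # 14.0 -> at a PEG of 5, ~14% growth is baked in
```

That inversion is the move he's making when he says a mid-3s PEG "suggests that their growth is going to slow to somewhere around 13 to 15%."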

26:25

but it didn't just change Apple's

26:27

trajectory

26:28

it also creates real concerns for

26:31

NVIDIA so what's the trade here's my

26:35

trade short term I'm short

26:38

Nvidia now that's in a trading portfolio

26:42

I've also reduced my long-term exposure

26:45

to Nvidia I've roughly I

26:49

think a little bit more than halved my

26:52

Nvidia exposure right around 120 bucks a

26:55

share that's $1,200 uh post split or

26:58

presplit rather so I've substantially

27:00

reduced my exposure to Nvidia don't get

27:04

me wrong I still think they have

27:06

pricing power I still think Nvidia has a

27:08

lot of pricing power don't get me wrong

27:10

I just think their valuation has gotten

27:12

carried away so it's worth reducing

27:14

exposure to them at the same time it is

27:18

worth in my opinion

27:20

maintaining pretty high exposure to

27:22

Apple because this could be the next

27:25

sort of AI play that everybody forgot

27:28

in fact if you recall I made a video on

27:32

Apple Ai and people got mad at me people

27:35

got so mad meet Kevin Apple artificial

27:39

intelligence let's take a look at this

27:41

okay so we made a video called leaked

27:45

Apple documents reveal AI Siri plans

27:49

Apple artificial intelligence that's

27:52

what I called it back then I should

27:54

have put it together that they were

27:56

going to call it Apple Intelligence it's

27:58

kind of smart uh but anyway you can see

28:01

that video yourself here and I actually

28:03

really encourage you to check it out

28:05

because you're going to see that on

28:06

April 2nd I was talking about how I

28:09

wanted to buy more Apple stock on April

28:12

2nd and go back for a moment to

28:15

where April 2 is April 2nd is right here

28:19

Apple Stock's $170 per share from then

28:24

to now Apple stock is up about 21 to 22%

28:28

and it's only been about 2 months and

28:30

2 weeks here's a video that only got

28:33

less than 50,000 views that

28:37

talked about a lot of what I talked

28:39

about in this video where they're going

28:41

to do something incredible on device and

28:43

it's going to be a game

28:47

changer so I was transparent about it

28:50

then I'm transparent about it now I

28:52

think this is a big deal now uh so my

28:57

take is

28:58

uh longer term less exposure

29:01

Nvidia so let's say if I'm like 5%

29:04

Nvidia 10% Apple I let those ride for

29:07

long term this is not personalized

29:09

Financial advice I want to be really

29:10

clear about that when I give these like

29:12

allocations and stuff this this is not

29:14

for you disclaimer not personalized

29:17

advice this is very clearly just

29:20

theoretical and what I'm personally

29:22

doing uh with my own money where I

29:24

want my own money to be I personally

29:27

like investing via an actively managed

29:29

ETF that I don't have to you know sort

29:31

of um how should I say uh worry about

29:34

too much concentration risk in because

29:35

there are legal compliance issues with

29:37

those uh which is nice you can't get

29:39

like overly heavy in one

29:41

thing so um that's my Nvidia Apple

29:45

thesis longer term but shorter term I'm

29:49

just going straight short Nvidia

29:52

through options you could go out 3 weeks

29:56

60 days 90 days whatever you decide

29:58

ultimately is for you again this isn't

30:00

personalized advice um but that's what

30:03

I'm thinking uh and then the same thing

30:05

for Apple call options shorter term 21

30:08

days to 60 days somewhere in that range

30:11

because what I think you'll see is

30:12

you'll see money flow out of Nvidia and

30:14

into

30:15

Apple it's just a thesis I have it it

30:18

could be very very wrong uh but this

30:21

also worth noting brings up wait a

30:23

minute what about energy usage remember

30:26

how everybody's like oh energy usage is

30:28

going to Skyrocket because of AI data

30:31

centers in fact one of my team members

30:34

brought me an entire research piece and

30:36

they're like oh Kevin like our investing

30:38

in data centers is a big play you know

30:40

there there are a lot of people saying

30:41

energy use is going to explode because

30:43

of these these chips and um it's funny

30:46

because I I I said I don't think so and

30:49

they're like wait what what are you

30:51

doubting AI I go no like what what do

30:53

you mean then I go all it's going to

30:56

take is a more efficient chip that does

30:58

four times the work at a fraction of the

31:01

energy

31:03

output and the fact of the matter is

31:06

human productivity just isn't going

31:08

to go up at the same rate that energy

31:12

costs for artificial intelligence go

31:14

down or energy demand goes down like for

31:17

example let's say creating an app

31:20

through artificial intelligence is going

31:21

to take me 100,000 units of power you

31:25

know kilowatts let's call it kilowatt

31:27

hours okay

31:28

um and that's at today's

31:31

level okay well if I could get that down

31:34

to 25,000 kilowatt hours of power am I going to

31:36

make four apps no I only need one okay

31:40

maybe I'll have a second do you see what

31:41

I'm saying okay but then I'm only at

31:43

50,000 kilowatt hours of use that's less

31:45

than the prior estimate so I think

31:47

that's overblown that's my opinion again
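
The back-of-the-envelope math in that segment can be sketched in a few lines. All figures here are the video's own illustrative numbers (the 4x efficiency gain and the app counts are hypothetical, not real measurements):

```python
# Sketch of the argument above: a more efficient chip lowers energy per task
# faster than demand rises, so total energy use can fall rather than explode.
# All numbers are illustrative figures from the video, not real data.

kwh_per_app_today = 100_000          # energy to build one app with today's chips
efficiency_gain = 4                  # hypothetical chip: 4x the work per kWh
kwh_per_app_future = kwh_per_app_today // efficiency_gain   # 25,000 kWh
apps_built_future = 2                # demand roughly doubles; it doesn't 4x

total_future_kwh = kwh_per_app_future * apps_built_future
print(total_future_kwh)              # 50000 -> below the 100,000 kWh baseline
```

The point of the sketch is that total use only grows if demand scales at least as fast as efficiency; here demand doubles while efficiency quadruples, so the total falls by half.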

31:49

I could be wrong sorry I like I know

31:51

sometimes people like Kevin you're so

31:52

disclaimer heavy and like I just I

31:55

really just want to be very clear that

31:56

like I think I try to give really good

31:59

perspective we did an amazing course

32:00

member live stream this morning

32:02

probably one of the best course member

32:03

live streams with a big trade we've got

32:05

going uh that I think I've ever done uh

32:08

it's really good I encourage you to

32:09

watch it and we'll see how the trade

32:11

plays out but we got a huge one cooking

32:14

and uh I'm really enthusiastic so check

32:16

out the courses on Building Wealth link

32:17

down below expiring coupon code hits uh

32:21

tomorrow and uh if you have questions

32:23

email us at staff atm.com thanks so much

32:25

for watching we'll see you next one

32:26

goodbye and good luck why do I not advertise

32:28

these things that you told us here I

32:30

feel like nobody else knows about this

32:31

we'll we'll try a little advertising and

32:33

see how it goes congratulations man you

32:35

have done so much people love you people

32:36

look up to you Kevin Paffrath the financial

32:39

analyst and YouTuber meet Kevin always

32:41

great to get your

32:42

take even though I'm a licensed

32:44

financial adviser licensed real estate

32:45

broker and becoming a stock broker this

32:47

video is not personalized advice for you

32:49

it is not tax legal or otherwise

32:50

personalized advice tailored to you this

32:52

video provides generalized perspective

32:53

information and commentary any third

32:55

party content I show shall not be deemed

32:57

endorsed by me this video is not and

32:59

shall never be deemed reasonably

33:00

sufficient information for the purposes

33:02

of evaluating a security or investment

33:04

decision any links or promoted products

33:05

are either paid affiliations or products

33:07

or Services we may benefit from I also

33:09

personally operate an actively managed

33:11

ETF I may personally hold or otherwise

33:13

hold long or short positions in various

33:15

Securities potentially including those

33:17

mentioned in this video however I have

33:18

no relationship to any issuer other than

33:20

HouseHack nor am I presently acting as a

33:22

market maker make sure if you're

33:23

considering investing in HouseHack to

33:25

always read the PPM at house.com
