Transcript (English)

Why you NEED to be running local AI models (FULL beginners guide)

21m 28s · 4,209 words · 578 segments · English

Full transcript

0:00

I'm about to show you the future of AI, AI agents, and OpenClaw. Over the past two months, I've spent over $50,000 to use, test, and learn about local AI models. What I learned, I think, can dramatically change your life and save you tons of money, even if you're on a cheap computer. You don't need to buy Mac Studios like me. In this video, I will cover everything about local AI models. I'll cover what computers you need, what local AI models even are, which models you can run, what use cases you can do today, and how this lets you use OpenClaw completely for free. I'll also show you a glimpse of the future that I am 100% confident is what's going to happen. By the end of this video, you'll be a local AI master and you'll be running your own local superintelligence on your computer. So, let's lock in and get into it.

0:48

So, this might be my most important video yet. I'm so excited to take you through what I've learned over the last couple of months. Even if you have no idea what local AI models are, you're going to get so much out of this video. So, let's start off with why local AI is so important and why you need to be using it.

1:06

This is what you're probably doing today. If you're on ChatGPT or Claude or using any of the AI frontier models that everyone knows about, you're using a cloud model. That means you're using AI that's running on big servers that might be underground, or one day in space because of Elon, or might be on an island. And because you're using models that run on those servers, that means a lot of different things. One, it's expensive. You're paying for every token you use. Every time you send a prompt to these servers, it's doing a bunch of calculations, and they're charging you for each one of those calculations. This is where these massive API bills are coming from.

1:45

This is where the $200-a-month subscription plans are coming from, and all your API usage. It's very expensive to run AI models on these billions of dollars of chips. The cloud also has many other downsides. Zero control: a lot of people complain all the time that they feel like their AI models are getting stupider, and in reality, they probably are. These AI companies are constantly dialing the knobs and changing things to try to save money. You have zero control over the AI models running on these servers. You also have zero privacy. Every message you send to ChatGPT or Claude or Gemini or any AI model you're using in the cloud, the employees at those companies can read those logs. Nothing you say is private and secure. Every question you ask about your health, or maybe if you're a sicko and you have your own AI girlfriends, they can read all of those messages you're sending.

2:38

It's also laggy. You need to be connected to the internet, and if you don't have great internet, it can take a while to get your prompt sent there and sent back. So, there's high latency.
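The per-token billing described above, versus a flat electricity bill, is easy to put into numbers. The rates below ($15 per million tokens, a 200 W machine, $0.15/kWh) are illustrative assumptions, not quotes from any provider:

```python
# Back-of-envelope comparison: cloud per-token billing vs. local electricity.
# All prices here are illustrative assumptions, not real provider rates.

def cloud_cost_usd(tokens: int, usd_per_million_tokens: float) -> float:
    """Cost of processing `tokens` at a flat per-million-token rate."""
    return tokens / 1_000_000 * usd_per_million_tokens

def local_cost_usd(hours: float, watts: float, usd_per_kwh: float) -> float:
    """Electricity cost of running a machine at `watts` for `hours`."""
    return watts / 1000 * hours * usd_per_kwh

# Hypothetical always-on agent workload: 50M tokens/day at $15 per 1M tokens.
cloud = cloud_cost_usd(50_000_000, 15.0)   # 750.00 USD/day
# A desktop drawing ~200 W around the clock at $0.15/kWh.
local = local_cost_usd(24, 200, 0.15)      # 0.72 USD/day
print(f"cloud: ${cloud:.2f}/day, local: ${local:.2f}/day")
```

The exact numbers vary wildly by provider and hardware, but the three-orders-of-magnitude gap for continuous workloads is the point being made.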

2:46

And on top of that, it's just not scalable. A lot of people have been learning this lately with OpenClaw. Maybe you connect it to the Opus 4.6 API, you send a bunch of prompts, you look at your API bill and, whoops, you spent $1,000 over the last day because you sent a bunch of prompts. It's not scalable at all. And if you want superintelligence working for you 24/7, it's going to cost you millions of dollars. But with all that being said, you do get one benefit, which is frontier AI. You're getting the best AI models; they're running on these servers and you're getting the best performance. That is probably what you're used to today.

3:19

But where I strongly, strongly believe the future is going, and what I actually believe you will be doing in the next 12 months, is using local AI models. What are local AI models? Instead of all these complex multiplication equations happening on servers across the world, they're happening locally on the Mac Mini on your desk, or the Mac Studio, or the old dusty Lenovo laptop, whatever you're using. The models run locally, and that has a tremendous number of benefits. First of all, it's completely free, right? You're not paying for tokens; it's just the cost of the electricity going into the computer you have plugged into the wall. It's fully customizable: if you want to take a local model and make it sound like you, or make it rap like Kendrick Lamar, you can do that. It's also fully secure and private. Every message you send to your local AI running on the computer on your desk stays on your computer. It does not go out to the internet, and nobody can read your prompts or the messages going back and forth. If you want to get freaky-deaky and make your own AI boyfriend or girlfriend, you can do that and no one will read those messages. Not that I would know anything about that. There's also zero latency: there are no messages going over the internet; it all stays on your device. So, you literally can unplug your computer from the internet and the AI will still work.

4:43

You can be on an airplane, vibe coding to your heart's content, and it doesn't matter that there's no internet, because it's all local on your computer. And here's the best part, here's why I'm bought in, and here's why I think everyone will be using local AI in the next 12 months: because it's local and because it's free, it is extremely scalable, which means you can have AI doing work for you 24/7, 365.

5:06

Right now, and I'm going to demo this later in the video, so make sure to stick around for it, I have four local AI models doing things for me continuously: going on the internet and scraping websites, writing me content, writing me newsletters, writing code for me, just doing things at all times of the day. It's like I have multiple employees working for me. This is an advantage I have over all of my competition, because they are not running local AI models. And if you do the things I'm about to show you in this video, you will have the same crazy advantage over everyone else in the marketplace as well. That's why it's super critical to stick around here till the end.

5:46

Now, what's the one downside to all of this? Local models aren't quite as smart as the frontier models. I'd say they're about six months behind at all times. So, six months ago was like Opus 4.5, Sonnet 4.5, around that realm; the local models are about there now. And if you think back to six months ago, when Opus 4.5 came out, it absolutely blew people's minds. That's the point we're at when it comes to local models. So, they're still really, really strong.

6:15

So, that brings us to our next point, which is: what computers do you need? Do you need to run out and buy $50,000 worth of Mac Studios and DGX Sparks like me? Well, the answer to that is no. You can literally run local models on any computer you have. If you have an old crappy laptop in your closet from, like, college or something, you can take that out and run local models. If you have the new $600 Mac Mini that everyone was running out and buying a few months ago, you can run local models on that. That was a very good purchase.
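As a rough rule of thumb (my own, not from the video), what fits on a given machine comes down to parameter count times bytes per weight after quantization, plus some runtime overhead for the KV cache and the inference engine. A sketch with assumed numbers:

```python
# Rough memory footprint of a local model: parameters x bytes-per-weight,
# scaled by an assumed overhead factor for KV cache and runtime.
# The 1.2x overhead is an illustrative guess, not a measured constant.

def model_ram_gb(params_billions: float, bits_per_weight: int,
                 overhead: float = 1.2) -> float:
    """Approximate RAM (GB) to load a model at a given quantization level."""
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total * overhead / 1e9

# An 8B-parameter model at 4-bit quantization fits in ~5 GB...
print(round(model_ram_gb(8, 4), 1))
# ...while a 70B model at 4-bit needs a machine with far more memory.
print(round(model_ram_gb(70, 4), 1))
```

This is why a $600 Mac Mini can comfortably run small models while the biggest open models call for a Mac Studio-class machine.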

6:45

Now, are the models you're running on these cheaper, smaller machines going to be Opus 4.5 level? Well, no. But there are still use cases you can run. You can still do things like memory management for your OpenClaw. Having a very small local model deciding which memories get loaded into context for your OpenClaw or your AI agent or whatever is still a really powerful use case that you can run on your $600 Mac Mini. And I'll go through the exact models you should be downloading for each device in a second.
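The memory-management use case described above can be sketched in a few lines. In a real setup a small local model would do the ranking; here a trivial keyword-overlap score stands in for it, and the memory strings are made up for illustration:

```python
# Sketch of "memory management": before each prompt, a cheap ranking step
# decides which stored memories get loaded into the agent's context window.
# A keyword-overlap score stands in for the small local model doing this.

def rank_memories(prompt: str, memories: list[str], top_k: int = 2) -> list[str]:
    """Return up to top_k memories sharing the most words with the prompt."""
    prompt_words = set(prompt.lower().split())
    scored = [(len(prompt_words & set(m.lower().split())), m) for m in memories]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [m for score, m in scored[:top_k] if score > 0]

memories = [
    "User prefers Python for scripting tasks",
    "User's newsletter goes out every Friday",
    "User runs a Mac Mini with 16 GB of RAM",
]
# Only the relevant memory is selected for this prompt.
print(rank_memories("draft the Friday newsletter", memories))
```

The point of the pattern is that the big (or paid) model only ever sees the few memories that matter, so a tiny model on cheap hardware can still save real context and money.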

7:17

But even if you have these old dusty computers, you can still run local
