TRANSCRIPT (English)

Sam Altman Says "Visit Me In Jail"... Insane OpenAI Damage Control AMA...

21m 41s · 4,481 words · 633 segments · English

FULL TRANSCRIPT

0:00

The cracks are starting to show with OpenAI and Sam Altman. This is bonkers. Now, if you believe these numbers, ChatGPT has suffered over 700,000 people unsubscribing from their paid services. This is resulting in Anthropic getting a ton of positive press, simply because they are the ones that are not willing to just do mass surveillance on US citizens, and not willing to allow fully autonomous systems with their AI in terms of military weapon systems, which of course OpenAI seems to have no problem with. I've covered this in a previous video, so I'm not going to go over that in detail in this one. But what I am going to say is this: Sam Altman is [ __ ] the bed. And it's quite evident by what he's doing now.

0:43

Look at this. It is crazy. This is Robert Bay, a product manager at Anthropic, and look at the top downloads on the App Store. Claude is number one. ChatGPT has been knocked off that spot, obviously by their behavior over the last couple of days when it comes to their, let's say, duplicitous CEO and all of that. Basically a mask-off moment for the company. And also Dick's Sporting Goods.

1:10

So I looked into this. The reason why that's there is they've got an offer where, I think, if you do activities and track them with their app, you get discount codes to use in their store. So that's why they're number three. But the big story here is that Claude, for the first time ever, has knocked ChatGPT off the top spot.

1:27

So, why? That's the question. Well, we know why. It is because Anthropic stood up for their values and said, "Hey, we don't feel very comfortable using our AI systems for fully autonomous weapons, and we don't like the idea of using it for mass surveillance." That same day, Sam Altman says, "Oh yeah, I agree with Anthropic. That's great." And then that very evening, he decides, "No, actually, we'll just take over the Anthropic contract." Because, obviously, Anthropic was deemed a supply chain risk by the US government and removed from US systems, although it's going to take six months to remove them, and there was a deadline, if you remember: the Friday deadline.

2:04

Now, what's interesting about that, and we'll get on to it in this video, is what happened after Friday: the US-Israeli strikes on Iran, which, as I'm going to show you, used Claude. They used their AI. So they wanted the AI systems to be in place to use for that mission planning and all that kind of stuff they were doing. Not using them for autonomous weapons, not yet, but using them at a mission-intelligence level.

2:33

So this prompts Sam Altman to do this. Now, this was yesterday, or, well, today if you're watching this video today, but it was the early hours of the morning for me. You can see it was 12:13 a.m. In some ways, it's bonkers. We live in a world where the CEO of, you know, the biggest private company there is (maybe you could say Starlink could be the biggest private company in terms of value, but it's not really; let's just roll with OpenAI)...

2:57

It's one of the biggest companies, right? Reportedly worth near $800 billion at current valuations. Their CEO just goes on Twitter and starts talking like this. He's worried, right? He's absolutely worried. He is losing market share to Anthropic. They are losing market share to Google. They're losing market share all over the place. When I say market share, I mean consumer market share: clearly, they're not losing government market share, and they're trying to go into the corporate arena as well, to take on Anthropic.

3:30

But look at this. He says, "I'd like to answer questions about our work with the Department of War." You know, I cannot stop reading that as Dawn of War, because it's DoW, isn't it? Dawn of War. By the way, the remaster of Dawn of War is really good, and Dawn of War 4 looks sick. I need that. Dawn of War 2 is a good game as well. Dawn of War 3 was trash, but 4 looks pretty good. Anyway, he says, "Ask me anything." So, okay, let's take a look at what he says.

3:57

So we'll look at the question, and we'll look at the answer. "If the government comes back with a memo saying that in their view mass domestic surveillance is legal, do you accept that? Do you do it until the courts bar it, or do you delay until the courts approve it? Second, would mass domestic surveillance be a lawful use right now?" That's a good question. And he says, "We would not do that, because it violates the Constitution. Also, I cannot overstate how much the Department of War has been extremely aligned on this point." This is objectively not accurate, because this is what Anthropic was saying, and they threw Anthropic out for saying the same thing. So this just cannot be accurate.

4:30

"However, maybe this is the question you are really asking: what would we do if there were a constitutional amendment that made it legal? Maybe I would quit my job." I mean, I can't see that happening in any world, although a lot of people probably would like to see it. But anyway: "I very deeply believe in the democratic process, and that our elected leaders have the power, and that we all have to uphold the Constitution." But isn't he agreeing to have his systems used in mass surveillance, which is against that? See, this is what... this guy is all over the place. "I'm terrified of a world where AI companies act like they have more power than the government. I don't like..." So, here's the thing: I'm not entirely sure he is. I think he wants to be the person with more power.

5:13

See, you've got to remember what has happened here. There was a gap. Anthropic has been removed from government systems. Immediately, the same day, this guy enters. He goes straight in. It doesn't matter what the ethical impact of that might be, what the constitutional impact of that might be, what the legal impact. He just doesn't care. He's like, "Whatever, get us in." It's the ruthless behavior of someone that wants relentless growth, which is typically what these CEOs do.

5:38

Anyway: "I would also be terrified of a world where our government decided mass domestic surveillance was okay. I don't know how I'd come to work every day if that were the state of the country or the constitution." But he is building the tools that directly enable this. That's what's bonkers.

5:54

"How long has this conversation with the Department of War been going for? What was the reason for announcing so close to the deadline they gave Anthropic?" Again, good question.

6:00

Interesting that he answered this. "For a long time, we were planning to do non-classified work only. We thought the DoW clearly needed an AI partner, and doing classified work is clearly much more complex. We have said no to previous deals in classified settings that Anthropic took. We started talking with the Department of War many months ago about our non-classified work. This week, things shifted into high gear on the classified side. We found the DoW" (it doesn't sound very good to say "the DoW", but I'm going to keep saying it like that instead of "Department of War") "to be flexible on what we needed, and we wanted to support them in their very important mission. The reason for rushing is an attempt to de-escalate the situation."

6:49

It isn't. It's just an attempt to capture market. That's all it is. It's crazy. I mean, if this guy had any bollocks, he would just say, "I stand with the comments of Anthropic." Now, be fully aware: I am not saying Anthropic are great, right? Like I said, they've been used in the strikes on Iran, and they've been inside the military for a while. So these systems are being used, whether you like it or not, for nefarious means, right?

7:12

So, for Dario to come out and say, "Look, we've got a line, and they want us to cross it, and we're not going to do it," and the very same day OpenAI is like, "Yeah, we'll cross that line"? That's [ __ ]. Like, if Sam Altman had come out and said, "No, I believe in what Anthropic is saying," then that would leave the door open to... like, I'm sure xAI would do it, but who gives a [ __ ] about them. You know what I mean?

7:33

So this is a complete loss of values. And again, this is why they're suffering, with people pulling back and stopping using their service: because it's showing that they are untrustworthy in a massive, really dangerous way. Anyway: "I think the current path things are on is dangerous for Anthropic, healthy competition, and the US. We negotiated to make sure similar terms would be offered to all other AI labs. I know what it's like to feel backed into a corner, and I think it's worth some empathy to the DoW. They are a very dedicated group of people with, as I mentioned, an extremely important mission. I cannot imagine doing their work. Our industry tells them the technology we're building is going to be the high-order bit in geopolitical conflict, China is rushing ahead, you are very behind; and then we say, 'But we won't help you, and we think you are kind of evil.'"

8:30

"I don't think I'd react great in that situation. I do not believe unelected leaders of private companies should have as much power as our democratically elected government. But I do think we need to help them." Next, on the main competitor, Anthropic: "Did OpenAI come out and say they do not think Anthropic should be labeled a supply chain risk? It was even stated that your position on this was made clear to the DoW. From the outside, it feels like some political chess, given this was said after your deal was confirmed with the DoW." And then he says enforcing the supply-chain-risk designation on Anthropic would be very bad for our industry and our country, and obviously their company. I mean, but it's being forced on them. I don't think the legal documents have been submitted to them yet, but the process is beginning. "We said to the DoW, before and after, that part of the reason we were willing to do this quickly was in the hopes of de-escalation."

9:27

I mean, it just isn't, though. It's just to get in. It isn't. It just isn't. "I feel competitive with Anthropic, for sure. But successfully building safe superintelligence and widely sharing the benefits is way more important than any company competition. I believe they would do something to try to help us in the face of great injustice if we could. We should all care very much about the precedent."

9:55

"I saw in some other tweet that I must not be willing to criticize the DoW. It said something about sucking their dick too hard to be able to say anything critical, but I assume this was the intent, so to say it very clearly: I think this is a very bad decision from the DoW, and I hope they reverse it. If we take heat for strongly criticizing it, then so be it." Now, this one is like: hang it in the Louvre. Store this message, get ready, and then hang it in the Louvre when it proves to be the case.

10:24

So he gets asked what would cause OpenAI to walk away from a government partnership. Nothing. They're never going to walk away from one, right? Anyway: "Is there a clearly defined boundary or red line you won't cross?" And he says... well, let me first address why I say they will never walk away from this. It's because of the way these AI systems work. Look at what's going on with Claude at the moment: it is embedded in the government systems, and it's very hard to get that out and replace it with a different model. The longer you are in these systems, the more difficult it becomes to remove you. So even if models get better, and can maybe do things slightly differently, it's going to be hard to remove the embedded model. So it is in the interest of an AI lab to get embedded, and you can say this is what Anthropic were doing. They did this, but then the crunch came, and suddenly, you know, they were asked, "We're just going to remove the guardrails," and they said no. They stood up for it; then they got removed.

11:15

Now, OpenAI clearly want to be in the government systems, because the longer they're in them, the harder it is to get rid of them. And, you know, think ten years down the line: they will be the de facto provider of AI services for whatever the government are using them for. And not just the US government, right? This will precipitate out across all major Western governments as well, unless there's a political backlash, which there may be, especially in European governments. But then he says this, and this is what I'm saying: hang this up and get ready. "If we were asked to do something unconstitutional or illegal, we will walk away. Please come visit me in jail if necessary."

11:50

"Which of OpenAI's core principles was the most difficult to reconcile with the DoW's requirements during your internal debates this week?" "Thinking through non-domestic surveillance. I have accepted that the US military is going to do some amount of surveillance on foreigners, and I know foreign governments try to do it to us, but I still don't like it. I think it is very important that society thinks through the consequences of this. Perhaps the single principle I care most about for AI is that it is democratized, and I can see surveillance making that worse. On the other hand, I also respect the democratic process. I don't think this is up to me to decide."

12:24

You see, it's not up to him to decide, but he is the leader of the company that's willing to provide the AI tools that will enable this. That is what's crazy. Like I said, it's bonkers, right? This guy says, "It's not up to me, it's nothing to do with me," while also providing the means to do the thing. You know, it's crazy. He's got no spine. So, this is kind of funny: a user just posts a screenshot of the App Store, "You happy now?" And he just says, "No. Also, update your apps."

12:54

So, this is a direct question: "How do you go from 'a tool for the betterment of the human race' to 'let's work with the Department of War'?" "I value my liberty and safety, and yours. I believe that a strong democracy, and a strong US in particular, is a very good thing for the world. The 16-year-old me thought every country should just abolish their defense department at the same time. I wish he were right, but I now think the world is a much more fragile place."

13:27

So then, I want to recap this. This is the most insane attempt at damage control, and I don't think it has done anything for damage control. It just goes to show you what the hell is going on at OpenAI. But he says this, to sum up the AMA: "There is more open debate than I thought there would be, at least in this part of Twitter, about whether we prefer a democratically elected government or unelected private companies to have more power."

13:54

I don't think that's accurate, from what I've seen. My take on that is: it's not about that. It's about the companies willing to provide their systems, which they know are going to be used for extremely nefarious, you know, ambiguous legal-loophole applications. He is happy with that, and other AI labs are not; well, Anthropic specifically. "I guess this is something people disagree on, but this seems like an important area for more discussion." That's just a non-comment.

14:27

"I think there is a question behind a lot of the questions, but I haven't seen it quite articulated: what happens if the government tries to nationalize OpenAI or other AI efforts? Obviously, I don't know. I have thought about it, of course. It has seemed to me for a long time that it might be better if building AGI were a government project, but it doesn't seem super likely on the current trajectory. That said, I do think a close partnership between governments and the companies building the technology is super important." This might be him saying, "Please, just nationalize us, because we've got no money. Please, please."

15:06

I don't think it is, though. "People take their safety, in the national-security sense, more for granted than I realized, which I think is a good thing on balance, but I don't think it shows enough respect to the tremendous work it takes for that to happen." I don't know. Again, I'm not entirely sure what you're saying there, because people's issue is that they don't want to be surveilled, and then other people are worried about autonomous weapon systems going out of control. But mainly, at the moment, it's that they don't want to be surveilled, because, like I've said, with AI systems the data is kind of already there, but the AI systems can collect that data and then build profiles much faster than any other system could before. So you can get detailed profiles on people, and if you're a government, you've obviously got tons of information on your population. You can do all kinds of things. You know, I dread to think what's going on in China at the moment, because the amount of detail they've got on all of their civilians is crazy. And this, again, is not me being naive: obviously, a lot of the big American tech companies have got tons of information on all of us. I'm not in America, but Google knows everything about me, you know what I mean? So this stuff goes on everywhere. So, this is crazy.

16:12

Now: "US military reportedly used Claude in Iran strikes despite Trump's ban." So Trump calls Anthropic "a radical left AI company run by people who have no idea what the real world is about", and then uses their applications to bomb Iran. I mean, it's crazy. The US military reportedly used Claude, Anthropic's AI model, to inform its attack on Iran, despite Donald Trump's decision, announced hours earlier, to sever all ties with the company and its artificial intelligence tools. The use of Claude during a massive joint US-Israeli bombardment of Iran that began on Saturday was reported by the Wall Street Journal and Axios. It underlines the complexity of the US military withdrawing powerful AI tools from its missions when the technology is already intricately embedded in operations. According to the Journal, US military command used the tools for intelligence purposes, as well as to help select targets and carry out battlefield simulations. On Friday, just hours before the Iran attack began, Trump ordered all federal agencies to stop using Claude immediately. He denounced Anthropic on Truth Social as "a radical left AI company run by people who have no idea what the real world is about."

17:14

The flaming row was triggered by the use of Claude by the US military in its raid to capture the president of Venezuela, Nicolás Maduro, in January. Anthropic objected, pointing out that its terms of use do not allow Claude to be applied for violent ends, to develop weapons, or for surveillance. Since then, relations between Trump and the Pentagon and the AI company have steadily worsened. In a lengthy post on X on Friday, the defense secretary Pete Hegseth accused Anthropic of arrogance and betrayal, adding that America's warfighters "will never be held hostage by the ideological whims of big tech". Hegseth demanded full and unrestricted access to Anthropic's AI models for every lawful purpose. But the defense secretary also gave a nod to the difficulty of rapidly detaching military systems from the AI tool, given how widely used it has become. He said that Anthropic will continue to provide services for a period of no more than six months to allow for a seamless transition "to a better and more patriotic service", which is code for "oh [ __ ], we need them, and we're going to use them for now, so we're just going to tell them bollocks and get rid of them."

18:12

But I've got to be honest: I think this is just a case of them not being able to be told no. Again, I'm not going to get political here, but the way I've read this Trump administration, if it's told no, it just goes crazy. It's like, "You can't tell me no. You're gone." And that's just how they act, you know?

18:31

And it was a valid thing that Anthropic said: "No, we don't want our systems used for this." And they've said, "Hang on, you've said no? You're gone." And then round the corner is Sam Altman: "Yeah, I'll come in." Apparently, I've read six articles on the Guardian this year. Anyway: since the break with Anthropic, the rival company OpenAI has stepped into the breach. Sam Altman, OpenAI's CEO, has said he reached an agreement with the Pentagon for use of the company's tools, which include ChatGPT, in its classified network. Okay, so that's the situation we find ourselves in at the moment. It is a very murky situation. It's a worrying situation.

19:06

You know, we've got this development of AI being used very clearly in war scenarios. It's going to be used in mass surveillance. There are lots of applications of AI which... look, I'm just going to say this. What I want to say is that I come at AI with a very utopian approach: let's cure all disease, let's cure things like famine, let's solve the energy crisis. That's what I think AI should be used for. Or even cool stuff, like let's work out how to communicate with a dog (you know what I mean; we can communicate with dogs now, but you get what I'm saying), or what are birds saying when they're talking to each other? Let's decipher an animal language. How do whales speak? That kind of stuff.

19:51

The problem is, and this is why it pisses me off, that it's being used for generative "creative" purposes. It's creating slop: slop music, slop videos, slop everything, slop text, slop articles. Then it's being used for autonomous weapons. It's being used for surveillance. It's being used for battle simulations. It's being used to select and designate targets. I dread to think what AI has been involved in where it's gone wrong, because it will have gone wrong, and they would have covered it up. To me, it's very, very frightening. And what worries me is you can see that these unscrupulous leaders of some of these AI companies will just do anything to get their AI systems used, to get them embedded. They almost don't care what they're being used for. Some of them do, in the case of Anthropic, but again, like I said, they were inside the US military anyway.

20:38

So yeah, this is a complex situation, and the fact of the matter is that, right now, OpenAI does seem to be suffering. In my previous video, I did say I didn't think this was going to do much to OpenAI. But the whole "quit GPT" movement, which seemed to be happening off the back of their DoW contract, does seem to be having an effect, because clearly Claude is now the top app on the App Store: the top AI app, and just the top app being downloaded. And OpenAI is losing enough traction, and getting enough bad press, that Sam Altman is having to come out and do this crazy AMA on Twitter, which I don't think has really helped with anything, because he's not really answered anything there. At least, that's the way I've read it.

21:17

All right guys, thank you for watching today's video. I've been inside. So if you enjoyed this, then do subscribe to the channel as ever. I'll keep you up to date on AI-related news and gaming news and all of that stuff. And I'm actually going back to play Marathon now, because Marathon is a weird game: it's a game which just gets better the more you play it. I think the aesthetic carries that game quite a lot. Anyway, I'm out of here, guys. Thank you for watching, and I'll catch you on the next video.
