TRANSCRIPT (English)

OpenAI Faces Mass Boycott After Granting The Government AI-Driven Mass Surveillance...

28m 9s · 5,338 words · 809 segments · English

FULL TRANSCRIPT

0:00

Artificial intelligence is a technology that of course impacts gaming in a lot of ways, but it's also much bigger than just gaming. It is an incredibly powerful technology with the potential to impact just about every industry on this planet. And it's important to talk about the lines that need to be drawn, as well as the moral implications for companies who pursue AI, based on the methodologies with which they're pursuing that technology, the actions they take, and the words they speak. OpenAI in particular is one of the biggest factors in this conversation, due to just how early they got into this race and the amount of impact they've had, impact that many people believe to be rather negative. OpenAI has completely wrecked the computing components market through their pursuit of essentially hoarding components to stave off competition and stay ahead of the race, pursuing their ambitions at all costs.

1:07

That's combined with the irresponsible ways in which this technology is being used. It's negatively impacting the environment. It's negatively impacting artists. It's negatively impacting the very people that AI technology is advertised to benefit, as something that is supposed to enhance humans, when in reality it's a technology that is replacing humans. It's negatively impacting the way misinformation has become so prevalent.

1:33

It all paves a road where the public is beginning to realize that AI companies don't necessarily have our best interests in mind. They would like to see these companies suffer a downfall, one that will hopefully signal a rejection of abusive or exploitative ways of using this technology, ways that ultimately just fuel further class divides, further empowerment of corporations, and a deployment of this technology that isn't ultimately beneficial to society as a whole.

2:02

Now, I've talked a lot about OpenAI over the last couple of months, and I'd like to give you the latest update on where this company is at.

2:09

They're continuing to bleed money. The PR surrounding the company just keeps getting worse. And now we're getting to a point where the world is starting to boycott OpenAI due to some of the stances they've recently taken. Lately, you might be seeing plenty of headlines such as this one: "CancelGPT movement goes mainstream after OpenAI closes deal with the US Department of War as Anthropic refuses to surveil." So that's going to be the main topic of today's video. I'm going to walk you through everything that's transpired on that front. You can see, scrolling down, all the embedded posts that highlight people straight up showing their canceled subscriptions. And this is a movement that's gained tons of traction, I think much more than even OpenAI anticipated.

2:51

You've already seen CEO Sam Altman talk about how people unfairly judge AI for how much energy it consumes, because human beings also consume a lot of energy in order to train and raise them, which was just incredibly dehumanizing. But then we have developments surrounding how OpenAI is basically willing to make a deal with anyone that will give them the money to support their endeavors, due to the incredibly fast rate at which they're burning money, to a point where revenue just doesn't seem able to catch up.

3:21

And so this latest story concerns the US government, and again, we're going far beyond just gaming here. Artificial intelligence has a huge impact on gaming, but also on things like surveillance and weapons. It should come as no surprise that the Pentagon and the military are interested in artificial intelligence. And not long ago, I talked about this developing situation where Anthropic was essentially threatened because they would not abandon certain lines they had drawn in the sand. Namely, Anthropic, one of OpenAI's main rivals, straight up told the US government, who were interested in working with Anthropic and utilizing their technology, that they would not allow the mass surveillance of Americans with the use of their artificial intelligence, and that they would not allow the development of fully autonomous weaponry. Basically, Anthropic took the stance that their technology is incredibly powerful, and they believe there is some responsibility to ensure that this technology is not used for nefarious or really destructive purposes.

4:25

The Pentagon did not like that moral stance, and so they threatened to essentially deem Anthropic a national security risk, or a supply chain risk, which would mean that anyone who wants to work with the US government would not be able to work with Anthropic, and would essentially put Anthropic in a very precarious situation from a business standpoint. The threat being: either you do what we want and allow us to use your technology in any way, shape, or form we deem appropriate, or we screw over your company.

4:54

So those talks were ongoing for a while, with reports from February 14th to February 16th highlighting that whole situation and the talks behind the scenes. Then it all came to a head on February 26, 2026, when Anthropic released a statement in regards to discussions with the Department of War. In this statement, Anthropic basically highlights how they will not allow the mass domestic surveillance of citizens, nor the development of fully autonomous weapons, with their AI technology. That's a line they are not going to back away from. And the end result was that the current administration in the United States ordered all federal agencies in the US government to immediately stop using Anthropic's technology. Basically saying, "We're done doing business with you because you dared draw some lines in the sand in terms of ensuring that this technology isn't abused."

5:42

All of that relates to the latest development surrounding OpenAI, who recently got in bed with the US government in regards to this particular issue. Just the day after Anthropic made their stance clear, CEO of OpenAI Sam Altman came on the news and seemed to express respect for Anthropic's stance, and almost seemed to suggest that he would back a similar ideal, saying that "for all the differences I have with Anthropic, I mostly trust them as a company and I think they really do care about safety."

6:12

And the way this was interpreted is that OpenAI is drawing the same lines that Anthropic are. AI as a technology already feels like it's crossed multiple lines and boundaries that many people feel shouldn't be crossed, but the ultimate line is things like the mass surveillance of citizens and the use of this powerful technology for automated weapons. Anthropic said no, we are absolutely not giving in to those kinds of demands, and Sam Altman seemed to be in the space of "I respect that." That initially indicated to people that Sam Altman would basically follow in Anthropic's footsteps when it comes to this specific issue.

6:50

But on February 27th, 2026, he straight up came out with an announcement making clear that he and OpenAI had caved to the demands of the US government and ultimately decided to lend out their technology. Now, he tries to paint a more positive picture, saying that he also draws lines, and that the way the technology will be used by the US government will not allow those lines to be crossed. But the fact that Anthropic was shunned by the US government, despite having worked with them for so long, and then replaced by OpenAI, I think only highlights that there was definitely some kind of give that OpenAI offered that Anthropic didn't. It's clear that certain compromises were made, when it comes to drawing lines in the sand, that made the US government happy enough to work with OpenAI over Anthropic.

7:38

So here's how Sam Altman phrases it: "Tonight, we reached an agreement with the Department of War to deploy our models in their classified network. In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome." And Sam Altman specifically highlights how he takes it very seriously to ensure important safety principles, such as prohibitions on domestic mass surveillance, as well as human responsibility for the use of force, including for autonomous weapon systems. "The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement."

8:10

But if that's the case, then why wouldn't they just agree to the lines that Anthropic drew in the sand? Why would they shun Anthropic to the degree that they have? If Sam Altman is insisting that the Department of War agrees with the principles of not using AI for mass domestic surveillance or for essentially autonomous weapon systems, the only reason the US government would work with Sam Altman and OpenAI over Anthropic is that Sam Altman and OpenAI made some concessions, giving the Department of War and the US government a lot more freedom than Anthropic would have when it comes to how AI technology is used.

8:47

So a lot of people are taking to essentially admonishing Sam Altman and OpenAI. They were already not particularly popular, but now that's been taken to a whole new level. In fact, community notes are already saying things like: "Government officials have contradicted Sam's claim, saying OpenAI will allow the DoW, or Department of War, to use their models for all lawful purposes. This could allow for uses Anthropic refused to engage in, namely mass surveillance tools and weapon systems with no human oversight."

9:17

Basically, if the US government comes up with a reason for why it's lawful to use OpenAI to engage in mass surveillance domestically, or to use AI for autonomous weapons, then they're allowed to do that. It's basically a loophole, a loophole that OpenAI was willing to grant them that Anthropic wasn't. And what this has ultimately done is tanked OpenAI's popularity. In fact, look at the quote tweets to Sam Altman's post here, declaring that he's reached an agreement with the Department of War and that he values the principles of prohibiting mass surveillance and autonomous weapons with the use of AI. Here's somebody quote tweeting with one of the most popular posts on this matter, saying: "Sam Altman is such an incredible backstabber, liar, and traitor. While your competitor is taking a heroic and principled stand, you swoop in to make your deal. Imagine working for this guy. Is there greater shame? This should lead to a mass exodus from OpenAI."

10:10

And then beyond that, you've got plenty of people talking about how they predicted that this would be the endgame of AI. This would be its most egregious use: things like the military, mass surveillance, and the development of weapons. This is a technology that already felt incredibly dangerous when it comes to the way it's being used to replace workers rather than give workers tools to better do their work, the way it's stealing art and replacing artists, and the way its output quality is inconsistent and unreliable enough that becoming fully dependent on AI might lead to a lot of headaches. But now we see this on a level where it'll directly have an impact on things like our privacy, and just the state of the world, given how powerful weapons are becoming and how AI could make that much more precarious.

11:01

This just highlights how corporations think. Gaming companies will tell you that AI will be used to enhance and essentially support artists, but we're seeing plenty of evidence of corporations doing the exact opposite: straight up trying to cheap out on human labor and replace it with AI, engaging in cost-cutting measures to be able to output faster, regardless of quality and artistry, just for the sake of profits. And now, with someone like Sam Altman, we're seeing just how easily such an individual is willing to sacrifice and forgo all principles and all manner of integrity.

11:35

The way he spoke of humans as if they drain energy, instead of speaking of humanity as the point of doing anything, of bettering humanity as a whole for our survival and our collective collaboration to foster a better future, already told me everything I need to know about him. But then there's the way he just quickly swooped in and took Anthropic's place while Anthropic was making a stand, because his pursuit of money and his ambitions far outweigh any semblance of morality. All of this just really highlights how you cannot trust anyone who tells you, especially on a corporate executive level, that AI will be used to empower humans, to make society better, to make products better, to make the people who work on those products more productive, and to give them tools to make their work easier and better. You just can't trust that anymore.

12:24

And if you want to learn more about the way OpenAI caved to the Pentagon on AI surveillance, this article from The Verge is a good read. It basically highlights how weak Sam Altman's promise of prohibiting mass surveillance and the development of autonomous weapons through AI is, when you really look at what this deal is, and when you look at the way it is highlighted that OpenAI can be used for "all lawful purposes." Again, it's just about loopholes that allow them to say words that feel really positive and morally upstanding, while in reality there are countless ways this technology can ultimately be abused.

13:02

Here's someone who summarized the Verge's article pretty aptly: basically, OpenAI agreed to follow laws that have allowed for mass surveillance in the past, while insisting they protect red lines that are incredibly weak. Basically, OpenAI is full of it, and they may well turn over everything you ever typed into ChatGPT if the US government asks. And keep in mind that with ChatGPT, people bring to it a level of candor, a level of honesty and openness, that allows for an extreme level of psychological profiling, because what people type into ChatGPT, the queries that they ask it, informs a lot about that individual. The fact that they were doing targeted ads through ChatGPT was already shady enough. But now it's going up a whole other level, where government entities can use this technology for mass surveillance and get to know you on a very intimate level, one that would make you uncomfortable if you really knew what was going on.

13:57

Yeah, it doesn't shock me that powerful tools like these are primed to be abused, due to a lack of regulations, and due to a lack of moral foresight about all the egregious ways in which this technology, which is being advertised as good for society, can be used. That advertising only applies when the technology is actually used for good purposes, and it's ripe for abuse right now. And what this has meant for OpenAI in the eyes of the public is that there's actually been a mass boycott of OpenAI and its services.

14:28

Right here we have an X news headline saying, "Claude app tops App Store after OpenAI's DoW backlash." And you can actually see screenshots of this. Once upon a time, ChatGPT was number one, and now you can see Claude very quickly catching up. Eventually, Claude did in fact catch up, as people decided to forgo ChatGPT for Claude by Anthropic, due to the fact that Anthropic actually decided to draw lines in the sand and, despite threats from the US government, held to those standards. They held that moral line. And to really highlight what a big deal that is: before all this, Claude was at number 131 on the App Store in late January, and now it is at number one this weekend.

15:09

Their integrity move ended up actually rewarding them, and they also took great advantage of this whole situation. Anthropic saw that attention and shipped memory on the free plan, making the free tier stickier at the exact moment millions of new users are flooding in. And then beyond that, instead of letting their designation as a supply chain risk crumble them, integrity ultimately ended up rewarding them: K. Perry posting her Claude Pro subscription, Reddit organizing mass ChatGPT cancellations, 700-plus employees at Google and OpenAI signing an open letter backing Anthropic's position. What was supposed to be a punishment from the government, Anthropic transformed into an opportunity, converting it into the largest consumer acquisition event in AI history, and they immediately shipped product to retain every new user walking through the door.

15:55

Now, I'm not trying to label Anthropic as the good guys. I think that goes for all AI companies right now, in the way they're racing to be dominant in this technological space: Anthropic is just as guilty in many other areas when it comes to how they're pursuing this technology without really thinking about the consequences, without regard for how all this negatively impacts the masses, the public, the economy, the environment, the artists, and so on and so forth. But at the very least, they're willing to draw some lines here and there, because they recognize that if AI falls into the wrong hands, it could be incredibly destructive and cataclysmic. But of course, there will be some companies who don't care about those things, who won't draw those lines as long as they get to make a profit and realize their ambitions, regardless of the cost associated with that, whose ambitions outweigh any thought about the concept of humanity and working together so that we can thrive and move forward towards a better future together. That's not at all on Sam Altman's mind, and it clearly shows.

16:55

But yeah, basically tons of people have decided to drop OpenAI, and so Sam Altman, seeing the backlash and the negative PR, is adding further responses and trying to assuage people's concerns. He hosted an AMA where he tried to sympathize with people and talked about how there are scary precedents being set by the Department of War, in the way they're blacklisting Anthropic and any company that doesn't work with them and allow them full autonomy over these powerful technologies that could be abused. He insists that he holds the government responsible for these kinds of actions. And when asked about what happens if the government tries to nationalize OpenAI or other AI efforts, he said, "I obviously don't know. I have thought about it. Of course, it has seemed to me for a long time it might be better if building AGI were a government project, but it doesn't seem super likely on the current trajectory. That said, I do think a close partnership between governments and the companies building this technology is super important."

17:51

Even if he is being honest, what this highlights is that he hasn't really fully thought through the consequences of signing this kind of deal with the devil, where his incredibly powerful technology is just given away to be used for "lawful purposes," where loopholes will allow that technology to be used however the US government, or entities like the Department of War, deem appropriate, including in ways that the mass public would absolutely never approve of. But they just have that power now, because Sam Altman handed that power to them.

18:20

Sam Altman's attempts to assuage the fears of the public, or to paint himself as more morally upstanding, just haven't been panning out, because the morally upstanding thing to do would have been to not accept a deal that will allow unfettered use of such a powerful technology, potentially in really abusive and destructive ways. Basically, taking the stance that Anthropic did, but OpenAI didn't. And so, yeah, this is a company that essentially has a cloud looming over it when it comes to the negative PR and the negative optics in the eyes of the public.

18:50

I surmise that ultimately the real reason OpenAI signed this deal with the government wasn't because they believe in a collaboration that could pave the road towards a better future or whatever. OpenAI is just basically desperate right now. They need to sign any and every deal possible that will bring them billions, in order to be able to weather the rate at which they're burning money. They will basically take money from any source, because otherwise they will go under. This is a company that right now is basically built on a house of cards.

19:19

I mean, relationships are strained, with Stargate AI data centers for OpenAI reportedly delayed by squabbles between partners. There's a report from The Information that was covered by Tom's Hardware. The article says that sources say OpenAI, Oracle, and SoftBank disagreed on who would have ultimate control of the planned data centers.

19:36

Here's somebody else adding that basically we're looking at clashes over control, marathon negotiations fueled by 7-Eleven in Tokyo, financing pushback, and a quiet pullback from OpenAI building its own data centers for now.

19:47

And all the while, OpenAI is going around trying to secure as much funding as possible. In their latest funding round, they were able to secure around $100 billion, which you would think would be enough to keep them secure for a long time to come. But AI as a technology to build, power, and maintain is so freaking expensive that this money will burn away really quickly. A hundred billion dollars is still nowhere near enough for OpenAI to be secure about its financial future. I mean, we got reports like this one from The Information, from February 20th, 2026, highlighting how OpenAI has boosted its revenue forecast, but is still predicting $112 billion more in cash burn through 2030. Plenty of analysts are discussing how, based on everything they're seeing, there's not a single thing about OpenAI that makes it look financially sustainable for the long term. Here's an analysis that OpenAI could face bankruptcy within 2 years.

20:36

With paid subscribers remaining at only about 5% of total users, OpenAI faces a real risk due to billions in operating costs against a limited return and a continued reliance on funding and investments. And that's why they signed that deal with the US government, regardless of what their moral stances are: moral stances have to go out the window in order for OpenAI to survive, because they created a business model that is so unsustainable. So they'll rely on the US government, and on entities who seek to use their technology for destructive purposes, as long as they get their funding.

21:06

Here's another analysis from George Noble, who's been in the investing space for a while and has made a name for himself on that front, talking about how Sam Altman, CEO of OpenAI, basically just convinced three of the world's smartest investors to fund his losses: $110 billion, but zero profit in sight. The numbers are broken down here, but it's looking incredibly bleak, with the whole post and analysis ending with "this can't end well."

21:35

And then on top of that, beyond the negative press that OpenAI continues to receive, and with the latest developments resulting in OpenAI being boycotted by the masses, to the point where Anthropic is being put up on a pedestal over OpenAI because at least they had the moral integrity to draw some lines and not cave to pressure from powerful entities like the government, on top of all that, plenty of people are talking about how there is a real backlash against AI, and it's winning, with the masses disrupting the building of AI data centers. And apparently, this is an issue that's uniting people across all kinds of spectrums of beliefs and ideals. Very different groups have found common ground, and a common enemy, in the way AI has been proliferating, and the way it's been doing so in ways where people do not see the benefit. In fact, they see the opposite: they see its destructive capabilities and the dehumanizing element that AI brings to the table, with the way corporations will not use this technology to empower humans, but to take advantage of humans and try to replace humans, in order to pursue further profits and drive an even more extensive class divide.

22:43

Yeah, the article is right here; you can check it out for yourself. There are some interesting discussions about all of the ways in which the masses, the public, have disrupted artificial intelligence endeavors. Though of course, this is a technology that has a lot of powerful backing surrounding it, and it continues to march forward at a rapid pace, at a pace that feels too rapid for what society as a whole can morally handle. There are reports talking about how the contrast between the dot-com boom and the AI boom is very apparent. People actually did like the dot-com boom, because the internet ultimately did feel like a very beneficial invention. It's not perfect.

23:19

It's got its drawbacks, but ultimately

23:21

it felt more beneficial than not. The AI

23:23

boom, however, not so much. It's nowhere

23:25

near as beloved. The internet feels like

23:27

something that was meant for all of us,

23:29

whereas AI feels like a technology meant

23:31

to be used against the masses for the

23:33

betterment of a select few. And what

23:35

this ultimately does is compromises the

23:36

ability for this technology to be mass

23:38

accepted and mass adopted at a level

23:41

that AI companies will feel comfortable

23:43

with at a level that will allow them to

23:44

finally make a profit instead of just

23:46

burn money. And then beyond that, from a

23:48

usefulness capacity, there were reports

23:50

highlighting how thousands of CEOs

23:51

admitted AI had no impact on employment

23:54

or productivity. So all the promise

23:55

about how AI will enhance workflows and

23:57

just make everything more efficient,

23:59

will empower people's capabilities, so

24:01

on and so forth. None of that is panning

24:03

out, which means its mass adoption is

24:05

going to be that much more difficult

24:06

because the technology just isn't doing

24:08

what it's what it promised to do. And

24:10

then there's the fact that AI just

24:12

breaks all kinds of copyright laws with

24:14

OpenAI's own CEO Sam Alman straight up

24:16

admitting that it's virtually impossible

24:18

to develop advanced AI models like

24:20

ChatGPT without some form of copyrighted

24:22

content. And already we've seen

24:23

companies like Disney and Warner

24:25

Brothers and many others start to sue AI

24:27

companies because yeah that technology

24:29

is being used to create basically you

24:32

know videos and images from characters

24:34

that are being deployed in a way that

24:37

is attracting a lot of attention. People

24:39

are just straight up making movies with

24:41

properties that don't belong to them

24:43

because technologies like ChatGPT and

24:45

all these AI models enable that without

24:48

seeking permission from the copyright

24:50

holders without seeking permission from

24:52

the people who they're stealing work

24:54

from. And then economically, AI

24:55

basically added zero to the US's

24:58

economic growth due to the fact that AI

25:00

is being built off of components bought

25:03

overseas. You know, all those chips that

25:04

they need that they're hoarding, all

25:06

that comes pretty much from overseas.

25:08

That stuff's not manufactured in the US.

25:10

So if anything, all these US companies

25:12

pursuing AI are boosting other countries'

25:14

economies and are, you know, basically

25:16

tanking the US's by creating this bubble

25:18

and engaging in mass expenditures with

25:20

the hopes of profit that may very well

25:22

not pan out due to the rate at which

25:24

they have to spend money in order to

25:25

make this technology possible. So yeah,

25:27

when you look at just how poorly OpenAI

25:29

is doing from a public perception

25:31

standpoint, from a financial standpoint,

25:33

from a viability standpoint, from just

25:36

like an infrastructure standpoint, it's

25:38

suddenly not that difficult to figure

25:39

out why OpenAI would sign a deal with

25:41

the devil. It's basically because

25:43

they're that desperate. They're so

25:45

desperate that they're willing to forgo

25:46

any semblance of morality and any

25:48

semblance of foresight when it comes to

25:51

how that technology could be used for

25:52

destructive purposes or for privacy

25:55

invading purposes. Sam Altman and

25:56

OpenAI are just so desperate to survive

25:58

that they'll make these kinds of deals

26:00

and try to justify it in any which way

26:02

and it's not working. Anyone who's

26:03

willing to go this far to realize their

26:05

ambitions, anyone who's willing to

26:06

sacrifice everything to basically keep

26:09

their company alive, especially a

26:12

company that is dying because of a

26:14

business model of their own making,

26:16

that is someone you can never trust.

26:19

OpenAI's actions have already had a

26:21

direct impact on the gaming landscape as

26:23

a whole. But again, now it's becoming

26:25

far bigger than that, and now it's

26:27

truly becoming

26:29

existential in terms of the questions

26:31

that we have to start asking ourselves

26:32

about the capabilities of this

26:34

technology. It started with negative

26:36

impacts on computing components and

26:37

electronics like gaming devices, but now

26:40

it's expanded to a point where the

26:42

picture is even much bigger than that.

26:45

And gamers in particular are already

26:47

looking forward to the downfall of

26:49

OpenAI because of the adverse impacts that

26:51

they've had on the economy of gaming

26:54

electronics. The state of

26:56

artistry in general, the poorer quality

26:59

outputs. Like AI is just not a

27:01

technology that is favored very much by

27:03

the gaming community. But how little you

27:05

can trust these executives to actually

27:07

look out for us to try to make all this

27:10

about benefiting humanity and

27:12

benefiting the mediums that they're

27:14

being employed in and whatnot. Just

27:16

the complete inability to trust these

27:18

executives who have this power to

27:20

deploy this technology is not

27:22

particularly surprising but it's still

27:24

good to have that reaffirmation so we

27:27

can continue to keep our eyes peeled for

27:30

the words that executives will

27:32

use to try to justify the implementation

27:34

or rather the poor, abusive,

27:36

or exploitative implementation of AI

27:38

technology while having every intention

27:40

to screw over the public, the masses, the

27:43

artists, the workers, and everyone who

27:44

isn't just the executives trying to make

27:46

money. And that, ladies and gentlemen,

27:47

is kind of my take on the latest

27:49

surrounding OpenAI. I'd love to hear

27:50

what your thoughts and opinions are on

27:52

all this in the comments below. And to

27:53

be further updated on all things gaming

27:56

news, reviews, and discussions, stay

27:59

tuned right here on YongYea. I'll see

28:02

you guys next time.

28:05

YongYea.
