OpenAI Faces Mass Boycott After Granting the Government AI-Driven Mass Surveillance...
FULL TRANSCRIPT
Artificial intelligence is a technology that of course impacts gaming in a lot of ways, but it's also much bigger than just gaming. It's an incredibly powerful technology with the potential to impact just about every industry on this planet. And it's important to talk about the lines that need to be drawn, as well as the moral implications of the companies pursuing AI: the methodologies they use, the actions they take, and the words they speak. OpenAI in particular is one of the biggest factors in this conversation, due to just how early they got into this race and the amount of impact they've had, impact that many people believe to be rather negative. OpenAI has completely wrecked the computing components market through their pursuit of essentially hoarding components, trying to stave off competition and stay ahead of the race, pursuing their ambitions at all costs. Combine that with the irresponsible ways in which this technology is being used: it's negatively impacting the environment. It's negatively impacting artists. It's negatively impacting the very people AI is advertised to benefit, as something that is supposed to enhance humans but in reality is replacing them. It's negatively impacting the information landscape, with misinformation becoming so prevalent. It all paves a road where the public is beginning to realize that AI companies don't necessarily have our best interests in mind. Many would like to see these companies suffer a downfall, one that would hopefully signal a rejection of abusive and exploitative uses of this technology, uses that ultimately fuel further class divides, further empower corporations, and deploy this technology in ways that aren't beneficial to society as a whole.

Now, I've talked a lot about OpenAI over the last couple of months, and I'd like to give you the latest update on where this company is at.
They're continuing to bleed money. The PR surrounding the company just keeps getting worse. And now we're getting to a point where the world is starting to boycott OpenAI due to some of the stances they've recently taken. Lately, you might be seeing plenty of headlines like this one: "CancelGPT movement goes mainstream after OpenAI closes deal with the US Department of War as Anthropic refuses to surveil." That's going to be the main topic of today's video; I'm going to walk you through everything that's transpired on that front. You can see, scrolling down, all the embedded posts of people straight up showing their canceled subscriptions. This is a movement that's gained tons of traction, I think much more than even OpenAI anticipated.

You've already seen CEO Sam Altman argue that people unfairly judge AI for how much energy it consumes, because human beings also consume a lot of energy to train and raise, which was just incredibly dehumanizing. But then we have developments showing how OpenAI is basically willing to make a deal with anyone who will give them the money to support their endeavors, because they're burning money at such an incredibly fast rate that revenue just doesn't seem able to catch up. This latest story concerns the US government and, again, the way artificial intelligence goes far beyond just gaming; it has a huge impact on gaming, but also on things like surveillance and weapons.

It should come as no surprise that the Pentagon and the military are interested in artificial intelligence. Not long ago, I talked about the developing situation where Anthropic, one of OpenAI's main rivals, was essentially threatened because they would not abandon certain lines they had drawn in the sand: they straight up told the US government, who were interested in working with Anthropic and utilizing their technology, that they would not allow the mass surveillance of Americans using their artificial intelligence, and that they would not allow the development of fully autonomous weaponry. Basically, Anthropic took the stance that their technology is incredibly powerful, and that they have some responsibility to ensure it is not used for nefarious or truly destructive purposes. The Pentagon did not like that moral stance, so they threatened to deem Anthropic a national security risk, or a supply chain risk, which would mean that anyone who wants to work with the US government would not be able to work with Anthropic, putting Anthropic in a very precarious position from a business standpoint. The threat being: either you do what we want and allow us to use your technology in any way, shape, or form we deem appropriate, or we screw over your company.

Those talks were ongoing for a while, with reports from February 14th to February 16th highlighting the behind-the-scenes discussions, and it all came to a head on February 26, 2026, when Anthropic released a statement in regards to discussions with the Department of War. In this statement, Anthropic highlights that they will not allow the mass domestic surveillance of citizens, nor the development of fully autonomous weapons with their AI technology, and that's a line they are not going to back away from. The end result was that the current administration in the United States ordered all federal agencies to immediately stop using Anthropic's technology. Basically saying, "We're done doing business with you because you dared draw some lines in the sand to ensure this technology isn't abused."

All of that relates to the latest development surrounding OpenAI, who recently got in bed with the US government on this particular issue. Just the day after Anthropic made their stance clear, OpenAI CEO Sam Altman came on the news and seemed to express respect for Anthropic's stance, almost seeming to suggest that he would back a similar ideal, saying that for all the differences he has with Anthropic, he mostly trusts them as a company and thinks they really do care about safety.
The way this was interpreted is that OpenAI was drawing the same lines Anthropic did. AI as a technology already feels like it's crossed multiple boundaries that many people feel shouldn't be crossed, but the ultimate lines are things like mass surveillance of citizens and the use of this powerful technology for automated weapons. Anthropic said no, we are absolutely not giving in to those kinds of demands, and Sam Altman seemed to be in the space of "I respect that," which initially indicated to people that he would follow in Anthropic's footsteps on this specific issue. But on February 27th, 2026, it came out that Sam Altman and OpenAI had caved to the demands of the US government and ultimately decided to lend them their technology. Now, he tries to paint a more positive picture, saying that he also draws lines, and that the way the technology will be used by the US government will not allow those lines to be crossed. But the fact that Anthropic was shunned by the US government despite having worked with them for so long, and then replaced by OpenAI, only highlights that there was some kind of give on OpenAI's side that Anthropic didn't offer. It's clear that certain compromises were made around those lines in the sand, compromises that made the US government happy enough to work with OpenAI over Anthropic.

Here's how Sam Altman phrases it: "Tonight, we reached an agreement with the Department of War to deploy our models in their classified network. In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome." And Sam Altman specifically highlights how seriously he takes important safety principles, such as prohibitions on domestic mass surveillance as well as human responsibility for the use of force, including for autonomous weapon systems: "The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement."

But if that's the case, then why wouldn't the government just agree to the lines Anthropic drew in the sand? Why would they shun Anthropic to the degree that they have? If Sam Altman is insisting that the Department of War agrees with the principles of not using AI for mass domestic surveillance or for autonomous weapon systems, the only reason the US government would work with Sam Altman and OpenAI over Anthropic is that OpenAI made concessions, giving the Department of War and the US government a lot more freedom than Anthropic would have when it comes to how the AI technology is used.
So a lot of people are taking to admonishing Sam Altman and OpenAI. They were already not particularly popular, but now that's been taken to a whole new level. In fact, community notes are already saying things like: government officials have contradicted Sam's claim, saying OpenAI will allow the DoW, or Department of War, to use their models for "all lawful purposes." This could allow for uses Anthropic refused to engage in, namely mass surveillance tools and weapon systems with no human oversight.
Basically, if the US government comes up with a reason why it's lawful to use OpenAI's technology for mass domestic surveillance or for autonomous weapons, then they're allowed to do that. It's a loophole, a loophole that OpenAI was willing to grant them and Anthropic wasn't. And what this has ultimately done is tank OpenAI's popularity. Just look at the quote tweets to Sam Altman's post here, declaring that he's reached an agreement with the Department of War and that he values the principles of prohibiting mass surveillance and autonomous weapons with the use of AI.
Here's one of the most popular quote tweets on this matter: "Sam Altman is such an incredible backstabber, liar, and traitor. While your competitor is taking a heroic and principled stand, you swoop in to make your deal. Imagine working for this guy. Is there greater shame? This should lead to a mass exodus from OpenAI." And beyond that, you've got plenty of people talking about how they predicted that this would be the endgame of AI, that this would be its most egregious use: things like military applications, mass surveillance, and the development of weapons.
This is a technology that already felt incredibly dangerous in the way it's being used to replace workers rather than to give workers tools to do their work better, the way it's stealing art and replacing artists, and the way its output quality is inconsistent and unreliable enough that becoming fully dependent on AI might lead to a lot of headaches. But now we're seeing it on a level that will directly impact things like our privacy, and the state of the world itself, given how powerful weapons are becoming and how AI could make all of that that much more precarious. This just highlights how corporations think. Gaming companies will tell you that AI will be used to enhance and support artists, but we're seeing plenty of evidence of corporations doing the exact opposite: straight up trying to cheap out on human labor and replace it with AI, engaging in cost-cutting measures to output faster regardless of quality and artistry, just for the sake of profits. And now, with someone like Sam Altman, we're seeing just how easily such an individual is willing to sacrifice and forgo all principles and all manner of integrity.
The way he spoke of humans as if they drain energy, instead of speaking of humanity as the point of doing any of this, of bettering humanity as a whole for our survival and our collective collaboration toward a better future, already told me everything I need to know about him. And the way he quickly swooped in and took Anthropic's place while Anthropic was making a stand, because his pursuit of money and his ambitions far outweigh any semblance of morality, really highlights how you cannot trust anyone, especially at the corporate executive level, who tells you that AI will be used to empower humans, to make society better, to make products better, to make the people who work on those products more productive and give them tools that make their work easier and better. You just can't trust that anymore.

If you want to learn more about the way OpenAI caved to the Pentagon on AI surveillance, this article from The Verge is a good read. It basically highlights how weak Sam Altman's promise of prohibiting mass surveillance and the development of autonomous weapons through AI is when you really look at what this deal entails, and at the way it's stated that OpenAI's technology can be used for "all lawful purposes." Again, it's all about loopholes that let them say words that sound really positive and morally upstanding, while in reality there are countless ways this technology can ultimately be abused. Here's someone who summarized The Verge's article pretty aptly: basically, OpenAI agreed to follow laws that have allowed for mass surveillance in the past, while insisting it protects red lines that are incredibly weak. Basically, OpenAI is full of it, and they may well turn over everything you've ever typed into ChatGPT if the US government asks.

And keep in mind that with ChatGPT, people apply a level of candor, a level of honesty and openness, that allows for an extreme level of psychological profiling, because what people type into ChatGPT, the queries they ask it, informs a lot about that individual. The fact that they were doing targeted ads through ChatGPT was already shady enough. But now to see it go up a whole other level, where government entities can use this technology for mass surveillance and get to know you on an intimate level that would make you uncomfortable if you really knew what was going on? Yeah, it doesn't shock me that powerful tools like these are primed to be abused, due to a lack of regulations and a lack of moral foresight about all the egregious ways this technology can be used, technology that is being advertised as good for society.
But that only applies when the technology is actually used for good purposes, and right now it's ripe for abuse. And what this has meant for OpenAI in the eyes of the public is an actual mass boycott of OpenAI and its services.
Right here we have an X news headline saying "Claude app tops App Store after OpenAI's DoW backlash." And you can actually see screenshots of this. Once upon a time, ChatGPT was number one, and now you can see Claude very quickly catching up. Eventually, Claude did in fact catch up, as people decided to forgo ChatGPT for Claude by Anthropic, due to the fact that Anthropic actually drew lines in the sand and, despite threats from the US government, held to those standards. They held that moral line. And to really highlight what a big deal that is: before all this, Claude was at number 131 on the App Store in late January, and now it is at number one this weekend. Their integrity move ended up actually rewarding them, and they also took great advantage of the whole situation. Anthropic's response to that attention was to ship memory on the free plan, making the free tier stickier at the exact moment millions of new users are flooding in. And beyond that, instead of letting their designation as a supply chain risk crumble them, integrity ultimately ended up rewarding them: K. Perry posting her Claude Pro subscription, Reddit organizing mass ChatGPT cancellations, 700-plus employees at Google and OpenAI signing an open letter backing Anthropic's position. What was supposed to be a punishment from the government, Anthropic transformed into an opportunity, converting it into the largest consumer acquisition event in AI history, and they immediately shipped product to retain every new user walking through the door.

Now, I'm not trying to label Anthropic as the good guys. I think all AI companies right now, in the way they're racing to be dominant in this technological space, share the blame; Anthropic is just as guilty in many other areas when it comes to pursuing this technology without really thinking about the consequences, without regard for how all this negatively impacts the masses, the public, the economy, the environment, the artists, and so on. But at the very least, they're willing to draw some lines here and there, because they recognize that if AI falls into the wrong hands, it could be incredibly destructive and cataclysmic. Of course, there will be some companies who don't care about those things, who won't draw those lines as long as they get to make a profit and realize their ambitions regardless of the associated cost, whose ambitions outweigh any thought about the concept of humanity and working together so that we can thrive together and move toward a better future. But that's not at all on Sam Altman's mind, and it clearly shows.

So yeah, tons of people have decided to drop OpenAI, and Sam Altman, seeing the backlash and the negative PR, is adding further responses, trying to squash people's concerns. He hosted an AMA where he tried to sympathize with people and talked about the scary precedents being set by the Department of War blacklisting Anthropic, and any company that doesn't work with them and grant them full autonomy over these powerful technologies that could be abused. He insists that he holds the government responsible for these kinds of actions. And when asked what happens if the government tries to nationalize OpenAI or other AI efforts, he said: "I obviously don't know. I have thought about it. Of course, it has seemed to me for a long time it might be better if building AGI were a government project, but it doesn't seem super likely on the current trajectory. That said, I do think a close partnership between governments and the companies building this technology is super important." Even if he is being honest, what this highlights is that he hasn't really fully thought through the consequences of signing this kind of deal with the devil, where his incredibly powerful technology is given away to be used for "lawful purposes," with loopholes that allow the technology to be used however the US government or entities like the Department of War deem fit, including in ways the public would never approve of. But they have that power now, because Sam Altman handed it to them.

Sam Altman's attempts to assuage the fears of the public, or to paint himself as more morally upstanding, just haven't been panning out, because the morally upstanding thing to do would have been to not accept a deal that allows unfettered, and potentially really abusive and destructive, use of such a powerful technology; to take the stance that Anthropic did, but OpenAI didn't. And so, yeah, this is a company that essentially has a cloud looming over it in terms of negative PR and negative optics in the eyes of the public.

I surmise that the real reason OpenAI signed this deal with the government wasn't because they believe in a collaboration that could pave the road to a better future, or whatever. OpenAI is just desperate right now. They need to sign any and every deal that will bring them billions in order to weather the rate at which they're burning money.
They will basically take money from any source, because otherwise they will go under. This is a company that right now is built on a house of cards.
I mean, relationships are strained: the Stargate AI data centers for OpenAI are reportedly delayed by squabbles between partners, per a report from The Information that was covered by Tom's Hardware. The article says that sources say OpenAI, Oracle, and SoftBank disagreed on who would have ultimate control of the planned data centers.
Here's somebody else adding that basically we're looking at clashes over control, marathon negotiations fueled by 7-Eleven runs in Tokyo, financing pushback, and a quiet pullback from OpenAI building its own data centers, for now.
And all the while, OpenAI is going around trying to secure as much funding as possible. In their latest funding round, they were able to secure around a hundred billion dollars, which you would think would be enough to keep them secure for a long time to come. But AI as a technology is so freaking expensive to build, power, and maintain that this money will burn away really quickly. A hundred billion dollars is still nowhere near enough for OpenAI to be secure about its financial future. I mean, we got reports like this one from The Information, from February 20th, 2026, highlighting how OpenAI has boosted its revenue forecast but is still predicting $112 billion more in cash burn through 2030. Plenty of analysts are discussing how, based on everything they're seeing, there's not a single thing about OpenAI that makes it look financially sustainable for the long term. Here's an analysis that OpenAI could face bankruptcy within two years.
With paid subscribers remaining at only about 5% of total users, OpenAI faces a real risk due to billions in operating costs against limited returns and a continued reliance on funding and investments. And that's why they signed that deal with the US government regardless of their moral stances: the moral stance has to go out the window in order for OpenAI to survive, because they created a business model that is so unsustainable. So they'll rely on the US government, and on entities who seek to use their technology for destructive purposes, as long as they get their funding. Here's another analysis from George Noble, who's been in the investing space for a while and has made a name for himself on that front, talking about how Sam Altman, CEO of OpenAI, just convinced three of the world's smartest investors to fund his losses: $110 billion, but zero profit in sight. The numbers are broken down here, but basically it's looking incredibly bleak, with the whole post and analysis ending with "this can't end well."

And then, on top of the negative press OpenAI continues to receive, with the latest developments resulting in OpenAI being boycotted by the masses to the point where Anthropic is being put up on a pedestal over OpenAI, because at least they had the moral integrity to draw some lines and not cave to pressure from powerful entities like the government, plenty of people are talking about how there is a real backlash against AI, and it's winning, with the masses disrupting the building of AI data centers. Apparently, this is an issue that's uniting people across all kinds of spectrums of beliefs and ideals. Very different groups have found common ground, and a common enemy, in the way AI has been proliferating, in ways where people do not see the benefit. In fact, they see the opposite: they see its destructive capabilities, and the dehumanizing element AI brings to the table when corporations use this technology not to empower humans but to take advantage of them, to replace them in order to pursue further profits and drive an even more extensive class divide. The article is right here; you can check it out for yourself. There are some interesting discussions about all the ways the masses, the public, have disrupted artificial intelligence endeavors. Though of course, this is a technology with a lot of powerful backing that continues to march forward at a rapid pace, a pace that feels too rapid for what society as a whole can morally handle.

There are reports talking about how the contrast between the dot-com boom and the AI boom is very apparent. People actually did like the dot-com boom, because the internet ultimately did feel like a very beneficial invention. It's not perfect.
It's got its drawbacks, but ultimately it felt more beneficial than not. The AI boom, however? Not so much. It's nowhere near as beloved. The internet feels like something that was meant for all of us, whereas AI feels like a technology meant to be used against the masses for the betterment of a select few. And what this ultimately does is compromise this technology's ability to be mass-accepted and mass-adopted at a level AI companies will feel comfortable with, at a level that will allow them to finally make a profit instead of just burning money.

And then, beyond that, from a usefulness capacity, there were reports highlighting how thousands of CEOs admitted AI had no impact on employment or productivity. So all the promises about how AI will enhance workflows, make everything more efficient, empower people's capabilities, and so on? None of that is panning out, which means mass adoption is going to be that much more difficult, because the technology just isn't doing what it promised to do.

And then there's the fact that AI breaks all kinds of copyright laws, with OpenAI's own CEO Sam Altman straight up admitting that it's virtually impossible to develop advanced AI models like ChatGPT without some form of copyrighted content. We've already seen companies like Disney and Warner Brothers and many others start to sue AI companies, because the technology is being used to create videos and images from their characters in a way that is attracting a lot of attention. People are straight up making movies with properties that don't belong to them, because technologies like ChatGPT and all these AI models enable that, without seeking permission from the copyright holders, without seeking permission from the people whose work they're stealing.

And then, economically, AI has basically added zero to the US's economic growth, due to the fact that AI is being built off of components bought overseas. All those chips they need and are hoarding pretty much all come from overseas; that stuff's not manufactured in the US. So if anything, all these US companies pursuing AI are boosting other countries' economies and basically tanking the US's own, by creating this bubble and engaging in mass expenditures, with hopes of profit that may very well not pan out due to the rate at which they have to spend money to make this technology possible.

So yeah, when you look at just how poorly OpenAI is doing, from a public perception standpoint, from a financial standpoint, from a viability standpoint, from an infrastructure standpoint, it's suddenly not that difficult to figure out why OpenAI would sign a deal with the devil. It's because they're that desperate. They're so desperate that they're willing to forgo any semblance of morality, and any semblance of foresight into how that technology could be used for destructive or privacy-invading purposes. Sam Altman and OpenAI are just so desperate to survive that they'll make these kinds of deals and try to justify them any which way, and it's not working. Anyone who's willing to go this far to realize their ambitions, anyone who's willing to sacrifice everything to keep alive a company that is dying because of a business model of their own making, is someone you can never trust.
OpenAI's actions have already had a direct impact on the gaming landscape as a whole, but now it's becoming far bigger than that; it's truly becoming existential in terms of the questions we have to start asking ourselves about the capabilities of this technology. It started with negative impacts on computing components and electronics like gaming devices, but now it's expanded to a point where the picture is much bigger than that.
And gamers in particular are already looking forward to the downfall of OpenAI because of the adverse impacts it has had on the economy of gaming electronics, on the state of artistry in general, and on output quality. AI is just not a technology that is favored very much by the gaming community. And how little you can trust these executives to actually look out for us, to make all this about benefiting humanity and the mediums AI is being employed in, this complete inability to trust the executives who have the power to deploy this technology, is not particularly surprising. But it's still good to have that reaffirmation, so we can keep our eyes peeled for the words executives will use to justify the implementation, or rather the poor, abusive, and exploitative implementation, of AI technology, while having every intention to screw over the public, the masses, the artists, the workers, and everyone who isn't just an executive trying to make money.

And that, ladies and gentlemen, is my take on the latest surrounding OpenAI. I'd love to hear your thoughts and opinions on all this in the comments below. And to stay updated on all things gaming news, reviews, and discussions, stay tuned right here on YongYea. I'll see you guys next time.