Sam Altman Says "Visit Me In Jail"... Insane OpenAI Damage Control AMA...
FULL TRANSCRIPT
The cracks are starting to show with
OpenAI and Sam Altman. This is bonkers.
Now, if you believe these numbers,
ChatGPT has had over 700,000
people unsubscribe from its paid
services. Now, this is resulting in
Anthropic getting a ton of positive
press simply because they are the ones
that are, well, not willing to do
mass surveillance on US citizens or
to let their AI run fully autonomous
military weapon
systems, which of course OpenAI seems
to have no problem with. I've covered
this in a previous video, so I'm not
going to go over that in detail in this
one. But what I am going to say is this.
Sam Altman is [ __ ] the bed. And uh
it's quite evident from what he's doing
now. Look at this. It is crazy.
This is Robert Bay, uh, a product manager
at Anthropic, and look at the top
downloads on the App Store. Claude is
number one. ChatGPT has been knocked
off that spot, obviously by their
behavior over the last couple of days, uh,
when it comes to their, um, duplicitous,
let's say, uh, CEO and all of that.
Basically a mask-off moment, uh, for the
company. And also Dick's Sporting Goods.
So I looked into this. Basically, the
reason why that's there is they've got
an offer where I think if you do
activities and track them with their
app, you get discount codes to use in
their store. So, that's why they're
number three. Um, but the big story here
is Claude for the first time ever has
knocked ChatGPT off the top spot.
So, why? That's the question. Well, we
know why. It is because Anthropic stood
up for their values and said, "Hey, we
don't feel very comfortable using our AI
systems for fully autonomous weapons and
we don't like the idea of using it for
mass surveillance." That same day, Sam
Altman says, "Oh, yeah, I agree with
Anthropic. That's great." And then that
very evening, he then decides, "No,
actually, we'll just take over the
Anthropic contract." As obviously,
Anthropic was deemed a supply chain risk
by the US government and removed from US
systems. Although, uh, it's going to take
6 months to remove them and there was a
deadline if you remember the Friday
deadline. Now, what's interesting about
that, and we'll get on to it in this
video, is obviously what happened after
Friday, um, was, well, the US-Israeli
strikes on Iran, which, as I'm going to
show you, used Claude, they used their
AI. So, they wanted the AI systems to be
in place so they could use them for that, um,
mission planning and all that kind of
stuff they were doing. Not using
them for autonomous weapons, not yet,
but using them at a mission intelligence
level. So, this prompts Sam Altman to do
this. Now, this was uh yesterday or
well, it's today if you're watching this
video today, but it was early hours of
the morning for me. You can see it was
12:13 a.m. Um, in some ways, it's
bonkers. We live in a world where the
CEO of, you know, the biggest private
company there is. Maybe you could say
Starling could be the biggest private
company in terms of value, but it's not
really. Let's just roll with Open Hour.
It's one of the biggest companies,
right? um reportedly worth near 800
billion at current valuations. Their CEO
is just goes on to Twitter and starts
talking like this. He's worried, right?
He's absolutely worried. He is losing
market share to Anthropic. They are
losing market share to Google. They're
losing market share all over the place
and everything they seem to do. When I
say market share, I mean consumer market
share. Clearly, they're not losing like
government market share. Uh, and they're
trying to go into, uh, the corporate arena
as well to take on Anthropic. But look
at this. He says, "I'd like to answer
questions about our work with the
Department of War." You know, I cannot
stop reading that as Dawn of War, cuz
DoW, that is Dawn of War, isn't it?
Dawn of War. By the way, the remaster of
Dawn of War is really good. And Dawn of
War 4 looks sick. I need that. Uh,
anyway, Dawn of War 2 is a good game as
well. Dawn of War 3 was trash, but 4
looks pretty good. Anyway, he says, um,
"Ask me anything." So it's like, okay,
let's take a look at what he says.
So we'll look at the question and we'll
look at the answer. If the government
comes back with a memo saying that in
their view mass domestic surveillance is
legal, do you accept that? Do you do it
until the courts bar it or do you delay
until the courts approve it? Second,
would mass domestic surveillance be a
lawful use right now? That's a good
question. And he says we would not do
that because it violates the
constitution. Also, I cannot overstate
how much the department of war has been
extremely aligned on this point. This is
objectively not accurate because this is
what Anthropic was saying and they threw
Anthropic out for saying the same thing.
So this just cannot be accurate.
However, maybe this is the question you
are really asking. What would we do if
there were a constitutional amendment
that made it legal? Maybe I would quit
my job. I mean, I can't see that
happening in any world, although a lot of
people probably would like to see it.
But anyway: I very deeply believe in the
democratic process and that our elected
leaders have the power and that we all
have to uphold the constitution. But
isn't he agreeing to have his systems
used in mass surveillance, which is
against that? See, this is what this guy
is: all over the place. I'm
terrified of a world where AI companies
act like they have more power than the
government. I don't like... So, here's the
thing. I'm not entirely sure he is. I
think he wants to be the person with
more power. See, you've got to remember
what has happened here. There was a gap.
Anthropic has been removed from
government systems. Immediately, the same
day, this guy enters. He goes
straight in. It doesn't matter what the
ethical impact of that might be, what
the constitutional impact of that might
be, what the legal impact might be. He
just doesn't care. He's like, "Whatever,
get us in." It's the ruthless behavior of
someone who wants relentless growth,
which is typically what these CEOs do.
Anyway, I would also be terrified of a
world where our government decides mass
domestic surveillance was okay. I don't
know how I'd come to work every day if
that were the state of the country or
the constitution. But he is building
tools that directly enable this. That's
what's bonkers. How long has this
conversation with the department of war
been going for? What was the reason for
announcing so close to the deadline they
gave anthropic? Again, good question.
Interesting that he answered this. For a
long time, we were planning, uh, to do
non-classified work only. We thought the
DoW clearly needed an AI partner, and
doing classified work is clearly much
more complex. We have said no to
previous deals in classified settings
that Anthropic took. We started talking
with the Department of War many months
ago about our non-classified work. This
week things shifted into high gear on
the classified side. We found the DoW
(it doesn't sound very good to say
"DoW," but I'm going to keep saying it
like that instead of Department of War)
to be flexible on what we needed, and we
wanted to support them in their very
important mission. The reason for
rushing is an attempt to de-escalate the
situation. It isn't. It's just an attempt to
capture market. That's all it is. It's
crazy. I mean, if this guy had any
bollocks, he would just be like, I
stand with the comments of Anthropic.
Now, be fully aware, I am not saying
Anthropic are, like, great, right? Like I
said, they've been used in the
strikes on Iran, and they've been inside
the military for a while. So, these
systems are being used, whether you like
it or not, for nefarious means, right?
So, for Dario to come out and say,
"Look, we've got a line, and they want us
to cross it, and we're not going to do
it," and the very same day OpenAI is like,
"Yeah, we'll cross that line," that's
[ __ ]. Like, if this guy Sam Altman had
come out and said, no, I believe in what
Anthropic is saying, then that would
leave the door open to, like... I'm
sure xAI would do it, but who gives a
[ __ ] about them. You know what I mean?
So this is like a complete loss of
values. And again this is why they're
suffering with people pulling back and
stopping using their service because
it's showing that they are untrustworthy
in a massive, like, really dangerous
way. Anyway: I think the current path
things are on is dangerous for
Anthropic, for healthy competition, and
for the US. We negotiated to make sure
similar terms would be offered to all
other AI labs. I know what it's like to
feel backed into a corner, and I think
it's worth some empathy to the DoW. They
are a very dedicated group of people, as
I mentioned, on an extremely important
mission. I cannot
imagine doing their work. Our industry
tells them the technology we're building
is going to be the high order bit in
geopolitical conflict. China is rushing
ahead. You are very behind and then we
say but we won't help you and we think
you are kind of evil.
I don't think I'd react great in that
situation. I do not believe unelected
leaders of private companies should have
as much power as our democratically
elected government. But I do think we
need to help them. Next question: did
OpenAI come out and say they do not
think their main competitor, Anthropic,
should be labeled a supply chain risk?
Was it even stated that your position on
this was made clear to the DoW? From the outside,
it feels like some political chess given
this was said after your deal was
confirmed with the DoW. And then he says:
enforcing the, um, supply chain risk
designation on Anthropic would be very
bad for our industry and our country,
and obviously their company. I mean, but
it's being forced on them. I don't think
the legal, uh, documents have been
submitted to them, but the process is
beginning. We said to the DoW before and
after that part of the reason we were
willing to do this quickly was in the
hopes of de-escalation.
I mean, it just isn't though. It's
just to get in. It isn't. I feel
competitive with Anthropic, for sure.
But successfully building safe
superintelligence and widely sharing
the benefits is way more important than
any company competition. I believe they
would do something to try to help us in
the face of great injustice if they
could. We should all care very much
about the precedent.
I saw in some other tweet that I must
not be willing to criticize the DoW. It
said something about sucking their dick
too hard to be able to say anything
critical, but I assume this was the
intent, so to say it very clearly: I
think this is a very bad decision from
the DoW and I hope they reverse it. If
we take heat for strongly criticizing
it, then so be it. Now, this one is
like, hang it in the Louvre. Store this
message and get ready, and then hang it
in the Louvre when it proves to be the case. So,
he says... well, he gets asked: what
would cause OpenAI to walk away
from a government partnership? Nothing.
They're never going to walk away from
one, right? Anyway: is there a clearly
defined boundary or red line you won't
cross? And he says... well, let me just
address why I say that. The reason
they will never walk away from this is
because of the way these AI systems
work. Look at what's going on with
Claude at the moment: it is embedded in
the government systems, and it's very hard to get
that out and replace it with a different
model the longer you are in these
systems the more difficult it becomes to
remove you so even if models get better
and maybe can do slightly different.
It's going to be hard to remove the
embedded model. So it is in the interest
of an AI lab to get embedded and you can
say this is what anthropic were doing.
They did this but then when the crunch
came and suddenly you know they were
asked we're just going to remove the
guardrails and they're like no they
stood up for it then they got removed.
Now, OpenAI clearly want to be in the
government systems, because the longer
they're in them, the harder it is to get
rid of them. And you know, think 10
years down the line: they will be the
de facto provider of the AI services for
whatever the government are using them
for, and not just the US government,
right? This will, um, like, precipitate out
across all major Western governments as
well. Um, unless there's a political
backlash, which there may be, especially
in European governments. Um, but then he
says this, and this is what I'm saying,
hang this up and get ready. If we were
asked to do something unconstitutional
or illegal, we will walk away. Please
come visit me in jail if necessary.
Which of Open AI's core principles was
the most difficult to reconcile with the
DoW's requirement during your internal
debates this week? Thinking through
non-domestic surveillance: I have
accepted that the US military is going
to do some amount of surveillance on
foreigners, and I know foreign
governments try to do it to us, but I
still don't like it. I think it is very
important that society thinks through
the consequences of this. Perhaps the
single principle I care most about for
AI is that it is democratized and I can
see surveillance making that worse. On
the other hand, I also respect the
democratic process. I don't think this
is up to me to decide. You see, it's not
up to him to decide, but he is the
leader of the company that's willing to
provide the AI tools that will enable
this. That is what's crazy. Like I said,
it's bonkers, right? This guy says it's
not up to me, it's nothing to do with
me, while also providing the
means to do the thing. You know, it's
crazy. He's got no spine. So, this is
kind of funny. So, a user just posts
a screenshot of the App Store: you happy
now? And he just says, "No, also update
your apps."
So, this is a direct question. How do
you go from a tool for the betterment of
the human race to let's work with the
Department of War?
I value my liberty and safety and yours.
I believe that strong democracy and a
strong US in particular is a very good
thing for the world. The 16-year-old me
thought every country should just
abolish their defense department at the
same time. I wish he were right, but I
now think the world is a much more
fragile place. So then he's like, I want
to recap this. This is the most insane
attempt at damage control, and I don't
think it has done anything for damage
control. Um, it just goes to show you
what the hell is going on in OpenAI. Um,
but he says this to sum up the AMA:
There is more open debate than I thought
there would be at least in this part of
Twitter about whether we prefer a
democratically elected government or
unelected private companies to have more
power.
I don't think that's accurate. From what
I've seen, my take on that is that it's
not about that. It's about the
companies willing to provide their
systems, which they know are going to be
used for extremely nefarious, you know,
legally ambiguous, loophole applications.
He is happy with that, and other AI labs
are not; well, Anthropic specifically.
I guess this is something people
disagree on, but this seems like an
important area for more discussion.
That's just a non-comment.
I think, uh, there is a question behind
a lot of the questions, but I haven't
seen quite articulated: what happens if
the government tries to nationalize
OpenAI or other AI efforts? Obviously I
don't know. I have thought about it, of
course. It has seemed to me for a long
time that it might be better if building
AGI were a government project, but it
doesn't seem super likely on the current
trajectory. That said, I do think a
close partnership between governments
and the companies building the
technology is super important. This
might be him saying: please, just
nationalize us, because we've got no
money. Please, please.
I don't think it is though. People take
their safety in the national security
sense more for granted than I realized,
which I think is a good thing on
balance, but I don't think it shows enough
respect to the tremendous work it takes
for that to happen. I don't know.
Again, I'm not entirely sure what he's
saying there, because people's issue is
they don't want to be surveilled. And
then other people are worried about
autonomous weapon systems going out of
control, but mainly, at the moment, it's
that they don't want to be surveilled,
because, like I've said, with AI systems
the data is kind of already there, but
the AI systems can collect that data and
then build profiles much faster than any
other system could before. So you can get
detailed, you know, profiles on people,
and if you're a government, um, you've
obviously got tons of information on
your population. You can do all kinds of
things. You know, I dread to think
what's going on in China at the moment,
because the amount of detail they've got
on all of their, um, civilians is
crazy. And this, again, this is not
like me being naive. Obviously, you
know, a lot of the big American tech
companies have got tons of information
on all of us. You know, I'm not in
America, but Google knows everything
about me. You know what I mean? So, this
stuff goes on everywhere. So, this is
crazy. Now, um,
US military reportedly used Claude in
Iran strikes despite Trump's ban. So,
Trump calls Anthropic a radical left AI
company run by people who have no idea
what the real world is about, and then
uses their applications to bomb Iran. I
mean, it's crazy. The US military
reportedly used Claude, Anthropic's AI
model, to inform its attack on Iran,
despite Donald Trump's decision,
announced hours earlier, to sever all
ties with the company and its artificial
intelligence tools. The use of Claude
during a massive joint US-Israeli
bombardment of Iran that began on
Saturday was reported by the Wall Street
Journal and Axios. It underlines the
complexity of the US military
withdrawing uh powerful AI tools from
its missions when the technology is
already intricately embedded in
operations. According to the journal, US
military command used the tools for
intelligence purposes as well as to help
select targets and carry out battlefield
simulations. On Friday, just hours
before the Iran attack began, Trump
ordered all federal agencies to stop
using Claude immediately. He denounced
Anthropic on Truth Social as a radical
left AI company run by people who have
no idea what the real world is about. Uh,
the flaming row was triggered by the use
of Claude by the US military
in its raid to capture the
president of Venezuela, Nicolás Maduro,
in January. Anthropic objected,
pointing out that its terms of use do not
allow Claude to be applied for violent
ends, to develop weapons, or for
surveillance. Since then, relations
between Trump and the Pentagon and the
AI company have steadily worsened. In a
lengthy post on X on Friday, the defense
secretary Pete Hegseth accused Anthropic
of arrogance and betrayal, adding that
America's warfighters will never be
held hostage by the ideological whims of
big tech. Hegseth demanded full and
unrestricted access to Anthropic's AI
models for every lawful purpose. But the
defense secretary also gave a nod to the
difficulty of rapidly detaching military
systems from the AI tool given how
widely used they have become. He said
that Anthropic will continue to provide
services for a period of no more than
six months to allow for a seamless
transition to a better and more
patriotic service, which is code for:
oh [ __ ], we need them and we're going
to use them for now, so we're just going
to tell them, bollocks, get rid of them.
I mean, I've got to be honest, I think
this is just a case of them not being
able to be told no. Again, I'm not going
to get political here, but the way I've
read this Trump administration, if it's
told no, it just goes crazy. It's like,
you can't tell me no, you're gone. And
that's just how they act, you know. And
it was a valid thing that Anthropic
said: "No, we don't want our systems
used for this." And they've said, "Hang
on, you've said no. You're gone." And
then round the corner is, uh, Sam
Altman: yeah, I'll come in. Apparently,
I've read six articles on the Guardian
this year.
Anyway, since the break with
Anthropic, the rival company OpenAI has
stepped into the breach. Sam Altman,
OpenAI's CEO, has said he reached an
agreement with the Pentagon for the use
on its classified network of the
company's tools, which include ChatGPT. Okay, so
that's the situation we find ourselves
in at the moment. It is a very, uh, murky
situation. It's a worrying situation.
You know, we've got this development of
AI being used very clearly in, you know,
war scenarios. It's going to be used in
mass surveillance. There are lots of
applications of AI which, you know... look,
I'm just going to say this. What I want
to say is, I come at AI with a very, like,
utopian approach: let's cure all disease,
let's, you know, cure things like famine,
let's solve, um, the energy crisis, right?
I think that's what AI, to me, should be
used for. You know, or even cool stuff,
like let's work out how to communicate
with a dog. You know what I mean? We can
communicate with dogs now, but you get
what I'm saying, right? Or what are birds
saying when they're talking to each
other? Let's decipher an animal language.
How do whales speak? You know, this kind
of stuff. The problem is
what we're seeing, and this is why it
pisses me off, is it's being used for
generative creative purposes. It's
creating slop: slop music, slop videos,
slop everything, slop text, slop
articles. Then it's being used for
autonomous weapons. It's being used for
surveillance. It's being used for battle
simulations. It's being used to select
and designate targets. I dread to
think what AI has been involved in where
it's gone wrong, cuz it will have gone
wrong and they will have covered it
up. And to me, it's very, very
frightening. And what worries me is you
can see these unscrupulous leaders of
some of these AI companies will just do
anything to get their AI systems
used, to get them embedded. They almost
don't care what they're being used for.
Some of them do, in the case of
Anthropic, but again, like I said, they
were inside the US military anyway. Um,
so yeah, this is
a complex situation, and the fact of the
matter is, right now, OpenAI does seem to
be suffering. In my previous video, I
did say I didn't think this was going to
do much to OpenAI, but the whole quit-GPT
movement, which seems to be happening off
the back of their DoW, uh, contract,
does seem to be having an effect, because
clearly Claude is the top app now on the
App Store. It's the top AI app, and just
the top app being downloaded. And
OpenAI is losing enough traction, and
they're getting enough bad press, that
Sam Altman's having to come out to do
this crazy AMA on Twitter, which I don't
think has really helped with anything,
because he's not really answered
anything there. At least, that's the
way I've read it. All right guys, thank
you for watching today's video. I've
been inside. So if you enjoyed this then
do subscribe to the channel as ever.
I'll keep you up to date on AI related
news and uh gaming news and all of that
stuff. And I'm actually going back to
play Marathon now because Marathon is a
weird game. It's a game which just gets
better the more you play it. I think the
aesthetic carries that game quite a lot.
Anyway, I'm out of here, guys. Thank you
for watching and I'll catch you on the
next video.