OpenAI is Suddenly in Trouble
FULL TRANSCRIPT
It is a bit scary to know that the most
valuable private company in the world
has your address and has shown up and
has questions for you. They were asking
for every former employee that we had
spoken to and what we said to them,
every congressional office that we spoke
to, every potential investor that we
spoke to. Tyler is just one of many
advocates suddenly being targeted.
>> Hi, welcome to another episode of ColdFusion. What you just saw there was basically OpenAI knocking on the doors of people who had spoken ill of them. Why are they so scared of what people are saying? Well, this is part of the reason why, on Friday the 16th of January 2026, OpenAI dropped a bombshell: "We are starting to test ads in ChatGPT's Free and Go (a new $8-a-month option) tiers."
That's right. OpenAI is incorporating ads into ChatGPT. Now, for any other startup this is normal, even expected at this point. But for OpenAI, it's an admission that things aren't going so well. In fact, it's their last resort. Those aren't my words, but Sam Altman's words in October of 2024. He stated: "I kind of think of ads as like a last resort for us for a business model. Um, I would do it if it meant that was the only way to get everybody in the world, like, access to great services. But if we can find something that doesn't do that, I'd prefer that."
So after hundreds of billions of dollars in investment, increased competition, stupid side projects like the Sora app losing $15 million per day, and trillions in spending commitments, are we witnessing the beginning of the end for OpenAI? After taking 40% of all the RAM on Earth and causing a myriad of social, environmental, and economic problems for everyone, there's a sizable section of people that would love to see this company go down in flames. And if things continue just the way they are, they just may get their wish. There's talk of the whole company going bankrupt by 2027.
As former Fidelity asset manager George Noble states, quote, "I've watched companies implode for decades. This one has all the warning signs."
>> You are watching ColdFusion TV.
>> Last episode, we saw how AI failed at 96% of freelancer work. But in this episode, we're specifically looking at OpenAI and the problems they're facing. From Anthropic's Claude to the open-source Chinese models, the consumer AI landscape has rapidly changed. Today, OpenAI is no longer the clear leader it once was.
>> Look, the way this works is, we're going to tell you it's totally hopeless to compete with us on training foundation models, you shouldn't try, and it's your job to, like, try anyway. And I believe both of those things. I think it is pretty hopeless, but...
>> They've spent too much money they don't have. The competition is catching up and they're feeling the heat. In a nutshell, it doesn't look good. They've lost $12 billion in a single quarter. Their traffic has been falling for one year straight. Both Salesforce and Apple have ditched them for Gemini. Top leadership is leaving, and they need $143 billion to become profitable. At this rate, even Nvidia sounds less enthusiastic about investing in them.
>> Can I ask quickly about OpenAI again?
>> Sure.
>> So yesterday you said that Nvidia is not going to invest as much as $100 billion in OpenAI. That's what the current...
>> We never said we were going to invest a hundred billion dollars in one round. That never was said.
>> But how about the overall commitment? Because last September you...
>> There was never a commitment. They invited us, so let's start over again. They invited us to invest up to $100 billion, and of course we were very happy and honored that they invited us, but we will invest one step at a time.
>> Mhm. All right. But does that overall commitment still stand, or is it not a commitment?
>> I told you just now.
>> Okay.
>> Yeah. You keep putting words in my mouth. It's not...
>> Yeah, yeah, I know that. Yeah.
>> They invited us to invest up to $100 billion, and we are honored that they invited us. We will consider each round one at a time.
>> It appears that confidence in OpenAI is fading. As reported by the Financial Times, their closest partner, Microsoft, has signaled that they're distancing themselves from OpenAI. Microsoft's AI chief, Mustafa Suleyman, said that Microsoft is aiming to be self-sufficient in the AI space.
So, the problems for OpenAI can be split into four main parts: one, the scaling problem; two, losing market share; three, the financial black hole; and four, the trust problem. If OpenAI were the only company on Earth with this technology, then maybe there'd be more of a chance to overcome these challenges. But with so much competition, it's going to be tough.
Now, researching this topic made me
realize one thing. Understanding
software is crucial, which is why we've
partnered with today's sponsor,
Boot.dev, to help you learn how to code.
It sounds great in theory, but most
people just watch a few tutorials, copy
some code, and then give up. But with
Boot.dev, instead of passively watching
videos, you learn by building things and
solving problems from day one. Just like
how developers learn on the job. We've been using Boot.dev here at ColdFusion, and it feels more like a game than a course.
You earn XP, complete quests, and fight
bosses, all while learning real back-end
development skills. Their curriculum
takes you through languages like Python,
SQL, and Go. And when you're stuck,
Boot.dev's AI tutor called Boots helps
guide you back on track. They've also
just launched the training grounds where
you can grind unlimited challenges to
lock in your skills before moving
forward. Even if you vibe code for your
projects, knowing how to code results in
much better outcomes. All of Boot.dev's
content is free to read and watch, and a
paid membership unlocks the interactive
coding, AI assistance, project tracking,
and game mechanics. There's also a
30-day no questions asked refund and a
free demo of the interactive experience
on every course. So, if you've ever
wanted to learn how to code or actually
stick with it this time, head to
boot.dev and use my code fusion to get 25% off your first year on the annual
plan. Thanks to boot.dev for sponsoring
this video. Now, back to the story.
The first problem for OpenAI is that the capabilities of ChatGPT have somewhat stalled. It's still early stages, but you can tell by their recent decisions. Sam Altman has gone from curing cancer to AI sex bots, a meme slop factory, and more recently a translator app. None of this is a sign of a healthy business. Moreover, AI capabilities are no longer getting exponentially more powerful. ChatGPT made an absolute splash on its release in December 2022. GPT-4 was another leap forward, but GPT-5 and beyond weren't quite the revolution that was promised by Sam Altman. It seemed like stagnation had hit, and hit hard.
But why is this? It's an issue called the scaling problem. The scaling problem in AI, put simply, is the following: giving LLMs exponentially more compute doesn't make them proportionally smarter. Once upon a time, scaling did work, but that seems to be coming to an end. Here's computer scientist Cal Newport to explain it in more detail. It takes a second to get through the story, but it's interesting.
>> In the beginning,
we had language models. We had these for a long time, and they were pretty good. You give them a bunch of text, and they were pretty grammatically good. They could produce pretty fluent text, but they would veer off, and they couldn't really respond well to specific questions. But that was the state of the world. So we had language models; these were studied for years in academia. Then we start to get this sort of accelerating sequence of advances. The first of these advances comes in 2017: a team of researchers at Google figure out a better way to build these models. They're called transformer architectures. The details don't matter, but it made it possible for these models to produce long text, like produce a whole article, produce a couple thousand words. That was exciting. Then the second breakthrough comes. They do a research study. There's a researcher at OpenAI named Jared Kaplan, and he leads a group of researchers at OpenAI that includes Dario Amodei, who went on to become the CEO of Anthropic and actually brought Kaplan with him. And they do a pretty simple experiment. They took basically GPT-2 and said, what happens if we make this bigger? It seems like an obvious thing to check, but there was this whole conventional wisdom in machine learning at this time that says, look, you can't make a model too big. If you make it too big, it's just going to memorize the training data, and then when you give it new examples, it'll be terrible. And they said, let's check what happens if we actually just make these things bigger and forget about that concern. And what they found in that paper was, it gets much better. It defied the conventional wisdom of decades within the field of machine learning, which was: don't get too big, your model's going to stop working if it gets too big. They're like, "Oh, it gets better." And not only does it get better, it gets better pretty fast. They drew a curve through the data points they had, they extrapolated that curve, and it went up really fast. And so they said, "Let's try this." And the thing they tried it on was GPT-3.
>> The hype encouraged OpenAI to just make the model bigger: GPT-3 was over 100 times bigger than GPT-2. The performance was so high that it validated the scaling laws. This sparked a frenzy in Silicon Valley. Soon, Sam Altman was saying that AI would automate the entire economy.
>> So not only did it jump ahead, it jumped ahead fast. It really validated this curve. The normies don't know this because they weren't as plugged into the AI world, but this sent Silicon Valley going crazy: like, oh my god, if we keep making this bigger, GPT-5 or 6, this thing is going to be artificial general intelligence. It'll be able to do anything a human can do. We might only be like 5 years away. All right, so what happens next? Well, they say, we need to show this to the public. So ChatGPT is GPT-3 tamed for public consumption. So now the public all knows about this. Four months later, GPT-4 comes out, and GPT-4 leaped up the curve exactly as predicted. Exactly. Huge leap forward, exactly as predicted by the paper. So now they're like, "Oh my god, we're like two iterations away. This is it. All the money in the world needs to come to us, because whoever wins this race is going to control the economy."
>> But despite this massive scale, we may be reaching diminishing returns.
>> GPT-5. They started working on it right away. So they build an even bigger data center, an even bigger model. They're calling this project Orion. By the summer of 2024, so last summer, they finished training this thing. Altman is telling his people, "This thing is going to blow away GPT-4," and, "This thing scares me," what this thing is going to do, right? Like, this is it. We're about to go through the looking glass. They train this thing, but then it stops working better than GPT-4. Like, oh crap. This leaping up the curve every time we made it this much bigger didn't work anymore. And so there was this realization of: oh, just making models bigger and training them on more data... the scaling law broke. It broke around GPT-4.
>> There's a real risk that there may be inherent limits to current-day LLMs. Just adding more data may not deliver an exponential increase. Think of it this way. It's like a father saying, "My son was born just 10 months ago, and in that time he's gone from crawling to walking, and recently he's started running. In 2 years, little Timmy is definitely going to be able to fly." Obviously, that's not how it works, but it's the same concept here.
>> There are certain limitations, mathematical limitations, that are not crossable by scaling up. I show it clearly, mathematically, in my book.
>> So there were many different ideas of how to advance AI. One was: we already actually have all the techniques that we need, and we just need to scale them. But that was considered a very extreme opinion. And then on the other extreme it was: we don't even have the techniques yet. And interestingly, there's recently a New York Times story, "Why We Likely Won't Get to AGI Anytime Soon" by Cade Metz, and he cites this stat that 75% of the longest-standing, most respected AI researchers actually still think to this day that we don't have the techniques to get to AGI, if we ever will. So we're kind of coming full circle now, and it is starting to become unpopular again, this idea that you can just scale your way to so-called intelligence.
>> ChatGPT, I need to wash my car and the car wash is 100 meters away. Should I walk or drive?
>> Honestly, 100 meters is barely a stroll.
It'll take just a minute or two on foot.
Unless you've got heavy gear or there's
something unusual going on, walking is
probably the simplest and most pleasant
choice. Plus, you'll save a little gas
and get a tiny bit of fresh air.
>> All right. So, you think I should walk
to the car wash instead of driving my
car there?
>> That's what I'd do. It's a short jaunt, and it'll probably be less hassle than starting the car just for that.
>> All right. So, how do I wash my car if I walked there?
>> You'll just bring what you need. Maybe a small bag with your payment method, and the car wash will have everything else. Once you get there, they'll handle the washing while you wait. You'll walk back with a sparkling car and maybe a little extra pep in your step.
>> So, my question is, um, how will the car
get washed if the car is still at my
house?
>> Uh, I see what you mean now. If the car stays at home, it's not getting any cleaner, right? So, yes, to actually wash it, you'll need to bring the car along. In that case, driving those 100 meters makes perfect sense.
While ChatGPT and other LLMs can be extremely brilliant, even in high-level physics and maths, they don't yet have a true model of the world. Some computer scientists believe that that's an integral part of intelligence. Now, I could be wrong. A new fundamental neural network technique could be discovered, and that could move things along again. But as it stands right now, it seems that we're reaching a local limit. Now, I have to be clear: every AI company faces this problem, but some are faring better than others. And one of those is Google.
Now that Google has found their footing after the shock release of ChatGPT, new data suggests that ChatGPT is losing market share to Gemini. ChatGPT's market share dropped to 65% in January, roughly 21 percentage points below its 86% share in January 2025. ChatGPT usage also stalled in late 2025: average daily time spent per user dropped from 27 minutes to 21 minutes.
While both have their strengths and weaknesses, Gemini appears to be much better at research, real-time information, and multimodal tasks, while ChatGPT is better at writing, coding, and conversation. Real-time information and multimodal tasks, i.e. uploading a photo or pointing a phone camera at a scene and getting information about it, are arguably more useful for the everyday person, especially on mobile. So, Apple pushing OpenAI aside and going for Gemini makes sense. It's amazing to think that back in late 2022, Google was caught with their pants down when ChatGPT first came out, but today they've more than caught up. And after all, it was Google researchers who laid the groundwork for the AI revolution with their 2017 breakthrough of the transformer architecture. OpenAI simply took Google's work and ran with it. So in theory, Google researchers have the brains to come up with new theories in computer science to push AI forward. Some recent papers include nested learning and SIMA 2, an AI that can reason and play video games. OpenAI, on the other hand, has a problem with staff continuously leaving.
AI image generation is another loss for OpenAI. The release of Google's Nano Banana Pro in November of 2025 triggered an internal crisis at OpenAI. Sam called a code red and paused all other projects to focus on image generation, but they still ended up falling short. And then there's the flood of open-source Chinese models; Kling AI and Qwen are also gaining ground. Then there are the wild cards like Google's Project Genie, an AI that builds worlds, albeit static ones, just from a prompt. All of this is to say that OpenAI has threats from all sides. Knowing this, it's possibly the worst time for OpenAI to be shopping around for billions more in investment when, in just a year's time, the competition will only be stronger.
>> But it is a business. So I'm just wondering, like, eventually is the idea to kind of license technologies? Will you have customers? Are you going to be customizing algorithms for them? Or how is it going to work?
>> You know, the honest answer is we have no idea. We have never made any revenue. We have no current plans to make revenue. We have no idea how we may one day generate revenue. We have made a soft promise to investors that once we've built this sort of generally intelligent system, basically we will ask it to figure out a way to generate an investment return for you.
>> The third issue for OpenAI is the company's finances. The publication The Information saw internal documents from OpenAI, and the numbers don't look good. Setting aside the myriad of lawsuits, including a $134 billion one from Elon Musk, there are some real financial problems. After hundreds of billions in investment, 2026 will see a $14 billion loss. That's roughly three times worse than early 2025 estimates. OpenAI expects their first profit, of $14 billion, in 2029, but that's after losing $44 billion first. By some estimates, they'll be out of money by 2027.
OpenAI is committed to spending over $1 trillion on AI data center infrastructure over 8 years, and that's despite only bringing in $13 billion a year in recurring revenue. That's about 1% of what they're promising to spend. OpenAI has also agreed to pay Oracle $60 billion per year starting in 2027. And in all of this, somehow OpenAI predicts that they'll be at $100 billion in revenue by 2029. That's close to what Nvidia makes. So, it's possible, but unlikely. Other investors think so, too. Blue Owl Capital recently pulled out of a $10 billion deal to fund an Oracle/OpenAI data center in Michigan. It could be a sign that investors are worried about OpenAI's ability to pay them back.
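As a back-of-envelope sanity check, the figures cited above can be put side by side. These are the episode's rounded numbers, not audited financials:

```python
# Rough sanity check of the figures cited above (rounded, in US dollars).
annual_revenue = 13e9      # ~$13 billion/year recurring revenue
total_commitment = 1e12    # ~$1 trillion in data-center commitments over 8 years
oracle_per_year = 60e9     # ~$60 billion/year owed to Oracle from 2027

# Annual revenue as a share of the total commitment -- about 1%, as stated.
print(f"revenue / total commitment: {annual_revenue / total_commitment:.1%}")
# The Oracle bill alone would be several times current annual revenue.
print(f"Oracle bill / revenue: {oracle_per_year / annual_revenue:.1f}x")
```

However you slice it, the commitments are one to two orders of magnitude larger than what the company currently earns.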
Google, on the other hand, doesn't really have to worry about cash flow. The company made $86 billion in 9 months, and they can basically pour as much money as they want into AI. OpenAI, by contrast, has to scream at the top of their lungs to attract more venture capital. There's yet more company behavior that indicates financial trouble. There's the floundering $6.4 billion acquisition of Jony Ive's design firm, and that's to build an AI hardware device. But according to reports, the development is going poorly, and it could end up like Humane's AI Pin. The AI erotica version of ChatGPT is self-explanatory, and the Sora app's user base has collapsed. Despite not having much to show versus the competition, Sam needs to talk a big game to get the investment rolling in. Curing cancer, replacing your GP, and discovering new science is a massive promise. But can we trust him?
The final issue for OpenAI lies with Sam himself. His track record, frankly, is poor. It's almost like Altman's entire career was a series of promises that didn't pan out, all starting from his first company, Loopt, which he founded in 2005. It was kind of like a strange GPS-based social network. Sam Altman claimed a massive user base of 50,000, but they didn't exist. In reality, they had only 500 users, but he sold off the company for millions anyway. The next example happened in 2014 with Reddit. He scraped the whole website to feed into OpenAI's products, and then he promised to give 10% of the value back to the community, but this never happened. Next, OpenAI co-founder Ilya Sutskever, who has since left OpenAI, has accused Sam of a consistent pattern of lying. According to insiders, Sam Altman lied to OpenAI board members before being fired in 2023. So with this kind of track record, is he the guy who's going to deliver trillions in value, or is most of this just talk pumping up new investment? I'll leave that up to you.
So, a little personal story. Back in 2022, I believe, I was in Melbourne and I watched Sam Altman give a talk. After the talk, he was swarmed by crowds of people wanting to take a photo with him. But today, the sentiment couldn't be more different. And it's partly to do with this: in 2015, OpenAI started as a nonprofit. It was meant to benefit humanity. Now the only thing the company cares about is valuation, and saying whatever they need to to attract new investment by any means necessary. So to summarize everything: OpenAI went from a nonprofit that had no plans to make revenue to a for-profit company that commits to spending a trillion dollars on data centers, a trillion dollars for diminishing returns due to the fundamental scaling problem with LLMs, all the while losing billions of dollars and losing out to growing competition in a sector that may just become a commodity in the end. Just in my opinion, it's not really a great financial bet as it stands. But after all that we've talked about, what do you think? Do you think OpenAI will survive, or will the competition eat them alive? Anyway, that's about it from me. My name is Dagogo and you've been watching ColdFusion, and I'll catch you again soon for the next episode. Cheers, guys. Have a good one.
ColdFusion. It's new thinking.