I don’t really use libraries anymore
FULL TRANSCRIPT
AI is changing software development fast
and as good as some of those changes
are, there are some that are bad and
interesting and worth talking about. One
of those changes is the role of the
library. I've been thinking about this a
lot because I build a lot of things and
I build them using a lot of different
libraries. I've even been meme'd on it
consistently for just npm installing
away my problems. And to be fair, that
is a real thing that we've seen in
ecosystems like npm. The fact that you
can just npm install something as simple
as a way to pad text on the left is a
bit weird. On one hand, it's cool that
these things that are reusable and
common can be shared in such a
convenient way. But on the other, it
definitely made us a bit lazier because
we could just grab a solution that we
barely understand and plug it into our
code. Kind of like we're doing with AI.
Weird, huh? I wonder how these things
will go together. Well, I'm not just
wondering. I'm living it myself. I've
been changing a lot of things in the
projects that I work in, including
things like removing libraries that no
longer really make sense or cause us
problems. And I'm finding that more
often than not, depending on the size of
the library, it's actually more
convenient to just rip it out and
rewrite it using prompts than it is to
try and massage around the library to
try and patch it or work around its
weird traits. In this case, there was a
library called Takumi that we used for
rendering React components on the server
to generate PNGs for the T3 Chat Wrapped
feature. And for some reason, we could
not get it to bundle in production at
all. I kept fighting it for like 15
minutes, got annoyed, and just vibe-code
rewrote the whole thing to be client-side
JavaScript instead. And I'm far from the
only person doing this. antirez, the
creator of Redis, yes, that Redis, is
starting to gut C++ libraries in favor
of rewritten, vibe-coded, pure C
implementations. This PR removes 4,000
lines of code and only adds 400. And as
he says in the description, this code
was written by Claude Code using Opus 4.5
and tested carefully, both hand-tested
and tested against the original
implementation in a synthetic way. Code
review was performed independently by
Codex GPT-5.2. Very interesting to see.
And I have a feeling this trend is not
just going to continue, but it's going
to fundamentally change the way that our
projects look and, most importantly, the
way that our package.json files look. It
seems like we're quickly entering a
world where these libraries are no
longer anywhere near as useful. But you
know what is useful? Today's sponsor.
I'm a big advocate for not wasting my
teammates' time, which is why I think
it's really important to use a good AI
tool to help you with code review. Even
if you're not using AI to code, it is a
really helpful thing to have an AI check
your work. Which is why Code Rabbit has
been making my team move so much faster.
We've had it on for about a year now,
and my team won't let me turn it off or
move to anything else. They just really,
really like it: for the quality of the
reviews, for the types of things that it
catches, for how tunable it is just by
telling it, "Hey, I don't like that
thing." But honestly, I'm getting more
hyped about Code Rabbit because of all
the other cool stuff you can do with it.
Recently, I've been loving their CLI.
Yes, a vibe coding reviewing CLI. Think
Claude Code, but for telling you what
you did wrong. Sounds crazy until you
combine it with Claude Code. Yes,
really. All these agentic coding tools
can use Bash. So if you give them a tool
to review their code and tell it to use
that, the review tool can output plain text
that the coding agent can then use to fix things. And this
does actually increase the quality of
the output. I can't tell you how many
times I was running into the same loop
before where I would make an agent, make
changes, put up the pull request, wait
for a review from an AI code reviewer,
and then have to copy paste the feedback
back into my agent. This lets you trim
that out entirely and just get better
code out as soon as you ship. It also
integrates into your IDE: VS Code,
Windsurf, Cursor, and more. It's kind of
crazy to make changes on your computer
and just have a little thing pop up
saying, "Hey, are you sure about that?
That doesn't seem safe." It saves me so
much time and it's prevented many actual
bugs and outages from shipping on T3
chat. Try it free today at
soy.link/coderabbit.
This is going to be an interesting one
because I found that there are a couple
different archetypes of people that have
opinions on this. Like think of the type
of person that makes fun of JavaScript
developers all of the time for
installing things like is-odd or
left-pad. The people who are against
these packages have a real point. It's
outsourcing some level of competency.
Anyone should be able to write the
three-line function that pads a string
on the left. And is-odd is literally one
line of code. Like, we're better than
this. That said, there is value in
libraries, at least at a slightly bigger
size than these ones.
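For reference, here's roughly what those two packages boil down to, plus the string-padding method that's been built into JavaScript since ES2017. This is a sketch of the idea, not the packages' exact source:

```javascript
// Roughly what is-odd does (minus its input validation):
const isOdd = (n) => Math.abs(n) % 2 === 1;

// Roughly what left-pad does, in three lines:
function padLeft(str, len, ch = ' ') {
  str = String(str);
  while (str.length < len) str = ch + str;
  return str;
}

// And the built-in that has existed since ES2017:
console.log(isOdd(7));             // true
console.log(padLeft('5', 3, '0')); // '005'
console.log('5'.padStart(3, '0')); // '005'
```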
I'm tired of the meme here. And the
arguments against libraries like this
all make sense. Outsourcing your
dependencies this way is a risk because
now you have to worry about if the
dependency changes. Maybe somebody takes
over the package and is malicious and
publishes a bad change. Maybe there's a
major version that breaks the way you're
using it. Maybe you have all of these
things installed and now this huge
dependency chain is making it way harder
to know what is or isn't safe in your
app. There's a lot of things like this.
For the most part, the argument against
these libraries is that people who don't
really know what they're doing are the
users. And they also don't understand
the risk they're taking on when supply
chain attacks can happen, when
dependencies can change, just all the
things that could go wrong when your
package.json looks like a glossary of
npm. And I've seen projects like this.
I've seen plenty of projects that look
like they just npm install whatever the
hell is in front of them rather than
actually trying to solve problems
themselves. What's always been
interesting to me is the people who
complain about these things tend to also
complain about AI code, which is weird
because in many ways AI code is kind of
the solution to this problem. The reason
people reach for libraries as, to be
frank, pathetic as is-odd or left-pad is
because they don't know what they're
doing, and thereby they are increasing
risk in their codebase to get over the
fact that they don't know what they're
doing. Things like generating AI code
instead, or tab-completing the function,
or just asking the chat next to your
code for help with the thing, are much
less likely to have those same problems.
Would it be better to just write the
code yourself? Maybe. Probably. At the
very least, understanding the code is
important. But
if you look at the things that are wrong
with stuff like these packages, the
classic supply chain attack stuff, the
fact that it's not in your codebase, so
anything that goes wrong with it or
doesn't perform exactly as you expect it
to is now a problem that you have to
deal with. And of course, you don't
actually understand it. These are all
real problems and we are able to
eliminate multiple of these simply by
generating the code ourselves instead.
So for these smaller-scale libraries,
yes, people probably shouldn't have been
using them in the first place, but AI is
a way to eliminate them from the lives
of less capable devs that don't actually
know what they're doing, the
stereotypical person who would install
is-odd. AI code is a net win for them.
It will also make it harder to tell that
they are as bad as they are, which has
its own benefits and negatives, but at
the very least, AI replacing these types
of things is a good thing. People
relying on packages instead of the two
lines of code they should write
themselves is not great, and moving off
of that is cool. But then there are lots of
other libraries that are quite a bit
more annoying to deal with doing things
that are tough. Things like Takumi, for
example, or fast_float in C++. These
libraries are in Rust and C++,
respectively. Takumi is actually a really
cool project. It makes a ton of sense
for a lot of things. Takumi is, as they
say here, JSX to image. It's a fast way
to render React components so that you
get an image out. Really cool if you
want to programmatically generate images
like this or like this. It's very cool.
Problem is, the bindings just were not
playing nice with our deployments on any
platform, especially on Vercel. I
spent a lot of time trying to fix it,
but whenever we shipped, it would build
fine, but it would error out all of our
instances and our production instance
for T3 chat went down for about 25
seconds when this officially shipped for
us. So, I immediately reverted the ship
and went and fought it a whole bunch
privately and concluded that I didn't
feel like it and it was easier for me to
work around the problem by vibe coding
my own solution. To be fair here, I also
have built things like this before that
render React in the browser, snapshot it
to a canvas, and then let you download
the image. It's not something I'm
unfamiliar with, but this library was
the better solution. Moving off of this
was a downgrade in terms of the
capabilities of the system, the
performance and the fact that we moved
this to the client instead of doing it
on the server like we had previously
wanted to. It also meant that if you
share a link to your wrapped that we
wouldn't have the image there. The image
had to be generated on client. So it had
its compromises. But as a developer that
understood these things, I could make
that decision myself. I thought about
it thoroughly. I looked at the problems
we were having. I looked at the things
that were going wrong and I concluded
that it wasn't worth it for the benefits
we got when I could instead build my own
solution. I ended up having to spend a
lot of time going back and forth and
editing a lot of the code myself because
once you get into weird canvas
rasterization stuff, the usefulness of
AI agents can go down quite a bit. I
tried everything with this and they
could all get a vague React rendering in
a box that could be turned into a
picture mostly right. But once it came
to things like spacing, word wrapping,
general layout stuff, and especially
like the aesthetics we wanted to add
behind the cards, it failed super hard
for all of that. And it is my
understanding that fast_float came from
a similar place for our friend
antirez. His goal here was to remove
one of the few C++ dependencies that is
required for building Redis. This is in
order to simplify the build process.
Funny, not far from what I'm dealing
with. It turns out that bringing in
other people's code and expecting it to
operate in your codebase the same way
your code does is not realistic. And
it's one of the problems that we often
deal with with our environments. This is
why tools like Webpack are still around
in 2026. Because as frustrating and
annoying as they are to work with, there
are often dependencies that companies
are building on and around that expect
tools like Webpack to work a certain way
so that you could integrate them into
your codebase a certain way. A tool like
this fast_float dep is expecting g++ to
be installed. Sadly, a lot of Linux
distributions don't have that installed
even when you set up basic build tools,
which means that Redis can't be built
on those machines until you manually set
up g++. Previously, that made sense
because writing a float library was
annoying and difficult and rewriting it
in C was even more annoying and more
difficult. It's just the type of
tedious thing that we reached for
libraries for. And I want to put these
libraries in different categories.
There are libraries that go beyond your
knowledge. For people who don't know
what they're doing, this is stuff like
is-odd or left-pad. For people who do
know what they're doing but maybe just
don't want to fight it, that's
something like Takumi for me. I didn't
want to write Rust code to render things
on the back end. Not interesting to me.
But what it does was within my knowledge. So
there's certain libraries you're using
because what they're building is beyond
your knowledge. And to be clear, I want
to categorize this primarily as a thing
that happens to more beginner devs. But
then there are libraries you use because
re-implementing them your way would be
tedious. There are a lot of things that
fall into this. For me, I still don't
like writing my own state management
libraries in my React projects because
as much as I have opinions about state,
all of the little things you have to get
right are frustrating and I will always
prefer to work with an existing
solution, especially since so many of
them are so extensible, so rock solid,
and so performant. There's another piece
to consider here: maintenance and
extensibility. You can build things in a
way that you'll maintain them yourself
and they'll be very extensible, but the
chances you do that are very low. So if
you need a really powerful tool to
handle whatever you throw at it in the
future, like let's say you're building
an application for the web and in the
future you want to add login, you want
to add a blog, you want to add sessions
that can be invalidated more easily, you
want to add better SEO through server
rendering, all those types of things.
Maybe you build the app yourself using
vanilla React or you build your own
frameworks to build around this or you
can use a tool like Next.js, which has
decades of collective work put into it in order to
make all of those pieces more likely to
be there when you need them. Have I
regretted picking next for projects in
the past? Absolutely. But did I regret
it because it was missing functionality
that I needed? No. Next.js as a
platform was very welcoming to all the
weird things I wanted to do to it.
Whereas now that I'm building more
things with traditional Vite plus React, I
don't have the ability to server render
the pages that I do want to, which is
very frustrating when I do want some SEO
on certain places or I want to have
static rendering for pages like my docs
page or maybe for my blog or the terms
of service on a site that I'm building.
That stuff is obnoxious to get right.
And having a tool like Next.js that
allows us to go in whatever direction we
want to is really powerful. When I put
Next.js in here, am I saying that I
fully understand every detail of how it
works and I could build it myself if it
wasn't so tedious? No. But the things
I'm using in a given project, I probably
could do that for. So, Next.js is still
the type of dep that would make sense
for a lot of my projects. That said,
there's a lot that it doesn't make sense
for, and I'm much more likely to throw
together my own solution now than I ever
was in the past. Things that were too
tedious to implement ourselves suddenly
make way more sense, though. And this is
where things get interesting. Let's try
to chart this out with complexity versus
your capability. Here's something like
is-odd. It's obviously as little
complexity as possible. So, who would
ever possibly use this? Well, somebody
who's just learning how to code. A true
beginner could absolutely see themselves
using something like is-odd, because
their capability is relatively low and
this library is relatively appetizing,
as stupid as the problem it solves is.
So somebody who
doesn't really know what they're doing
reaching for this makes sense. So the
way I'm thinking of this is that on this
section, these are things you would pull
in code or deps for. And down here are
things you wouldn't dare do that for.
Because as your capability goes up, your
willingness to adopt some external
solution to a problem goes down based on
again how complex the problem is. If I
have something as complex as
synchronizing data across client and
server, I might foolishly think I can
build that myself, which I did with
Theo's awful sync engine that used to
power T3 chat. I thought I could do that
myself and that the solutions that
existed just weren't worth the adoption
cost. So I tried this myself and I went
to hell for it. So now I use Convex,
which is an external solution. It is a
service. It is a framework. It is a
library. It is a lot of things. But it
is effectively a database as well as a
compute platform for updating things in
the database. So now that I've
experienced this problem, my willingness
to adopt Convex has gone up a ton. And
there are other benefits too: Convex, by
living in my codebase as actual source
code, makes it much easier for
these same AI tools to make changes. I
know, hardcore Convex shill, sorry. I
love this project so much and there's a
reason that it comes up a lot. It really
really helps in this new world. So
what's very interesting here is I feel
like this has bent out a ton. Instead of
this being such a simple linear line,
the willingness to pull in deps has
changed fundamentally. Why would I pull
in a dep when I could use the
dependency as a reference point that I
can then prompt an agent to go build?
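To make that concrete: a small Zustand-style subscribe/set store, the kind of dep many apps install a package for, reduces to something an agent can generate in one prompt. This is a sketch of the pattern only, not any library's actual internals:

```javascript
// Minimal subscribe/setState store in the style of libraries like Zustand.
// A sketch of the pattern -- not any package's real implementation.
function createStore(createState) {
  let state;
  const listeners = new Set();

  const setState = (partial) => {
    const next = typeof partial === 'function' ? partial(state) : partial;
    if (next !== state) {
      state = { ...state, ...next }; // shallow-merge updates, like most stores do
      listeners.forEach((listener) => listener(state));
    }
  };

  const getState = () => state;
  const subscribe = (listener) => {
    listeners.add(listener);
    return () => listeners.delete(listener); // returns an unsubscribe function
  };

  state = createState(setState, getState);
  return { getState, setState, subscribe };
}

// Usage: a tiny counter store.
const store = createStore((set) => ({
  count: 0,
  increment: () => set((s) => ({ count: s.count + 1 })),
}));
store.getState().increment();
console.log(store.getState().count); // 1
```

The real packages add selectors, equality checks, and React bindings on top, which is exactly the part you can skip if you only need the core.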
And this is where things are going to
start getting really interesting in my
opinion. An idea I've been thinking a
lot about is this quote from Steipete,
who I've talked about in almost all my
videos recently. Pete's building a ton
of cool open source stuff using agents.
His stuff's getting adopted more and
more, which means he has to review more
and more PRs and requests, issues, all
these types of things. Here's his quote.
I don't like pull requests anymore. A
large chunk of code change doesn't tell
me much about the intent or why it was
done. I now prefer prompt requests. Just
share the prompt you ran or want to run.
If I think it's good, I'll run it myself
and merge it. This is an idea that's so
crazy. I'm probably going to do a
dedicated video on just it. But it also
shows a lot of what I'm talking about
here. Instead of seeing a library and
immediately npm installing it, what if
you see the library and the developer
experience it provides and then you tell
an agent to implement something like it
in your codebase instead of just blindly
installing Zustand, which I love and
will continue to install, especially
because its bundle size is so small and
it's very well documented and agents
understand it well. But hypothetically
speaking, if I liked this but didn't
want to use the package or I wanted
something that only included parts of
it, like I really want this but I don't
want the rest, I can copy this, hop into
my terminal or into whatever other tool
I use, paste it in and say make this
work this way. Don't bring in any
deps. It will implement the parts that
I want. This is one of the harshest
realities when you bring in a library.
There's a very very good chance the
library does a hell of a lot more than
you want it to. Funny enough, it's one
of the benefits of these tiny ones like
is-odd and left-pad. These don't do jack,
but libraries like this probably
bring a lot of things in that you don't
want. And then libraries like Next are
definitely bringing in a lot of things
that you probably don't want or need.
And this is now a calculation that has
fundamentally changed in our heads.
Previously, the math was pretty simple.
You multiplied how hard the problem was
with how bad you need a solution to it
with how risky it was to adopt that
external solution. This math's gotten
all wonky now because the risk now feels
much greater. After all the npm exploits
and things, every additional dep in
your dependency list feels scarier now.
"How hard is this?" has gone down
because it's easier to implement, "how
badly do we need it?" hasn't really
changed, and the perceived risk has gone
up. This
makes the math work out very, very
differently than it used to. I'm not the
only one saying this. There have been a
couple articles from both Simon and
Jeffrey about how libraries are kind of
dead now. There's the classic meme as
old as time, all modern infrastructure
versus a project some random guy in
Nebraska has been thanklessly
maintaining since 2003. And to be very,
very clear, you're not going to vibe
code an alternative to FFmpeg that is an
incredible library that is
going to power things indefinitely into
the future. Open source by design is not
financially sustainable. Finding
reliable, well-defined funding sources
is exceptionally challenging. Projects
grow in size, many maintainers burn out
and find themselves unable to meet the
increasing demands for support and
maintenance. What's funny here is this
is getting even worse as a result of AI.
AI makes it easier to implement these
things yourself, but it also means a lot
of less experienced people are coming
into the field and are adopting these
things. I would guess, I'll go check,
but I would honestly guess these
libraries are probably being installed
more than ever, not less than ever,
simply because of the popularity of
coding going up as a result of these AI
tools. Let's see if my theory here is
right. is-odd has maintained roughly
where it was, but it is going back up
now after the Christmas slump. Yeah,
downloads have not gone down even though
the need to install this has gone down
because you can just vibe code an
alternative or hopefully you're
competent enough to know the one line of
code you want instead. left-pad, people
love downloading as a meme, so it will
always have these weird spikes. But as
you see here, the amount of actual
downloads does indeed appear to be going
up over time, which is weird and scary.
But again, that's because people are
writing more stuff than they ever have
before. What's even funnier in the case
of left-pad is that this is now built
into JavaScript. There's a built-in
string method, padStart, that does
exactly this, and people are still
installing the package. But when you have
enough more people who are building
stuff using AI that don't know what
they're doing, the likelihood they
install something like this is also
going up as a result. Something that's
important to think about when you're
adopting libraries is how much you can
shift it because you don't control it.
Remember, a library is a bundle of code
that lives somewhere else that you're
effectively pulling into your project.
It's one of the biggest benefits of vibe
coding alternatives because you can now
shift them in different directions when
they don't do what you need. So you have
to think when you adopt this thing, can
you influence it? Like can you change
how it works? And if you do need
something, can you convince people to
make those shifts for you? If no, you're
just a user of the thing. If yes, we
have more questions. Do you have the
capacity to contribute? Do you have the
time to go into this project, make
changes, file pull requests the way the
team wants, and communicate with them to
get them actually merged and shipped? If
no, you're a backer. You're just maybe
contributing money to it, but that's
about it. If yes, we have one more
question. Strategic importance. How
important is it for this thing to be
part of your project to work certain
ways? How important is this to your
business and what you're building? If
yes, you need to be a maintainer. If no,
you need to be a contributor. Roughly,
yes. But if you could throw this whole
thing away by just forking or building
an alternative, the value proposition
changes a lot. One fun example of this
is ink. Almost all of the coding CLIs we
are using right now originally started
based on top of ink. So what is ink? Ink
is React for the CLI. It lets you write
CLIs using React components by providing
a bunch of primitives as a library. Kind
of like how React Native works, where it
provides components that let you render
to native platforms, ink provides
components that let you render to the
CLI. I still use this for a lot of the
command line apps that I'm vibe coding
because AI tools know React very well
and this library is well documented
enough and used enough that people seem
to understand it too. And to go back to
npm trends, we can take a look at how
popular ink is and see very clearly that
it has skyrocketed in popularity largely
due to these new tools. Ink is really
cool. That said, ink has plenty of
problems of its own. The classic flicker
problem that exists in tools like Claude
Code and is starting to finally be
resolved is largely blamed on ink. The
harsh reality is that ink isn't the
problem there. There's a whole world to
go into here of the different rendering
modes in the terminal using the standard
built-in buffer versus the alternative
buffer. And that's where a lot of the UX
differences come between a tool like
Claude Code and Codex versus a tool
like what you see with OpenCode. OpenCode
isn't using the standard buffer, so
it's not going to behave the same way
your terminal does, and they have to
reimplement all of those things. That is
why these things are so different. That
said, ink is really powerful, but it has
its limitations around the way it uses
the buffer, around performance
expectations, around a lot of the
assumptions that it has, and most
importantly, the fact that it's just one
dude building it part-time who is busy
defending his country from another
country. I have the utmost respect for
Vadim and I totally understand why he
can't be making changes to this project
constantly, especially considering how
many multi-billion dollar companies are
leaning on it as heavily as they are.
That's why Sindre's largely taken over
the project. Sindre's built a ton of
awesome stuff, including ky, which is
one of my favorite fetch alternatives in
JavaScript and one of those things that
I was starting to install more that I
now rip out. I should sponsor him. I will do
that after stream. You should, too. They
are working really hard to maintain this
library, especially now that it is more
essential than ever. That said, certain
companies that don't necessarily believe
as much in open source that are relying
on this heavily, Anthropic, [cough]
sorry, I've been sick for a bit, decided
that rather than contribute to this
library or do an open-source fork that
fixes their problems, they've instead
decided to just internally privately
rewrite it. And now Claude Code is using
their internal alternative to ink that
is closed source and private because
it's cheaper to do things that way. And
that's really what it comes down to. To
go back to this chart, Anthropic decided
that while they kind of have the
capacity to contribute, they don't feel
like doing it, but they didn't like
landing on the backer space. So they
decided to just build it themselves. And
now with Claude Code internally, it's way
easier for them to build and maintain
that, which is a big part of why they do
it. Their willingness to fork
dependencies and internally maintain
alternatives is a lot higher because
it's easier to do that. This is also why
they chose to buy Bun: because unlike
the ink dependency, Bun is probably not
something you can vibe code your way
around. I know that Jarred is using
Claude Code to make a lot of changes to
Bun now, but that is with a lot of
effort, and Bun is becoming more and
more a core dependency of Claude Code as
they're trying to ship binaries that are
native single packages that can be easily
installed across platforms. Bun has
become more and more core to how the
project works. One way of thinking of
this is: what is the relationship
between the library and your project? If
this is your project, and the library
that you're using is some small part
inside of it, like, I don't know,
left-pad, maybe that library is not
something that you should have as a
library. Because what you're doing when
this is a library is effectively taking
this thing that isn't yours and linking
it in as a virtual project inside; you
don't have the code, you just have the
package. So for things like this, or in
the case that we're talking about here,
ink: if you
can't change this thing because it comes
in through a package, your decision to
kill all of this so that you can own it
makes sense. But if the dependency we're
talking about is a little higher level
like, you know, this, I would argue Bun
effectively wraps Claude Code the project
in a literal sense, because Bun is how
they bundle it, how they build it, how
they create the binary for it. It's also
where all of Claude Code's code runs.
Claude Code is a JavaScript project that
has a bunch of dependencies that run
inside of Bun. Bun is a higher-level
external dep in this case. So Bun being
something that they acquire and maintain
makes much more sense because if they
want to reduce the risk of all of their
dependencies, the one that wraps
everything is scary and they should
probably try to find some way to
maintain it. It makes sense that Bun was
acquired by Anthropic when you consider
the nature of how Bun works within their
projects and their work. Whereas ink,
they see as easier to just rewrite. And
here's the harsh reality. We all need to
change how we think of it. It's a very
strange world that we're living in now.
Anthropic is one of the first to realize
this and do the math and decide ink is
worth forking and Bun is worth buying. We
all need to think about this ourselves.
Which projects are worth forking, are
worth building our own alternatives to,
are worth funding and contributing to or
are worth avoiding because of the risk
profile. The way that we measure which
projects we rely on and which packages
and libraries and tools we build on top
of is fundamentally changing right now.
And it's important that you go through
your dependencies and think about this.
Is this library necessary? Is it holding
us back? This is also why I love
projects like shadcn because shadcn
doesn't encourage you to install a bunch
of npm packages that you don't have any
control of. It did use to encourage
installing a ton of the Radix packages,
but even they are now burnt by that and
are starting to move to Base UI. What's
cool with shadcn is that when you set it
up, it does so by copy-pasting all of
that stuff into your codebase directly.
shadcn creates a folder named components
where all of the files live. On the
topic of having
the code in your codebase, turns out AI
is way better at going through code
bases than it is at trying to find
things through hellish unorganized docs,
llms.txt files, and crappy MCPs that
load way too many tokens into your apps.
My channel manager and fellow YouTuber
built this app, BTCA, which will,
instead of doing that, clone the whole
codebase and use the built-in tools in
your agent tooling, things like grep and
other search tools, to find how the code
works. This ends up being way more
reliable than the classic MCP solution
like Context7 that is trying to index
all of the data in the docs. Yeah, BTCA
is way more reliable because it actually
understands the code because it has the
real code. It turns out these AI tools
are way better at dealing with code than
they are at dealing with poorly
formatted documentation. So when the
code lives in your project, that's going
to be much better than a bunch of old
outdated Stack Overflow threads that
don't actually understand how things
work. But do you know what's better than
better context? Just having the context
in your codebase directly. The value of
having your code inside of the repo,
inside of the place where your agents
are already operating, is higher than
it's ever been, which greatly increases
the incentive to pull things in. It's
part of why shadcn is doing as well as
it is right now. It's why more and more
people are starting to do changes like
this and use tools like BTCA or drop
external libraries where they can
entirely. There's been a ton of progress
in the space. I like Simon Willison's thoughts on this, too. He specifically says that a lot of his open source projects solve existing problems that are frustratingly hard to figure out. He also co-created Django, the Python framework, so he has experience here. He built his s3-credentials package specifically because getting read-only and read-write credentials for S3 buckets was annoying.
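For context on why that was annoying: a read-only grant means writing a correctly scoped IAM policy document. Here's a simplified sketch of the kind of policy a tool like s3-credentials automates; the exact action list is illustrative, and a real setup may need more:

```python
def read_only_policy(bucket: str) -> dict:
    """Build a minimal read-only IAM policy document for one S3 bucket.
    Illustrative only -- real setups may need more actions (versioning,
    multipart access, etc.) depending on the bucket."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:ListBucket", "s3:GetObject"],
            "Resource": [
                f"arn:aws:s3:::{bucket}",    # ListBucket applies to the bucket itself
                f"arn:aws:s3:::{bucket}/*",  # GetObject applies to the objects in it
            ],
        }],
    }
```

The subtlety that trips people up, and that made a helper worth building, is details like bucket-level versus object-level resource ARNs above.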
Shoutout UploadThing, by the way; that's a big part of why we made it. Getting S3 right is obnoxious. And instead of having to figure out how to do all of this properly through IAM policies, he built a package to make it easier. But now modern LLMs are very good at those same policies, to the point that if he needed to solve the problem today, he doubts he would find it frustrating enough to justify finding or creating a library to do it. And I'm seeing this
too. I can't tell you how many times
I've shared something I'm working on and
a bunch of people say, "Why don't you
release it? Where's the code?" And I'll
instead just send them the prompt and
then they go add it themselves. I no
longer want source code. I want prompts.
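To show how small some of these replaceable dependencies really are, here's the pad-text-on-the-left utility from earlier in the video as a few lines of code (my own sketch, not the actual npm package):

```python
def left_pad(text: str, width: int, fill: str = " ") -> str:
    """Pad text on the left to the given width.
    This is essentially the entire famous left-pad dependency."""
    if len(fill) != 1:
        raise ValueError("fill must be a single character")
    # rjust returns the string unchanged when it's already >= width.
    return text.rjust(width, fill)
```

One prompt gets you this, with no supply-chain surface attached. That's the trade the rest of this section is about.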
I want examples of how they're using the
thing. I want to know what the value is
and why they made it so I can then go
make my own equivalent. There are lots
of projects that you can't do in one to
10 prompts, but there are a lot of
things you can. A lot of dependencies
that are so well understood, so well
documented, and so common that replacing
them with your own code suddenly makes a
lot of sense. It's kind of crazy when
you think about it. Previously, the
thing that made a library really useful
was that the problem was annoying, the
solution was simple, and the package was
popular, universal, and well-maintained. That's roughly what would make one of those packages worth installing. These
exact same characteristics make it
really easy to solve with AI. If the
problem is annoying, chances are there's
a lot of public documentation on the
problem. If the solution is simple,
there's probably a lot of documentation
of that solution that the agents have in
their training data. And if the package
is universal, popular, and
well-maintained, there's a very good
chance the agents and the LLMs we're
using have it all in their training data
and can replicate it meaningfully well,
relatively quickly. The same things that
made one of these packages worth
installing now make the package worth
replacing. And that's a weird thing
that's going to have long-term side
effects as we keep changing how we
build. This is also a big part of why
we're not talking as much about new
libraries as we were in the past because
the incentive to bring in something new
that the agents might not know about is
a lot lower than the incentive to build
something the agent does know about, because
it lives in your codebase. And this puts
us in a really weird place where I think
we're going to see people copy pasting
prompts instead of building actual
packages. As NC just put it in chat,
prompts are kind of becoming the new
library. I expect we'll see more and
more things like this. Matt Pocock posted a few days ago that bad AGENTS.md files can make your coding agent worse and cost you tokens. Here's a prompt you can use to clean it up, plus a full guide for folks who want to learn. This is a prompt you can go to his website, grab, copy, paste into your terminal, and then use to fix things. Here's the prompt: "I want you to refactor my AGENTS.md file to follow progressive disclosure principles. Follow these steps. Find contradictions: identify any instructions that conflict with each other," yada yada. You know what? Let's try this.
Open a new terminal. Paste. Enter. Let's see what it has to say. Contradictions found: none detected. My instructions are internally consistent. The only potential ambiguity is that I say to use pnpm if the project already uses it, otherwise use bun, but it's clear as written. And then essentials: these are things that should stay in the root. Cool. It suggests that maybe my tech stack and TypeScript restrictions should be pulled out, but they're so simple I don't think that's necessary. "Always try for concise, simple solutions." That's funny, because I copied that from a Matt Pocock thing. Redundant, it says, that's the default behavior. Sure it is, Claude. Sure, your default is concise and simple solutions. "If a problem can be solved in a simpler way, propose it." Redundant, covered by the above. I don't agree; I've found that this addition has actually been very, very nice. "If asked to do too much work at once, stop and state that clearly." Too vague, what threshold? Eh, sure, but this is actually useful, and if I had a bigger CLAUDE.md this would be very, very helpful. In fact, let's ask it in T3 Chat to do the same thing. Here it's actually finding some useful stuff, especially comparing and contrasting my global CLAUDE.md with the internal one. This is a useful thing, and now it's asking questions about which things I want to keep track of. That's great. This is, again, valuable. This could have been a
library. He could have written a library that I would run, that would do a lot of different things, make these comparisons, and run against some agent he's hosting in the cloud and charging money for. But instead, it's a
prompt you can copy paste into your
codebase and see what happens. That's
really cool and I suspect we'll be
seeing more and more of these things
going on in the future. I think I've
said all I have to say here. The world
is changing and in some ways it's scary,
but in others it's exciting. How do you
feel about this though? Are you going to
go uninstall all your npm packages or do
you think I'm just blowing smoke up Claude's ass? Let me know how you feel.
And until next time, peace nerds.