What’s the best programming language for AI?
FULL TRANSCRIPT
Us programmers love to argue about
different languages. Which is best and
which is worst, which is fast, which is
slow, and so many other things about
them. But which language is best for AI?
Apparently, it's Polish. But we're not
here to talk about Polish or spoken
languages. We're here to talk about
programming languages. And just to be
even clearer, we're not talking about
programming languages that were made for
AI. There are already a few of these, like
Nanolang and Nerd, but as cool as those
are, they're nowhere near ready for real
world use. I want to talk about real
programming languages that we already
use every day. Autocodebench is a
benchmark made by Tencent specifically
to try and evaluate how different models
perform in different languages. Their
goal with this was to figure out which
models are the best at certain
languages. Well, we can go a different
direction with it. We can use this to
see which languages the models do best
with. And the results are genuinely
fascinating. I first saw these because a language author I'm friends with showed them to me, and we were
both just kind of standing there in
disbelief. It's a very interesting
bench. It has some very interesting
results that have very very interesting
implications. Whatever language you
think is the best, I'm almost certain
it's not unless you've already read
this. And spoiler, it's definitely not
Typescript as much as I would like for
it to be. And believe it or not, it's
not Python either. I can't wait to tell
you all about which languages are
topping this benchmark and why I think
they're doing that right after a quick
word from today's sponsor. Stop me if
you heard this one before. AI is great
at coding and it's better at code
review. That's why today's sponsor is
Greptile, one of the best AI code
reviewers. Okay, you've heard this. I
get it. I want to do something a bit
different today. I don't even have
permission from the team, but I I hope
that they'll be okay with this. They're
going to review this before we air it,
so it's kind of their thing. Regardless,
this is a post they just made about the
AI code review bubble. This was very
interesting to me and I thought it was
worth touching on because a lot of other
companies have started releasing AI code
review products as part of their
existing tools, be it Cognition, Linear, and more, and I really like their takes on this. Greptile's benchmarks say they're the best, but all of the benchmarks from all of these companies say that they're the best. We have to dig into this a little bit deeper. The first big piece
they touch on is independence. We
strongly believe that the review agent
should be different from the coding
agent. We're opinionated on the
importance of independent code
validation agents. In spite of multiple
requests, we've never shipped codegen
features. We don't write code. An auditor
doesn't prepare the books. A fox doesn't
guard the hen house. And a student
doesn't grade their own essays. Today's
agents are better than the median human
code reviewer at catching issues and
enforcing standards. And they're only
getting better. It's clear that in the
future, a large percentage of code at
companies will be auto approved by the
code review agents. In other words,
there will be some instances where a
human writes a ticket, an agent writes a
PR, and another agent validates,
approves, and merges it. This sounds
insane and I never would have believed
it, but I started doing it myself. They
talk a bit about feedback loops here
because they have a Claude code plugin
for Greptile. Yes, really. The plugin
can be set up to give feedback to the
coding agent after it writes code. And
once the review comes in, it'll address
the feedback over and over again until
there isn't any left. And if there's
ambiguity at any point, the agents Slack
the human to clarify. Do you know how
cool that is? Do you know how much longer
you can run agents for if they're
getting feedback and review, especially
if it's coming from a different harness
or a different model. I've tried the
local review things and they just aren't
as good. Greptile's understanding of
your codebase goes deep. I've seen it
call things out from many files away
that aren't even being touched in the PR
because it understands your code base.
Combine that with a coding agent and it
can iterate on itself way faster than
you can imagine. I guess there's a
reason everyone from Brex to Scale to
Post Hog is using them. They seem to
really get it. And from my experience,
they definitely do. Greptile stops me from shipping bugs. Let them stop you at soy.
Before we can determine what the best
language for AI is, we should probably
start with what makes a good language
and then also, of course, what makes a
good language for AI. So, what makes a
good programming language? I'm sure this
is a really simple question that nobody
will have contradicting opinions about.
What makes a good programming language?
Let's see what theories we all have.
Kotlin makes a good programming language
apparently. Types, type safety, simple
syntax, readability, no error handling,
long live go, lots of training examples,
great compiler, Apple beauty. Ooh, from
YouTube. YouTube's actually playing good
here. Pattern matching. That's one that
I love. That's a good feature. Human
readability.
Simplicity plus functionality equals
sophistication. How fast is it, bro? If
it's called Rust, good docs, emoji to
text ratio. Most languages suffer
immensely here in particular. I am
disappointed at how few languages let
you define variables and values as
emojis. Very frustrating. Higher-kinded types, macros, functors, decorators,
centralized tooling, standard library.
Interesting. I think hopefully from here
we can establish the one thing that
makes a good programming language for
humans. The human likes it. This is
really what it comes down to. As sad as
it is, I wish we could evaluate
programming languages in a more
meaningful way. But sadly, humans do not
allow us to do such. We project our
beliefs onto languages as a whole and
use those to evaluate them and as such
have preferences that go beyond anything
that can be measured. As Joel said in
chat, if you like it, it's good wine.
Yep. So, when we're talking about what
makes a good language for us, we're
going to obviously have our own
preferences biased into it. So let's ask
the next version of this question, the
one that we care about. What makes a
good programming language for LLMs? What
are the traits and characteristics that
make a model good with a given language?
Simplicity, types, LSP, token
efficiency, that's an interesting one to
bring up. A word I'm not even going to
try to pronounce. Concise syntax, high verbosity (that feels contradictory),
vector spaces, C style syntax, how much
training data that it was trained on,
the quality of the training data in the
language, how much data the language is
in the AI, and how specific and clear is
the syntax. There's a couple themes we
are seeing here. Those are simplicity,
token efficiency, amount of training
data, feedback mechanisms,
i.e. type safety. Anything else
important you guys feel like I'm missing
with this? How unique the syntax is.
That's an interesting one. Another one,
not loads of [ __ ] code in the training
data. So, amount of training data. I
will put underneath that quality of
training data. It's Clojure. Mandatory
return types. How similar are the
patterns to other languages. I guess
homogeneity is indeed a useful value.
Like how similar is it to other things?
So, let's take a handful of languages
and throw them against this and see how
we think they will rank. JavaScript: simplicity, eh; token efficiency, definitely not; amount of training data, yeah, there's a lot of it; quality of that training data, mixed bag; feedback mechanisms, jack [ __ ] So JavaScript is probably not great, but the training data is huge. If you think training data is by far the most important thing, then you have a ton of it and JavaScript should be pretty good. So depending on how you weight these things, it should be near the top or the bottom. TypeScript: similar overall, but I would argue the TypeScript training data available is of slightly higher quality, and the feedback mechanisms are very important because type safety can be checked with a compiler and the feedback can be given to the model. Thereby I would expect TypeScript to perform better. Python: an interesting one. Probably the most data, and also the most incentive from the labs to make it work because they use Python a lot, but no agreement around type safety. There are lots of different ways to do it and all of them suck. There's a ton of [ __ ] data for Python and the feedback mechanisms are garbage. So, probably a mixed bag. Java: definitely not hitting simplicity or token efficiency anytime soon. Probably has a good amount of data, probably decent quality data. Not great feedback mechanisms, but not the worst either. It is kind of checked.
Sure. C++
I think C++ is the only thing to get a
straight zero on every one of these
other than amount of training data. It
has lots of training data, but it's not
a simple language. It is far from an
efficient language in terms of the token
efficiency. The quality of that data
varies comically, and the feedback mechanisms are "did it crash your system or not." Good luck. C# is probably a lot better in these regards. You know what I'll do? I'll add one more piece here, a thing I think is important: documentation. And I'll also say ease of accessibility. So, how good are the docs and how easily are they accessed? And to
be fair, for everything up until this point, I would say the answer to the documentation-access question is [ __ ] but for C# it's pretty damn good. The C# community, ecosystem, and frameworks around .NET and everything have some of the best documentation in the world. Whether or not you like the language, it is very good at this. How about Ruby? Honestly,
for docs, Ruby does pretty well. Feedback mechanisms? Utter [ __ ] [ __ ] Quality of training data? Not very good. Token efficiency? Decent. Simplicity? It could be decent, but Rails ruined that for us. Could be fine. I would probably guess it's higher up than Python and Java based on what we know, but it's hard to know. Elixir, my beloved. Simple,
depending on who you ask. Token efficient? Decently so, probably similar to Ruby there. Amount of training data? Very, very small. Quality of training data? I'll be a little conceited here and say all of the Elixir devs I know are some of the smartest people I know, so the quality of that data is likely very high. Feedback mechanisms? [ __ ] garbage. They are still introducing type safety to the language. They're not that far from it, and it's coming very soon, but the tools understanding it is going to take a while. And the documentation? World-class. Golang: simple to a fault. Token efficient? Absolutely not. Amount of training data? Decent. Quality of data? Decent-ish. Feedback mechanisms? Garbage. Documentation? Eh, fine. Rust: simplicity,
no. Token efficiency? No. Amount of training data? Decent. Quality of training data? Very decent. Feedback mechanisms? World-class, the best feedback you could possibly get from a compiler, even if the compiler is slower than the model is. And documentation? Very good. So
just off of what we said all here, I
would guess we'd have Rust near the top.
Amount of data is important. So I would
guess Typescript would be decently high
up, right? C# is probably decently high up
there. I know Go doesn't have the
feedback mechanisms, but the data and
the quality of the data and the
documentation probably helps it a lot.
C++ is not going to be anywhere near the top. Then Python, because the labs care about it so much; Ruby, because it exists and there's a lot of it, and probably a lot of good examples; JavaScript, because there's a lot of examples; Java, because there's a good enough number of examples; and C++, obviously, near the end. What is the
other one we're missing? So JS. Oh,
Elixir. Huh. Where does Elixir fit here?
That's a tough one. I would just
instinctively say right below Ruby by
the nature of Elixir being very similar
to Ruby but less popular. This all makes
sense. Like given the biases we have and
what our thoughts are for what makes a
good programming language, Rust
obviously should be near the top. C++
obviously should be near the bottom.
Sure. Spoiler: this one's a pre-read.
I've already seen the results here and
they are far from what any of us would
probably expect even with an exercise
like this. So, if we look at the results here by the average per language, we'll see Opus won. And if we look at the more recent version (they did a V2), we'll see Opus 4.5 thinking crushed it. So, overall, Opus is the best model on this bench. It doesn't seem like they did 5.2, and 5.2 Codex wasn't out at the time they did this, so we don't know how those compare yet. They'd probably do pretty well. But
we're not here to see what model is the
best. We're here for the languages,
which is this section here. So this is
the score based on all of the problems
they had for it. Most of the languages
had roughly the same number of problems, between 185 and 200 similar problems each. And you'll see the top languages are not the ones we suspected. We thought it would be Rust and TypeScript, and the scores for Rust and TypeScript are nearly identical, about 61.3% each. The
languages we thought would be very good
were not. But then we get to some other
interesting ones, like Java, which got a 78.7, a meaningfully higher score than Rust. Weird. C++ got a 74.7, meaningfully better than TypeScript. Even weirder. Then we get to C# with an 88.4%.
But if you're actually looking at the
screen, which I know not all of you do,
you might have missed the thing I'm kind
of trying to cover up.
My favorite language, my beloved Elixir
got a 97.5.
Wild. Did I do this whole video to trick
you guys into listening to me rant about
Elixir for a bit? Kind of. And I think
I've earned the right to do that for
getting you here for this long. So, I'm
going to do that now. Elixir is an
incredible programming language. I love
it dearly. It was the first time I ever felt productive as a programmer, and the result was that I did a lot of really cool things, ended up with a pretty wild career, started a company, and now I am known as the TypeScript guy. But
believe it or not, my heart is not in
Typescript. My life is in Typescript. My
work is in Typescript. My heart will
always be in Elixir. José Valim is a
personal hero of mine. I love that man
so much. I would not be the programmer I
am today without him. I have a whole
video I did that was actually a talk I
gave at ElixirConf about how Elixir
made me the engineer I am today. I
[ __ ] love this language. There's a
lot of things I like about Elixir. I'm
just going to talk about the language
ones because that's what you experience
when you write Elixir. Things like the
piping. It's so good. You just pipe
almost the same way you do in a
terminal. And when you do that, you're
taking the value of the previous return, whether that be a variable or a function call, and piping it as the first argument into the next thing. So here we take the string, break it down into graphemes, and then
measure the frequency of each value from
that string. So here we see E occurs
once, I occurs twice, L occurs once, R
occurs once, X occurs once. This is such
an elegant way to write this instead of
storing a bunch of random variables and
keeping track of all of it. A way of
thinking of this is effectively that the
pipeline the data goes through is what
you're encoding rather than the
different values that exist throughout.
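The pipeline just described can be sketched as actual Elixir, using the standard `String.graphemes` plus `Enum.frequencies` pattern:

```elixir
# Take a string, split it into graphemes, and count how often
# each one occurs. Each step pipes its result into the next.
"elixir"
|> String.graphemes()
|> Enum.frequencies()
# => %{"e" => 1, "i" => 2, "l" => 1, "r" => 1, "x" => 1}
```

No intermediate variables anywhere: the code encodes the pipeline the data flows through, not the values along the way.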
And this pattern is so powerful,
especially when you combine it with a
lot of the other really cool things you
can do in Elixir. One of my favorite
features in Elixir is how the pattern
matching with overloads works. You can
redefine the same function multiple
times with arguments defined as checks
effectively. So when you define the
arguments for a function, you don't type
them the way you do in other languages.
Like in JavaScript or TypeScript, you
say this function takes in an object. It
has name, which is a string, and it has
age, which is a number. In Elixir, you
can define specific values that match.
And if you have three versions of the
same function, it matches based on what
the values are encoded as in the
arguments. So here we have a Fibonacci
function. The Fibonacci function is
redefined four times. So this first one
is defined as zero. So if you call fib
with zero, it will hit this first. So it
returns zero. If you call it with one,
it will hit this and return one. If you
call it with n when n is an integer and greater than or equal to two, then we will do this, where we call the same thing with n minus 1 and n minus 2. Instead of doing
an if statement inside of the function
to handle these cases, we are defining
the functions via those cases. And then
we have a fallback here which is what
catches if none of these other things
catch. Because effectively the way this
is evaluated by the language in the VM
is it goes through these one at a time.
It takes the value in the function
you're trying to call and it sees if the
value matches this one. If it doesn't,
it checks the next one. If it doesn't,
it checks the next one. And if it
doesn't, it checks this next one. It's
so cool. It's so readable. It's so easy
to grok. It keeps you from having to
deeply nest things because of complex
logic. And there are so many little
things you can do with this. Like, you can redefine the checkout function for when the user isn't signed in, or when the user is an admin.
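Sketching both examples in Elixir: the four-clause Fibonacci from above, plus a hypothetical checkout function (the map keys on the user are made up for illustration):

```elixir
defmodule Demo do
  # Clauses are tried top to bottom until one's pattern matches.
  def fib(0), do: 0
  def fib(1), do: 1
  def fib(n) when is_integer(n) and n >= 2, do: fib(n - 1) + fib(n - 2)
  def fib(_), do: {:error, :bad_argument}  # fallback clause

  # Same idea for checkout: the argument pattern, not an if
  # statement inside the body, decides which clause runs.
  def checkout(%{signed_in?: false}), do: {:error, :not_signed_in}
  def checkout(%{admin?: true} = user), do: {:ok, {:admin_checkout, user}}
  def checkout(user), do: {:ok, {:standard_checkout, user}}
end
```

The checks live in the function heads, so none of the bodies need nested conditionals.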
and I see a lot of people saying really
dumb things in chat right now like your
compiler for a statically typed language
can do this. Rust can do this. Haskell
can do this. Okay, Haskell can kind of
do things like this. But this this idea
of matching the expectations of an
argument as part of the function's
definition rather than part of the
contents of the function is really
really really cool. And there are a lot
of programmers who fell in love with
this concept, including, funnily enough, the Primeagen, because I ranted to him about
this so non-stop every time we hung out
that he went and explored Elixir out of
curiosity. And while a lot of the quirks
of the language aren't for him, he did
fall in love with this aspect. It's so
[ __ ] cool. And there's just a lot of
these types of things in Elixir that are
weird one-offs that are novel to Elixir,
which I thought would make it way harder
for models to understand because
training data for other languages is not
going to map well to these patterns.
Maybe some ideas from Ruby will, but not
too many. And once you start combining
this with pipes, things get really
crazy, especially because nothing is
mutable in Elixir. Everything is an
immutable value. If you want to change
it, you create a new value. It's a weird
language and I deeply dearly love it.
It's so fun. But none of these aspects
are the ones that I think make it good
for tools to use like AI agents. If
anything, these might actually hurt a
little bit because they are so unique to
Elixir. Some part of that helps because
it's not able to use the training data
for other things and make those mistakes
because it's so different. And the
immutability definitely helps a lot too.
But the things I think help the most
aren't necessarily the syntax or the
language or its characteristics. I think
a lot of it is Hex. If you're not
already familiar with this fact, Elixir
doesn't compile to machine code like a
C++ or a Rust does. Elixir compiles to
Erlang, which runs in a VM closer to how
Java works. But Erlang's VM, Beam, is
actually magical. The things you can do
with Beam are unfucking believable. Not
relevant right now. So, I will resist
the urge to deep dive on my loyalty to
that VM because of how absurdly powerful
and cool and fascinating it is. Let me
know in the comments if I can do that in
the future because I would love to just
sit here and rant about all the things
that make this ecosystem cool. Believe
me. But since Elixir is running on Beam
and is effectively just Erlang, Elixir
can use the whole Erlang ecosystem as
well as Gleam, which is another language
built on top of Erlang and Beam VM. It's
so cool. This ecosystem is great. The
reason I'm here, though, is not just because I want to talk about Erlang. As
tempting as it is, it's because I want
to show you guys some things. Here is the most popular package on Hex: Jason, "a blazing fast JSON parser and generator in pure Elixir." Let's click on
it. Here we can see all the things you
would expect from an npm type site. You
can also go to the online docs because
every single package on Hex has a HexDocs site. Every single package on Hex comes with documentation built in, because you document it as you build it.
There is syntax in Elixir where, as you write source code, you write comments in a specific format, and documentation gets built out of them just from writing the code. You
document it by commenting it. And the
results are really, really good docs
when you're done. I see chat has this
starting to click now. That's so good.
Holy [ __ ] That's so cool. Wow. Yeah,
it's another one of those things where
it's cool for humans and it's even
cooler for models. Let's take a look at
the decoder module. Here's the file in
Elixir, of course, for this project for
decoding JSON. And as we scroll through
here, you see @moduledoc false. This is saying: don't include this in the module docs directly. So, this one is specified as "don't generate docs for this module." Let's find something where they do. Probably jason.ex. Here we are: @moduledoc, "A blazing fast JSON parser and generator in pure Elixir." Look at that right there. @doc: "Parses a JSON value from input iodata," with the options all described right here. And they're even getting in early on the type stuff with @spec.
All of this is in the source code
alongside the actual thing that you're
writing the code for. The code and the
docs are not just like in the same
codebase. They are intertwined together.
That means not only that the docs are
easy to access and generate, but more
importantly, it is significantly more
likely these docs are up to date and the
relationship between these docs and the
source code is very very tight. That
makes models really good at doing this.
As people are correctly identifying in
chat, colocation makes LLMs go brr.
Absolutely. It's great for context,
great for LLMs. For the same reason
Tailwind is good for models because it
colocates the markup, the styling, and
the behavior, actual built-in
documentation standards in the language
make the modules we use much easier to
understand and adopt. I see a lot of
people saying this is like JSDoc. No, it's not, because JSDoc is barely a used standard. Nobody has an actual good autogenerator for the things you do in JSDoc. Nobody [ __ ] uses it, and it was added too late to be relevant. The
thing that's magic about the module docs on HexDocs is that every single package has them. I guarantee you, if we just sit here and go through the top ten here, every single one of them... I'll just pick the top six. Okay, I have self-owned
already. One of the first ones I checked
is an Erlang dep, not an Elixir dep. And the Erlang people aren't quite as good about the documentation.
So, one bad one. Let's see how many more
bad ones we hit. Let's try the telemetry
one.
This one looks really, really good,
actually. Cool. The next package is
good. cowlib: also Erlang, not Elixir. Sad. The Erlang ones are going to be worse. The Elixir ones are all going to be perfect. Again, when we go back to
the bench here, I don't know if Erlang made it in as a test language. I don't think it did. Sadly, it didn't. But I would bet Erlang would score much worse, because the Erlang ecosystem, as insane and powerful as it is, was started before most of what we now consider good language design existed. Erlang is so [ __ ] old and weird. And Elixir came out way later. A
big part of why José made Elixir is that he wanted to take all of the lessons that had been learned about what makes languages good and maintainable, and about which patterns turn out to be good later on, and bring as many of those into Elixir as possible. So while this was
disappointing to see how many of those
aren't well documented, I would be surprised if we went through any of the Elixir deps and saw any problems.
Yeah, again, all the Elixir ones are in a very good state. I will resist the urge,
the deep, deep primal urge to keep
glazing Elixir, because there's other interesting data here too, like C# scoring so absurdly high compared to a lot of other things. What would make C#
similarly high up here? It's almost certainly the aforementioned documentation. Similar to Elixir, there is a concept of doing comments-as-documentation in the C# language directly. In the middle of a C# file, you can triple-slash comment and write XML that describes the section of code that you're writing. You can do this at any point, write XML definitions of the things you're working on, and similarly generate documentation as a result.
these docs is absurd. I would argue that
if you were to go train a new model from
scratch purely on programming language
documentation, it would be better at C#
than anything else because the quality
of these docs is insane. But there are
other aspects here too, things that we
didn't necessarily touch on here.
Quality of training data definitely
fits. But another similar thing: how many solutions are there to a problem? In
languages that I use a lot, things like
TypeScript for example, there are tons
like nearly infinite different solutions
to a given problem. But in languages
like Elixir and C#, there are many, many
fewer. Languages that came in with a strong design philosophy, that decided there were only so many ways these things should be done, and that provided those things alongside good documentation, tend to do very well. The points I would think about are
quite different. How easy is it to find
the right solution? How hard is it to
find a bad solution? How easy is it to
understand code that you already have
available to you? How clear are your
external dependencies and their specific
behaviors? And one last one, this is
kind of in the realm of type safety, but
different in a way I think you'll
hopefully understand now. How easy is it
to ensure that this code will only run
when my expectations are met? I would
argue, and again, this is all theory
crafting. None of this is stuff that we
know for sure. I would argue these
traits are ones that really affect and
shape how good models are with a given
language. In a language like TypeScript,
it's pretty easy to find a good
solution, but not the right solution.
There are so many different options that
it's easy to get lost in the sauce
trying to pick the right one. And if you
pick one in one place and a different
one somewhere else, things get much
harder to maintain over time. And more
importantly, how hard is it to find a
bad solution? In Typescript, it is
trivial. It is so easy to find bad
solutions. You can just press tab a few
times and you'll find a bunch. How easy
is it to understand code that you
already have available to you? This
one's also fun. Type systems and LSPs
are great, but they only help when you
have a cursor you can hover over them to
see what the type definition is. If you
give the right feedback mechanisms to a
model to find these things, it can help.
But if it has to hover over every single
thing to see what it does or run a tool
call to check it every single time, that
is going to perform much worse than if
the language describes the things that
they want to know. This is also an
interesting thing because token
efficiency does not map directly to
performance at all. A lot of people seem
to think token efficiency matters. I
certainly did myself. But if you look
here, C# is one of the least token-efficient languages, and it was one of the best performers on that bench. How
clear are the external dependencies and
their specific behaviors? As a
TypeScript dev,
especially once you get into post-install hell, things can be rough. I'm
currently working on porting an old
codebase for the ping zoom product, the
video call app for streamers. The amount
of weird things those old packages do is
horrifying. And even getting the model
to understand the difference between the
old version and the new version can be
really, really rough. And how easy is it
to ensure that this code will only run
when my expectations are met. Type
safety and type systems can help here.
But nothing will protect you better than assurances in the language itself that these expectations are met before the code runs, which is, again, a thing Elixir does exceptionally well. So, to go back
through all of these with Elixir, how
easy is it to find the right solution?
Pretty easy. How hard is it to find a
bad solution? Extremely hard because
there's so little Elixir code published
that all of the code that is published
is for the most part pretty good. How
easy is it to understand code that you
already have available to you? Trivial.
It is a very clear language once you
understand its idioms. It is very very
readable, especially once you start
chaining things. Again, just some
examples. With Elixir, you can pipe a
value to another function trivially. So
if we have an input, we can pipe it to to_string, then String.trim, String.downcase, String.replace, and then String.trim again. Sure, in JavaScript, we
could do dots over and over. But that's
assuming that the type of the value
stays consistent throughout and that you
don't need to do other things and pass
other arguments. Like here, we want to
pass other arguments to the replace function. And it's also of note that this is a value, and that the functions aren't things that exist on values. You don't have input.uppercase; you have String.upcase, and you pass it a value. Since this is just strings, you
could rewrite this as a one-liner in JS,
but that's not going to look very great
or be very readable if we're being real.
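The chain just described, written out in Elixir (the arguments to String.replace are an assumption for illustration):

```elixir
# Normalize some input: stringify, trim, lowercase,
# swap spaces for dashes, then trim again.
input = "  Hello World  "

input
|> to_string()
|> String.trim()
|> String.downcase()
|> String.replace(" ", "-")
|> String.trim()
# => "hello-world"
```

Note that String.replace takes extra arguments mid-chain, which is exactly the case where JavaScript's method chaining starts to fall apart.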
But in a language like JavaScript, you
have to reassign these values over and
over again before you can get to the
result you're looking for. Oh, this is a
much better example. Take a query string like a=3&b=4. We have to parse it, coerce it to integers, compute a result, and then format it. So, we take the query string, pipe it to URI.decode_query, then pipe it to then(extract_ints). This is a syntax for, like, catching it if there's an error. Then fn a, b -> :math.sqrt(a * a + b * b). This destructuring here is similar to JavaScript, where you can destructure the arguments. So here the function is being called with a map that has keys a and b, and those are now two values we can access, a and b. And in that function you can see right there the destructured map: a is a and b is b. And here we return String.to_integer(a) and String.to_integer(b); we have now extracted these from the URI decoding. So the extract_ints function will parse out the strings and give us that. And now we have a and b as ints, map :math.sqrt(a * a + b * b), pipe it to Float.round, pipe it to to_string. I was wrong, by the way: then is a newer thing, which is why I'm not familiar with it. It's for moving the arguments around. Not too important here, but you get the idea. There are a lot of different helpers to change where the argument goes; if you don't want it to be the first or the last argument, you can move it around using helpers like that. Not too important. You get the general flow here very easily. Even if you don't know the syntax, you can clearly read what this does, which is really, really cool. Whereas with
TypeScript, you can read this and
understand it. But it takes a lot more
mental effort because the core syntax of
the function is no longer just
describing the flow of information. The
core syntax of the function is defining
a bunch of how we pass values around.
Since we have to assign every bare value
throughout this, the code is as much how
the memory is being used as it is what
we are doing. And this is the magic of
elixir that I love so dearly. The top
level function is describing what is
being done, not the how in the same way.
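Reconstructing that walkthrough as real Elixir. The module and helper names here are my guesses at what's on screen, not a quote of it:

```elixir
defmodule QueryMath do
  # Only matches when both "a" and "b" params are present;
  # anything else raises instead of limping along.
  defp extract_ints(%{"a" => a, "b" => b}),
    do: %{a: String.to_integer(a), b: String.to_integer(b)}

  defp extract_ints(other),
    do: raise(ArgumentError, "expected a and b, got: #{inspect(other)}")

  def run(query_string) do
    query_string
    |> URI.decode_query()
    |> extract_ints()
    # then/2 pipes the value into an anonymous function,
    # letting us destructure the map mid-pipeline
    |> then(fn %{a: a, b: b} -> :math.sqrt(a * a + b * b) end)
    |> Float.round(2)
    |> to_string()
  end
end

QueryMath.run("a=3&b=4")
# => "5.0"
```

The top-level function reads as the flow of the data: decode, extract, compute, round, format.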
Good point in chat here is that it almost feels like pseudocode. It very much does, and that is why I think it is so [ __ ] cool: it so clearly describes what is happening, not all of the other details. And let's be real, you don't know how the memory is being managed here anyway. When do the aRaw and bRaw values get cleaned up? Good question. I got 5.2 to remove all of the thens to make it a little easier to read. You have to define the functions a little more aggressively externally as a result, but it's still very readable. Like, this is just so good. It's so good.
And people are asking about Effect-TS. Do you think this is actually more readable? I say, as a fan of Effect: yeah, Effect is just like Elixir, it even has pipes. Sure. This is the equivalent of that top snippet from before: pipe parseParams query, comma, Effect.flatMap p, nested pipe, Effect.all, a, pipe requireParam a p, Effect.flatMap toNumber a. No, I'm not saying you shouldn't use Effect. In
fact, if you're using TypeScript, I think Effect is a phenomenal way to structure your TypeScript in a way that is more reliable. You will never beat
the readability and elegance of this.
It's so good. It's so good. And that's a
large part of why the models like it so
much. It's extremely clear when you read
code what it is doing. And then the last
parts here, how clear are your external
dependencies and their specific
behaviors? Very clear because the source
code is bundled and has the docs
alongside it. And how easy is it to ensure that the code will only run when my expectations are met? Incredibly easy, because, again, pattern matching. Here,
because again, pattern matching. Here,
when we extract ins, we only do this
when we have an a value and a b value.
If we have a different value, we raise
an error. The fact that the argument for
the function itself is what defines if
it can be hit, not if it should be hit,
if it can be hit, is magical. I think
I've said all I have to here. If
somebody who's deeper in the C# world wants to do the equivalent of this video, but for the C# side, instead of just me glazing Elixir constantly, that
would be really cool to see. And please
let me know if you do that. I would love
to see that. But for now, it's probably
time for me to go back to the trenches
of Typescript instead of sitting here
glazing the language I love most. But
maybe, just maybe, in this AI powered
world, Elixir might finally win. Let me
know how y'all feel.