DevOps from Zero to Hero: Build and Deploy a Production API
FULL TRANSCRIPT
DevOps, the word that makes half of
YouTube tutorials feel like you don't
really belong. Like you should stick to
HTML while the real engineers handle the
big stuff. But when you try learning it
yourself, you're slammed with Docker
commands that look like alien code, YAML
files that read like broken poetry, and
the jungle of tools that never seem to
connect. That's the trap. DevOps gets
sold as a scary gatekeeping monster. And
the problem is that no one shows you how
this puzzle fits together. You either
get a 10-minute "look, it works" demo or a
six-hour death by PowerPoint lecture
that has nothing to do with the apps you
actually want to ship. So, welcome to
the full DevOps course where you'll
finally see the whole picture step by
step as we go from fundamentals to a
production-ready API that looks and feels
like something you'd actually run in the
real world. Here's what you'll get. A no
BS intro to what DevOps actually is and
why it matters. A hands-on Git crash
course so version control stops feeling
like black magic. CI/CD pipelines that
deploy your code automatically. A Docker
crash course to containerize your apps
for dev and production. Kubernetes
deployments like real companies use
them. Infrastructure as code to spin up
environments on demand and monitoring,
logging, and security baked in from the
start. And of course, we won't end
there. In the signature JavaScript
mastery style, you'll then use this
learned knowledge to build and deploy
Acquisitions, a real-world API for buying
and selling SaaS businesses with tons of
features like JWT-based authentication
and authorization. Role-based access
control for admins and users. User
management for accounts. Business
listings to create, update, delete, and
browse. Deal management to track deals
from pending to completed. Health
monitoring of all the endpoints, request
validation using Zod, structured logging
with Winston, completely secured
endpoints using Arcjet, a painless
security system for developers that
protects your app from bots, spam, and
abuse in real time. For the database,
we're using Postgres with Neon DB. It's
serverless, lightning fast, and built
for modern cloud apps that even work
locally. Adding on to it, Drizzle ORM for
type-safe database queries, Docker for
containerizing applications across dev
and production environments, Kubernetes
to orchestrate containers at scale,
Jest and Supertest for automated
testing and validating application
behavior, CI/CD pipelines for linting,
testing, deployment and automation,
clean absolute imports, ESLint and
Prettier for clean, maintainable code, and
much more, using Warp, the fastest way to
build and ship applications with AI.
It's an all-in-one environment where you
run commands, write code, and prompt top
AI agents in parallel. This isn't just
another DevOps demo. By the end, you'll
have hands-on experience and
production-ready backend you can actually
ship tomorrow. And if you want to dive
deeper into building scalable
production-ready back-end systems from
the ground up in a beginner-friendly
way, check out the upcoming ultimate
back-end developer course. It's the
perfect combination of strong
foundations taking you all the way from
core networking concepts all the way to
building and deploying APIs with full
DevOps and AWS integration. You can
either join the wait list if it's not
out or if you're lucky, it's already out
and you can get started with it right
away. So, grab your cup of coffee and
let's ship code like six-figure-salary
engineers. By the way, I recently opened
up channel memberships. So, if you've
been enjoying these videos and want to
support the channel, that's one of the
best ways to do it. In return, you'll
also get access to extra resources like
the Figma design files from my
tutorials, detailed cheat sheets paired
with this video, and even complete
ebooks. Plus, as a member, you'll
actually get a say in what I record next
for YouTube. It's totally optional, but
if you've been finding value here and
you're not quite ready to become pro
yet, if you want to dive a bit deeper
while helping me make these full courses
free, I would be thankful if you joined
today. Before we start building, let's
get our environment set up. Because
here's the thing, DevOps isn't something
you watch, it's something you do. And to
actually follow along without getting
stuck, you'll need the same stack I'm
using. The good news, it's all free. And
these are real tools companies are using
every day. First, make sure you've got
Node.js installed. That's the core
runtime we'll be using to build our
backend API. Then our database. We'll be
using a cloud Postgres provider called
Neon DB. This will power up everything
in our API. So, you'll want an account
ready before we dive in. Click the link
down in the description, create an
account, and you're good to go. Next,
Arcjet. You can build the cleanest API
in the world, but if it's wide open to
bots, spam, or abuse, it's useless. Arcjet
gives you real-time protection while
you build. So security isn't something
you tack on later. It's there from day
one. And finally, Warp. This is where
all the action happens. Running
commands, shipping code, and even
prompting AI agents to speed up your
workflow. It's a modern AI powered
developer environment you'll never want
to leave. And right now, their pro plan
is literally one buck, which makes it
kind of a no-brainer. So once you've set
those up, you'll have the exact stack
I'm using. That way when I say run this
command, you can run it too with no
interruptions, no detours, just straight
into building. Very soon, you'll build
and deploy your very own production
ready API. But first, let's dive right
into the crash course.
You've heard this term a hundred times,
but let's be real. DevOps sounds scary
as hell, right? Is it a job? A tool? A
secret club for 10x engineers with six
monitors and a tolerance for drinking
five Red Bulls? You're not alone.
Everyone feels like this at first.
DevOps sounds massive, mysterious, and
way too complicated. But here's the
truth. DevOps is simpler than it sounds.
And in this crash course, I'll break it
down like we're just chatting over
coffee. Here's how software used to
work. You write some code, it runs on
your laptop, you deploy to a server,
done. And if you're a vibe coder, you
might even skip testing entirely, push
it straight to production, and call it a
day. That actually works. If your app is
tiny, and no one cares if it crashes.
But the second your app grows, traffic
spikes, and the real money is on the
line, that's when this vibe ship starts
sinking. Servers crash, hackers attack,
thousands of users log in at once, and
your app dies at 3:00 a.m. Tell me, who
handles all that chaos? Early on, it was
the same developers writing features.
But most devs aren't trained to babysit
servers or fight off security attacks.
Their job is to build, not panic at
midnight. So companies created operation
teams or ops. They managed servers,
scaled apps, monitored uptime and
performance, applied security patches,
and got woken up at 3:00 a.m. when
everything broke. Honestly, props to
them. Ops became the guardians of
stability, while devs focused on speed.
So now you've got two camps. Devs who
ship features fast and ops who keep
things stable. So what happened? Well, a
tug of war. Devs tossed code over the
wall. Ops slowed them down, blocked
releases, or spent weeks cleaning up.
Result: frustration, silos, and slow
delivery. So finally, DevOps was the
solution. Not a tool or a role, a
culture shift. It's about breaking down
silos so devs and ops work together.
Backed by automation, they deliver
software faster and safer. DevOps is a
culture of collaboration, a set of
practices for building, testing, and
releasing reliably and an automation
toolbox for deployments, monitoring, and
infrastructure. Think of your app like a
restaurant. Devs are the chefs who are
cooking meals or code, and ops are
waiters and managers getting meals to
customers. They handle servers and
uptime. Without DevOps, chefs would toss
random dishes onto the counter and hope
for the best. Chaos. With DevOps, the
kitchen runs like a well-oiled machine.
Orders flow smoothly, food is
consistent, and customers are happy. So,
why not just Vibe Code? I mean, you can
skip all of this if your project is just
for fun, but once you have real users,
especially paying users, vibe coding
quickly turns into vibe burning. So,
let's skip the burnout and learn DevOps
the right way.
You have a sense of DevOps, but not the
full picture yet. You might be thinking,
I get that DevOps is a culture, but how
do I learn it? And what does it look
like day-to-day? If you've ever Googled
or ChatGPT'd, if that's even a word, the
term DevOps, you've definitely come
across the famous infinity loop diagram.
You know the one where arrows chase each
other endlessly labeled plan, code,
build, test, release, deploy, operate,
and monitor. That loop is not just a
conference graphic even though it looks
that way. It is the process of how an
idea becomes software in users hands and
then gets better with feedback. Dev and
ops are connected the entire time. So,
let me show you what each stage means in
practice. We start with a plan. Imagine
you're building a multi-million dollar
SaaS application. Doesn't hurt to
imagine, right? Before a single line of
code, you decide what you want to build,
when to ship, who owns that, and how you
will measure success. Page speed, user
growth, revenue, and make it traceable,
not sticky notes that'll disappear. Use
tools that the team will actually keep
up to date. Be it Jira, Linear, GitHub
projects, or just Notion, the tool is
less important than the discipline. A
plan without action is just a dream. So
make it explicit and visible. Once we're
done with planning, we move over to
code. Code quality matters as much as
code output. So write clean, modular,
and testable code that others can
extend. Use Git with reviews, branch
rules, or automated checks, ship
readable code, and not hacky solutions
that only you can understand. And now we
enter the build phase. Because when you
finish writing source code, it's still
just raw text files. Those files cannot
always be executed directly in
production. They often need to get
compiled or transpiled. Dependencies
need to get installed. It needs to get
bundled or packaged like in Docker. And
we need to check for linting or security
mistakes. The build step is all about
turning code into something that can
actually run: an artifact, a ready-to-run
package. Think of it like baking dough
into bread. You can't eat raw dough (the
source code), so you bake it (the build)
into something edible: an artifact. In
DevOps, this process is automated for
consistency. You build Docker images,
run containers through Makefiles, and
build them every time you make a commit.
You'll learn all of that in the next
lesson. And then we get to testing. Of
course, you wouldn't want untested code
to reach production. The test stage
exists to catch problems early when
they're still cheap and easy to fix.
Automated tests run as part of the CI
pipeline, covering everything from basic
unit tests to integration tests,
end-to-end workflows, and security
scans.
For instance, let's say you're building
a payments feature. Unit tests verify
the math for totals. Integration tests
ensure the checkout flow works with a
database. And end-to-end tests simulate
a customer actually completing a
purchase.
Security tests scan for vulnerable
dependencies or insecure patterns. And
by the time the code passes all of these
stages, the team can be confident that
it behaves as expected and won't break
the system when deployed. All of this
runs in automation pipelines. So when
all the tests pass, the build is finally
marked as ready for release. This
doesn't mean it's already running in
production. It means that the artifact
has been approved and queued for
deployment. Think of this as moving from
dev to ops. In practice, the release
stage involves versioning, tagging
artifacts, and pushing them into a
release repository like Docker Hub,
Nexus, or an internal store. This
ensures the exact same build that was
tested will be the one deployed later
on. No surprises or mismatches. And then
comes the big moment, deployment. In
traditional setups, someone might have
to log into a server at 2 a.m. to run
some manual scripts. But in DevOps,
deployments are automated and
repeatable. Pipelines handle the
process, and tools like Kubernetes
orchestrate deployments at scale. But
more on that soon. For now, understand
that you have to use these tools to
build pipelines for automated
deployments that scale up and down based
on defined needs. For example, when
deploying a new version of a web app,
Kubernetes might spin up new pods with
the updated code while gradually phasing
out the old ones, a strategy known as
rolling deployment. This ensures
minimal downtime and a smooth user
experience. Some teams even practice
blue green deployments or canary
releases to test new versions with a
small subset of users before rolling out
widely. And once deployed, the
application is live. But the work isn't
over. You have to operate. See, the
operate stage is all about ensuring the
system continues to run reliably under
real world conditions.
This includes monitoring server health,
scaling resources when traffic spikes,
applying security patches, and managing
infrastructure configurations. Think
about an e-commerce platform on a Black
Friday. At 2 p.m. traffic might
skyrocket, requiring the system to scale
horizontally across multiple servers. At
2 a.m., when traffic is low, resources
can be scaled down to save costs. DevOps
teams ensure that the system is
resilient, stable, and performant at all
times. So, finally, we have the
monitoring. Here teams gather data about
the system's performance: uptime, error
rates, and business metrics. Monitoring
tools like Prometheus, Grafana, Datadog,
New Relic, or Sentry act like CCTV
cameras for your app. They let you see
not only technical metrics like CPU
usage, latency or error logs, but also
business outcomes like orders processed,
signups completed, and revenue
generated. For instance, if your new
checkout flow increases cart
abandonment, monitoring will catch it.
That insight then loops back into the
plan stage where the team can adjust
priorities and improve the product and
then the cycle starts again. That's the
beauty of DevOps. It never really stops.
Each stage feeds the next, forming a
continuous loop of building, testing,
delivering, operating, and improving.
And it's not just a set of tools or a
job title. It's a culture of constant
learning and iteration. But wait, does
that mean that as a DevOps engineer,
you're expected to handle everything in
this cycle? Are you supposed to code
like a developer, test like a QA, deploy
like ops, and monitor like SREs? That's
a great question. No, you're not
supposed to do everything. This is one
of the biggest misconceptions about
DevOps. If DevOps is a culture, a way of
working where dev, ops, QA, and security
work closely together, then a DevOps
engineer is not a superhuman who replaces
all those roles. Instead, your role is
all about bridging the gap between
teams, automating the boring or manual
work so teams can move faster, and
setting up tools and practices that make
collaboration smoother. See, developers
still write features. QA still ensures
quality. Ops still keeps servers running,
but as a DevOps engineer, you make sure
that all these moving parts are
connected and running without chaos. So,
what does a DevOps engineer actually do?
You're not here to replace developers,
QA, or ops. You're here to connect all
the dots and keep things running
smoothly. So let's run through the
entire life cycle once again, but from
your point of view. When it comes to the
planning phase, developers and product
managers decide what to build. You make
sure that planning tools like Jira,
Confluence, or Notion are wired into
your pipeline. For example, when someone
closes a Jira ticket, it should link
straight to a commit or a deployment
log. In the coding phase, you're not
writing every feature, but you set the
rules of the road. Branching strategies,
code review requirements, auto-linting,
and security scans all run
automatically. So code quality stays
high. Then the build phase is where
raw code becomes something that can
actually run anywhere like a Docker
image. Your job is to set up pipelines.
So this happens on every commit
consistently with zero manual steps.
When it comes to testing, QA owns it,
but you make it effortless. You
integrate unit tests, integration tests,
and security scans into the pipeline so
bugs get caught early before they even
reach production. And then for the
release and deploy phases, this is where
you shine. Instead of someone SSHing
into a server at midnight, you automate
deployments with Kubernetes, Terraform,
or Helm. In the operation phase, the ops
teams keep everything alive, but you
help them codify the infrastructure with
Terraform or CloudFormation, set scaling
rules, and make systems highly available. An
example of this would be spinning up a
new AWS cluster with a single config
file instead of spending hours in the
console. And finally, in the monitor and
feedback phase, once the code is live,
you keep watch. Metrics, logs,
dashboards, and alerts feed straight to
Slack or Teams. If something breaks or
slows down, you know it before customers
do. So, no, you're not coding the app
and writing tests and running servers
all by yourself. You're the air traffic
controller. Developers fly the planes or
build the features. Ops is the ground
crew. QA is the safety check. And you're
in the tower keeping everything smooth
and safe.
Let's dive into the heartbeat of DevOps.
CI/CD. But what does that actually
mean? Well, picture a kitchen in a
restaurant. The chef chops the
vegetables. The assistant cooks them.
The manager inspects the dish. And the
waiter finally serves it to the table.
If every step was manual, the service
would be slow. In the same way, we
manually go through different stages
such as coding, building, testing, and
deployment. Pretty manual, right? Now,
imagine a conveyor belt moving the dish
automatically from one step to the
other. That's a pipeline. It's an
automated conveyor belt for software.
So, let's break it down. CI stands for
continuous integration. Every time a
developer pushes code, tests run
automatically. And if something breaks,
the pipeline stops and you fix it before
moving forward. CD stands for continuous
deployment where once tests pass, the
app is automatically deployed to staging
or production. No late night manual
deployments. Your job is to write these
pipelines using tools like GitHub
Actions, GitLab CI, Jenkins, or
CircleCI. You'll define each step from
build, test, release to deploy. Back in
the day, running apps was messy. You'd
install them directly on servers, and if
one needed version 18 of Node.js and
another one needed version 20, you'd
have conflicts everywhere. Containers
solve this by packaging apps with
everything they need. The most popular
tool here is Docker. Now imagine not
just one container, but hundreds of them
for microservices, background jobs, and
databases. Someone needs to decide where
do these containers run? How many copies
do we need right now? What happens if
one crashes? And how do they securely
talk to each other? That's called
orchestration. And the go-to tool here
is Kubernetes, which acts like the
conductor of an orchestra, coordinating
containers, so everything runs smoothly
and scales automatically. Traditionally,
people clicked around in a cloud
dashboard to create servers and
networks. That's like building IKEA
furniture without instructions. Slow,
error-prone, and almost impossible to
recreate. With infrastructure as code, or
IaC for short, you describe your entire
setup. Servers, networks, databases, all
in code. You store it in Git, review
changes, and recreate environments any
time. Tools like Terraform or AWS
CloudFormation make your infrastructure
predictable and repeatable. So if
something goes down, you just rerun your
code to rebuild it from scratch. And
almost no company runs their own
physical servers anymore. Instead, they
rent them from cloud providers like AWS,
which is Amazon Web Services, Azure
from Microsoft, or GCP, which is Google
Cloud Platform. You don't need to master
all three. Focus on one primary
provider. AWS is the most common. And
get comfortable deploying apps, setting
up networks, and managing storage. Once
you know one well, switching to another
is easier. Like learning to drive one
car and then trying out a different
brand. And once your app is live, you
need visibility. Monitoring and logging
tools show what's happening in real
time. Not just server performance, but
also user behavior and errors. You've
seen me use Sentry in some of my
projects to track errors and
performance. That's a great starting
point. Some DevOps teams use different
tools like Prometheus and Grafana for
custom metrics and dashboards, the ELK Stack
or Datadog for centralized logs, or
PostHog for product analytics and
funnels. With the right setup, you know
about problems before your users do. And
finally, scripting ties everything
together. A DevOps engineer should be
comfortable with at least Bash or Shell
scripting and one programming language
like Python or JavaScript. Why? Because
automation is the heart of DevOps. And
if you're onboarding 10 new developers
and need to set up their accounts and
permissions, don't just click through
dashboards 50 times. Write a script once
and automate the whole process.
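As a sketch of that idea, here's a hypothetical onboarding script. Everything in it is a placeholder: the usernames are made up, and the `echo` stands in for whatever real CLI your cloud or identity provider offers (for example, an IAM user-creation command).

```shell
#!/usr/bin/env bash
# Hypothetical onboarding sketch -- usernames and the provisioning step
# are placeholders; swap the echo for your provider's real CLI call.
set -euo pipefail

developers="alice bob carol"

for dev in $developers; do
  # A real script would call something like your IAM CLI here.
  echo "provisioning account and permissions for: $dev"
done
```

Run it once instead of clicking through a dashboard fifty times; onboarding the eleventh developer becomes a one-word change to the list.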
I know it's a lot, but if you take it
step by step, from Git, pipelines,
containers, IaC, cloud, monitoring, and
scripting, you'll go from confused to
confident pretty fast. And in this
video, we'll skim the surface of each
one of these topics in a
beginner-friendly way. And if you'd like
some deep dives on advanced DevOps
concepts, drop a comment down below and
I'll make it happen. Imagine you're
working on a coding project and you make
a mistake that breaks everything. Your
boss would most likely fire you if you
were even able to get a job in the first
place. Without Git, you'd have no easy
way to go back and undo the changes.
You're toast. Git is the industry
standard. Most companies, teams, and
open-source projects use Git. So,
naturally, every job description
mentions it. Learning Git isn't just a
nice to have. It's your get good or get
out moment. It's a must for any serious
developer wanting to land a job. So,
what is Git and why is it so popular?
Git is a distributed version control
system. Sounds fancy, right? Well, let's
break it down. The version control part
helps you track and manage code changes
over time. While distributed means that
every developer's computer has a
complete copy of the codebase, including
its entire history of changes and
information about who changed what and
when, allowing you to git blame someone.
Hopefully, people won't blame you. But
do you really need it? Can you code
without using it? Well, of course you
can, but then your workflow would look
something like this. You start coding
your project in a folder named my
project. And as you make progress, you
worry about losing your work. So you
create copies, my project v1, v2, v3,
and so on. Then your colleague asks you
for the latest version. You zip up my
project v3 and email it over. They made
some changes and sent it back as my
project v3 john's changes.zip.
Meanwhile, you've continued to work. So
now you have my project V4. You then
need to manually compare John's changes
with your V4 and create a V5
incorporating everyone's work. And then
a week later, you realize you
accidentally removed a crucial feature
in V2. You dig through your old folders
trying to figure out what changed
between versions. Now imagine doing this
with 10 developers, each working on
different features. It's a recipe for
chaos, lost work, and countless hours
wasted on a version management system
instead of actual coding. Git solves all
of these problems and more. It tracks
every change automatically, allows
multiple people to work on the same
project seamlessly, and lets you easily
navigate through your project's history.
No more "final version v2 final, really
final" zip files. Git does all of this
for you, but in a much more powerful and
organized way. To get started, you need
Git installed. Whether on Windows, Mac,
or Linux, it's just two clicks away.
Google download Git and get it for your
operating system. Once Git is installed,
open up your terminal. Nowadays, I
prefer using a terminal built into my
IDE. First things first, let's check
whether you've installed Git properly.
Run git --version and you'll get back
the version that is installed on your
device. Next, you need to configure git
to work with your name and email. This
is just to track who made the changes in
the project so your colleagues know who
to blame. Here's the command: git
config --global user.name and then in
single quotes put in your name. Once you
do that, you can repeat the same
command, but this time instead of
user.name, we'll use user.email.
And here you can enter your email. Press
enter. And that's it. You're all set up.
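Here are those setup commands in one place. The name and email are placeholders, so swap in your own:

```shell
# One-time setup: check the install, then tell Git who you are.
# 'Your Name' and the email are placeholders -- use your own.
git --version
git config --global user.name 'Your Name'
git config --global user.email 'you@example.com'

# Read the values back to confirm they were saved:
git config --global user.name
git config --global user.email
```

If a value ever looks wrong, rerunning the same command simply overwrites it.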
Now, let's talk about repositories. A
repo or a repository is where Git tracks
everything in your project. Think of it
like a folder that stores all the
versions of your code. Simply put, if a
folder is being tracked by Git, we call
it a repo. Now, let's create a new
repository. In your terminal, type git
init and press enter. As you can see,
git has just initialized a new
repository. On top of the success
message, we can also see a warning. In
previous times, the default name of a
branch has been master. But nowadays,
you'll see main used much more
frequently as the name for the primary
branch. So, let's immediately fix it by
configuring the initial branch name. You
can copy this command right here. And at
the end, you can just say main.
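The command shown on screen is Git's standard `init.defaultBranch` setting (available in Git 2.28 and newer); assuming that's the one being used, it looks like this:

```shell
# Make every future `git init` start on a branch named main (Git 2.28+).
git config --global init.defaultBranch main

# Confirm the setting:
git config --global init.defaultBranch
```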
Now, considering that we have just
changed the initial configuration
settings, we have to create a new
folder. Create a new one called
something like mastering-git. Open it
within your editor and then rerun git
init. As you can see here and here, now
we're in the main branch. That means
that git has initialized an empty
repository. You won't see any changes
yet in your folder, but a hidden .git
folder has been created inside your
directory. You don't need to touch this
folder. Git handles everything inside
from commit history, branches you'll
make, remote repos, and more. Most of
the time, Git will already come
pre-initialized by the framework or
library that you use to set up your
project with. That's how integrated Git
is into every developer's life. So now
that we have this main right here, what
does that exactly mean? Well, main is
the default branch name of your repo
created by Git. Every time you
initialize git, this branch will be
automatically created for you. I'll
teach you more about Git branches soon,
but for now, know that a branch is
nothing but a parallel version of your
project. All right, let's add some files
and track changes. I'll create a new
file called hello.js.
And you can see how smart WebStorm is.
It automatically asks me whether I want
to add it to Git. But for now, I'll
cancel that because I want to explain
everything manually. Let's make it
simply run a console.log that prints
hello git. Alongside this file, let's
create another new file and I'll call it
readme.md.
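If you'd rather do this from the terminal than the editor, the same two files can be created like this. The snippet starts from a fresh folder so it runs on its own; inside an existing repo you'd only need the two `echo` lines.

```shell
# Fresh scratch repo so the snippet is self-contained:
mkdir -p mastering-git && cd mastering-git
git init

# The same two files from the editor, written from the terminal:
echo "console.log('hello git');" > hello.js
echo "hello git" > readme.md

git status   # both files appear as untracked
```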
In here, we can do something similar and
say hello git. And now run git status.
Git will tell you that you're currently
on the main branch, that there are no
commits yet, and that there are two
untracked files, one of which is a
markdown document. So to track it, use
git add readme.md.
After adding a file, we need to commit
it. Committing in git is like taking a
snapshot of your project at a certain
point. Think of it as creating a whole
new copy of your folder and telling git
to remember exactly when you did it.
So in the future, if anything
happens, you'll time travel to this
folder with the commit name you specify
to git and see what you had in there.
It's essential to commit your changes
regularly. Regular commits help you keep
track of your progress and make it
easier to revert to previous versions if
you break something. You can commit by
running git commit -m,
which stands for message, and then in
single quotes you can add that
message. For example, add readme.md
file. There we go. Congrats. You just
created a checkpoint in your project's
history. Now let's try running git
status again to see what it shows. As
you can see, the other file hello.js
is still there. It's not tracked. We
asked git to track only the readme file.
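Here's everything so far as one runnable sequence, starting from an empty scratch folder. The two local `git config` lines are only there so the commit works in a clean environment; you already set yours globally.

```shell
# Replay of the lesson so far, from empty folder to first commit.
mkdir -p git-demo && cd git-demo
git init
git config user.name 'Demo User'         # local config so the commit
git config user.email 'demo@example.com' # works in a clean environment

echo "hello git" > readme.md
echo "console.log('hello git');" > hello.js

git status                      # two untracked files, no commits yet
git add readme.md               # stage only the readme
git commit -m 'add readme.md file'
git status                      # hello.js is still untracked
```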
To track this file or other files that
you may create, we'll have to run a
similar command. It'd be too much work
to commit each file individually.
Thankfully, we have a command that
commits all the files we've created or
modified that Git is not tracking yet.
To see this in action, let's create
another file test.js
and let's add a simple console log that
simply console logs a string of test.
Now to track both files and commit them
in a single commit action, we can do
that by running git add dot. The dot
after git add tells git to add all files
created, modified, or deleted to Git's
tracking. Next, as usual, we can specify
the commit name for this tracked version
by using git commit -m
'add hello and test files'.
There we go. So now you can see that all
of these files are tracked. And since
I'm using WebStorm, it also has a hidden
.idea folder. So it added it to tracking
as well, which I'm okay with. Well done.
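And the batch version as a standalone snippet, again in a scratch folder with throwaway local config:

```shell
# Stage and commit everything new or modified in one go.
mkdir -p batch-demo && cd batch-demo
git init
git config user.name 'Demo User'
git config user.email 'demo@example.com'

echo "console.log('hello git');" > hello.js
echo "console.log('test');" > test.js

git add .                                # the dot means: everything
git commit -m 'add hello and test files'
git status                               # nothing left to commit
```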
Now to see the history of all commits
we've created, we can use a new command
git log. And there we have it, our git
history. It contains a commit ID or a
hash automatically created by git, the
author we specified when using git
config, a timestamp, and the commit
message we provided. Great. But how do
we switch to an older commit and restore
it? Let's say the commit add hello and
test files introduces some buggy code
and we want to restore our project to a
previous version without these files.
Our brain would immediately suggest
deleting those files entirely or
clearing up their code. And if you do
that, you'll most likely break your
production because other files depend on
those files. So instead of deleting them
manually to restore to the first version
where we had only committed the readme
file, we can use a new command. First,
you have to copy the commit hash. Yours
is going to be different from mine. So
make sure to copy yours. I'll get this
one first that says add readme.md file, and
I'll press copy. Then you have to exit
this git log by pressing the Q letter on
the keyboard. And then you can use a
command get checkout and then you can
provide a hash of a specific commit or a
branch you want to check out to. Now
press enter. Okay, something happened.
First of all, our two files are gone.
Detached HEAD? Experimental changes?
What's happening? Well, in git there is
a concept of a head which refers to the
pointer pointing to the latest commit
you've created. When we created our
second commit, our head shifted from
readme commit to the latest add hello
and test files commit. But when we ran
the git checkout command, we moved the head
to the previous older commit. That's why
we got this detached head warning. It's
a state where the head pointer no longer
points to the latest branch commit. And
the rest of this message tells you that
you can create a new branch off of this
commit. But don't worry, your files are
still somewhere. When you use a git
checkout command, you're simply viewing
the repository state as it was at the
time of a specific commit. Like right
now, we're viewing a snapshot of your
codebase at a previous moment in time
when we only had a readme.md file. The
beauty of this is that all the logs and
files, whether created or modified,
remain untouched. The git checkout
command won't delete any logs or
history, so you can safely explore past
states without worry. But what if you
actually want to discard changes made
after that commit? Maybe you want to
quickly roll back to a stable state
after an issue hits production, tidy up
messy commits to look more professional
or undo a bad push you regret making.
Perhaps you've been experimenting with a
refactor that didn't pan out, or you
need to recover from a messy merge
conflict. Thankfully, Git provides a few
commands that'll help you in these
scenarios and I'll teach you how all of
that works very soon. So, just keep
watching and we'll dive into these more
advanced commands that are really going
to help you, well, fix a broken
production. Now, to go back to the current
state, which is often called the head
state, you simply have to run git
checkout main. And there we go. Previous
head position was at the hash we just
checked out. And now we've switched the branch
to main. You can see the same thing
happen right here on the bottom right or
the top left depending where your
branch indicator is. And if you made any
changes while in the detached head state
and you want to discard them, you can do
the same thing with git checkout -f,
where -f means force, and then get back to
main. In this case, we're good. We're
already on main. And that's it. You
already know more about git than most
developers do. Of course, we'll dive
deeper into advanced use cases and tips
and tricks soon, but now let's talk
about GitHub and how it differs from
Git. Git is a tool you use to track
changes. Whereas GitHub is a cloud
platform that allows you to store your
Git repositories online and collaborate
with others. To push your local project
to GitHub, you'll need to link your
repository to a remote. But what's a
remote? Well, there are two types of
repositories. A local repository is a
version of a project that exists on your
own machine, the laptop or whatever else
you use to do your development
work. When you initialize a repo using
git init, you create a local repo in
your folder. Changes you make there are
private until you push them to a remote
repository.
So a remote repo is a version of a
project stored on a server like GitHub,
GitLab or Bitbucket. It's used to share
code between collaborators and keep
project versions in sync across
different users' computers. When
collaborating with a team, you'll have
two kinds of repos. Everyone in the team
will have a local repository on their
machine and there will also be this one
common remote repo from which everyone
will sync their local repository
versions. Now head over to github.com
and create an account if you don't
already have one. Once you're in, press
the little plus icon on the top right
and select new repository. Enter a
repository name such as mastering git.
Choose whether you want to make it
public or private. Leave the add readme
file checkbox unticked and click create
repository.
This is a remote repository. Here you
can see your repository's origin. Copy
it. When you clone a repository from
GitHub, Git automatically names the
remote repository as origin by default.
It's basically an alias for the remote
repository's URL. Now, our goal is to
link our local repository to the remote
origin. If you haven't yet switched the
default master branch name to main, you
can do that by running git branch -M
main, and this will change the branch
name to main which is a standard
practice nowadays. And now we are ready
to link our local repo to a remote
origin. You have to run the command git
remote add origin, and then you have to
paste the link to the origin that you
just copied and press enter. And a good
thing to know is that you can have
multiple remote repositories. You just
have to rerun the command and change the
origin name to something else. Of
course, that's the name of your choice.
And then you can also update the new
URL. But in most cases, you'll be fine
with just one remote repo. Finally, to
push your local commits to GitHub, use
git push
origin main. And remember, we used
origin here to refer to the remote
repository instead of typing the full
URL.
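Here's the whole linking-and-pushing sequence as a runnable sketch. Since a script can't reach github.com, a local bare repository stands in for the remote here; with a real GitHub repo you'd paste its URL after git remote add origin instead:

```shell
# A local bare repo stands in for GitHub, so this sketch runs anywhere
remote=$(mktemp -d) && git init -q --bare "$remote"

work=$(mktemp -d) && cd "$work" && git init -q
git config user.email you@example.com && git config user.name you
echo "# Hello Git" > readme.md && git add . && git commit -qm "add readme file"

git branch -M main                 # rename the default branch to main
git remote add origin "$remote"    # link the local repo to the remote, aliased as origin
git push -q origin main            # push local commits to the remote
git remote -v                      # verify: origin listed for fetch and push
```

The origin alias is just a shortcut, so every later push or pull can name the remote without retyping the full URL.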
So, press enter. And there we go. This
worked. If anything with git goes wrong,
typically it goes wrong at this point
when you're trying to push to a remote
repo. So, if you don't see what I'm
seeing right here, and instead you got
some error, typically all of these
errors are very easily resolvable. I
would just recommend copying the error
message, pasting it in Google, and then
fixing it right there and then. But in
this case, we're good. And now, if you
go back to your GitHub repository and
reload, boom, your code is now online
for the world or your team to see. And
okay, okay, you might have already known
this. For some of you, that's about as
far as you've gone with Git: create a
repo, push your changes, and call it a
day. But Git has so much more to offer,
especially when you're working within a
team. So now, let's take things up a
notch and dive into branching and
merging. This is where Git truly shines.
Branches in Git allow you to create
different versions of your project, like
making a copy of a project at a specific
moment in time.
Whatever changes you make in this copied
version won't affect the original. The
main project or branch stays untouched
while you experiment, modify, or add new
features in the copied branch. If
everything works out, you can later
merge your changes back into the
original project. If not, no worries.
The original remains safe and unchanged.
When working in a team, using separate
branches for different features or bug
fixes is essential. It allows you and
your team to work independently on
different parts of the code without
causing conflicts or errors, ensuring
everyone can focus on their own tasks.
At the start, you'll have one default
branch called main. To create a new
branch, run git branch and then type a
branch name.
This will create a new branch. And if
you want to switch to this newly created
branch, then run git checkout and then
enter the branch name you want to check
out to. And there we go. Switched to
branch branch name. Now, if you want to
go back to main, just run git checkout
main. There we go. And here's a little
pro tip. There is a shortcut to create a
new branch and immediately move to it.
To do that, run git checkout with a
-b flag and then enter a branch name
such as feature branch. Of course, this
branch name and feature branch are just
dummy names. Make sure your branch name
is short and explains which changes
you'll be making on that branch. For more
tips on how to properly name your
branches, you can download the git
reference guide. So, let's create and
move to this feature branch in one
command. There we go. And what I'm about
to say next is very important. So keep
it in mind. When you create a new
branch, it'll be based on the branch
you're currently on. So if you're on the
main branch and run the command, the new
branch will contain the code from the
main branch at that point in time.
However, if you're on a different branch
with different code, the new branch will
inherit that code instead. So to ensure
you're creating the new branch from the
correct starting point, you should
either first switch to the branch you
want to base the new one on or run this
command: git branch. Then you can enter a
new branch name
and then the next thing can be the
source branch. So if you do it this way
and replace the new branch name and the
source branch with the names of actual
branches, then it'll create a new branch
from another specific branch. So if you
run this command, you can directly
create and switch to a branch based on
any other branch without needing to
check out to it first. For now, I'll
remove that. And let's say that we want
to go into our code and implement this
feature we're working on. Let's say that
in our case the only feature we want to
do is to modify the readme. So below
hello git I'll say I'm adding this from
feature-branch. There we go.
Feature implemented. If only it was this
easy. And you can see that our IDE
immediately highlighted this readme file
in blue indicating that it has some
changes. Now we need to add it, commit it,
and push it. This time, instead of saying
git add readme.md, let's just use git
add . which is a command that you'll
use much more often. Next, we need to
commit the changes with git commit -m
and then we have to add a commit
message. So this is the perfect time to
learn how to write a proper commit
message. A quality commit message is
written in the imperative mood, a
grammatical mood that sounds like you're
giving a command
like improve mobile responsiveness or
add A/B testing. When writing your commit
message, make it answer this question.
If applied to the codebase, this commit
will and then fill in the blank like
this commit will upgrade the packages or
this commit will fix thread allocation.
And why do we do this? Well, because it
answers the question, what will happen
when I merge the branch containing this
commit? It will add A/B testing, for
example. Be direct and eliminate filler
words. For this commit, let's use modify
readme. In this case, it's short, sweet,
and in an imperative mood. And press
enter. There we go. We've just made git
aware of our commit. Now that you know
how to write better commits, let's take
a moment and check out our remote
repository. What do you think? Will it
have the latest commit we made? Let's
reload it. And it's the same. It doesn't
contain our newly created feature
branch. Do you know why? It's because
the changes we made are in the local
repository, which has not yet been
synced with the remote repo. To see
those changes, first you'll have to
publish your local branch. And you can
do that using git push --set-upstream
origin and then the name of the
branch. In this case, feature-branch,
and press enter.
There we go. An upstream branch is a
remote branch that your local branch
tracks. When you set an upstream branch
using --set-upstream, you're essentially
linking your local branch to a branch on
a remote repo. Through this command, you
push a local feature branch to the
origin remote repository and then you
set the upstream branch for your local
feature branch to track origin/feature-branch.
Alternatively, you can also use
git push -u origin feature-branch or
the name of your branch. Of course, both
--set-upstream and -u establish a
tracking relationship between your local
branch and the remote branch. This way,
in the future, if you want to push
something from your local branch to your
remote branch, you simply have to run
git push. That's it. At this moment for
us, everything is up to date. But as you
make future changes, you don't have to
rerun --set-upstream or -u. You only
have to add it, commit it, and push it.
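The entire feature-branch cycle above condenses into a handful of commands. Here's a runnable sketch; as before, a local bare repository stands in for the GitHub remote, and the branch name and messages are illustrative:

```shell
# Stand-in remote plus a fresh working repo
remote=$(mktemp -d) && git init -q --bare "$remote"
work=$(mktemp -d) && cd "$work" && git init -q
git config user.email you@example.com && git config user.name you
echo "# Hello Git" > readme.md && git add . && git commit -qm "add readme file"
git branch -M main && git remote add origin "$remote" && git push -q -u origin main

git checkout -q -b feature-branch          # create the branch and switch to it
echo "I'm adding this from feature-branch" >> readme.md
git add .                                   # stage everything that changed
git commit -qm "modify readme"              # imperative-mood commit message
git push -q -u origin feature-branch        # publish the branch and set its upstream

# From now on, plain push and pull are enough on this branch
git push -q
git pull -q
```

Once the upstream is set, the day-to-day loop really is just add, commit, push.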
That's it. And if somebody else made
changes to your remote branch, either
directly or by merging some other
changes into it, you have to make your
local branch up to date with the remote
branch. And you do that by using the git
pull command. There we go. It's already
up to date in this case. This command
fetches changes from the remote repo and
merges them into your local repo for
that branch. And the story
doesn't stop there. Git has plenty of
advanced features like merge conflicts,
reset, revert, stash, cherrypick, and
more. And if you want to truly master
Git, I've got a complete crash course
waiting for you on YouTube, totally
free. You'll find the link down in the
description. But for now, what you've
learned so far is enough to kick off
your DevOps journey. Though, it's
definitely just the beginning. Make sure
to keep digging deeper into Git as you
grow. You'll soon use this knowledge to
build pipelines that automatically
build, test, and deploy code to staging
or production servers. And guess what?
That entire process begins with a simple
Git push. Git also plays a huge role in
managing infrastructure with tools like
Terraform or Pulumi. Your cloud setup
whether it's virtual machines, databases
or networks, lives inside .tf or .yaml
files in a Git repository. Change a line,
commit and push, and your entire
infrastructure updates automatically.
All in all, in DevOps, you might not
always write the application logic
yourself, but you'll constantly review
PRs, check configurations, and maintain
secure workflows. GitHub or GitLab pull
requests will become your daily
workbench. And now you know exactly how
they work. And once you've mastered Git,
you're ready for the next big leap of
building pipelines that take your
commits and turn them into live runnable
applications. So, let's dive into that
next.
CI/CD pipelines. What are those? Well, a
pipeline is just a set of automated
steps that takes your code from the
moment you push it all the way to
production. Instead of manually running
npm install, npm test, docker build, and
kubectl apply every single time, your
pipeline does it for you. Over the
years, the industry has developed a
bunch of tools for managing pipelines.
Some of the most popular ones are
Jenkins, which is the OG, highly
customizable but self-hosted and heavy.
GitLab CI/CD, which is built right into
GitLab, great if you have your repos
there. CircleCI, TravisCCI, and Azure
DevOps if you're on the Microsoft
ecosystem. They all automate, build,
test, and deploy. But most devs today
prefer GitHub actions because it's
deeply integrated with where your code
already lives. I prefer it because it's
built right into GitHub which means no
extra setup or third-party logins. It's
event-driven, which means that it
triggers on pushes, PRs, or cron jobs. So in
simple words, when you push, your pipeline
runs. It has a massive library of
pre-built actions which means that you
can just grab community workflows for
testing, Docker deployments, and more. And
it has version-controlled YAML configs.
This is where the automation lives right
within your repo. So, it's easy to audit
and reuse. In other words, it's super
simple to use and that's why I
personally use it and recommend you do
too. At its core, a GitHub actions
pipeline is called a workflow. Workflows
live in your repo inside the
.github/workflows folder.
And each workflow is just a YAML file
defining triggers, what events start the
workflow like pushing code or opening a
PR, jobs which are a set of tasks each
running on its own virtual machine, and
the steps which are actual commands or
actions executed in a job. So what is
YAML? Well, YAML stands for YAML Ain't
Markup Language. Funny name, right? It
basically means that YAML is not about
complicated tags like HTML or XML.
Instead, YAML is designed to be human
friendly and machine readable. Compare
that to JSON or XML. YAML is cleaner,
which is why Kubernetes, Docker Compose,
and GitHub actions use it for configs.
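To see why it's considered cleaner, here's a small illustrative snippet (all keys here are made up for demonstration) with key-value pairs, two-space nesting, and a list:

```yaml
# Key-value pairs (scalars: strings, numbers, booleans)
language: python
version: 3.12
production: false

# Nesting is expressed purely through two-space indentation
server:
  host: example.com
  ports:        # a list, one dash per item
    - 80
    - 443
```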
So, to truly master creating pipelines
with GitHub actions, you need to
understand YAML syntax rules. Unlike
other file formats, YAML is whitespace
sensitive, which means that indentation
is everything. You have things like key
value pairs, like language: python. For
indentation, there's no debate here. It
only accepts spaces, no tabs. You can
also have lists, nested structures where
you can combine mappings and lists,
scalars for strings, numbers, and
booleans, and even things like
multi-line blocks, and anchors and
aliases where you can reuse different
configs. You don't need every detail
right now. Just focus on writing clean
YAML. And similar to any other language,
YAML also has specific keywords and
syntax. So, let me walk you through them
real quick. Some of the core keywords
include name, which is used to define
the workflow or job title; on, which
defines triggers such as push, PR, or
schedule; jobs, to define jobs, their OS,
and steps; steps, which are sequential
commands or actions; run, which includes
shell commands to execute; uses, if you
want to use pre-built actions; with,
where you can pass params to actions;
env, to set environment variables; and
needs, to make one job depend on
another. These are the special keywords
that matter most in GitHub actions YAML
files, but the list could go on and on, and that's
why I prepared a complete YAML and
GitHub actions CI/CD cheat sheet. It's
packed with so many useful commands. And
hey, if you'd like to buy me a coffee
and support me in making more
highquality YouTube videos like this
one, plus get a detailed cheat sheet on
YAML and CI/CD pipelines, you can join
our new channel membership. No pressure
at all. The link should be just
somewhere below this video. All right,
let's keep going.
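Before we build a real pipeline, here's how several of those keywords fit together in one minimal illustrative workflow (the job and step names are invented for demonstration):

```yaml
name: Example workflow            # human-readable workflow title
on: push                          # trigger: run on every push

jobs:
  build:
    runs-on: ubuntu-latest
    env:
      NODE_ENV: test              # environment variable for all steps in this job
    steps:
      - uses: actions/checkout@v5 # a pre-built action from the marketplace
      - name: Say hello
        run: echo "Hello from CI" # a plain shell command

  deploy:
    needs: build                  # waits for the build job to succeed first
    runs-on: ubuntu-latest
    steps:
      - run: echo "Deploying..."
```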
For now, let's move forward and put what
we've learned into action by creating
our first GitHub actions pipeline using
YAML. I'll start with creating a new
GitHub repo. Let's call it something
like DevOps pipelines. Create a new repo
and then go ahead and clone it locally
by copying this URL and then just clone
it. In this case, I'll be using WebStorm
to clone it automatically. Once you're
in, you'll have an empty repo. So, what
you can do is initialize a project by
running npm init -y.
And this will give you a new
package.json with a new Node project. Alongside
it, go ahead and create a new file
called index.js
which will act as the starting point of
our application.
And in there, add a new console log
saying hello DevOps. We can also add
maybe an additional console log saying
something along the lines of I'm
learning CI/CD using GitHub actions.
Perfect.
And what we can do is also create
another file called test.js.
Within it, I'll add a console log when
we start the tests. So I'll say starting
tests. And I'll also add a timeout right
here that'll wait for 3 seconds. So say
console.log
waiting 3 seconds. And of course it'll
take 3,000 milliseconds which is 3
seconds. And then I'll add another
console log saying something like tests
complete. Now we can add both of these
files as scripts within package.json. So
right here instead of test I'll simply
add a run script which will run node
index.js and I'll also add a test script
which will also run the test.js
file. Now we want to create a special
folder right here in the root of our
application. Create a new folder that's
called .github.
And within .github, create a new folder
called workflows.
Within workflows, you can see how my IDE
automatically recognizes that I might
want to do a GitHub workflow, which
would create a workflow file, but in
this case, we can create it manually by
creating a new pipeline file.
And we can start creating it. First, it
needs a normal human readable name. So,
I'll say name is CI pipeline, which will
make it easier to identify this pipeline
in GitHub actions tab as you create more
pipelines.
Then it needs an on property which will
define the trigger event. What action in
GitHub should start this pipeline.
In this case, we can say that it'll be a
push action. So whenever somebody pushes
code to the repo and then we only want
to restrict it when changes are pushed
to the main branch. So I'll say branches
is going to be set to an array of main.
So if you push to a feature branch,
nothing happens. Only when you merge
into main, which is our production ready
branch, the pipeline will run. This will
prevent the unnecessary builds for every
branch and ensure that main always stays
tested. After that, we can define the
jobs. A job is a group of steps that run
together. You can think of it like a
container in which multiple actions are
executed in sequence. In this case, we
will name this job build and we have to
define what it will run on. So in this
case, we'll tell it to use runs-on:
ubuntu-latest, a fresh Ubuntu machine. Why
Ubuntu? Because it's fast, lightweight,
and widely supported. Then inside a job,
you have to define different steps.
These are the actual instructions to be
executed one after another.
Each step has a name. In this case,
we'll call it checkout code. And then
you can say what it uses.
So in this case, you can say uses
actions/checkout@v5.
This means that we're using a pre-built
GitHub action such as this one right
here, GitHub actions checkout. And this
is the action we're using that checks
out our repository under the GitHub
workspace so our workflow can access it.
Now we can add some more steps such as
another one with a name of set up
Node.js, which will also use, via uses,
another predefined action:
actions/setup-node@v3,
with, and here you can define the node
version that you want to set it up with
in this case, let's do 16. After that, we
want to install some dependencies, so I will
create another step that'll have a
name of install dependencies, and the
only thing it'll do is run npm
install. And after that, we'll have
another one which will run the tests. So
it'll be a step of name run tests
and it'll run npm test. This is
typically done using Jest or some other
testing framework. But for now we'll
just run our manual test file. Now as
you can see we have some yellow squiggly
lines right here saying that we have
some issues with our setup. And that's
because I used tabs for indentation
instead of using spaces. So after fixing
the indentation, it should look
something like this. You should have two
spaces for each indentation point. We
have jobs, then the build, then we have
runs on, and then steps is directly
below runs on. Two spaces indented after
build. Perfect. Now let's go ahead and
add and commit those changes to GitHub
by running git add ., git commit -m
"feature: add ci pipeline", and then git
push.
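For reference, the finished workflow file we just assembled should look roughly like this, with two-space indentation throughout:

```yaml
name: CI pipeline

on:
  push:
    branches: [main]      # only pushes to main trigger the pipeline

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v5
      - name: Set up Node.js
        uses: actions/setup-node@v3
        with:
          node-version: 16
      - name: Install dependencies
        run: npm install
      - name: Run tests
        run: npm test
```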
Now if you head back over to your GitHub
repo, you'll be able to see your latest
changes. And now if you head over to the
actions tab, you'll be able to see your
newly created pipeline right here.
Expand it and you'll see all the job
details if you enter the build such as
setting up the job, setting up Node.js,
installing dependencies, and then even
running tests. And you can see the
results of those tests right here.
Starting, completed, waiting 3 seconds,
and the job finally is done. Now, this
pipeline will run on every single main
push. I can prove that to you by heading
over to the code and adding a readme
which will automatically push a new
readme.md file to main. So I'll say
testing
the CI pipeline
and commit and push. Since a push has
been made, if you head over to actions,
you'll see that a new action ran
automatically based off of my commit
which pushed over to main. It'll rerun
the tests for the entire application and
make sure that everything is good. From
here on, you can add additional test
coverage checks, deploy to staging or
production, or find actions from the
GitHub marketplace where there are so
many different actions that you can
immediately use within your application.
So, this gives you a taste of CI/CD
pipelines, which enforce code quality,
passing tests, and shipping inside a Docker
container. This was just one simple
example. Based off of this, you can
create more workflows or reuse existing
workflows from the community. Later on
in the build and deploy stage of this
project, you'll actually set up real
CI/CD pipelines for our acquisitions
application, ensuring that your
formatting is consistent, your tests
pass successfully, and everything runs
smoothly within a Docker container.
Speaking of which, let's dive into
Docker next.
It works on my machine. Have you ever
heard or said this? Or it works on
Windows but not macOS. Or have you ever
struggled with juggling different Node.js
versions for different projects? This is
why Docker was created in 2013. And it's
not just a tool to solve compatibility
issues. It's a critical skill required
for the highest paying jobs as the
surveys find Docker to be the most
popular tool used by 57% of professional
developers. If you don't learn it now,
you significantly lower your chances of
landing a job. You can think of Docker
as a lunchbox for our application. In
the lunchbox, we pack not just the main
dish, which is our code, but also all
the specific ingredients or dependencies
it needs to taste just right. Now, this
special lunchbox is also magical. It
doesn't matter where we want to eat, at
our desk, a colleague's desk, or have a
little picnic. No matter the environment
or different computers, wherever we open
the lunchbox, everything is set up just
like it is in our kitchen. It ensures
consistency, portability, and prevents
us from overlooking any key ingredients,
making sure our code runs smoothly in
any environment without surprises.
Technically, that's what Docker is. It's
a platform that enables the development,
packaging, and execution of applications
in a unified environment. By clearly
specifying our application's requirements,
such as Node.js versions and necessary
packages, Docker generates a
self-contained box that includes its own
operating system and all the components
essential for running our application.
This box acts like a separate computer
virtually, providing the operating system,
runtimes, and everything required for our
application to run smoothly. But why
should we bother using Docker at all?
Big shots like eBay, Spotify, Washington
Post, Yelp, and Uber noticed that using
Docker made their apps better and faster
in terms of both development and
deployment. Uber, for example, said in
their study that Docker helped them
onboard new developers in minutes
instead of weeks. So, what are some of
the most common things that Docker helps
with? First of all, consistency across
environments. Docker ensures that our
app runs the same on my computer, your
computer, and your boss's computer. No
more it works on my machine drama. It
also means everyone uses the same
commands to run the app, no matter what
computer they're using. Since
downloading services like Node.js isn't
the same on Linux, Windows, or macOS,
developers usually have to deal with
different operating systems. Docker
takes care of all of that for us. This
keeps everyone on the same page, reduces
confusion, and boosts collaboration,
making our app development and
deployment faster. The second thing is
isolation. Docker maintains a clear
boundary between our app and its
dependencies. So we'll have no more
clashes between applications much like
neatly partitioned lunchbox compartments
for veggies, fruits, and bread. This
improves security, simplifies debugging,
and makes development process smoother.
Next thing is portability. Docker lets
us easily move our applications between
different stages like from development
to testing or testing to production.
It's like packaging your app in a
lunchbox that can be moved around
without any hassle. Docker containers
are also lightweight and share the host
system resources making them more
efficient than any traditional virtual
machines. This efficiency translates to
faster application start times and
reduced resource usage. It also helps
with version control as just like we
track versions of our code using Git,
Docker helps us track versions of our
application. It's like having a rewind
button for our app so we can return to a
previous version if something goes
wrong. Talking about scalability, Docker
makes it easy to handle more users by
creating copies of our application when
needed. It's like having multiple copies
of a restaurant menu. When there are
more customers, each menu serves one
table. And finally, DevOps integration.
Docker bridges the gap between
development and operations, streamlining
the workflow from coding to deployment.
This integration ensures that the
software is developed, tested, and
deployed efficiently with continuous
feedback and collaboration. How does
Docker work? There are two most
important concepts in Docker: images and
containers and the entire workflow
revolves around them. Let's start with
images. A Docker image is a lightweight
standalone executable package that
includes everything needed to run a
piece of software, including the code,
runtimes like Node.js, libraries,
system tools, and even the operating
system. Think of a Docker image as a
recipe for our application. It not only
lists the ingredients being code and
libraries, but also provides the
instructions such as runtime and system
tools to create a specific meal, meaning
to run our application.
And we would want to run this image
somewhere, right? And that's where
containers come in. A Docker container
is a runnable instance of a Docker
image. It represents the execution
environment for a specific application,
including its code, runtime, system
tools, and libraries included in the
Docker image.
A container takes everything specified
in the image, and follows its
instructions by executing necessary
commands, downloading packages, and
setting things up to run our
application. Once again, imagine having
a recipe for a delicious cake. The
recipe being the Docker image. Now when
we actually bake the ingredients, we can
serve it as a cake, right? The baked
cake is like a docker container. It's
the real thing created from the recipe.
Just like we can have multiple servings
of the same meal from a single recipe or
multiple documents created from a single
database schema. We can run multiple
containers from a single image. That's
what makes Docker the best. We create
one image and get as many instances as
we want from it in form of containers.
Now, if you dive deeper into Docker,
you'll also hear people talk about
volumes. A Docker volume is a persistent
data storage mechanism that allows data
to be shared between a Docker container
and the host machine, which is usually a
computer or a server or even among
multiple containers. It ensures data
durability and persistence even if the
container is stopped or removed. Think
of it as a shared folder or a storage
compartment that exists outside the
container. The next concept is Docker
network. It's a communication channel
that enables different Docker containers
to talk to each other or with the
external world. It creates connectivity,
allowing containers to share information
and services while maintaining
isolation. Think of a Docker network as
a big restaurant kitchen. In a large
kitchen being the host, you have
different cooking stations or
containers, each focused on a specific
meal. Meal being our application. Each
cooking station or a container is like a
chef working independently on a meal.
Now imagine a system of order tickets or
a Docker network connecting all of these
cooking stations together. Chefs can
communicate, ask for ingredients or
share recipes seamlessly.
Even though each station or a container
has its own space and focus, the
communication system or the Docker
network enables them to collaborate
efficiently. They share information
without interfering with each other's
cooking process. I hope it makes sense
but don't worry if it doesn't. We'll
explore it together in the demo. So
moving on, the Docker workflow is
distributed into three parts. Docker
client, Docker host aka Docker daemon,
and Docker registry aka Docker Hub. The
Docker client is the user interface for
interacting with Docker. It's the tool
we use to give Docker commands. We issue
commands to the Docker client via the
command line or a graphical user
interface instructing it to build, run,
or manage images or containers. Think of
the Docker client as the chef giving
instructions to the kitchen staff. The
Docker host or Docker daemon is the
background process responsible for
managing containers on the host system.
It listens for Docker client commands,
creates and manages containers, builds
images, and handles other Docker related
tasks. Imagine the Docker host as the
master chef overseeing the kitchen
carrying out instructions given by the
chef or the Docker client. Finally, the
Docker registry aka Docker Hub is a
centralized repository of Docker images.
It hosts both public and private
registries or packages. Docker is to
Docker Hub what git is to GitHub. In a
nutshell, Docker images are stored in
these registries. And when you run a
container, Docker may pull the required
image from the registry if it's
unavailable locally. To return to our
cooking analogy, think of Docker
registry as a cookbook or recipe
library. It's like a popular cookbook
store where you can find and share
different recipes. In this case, Docker
images.
In essence, the Docker client is the
command center where we issue
instructions. The Docker host then
executes these instructions and manages
containers. And the Docker registry
serves as a centralized storage for
sharing and distributing images.
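Put together, the flow from this section looks like this in practice (the image names here are just examples):

```shell
docker pull nginx         # client asks the daemon to fetch an image from the registry
docker run -d nginx       # daemon creates and starts a container from that image
docker push myuser/myapp  # daemon uploads a local image back to the registry
```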
Using Docker is super simple. All you
have to do is click the link in the
description, download Docker Desktop for
your own operating system, and that will
help you containerize your application
in the easiest way possible.
It'll definitely take some time to
download, but once you're there, you can
accept the recommended settings and sign
up. Once you're in, on the left side,
you can see the links to containers,
which display the containers we've made,
images, which shows the images we've
built, and volumes, which shows the
shared volumes we have created for our
containers, and other beta features like builds, dev environments, and Docker Scout. Now, return to the browser and
Google Docker Hub. The first result will
surely be hub.docker.com.
Then open it up. Go to explore and you
can see all of the public images created
so far by developers worldwide. from
official images by verified publishers
to sponsored open-source ones covering
everything from operating-system images like Ubuntu, languages like Python and Golang, databases like Redis, Postgres, MongoDB, and MySQL, and runtimes like Node.js, to even a Hello World Docker image, and also the old peeps like WordPress and PHP. Almost
everything that you need is right here.
But how do we create our own Docker
images? Easy peasy. Creating a Docker image starts from a special file called a Dockerfile. It's a set of instructions telling Docker how to build an image for your application. There are some specific instructions and keywords we use to tell Docker what we want through the Dockerfile. Think of it as Docker syntax, or a language to specify exactly
what we want. Here are some of the commands. FROM specifies the base image to use for the new image. It's like picking a starting kitchen that already has some basic tools and ingredients. WORKDIR sets the working directory for the following instructions. It's like deciding where in the kitchen you want to do all your cooking. COPY copies files or directories from the build context into the image. It's like bringing your recipe, ingredients, and any special tools into your chosen cooking spot. RUN executes commands in the shell during the image build. It's like doing specific steps of your recipe, such as mixing ingredients. EXPOSE informs Docker that the container will listen on specified network ports at runtime. It's like saying, "I'm going to use this specific part of the kitchen to serve the food."
ENV sets environment variables during the build process. You can think of that as setting the kitchen environment, such as deciding whether it's a busy restaurant or a quiet home kitchen. ARG defines build-time variables. It's like having a note that you can change before you start cooking, like deciding if you want to use fresh or frozen ingredients. VOLUME creates a mount point for externally mounted volumes, essentially specifying a location inside your container where you can connect external storage. It's like leaving a designated space in your kitchen for someone to bring in extra supplies if needed. CMD provides the default command to execute when the container starts. It's like specifying what dish you want to make when someone orders from your menu. ENTRYPOINT specifies the default executable to be run when the container starts. It's like having a default dish on your menu that people will get unless they specifically ask for something else. And you might wonder, isn't ENTRYPOINT the same as CMD? Well, not really. In simple terms, both CMD and ENTRYPOINT are instructions in Docker for defining the default command to run when a container starts.
The key difference is that CMD is more flexible and can be overridden when running the container, while ENTRYPOINT defines the main command that cannot be easily overridden. Think of CMD as providing a default which can be changed, and ENTRYPOINT as setting a fixed starting point for your container. If both are used, the CMD arguments will be passed to ENTRYPOINT. And these are the most used keywords when creating a Dockerfile. I have also prepared a list of other options you can use in Dockerfiles. You can think of it as a complete guide and a cheat sheet you can refer to when using Docker. The link is in the description. But now, let's actually use some of these commands in practice.
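To make the CMD vs. ENTRYPOINT distinction concrete, here's a minimal sketch (the image contents are purely illustrative):

```dockerfile
FROM alpine
# ENTRYPOINT is the fixed executable; CMD supplies its default arguments.
ENTRYPOINT ["echo"]
CMD ["hello from the container"]
```

Running the resulting image with no arguments prints "hello from the container"; running it with an argument, like docker run my-image goodbye, keeps the echo entrypoint but overrides CMD, printing "goodbye" instead.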
Let's try to run one of the images
listed in the Docker Hub to see how that
works. Let's choose one of the operating
system images as an example. Let's go
for Ubuntu. On the right side of the
details of the image, you'll see a
command. Copy it and try executing it in
your terminal. But before we paste it, first create a new empty folder on our desktop called docker_course, and then drag and drop it into our empty Visual Studio Code window. Open up your empty terminal and paste the command docker pull ubuntu. It's going to do it using the default tag, latest, and it's going to take some time to pull it. As you can see, it's working. Docker initially checks if there are any images with that name on our machine. If not, it searches Docker Hub, finds the image, and automatically installs it on our machine. Now, if we go back to
Docker Desktop, we'll immediately see an
Ubuntu image right here under images. To
confirm that we actually installed a
whole different operating system, we can
run a command that executes the image. Do you know what that process is called? Creating a container. So let's run docker run -it, with -it for interactive, and then ubuntu, and press enter. After you run this command,
head over to docker desktop and if you
go to containers, you'll see a new
container based off of the Ubuntu image.
Coming back to our terminal, you'll see
something different. If you've ever
tried Ubuntu before, you'll notice that
this terminal looks exactly like the
Ubuntu command line. Let's test out some of the commands: ls for list, cd home to move to our home directory, and mkdir hello, which is going to create a new directory called hello. We can once again ls, then cd hello to navigate into it. We can create a new hello-ubuntu.txt file, and ls to check if it's there.
And it is. We have just used different
Ubuntu commands right here within our
terminal. Amazing, isn't it? We are
running an entirely different operating
system simply by executing a Docker
image within a Docker container. For
now, let's kill this terminal by
pressing this trash icon and navigate
back to our Docker desktop. Now a bigger
question awaits. How do we create our
own Docker images? We can start from a super simple Docker app that says hello world. Let's create a new folder called hello-docker. Within it, we can create a simple hello.js file, and we can type something like console.log("hello docker").
Then comes the interesting part. Next, we'll create a Dockerfile. Yep, it's just Dockerfile, like this: no dots, no extensions. VS Code might prompt you to install a Docker extension, and if it does, just go ahead and install it. Now, let's figure out what goes into the Dockerfile. Do you remember the special Docker syntax we talked about earlier?
Well, let's put it to use. First, we have to select the base image to run the app. We want to run a JavaScript file, so we can use the Node runtime from Docker Hub. We'll use the one with an Alpine version; it's a lightweight version of Linux. So, we can type FROM node:20-alpine. Next, we want to set the working directory to /app. This is the directory where commands will be run, and /app is a standard convention. So we can type WORKDIR and then /app.
Next, we can write COPY . . like this. This will copy everything from our current directory into the Docker image. The first dot is the current directory on our machine, and the second dot is the path to the current directory within the container. Next, we have to specify the command to run the app. In this case, CMD ["node", "hello.js"] will do the trick.
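Assembled, the whole Dockerfile from the steps above looks like this:

```dockerfile
# Base image: Node 20 on lightweight Alpine Linux
FROM node:20-alpine
# All following commands run inside /app
WORKDIR /app
# Copy everything from the build context into the image
COPY . .
# Default command when the container starts
CMD ["node", "hello.js"]
```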
And now that we have created our Dockerfile, let's move into the folder where the Dockerfile is located by opening up the terminal and running cd hello-docker. Inside of here, let's type docker build -t, where -t stands for the tag, which is optional; if no tag is provided, it defaults to the latest tag. Then comes the image name, hello-docker, and finally the path to the Dockerfile. In this case, that's just a dot, because we're right there. And press enter.
It's building it. And I think it
succeeded. Great. To verify that the
image has been created or not, we can
run a command docker images. And you can
see that we have two images, Ubuntu as
well as hello docker created 16 seconds
ago. Now, if you're a more visual
person, you can also visit Docker
desktop. Here, if you head to images,
you can see all of the images we have
created so far.
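The build-and-verify commands from this step, in one place:

```shell
cd hello-docker
docker build -t hello-docker .   # build the image, tagged hello-docker:latest
docker images                    # list local images to verify it exists
```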
Now that we have our image, let's run, or containerize, it to see what happens. So, if we go back, we can run docker run hello-docker.
There we have it, an excellent console
log.
If we go back to Docker desktop and then
open up that container
and navigate inside of the files,
you'll see a lot of different files and
folders, but there is one special file
here. Want to make a guess? Yes, it's app, which we created in the Dockerfile. Moving inside it, we can see that it contains the same two files we have in our application, Dockerfile and hello.js: an exact replica. Also, if we want to open up our application in shell mode, similar to what we did with Ubuntu, we have to run docker run -it hello-docker. This puts us directly within the operating system, and then you can simply run node hello.js to see the same output.
We can also publish the images we have created to Docker Hub. But before that, let's build something a bit more complex than the simple hello world, and then let's publish it to Docker Hub.
Which means that now we're diving into the real deal: dockerizing React.js applications. Let's dockerize our first React application. I'm going to do that by quickly spinning up a simple React project by running the command npm create vite@latest, with react-docker as the folder name. If you press enter, it's going to ask you which flavor of JavaScript you want. In this case, let's go with React. Sure, we can use TypeScript. And we can now cd into react-docker. And we won't run any npm install or npm run dev, because the dependencies will be installed within our dockerized container. So with that said, now if we
clear it we are within react docker and
you can see our new react application
right here. So, as last time, you already know the drill: we need to create a new file called Dockerfile. As you
can see, it automatically gets this
icon. And it's going to be quite similar
to the original Docker file that we had,
but this time I want to go into more
depth about each of these commands so
you know exactly what they do. And
because of that, below this course, you
can find a complete Docker file for our
React Docker application. Copy it and
paste it here. Once you do that, you
should be able to see something that
looks like this. It seems like there's a
lot of stuff, but there really isn't.
It's just a couple of commands, but I
wanted to take my time to deeply explain
all of the commands we're using right
here. So, let's go over all of that
together. First, we need to set the base
image to create the image for React app.
And we are setting it up FROM node:20-alpine. It's just version 20 of Node on Alpine Linux.
You can use any other version you want.
And in these courses, I want to teach
you how to think for yourself, not
necessarily just replicate what I'm
doing here. So, if you hover over the
command, you can see exactly what it
does: set the base image to use for subsequent instructions. FROM must be the first instruction in a Dockerfile.
And you can see a couple of examples.
You can use a from base image or you can
even add a tag or a digest. In this
case, we're adding a tag of a specific
version, but it's not necessary. And if
you click online documentation, you can
find even more instructions on exactly
how you can use this command. Next, we
have to play with permissions a bit.
Now, I know that these couple of
commands could be a bit confusing, but
we're doing it to protect our new
container from bad actors and users
wanting to do something bad with it. So
because of that we create a new user
with permissions only to run the app.
The -S flag is used to create a system user, and -G is used to add that user to a group. This is done to avoid running the
app as a root user that has access to everything; otherwise, any vulnerability in the app could be exploited to gain access to the host system. This is
definitely not mandatory, but it's
definitely a good practice to run the
app as a non-root user, which is exactly
what we're doing here. We're creating a system user, adding it to its group, and then we set the user that runs the app with USER app. And you can see more information about it right here: set the username to use when running the image.
Next, we set the working directory to /app, and then we copy the package.json and package-lock.json to the working directory. This is done before copying the rest of the files to take advantage of Docker's cache: if the package.json and package-lock.json files haven't changed, Docker will use the cached dependencies. So, COPY copies files or folders from a source to a destination in the image's file system. First you specify what you want to copy from the source, and then you provide a path where you want to paste it to.
Next, sometimes the ownership of the files in the working directory is changed to root, and thus the app can't access the files and throws an EACCES permission denied error. To avoid this, we change the ownership of the files back to the app user. So we temporarily switch to the root user, change the ownership of the app directory to the app user by running a new command, in this case chown, where we specify which user, group, and directory we're changing the access to, and then we change the user back to the app user. And once again, if these commands are not 100% clear, no worries. This is just
about playing with user permissions to
not let bad actors play with our
container. Finally, we install dependencies, copy the rest of the files to the working directory, expose port 5173 to tell Docker that the container listens on that specified network port, and then we run the app. If you
want to learn about any of these
commands, hover over it. You can already
get a lot of info and then go to online
documentation if you need even more.
With that said, that is our Docker file.
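Since the exact file is linked below the course, here is a sketch of what a Dockerfile following the steps just described could look like (the user and group names are illustrative, not necessarily identical to the provided file):

```dockerfile
FROM node:20-alpine

# Create a non-root system user (-S) in a group (-G) to run the app
RUN addgroup app && adduser -S -G app app
USER app

WORKDIR /app

# Copy manifests first to take advantage of Docker's layer cache
COPY package.json package-lock.json ./

# Fix ownership so the app user can access the files (avoids EACCES)
USER root
RUN chown -R app:app .
USER app

# Install dependencies, then copy the rest of the source
RUN npm install
COPY . .

EXPOSE 5173
CMD ["npm", "run", "dev"]
```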
Another great practice is to go right here and create another file, similar to .gitignore; this time it's called .dockerignore. And here you can add node_modules, simply to exclude it from Docker, because we don't need node_modules in the image, the same way we don't need it on our GitHub. We don't need it anywhere, not even in Docker: Docker works with our package.json and package-lock.json and rebuilds the dependencies when it needs to. Now, finally, once we have our Dockerfile, we are ready to
build it. We can do that by opening up a
new terminal, navigating to react-docker, and we can build it by running the command docker build -t (for tag, which we can leave as default), then react-docker, which is the name of the image, and then a dot to indicate that it's in the current directory. And finally, press enter. This is going to build out
the image, but we already know that an
image is not too much on its own. To use
the image, we have to actually run it.
So, let's run it by running the command docker run react-docker and press enter. As you can see, it
built out all of the packages needed to
run our app and it seems to be running
on localhost 5173.
But if we open it up, it looks like the site isn't showing, even though we specified that EXPOSE instruction right here, saying that we're listening on 5173.
So why is it not working? Well, first we need to understand that EXPOSE does only one job: to inform Docker that the container should listen on that specific exposed port at runtime. That does make sense. But then why didn't it work? Well, it's because we know on which port the Docker container will listen. Docker knows it, and so does the container. But someone is missing that information. Any guesses? Well, it's the host: the main computer we're using to run it. As we know, containers
run in isolated environments and by
default, they don't expose their ports
to the host machine or anyone else. This
means that even if a process inside the
container is listening on a specific
port, the port is not accessible from
outside the container. And to make our
host machine aware of that, we have to
utilize a concept known as port mapping.
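Port mapping uses the -p flag in the format host-port:container-port. A minimal example of the kind of command we're about to run:

```shell
# Forward port 5173 on the host to port 5173 inside the container
docker run -p 5173:5173 react-docker
```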
It's a concept in Docker that allows us
to map ports between the Docker
container and the host machine. It's
exactly what we want to do. So to do that, let's kill our entire terminal by pressing this trash icon, reopen it, cd into react-docker, and let's run the same command, docker run. And then we're going to add a -p flag right here and map 5173 in our container to 5173 on our host machine, then specify which image we want to run, and press enter. Now as you can
see, it seems to be good. But if I run it, the same thing happens again. It's not Docker's fault, but something that we missed: it's Vite. If you read the logs right here, it's going to say use --host to expose. So, we have to expose that port for Vite, too. So, let's modify our package.json by going right here and adding the --host flag to expose our dev environment. And now again, we'll have to stop everything, kill our terminal, reopen it, cd into react-docker, and then run the image
again. Which makes you wonder, wouldn't
it be great if Docker does it on its own
whenever we make some file changes? And
the answer is yes, definitely. And
Docker heard us. Later in the course,
I'll teach you how to use the latest
Docker features that allow us to
automatically build images and save us
from all of this hassle. But I first
want to teach you how to do it manually
to understand how cool Docker Compose
is, which I'm going to teach you later
on. So, let's just rerun the same
command.
And now we get an error. This means that
something is already connected to that
port. And this indeed is true. If you
check out our containers or images, we
have accumulated a large number of
images. So let's do a quick practice on
how to clear out all of our images or
containers.
Back in our terminal, we can run a
command docker ps, which is going to
give us a list of all of the current
containers alongside their IDs, images, created status, and more, as well as which ports they are listening on. This
is for all the active running
containers. And if you want to get
absolutely all containers, we can run
docker ps- a. And here you can see
absolutely all containers that we have.
That's a lot. Now the question is how do
we stop a specific container? Well, we
can stop it by running docker stop and
then use the name or the ID of a
specific container. You can use the first few characters of the container ID, or you can use the entire name. So let's use c3d and run docker stop c3d. And if you get back the same
command, it means that it successfully
stopped it. If we go back to containers,
you can see that the C3D is no longer
running. But now let's say we have
accumulated a large number of
containers,
which we indeed have both the images and
containers. So, how can we get rid of
all of the inactive containers we have
created so far? Well, we can do that by
running docker container prune. If you
run that, it's going to say this will
remove all stopped containers. So, let's
press y. And that's fine. We only had
one that was stopped that we manually
stopped and it pruned it. But you can
also use the command docker rm to remove
a specific container by name or its ID. So let's try with this one, aa7: docker rm aa7, and press enter. Here we get a response saying that we cannot remove a running container. Of course, you could always use the --force flag, and that's going to kill it. We can verify right here.
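The container-management commands from this section, side by side:

```shell
docker ps                # list running containers
docker ps -a             # list all containers, including stopped ones
docker stop c3d          # stop a container by name or (a prefix of) its ID
docker container prune   # remove all stopped containers
docker rm aa7            # remove a specific stopped container
docker rm -f aa7         # force-remove even a running one
```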
These commands are great and it's always
great to know your way around the CLI.
But nowadays we also have Docker Desktop
which allows us to play with it within a
graphical user interface which makes
things so much simpler. You can simply
use the stop action to stop the
container or you can use the delete
action to delete a container. It is that
easy. Similarly, you can do that for
images by selecting it and deleting all
images. And you can follow my process of
deleting everything. Right now, I just
want to ensure that we have a clean
working environment before we build out
our React example one more time. And
while we're here, if you have any
volumes, feel free to delete those as
well. There we go.
So, moving back, we want to first build
out our image. And now, let's repeat how
to do that. You simply have to run docker build -t, the name of the image, and then a dot. This is going to build out the image. After you do that, we have to run it with port mapping included, so that's going to be docker run -p, map the ports, and then the name of the image you want to run, and press enter. It's going to run it. And you can
see a bit of a difference. Right now
here it's exposed to the network. And if
you try to run localhost 5173,
you can see that this time it actually
works.
That's great. But now, if we go back to our code, go to src/App, and change this Vite + React heading to something like Docker is awesome, and save it.
Back on our local host, you can see that
it didn't make any changes.
That's very unfortunate. We hope that
this container could somehow stay up to
date with what we are developing.
Otherwise, it would be such a pain to
constantly rebuild containers with new
changes. This happens because when we
build the Docker image and run the
container, the code is then copied into
that container. You can see all the
files right here and they're not going
to change. So, even if you go right here to app, then src, then App.tsx, right-click it and click edit file, you'll be able to see that here it still says Vite + React.
So, what can we do? Well, we'll have to
further adjust our command. So, let's
simply stop our active container so we
can then rerun a new one on the same
port. Let's go back to our Visual Studio Code, clear it, and make sure that you're in the react-docker folder; we need to run the same command. Then we have to also add a quote, a dollar sign with pwd in parentheses, close it, and then say :/app, closing it like so: "$(pwd)":/app.
It seems a bit complicated, doesn't it?
What this means is that we tell docker
to mount the current working directory
where we run the docker run command into
the app directory inside the container.
This effectively means that our local
code is linked to the container and any
changes we make locally will be
immediately reflected inside the running
container. This tiny pwd represents the current working directory over here; it executes at runtime to provide the current working directory path. And -v stands for volume. That's because we're creating a volume that's going to keep track of all of those changes. Remember that we talked about volumes before?
They try to ensure that we always have
our data stored somewhere. But before
you go ahead and press enter, there is one more additional flag that we have to add to this command, and that is yet another -v, but this time just /app/node_modules. Why are we doing this? Well, we
have to create a new volume for the node
modules directory within the container.
We do this to ensure that the volume
mount is available in its container. So
now when we run the container, it will
use the existing node modules from the
named volume and any changes to the
dependencies won't require a reinstall
when starting the container. This is
particularly useful in development
scenarios where you frequently start and
stop containers during code changes. So
let's run it. It's running on localhost
5173.
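Assembled from the flags above, the full development command looks like this:

```shell
# -p maps the port; the first -v mounts our local code into /app;
# the second -v keeps the container's own node_modules intact.
docker run -p 5173:5173 -v "$(pwd)":/app -v /app/node_modules react-docker
```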
Docker is indeed awesome. But now the
question is if we change it, what's
going to happen? So we go here and say
something like Docker is awesome, but
also add a couple of whales at the end.
Press save, and then you can see the hmr update for src/App.tsx in the logs. And now, if we run it, we have a couple of whales right here. There we go. So whenever you
change something, you'll see the result
instantly in the UI. That's amazing.
And even if we go back to our Docker
desktop, you can see that now we have a
volume that keeps track of these
changes. And if you go under containers,
go to our active container, go to files, and then let's go to app/src/App.tsx and edit. You can see that the changes
are also reflected right here. So that's
it. You have successfully learned how to
dockerize a front-end application. Not
many developers can do that. But you,
you are just getting started.
Now that we have created our Docker
image, let me teach you how to publish
it. We can do that using the command
line. So let's go right here, kill our current terminal, reopen it, and cd into react-docker.
Next, we can run docker login. And if
you already logged in with docker
desktop, it should automatically
authenticate you. Next, we can publish our image using this command: docker tag react-docker, and then you need to add your username and the name of the image. You can find your username by going to Docker Desktop, clicking on the icon on the top right, and copying it from there. In my case, it's javascriptmastery, and then I'm going to do /react-docker.
It's okay if we don't provide any tag
right here as the default tag is going
to be colon latest. Also, don't forget
that below this course, I provided a
complete list of all of the commands
including different tag commands to help
you get started with Docker anytime,
anywhere.
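The publish commands in one place (swap in your own Docker Hub username):

```shell
docker login
docker tag react-docker your-username/react-docker
docker push your-username/react-docker
```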
So, check them out and try running some
of them. Finally, let's publish our image. Now we have to run docker push javascriptmastery/react-docker, or in your case, your-username/react-docker.
And this is going to actually push it to
our Docker Hub. There we go. Now, if you go back to Docker Desktop, you can see that we have a javascriptmastery/react-docker image that is now actually pushed to the hub. And you can also check it out right here by going to Images and switching from Local to Hub, and then you can see that javascriptmastery has one latest image. And
another cool thing you can do is go to
hub.docker.com
where you can find your image published
under repositories and then check out
your account right here and you'll be
able to see your React Docker image
right here live on DockerHub. And now
other people can run this image as well
and containerize their applications by
using it. How cool is that? And that's all there is to it. You have successfully published your first Docker image. But now that you know the basics, let's find a more efficient way of dockerizing our
applications. Oh yeah, developers are
lazy. So writing and running all of
these commands for building images and
containers and then mapping them to host
is just too much to do. But it's not the
only way. We can improve or automate
this process with Docker Compose and run
everything our application needs to run
through Docker using one small single
command. Yes, we can use a single
straightforward command to run the
entire application.
So, say hi to Docker Compose. It's a
tool that allows us to define and manage
multicontainer Docker applications. It
uses a YAML file to configure the
services, networks, and volumes for your
application, enabling us to run and
scale the entire application with a
single command. We don't have to run 10
commands separately to run 10 containers
for one application. Thanks to Docker
Compose, we can list all the information
needed to run these 10 containers or
more in a single file and then run only
one command that automatically triggers
running the rest of the containers. In
simple words, Docker Compose is like a
chef's recipe for preparing multiple
meals in a single dinner. It allows us
to define and manage the entire cooking
process for recipes in one go,
specifying ingredients, cooking
instructions, and how different parts of
the meal should interact. With Docker
Compose, we can serve up our entire
culinary experience with just one
command.
And while we can manually create these
files on our own and set things up,
Docker also provides us with a CLI that
generates these files for us. It's
called Docker Init. Using Docker Init,
we initialize our application with all
the files needed to dockerize it by
specifying our tech choices.
So let's go ahead and create another Vite project, which we can use to test out the features of Docker Compose and Docker Init. We can open up a terminal and then run npm create vite@latest. In this case, we can call it vite-project, and press enter.
It's going to ask us a couple of
questions. It can be a React TypeScript application. We can cd into it. And please make sure that you are in the docker_course folder, meaning in the root of our folder, so it creates the project right next to react-docker. If you were in react-docker when you ran this command, it's going to create it inside of it; if that's the case, delete it, navigate back to docker_course, and then rerun the command. Now we can cd into vite-project and learn how to use Docker Init. It's so simple. You simply run docker init. That's all there is to
it. And it's going to ask you many
questions based off which it's going to
generate a perfect YAML file for you. So
what application platform are we
planning on using? In this case, it's going to be Node, so you can simply press enter. What version? You can just press enter one more time to accept what they're suggesting in parentheses: 20 is fine with us. npm is good. Do we want to use npm run build? No, actually, in this case we're going to say no, and we're going to say npm run dev. That's what we want to use. And the server port is going to be 5173.
And that's it.
We can see that this has generated three
new files for us: the Dockerfile, which we already know a lot about. This one has some specific details in it, but you can see that, again, it's based on the same ideas.
It starts from a specific version, sets
up the environment variables, sets up
the working directory and run some
commands. We also have a .dockerignore, where we can ignore some additional files. And then there's this new file, compose.yaml. While all of these files are important when using Docker Compose, compose.yaml is the most important one. You can read all of these comments, but for now I just want to delete them to show you what it is comprised of.
We simply define which services we want
our apps or containers to use. We have a
server app where we build the context,
specify environment variables, and
specify the ports. Of course, these can
get much more complicated in case you
have multiple services you want to run,
which is exactly what I want to teach
you right now. Here, they were even kind
enough to provide an example of how you
would do that by running a complete Postgres database. So you can specify
the database, database image, and
additional commands you can run. But
more on that later. We're going to
approach it from scratch. For now, we can leave this compose.yaml empty, and first, let's focus on just the regular Dockerfile. In this case, we can replace this Dockerfile with the one we have in our react-docker application. So, copy that one right here and paste it into this new one. We already know what this one is doing.
Now moving inside of the YAML file here,
we can rename the server into web as
that's a common practice for running web
applications and not servers.
We can also remove environment variables
as we're not using any
and we can leave the port. Finally, we
need to add the volumes for our web
service. So we can say volumes.
Make sure to provide a space here and
then a dash. And that's going to be
.:/app, and another dash with
/app/node_modules.
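Put together, the compose.yaml we end up with looks roughly like this (a sketch; your generated file may differ slightly):

```yaml
services:
  web:
    build:
      context: .
    ports:
      - "5173:5173"
    volumes:
      - .:/app            # mount the source code into the container
      - /app/node_modules # keep the container's own node_modules
```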
Does this ring a bell? It's similar to
what we have done before manually by using
the docker run command, but now we're
doing it all with this compose yaml
file. And now all we have to do is run a
new command. docker compose up and press
enter. And as you can see, we get a
permission denied. You never want to see
this. If you're in Windows, maybe you're
used to seeing this every day. In which
case, you simply have to close Visual
Studio Code, rightclick it, and then
press run as administrator. That should
give you all the necessary permissions.
On macOS or Linux, you can simply add
sudo before the command.
Then it's going to ask you for your
password and it's going to rerun it with
admin privileges. So let's press enter.
And the process started. It's building
it out. Now let's debug this further. We
get the same response we've gotten
before. Hm. What could this be? Port is
already allocated. Oh yeah, we forgot to
delete or close our container that we
used for previous React application. So
now we know the easy way to do it. We
simply go here, we select it and we can
stop it or delete it. Once it is
stopped, we can go back and then simply
rerun the command. I want to lead you
through all of the commands together,
even if the failed ones, just so you can
get the feel of how you would debug or
reapproach specific aspects once you
encounter errors. That's what being a
real developer is. getting stuck,
resolving the obstacle, and getting
things done. And finally, let's run the
command. It's running it. And if we go
to localhost 5173, ah, the same thing as
before. Any guesses? The answer is that
we once again forgot to add the --host flag
to our Vite dev script right here. So if
we add it stop our application from
running by pressing Ctrl C, this is
going to gracefully stop it. The cool
thing about Docker Compose is that it's
also stopping the container that it spun
up. And now that we have canceled our
action, we can try to rerun it with
sudo docker compose up, but this time
with --host included. And press enter.
It's going to rebuild it. And if we open
it up now, it works. By now, you should
have a solid understanding of how to
containerize applications within Docker.
Of course, you can take this further by
experimenting with dockerizing projects
like MERN or Next.js apps. And if you want
to deepen your Docker skills and see how
it applies across different types of
projects, check out my full Docker
course on YouTube. It's completely free
and the link is down in the description.
And if everything makes sense so far,
trust me, you're on the right track with
DevOps. You'll soon be applying this
knowledge to containerize production
ready applications and connect them with
CI/CD pipelines. I'll cover all of that
here. So, let's just keep going.
In the previous lesson, you learned how
to build and run containers with Docker.
You saw firsthand how containers package
your applications into neat portable
boxes that can run anywhere. That's
already super powerful, but everything
feels amazing until we start getting
more users. Suddenly, there are too many
requests for a single container to
handle. That container has a ceiling and
if it dies, your app dies with it.
That's where Kubernetes comes in.
Kubernetes is a container orchestration
platform. Its job is to schedule, scale,
self-heal, and load balance containers
across machines so your app stays up.
So, why isn't one container enough? Well, a
single process handles only a limited
number of concurrent requests before CPU
and memory become bottlenecks. Sure, you
can tune and scale vertically, but
there's always going to be a ceiling and
a single point of failure. And you can't
bet on one thing for life. If it goes
down, your application goes down and so
do you. So, you need replicas and an
automated way to place and manage them.
That is Kubernetes. Kubernetes, often
abbreviated as K8s, with the 8 representing
the eight letters between the K and the S,
is an open-source container orchestration
platform. At its core, Kubernetes helps
you run your app across multiple nodes,
scale replicas up and down based on
demand, restart unhealthy containers
automatically, and distribute traffic
across replicas, all while rolling out
updates without downtime. Docker gives
you containers. Kubernetes decides how,
where, and when they run. You can think
of Kubernetes as the operating system
for your containers. Without it, you'd be
manually starting and stopping
containers, keeping track of IP
addresses, restarting crashed apps, and
scaling things up or down by hand. But
to really understand Kubernetes, let's
break it down into its building blocks.
First, we have the cluster. A cluster is
a group of machines either physical or
virtual that work together as one single
system. In Kubernetes, a cluster is made
up of a control plane which decides,
schedules, reconciles and monitors
health. And then we have worker nodes,
physical or virtual machines where your
containers actually run. Each worker
node runs the kubelet, an agent that
communicates with the control plane and
a container runtime like Docker. There
is also something known as kube-proxy
that handles networking and routing
inside the cluster for every node. So
when you tell Kubernetes, "run three
copies of my Node.js app," the control
plane decides where these containers go
and the worker nodes actually run them.
Next up, we have pods. In Kubernetes,
you don't run containers directly.
Instead, containers are wrapped in
something called a pod. A pod is the
smallest deployable unit in Kubernetes.
There's usually one container per pod,
and each pod gets its own IP address.
So, when you deploy your app, Kubernetes
runs it inside a pod. You never interact
with containers directly in Kubernetes.
You only interact with pods. And you can
even run multiple pods by specifying
something known as a replica set. A
replica set ensures a specified number
of pods are always running. If you say,
"I want three replicas," Kubernetes
makes sure three pods are running at all
times. And if one pod dies, Kubernetes
spins up a new one automatically. This
is where scaling comes in. You don't
manually start containers. You just
declare how many replicas you want and
then comes the deployment.
Deployment is the higher level object
that manages replica sets. It allows you
to define updates to your application.
Kubernetes can do a rolling update,
gradually replace old pods with new ones
so your users never see downtime. It
handles all of these and ensures reality
always matches the desired state. If one
pod crashes, it creates a new one. So
instead of saying, "Go ahead and run
these containers," you say, "Here's my
app. Here's the image. Here's how many
replicas I need. Manage it for me." But
there's one issue. Pods are temporary.
They come and go. And each time they get
a new IP. So how do users or other pods
connect to them? They connect using
something called a service. A service is
a stable endpoint which is a permanent
IP or DNS name that automatically routes
traffic to the available pods behind it
and it also load balances requests among
multiple replicas. Think of a service as
the reception desk. You don't care which
employee helps you as long as someone
does. And apps often need configuration
and credentials. So that's why we have
config maps and secrets. Config maps
store configuration data, for example,
like a database URL. And secrets store
sensitive data like passwords or API
keys. Kubernetes injects these into pods
securely without baking them into your
Docker image. All of this sounds good,
right? But how do the external users
access this? Welcome, Ingress. Ingress
is like a smart router that exposes HTTP
and HTTPS routes to the outside world.
For example, it can map api.myapp.com
to your backend service. And similarly
to Docker, Kubernetes also has volumes.
Since containers are ephemeral, meaning
data is lost if restarted, Kubernetes
provides volumes for persistent storage.
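To make that injection idea concrete, here is a minimal sketch of a Secret and a pod that reads it as an environment variable (all names here are placeholders, not from the video):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
stringData:
  DB_PASSWORD: supersecret      # stored by the cluster, not in the image
---
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
    - name: app
      image: node:20-alpine
      env:
        - name: DB_PASSWORD     # injected into the container at runtime
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: DB_PASSWORD
```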
That's what you need to know for now. If
you want a separate advanced deep dive
on Kubernetes, let me know down in the
comments and I'll record one for you.
And yeah, I think this goes without
saying, but you should never do anything
directly on production. Before touching
production clusters, you can create
local clusters that allow you to
experiment safely. They simulate a full
Kubernetes environment without risking
your production apps or cloud costs.
Minikube is the most popular choice. It's
lightweight and runs a single-node
cluster inside a virtual machine or
container. When I say Minikube runs a
single-node cluster, I mean that the
cluster has only one node that acts as
both the control plane and worker node.
And this is different from production
where clusters are usually multi-node
which means that control plane nodes
manage the cluster, running things like the
API server and the scheduler, and worker
nodes run your application workloads, containing pods
and containers. So with a single node
cluster both roles live on the same
machine which is perfect for development
and testing. There are also some
alternatives, like kind (Kubernetes in
Docker), which runs clusters inside Docker
containers; it's great for CI/CD
pipelines and automated testing. And K3s,
a lightweight Kubernetes distribution,
good for Internet of Things or
resource-constrained machines. But I still
recommend Minikube for learning, because
it gives you all the Kubernetes
components locally, including the API
server, scheduler, and kubelet, so you can see
production-like behavior. So let's get
our hands dirty on creating your very
first local cluster and running
Kubernetes.
Let's dive right into the Kubernetes
demo. First things first, we'll create a
new repo. So I'll call it Kubernetes
demo and create. Now you can copy this
URL. So we can clone the repo within our
IDE. I'll do it using WebStorm. So I
simply need to provide the URL right
here and click clone. If you're using a
terminal, you can just say git clone and
then paste the URL. Once you're within
it, we can run npm init -y
to initialize a new node application.
It'll start with only package json. So
while we're here, let's also install
Express by running npm install express.
Now after that, we'll have to install a
CLI for running Kubernetes. And if you
head over to Kubernetes documentation,
head over to tasks and then install
tools, you'll see the kubectl
installation on Linux, macOS, or
Windows. So just proceed with your
operating system. I am on macOS Apple
Silicon. So I'll simply copy this curl
command, head back within our console
and type this. It'll download it and set
it up. But of course, the setup for
Windows is a bit different. So pause
this video right here and install
kubectl. Once you've installed it, you
can run kubectl version --client
and you'll be able to see a version
right here. After that, you'll also need
to install Minikube, which is a local
Kubernetes focused on making it easy to
learn and develop for Kubernetes. And
here you can choose your operating
system, the architecture, and just copy
the installation command. Once again, I
will paste it for my device. You can
clone it for yours. Let's wait until it
gets installed. It might ask you the
password as well. So just type it in and
press enter and you'll be done. Once it
is done, you can run minikube version
to see whether it has been installed
properly. If you see something like
this, we're good. With Minikube and
kubectl installed, we are ready to
create our index.js file, the starting
point of our express application. So
just go ahead and create a new file
called index.js.
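Before the walkthrough, here's a sketch of the response logic those routes will implement. The home route path is assumed to be /, and hello-node and POD_NAME are the names used in the dictation that follows; in the real index.js this logic is wired into Express handlers (app.get(...)), which aren't shown here:

```javascript
// Sketch of the route logic for the demo API (not the full Express server).
function handle(path, env = process.env) {
  if (path === "/") {
    return {
      status: 200,
      body: {
        message: "Hello from a container",
        service: "hello-node",
        pod: env.POD_NAME || "unknown", // "unknown" outside Kubernetes
        time: new Date().toISOString(), // human-readable timestamp
      },
    };
  }
  if (path === "/readyz") return { status: 200, body: "ready" }; // readiness probe
  if (path === "/healthz") return { status: 200, body: "ok" };   // liveness probe
  return { status: 404, body: "not found" };
}
```

Kubernetes will later call /readyz and /healthz to decide whether a pod is healthy, which is why those two endpoints matter so much.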
And within it you can import express
from express you can initialize a new
app and define a port. It can be 3000,
8080, 5000, or anything. After that, you
can create a new empty route; that's
going to be a forward slash, our home route.
You can open up a request and a response
for this route and then you can send
some kind of an output like a message of
hello world. Of course, to be able to
reach this endpoint, we need to turn the
server on and make it listen on a
specific port. And we can also give it a
console log saying something like
example app listening on port. And then
we define the port right here. Now,
while we're here, we can also provide a
bit more information. So when somebody
tries to make GET requests to our API
alongside passing the message of hello
world, instead of it, we'll actually say
something like "hello from a container",
because we'll be running this within a
Kubernetes container. We'll also define a
service, which will be hello-node.
Then I will also give it a pod, which
will be process.env.POD_NAME,
or we can make it unknown if we don't
have a pod within our environment
variables. And finally, we can define the
time; this is going to be a new
Date().toISOString(), so we return it in
a human-readable format. Alongside this home
route we can also add some basic health
endpoints for the probes. We can do that
by saying app.get and then listen on
something like /readyz, where we'll also
have a request and response, and we'll
just respond with a status of 200 and
send a message of "ready". If we get this
message, that means that we are running.
And another very important endpoint that
we need to have to make this work is
app.get /healthz. This one can send
a response of "ok". Both of these two
endpoints are needed for Kubernetes to
know that our app is alive and well. So
make sure to have it and then we can
head over to our package JSON. Let's
change the type of this application from
CommonJS to module, so we can use ES6
imports and exports. And then let's also
add a new dev script that'll run node
with a --watch flag. So whenever we make
any changes, the terminal will be
restarted. And we want to run this
index.js file. And after adding this dev
script, also make sure to add a start
script. This one will be even simpler.
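After these changes, the relevant part of package.json looks roughly like this (a sketch; the start script is described next):

```json
{
  "type": "module",
  "scripts": {
    "dev": "node --watch index.js",
    "start": "node index.js"
  }
}
```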
It'll be just node index.js. In this
case, we don't have to add the --watch
flag because in deployment, we won't be
making any changes that we'll have to
listen for or watch for. Rather, we'll
just run the finished server. Once you
do that, we want to start doing the
Docker setup. Since we've already
watched a crash course on Docker, and
since later on, in the build and deploy
part, we will dockerize that application,
for now I'll provide you with the files
needed to make it work. So first things
first, we need a Dockerfile, which you
can create by just creating a Dockerfile.
And then, within it, from the video
kit down below, you can copy and paste
the Kubernetes demo Dockerfile, or feel
free to pause this video and type it
out. We're first defining a base image
of an operating system that we want to
run and setting a working directory.
Then we're installing all the
dependencies separately for caching,
copying the app source, and then running
the app as a nonroot user. After that,
we will also need a docker compose file.
So create a new file called
docker-compose.yaml.
And once again, I will provide this over
within the video kit down below or feel
free to write it by hand, but make sure
to use the two spaces to do the
indentation and not the tabs. Here we
have the services where we're defining
the API service. And within it, we have
a build where we want to build this
docker file with a specific container
image running on port 3000 with a
NODE_ENV.
We give it access to different volumes
and a command of npm run dev. I will also
define a .gitignore so that it knows
what not to push over to GitHub. It's
going to be node_modules, of course. And
we also want to do another one, which
will be a .dockerignore, which will include
node_modules, npm-debug.log
(so all the log files),
Dockerfile,
.dockerignore, .git,
and .gitignore. We don't need
those within docker. And now we are
ready to create a new docker image and
run a docker container in the background,
which means in detached mode. You can
use it this way when you've changed your
docker file or code and want to rebuild
it and run it in one go. Basically this
is your go-to command in development
mode. So open up your terminal and
simply type docker compose up -d --build
and press enter.
And if you run it, you'll see that it
says no configuration file provided. But
we have our docker-compose here. Or wait,
the filename is misspelled, rather; we
want to make it compose.
So, if we fix this typo, which you most
likely didn't have, and we rerun this
docker compose up --build command, you'll
see that it'll start building, but it's
having trouble finding the Docker file.
And it looks like I misspelled that one,
too; it's the Dockerfile. So, let's go
ahead and rename it to Dockerfile.
Hopefully, you had both of these, right?
And now, if I rerun the docker compose
up --build command, and there we go. Our
Kubernetes demo is now running on port
3000. If you head over in your browser
and head over to localhost 3000, you'll
be able to see a JSON output that we're
sending over from our application. Now,
we can publish this Docker image so we
can refer to it when deploying
Kubernetes clusters. To publish our
Docker image, you can head over to the
Docker Hub. Just Google for it and
you'll be able to find it. Then, if you
head over to your profile, you'll be
able to find your username. So just copy
it. Once you get your username, you can
head over to your terminal. I'll open up
a new one because this one is running
the server. So now we can use this
username and the repo name to add a tag
to it by saying docker tag kubernetes-demo-api
(or maybe it's something
different for you) :latest, then put
your username. For me, it's
jsmasterypro/kubernetes-demo-api:latest
and press enter. This will apply a tag.
Once the tag is applied, you can then
run docker push jsmasterypro, or your
name in this case forward slash the name
of the repo or the container at latest
and press enter. This will push this
image to docker hub. As soon as this is
done, you'll be able to head back to
your Docker Hub and under repositories,
you'll be able to see your first repo.
So, just reload and there we go. It's
pushed right here. Okay, dockerization
is done. But now is the time to make our
Docker image horizontally scalable so
that we don't just depend on the
resources of a single container. This
means that we can actually start working
on Kubernetes.
First, I'll create a new directory
called K8S, which is an abbreviation for
Kubernetes. And within it, I will create
a new file called deployment.yaml.
Now, Kubernetes is declarative, which
means that you describe what you want,
not how to do it. You do this using YAML
files. Remember those? So within this
YAML file, we can declare absolutely
everything from how many copies of the
app we want called replicas to which
docker image we want to use and which
ports to listen on. Everything can be
done right here. Now I'll share this
full deployment with you in the video
kit down below. So simply copy it and
paste it right here and we can go
through it together. First we define the
API version. Then we add a bit of
metadata about the name of our app. And
then the most important part is the
specification. Here you define how many
replicas of the app you want. In this
case, I said two. You add some labels
and then you further define the
specification of those containers. You
give each container a name and the image
to run off of. You also define on which
port they'll be running. You can pass
some additional environment variables
and attach different amounts of
resources to these containers and then
you can provide some additional
information. This is how you configure
Kubernetes. But after that, we also have
to provide network access to our pods.
So within the K8s folder, create a new
file called service.yaml.
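As a rough sketch of what these two manifests contain (the exact files ship in the video kit; the image name, labels, probe paths, and the NodePort service type here are assumptions based on the walkthrough):

```yaml
# k8s/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes-demo-api
spec:
  replicas: 2                  # two copies of the app
  selector:
    matchLabels:
      app: kubernetes-demo-api
  template:
    metadata:
      labels:
        app: kubernetes-demo-api
    spec:
      containers:
        - name: kubernetes-demo-api
          image: jsmasterypro/kubernetes-demo-api:latest
          ports:
            - containerPort: 3000
          env:
            - name: POD_NAME   # exposes each pod's name to the app
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          readinessProbe:
            httpGet: { path: /readyz, port: 3000 }
          livenessProbe:
            httpGet: { path: /healthz, port: 3000 }
---
# k8s/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-demo-api-service
spec:
  type: NodePort               # lets `minikube service` open it locally
  selector:
    app: kubernetes-demo-api   # routes traffic to pods with this label
  ports:
    - port: 3000
      targetPort: 3000
      protocol: TCP
```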
I'll also pass this service file within
the video kit down below. So simply
paste it here. And the same as before,
we have to define our API version, the
kind, in this case, it's a service, pass
some additional metadata, and select
which app we want to connect with, on
which port it is, and which type of a
protocol we want to use. And now is the
time to deploy it all locally. So, first
things first, we'll use the CLI that we
installed at the beginning of this
lesson. It's Minikube, so I'll just say
minikube start. You'll get back the
output of what's happening, such as
whether the host, kubelet, and API server
are running and the kubeconfig should
be configured. It might take some time
to run it properly for the first time.
Done. Minikube has now been configured.
And once it is done, you can run kubectl
get nodes. In the console, you'll see
that Minikube is running a control
plane. To check if your cluster is
running, you can run kubectl
cluster-info. And here you can see more info
about the control plane. If the cluster
is healthy, you'll see something like
this, where you can see the port where
it's running. Without running Minikube,
if you directly try to run these kubectl
commands, you won't get anything, as
there is no cluster, because Minikube is
a tool that sets up a local Kubernetes
cluster on a laptop. And without minikube
start, there's no cluster running,
so kubectl can't connect anywhere. And
finally, we are ready to deploy these
files, service.yaml and deployment.yaml. You
can do that in two separate lines by
deploying each one individually by
saying kubectl apply -f and then target
the path to each one of these files. Or
you can do it in a shorter single
command by running kubectl apply -f
k8s, which is targeting the entire folder
containing both of these files. If you
do that for you, it should say that both
got configured and through this process
the Kubernetes API server will read
those YAML files. Deployment will create
pods via replica set and the service
will set up network routing to the pods.
Exactly what we learned about not that
long ago. So now we can get access to
the pod information to check the pod
status by running kubectl get pods -w.
You should be able to see two right
here. I was doing some additional
testing so I have four but essentially
you should see kubernetes-demo-api two
times, because we spun up two replicas of
our Docker image. You can also get
access to the services by running
kubectl get services, and you can see
those services running right here. And
finally it's time to test out the
application. Minikube thankfully
provides us with a very simple way to
access your service, and that is by
running minikube service, and then you
have to provide the name of your service.
That name was provided for us right
here when we ran kubectl get services;
in this case, it's referring to the
kubernetes-demo-api-service. So copy
whatever you have right here and run
minikube
service and then the service name. If
you do it correctly, you'll immediately
see that Kubernetes will open up a
service for you in your default browser.
Thankfully, it did it for me as well.
I'll zoom it out just a bit. And you can
see that our server is now live. If I
zoom it out even more, I want you to pay
attention to one thing, and that is the
pod Kubernetes demo API. And then it has
a specific ID. Now, if you reload it a
couple of times, oh, it looks like I'll
have to pretty print it every time. Or I
can be smart and just install a JSON
viewer pro extension from the Chrome web
store and add it to my Chrome or Arc,
which would give me a more beautiful
tree view of the data. So, if you now
reload it a couple of times, you can see
that the last part of the pod ID will
change, which essentially means that
you're making requests to two different
servers. This simple pod change shows
that Kubernetes can automatically
replace, replicate, and rebalance
workloads across the cluster, letting
the system scale up or down without
intervention. So if one container or pod
goes down for whatever reason, the other
one is up here and ready to serve your
users. This is huge. So what's happening
behind the scenes? Well, the API server
receives your YAML file. The scheduler
assigns pods to nodes. The kubelet starts
containers inside pods, and then the
service ensures that traffic is routed
to these pods. But that's a lot of
things that we had to go through. You
had to create a Docker image, push it
over to Docker Hub, start mini cube, do
a Kubernetes deployment, get listed
pods, get services, and finally test it
out. a lot of actions that you would
have to repeat again and again. But
instead of doing that, you can simply
write a bash script and execute it
whenever you want to deploy your app. So
create a new file and call it deploy.sh.
Let's write it together. First, you can
add a bash shebang and say set -e, which
means that we want to run this script with
bash and exit it automatically if any
command fails. Then
you want to define the name of your API.
For me, this means that it is a
Kubernetes demo API which is how we
called it before. You can also define
your username, which in this case is
jsmasterypro, or for you it's going to be
your name. And finally, you can provide
your docker image which is going to be a
combination of your username. So in bash,
you use a dollar sign to reference a variable.
So that's the dollar-sign username, a
forward slash, the dollar-sign name, at
latest. Then we're ready to build the
docker image. And while running this
script we can also put out some console
logs. And in bash you do it with echo.
So you can say something like building
docker image dot dot dot and then you
can run docker build -t and use the
dollar sign image variable which we
created above and then put a dot here to
build it right here. Then you want to
push that image over to the docker hub.
So let's say that using an echo command
something like pushing image
to docker hub and you can do that by
saying docker push dollar sign image.
Then you want to apply your kubernetes
deployments and services to yaml
configs. So we can also add an echo for
that saying something like applying
kubernetes manifests which are basically
just YAML files. So you can run kubectl
apply -f and then point it over to
k8s/deployment.yaml,
and you can duplicate it for the
service.yaml file as well. Here we're
applying our Kubernetes deployments in
service yaml configs. Finally we want to
get all the pods. So we can say
something along the lines of getting
pods dot dot dot and run kubectl
get pods. Then we want to do a similar
thing for getting the services. So we
can say getting services by running
kubectl
get services. And once we list those
services we can get a service with the
name that we have declared above. So we
can say something like echo fetching the
main service and you can do it by
running kubectl get services, and then
you can provide the name of the service.
So that's $NAME-service, like this.
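Put together, the deploy.sh described above looks roughly like this (a sketch; the image name and username are the ones used in the video, so substitute your own):

```shell
#!/usr/bin/env bash
set -e  # exit immediately if any command fails

# Names from the walkthrough; replace the username with your own.
NAME="kubernetes-demo-api"
USERNAME="jsmasterypro"
IMAGE="$USERNAME/$NAME:latest"

echo "Building Docker image..."
docker build -t "$IMAGE" .

echo "Pushing image to Docker Hub..."
docker push "$IMAGE"

echo "Applying Kubernetes manifests..."
kubectl apply -f k8s/deployment.yaml
kubectl apply -f k8s/service.yaml

echo "Getting pods..."
kubectl get pods

echo "Getting services..."
kubectl get services

echo "Fetching the main service..."
kubectl get services "$NAME-service"
```

Running it requires Docker, kubectl, and a started Minikube cluster, exactly as set up earlier in this lesson.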
And finally to test it out we can stop
mini cube from running and basically
stop everything else that is running.
Minikube has to be stopped by
running minikube stop. While that is
happening, open up Docker Desktop and
delete the Docker images from desktop.
So everything that has to do with
Kubernetes demo, the most important one
to delete is the one with your username
before it. So just go ahead and delete
it. Now you can head back over to your
terminal and run minikube start to
restart this local Minikube service
that allows us to run local Kubernetes
deployments. And then when it starts, we
will simply run this single script
instead of running all the commands that
we previously ran. There we go. It is
done. And now just run npm run deploy.
Oh, but let's make sure to add this
deploy script to the package.json by
saying deploy. And to run it, you can
say sh deploy.sh.
So now, if you run this command, you'll
see that, one by one, it'll say: first
building the Docker image, then pushing
the image to Docker Hub, then applying
the Kubernetes manifests, getting pods,
and finally getting the services and
listing the last service. And this
basically tells you everything is ready
to run this service. So you can copy the
service name from here.
That's kubernetes-demo-api-service, and
run minikube
service, and then paste the name of the
service.
If you press enter, you'll see that
it'll be running on this port and you
can see the message from a new deployed
app on your computer. The pod changes
every now and then and your app is
running in two containers and you never
know from which container your request
is going to be executed. Kubernetes will
handle all of it for you. And if you
want to scale the app further, just head
over to your deployment YAML and change
the number of replicas and you can
immediately scale the app within seconds
by rerunning the deployment script. I
know it might feel overwhelming right
now, but with consistent practice, it'll
all start to click. The best part is you
can keep repeating and testing all these
commands in Minikube without breaking
anything. Once you feel confident you
have mastered the basics, that's when
you can step up and move to the cloud.
I've already covered a lot about
Kubernetes in the context of DevOps
here. But if you'd like me to create a
dedicated deep dive video focusing
solely on Kubernetes, drop a comment
down below and I'll make it happen for
you soon. And if you're looking for a
complete resource that takes you from
start to finish with plenty of real
examples, deeper foundational knowledge,
and a true understanding of how
everything works, then my ultimate
back-end course is made for you. YouTube
videos will give you a solid surface
level foundation, but with our courses,
you'll develop the mindset of a senior
developer. So, click the link down in
the description and I'll see you inside.
And if the course is not out yet and you
really want to dive deeper into
Kubernetes, you can check out our
Kubernetes reference guide and an ebook.
It's a part of this new YouTube
membership thing that I'm doing. So if
you want to support the channel, that's
a very easy way to do it. Anyways,
amazing job on learning Kubernetes in
this part of the course. But now we move
forward. Great job.
So far, you've seen how Docker makes
apps portable and how Kubernetes makes
them scalable and resilient. But here's
the real question. Where do these
clusters actually run? On servers, of
course. And those servers could be
anything from physical machines in a
data center to virtual machines in AWS,
GCP, or Azure.
Either way, before your first
deployment, someone has to spin up
compute instances, configure networking,
VPCs, subnets, firewalls, and load
balancers, set up storage, volumes,
databases, and backups, and install
runtimes and dependencies.
Traditionally, all this was manual work.
Click through dashboards, SSH into
servers, run ad hoc scripts. That's
slow, errorprone, and impossible to
scale.
This is where infrastructure as code, or
IaC for short, changes everything. IaC
means managing infrastructure (servers,
networks, and databases) with code instead
of manual setup. Think of it like
writing blueprints for your entire
infrastructure. So instead of saying,
"Hey Ops team, can you create these
three servers and hook them up with a
load balancer?" You just write code that
looks something like this: resource
"aws_instance" "web", with an AMI ID, an
instance type, and a count. Run this and you get exactly
those three servers. Change count to
five and two more are provisioned. Your
infrastructure immediately becomes
version-controlled, because you can track
changes in Git; testable, so you can
verify configs before deployment;
reproducible, because you can rebuild
those environments instantly; and
sharable, as you can onboard new
engineers fast. So, IaC is all about
consistency, speed, scalability, and
collaboration. And you don't need to
master every provider-specific service like AWS CloudFormation, Azure Resource Manager, and so on. The industry prefers cloud-agnostic
tools that work anywhere. One of those
is Terraform by HashiCorp. You simply
define infra in HCL and deploy across
AWS, Azure, GCP and more with a single
workflow. One script could create a
Kubernetes cluster in AWS, a database in
Azure, and storage in GCP. There's also
Helm, a package manager for Kubernetes
that turns messy YAML files into
reusable configurable templates. So if
you're starting out, simply focus on Terraform as a general-purpose IaC tool for
all clouds, Helm for managing Kubernetes
deployments, and then cloud specific
tools later if you go deep on one
platform. This skill set will make you
valuable anywhere and keep you out of vendor lock-in, so Bezos can no longer
control what you do.
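Written out as actual HCL, the resource the narration dictates earlier would look something like this (the AMI ID, instance type, and count values are placeholders, not from the video):

```hcl
# Sketch of the dictated Terraform resource; values are placeholders
resource "aws_instance" "web" {
  ami           = "ami-0abcdef1234567890" # hypothetical AMI ID
  instance_type = "t3.micro"              # hypothetical instance type
  count         = 3                       # bump to 5 and Terraform provisions two more
}
```

Changing `count` and re-running the workflow is exactly the "change count to five" step described above.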
Over the last couple of hours, you've
built a solid DevOps foundation. You
started with what DevOps really is and
why it matters. Then moved into hands-on
skills that every DevOps engineer needs.
Version control for smooth
collaboration. CI/CD pipelines to
automate testing and deployments. Docker
to package and run applications
consistently. Kubernetes to orchestrate
containers, scale apps, and manage
clusters. And then IaC to define and
deploy infrastructure reliably. With
these fundamentals, you now understand
how modern software goes from code to
production, automated, scalable, and
secure. Now, it's just practice building
real projects and layering on advanced
tools as you go. And speaking of
projects, let's put everything together
into action. It's time to actually build
and deploy a scalable API using all the
DevOps practices you've learned so far.
That means that this is not the end.
It's where the real learning actually
begins.
All right, we're finally about to jump
into the big part of this video,
building and deploying our production
ready API. And just a quick reminder
before we continue, DevOps is all about
doing and not watching. So that means
that you got to have the right stack of
tools set up. As I mentioned before, for
the database, we'll be using Neon. So go
ahead and click the link down in the
description and click start for free.
For security, we'll be using ArcJet. Bot
detection, rate limiting, email
validation, attack protection, you name
it, and we get painless security. And then Warp, where we'll be running all of
our commands, shipping code, and even
automating tasks with AI. If you're not
using it, you'll constantly feel a step
behind. And don't forget, the Pro plan
is still just $1, which gives you
unlimited workflows and faster builds.
Once again, the link is in the
description. So, if you haven't yet set
these up, pause for a second, get them
done, and then come back. Once you've
got all three, you'll be able to follow
along seamlessly as we build and deploy
the API. So now the first step is to
create a new GitHub repo. You can call
it acquisitions. We need to create a repo now, as we'll soon need to implement CI/CD pipelines. So it's better to create a
repo from the start. You can just click
create repository and then clone it locally onto your system by copying this link. In this case, I'll be using
WebStorm. So, you can just go ahead and
click clone repository. Paste the URL
and just click clone.
Or you can do it normally using Git. Once
you're there, we need to initialize a
new Node.js project. So, just run ls to
make sure you're in the right repo. I'll
go ahead and expand my terminal as we'll
be spending quite a lot of time within
the terminal, but later on we'll be
using Warp. And I'll run npm init. And you can also add the -y flag to accept all the default options.
Just like that, you'll be able to see a
new package.json, which is the root of our application. Then you can install express by running npm install express, as well as dotenv for environment variables. Once that is done, you'll see
that you'll get a package-lock.json. And I'm
currently hiding the node modules folder
as I don't typically want to go into it.
But it is important that we exclude it
from git so it doesn't get pushed over
to GitHub. To do that, you can add a new
file called .gitignore, and then you can just say node_modules
to exclude it from being pushed over to
GitHub. After that, you want to head
over into the package.json and modify
this application to use the ES6 import
system, which you can do by adding an
additional type property or key and
setting the value to module. This refers to ES6+ modules. Oh, look, type is
already here below. So, we just want to
switch it over from CommonJS to module.
Next, you want to create a new file in
the root of the directory and you can
call it index.js. This will be the
starting point of our application.
Within it, simply import express from
express. Initialize a new application by
setting it equal to the call of the
express library. Then set the port equal
to either process.env.PORT if it exists, or by default we can make it 3000. You can also do 8080, 5000, or
any other number. Finally, we want to
make the app listen on that port. And
once it starts listening, we want to
simply put a console log out saying
listening on port. Now to run this
application, we need to add an
additional script within a package.json
under scripts. For now, I will remove
this test script and instead of it add a
dev script that, when run, will run node --watch index.js. Now this --watch flag tells Node.js to
automatically restart your program
whenever a file changes in your project
directory. Very important. And now if
you go ahead and run npm run dev, you'll see that it runs node --watch index.js and says listening on localhost 3000. Perfect.
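Assembled, the whole starter index.js at this point amounts to something like this sketch (the log text is paraphrased from the narration):

```js
// index.js - starter Express server described above (a sketch)
import express from 'express';

const app = express();

// Use the PORT env variable when set, otherwise fall back to 3000
const PORT = process.env.PORT || 3000;

app.listen(PORT, () => {
  console.log(`Listening on http://localhost:${PORT}`);
});
```

Run it with the dev script and node --watch will restart it on every file change.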
And you can stop it from running by
pressing Ctrl + C. And instead of just
checking out this project locally and
calling it a day by having a single file
from which all of it is running, I
actually want to teach you how to create
a proper production level file and
folder structure. So let's start by
creating a new folder which you can call
SRC or source. Within the source folder,
we want to create a new app.js
file. Then, right next to the app, still
within the source folder, we'll create
another file, which is going to be
called index.js.
And there's going to be a third one
called server.js.
All three of these files will have their
own purpose. The app file is all about
setting up that Express application with
the right middleware, whereas the
server.js is all about running that
server, implementing some logging and
everything else to make sure that the
server is running properly and then
index is just like a starting point. Now
let's create a couple of other folders
within the source folder. The first one
I'll create will be called config. This
is a folder for all different kinds of
configurations.
Then we have controllers which is also
within the source folder. As a matter of
fact, every new folder we create will be
within the source folder because the
source is basically our entire
application. Now, when speaking of
controllers, that has a lot to do with
the model view controller paradigm in
developing backend applications. That's
something we'll dive much deeper into
within our backend course. Alongside
controller, we also have the middleware.
So, create a new folder called
middleware. Middleware are functions
that are run before or after some other
functions that our app does. Maybe
logging functions, so whenever a request
is made, you can see what happened. Or
maybe authentication or verification
actions to make sure that when somebody
tries to perform a specific API action,
the middleware checks whether that user
has the permissions to do so. After
that, another folder that we'll create
will be a models folder defining what our database schemas and models look like.
Next, we'll also have a routes folder
defining our API routes. And you can see
how in my IDE, each one of these folders
has their own icon because these folder
names are actually a convention which a
lot of developers use. After routes, we
can also create services. After
services, we can create utils for
utility functions that's also within the
source folder. And finally, we'll have
validations for all different kinds of
validations within our application.
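Laid out as a tree, the structure described above looks like this:

```
src/
├── app.js          # Express app + middleware setup
├── server.js       # runs the server, logging
├── index.js        # entry point
├── config/         # all kinds of configurations
├── controllers/    # MVC-style request handlers
├── middleware/     # functions run around requests
├── models/         # database schemas and models
├── routes/         # API route definitions
├── services/       # business logic
├── utils/          # utility functions
└── validations/    # request validations
```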
Right now, these are just different
empty folders and meaningless names, but
as soon as we dive deeper into
developing the application and we start
putting actual files within them, it'll
all start making so much more sense.
Now, you want to move the current Express app setup, which is in the root-level index.js, into the app.js. So simply copy it and move it
over to app.js. This is where we're
setting the express application. And
instead of defining the app listen and
the port, we can create a new endpoint
by saying app.get
forward slash. So this is the home
route. We get a request and a response
and we can respond something once the
user triggers this endpoint or reaches
it: res.status(200).send('Hello from acquisitions API'), or just 'acquisitions' is enough. And then we can finally export
default this app. Now we can once again
copy this index.js and paste it over
into server because here we won't have
the actual app.get route but here we'll
actually be listening over to the
server. So we don't have to recreate the
app from express; rather, we just have to import the app from './app.js'.
So now we can see how we're connecting
it together. Now if you head over to the
index.js within the source, you can now import 'dotenv/config' to make sure that we can properly read environment variables. And you can also import './server.js'.
So now if you head back over to your
package.json,
you can modify your dev script to run src/index.js instead of just index.js. And then we can delete this
index.js from the root because we no
longer need it. So now everything we
need is within the source folder.
Primarily we're creating the express
application right here within the app
and then we're also listening to the
server within the server file. What you
can do is maybe even prepend http:// and then add the port and delete these three dots. Oh, and before we run it, make sure that at the end of all the imports, you add the .js extension. In React or Next.js environments that's not necessary, but here with ES6 modules in Node, you have to specify the extension. So once you do this, you can
go ahead and open up the terminal and
run npm run dev. And now it'll say listening on http://. Oh, and looks like I forgot to add localhost.
But now if I add it, you can see that it
auto restarts because of the watch flag.
And now you can just click on this link
within your terminal and it'll open it
up within your browser saying hello from
acquisitions. This means that we have
now created the base file and folder
structure of our running express
application. Congrats. Let's not forget
to push it over to GitHub. And just to
stick with proper programming habits,
let's go ahead and commit this. No
matter how small of a commit it is, the
smaller the better. So, I will rename
this active terminal to app because it's
going to be running our app. And I'll
create another terminal right here
within which we can run additional
commands. So I'll just say git add ., git commit -m "initial commit", and git push. Immediately all the
changes are pushed.
In this lesson we'll implement a step
that is very easy to skip and a lot of
people on YouTube teaching these courses
simply skip over it and that is setting
up and installing ESLint and Prettier.
It is always useful to have it, but even
more so when you combine it with CI/CD
pipelines so it always properly formats
your code. First things first, we got to
install all the necessary dependencies by running npm install eslint @eslint/js prettier eslint-config-prettier (to make them work together) eslint-plugin-prettier, with -D (capital D), which installs these as development dependencies only. When you
install these packages, create a new
file called eslint.config.js.
And within it, we can paste our new ESLint config. Now, this is the config
that I was building over the last couple
of years, but in simple terms, it just
extends the recommended JavaScript
config. And then it adds some additional
rules. You can find it and copy and
paste it from the video kit down below.
Once you add it, we can also add another
file for prettier. Oh, and make sure
that it's not within the source folder,
but within the root of the application.
The Prettier file is also within the root, and it's called .prettierrc. And
within here, we can form an object with
some settings such as whether you need
or don't need semicolons. The trailing
commas, that's the comma at the end
where you don't actually need it but can have it; whether you want to use single quotes or double quotes; and so
on. Feel free to pause the screen right
here and type these out or you know what
I'll also leave it within the video kit
down below. Finally, we can add an
additional file called .prettierignore, and within here you can paste the files and folders that you want Prettier to ignore: node_modules, coverage, logs, drizzle, and package-lock.json. Once you do
that, head over into your source app.js.
And right here, you should be able to
see red squiggly lines telling you that
we have some issues with ESLint, such as
a missing semicolon, inconsistent
spacing, or a missing semicolon here as
well. This means that linting is
working. Now, to lint across all of our
pages, head over within our
package.json, and then let's add a new
script for linting. We'll use this
script later on in our CI/CD pipelines
to ensure our code is formatted
properly. So just add a new lint script and make it simply call eslint with a dot (eslint .). Then you can also add a lint:fix script, which will run eslint . --fix. You can also add a format script, which we'll use with Prettier; it runs prettier --write .. And finally, we'll have format:check, which runs prettier --check . on everything. Now, if you open up your
app.js
and open up your terminal at the same
time and type npm run lint, you'll see
that we'll have 55 errors in this very
small application, mostly due to
indentation errors and missing
semicolons. Then to automatically fix
them, simply run npm run lint:fix. And
as you can see, within a single terminal
command, all of these issues were fixed
automatically across all files. And to
also enforce prettier formatting, you
can also run npm run format.
In this case, it was already good. Or to
check if formatting has been done
properly, you can run npm run format:check.
Perfect.
All matched files use prettier code
style. Wonderful. This means that now we
have all of the necessary scripts that
we'll be able to later on run from our
CI/CD pipelines. So what I'll do is run git add ., git commit -m "implement eslint and prettier", and git push. Perfect.
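For reference, the scripts section described in this lesson ends up roughly like this (the exact script names, such as lint:fix and format:check, are assumed from how they're invoked later):

```json
{
  "scripts": {
    "dev": "node --watch src/index.js",
    "lint": "eslint .",
    "lint:fix": "eslint . --fix",
    "format": "prettier --write .",
    "format:check": "prettier --check ."
  }
}
```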
If you've been following along, you most
likely already have an account on Neon.
But if not, let's create it right now by
clicking the link down in the
description and starting for free or
simply logging in. Once you're in,
create a new project. You can call it
JSM_acquisitions.
And you can choose the region that is
closest to you
and click create. Once you're in, you
can click connect and then copy this
connection string.
Then within your application, create a
new env file in the root of your
application.
And let's add a comment for server
configuration.
So here we can paste all of the
environment variables that have
something to do with the server such as
the port by default set to 3000, node
environment by default set to
development,
and log level, which we can set to info.
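Put together, the .env described here ends up like this sketch (the database section is covered next; every value below is a placeholder, not a real credential):

```
# Server configuration
PORT=3000
NODE_ENV=development
LOG_LEVEL=info

# Database configuration (placeholder; paste your own Neon connection string)
DATABASE_URL=postgresql://user:password@your-neon-host/dbname?sslmode=require
```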
Then we can also do a database
configuration.
And here you can paste the database URL
and make it equal to the string that you
just copied over from neon. But make
sure to remove the psql part at the start if you have it, and also the quotation mark at the end right here. Then we can also update
our .gitignore to ignore all types of different .env files by saying .env and then an asterisk. And then it is always a good idea to also, alongside .env, create a new .env.example
file. This serves the purpose of telling
people which variables they need, but it
won't actually include the sensitive
information. Now let's install Neon by running npm install @neondatabase/serverless drizzle-orm.
So we're installing both Neon as well as
Drizzle to keep our database queries
type safe. And we also want to add one
dev dependency which will be for Drizzle
Kit. So as soon as this is installed,
simply run npm install -D drizzle-kit. And then we are ready to
configure our drizzle config by creating
a new file. It's going to be in the root
of our application and you can call it
drizzle.config.js.
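Here's roughly where that file ends up, as a sketch; the narration walks through each field next:

```js
// drizzle.config.js - a sketch of the Drizzle Kit config described below
import 'dotenv/config';

export default {
  schema: './src/models/*.js',  // where our Drizzle models live
  out: './drizzle',             // where generated SQL migrations go
  dialect: 'postgresql',
  dbCredentials: {
    url: process.env.DATABASE_URL,
  },
};
```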
Start by importing 'dotenv/config'
at the top so we can actually refer to
our environment variables. And then
you'll need to export the configuration
for drizzle which will include a schema
which is actually a path to all of the
models. So that's going to be ./src/models/*.js.
This means we will store the schemas
right here. Then we can choose the
output, which is just going to be ./drizzle. We also declare the dialect of SQL that we're using; in this case, it's going to be PostgreSQL. And we need to
pass the DB credentials. That's going to
be an object where the URL is
process.env.DATABASE_URL. And now if you run that
lint:fix command, it'll fix all the
inconsistencies within this file. But
again we don't really have to worry
about it because later on we'll make
sure that linting is a part of our CI/CD
pipeline. Now let's go ahead and set up
the database by heading over into source
config and create a new file within the
config folder and call it database.js.
Within it, you can import neon as well as neonConfig coming from @neondatabase/serverless, as well as import drizzle coming from drizzle-orm/neon-http. Then we want to initialize the
neon client by saying const sql is equal to neon(), to which you need to pass the database URL, so process.env.DATABASE_URL. And you'll need to initialize the
drizzle by saying db is equal to drizzle
to which you pass this SQL variable. And
finally you export both the database and
the SQL. For now, since the neon config
is not used, we can remove it and bring
it back later on if needed. Now, we can
head back over to our package.json
and update our scripts to add additional
drizzle scripts. We can add them by
saying db:generate; that'll be drizzle-kit generate. And I will duplicate it two more times. Then we will have db:migrate, which will run drizzle-kit migrate. And finally db:studio, which will run drizzle-kit studio. And now I can show you how
we can create a first model in our
application. That model is going to be
for users. So head over into source
models and create a new file called user.model.js.
Within it, you can export const users and make it equal to pgTable, which you need to import from drizzle-orm/pg-core. As the first argument, you're going to pass 'users', that's the name of the table, and then you have to pass the columns. In this case, we can say that a user table needs to have an id, which is going to be a serial ID, and it'll act as the primary key of this table. It'll also have a name, which will be a varchar (also imported from drizzle-orm/pg-core) called 'name' with a max length of 255, and we need to make sure that it is not null.
Now I will duplicate this three more times. For the second one, we're going to be talking about the email. So we can say that email is a varchar 'email' with a length of 255, not null, and this one also has to be unique. After that, we're going to have a password: a varchar 'password' with a length of 255, not null. And finally a role. So I'll say role is a varchar 'role' with a length of 50, not null. And by default we can set the role to just a regular 'user'.
Finally, we can set a created at field
which is going to be a timestamp
coming from pg-core, and it'll default to now: defaultNow().notNull(). And I
will duplicate it so we can also store
the updated at field and let's not
forget to import this serial at the top.
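Assembled, the model described above looks roughly like this sketch (the file name user.model.js is an assumption, as the audio garbles it):

```js
// src/models/user.model.js - sketch of the users table described above
import { pgTable, serial, varchar, timestamp } from 'drizzle-orm/pg-core';

export const users = pgTable('users', {
  id: serial('id').primaryKey(),
  name: varchar('name', { length: 255 }).notNull(),
  email: varchar('email', { length: 255 }).notNull().unique(),
  password: varchar('password', { length: 255 }).notNull(),
  role: varchar('role', { length: 50 }).notNull().default('user'),
  created_at: timestamp('created_at').defaultNow().notNull(),
  updated_at: timestamp('updated_at').defaultNow().notNull(),
});
```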
Perfect. So now we have created a users
table, and the way it works with Postgres databases and Drizzle is that now we
need to generate SQL schemas using
Drizzle by opening up a terminal and
running npm run db:generate.
Once you run that we will have gotten a
new SQL migration file right here under
drizzle. And here you can see that we
just created a new users table. After
that, we'll need to migrate or push the
changes over to Neon DB by running npm run db:migrate. If you do that, you'll be able to see a warning, but everything should have gone through successfully.
So now, if you head back over to your
Neon dashboard and go over to tables,
you should be able to see a new users
table. Perfect. This means that we have
successfully set up a Neon Postgres
database with Drizzle ORM. So let's go
ahead and commit it by saying git add ., git commit -m "setup neon postgres with drizzle", and git push. Perfect.
In this lesson we'll set up logging and
middleware. For that we'll use a super
popular logging library that has over
24,000 stars on GitHub and it can log
just about anything. info, errors,
debugging, and more. So, let's just
install it by opening up our terminal and running npm i winston. Then, we can
set it up by heading over to source
config. And within config, you can
create a new file called logger.js.
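For orientation, the finished logger file ends up close to this sketch, which follows Winston's documented createLogger usage; the narration walks through each option below (the service name is an assumption):

```js
// src/config/logger.js - sketch of the Winston logger described below
import winston from 'winston';

const logger = winston.createLogger({
  level: process.env.LOG_LEVEL || 'info',
  format: winston.format.combine(
    winston.format.timestamp(),             // when the log happened
    winston.format.errors({ stack: true }), // include full error stacks
    winston.format.json()
  ),
  defaultMeta: { service: 'acquisitions-api' },
  transports: [
    // error-level (and higher) logs go to their own file
    new winston.transports.File({ filename: 'logs/error.log', level: 'error' }),
    // everything else is collected here
    new winston.transports.File({ filename: 'logs/combined.log' }),
  ],
});

// Outside production, also log to the console in a colorized, simple format
if (process.env.NODE_ENV !== 'production') {
  logger.add(
    new winston.transports.Console({
      format: winston.format.combine(
        winston.format.colorize(),
        winston.format.simple()
      ),
    })
  );
}

export default logger;
```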
Now, if you head back over to this
GitHub repo, you'll see some
documentation about how to set it up.
So, you can just copy this usage part
where they guide you how to create your
own logger. If you paste it, you'll see
that we first import it. In this case,
they're using the old require. So, what
I'll do is I'll just change it over to
import Winston from Winston.
There we go. And once that is done,
we're basically creating our own logger by calling winston.createLogger. And then we pass over some info,
such as the log level. In this case, if we pass something different from our environment variables, such as process.env.LOG_LEVEL, then we'll use that; else, we'll just use info. But remember, this log level for now is just info anyway. Then
for the format, instead of using the
default JSON one, we'll actually combine
a couple of things. So say combine and
then we can pass over all the different
formats. I will use the timestamp format
so we know when the log happened. We'll
combine it with the errors format by saying winston.format.errors.
And I want to see the whole stack of the
error. So I'll set it to true. And then
only then I want to see the whole JSON
right here. Then we have the default
meta which is going to be the name of
our application. So I can set it to
acquisitions API. And then we have
transports, where you route logs with an importance level of error or higher to error.log. So in this case, we have new
winston.transports.File with a filename of error.log. So here you decide where Winston should create a new file for the error logs. I'll put
them over to logs/error.log
with a level of error. And then for the
other ones, we can just pass them over
to logs slash combined.log.
Then if we're not in production, then
log to the console with the format info
level info message JSON stringified. So
in this case, we're simply saying if the
node environment is not production, then
log something to the console. In this
case, I'll also modify the format by
saying winston.format.combine. I want to colorize it, so we can see it in colors, as well as keep it simple. So I'll say format.simple. I found these two properties to
work the best when logging. And finally
I want to export that logger by running
export default logger. So now if you
head over into source and then app.js,
we can now add that logger to whenever a
user makes a request to forward slash.
So, I'll say logger.info. Make sure to import logger from './config/logger.js'. And I'll say hello from acquisitions. So
now we'll be able to see it not just in
the browser or the return of our API,
but also from the logs. But before we go
ahead and test it out, check out this
top part right here where it says
./config/logger.js.
There's nothing necessarily wrong with
that. That is the file path of where
we're importing this logger object from.
But you can easily make a mistake here.
Maybe if you forget a dot or a forward
slash or if you mess up the path. So
this is called a relative import system.
But I would much rather prefer to use
the absolute import system. So let me
show you how. You know how inexjest,
right? how you're importing different
packages as well. They're just there.
You don't have to search throughout the
entire relative path. Oh, and if you're
importing logger from somewhere else,
you would have to maybe go a couple of
levels deep to get into that file. So,
it's prone to errors. But imagine that
you could just import the logger from anywhere by saying something like #config/logger. This would be pretty cool,
right? So, let me show you how to set it
up. It all has to do by heading over
into package.json. Yep. Again, you can
see how DevOps has a lot to do with
setting up the scripts, but this time
it's not going to be the scripts, it's
going to be the imports. So, right above
scripts, you can create imports,
which is going to be an object. And there we can define #config/* (so, everything within the config folder) and automatically point it to the path ./src/config/*. And now I will
duplicate this for every single folder
that I have. 1 2 3 4 5 6 7 I don't know
how many there are. But we can do the
same thing for every single one. So I'll
repeat it over for controllers. Then we
have middleware. Then we have the
models. There's the routes services
utils.
And look at that. I duplicated it the
exact number of times ending with
validations. So now if you go back to
app.js and you want to use the absolute
import system to import the logger now
you would go ahead and say something
like #config/logger.js.
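The imports map described above, written out in package.json, looks roughly like this; it uses Node's subpath imports feature, where aliases must start with #:

```json
{
  "imports": {
    "#config/*": "./src/config/*",
    "#controllers/*": "./src/controllers/*",
    "#middleware/*": "./src/middleware/*",
    "#models/*": "./src/models/*",
    "#routes/*": "./src/routes/*",
    "#services/*": "./src/services/*",
    "#utils/*": "./src/utils/*",
    "#validations/*": "./src/validations/*"
  }
}
```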
Now I'll show you how Winston does all
of this logging very soon. But just
before we do that let's also install
something known as Helmet. See, helmet.js helps secure Express apps with
various HTTP headers. It also has over
10k stars and is a widely recognized
package. So we first have to install it
by running npm i helmet, and it is a very
lightweight package. So it'll get
installed within a second. Then we can
copy its usage right here. Head over
within the app.js and paste it. You'll
see that we have some duplicates. We're
already creating the app and we're just
importing helmet from helmet.
And then, right below where we initialize the app, we say app.use(helmet()). In this
case, helmet would be considered a
middleware. And alongside helmet, we'll
also set up Morgan. It's a logging
middleware that'll show you details like
the method, URL, status code or response
time whenever somebody makes a request
to our API. Basically, we use it to
monitor traffic and debug requests
easily, especially in development. So,
let's scroll down to its usage. First
you have to install it by running npm install morgan, and then you'll have to
use it. Using it is super simple. You do
almost the same exact thing as before by
first importing morgan coming from
Morgan. Then in this case we'll also
allow our application to pass JSON
objects through its requests by saying app.use(express.json()), and also app.use(express.urlencoded({ extended: true })).
This is a built-in middleware function
in express and it allows you to parse
incoming requests with URL encoded
payloads based on body parser. So
finally, we can now also use Morgan by saying app.use(morgan('combined')), so it runs both in dev and production, plus a stream option where we can define what it'll actually do. So it'll have a write property, and then you
can define a callback function. When it
gets a message, it will simply return
logger.info
and then it'll pass in message.trim().
So in this case, we're actually
combining both our logging library
through Winston and Morgan by passing
over Morgan's messages into our logger.
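That bridge can be sketched in isolation, with a stub logger standing in for Winston just to make the shape clear (the log line below is made up):

```javascript
// Morgan -> Winston bridge pattern described above.
// A recording stub stands in for the real Winston logger.
const records = [];
const logger = { info: (msg) => records.push(msg) };

// This object is what gets passed as the `stream` option
// to morgan('combined', { stream }) inside app.js:
const stream = { write: (message) => logger.info(message.trim()) };

// Morgan calls stream.write once per request with a formatted line:
stream.write('::1 - GET /health 200 12ms\n');

console.log(records[0]); // the trimmed line, now routed through the logger
```

Trimming matters because Morgan appends a newline to every line, which would otherwise produce double-spaced log files.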
And while we're here, let's also set up
a couple more very important pieces of
middleware. I'll open up the terminal
and install them. one by one by running
npm install cors. See, CORS lets your backend decide which external domains can make requests to it. Without it, browsers will block calls from different origins, like a React app on localhost 3000 calling an API on localhost 5000.
Then we also need cookie parser. Cookie
parser will read cookies from incoming
requests and make them available in
req.cookies.
It's super useful for handling sessions,
authentication, and storing small bits
of user data. And finally, express.json,
which we used. This one you don't have
to install. It's already built in. But
basically, it parses JSON data in the
request body and exposes it to you. So, you can access it within req.body. It's
essential for APIs since most clients
send data in a JSON format. So, let's
install them and let's set them up
within our app.js.
You can already see how the app is
growing larger.
Right here after helmet, I will say app.use(cors()). Make sure to import it at the top by saying import cors from 'cors'. And right
here at the end, I'll also say app.use(cookieParser()). Make sure to also import cookieParser right here at the top, from 'cookie-parser', like this. Now,
if you reload your application or just
rerun it on localhost 3000 and just make
a request to it by heading over to
localhost 3000. When you load it, you'll be able to immediately see a
Winston log within the console where it
says hello from acquisitions. That is
this part right here, logger info. It's
also letting us know which service this is coming from, and gives us more
information about the date, the
operating system, and all the other
information. Oh, and also, if you head
over into acquisitions logs, the folder
we created not that long ago, you can
also see the logs created for us by
Morgan, all of the HTTP information is
stored right here, so we can retrieve it
whenever needed. This is super important
when debugging your servers. Perfect.
And with that in mind, you have just
successfully set up logging and
middleware within your backend
application.
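Putting the whole lesson together, app.js at this stage looks roughly like this sketch (middleware order and the #config alias as described in this section):

```js
// src/app.js - sketch of the middleware stack built up in this lesson
import express from 'express';
import helmet from 'helmet';
import cors from 'cors';
import morgan from 'morgan';
import cookieParser from 'cookie-parser';
import logger from '#config/logger.js';

const app = express();

app.use(helmet());                               // security HTTP headers
app.use(cors());                                 // cross-origin requests
app.use(express.json());                         // parse JSON bodies
app.use(express.urlencoded({ extended: true })); // parse URL-encoded bodies
app.use(cookieParser());                         // expose cookies on req.cookies
app.use(
  morgan('combined', {
    // route Morgan's HTTP logs through the Winston logger
    stream: { write: (message) => logger.info(message.trim()) },
  })
);

app.get('/', (req, res) => {
  logger.info('Hello from acquisitions!');
  res.status(200).send('Hello from acquisitions!');
});

export default app;
```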
In this lesson, we'll get started with
implementing the authentication. So,
head over into our source and let's
create our first group of routes. I'll
create a new file, which will be called auth.routes.js.
And within it, you can first import
express coming from express. And then we
can get access to express's router
functionality by saying router is equal to express.Router().
Router allows you to create routes.
You can do it like this: router.post. So you're creating a POST route at /sign-up.
And then as the second parameter you
provide a handler which is a function
that defines what will happen once this
endpoint is reached. So in this case we
can define a new callback function that
will be executed within the block of the
code. You write what's going to happen
but within the first two parameters
you're getting access to the request and
the response. The response has the send
method that allows you to respond to the
user that's trying to access this
endpoint. So here you can tell it
something like POST /api/auth/sign-up response. And now I will duplicate this
two more times. For the second one we
will trigger the signin functionality.
So I'll say the response is coming from sign-in. And for the third one, I'll do sign-out.
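Put together (including the export described next), the route file looks roughly like this sketch; the file name is assumed from how the route group is referred to:

```js
// src/routes/auth.routes.js - sketch of the auth route group
import express from 'express';

const router = express.Router();

// Each handler just echoes which endpoint was hit for now
router.post('/sign-up', (req, res) => res.send('POST /api/auth/sign-up response'));
router.post('/sign-in', (req, res) => res.send('POST /api/auth/sign-in response'));
router.post('/sign-out', (req, res) => res.send('POST /api/auth/sign-out response'));

export default router;
```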
Perfect. Now for this router to work, we
first have to export it by saying export
default router and then we need to use
it within the app.js.
So now, if you head a bit below, right here we have the app.get. You can also add app.use, similar to how we use the middleware, but for routers: by saying all of the routes within this router will start with /api/auth, and then we expose this entire auth routes file, which we can import at the top. So now, when somebody goes to /api/auth/sign-in, they will hit this sign-in route right here.
Alongside using this router, let's also add something known as a health check by adding app.get. It's nothing more than just another endpoint, which is going to be called /health. It'll once again have the request and the response, and within it we'll simply say res.status(200), and we'll send over a JSON object that'll have a status of 'ok'. It'll also have a timestamp of the current date and time; we can even turn it into an ISO string by saying new Date().toISOString(), so it's in a human-readable format. And finally uptime, which is process.uptime(), to define for how long our server has already been up. And alongside the health endpoint, since we just exposed this new router on the auth route group, we can also create another endpoint, app.get, which is not going to be just the forward slash we have right here above; it'll rather be /api. And if somebody tries to go to /api, we'll say res.status(200).json, and we'll send over a message saying something like 'Acquisitions API is running'. Perfect. So now we can test all of these API routes. So let's go ahead and get them tested.
What you can do is first ensure that the server is running on localhost:3000 by running npm run dev. And once it is, you can visit it within the browser and then manually change the URL path to something like /health. But you can only do that for GET requests, because what does it mean to load a website? Think about it: it means just making a simple GET request over to that endpoint. So if you want to test GET routes, you can do that with the browser. But as soon as you have a POST, PUT, or another type of route, then you have to use something known as an HTTP client. There are many different clients out there; there's Postman, Insomnia, but recently I've used the one that I found the simplest: HTTPie. Simply Google HTTPie, go to their web app. Once you're within HTTPie, you can type http://localhost:3000
and we can test it out. You'll quickly see that we get a DNS error saying to check the URL and try again, or, for local networks, to download the desktop app. So in this case, let's go ahead and very quickly install HTTPie on our device. It'll try to open it; if you don't have it, just quickly install it. We once again get a DNS error, but this time, if you just say localhost:3000, so we're hitting our local network, you can see that we indeed get back a response. This is a response in an HTML format, 'Hello from Acquisitions', because we didn't use res.json to send the response; we just used res.send. But if you head over to /api, in that case you'll see that we get back a much nicer JSON-format object saying 'Acquisitions API is running'. And if you head over to /health, you'll see that we get the timestamp and the uptime. Everything is working perfectly. What this also allows us to do is to make a POST request to localhost:3000/api/auth/sign-in. And if you click send, you'll see that we reached the POST sign-in response. So now is our time to get it implemented, so we can then test it out using this HTTP client.
So heading back over to our IDE, you can open up the terminal and install a new package called jsonwebtoken. A JSON Web Token, or JWT for short, is a compact, URL-safe means of representing claims to be transferred between two parties. In simple words, we use them to ensure that our users are signed in and that they are who they claim to be. The way you can implement JWTs is by heading over to source, and we can get started by creating a utility function that'll make it easier for us to use JWTs across the entire authentication process. So create a new file called jwt.js.
And within here you can import jwt from 'jsonwebtoken'. And let's set up our JWT secret by saying const JWT_SECRET is equal to process.env.JWT_SECRET, or, if we don't have anything within our env, fall back to a default like 'your-secret-key-please-change-in-production'. It's very important that in production this is not coming from your local code; it needs to come from environment variables. You can also define another constant for how long that JWT takes to expire, so JWT_EXPIRES_IN, and we can set it to '1d', as in one day. And then we can create this new jwttoken object, which is basically just an object
that has a couple of different methods on it, which we will define. The first method will be called sign, and this method accepts a payload and returns a signed JSON web token carrying the claims about who we are. So we can open up a try and catch block. In the catch, if there's an error, we can use the logger functionality to log that error, saying something like 'Failed to authenticate token', then log the actual error, and we can also throw that error within the application. In the try, we will actually return a signed JWT with the payload that we passed in, signing it with the secret and passing over as options when this
JWT expires. A second part of this jwttoken config, besides the sign, is the verify. So after we have a signed JWT, we also have to verify it with a specific token. I'll also open up a try and catch block. In the catch, we can do the same thing that we did before: we will say logger, 'Failed to authenticate token', log the error, and then throw that error so we can catch it. And then in the try, I will return the verified token: so jwt.verify with the token and the secret. This will only work if we have access to our app's JWT secret. And don't forget that we are already exporting this, so we'll be able to use it very soon. But just before we use it, let's also create helpers for the cookies.
We can do that right here within utils
and create a new file called cookies.js.
And following the same fashion, we can
export a new object called cookies,
which will have a couple of different
methods attached to it. The first one
will be to get the options. It's a callback function with an implicit return. An implicit return means that we don't just put curly braces here, which would open up a function body; rather, we wrap them with parentheses, and that means that we're actually returning this object. Here we can say that it is an httpOnly cookie, to make it more secure. Talking about security, we can define the environment: in this case, process.env.NODE_ENV has to be equal to 'production', and if it is, then we are in secure mode. We will also set the sameSite option to 'strict', as well as a maxAge for this cookie of 15 * 60 * 1000. This is 15 minutes: 15 minutes times 60 seconds times 1,000 milliseconds. Let's also set up a couple
more methods such as the set. How can we
set the cookies? Well, we can set them
by first accepting a couple of things
within parameters. So, we'll be
accepting the response, the name of a
cookie, the value that we want to set to
it, as well as some additional options,
which by default will be set to an empty
object. And then we can say res.cookie, pass the name and the value, and finally we want to spread ...cookies.getOptions() and then append the additional options to it, if we do decide to provide some. Now we want to do a similar thing with a clear. When we want to clear the cookies, we will also be accepting all of these different props: so res, name, and options, where by default these options are equal to an empty object. And then, instead of calling res.cookie, we will call res.clearCookie with a specific name, no value in this case, and we will still be providing all of the options.
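Written out, the helpers described so far look like this (the get accessor discussed next is left out here):

```javascript
// cookies.js — getOptions, set, and clear, as described above
export const cookies = {
  getOptions: () => ({
    httpOnly: true,                                // not readable from client JS
    secure: process.env.NODE_ENV === 'production', // HTTPS-only in production
    sameSite: 'strict',
    maxAge: 15 * 60 * 1000,                        // 15 minutes in milliseconds
  }),

  set: (res, name, value, options = {}) => {
    res.cookie(name, value, { ...cookies.getOptions(), ...options });
  },

  clear: (res, name, options = {}) => {
    res.clearCookie(name, { ...cookies.getOptions(), ...options });
  },
};
```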
Finally, once we have set it or cleared it, we want to be able to access it. So I'll create a get method with a req and the name of that cookie, and we will simply return req.cookies under that specific name. Perfect. Now,
before we test this out, we now also
want to install an additional package by
running mpm install zod. Zod is another
one of those very popular backend
packages. It's actually a TypeScript
first schema validation with static type
inference with almost 40,000 stars.
We'll use it to define schemas with
strongly typed validated results. So
let's define how our signup schema should look. I'll do it right here within the validations folder by creating a new file called auth.validation.js. Within it we can import, in curly braces, z from 'zod', and then export this schema: export const signupSchema is equal to z.object,
and here we can define what the schema
needs to have. We can start with
validating the name field by saying name will be z.string(), and we can trim it. If you want, you can also provide some additional parameters, like min to define the minimum number of characters it should have, or max, something like 255; and then trim typically comes at the end. Let's also continue doing that for the email, by saying email will be z.string().email() with a max of 255; we will lowercase it and we will trim it, in case people left some extra characters. I'll also do the password with z.string(), a min of about 6 and a max of about 128; that's more than enough for our password. We will not lowercase it, since a password can contain uppercase characters, and we will not trim it. Let's not mess with passwords. And finally, the role of a user. I'll set that to be equal to z.enum. An enum is, you know, a set of options that we can choose from, such as either 'user' or 'admin'. And then finally, we will set it to default to 'user'. Finally, we can also have a sign-in
schema, which will be very similar. So I'll say export const signinSchema is z.object. It'll have an email of z.string().email(), lowercased and trimmed, and it'll have a password. If somebody's trying to sign in, they're only using their email and password, so z.string() with a min of one character; we basically just need a password right here. Perfect. So now we have those schemas, which we're exporting so we can use them within our application. And we also have to somehow properly format all of these errors so that they can get sent over to the user.
For that, I'll go ahead and create another utility function within utils, and I'll call it format.js. Within here we can export a new function called formatValidationError, in case we're getting many errors. So this one will accept all of the errors, and then, if there are no errors or no error issues, we will simply return 'Validation failed'. But if there are some errors, and if it's of the type of an array, so Array.isArray (what an interesting way to check whether something is actually an array), then we will map over all of those errors by saying return errors.issues.map, where we get each individual issue, take its message, and once we do, we can join them all together with commas. Hopefully this makes sense. But if it's not an array of issues, just one validation error, we will simply pass it over: so JSON.stringify the errors. So no matter how many there are, we're always going to present them in a single string separated by commas. And now we're ready
to write the logic for creating the user once they sign up. We can do that within controllers. See, whereas within routes you define the actual endpoints, within controllers you define what will happen once those endpoints are reached. So create a new controller called auth.controller.js. The whole goal of a controller is to export a single function that'll do the job when a specific endpoint is called.
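In skeleton form, such a controller looks like this; the signup body gets filled in over the rest of this lesson, and console.error stands in for the Winston logger:

```javascript
// auth.controller.js — skeleton of a controller function.
// `next` forwards errors on to Express's error-handling middleware chain.
export const signup = async (req, res, next) => {
  try {
    // validation, user creation, and token/cookie handling go here
    res.status(201).json({ message: 'User registered' });
  } catch (e) {
    console.error('Signup error', e); // the video logs via Winston
    next(e); // pass the error down the chain
  }
};
```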
So in this case, I will export a new function called signup, which is going to be an asynchronous function that accepts a request, a response, and the next.
I'll talk a bit about what this next
means very soon. And then it does
something. I'll open up a try and catch
block as before. So if something goes
wrong, we can properly log it and handle
it. First things first, I'll turn the
logger on and log an error saying
something like signup error so we know
exactly where it happened. And then we
can provide additional error messages.
And then, specifically, if the error message is equal to 'User with this email already exists', in that case we can return another status by saying res.status(409), which is the exact HTTP status for a conflict, basically what that message says: user with this email already exists, and we'll provide more information to that user. But otherwise, we're simply going to say: hey, take this error and forward it over by using the next functionality. Next is typically used when something is considered a middleware, or when a specific action will be called before another action, so that it passes the torch, in a way: it'll execute some logic and then pass it over to the next function in the chain. It'll make a bit more sense later on once we put the signup where it needs to be. Okay, so
now let's focus on implementing the logic of the signup. First we want to validate the data coming into this endpoint, by saying const validationResult is equal to signupSchema.safeParse, and we're going to pass req.body. This request body will contain the form data that the user typed in when making the request. If validationResult.success is false, so if something went wrong, we will return res.status(400) alongside the following JSON payload: it'll have an error of 'Validation failed', and it'll also have details, within which we're going to use our formatValidationError utility function, and to it we pass validationResult.error. So the user will know exactly what went wrong and what they have to fix. But if everything went right, we can get access to the name, email, and the role that the user submitted through the form by destructuring them from validationResult.data. Then within here, we'll have to call our auth service to actually create an account. And then later on, we can use the logger functionality to simply give us an info message of something like 'User registered successfully', and we can even render their email, so let's make sure this is a template string. Finally, we'll respond with a status of 201, which means 'user created', and I'll pass an additional JSON payload with a message of 'User registered', and we'll pass over the user information. For now, we can pass an ID of 1 (we're faking it) alongside the user's name, email, and the role that got created. Okay, so let's test this
out. If I now head back over to my HTTP client and head over to sign-up, if I make a request, we'll see just the regular signup response. That's because we haven't yet hooked up our controller to our route. To hook it up, it couldn't be any simpler: what you need to do is just remove this callback function, because we no longer need it, and instead simply pass the signup controller. So you're basically saying: once a user hits the auth API sign-up route, call this function. Now we will no longer be receiving that placeholder; rather, we'll get a validation error, 'invalid input: expected object, received undefined', and this is perfectly fine.
Imagine that we just submitted a form on
the front end and it was asking us for
an email, a name and a role and we
basically left everything blank. That's
not how it goes. We have to get that
data from the form and actually pass it
through request body. So let's format.
I'll say that this request body it will
have a name of something like email and
a value of contact atjsmastery.com.
Now if I send it, it'll say invalid
input expected string received undefined
invalid input expected string received
undefined as well. It is referring to
our other two pieces of the form and
that is the name which I was also
missing and finally a password. So if I
pass a password of something like 1 2 3
1 2 3 you'll see that we have
successfully apparently right this is
just fake for now registered a new user.
This is looking good but take a look at
our back end. If you take a look at the
logs you can see that the application
was listening and that a new post
request was made. Now, in this case we were not console-logging the password or doing anything with it, but we might as well have, because it got passed in plain-text format, which means that we need to secure it, and we can secure it by hashing it. For that, there's one package that most people use, and it is called bcrypt. So simply run npm install bcrypt, and then we'll head over into another file.
This time we'll actually create our first service. So head over into source/services and create a new file called auth.service.js. Here we will implement the logic of hashing our password. So simply say export const hashPassword, which is equal to an asynchronous function that accepts the password in plain text, and then it'll open up a try and catch block. In the catch, as usual, it'll log an error; what's happening here is we have an error hashing the password, which shouldn't really happen, and then we can finally throw a new Error('Error hashing'). But if everything goes right, we can return the hashed password by calling await, because hashing takes some time and is asynchronous, and using the bcrypt library: so bcrypt.hash, where you just have to pass the password and a number called salt rounds. How many rounds do you want to hash it for? Typically the default is about 10 to 12, so that's what I'll do. And don't forget to import bcrypt at the top by saying import bcrypt from 'bcrypt'. And this is it. This is a
function that will now hash our
password. Once we hash the password, we also have to create a new user, so let's create it just below by saying export const createUser is equal to an asynchronous function that'll accept an object, which we can automatically destructure to get its name, email, password, and the role, which by default will be set to 'user'. Then we can open up a try and catch block. In the catch we will simply log the error by saying logger, and the error will be all about creating the user, so say 'Error creating the user', and we will throw that error. And in the try, we'll first see whether that user already exists in the database by saying const existingUser is equal to db.select() from the table of users, where, using eq (this is coming from Drizzle ORM), users.email matches the email we're now trying to create the account for, and limit it to one user. Make sure to import this db right at the top, coming from config/database.js. Also make sure to import the users model by saying import, in curly braces, users coming from models/user.model.js. And I
think we're good. Then, if an existing user exists, so if their length is greater than zero, in that case we will throw a new Error saying 'User already exists'. But if it doesn't, we can start hashing the password and creating a new user, by saying const passwordHash is equal to await hashPassword: that's the function we just created above. And then we can get access to this new user by destructuring the array of the response and getting this newUser out of it, and then calling await db.insert into the users table with the following values: the values of name, email, password (but not the password in plain-text format, rather the passwordHash), and then a role. And from this insert, return me the following things: so I'll say .returning with an id of users.id, a name of users.name, an email of users.email, a role of users.role, and a createdAt of users.created_at. I think you get the idea. Oh, I had a typo right here, so let's fix it: name of users.name. And I will actually put this into multiple lines so it's much easier to look at: values in one line and returning in another. Finally, once we create this new user, we can log it out by saying logger.info, 'User with this new email created successfully', and then we want to return that user from this service. These services are just additional utility functions that we're going to call within our controllers.
So now let's head over into our auth controller; that's within auth.controller.js. Here I left some space for this auth service. First we want to call createUser: const user is equal to await... oh, it looks like we forgot to export it. So head over to the auth service, and as you can see, we're exporting these services one by one. So instead of calling an authService object, you can just simply call createUser, and it should auto-import it from services/auth.service.js. To createUser you can now pass everything it needs, such as the name, email, password (which will now automatically get hashed), as well as the role. This data is coming from the validation result, of course. And once we have the user, we can create their JWT: so const token is equal to jwttoken.sign (make sure to also import this), and pass the id of user.id, email of user.email, and role of user.role. Finally, we can take that token and set it into the cookies by saying cookies.set (again, import it from our utility; we created this method on our own), passing the response, the name of 'token', and a value of this token we just crafted using jwttoken.sign. Once we do that, we're successfully logging the user in. And now we no longer have to fake the user ID; we have a real one: user.id, name is equal to user.name, email is equal to user.email, and role is equal to user.role.
Perfect. And as you know we're calling
this controller right here within
routes. So if you head over into routes
you can see we're calling the signup.
And if you make a request to this endpoint, but with proper data; in this case I'll be using just a raw text body, so we can pass a regular JSON object that looks something like this: a name of 'admin', an email of admin@jsmastery.pro, with a password of 123456 and a role of 'admin'. You can now send that request, and you'll get back a 201 saying user registered with a role of admin. And you can also create another one, maybe Adrian, adrian@jsmastery.pro, this time with a role of 'user', and you'll see that the ID will be incremented; this time it'll be saying 2. And for each one of these requests you can also see the response: if you click right here, you can see all the information about it. Take a look at this field called Set-Cookie: you can see that the cookie actually contains the actual JWT. So within our application, we'll know that this user got authenticated. Amazing job. This
basically handles the majority of the
authentication setup. We have the sign
up and user creation. Now we also got to
figure out how to do sign in and sign
out which are of course essential parts
of every good authentication. So we'll
do that soon. Before we dive into
implementing rest of authentication
features and running DevOps tasks on a
realworld app, I wanted to stop for a
second and show you a tool that I've
been using a lot lately. It's Warp, the
fastest way to build and ship
applications using AI right inside a
single environment. It includes a
terminal, a code editor, and an agent
hub. So instead of memorizing commands
or juggling through different docs, you
can just type in what you want in plain
English and Warp will figure out the
rest. Execute the commands and just get
the job done. With Warp, you can chat with any AI, all the top models, including Claude, GPT-5, and more. And it
helps to plan architecture, explain or
review your codebase. You can also write
and edit code in line with smart
suggestions. You can run multiple AI
agents at once to speed up the results
and automate entire workflows. So typing
something like undo last merge actually
rolls back the changes in seconds. In
short, you describe what you want and
warp does the heavy lifting. And the
best part is the developer experience of
having everything together. A terminal,
a code editor, and agents all in one
place. No need to have three separate
browsers, an editor, and 10 tabs open
just to get some output from AI agents.
And talking about pricing, you can try
it all for free. They have a pretty
generous free plan. But if you need
something bigger, for a limited time,
Warp is offering you, yes, you watching
this video, the Pro plan for only $1,
which is normally 18 bucks per month.
So, click the link down in the
description before it's gone and we'll
use it very soon to 10x our
productivity. Let's dive right in. To
get started, click the link down in the
description and download Warp. Once you
download and log into Warp within your
device, you'll see a pretty empty screen
that allows you to create a new project,
open a repo, or clone it. But the real
magic happens right here at the bottom.
Here you have two different modes, a
terminal and an agent mode. Then you can
select any AI agent you want to use for
this specific task. In this case, I'll
go with auto, which picks Claude 4 Sonnet, since it's currently the best option for
coding tasks. Then if you press a
forward slash, you'll be able to see
some options such as adding MCP servers,
adding prompts, rules, and more. Oh, and
there's also a voice input mode. So if
you're more of a vibe coder who prefers
to code by speaking over typing, well,
you can do that, too. Here you also have
some directories. So you can switch
between different repos and attach
additional context from different
folders. So let's cd into the repo we've been working within; it's acquisitions.
And immediately warp is asking us
whether we want to optimize it for this
codebase. And for sure I'll select
optimize. It'll do that by indexing this
entire codebase. And it can also create
its own MD file. So I'll say sure go
ahead. As soon as we enter this repo,
you can see that now it tells us the
version of node we're running, where
this project is located. We can see more
git metadata such as the branch and how
many changes we've made. And we can also
attach some additional context. Now,
while it's doing its thing, I'm going to
open up the warp drive on the left side
right here. You can open it up by just
pressing this button at the top under
personal. Here you have different things
such as the rules you can add, the MCP
servers, and the getting started
notebook. And it's not limited just to
personal projects. You can also have
your entire team join you. And while I
was clicking around, you can see how it
actually opened different windows all
within a single environment. This is the
beauty of this developer experience
using Warp. No longer do you have to
have 10 different apps opened up. It's
just one. So, let me go ahead and create
a new team that I'll be using for this
project. I'll call it acquisitions
since that is the name of our app. And
let's start with JSM acquisitions. Once
you create it, you can start adding some
workflows or notebooks right here. Or
you can press this plus icon and then
start with a new prompt within the video
kit down below. I'll give you access to
the complete prompt that you can type
right here. So we can set it up in the
same way. Let's start by giving it a
title of codebase architect explainer.
And then we can give it a description
saying that this is an AI prompt that
studies any codebase and produces a
clear structured explanation of its
architecture and how it works. And then
you can pass the prompt itself which you
can find within the video kit down
below. Now when you create this codebase
architect explainer prompt,
it'll be added to your team or to your
personal project. Then when you head
back over to your terminal and type
forward slash, you'll be able to see the
list of your different prompts. And one
of those is the codebase architect
explainer. As soon as you type it, it'll
automatically select it. And the only
thing you have to do is simply press
enter. So no need to type in long
prompts manually or scout your past chat
GPT history to find them. Now we can
store them all in one place and
re-execute them whenever you need to.
The first thing that it is doing is
reading the files that we have. And as
you can see, immediately it started
explaining what is currently happening
within our codebase.
Of course, you took the time to build
it, so you already know what's
happening. But of course, seeing an
overview of the project structure, for
example, and of all of the folders we
created is always super useful. we get a
complete breakdown of all of the
components that form the application
together and even the data flow of the
application where a client makes a
request. Then we have the express
middleware to the route handler to the
controller to zod to services and
finally we return the response. So this
is perfect in case you want to study it
in more detail or ask it more questions.
I'm just amazed. I mean, it even gives
us a sample request execution that we
can send over to test this API. This is
great. And this is just one single
prompt that I gave it and it immediately
spat all of this back within this a bit
unorthodox terminal like experience. So
like it's not a typical chat and it's
not a terminal either nor it's an editor
but it's all combined into one and that
feels so familiar to me as a developer.
So let's see what else Warp can do
besides just explaining our codebase and
I think to truly test warp to increase
our speed as developers I want to ask it
to implement what was going to be the
next step for me to do manually within
the application. Remember, we just
created a route for the signup and a
signup controller, but we haven't yet
implemented controllers nor services for
sign in and sign out routes. So, let's
ask Warp to do it for us. Now, check
this out. I will run a clear command to
clear what I'm currently seeing within
my terminal window. And then I'll switch
over to the agent mode. And you can
immediately start typing of what you
wanted to do.
I'll give it some background, such as: you are a back-end developer working on an Express.js app with auth features. Your job is to extend the auth service and controller to support user login and logout. I
think that this even might be enough.
But just to make sure that you can
follow along and see the same exact
output that I'm having. I took a bit of
time to write a bit more detailed prompt
just so I don't have to type it out
manually. I'll give it to you within the
video kit down below. So simply copy it
and paste it right here. I provided it
with a bit more info on what it can do
to make this happen and then saying that
it needs to implement these two
functions. So now simply press enter
like you're running a terminal command
in agent mode. Warp immediately started
warping and told us that it'll help us
extend the authentication service
controller with these two additional
functionalities. It'll do some things on
its own but for some things it's going
to ask us whether it's exactly what we
want. So for example it created this
compare password function and it put it
within services. This is exactly how I
want it to look and this is exactly
where I want it to be. So, I'll gladly
accept the changes by simply pressing
enter. And of course, if you want to
edit something, you can do that by
pressing command E or pressing the edit
button and then edit it and then submit
it and then it'll continue producing the
code leading to the output. Same thing right here for adding the authenticateUser function to the auth service. In this case, it looks like it's actually applying a fix to the code that I wrote that I didn't necessarily anticipate: I forgot to put the await right here before db.select. So this was a crazy catch by Warp. It didn't only do what I asked it to do, which is to add additional features to the auth flow, but it actually fixed a mistake that I made on my own. So I'll definitely apply those changes as well. And only now is it starting to extend it. I'll press enter
a couple more times. And let's see what
it comes up with. It's properly
implementing the schemas as well as the
two controller functions, sign in and
sign out. And finally, it'll update the
routes to use the new controller
functions. That one should really be
quick. There we go. It removed all of
these lines and simply imported sign up,
sign in, and sign out and trigger them.
When we reach those endpoints, it'll now
read through all of those files and
check the implementation. and it says
perfect, I've done it. Gives me an
overview of exactly what it
accomplished, which code it added,
and how it updated the routes, and
finally, which features were
implemented.
Finally, also gives me an example of how
I can test it. So, let's test out the
implementation. I've opened up HTTPie and
instead of sign up, I'll head over to
sign in to see whether that works. And
now I need to sign in with my email and
password. And before I send it, let's
make sure that our application is
running on localhost 3000. And then we
got user signed in successfully, which
means that the sign-in function has been
implemented. And now we can also try the
sign out. It actually told us what we
have to do. No body required. We just
cleared the cookies by making a post
request to sign out. So if I do that
without passing anything within the
body, it says user signed out
successfully. To be honest, I'm just
blown away by not even the simplicity of
using it, but just by developer
experience of having it all within a
single environment. And of course, since
it's connected to our repo, all the
changes that have been made are
immediately visible within our other
editor as well. That's amazing. So now,
if I switch over back to the terminal, I
can clear it. I can run git add ., git commit -m "implement auth", and then push it all from within a
single terminal like experience where 5
seconds ago I was speaking with agents.
Oh, and want to see something else? If I
press right here, I can open up the
project explorer and actually browse my
entire application and all of the code.
And I can then hover over that code to
add it as additional context. and I can
have the files on one side and the
terminal and the agent on the other. It
really feels like a full experience. So
with that in mind, authentication is now
done. And in the next lesson, let me
show you how we can secure it.
Now that we've implemented
authentication with proper logging and
monitoring, it's time to make it more
secure. And for that, we'll use Arcjet.
See, a backend without security, rate
limiting, or bot protection isn't just
unfinished, it's also unsafe. Because
these exposed APIs with no safeguards
are left to abuse, spam, and even DDoS
attacks. All it takes is one bad actor
to overwhelm your system and take it
down. Thankfully, Arcjet has built-in features for Node.js applications that protect us. Arcjet Shield automatically protects your apps against the most common attacks, including the OWASP Top 10. Then without rate
limiting, a single client can flood your
servers with requests and starve out
real users. But with Arcjet, you can
limit the amount of requests that each
user can make. There's also bot
protection so that you can stop
automated scripts that exploit your
endpoints or scrape data or brute force
credentials. And also, you can protect a
signup form by combining all of these
things together. If you think about it,
these attacks aren't edge cases. They're
everyday realities. And making your app
secure is not only smart, but it is
necessary. More than that, it wouldn't make sense not to secure it when it's so simple to do with Arcjet. So,
click the link down in the description
and let's set it up together. Sign up
for free. You can go with GitHub. As you
can see, I already have a couple of
projects that I'm hosting over on Arcjet.
So, I'll create a new site right now.
You can name it something like JSM and
we're going to use the acquisitions or
you can call it DevOps as well and
create. Now, immediately you're given a key that you can copy. So let's do that first and add it over to your .env right below. You can add a comment calling it Arcjet and add ARCJET_KEY equal to this key right here. And then we can follow the
setup for node and express. So just
click right here and it'll give you
instructions on how you can set it up
within 5 minutes. We have already
installed express. So what we have to do is install @arcjet/node and @arcjet/inspect.
Now it'll be up to you whether you want
to keep using your current IDE or editor
or you want to switch over to warp and
run everything there. Both ways are
totally fine. I'll proceed with warp
just to see how it all works. So I'll
clear everything and install the
necessary packages, @arcjet/node and @arcjet/inspect. While that is installing, the
next step is to of course set our key
which we have already done and then add
some configuration rules. So go ahead
and copy this file from the
documentation and then open up your
project explorer. Within here you can
head over to config and create a new
file. Call it arcjet.js.
Within this file you can paste what you
just copied. But we don't have to set up
a new app right here. We just need to
set up an instance of Arcjet with our own rules. So that's going to be import arcjet, shield, detectBot, and slidingWindow, which is one of the rate-limiting methods. And then we don't need this isSpoofedBot helper or express. While setting up
our instance, we first need to pass over our key coming from process.env.ARCJET_KEY. And then we can start defining the
rules. First of all, we have this shield
rule which protects your app from the
most common attacks such as SQL
injections which is super useful in this
case and I will leave it live. Then we
have the bot detection rule where you
can also say live or you can also use
dry run to log only and then we're
specifying which bots should we allow
because not all bots are necessarily
malicious. Maybe you want some bots
within your application such as for
monitoring and so on. In this case,
we're allowing the search engine bots
such as Google or Bing to crawl our
application so we have good SEO. This is
incredibly important. And we can also turn on link previews, for example for Slack or Discord, so I'll allow the preview category as well. Finally, instead of
using a token bucket rate limiting
algorithm, we're going to use a sliding
window. The way it works is you say
sliding window. And then you can define
the mode of that window. I'll set it to
live. And then we can set the interval
to 2 seconds, which will refill the
sliding window every 2 seconds. And
it'll allow for a max of five requests
per interval. For example, you can
increase this interval to maybe 1
minute. So you allow five requests per
minute. It's totally up to you. Now,
since this is just the configuration
file for ArcJet, in this case, we don't
necessarily need to have this request
right here. We're going to do that later
on from within our application or we'll
add it as part of the middleware. So, I
will just delete this part from here.
And instead, I will simply export AJ as
this new instance of the ARJet
configuration that we've just created.
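Put together, the config file described above might look roughly like this. This is a sketch based on the Arcjet Node quick-start; the option names follow the @arcjet/node docs, so double-check them against the version you install:

```javascript
// config/arcjet.js — sketch of the Arcjet instance described above.
import arcjet, { shield, detectBot, slidingWindow } from '@arcjet/node';

const aj = arcjet({
  key: process.env.ARCJET_KEY, // set in your .env
  rules: [
    // Protects the app from the most common attacks, such as SQL injection.
    shield({ mode: 'LIVE' }),
    // Detect bots, but allow search engines (for SEO) and link previews.
    detectBot({
      mode: 'LIVE',
      allow: ['CATEGORY:SEARCH_ENGINE', 'CATEGORY:PREVIEW'],
    }),
    // Sliding-window rate limit: at most 5 requests per 2-second interval.
    slidingWindow({ mode: 'LIVE', interval: '2s', max: 5 }),
  ],
});

export default aj;
```

This is configuration only; the request protection itself happens later in the middleware, which is why the sample request from the docs gets deleted here.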
And then we can move over to creating
security middleware by heading over to
source middleware and creating our first
middleware which will be called security
middleware.js.
Within here we can now import this
instance of arcjet that we created
coming from config/archjet.js
and we can create this security
middleware which is going to be just an
asynchronous function that accepts the
request the response and then the next.
So we can forward it over to the next
function in the chain. That's what the
middleware functions are for. And then
I'll open up a new try and catch block.
In the catch, we can just log the error by saying console.error with "Arcjet middleware error" and the error itself. And then we'll also send a res.status of 500 with JSON containing an error of "Internal server error" and a message saying something went wrong with the security middleware. But if
everything is going well here we can set
up all the different limits and all the
different security measures that we want Arcjet to implement. So first things first, we've got to figure out which user we're working with. Is that user a guest, an admin, or a regular user? So we'll say const role = req.user?.role, and by default we can set it to guest, in case they're unauthorized.
Then I'll create two empty variables of
limit and the message that we want to
display. And then I'll open up a switch
statement that's going to change the
limit and the message depending on the
role. So if the role is set to admin in
that case we're going to set the limit
over to 20 requests
and we'll set the message to be equal to
admin request limit exceeded 20 per
minute slow down. Now we can also add a
break statement right here to end this
case and we can duplicate it two more
times below.
Okay, just like this. For the second
case, we're going to have the user,
we're going to limit them to 10 requests
and then we'll say user limit exceeded
that is about 10 per minute also slow
down. And finally, the guest will be
allowed five requests per minute. So
this automatically tells you how
extensible the security measures that
you implement within your applications
are. You can give specific types of
users more requests. If your API is
paid, maybe you can have different tiers
and then if somebody's paying more, you
can actually charge more money for more
requests per a specific window of time.
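The role-based switch described above boils down to a small pure mapping. You could also extract it into a helper like this; the function name is my own, but the numbers match the limits mentioned in the lesson:

```javascript
// Hypothetical helper: map a user role to its rate limit and message.
function getRoleLimit(role) {
  switch (role) {
    case 'admin':
      return { limit: 20, message: 'Admin request limit exceeded (20 per minute). Slow down.' };
    case 'user':
      return { limit: 10, message: 'User request limit exceeded (10 per minute). Slow down.' };
    default: // guest or unauthenticated
      return { limit: 5, message: 'Guest request limit exceeded (5 per minute). Slow down.' };
  }
}
```

Inside the middleware you would then call it as `const { limit, message } = getRoleLimit(req.user?.role ?? 'guest');`, which keeps the tier logic in one place if you later add paid tiers.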
After that, we can define a new Arcjet client by saying aj.withRule, and we'll provide a rule of slidingWindow and pass in a mode of live, an interval of 1 minute, a max which is the limit defined above, and also a name for that rule, which is going to be set to a template string of the role followed by rate-limit. So this is the new rule that we're applying. And we can
even take it a step further by adding
logging for when Arcjet figures out someone is a bot. This decision comes from client.protect: as soon as we try to protect this request, we say const decision = await client.protect(req) for the request that we're trying to make. So if that decision is denied, and if decision.reason.isBot() is true, meaning they're obviously a bot, then we can use the logger functionality by
importing the logger at the top. So we
can say something along the lines of
import logger from '#config/logger.js'.
And we also need to import this sliding
window algorithm by saying import { slidingWindow } from '@arcjet/node'. So now if we are denying our
request because of a bot then we can use
the logger.warn
feature and we can say something like
bot request blocked and then provide
additional information such as the IP address of req.ip, the user agent of req.get('User-Agent') so we know which user agent tried to make that request, and then the path that this request was trying to reach, which is going to be req.path. And then of course, since we have declined them, we can also return a res.status of 403
with the following JSON message error of
forbidden. We are blocking you. And we
can also add a message saying something
like automated requests are not allowed.
Perfect. It is that easy to implement
bot detection. Now we can duplicate this
request down below including this logger
warn and the return statement. And this
time instead of checking for bots, we
can check if it's denied but if the
reason is shield. So this has to do with
the 10 most common attacks. In this
case, I'll say shield blocked request
and we'll say the same thing. Give me
the IP address, give me the user agent,
give me the path. And in this case, we
can also try to log the method that the
user is trying to make. So that can either be POST, PUT, DELETE, GET, and so on. And finally, I will do it one more
time by once again copying this bot
detection. And instead of checking
whether it's a bot, I'll check whether
it's a rate limit. Remember the limits
above. So in this case, we'll say rate
limit exceeded. And we will log the same
things again. And instead of saying
automated requests are not allowed for
the shield blocks, we can say something
like request blocked by security policy.
And then for the third one, we can say
something like too many requests.
Finally, if we pass all of these if
statements, that means that we are
allowed to make a request. So we can
finally just say next. This is good.
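Assembled, the whole middleware described in this lesson might look something like this. It's a sketch that assumes the @arcjet/node decision helpers and the # import aliases used in this project, so verify the details against the Arcjet docs and your own file layout:

```javascript
// middleware/security.middleware.js — sketch of the flow described above.
import aj from '#config/arcjet.js';
import logger from '#config/logger.js';
import { slidingWindow } from '@arcjet/node';

const securityMiddleware = async (req, res, next) => {
  try {
    // Figure out which user we're working with; default to guest.
    const role = req.user?.role ?? 'guest';

    // Role-based limits: admins 20/min, users 10/min, guests 5/min.
    let limit;
    let message;
    switch (role) {
      case 'admin':
        limit = 20;
        message = 'Admin request limit exceeded (20 per minute). Slow down.';
        break;
      case 'user':
        limit = 10;
        message = 'User request limit exceeded (10 per minute). Slow down.';
        break;
      default:
        limit = 5;
        message = 'Guest request limit exceeded (5 per minute). Slow down.';
    }

    // Extend the base instance with a per-role sliding-window rule.
    const client = aj.withRule(
      slidingWindow({ mode: 'LIVE', interval: '1m', max: limit, name: `${role}-rate-limit` })
    );

    const decision = await client.protect(req);

    if (decision.isDenied() && decision.reason.isBot()) {
      logger.warn('Bot request blocked', { ip: req.ip, userAgent: req.get('User-Agent'), path: req.path });
      return res.status(403).json({ error: 'Forbidden', message: 'Automated requests are not allowed.' });
    }

    if (decision.isDenied() && decision.reason.isShield()) {
      logger.warn('Shield blocked request', { ip: req.ip, userAgent: req.get('User-Agent'), path: req.path, method: req.method });
      return res.status(403).json({ error: 'Forbidden', message: 'Request blocked by security policy.' });
    }

    if (decision.isDenied() && decision.reason.isRateLimit()) {
      logger.warn('Rate limit exceeded', { ip: req.ip, userAgent: req.get('User-Agent'), path: req.path });
      return res.status(403).json({ error: 'Forbidden', message: 'Too many requests.' });
    }

    // Nothing to block, so let the request continue down the chain.
    next();
  } catch (e) {
    console.error('Arcjet middleware error', e);
    res.status(500).json({ error: 'Internal server error', message: 'Something went wrong with security middleware.' });
  }
};

export default securityMiddleware;
```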
This middleware did what it's supposed
to. It tried to secure the application.
There's nothing to secure it from. So go
ahead and make the request. So now back
within our app.js, we can add another
middleware. Right here above all of our requests, I'll add app.use(securityMiddleware) and make sure to import it right at the top by saying import securityMiddleware from '#middleware/security.middleware.js'.
And I think now you get a better idea of
what these middlewares are. So these are
all the different types of middlewares
that we're injecting in between our
requests that extend our app with some
additional functionality such as this
one that is applying security to our
application. Now to test it out, head back over to HTTPie and let's try to make a GET request to /health. So if you send it out,
we get a connection refused. That's
fine. Let's not forget that we can
immediately open up a new tab, make sure
that we are in the right repo, and then
just run npm run dev to spin up our
application. Now, it looks like I have a
typo right here that is in security
middleware. So, if I quickly head over
to security middleware, we can fix it in
one go. It is on line 27. It looks like
I had one extra curly brace. And then we
also have to close this function right
here after we close the catch. Now if we
head back over to the Arcjet config file, same thing here. We have one extra curly brace, and back within our app
we have to properly import security
middleware and we are properly importing
it but we forgot to export it from here.
So just do export default security
middleware and now we are running and
listening on localhost 3000, and as you can see, Arcjet is listening. So back
within our application, if you try to make a request to /health, you'll see that everything is good. And now if
you rapidly make requests to health,
you'll see forbidden too many requests.
This means that you've just successfully
added rate limiting to your application.
And if you head back over within your
terminal, you'll be able to see
something like this. All the infos are
here, but we also have different
warnings from our logging system that say rate limit exceeded. We can see
the IP, the path, the service, and the
user agent, in this case, HTTPie. Oh, and
if you head over within your Arcjet dashboard, you'll see that at first it allowed a couple of requests over to /health. But then as soon as we
hit the rate limit, it starts denying
them, saying that the limit is a max of
five. At the same time, it also checked
out all the different rules that we set
for the rate limiting and bot protection
with specific allowances. But in this
case, we did not break any bot
protections. But just so you know that
you are protected. So if somebody tries
to hack your application, make some
malicious requests or just tries to use
bots to scrape your API, ArcJet will
protect you. That's it. It was super
seamless to add security to our
application. So let's go ahead and commit it by saying git add ., git commit -m "secure our API with arcjet", and push it. Perfect. Now you know how you
can secure any API that you create in
the future and you can do the same thing
for different frameworks such as Next.js.
In this lesson, let's dockerize our
application. You've learned a lot about
how Docker works within the crash course
part of this course, but now we'll
actually dockerize our application for
local and production environments so
that you can run it within any
environment and it works for everybody
who's running it. We've already talked
about what dockerization is. So now
let's leverage AI to help us implement
it within this project. In the video kit
down below, I'll provide you with a special dockerization prompt. Simply copy
it and paste it within warp and then
let's go through it together. It says
you're a senior DevOps engineer. Your
task is to dockerize my application that
uses a neon database. The setup must
work differently for development and
production. Development environment is
local and we want to use Neon local via
Docker. Configure docker compose to run
neon local proxy alongside my
application. And again, you can learn
more about Neon Local right here. That's
the beauty of these agents. If something
is not working properly, you can just
provide it access to a link so that it
knows how to work with it. Again, your
prompts typically are going to be much
shorter and much less precise than this.
But just to ensure that we have more or
less the same output that we're getting,
I decided to take a bit more time
writing this prompt. So, let's go ahead
and run it. It says, "I'll help you
dockerize your acquisitions application
with neon database support. Let me first
analyze your current project structure
and then create the necessary docking
configuration." Then it walks us through
the tasks that it created for itself.
The first step will be to create a
Dockerfile for the Node.js application.
Immediately, we get back a Docker file
for Node.js acquisitions application
with a multi-stage build that works for
both production and development. What
it's doing here is creating a new
nonroot user for security. Then we're
changing its ownership and exposing the
port. There's even a health check right
here. And then we repeat the same thing
for development stage. Now you can see
for development right here the command is npm run dev. But for production it is manual; it's just pointing to node src/index.js. Instead of that, we can head back over to our code, head over to package.json, and like we have added a dev script right here for development, we can also add a start script for production that'll simply be node src/index.js.
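The scripts section of package.json would then contain both entries, roughly like this. The dev script shown here is just one common way to watch for changes; keep whatever dev script your project already has:

```json
{
  "scripts": {
    "dev": "node --watch src/index.js",
    "start": "node src/index.js"
  }
}
```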
This way we're watching for the changes
in development. But when you run it on a
server, you simply want to run it once
and let it run. Now we can edit this
file provided by Warp and simply change this over from node src/index.js to simply npm start, and apply changes. And now it'll
continue doing its thing. Now we've
gotten the docker compose file, and it
looks like it has the configuration for
neon local, the environment, volumes,
health check, network, and then the
node.js application right here. It's
using all of the best Docker practices
to create the files that will dockerize
the application for us. So for the time
being, let's go ahead and accept it. We
might need to make some small changes or
adjustments to it later on, but for now
it's perfect. It says that it is right
now on a step three out of eight. And
now it created another docker compose
for production. Note that in here it is
using the neon cloud database whereas
for development it used Neon Local. Neon Local is a feature by Neon which allows you to use docker environments to
connect to neon and manage branches
automatically. In simple words, it's a
proxy service that creates a local
interface to your Neon Cloud database.
So, let's go ahead and accept this
production database, too, or at least
the configuration for it. And now, it'll
create the environment configuration
files. Here we got just some more env variables. It created them under .env.development.
So, let's go ahead and accept them. For
sure, we'll have to add our own ENVs
here later on anyway. And it did the same thing for .env.production. Now it'll
automatically update the package.json with docker scripts. Now this is a lot
of docker commands that we have to run
manually. So in this case I won't go
ahead and accept these changes since
this is a DevOps course. What we'll do
instead is together we'll develop a bash
script that will run all of these
different Docker commands in sequence so
that we can automate them which is the
whole goal of DevOps. So for the time
being just go ahead and cancel this file
right here. And now since we canceled it
we can just ask it to continue. So I'll
just type continue and we can go ahead.
Since I cancelled the step of updating package.json, it's trying to do it again.
So this time instead of canceling I'll
go ahead and manually remove these
additional commands that it added and
just click done. Oh and make sure to
remove the comma after the last command
that you have right here. Oh and the
extra curly brace as well. Perfect. So
now the only thing that we changed is
adding the start script. And now we can
apply the changes. As I said, we'll be
implementing all of these different
Docker scripts within our custom bash
script. So more on that soon. Now it'll
create a simple Docker ignore file which
we can accept. And like any good
engineer, it'll give you a comprehensive
documentation for the entire Docker
setup that it just implemented. And
there we go. Here's a complete docker
setup.md. So we can see all the changes
in markdown. It talks a bit about
development and production environment
more about the prerequisites to set us
all up. It told us what it did, what we
still have to do by adding our env to
make it work, how to start up our
development environment, and exactly
what it will do. Same thing for
production and more on using Neon Local
to run it all together. At the end, it
even provided us with a quick start
checklist so we know exactly what we
have to do to make it work. Let's go
ahead and apply those changes and see
what Warp has to say next. Oh, it looks
like it even provided a quick setup
script that we can run to make our
process easier. I'll definitely go ahead
and accept that. It's also asking us to
run it, but for now, I will just go
ahead and cancel it so we can go through
the changes that it implemented on our
own. The next step is to get all the
necessary env. The first environment
variable we can get is going to come
from Neon. And you can get it by heading
over to the top right, pressing your
profile photo, heading over to account
settings, and then switching over to the
API key section where you can create
your own personal API key. You can give
it a name. In this case, I'll call it
JSM acquisitions.
And click create. Then copy it. Head
back over to your application and go to .env.development. You should
be able to see two different files, one
for production, one for development.
Thankfully, Warp left some nice comments
right here saying that this is the
development environment configuration
used when running application with
Docker Compose in development mode with
Neon local proxy. Okay, so we have some
things that we have specified before. We
also have our database configuration
right here. And what you'll have to do
here is put the Neon API key right here
that you just copied as well as a Neon
project ID and a branch. So we just got
the API key. Now to get the additional
things, you can head back over to your
project. Then head over to the settings.
And right here you'll see the project
ID. Simply copy it and paste it over
here under Neon project ID. And the last
thing is the branch name or the branch
ID. This one you can get if you head
over to branch overview and you can copy
this branch ID right here and simply
override the main branch. Oh, and let's also not forget the Arcjet key. I believe that one was previously in just the .env. So you can just copy the ARCJET_KEY from there and paste it over at the bottom, with a comment calling it Arcjet, and just specify the ARCJET_KEY right here. Now a cool trick
or a concept in Docker is that instead of specifying the whole env line by line in a docker compose file, you can just point it to the respective file. So head over to docker-compose.dev.yaml and modify it to point to .env.development. So here, where we have our environment variables, instead of that you can simply say env_file and point it to .env.development, and now it'll get access to all the environment variables. This was done for
neon local which I will collapse right
now. And then below you have the NodeJS
application and you can repeat the same
thing. So here we also have the environment section, and what you can do is say env_file and then specify the path of the file as .env.development. So now we have updated our
environment variables for both services
neon local as well as the app. And we
can repeat the same thing for
production. So if you head over to docker-compose.prod.yaml, you can also remove the environment variables and just say env_file with .env.production, and in this case you don't have to do it for Neon Local. Oh, but make sure that this says env_file instead of environment. Perfect. Now, do you
remember how Warp generated a set of
Docker bash script for us not that long
ago? We have it here, but it's pretty
detailed and maybe a bit too long. And
maybe this one differs from the one that
I generated for you. For that reason, I
want to make sure that we have the same
bash script that we can run. So, head
over to the video kit down below, copy
the bash script there, and paste it
right here. You'll notice that this one
is much shorter. And now we can go
through it together, and I can explain
how it works. This is a development
startup script for the acquisitions app
with neon local. This script will start
the application in development mode with
neon local. We already covered most of
these commands during the crash course
part such as echo which is basically you
can think of it like an alert or a
console log in a bash language. It just
says what is happening. Then we have a
check for the development environment
variables and whether they exist. If
they don't, we can just say, "Hey, this
is not found." Then please copy them and
then proceed. Once again, we're checking
if Docker is running. If everything is
good, we're creating a new neon local
directory if it doesn't already exist.
And then we're adding it to get ignored
if it's not already present there.
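The checks just described might look like this in shell. This is a sketch with hypothetical names; the actual video-kit script will differ, and the final step that builds the containers is only shown inside main:

```shell
#!/bin/sh
# scripts/dev.sh (sketch) — helpers for starting the app in dev mode with Neon Local.
# Hypothetical structure; the real video-kit script may differ.

# Succeed if the given env file exists.
check_env_file() {
  [ -f "$1" ]
}

# Succeed if the Docker daemon is reachable.
docker_running() {
  docker info > /dev/null 2>&1
}

# Append a line to .gitignore unless it is already present.
ensure_gitignored() {
  grep -qx "$1" .gitignore 2>/dev/null || echo "$1" >> .gitignore
}

main() {
  echo "Starting acquisitions app in development mode with Neon Local..."
  check_env_file .env.development || { echo "Error: .env.development not found" >&2; exit 1; }
  docker_running || { echo "Error: Docker is not running" >&2; exit 1; }
  mkdir -p .neon_local
  ensure_gitignored ".neon_local/"
  # Build and start the development containers:
  docker compose -f docker-compose.dev.yml up --build
}

# Run everything with: main "$@"
```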
Finally, we're building and starting
development containers so that the neon
local proxy will create an ephemeral
database branch and application will run
with hot reload enabled. Ephemeral means
that it is a temporary data store that
exists only for a short period of time
and then the data will be lost, shut
down or terminated. Keep in mind the
data here is not persistent. The goal
here is only to hold it for some time
and then application will run with hot
reload enabled. We then run migrations
with Drizzle to make sure to apply the
latest schemas. Then we wait for the
database to be ready. We use a docker
compose command and then we start a
development environment. Finally,
development environment, if everything
went well, should have started on
localhost 5173. So, let's head over to
the package.json here. Let's add a
command to execute our new bash script.
Remember when we had all of the other
commands right here? Now, it's going to
be just one. I'll call it dev:docker. And it'll simply run sh setup-docker.sh.
Make sure to have sh at the start or
whatever the name of your sh script is.
Maybe it's going to be something
different. As a matter of fact, just so
we all have it the same, let's actually
create a new folder in the root of our
application and let's call it scripts.
Then move this script over to the
scripts
and simply rename it
to dev.sh.
This will be our Docker dev command. So change it here as well: sh scripts/dev.sh.
And now before we run this command, make sure that Docker Desktop is running and then run npm run dev:docker. You can see
that it'll start executing the commands
from the bash script such as pulling the
changes from neon local and then
building out our entire application. And
there we go. Just like that, it
successfully built the acquisitions app
and then created the network and two
different containers for our development
environment. No issues and immediately
the app is running within a Docker
containerized environment. Typically,
this would take hours to set up, but
with proper understanding from the crash
course part and a bit of help from AI,
we were able to get it up and running in
a couple of minutes. And immediately we
got a message that our application is
listening on localhost 3000. So let's
just open it up.
And you can indeed see that it is
running. You can visit the root route or
head over to API endpoint where you get
the acquisitions API is running. Even
the health check should work. Perfect. Of course, testing it with HTTPie as well
will do the trick. So if you try to hit localhost 3000, you'll get a successful
response. This is not running from our
terminal or from warp terminal. This is
running directly from a docker
container. But now let me show you
something else. Instead of just making a
get request to hello acquisitions, if we
head over to 3000 API sign up and try to
create a new account. So I'll head over
to a text and I'll pass over all the
important information such as the name.
I'll go with something like Adrian, and we are in JSON, so double-quoted strings all the way. I'll enter my email; I'll make it a bit different this time: contact@jsmastery.com. And let's not forget about a password as well; I'll do 123123. Oh, and I forgot to add auth before the sign up. Now the reason we got this
error right here, it's mentioning a
failed query. But basically what it's
saying is, hey, these users don't exist.
I cannot find this table or the schema
for the users. And the reason for that
is we have to reconfigure our database
connection so that it actually points to
neon local for our local development
purposes. You can do that by heading
over to source config database.js.
And then here we'll have to do setup for
neon local by checking if process.env.NODE_ENV is set to development. In that case,
we'll do some further neon config which
you can import from neon database
serverless by fetching the endpoint that
we're trying to reach. And in our case,
we're setting it to well, you can see
what docker config is saying. If you
head over to docker compose dev under
neon, you can see that its port will be
5432.
So we can set neonConfig.fetchEndpoint to http://neon-local:5432/sql. We can also set neonConfig.useSecureWebSocket to false, and we can set neonConfig.poolQueryViaFetch to true.
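In code, the Neon Local branch of the database config might look roughly like this. It's a sketch; the neonConfig property names come from the @neondatabase/serverless package, so verify them against the version you have installed:

```javascript
// config/database.js — sketch: route the serverless driver through Neon Local in dev.
import { neon, neonConfig } from '@neondatabase/serverless';

if (process.env.NODE_ENV === 'development') {
  // Point the driver at the Neon Local proxy container instead of Neon Cloud.
  neonConfig.fetchEndpoint = 'http://neon-local:5432/sql';
  neonConfig.useSecureWebSocket = false; // the local proxy speaks plain HTTP
  neonConfig.poolQueryViaFetch = true;   // run pooled queries over fetch
}

const sql = neon(process.env.DATABASE_URL);

export { sql };
```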
I found these values in the Neon
documentation for setting up the Neon
local environment. And that's it. So now
if you head back and retry the request,
there we go. The user got registered,
which is amazing. ID number three, Adrian, contact@jsmastery.com.
This is great because it seems like we
just created our third user and
everything is connected to our original
DB. But if you head back over to Neon
and reload your tables, you'll see there
are only two users that we created
before. So, it seems like our database
is not connected. Or is it? Actually,
we're connected to the local instance
using Neon local docker connection so
that we don't mess with production
databases. This means that all the
changes that you make right here for our
local environment are going to stay
local and it's impossible to break
anything in production exactly how it
should be in real applications. Now, how
would you go about actually dockerizing
this for production? Well, there's
another script that we can add. So if
you head over to root scripts and create
a new script which you can call prod.sh,
you can find this script within the
video kit down below. It's very similar
to the dev script almost exactly the
same, but instead of using neon local,
it's using a regular database
connection. We won't go ahead and run
this right now because it's going to
work in the same way that the dev one
did. But before you're running this
later on, just make sure that your
environment production variables are
also properly set. Here you'll put your real database URL that we previously kept within our original .env.
And you can also add the arjet keys,
jwts, and so on. And then you're almost
ready to run it. You just need to add it
to your package json by saying proder.
And then you're going to run this
different command sh/scripts
slpro.sh
and you're ready for production. I mean
the fact that we implemented
dockerization to our application so
quickly and in such a simple way is just
crazy. We just needed a good prompt and
we let Warp handle the rest. So I'll definitely go ahead and commit this over to GitHub right now by running git add ., git commit -m "implement dockerization", and git push. Once you understand the theoretical concepts and the reasons why we do the things we do, then it's very simple to turn it into a clear prompt with
detailed steps and warp and its army of
agents will do the rest. It's that
simple. You focus on architecting and AI
will take care of the implementation.
Now that we've dockerized our
application, let's implement all the
user routes and all of the CRUD
functionalities regarding users. We can
start by creating a new file right
within source routes. And then within
routes, we can create a new file called
users.routes.js. And then, as with the auth routes, we can create a new router by saying const router = express.Router(), and then we can define a router.get to get all the users. So once somebody hits this route, we'll get a request and a response, and we will just run res.send('GET /users'). I will duplicate it
three times. And let's not forget to
export this router. Now for the second one, I'll do a get request, but not to /users; rather, to /users/:id,
which means that this will give us the
details of a specific user. Then we can
do a put request so that we can modify a
user profile. And we of course have to
know which user we're modifying. So that
again is going to be the ID property,
but this time a put request and finally
a delete request to a specific user ID.
So we want to delete a specific user.
Perfect. So now that we have those routes, let's head over into our app.js, and alongside mounting the auth routes, we can also mount the /api/users routes. And that's going to refer
to the user routes coming from the file
we just created. Perfect. Now, we'll
also need to create some additional
services to fetch and create all of
those users. So, head over into source
services and create a new service file
called users.services.js.
And we'll need to define the function
that'll give us back all the users.
That'll look something like this. Export
const get all users is equal to an
asynchronous function that will have a
try and catch block. In the catch, we
will of course just use the logger
functionality to log the error something
like error fetching all users or error
getting users. And then we can also
throw this error. And then in the try
we'll actually fetch all users by saying allUsers = await db (coming from the database.js file) .select(). We want to select specific fields from the database, and specifically we want to get them .from() the users table, or the users model. So simply import that users model, and then within the curly braces of select you can pass which fields you want to get back, such as id: users.id, then the name and the email; we can also get the role with users.role, and the createdAt and updatedAt timestamps. These are all the fields that we need, and once we get those users, we can simply return them. So I'll say return allUsers.
Perfect. In this case, it's saying that
this local variable is redundant. So,
you don't even have to put it in a
variable. What you can just do is add a
return statement right here because it's
the same thing. We're just returning the
output of this DB select, which are the
users. Perfect. And now we need to
create a new controller for the users.
So, head over to controller and create a
new file called users.controller.js,
and let's create a controller that will
fetch all the users using the service we
just created. Once again, I'll create a
new function export const get all users
is equal to an asynchronous function
that has access to a request a response
and the next function I'll open up a try
and catch block.
In the catch, we will use the logger to
just log the error as before. And then
if there's an error, we can just pass it
over to the next function. But in the
try, we will use the logger to log the info that we're just trying to fetch all users, so 'Getting users...', and then we can get all users by calling and awaiting this users service that we created. Or you can just say getAllUsers, and make sure to import it at the top by importing getAllUsers from the users service. And maybe
we can rename this controller to fetch
all users not to confuse with this
service that we have right here. Once we
fetch them, you can just return a JSON
object with a message of successfully
retrieved users. And you can pass the
users object equal to all users. And you
can also pass the user count which is
going to be all users.length.
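Assembled from the steps above, the service/controller pair looks roughly like this. To keep the sketch self-contained, an in-memory array stands in for the real drizzle-orm db.select(...).from(users) query — that substitution is an assumption, not the actual database code:

```javascript
// --- users.service.js (sketch): the service layer only does data access ---
const fakeUsersTable = [
  { id: 1, name: 'Ada', email: 'ada@example.com', role: 'user' },
  { id: 2, name: 'Linus', email: 'linus@example.com', role: 'admin' },
];

export const getAllUsers = async () => {
  // real app: return db.select({ id: users.id, ... }).from(users);
  return fakeUsersTable;
};

// --- users.controller.js (sketch): logging, validation, response shaping ---
export const fetchAllUsers = async (req, res, next) => {
  try {
    const allUsers = await getAllUsers();
    res.json({
      message: 'Successfully retrieved users',
      users: allUsers,
      count: allUsers.length,
    });
  } catch (e) {
    next(e); // hand the error to Express's error middleware
  }
};
```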
Now it might seem like a bit of an
overkill to create a service and then to
use that service within a controller
whereas the controller itself could just
use this logic that we had here. Like we
could have just done this and that would
be a bit easier to get the users. But
keep in mind that as our app scales so
will the controllers and the services
and the routes and everything else. So
allowing all of these to be separate
parts and to breathe independently is
very important for scalability.
Controllers will handle logging,
validation, and more. And the service
part will only handle the database
parts. That's a clear separation of
concerns, which is a must for clean
code. So finally, let's head over into
the routes, which once again are serving
their own purpose. And the only thing
you have to do here now is just fetch
all users because we've already created
a service and the controller for
fetching them. So you're now just saying
whenever a user goes to /api/users, simply give me all the users. Now, there are two ways of running this app. You can either run npm run dev to spin it up on localhost:3000, or you can use the other command we pointed out right here, and that is dev:docker. So let's go ahead and run npm run dev:docker, and make sure that you
have your docker app actually running on
your device. Once you have it, you can
run the command and it'll start the
acquisitions app in development mode. It'll build it out and do all the Docker
stuff that it does. There we go. Neon
local is running. Acquisitions is up and
it says that it's listening on localhost
3000. So if you head over there, you
should be able to see hello from
acquisitions. But if you head to localhost:3000/api/users, you'll get back a JSON output
with a message of successfully retrieved
users. And you can see a full users
array getting returned to us right here.
Perfect. Now, for all of the upcoming
services and controllers, you'll have to
do almost the same thing that we did
with this one endpoint right now. And to
be more productive, we'll use AI to
speed up our process and get the job
done. So, let's head back over to Warp.
Warp has a special feature called
project scoped rules that allow you to
create some specific rules just for this
project. Think of some custom guidelines
or contexts different for each project.
If you open up the sidebar, you'll see
rules right here and you can add global
rules, or you can add project-based rules.
So let's initialize a new project. Then
in your finder or file explorer, you can
find the repo of your application and
click open. Now it's asking us would you
like the agent to index this codebase
which will lead to more efficient and
tailored help. I'll say yeah definitely
go ahead and index it. So if you head
over to codebase indexing, you'll be
able to see that it has been
successfully synced. And if you open up
this file, this is the WARP.md file, which is guidance for this specific
project. Now if you open up this file,
you'll see that warp automatically
generated some custom rules for this
project specifically. It gave it all the
project information and the key
technologies that it needs to know when
running some additional code. So, if
there's some additional info that you'd
want Warp to know when going over your
project and when developing additional
code, you can just add it right here.
For now, we're good. And now, back
within Warp environment, we can pass
this new prompt that'll generate all of
the additional user CRUD services for
us. You can find it in the video kit
down below and just copy and paste it
here. In simple terms, we ask it to
implement all of the other CRUD
services. You don't necessarily have to
be this descriptive but in this case
just so we get the same output I
specifically pointed out that we need a
get user by ID update user and delete
user functions. Then we are also
implementing some validations and some
additional controllers. So let's press
enter and let's let Warp do its thing.
It nicely examined the task and split it
into multiple smaller tasks and now
it'll ask us for input as it is
proceeding.
Now, there's this little thing right
here, the two arrows that say auto-approve all agent actions for this task. And I'll turn it on because I
believe that it should be able to nicely
generate all the user validations,
controllers, and services. So, let's let
it work and I'll be back in a minute.
And there we go. In about a minute, Warp has successfully implemented the complete user CRUD functionality for our Express application. And, like a good colleague, it even provided a comprehensive summary of what has been implemented.
The user service is right here. These
are three separate functions that talk directly to the database
to retrieve the user, update them, and
delete them. We also have the validation
to make sure what we pass into the API
is correct. It fixed the authentication
middleware and finally it created the
user controllers which actually use the
services created above and then pass the data back as API responses.
Perfect. We can even see the API
endpoints right here so that we can test
them out back within our code. It
created a new users.services.js.
If we want to be very specific, we can rename this file to users.service.js; I think I misspelled it when I was giving instructions. And make sure to also fix the imports for this service within the user controller. It was supposed to be users.service.js.
So these services alongside get all
users which we implemented also include
get user by ID which gets a specific
user, update user and then delete user.
And finally, in the controller, we have
all the functions that actually use
those services and return the data. And
most importantly, within the routes,
we're now using those three new
controllers. And each corresponds to its
own endpoint. Oh, and look at that. It
even left some comments so we know
what's happening. For the time being,
I'll just test a simple get request to
see whether we can get the information
for a single user. That's the simplest way to do it right now. We can just copy the ID, head over to localhost:3000/api/users/, and then pass the number of your user, like 2. In this case, we get back the 'Authentication required. No access token provided' message. Now, this response is exactly
what we wanted. It means that our
verification is working. You cannot get
user details of another user if you're
not logged in. So, let's head over into
our HTTP client and let's log in. So,
I'll just head over to sign in and I'll
sign in with my details. I think I just
need my email and my password.
If I do that, you'll see that I got
signed in successfully. And then if you
expand these headers, under the headers,
you will see a token right here. What
you need to do here is copy this entire token and then pass it as a cookie header in your upcoming requests. So cookie is equal to the token you just copied. If you do it that way and then
head over to API users and you try to
retrieve a specific user like the one
with an ID of two and click send. Oh,
let's make sure that it's a get request.
Now the user got successfully retrieved
and the data is back which means that
this is working perfectly. So now if I
wanted to delete this account I would
just have to make a delete request to
it. And if I click send this is perfect.
Check this out. We're getting a 403
forbidden because we're trying to delete
somebody else's account and we don't
have permissions to do that. This works
thanks to Warp adding middleware to
every single one of our routes. Check
this out. Now middleware will make more
sense. We always have this controller
function that we're calling once we hit
this endpoint like the delete right
here, right? But before this function is
called, it's calling two pieces of
middleware. One is called authenticateToken, which needs to make sure that we are authenticated, and only if we are does it push us to the next one in line, which is requireRole. So in this case, we're saying that we need to be an admin to delete something. If we are, then we can proceed, and finally we delete
it. Now I'm not sure whether I'm
currently logged in with an admin
account or not but let me try to delete
a different account like maybe a user
number one. Nope. For all three of
these, I don't have permissions to do
that. What if I try to call all the users, so I can see which one is the admin? But
even for getting all the users, we don't
have the permissions because to be able
to read the details of all the users,
you have to be an admin. That makes
sense. So for a second, I will just
remove this function from here and
remake the request just so I can see
which user is the admin. Okay, so we
have these three users right here. But
instead of logging into one of these
previous accounts, let's go ahead and
create a new account that also has admin
privileges. So make a new POST request to /api/auth/sign-up. We don't have to pass any of the headers, but we do have to pass a name, which I will call something along the lines of 'I am the admin'. Then the email will be admin@admin.pro, and we can pass some kind of a password. And don't forget to add a role of admin.
If you do this and send over this
request, you'll be able to see that this new admin user has been registered.
And in the response, we automatically got this user's token. This one is more
powerful. So, let's replace the old one
as we're now logged in as the admin. And
now if you try to delete a user, you can
head over to API users and then maybe
like user number one, we want to delete
it. You would make a delete request to it with the proper headers, so that the server knows who is authenticated, and we don't have to pass anything within the body. So now, if you make a request,
you'll see user deleted successfully.
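The two middleware pieces described here can be sketched like this — the function names and the cookie-based token check are assumptions modeled on the behavior we just observed, not the exact generated code:

```javascript
// 401 unless a token cookie is present; attaches req.user on success
export const authenticateToken = (req, res, next) => {
  const token = req.cookies?.token;
  if (!token) {
    return res.status(401).json({ error: 'Authentication required. No access token provided.' });
  }
  // the real app verifies and decodes the JWT here
  req.user = { id: 4, role: 'admin' };
  next();
};

// 403 unless the authenticated user carries the required role
export const requireRole = (role) => (req, res, next) => {
  if (req.user?.role !== role) {
    return res.status(403).json({ error: 'Forbidden' });
  }
  next();
};

// wiring inside users.routes.js:
// router.delete('/:id', authenticateToken, requireRole('admin'), deleteUser);
```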
Wonderful. This means that Warp successfully implemented authentication and role-based access middleware, which is working perfectly. Once you understand how things work, and once you know what you want to implement, using AI to do it for you just feels like a superpower, because it gets done right and it gets done fast. You might need to tweak it
here and there, but more or less if you
start with a proper scalable nice
codebase like the one that we have right
here with controllers, routes, services,
utils, and more, AI agents will also be able to do a better job of implementing additional features. So, with that in mind, let's go ahead and run git add ., git commit -m "implement users CRUD", and git push.
Perfect. This wasn't necessarily related to DevOps, but I still wanted to include a bit of functionality within our application, so that in the next lesson we can get back to DevOps, this time in the form of testing.
And finally, we are at a part where
we're going to dive into testing.
Testing is one of the crucial parts of
development in general, but specifically
DevOps because you want to ensure that
the entire development process is well
tested and predictable. So, I'll show
you how to use one of the most popular testing libraries out there, called Jest. Believe it or not, it has almost 30 million weekly downloads, and installing it couldn't be any simpler. You just run npm install --save-dev jest. So, we're adding it as a dev dependency. Oh, and we'll also add Supertest, which provides high-level abstractions for testing HTTP while still allowing you to drop down to the lower-level API provided by superagent. 7 million weekly downloads. So, let's install that as a dev dependency as well by running npm install --save-dev supertest.
Once that is done, you can head over to jestjs.io and head over to Getting Started. I'll turn on the dark mode and follow the first steps. After installing Jest, you can head down to additional configuration, and the first step right here is to generate a basic configuration file. We can do that by running npm init jest@latest. So just run this command, and we'll have to answer a couple of questions. First, do you want to install the create-jest CLI? To which I'll say yes, please go ahead. And then it'll ask, would you like to use Jest when running the test script in package.json? I'll say yes to that.
Would you like to use typescript for the
configuration file? That'll be a no. In
this case, we're running a JavaScript
application. Then you can choose between
node and jsdom. In this case, we'll be
testing a node application. Do you want
just to add coverage reports? I'll say
yes to that. Which provider should be used? In this case, we'll be using v8.
And do we want to automatically clear mock calls, instances, contexts, and results before every test? I'll say yes to that. And that will have generated a jest.config.mjs. So you can just open it up by finding it right here within your project explorer and heading over into jest.config.mjs.
There's a lot of comments right here for
all the different things that you might
want to turn on. And there's some things
that are not commented out such as clear
mocks, collect coverage, coverage
directory, and so on. It's a very long
file, but as you can see, most of it is
just commented out. One thing I want to point your attention to is this testEnvironment option, where it says the test environment that will be used for testing. In this case, we want to switch it over from "jest-environment-node" to just "node". We'll be running our tests there. Then you want to head over into package.json. And under imports, you can add an import alias for the source folder, as we'll need it when writing tests. So you can say "#src/*", and point it to "./src/*". And another thing
we'll have to do is add the test script.
Right now it just says "test": "jest", but instead we'll keep test and provide an additional option, NODE_OPTIONS=--experimental-vm-modules, before actually running the jest command. The reason we're doing that is so that it works with Node applications using ES modules, i.e. the new ES6 import statements. That's why we have to provide this experimental VM modules flag. Great. With that said, let's
navigate over to source app.js. And then
within here, we'll need to add some
default middleware logic that will catch
all endpoints that don't exist or are
not defined. We'll use this later on in
the test to check if a request has been
made. It should not throw an error, but
show a meaningful message. So right below these two routers, I'll say app.use() and immediately pass a (req, res) callback, because we don't need a path, as it'll act as a catch-all route. So immediately we can just return one thing from it: a res.status(404), which means it doesn't exist or could not be found. And then we'll return an error saying 'Route not found'. Perfect. We'll
use this later on within our tests. So
let's go ahead and create a new test
folder by heading over into our
acquisitions app. We can create it in
the root of our application and let's do
it via the terminal. Make sure that you are currently in the acquisitions folder, and then run mkdir, which makes a directory, and call it tests. Within
tests you can then create a new file
which you can call app.test.js.
And within it we can write our first
sample test. So let me teach you how we do tests in Jest. The way you
approach creating tests is always
describing what you're trying to do. So
you can say describe and then define
what you're describing. So in this case
API endpoints and then you can create a
callback function within it. Then you
can describe again what should happen.
In this case, we want to get the /health route, and then, as the output of that, you say what should happen: it should return the health status. Then you define a new asynchronous callback function after that and make the actual request by saying const response = await request, to which you pass the app; that app can be coming right at the top by importing app from '#src/app.js'. Once you make that request, you can make a get request to /health, and then you can expect a response of 200. Then we know
that our app is working. Another thing
we can do is expect the response.body to
have property of status which is set to
okay. So I think you can already get how
intuitive writing just tests is. You're
basically saying describe this. It
should do this and we are expecting that
it will do this and I will repeat this
expect two more times. So let me just
paste it below and indent it properly.
For the second expect, we're expecting the response body to have a timestamp, because that's also one of the things that we're returning. Oh, in this case, we need to drop the 'ok' at the end, so let's close it properly here. And finally, for the third expect, we're expecting it to have the uptime, because that's also another property that we have there.
And now we can repeat this inner
describe in case you want to add another
test. This time, we can say that you want to make a get request to /api, and it should return an API message. So we're making a request to /api, expecting a 200 response, and then we're expecting the response body to have a property message, which is equal to a string. Instead of just typing it out, we can head over to app.js to see what we're actually responding: 'Acquisitions API is running'. So this is the exact thing we
want to see. Perfect. And we can do it
one more time by duplicating it and saying that we are trying to get a /nonexistent route, and it should return 404 for nonexistent routes. So if we make a request to /nonexistent, it should return a 404, and we're expecting the response body to have a property of error, saying something like 'Route not found', without an exclamation mark. Perfect. And this request right here has to be imported, because that's part of Supertest. So import request from 'supertest'.
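Put together, the test file from the steps above looks roughly like this — a sketch, since the exact property values depend on what your /health and /api handlers actually return:

```javascript
// tests/app.test.js — the three checks assembled from the walkthrough
import request from 'supertest';
import app from '#src/app.js';

describe('API endpoints', () => {
  describe('GET /health', () => {
    it('should return health status', async () => {
      const response = await request(app).get('/health').expect(200);
      expect(response.body).toHaveProperty('status', 'ok');
      expect(response.body).toHaveProperty('timestamp');
      expect(response.body).toHaveProperty('uptime');
    });
  });

  describe('GET /api', () => {
    it('should return API message', async () => {
      const response = await request(app).get('/api').expect(200);
      expect(response.body).toHaveProperty('message', 'Acquisitions API is running');
    });
  });

  describe('GET /nonexistent', () => {
    it('should return 404 for nonexistent routes', async () => {
      const response = await request(app).get('/nonexistent').expect(404);
      expect(response.body).toHaveProperty('error', 'Route not found');
    });
  });
});
```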
Save it and open up your terminal; or, conveniently enough, we're immediately within our terminal right here, since we're within the Warp environment. So just run npm run test, which will run our
suite of tests. There we go. There's
going to be a lot of stuff right here.
But the most important thing is that our
tests within the app.test.js file all
pass. We get health API and non-existent
all return exactly what we expected that
they will return. So, three out of three
passed. This will also generate the
entire coverage of the whole testing.
And to see it in action, you can head over to coverage/lcov-report and then open up its index.html in the browser.
Once you do that, you'll be able to see
something that looks like this, which
tells you exactly how well tested your
code is. The way in which we wrote tests
today, even though they're minimal, is
still the way it works in a real
production environment. Of course, this
is just the beginning. As you continue
building your application, you'll be
writing more advanced tests and making
sure that it's resilient against all the
errors. So, if you'd like me to create a
more detailed course on testing, let me
know in the comments down below. Very
soon, we'll implement this test as part
of a CI/CD pipeline so I can show you
how we can run it automatically as soon
as you push to your application. That's
the beauty of DevOps. So, let's do that
next.
Before we jump into building the
remaining features of the application,
let's take a step back and set up CI/CD
pipelines for linting, testing, and
building our Docker image. We're not
doing this just because it's a DevOps
focused video. It's because in a real
world workflow, you don't leave CI/CD
for the very end. The whole point of the
pipelines is to catch issues early and
ensure your code is reliable as you go.
I've been scrolling through the CI/CD
actions of JSM Pro. That's the repo
behind the jsmastery.com platform. But
it's not just us who are doing these
actions. If you take a look at Next.js's official repo, you'll see that they have
run almost 300,000 workflows and some
are running right now like a minute ago.
This means that there's always something
happening within the repo. Setting up
logging, writing tests, dockerizing the
application, putting CI/CD pipelines in
place. That way, every new feature you
add automatically goes through all of
these checks to make sure your
application stays solid. So, let's get
that in place right now. Since, in the crash course part of this course, we have gone through how CI/CD pipelines work via a hands-on demo, this time we'll approach it a bit differently.
Instead of implementing it yourself,
we'll have an AI agent set up pipelines
by clearly specifying what needs to be
done. So in the video kit down below,
you can find this new prompt. You can
copy it and paste it right here. That's
the mindset you should start adopting
for application development. You focus
on the architecture and let AI handle
the implementation. In this case, we're
asking it to study the codebase and
create three GitHub action workflows.
One for linting and formatting, another
for testing, and a third one for Docker
build and push. So press enter, and let Warp do its thing. It's asking us for permission to create a new workflows folder. For sure, we can allow
it to do that. And then it'll start
creating these three workflows. I'll
actually turn on the auto approval
because I believe it should be able to
do it properly. And there we go. In less
than a minute, it created all these
three GitHub action workflows. One for
linting, which will trigger on pushes to
main and staging branches. Running
tests, which will also trigger on pushes
to main and staging. And finally, Docker
pushes. Now, it told us that we need
some secrets to make sure these actually
run. We need a Docker username, a Docker
password, and a test database URL. So,
let me show you how we can get those.
Head over to your browser and head over to Docker Hub.
Then, on the left side, you should be
able to see some settings. Head over to
personal access tokens and generate a
new token. You can give it a
description, something like JSM
acquisitions and generate it. Now, while
keeping this page open, head over to
your GitHub repo. And once you're there,
go ahead and open up the settings.
Then scroll down under security secrets
and variables and search for actions.
Here we'll need to add repository
secrets. So click New repository secret and give it a name of DOCKER_USERNAME.
Then back within our application, you
can find your username right here at the
top and simply paste it right here. Make
sure there's no extra spaces. Let's
repeat the same thing for the Docker password. So say DOCKER_PASSWORD, and for this one, I will copy this token right here and paste it there. Let's add two more. We also need to add our node environment; in this case, we'll set it to production. And finally, the last one is our database URL. So just create it, and call it DATABASE_URL. And I believe
that within our original .env file, there should be this entire database URL. So, simply copy it and paste it right here. Now, back within our editor, you can see this new .github folder with a workflows folder inside of it, and three new actions: Docker build and push, tests, and lint and format. They're all
following a proper YAML configuration
and they would take us quite some time
to write on our own but thankfully it's
much easier using AI once you know what
you want it to do. So let's test it out.
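For reference, a workflow of the kind Warp generated here follows the standard GitHub Actions shape. This is an illustrative sketch of the lint one, not the exact generated file — the file name, job name, and Node version are assumptions:

```yaml
# .github/workflows/lint-and-format.yml (sketch)
name: Lint and Format

on:
  push:
    branches: [main, staging]

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm run lint
```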
The only thing we have to do is just
make a push over to the main branch.
I'll do that right here by running git add ., git commit -m "implement CI/CD pipelines and GitHub actions", and git push. As soon as you do this,
head over to your repo and back over to
the actions. And in a matter of seconds,
you'll see that a new workflow has been queued. So, one by one, they will now run: linting and formatting, testing, and then, finally, Docker build and push. Linting failed, which is completely expected, as we haven't properly linted all of our files. So it's possible that we're missing a couple of semicolons.
And the fact that we have just 10
different errors is super good across
all the files. So, you can easily go
ahead and make those fixes. And if you
click over on lint and format check, you
can see exactly what it did. So, it ran the eslint . command, and then it figured out exactly where those issues are. Now
you can totally autofix those as well
and then push again. Our second action
succeeded, and this one was running our tests. Remember that tests YAML file we created not that long ago? Well,
thankfully all of those tests have
passed. And at the bottom you can even
see a new artifact generated through
this workflow run. It's a full coverage
report, which you can click on to download, and then you can open the index.html in the browser to see the full coverage report. Pretty
cool stuff, right? And finally, there's
the Docker build and push. We could
inspect further why this Docker image
failed, but this is a perfect chance to
take a look at the file itself and maybe
do some debugging. That's the point of
working with AI. Sometimes, like your
co-workers, it'll ship broken code. But
if you're good enough, you can fix it.
Let's see.
Oh, take a look here. We have our
username, which is good, but the image
name is acquisitions. That's not the
image name we've been using so far. If
you head over to Docker, you'll know that we used kubernetes-demo-api as the image name. So simply replace this one
with that one and save it. And while
we're here, let's also lint our repo by
running npm run lint. I believe that's the command that we had right here, and this will give you all the issues that we have to fix. Then, which other commands do we have within package.json? It was lint:fix. Yep, this is exactly what I wanted. So now we can run npm run lint:fix, and this should autofix most of these issues. So if you once again run npm run lint, you'll see that now we have no issues. And finally, we can run npm run format to do Prettier formatting as well. Beautiful.
And now we can do another push and see
what our GitHub actions say: git add ., git commit -m "fix docker GitHub action and linting", and git push. As soon as you do this and
come back to your actions, you'll see
that three new actions will instantly be queued, running one after another. So let's
wait a minute and let's see what they
have to say. And what do we have
regarding the linting? It failed again,
but this time ESLint is good. It's just saying that maybe there's a bit of a formatting issue with Prettier, but I'm
totally okay with that for now. The
other thing I'm more concerned about is
Docker build and push failing. And if we
look into it, you can see that it's once again complaining about some OAuth token permissions: unauthorized, access token has insufficient scopes. Okay, so this
makes me think that this token we
created doesn't have the necessary
permissions to do what it needs to do.
So let's go ahead and create a new one,
but this time we won't make it read
only. We'll give it write permissions.
So I'll call it JSM acquisitions
token. And this time I will make it
read, write, and delete.
And we'll have to copy this password.
Head over into the settings of this
repo, under Secrets and variables, then
Actions, and modify the Docker password
secret, which is effectively the Docker
access token.
And then you can paste it right here.
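For context, the relevant part of the workflow would look roughly like this. This is a sketch, not the exact file: it assumes the standard docker/login-action and docker/build-push-action steps, secret names DOCKER_USERNAME and DOCKER_PASSWORD, and the kubernetes-demo-api image name from earlier:

```yaml
# Sketch of the build-and-push steps (names are assumptions)
- name: Log in to Docker Hub
  uses: docker/login-action@v3
  with:
    username: ${{ secrets.DOCKER_USERNAME }}
    password: ${{ secrets.DOCKER_PASSWORD }} # the access token; needs write scope

- name: Build and push image
  uses: docker/build-push-action@v6
  with:
    push: true
    tags: ${{ secrets.DOCKER_USERNAME }}/kubernetes-demo-api:latest
```

Because the token is stored as a repo secret, rotating it (as we just did) doesn't require touching the workflow file at all.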
It's asking me to verify, and it got
updated. So, third time's the charm. Let's do
another push. This time we don't even
have to do it from the code because we
didn't change anything. Rather, we can
edit the README, which will trigger
another push. I'll say "testing CI/CD
pipelines" and commit and push. Now under
actions, you'll see that all three will
be re-triggered. And this time we're
hoping for two out of three. And there
we go. You can see that all the steps of
the build and push Docker image action
have been completed. And this time it is
green. Now, why did we even create this
action in the first place? Well, we did
it so that whenever you make any changes
to your codebase, we automatically
regenerate and repush a new Docker
image. So when you decide to add
Kubernetes to this project, your
Kubernetes clusters will always be
pointing to the right versions of the
code. So with that in mind, you've
successfully added CI/CD pipelines and
GitHub actions to this DevOps
acquisitions API. Great work. So what's
next? Well, so far we've implemented the
controllers, routes, and middlewares for
authentication and for the users. It's
not yet a full acquisitions application,
right? Where people can buy and sell
SaaS businesses and so on. But now it's
all about repeating the same process you
followed so far. Create the listings,
their model, endpoints, let Warp
generate the rest, test it, and then do
the same for the deals. Check if your
pipelines are running smoothly, make
adjustments, test it again, and keep
iterating. Somewhere along the way after
completing your first MVP, try deploying
with Kubernetes locally as I showed you
during the crash course part of this
course. Then once you're comfortable
doing that, pick a cloud provider and
replicate the same setup using their
clusters instead of minikube. If you
want a full final codebase with
deployments, click the link down in the
description. It'll give you more info,
most likely pointing to jsmastery.com
where I'll guide you through the rest of
this course in detail. And for an even
deeper dive, including self-hosting
Postgres, learning cloud providers like
AWS, deploying Dockerized applications,
building advanced pipelines, setting up
notifications, and more, the Ultimate
Backend course, which isn't here yet
but is coming very, very soon, is
exactly what you need. I'll link the
waitlist down in the description so
you can join and know as soon as it's
out. For now, I hope this video helped
you understand what DevOps is and gave
you the strategies you can apply to your
own projects to impress recruiters and
land that job. So, with that in mind,
thank you so much for watching and I'll
see you in the next one. Have a
wonderful day!