TRANSCRIPT (English)

DevOps from Zero to Hero: Build and Deploy a Production API

5h 0m 9s · 47,171 words · 6,987 segments · English

FULL TRANSCRIPT

0:00

DevOps, the word that makes half of

0:02

YouTube tutorials feel like you don't

0:05

really belong. Like you should stick to

0:07

HTML while the real engineers handle the

0:09

big stuff. But when you try learning it

0:12

yourself, you're slammed with Docker

0:14

commands that look like alien code, YAML

0:17

files that read like broken poetry, and

0:19

the jungle of tools that never seem to

0:21

connect. That's the trap. DevOps gets

0:24

sold as a scary gatekeeping monster. And

0:27

the problem is that no one shows you how

0:30

this puzzle fits together. You either

0:32

get a 10-minute "look, it works" demo or a

0:36

six-hour death-by-PowerPoint lecture

0:39

that has nothing to do with the apps you

0:41

actually want to ship. So, welcome to

0:43

the full DevOps course where you'll

0:46

finally see the whole picture step by

0:48

step as we go from fundamentals to a

0:51

production-ready API that looks and feels

0:54

like something you'd actually run in the

0:56

real world. Here's what you'll get. A no

0:58

BS intro to what DevOps actually is and

1:01

why it matters. A hands-on Git crash

1:04

course so version control stops feeling

1:06

like black magic. CI/CD pipelines that

1:09

deploy your code automatically. A Docker

1:11

crash course to containerize your apps

1:13

for dev and production. Kubernetes

1:15

deployments like real companies use

1:17

them. Infrastructure as code to spin up

1:20

environments on demand and monitoring,

1:23

logging, and security baked in from the

1:25

start. And of course, we won't end

1:28

there. In the signature JavaScript

1:30

mastery style, you'll then use this

1:32

learned knowledge to build and deploy

1:35

Acquisitions, a real-world API for buying

1:38

and selling SaaS businesses with tons of

1:41

features like JWT-based authentication

1:44

and authorization. Role-based access

1:46

control for admins and users. User

1:49

management for accounts. Business

1:51

listings to create, update, delete, and

1:53

browse. Deal management to track deals

1:56

from pending to completed. Health

1:58

monitoring of all the endpoints, request

2:00

validation using Zod, structured logging

2:03

with Winston, completely secured

2:05

endpoints using Arcjet, a painless

2:08

security system for developers that

2:10

protects your app from bots, spam, and

2:12

abuse in real time. For the database,

2:15

we're using Postgres with Neon DB. It's

2:18

serverless, lightning fast, and built

2:20

for modern cloud apps that even work

2:22

locally. Adding on to it, Drizzle ORM for

2:26

type-safe database queries, Docker for

2:28

containerizing applications across dev

2:31

and production environments, Kubernetes

2:33

to orchestrate containers at scale,

2:36

Jest and Supertest for automated

2:38

testing and validating application

2:40

behavior, CI/CD pipelines for linting,

2:43

testing, deployment and automation,

2:46

clean absolute imports, ESLint and

2:48

Prettier for clean, maintainable code, and

2:51

much more using Warp, the fastest way to

2:53

build and ship applications with AI.

2:56

It's an all-in-one environment where you

2:58

run commands, write code, and prompt top

3:02

AI agents in parallel. This isn't just

3:04

another DevOps demo. By the end, you'll

3:07

have hands-on experience and

3:09

production-ready backend you can actually

3:11

ship tomorrow. And if you want to dive

3:14

deeper into building scalable

3:16

production-ready back-end systems from

3:18

the ground up in a beginner-friendly

3:20

way, check out the upcoming ultimate

3:22

back-end developer course. It's the

3:24

perfect combination of strong

3:26

foundations taking you all the way from

3:28

core networking concepts all the way to

3:30

building and deploying APIs with full

3:33

DevOps and AWS integration. You can

3:36

either join the wait list if it's not

3:37

out or if you're lucky, it's already out

3:39

and you can get started with it right

3:41

away. So, grab your cup of coffee and

3:43

let's ship code like six-figure-salary

3:46

engineers. By the way, I recently opened

3:49

up channel memberships. So, if you've

3:51

been enjoying these videos and want to

3:52

support the channel, that's one of the

3:54

best ways to do it. In return, you'll

3:56

also get access to extra resources like

3:58

the Figma design files from my

4:00

tutorials, detailed cheat sheets paired

4:02

with this video, and even complete

4:04

ebooks. Plus, as a member, you'll

4:06

actually get a say in what I record next

4:08

for YouTube. It's totally optional, but

4:10

if you've been finding value here and

4:12

you're not quite ready to become pro

4:14

yet, if you want to dive a bit deeper

4:16

while helping me make these full courses

4:17

free, I would be thankful if you joined

4:19

today. Before we start building, let's

4:21

get our environment set up. Because

4:23

here's the thing, DevOps isn't something

4:26

you watch, it's something you do. And to

4:29

actually follow along without getting

4:30

stuck, you'll need the same stack I'm

4:33

using. The good news, it's all free. And

4:36

these are real tools companies are using

4:38

every day. First, make sure you've got

4:40

Node.js installed. That's the core

4:43

runtime we'll be using to build our

4:44

backend API. Then our database. We'll be

4:48

using a cloud Postgres provider called

4:50

Neon DB. This will power up everything

4:52

in our API. So, you'll want an account

4:54

ready before we dive in. Click the link

4:56

down in the description, create an

4:58

account, and you're good to go. Next,

5:00

Arcjet. You can build the cleanest API

5:03

in the world, but if it's wide open to

5:05

bots, spam, or abuse, it's useless. Arcjet

5:09

gives you a real-time protection while

5:11

you build. So security isn't something

5:13

you tack on later. It's there from day

5:16

one. And finally, Warp. This is where

5:19

all the action happens. Running

5:21

commands, shipping code, and even

5:23

prompting AI agents to speed up your

5:25

workflow. It's a modern AI-powered

5:27

developer environment you'll never want

5:29

to leave. And right now, their pro plan

5:32

is literally one buck, which makes it

5:35

kind of a no-brainer. So once you've set

5:38

those up, you'll have the exact stack

5:40

I'm using. That way when I say run this

5:42

command, you can run it too with no

5:45

interruptions, no detours, just straight

5:48

into building. Very soon, you'll build

5:50

and deploy your very own production

5:53

ready API. But first, let's dive right

5:56

into the crash course.

6:06

You've heard this term a hundred times,

6:08

but let's be real. DevOps sounds scary

6:11

as hell, right? Is it a job? A tool? A

6:15

secret club for 10x engineers with six

6:18

monitors and a tolerance for drinking

6:20

five Red Bulls? You're not alone.

6:23

Everyone feels like this at first.

6:26

DevOps sounds massive, mysterious, and

6:29

way too complicated. But here's the

6:32

truth. DevOps is simpler than it sounds.

6:36

And in this crash course, I'll break it

6:39

down like we're just chatting over

6:40

coffee. Here's how software used to

6:44

work. You write some code, it runs on

6:47

your laptop, you deploy to a server,

6:49

done. And if you're a vibe coder, you

6:53

might even skip testing entirely, push

6:55

it straight to production, and call it a

6:57

day. That actually works. If your app is

7:00

tiny, and no one cares if it crashes.

7:04

But the second your app grows, traffic

7:06

spikes, and the real money is on the

7:09

line, that's when this vibe ship starts

7:12

sinking. Servers crash, hackers attack,

7:16

thousands of users log in at once, and

7:19

your app dies at 3:00 a.m. Tell me, who

7:23

handles all that chaos? Early on, it was

7:26

the same developers writing features.

7:29

But most devs aren't trained to babysit

7:32

servers or fight off security attacks.

7:35

Their job is to build, not panic at

7:38

midnight. So companies created operations

7:42

teams or ops. They managed servers,

7:45

scaled apps, monitored uptime and

7:48

performance, applied security patches,

7:50

and got woken up at 3:00 a.m. when

7:53

everything broke. Honestly, props to

7:55

them. Ops became the guardians of

7:58

stability, while devs focused on speed.

8:02

So now you've got two camps. Devs who

8:05

ship features fast and ops who keep

8:08

things stable. So what happened? Well, a

8:12

tug of war. Devs tossed code over the

8:15

wall. Ops slowed them down, blocked

8:18

releases, or spent weeks cleaning up.

8:22

Result: frustration, silos, and slow

8:26

delivery. So finally, DevOps was the

8:29

solution. Not a tool or a role, a

8:32

culture shift. It's about breaking down

8:35

silos so devs and ops work together.

8:38

Backed by automation, they deliver

8:40

software faster and safer. DevOps is a

8:44

culture of collaboration, a set of

8:47

practices for building, testing, and

8:49

releasing reliably and an automation

8:52

toolbox for deployments, monitoring, and

8:55

infrastructure. Think of your app like a

8:57

restaurant. Devs are the chefs who are

9:00

cooking meals or code, and ops are

9:03

waiters and managers getting meals to

9:06

customers. They handle servers and

9:08

uptime. Without DevOps, chefs would toss

9:11

random dishes onto the counter and hope

9:14

for the best. Chaos. With DevOps, the

9:17

kitchen runs like a well-oiled machine.

9:20

Orders flow smoothly, food is

9:23

consistent, and customers are happy. So,

9:26

why not just Vibe Code? I mean, you can

9:29

skip all of this if your project is just

9:32

for fun, but once you have real users,

9:35

especially paying users, vibe coding

9:38

quickly turns into vibe burning. So,

9:41

let's skip the burnout and learn DevOps

9:44

the right way.

9:47

You have a sense of DevOps, but not the

9:50

full picture yet. You might be thinking,

9:53

I get that DevOps is a culture, but how

9:56

do I learn it? And what does it look

9:58

like day-to-day? If you've ever Googled

10:00

or ChatGPT'd, if that's even a word, the

10:03

term DevOps, you've definitely come

10:06

across the famous infinity loop diagram.

10:09

You know the one where arrows chase each

10:12

other endlessly labeled plan, code,

10:15

build, test, release, deploy, operate,

10:18

and monitor. That loop is not just a

10:21

conference graphic even though it looks

10:23

that way. It is the process of how an

10:25

idea becomes software in users' hands and

10:29

then gets better with feedback. Dev and

10:32

ops are connected the entire time. So,

10:35

let me show you what each stage means in

10:37

practice. We start with a plan. Imagine

10:41

you're building a multi-million dollar

10:43

SaaS application. Doesn't hurt to

10:45

imagine, right? Before a single line of

10:47

code, you decide what you want to build,

10:50

when to ship, who owns that, and how you

10:53

will measure success. Page speed, user

10:56

growth, revenue, and make it traceable,

10:59

not sticky notes that'll disappear. Use

11:02

tools that the team will actually keep

11:04

up to date. Be it Jira, Linear, GitHub

11:07

projects, or just Notion, the tool is

11:10

less important than the discipline. A

11:13

plan without action is just a dream. So

11:16

make it explicit and visible. Once we're

11:18

done with planning, we move over to

11:21

code. Code quality matters as much as

11:24

code output. So write clean, modular,

11:27

and testable code that others can

11:29

extend. Use Git with reviews, branch

11:33

rules, or automated checks, ship

11:35

readable code, and not hacky solutions

11:38

that only you can understand. And now we

11:40

enter the build phase. Because when you

11:43

finish writing source code, it's still

11:45

just raw text files. Those files cannot

11:48

always be executed directly in

11:50

production. They often need to get

11:53

compiled or transpiled. Dependencies

11:55

need to get installed. It needs to get

11:57

bundled or packaged like in Docker. And

12:00

we need to check for linting or security

12:02

mistakes. The build step is all about

12:06

preparing code so it can actually run, producing

12:09

an artifact, a ready-to-run package. Think

12:12

of it like baking dough into bread. You

12:15

can't eat raw dough, which is the source

12:18

code. So you can bake it build into

12:21

something edible, an artifact. In

12:23

DevOps, this process is automated for

12:26

consistency. You build Docker images,

12:29

run containers through Makefiles, and

12:32

build them every time you make a commit.

12:34

You'll learn all of that in the next

12:36

lesson. And then we get to testing. Of

12:38

course, you wouldn't want untested code

12:41

to reach production. The test stage

12:44

exists to catch problems early when

12:46

they're still cheap and easy to fix.

12:49

Automated tests run as part of the CI

12:51

pipeline, covering everything from basic

12:54

unit tests to integration tests,

12:56

end-to-end workflows, and security

12:58

scans.

13:00

For instance, let's say you're building

13:02

a payments feature. Unit tests verify

13:05

the math for totals. Integration tests

13:08

ensure the checkout flow works with a

13:10

database. And end-to-end tests simulate

13:13

a customer actually completing a

13:15

purchase.

13:16

Security tests scan for vulnerable

13:18

dependencies or insecure patterns. And

13:21

by the time the code passes all of these

13:23

stages, the team can be confident that

13:26

it behaves as expected and won't break

13:29

the system when deployed. All of this

13:31

runs in automation pipelines. So when

13:33

all the tests pass, the build is finally

13:37

marked as ready for release. This

13:40

doesn't mean it's already running in

13:42

production. It means that the artifact

13:44

has been approved and queued for

13:46

deployment. Think of this as moving from

13:48

dev to ops. In practice, the release

13:52

stage involves versioning, tagging

13:54

artifacts, and pushing them into a

13:56

release repository like Docker Hub,

13:59

Nexus, or an internal store. This

14:02

ensures the exact same build that was

14:04

tested will be the one deployed later

14:06

on. No surprises or mismatches. And then

14:10

comes the big moment, deployment. In

14:12

traditional setups, someone might have

14:14

to log into a server at 2 a.m. to run

14:17

some manual scripts. But in DevOps,

14:20

deployments are automated and

14:22

repeatable. Pipelines handle the

14:23

process, and tools like Kubernetes

14:26

orchestrate deployments at scale. But

14:28

more on that soon. For now, understand

14:31

that you have to use these tools to

14:33

build pipelines for automated

14:35

deployments that scale up and down based

14:38

on defined needs. For example, when

14:41

deploying a new version of a web app,

14:43

Kubernetes might spin up new pods with

14:46

the updated code while gradually phasing

14:48

out the old ones, a strategy known as

14:51

rolling deployment. This ensures

14:54

minimal downtime and a smooth user

14:56

experience. Some teams even practice

14:59

blue green deployments or canary

15:01

releases to test new versions with a

15:04

small subset of users before rolling out

15:06

widely. And once deployed, the

15:08

application is live. But the work isn't

15:11

over. You have to operate. See, the

15:15

operate stage is all about ensuring the

15:18

system continues to run reliably under

15:21

real world conditions.

15:23

This includes monitoring server health,

15:25

scaling resources when traffic spikes,

15:28

applying security patches, and managing

15:31

infrastructure configurations. Think

15:33

about an e-commerce platform on a Black

15:35

Friday. At 2 p.m. traffic might

15:38

skyrocket, requiring the system to scale

15:40

horizontally across multiple servers. At

15:44

2 a.m., when traffic is low, resources

15:47

can be scaled down to save costs. DevOps

15:50

teams ensure that the system is

15:52

resilient, stable, and performant at all

15:55

times. So, finally, we have the

15:58

monitoring. Here teams gather data about

16:01

the system's performance: uptime, error

16:04

rates, and business metrics. Monitoring

16:07

tools like Prometheus, Grafana, Datadog,

16:10

New Relic, or Sentry act like CCTV

16:13

cameras for your app. They let you see

16:16

not only technical metrics like CPU

16:19

usage, latency or error logs, but also

16:22

business outcomes like orders processed,

16:25

signups completed, and revenue

16:27

generated. For instance, if your new

16:30

checkout flow increases cart

16:31

abandonment, monitoring will catch it.

16:34

That insight then loops back into the

16:36

plan stage where the team can adjust

16:39

priorities and improve the product and

16:41

then the cycle starts again. That's the

16:44

beauty of DevOps. It never really stops.

16:47

Each stage feeds the next, forming a

16:50

continuous loop of building, testing,

16:53

delivering, operating, and improving.

16:56

And it's not just a set of tools or a

16:58

job title. It's a culture of constant

17:01

learning and iteration. But wait, does

17:04

that mean that as a DevOps engineer,

17:06

you're expected to handle everything in

17:08

this cycle? Are you supposed to code

17:11

like a developer, test like a QA, deploy

17:15

like ops, and monitor like SREs? That's

17:18

a great question. No, you're not

17:20

supposed to do everything. This is one

17:22

of the biggest misconceptions about

17:24

DevOps. If DevOps is a culture, a way of

17:27

working where dev, ops, QA, and security

17:31

work closely together, then a DevOps

17:34

engineer is not a superhuman who replaces

17:37

all those roles. Instead, your role is

17:40

all about bridging the gap between

17:42

teams, automating the boring or manual

17:45

work so teams can move faster, and

17:49

setting up tools and practices that make

17:51

collaboration smoother. See, developers

17:54

still write features. QA still ensures

17:57

quality. Ops still keeps servers running,

18:00

but as a DevOps engineer, you make sure

18:04

that all these moving parts are

18:06

connected and running without chaos. So,

18:09

what does a DevOps engineer actually do?

18:11

You're not here to replace developers,

18:14

QA, or ops. You're here to connect all

18:17

the dots and keep things running

18:19

smoothly. So let's run through the

18:22

entire life cycle once again, but from

18:25

your point of view. When it comes to the

18:26

planning phase, developers and product

18:29

managers decide what to build. You make

18:33

sure that planning tools like Jira,

18:35

Confluence, or Notion are wired into

18:37

your pipeline. For example, when someone

18:40

closes a Jira ticket, it should link

18:43

straight to a commit or a deployment

18:45

log. In the coding phase, you're not

18:47

writing every feature, but you set the

18:50

rules of the road. Branching strategies,

18:53

code review requirements, auto-linting,

18:55

and security scans all run

18:57

automatically. So code quality stays

19:00

high. Then the build phase is where

19:02

raw code becomes something that can

19:05

actually run anywhere like a Docker

19:07

image. Your job is to set up pipelines.

19:11

So this happens on every commit

19:13

consistently with zero manual steps.

19:16

When it comes to testing, QA owns it,

19:19

but you make it effortless. You

19:21

integrate unit tests, integration tests,

19:24

and security scans into the pipeline so

19:26

bugs get caught early before they even

19:29

reach production. And then for the

19:30

release and deploy phases, this is where

19:33

you shine. Instead of someone SSHing

19:35

into a server at midnight, you automate

19:38

deployments with Kubernetes, Terraform,

19:41

or Helm. In the operation phase, the ops

19:44

teams keep everything alive, but you

19:46

help them codify the infrastructure with

19:49

Terraform or CloudFormation, set scaling rules,

19:52

and make systems highly available. An

19:54

example of this would be spinning up a

19:56

new AWS cluster with a single config

19:59

file instead of spending hours in the

20:02

console. And finally, in the monitor and

20:04

feedback phase, once the code is live,

20:07

you keep watch. Metrics, logs,

20:10

dashboards, and alerts feed straight to

20:13

Slack or Teams. If something breaks or

20:16

slows down, you know it before customers

20:18

do. So, no, you're not coding the app

20:21

and writing tests and running servers

20:24

all by yourself. You're the air traffic

20:27

controller. Developers fly the planes or

20:30

build the features. Ops is the ground

20:33

crew. QA is the safety check. And you're

20:36

in the tower keeping everything smooth

20:39

and safe.

20:43

Let's dive into the heartbeat of DevOps.

20:46

CI/CD. But what does that actually

20:49

mean? Well, picture a kitchen in a

20:52

restaurant. The chef chops the

20:54

vegetables. The assistant cooks them.

20:56

The manager inspects the dish. And the

20:59

waiter finally serves it to the table.

21:01

If every step was manual, the service

21:04

would be slow. In the same way, we

21:06

manually go through different stages

21:08

such as coding, building, testing, and

21:12

deployment. Pretty manual, right? Now,

21:15

imagine a conveyor belt moving the dish

21:18

automatically from one step to the

21:20

other. That's a pipeline. It's an

21:22

automated conveyor belt for software.
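That conveyor belt can be sketched as a tiny shell script where each stage runs only if the one before it succeeded. The stage bodies below are placeholder echoes, not real build commands; a real pipeline would run your project's own build, test, and deploy steps, usually defined in YAML for a CI tool.

```shell
# A pipeline as a conveyor belt: each stage only runs if the previous
# one succeeded. The echo bodies are placeholders for real commands
# (npm ci, npm test, docker build, kubectl apply, ...).
set -e

build()     { echo "[build] package the app into an artifact"; }
run_tests() { echo "[test] run the automated test suite"; }
release()   { echo "[release] version and store the tested artifact"; }
deploy()    { echo "[deploy] ship that exact artifact to production"; }

build && run_tests && release && deploy
```

If any stage failed with a non-zero exit code, the chain would stop right there instead of serving a broken dish.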

21:24

So, let's break it down. CI stands for

21:28

continuous integration. Every time a

21:30

developer pushes code, tests run

21:33

automatically. And if something breaks,

21:35

the pipeline stops and you fix it before

21:38

moving forward. CD stands for continuous

21:41

deployment where once tests pass, the

21:44

app is automatically deployed to staging

21:47

or production. No late night manual

21:49

deployments. Your job is to write these

21:52

pipelines using tools like GitHub

21:55

Actions, GitLab CI, Jenkins, or

21:58

CircleCI. You'll define each step from

22:01

build, test, release to deploy. Back in

22:05

the day, running apps was messy. You'd

22:07

install them directly on servers, and if

22:10

one needed version 18 of Node.js and

22:13

another one needed version 20, you'd

22:15

have conflicts everywhere. Containers

22:19

solve this by packaging apps with

22:21

everything they need. The most popular

22:23

tool here is Docker. Now imagine not

22:27

just one container, but hundreds of them

22:29

for microservices, background jobs, and

22:32

databases. Someone needs to decide where

22:36

do these containers run? How many copies

22:38

do we need right now? What happens if

22:41

one crashes? And how do they securely

22:44

talk to each other? That's called

22:46

orchestration. And the go-to tool here

22:48

is Kubernetes, which acts like the

22:51

conductor of an orchestra, coordinating

22:53

containers, so everything runs smoothly

22:56

and scales automatically. Traditionally,

22:58

people clicked around in a cloud

23:00

dashboard to create servers and

23:02

networks. That's like building IKEA

23:04

furniture without instructions. Slow,

23:07

errorprone, and almost impossible to

23:10

recreate. With infrastructure as code or

23:13

IaC for short, you describe your entire

23:16

setup. Servers, networks, databases, all

23:19

in code. You store it in Git, review

23:22

changes, and recreate environments any

23:25

time. Tools like Terraform or AWS

23:28

CloudFormation make your infrastructure

23:30

predictable and repeatable. So if

23:33

something goes down, you just rerun your

23:36

code to rebuild it from scratch. And

23:38

almost no company runs their own

23:40

physical servers anymore. Instead, they

23:43

rent them from cloud providers like AWS,

23:46

which is Amazon Web Services, Azure

23:49

from Microsoft, or GCP, which is Google

23:52

Cloud Platform. You don't need to master

23:55

all three. Focus on one primary

23:57

provider. AWS is the most common. And

24:01

get comfortable deploying apps, setting

24:03

up networks, and managing storage. Once

24:06

you know one well, switching to another

24:08

is easier. Like learning to drive one

24:11

car and then trying out a different

24:12

brand. And once your app is live, you

24:15

need visibility. Monitoring and logging

24:17

tools show what's happening in real

24:19

time. Not just server performance, but

24:22

also user behavior and errors. You've

24:25

seen me use Sentry in some of my

24:27

projects to track errors and

24:29

performance. That's a great starting

24:30

point. Some DevOps teams use different

24:33

tools like Prometheus and Grafana for

24:35

custom metrics and dashboards, ELK Stack

24:38

or Datadog for centralized logs, or

24:41

PostHog for product analytics and

24:43

funnels. With the right setup, you know

24:45

about problems before your users do. And

24:48

finally, scripting ties everything

24:51

together. A DevOps engineer should be

24:53

comfortable with at least Bash or Shell

24:56

scripting and one programming language

24:58

like Python or JavaScript. Why? Because

25:01

automation is the heart of DevOps. And

25:04

if you're onboarding 10 new developers

25:06

and need to set up their accounts and

25:08

permissions, don't just click through

25:10

dashboards 50 times. Write a script once

25:13

and automate the whole process.
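As a sketch of that idea: the loop is the point, not the command. `provision_account` here is a made-up stand-in for whatever CLI your company actually uses (an AWS IAM call, a GitHub API call, and so on).

```shell
# Hypothetical onboarding script: provision_account is a stand-in for
# your real CLI call (e.g. aws iam create-user, gh api ...).
provision_account() {
  echo "provisioned account and permissions for: $1"
}

# One script, a whole team, zero dashboard clicks
for dev in alice bob carol; do
  provision_account "$dev"
done
```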

25:15

I know it's a lot, but if you take it

25:18

step by step from Git, pipelines,

25:22

containers, IaC, cloud, monitoring, and

25:25

scripting, you'll go from confused to

25:27

confident pretty fast. And in this

25:30

video, we'll skim the surface of each

25:33

one of these topics in a

25:34

beginner-friendly way. And if you'd like

25:36

some deep dives on advanced DevOps

25:38

concepts, drop a comment down below and

25:41

I'll make it happen. Imagine you're

25:42

working on a coding project and you make

25:44

a mistake that breaks everything. Your

25:47

boss would most likely fire you if you

25:49

were even able to get a job in the first

25:50

place. Without Git, you'd have no easy

25:53

way to go back and undo the changes.

25:55

You're toast. Git is the industry

25:58

standard. Most companies, teams, and

26:00

open- source projects use Git. So,

26:03

naturally, every job description

26:05

mentions it. Learning Git isn't just a

26:07

nice-to-have. It's your get good or get

26:10

out moment. It's a must for any serious

26:12

developer wanting to land a job. So,

26:15

what is Git and why is it so popular?

26:17

Git is a distributed version control

26:20

system. Sounds fancy, right? Well, let's

26:23

break it down. The version control part

26:25

helps you track and manage code changes

26:27

over time. While distributed means that

26:30

every developer's computer has a

26:32

complete copy of the codebase, including

26:35

its entire history of changes and

26:37

information about who changed what and

26:39

when, allowing you to git blame someone.

26:42

Hopefully, people won't blame you. But

26:44

do you really need it? Can you code

26:47

without using it? Well, of course you

26:50

can, but then your workflow would look

26:52

something like this. You start coding

26:55

your project in a folder named my

26:57

project. And as you make progress, you

26:59

worry about losing your work. So you

27:01

create copies, my project v1, v2, v3,

27:06

and so on. Then your colleague asks you

27:08

for the latest version. You zip up my

27:11

project v3 and email it over. They made

27:14

some changes and sent it back as my

27:16

project v3 john's changes.zip.

27:19

Meanwhile, you've continued to work. So

27:21

now you have my project V4. You then

27:24

need to manually compare John's changes

27:26

with your V4 and create a V5

27:29

incorporating everyone's work. And then

27:31

a week later, you realize you

27:33

accidentally removed a crucial feature

27:35

in V2. You dig through your old folders

27:38

trying to figure out what changed

27:40

between versions. Now imagine doing this

27:43

with 10 developers, each working on

27:45

different features. It's a recipe for

27:47

chaos, lost work, and countless hours

27:50

wasted on a version management system

27:52

instead of actual coding. Git solves all

27:56

of these problems and more. It tracks

27:58

every change automatically, allows

28:00

multiple people to work on the same

28:02

project seamlessly, and lets you easily

28:05

navigate through your project's history.
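To make that concrete, here's a quick taste of that history in a throwaway repo. The file name and commit messages are just for illustration, and the commands used here (`git add`, `git commit`) are covered properly in a moment.

```shell
# Throwaway repo to peek at the history Git keeps for you
mkdir -p history-demo && cd history-demo && git init -q

# An identity is required for commits (set locally, only for this demo repo)
git config user.name "Demo Dev"
git config user.email "demo@example.com"

# Two versions of a file become two snapshots in history
echo "v1" > app.txt && git add app.txt && git commit -qm "first version"
echo "v2" > app.txt && git commit -qam "second version"

# Every change, in order, with who made it and when
git log --oneline
```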

28:07

No more final version v2 final, really

28:10

final zip files. Git does all of this

28:13

for you, but in a much more powerful and

28:15

organized way. To get started, you need

28:18

Git installed. Whether on Windows, Mac,

28:20

or Linux, it's just two clicks away.

28:23

Google download Git and get it for your

28:26

operating system. Once Git is installed,

28:28

open up your terminal. Nowadays, I

28:30

prefer using a terminal built into my

28:32

IDE. First things first, let's check

28:34

whether you've installed Git properly.

28:36

Run git --version and you'll get back

28:40

the version that is installed on your

28:42

device. Next, you need to configure git

28:45

to work with your name and email. This

28:47

is just to track who made the changes in

28:49

the project so your colleagues know who

28:50

to blame. Here's the command. git

28:52

config --global user.name and then in

28:57

single quotes put in your name. Once you

28:59

do that, you can repeat the same

29:00

command, but this time instead of

29:02

changing user.name, we'll change user.email.

29:06

And here you can enter your email. Press

29:08

enter. And that's it. You're all set up.
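For reference, here's the whole setup in one place. The name and email below are placeholders; swap in your own.

```shell
# Check that Git is installed and see which version you have
git --version

# Tell Git who you are -- placeholder values, use your own.
# --global applies the setting to every repo on this machine.
git config --global user.name "Ada Lovelace"
git config --global user.email "ada@example.com"

# Read the values back to confirm they were saved
git config --global user.name
git config --global user.email
```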

29:11

Now, let's talk about repositories. A

29:13

repo or a repository is where Git tracks

29:16

everything in your project. Think of it

29:18

like a folder that stores all the

29:20

versions of your code. Simply put, if a

29:22

folder is being tracked by Git, we call

29:24

it a repo. Now, let's create a new

29:27

repository. In your terminal, type git

29:30

init and press enter. As you can see,

29:33

git has just initialized a new

29:34

repository. On top of the success

29:36

message, we can also see a warning. In

29:38

previous times, the default name of a

29:40

branch has been master. But nowadays,

29:42

you'll see main used much more

29:44

frequently as the name for the primary

29:46

branch. So, let's immediately fix it by

29:48

configuring the initial branch name. You

29:50

can copy this command right here. And at

29:52

the end, you can just say main.
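The command shown on screen is presumably Git's initial-branch setting, which looks like this:

```shell
# Make every new repository start on a branch named "main"
git config --global init.defaultBranch main

# Confirm the setting took effect
git config --global init.defaultBranch
```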

29:55

Now, considering that we have just

29:57

changed the initial configuration

29:58

settings, we have to create a new

30:00

folder. Create a new one called

30:02

something like mastering git. Open it

30:04

within your editor and then rerun git

30:07

init. As you can see here and here, now

30:11

we're in the main branch. That means

30:13

that git has initialized an empty

30:15

repository. You won't see any changes

30:17

yet in your folder, but a hidden .git

30:20

folder has been created inside your

30:22

directory. You don't need to touch this

30:24

folder. Git handles everything inside

30:26

from commit history, branches you'll

30:29

make, remote repos, and more. Most of

30:31

the time, Git will already come

30:33

pre-initialized by the framework or

30:35

library that you use to set up your

30:37

project with. That's how integrated Git

30:39

is into every developer's life. So now

30:42

that we have this main right here, what

30:44

does that exactly mean? Well, main is

30:47

the default branch name of your repo

30:49

created by Git. Every time you

30:51

initialize git, this branch will be

30:53

automatically created for you. I'll

30:55

teach you more about Git branches soon,

30:57

but for now, know that a branch is

30:59

nothing but a parallel version of your

31:01

project. All right, let's add some files

31:04

and track changes. I'll create a new

31:06

file called hello.js.

31:09

And you can see how smart WebStorm is.

31:11

It automatically asks me whether I want

31:13

to add it to Git. But for now, I'll

31:15

cancel that because I want to explain

31:17

everything manually. Let's make it

31:19

simply run a console.log that prints

31:21

hello git. Alongside this file, let's

31:24

create another new file and I'll call it

31:27

readme.md.

31:28

In here, we can do something similar and

31:30

say hello git. And now run git status.

31:36

Git will tell you that you're currently

31:38

on the main branch, that there are no

31:40

commits yet, and that there are two

31:42

untracked files, one of which is a

31:44

markdown document. So to track it, use

31:48

git add readme.md.

31:52

After adding a file, we need to commit

31:54

it. Committing in git is like taking a

31:56

snapshot of your project at a certain

31:58

point. Think of it as creating a whole

32:00

new copy of your folder and telling git

32:02

to remember what you did and at what

32:05

time. So in the future, if anything

32:07

happens, you'll time travel to this

32:09

folder with the commit name you specify

32:11

to git and see what you had in there.

32:14

It's essential to commit your changes

32:16

regularly. Regular commits help you keep

32:18

track of your progress and make it

32:20

easier to revert to previous versions if

32:22

you break something. You can commit by

32:24

running git commit -m

32:27

where -m stands for message, and then in

32:29

single quotes you can add that

32:31

message. For example, add readme.md

32:35

file. There we go. Congrats. You just

32:38

created a checkpoint in your project's

32:40

history. Now let's try running git

32:42

status again to see what it shows. As

32:45

you can notice, the other file hello.js

32:48

is still there. It's not tracked. We

32:51

asked git to track only the readme file.

32:54

To track this file or other files that

32:56

you may create, we'll have to run a

32:58

similar command. It'd be too much work

33:00

to commit each file individually.

33:02

Thankfully, we have a command that

33:04

commits all the files we've created or

33:06

modified that Git is not tracking yet.

33:09

To see this in action, let's create

33:11

another file test.js

33:14

and let's add a simple console log that

33:16

simply console logs a string of test.

33:19

Now to track both files and commit them

33:21

in a single commit action, we can do

33:23

that by running git add dot. The dot

33:27

after git add tells git to add all files

33:30

created, modified or deleted to the git

33:33

tracking. Next, as usual, we can specify

33:35

the commit name for this tracked version

33:38

by using git commit -m

33:41

add hello and test files.

33:45

There we go. So now you can see that all

33:47

of these files are tracked. And since

33:49

I'm using WebStorm, it also has a hidden

33:52

.idea folder. So it added it to tracking

33:54

as well, which I'm okay with. Well done.
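The whole track-and-commit cycle from this section, condensed into a sketch you can run in a throwaway directory (identity values are placeholders):

```shell
cd "$(mktemp -d)"
git init -qb main                    # -b sets the initial branch name directly
git config user.name 'Demo'          # repo-local identity (placeholder)
git config user.email 'demo@example.com'

echo "console.log('hello git');" > hello.js
echo 'hello git' > readme.md

git status --short                   # both files show as ?? (untracked)
git add readme.md                    # stage just one file
git commit -qm 'add readme.md file'

git add .                            # stage everything else in one go
git commit -qm 'add hello and test files'
git log --oneline                    # history, newest commit first
```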

33:57

Now to see the history of all commits

33:59

we've created, we can use a new command

34:01

git log. And there we have it, our git

34:04

history. It contains a commit ID or a

34:07

hash automatically created by git, the

34:09

author we specified when using git

34:12

config, a timestamp, and the commit

34:14

message we provided. Great. But how do

34:17

we switch to an older commit and restore

34:19

it? Let's say the commit add hello and

34:22

test files introduces some buggy code

34:24

and we want to restore our project to a

34:26

previous version without these files.

34:28

Our brain would immediately suggest

34:30

deleting those files entirely or

34:32

clearing up their code. And if you do

34:34

that, you'll most likely break your

34:36

production because other files depend on

34:38

those files. So instead of deleting them

34:40

manually to restore to the first version

34:42

where we had only committed the readme

34:44

file, we can use a new command. First,

34:47

you have to copy the commit hash. Yours

34:49

is going to be different from mine. So

34:51

make sure to copy yours. I'll get this

34:53

one first that says add readme file. And

34:56

I'll press copy. Then you have to exit

34:59

this git log by pressing the Q letter on

35:02

the keyboard. And then you can use a

35:03

command git checkout and then you can

35:06

provide a hash of a specific commit or a

35:08

branch you want to check out to. Now

35:11

press enter. Okay, something happened.

35:14

First of all, our two files are gone.

35:17

A detached HEAD? Experimental changes?

35:19

What's happening? Well, in git there is

35:21

a concept of a head which refers to the

35:24

pointer pointing to the latest commit

35:26

you've created. When we created our

35:28

second commit, our head shifted from

35:30

readme commit to the latest add hello

35:33

and test files commit. But when we ran

35:35

git checkout command, we moved the head

35:38

to the previous older commit. That's why

35:40

we got this detached head warning. It's

35:42

a state where the head pointer no longer

35:45

points to the latest branch commit. And

35:47

the rest of this message tells you that

35:49

you can create a new branch off of this

35:51

commit. But don't worry, your files are

35:54

still somewhere. When you use a git

35:56

checkout command, you're simply viewing

35:58

the repository state as it was at the

36:01

time of a specific commit. Like right

36:03

now, we're viewing a snapshot of your

36:05

codebase at a previous moment in time

36:07

when we only had a readme.md file. The

36:10

beauty of this is that all the logs and

36:12

files, whether created or modified,

36:15

remain untouched. The git checkout

36:17

command won't delete any logs or

36:19

history, so you can safely explore past

36:22

states without worry. But what if you

36:24

actually want to discard changes made

36:26

after that commit? Maybe you want to

36:28

quickly roll back to a stable state

36:30

after an issue hits production, tidy up

36:32

messy commits to look more professional

36:34

or undo a bad push you regret making.

36:36

Perhaps you've been experimenting with a

36:38

refactor that didn't pan out, or you

36:40

need to recover from a messy merge

36:42

conflict. Thankfully, Git provides a few

36:45

commands that'll help you in these

36:46

scenarios and I'll teach you how all of

36:49

that works very soon. So, just keep

36:51

watching and we'll dive into these more

36:53

advanced commands that are really going

36:55

to help you well fix a broken

36:57

production. Now, to go back to the current

36:59

state, which is often called the head

37:01

state, you simply have to run git

37:03

checkout main. And there we go. Previous

37:06

HEAD position was at the hash of the commit

37:08

we checked out. And now we've switched the branch

37:10

to main. You can see the same thing

37:12

happen right here on the bottom right or

37:14

the top left depending where your

37:16

branching is. And if you made any

37:18

changes while in the detached head state

37:20

and you want to discard them, you can do

37:22

the same thing: git checkout -f, where -f

37:25

means force, and then get back to

37:27

main. In this case, we're good. We're

37:29

already on main. And that's it. You

37:31

already know more about git than most

37:33

developers do. Of course, we'll dive

37:36

deeper into advanced use cases and tips

37:38

and tricks soon, but now let's talk

37:41

about GitHub and how it differs from

37:43

Git. Git is a tool you use to track

37:47

changes. Whereas GitHub is a cloud

37:50

platform that allows you to store your

37:52

Git repositories online and collaborate

37:54

with others. To push your local project

37:57

to GitHub, you'll need to link your

37:59

repository to a remote. But what's a

38:02

remote? Well, there are two types of

38:04

repositories. Local repository is a

38:07

version of a project that exists on your

38:09

own machine, laptop, or whatever else

38:12

you use where you do your developer

38:14

work. When you initialize a repo using

38:17

git init, you create a local repo in

38:19

your folder. Changes you make there are

38:22

private until you push them to a remote

38:24

repository.

38:26

So a remote repo is a version of a

38:29

project stored on a server like GitHub,

38:32

GitLab or Bitbucket. It's used to share

38:34

code between collaborators and keep

38:37

project versions in sync across

38:39

different users computers. When

38:41

collaborating with a team, you'll have

38:43

two kinds of repos. Everyone in the team

38:46

will have a local repository on their

38:48

machine and there will also be this one

38:51

common remote repo from which everyone

38:53

will sync their local repository

38:55

versions. Now head over to github.com

38:58

and create an account if you don't

38:59

already have one. Once you're in, press

39:02

the little plus icon on the top right

39:04

and select new repository. Enter a

39:07

repository name such as mastering git.

39:11

Choose whether you want to make it

39:12

public or private. Leave the add readme

39:15

file checkbox unticked and click create

39:19

repository.

39:21

This is a remote repository. Here you

39:24

can see your repository's origin. Copy

39:27

it. When you clone a repository from

39:29

GitHub, Git automatically names the

39:32

remote repository as origin by default.

39:34

It's basically an alias for the remote

39:37

repositories URL. Now, our goal is to

39:40

link our local repository to the remote

39:42

origin. If you haven't yet switched the

39:44

default master branch name to main, you

39:47

can do that by running git branch -M

39:50

main and this will change the branch

39:53

name to main which is a standard

39:55

practice nowadays. And now we are ready

39:57

to link our local repo to a remote

39:59

origin. You have to run the command git

40:02

remote add origin and then you have to

40:05

paste the link to the origin that you

40:07

just copied and press enter. And a good

40:11

thing to know is that you can have

40:12

multiple remote repositories. You just

40:15

have to rerun the command and change the

40:18

origin name to something else. Of

40:21

course, that's the name of your choice.

40:22

And then you can also update the new

40:24

URL. But in most cases, you'll be fine

40:27

with just one remote repo. Finally, to

40:30

push your local commits to GitHub, use

40:32

git push -u

40:35

origin main. And remember, we used

40:38

origin here to refer to the remote

40:40

repository instead of typing the full

40:42

URL.
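To make this flow runnable without a GitHub account, the same link-and-push steps can be sketched against a local bare repository standing in for GitHub (all paths and names here are illustrative):

```shell
cd "$(mktemp -d)"
git init -q --bare remote.git          # local stand-in for the GitHub repo

git init -qb main local && cd local    # the "laptop" side
git config user.name 'Demo'
git config user.email 'demo@example.com'
echo 'hello git' > readme.md
git add . && git commit -qm 'add readme.md file'

git remote add origin ../remote.git    # 'origin' is just an alias for the URL
git push -qu origin main               # -u links local main to origin/main
git branch -r                          # prints: origin/main
```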

40:44

So, press enter. And there we go. This

40:47

worked. If anything with git goes wrong,

40:49

typically it goes wrong at this point

40:52

when you're trying to push to a remote

40:54

repo. So, if you don't see what I'm

40:57

seeing right here, and instead you got

40:59

some error, typically all of these

41:01

errors are very easily resolvable. I

41:04

would just recommend copying the error

41:05

message, pasting it in Google, and then

41:07

fixing it right there and then. But in

41:10

this case, we're good. And now, if you

41:12

go back to your GitHub repository and

41:15

reload, boom, your code is now online

41:19

for the world or your team to see. And

41:22

okay, okay, you might have already known

41:25

this. For some of you, that's about as

41:27

far as you've gone with Git: create a

41:30

repo, push your changes, and call it a

41:32

day. But Git has so much more to offer,

41:36

especially when you're working within a

41:38

team. So now, let's take things up a

41:41

notch and dive into branching and

41:43

merging. This is where Git truly shines.

41:47

Branches in Git allow you to create

41:49

different versions of your project, like

41:52

making a copy of a project at a specific

41:54

moment in time.

41:56

Whatever changes you make in this copied

41:59

version won't affect the original. The

42:02

main project or branch stays untouched

42:05

while you experiment, modify, or add new

42:08

features in the copied branch. If

42:10

everything works out, you can later

42:12

merge your changes back into the

42:14

original project. If not, no worries.

42:16

The original remains safe and unchanged.
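That branch-and-merge flow can be sketched end to end in a throwaway repo (names and file contents are placeholders; the merge here is the simple fast-forward case):

```shell
cd "$(mktemp -d)" && git init -qb main
git config user.name 'Demo' && git config user.email 'demo@example.com'
echo 'v1' > app.txt && git add . && git commit -qm 'initial commit'

git checkout -qb feature-branch        # copy of main at this moment in time
echo 'v2' > app.txt && git commit -aqm 'modify app'

git checkout -q main
cat app.txt                            # prints: v1 (main is untouched)
git merge -q feature-branch            # fold the experiment back in
cat app.txt                            # prints: v2
```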

42:20

When working in a team, using separate

42:22

branches for different features or bug

42:25

fixes is essential. It allows you and

42:28

your team to work independently on

42:30

different parts of the code without

42:32

causing conflicts or errors, ensuring

42:35

everyone can focus on their own tasks.

42:38

At the start, you'll have one default

42:40

branch called main. To create a new

42:43

branch, run git branch and then type a

42:46

branch name.

42:48

This will create a new branch. And if

42:51

you want to switch to this newly created

42:53

branch, then run git checkout and then

42:56

enter the branch name you want to check

42:58

out too. And there we go. Switch to

43:00

branch branch name. Now, if you want to

43:03

go back to main, just run git checkout

43:06

main. There we go. And here's a little

43:09

pro tip. There is a shortcut to create a

43:12

new branch and immediately move to it.

43:15

To do that, run git checkout with a

43:18

-b flag and then enter a branch name

43:22

such as feature branch. Of course, this

43:25

branch name and feature branch are just

43:27

dummy names. Make sure your branch name

43:30

is short and explains which changes

43:32

you'll be making on that branch. For more

43:35

tips on how to properly name your

43:36

branches, you can download the git

43:38

reference guide. So, let's create and

43:40

move to this feature branch in one

43:42

command. There we go. And what I'm about

43:45

to say next is very important. So keep

43:48

it in mind. When you create a new

43:50

branch, it'll be based on the branch

43:53

you're currently on. So if you're on the

43:56

main branch and run the command, the new

43:59

branch will contain the code from the

44:01

main branch at that point in time.

44:04

However, if you're on a different branch

44:06

with different code, the new branch will

44:09

inherit that code instead. So to ensure

44:12

you're creating the new branch from the

44:14

correct starting point, you should

44:16

either first switch to the branch you

44:18

want to base the new one on or run this

44:21

command git branch. Then you can enter a

44:25

new branch name

44:28

and then the next thing can be the

44:30

source branch. So if you do it this way

44:33

and replace the new branch name and the

44:35

source branch with the names of actual

44:37

branches, then it'll create a new branch

44:40

from another specific branch. So if you

44:42

run this command, you can directly

44:44

create and switch to a branch based on

44:47

any other branch without needing to

44:49

check out to it first. For now, I'll

44:51

remove that. And let's say that we want

44:53

to go into our code and implement this

44:55

feature we're working on. Let's say that

44:58

in our case the only feature we want to

45:00

do is to modify the readme. So below

45:03

hello git I'll say I'm adding this from

45:07

feature-branch. There we go.

45:10

Feature implemented. If only it was this

45:13

easy. And you can see that our IDE

45:15

immediately highlighted this readme file

45:18

in blue indicating that it has some

45:20

changes. Now we need to add it commit it

45:23

and push it. This time instead of saying

45:26

git add readme.md, let's just use git

45:29

add dot, which is a command that you'll

45:31

use much more often. Next we need to

45:33

commit the changes with git commit -m

45:36

and then we have to add a commit

45:38

message. So this is the perfect time to

45:41

learn how to write a proper commit

45:44

message. A quality commit message is

45:47

written in the imperative mood, a

45:49

grammatical mood that sounds like you're

45:52

giving a command

45:54

like improve mobile responsiveness or

45:57

add A/B testing. When writing your commit

46:00

message, make it answer this question.

46:04

If applied to the codebase, this commit

46:07

will and then fill in the blank like

46:10

this commit will upgrade the packages or

46:14

this commit will fix thread allocation.

46:17

And why do we do this? Well, because it

46:20

answers the question, what will happen

46:23

when I merge the branch containing this

46:25

commit? It will add A/B testing. For

46:28

example, be direct and eliminate filler

46:32

words. For example, let's use modify

46:35

readme. In this case, it's short, sweet,

46:38

and in an imperative mood. And press

46:41

enter. There we go. We've just made Git

46:43

aware of our commit. Now that you know

46:45

how to write better commits, let's take

46:48

a moment and check out our remote

46:49

repository. What do you think? Will it

46:52

have the latest commit we made? Let's

46:54

reload it. And it's the same. It doesn't

46:57

contain our newly created feature

46:59

branch. Do you know why? It's because

47:02

the changes we made are in the local

47:04

repository, which has not yet been

47:06

synced with the remote repo. To see

47:09

those changes, first you'll have to

47:11

publish your local branch. And you can

47:13

do that using git push --set-upstream

47:18

origin and then the name of the

47:21

branch. In this case, feature-branch,

47:24

and press enter.
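Here too a local bare repo can stand in for GitHub so the upstream mechanics are visible end to end (all names are illustrative, and the empty commits are just demo filler):

```shell
cd "$(mktemp -d)" && git init -q --bare remote.git
git init -qb main work && cd work
git config user.name 'Demo' && git config user.email 'demo@example.com'
git commit -q --allow-empty -m 'initial commit'
git remote add origin ../remote.git
git push -qu origin main

git checkout -qb feature-branch
git commit -q --allow-empty -m 'modify readme'
git push -q --set-upstream origin feature-branch   # same effect as -u

git commit -q --allow-empty -m 'another change'
git push -q                            # tracking is set, so plain push works
git pull -q                            # and plain pull syncs the other way
```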

47:28

There we go. An upstream branch is a

47:31

remote branch that your local branch

47:33

tracks. When you set an upstream branch

47:36

using set upstream, you're essentially

47:38

linking your local branch to a branch on

47:40

a remote repo. Through this command, you

47:43

push a local feature branch to the

47:45

origin remote repository and then you

47:48

set the upstream branch for your local

47:51

feature branch to track origin feature

47:55

branch. Alternatively, you can also use

47:58

git push -u origin feature-branch, or

48:04

the name of your branch. Of course, both

48:07

--set-upstream and -u establish a

48:10

tracking relationship between your local

48:12

branch and the remote branch. This way,

48:15

in the future, if you want to push

48:17

something from your local branch to your

48:19

remote branch, you simply have to run

48:21

git push. That's it. At this moment for

48:24

us, everything is up to date. But as you

48:27

make future changes, you don't have to

48:29

rerun --set-upstream or -u. You only

48:32

have to add it, commit it, and push it.

48:35

That's it. And if somebody else made

48:37

changes to your remote branch, either

48:40

directly or by merging some other

48:42

changes into it, you have to make your

48:44

local branch up to date with the remote

48:46

branch. And you do that by using the git

48:49

pull command. There we go. It's already

48:52

up to date in this case. This command

48:54

fetches changes from the remote repo and

48:57

merges them into your local repo for

48:59

that branch. And the story

49:02

doesn't stop there. Git has plenty of

49:04

advanced features like merge conflicts,

49:07

reset, revert, stash, cherry-pick, and

49:09

more. And if you want to truly master

49:11

Git, I've got a complete crash course

49:13

waiting for you on YouTube, totally

49:15

free. You'll find the link down in the

49:17

description. But for now, what you've

49:18

learned so far is enough to kick off

49:20

your DevOps journey. Though, it's

49:22

definitely just the beginning. Make sure

49:24

to keep digging deeper into Git as you

49:27

grow. You'll soon use this knowledge to

49:29

build pipelines that automatically

49:30

build, test, and deploy code to staging

49:33

or production servers. And guess what?

49:36

That entire process begins with a simple

49:39

Git push. Git also plays a huge role in

49:43

managing infrastructure with tools like

49:46

Terraform or Pulumi. Your cloud setup

49:49

whether it's virtual machines, databases

49:51

or networks, lives inside .tf or .yaml

49:55

files in a Git repository. Change a line,

49:58

commit and push, and your entire

50:00

infrastructure updates automatically.

50:03

All in all, in DevOps, you might not

50:06

always write the application logic

50:07

yourself, but you'll constantly review

50:10

PRs, check configurations, and maintain

50:13

secure workflows. GitHub or GitLab pull

50:15

requests will become your daily

50:17

workbench. And now you know exactly how

50:20

they work. And once you've mastered Git,

50:22

you're ready for the next big leap of

50:24

building pipelines that take your

50:26

commits and turn them into live runnable

50:29

applications. So, let's dive into that

50:31

next.

50:35

CI/CD pipelines. What are those? Well, a

50:39

pipeline is just a set of automated

50:41

steps that takes your code from the

50:44

moment you push it all the way to

50:46

production. Instead of manually running

50:48

npm install, npm test, docker build, and

50:52

kubectl apply every single time, your

50:56

pipeline does it for you. Over the

50:58

years, the industry has developed a

51:01

bunch of tools for managing pipelines.

51:03

Some of the most popular ones are

51:05

Jenkins, which is the OG, highly

51:08

customizable but self-hosted and heavy.

51:11

GitLab CI/CD, which is built right into

51:13

GitLab, great if you have your repos

51:16

there. CircleCI, TravisCCI, and Azure

51:20

DevOps if you're on the Microsoft

51:22

ecosystem. They all automate, build,

51:25

test, and deploy. But most devs today

51:28

prefer GitHub actions because it's

51:30

deeply integrated with where your code

51:32

already lives. I prefer it because it's

51:35

built right into GitHub which means no

51:37

extra setup or third party login. It's

51:39

event driven which means that it

51:41

triggers on pushes, PRs, or cron jobs. So in

51:45

simple words when you push your pipeline

51:47

runs. It has a massive library of

51:49

pre-built actions which means that you

51:51

can just grab community workflows for

51:53

testing, Docker deployments, and more. And

51:56

it has version controlled YAML configs.

51:59

This is where the automation lives right

52:02

within your repo. So, it's easy to audit

52:04

and reuse. In other words, it's super

52:07

simple to use and that's why I

52:09

personally use it and recommend you do

52:11

too. At its core, a GitHub actions

52:13

pipeline is called a workflow. Workflows

52:17

live in your repo inside the

52:19

.github/workflows folder.

52:22

And each workflow is just a YAML file

52:25

defining triggers, what events start the

52:28

workflow like pushing code or opening a

52:31

PR, jobs which are a set of tasks each

52:34

running on its own virtual machine, and

52:37

the steps which are actual commands or

52:39

actions executed in a job. So what is

52:42

YAML? Well, YAML stands for YAML aim

52:46

markup language. Funny name, right? It

52:49

basically means that YAML is not about

52:51

complicated tags like HTML or XML.

52:55

Instead, YAML is designed to be human

52:58

friendly and machine readable. Compare

53:00

that to JSON or XML. YAML is cleaner,

53:03

which is why Kubernetes, Docker Compose,

53:06

and GitHub actions use it for configs.
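For a feel of that cleanliness, here is a small made-up config expressed in YAML; the keys and values are illustrative, not from the course:

```yaml
# Hypothetical service config: the same data in JSON would need
# braces, quotes, and commas everywhere.
service:
  name: acquisitions-api
  replicas: 3
  ports:          # a list (sequence)
    - 80
    - 443
  env:            # a nested mapping
    NODE_ENV: production
```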

53:08

So, to truly master creating pipelines

53:11

with GitHub actions, you need to

53:13

understand YAML syntax rules. Unlike

53:16

other file formats, YAML is whitespace

53:19

sensitive, which means that indentation

53:21

is everything. You have things like key

53:24

value pairs, like language: python. For

53:27

indentation, there's no debate here. It

53:29

only accepts spaces, no tabs. You can

53:32

also have lists, nested structures where

53:34

you can combine mappings and lists,

53:37

scalers for strings, numbers, and

53:39

booleans, and even things like

53:41

multi-line blocks, and anchors and

53:43

aliases where you can reuse different

53:45

configs. You don't need every detail

53:47

right now. Just focus on writing clean

53:49

YAML. And similar to any other language,

53:52

YAML also has specific keywords and

53:55

syntax. So, let me walk you through them

53:57

real quick. Some of the core keywords

53:59

include name, which is used

54:01

to define the workflow or the job title;

54:04

on, which defines triggers such as push,

54:07

PR, or schedule; jobs, to define jobs,

54:11

their OS, and steps; steps, which are

54:14

sequential commands or actions; run, which

54:18

includes shell commands to execute; uses,

54:20

if you want to use pre-built actions;

54:23

with, where you can pass params to

54:25

actions; env, to set environment variables;

54:28

and needs, to make one job depend on the

54:31

other. The list can go on, but these

54:34

are some special keywords that matter

54:36

most in GitHub Actions YAML files. And

54:39

that's

54:41

why I prepared a complete YAML and

54:44

GitHub Actions CI/CD cheat sheet. It's

54:47

packed with so many useful commands. And

54:49

hey, if you'd like to buy me a coffee

54:51

and support me in making more

54:52

highquality YouTube videos like this

54:54

one, plus get a detailed cheat sheet on

54:56

YAML and CI/CD pipelines, you can join

54:59

our new channel membership. No pressure

55:01

at all. The link should be just

55:03

somewhere below this video. All right,

55:05

let's keep going.

55:08

For now, let's move forward and put what

55:10

we've learned into action by creating

55:13

our first GitHub actions pipeline using

55:16

YAML. I'll start with creating a new

55:19

GitHub repo. Let's call it something

55:21

like DevOps pipelines. Create a new repo

55:24

and then go ahead and clone it locally

55:27

by copying this URL and then just clone

55:30

it. In this case, I'll be using WebStorm

55:33

to clone it automatically. Once you're

55:35

in, you'll have an empty repo. So, what

55:37

you can do is initialize a project by

55:40

running npm init -y.

55:43

And this will give you a new

55:45

package.json with a new Node project. Alongside

55:48

it, go ahead and create a new file

55:51

called index.js

55:53

which will act as the starting point of

55:54

our application.

55:56

And in there, add a new console log

56:00

saying hello DevOps. We can also add

56:03

maybe an additional console log saying

56:06

something along the lines of I'm

56:08

learning CI/CD using GitHub actions.

56:14

Perfect.

56:16

And what we can do is also create

56:18

another file called test.js.

56:21

Within it, I'll add a console log when

56:24

we start the tests. So I'll say starting

56:27

tests. And I'll also add a timeout right

56:31

here that'll wait for 3 seconds. So say

56:34

console.log

56:36

waiting 3 seconds. And of course it'll

56:39

take 3,000 milliseconds which is 3

56:42

seconds. And then I'll add another

56:45

console log saying something like tests

56:47

complete. Now we can add both of these

56:50

files as scripts within package.json. So

56:54

right here instead of test I'll simply

56:56

add a run script which will run node

57:00

index.js and I'll also add a test script

57:04

which will also run the test.js

57:07

file. Now we want to create a special

57:09

folder right here in the root of our

57:12

application. Create a new folder that's

57:14

called.github.

57:17

And within GitHub create a new folder

57:21

called workflows.

57:23

Within workflows, you can see how my IDE

57:25

automatically recognizes that I might

57:27

want to do a GitHub workflow, which

57:29

would create a workflow file, but in

57:32

this case, we can create it manually by

57:34

creating a new pipeline.

57:38

And we can start creating it. First, it

57:40

needs a normal human readable name. So,

57:42

I'll say name is CI pipeline, which will

57:45

make it easier to identify this pipeline

57:47

in GitHub actions tab as you create more

57:49

pipelines.

57:51

Then it needs an on property which will

57:54

define the trigger event. What action in

57:57

GitHub should start this pipeline.

58:00

In this case, we can say that it'll be a

58:02

push action. So whenever somebody pushes

58:05

code to the repo and then we only want

58:08

to restrict it when changes are pushed

58:11

to the main branch. So I'll say branches

58:13

is going to be set to an array of main.

58:18

So if you push to a feature branch,

58:20

nothing happens. Only when you merge

58:21

into main, which is our production ready

58:23

branch, the pipeline will run. This will

58:26

prevent the unnecessary builds for every

58:28

branch and ensure that main always stays

58:31

tested. After that, we can define the

58:33

jobs. A job is a group of steps that run

58:37

together. You can think of it like a

58:39

container in which multiple actions are

58:41

executed in sequence. In this case, we

58:45

will name this job build and we have to

58:48

define what it will run on. So in this

58:52

case, we'll tell it to use runs on the

58:55

latest fresh YUbuntu latest machine. Why

59:00

Ubuntu? Because it's fast, lightweight,

59:03

and widely supported. Then inside a job,

59:06

you have to define different steps.

59:09

These are the actual instructions to be

59:11

executed one after another.

59:14

Each step has a name. In this case,

59:17

we'll call it checkout code. And then

59:21

you can say what it uses.

59:23

So in this case, you can say uses

59:26

actions/checkout

59:29

@v5.

59:30

This means that we're using a pre-built

59:33

GitHub action such as this one right

59:35

here, GitHub actions checkout. And this

59:38

is the action we're using that checks

59:40

out our repository under the GitHub

59:42

workspace so our workflow can access it.

59:45

Now we can add some more steps such as

59:48

another one with a name of set up

59:52

Node.js, which will also use, so uses,

59:56

another predefined action of actions

59:59

forward slash setup-node at v3

60:04

with and here you can define the node

60:06

version that you want to set it up with

60:08

in this case let's do 16 after that we

60:11

want to run some dependencies so I will

60:14

create another step that'll have a

60:18

name of install dependencies, and the

60:21

only thing it'll do is run npm

60:24

install. And after that, we'll have

60:26

another one which will run the tests. So

60:29

it'll be a step of name run tests

60:35

and it'll run npm test. This is

60:38

typically done using Jest or some other

60:40

testing framework. But for now we'll

60:41

just run our manual test file. Now as

60:43

you can see we have some yellow squiggly

60:45

lines right here saying that we have

60:47

some issues with our setup. And that's

60:49

because I used tabs for indentation

60:52

instead of using spaces. So after fixing

60:55

the indentation, it should look

60:56

something like this. You should have two

60:58

spaces for each indentation point. We

61:01

have jobs, then the build, then we have

61:03

runs on, and then steps is directly

61:06

below runs on. Two spaces indented after

61:09

build. Perfect. Now let's go ahead and

61:12

add and commit those changes to GitHub

61:15

by running git add ., git commit -m

61:19

"feature: add ci pipeline", and then git

61:24

push.
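Assembled from the steps above, the complete workflow file, conventionally saved as something like .github/workflows/ci.yml (the filename itself is just a convention), looks roughly like this:

```yaml
name: CI

# Run only on pushes to the production-ready main branch
on:
  push:
    branches:
      - main

jobs:
  build:
    # Use a fresh, lightweight Ubuntu machine for every run
    runs-on: ubuntu-latest
    steps:
      # Check out the repository so the workflow can access it
      - name: Checkout code
        uses: actions/checkout@v5

      # Install the Node.js runtime
      - name: Set up Node.js
        uses: actions/setup-node@v3
        with:
          node-version: 16

      - name: Install dependencies
        run: npm install

      - name: Run tests
        run: npm test
```

Remember that YAML is indentation-sensitive: two spaces per level, no tabs.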

61:26

Now if you head back over to your GitHub

61:28

repo, you'll be able to see your latest

61:30

changes. And now if you head over to the

61:32

actions tab, you'll be able to see your

61:34

newly created pipeline right here.

61:36

Expand it and you'll see all the job

61:39

details if you enter the build such as

61:41

setting up the job, setting up Node.js,

61:44

installing dependencies, and then even

61:46

running tests. And you can see the

61:48

results of those tests right here.

61:50

Starting, completed, waiting 3 seconds,

61:52

and the job finally is done. Now, this

61:55

pipeline will run on every single main

61:59

push. I can prove that to you by heading

62:02

over to the code and adding a readme

62:04

which will automatically push a new

62:06

readme.md file to main. So I'll say

62:09

testing

62:11

the CI pipeline

62:14

and commit and push. Since a push has

62:17

been made, if you head over to actions,

62:19

you'll see that a new action ran

62:22

automatically based off of my commit

62:24

which pushed over to main. It'll rerun

62:27

the tests for the entire application and

62:29

make sure that everything is good. From

62:31

here on, you can add additional test

62:33

coverage checks, deploy to staging or

62:36

production, or find actions from the

62:38

GitHub marketplace where there are so

62:40

many different actions that you can

62:41

immediately use within your application.

62:43

So, this stage gives you a first look at

62:45

your CI/CD pipelines,

62:48

which enforce code quality, passing

62:50

tests, and shipping inside a Docker

62:52

container. This was just one simple

62:55

example. Based off of this, you can

62:57

create more workflows or reuse existing

63:00

workflows from the community. Later on

63:02

in the build and deploy stage of this

63:03

project, you'll actually set up real

63:06

CI/CD pipelines for our acquisitions

63:09

application, ensuring that your

63:10

formatting is consistent, your tests

63:13

pass successfully, and everything runs

63:15

smoothly within a Docker container.

63:17

Speaking of which, let's dive into

63:19

Docker next.

63:23

It works on my machine. Have you ever

63:26

heard or said this? Or it works on

63:28

Windows but not Mac OS. Or have you ever

63:31

struggled with juggling different NodeJS

63:33

versions for different projects? This is

63:36

why Docker was created in 2013. And it's

63:39

not just a tool to solve compatibility

63:41

issues. It's a critical skill required

63:44

for the highest paying jobs as the

63:46

surveys find Docker to be the most

63:48

popular tool used by 57% of professional

63:52

developers. If you don't learn it now,

63:54

you significantly lower your chances of

63:57

landing a job. You can think of Docker

63:59

as a lunchbox for our application. In

64:02

the lunchbox, we pack not just the main

64:05

dish, which is our code, but also all

64:07

the specific ingredients or dependencies

64:10

it needs to taste just right. Now, this

64:14

special lunchbox is also magical. It

64:17

doesn't matter where we want to eat, at

64:20

our desk, a colleague's desk, or have a

64:23

little picnic. No matter the environment

64:26

or different computers, wherever we open

64:28

the lunchbox, everything is set up just

64:31

like it is in our kitchen. It ensures

64:33

consistency, portability, and prevents

64:36

us from overlooking any key ingredients,

64:39

making sure our code runs smoothly in

64:42

any environment without surprises.

64:44

Technically, that's what Docker is. It's

64:47

a platform that enables the development,

64:50

packaging, and execution of applications

64:53

in a unified environment. By clearly

64:56

specifying our application's requirements

64:58

such as Node.js versions and necessary

65:01

packages, Docker generates a

65:03

self-contained box that includes its own

65:06

operating system and all the components

65:09

essential for running our application.

65:12

This box acts like a separate virtual

65:15

computer, providing the operating system,

65:17

runtimes, and everything required for our

65:20

application to run smoothly. But why

65:23

should we bother using Docker at all?

65:27

Big shots like eBay, Spotify, Washington

65:30

Post, Yelp, and Uber noticed that using

65:33

Docker made their apps better and faster

65:36

in terms of both development and

65:38

deployment. Uber, for example, said in

65:42

their study that Docker helped them

65:44

onboard new developers in minutes

65:46

instead of weeks. So, what are some of

65:49

the most common things that Docker helps

65:52

with? First of all, consistency across

65:55

environments. Docker ensures that our

65:58

app runs the same on my computer, your

66:01

computer, and your boss's computer. No

66:04

more it works on my machine drama. It

66:07

also means everyone uses the same

66:09

commands to run the app, no matter what

66:12

computer they're using. Since

66:14

installing runtimes like Node.js isn't

66:17

the same on Linux, Windows, or Mac OS,

66:20

developers usually have to deal with

66:22

different operating systems. Docker

66:25

takes care of all of that for us. This

66:27

keeps everyone on the same page, reduces

66:30

confusion, and boosts collaboration,

66:33

making our app development and

66:35

deployment faster. The second thing is

66:37

isolation. Docker maintains a clear

66:40

boundary between our app and its

66:43

dependencies. So we'll have no more

66:45

clashes between applications much like

66:48

neatly partitioned lunchbox compartments

66:50

for veggies, fruits, and bread. This

66:53

improves security, simplifies debugging,

66:55

and makes the development process smoother.

66:58

Next thing is portability. Docker lets

67:01

us easily move our applications between

67:03

different stages like from development

67:05

to testing or testing to production.

67:08

It's like packaging your app in a

67:10

lunchbox that can be moved around

67:12

without any hassle. Docker containers

67:15

are also lightweight and share the host

67:18

system resources making them more

67:20

efficient than any traditional virtual

67:22

machines. This efficiency translates to

67:26

faster application start times and

67:28

reduced resource usage. It also helps

67:31

with version control as just like we

67:33

track versions of our code using Git,

67:35

Docker helps us track versions of our

67:38

application. It's like having a rewind

67:40

button for our app so we can return to a

67:43

previous version if something goes

67:45

wrong. Talking about scalability, Docker

67:48

makes it easy to handle more users by

67:51

creating copies of our application when

67:53

needed. It's like having multiple copies

67:55

of a restaurant menu. When there are

67:57

more customers, each menu serves one

68:00

table. And finally, DevOps integration.

68:03

Docker bridges the gap between

68:05

development and operations, streamlining

68:08

the workflow from coding to deployment.

68:11

This integration ensures that the

68:13

software is developed, tested, and

68:16

deployed efficiently with continuous

68:18

feedback and collaboration. How does

68:21

Docker work? There are two most

68:24

important concepts in Docker: images and

68:27

containers and the entire workflow

68:29

revolves around them. Let's start with

68:33

images. A Docker image is a lightweight

68:36

standalone executable package that

68:39

includes everything needed to run a

68:42

piece of software, including the code,

68:44

runtimes like Node.js, libraries,

68:47

system tools, and even the operating

68:49

system. Think of a Docker image as a

68:52

recipe for our application. It not only

68:56

lists the ingredients being code and

68:58

libraries, but also provides the

69:00

instructions such as runtime and system

69:02

tools to create a specific meal, meaning

69:06

to run our application.

69:08

And we would want to run this image

69:10

somewhere, right? And that's where

69:12

containers come in. A Docker container

69:15

is a runnable instance of a Docker

69:17

image. It represents the execution

69:20

environment for a specific application,

69:22

including its code, runtime, system

69:24

tools, and libraries included in the

69:27

Docker image.

69:29

A container takes everything specified

69:31

in the image, and follows its

69:33

instructions by executing necessary

69:35

commands, downloading packages, and

69:38

setting things up to run our

69:39

application. Once again, imagine having

69:42

a recipe for a delicious cake. The

69:45

recipe being the Docker image. Now when

69:48

we actually bake the ingredients, we can

69:50

serve it as a cake, right? The baked

69:52

cake is like a docker container. It's

69:55

the real thing created from the recipe.

69:58

Just like we can have multiple servings

70:00

of the same meal from a single recipe or

70:03

multiple documents created from a single

70:05

database schema. We can run multiple

70:08

containers from a single image. That's

70:10

what makes Docker the best. We create

70:13

one image and get as many instances as

70:15

we want from it in form of containers.

70:18

Now, if you dive deeper into Docker,

70:20

you'll also hear people talk about

70:22

volumes. A Docker volume is a persistent

70:26

data storage mechanism that allows data

70:29

to be shared between a Docker container

70:32

and the host machine, which is usually a

70:34

computer or a server or even among

70:37

multiple containers. It ensures data

70:40

durability and persistence even if the

70:43

container is stopped or removed. Think

70:45

of it as a shared folder or a storage

70:48

compartment that exists outside the

70:51

container. The next concept is Docker

70:53

network. It's a communication channel

70:56

that enables different Docker containers

70:58

to talk to each other or with the

71:00

external world. It creates connectivity,

71:03

allowing containers to share information

71:05

and services while maintaining

71:08

isolation. Think of a Docker network as

71:11

a big restaurant kitchen. In a large

71:14

kitchen being the host, you have

71:16

different cooking stations or

71:18

containers, each focused on a specific

71:20

meal. Meal being our application. Each

71:23

cooking station or a container is like a

71:26

chef working independently on a meal.

71:29

Now imagine a system of order tickets or

71:33

a Docker network connecting all of these

71:35

cooking stations together. Chefs can

71:38

communicate, ask for ingredients or

71:40

share recipes seamlessly.

71:43

Even though each station or a container

71:46

has its own space and focus, the

71:48

communication system or the Docker

71:50

network enables them to collaborate

71:52

efficiently. They share information

71:55

without interfering with each other's

71:57

cooking process. I hope it makes sense

72:00

but don't worry if it doesn't. We'll

72:02

explore it together in the demo. So

72:04

moving on, the Docker workflow is

72:07

distributed into three parts. Docker

72:09

client, Docker host aka Docker daemon,

72:14

and Docker registry aka Docker Hub. The

72:17

Docker client is the user interface for

72:20

interacting with Docker. It's the tool

72:22

we use to give Docker commands. We issue

72:26

commands to the Docker client via the

72:28

command line or a graphical user

72:30

interface instructing it to build, run,

72:33

or manage images or containers. Think of

72:36

the Docker client as the chef giving

72:38

instructions to the kitchen staff. The

72:41

Docker host or Docker daemon is the

72:44

background process responsible for

72:46

managing containers on the host system.

72:49

It listens for Docker client commands,

72:52

creates and manages containers, builds

72:54

images, and handles other Docker related

72:57

tasks. Imagine the Docker host as the

73:00

master chef overseeing the kitchen

73:02

carrying out instructions given by the

73:05

chef or the Docker client. Finally, the

73:08

Docker registry aka Docker Hub is a

73:12

centralized repository of Docker images.

73:15

It hosts both public and private

73:17

repositories of images. Docker is to

73:20

Docker Hub what Git is to GitHub. In a

73:23

nutshell, Docker images are stored in

73:26

these registries. And when you run a

73:28

container, Docker may pull the required

73:31

image from the registry if it's

73:33

unavailable locally. To return to our

73:36

cooking analogy, think of Docker

73:38

registry as a cookbook or recipe

73:40

library. It's like a popular cookbook

73:42

store where you can find and share

73:44

different recipes. In this case, Docker

73:47

images.

73:49

In essence, the Docker client is the

73:52

command center where we issue

73:54

instructions. The Docker host then

73:56

executes these instructions and manages

73:58

containers. And the Docker registry

74:01

serves as a centralized storage for

74:03

sharing and distributing images.

74:07

Using Docker is super simple. All you

74:10

have to do is click the link in the

74:12

description, download Docker Desktop for

74:15

your own operating system, and that will

74:18

help you containerize your application

74:20

in the easiest way possible.

74:23

It'll definitely take some time to

74:25

download, but once you're there, you can

74:28

accept the recommended settings and sign

74:30

up. Once you're in, on the left side,

74:33

you can see the links to containers,

74:35

which display the containers we've made,

74:38

images, which shows the images we've

74:41

built, and volumes, which shows the

74:44

shared volumes we have created for our

74:47

containers, and other beta features like

74:50

builds, dev environments, and Docker

74:52

Scout. Now, return to the browser and

74:56

Google Docker Hub. The first result will

74:59

surely be hub.docker.com.

75:03

Then open it up. Go to explore and you

75:06

can see all of the public images created

75:08

so far by developers worldwide, from

75:12

official images by verified publishers

75:15

to sponsored open-source ones covering

75:17

everything from operating system images

75:19

like Ubuntu, languages like Python and

75:22

Golang, databases like Redis,

75:25

Postgres, MongoDB, MySQL,

75:29

runtimes like Node.js, to even a Hello

75:32

World Docker image and also the old

75:34

peeps like WordPress and PHP. Almost

75:38

everything that you need is right here.

75:41

But how do we create our own Docker

75:44

images? Easy peasy. Creating a Docker

75:47

image starts from a special file called

75:50

Docker file. It's a set of instructions

75:53

telling Docker how to build an image for

75:55

your application. There are some

75:58

specific instructions and keywords we

76:00

use to tell Docker what we want through

76:02

the Docker file. Think of it as Docker

76:05

syntax or language to specify exactly

76:09

what we want. Here are some of the

76:11

commands. FROM specifies the base image

76:14

to use for the new image. It's like

76:17

picking a starting kitchen that already

76:19

has some basic tools and ingredients.

76:21

WORKDIR sets the working directory for

76:24

the following instructions. It's like

76:26

deciding where in the kitchen you want

76:28

to do all your cooking. COPY copies the

76:32

files or directories from the build

76:34

context to the image. It's like bringing

76:37

in your recipe, ingredients, and any

76:39

special tools into your chosen cooking

76:41

spot. RUN executes commands in the shell

76:45

during image build. It's like doing

76:47

specific steps of your recipe, such as

76:49

mixing ingredients.

76:52

EXPOSE informs Docker that the container

76:54

will listen on specified network ports

76:57

at runtime. It's like saying, "I'm going

77:00

to use this specific part of the kitchen

77:01

to serve the food."

77:04

ENV sets environment variables during

77:06

the build process. You can think of that

77:08

as setting the kitchen environment such

77:11

as deciding whether it's a busy

77:12

restaurant or a quiet home kitchen. ARG

77:16

defines build-time variables. It's like

77:19

having a note that you can change before

77:21

you start cooking, like deciding if you

77:23

want to use fresh or frozen ingredients.

77:26

VOLUME creates a mount point for

77:29

externally mounted volumes. Essentially

77:32

specifying a location inside your

77:34

container where you can connect external

77:37

storage. It's like leaving a designated

77:39

space in your kitchen for someone to

77:41

bring in extra supplies if needed. CMD

77:44

provides default command to execute when

77:47

the container starts. It's like

77:49

specifying what dish you want to make

77:51

when someone orders from your menu.

77:53

ENTRYPOINT specifies the default

77:56

executable to be run when the container

77:58

starts. It's like having a default dish

78:01

on your menu that people will get unless

78:03

they specifically ask for something

78:05

else. And you might wonder, isn't ENTRYPOINT

78:08

the same as CMD? Well, not really.

78:12

In simple terms, both CMD and ENTRYPOINT

78:16

are instructions in Docker for

78:18

defining the default command to run when

78:21

a container starts.

78:23

The key difference is that CMD is more

78:26

flexible and can be overridden when

78:29

running the container, while ENTRYPOINT

78:31

defines the main command that cannot be

78:34

easily overridden. Think of CMD as

78:36

providing a default which can be changed

78:40

and ENTRYPOINT as setting a fixed

78:42

starting point for your container. If

78:44

both are used, CMD arguments will be

78:47

passed to ENTRYPOINT. And these are the

78:50

most used keywords when creating a

78:52

Dockerfile. I have also prepared a list

78:55

of other options you can use in docker

78:57

files. You can think of it as a complete

78:59

guide and a cheat sheet you can refer to

79:02

when using docker. The link is in the

79:04

description. But now, let's actually use

79:07

some of these commands in practice.
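Before we do, the CMD versus ENTRYPOINT distinction from a moment ago can be sketched in a couple of lines. This is just an illustration — the alpine base and the echo command are placeholder choices, not part of the course project:

```dockerfile
FROM alpine

# ENTRYPOINT fixes the executable that always runs;
# CMD supplies default arguments that can be overridden.
ENTRYPOINT ["echo"]
CMD ["hello from the default CMD"]
```

Running `docker run <image>` prints the default message, while `docker run <image> overridden` replaces only the CMD part — echo still runs, now with your argument.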

79:09

Let's try to run one of the images

79:11

listed in the Docker Hub to see how that

79:14

works. Let's choose one of the operating

79:16

system images as an example. Let's go

79:19

for Ubuntu. On the right side of the

79:22

details of the image, you'll see a

79:23

command. Copy it and try executing it in

79:26

your terminal. But before we paste it,

79:29

first create a new empty folder on our

79:31

desktop called docker_course

79:35

and then drag and drop it to our empty

79:37

Visual Studio Code window. Open up your

79:40

empty terminal and paste the command

79:43

docker pull ubuntu. It's going to do it

79:46

using the default tag latest and it's

79:49

going to take some time to pull it. As

79:51

you can see, it's working. Docker

79:53

initially checks if there are any images

79:56

with that name on our machine. If not,

79:59

it searches the Docker Hub, finds

80:01

the image, and automatically installs it

80:03

on our machine. Now, if we go back to

80:06

Docker Desktop, we'll immediately see an

80:08

Ubuntu image right here under images. To

80:12

confirm that we actually installed a

80:14

whole different operating system, we can

80:16

run a command that executes the image.

80:19

Do you know what that process is called?

80:22

Creating a container. So let's run

80:25

docker run -it

80:28

for interactive and then Ubuntu and

80:32

press enter. After you run this command,

80:35

head over to docker desktop and if you

80:38

go to containers, you'll see a new

80:40

container based off of the Ubuntu image.

80:43

Coming back to our terminal, you'll see

80:45

something different. If you've ever

80:47

tried Ubuntu before, you'll notice that

80:49

this terminal looks exactly like the

80:51

Ubuntu command line. Let's test out some

80:54

of the commands. ls for list. We have cd

80:59

home to move to our home directory. mkdir,

81:02

which is going to create a new

81:03

directory called hello. We can once

81:06

again ls

81:08

cd into hello to navigate to it. We can

81:11

create a new hello-ubuntu.txt file.

81:15

81:17

We can ls to check it out if it's there.

81:20

And it is. We have just used different

81:22

Ubuntu commands right here within our

81:25

terminal. Amazing, isn't it? We are

81:28

running an entirely different operating

81:30

system simply by executing a Docker

81:33

image within a Docker container. For

81:36

now, let's kill this terminal by

81:38

pressing this trash icon and navigate

81:40

back to our Docker desktop. Now a bigger

81:44

question awaits. How do we create our

81:47

own Docker images? We can start from a

81:50

super simple Docker app that says hello

81:54

world. Let's create a new folder called

81:57

hello-docker.

81:59

Within it, we can create a

82:02

simple hello.js

82:04

file. And we can type something like

82:07

console.log('hello

82:11

docker').

82:12

Then comes the interesting part. Next,

82:15

we'll create a Dockerfile. Yep, it's

82:18

just Dockerfile like this. No dots, no

82:22

extensions. VS Code might prompt you to

82:25

install a Docker extension. And if it

82:27

does, just go ahead and install it. Now,

82:30

let's figure out what goes into the

82:32

Docker file. Do you remember the special

82:35

Docker syntax we talked about earlier?

82:38

Well, let's put it to use. First, we

82:40

have to select the base image to run the

82:42

app. We want to run a JavaScript file so

82:45

we can use the node runtime from the

82:47

Docker Hub. We'll use this one with an

82:50

Alpine version. It's a lightweight

82:52

version of Linux. So, we can type

82:55

something like FROM node:20-alpine.

83:02

Next, we want to set the working

83:03

directory to /app. This is the

83:06

directory where commands will be run.

83:09

And /app is a standard

83:11

convention. So we can type WORKDIR

83:14

and then type /app.

83:18

Next we can write COPY . . like

83:21

this. This will copy everything from our

83:24

current directory to the docker image.

83:26

The first dot is the current directory

83:30

on our machine and the second dot is the

83:33

path to the current directory within the

83:35

container. Next, we have to specify the

83:38

command to run the app. In this case,

83:41

CMD node hello.js

83:45

will do the trick.
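Put together, the Dockerfile we just described looks like this (CMD is shown in exec form, which is the idiomatic spelling; the shell form CMD node hello.js from the walkthrough works too):

```dockerfile
# Base image: Node 20 on lightweight Alpine Linux
FROM node:20-alpine

# All following instructions run relative to /app inside the image
WORKDIR /app

# Copy everything from the current directory into the image's /app
COPY . .

# Default command executed when a container starts
CMD ["node", "hello.js"]
```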

83:47

And now that we have created our Docker

83:49

file, let's move into the folder where

83:52

the Docker file is located by opening up

83:54

the terminal and then running cd hello-docker.

84:00

Inside of here, let's type docker build -t,

84:04

and -t stands for the tag, which is

84:07

optional. And if no tag is provided, it

84:10

defaults to the latest tag. And finally,

84:12

the path to the docker file. And in this

84:15

case, that's hello-docker and then

84:17

dot, because we're right there. And press

84:20

enter.

84:22

It's building it. And I think it

84:25

succeeded. Great. To verify that the

84:28

image has been created or not, we can

84:30

run a command docker images. And you can

84:34

see that we have two images, Ubuntu as

84:37

well as hello docker created 16 seconds

84:40

ago. Now, if you're a more visual

84:43

person, you can also visit Docker

84:45

desktop. Here, if you head to images,

84:48

you can see all of the images we have

84:50

created so far.

84:52

Now that we have our image, let's run or

84:55

containerize it to see what happens. So,

84:58

if we go back, we can run docker run

85:01

hello-docker.

85:04

There we have it, an excellent console

85:07

log.

85:09

If we go back to Docker desktop and then

85:12

open up that container

85:14

and navigate inside of the files,

85:18

you'll see a lot of different files and

85:20

folders, but there is one special file

85:22

here. Want to make a guess? Yes, it's

85:25

app, which we created in the Dockerfile.

85:28

Moving inside it, we can see that it

85:30

contains two of the same files we have

85:32

in our application: Dockerfile and

85:34

hello.js, an exact replica. Also, if we want

85:38

to open up our application in shell

85:40

mode, similar to what we did with

85:42

Ubuntu, we have to run docker run -it

85:47

hello-docker.

85:50

This puts us directly within the

85:53

operating system and then you can simply

85:55

run node hello.js

85:58

to see the same output.

86:02

We can also publish the images we have

86:04

created to Docker Hub. But before that,

86:06

let's build something a bit more complex

86:08

than the simple hello world and then

86:11

let's publish it to the Docker Hub.

86:14

Which means that now we're diving into

86:16

the real deal, dockerizing ReactJS

86:19

applications. Let's dockerize our first

86:22

React application. I'm going to do that

86:24

by quickly spinning up a simple React

86:27

project by running the command npm

86:30

create vite@latest and then react-docker

86:35

as the folder name. If you press enter,

86:38

it's going to ask you which flavor of

86:40

JavaScript you want. In this case, let's

86:42

go with React. Sure, we can use

86:44

TypeScript. And we can now cd into

86:47

react-docker.

86:49

And we won't run any npm install or npm

86:52

run dev because the dependencies will be

86:55

installed within our dockerized

86:57

container. So with that said now if we

87:00

clear it we are within react docker and

87:03

you can see our new react application

87:05

right here. So as the last time you

87:08

already know the drill we need to create

87:09

a new file called Dockerfile. As you

87:13

can see, it automatically gets this

87:14

icon. And it's going to be quite similar

87:16

to the original Docker file that we had,

87:19

but this time I want to go into more

87:21

depth about each of these commands so

87:24

you know exactly what they do. And

87:26

because of that, below this course, you

87:29

can find a complete Docker file for our

87:32

React Docker application. Copy it and

87:35

paste it here. Once you do that, you

87:37

should be able to see something that

87:39

looks like this. It seems like there's a

87:41

lot of stuff, but there really isn't.

87:44

It's just a couple of commands, but I

87:46

wanted to take my time to deeply explain

87:49

all of the commands we're using right

87:51

here. So, let's go over all of that

87:54

together. First, we need to set the base

87:57

image to create the image for React app.

88:00

And we are setting it up from Node 20

88:03

Alpine. It's just version 20 of Node.

88:06

You can use any other version you want.

88:08

And in these courses, I want to teach

88:10

you how to think for yourself, not

88:12

necessarily just replicate what I'm

88:14

doing here. So, if you hover over the

88:17

command, you can see exactly what it

88:19

does. Set the base image to use for

88:22

subsequent instructions. FROM must be

88:24

the first instruction in a Docker file.

88:27

And you can see a couple of examples.

88:28

You can use FROM with a base image, or you can

88:31

even add a tag or a digest. In this

88:34

case, we're adding a tag of a specific

88:36

version, but it's not necessary. And if

88:39

you click online documentation, you can

88:41

find even more instructions on exactly

88:44

how you can use this command. Next, we

88:46

have to play with permissions a bit.

88:48

Now, I know that these couple of

88:50

commands could be a bit confusing, but

88:52

we're doing it to protect our new

88:54

container from bad actors and users

88:57

wanting to do something bad with it. So

88:59

because of that we create a new user

89:02

with permissions only to run the app.

89:05

The -S flag is used to create a system user

89:08

and -G is used to add that user to a

89:12

group. This is done to avoid running the

89:15

app as a root user that has access to

89:17

everything. That way any vulnerability

89:19

in the app can be exploited to gain

89:22

access to the host system. This is

89:24

definitely not mandatory, but it's

89:26

definitely a good practice to run the

89:28

app as a non-root user, which is exactly

89:30

what we're doing here. We're creating a

89:32

system user, adding it to the user

89:34

group, and then we set the user to run

89:37

the app: USER app. And you can see more

89:40

information about it right here. Set the

89:42

username to use when running the image.

89:45

Next, we set the working directory to

89:47

forward slash app. And then we copy the

89:50

package.json and package-lock.json to the

89:53

working directory. This is done before

89:56

copying the rest of the files to take

89:58

advantage of Docker's cache. If the

90:01

package.json and package-lock.json files

90:03

haven't changed, Docker will use the

90:05

cached dependencies. So COPY copies files or

90:09

folders from source to destination in

90:12

the image's file system. So first you

90:15

specify what you want to copy from the

90:16

source and then you provide a path where

90:19

you want to paste it to.

90:21

Next, sometimes the ownership of the

90:23

files in the working system is changed

90:25

to root and thus the app can't access

90:28

the files and throws an error EACCES:

90:31

permission denied. To avoid this, we switch

90:34

the user back to root, reversing what we did

90:36

above, so we can change file ownership.

90:38

Then we change

90:40

the ownership of the app directory to

90:42

the app user by running a new command in

90:46

this case chown where we specify which

90:49

user and group and directory we're

90:51

changing the access to and then we

90:53

change the user back to the app user and

90:56

once again if these commands are not

90:57

100% clear no worries. This is just

91:00

about playing with user permissions to

91:02

not let bad actors play with our

91:04

container. Finally, we install

91:06

dependencies. copy the rest of the files

91:08

to the working directory. Expose the

91:10

port 5173 to tell Docker that the

91:13

container listens on that specified

91:15

network port, and then we run the app. If you

91:17

want to learn about any of these

91:19

commands, hover over it. You can already

91:21

get a lot of info and then go to online

91:23

documentation if you need even more.

91:26

With that said, that is our Docker file.
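Putting the walkthrough together, the Dockerfile could look something like this sketch. The exact base image tag and user/group names are assumptions, not shown on screen:

```dockerfile
# Sketch of the Dockerfile described above; base image tag is an assumption.
FROM node:20-alpine

# Create and use a non-root user so bad actors can't play with the container.
RUN addgroup app && adduser -S -G app app
USER app

WORKDIR /app

# Copy the manifests first to take advantage of Docker's layer cache.
COPY package.json package-lock.json ./

# Ownership can end up as root, causing EACCES: permission denied,
# so switch back to root, chown the app directory, then return to app.
USER root
RUN chown -R app:app .
USER app

# Install dependencies, then copy the rest of the files.
RUN npm install
COPY . .

# Tell Docker the container listens on port 5173.
EXPOSE 5173

CMD ["npm", "run", "dev"]
```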

91:29

Another great practice that we can do is

91:31

just go right here and create another

91:33

file similar to .gitignore. This time

91:37

it's called .dockerignore. And here you

91:41

can add node_modules

91:43

just to simply exclude it from docker

91:45

because we don't really need it in our

91:47

Docker image, just like we don't need node_modules on our GitHub. We don't

91:49

need it anywhere, not even in Docker.
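The .dockerignore mentioned here can be as small as a single entry (only node_modules comes from the video):

```
# Exclude dependencies; Docker installs them from package.json itself.
node_modules
```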

91:52

Docker works with our package.json

91:54

and package-lock.json and then rebuilds

91:56

it when it needs to. Now finally once we

92:00

have our docker file we are ready to

92:02

build it. We can do that by opening up a

92:05

new terminal, navigating to React

92:07

Docker, and we can build it by running

92:10

the command docker build -t (for tag),

92:13

which we can leave as default, react-

92:16

docker, which is the name of the image,

92:18

and then dot to indicate that it's in

92:21

the current directory. And finally,

92:23

press enter. This is going to build out

92:26

the image, but we already know that an

92:29

image is not too much on its own. To use

92:32

the image, we have to actually run it.

92:34

So, let's run it by running the command

92:36

docker run react-docker

92:40

and press enter. As you can see, it

92:42

built out all of the packages needed to

92:44

run our app and it seems to be running

92:47

on localhost 5173.
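The two commands just described, spelled out (the tag name react-docker is the one used in the video):

```shell
# Build the image from the Dockerfile in the current directory,
# tagged react-docker.
docker build -t react-docker .

# Run a container from that image (no port mapping yet).
docker run react-docker
```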

92:49

But if we open it up, it looks like the

92:52

site isn't showing even though we

92:54

specified that EXPOSE instruction right here

92:57

saying that we're listening on 5173.

93:01

So why is it not working? Well, first we

93:04

need to understand that expose does only

93:07

one job and it's to inform docker that

93:10

the container should listen to that

93:12

specific exposed port in runtime. That

93:15

does make sense. But then why didn't

93:17

it work? Well, we know on

93:19

which port the docker container will

93:21

listen to. Docker knows it and so does

93:24

the container. But someone is missing

93:26

that information. Any guesses? Well,

93:29

it's the host, the main computer we're

93:32

using to run it. As we know, containers

93:35

run in isolated environments and by

93:38

default, they don't expose their ports

93:40

to the host machine or anyone else. This

93:43

means that even if a process inside the

93:45

container is listening on a specific

93:47

port, the port is not accessible from

93:50

outside the container. And to make our

93:53

host machine aware of that, we have to

93:55

utilize a concept known as port mapping.

93:59

It's a concept in Docker that allows us

94:01

to map ports between the Docker

94:03

container and the host machine. It's

94:05

exactly what we want to do. So to do

94:09

that, let's kill our entire terminal by

94:11

pressing this trash icon. Reopen it. cd

94:15

to react-docker

94:18

and let's run the same command docker

94:20

run. And then we're going to add a P

94:23

flag right here and say map 5173 in our

94:26

container to 5173 on our host machine.
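As a single command, the port mapping described here would look like:

```shell
# -p host_port:container_port publishes container port 5173
# on port 5173 of the host machine.
docker run -p 5173:5173 react-docker
```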

94:30

And then specify which image do we want

94:33

to run and press enter. Now as you can

94:36

see it seems to be good. But if I run

94:39

it, the same thing happens again. It's not

94:42

Docker's fault, but it's something that

94:45

we missed. It's Vite. If you read the

94:48

logs right here, it's going to say use --host

94:52

to expose. So, we have to expose that

94:55

port for Vite too. So, let's modify our

94:58

package.json by going right here and

95:02

adding the --host flag

95:05

to expose our dev environment. And now

95:08

again we'll have to stop everything,

95:10

kill our terminal, reopen it, cd to

95:14

React Docker, and then run the image

95:16

again. Which makes you wonder, wouldn't

95:19

it be great if Docker does it on its own

95:22

whenever we make some file changes? And

95:24

the answer is yes, definitely. And

95:27

Docker heard us. Later in the course,

95:30

I'll teach you how to use the latest

95:32

Docker features that allow us to

95:34

automatically build images and save us

95:37

from all of this hassle. But I first

95:39

want to teach you how to do it manually

95:41

to understand how cool Docker Compose

95:44

is, which I'm going to teach you later

95:45

on. So, let's just rerun the same

95:48

command.

95:50

And now we get an error. This means that

95:53

something is already connected to that

95:55

port. And this indeed is true. If you

95:59

check out our containers or images, we

96:02

have accumulated a large number of

96:03

images. So let's do a quick practice on

96:06

how to clear out all of our images or

96:09

containers.

96:10

Back in our terminal, we can run a

96:12

command docker ps, which is going to

96:16

give us a list of all of the current

96:18

containers alongside their ids, images,

96:21

created status, and more, as well as on

96:24

which ports are they listening on. This

96:26

is for all the active running

96:28

containers. And if you want to get

96:30

absolutely all containers, we can run

96:32

docker ps -a. And here you can see

96:36

absolutely all containers that we have.

96:39

That's a lot. Now the question is how do

96:41

we stop a specific container? Well, we

96:44

can stop it by running docker stop and

96:47

then use the name or the ID of a

96:50

specific container. You can use the

96:52

first three digits of the container ID

96:54

or you can use the entire name. So let's

96:56

use c3d.

96:59

And if you get back the same

97:02

ID, it means that it successfully

97:04

stopped it. If we go back to containers,

97:07

you can see that the C3D is no longer

97:10

running. But now let's say we have

97:12

accumulated a large number of

97:13

containers,

97:15

which we indeed have both the images and

97:17

containers. So, how can we get rid of

97:20

all of the inactive containers we have

97:22

created so far? Well, we can do that by

97:24

running docker container prune. If you

97:28

run that, it's going to say this will

97:30

remove all stopped containers. So, let's

97:32

press y. And that's fine. We only had

97:35

one that was stopped that we manually

97:37

stopped and it pruned it. But you can

97:40

also use the command docker rm to remove

97:43

a specific container by name or its ID.

97:47

So let's try with this one, aa7: docker

97:51

rm aa7 and press enter. Here we get a

97:56

response saying that we cannot remove a

97:58

running container. Of course you could

98:00

always use the --force flag and that's going

98:03

to kill it. We can verify right here.

98:06

These commands are great and it's always

98:09

great to know your way around the CLI.

98:11

But nowadays we also have Docker Desktop

98:14

which allows us to play with it within a

98:17

graphical user interface which makes

98:19

things so much simpler. You can simply

98:21

use the stop action to stop the

98:23

container or you can use the delete

98:25

action to delete a container. It is that

98:28

easy. Similarly, you can do that for

98:31

images by selecting it and deleting all

98:34

images. And you can follow my process of

98:36

deleting everything. Right now, I just

98:38

want to ensure that we have a clean

98:40

working environment before we build out

98:42

our React example one more time. And

98:45

while we're here, if you have any

98:47

volumes, feel free to delete those as

98:49

well. There we go.

98:52

So, moving back, we want to first build

98:55

out our image. And now, let's repeat how

98:57

to do that. You simply have to run

99:00

docker build -t, the name of the image,

99:03

and then dot. This is going to build out

99:06

the image. After you do that, we have to

99:09

run it with port mapping included. So

99:11

that's going to be docker run -p,

99:15

map the ports and then the name of the

99:17

image you want to run and press

99:20

enter. It's going to run it. And you can

99:23

see a bit of a difference. Right now

99:24

here it's exposed to the network. And if

99:26

you try to run localhost 5173,

99:30

you can see that this time it actually

99:32

works.

99:35

That's great. But now if we go back to

99:37

our code, go to source app and change

99:41

this Vite and React to something like

99:45

docker is awesome and save it.

99:51

Back on our local host, you can see that

99:53

it didn't make any changes.

99:56

That's very unfortunate. We hope that

99:58

this container could somehow stay up to

100:01

date with what we are developing.

100:04

Otherwise, it would be such a pain to

100:06

constantly rebuild containers with new

100:08

changes. This happens because when we

100:11

build the Docker image and run the

100:13

container, the code is then copied into

100:16

that container. You can see all the

100:18

files right here and they're not going

100:20

to change. So, even if you go right here

100:22

to app, then src, then App.tsx,

100:26

rightclick it and click edit file,

100:28

you'll be able to see that here it still

100:31

says Vite plus React.

100:34

So, what can we do? Well, we'll have to

100:37

further adjust our command. So, let's

100:39

simply stop our active container so we

100:42

can then rerun a new one on the same

100:44

port. Let's go back to our Visual Studio

100:46

Code. Clear it. Make sure that you're in

100:49

the React-docker folder and we need to

100:52

run the same command. Then we have to

100:55

also add a quote, then dollar sign, pwd in parentheses,

101:00

close it, and then say colon /app

101:04

and close the quote, like so.

101:06

It seems a bit complicated, doesn't it?

101:09

What this means is that we tell docker

101:11

to mount the current working directory

101:14

where we run the docker run command into

101:16

the app directory inside the container.

101:20

This effectively means that our local

101:23

code is linked to the container and any

101:26

changes we make locally will be

101:28

immediately reflected inside the running

101:30

container. This tiny pwd represents the

101:34

current working directory over here. It

101:36

executes in the runtime to provide the

101:38

current working directory path. And -v

101:42

stands for volume. That's because we're

101:44

creating a volume that's going to keep

101:45

track of all of those changes. Remember

101:47

that we talked about volumes before.

101:49

They try to ensure that we always have

101:52

our data stored somewhere. But before

101:54

you go ahead and press enter, there is

101:56

one more additional flag that we have to

101:58

add to this command. And that is yet

102:01

another -v, but this time with

102:04

/app/node_

102:07

modules. Why are we doing this? Well, we

102:11

have to create a new volume for the node

102:13

modules directory within the container.

102:16

We do this to ensure that the volume

102:18

mount is available in the container. So

102:21

now when we run the container, it will

102:24

use the existing node modules from the

102:26

named volume and any changes to the

102:28

dependencies won't require a reinstall

102:31

when starting the container. This is

102:34

particularly useful in development

102:36

scenarios where you frequently start and

102:38

stop containers during code changes. So

102:41

let's run it. It's running on localhost

102:44

5173.
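The full development run command built up in this section, as one line:

```shell
# Bind-mount the current directory into /app so local edits show up
# inside the container, and keep the container's own node_modules in
# an anonymous volume so the bind mount doesn't shadow it.
docker run -p 5173:5173 -v "$(pwd):/app" -v /app/node_modules react-docker
```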

102:46

Docker is indeed awesome. But now the

102:49

question is if we change it, what's

102:52

going to happen? So we go here and say

102:55

something like Docker is awesome, but

102:57

also add a couple of whales at the end.

103:00

Press save. And then you can see the HMR

103:04

update for src/App.tsx. And now if we run

103:07

it, we have a couple of whales right

103:10

here. There we go. So whenever you

103:12

change something, you'll see the result

103:14

instantly in the UI. That's amazing.

103:18

And even if we go back to our Docker

103:20

desktop, you can see that now we have a

103:22

volume that keeps track of these

103:24

changes. And if you go under containers,

103:27

go to our active container, go to files,

103:31

and then let's go to app, src, App.tsx

103:36

and edit. You can see that the changes

103:38

are also reflected right here. So that's

103:41

it. You have successfully learned how to

103:44

dockerize a front-end application. Not

103:48

many developers can do that. But you,

103:50

you are just getting started.

103:54

Now that we have created our Docker

103:56

image, let me teach you how to publish

103:58

it. We can do that using the command

104:00

line. So let's go right here, kill our

104:04

current terminal, reopen it, and cd into

104:07

React Docker.

104:09

Next, we can run docker login. And if

104:13

you already logged in with docker

104:15

desktop, it should automatically

104:17

authenticate you. Next, we can publish

104:19

our image using this command. Docker tag

104:23

react-docker.

104:26

Then you need to add your username and

104:28

the name of the image.

104:31

You can find your username by going to

104:33

docker desktop, clicking on the icon on

104:35

top right, and then copying it from

104:37

there. In my case, it's JavaScript

104:40

mastery. And then I'm going to do

104:42

forward slash react-docker.

104:45

It's okay if we don't provide any tag

104:47

right here as the default tag is going

104:49

to be colon latest. Also, don't forget

104:52

that below this course, I provided a

104:54

complete list of all of the commands

104:56

including different tag commands to help

104:59

you get started with Docker anytime,

105:01

anywhere.
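The tag-and-push flow just described, sketched with a placeholder username (substitute your own Docker Hub username):

```shell
docker login

# Tag the local image under your Docker Hub namespace;
# omitting a tag defaults to :latest.
docker tag react-docker your-username/react-docker

# Push the tagged image to Docker Hub.
docker push your-username/react-docker
```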

105:02

So, check them out and try running some

105:04

of them. Finally, let's publish our

105:07

image. And now we have to run docker

105:10

push javascript mastery or in this case

105:13

your username/react-docker.

105:17

And this is going to actually push it to

105:20

our docker hub. There we go. Now if you

105:24

go back to docker desktop, you can see

105:26

that we have a JavaScript mastery react

105:28

docker image that is now actually pushed

105:31

on the hub. And you can also check it

105:33

out right here by going to the Hub tab under

105:35

images. And then you can see JavaScript

105:38

mastery has one latest image. And

105:41

another cool thing you can do is go to

105:44

hub.docker.com

105:46

where you can find your image published

105:49

under repositories and then check out

105:52

your account right here and you'll be

105:54

able to see your React Docker image

105:56

right here live on DockerHub. And now

106:00

other people can run this image as well

106:02

and containerize their applications by

106:04

using it. How cool is that? And that's

106:08

all it is to it. You have successfully

106:10

published your first Docker image. But

106:14

now that you know the basics, let's find

106:16

a more efficient way of dockerizing our

106:19

applications. Oh yeah, developers are

106:22

lazy. So writing and running all of

106:25

these commands for building images and

106:28

containers and then mapping them to host

106:30

is just too much to do. But it's not the

106:33

only way. We can improve or automate

106:36

this process with Docker Compose and run

106:40

everything our application needs to run

106:42

through Docker using one small single

106:45

command. Yes, we can use a single

106:48

straightforward command to run the

106:50

entire application.

106:52

So, say hi to Docker Compose. It's a

106:56

tool that allows us to define and manage

106:59

multicontainer Docker applications. It

107:02

uses a YAML file to configure the

107:05

services, networks, and volumes for your

107:08

application, enabling us to run and

107:10

scale the entire application with a

107:13

single command. We don't have to run 10

107:16

commands separately to run 10 containers

107:18

for one application. Thanks to Docker

107:21

Compose, we can list all the information

107:23

needed to run these 10 containers or

107:26

more in a single file and then run only

107:29

one command that automatically triggers

107:32

running the rest of the containers. In

107:34

simple words, Docker Compose is like a

107:38

chef's recipe for preparing multiple

107:40

meals in a single dinner. It allows us

107:43

to define and manage the entire cooking

107:45

process for recipes in one go,

107:48

specifying ingredients, cooking

107:50

instructions, and how different parts of

107:53

the meal should interact. With Docker

107:56

Compose, we can serve up our entire

107:58

culinary experience with just one

108:01

command.

108:03

And while we can manually create these

108:06

files on our own and set things up,

108:08

Docker also provides us with a CLI that

108:12

generates these files for us. It's

108:14

called Docker Init. Using Docker Init,

108:17

we initialize our application with all

108:20

the files needed to dockerize it by

108:22

specifying our tech choices.

108:25

So let's go ahead and create another VIT

108:28

project which we can use to test out the

108:30

features of docker compose and docker

108:33

init. We can open up a terminal and then

108:36

run npm create vite@latest. In this

108:40

case we can call it vite-project and

108:44

press enter.

108:46

It's going to ask us a couple of

108:47

questions. It can be a react typescript

108:49

application. We can cd into it. And

108:53

please make sure that you are in the

108:55

Docker course, meaning in the root of

108:57

our folder. So it needs to create it

109:00

right next to React. If you were in

109:02

React before when you run this command,

109:04

it's going to create it inside of it. If

109:06

that's the case, delete it and just

109:08

navigate to Docker course and then run

109:10

the command. Now we can cd into the vite

109:14

project and we can learn how to use

109:17

Docker Init. It's so simple. You simply

109:20

run docker init. That's all there is to

109:23

it. And it's going to ask you many

109:25

questions based off which it's going to

109:27

generate a perfect YAML file for you. So

109:30

what application platform are we

109:32

planning on using? In this case, it's

109:34

going to be node. So you can simply

109:36

press enter. What version? You can just

109:39

press enter one more time to verify what

109:41

they're saying in parenthesis. 20 is

109:44

fine with us. npm is good. Do we want to

109:48

use npm run build? No, actually, in

109:51

this case we're going to say no and

109:53

we're going to say npm run dev. That's

109:56

what we want to use. And the server is

109:58

going to be 5173.

110:00

And that's it.
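In summary, the docker init session looks roughly like this (the prompt wording is approximate, but the answers follow the video):

```shell
docker init
# ? Application platform: Node
# ? Node version: 20
# ? Package manager: npm
# ? Run "npm run build" before starting the server? No
# ? Command to run the app: npm run dev
# ? Port the server listens on: 5173
```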

110:03

We can see that this has generated three

110:06

new files for us. The docker file which

110:09

we already know a lot about. This one

110:11

has some specific details in it. But you

110:13

can see that again it's based off of the

110:15

same thing.

110:17

It starts from a specific version, sets

110:19

up the environment variables, sets up

110:21

the working directory and run some

110:23

commands. We also have a docker ignore

110:26

where we can ignore some additional

110:28

files. And then there's this new file

110:31

compose.yaml.

110:34

While all of these files are important

110:36

with using docker compose, yaml is the

110:38

most important one. And you can read all

110:41

of these comments, but for now I just

110:43

want to delete them to show you what it

110:45

is comprised of.

110:47

We simply define which services we want

110:50

our apps or containers to use. We have a

110:53

server app where we build the context,

110:55

specify environment variables, and

110:57

specify the ports. Of course, these can

111:00

get much more complicated in case you

111:02

have multiple services you want to run,

111:04

which is exactly what I want to teach

111:06

you right now. Here, they were even kind

111:09

enough to provide an example of how you

111:11

would do that with running a complete

111:14

Postgres database. So you can specify

111:16

the database, database image, and

111:18

additional commands you can run. But

111:21

more on that later. We're going to

111:23

approach it from scratch. For now, we

111:25

can leave this empty compose.yaml. And

111:28

first, let's focus on just the regular

111:30

Docker file. In this case, we can

111:33

replace this Docker file with the one we

111:35

have in our React Docker application.

111:38

So, copy this one right here and paste

111:40

it into this new one. This one we

111:43

already know what it is doing.

111:46

Now moving inside of the YAML file here,

111:50

we can rename the server into web as

111:53

that's a common practice for running web

111:56

applications and not servers.

111:59

We can also remove environment variables

112:01

as we're not using any

112:04

and we can leave the port. Finally, we

112:07

need to add the volumes for our web

112:09

service. So we can say volumes.

112:12

Make sure to provide a space here and

112:14

then a dash. And that's going to be

112:16

.:/app,

112:19

and another dash, /app/node_

112:22

modules.
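After those edits, the compose.yaml might look like this sketch:

```yaml
services:
  web:                     # renamed from server
    build:
      context: .
    ports:
      - 5173:5173
    volumes:
      - .:/app             # mirror local code into the container
      - /app/node_modules  # keep the container's own node_modules
```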

112:24

Does this ring a bell? It's similar what

112:27

we have done before manually by using

112:29

the docker build command, but now we're

112:32

doing it all with this compose yaml

112:34

file. And now all we have to do is run a

112:37

new command: docker compose up, and press

112:42

enter. And as you can see, we get a

112:46

permission denied. You never want to see

112:48

this. If you're in Windows, maybe you're

112:50

used to seeing this every day. In which

112:53

case, you simply have to close Visual

112:55

Studio Code, rightclick it, and then

112:58

press run as administrator. That should

113:01

give you all the necessary permissions.

113:03

On macOS or Linux, you can simply add

113:06

sudo before the command.

113:09

Then it's going to ask you for your

113:10

password and it's going to rerun it with

113:12

admin privileges. So let's press enter.

113:16

And the process started. It's building

113:19

it out. Now let's debug this further. We

113:23

get the same response we've gotten

113:24

before. Hm. What could this be? Port is

113:28

already allocated. Oh yeah, we forgot to

113:32

delete or close our container that we

113:34

used for previous React application. So

113:37

now we know the easy way to do it. We

113:39

simply go here, we select it and we can

113:43

stop it or delete it. Once it is

113:46

stopped, we can go back and then simply

113:50

rerun the command. I want to lead you

113:52

through all of the commands together,

113:54

even if the failed ones, just so you can

113:56

get the feel of how you would debug or

113:58

reapproach specific aspects once you

114:01

encounter errors. That's what being a

114:03

real developer is. getting stuck,

114:06

resolving the obstacle, and getting

114:08

things done. And finally, let's run the

114:11

command. It's running it. And if we go

114:15

to localhost 5173, ah, the same thing as

114:18

before. Any guesses? The answer is that

114:22

we once again forgot to add the --host

114:26

to our Vite dev script right here. So if

114:29

we add it stop our application from

114:32

running by pressing Ctrl C, this is

114:34

going to gracefully stop it. The cool

114:38

thing about Docker Compose is that it's

114:40

also stopping the container that it spun

114:42

up. And now that we have canceled our

114:45

action, we can try to rerun it with

114:47

sudo docker compose up, but this time

114:49

with host included. And press enter.
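The --host fix mentioned here lives in package.json; the dev script would read:

```json
{
  "scripts": {
    "dev": "vite --host"
  }
}
```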

114:53

It's going to rebuild it. And if we open

114:56

it up now, it works. By now, you should

115:00

have a solid understanding of how to

115:02

containerize applications within Docker.

115:05

Of course, you can take this further by

115:07

experimenting with dockerizing projects

115:09

like MERN or Next.js apps. And if you want

115:12

to deepen your Docker skills and see how

115:14

it applies across different types of

115:16

projects, check out my full Docker

115:19

course on YouTube. It's completely free

115:21

and the link is down in the description.

115:23

And if everything makes sense so far,

115:25

trust me, you're on the right track with

115:27

DevOps. You'll soon be applying this

115:30

knowledge to containerize production

115:32

ready applications and connect them with

115:34

CI/CD pipelines. I'll cover all of that

115:36

here. So, let's just keep going.

115:40

In the previous lesson, you learned how

115:43

to build and run containers with Docker.

115:45

You saw firsthand how containers package

115:48

your applications into neat portable

115:51

boxes that can run anywhere. That's

115:54

already super powerful, but everything

115:56

feels amazing until we start getting

115:59

more users. Suddenly, there are too many

116:02

requests for a single container to

116:05

handle. That container has a ceiling and

116:09

if it dies, your app dies with it.

116:12

That's where Kubernetes comes in.

116:15

Kubernetes is a container orchestration

116:17

platform. Its job is to schedule, scale,

116:22

self-heal, and load balance containers

116:25

across machines so your app stays up.

116:28

So, why isn't one container enough? Well, a

116:32

single process handles only a limited

116:34

number of concurrent requests before CPU

116:37

and memory become bottlenecks. Sure, you

116:40

can tune and scale vertically, but

116:43

there's always going to be a ceiling and

116:45

a single point of failure. And you can't

116:48

bet on one thing for life. If it goes

116:50

down, your application goes down and so

116:53

do you. So, you need replicas and an

116:56

automated way to place and manage them.

116:59

That is Kubernetes, often abbreviated as

117:03

K8s with the eight representing the

117:06

eight letters between K and S is an

117:08

open- source container orchestration

117:10

platform. At its core, Kubernetes helps

117:13

you run your app across multiple nodes,

117:16

scale replicas up and down based on

117:18

demand, restart unhealthy containers

117:21

automatically, and distribute traffic

117:24

across replicas, all while rolling out

117:26

updates without downtime. Docker gives

117:29

you containers. Kubernetes decides how,

117:32

where, and when they run. You can think

117:34

of Kubernetes as the operating system

117:37

for your container. Without it, you'd be

117:39

manually starting and stopping

117:41

containers, keeping track of IP

117:43

addresses, restarting crashed apps, and

117:46

scaling things up or down by hand. But

117:49

to really understand Kubernetes, let's

117:52

break it down into its building blocks.

117:55

First, we have the cluster. A cluster is

117:58

a group of machines either physical or

118:01

virtual that work together as one single

118:04

system. In Kubernetes, a cluster is made

118:07

up of a control plane which decides,

118:11

schedules, reconciles and monitors

118:14

health. And then we have worker nodes,

118:17

physical or virtual machines where your

118:20

containers actually run. Each worker

118:23

node runs the kubelet, an agent that

118:26

communicates with the control plane and

118:28

a container runtime like docker. There

118:30

is also something known as a kube-proxy

118:33

that handles networking and routing

118:35

inside the cluster for every node. So

118:38

when you tell kubernetes run three

118:41

copies of my node.js app, the control

118:43

plane decides where these containers go

118:46

and the worker nodes actually run them.

118:48

Next up, we have pods. In Kubernetes,

118:52

you don't run containers directly.

118:55

Instead, containers are wrapped in

118:57

something called a pod. A pod is the

119:00

smallest deployable unit in Kubernetes.

119:03

There's usually one container per pod,

119:05

and each pod gets its own IP address.

119:08

So, when you deploy your app, Kubernetes

119:11

runs it inside a pod. You never interact

119:14

with containers directly in Kubernetes.

119:16

you only interact with pods. And you can

119:19

even run multiple pods by specifying

119:22

something known as a replica set. A

119:24

replica set ensures a specified number

119:27

of pods are always running. If you say,

119:30

"I want three replicas," Kubernetes

119:33

makes sure three pods are running at all

119:36

times. And if one pod dies, Kubernetes

119:39

spins up a new one automatically. This

119:42

is where scaling comes in. You don't

119:44

manually start containers. You just

119:46

declare how many replicas you want and

119:49

then comes the deployment.

119:51

Deployment is the higher level object

119:54

that manages replica sets. It allows you

119:57

to define updates to your application.

119:59

Kubernetes can do a rolling update,

120:02

gradually replace old pods with new ones

120:05

so your users never see downtime. It

120:07

handles all of these and ensures reality

120:09

always matches the desired state. If one

120:12

pod crashes, it creates a new one. So

120:15

instead of saying, "Go ahead and run

120:17

these containers," you say, "Here's my

120:20

app. Here's the image. Here's how many

120:22

replicas I need. Manage it for me." But

120:26

there's one issue. Pods are temporary.

120:29

They come and go. And each time they get

120:32

a new IP. So how do users or other pods

120:35

connect to them? They connect using

120:38

something called a service. A service is

120:41

a stable endpoint which is a permanent

120:43

IP or DNS name that automatically routes

120:47

traffic to the available pods behind it

120:49

and it also load balances requests among

120:52

multiple replicas. Think of a service as

120:55

the reception desk. You don't care which

120:57

employee helps you as long as someone

121:00

does. And apps often need configuration

121:03

and credentials. So that's why we have

121:05

config maps and secrets. Config maps

121:09

store configuration data, for example,

121:11

like a database URL. And secrets store

121:14

sensitive data like passwords or API

121:16

keys. Kubernetes injects these into pods

121:20

securely without baking them into your

121:23

Docker image. All of this sounds good,

121:25

right? But how do the external users

121:27

access this? Welcome, Ingress. Ingress

121:30

is like a smart router that exposes HTTP

121:34

and HTTPS routes to the outside world.

121:37

For example, it can map api.myapp.com

121:41

to your backend service. And similarly

121:43

to Docker, Kubernetes also has volumes.

121:46

Since containers are ephemeral, meaning

121:49

data is lost if restarted, Kubernetes

121:52

provides volumes for persistent storage.

121:55

That's what you need to know for now. If

121:57

you want a separate advanced deep dive

121:59

on Kubernetes, let me know down in the

122:01

comments and I'll record one for you.
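To tie these building blocks together, a minimal Deployment plus Service manifest might look like this sketch (the names, image, and ports are illustrative assumptions, not from the course project):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-node-app
spec:
  replicas: 3                  # the ReplicaSet keeps three pods alive
  selector:
    matchLabels:
      app: my-node-app
  template:
    metadata:
      labels:
        app: my-node-app
    spec:
      containers:
        - name: my-node-app
          image: my-username/my-node-app:latest
          ports:
            - containerPort: 5173
---
apiVersion: v1
kind: Service                  # stable endpoint that load balances pods
metadata:
  name: my-node-app
spec:
  selector:
    app: my-node-app
  ports:
    - port: 80
      targetPort: 5173
```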

122:04

And yeah, I think this goes without

122:05

saying, but you should never do anything

122:07

directly on production. Before touching

122:10

production clusters, you can create

122:12

local clusters that allow you to

122:14

experiment safely. They simulate a full

122:16

Kubernetes environment without risking

122:19

your production apps or cloud costs.

122:21

Minikube is the most popular choice. It's

122:24

lightweight and runs a single node

122:26

cluster inside a virtual machine or

122:29

container. When I say Minikube runs a

122:31

single node cluster, I mean that that

122:33

cluster has only one node that acts as

122:36

both the control plane and worker node.

122:39

And this is different from production

122:41

where clusters are usually multi-node

122:43

which means that control plane nodes

122:46

manage the cluster, like the API server

122:48

or the scheduler, and worker nodes run your

122:51

application workloads containing pods

122:53

and containers. So with a single node

122:55

cluster both roles live on the same

122:58

machine which is perfect for development

123:00

and testing. There are also some

123:02

alternatives like kind kubernetes in

123:04

docker which runs clusters inside docker

123:07

containers. It's great for CI/CD

123:10

pipelines and automated testing and K3S

123:13

a lightweight Kubernetes distribution

123:15

good for internet of things or resource

123:17

constraint machines but I still

123:19

recommend Minikube for learning because

123:21

it gives you all the Kubernetes

123:22

components locally, including the API

123:25

server, scheduler, and kubelet, so you can see

123:28

production-like behavior. So let's get

123:30

our hands dirty on creating your very

123:32

first local cluster and running

123:34

Kubernetes.

123:37

Let's dive right into the Kubernetes

123:39

demo. First things first, we'll create a

123:42

new repo. So I'll call it Kubernetes

123:45

demo and create. Now you can copy this

123:49

URL. So we can clone the repo within our

123:51

IDE. I'll do it using WebStorm. So I

123:54

simply need to provide the URL right

123:56

here and click clone. If you're using a

123:58

terminal, you can just say git clone and

124:00

then paste the URL. Once you're within

124:03

it, we can run npm init -y

124:06

to initialize a new Node application.

124:09

It'll start with only a package.json. So

124:12

while we're here, let's also install

124:14

Express by running npm install express.

124:17

Now after that, we'll have to install a

124:20

CLI for running Kubernetes. And if you

124:23

head over to Kubernetes documentation,

124:26

head over to tasks and then install

124:28

tools, you'll see the kubectl

124:30

installation on Linux, Mac OS or

124:33

Windows. So just proceed with your

124:35

operating system. I am on Mac OS Apple

124:37

Silicon. So I'll simply copy this curl

124:40

command, head back within our console

124:42

and type this. It'll download it and set

124:45

it up. But of course, the setup for

124:46

Windows is a bit different. So pause

124:48

this video right here and install

124:50

kubectl. Once you've installed it, you

124:52

can run kubectl version --client

124:56

and you'll be able to see a version

124:58

right here. After that, you'll also need

125:00

to install Minikube, which is local

125:03

Kubernetes focusing on making it easy to

125:05

learn and develop for Kubernetes. And

125:08

here you can choose your operating

125:09

system, the architecture, and just copy

125:12

the installation command. Once again, I

125:14

will paste it for my device. You can

125:16

clone it for yours. Let's wait until it

125:18

gets installed. It might ask you the

125:20

password as well. So just type it in and

125:22

press enter and you'll be done. Once it

125:26

is done, you can run minikube version

125:28

to see whether it has been installed

125:29

properly. If you see something like

125:31

this, we're good. With Minikube and

125:34

kubectl installed, we are ready to

125:36

create our index.js file, the starting

125:39

point of our express application. So

125:42

just go ahead and create a new file

125:45

called index.js.
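For reference, here's a compact sketch of where this file ends up by the end of this section. Factoring the response payload into its own function is my own addition (so the logic can be checked without a running server, and the server only starts when a START_SERVER flag is set); the routes and fields follow what's described next, and Express is assumed to be installed.

```javascript
// Sketch of index.js (assumes express is installed: npm install express).
// buildHomePayload is factored out for illustration; the video builds
// this object inline inside the route handler.
function buildHomePayload(podName = process.env.POD_NAME) {
  return {
    message: 'Hello from a container!',
    service: 'hello-node',
    pod: podName || 'unknown',               // fall back if POD_NAME is unset
    time: new Date().toISOString(),          // human-readable timestamp
  };
}

const PORT = process.env.PORT || 3000;

// Start the HTTP server only when asked to, so the payload logic
// above can also be exercised on its own.
if (process.env.START_SERVER) {
  import('express').then(({ default: express }) => {
    const app = express();
    app.get('/', (req, res) => res.json(buildHomePayload()));
    app.get('/readyz', (req, res) => res.status(200).send('ready'));
    app.get('/healthz', (req, res) => res.status(200).send('OK'));
    app.listen(PORT, () => console.log(`Example app listening on port ${PORT}`));
  });
}
```

Run it with `START_SERVER=1 node index.js` and open localhost:3000 to see the JSON.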

125:46

And within it you can import express

125:49

from express you can initialize a new

125:53

app and define a port. It can be 3000,

125:57

8080, 5000, or anything. After that you

126:01

can create a new empty route that's

126:03

going to be a forward slash home route.

126:06

You can open up a request and a response

126:08

for this route and then you can send

126:10

some kind of an output like a message of

126:13

hello world. Of course, to be able to

126:15

reach this endpoint, we need to turn the

126:18

server on and make it listen on a

126:20

specific port. And we can also give it a

126:22

console log saying something like

126:24

example app listening on port. And then

126:27

we define the port right here. Now,

126:29

while we're here, we can also provide a

126:31

bit more information. So when somebody

126:33

tries to make get requests to our API

126:36

alongside passing the message of hello

126:38

world instead of it actually we'll say

126:41

something like hello from a container

126:45

because we'll be running this within a

126:46

Kubernetes container. We'll also define a

126:50

service, which will be hello-node.

126:54

Then I will also give it a pod, which

126:57

will be process.env.POD_NAME,

127:01

or we can make it unknown if we don't

127:03

have a pod within our environment

127:05

variables and finally we can define the

127:08

time this is going to be a new date dot

127:12

to ISO string so we return it in a human

127:14

readable format alongside this home

127:16

route we can also add some basic health

127:19

endpoints for the probes we can do that

127:21

by saying app.get and then listen on

127:24

something like /readyz, where we'll also

127:27

have a request and response and we'll

127:30

just response with a status of 200 and

127:33

send a message of ready. If we get this

127:36

message that means that we are running

127:38

and another very important endpoint that

127:41

we need to have to make this work is

127:43

this app.get /healthz. This one can send

127:47

a response of OK. Both of these two

127:50

endpoints are needed for Kubernetes to

127:52

know that our app is alive and well. So

127:56

make sure to have it and then we can

127:58

head over to our package.json. Let's

128:00

change the type of this application from

128:02

commonjs to module. So we can use ES6

128:06

imports and exports. And then let's also

128:09

add a new dev script that'll run node

128:12

with a watch flag. So whenever we make

128:14

any changes, the terminal will be

128:16

restarted. And we want to run this

128:18

index.js file. And after adding this dev

128:21

script, also make sure to add a start

128:24

script. This one will be even simpler.

128:26

It'll be just node index.js. In this

128:30

case, we don't have to add the watch

128:32

flag because in deployment, we won't be

128:34

making any changes that we'll have to

128:36

listen for or watch for. Rather, we'll

128:38

just run the finished server. Once you

128:41

do that, we want to start doing the

128:42

Docker setup. Since we've already

128:45

watched a crash course on Docker and

128:47

since later on in the build and deploy

128:49

part we will dockerize that application

128:51

for now I'll provide you with the files

128:53

needed to make it work. So first things

128:56

first, we need a Dockerfile, which you

128:58

can create by just creating a new

129:00

file called Dockerfile, and then, from the video

129:03

kit down below, you can copy and paste

129:05

the Kubernetes demo docker file or feel

129:08

free to pause this video and type it

129:10

out. We're first defining a base image

129:12

of an operating system that we want to

129:14

run and setting a working directory.

129:17

Then we're installing all the

129:19

dependencies separately for caching,

129:21

copying the app source, and then running

129:23

the app as a nonroot user. After that,

129:26

we will also need a docker compose file.

129:29

So create a new file called

129:31

docker-mpose.yaml.

129:35

And once again, I will provide this over

129:37

within the video kit down below or feel

129:40

free to write it by hand, but make sure

129:42

to use the two spaces to do the

129:44

indentation and not the tabs. Here we

129:47

have the services where we're defining

129:49

the API service. And within it, we have

129:52

a build where we want to build this

129:54

docker file with a specific container

129:56

image running on port 3000 with a node

130:00

env.

130:01

We give it access to different volumes

130:03

and a command of npm run dev. I will also

130:07

define a .gitignore so that it knows

130:10

what not to push over to GitHub. It's

130:12

going to be node modules of course. And

130:15

we also want to do another one which

130:17

will be a .dockerignore, which will include

130:20

node_modules, npm-debug.log,

130:26

so all the log files,

130:28

Dockerfile,

130:31

.dockerignore, .git,

130:34

and .gitignore. We don't need

130:36

those within docker. And now we are

130:38

ready to create a new docker image and

130:41

run a docker container in the foreground

130:43

which means in an attached mode. You can

130:45

use it this way when you've changed your

130:47

docker file or code and want to rebuild

130:50

it and run it in one go. Basically this

130:53

is your go-to command in development

130:55

mode. So open up your terminal and

130:58

simply type docker compose up --build

131:04

and press enter.

131:06

And if you run it, you'll see that it

131:08

says no configuration file provided but

131:10

we have our docker compose here... or wait,

131:13

it is docker-mpose, rather; we want to

131:17

make it compose.

131:19

So, if we fix this typo, which you most

131:21

likely didn't have, and we rerun this

131:24

docker compose up --build command, you'll

131:27

see that it'll start building, but it's

131:29

having trouble finding the Dockerfile.

131:31

And it looks like I misspelled that one,

131:33

too. It's "Dockerfiler". So, let's go

131:36

ahead and rename it to Dockerfile.
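For reference, a Dockerfile along the lines described earlier — base image, working directory, cached dependency install, app source, non-root user — might look roughly like this. It's a sketch, not the exact file from the video kit; the node:20-alpine base image is an assumption.

```dockerfile
# Base image: a small Node.js runtime (illustrative choice)
FROM node:20-alpine

# Working directory inside the container
WORKDIR /app

# Install dependencies separately so this layer is cached
COPY package*.json ./
RUN npm ci --omit=dev

# Copy the app source
COPY . .

# Run as the non-root user that the Node image provides
USER node

EXPOSE 3000
CMD ["npm", "start"]
```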

131:39

Hopefully, you had both of these, right?
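And a docker-compose.yaml along the lines described — an api service built from the Dockerfile, port 3000, a NODE_ENV variable, volumes, and an npm run dev command — might look roughly like this. Again a sketch, not the exact file from the video kit; the service and image names are illustrative.

```yaml
services:
  api:
    build: .                      # build from the Dockerfile in this folder
    image: kubernetes-demo-api    # name for the built image (illustrative)
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=development
    volumes:
      - .:/app                    # mount source so changes trigger restarts
      - /app/node_modules         # keep the container's node_modules
    command: npm run dev
```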

131:41

And now, if I rerun the Docker Compose

131:44

up --build command, and there we go. Our

131:47

Kubernetes demo is now running on port

131:50

3000. If you head over in your browser

131:53

and head over to localhost 3000, you'll

131:56

be able to see a JSON output that we're

131:58

sending over from our application. Now,

132:00

we can publish this Docker image so we

132:02

can refer to it when deploying

132:03

Kubernetes clusters. To publish our

132:06

Docker image, you can head over to the

132:08

Docker Hub. Just Google for it and

132:10

you'll be able to find it. Then, if you

132:12

head over to your profile, you'll be

132:14

able to find your username. So just copy

132:17

it. Once you get your username, you can

132:19

head over to your terminal. I'll open up

132:22

a new one because this one is running

132:24

the server. So now we can use this

132:26

username and the repo name to add a tag

132:30

to it by saying docker tag

132:34

kubernetes-demo-api:latest (or maybe it's

132:36

something different for you). Then put

132:39

your username. For me, it's

132:41

jsmasterypro/kubernetes-demo-api:latest,

132:47

and press enter. This will apply a tag.
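Written out, the tag-and-push pair looks like this. Substitute your own Docker Hub username and local image name — jsmasterypro and kubernetes-demo-api are the ones used in the video.

```shell
# Tag the locally built image with your Docker Hub username
docker tag kubernetes-demo-api:latest jsmasterypro/kubernetes-demo-api:latest

# Push the tagged image to Docker Hub
docker push jsmasterypro/kubernetes-demo-api:latest
```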

132:50

Once the tag is applied, you can then

132:52

run docker push jsmasterypro (or your

132:56

name in this case), forward slash, the name

132:58

of the repo or the container, :latest,

133:01

and press enter. This will push this

133:04

image to docker hub. As soon as this is

133:06

done, you'll be able to head back to

133:08

your Docker Hub and under repositories,

133:10

you'll be able to see your first repo.

133:12

So, just reload and there we go. It's

133:15

pushed right here. Okay, dockerization

133:18

is done. But now is the time to make our

133:21

Docker image horizontally scalable so

133:23

that we don't just depend on the

133:25

resources of a single container. This

133:27

means that we can actually start working

133:29

on Kubernetes.

133:31

First, I'll create a new directory

133:33

called K8S, which is an abbreviation for

133:37

Kubernetes. And within it, I will create

133:40

a new file called deployment.yaml.
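Here's roughly what this file will contain — the full version is in the video kit, and the image name and resource numbers here are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes-demo-api
spec:
  replicas: 2                      # how many copies of the app we want
  selector:
    matchLabels:
      app: kubernetes-demo-api
  template:
    metadata:
      labels:
        app: kubernetes-demo-api
    spec:
      containers:
        - name: api
          image: jsmasterypro/kubernetes-demo-api:latest
          ports:
            - containerPort: 3000
          env:
            - name: NODE_ENV
              value: production
          resources:                # illustrative limits
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 250m
              memory: 256Mi
```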

133:44

Now, Kubernetes is declarative, which

133:47

means that you describe what you want,

133:49

not how to do it. You do this using YAML

133:52

files. Remember those? So within this

133:54

YAML file, we can declare absolutely

133:57

everything from how many copies of the

133:59

app we want called replicas to which

134:02

docker image we want to use and which

134:05

ports to listen on. Everything can be

134:07

done right here. Now I'll share this

134:09

full deployment with you in the video

134:11

kit down below. So simply copy it and

134:13

paste it right here and we can go

134:15

through it together. First we define the

134:18

API version. Then we add a bit of

134:20

metadata about the name of our app. And

134:23

then the most important part is the

134:25

specification. Here you define how many

134:27

replicas of the app you want. In this

134:30

case, I said two. You add some labels

134:33

and then you further define the

134:35

specification of those containers. You

134:37

give each container a name and the image

134:40

to run off of. You also define on which

134:43

port they'll be running. You can pass

134:45

some additional environment variables

134:47

and attach different amounts of

134:48

resources to these containers and then

134:51

you can provide some additional

134:52

information. This is how you configure

134:54

Kubernetes. But after that, we also have

134:57

to provide network access to our pods.

135:00

So within the K8s folder, create a new

135:04

file called service.yaml.
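As with the deployment, here's roughly what this file will contain (full version in the video kit; the names and NodePort type are illustrative but match the selector above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-demo-api-service
spec:
  type: NodePort                 # so minikube service can expose it locally
  selector:
    app: kubernetes-demo-api     # route traffic to pods with this label
  ports:
    - port: 3000
      targetPort: 3000
      protocol: TCP
```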

135:07

I'll also pass this service file within

135:09

the video kit down below. So simply

135:11

paste it here. And the same as before,

135:13

we have to define our API version, the

135:16

kind, in this case, it's a service, pass

135:19

some additional metadata, and select

135:21

which app we want to connect with, and on

135:23

which port it is, and which type of a

135:25

protocol we want to use. And now is the

135:27

time to deploy it all locally. So, first

135:29

things first, we'll use the CLI that we

135:32

installed at the beginning of this

135:33

lesson. It's Minikube, so I'll just say

135:38

minikube start. You'll get back the

135:38

output of what's happening such as

135:40

whether the host, kubelet, and API server

135:43

are running, and that the kubeconfig should

135:45

be configured. It might take some time

135:47

to run it properly for the first time.

135:49

Done. Minikube has now been configured.

135:51

And once it is done, you can run kubectl

135:55

get nodes. In the console, you'll see

135:57

that Minikube is running a control

136:00

plane. To check if your cluster is

136:03

running, you can run kubectl cluster-

136:05

info. And here you can see more info

136:07

about the control plane. If the cluster

136:09

is healthy, you'll see something like

136:11

this, where you can see the port where

136:13

it's running. Without running Minikube,

136:15

if you directly try to run these kubectl

136:18

commands, you won't get anything, as

136:20

there is no cluster, because Minikube is

136:23

a tool that sets up a local Kubernetes

136:25

cluster on a laptop. And without minikube

136:28

start, there's no cluster running.

136:30

So kubectl can't connect anywhere. And

136:34

finally, we are ready to deploy these

136:36

files, service.yaml and deployment.yaml. You

136:39

can do that in two separate lines by

136:42

deploying each one individually by

136:43

saying kubectl apply -f and then target

136:47

the path to each one of these files. Or

136:49

you can do it in a shorter single

136:51

command by running kubectl apply -f

136:56

k8s which is targeting the entire folder

136:59

containing both of these files. If you

137:02

do that for you, it should say that both

137:04

got configured and through this process

137:07

the Kubernetes API server will read

137:10

those YAML files. Deployment will create

137:13

pods via replica set and the service

137:15

will set up network routing to the pods.

137:18

Exactly what we learned about not that

137:20

long ago. So now we can get access to

137:22

the pod information to check the pod

137:25

status by running kubectl get pods -w.
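Collected in one place, the local-deploy commands used so far look like this (run from the repo root; `k8s` is the folder holding the two manifests):

```shell
minikube start              # spin up the local single-node cluster
kubectl get nodes           # confirm the minikube node is up
kubectl cluster-info        # check the control plane is healthy
kubectl apply -f k8s        # apply deployment.yaml and service.yaml
kubectl get pods -w         # watch the pods come up
```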

137:30

You should be able to see two right

137:32

here. I was doing some additional

137:33

testing so I have four but essentially

137:35

you should see Kubernetes demo API two

137:38

times because we spun up two replicas of

137:41

our Docker image. You can also get

137:44

access to the services by running

137:46

kubectl get services and you can see

137:50

those services running right here. And

137:53

finally it's time to test out the

137:55

application. Minikube thankfully

137:57

provides us with a very simple way to

138:00

access your service and that is by

138:02

running mini cube service and then you

138:05

have to provide the name to your service

138:08

and that name was provided for us right

138:10

here when we ran kubectl get services in

138:14

this case it's referring to the

138:15

kubernetes-demo-api-service, so copy

138:18

whatever you have right here and run

138:22

minikube

138:23

service and then the service name. If

138:27

you do it correctly, you'll immediately

138:29

see that Kubernetes will open up a

138:31

service for you in your default browser.

138:33

Thankfully, it did it for me as well.
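The JSON you see in the browser should look along these lines — the pod suffix differs on every request (that's the load balancing at work), and the timestamp here is a placeholder:

```json
{
  "message": "Hello from a container!",
  "service": "hello-node",
  "pod": "kubernetes-demo-api-<replicaset-hash>-<pod-id>",
  "time": "2025-01-01T12:00:00.000Z"
}
```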

138:35

I'll zoom it out just a bit. And you can

138:38

see that our server is now live. If I

138:41

zoom it out even more, I want you to pay

138:43

attention to one thing, and that is the

138:46

pod Kubernetes demo API. And then it has

138:50

a specific ID. Now, if you reload it a

138:53

couple of times, oh, it looks like I'll

138:56

have to pretty print it every time. Or I

138:58

can be smart and just install a JSON

139:00

viewer pro extension from the Chrome web

139:02

store and add it to my Chrome or Arc,

139:05

which would give me a more beautiful

139:06

tree view of the data. So, if you now

139:09

reload it a couple of times, you can see

139:11

that the last part of the pod ID will

139:13

change, which essentially means that

139:15

you're making requests to two different

139:18

servers. This simple pod change shows

139:22

that Kubernetes can automatically

139:24

replace, replicate, and rebalance

139:27

workloads across the cluster, letting

139:29

the system scale up or down without

139:32

intervention. So if one container or pod

139:35

goes down for whatever reason, the other

139:38

one is up here and ready to serve your

139:40

users. This is huge. So what's happening

139:44

behind the scenes? Well, the API server

139:47

receives your YAML file. The scheduler

139:50

assigns pods to nodes. The kubelet starts

139:54

containers inside pods, and then the

139:56

service ensures that traffic is routed

139:59

to these pods. But that's a lot of

140:01

things that we had to go through. You

140:03

had to create a Docker image, push it

140:05

over to Docker Hub, start Minikube, do

140:08

a Kubernetes deployment, get listed

140:10

pods, get services, and finally test it

140:13

out. a lot of actions that you would

140:15

have to repeat again and again. But

140:18

instead of doing that, you can simply

140:20

write a bash script and execute it

140:22

whenever you want to deploy your app. So

140:25

create a new file and call it deploy.sh.

140:30

Let's write it together. First you can

140:32

say set -e, which means that we want to

140:35

run this script with bash and we want it to

140:37

exit automatically if a command fails. Then

140:40

you want to define the name of your API.

140:43

For me, this means that it is a

140:45

Kubernetes demo API which is how we

140:49

called it before. You can also define

140:52

your username which in this case is JS

140:54

mastery pro or for you it's going to be

140:56

your name. And finally, you can provide

140:58

your docker image which is going to be a

141:00

combination of your username. So in bash

141:04

you use a dollar sign to use a variable.

141:07

So it's the username, forward slash, the name,

141:11

colon latest. Then we're ready to build the

141:14

docker image. And while running this

141:16

script we can also put out some console

141:18

logs. And in bash you do it with echo.

141:21

So you can say something like building

141:24

docker image dot dot dot and then you

141:28

can run docker build -t and use the

141:32

dollar sign image variable which we

141:34

created above and then put a dot here to

141:36

build it right here. Then you want to

141:38

push that image over to the docker hub.

141:41

So let's say that using an echo command

141:44

something like pushing image

141:48

to docker hub and you can do that by

141:50

saying docker push dollar sign image.

141:54

Then you want to apply your kubernetes

141:56

deployments and services to yaml

141:58

configs. So we can also add an echo for

142:00

that saying something like applying

142:03

kubernetes manifests which are basically

142:06

just YAML files. So you can run kubectl

142:10

apply -f and then point it over to

142:13

k8s/deployment.yaml

142:17

and you can duplicate it for the

142:19

service.yaml file as well. Here we're

142:22

applying our Kubernetes deployments in

142:24

service yaml configs. Finally we want to

142:27

get all the pods. So we can say

142:29

something along the lines of getting

142:32

pods dot dot dot and run kubectl

142:37

get pods. Then we want to do a similar

142:39

thing for getting the services. So we

142:42

can say getting services by running

142:43

kubectl

142:45

get services. And once we list those

142:48

services we can get a service with the

142:51

name that we have declared above. So we

142:53

can say something like echo fetching the

142:57

main service and you can do it by

142:59

running kubectl get services and then

143:02

you can provide a name of the service.

143:04

So that's the name, dash, service, like this.

143:07

And finally to test it out we can stop

143:10

Minikube from running and basically

143:12

stop everything else that is running.

143:15

Minikube has to be stopped by

143:17

running minikube stop. While that is

143:19

happening, open up Docker Desktop and

143:22

delete the Docker images from desktop.

143:25

So everything that has to do with

143:26

Kubernetes demo, the most important one

143:28

to delete is the one with your username

143:30

before it. So just go ahead and delete

143:33

it. Now you can head back over to your

143:36

terminal and run minikube start to

143:39

restart this local Minikube instance

143:41

that allows us to run local Kubernetes

143:44

deployments. And then when it starts, we

143:46

will simply run this single script

143:49

instead of running all the commands that

143:50

we previously ran. There we go. It is

143:52

done. And now just run npm run deploy.

143:56

Oh, but let's make sure to add this

143:58

deploy script to the package.json by

144:01

saying deploy. And to run it, you can

144:03

say sh deploy.sh.
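Put together, the deploy.sh just described looks roughly like this. The variable names are my own choice, and the username and API name are the ones used in the video — substitute yours:

```shell
#!/usr/bin/env bash
set -e  # exit immediately if any command fails

API_NAME="kubernetes-demo-api"
USERNAME="jsmasterypro"
IMAGE="$USERNAME/$API_NAME:latest"

echo "Building Docker image..."
docker build -t "$IMAGE" .

echo "Pushing image to Docker Hub..."
docker push "$IMAGE"

echo "Applying Kubernetes manifests..."
kubectl apply -f k8s/deployment.yaml
kubectl apply -f k8s/service.yaml

echo "Getting pods..."
kubectl get pods

echo "Getting services..."
kubectl get services

echo "Fetching the main service..."
kubectl get services "$API_NAME-service"
```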

144:07

So now if you run this command you'll

144:10

see that one by one it'll say first

144:13

building the docker image then pushing

144:16

images to docker hub then getting pods

144:19

applying kubernetes manifests

144:21

and finally getting the services and

144:24

listing the last service and this

144:26

basically tells you everything is ready

144:28

to run this service. So you can copy the

144:30

service name from here.

144:33

That's kubernetes-demo-api-service, and

144:37

run minikube

144:39

service and then paste the name of the

144:41

service.

144:43

If you press enter, you'll see that

144:45

it'll be running on this port and you

144:47

can see the message from a new deployed

144:49

app on your computer. The pod changes

144:51

every now and then and your app is

144:53

running in two containers and you never

144:55

know from which container your request

144:56

is going to be executed. Kubernetes will

144:59

handle all of it for you. And if you

145:01

want to scale the app further, just head

145:03

over to your deployment YAML and change

145:05

the number of replicas and you can

145:07

immediately scale the app within seconds

145:09

by rerunning the deployment script. I

145:12

know it might feel overwhelming right

145:14

now, but with consistent practice, it'll

145:16

all start to click. The best part is you

145:19

can keep repeating and testing all these

145:21

commands in Minikube without breaking

145:24

anything. Once you feel confident you

145:26

have mastered the basics, that's when

145:28

you can step up and move to the cloud.

145:31

I've already covered a lot about

145:32

Kubernetes in the context of DevOps

145:34

here. But if you'd like me to create a

145:36

dedicated deep dive video focusing

145:38

solely on Kubernetes, drop a comment

145:41

down below and I'll make it happen for

145:43

you soon. And if you're looking for a

145:45

complete resource that takes you from

145:46

start to finish with plenty of real

145:48

examples, deeper foundational knowledge,

145:51

and a true understanding of how

145:53

everything works, then my ultimate

145:55

back-end course is made for you. YouTube

145:58

videos will give you a solid surface

146:00

level foundation, but with our courses,

146:03

you'll develop the mindset of a senior

146:05

developer. So, click the link down in

146:07

the description and I'll see you inside.

146:10

And if the course is not out yet and you

146:12

really want to dive deeper into

146:13

Kubernetes, you can check out our

146:16

Kubernetes reference guide and an ebook.

146:18

It's a part of this new YouTube

146:20

membership thing that I'm doing. So if

146:22

you want to support the channel, that's

146:23

a very easy way to do it. Anyways,

146:26

amazing job on learning Kubernetes in

146:28

this part of the course. But now we move

146:31

forward. Great job.

146:36

So far, you've seen how Docker makes

146:38

apps portable and how Kubernetes makes

146:41

them scalable and resilient. But here's

146:44

the real question. Where do these

146:46

clusters actually run? On servers, of

146:50

course. And those servers could be

146:52

anything from physical machines in a

146:54

data center to virtual machines in AWS,

146:57

GCP, or Azure.

147:00

Either way, before your first

147:02

deployment, someone has to spin up

147:04

compute instances, configure networking,

147:07

VPCs, subnets, firewalls, and load

147:09

balancers, set up storage, volumes,

147:12

databases, and backups, and install

147:14

runtimes and dependencies.

147:16

Traditionally, all this was manual work.

147:20

Click through dashboards, SSH into

147:23

servers, run ad hoc scripts. That's

147:26

slow, errorprone, and impossible to

147:29

scale.

147:30

This is where infrastructure as code or

147:33

ISC for short changes everything. ISC

147:37

means managing infrastructure servers,

147:40

networks and databases with code instead

147:42

of manual setup. Think of it like

147:45

writing blueprints for your entire

147:47

infrastructure. So instead of saying,

147:49

"Hey Ops team, can you create these

147:51

three servers and hook them up with a

147:53

load balancer?" You just write code that

147:56

looks something like this: resource

147:59

aws_instance "web", with an AMI ID, instance type, and

148:03

the count. Run this and you get exactly

148:05

those three servers. Change count to

148:07

five and two more are provisioned. Your

148:10

infrastructure immediately becomes

148:13

version-controlled, because you can track

148:15

changes in git, testable so you can

148:17

verify configs before deployment.

148:19

Reproducible because you can rebuild

148:21

those environments instantly and

148:24

sharable as you can onboard new

148:26

engineers fast. So, IaC is all about

148:30

consistency, speed, scalability, and

148:33

collaboration. And you don't need to

148:35

master every provider service, like AWS

148:38

CloudFormation, Azure Resource Manager, and so

148:41

on. The industry prefers cloud-agnostic

148:45

tools that work anywhere. One of those

148:47

is Terraform by HashiCorp. You simply

148:51

define infra in HCL and deploy across

148:54

AWS, Azure, GCP and more with a single

148:58

workflow. One script could create a

149:00

Kubernetes cluster in AWS, a database in

149:04

Azure, and storage in GCP. There's also

149:07

Helm, a package manager for Kubernetes

149:10

that turns messy YAML files into

149:12

reusable configurable templates. So if

149:16

you're starting out, simply focus on

149:17

Terraform as a general-purpose IaC for

149:21

all clouds, Helm for managing Kubernetes

149:24

deployments, and then cloud specific

149:26

tools later if you go deep on one

149:28

platform. This skill set will make you

149:30

valuable anywhere and keep you out of

149:32

that vendor lock-in so Bezos can no longer

149:35

control what you do.
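For reference, the Terraform snippet gestured at earlier, written out in HCL — the AMI ID and instance type here are illustrative values, not ones from the video:

```hcl
resource "aws_instance" "web" {
  ami           = "ami-0abcdef1234567890" # illustrative AMI ID
  instance_type = "t3.micro"
  count         = 3 # change to 5 and two more are provisioned
}
```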

149:38

Over the last couple of hours, you've

149:40

built a solid DevOps foundation. You

149:44

started with what DevOps really is and

149:46

why it matters. Then moved into hands-on

149:49

skills that every DevOps engineer needs.

149:52

Version control for smooth

149:54

collaboration. CI/CD pipelines to

149:56

automate testing and deployments. Docker

149:59

to package and run applications

150:01

consistently. Kubernetes to orchestrate

150:04

containers, scale apps, and manage

150:06

clusters. And then IaC to define and

150:09

deploy infrastructure reliably. With

150:12

these fundamentals, you now understand

150:15

how modern software goes from code to

150:18

production, automated, scalable, and

150:21

secure. Now, it's just practice building

150:25

real projects and layering on advanced

150:28

tools as you go. And speaking of

150:30

projects, let's put everything together

150:33

into action. It's time to actually build

150:36

and deploy a scalable API using all the

150:40

DevOps practices you've learned so far.

150:42

That means that this is not the end.

150:45

It's where the real learning actually

150:47

begins.

150:50

All right, we're finally about to jump

150:53

into the big part of this video,

150:55

building and deploying our production

150:57

ready API. And just a quick reminder

151:00

before we continue, DevOps is all about

151:02

doing and not watching. So that means

151:05

that you got to have the right stack of

151:07

tools set up. As I mentioned before, for

151:09

the database, we'll be using Neon. So go

151:12

ahead and click the link down in the

151:13

description and click start for free.

151:16

For security, we'll be using ArcJet. Bot

151:19

detection, rate limiting, email

151:21

validation, attack protection, you name

151:23

it, and we get painless security. and

151:25

then Warp, where we'll be running all of

151:28

our commands, shipping code, and even

151:30

automating tasks with AI. If you're not

151:33

using it, you'll constantly feel a step

151:35

behind. And don't forget, the Pro plan

151:38

is still just $1, which gives you

151:40

unlimited workflows and faster builds.

151:42

Once again, the link is in the

151:44

description. So, if you haven't yet set

151:46

these up, pause for a second, get them

151:48

done, and then come back. Once you've

151:50

got all three, you'll be able to follow

151:52

along seamlessly as we build and deploy

151:55

the API. So now the first step is to

151:58

create a new GitHub repo. You can call

152:01

it acquisitions. We need to create a

152:03

repo since soon we'll need to implement

152:05

CI/CD pipelines. So better to create a

152:08

repo from the start. You can just click

152:09

create repository and then clone it

152:11

locally on your device by copying this

152:14

link and cloning it locally onto your

152:16

system. In this case, I'll be using

152:19

WebStorm. So, you can just go ahead and

152:20

click clone repository. Paste the URL

152:24

and just click clone.

152:26

You can do it normally using git. Once

152:28

you're there, we need to initialize a

152:30

new NodeJS project. So, just run ls to

152:34

make sure you're in the right repo. I'll

152:36

go ahead and expand my terminal as we'll

152:38

be spending quite a lot of time within

152:40

the terminal, but later on we'll be

152:41

using Warp. And I'll run npm init. And

152:44

you can also add the -y flag to just

152:47

press enter to all the default options.

152:49

Just like that, you'll be able to see a

152:51

new package.json which is the root of

152:54

our application. Then you can install

152:57

express by running npm install express

153:00

as well as dotenv for environment

153:03

variables. Once that is done, you'll see

153:06

that you'll get a package-lock.json. And I'm

153:09

currently hiding the node modules folder

153:11

as I don't typically want to go into it.
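Up to this point, the setup boils down to a few terminal commands. A sketch, assuming the repo name from the video (the clone URL is illustrative):

```shell
# Clone the freshly created repo (URL is illustrative)
git clone https://github.com/<your-user>/acquisitions.git
cd acquisitions

# Initialize a Node.js project, accepting all defaults
npm init -y

# Install Express plus dotenv for environment variables
npm install express dotenv
```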

153:13

But it is important that we exclude it

153:16

from git so it doesn't get pushed over

153:19

to GitHub. To do that, you can add a new

153:22

file called .gitignore

153:24

and then you can just say

153:26

node_modules

153:27

to exclude it from being pushed over to

153:30

GitHub. After that, you want to head

153:31

over into the package JSON and modify

153:33

this application to use the ES6 import

153:36

system, which you can do by adding an

153:38

additional type property or key and

153:43

setting the value to module. This refers

153:46

to ES6 plus modules. Oh, look, type is

153:49

already here below. So, we just want to

153:52

switch it over from CommonJS to module.

153:55

Next, you want to create a new file in

153:57

the root of the directory and you can

153:59

call it index.js. This will be the

154:02

starting point of our application.

154:04

Within it, simply import express from

154:07

express. Initialize a new application by

154:10

setting it equal to the call of the

154:12

express library. Then set the port equal

154:15

to either process.env.PORT

154:18

if it exists or by default we can make

154:20

it 3000. You can also do 8080, 5000, or

154:25

any other number. Finally, we want to

154:26

make the app listen on that port. And

154:29

once it starts listening, we want to

154:31

simply put a console log out saying

154:35

listening on port. Now to run this

154:37

application, we need to add an

154:39

additional script within a package.json

154:42

under scripts. For now, I will remove

154:45

this test script and instead of it add a

154:48

dev script that, when run, will run node

154:53

--watch index.js.

154:57

Now this --watch flag tells Node.js to

155:01

automatically restart your program

155:03

whenever a file changes in your project

155:06

directory. Very important. And now if

155:08

you go ahead and run npm run dev, you'll

155:11

see that it'll say node --watch index.js,

155:13

listening on localhost 3000. Perfect.
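Put together, the single-file version described so far can look like this (a sketch assuming Express is installed; run it with npm run dev):

```javascript
// index.js — minimal Express entry point
import express from 'express';

const app = express();

// Use PORT from the environment when set, otherwise fall back to 3000
const PORT = process.env.PORT || 3000;

app.listen(PORT, () => {
  console.log(`Listening on port ${PORT}`);
});
```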

155:16

And you can stop it from running by

155:18

pressing Ctrl + C. And instead of just

155:21

checking out this project locally and

155:22

calling it a day by having a single file

155:25

from which all of it is running, I

155:28

actually want to teach you how to create

155:30

a proper production level file and

155:33

folder structure. So let's start by

155:36

creating a new folder which you can call

155:38

SRC or source. Within the source folder,

155:42

we want to create a new app.js

155:46

file. Then, right next to the app, still

155:49

within the source folder, we'll create

155:51

another file, which is going to be

155:53

called index.js.

155:55

And there's going to be a third one

155:57

called server.js.

155:59

All three of these files will have their

156:01

own purpose. The app file is all about

156:05

setting up that Express application with

156:07

the right middleware, whereas the

156:09

server.js is all about running that

156:11

server, implementing some logging and

156:14

everything else to make sure that the

156:15

server is running properly and then

156:17

index is just like a starting point. Now

156:19

let's create a couple of other folders

156:21

within the source folder. The first one

156:24

I'll create will be called config. This

156:27

is a folder for all different kinds of

156:28

configurations.

156:30

Then we have controllers which is also

156:32

within the source folder. As a matter of

156:34

fact, every new folder we create will be

156:37

within the source folder because the

156:39

source is basically our entire

156:40

application. Now, when speaking of

156:42

controllers, that has a lot to do with

156:44

the model view controller paradigm in

156:47

developing backend applications. That's

156:49

something we'll dive much deeper into

156:51

within our backend course. Alongside

156:53

controller, we also have the middleware.

156:56

So, create a new folder called

156:57

middleware. Middleware are functions

156:59

that are run before or after some other

157:02

functions that our app does. Maybe

157:05

logging functions, so whenever a request

157:06

is made, you can see what happened. Or

157:09

maybe authentication or verification

157:11

actions to make sure that when somebody

157:13

tries to perform a specific API action,

157:16

the middleware checks whether that user

157:18

has the permissions to do so. After

157:20

that, another folder that we'll create

157:22

will be a models folder defining what our

157:25

database schemas and models look like.

157:29

Next, we'll also have a routes folder

157:32

defining our API routes. And you can see

157:35

how in my IDE, each one of these folders

157:38

has their own icon because these folder

157:40

names are actually a convention which a

157:43

lot of developers use. After routes, we

157:46

can also create services. After

157:48

services, we can create utils for

157:51

utility functions that's also within the

157:53

source folder. And finally, we'll have

157:56

validations for all different kinds of

157:59

validations within our application.

158:00

Right now, these are just different

158:01

empty folders and meaningless names, but

158:04

as soon as we dive deeper into

158:06

developing the application and we start

158:08

putting actual files within them, it'll

158:10

all start making so much more sense.
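If you prefer the terminal over clicking around the IDE, the same layout can be created in one go (a sketch using bash brace expansion):

```shell
# Create the src/ layout described above in one command
mkdir -p src/{config,controllers,middleware,models,routes,services,utils,validations}

# And the three entry files
touch src/app.js src/server.js src/index.js
```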

158:13

Now, you want to move the current

158:15

index.js express app setup that is

158:17

within this root index.js into the

158:21

app.js. So simply copy it and move it

158:24

over to app.js. This is where we're

158:26

setting the express application. And

158:28

instead of defining the app listen and

158:30

the port, we can create a new endpoint

158:33

by saying app.get

158:35

forward slash. So this is the home

158:37

route. We get a request and a response

158:41

and we can respond something once the

158:44

user triggers this endpoint or reaches

158:46

it. Rest status of 200 send hello from

158:53

acquisitions API or just acquisitions is

158:57

enough. And then we can finally export

159:00

default this app. Now we can once again

159:04

copy this index.js and paste it over

159:07

into server because here we won't have

159:09

the actual app.get route but here we'll

159:13

actually be listening over to the

159:16

server. So we don't have to recreate the

159:18

app from express rather we just have to

159:20

import the app from './app'.

159:25

So now we can see how we're connecting

159:26

it together. Now if you head over to the

159:28

index.js within the source you can now

159:31

import 'dotenv/config'

159:35

to make sure that we can properly read

159:37

environment variables. And you can also

159:39

import './server.js'.
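The three files we just wired up can be sketched like this (assuming Express and dotenv are installed; file boundaries are marked with comments):

```javascript
// src/app.js — build and configure the Express app
import express from 'express';

const app = express();

app.get('/', (req, res) => {
  res.status(200).send('Hello from Acquisitions!');
});

export default app;

// src/server.js — actually start listening
import app from './app.js';

const PORT = process.env.PORT || 3000;
app.listen(PORT, () => console.log(`Listening on http://localhost:${PORT}`));

// src/index.js — the entry point: load env vars, then boot the server
import 'dotenv/config';
import './server.js';
```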

159:43

So now if you head back over to your

159:45

package.json,

159:47

you can modify your dev script to run

159:50

src/index.js instead of just

159:52

index.js. And then we can delete this

159:55

index.js from the root because we no

159:58

longer need it. So now everything we

159:59

need is within the source folder.

160:02

Primarily we're creating the express

160:04

application right here within the app

160:07

and then we're also listening to the

160:09

server within the server file. What you

160:11

can do is maybe even say http://

160:15

and then add the port and delete

160:17

these three dots. Oh, and before we run

160:20

it, make sure that at the end of all the

160:23

imports, you add the .js extension. In

160:27

whatever setups we have in

160:29

React or Next.js environments, it's no

160:31

longer necessary. But here with ES6

160:34

modules in Node, you have to specify the

160:37

extension. So once you do this, you can

160:39

go ahead and open up the terminal and

160:41

run npm run dev. And now it'll say

160:45

listening on http://. Oh, and

160:49

looks like I forgot to add localhost.

160:52

But now if I add it, you can see that it

160:55

auto restarts because of the watch flag.

160:57

And now you can just click on this link

160:59

within your terminal and it'll open it

161:01

up within your browser saying hello from

161:03

acquisitions. This means that we have

161:05

now created the base file and folder

161:08

structure of our running express

161:10

application. Congrats. Let's not forget

161:13

to push it over to GitHub. And just to

161:15

stick with proper programming habits,

161:17

let's go ahead and commit this. No

161:19

matter how small of a commit it is, the

161:21

smaller the better. So, I will rename

161:23

this active terminal to app because it's

161:26

going to be running our app. And I'll

161:28

create another terminal right here

161:30

within which we can run additional

161:31

commands. So I'll just say git add .

161:35

git commit -m

161:37

initial commit

161:40

and git push. Immediately all the

161:43

changes are pushed.

161:46

In this lesson we'll implement a step

161:49

that is very easy to skip and a lot of

161:52

people on YouTube teaching these courses

161:54

simply skip over it and that is setting

161:57

up and installing eslint and prettier.

162:00

It is always useful to have it, but even

162:02

more so when you combine it with CI/CD

162:05

pipelines so it always properly formats

162:08

your code. First things first, we got to

162:10

install all the necessary dependencies

162:12

by running npm install eslint

162:16

@eslint/js

162:19

prettier eslint-config

162:22

prettier to make them work together

162:24

eslint-plugin-prettier

162:27

and -D, which stands for

162:30

development. Only install these as

162:33

development dependencies. When you

162:35

install these packages, create a new

162:37

file called eslint.config.js.

162:42

And within it, we can paste our new

162:44

eslint config. Now, this is the config

162:47

that I was building over the last couple

162:49

of years, but in simple terms, it just

162:51

extends the recommended JavaScript

162:53

config. And then it adds some additional

162:55

rules. You can find it and copy and

162:57

paste it from the video kit down below.
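The exact config lives in the video kit; purely as an illustration, a minimal flat-config sketch wiring @eslint/js and the Prettier plugin together (rules here are assumptions, not the video's):

```javascript
// eslint.config.js — minimal sketch, not the video's exact config
import js from '@eslint/js';
import prettierPlugin from 'eslint-plugin-prettier';
import prettierConfig from 'eslint-config-prettier';

export default [
  js.configs.recommended, // ESLint's recommended JavaScript rules
  prettierConfig,         // turn off rules that clash with Prettier
  {
    plugins: { prettier: prettierPlugin },
    rules: {
      'prettier/prettier': 'error', // surface formatting issues as lint errors
      semi: ['error', 'always'],
      quotes: ['error', 'single'],
    },
  },
];
```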

163:00

Once you add it, we can also add another

163:02

file for prettier. Oh, and make sure

163:05

that it's not within the source folder,

163:07

but within the root of the application.

163:10

The prettier file is also within the

163:12

root and it's called .prettierrc. And

163:15

within here, we can form an object with

163:17

some settings such as whether you need

163:19

or don't need semicolons. The trailing

163:22

commas, that's the comma at the end

163:24

where you don't actually need it, but

163:26

you can have it. whether you want to use

163:28

single quotes or double quotes and so

163:30

on. Feel free to pause the screen right

163:32

here and type these out or you know what

163:35

I'll also leave it within the video kit

163:37

down below. Finally, we can add an

163:39

additional file called

163:42

.prettierignore and within here you can

163:44

paste the files and folders that you

163:46

want prettier to ignore. It's going to

163:48

be node_modules, coverage, logs, drizzle,

163:51

and package-lock.json. Once you do

163:54

that, head over into your source app.js.
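Before moving on, the two Prettier files just created can be sketched like this (the exact values are in the video kit; these are typical settings along the lines described):

```json
{
  "semi": true,
  "singleQuote": true,
  "trailingComma": "es5",
  "printWidth": 80,
  "tabWidth": 2
}
```

And .prettierignore is just a plain list, one entry per line: node_modules, coverage, logs, drizzle, package-lock.json.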

163:58

And right here, you should be able to

164:00

see red squiggly lines telling you that

164:02

we have some issues with ESLint, such as

164:04

a missing semicolon, inconsistent

164:07

spacing, or a missing semicolon here as

164:10

well. This means that linting is

164:13

working. Now, to lint across all of our

164:15

pages, head over within our

164:17

package.json, and then let's add a new

164:20

script for linting. We'll use this

164:23

script later on in our CI/CD pipelines

164:26

to ensure our code is formatted

164:27

properly. So just add a new lint script

164:32

and make it simply call eslint dot. Then

164:36

you can also add a lint

164:39

fix which will run eslint . --fix.

164:43

You can also add format which we'll use

164:46

with prettier.

164:48

So it's prettier --write

164:50

dot. And finally, we'll have format

164:53

check. And this will be equal to

164:56

prettier

164:57

--check dot. Now, if you open up your

165:01

app.js

165:03

and open up your terminal at the same

165:05

time and type npm run lint, you'll see

165:08

that we'll have 55 errors in this very

165:11

small application, mostly due to

165:13

indentation errors and missing

165:15

semicolons. Then to automatically fix

165:17

them, simply run npm run lint:fix. And

165:22

as you can see, within a single terminal

165:24

command, all of these issues were fixed

165:26

automatically across all files. And to

165:30

also enforce prettier formatting, you

165:32

can also run npm run format.

165:35

In this case, it was already good. Or to

165:38

check if formatting has been done

165:40

properly, you can run npm run

165:43

format:check.

165:45

Perfect.

165:46

All matched files use prettier code

165:48

style. Wonderful. This means that now we

165:51

have all of the necessary scripts that

165:53

we'll be able to later on run from our

165:56

CI/CD pipelines. So what I'll do is run

166:00

git add ., git commit -m

166:03

implement eslint and prettier, and git

166:07

push. Perfect.
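Collected in one place, the scripts section built up in this lesson looks roughly like this (the colon-separated script names are an assumption based on the commands run):

```json
{
  "scripts": {
    "dev": "node --watch src/index.js",
    "lint": "eslint .",
    "lint:fix": "eslint . --fix",
    "format": "prettier --write .",
    "format:check": "prettier --check ."
  }
}
```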

166:11

If you've been following along, you most

166:12

likely already have an account on Neon.

166:15

But if not, let's create it right now by

166:17

clicking the link down in the

166:18

description and starting for free or

166:20

simply logging in. Once you're in,

166:23

create a new project. You can call it

166:25

JSM_acquisitions.

166:27

And you can choose the region that is

166:29

closest to you

166:31

and click create. Once you're in, you

166:33

can click connect and then copy this

166:36

connection string.

166:37

Then within your application, create a

166:40

new env file in the root of your

166:43

application.

166:45

And let's add a comment for server

166:48

configuration.

166:49

So here we can paste all of the

166:51

environment variables that have

166:53

something to do with the server such as

166:55

the port by default set to 3000, node

166:58

environment by default set to

167:00

development,

167:02

and log level, which we can set to info.
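Sketching the .env file described here (the database URL added next is left as a placeholder; your real value comes from the Neon dashboard):

```
# Server configuration
PORT=3000
NODE_ENV=development
LOG_LEVEL=info

# Database configuration
DATABASE_URL=postgresql://...
```

The .env.example file mirrors this, with the variable names kept and the sensitive values stripped.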

167:06

Then we can also do a database

167:09

configuration.

167:10

And here you can paste the database URL

167:12

and make it equal to the string that you

167:16

just copied over from neon. But make

167:19

sure to remove this psql at the start if

167:22

you have it, and also the quotation

167:24

marks right here. Then we can also update

167:27

our .gitignore to ignore all types of

167:30

different env files by saying .env. and

167:34

then asterisk. And then it is always a

167:36

good idea to also, alongside .env,

167:39

create a new .env.example

167:42

file. This serves the purpose of telling

167:45

people which variables they need, but it

167:47

won't actually include the sensitive

167:49

information. Now let's install neon by

167:52

running npm install

167:55

@neondatabase/serverless

167:58

drizzle-orm.

168:01

So we're installing both Neon as well as

168:03

Drizzle to keep our database queries

168:05

type safe. And we also want to add one

168:08

dev dependency which will be for Drizzle

168:11

Kit. So as soon as this is installed,

168:13

simply run npm install -D

168:17

drizzle-kit. And then we are ready to

168:20

configure our drizzle config by creating

168:23

a new file. It's going to be in the root

168:25

of our application and you can call it

168:28

drizzle.config.js.

168:31


168:32

Start by importing 'dotenv/config'

168:36

at the top so we can actually refer to

168:38

our environment variables. And then

168:41

you'll need to export the configuration

168:43

for drizzle which will include a schema

168:46

which is actually a path to all of the

168:48

models. So that's going to be

168:50

./src/models/*.js.

168:56

This means we will store the schemas

168:59

right here. Then we can choose the

169:01

output which is just going to be

169:03

./drizzle. We can also do a dialect of SQL

169:06

that we're using. In this case, it's

169:08

going to be postgresql. And we need to

169:11

pass the DB credentials. That's going to

169:13

be an object where the URL is

169:15

process.env.

169:18

DATABASE_URL. And now if you run that

169:21

eslint fix command

169:23

or just run eslint fix it'll fix all the

169:27

inconsistencies within this file. But

169:29

again we don't really have to worry

169:31

about it because later on we'll make

169:33

sure that linting is a part of our CI/CD

169:36

pipeline. Now let's go ahead and set up

169:38

the database by heading over into source

169:41

config and create a new file within the

169:43

config folder and call it database.js.

169:47

within it.

169:52

You can also import neon as well as neon

169:56

config coming from at neon

169:58

database/serverless

170:00

as well as import drizzle coming from

170:04

drizzle-orm, or in this case,

170:07

drizzle-orm/neon-http. Then we want to initialize the

170:10

neon client by saying const sql is equal

170:14

to neon to which you need to pass the

170:16

database URL. So process.env.

170:20

DATABASE_URL, and you'll need to initialize the

170:23

drizzle by saying db is equal to drizzle

170:26

to which you pass this SQL variable. And

170:29

finally you export both the database and

170:33

the SQL. For now, since the neon config

170:35

is not used, we can remove it and bring

170:37

it back later on if needed. Now, we can

170:40

head back over to our package.json

170:43

and update our scripts to add additional

170:47

drizzle scripts. We can add them by

170:49

saying db:generate. That'll be

170:52

drizzle-kit

170:54

generate. And I will duplicate it two

170:57

more times. Then we will have db:migrate

171:00

which will run drizzle-kit migrate. And

171:02

finally db:studio which will run drizzle

171:06

kit studio. And now I can show you how

171:08

we can create a first model in our

171:11

application. That model is going to be

171:13

for users. So head over into source

171:17

models and create a new file called

171:20

user.model.js.

171:23

within it you can export const

171:26

users and make it equal to pg table

171:30

which you need to import from

171:33

drizzle-orm/pg-core. As the first argument you're

171:35

going to pass the users that's the name

171:37

of the table and finally then you have

171:39

to pass the columns in this case we can

171:42

say that a user table needs to have an

171:44

ID which is going to be a serial ID and

171:48

it'll act as the primary key of this

171:51

table it'll also have a name

171:54

that'll be a varchar which we can import

171:56

from the same drizzle-orm/pg-core

172:00

of a name and a max length of 255 and we

172:06

need to make sure that it is not null.

172:08

Now I will duplicate this one two three

172:10

more times. For the second one we're

172:13

going to be talking about the email. So

172:15

we can say that email is of a varchar

172:17

email with a length of 255 not null and

172:21

this one also has to be unique. After

172:24

that we're going to have a password. So

172:26

that's going to be a password of a

172:27

varchchar password with a length of 255

172:31

not null. And finally, a role. So I'll

172:34

say role, varchar role, with a length of

172:38

50 not null. And by default we can set

172:41

the role to just a regular user.

172:44

Finally, we can set a created at field

172:48

which is going to be a timestamp

172:51

coming from PG core and it'll default to

172:55

now. So default now dot not null and I

173:00

will duplicate it so we can also store

173:03

the updated at field and let's not

173:05

forget to import this serial at the top.
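The table we just walked through can be sketched like this (assuming drizzle-orm is installed; the file name follows the convention above):

```javascript
// src/models/user.model.js — users table as described above
import { pgTable, serial, varchar, timestamp } from 'drizzle-orm/pg-core';

export const users = pgTable('users', {
  id: serial('id').primaryKey(),                            // auto-incrementing primary key
  name: varchar('name', { length: 255 }).notNull(),
  email: varchar('email', { length: 255 }).notNull().unique(),
  password: varchar('password', { length: 255 }).notNull(),
  role: varchar('role', { length: 50 }).notNull().default('user'),
  created_at: timestamp('created_at').defaultNow().notNull(),
  updated_at: timestamp('updated_at').defaultNow().notNull(),
});
```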

173:08

Perfect. So now we have created a users

173:11

table and the way it works with postgres

173:13

databases and drizzle is that now we

173:16

need to generate SQL schemas using

173:18

Drizzle by opening up a terminal and

173:22

running npm run db:generate.

173:27

Once you run that we will have gotten a

173:29

new SQL migration file right here under

173:33

drizzle. And here you can see that we

173:35

just created a new users table. After

173:38

that, we'll need to migrate or push the

173:40

changes over to Neon DB by running npm

173:44

run db:migrate. If you do that, you'll

173:46

be able to see a warning, but everything

173:48

should have gone through successfully.

173:50

So now, if you head back over to your

173:51

Neon dashboard and go over to tables,

173:55

you should be able to see a new users

173:57

table. Perfect. This means that we have

173:59

successfully set up a Neon Postgres

174:01

database with Drizzle ORM. So let's go

174:05

ahead and commit it by saying git add

174:07

dot, git commit -m

174:10

setup neon postgres

174:14

with drizzle, and git push. Perfect.
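Putting the database pieces from this section together, the two files can look like this (a sketch; recent drizzle-kit also accepts defineConfig, but a plain object export works):

```javascript
// drizzle.config.js — Drizzle Kit configuration as described above
import 'dotenv/config';

export default {
  schema: './src/models/*.js',   // where the table definitions live
  out: './drizzle',              // where generated SQL migrations go
  dialect: 'postgresql',
  dbCredentials: { url: process.env.DATABASE_URL },
};

// src/config/database.js — Neon client plus Drizzle instance
import { neon } from '@neondatabase/serverless';
import { drizzle } from 'drizzle-orm/neon-http';

const sql = neon(process.env.DATABASE_URL);
const db = drizzle(sql);

export { db, sql };
```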

174:19

In this lesson we'll set up logging and

174:22

middleware. For that we'll use a super

174:25

popular logging library that has over

174:27

24,000 stars on GitHub and it can log

174:30

just about anything. info, errors,

174:33

debugging, and more. So, let's just

174:35

install it by opening up our terminal

174:37

and running mpmi winston. Then, we can

174:41

set it up by heading over to source

174:44

config. And within config, you can

174:46

create a new file called logger.js.

174:50

Now, if you head back over to this

174:52

GitHub repo, you'll see some

174:53

documentation about how to set it up.

174:55

So, you can just copy this usage part

174:58

where they guide you how to create your

175:00

own logger. If you paste it, you'll see

175:02

that we first import it. In this case,

175:04

they're using the old require. So, what

175:06

I'll do is I'll just change it over to

175:09

import Winston from Winston.

175:12

There we go. And once that is done,

175:15

we're basically creating our own logger

175:17

by calling Winston.create

175:20

logger. And then we pass over some info

175:23

such as the info level. In this case, if

175:27

we pass something different from our

175:29

environment variables such as

175:31

process.env.LOG_

175:34

LEVEL, then we'll use that, else we'll

175:36

just use info. But if remember this log

175:39

level for now is just info anyway. Then

175:41

for the format, instead of using the

175:43

default JSON one, we'll actually combine

175:46

a couple of things. So say combine and

175:49

then we can pass over all the different

175:51

formats. I will use the timestamp format

175:54

so we know when the log happened. We'll

175:57

combine it with the errors by saying

175:59

winston.format.errors.

176:03

And I want to see the whole stack of the

176:05

error. So I'll set it to true. And then

176:08

only then I want to see the whole JSON

176:11

right here. Then we have the default

176:12

meta which is going to be the name of

176:14

our application. So I can set it to

176:17

acquisitions API. And then we have

176:19

transports where you define the

176:22

importance level of error or higher to

176:24

error.log. So in this case we have new

176:28

winston.transports.File

176:30

with a filename of error.log. So here

176:34

decide where should the Winston create a

176:37

new file for the error logs. I'll put

176:40

them over to logs/error.log

176:43

with a level of error. And then for the

176:45

other ones, we can just pass them over

176:47

to logs/combined.log.

176:51

Then if we're not in production, then

176:54

log to the console with the format info

176:56

level info message JSON stringified. So

176:59

in this case, we're simply saying if the

177:02

node environment is not production, then

177:05

log something to the console. In this

177:07

case, I'll also modify the format by

177:09

saying winston.format.

177:12

combine. I want to colorize it so we can

177:14

see it in colors as well as I want to

177:16

keep it simple. So I'll say

177:19

format.simple. I found these two properties to

177:22

work the best when logging. And finally

177:25

I want to export that logger by running

177:28

export default logger. So now if you

177:31

head over into source and then app.js,

177:35

we can now add that logger to whenever a

177:38

user makes a request to forward slash.

177:40

So, I'll say logger. Make sure to import

177:43

it from /config logger.js.info

177:48

and I'll say hello from acquisition. So

177:51

now we'll be able to see it not just in

177:53

the browser or the return of our API,

177:56

but also from the logs. But before we go

177:59

ahead and test it out, check out this

178:01

top part right here where it says

178:04

./config/logger.js.

178:07

There's nothing necessarily wrong with

178:09

that. That is the file path of where

178:11

we're importing this logger object from.

178:13

But you can easily make a mistake here.

178:16

Maybe if you forget a dot or a forward

178:18

slash or if you mess up the path. So

178:21

this is called a relative import system.

178:24

But I would much rather prefer to use

178:27

the absolute import system. So let me

178:29

show you how. You know how inexjest,

178:37

right? how you're importing different

178:39

packages as well. They're just there.

178:41

You don't have to search throughout the

178:42

entire relative path. Oh, and if you're

178:44

importing logger from somewhere else,

178:46

you would have to maybe go a couple of

178:48

levels deep to get into that file. So,

178:51

it's prone to errors. But imagine that

178:53

you could just call the logger from

178:54

anywhere by saying something like add

178:57

logger. This would be pretty cool,

178:59

right? So, let me show you how to set it

179:01

up. It all has to do by heading over

179:03

into package.json. Yep. Again, you can

179:07

see how DevOps has a lot to do with

179:09

setting up the scripts, but this time

179:11

it's not going to be the scripts, it's

179:13

going to be the imports. So, right above

179:16

scripts, you can create imports,

179:19

which is going to be an object. And

179:21

there we can define #config

179:24

forward slash everything. So everything

179:27

within the config folder, we can

179:29

automatically point to that path

179:32

./src/

179:33

config/*. And now I will

179:36

duplicate this for every single folder

179:38

that I have. 1 2 3 4 5 6 7 I don't know

179:42

how many there are. But we can do the

179:44

same thing for every single one. So I'll

179:47

repeat it over for controllers. Then we

179:49

have middleware. Then we have the

179:51

models. There's the routes services

179:55

utils.

179:57

And look at that. I duplicated it the

179:59

exact number of times ending with

180:01

validations. So now if you go back to

180:04

app.js and you want to use the absolute

180:06

import system to import the logger now

180:09

you would go ahead and say something

180:10

like #config/logger.js.
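A sketch of that imports map in package.json, one entry per folder, using Node's "#" subpath-import convention:

```json
{
  "imports": {
    "#config/*": "./src/config/*",
    "#controllers/*": "./src/controllers/*",
    "#middleware/*": "./src/middleware/*",
    "#models/*": "./src/models/*",
    "#routes/*": "./src/routes/*",
    "#services/*": "./src/services/*",
    "#utils/*": "./src/utils/*",
    "#validations/*": "./src/validations/*"
  }
}
```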

180:14

Now I'll show you how Winston does all

180:16

of this logging very soon. But just

180:18

before we do that let's also install

180:21

something known as Helmet. See,

180:23

Helmet.js helps secure Express apps with

180:27

various HTTP headers. It also has over

180:30

10k stars and is a widely recognized

180:33

package. So we first have to install it

180:35

by running npm i helmet and it is a very

180:39

lightweight package. So it'll get

180:40

installed within a second. Then we can

180:43

copy its usage right here. Head over

180:45

within the app.js and paste it. You'll

180:49

see that we have some duplicates. We're

180:50

already creating the app and we're just

180:53

importing helmet from helmet.

180:56

And then right below we initialize the

180:58

app, we say app do use helmet. In this

181:01

case, helmet would be considered a

181:03

middleware. And alongside helmet, we'll

181:06

also set up Morgan. It's a logging

181:08

middleware that'll show you details like

181:10

the method, URL, status code or response

181:13

time whenever somebody makes a request

181:15

to our API. Basically, we use it to

181:18

monitor traffic and debug requests

181:20

easily, especially in development. So,

181:22

let's scroll down to its usage. First

181:24

you have to install it by running npm

181:27

install morgan and then you'll have to

181:30

use it. Using it is super simple. You do

181:32

almost the same exact thing as before by

181:35

first importing morgan coming from

181:37

Morgan. Then in this case we'll also

181:39

allow our application to pass JSON

181:42

objects through its requests by saying

181:44

app dot use

181:46

express.json

181:49

and also app.use express.urlencoded with

181:54

extended to true.

181:56

This is a built-in middleware function

181:58

in express and it allows you to parse

182:00

incoming requests with URL encoded

182:02

payloads based on body parser. So

182:05

finally we can now also use the Morgan

182:07

by saying app.use

182:10

Morgan

182:11

combined. So both in dev and production

182:15

stream and here we can define what it'll

182:18

actually do. So it'll write and then you

182:22

can define a callback function. When it

182:24

gets a message, it will simply return

182:27

logger.info

182:30

and then it'll pass the message.trim.

182:33

So in this case, we're actually

182:34

combining both our logging library

182:36

through Winston and Morgan by passing

182:39

over Morgan's messages into our logger.
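Pulling the Winston pieces from this lesson into one place, the logger file can be sketched like this (assuming winston is installed):

```javascript
// src/config/logger.js — Winston setup as described above
import winston from 'winston';

const logger = winston.createLogger({
  level: process.env.LOG_LEVEL || 'info',
  format: winston.format.combine(
    winston.format.timestamp(),             // when the log happened
    winston.format.errors({ stack: true }), // include full error stacks
    winston.format.json()
  ),
  defaultMeta: { service: 'acquisitions-api' },
  transports: [
    new winston.transports.File({ filename: 'logs/error.log', level: 'error' }),
    new winston.transports.File({ filename: 'logs/combined.log' }),
  ],
});

// Outside production, also log to the console, colorized and simple
if (process.env.NODE_ENV !== 'production') {
  logger.add(
    new winston.transports.Console({
      format: winston.format.combine(
        winston.format.colorize(),
        winston.format.simple()
      ),
    })
  );
}

export default logger;
```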

182:42

And while we're here, let's also set up

182:44

a couple more very important pieces of

182:46

middleware. I'll open up the terminal

182:48

and install them one by one by running

182:50

npm install cors. See, cors lets your

182:54

backend decide which external domains

182:57

can make requests to it. Without it,

182:59

browsers will block calls from different

183:01

origins like a React app on localhost

183:03

3000 calling an API on localhost 5000.

183:07

Then we also need cookie parser. Cookie

183:11

parser will read cookies from incoming

183:13

requests and make them available in

183:15

req.cookies.

183:17

It's super useful for handling sessions,

183:19

authentication, and storing small bits

183:20

of user data. And finally, express.json,

183:24

which we used. This one you don't have

183:26

to install. It's already built in. But

183:27

basically, it parses JSON data in the

183:30

request body and exposes it to you. So,

183:33

you can access it within req.body. It's

183:35

essential for APIs since most clients

183:38

send data in a JSON format. So, let's

183:40

install them and let's set them up

183:42

within our app.js.

183:45

You can already see how the app is

183:46

growing larger.

183:48

Right here after helmet, I will say app

183:51

dot use and set up cors. Make sure to

183:55

import it at the top by saying import

183:57

cors coming from cors. And right

184:00

here at the end, I'll also say app dot

184:02

use.

184:04

And we will get access to the cookie

184:07

parser.

184:09

Make sure to also import the cookie

184:11

parser right here at the top.

184:15

from cookie dash parser like this. Now,

184:19

if you reload your application or just

184:21

rerun it on localhost 3000 and just make

184:25

a request to it by heading over to

184:26

localhost 3000. Once it loads,

184:29

you'll be able to immediately see a

184:31

Winston log within the console where it

184:33

says hello from acquisitions. That is

184:36

this part right here, logger info. It's

184:39

also letting us know which service this

184:41

is coming from. It gives us more

184:43

information about the date, the

184:45

operating system, and all the other

184:47

information. Oh, and also, if you head

184:49

over into acquisitions logs, the folder

184:53

we created not that long ago, you can

184:55

also see the logs created for us by

184:58

Morgan, all of the HTTP information is

185:01

stored right here, so we can retrieve it

185:04

whenever needed. This is super important

185:06

when debugging your servers. Perfect.

185:08

And with that in mind, you have just

185:10

successfully set up logging and

185:12

middleware within your backend

185:13

application.

185:16

In this lesson, we'll get started with

185:19

implementing the authentication. So,

185:21

head over into our source and let's

185:24

create our first group of routes. I'll

185:28

create a new file which will be called

185:30

auth.routes.js.

185:35

And within it, you can first import

185:37

express coming from express. And then we

185:41

can get access to express's router

185:43

functionality by saying router is equal

185:46

to express.Router().

185:49

Router allows you to create routes.

185:52

You can do it like this. Router.post.

185:56

So you're creating a post route at forward

185:58

slash sign-up.

186:01

And then as the second parameter you

186:04

provide a handler which is a function

186:06

that defines what will happen once this

186:08

endpoint is reached. So in this case we

186:11

can define a new callback function that

186:13

will be executed within the block of the

186:15

code. You write what's going to happen

186:18

but within the first two parameters

186:20

you're getting access to the request and

186:22

the response. The response has the send

186:25

method that allows you to respond to the

186:28

user that's trying to access this

186:30

endpoint. So here you can tell it

186:33

something like POST /api/auth/sign-up

186:38

response. And now I will duplicate this

186:41

two more times. For the second one we

186:44

will trigger the signin functionality.

186:46

So I'll say coming from sign in. And for

186:49

the third one I'll do sign out.

186:53

Perfect. Now for this router to work, we

186:56

first have to export it by saying export

186:59

default router and then we need to use

187:02

it within the app.js.

187:04

So now if you head a bit below right

187:06

here we have the app.get. You can

187:09

also add app dot use similar to how we

187:12

use the middleware, but you can also use

187:15

the routers like this by saying all of

187:18

the routes within this router will start

187:21

with /api/auth, and then we expose this

187:26

entire auth routes which we can import at

187:29

the top. So now when somebody goes to

187:31

/api/auth/sign-in, they

187:37

will hit this signin route right here.
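A quick way to see what mounting a router at a prefix does: every path registered on the router becomes reachable at the prefix plus the path. A plain-JavaScript illustration (the names here are just for the sketch, not Express itself):

```javascript
// Paths registered on the router via router.post(...)
const routerPaths = ['/sign-up', '/sign-in', '/sign-out'];

// Mount prefix passed to app.use(prefix, router)
const prefix = '/api/auth';

// The full URLs a client actually hits:
const fullPaths = routerPaths.map((p) => prefix + p);
console.log(fullPaths.join(', '));
// "/api/auth/sign-up, /api/auth/sign-in, /api/auth/sign-out"
```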

187:39

Alongside using this router let's also

187:42

add something known as a health check by

187:45

adding the app.get and it's nothing more

187:48

than just another endpoint which is

187:50

going to be called health. It'll once

187:51

again have the request and the response

187:54

and within it we'll simply say res dot

187:56

status of 200 and we'll send over a JSON

187:59

object that'll have a status of okay.

188:03

It'll also have a timestamp of the

188:06

current date and time. We can even put

188:08

it to an ISO string by saying new Date().toISO

188:13

string so it's in a human readable

188:15

format. And finally uptime which is

188:18

process.uptime() to define for how long

188:21

has our server already been up. And

188:23

alongside the health since we just

188:25

exposed this new router on the auth route

188:28

group, we can also create another

188:30

endpoint

188:32

app.get which is not going to be forward

188:34

slash which we have right here above.

188:37

It'll rather be /api. And if

188:39

somebody tries to go to /api,

188:42

we'll say res.status(200).json,

188:45

and we'll send over a message

188:49

saying something like acquisition API is

188:52

running. Perfect. So now we can test all

188:55

of these API routes. So let's actually

188:57

go ahead and get these routes tested.
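Before testing in a client, the /health payload just described can be built as a plain function; Express only adds the res.status(200).json(...) wrapper around it:

```javascript
// Builds the health-check payload: status, ISO timestamp, process uptime.
const healthPayload = () => ({
  status: 'OK',
  timestamp: new Date().toISOString(), // human-readable ISO string
  uptime: process.uptime(),            // seconds the process has been up
});

const body = healthPayload();
console.log(body.status);        // "OK"
console.log(typeof body.uptime); // "number"
```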

189:00

What you can do is first ensure that the

189:03

server is running on localhost 3000 by

189:06

running npm run dev. And once it is you

189:09

can visit it within the browser and then

189:11

manually change the URL path to

189:13

something like /health. But you can only

189:15

do that for get requests because what

189:18

does it mean to load a website? Think

189:21

about it. It means just making a simple

189:24

get request over to that endpoint. So if

189:27

you want to test get routes, you can do

189:29

that with the browser. But as soon as

189:31

you have a POST, PUT, or another

189:34

type of a route type, then you have to

189:37

use something known as an HTTP client.

189:40

There are many different clients out

189:41

there. There's Postman, Insomnia, but

189:44

recently I used the one that I found the

189:46

simplest, HTTPie. Simply Google HTTPie, go

189:50

to their web app. Once you're within

189:51

HTTPie, you can type http://localhost:

189:57

3000

189:59

and we can test it out. You'll quickly

190:01

see that we get a DNS error saying to

190:03

check the URL and try again or for local

190:06

networks, download the desktop app. So

190:09

in this case, let's go ahead and very

190:11

quickly install HTTPie on our device.

190:14

It'll try to open it. If you don't have

190:15

it, just quickly install it. We once

190:18

again get a DNS error. But this time, if

190:21

you just say colon 3000, so we're hitting

190:24

our local network. You can see that we

190:26

indeed get back a response. This is a

190:28

response in an HTML format, hello from

190:32

acquisitions, because we didn't use

190:34

res.json to send the response. We just

190:36

use res.send. But if you head over to

190:39

forward slash API, in that case you'll

190:42

see that we get back a much nicer JSON

190:44

format object saying acquisitions API is

190:48

running. And if you head over to

190:50

forward slash health,

190:52

you'll see that we get the timestamp and

190:54

the uptime. Everything is working

190:56

perfectly. What this also allows us to

190:58

do is to make a POST request to

191:01

localhost:3000/api/auth/sign-in. And if you

191:07

click send, you'll see that we reached

191:09

the post sign-in response. So now is our

191:11

time to get it implemented. So we can

191:14

then test it out using this HTTP client.

191:16

So heading back over to our IDE, you can

191:20

open up the terminal and install a new

191:22

package called jsonwebtoken. JSON web

191:27

token or JWT for short is a compact

191:31

URLsafe means of representing claims to

191:33

be transferred between two parties. In

191:35

simple words, we use them to ensure that

191:39

our users are signed in and that they

191:41

are who they claim to be. The way you

191:43

can implement JWTs is by heading over to

191:46

source and we can get started by

191:48

creating a utility function that'll make

191:50

it easier for us to use JWTs across the

191:53

entire authentication process. So create

191:56

a new file called JWT.js.

192:00

And within here you can import JWT from

192:04

JSON web token. And let's set up our JWT

192:07

secret by saying const JWT secret or

192:11

underscore secret is equal to

192:14

process.env.JWT_SECRET.

192:17

Or if we don't have anything

192:19

within our env

192:21

we'll fall back to a default by saying something like

192:24

your secret key please change in

192:27

production. So it's very important that

192:30

this is not coming from your local code.

192:32

It needs to be coming over from

192:33

environment variables. And you can also

192:36

define another constant for how long for

192:40

that JWT to expire. So JWT expires

192:44

in, and we can set it to '1d' as in one

192:48

day. And then we can create this new JWT

192:51

token which is basically just an object

192:53

that has a couple of different methods

192:55

on it which we will define. The first

192:58

method will be called sign and this

193:01

method accepts a payload

193:04

and it'll try to return a signed JSON

193:08

web token after it verifies that we are

193:11

who we claim to be. So we can open up a

193:13

new try and catch block in the catch. If

193:17

there's an error, we can use the logger

193:19

functionality to log that error saying

193:22

something like failed to authenticate

193:24

the token and then we can render the

193:26

actual error and we can also throw that

193:29

error within the application. In the

193:31

try, we will actually return a signed

193:35

JWT

193:36

with the payload that we passed in by

193:39

verifying the secret

193:42

and passing over as options when this

193:46

JWT expires. A second part of this JWT

193:49

config besides the sign is the verify.

193:53

So after we have assigned JWT, we also

193:56

have to verify it with a special token.

193:59

I'll also open up a try and catch block.

194:03

In the catch, we can do the same thing

194:05

that we did before. We will say logger.

194:08

Failed to authenticate token, and log the

194:11

error and then throw that error so we

194:13

can catch it. And then in the try, I

194:16

will return the verified token. So,

194:19

JWT.verify

194:22

token and the secret. This will only

194:25

work if we have access to our apps or

194:28

JWT's secret. And don't forget that we

194:31

are already exporting this, so we'll be

194:33

able to use it very soon. But just

194:36

before we use it, let's also create

194:38

helpers for the cookies.

194:40

We can do that right here within utils

194:43

and create a new file called cookies.js.

194:47

And following the same fashion, we can

194:49

export a new object called cookies,

194:52

which will have a couple of different

194:53

methods attached to it. The first one

194:55

will be to get the options. It's a

194:58

callback function with an automatic

195:00

return. An automatic return means that

195:02

we don't just put curly braces here.

195:05

That's opening up a code function block,

195:07

but rather we wrap it with parenthesis.

195:10

And that means that we're actually

195:11

returning this object. Here we can say

195:13

that it is an HTTP only cookie to make

195:16

it more secure. Talking about security,

195:19

we can define the environment. So in

195:21

this case it'll be process.env.node

195:25

env has to be equal to production. If it

195:28

is then we are in the secure mode. We

195:30

will also set the same site origin to

195:33

strict as well as a max age of this

195:36

token to be 15 * 60 * 1,000. This is 15

195:44

minutes. So 15 * 60 seconds * 1,000

195:48

milliseconds. Let's also set up a couple

195:50

more methods such as the set. How can we

195:53

set the cookies? Well, we can set them

195:56

by first accepting a couple of things

195:58

within parameters. So, we'll be

196:01

accepting the response, the name of a

196:04

cookie, the value that we want to set to

196:06

it, as well as some additional options,

196:09

which by default will be set to an empty

196:11

object. And then we can say res.cookie,

196:14

pass the name and the value. And

196:17

finally, we want to spread dot dot dot

196:21

cookies.get

196:22

options, and then we want to append

196:25

additional options to it. If we do

196:27

decide to provide some additional ones,

196:28

now we want to do a similar thing with a

196:30

clear. When we want to clear the

196:32

cookies, we will also be accepting all

196:34

of these different props. So, res, name

196:38

and options. By default, these options

196:40

are equal to an empty object. And then

196:42

we will use res again. Instead of

196:45

calling res.cookie, we will call

196:47

res.clearCookie with a specific name,

196:50

no value in this case. and we will still

196:52

be providing all of the options.
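The set and clear helpers built so far can be sketched with a stubbed response object standing in for Express's res, so the option-merging is easy to see on its own:

```javascript
// Cookie helpers as described: shared defaults, spread-merged with overrides.
const cookies = {
  getOptions: () => ({
    httpOnly: true,
    secure: process.env.NODE_ENV === 'production',
    sameSite: 'strict',
    maxAge: 15 * 60 * 1000, // 15 minutes in milliseconds
  }),
  set: (res, name, value, options = {}) =>
    res.cookie(name, value, { ...cookies.getOptions(), ...options }),
  clear: (res, name, options = {}) =>
    res.clearCookie(name, { ...cookies.getOptions(), ...options }),
};

// Stub response recording calls (Express would actually set headers):
const calls = [];
const res = {
  cookie: (name, value, opts) => calls.push({ kind: 'set', name, opts }),
  clearCookie: (name, opts) => calls.push({ kind: 'clear', name, opts }),
};

cookies.set(res, 'token', 'abc', { maxAge: 1000 }); // explicit option wins
cookies.clear(res, 'token');
console.log(calls[0].opts.maxAge);   // 1000
console.log(calls[1].kind);          // "clear"
```

Because the extra options are spread after the defaults, a caller can override any single default without repeating the rest.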

196:55

Finally, once we have set it or reset

196:57

it, we want to be able to access it. So,

197:00

I'll create a get method with a wreck

197:02

and a name of that cookie. And we will

197:05

simply return req.cookies

197:07

under a specific name. Perfect. Now,

197:10

before we test this out, we now also

197:12

want to install an additional package by

197:15

running npm install zod. Zod is another

197:18

one of those very popular backend

197:20

packages. It's actually a TypeScript

197:23

first schema validation with static type

197:25

inference with almost 40,000 stars.

197:28

We'll use it to define schemas with

197:31

strongly typed validated results. So

197:34

let's define how our signup schema

197:36

should look like. I'll do it right here

197:38

within validations folder by creating a

197:41

new file called auth.validation.js.

197:45

within it we can import in curly braces

197:49

z from

197:51

zod and then export this schema export

197:55

const signup schema is equal to z.object

197:59

and here we can define what the schema

198:02

needs to have. We can start with

198:03

validating the name field by saying name

198:05

will be z string and we can trim it. If

198:09

you want, you can also provide some

198:10

additional parameters like min to define

198:13

the minimum amount of characters that it

198:15

should have or max something like 255

198:19

and then trim typically comes at the

198:21

end. Let's also continue doing that for

198:24

the email by saying email will be z.email()

198:30

with a max of 255.

198:32

We will lowercase it and we will trim it

198:36

in case people left some extra

198:38

characters. I'll also do the password

198:40

with Z dot string dot min of about six

198:44

max of about 128. That's more than

198:47

enough for our password. We will not

198:48

lowercase it. Password can contain

198:51

uppercase characters and we will not

198:53

trim it. Let's not mess with passwords.

198:56

And finally, the role of a user. I'll

198:58

set that to be equal to Z.enum.

199:02

Enum stands for, you know, some of the

199:04

options that we can choose such as

199:06

either a user or admin. And then

199:09

finally, we will set it to default to

199:12

user. Finally, we can also have a signin

199:15

schema which will be very similar. So

199:17

I'll say export const signin schema

199:21

z.object. It'll have an email of z.email(),

199:26

lowercased and trimmed,

199:29

and it'll have a password. So if

199:31

somebody's trying to sign in, they're

199:33

only using their email and password. So

199:35

Z.String

199:36

min one character. We basically need a

199:40

password right here. Perfect. So now we

199:43

have those schemas which we're exporting

199:45

so we can use them within our

199:47

application. And we also have to somehow

199:49

properly format all of these errors so

199:51

that they can get sent over to the user.

199:54

For that I'll go ahead and create

199:55

another utility function within utils

199:58

and I'll call it format.js.

200:02

Within here we can export a new function

200:06

called format validation in case we're

200:08

getting many errors. So this one will

200:10

accept all of the errors and then if

200:13

there is no errors or no errors.

200:18

We will simply return validation failed.

200:22

But if there are some errors and if it's

200:25

a type of an array. So say array.is

200:28

array. What an interesting way to check

200:30

whether something is actually an array.

200:33

Then we will map over all of those

200:35

errors by saying return error dot issues

200:41

map where we get each individual issue

200:44

take over its message and once we do we

200:47

can join them all together by commas.

200:51

Hopefully this makes sense. But if it's

200:53

not an array of issues, just one

200:55

validation error, we will simply pass it

200:57

over. So JSON stringify errors. So no

201:00

matter how many there are, we're always

201:02

going to present it in a single string

201:04

separated by commas. And now we're ready

201:06

to write the logic for creating the user

201:09

once they sign up. We can do that within

201:11

controllers. See where within routes you

201:15

define the actual endpoints. Within

201:17

controllers, you define what will happen

201:20

once those endpoints are reached. So

201:23

create a new controller called

201:25

auth.controller.js.

201:28

The whole goal of a controller is to

201:30

export a single function that'll do the

201:32

job when a specific endpoint is called.
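The controller shape described here, an async (req, res, next) function that responds on success and forwards failures with next(error), can be sketched with stubbed Express objects. The stubs are stand-ins for the sketch, not Express itself:

```javascript
// A pared-down controller: validate, respond 201 on success,
// and hand any error to next() so error middleware can deal with it.
const signup = async (req, res, next) => {
  try {
    if (!req.body || !req.body.email) throw new Error('Validation failed');
    res.status(201).json({ message: 'User registered' });
  } catch (error) {
    next(error); // pass the torch down the middleware chain
  }
};

// Minimal stand-ins for Express's res and next:
const res = {
  statusCode: null,
  body: null,
  status(code) { this.statusCode = code; return this; }, // chainable like Express
  json(payload) { this.body = payload; },
};
const errors = [];

signup({ body: { email: 'a@b.c' } }, res, (e) => errors.push(e));
console.log(res.statusCode);     // 201
console.log(errors.length);      // 0
```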

201:35

So in this case, I will export a new

201:38

function called signup that is going to

201:40

be an asynchronous function that accepts

201:43

a request and a response. and the next.

201:45

I'll talk a bit about what this next

201:47

means very soon. And then it does

201:50

something. I'll open up a try and catch

201:52

block as before. So if something goes

201:55

wrong, we can properly log it and handle

201:58

it. First things first, I'll turn the

202:01

logger on and log an error saying

202:04

something like signup error so we know

202:07

exactly where it happened. And then we

202:09

can provide additional error messages.

202:11

And then specifically if the error

202:13

dot message is equal to user with this email

202:19

already exists in that case we can

202:22

return some other status by saying res

202:25

status of 409 which is an exact HTTP

202:29

status that says basically what that

202:31

message says user with this email

202:33

already exists and we'll provide more

202:36

information uh to that user but

202:38

otherwise we're simply going to say Hey,

202:41

take this error and forward it over by

202:44

using the next functionality. Next is

202:46

typically used when something is

202:47

considered a middleware or when a

202:49

specific action will be called before or

202:51

another action so that it passes the

202:53

torch in a way that it'll execute some

202:56

logic and then it'll pass it over to the

202:59

next function in the chain. It'll make a

203:01

bit more sense later on once we put the

203:03

sign up where it needs to be. Okay, so

203:05

now let's focus on implementing the

203:06

logic of the signup. First we want to

203:08

validate the data that is coming into

203:11

the form or into this endpoint by saying

203:14

const validation result is equal to

203:18

signup schema dot safe parse and we're

203:21

going to pass the rec.body. This rec

203:24

body will contain the form data that the

203:26

user has typed in when trying to make a

203:29

request. If no validation results

203:32

success, so if something went wrong, we

203:35

will return a res status of 400

203:39

alongside the following JSON payload.

203:41

It'll have an error of validation failed

203:44

and it'll also have details within which

203:47

we're going to use our format validation

203:51

error utility function and to it we can

203:54

say validation result dot error. So the

203:58

user will know exactly what went wrong

204:00

and what they have to fix. But if

204:02

everything went right, we can get access

204:05

to the name, email, and the role that

204:08

the user has submitted through the form

204:10

by destructuring them from the validation

204:12

result data. Then within here, we'll

204:15

have to call our auth service to actually

204:18

create an account. And then later on, we

204:20

can use the logger functionality to

204:22

simply give us an info message of

204:24

something like user registered

204:26

successfully. And we can even render

204:28

their email. So let's make sure this is

204:31

a template string. Finally, we'll

204:33

respond with a status of 201, which

204:36

means user created. And I'll pass an

204:39

additional JSON payload of message user

204:43

registered. And we'll pass over the user

204:46

information. For now, we can pass an ID

204:48

of one. We're faking it alongside the

204:52

user's name, email, and the role that

204:55

got created. Okay, so let's test this

204:58

out. If I now head back over to my HTTP

205:01

client and head over to sign up, if I

205:04

make a request, we'll see just a regular

205:06

signup response. That's because we

205:08

haven't yet hooked up our controller to

205:11

our route. So to hook it up, it couldn't

205:14

be any simpler. What you need to do is

205:16

just remove this callback function

205:19

because we no longer need it. And what

205:21

you need to instead do is simply call

205:23

the signup controller. So you're

205:25

basically saying once a user goes to

205:28

/api/auth/sign-up, call this function. So now

205:32

we will no longer be receiving this

205:34

rather we'll get a validation error

205:36

invalid input expected object received

205:40

undefined and this is perfectly fine.

205:43

Imagine that we just submitted a form on

205:46

the front end and it was asking us for

205:49

an email, a name and a role and we

205:53

basically left everything blank. That's

205:55

not how it goes. We have to get that

205:57

data from the form and actually pass it

206:00

through request body. So let's format.

206:02

I'll say that this request body it will

206:06

have a name of something like email and

206:09

a value of contact@jsmastery.com.

206:13

Now if I send it, it'll say invalid

206:16

input expected string received undefined

206:19

invalid input expected string received

206:22

undefined as well. It is referring to

206:24

our other two pieces of the form and

206:26

that is the name which I was also

206:28

missing and finally a password. So if I

206:32

pass a password of something like 1 2 3

206:34

1 2 3 you'll see that we have

206:37

successfully apparently right this is

206:39

just fake for now registered a new user.

206:43

This is looking good but take a look at

206:45

our back end. If you take a look at the

206:48

logs you can see that the application

206:50

was listening and that a new post

206:53

request was made. Now in this case we

206:56

were not console logging the password or

206:58

doing anything with it but we might as

207:01

well could have because it got passed in

207:03

a plain text format which means that we

207:07

need to secure it and we can secure it

207:09

by encrypting it. For that there's one

207:12

best package that most people use and it

207:14

is called bcrypt. So simply run npm

207:17

install bcrypt and then we'll head over

207:21

into another file.

207:23

This time we'll actually create our

207:25

first service. So if you head over into

207:28

source services and create a new file

207:32

called auth.service.js.

207:36

Here we will implement the logic of

207:38

hashing our password. So simply say

207:41

export const hash password which is

207:44

equal to an asynchronous function that

207:47

accepts the password in plain text and

207:50

then it'll open up a try and catch

207:53

block. In the catch, as usual, it'll try

207:56

to log an error saying logger.

208:01

Invalid password or actually what's

208:04

happening here is we have an error

208:07

hashing the password

208:09

which shouldn't really happen. And then

208:11

we can finally throw a new error error

208:14

hashing. But if everything goes right,

208:17

we can try to just return the hashed

208:20

password by calling the await because

208:24

hashing it takes some time and is

208:26

asynchronous and using the bcrypt

208:29

library. So bcrypt dot hash you just have

208:33

to pass the password and a number called

208:36

salt rounds. So how many rounds do you

208:39

want to hash it for? Typically the

208:41

default is about 10 to 12. So that's

208:44

what I'll do. And don't forget to import

208:46

bcrypt at the top by saying import bcrypt

208:51

from bcrypt. And this is it. This is a

208:54

function that will now hash our

208:55

password. Once we hash the password, we

208:58

also have to create a new user. So let's

209:01

create it just below by saying export

209:04

const

209:06

create user is equal to

209:10

an asynchronous function that'll accept

209:12

an object which we can automatically

209:15

destructure and get its name, email,

209:18

password and the role which by default

209:21

will be set to user. Then we can open up

209:24

a try and catch block. In the catch we

209:28

will simply log the error by saying

209:30

logger.

209:32

And the error will be all about creating

209:34

the user. So say error creating the user

209:39

and we will throw that error. And in the

209:42

try we'll first see whether that user

209:44

already exists in the database by saying

209:47

const existingUser is equal to db dot

209:52

Select from the table of users where EQ

209:59

this is coming from Drizzle ORM

210:02

users email matches the one of email

210:05

that we're now trying to create the

210:07

account for and limit it to one user.

210:10

Make sure to import this DB right at the

210:13

top coming from config database.js. Also

210:17

make sure to import the users model by

210:19

saying import in curly braces users

210:22

coming from models user modeljs. And I

210:26

think we're good. Then if an existing

210:29

user exists so if their length is

210:32

greater than zero in that case we will

210:35

throw a new error saying user already

210:39

exists. But if it doesn't, we can start

210:42

hashing the password and creating a new

210:44

user by saying const password hash is

210:50

equal to await hash password. That's the

210:53

function that we just created above. And

210:56

then we can get access to this new user

210:59

by saying const, destructure the array of

211:02

the response and get this new user out

211:04

of it. And then call await

211:08

database.insert.

211:09


211:11

Insert into the users table the

211:14

following values.

211:16

The values of name, email, password, but

211:21

not the password in the plain text

211:23

format, but rather a password hash and

211:25

then a role. And from this database,

211:29

return me the following things. So I'll

211:31

say dot

211:33

returning: id of users.id, name of

211:38

users.name,

211:40

email of users.email,

211:43

role of users.role, and createdAt of

211:48

users.createdAt.

211:50

I think you get the idea. Oh, I think

211:54

I have a typo right here. So let's fix

211:56

it. name of users.name

211:59

and I will actually put this into

212:01

multiple lines so it's much easier to

212:03

look at values in one line and returning

212:07

in another. Finally, once we create this

212:09

new user, we can log it out by saying

212:12

logger.info

212:13

user with this new email created

212:16

successfully and then we want to return

212:18

that user from this service. These

212:21

services are like just additional

212:23

utility functions that we're going to

212:24

call within our controllers.

212:27

So now let's head over into our

212:30

auth controller.

212:31

That's within auth.controller.js.

212:35

Here I left some space for this auth

212:37

service. And first we want to call the

212:39

create user const user is equal to await

212:44

auth service. Oh, it looks like we forgot

212:46

to export it. So head over to auth service

212:51

and as you can see we're exporting these

212:53

services one by one. So what you can do

212:56

instead of calling auth service you can

212:58

just simply call create user and it

213:02

should auto-import it from services auth

213:04

service. To create user you can now pass

213:07

everything it needs such as the name

213:10

email password which now will

213:12

automatically get hashed as well as the

213:15

role. This password is coming from the

213:17

validation data of course. And once we

213:19

have the user, we can create their JWT.

213:23

So const token is equal to

213:26

jwttoken.

213:28

Make sure to also import this

213:31

dot sign, pass the ID of user.id, email of

213:37

user.email and role of user.role.

213:42

Finally, we can take that token and set

213:44

it to the cookies by saying cookies

213:48

again import it from our utility cookies

213:51

dot set. We created this method on our

213:54

own and we want to set the entire

213:56

response with the name of token and a

213:59

value of this token we just crafted by

214:02

using JWT sign. Once we do that, we're

214:05

successfully logging this in. And now we

214:07

no longer have to fake this user ID. We

214:10

now have a real one. user.id, name is

214:14

equal to user.name. Email is equal to

214:17

user.email and role is equal to user.role.

214:22

Perfect. And as you know we're calling

214:25

this controller right here within

214:27

routes. So if you head over into routes

214:31

you can see we're calling the signup.

214:33

And if you make a request to this

214:36

endpoint but with proper data, in this

214:39

case I'll be using just a regular text

214:41

element. So we can pass a regular JSON

214:43

object that looks something like this.

214:46

Name of admin, email of

214:49

admin@jsmastery.pro

214:50

with a password of 1 2 3 4 5 6 and a

214:53

role of admin. You can now send that

214:56

request and you'll get back a 201

214:59

saying user registered with a role of

215:02

admin. And you can also create another

215:05

one, maybe Adrian, adrian@jsmastery.pro,

215:08

this time with a role of user and you'll

215:11

see that the ID will be incremented. So

215:13

this time it'll be saying two and for

215:16

each one of these requests you can also

215:18

see the response. So if you click right

215:21

here you can see all the information

215:22

about this response and take a look at

215:25

this field called set cookie. You can

215:28

see that the cookie actually contains

215:30

the actual JWT. So within our

215:33

application, we'll know that this user

215:36

got authenticated. Amazing job. This

215:39

basically handles the majority of the

215:41

authentication setup. We have the sign

215:43

up and user creation. Now we also got to

215:47

figure out how to do sign in and sign

215:49

out which are of course essential parts

215:52

of every good authentication. So we'll

215:55

do that soon. Before we dive into

215:57

implementing the rest of the authentication

215:59

features and running DevOps tasks on a

216:02

realworld app, I wanted to stop for a

216:04

second and show you a tool that I've

216:06

been using a lot lately. It's Warp, the

216:09

fastest way to build and ship

216:11

applications using AI right inside a

216:14

single environment. It includes a

216:16

terminal, a code editor, and an agent

216:19

hub. So instead of memorizing commands

216:21

or juggling through different docs, you

216:23

can just type in what you want in plain

216:25

English and Warp will figure out the

216:27

rest. Execute the commands and just get

216:30

the job done. With Warp, you can chat

216:32

with any AI, all the top models

216:35

including Claude, GPT-5, and more. And it

216:38

helps to plan architecture, explain or

216:41

review your codebase. You can also write

216:44

and edit code in line with smart

216:46

suggestions. You can run multiple AI

216:49

agents at once to speed up the results

216:51

and automate entire workflows. So typing

216:54

something like undo last merge actually

216:57

rolls back the changes in seconds. In

216:59

short, you describe what you want and

217:02

warp does the heavy lifting. And the

217:04

best part is the developer experience of

217:06

having everything together. A terminal,

217:09

a code editor, and agents all in one

217:12

place. No need to have three separate

217:14

browsers, an editor, and 10 tabs open

217:16

just to get some output from AI agents.

217:19

And talking about pricing, you can try

217:21

it all for free. They have a pretty

217:23

generous free plan. But if you need

217:25

something bigger, for a limited time,

217:27

Warp is offering you, yes, you watching

217:30

this video, the Pro plan for only $1,

217:34

which is normally 18 bucks per month.

217:37

So, click the link down in the

217:38

description before it's gone and we'll

217:40

use it very soon to 10x our

217:42

productivity. Let's dive right in. To

217:44

get started, click the link down in the

217:46

description and download Warp. Once you

217:49

download and log into Warp within your

217:51

device, you'll see a pretty empty screen

217:54

that allows you to create a new project,

217:56

open a repo, or clone it. But the real

217:59

magic happens right here at the bottom.

218:01

Here you have two different modes, a

218:03

terminal and an agent mode. Then you can

218:06

select any AI agent you want to use for

218:08

this specific task. In this case, I'll

218:10

go with auto, which is Claude 4 Sonnet,

218:13

since it's currently the best option for

218:15

coding tasks. Then if you press a

218:17

forward slash, you'll be able to see

218:19

some options such as adding MCP servers,

218:22

adding prompts, rules, and more. Oh, and

218:25

there's also a voice input mode. So if

218:27

you're more of a vibe coder who prefers

218:29

to code by speaking over typing, well,

218:32

you can do that, too. Here you also have

218:34

some directories. So you can switch

218:36

between different repos and attach

218:38

additional context from different

218:40

folders. So let's cd into the repo we've

218:42

been working within: acquisitions.

218:45

And immediately warp is asking us

218:47

whether we want to optimize it for this

218:49

codebase. And for sure I'll select

218:51

optimize. It'll do that by indexing this

218:54

entire codebase. And it can also create

218:56

its own MD file. So I'll say sure go

218:59

ahead. As soon as we enter this repo,

219:01

you can see that now it tells us the

219:03

version of node we're running, where

219:05

this project is located. We can see more

219:07

git metadata such as the branch and how

219:10

many changes we've made. And we can also

219:12

attach some additional context. Now,

219:14

while it's doing its thing, I'm going to

219:16

open up the warp drive on the left side

219:18

right here. You can open it up by just

219:19

pressing this button at the top under

219:21

personal. Here you have different things

219:23

such as the rules you can add, the MCP

219:26

servers, and the getting started

219:28

notebook. And it's not limited just to

219:30

personal projects. You can also have

219:32

your entire team join you. And while I

219:34

was clicking around, you can see how it

219:36

actually opened different windows all

219:38

within a single environment. This is the

219:40

beauty of this developer experience

219:42

using Warp. No longer do you have to

219:44

have 10 different apps opened up. It's

219:47

just one. So, let me go ahead and create

219:49

a new team that I'll be using for this

219:51

project. I'll call it acquisitions

219:54

since that is the name of our app. And

219:56

let's start with JSM acquisitions. Once

219:58

you create it, you can start adding some

220:00

workflows or notebooks right here. Or

220:03

you can press this plus icon and then

220:05

start with a new prompt within the video

220:07

kit down below. I'll give you access to

220:09

the complete prompt that you can type

220:11

right here. So we can set it up in the

220:13

same way. Let's start by giving it a

220:15

title of codebase architect explainer.

220:18

And then we can give it a description

220:19

saying that this is an AI prompt that

220:22

studies any codebase and produces a

220:24

clear structured explanation of its

220:26

architecture and how it works. And then

220:28

you can pass the prompt itself which you

220:30

can find within the video kit down

220:32

below. Now when you create this codebase

220:34

architect explainer prompt,

220:37

it'll be added to your team or to your

220:40

personal project. Then when you head

220:41

back over to your terminal and type

220:44

forward slash, you'll be able to see the

220:46

list of your different prompts. And one

220:48

of those is the codebase architect

220:50

explainer. As soon as you type it, it'll

220:52

automatically select it. And the only

220:54

thing you have to do is simply press

220:56

enter. So no need to type in long

220:59

prompts manually or scout your past chat

221:02

GPT history to find them. Now we can

221:04

store them all in one place and

221:06

re-execute them whenever you need to.

221:08

The first thing that it is doing is

221:10

reading the files that we have. And as

221:12

you can see, immediately it started

221:15

explaining what is currently happening

221:17

within our codebase.

221:19

Of course, you took the time to build

221:21

it, so you already know what's

221:22

happening. But of course, seeing an

221:25

overview of the project structure, for

221:27

example, and of all of the folders we

221:29

created is always super useful. we get a

221:32

complete breakdown of all of the

221:34

components that form the application

221:36

together and even the data flow of the

221:38

application where a client makes a

221:40

request. Then we have the express

221:42

middleware to the route handler to the

221:45

controller to zod to services and

221:47

finally we return the response. So this

221:51

is perfect in case you want to study it

221:53

in more detail or ask it more questions.

221:56

I'm just amazed. I mean, it even gives

221:58

us a sample request execution that we

222:00

can send over to test this API. This is

222:03

great. And this is just one single

222:05

prompt that I gave it and it immediately

222:07

spat all of this back within this a bit

222:10

unorthodox terminal like experience. So

222:15

like it's not a typical chat and it's

222:17

not a terminal either nor it's an editor

222:21

but it's all combined into one and that

222:25

feels so familiar to me as a developer.

222:28

So let's see what else Warp can do

222:30

besides just explaining our codebase and

222:33

I think to truly test warp to increase

222:36

our speed as developers I want to ask it

222:39

to implement what was going to be the

222:40

next step for me to do manually within

222:42

the application. Remember, we just

222:44

created a route for the signup and a

222:48

signup controller, but we haven't yet

222:50

implemented controllers nor services for

222:53

sign in and sign out routes. So, let's

222:57

ask Warp to do it for us. Now, check

223:00

this out. I will run a clear command to

223:04

clear what I'm currently seeing within

223:06

my terminal window. And then I'll switch

223:08

over to the agent mode. And you can

223:10

immediately start typing what you

223:12

wanted to do.

223:13

I'll give it some background such as you

223:16

are a back-end developer

223:19

working on an ExpressJS

223:22

app with auth features.

223:25

Your job is to extend the auth service and

223:30

controller

223:32

to support user login and logout. I

223:36

think that this even might be enough.

223:37

But just to make sure that you can

223:38

follow along and see the same exact

223:40

output that I'm having. I took a bit of

223:43

time to write a bit more detailed prompt

223:46

just so I don't have to type it out

223:47

manually. I'll give it to you within the

223:49

video kit down below. So simply copy it

223:51

and paste it right here. I provided it

223:53

with a bit more info on what it can do

223:56

to make this happen and then saying that

223:58

it needs to implement these two

224:00

functions. So now simply press enter

224:03

like you're running a terminal command

224:05

in agent mode. Warp immediately started

224:08

warping and told us that it'll help us

224:10

extend the authentication service

224:12

controller with these two additional

224:15

functionalities. It'll do some things on

224:17

its own but for some things it's going

224:19

to ask us whether it's exactly what we

224:21

want. So for example it created this

224:23

compare password function and it put it

224:26

within services. This is exactly how I

224:28

want it to look and this is exactly

224:30

where I want it to be. So, I'll gladly

224:33

accept the changes by simply pressing

224:34

enter. And of course, if you want to

224:36

edit something, you can do that by

224:39

pressing command E or pressing the edit

224:41

button and then edit it and then submit

224:43

it and then it'll continue producing the

224:45

code leading to the output. Same thing

224:48

right here for adding the authenticate

224:50

user to the auth service. In this case, it

224:53

looks like it's actually applying a fix

224:55

to the code that I wrote that I didn't

224:58

necessarily anticipate, and that is that

225:00

I forgot to put the await right here

225:02

before DB select. So, this was a crazy

225:04

catch by Warp. It didn't only do what I

225:07

asked it to do, which is to add

225:09

additional features to the O, but it

225:11

actually fixed a mistake that I made on

225:13

my own. So, I'll definitely apply those

225:15

changes as well. And only now it's

225:17

starting to extend it. I'll press enter

225:20

a couple more times. And let's see what

225:22

it comes up with. It's properly

225:24

implementing the schemas as well as the

225:27

two controller functions, sign in and

225:30

sign out. And finally, it'll update the

225:32

routes to use the new controller

225:34

functions. That one should really be

225:36

quick. There we go. It removed all of

225:38

these lines and simply imported sign up,

225:42

sign in, and sign out and trigger them.

225:44

When we reach those endpoints, it'll now

225:46

read through all of those files and

225:48

check the implementation. and it says

225:50

perfect, I've done it. Gives me an

225:52

overview of exactly what it

225:54

accomplished, which code it added,

225:57

and how it updated the routes, and

226:00

finally, which features were

226:01

implemented.
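The generated code isn't shown line by line, so here is a minimal, dependency-free sketch of what the sign-in service plausibly looks like — the user lookup and password comparison are injected, and all names are assumptions based on the transcript:

```javascript
// Hypothetical shape of the generated authenticateUser service: look the
// user up by email, compare the password against the stored hash, and
// return the safe fields. `findByEmail` stands in for the real
// `await db.select(...)` query and `comparePassword` for the bcrypt
// comparison (both assumptions).
async function authenticateUser(findByEmail, comparePassword, email, password) {
  const user = await findByEmail(email);
  if (!user) throw new Error('User not found');
  const ok = await comparePassword(password, user.password);
  if (!ok) throw new Error('Invalid credentials');
  return { id: user.id, email: user.email, role: user.role };
}
```

Sign-out is then simpler still: it only needs to clear the token cookie, which is why no request body is required.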

226:03

Finally, also gives me an example of how

226:05

I can test it. So, let's test out the

226:08

implementation. I've opened up HTTPie and

226:10

instead of sign up, I'll head over to

226:13

sign in to see whether that works. And

226:16

now I need to sign in with my email and

226:18

password. And before I send it, let's

226:20

make sure that our application is

226:22

running on localhost 3000. And then we

226:25

got user signed in successfully, which

226:27

means that the sign-in function has been

226:29

implemented. And now we can also try the

226:31

sign out. It actually told us what we

226:33

have to do. No body required. We just

226:36

cleared the cookies by making a post

226:38

request to sign out. So if I do that

226:40

without passing anything within the

226:42

body, it says user signed out

226:45

successfully. To be honest, I'm just

226:47

blown away by not even the simplicity of

226:50

using it, but just by developer

226:53

experience of having it all within a

226:56

single environment. And of course, since

226:58

it's connected to our repo, all the

227:00

changes that have been made are

227:02

immediately visible within our other

227:04

editor as well. That's amazing. So now,

227:07

if I switch over back to the terminal, I

227:10

can clear it. I can run git add dot git

227:14

commit -m and say implement auth

227:20

and then push it all from within a

227:22

single terminal like experience where 5

227:24

seconds ago I was speaking with agents.

227:27

Oh, and want to see something else? If I

227:29

press right here, I can open up the

227:31

project explorer and actually browse my

227:33

entire application and all of the code.

227:35

And I can then hover over that code to

227:37

add it as additional context. and I can

227:39

have the files on one side and the

227:41

terminal and the agent on the other. It

227:43

really feels like a full experience. So

227:45

with that in mind, authentication is now

227:48

done. And in the next lesson, let me

227:50

show you how we can secure it.

227:54

Now that we've implemented

227:55

authentication with proper logging and

227:58

monitoring, it's time to make it more

228:00

secure. And for that, we'll use Arcjet.

228:03

See, a backend without security, rate

228:06

limiting, or bot protection isn't just

228:08

unfinished, it's also unsafe. Because

228:11

these exposed APIs with no safeguards

228:14

are left to abuse, spam, and even DDoS

228:17

attacks. All it takes is one bad actor

228:20

to overwhelm your system and take it

228:22

down. Thankfully, Arcjet has built-in

228:25

features for Node.js applications that

228:27

protect us. Arcjet Shield automatically

228:30

protects your apps against most common

228:32

attacks, including the top 10 most

228:34

popular attacks. Then without rate

228:36

limiting, a single client can flood your

228:39

servers with requests and starve out

228:40

real users. But with Arcjet, you can

228:43

limit the amount of requests that each

228:45

user can make. There's also bot

228:47

protection so that you can stop

228:49

automated scripts that exploit your

228:51

endpoints or scrape data or brute force

228:53

credentials. And also, you can protect a

228:56

signup form by combining all of these

228:58

things together. If you think about it,

229:00

these attacks aren't edge cases. They're

229:02

everyday realities. And making your app

229:04

secure is not only smart, but it is

229:07

necessary. More than that, it wouldn't

229:09

make sense not to make it secure as it

229:12

is so simple to do using Arcjet. So,

229:15

click the link down in the description

229:17

and let's set it up together. Sign up

229:19

for free. You can go with GitHub. As you

229:21

can see, I already have a couple of

229:23

projects that I'm hosting over on Arcjet.

229:25

So, I'll create a new site right now.

229:27

You can name it something like JSM and

229:30

we're going to use the acquisitions or

229:32

you can call it DevOps as well and

229:34

create. Now, immediately you're given a

229:36

key that you can copy. So let's do that

229:39

first and add it over to your env right

229:43

below. You can call it arcjet

229:47

and add the ARCJET_KEY equal to this key

229:52

right here. And then we can follow the

229:53

setup for node and express. So just

229:57

click right here and it'll give you

229:59

instructions on how you can set it up

230:01

within 5 minutes. We have already

230:02

installed express. So what we have to do

230:04

is install @arcjet/node

230:07

and @arcjet/inspect.

230:09

Now it'll be up to you whether you want

230:11

to keep using your current IDE or editor

230:14

or you want to switch over to warp and

230:16

run everything there. Both ways are

230:18

totally fine. I'll proceed with warp

230:21

just to see how it all works. So I'll

230:23

clear everything and install the

230:25

necessary packages, @arcjet/node and

230:28

@arcjet/inspect. While that is installing, the

230:30

next step is to of course set our key

230:32

which we have already done and then add

230:35

some configuration rules. So go ahead

230:37

and copy this file from the

230:38

documentation and then open up your

230:41

project explorer. Within here you can

230:43

head over to config and create a new

230:46

file. Call it arcjet.js.

230:49

Within this file you can paste what you

230:51

just copied. But we don't have to set up

230:53

a new app right here. We just need to

230:54

set up an instance of arcjet with our own

230:57

rules. So that's going to be import

230:59

arcjet shield detect bot and sliding

231:02

window. This is one of the rate limiting

231:04

methods. And then we don't need this

231:06

spoofed bot or express. While setting up

231:09

our instance, we first need to pass over

231:11

our key coming from

231:14

process.env.ARCJET_KEY. And then we can start defining the

231:16

rules. First of all, we have this shield

231:19

rule which protects your app from the

231:21

most common attacks such as SQL

231:24

injections which is super useful in this

231:25

case and I will leave it live. Then we

231:28

have the bot detection rule where you

231:31

can also say live or you can also use

231:34

dry run to log only and then we're

231:36

specifying which bots should we allow

231:39

because not all bots are necessarily

231:41

malicious. Maybe you want some bots

231:43

within your application such as for

231:45

monitoring and so on. In this case,

231:47

we're allowing the search engine bots

231:50

such as Google or Bing to crawl our

231:51

application so we have good SEO. This is

231:54

incredibly important. And we can also

231:56

turn on link previews, for example for

231:58

Slack or Discord. So I'll say category

232:02

preview as well. Finally, instead of

232:04

using a token bucket rate limiting

232:06

algorithm, we're going to use a sliding

232:09

window. The way it works is you say

232:11

sliding window. And then you can define

232:14

the mode of that window. I'll set it to

232:17

live. And then we can set the interval

232:20

to 2 seconds, which will refill the

232:23

sliding window every 2 seconds. And

232:25

it'll allow for a max of five requests

232:28

per interval. For example, you can

232:31

increase this interval to maybe 1

232:33

minute. So you allow five requests per

232:36

minute. It's totally up to you. Now,

232:37

since this is just the configuration

232:39

file for ArcJet, in this case, we don't

232:42

necessarily need to have this request

232:44

right here. We're going to do that later

232:46

on from within our application or we'll

232:48

add it as part of the middleware. So, I

232:50

will just delete this part from here.

232:53

And instead, I will simply export AJ as

232:57

this new instance of the Arcjet

232:59

configuration that we've just created.
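Putting those pieces together, the finished config file looks roughly like this — a sketch assuming the @arcjet/node API as used in the transcript, so double-check the exact rule options against Arcjet's documentation:

```javascript
// #config/arcjet.js — sketch of the configuration described above.
import arcjet, { shield, detectBot, slidingWindow } from '@arcjet/node';

const aj = arcjet({
  key: process.env.ARCJET_KEY,
  rules: [
    // Protects against the most common attacks (e.g. SQL injection).
    shield({ mode: 'LIVE' }),
    // Block bots, but allow search-engine and link-preview crawlers.
    detectBot({ mode: 'LIVE', allow: ['CATEGORY:SEARCH_ENGINE', 'CATEGORY:PREVIEW'] }),
    // Sliding-window rate limit: at most 5 requests every 2 seconds.
    slidingWindow({ mode: 'LIVE', interval: '2s', max: 5 }),
  ],
});

export default aj;
```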

233:00

And then we can move over to creating

233:02

security middleware by heading over to

233:05

source middleware and creating our first

233:08

middleware which will be called security

233:12

middleware.js.

233:13

Within here we can now import this

233:15

instance of arcjet that we created

233:18

coming from #config/arcjet.js

233:22

and we can create this security

233:23

middleware which is going to be just an

233:25

asynchronous function that accepts the

233:27

request the response and then the next.

233:30

So we can forward it over to the next

233:31

function in the chain. That's what the

233:34

middleware functions are for. And then

233:36

I'll open up a new try and catch block.

233:39

In the catch, we can just log the error

233:42

by saying console.error,

233:45

Arcjet middleware error. And then we can

233:48

actually log it right here. And then

233:50

we'll also send a res.status of

233:54

500.json with the error of internal

233:58

server error if something went wrong

234:00

with a message saying something went

234:04

wrong with security middleware. But if

234:08

everything is going well here we can set

234:10

up all the different limits and all the

234:13

different security measures that we want

234:15

Arcjet to implement. So first things

234:17

first we got to figure out which user

234:19

are we working with. Is that user a

234:22

guest, an admin, or a regular user? So

234:26

we'll say const role is equal to

234:29

req.user?.role, and by default

234:32

we can set it to guest. Maybe they're

234:34

unauthorized.

234:36

Then I'll create two empty variables of

234:39

limit and the message that we want to

234:41

display. And then I'll open up a switch

234:43

statement that's going to change the

234:45

limit and the message depending on the

234:47

role. So if the role is set to admin in

234:53

that case we're going to set the limit

234:54

over to 20 requests

234:58

and we'll set the message to be equal to

235:01

admin request limit exceeded 20 per

235:06

minute slow down. Now we can also add a

235:09

break statement right here to end this

235:11

case and we can duplicate it two more

235:14

times below.

235:16

Okay, just like this. For the second

235:19

case, we're going to have the user,

235:21

we're going to limit them to 10 requests

235:24

and then we'll say user limit exceeded

235:26

that is about 10 per minute also slow

235:29

down. And finally, the guest will be

235:32

allowed five requests per minute. So

235:34

this automatically tells you how

235:36

extensible the security measures that

235:38

you implement within your applications

235:39

are. You can give specific types of

235:41

users more requests. If your API is

235:45

paid, maybe you can have different tiers

235:47

and then if somebody's paying more, you

235:49

can actually charge more money for more

235:51

requests per a specific window of time.

235:53

After that we can define a new arjet

235:56

client by saying aj.withRule and we'll

236:01

provide a rule of sliding window and

236:04

pass in a mode of live an interval of 1

236:09

minute a max which is the limit defined

236:12

above and also a name of that rule which

236:16

is going to be set to a template string

236:18

of roll rate limit. So this is the new

236:23

rule that we're applying. And we can

236:24

even take it a step further by adding

236:26

logging for when Arjet figures out

236:29

someone is a bot. So I'll say if

236:33

decision this is very important. So if

236:35

arjet decides that something is a bot

236:38

and this decision is coming from client

236:41

protect. So as soon as we try to protect

236:43

this request: const decision is equal to

236:46

await client.protect to protect this

236:50

request that we're trying to make. So if

236:52

this decision is denied to make that

236:55

request and if the decision reason is bot

237:01

obviously then they're a bot then we can

237:03

use the logger functionality by

237:06

importing the logger at the top. So we

237:08

can say something along the lines of

237:10

import logger from #config/logger.js.

237:17

And we also need to import this sliding

237:19

window algorithm by saying import

237:22

sliding window coming from @arcjet

237:27

slash node. So now if we are denying our

237:31

request because of a bot then we can use

237:34

the logger.warn

237:36

feature and we can say something like

237:38

bot request blocked and then provide

237:41

additional information such as the IP

237:44

address of req.ip, the

237:47

user agent of req.get('user-agent'). So we

237:52

know which user agent tried to make that

237:54

request and then the path that this

237:56

request was trying to be made too. So

237:58

that's going to be req.path. And then

238:01

of course since we have declined them we

238:02

can also return a res.status of 403

238:06

with the following JSON message error of

238:10

forbidden. We are blocking you. And we

238:12

can also add a message saying something

238:15

like automated requests are not allowed.

238:20

Perfect. It is that easy to implement

238:24

bot detection. Now we can duplicate this

238:27

request down below including this logger

238:30

warn and the return statement. And this

238:33

time instead of checking for bots, we

238:35

can check if it's denied but if the

238:37

reason is shield. So this has to do with

238:41

the 10 most common attacks. In this

238:43

case, I'll say shield blocked request

238:46

and we'll say the same thing. Give me

238:49

the IP address, give me the user agent,

238:52

give me the path. And in this case, we

238:54

can also try to log the method that the

238:57

user is trying to make. So that can

239:00

either be post, put, update, get and so

239:03

on. And finally, I will do it one more

239:06

time by once again copying this bot

239:08

detection. And instead of checking

239:10

whether it's a bot, I'll check whether

239:13

it's a rate limit. Remember the limits

239:15

above. So in this case, we'll say rate

239:17

limit exceeded. And we will log the same

239:20

things again. And instead of saying

239:22

automated requests are not allowed for

239:25

the shield blocks, we can say something

239:27

like request blocked by security policy.

239:31

And then for the third one, we can say

239:32

something like too many requests.

239:35

Finally, if we pass all of these if

239:37

statements, that means that we are

239:38

allowed to make a request. So we can

239:40

finally just say next. This is good.

239:43

This middleware did what it's supposed

239:45

to. It tried to secure the application.
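The branching logic of this middleware can be captured as two small pure helpers. The decision shape here (`isDenied()`, `reason.isBot()`, and so on) mirrors what the transcript describes of Arcjet's API and is an assumption — this is a sketch of the structure, not the library itself:

```javascript
// Role-based limits from the switch statement described above.
function limitForRole(role = 'guest') {
  switch (role) {
    case 'admin': return { limit: 20, message: 'Admin request limit exceeded (20 per minute). Slow down.' };
    case 'user':  return { limit: 10, message: 'User request limit exceeded (10 per minute). Slow down.' };
    default:      return { limit: 5,  message: 'Guest request limit exceeded (5 per minute). Slow down.' };
  }
}

// Map an Arcjet-style decision to an HTTP response, or null to call next().
function respondToDecision(decision) {
  if (!decision.isDenied()) return null; // allowed — forward to next()
  if (decision.reason.isBot())
    return { status: 403, error: 'Forbidden', message: 'Automated requests are not allowed.' };
  if (decision.reason.isShield())
    return { status: 403, error: 'Forbidden', message: 'Request blocked by security policy.' };
  if (decision.reason.isRateLimit())
    return { status: 403, error: 'Forbidden', message: 'Too many requests.' };
  return null;
}
```

Separating the decision handling from the Express plumbing keeps each denial branch easy to test on its own.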

239:47

There's nothing to secure it from. So go

239:49

ahead and make the request. So now back

239:52

within our app.js, we can add another

239:55

middleware. Right here above all of our

239:57

requests, I'll add the app.use of

240:00

security middleware and make sure to

240:03

import it right at the top by saying

240:06

import security middleware

240:09

coming from

240:14

#middleware/security.middleware.js.

240:16

And I think now you get a better idea of

240:17

what these middlewares are. So these are

240:20

all the different types of middlewares

240:21

that we're injecting in between our

240:23

requests that extend our app with some

240:25

additional functionality such as this

240:27

one that is applying security to our

240:29

application. Now to test it out, head

240:31

back over to HTTPie and let's try to make

240:34

a request to /health and we can make

240:37

it a get request. So if you send it out,

240:40

we get a connection refused. That's

240:42

fine. Let's not forget that we can

240:44

immediately open up a new tab, make sure

240:46

that we are in the right repo, and then

240:48

just run npm run dev to spin up our

240:51

application. Now, it looks like I have a

240:54

typo right here that is in security

240:56

middleware. So, if I quickly head over

240:57

to security middleware, we can fix it in

241:01

one go. It is on line 27. It looks like

241:05

I had one extra curly brace. And then we

241:07

also have to close this function right

241:09

here after we close the catch. Now if we

241:11

head back over to the config file of the

241:14

arcjet, same thing here. We have one

241:16

extra curly brace and back within our

241:19

app

241:20

we have to properly import security

241:22

middleware and we are properly importing

241:24

it but we forgot to export it from here.

241:27

So just do export default security

241:31

middleware and now we are running and

241:33

listening on localhost 3000 and as you

241:36

can see Arcjet is listening. So back

241:38

within our application, if you try to

241:40

make a request to /health, you'll

241:43

see that everything is good. And now if

241:45

you rapidly make requests to health,

241:48

you'll see forbidden too many requests.

241:51

This means that you've just successfully

241:52

added rate limiting to your application.
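The behaviour just demonstrated — a burst of requests allowed, then denials until old requests slide out of the window — can be simulated in plain JavaScript. This is only an illustration of the sliding-window idea, not Arcjet's implementation:

```javascript
// Simulate a sliding-window rate limit: at most `max` requests within any
// trailing interval of `intervalMs` milliseconds. `now` is passed in
// explicitly so the behaviour is easy to step through deterministically.
function makeSlidingWindow(max, intervalMs) {
  const hits = [];
  return function allow(now) {
    // Drop timestamps that have slid out of the window.
    while (hits.length && now - hits[0] >= intervalMs) hits.shift();
    if (hits.length >= max) return false; // would exceed the limit — deny
    hits.push(now);
    return true;
  };
}
```

With `makeSlidingWindow(5, 2000)`, five requests at the same instant are allowed, a sixth inside the window is denied, and once two seconds pass the earlier hits fall out and requests are allowed again — exactly the allow/deny pattern visible in the dashboard.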

241:55

And if you head back over within your

241:57

terminal, you'll be able to see

241:59

something like this. All the infos are

242:01

here, but we also have different

242:03

warnings from our logging system that

242:06

say rate limiting exceeded. We can see

242:08

the IP, the path, the service, and the

242:11

user agent, in this case, HTTPie. Oh, and

242:14

if you head over within your Arcjet

242:16

dashboard, you'll see that at first it

242:18

allowed a couple of requests over to

242:19

forward/halth. But then as soon as we

242:22

hit the rate limit, it starts denying

242:24

them, saying that the limit is a max of

242:26

five. At the same time, it also checked

242:28

out all the different rules that we set

242:30

for the rate limiting and bot protection

242:33

with specific allowances. But in this

242:35

case, we did not break any bot

242:36

protections. But just so you know that

242:38

you are protected. So if somebody tries

242:41

to hack your application, make some

242:43

malicious requests or just tries to use

242:46

bots to scrape your API, Arcjet will

242:49

protect you. That's it. It was super

242:51

seamless to add security to our

242:53

application. So let's go ahead and

242:54

commit it by saying git add dot, git

242:57

commit -m secure our API with arcjet and

243:03

push it. Perfect. Now you know how you

243:05

can secure any API that you create in

243:07

the future and you can do the same thing

243:08

for different frameworks such as Next.js.

243:13

In this lesson, let's dockerize our

243:16

application. You've learned a lot about

243:18

how Docker works within the crash course

243:20

part of this course, but now we'll

243:22

actually dockerize our application for

243:24

local and production environments so

243:26

that you can run it within any

243:28

environment and it works for everybody

243:30

who's running it. We've already talked

243:32

about what dockerization is. So now

243:34

let's leverage AI to help us implement

243:37

it within this project. In the video kit

243:39

down below, I'll provide you with a

243:41

special dockerization prompt. Simply copy

243:44

it and paste it within warp and then

243:47

let's go through it together. It says

243:50

you're a senior DevOps engineer. Your

243:52

task is to dockerize my application that

243:55

uses a neon database. The setup must

243:58

work differently for development and

243:59

production. Development environment is

244:02

local and we want to use Neon local via

244:05

Docker. Configure docker compose to run

244:08

neon local proxy alongside my

244:10

application. And again, you can learn

244:12

more about Neon Local right here. That's

244:15

the beauty of these agents. If something

244:17

is not working properly, you can just

244:19

provide it access to a link so that it

244:22

knows how to work with it. Again, your

244:24

prompts typically are going to be much

244:25

shorter and much less precise than this.

244:28

But just to ensure that we have more or

244:30

less the same output that we're getting,

244:31

I decided to take a bit more time

244:33

writing this prompt. So, let's go ahead

244:35

and run it. It says, "I'll help you

244:37

dockerize your acquisitions application

244:39

with neon database support. Let me first

244:41

analyze your current project structure

244:43

and then create the necessary Docker

244:45

configuration." Then it walks us through

244:48

the tasks that it created for itself.

244:50

The first step will be to create a

244:51

Docker file for NodeJS application.
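Your generated file will differ in details, but a multi-stage Node Dockerfile of the kind described here typically has this shape (base image, stage names, and paths below are assumptions):

```dockerfile
# Base stage: setup shared by both environments
FROM node:20-alpine AS base
WORKDIR /app
COPY package*.json ./

# Development stage: full deps, hot reload via npm run dev
FROM base AS development
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "run", "dev"]

# Production stage: production deps only, run as a non-root user
FROM base AS production
RUN npm ci --omit=dev
COPY . .
RUN addgroup -S nodejs && adduser -S nodeuser -G nodejs \
    && chown -R nodeuser:nodejs /app
USER nodeuser
EXPOSE 3000
CMD ["npm", "start"]
```

The point of the two stages is that compose can target `development` locally and `production` on a server from the same file.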

244:53

Immediately, we get back a Docker file

244:56

for Node.js acquisitions application

244:58

with a multi-stage build that works for

245:01

both production and development. What

245:04

it's doing here is creating a new

245:06

nonroot user for security. Then we're

245:08

changing its ownership and exposing the

245:11

port. There's even a health check right

245:13

here. And then we repeat the same thing

245:15

for development stage. Now you can see

245:17

for development right here the command

245:19

is npm run dev. But for production it is

245:22

manual. It's just pointing to a source

245:24

index.js. Instead of that we can head

245:27

back over to our code, head over to

245:30

package.json, and like we have added

245:33

a dev script right here for development.

245:35

We can also add a start script for

245:38

production that'll simply be node

245:41

slash source/index.js

245:44

or I think it's just source/index.js.
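The resulting scripts block looks roughly like this (a sketch; your dev script may use nodemon or node's --watch flag, and the entry-point path should match your project):

```json
{
  "scripts": {
    "dev": "node --watch src/index.js",
    "start": "node src/index.js"
  }
}
```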

245:47

This way we're watching for the changes

245:48

in development. But when you run it on a

245:51

server, you simply want to run it once

245:53

and let it run. Now we can edit this

245:55

file provided by warp and simply change

245:57

this over from node source index.js

246:00

to simply npm

246:04

start,

246:05

and apply changes. And now it'll

246:08

continue doing its thing. Now we've

246:09

gotten the docker compose file, and it

246:12

looks like it has the configuration for

246:14

neon local, the environment, volumes,

246:17

health check, network, and then the

246:18

node.js application right here. It's

246:21

using all of the best Docker practices

246:23

to create the files that will dockerize

246:25

the application for us. So for the time

246:27

being, let's go ahead and accept it. We

246:29

might need to make some small changes or

246:31

adjustments to it later on, but for now

246:33

it's perfect. It says that it is right

246:35

now on a step three out of eight. And

246:37

now it created another docker compose

246:39

for production. Note that in here it is

246:42

using the neon cloud database whereas

246:45

for development it used neon local. Neon

246:49

Local is a feature by Neon which allows

246:52

you to use docker environments to

246:54

connect to neon and manage branches

246:56

automatically. In simple words, it's a

246:58

proxy service that creates a local

247:00

interface to your Neon Cloud database.
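Service names and the exact image will vary, but a development compose file built around that proxy follows this general shape (the image name and env-file layout below are assumptions based on Neon Local's docs, not the exact file Warp generated):

```yaml
services:
  neon-local:
    image: neondatabase/neon_local:latest   # image name assumed; check Neon Local docs
    ports:
      - "5432:5432"
    env_file:
      - .env.development   # supplies the Neon API key, project ID, and branch settings

  app:
    build:
      context: .
      target: development   # the dev stage of the multi-stage Dockerfile
    ports:
      - "3000:3000"
    env_file:
      - .env.development
    depends_on:
      - neon-local
```

The app container then reaches the proxy at the service name (`neon-local:5432`) instead of the real Neon Cloud host.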

247:03

So, let's go ahead and accept this

247:04

production database, too, or at least

247:06

the configuration for it. And now, it'll

247:08

create the environment configuration

247:10

files. Here we got just some more ENVs.

247:12

It created them under env.development.

247:15

So, let's go ahead and accept them. For

247:17

sure, we'll have to add our own ENVs

247:19

here later on anyway. And it did the

247:21

same thing for env.production. Now it'll

247:24

automatically update the package.json

247:26

with docker scripts. Now this is a lot

247:28

of docker commands that we have to run

247:31

manually. So in this case I won't go

247:34

ahead and accept these changes since

247:36

this is a DevOps course. What we'll do

247:39

instead is together we'll develop a bash

247:42

script that will run all of these

247:45

different Docker commands in sequence so

247:47

that we can automate them which is the

247:49

whole goal of DevOps. So for the time

247:52

being just go ahead and cancel this file

247:55

right here. And now since we canceled it

247:57

we can just ask it to continue. So I'll

248:00

just type continue and we can go ahead.

248:02

Since I cancelled the step of updating

248:03

package.json, it's trying to do it again.

248:05

So this time instead of canceling I'll

248:07

go ahead and manually remove these

248:09

additional commands that it added and

248:11

just click done. Oh and make sure to

248:14

remove the comma after the last command

248:16

that you have right here. Oh and the

248:18

extra curly brace as well. Perfect. So

248:20

now the only thing that we changed is

248:22

adding the start script. And now we can

248:24

apply the changes. As I said, we'll be

248:27

implementing all of these different

248:28

Docker scripts within our custom bash

248:31

script. So more on that soon. Now it'll

248:34

create a simple Docker ignore file which

248:36

we can accept. And like any good

248:39

engineer, it'll give you a comprehensive

248:41

documentation for the entire Docker

248:43

setup that it just implemented. And

248:45

there we go. Here's a complete docker

248:47

setup.md. So we can see all the changes

248:50

in markdown. It talks a bit about

248:52

development and production environment

248:55

more about the prerequisites to set us

248:57

all up. It told us what it did, what we

249:00

still have to do by adding our env to

249:02

make it work, how to start up our

249:04

development environment, and exactly

249:06

what it will do. Same thing for

249:09

production and more on using Neon Local

249:12

to run it all together. At the end, it

249:14

even provided us with a quick start

249:15

checklist so we know exactly what we

249:18

have to do to make it work. Let's go

249:20

ahead and apply those changes and see

249:22

what Warp has to say next. Oh, it looks

249:24

like it even provided a quick setup

249:26

script that we can run to make our

249:28

process easier. I'll definitely go ahead

249:30

and accept that. It's also asking us to

249:32

run it, but for now, I will just go

249:33

ahead and cancel it so we can go through

249:36

the changes that it implemented on our

249:38

own. The next step is to get all the

249:40

necessary env. The first environment

249:42

variable we can get is going to come

249:44

from Neon. And you can get it by heading

249:46

over to the top right, pressing your

249:48

profile photo, heading over to account

249:50

settings, and then switching over to the

249:53

API key section where you can create

249:55

your own personal API key. You can give

249:59

it a name. In this case, I'll call it

250:01

JSM acquisitions.

250:04

And click create. Then copy it. Head

250:07

back over to your application and go to

250:09

env.development. You should

250:11

be able to see two different files, one

250:14

for production, one for development.

250:16

Thankfully, Warp left some nice comments

250:18

right here saying that this is the

250:20

development environment configuration

250:22

used when running application with

250:23

Docker Compose in development mode with

250:26

Neon local proxy. Okay, so we have some

250:28

things that we have specified before. We

250:31

also have our database configuration

250:32

right here. And what you'll have to do

250:34

here is put the Neon API key right here

250:37

that you just copied as well as a Neon

250:40

project ID and a branch. So we just got

250:43

the API key. Now to get the additional

250:45

things, you can head back over to your

250:47

project. Then head over to the settings.

250:49

And right here you'll see the project

250:51

ID. Simply copy it and paste it over

250:54

here under Neon project ID. And the last

250:57

thing is the branch name or the branch

251:00

ID. This one you can get if you head

251:01

over to branch overview and you can copy

251:04

this branch ID right here and simply

251:06

override the main branch. Oh, and let's

251:08

also not forget the arjet key. I believe

251:10

that one was previously in just the env.

251:13

So you can just copy the arjet key from

251:15

here and paste it over at the bottom.

251:19

I'll call it arjet and just specify the

251:21

arjet key right here. Now a cool trick

251:25

or a concept in docker is that instead

251:27

of specifying whole env line by line in

251:30

a docker compose file you can just point

251:32

it out to the respective file. So head

251:35

over to docker composedev.yaml

251:39

and modify it to point to

251:41

env.development. So here where we have

251:43

our environment variables instead of

251:45

that you can simply say env file and

251:50

point it to env.development,

251:54

and now it'll get access to all the

251:56

environment variables. This was done for

251:58

neon local which I will collapse right

252:00

now. And then below you have the NodeJS

252:02

application and you can repeat the same

252:04

thing. So here we also have the

252:05

environment and what you can do is say

252:09

env file,

252:11

then you can specify the path of the

252:14

file by saying

252:16

env.development. So now we have updated our

252:18

environment variables for both services

252:21

neon local as well as the app. And we

252:23

can repeat the same thing for

252:25

production. So if you head over to

252:28

docker-compose.prod.yaml

252:30

you can also remove the environment

252:32

variables and just say env file env.production

252:36

and in this case you don't have to do it

252:38

for neon local. Oh but make sure that

252:41

this says env file instead of

252:43

environment. Perfect. Now, do you

252:46

remember how Warp generated a set of

252:48

Docker bash script for us not that long

252:51

ago? We have it here, but it's pretty

252:54

detailed and maybe a bit too long. And

252:56

maybe this one differs from the one that

252:58

I generated for you. For that reason, I

253:00

want to make sure that we have the same

253:01

bash script that we can run. So, head

253:03

over to the video kit down below, copy

253:05

the bash script there, and paste it

253:07

right here. You'll notice that this one

253:09

is much shorter. And now we can go

253:11

through it together, and I can explain

253:13

how it works. This is a development

253:16

startup script for the acquisitions app

253:18

with neon local. This script will start

253:20

the application in development mode with

253:22

neon local. We already covered most of

253:25

these commands during the crash course

253:26

part such as echo which is basically you

253:29

can think of it like an alert or a

253:30

console log in a bash language. It just

253:33

says what is happening. Then we have a

253:35

check for the development environment

253:37

variables and whether they exist. If

253:39

they don't, we can just say, "Hey, this

253:41

is not found." Then please copy them and

253:44

then proceed. Once again, we're checking

253:45

if Docker is running. If everything is

253:48

good, we're creating a new neon local

253:50

directory if it doesn't already exist.

253:52

And then we're adding it to .gitignore

253:54

if it's not already present there.

253:56

Finally, we're building and starting

253:58

development containers so that the neon

254:00

local proxy will create an ephemeral

254:02

database branch and application will run

254:04

with hot reload enabled. Ephemeral means

254:06

that it is a temporary data store that

254:09

exists only for a short period of time

254:11

and then the data will be lost, shut

254:13

down or terminated. Keep in mind the

254:15

data here is not persistent. The goal

254:17

here is only to hold it for some time

254:19

and then application will run with hot

254:21

reload enabled. We then run migrations

254:24

with Drizzle to make sure to apply the

254:25

latest schemas. Then we wait for the

254:28

database to be ready. We use a docker

254:30

compose command and then we start a

254:32

development environment. Finally,

254:34

development environment, if everything

254:35

went well, should have started on

254:37

localhost 5173. So, let's head over to

254:41

the package.json here. Let's add a

254:43

command to execute our new bash script.

254:47

Remember when we had all of the other

254:49

commands right here? Now, it's going to

254:50

be just one. I'll call it dev docker.

254:54

And it'll simply run sh setup docker.sh.

254:58

Make sure to have sh at the start or

255:00

whatever the name of your sh script is.

255:03

Maybe it's going to be something

255:05

different. As a matter of fact, just so

255:06

we all have it the same, let's actually

255:09

create a new folder in the root of our

255:11

application and let's call it scripts.

255:15

Then move this script over to the

255:17

scripts

255:18

and simply rename it

255:22

to dev.sh.

255:25

This will be our docker dev command. So

255:27

change it here as well: sh scripts/dev.sh.
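Condensed, the flow of that dev script looks roughly like this (a dry-run sketch: the run helper only echoes each command so the sequence is visible, and the file names are assumptions; swap the echo for actually executing to run it for real):

```shell
#!/bin/sh
# Dry-run sketch of the dev startup flow — commands are echoed, not executed.
set -e

ENV_FILE=".env.development"
run() { echo "+ $*"; }   # swap the echo for "$@" to actually run the commands

# 1) Make sure the development env file exists before anything else.
[ -f "$ENV_FILE" ] && echo "using $ENV_FILE" || echo "warning: $ENV_FILE not found"

# 2) Confirm the Docker daemon is up (docker info fails fast when it is not).
run docker info

# 3) Build and start the dev containers: Neon Local proxy + the app with hot reload.
run docker compose -f docker-compose.dev.yaml up --build
```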

255:32

And now before we run this command, make

255:35

sure that docker desktop is running and

255:37

then run mpm rundev docker. You can see

255:41

that it'll start executing the commands

255:43

from the bash script such as pulling the

255:45

changes from neon local and then

255:47

building out our entire application. And

255:49

there we go. Just like that, it

255:51

successfully built the acquisitions app

255:53

and then created the network and two

255:56

different containers for our development

255:58

environment. No issues and immediately

256:01

the app is running within a Docker

256:03

containerized environment. Typically,

256:05

this would take hours to set up, but

256:07

with proper understanding from the crash

256:10

course part and a bit of help from AI,

256:13

we were able to get it up and running in

256:15

a couple of minutes. And immediately we

256:17

got a message that our application is

256:19

listening on localhost 3000. So let's

256:22

just open it up.

256:24

And you can indeed see that it is

256:26

running. You can visit the root route or

256:29

head over to API endpoint where you get

256:31

the acquisitions API is running. Even

256:34

the health check should work. Perfect.

256:37

Of course, testing it with HTTPie as well

256:40

will do the trick. So if you try to hit

256:43

a 3000, you'll get a successful

256:45

response. This is not running from our

256:48

terminal or from warp terminal. This is

256:50

running directly from a docker

256:52

container. But now let me show you

256:54

something else. Instead of just making a

256:56

get request to hello acquisitions, if we

256:59

head over to 3000 API sign up and try to

257:04

create a new account. So I'll head over

257:08

to a text and I'll pass over all the

257:10

important information such as the name.

257:13

I'll go with something like Adrian

257:16

and we are in JSON. So double quot

257:18

strings all the way. I'll enter my

257:20

email. I'll make it a bit different this

257:22

time: contact@jsmastery.com.

257:26

And let's not forget about a password as

257:28

well. I'll do one, two, three. One, two,

257:30

three. Oh, and I forgot to add o before

257:33

the sign up. Now the reason we got this

257:36

error right here, it's mentioning a

257:38

failed query. But basically what it's

257:40

saying is, hey, these users don't exist.

257:43

I cannot find this table or the schema

257:45

for the users. And the reason for that

257:47

is we have to reconfigure our database

257:49

connection so that it actually points to

257:52

neon local for our local development

257:55

purposes. You can do that by heading

257:57

over to source/config/database.js.

258:01

And then here we'll have to do setup for

258:03

neon local by checking if

258:07

process.env.NODE_ENV

258:10

is set to development. In that case,

258:14

we'll do some further neon config which

258:17

you can import from neon database

258:19

serverless by fetching the endpoint that

258:23

we're trying to reach. And in our case,

258:25

we're setting it to well, you can see

258:28

what docker config is saying. If you

258:30

head over to docker compose dev under

258:33

neon, you can see that its port will be

258:36

5432.

258:37

So we can set the endpoint to

258:40

http://neon-local

258:44

:5432/sql.

258:48

We can also set the neon config use

258:51

secure

258:53

web socket and set it to false. And we

258:56

can also set the neon config dot pool

259:01

query via fetch and I'll set it to true.
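Put together, the development branch of that config can be sketched like this. The neonConfig object here is a self-contained stand-in; in the real database.js you would import it from @neondatabase/serverless instead:

```javascript
// Stand-in for the neonConfig object exported by '@neondatabase/serverless';
// in the real database.js you import it rather than defining it here.
const neonConfig = {};

function configureNeonLocal(config, nodeEnv) {
  // Only development should be routed through the Neon Local proxy.
  if (nodeEnv !== 'development') return config;
  config.fetchEndpoint = 'http://neon-local:5432/sql'; // compose service name + port
  config.useSecureWebSocket = false; // the local proxy speaks plain HTTP
  config.poolQueryViaFetch = true;   // send pooled queries over fetch
  return config;
}

configureNeonLocal(neonConfig, process.env.NODE_ENV);
```

Because the check keys off NODE_ENV, the same file works unchanged in production, where the driver falls back to the real Neon Cloud connection string.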

259:05

I found these values in the Neon

259:07

documentation for setting up the Neon

259:09

local environment. And that's it. So now

259:12

if you head back and retry the request,

259:15

there we go. The user got registered,

259:17

which is amazing. ID number three,

259:21

Adrian, contact@jsmastery.com.

259:23

This is great because it seems like we

259:25

just created our third user and

259:27

everything is connected to our original

259:28

DB. But if you head back over to Neon

259:31

and reload your tables, you'll see there

259:34

are only two users that we created

259:36

before. So, it seems like our database

259:38

is not connected. Or is it? Actually,

259:40

we're connected to the local instance

259:43

using Neon local docker connection so

259:46

that we don't mess with production

259:47

databases. This means that all the

259:49

changes that you make right here for our

259:51

local environment are going to stay

259:54

local and it's impossible to break

259:56

anything in production exactly how it

259:58

should be in real applications. Now, how

260:01

would you go about actually dockerizing

260:02

this for production? Well, there's

260:04

another script that we can add. So if

260:07

you head over to root scripts and create

260:10

a new script which you can call prod.sh,

260:14

you can find this script within the

260:16

video kit down below. It's very similar

260:18

to the dev script almost exactly the

260:21

same, but instead of using neon local,

260:24

it's using a regular database

260:26

connection. We won't go ahead and run

260:27

this right now because it's going to

260:28

work in the same way that the dev one

260:30

did. But before you're running this

260:32

later on, just make sure that your

260:35

environment production variables are

260:37

also properly set. Here you'll put your

260:39

real database URL that before we kept

260:42

within our original env.

260:44

And you can also add the arjet keys,

260:47

jwts, and so on. And then you're almost

260:49

ready to run it. You just need to add it

260:51

to your package.json by adding prod docker.

260:56

And then you're going to run this

260:57

different command sh

261:01

scripts/prod.sh

261:04

and you're ready for production. I mean

261:06

the fact that we implemented

261:07

dockerization to our application so

261:09

quickly and in such a simple way is just

261:12

crazy. We just needed a good prompt and

261:15

we let warp handle the rest. So I'll

261:17

definitely go ahead and commit this over

261:19

to GitHub right now by running git add

261:22

. git commit -m

261:24

implement dockerization

261:28

and get push. Once you understand

261:30

theoretical concepts and the reasons why

261:33

we do things we do then it's very simple

261:36

to turn it into a clear prompt with

261:37

detailed steps and warp and its army of

261:40

agents will do the rest. It's that

261:42

simple. You focus on architecting and AI

261:46

will take care of the implementation.

261:50

Now that we've dockerized our

261:51

application, let's implement all the

261:54

user routes and all of the CRUD

261:56

functionalities regarding users. We can

261:59

start by creating a new file right

262:01

within source routes. And then within

262:05

routes, we can create a new file called

262:07

users.routes.js.

262:10

And then as with the o routes we can

262:12

create a new router by saying const

262:16

router is equal to express.Router

262:20

and then we can define a router.get to

262:24

get all the users. So once somebody

262:26

points to this route we'll get a request

262:28

and a response and we will just run the

262:30

res.end

262:32

get slash users. I will duplicate it

262:36

three times. And let's not forget to

262:39

export this router. Now for the second

262:42

one, I'll do a get request, but not to

262:44

forward slash users, rather to

262:46

forward slash users slash colon id,

262:50

which means that this will give us the

262:51

details of a specific user. Then we can

262:54

do a put request so that we can modify a

262:56

user profile. And we of course have to

262:59

know which user we're modifying. So that

263:01

again is going to be the ID property,

263:03

but this time a put request and finally

263:07

a delete request to a specific user ID.

263:11

So we want to delete a specific user.

263:15

Perfect. So now that we have those

263:16

routes, let's head over into our app.js

263:20

and alongside getting the o routes, we

263:22

can also get the forward slash API slash

263:26

users routes. And that's going to refer

263:29

to the user routes coming from the file

263:32

we just created. Perfect. Now, we'll

263:36

also need to create some additional

263:38

services to fetch and create all of

263:40

those users. So, head over into source

263:44

services and create a new service file

263:48

called users.services.js.

263:53

And we'll need to define the function

263:55

that'll give us back all the users.

263:57

That'll look something like this. Export

263:59

const get all users is equal to an

264:03

asynchronous function that will have a

264:05

try and catch block. In the catch, we

264:08

will of course just use the logger

264:09

functionality to log the error something

264:12

like error fetching all users or error

264:15

getting users. And then we can also

264:17

throw this error. And then in the try

264:20

we'll actually fetch all users by saying

264:23

all users is equal to await db coming

264:28

from database.js file dot select. So we

264:32

want to select specific fields from the

264:34

database. Specifically we want to get it

264:37

dot from the users table or the users

264:41

model. So simply import that users model

264:43

and then within the curly braces you can

264:46

pass which field you want to get back

264:47

from the users such as id is going to be

264:50

users do ID then we can do the name and

264:53

the email we can also get the role of

264:56

users, role, and created at, which

265:00

is users.createdAt, and updated at. These

265:03

are all the fields that we need and once

265:05

we get those users we can simply return

265:08

them so I'll say return all users

265:12

Perfect. In this case, it's saying that

265:15

this local variable is redundant. So,

265:17

you don't even have to put it in a

265:18

variable. What you can just do is add a

265:21

return statement right here because it's

265:24

the same thing. We're just returning the

265:26

output of this DB select, which are the

265:29

users. Perfect. And now we need to

265:31

create a new controller for the users.

265:34

So, head over to controller and create a

265:36

new file called

265:39

users.controller.js

265:42

and let's create a controller that will

265:44

fetch all the users using the service we

265:46

just created. Once again, I'll create a

265:48

new function export const get all users

265:52

is equal to an asynchronous function

265:54

that has access to a request a response

265:57

and the next function I'll open up a try

266:00

and catch block.

266:02

In the catch, we will use the logger to

266:05

just log the error as before. And then

266:08

if there's an error, we can just pass it

266:10

over to the next function. But in the

266:12

try, we will use the logger to console

266:15

log the info that we're just trying to

266:17

fetch all users. So getting users dot

266:20

dot dot and then we can get all users by

266:22

calling and using a weight of this users

266:26

service that we created. Or you can just

266:29

say get all users and make sure to

266:32

import it at the top by importing get

266:35

all users from services users. And maybe

266:38

we can rename this controller to fetch

266:41

all users not to confuse with this

266:43

service that we have right here. Once we

266:45

fetch them, you can just return a JSON

266:48

object with a message of successfully

266:52

retrieved users. And you can pass the

266:54

users object equal to all users. And you

266:57

can also pass the user count which is

267:00

going to be all users.length.

267:03

Now it might seem like a bit of an

267:04

overkill to create a service and then to

267:08

use that service within a controller

267:10

whereas the controller itself could just

267:12

use this logic that we had here. Like we

267:15

could have just done this and that would

267:18

be a bit easier to get the users. But

267:20

keep in mind that as our app scales so

267:23

will the controllers and the services

267:25

and the routes and everything else. So

267:27

allowing all of these to be separate

267:29

parts and to breathe independently is

267:31

very important for scalability.
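That layering can be sketched with plain functions and an in-memory array standing in for Drizzle and Express (all names illustrative):

```javascript
// In-memory stand-in for the database layer (Drizzle + Neon in the real app).
const usersTable = [
  { id: 1, name: 'Ada', email: 'ada@example.com', role: 'admin' },
  { id: 2, name: 'Linus', email: 'linus@example.com', role: 'user' },
];

// Service: talks to the data source only — no HTTP concerns here.
async function getAllUsers() {
  return usersTable.map(({ id, name, email, role }) => ({ id, name, email, role }));
}

// Controller: shapes the HTTP response (message, payload, count),
// delegating all data access to the service.
async function fetchAllUsers() {
  const allUsers = await getAllUsers();
  return {
    message: 'Successfully retrieved users',
    users: allUsers,
    count: allUsers.length,
  };
}
```

Swapping the array for a real Drizzle query changes only the service; the controller and routes stay untouched, which is the payoff of the separation.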

267:33

Controllers will handle logging,

267:35

validation, and more. And the service

267:37

part will only handle the database

267:39

parts. That's a clear separation of

267:41

concerns, which is a must for clean

267:44

code. So finally, let's head over into

267:46

the routes, which once again are serving

267:48

their own purpose. And the only thing

267:50

you have to do here now is just fetch

267:54

all users because we've already created

267:56

a service and the controller for

267:58

fetching them. So you're now just saying

267:59

whenever a user goes to that's going to

268:02

be the API users then simply give me all

268:06

the users. Now there's two ways of

268:08

running this app. You can either run npm

268:11

run dev to spin it up on localhost 3000

268:14

or you can use the other command we

268:17

pointed out right here and that is dev

268:21

docker. So let's go ahead and run npm

268:23

run dev:docker and make sure that you

268:26

have your docker app actually running on

268:28

your device. Once you have it, you can

268:30

run the command and it'll start the

268:33

acquisitions app in a development mode.

268:35

it'll build it out and do all the Docker

268:38

stuff that it does. There we go. Neon

268:40

local is running. Acquisitions is up and

268:43

it says that it's listening on localhost

268:45

3000. So if you head over there, you

268:47

should be able to see hello from

268:49

acquisitions. But if you head to

268:51

localhost 3000/api/

268:54

users, you'll get back a JSON output

268:57

with a message of successfully retrieved

268:59

users. And you can see a full users

269:02

array getting returned to us right here.

269:05

Perfect. Now, for all of the upcoming

269:08

services and controllers, you'll have to

269:10

do almost the same thing that we did

269:12

with this one endpoint right now. And to

269:14

be more productive, we'll use AI to

269:16

speed up our process and get the job

269:18

done. So, let's head back over to Warp.

269:22

Warp has a special feature called

269:24

project scoped rules that allow you to

269:26

create some specific rules just for this

269:29

project. Think of some custom guidelines

269:31

or contexts different for each project.

269:33

If you open up the sidebar, you'll see

269:35

rules right here and you can add global

269:37

rules or you can add projectbased rules.

269:41

So let's initialize a new project. Then

269:44

in your finder or file explorer, you can

269:46

find the repo of your application and

269:48

click open. Now it's asking us would you

269:51

like the agent to index this codebase

269:53

which will lead to more efficient and

269:54

tailored help. I'll say yeah definitely

269:57

go ahead and index it. So if you head

269:59

over to codebase indexing, you'll be

270:00

able to see that it has been

270:02

successfully synced. And if you open up

270:04

this file, this is the warp md file

270:07

which is guidance for this specific

270:08

project. Now if you open up this file,

270:10

you'll see that warp automatically

270:12

generated some custom rules for this

270:14

project specifically. It gave it all the

270:17

project information and the key

270:19

technologies that it needs to know when

270:21

running some additional code. So, if

270:23

there's some additional info that you'd

270:24

want Warp to know when going over your

270:26

project and when developing additional

270:28

code, you can just add it right here.

270:30

For now, we're good. And now, back

270:32

within Warp environment, we can pass

270:34

this new prompt that'll generate all of

270:37

the additional user CRUD services for

270:39

us. You can find it in the video kit

270:41

down below and just copy and paste it

270:42

here. In simple terms, we ask it to

270:45

implement all of the other CRUD

270:47

services. You don't necessarily have to

270:50

be this descriptive but in this case

270:52

just so we get the same output I

270:54

specifically pointed out that we need a

270:56

get user by ID update user and delete

270:59

user functions. Then we are also

271:01

implementing some validations and some

271:04

additional controllers. So let's press

271:06

enter and let's let warp do its thing.

271:09

It nicely examined the task and split it

271:12

into multiple smaller tasks and now

271:15

it'll ask us for input as it is

271:16

proceeding.

271:18

Now, there's this little thing right

271:20

here, the two arrows that say

271:22

autoimprove all agent actions for this

271:24

task. And I'll turn it on because I

271:26

believe that it should be able to nicely

271:28

generate all the user validations,

271:30

controllers, and services. So, let's let

271:33

it work and I'll be back in a minute.

271:35

And there we go. In about a minute, Warp

271:37

has successfully implemented the

271:39

complete user CRUD functionality for

271:42

our Express application. And like a good

271:44

guy, he even provided a comprehensive

271:46

summary of what has been implemented.

271:49

The user service is right here. These

271:51

are three separate functions that deal

271:54

directly with the database

271:56

to retrieve the user, update them, and

271:59

delete them. We also have the validation

272:01

to make sure what we pass into the API

272:03

is correct. It fixed the authentication

272:05

middleware and finally it created the

272:08

user controllers which actually use the

272:10

services created above and then pass

272:12

over to data as API responses.

272:16

Perfect. We can even see the API

272:18

endpoints right here so that we can test

272:20

them out back within our code. It

272:22

created a new users.services.js.

272:26

If we want to be very specific, we can

272:28

rename this file to user.service.js.

272:32

I think I misspelled it when I was

272:33

giving instructions. And make sure to

272:35

also fix the imports for this service

272:38

within the user controller. It was

272:40

supposed to be users.service.js.

272:44

So these services alongside get all

272:47

users which we implemented also include

272:49

get user by ID which gets a specific

272:52

user, update user and then delete user.
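Stripped of the database layer, those three services can be sketched roughly like this. This is a hypothetical reconstruction based only on the transcript: the real generated file talks to the database, while here a plain in-memory array stands in for it.

```javascript
// users.service.js (sketch) — an in-memory array stands in for the database.
const users = [
  { id: 1, name: 'Alice', role: 'admin' },
  { id: 2, name: 'Bob', role: 'user' },
];

// Fetch a single user, or undefined if the id does not exist.
function getUserById(id) {
  return users.find(u => u.id === id);
}

// Merge the allowed fields into the stored user and return the result.
function updateUser(id, updates) {
  const user = getUserById(id);
  if (!user) throw new Error('User not found');
  Object.assign(user, updates);
  return user;
}

// Remove the user and return the deleted record.
function deleteUser(id) {
  const index = users.findIndex(u => u.id === id);
  if (index === -1) throw new Error('User not found');
  return users.splice(index, 1)[0];
}
```

The controllers then wrap these functions, handle errors, and shape the HTTP responses.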

272:56

And finally, in the controller, we have

272:58

all the functions that actually use

273:01

those services and return the data. And

273:04

most importantly, within the routes,

273:07

we're now using those three new

273:09

controllers. And each corresponds to its

273:12

own endpoint. Oh, and look at that. It

273:14

even left some comments so we know

273:15

what's happening. For the time being,

273:17

I'll just test a simple get request to

273:20

see whether we can get the information

273:21

for a single user. That's the simplest

273:23

way to do right now. We can just copy

273:25

the ID, head over to localhost 3000

273:30

/api/users/ and then pass a

273:34

number of your user like two and in this

273:37

case we get back the authentication

273:40

required no access token provided

273:42

message. Now this response is exactly

273:44

what we wanted. It means that our

273:46

verification is working. You cannot get

273:49

user details of another user if you're

273:51

not logged in. So, let's head over into

273:53

our HTTP client and let's log in. So,

273:56

I'll just head over to sign in and I'll

273:59

sign in with my details. I think I just

274:02

need my email and my password.

274:05

If I do that, you'll see that I got

274:08

signed in successfully. And then if you

274:11

expand these headers, under the headers,

274:14

you will see a token right here. What

274:17

you need to do here is copy this entire

274:20

token and then pass it as a cookie

274:23

header in your upcoming responses. So

274:26

cookie is equal to this token you just

274:29

copied. If you do it that way and then

274:31

head over to API users and you try to

274:35

retrieve a specific user like the one

274:37

with an ID of two and click send. Oh,

274:40

let's make sure that it's a get request.

274:43

Now the user got successfully retrieved

274:46

and the data is back which means that

274:49

this is working perfectly. So now if I

274:51

wanted to delete this account I would

274:54

just have to make a delete request to

274:56

it. And if I click send this is perfect.

274:59

Check this out. We're getting a 403

275:02

forbidden because we're trying to delete

275:04

somebody else's account and we don't

275:06

have permissions to do that. This works

275:09

thanks to warp adding middleware to

275:11

every single one of our routes. Check

275:14

this out. Now middleware will make more

275:17

sense. We always have this controller

275:20

function that we're calling once we hit

275:22

this endpoint like the delete right

275:24

here, right? But before this function is

275:26

called, it's calling two pieces of

275:28

middleware. One is called authenticate

275:31

token which needs to make sure that we

275:34

are authenticated and only if we are it

275:36

pushes it to the next one in the line

275:39

which is require role. So in this case

275:41

we're saying that we need to be an admin

275:43

to delete something. If we are then we

275:46

can proceed and finally then we delete

275:49

it. Now I'm not sure whether I'm

275:50

currently logged in with an admin

275:52

account or not but let me try to delete

275:54

a different account like maybe a user

275:57

number one. Nope. For all three of

275:59

these, I don't have permissions to do

276:01

that. If I try to call all the users, so

276:05

I can see which one is the admin. But

276:07

even for getting all the users, we don't

276:10

have the permissions because to be able

276:12

to read the details of all the users,

276:13

you have to be an admin. That makes

276:15

sense. So for a second, I will just

276:18

remove this function from here and

276:20

remake the request just so I can see

276:21

which user is the admin. Okay, so we

276:24

have these three users right here. But

276:26

instead of logging into one of these

276:27

previous accounts, let's go ahead and

276:29

create a new account that also has admin

276:31

privileges. So make a new post request

276:34

to /api/auth/sign-up. We don't have to pass

276:40

any of the headers, but we do have to

276:42

pass a name, which I will call something

276:46

along the lines of I am the admin. And

276:50

then the email will be admin@admin.pro.

276:55

And we can pass some kind of a password.

276:57

And don't forget to add a role of admin.

277:01

If you do this and send over this

277:02

request, you'll be able to see that this

277:04

new admin user had been registered.

277:08

And in the request, we automatically got

277:11

this user's token. This one is more

277:13

powerful. So, let's replace the old one

277:16

as we're now logged in as the admin. And

277:18

now if you try to delete a user, you can

277:21

head over to API users and then maybe

277:25

like user number one, we want to delete

277:27

it. You would make a delete request to

277:30

it with the proper headers so that the

277:33

server knows who is authenticated, and we

277:35

don't have to pass anything within the

277:37

body. So now if you make a request,

277:40

you'll see user deleted successfully.

277:43

Wonderful. This means that warp

277:45

successfully implemented authentication

277:48

and rolebased access middleware which is

277:51

working perfectly once you understand

277:53

how things work and once you know what

277:56

you want to implement using AI to do it

277:58

for you just feels like a superpower

278:01

because it gets done right and it gets

278:03

done fast. You might need to tweak it

278:06

here and there, but more or less if you

278:08

start with a proper scalable nice

278:11

codebase like the one that we have right

278:13

here with controllers, routes, services,

278:17

utils, and more. AI agents will also be

278:19

able to do a better job of

278:21

implementing additional features. So

278:23

with that in mind, let's go ahead and

278:26

run git add .

278:29

git commit -m

278:32

"implement users CRUD" and git push.

278:36

Perfect. This wasn't necessarily related

278:38

with DevOps, but I still wanted to

278:41

include a bit of functionalities within

278:42

our application so that in the next

278:44

lesson we can get back to DevOps this

278:47

time in form of testing.
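Before moving on, the middleware chain from this lesson is worth condensing into one sketch. The names come from the transcript, but the bodies are a hypothetical reconstruction: the real middleware verifies a JWT, while here req.user is assumed to be attached already so only the control flow is visible.

```javascript
// Express-style middleware: each piece either short-circuits with an
// error response or calls next() to hand off to the next one in line.
function authenticateToken(req, res, next) {
  if (!req.user) {
    // Not logged in: respond with 401 and never reach the controller.
    return res.status(401).json({ error: 'Authentication required. No access token provided.' });
  }
  next(); // authenticated — continue down the chain
}

// Factory that returns role-checking middleware, e.g. requireRole('admin').
function requireRole(role) {
  return (req, res, next) => {
    if (req.user.role !== role) {
      return res.status(403).json({ error: 'Forbidden' });
    }
    next(); // authorized — the controller finally runs
  };
}

// In the routes file the chain would then read something like:
// router.delete('/:id', authenticateToken, requireRole('admin'), deleteUser);
```

This is why an anonymous request gets a 401 while a logged-in non-admin gets a 403: the first gate rejects it before the second is ever reached.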

278:52

And finally, we are at a part where

278:54

we're going to dive into testing.

278:56

Testing is one of the crucial parts of

278:59

development in general, but specifically

279:02

DevOps because you want to ensure that

279:04

the entire development process is well

279:06

tested and predictable. So, I'll show

279:09

you how to use one of the most popular

279:11

testing libraries out there called Jest.

279:14

Believe it or not, it has almost 30

279:17

million weekly downloads and installing

279:20

it couldn't be any simpler. You just run

279:23

npm install --save-dev jest.

279:28

So, we're adding it as a dev dependency.

279:30

Oh, and we'll also add supertest, which

279:33

provides high-level abstractions for testing

279:36

HTTP while still allowing you to drop

279:38

down to the lower level API provided by

279:40

superagent. 7 million weekly

279:42

downloads. So, let's install that as a

279:45

dev dependency as well by running npm

279:48

install supertest --save-dev.

279:51

Once that is done, you can head over to

279:53

jestjs.io

279:55

and head over to getting started. I'll

279:57

turn on the dark mode and follow the

279:59

first steps. After installing Jest, you

280:02

can head down to additional

280:03

configuration. And the first step right

280:06

here is to generate a basic

280:08

configuration file. And we can do that

280:10

by running npm init jest@latest. So

280:14

just run this command and we'll have to

280:16

answer a couple of questions. First, do

280:18

you want to install the create-jest CLI?

280:21

To which I'll say yes, please go ahead.

280:23

And then it'll ask you would you like to

280:25

use Jest when running the test script in

280:27

package json? I'll say yes to that.

280:30

Would you like to use typescript for the

280:32

configuration file? That'll be a no. In

280:34

this case, we're running a JavaScript

280:36

application. Then you can choose between

280:38

node and jsdom. In this case, we'll be

280:41

testing a node application. Do you want

280:43

just to add coverage reports? I'll say

280:45

yes to that. Which provider should be

280:48

used? In this case, we'll be using v8.

280:50

And do we want to automatically clear mock

280:53

calls, instances, contexts, and

280:55

results before every test? I'll say yes

280:57

to that. And that will have generated a

281:00

jest.config.mjs.

281:02

So you can just open it up by finding it

281:04

right here within your project explorer.

281:06

and head over into jest.config.mjs.

281:10

There's a lot of comments right here for

281:12

all the different things that you might

281:13

want to turn on. And there's some things

281:15

that are not commented out such as clear

281:17

mocks, collect coverage, coverage

281:19

directory, and so on. It's a very long

281:21

file, but as you can see, most of it is

281:23

just commented out. One thing I want to

281:25

point your attention to is this test

281:27

environment option, where it says the

281:29

test environment that will be used for

281:31

testing. In this case, we want to switch

281:33

it over from jest-environment-node to

281:36

just node. We'll be running our tests

281:39

there. Then you want to head over into

281:41

package.json.

281:42

And under imports, you can add an import

281:45

alias for the source folder as we'll

281:47

need that when writing tests. So you can

281:49

say #src/* and

281:52

we'll point it to ./src/*.

281:55

And another thing

281:58

we'll have to do is add the test script.
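Both changes — the #src import alias and the tweaked test script with the Node flag — end up looking roughly like this in package.json. This is a sketch of just the relevant fields; your existing scripts and other fields stay as they are.

```json
{
  "imports": {
    "#src/*": "./src/*"
  },
  "scripts": {
    "test": "NODE_OPTIONS=--experimental-vm-modules jest"
  }
}
```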

282:01

Right now it just says test: jest, but

282:04

instead we'll say test and provide some

282:06

additional options such as node options

282:10

is equal to --experimental-vm-

282:16

modules, and then we'll actually run the

282:18

command. The reason why we're doing that

282:20

is so that it works with the type of

282:22

node applications using ES modules. So,

282:25

new ES6 import statements. That's why we

282:28

have to provide this experimental VM

282:30

modules. Great. With that said, let's

282:33

navigate over to source app.js. And then

282:37

within here, we'll need to add some

282:39

default middleware logic that will catch

282:41

all endpoints that don't exist or are

282:44

not defined. We'll use this later on in

282:46

the test to check if a request has been

282:48

made. It should not throw an error, but

282:50

show a meaningful message. So right

282:53

below these two routers I'll say app.use

282:55

and immediately pass a (req, res) callback

282:59

because we don't need a path as it'll

283:01

act as a catch all route. So immediately

283:04

we can just return one thing from it.

283:07

That'll be a res status of 404 which

283:11

means it doesn't exist or could not be

283:14

found. And then we'll return an error

283:16

saying route not found. Perfect. We'll

283:19

use this later on within our tests. So

283:22

let's go ahead and create a new test

283:24

folder by heading over into our

283:26

acquisitions app. We can create it in

283:28

the root of our application and let's do

283:30

it via terminal. Make sure that you are

283:32

currently in the acquisitions folder and

283:35

then run mkder deer which is to make a

283:38

directory and call it tests. Within

283:40

tests you can then create a new file

283:43

which you can call app.test.js.

283:48

And within it we can write our first

283:50

sample test. So let me teach you how we

283:53

do tests in Jest. The way you

283:55

approach creating tests is always

283:57

describing what you're trying to do. So

284:00

you can say describe and then define

284:02

what you're describing. So in this case

284:04

API endpoints and then you can create a

284:07

callback function within it. Then you

284:09

can describe again what should happen.

284:11

In this case, we want to get a forward

284:14

slash health

284:16

route and then as the output of that you

284:19

say what should happen. It should return

284:24

health status. And then you define a new

284:27

asynchronous callback function after

284:28

that and make that actual request by

284:32

saying const response is equal to await

284:36

request to which you pass the app and

284:39

that app can be coming right at the top

284:41

by importing app from #src/app.js,

284:46

and then once you make that request

284:48

you can make a get request to

284:50

forward slash health and then you can say

284:52

expect a response of 200 then we know

284:56

that our app is working. Another thing

284:57

we can do is expect the response.body to

285:02

have property of status which is set to

285:07

okay. So I think you can already get how

285:09

intuitive writing just tests is. You're

285:12

basically saying describe this. It

285:15

should do this and we are expecting that

285:17

it will do this and I will repeat this

285:20

expect two more times. So let me just

285:22

paste it below and indent it properly.

285:27

For the second time we're expecting the

285:29

response body to have a time stamp

285:32

because that's also one of the things

285:33

that we're returning. Oh, in this case

285:35

we're missing the okay at the end. So

285:37

let's end it properly here. And finally

285:40

for the third time we're expecting it to

285:42

have the uptime because that's also

285:45

another property that we have there.

285:47

And now we can repeat this inner

285:49

describe in case you want to add another

285:51

test. This time we can say that you want

285:54

to make a get request to /api and it

285:58

should return an API message. So we're

286:01

making a request to forward slash API

286:04

expecting a 200 response and then we're

286:08

expecting a response body to have

286:10

property message

286:13

which is equal to a string of

286:15

acquisitions and instead of just typing

286:17

it out we can head over to the app.js to

286:20

see what we're actually responding.

286:22

We're responding acquisitions API is

286:24

running. So this is the exact thing we

286:26

want to see. Perfect. And we can do it

286:29

one more time by duplicating it and

286:32

saying that we are trying to get a

286:34

forward slash nonexistent

286:36

route and it should return

286:39

404 for nonexistent routes.

286:43

So if we make a request to forward

286:46

slash nonexistent,

286:48

it should return a 404 and we're

286:51

expecting the response body to have a

286:53

property of error say something like

286:57

route not found without an exclamation

287:00

mark. Perfect. And this request right

287:02

here has to be imported because that's

287:04

part of supertest. So import request

287:07

coming from supertest.
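For reference, the two app.js handlers these assertions exercise reduce to the following sketch. The response shapes are reconstructed from what the transcript describes; the Express wiring itself is omitted so the shapes are visible on their own.

```javascript
// GET /health — the three properties the tests expect: status, timestamp, uptime.
function health(req, res) {
  return res.status(200).json({
    status: 'ok',
    timestamp: new Date().toISOString(),
    uptime: process.uptime(),
  });
}

// Catch-all for unmatched routes — registered last with app.use() so it
// only fires when no router above it handled the request.
function notFound(req, res) {
  return res.status(404).json({ error: 'Route not found' });
}
```

Supertest simply boots the app, fires real HTTP requests at these handlers, and lets you assert on the status codes and bodies they produce.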

287:10

save it and open up your terminal or

287:13

conveniently enough we're immediately

287:14

within our terminal right here since

287:16

we're within a warp environment. So just

287:19

run npm run test, which will run our

287:23

suite of tests. There we go. There's

287:24

going to be a lot of stuff right here.

287:26

But the most important thing is that our

287:28

tests within the app.test.js file all

287:32

pass. We get health API and non-existent

287:35

all return exactly what we expected that

287:38

they will return. So, three out of three

287:41

passed. This will also generate the

287:43

entire coverage of the whole testing.

287:45

And to see it in action, you can head

287:47

over to coverage/lcov-report and then

287:50

open up its index.html

287:52

in the browser. Of course, not here.

287:54

Once you do that, you'll be able to see

287:56

something that looks like this, which

287:57

tells you exactly how well tested your

287:59

code is. The way in which we wrote tests

288:02

today, even though they're minimal, is

288:04

still the way it works in a real

288:05

production environment. Of course, this

288:07

is just the beginning. As you continue

288:09

building your application, you'll be

288:11

writing more advanced tests and making

288:13

sure that it's resilient against all the

288:15

errors. So, if you'd like me to create a

288:17

more detailed course on testing, let me

288:19

know in the comments down below. Very

288:21

soon, we'll implement this test as part

288:23

of a CI/CD pipeline so I can show you

288:26

how we can run it automatically as soon

288:28

as you push to your application. That's

288:31

the beauty of DevOps. So, let's do that

288:33

next.

288:36

Before we jump into building the

288:38

remaining features of the application,

288:40

let's take a step back and set up CI/CD

288:42

pipelines for linting, testing, and

288:45

building our Docker image. We're not

288:47

doing this just because it's a DevOps

288:48

focused video. It's because in a real

288:51

world workflow, you don't leave CI/CD

288:53

for the very end. The whole point of the

288:55

pipelines is to catch issues early and

288:58

ensure your code is reliable as you go.

289:01

I've been scrolling through the CI/CD

289:02

actions of JSM Pro. That's the repo

289:05

behind the jsmastery.com platform. But

289:08

it's not just us who are doing these

289:10

actions. If you take a look at Next.js's

289:12

official repo, you'll see that they have

289:14

run almost 300,000 workflows and some

289:18

are running right now like a minute ago.

289:20

This means that there's always something

289:22

happening within the repo. Setting up

289:23

logging, writing tests, dockerizing the

289:26

application, putting CI/CD pipelines in

289:28

place. That way, every new feature you

289:30

add automatically goes through all of

289:32

these checks to make sure your

289:34

application stays solid. So, let's get

289:37

that in place right now. Since in the

289:39

crash course part of this course, we

289:41

have gone through how CI/CD pipelines

289:43

work through a hands-on demo. This time,

289:46

we'll approach it a bit differently.

289:49

Instead of implementing it yourself,

289:51

we'll have an AI agent set up pipelines

289:53

by clearly specifying what needs to be

289:55

done. So in the video kit down below,

289:57

you can find this new prompt. You can

289:59

copy it and paste it right here. That's

290:02

the mindset you should start adopting

290:03

for application development. You focus

290:06

on the architecture and let AI handle

290:08

the implementation. In this case, we're

290:10

asking it to study the codebase and

290:12

create three GitHub action workflows.

290:14

One for linting and formatting, another

290:17

for testing, and a third one for Docker

290:20

build and push. So press enter, and

290:22

let's let Warp do its thing. It's asking us

290:25

for permission to create a new folder

290:27

called workflows. For sure, we can allow

290:29

it to do that. And then it'll start

290:31

creating these three workflows. I'll

290:33

actually turn on the auto approval

290:35

because I believe it should be able to

290:36

do it properly. And there we go. In less

290:39

than a minute, it created all these

290:41

three GitHub action workflows. One for

290:43

linting, which will trigger on pushes to

290:45

main and staging branches. Running

290:47

tests, which will also trigger on pushes

290:50

to main and staging. And finally, Docker

290:52

pushes. Now, it told us that we need

290:54

some secrets to make sure these actually

290:56

run. We need a Docker username, a Docker

290:59

password, and a test database URL. So,

291:02

let me show you how we can get those.

291:04

Head over to your browser and head over

291:06

to DockerHub.

291:08

Then, on the left side, you should be

291:10

able to see some settings. Head over to

291:13

personal access tokens and generate a

291:16

new token. You can give it a

291:19

description, something like JSM

291:20

acquisitions and generate it. Now, while

291:24

keeping this page open, head over to

291:26

your GitHub repo. And once you're there,

291:28

go ahead and open up the settings.

291:32

Then scroll down under security secrets

291:35

and variables and search for actions.

291:38

Here we'll need to add repository

291:40

secrets. So click new repository secret

291:44

and give it a name of docker

291:48

username.

291:49

Then back within our application, you

291:51

can find your username right here at the

291:53

top and simply paste it right here. Make

291:55

sure there's no extra spaces. Let's

291:57

repeat the same thing for the docker

292:00

password. So say docker

292:04

password. And for this one, I will copy

292:06

this password right here and paste it

292:09

there. Let's add two more. We also need

292:12

to add our node environment. In this

292:14

case, we'll set it to production

292:16

finally. Right. And finally, the last

292:19

one is our database URL. So, just create

292:22

it, call it database URL. And I believe

292:25

that within our originalv file, there

292:28

should be this entire database URL. So,

292:31

simply copy it and paste it right here.

292:34

Now, back within our editor, you can see

292:37

this new .github folder with a

292:39

workflows folder inside of it and three

292:41

new actions docker build and push tests

292:45

and lint and format. They're all

292:47

following a proper YAML configuration

292:50

and they would take us quite some time

292:52

to write on our own but thankfully it's

292:54

much easier using AI once you know what

292:57

you want it to do. So let's test it out.

293:00

The only thing we have to do is just

293:02

make a push over to the main branch.

293:04

I'll do that right here by running git

293:07

add ., git commit -m "implement CI/CD

293:10

pipelines and GitHub actions,"

293:14

and git push. As soon as you do this,

293:16

head over to your repo and back over to

293:19

the actions. And in a matter of seconds,

293:22

you'll see that a new workflow has been

293:24

cued. So one by one, they will now run.

293:27

Linting and formatting, testing, and

293:29

then finally docker build and push.
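For orientation, a lint workflow of this kind typically looks something like the sketch below. This is an assumed shape rather than the exact file Warp generated: the file name, action versions, Node version, and script names are guesses based on common GitHub Actions conventions and the npm scripts mentioned in this project.

```yaml
# .github/workflows/lint-and-format.yml — a sketch, not the generated file.
name: Lint and Format
on:
  push:
    branches: [main, staging]
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm run lint # the same ESLint command we run locally
```

The test and Docker workflows follow the same on-push structure, swapping the final steps for npm run test and a docker/build-push-action step respectively.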

293:31

linting failed, which is completely

293:33

expected as we haven't properly linted

293:36

all of our files. So, it's possible that

293:38

we're missing a couple of semicolons.

293:40

And the fact that we have just 10

293:41

different errors is super good across

293:44

all the files. So, you can easily go

293:46

ahead and make those fixes. And if you

293:48

click over on lint and format check, you

293:51

can see exactly what it did. So, it ran

293:54

the eslint command and then it figured

293:57

out exactly where those issues are. Now

293:59

you can totally autofix those as well

294:01

and then push again. Our second action

294:03

succeeded and this one was running our

294:05

tests. Remember that test YAML file

294:08

that we created not that long ago. Well,

294:11

thankfully all of those tests have

294:12

passed. And at the bottom you can even

294:15

see a new artifact generated through

294:17

this workflow run. It's a full coverage

294:19

report which you can click on and then

294:22

it downloads it and then you can pull

294:24

the index.html into the browser so you can

294:26

see the full coverage report. Pretty

294:28

cool stuff, right? And finally, there's

294:30

the Docker build and push. We could

294:32

inspect further why this Docker image

294:35

failed, but this is a perfect chance to

294:36

take a look at the file itself and maybe

294:38

do some debugging. That's the point of

294:40

working with AI. Sometimes, like your

294:42

co-workers, it'll ship broken code. But

294:44

if you're good enough, you can fix it.

294:47

Let's see.

294:49

Oh, take a look here. We have our

294:51

username, which is good, but the image

294:54

name is acquisitions. That's not the

294:56

image name we've been using so far. If

294:58

you head over to Docker, you'll know

294:59

that we used Kubernetes demo API as the

295:02

image name. So simply replace this one

295:05

with that one and save it. And while

295:08

we're here, let's also lint our repo by

295:11

running npm run lint. I believe that's

295:13

the command that we had right here. And

295:15

this will give you all the issues that

295:17

we have to fix. Then which other

295:20

commands do we have within package.json?

295:22

It was lint:fix. Yep, this is

295:26

exactly what I wanted. So now we can run

295:28

npm run lint:fix

295:32

and this should autofix most of these

295:33

issues. So if you once again run npm run

295:36

lint you'll see that now we have no

295:38

issues. And finally we can run npm run

295:41

format to do prettier formatting as

295:43

well. Beautiful.

295:46

And now we can do another push and see

295:48

what our GitHub actions say: git add .,

295:51

git commit -m

295:53

"fix docker GitHub action and linting,"

295:58

and git push. As soon as you do this and

296:01

come back to your actions, you'll see

296:03

that three new actions will instantly be

296:05

cued running one after another. So let's

296:09

wait a minute and let's see what they

296:10

have to say. And what do we have

296:13

regarding the linting? It failed again,

296:15

but this time eslint is good. It's just

296:18

saying that maybe there's a bit of a

296:20

formatting issue with prettier, but I'm

296:22

totally okay with that for now. The

296:24

other thing I'm more concerned about is

296:26

Docker build and push failing. And if we

296:29

look into it, you can see that it's once

296:32

again complaining about some OAuth token

296:35

permissions. Unauthorized access token

296:38

has insufficient scopes. Okay, so this

296:40

makes me think that this token we

296:42

created doesn't have the necessary

296:44

permissions to do what it needs to do.

296:46

So let's go ahead and create a new one,

296:48

but this time we won't make it read

296:49

only. We'll give it write permissions.

296:52

So I'll call it JSM acquisitions

296:56

token. And this time I will make it

296:58

read, write, and delete.

297:02

And we'll have to copy this password.

297:04

Head over into the settings of this repo

297:07

under secrets and variables and actions.

297:11

And then modify the Docker password,

297:13

which is more or less the Docker token.

297:15

And then you can paste it right here.

297:18

It's asking me to verify and it got

297:20

updated. So, third time's the charm. Let's do

297:23

another push. This time we don't even

297:25

have to do it from the code because we

297:26

didn't change anything. Rather we can

297:29

change a readme which will trigger

297:30

another push. I'll say testing CI/CD

297:35

pipelines and commit and push. Now under

297:39

actions, you'll see that all three will

297:41

be re-triggered. And this time we're

297:43

hoping for two out of three. And there

297:46

we go. You can see that all the steps of

297:49

the build and push Docker image action

297:52

have been completed. And this time it is

297:55

green. Now, why did we even create this

297:58

action in the first place? Well, we did

298:00

it so that whenever you make any changes

298:02

to your codebase, we automatically

298:04

regenerate and repush a new Docker

298:07

image. So when you decide to add

298:08

Kubernetes to this project, your

298:10

Kubernetes clusters will always be

298:12

pointing to the right versions of the

298:14

code. So with that in mind, you've

298:16

successfully added CI/CD pipelines and

298:19

GitHub actions to this DevOps

298:21

acquisitions API. Great work. So what's

298:24

next? Well, so far we've implemented the

298:27

controllers, routes, and middlewares for

298:30

authentication and for the users. Not

298:34

yet an acquisitions application, right?

298:36

where people can buy some SAS businesses

298:39

and sell them and so on. But now it's

298:41

all about repeating the same process you

298:43

followed so far. Create the listings,

298:46

their model, end points, let Warp

298:49

generate the rest, test it, and then do

298:51

the same for the deals. Check if your

298:53

pipelines are running smoothly, make

298:55

adjustments, test it again, and keep

298:58

iterating. Somewhere along the way after

299:01

completing your first MVP, try deploying

299:04

with Kubernetes locally as I showed you

299:06

during the crash course part of this

299:07

course. Then once you're comfortable

299:09

doing that, pick a cloud provider and

299:12

replicate the same setup using their

299:14

clusters instead of minikube. If you

299:16

want a full final codebase with

299:18

deployments, click the link down in the

299:20

description. It'll give you more info,

299:22

most likely pointing to jsmastery.com

299:25

where I'll guide you through the rest of

299:26

this course in detail. And for an even

299:29

deeper dive, including self-hosting

299:31

Postgress, learning cloud providers like

299:33

AWS, deploying Dockerized applications,

299:36

building advanced pipelines, setting up

299:38

notifications, and more. The ultimate

299:40

backend course, which is not here yet,

299:42

but is coming very, very soon, is

299:44

exactly what you need. I'll link the

299:46

waitlist down in the description so

299:48

you can join and know as soon as it's

299:50

out. For now, I hope this video helped

299:52

you understand what DevOps is and gave

299:54

you the strategies you can apply to your

299:56

own projects to impress recruiters and

299:59

land that job. So, with that in mind,

300:02

thank you so much for watching and I'll

300:04

see you in the next one. Have a

300:06

wonderful
