
n8n Tutorial – Zero to Hero Course

3h 34m 53s · 41,170 words · 5,740 segments · English

FULL TRANSCRIPT

0:00

Master the future of process automation.

0:03

n8n is an incredibly powerful open-

0:05

source platform that enables you to

0:07

integrate APIs and orchestrate

0:10

intelligent workflows without the usual

0:12

coding headaches. This course from

0:14

Maronei will guide you from the

0:17

foundational concepts of nodes and

0:19

architecture to deploying advanced real

0:22

world systems. You'll master essential

0:24

skills like connecting various services,

0:27

configuring API keys, and handling

0:30

complex data flows. Also, the course

0:32

goes into cutting edge AI integration,

0:35

teaching you how to build advanced

0:37

retrieval-augmented generation, or RAG,

0:40

agents and coordinate multi-agent

0:42

systems. By the end, you'll be able to

0:44

automate sophisticated business

0:46

processes, giving you a competitive edge

0:48

in DevOps, AI, and data engineering.

0:51

Automation is no longer just a buzzword.

0:53

It's reshaping how teams work,

0:55

collaborate, and scale. At the heart of

0:58

this revolution is n8n, an open-source,

1:00

extensible automation platform that lets

1:03

you connect APIs, orchestrate workflows,

1:05

and even build intelligent AI agents

1:07

with ease. From sending simple automated

1:10

emails to creating advanced multi-agent

1:12

RAG systems, n8n empowers you to

1:15

streamline operations, boost

1:16

productivity, and unlock whole new

1:18

levels of innovation. Welcome to the n8n

1:21

Zero to Hero course by KodeKloud. I'm

1:24

Maronei and together we'll go from

1:26

beginner to advanced as we explore how to

1:29

build powerful automations with n8n.

1:31

Whether you're a DevOps engineer, an AI

1:33

enthusiast or someone from a

1:35

nontechnical background looking to

1:36

automate real world processes, this

1:38

course is designed for you. We begin

1:41

with the introduction section where you

1:43

get familiar with n8n itself, the

1:45

playground environment and the code

1:47

structure. You'll also understand the

1:49

objectives so you know exactly how each

1:52

step builds towards your automation

1:53

skill set. Next in the foundations of

1:56

n8n, we'll cover the essentials:

1:58

understanding nodes, inputs, and outputs.

2:01

We'll talk about data types in n8n,

2:03

and finally how workflows run under the hood

2:06

using n8n's default logic. You'll

2:08

also configure API keys for services

2:10

like OpenAI, Anthropic, and KodeKloud

2:13

Keyspaces so you're ready to integrate

2:15

AI into your workflows right from the

2:17

beginning. From there, we jump into

2:20

hands-on AI agent workflows. This is

2:22

where the fun begins. You'll build your

2:24

first email AI agent to draft responses

2:27

on autopilot. Create a multi-agent

2:29

research workflow that pulls knowledge

2:31

from tools like Perplexity and OpenAI,

2:33

turning hours of work into minutes.

2:35

Explore the HTTP Request node, your key

2:38

to working with APIs from web scraping,

2:40

data fetching or calling other external

2:43

agentic tools. Experiment with creative

2:45

workflows from text to image, text to

2:48

video, and even image to video

2:50

automations using bleeding-edge models

2:51

like Google's Veo 3 and Kling. And yes,

2:55

we'll even build a Slack workflow that

2:57

lets AI reply to your co-workers and

2:59

your managers on your behalf. So while

3:01

you're sipping coffee, n8n is

3:03

already answering questions for you, aka

3:06

doing your job. In the optional setup

3:08

section, we'll explore how we can

3:09

self-host n8n with Docker and use Ollama as

3:12

your local LLM, as well as hosting on

3:15

AWS EC2 in our playground environments.

3:18

This flexibility means you'll know how

3:20

to run n8n in a way that matches your

3:22

scale, budget, and preferences. Then

3:25

we'll dive into RAG agents: retrieval-

3:27

augmented generation. You'll learn how

3:29

vector databases like Pinecone give

3:31

workflows memory and context.

3:33

Together, we'll build a customer support

3:35

RAG agent, the kind of workflow that

3:37

real businesses use to provide

3:38

intelligent, context-aware support. We'll

3:41

also explore MCPs and see how they

3:43

compare to traditional workflows,

3:45

unlocking new possibilities for

3:46

reusability and scalability. In the

3:49

multi-workflow advanced build, we'll

3:51

combine multiple agents into enterprise

3:53

style systems, showcasing how subflows

3:56

let you manage complexity without chaos.

3:59

Finally, we'll cover retries, error

4:01

handling, best practices, and how to

4:03

leverage the n8n workflow template

4:05

marketplace to accelerate your builds.

4:07

By the end of this course, you'll have

4:09

learned how to build practical real-

4:10

world automations like Slack AI agents,

4:13

multi-agent research workflows,

4:15

multimodal automations, customer support

4:17

RAG agents, and advanced multi-workflow

4:19

orchestration systems. And along the

4:22

way, you'll engage with hands-on labs

4:24

designed to reinforce learning by doing

4:26

so that every concept becomes second

4:27

nature.

4:29

At KodeKloud, we believe in learning by

4:31

building. And you'll be part of a

4:33

vibrant community where you can share

4:35

insights, ask questions, and grow

4:37

together.

4:38

So, ready to move from zero to hero with

4:40

n8n? Let's dive in and unlock the full

4:42

potential of workflow automation.

4:50

Hey there and welcome. In this section,

4:52

we're going to walk through the

4:53

foundations of n8n, an open-source

4:56

workflow automation tool that makes

4:57

connecting apps and services simple.

5:00

Here's what we'll cover. We're going to

5:01

do a quick introduction of n8n and its

5:04

building blocks: nodes. And then we're

5:07

going to explore AI agent architecture

5:09

and API flow. And later on, we'll

5:11

showcase how it works in an email-to-

5:13

Slack use case. I'll also show you how

5:16

input formats and expressions work in

5:18

n8n. And finally, we're going to talk

5:20

about how n8n handles data types. Let's

5:22

dive in. All right. So, what is n8n?

5:26

Well, n8n is short for nodemation, and it's

5:29

a free open-source workflow automation

5:31

tool. Think of it like a universal

5:33

connector. You can link different apps,

5:36

services, and even AI agents together

5:39

without writing tons of custom code. The

5:41

magic happens through something called

5:43

nodes. You set up workflows visually

5:46

step by step, and n8n handles all

5:48

the data passing in between. In n8n, every

5:55

workflow starts with a trigger event that you

5:58

define like a new email arriving or a

6:01

webhook being called, and then there are the

6:03

action nodes that do the work like send

6:06

an email, interact with an LLM or call

6:10

an API. By chaining these together, you

6:13

can turn everyday manual tasks into

6:15

smooth automated flows. Now, let's look

6:17

at AI agents inside n8n. Just like any

6:20

other node, an AI agent within a

6:22

workflow will be triggered by an event

6:24

or a trigger node. And there are three

6:26

important elements that make this node

6:28

an AI agent. Namely, the LLM, which is

6:32

the brain that powers the agent. And

6:34

this could be OpenAI, Claude, or Llama, or

6:37

any LLMs that you hook up to it. And of

6:40

course, the way you connect them would

6:41

be through an API access token. And then

6:43

there's a context window memory which

6:45

gives the agent context of the workflow

6:48

interaction, and the tools that you can

6:50

use to complete tasks like Gmail, web

6:53

scrapers, or any tools that you connect

6:55

to it. And of course, you connect that

6:57

node to an output into either the next

7:00

node or just as a chat response. Here's

7:03

how it works. Your workflow grabs some

7:06

user input, say a simple chat trigger.

7:09

The agent thinks about that input and

7:11

based on what you've asked, whether it's

7:13

sending an email or just having a quick

7:15

chat, it decides if it needs to call

7:17

tools like Gmail or any other tools that

7:19

you hook into it through the APIs. Then

7:22

it passes the output back to your

7:23

workflow. So it's input going into the

7:26

agent, deciding whether to use the tools

7:28

or not, and then passing out the output.

7:30

And that's how you get intelligence

7:32

mixed into your automations. And here's

7:34

how that looks inside n8n itself. A chat

7:36

trigger kicks things off. An AI Agent

7:39

node hooked up to GPT does the

7:41

thinking. A Gmail node can send messages

7:44

and an output node fires the result

7:46

straight to Slack. It's the same logic

7:49

but now mapped visually into the

7:51

workflow editor. Now, by default, n8n

7:54

runs nodes sequentially.

7:56

So it's trigger to the next node to the

7:59

next node, and the next node. If the

8:01

nodes are connected in a line,

8:03

everything runs in order until the

8:05

workflow finishes without skipping any

8:07

nodes. Which means if along the line one

8:11

of the nodes fails, it will cause the

8:13

workflow to error out. In this case,

8:15

you'll see that there's a Tavily tool

8:17

attached to the AI agent. And because it

8:20

is only a tool hooked up to the agent,

8:22

it is optional for the workflow to run

8:24

this. And it depends on whether the AI

8:27

agent decides that it is appropriate to

8:30

call on this tool based on the

8:31

instructions received. And in the

8:33

middle, you'll see that there's an If

8:35

conditional. And this just ensures

8:37

that the workflow doesn't error out when

8:40

it doesn't get the result in time from

8:42

external APIs. But we'll get to the

8:44

details of that in the next few

8:46

sections. But workflows don't always go

8:48

in a straight line. You can branch them

8:50

out. So one input might trigger two

8:53

paths. Maybe one branch posts to Slack

8:56

and the other goes to Gmail. In this

8:58

case, each branch runs independently.

9:01

It'll run the top one first and then the

9:04

second one sequentially, giving you two

9:07

actions from the same trigger. AI agents

9:10

need memory. And there are two main

9:12

flavors. We have the context memory,

9:16

which is a short-term memory inside the

9:18

AI's prompt. It's great for chat history

9:21

and to get the AI to stay in context.

9:23

And it is the difference between a

9:25

stateless and a stateful response from

9:27

the AI agent. And we'll run some

9:29

examples to show you the difference in

9:31

practice later on. And then there's a

9:32

vector database or document RAG

9:35

memory, which is long-term searchable

9:37

memory. Here your documents get stored

9:40

as embeddings and the agent retrieves

9:42

only what's relevant when answering.

9:45

This is a knowledge source outside of

9:47

the LLM's training data. So simple memory is

9:50

leveraged for context and continuity to

9:53

make sure that the AI agent stays

9:55

in context, and RAG is used for scale and

9:58

accuracy.

10:00

Every node in n8n has an input and an

10:03

output. In the input panel, you see the

10:06

payload data that the node is receiving

10:08

from the previous node. So in this case,

10:11

it is a chat message received from the

10:13

user because the previous node was a

10:16

chat trigger node and the output is

10:18

basically the response of the LLMs or

10:20

the AI agent and the data structure

10:22

ranges from very simple with one

10:24

variable or multivariable with different

10:26

ranges of data types and additional

10:29

information. But it is worth noting

10:31

that every node has an input and an

10:34

output data and the output data of this

10:36

node will be the input payload of the

10:38

next node and that's how the logic

10:40

strings the entire workflow together.
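To picture what actually travels along those connections: n8n hands each node an array of items, and each item wraps its payload under a json key (plus an optional binary key for files). A minimal sketch of one node's output becoming the next node's input (field values are illustrative, not n8n's exact internals):

```js
// Sketch of the data n8n passes between nodes (illustrative, not exact internals).
// A node's output is an array of items; each item carries its payload under `json`.
const outputOfPreviousNode = [
  {
    json: {
      sessionId: "abc123",   // example fields from a chat trigger
      action: "sendMessage",
      chatInput: "hello",
    },
  },
];

// The next node receives exactly that array as its input payload.
const inputOfNextNode = outputOfPreviousNode;
```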

10:42

So this makes it super easy to see what's

10:44

flowing through your workflow and data

10:46

doesn't always look the same. So in

10:48

n8n, there are three main types of

10:50

data that you'll see in the input and

10:52

output panel. So the first one is schema

10:55

which is a defined data structure and

10:58

the second one is a table format with

11:01

rows and columns handy for spreadsheets

11:04

and then of course there's that

11:05

underlying JSON which is the most common

11:07

structured and flexible way to show the

11:09

data and sometimes you get other types

11:12

like binary for files and images. It's

11:15

worth noting that these are different

11:17

representations of the same data. When

11:20

you configure a node, you can type in a

11:22

fixed value like a static string or you

11:25

can use an expression, which pulls dynamic

11:27

data from earlier nodes. Expressions

11:30

allow your workflow to adapt to

11:32

real-time inputs instead of being

11:33

hardcoded in. And the cool thing about

11:35

n8n is the fact that you can drag and

11:38

drop the variables or parameters into

11:40

the fields within the low-code UI. Now

11:43

let's talk data types. The basic

11:45

building blocks inside n8n. So in n8n

11:48

there are mainly five different data types,

11:50

which are strings, numbers, booleans,

11:52

arrays and objects.

11:55

So strings are like text for example

11:57

hello world. Numbers as it suggests are

12:00

values like 42, 3.14 or -100. Booleans

12:05

are simply true or false statements and

12:07

arrays are essentially lists like a list

12:10

of numbers or a list of names. And

12:12

objects are structured data like what

12:14

you see in the example which contains

12:17

the user credentials including name and

12:19

email. Objects can even nest and you

12:22

often access them with dot paths and

12:24

expressions and knowing the differences

12:26

and mastering these makes everything

12:28

else click.
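Here's a quick sketch of those five types side by side, including the dot-path access into a nested object mentioned above (names and values are illustrative):

```js
// The five basic data types you'll see in n8n fields (illustrative values).
const greeting = "hello world";     // string
const answer = 42;                   // number
const isActive = true;               // boolean
const scores = [42, 3.14, -100];     // array (a list of values)
const user = {                       // object (structured data)
  name: "Mark",
  email: "mark@example.com",
  address: { city: "Berlin" },       // objects can nest
};

// Dot paths walk into nested objects, just like in n8n expressions:
console.log(user.address.city);      // "Berlin"
// e.g. in an expression field: {{ $json.user.address.city }}
```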

12:30

One of the coolest things about n8n is the community nodes. These

12:33

are extra connectors built by users

12:35

extending NN way beyond the official

12:37

nodes. So, if you don't see a built-in

12:39

integration for your favorite tool,

12:41

check the community first. Chances are

12:44

someone's already made it.

12:48

Once you're in the workflow, the very

12:50

first step that you want to do is to

12:52

introduce a trigger node into the

12:53

workflow. The trigger node, as its name

12:56

suggests, is a node that's going to

12:59

trigger your entire workflow when the

13:01

event that it depends on is

13:03

triggered. So let's go ahead and hit the

13:06

plus button on the top right hand side

13:08

here and it will generate a list of

13:11

options or possible trigger nodes that

13:13

you can start your workflow with. Now

13:15

the simplest one is the Trigger

13:17

Manually node, which triggers every time

13:19

you click on it. But in this case what

13:22

we're going to choose is an on chat

13:23

message trigger node.

13:26

And once you click on a node, there

13:28

usually are going to be some fields that

13:30

you need to configure. But in this case,

13:32

because it's a simple chat message

13:34

received node,

13:36

what happens is that a terminal will pop

13:39

up here where you can actually interact

13:41

with the chat node. In this case, I'm

13:43

just going to type in hello to populate

13:45

it with the relevant data.

13:48

Now, a quick way for you to

13:50

differentiate between a trigger node and

13:53

a regular node is by the lightning icon

13:55

here on the left that you see.

13:59

Now let's open up the chat message received

14:01

trigger node to see how the data is

14:03

populated within the node.

14:05

As this is a trigger node, you only have

14:08

an output data from the node and you do

14:10

not have the input data.

14:13

Now, as we covered in the previous

14:14

sections, there are three types of

14:16

representation of the same data that

14:18

you'll be able to see within each node.

14:21

In this case, the underlying data format

14:23

is JSON with three pieces of information

14:26

that's contained within the payload. Now

14:29

the first piece of information is the session ID,

14:31

which identifies a session of the chat

14:33

and then we have the action message

14:35

which in this case is send message

14:37

because we're sending message through

14:39

the chat input and there's a chat input

14:42

content which in this case was what I

14:44

typed in which is hello.
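To make that concrete, the payload of this trigger comes out looking roughly like this (the session ID below is a made-up example):

```js
// Approximate shape of the chat trigger's output payload (illustrative values).
const triggerOutput = {
  sessionId: "3f2b1c9d8e7a",   // identifies this chat session
  action: "sendMessage",        // the action performed via the chat input
  chatInput: "hello",           // the text the user typed
};
```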

14:47

Now when you toggle over to the table

14:49

section what you'll see is the same data

14:52

representation in a table format. As you

14:55

can see, there's the session ID, the send

14:57

message action, and the chat input, which

15:00

is hello.

15:03

Now if you toggle over to schema, you

15:05

can see the same representation of the

15:07

data in a schema format which makes it

15:09

easy for you to drag and drop to the

15:11

next node when you need to.

15:14

Now let's go back to the workflow. Once

15:17

that first node has been set, the second

15:19

node that you want to add is the AI

15:21

agent node. Now in this case I'm going

15:23

to click AI. I'm going to select AI

15:26

agent.

15:28

Now when you open up a node the data

15:31

from the output of the previous node

15:33

automatically populates the input of the

15:35

AI agent node. And this is true for any

15:38

types of nodes when you connect it to

15:40

the previous nodes. The output payload

15:42

of the previous node is going to be the

15:44

input of the current node.

15:46

Now in this case because the AI agent is

15:49

already connected to the chat trigger we

15:51

can leave this as a connected chat

15:52

trigger node

15:55

in terms of the prompt or the user

15:56

message it is correctly identifying as

16:00

JSON chat input which is the content

16:02

here which is hello and you can see that

16:05

hello is represented here as the actual

16:08

value of the variables.

16:11

Now with every AI agent node, what you

16:13

need to pay attention to are three key

16:15

elements that make this node an AI

16:17

agent. So the first is of course the

16:20

LLM. So LLMs are the large language

16:24

models, basically the brain behind the

16:26

AI Agent node. Now in this case there

16:29

is a long list of LLM options that you

16:31

can choose from and each of these are

16:34

good for specific use cases. In this

16:37

case, we're going to choose an OpenAI

16:39

chat model as the model that we want to

16:41

hit.

16:43

Now, I already have an OpenAI account

16:45

set up here, but I'll show you how to

16:47

connect your OpenAI account or your

16:49

OpenAI API keys pretty easily. In this

16:52

case, just hit the dropdown and click on

16:55

create new credential.

16:58

And in this field, what you want to do

17:00

is to populate it with the API key that

17:03

you can either obtain from KodeKloud

17:04

Keyspaces or the OpenAI developer

17:07

platform. We'll cover how you can get

17:09

the OpenAI API key from KodeKloud

17:11

Keyspaces or the OpenAI developer

17:14

platform in another section. In this

17:16

case, you can also fill in your

17:18

organization ID, though it's

17:20

optional.

17:22

Now, once you've keyed in your API key,

17:24

you can hit save.

17:27

And once you've connected your OpenAI

17:29

account, what you can do now is to pick

17:31

from a list of GPT models that OpenAI

17:35

has to offer. In this case, we're going

17:38

to pick GPT-4o mini because it's

17:40

intelligent enough for most use cases.

17:44

Now, moving on, the second element that

17:47

we want to attach to the AI Agent

17:49

node is a memory node.

17:53

And what does memory mean for the AI

17:54

agent? Well, this is basically the

17:57

persistent memory that tells the AI

17:59

agent or gives the AI agent the context

18:01

that it requires from previous

18:02

conversation.

18:04

And I'm going to show you very quickly

18:05

what happens when an AI agent has no

18:08

memory versus when it does. Now, in this

18:10

case, we're going to type in hello,

18:14

my name is Mark.

18:21

And when I type that in, the agent now

18:24

accesses this OpenAI chat model to

18:26

infer the information and respond to me

18:29

by saying, "Hello, Mark. How can I

18:31

assist you today?" Okay. And in the next

18:34

thread, what I'm going to ask is, "What

18:36

is my name?" So, since I've just told

18:38

him that my name is Mark, it should be

18:41

able to tell me what my name is.

18:45

But as you can see in the output, and

18:47

let me just move this up. It says, "I'm

18:50

sorry, but I don't have access to

18:51

personal information about users unless

18:53

you shared it with me in this

18:54

conversation. How can I assist you

18:57

today?" So, as you can see, it has no

19:00

memory context of what I just said,

19:02

which is my name is Mark, even though I

19:04

just typed it in a minute ago.

19:07

So, now let's attach a memory bank. In

19:10

this case, we're going to choose a

19:11

simple memory. And as you might have

19:13

noticed, there are various options of

19:16

different types of memories that you can

19:17

attach to the AI agent. But the simplest

19:19

way to get started is to click on

19:21

simple memory and choose that as the

19:23

option.

19:25

And there are a couple fields that come

19:26

with the simple memory as well. But in

19:29

this case, because our connected trigger

19:31

node is a chat trigger node, we can

19:32

leave this as that.

19:35

And the context window length basically

19:37

means how many past interactions the

19:38

memory will retain. So in

19:41

this case, we're going to keep it as

19:43

five and then it will remember five

19:45

different iterations or five previous

19:47

iterations of the conversation that you

19:50

have with the agent. So we're going to

19:52

leave it as that.
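Conceptually, that setting is just a sliding window over the conversation; something like this sketch (illustrative only, not n8n's actual implementation):

```js
// Illustrative sketch of a context window of length 5 (not n8n's real code).
const contextWindowLength = 5;
const history = [];   // [{ user, assistant }, ...]

function remember(userMessage, assistantReply) {
  history.push({ user: userMessage, assistant: assistantReply });
  // Keep only the last 5 exchanges; older ones fall out of the window.
  while (history.length > contextWindowLength) history.shift();
}

// Everything still in `history` gets prepended to the next LLM call,
// which is what lets the agent answer "What is my name?" correctly.
```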

19:55

Now you might notice that now the AI

19:58

agent node is showing a yellow highlight

20:01

around it with a triangular icon and

20:04

that means some changes have been made to

20:06

the AI agent and in this case the change

20:08

was the memory node being attached to

20:10

the AI agent, and it's showing that so that

20:12

you can run the workflow again so that

20:14

the data can be repopulated.

20:17

In this case I will do the same thing. I

20:19

will say hello my name is Mark.

20:25

As you can see it's doing the same

20:27

operation but this time it's actually

20:29

dipping into the simple memory and now

20:32

is able to store that information.

20:35

So I'm going to ask it again what is my

20:38

name?

20:42

And this time as you can see it says

20:44

your name is Mark. How can I help you

20:46

today Mark? So now it retains the memory

20:48

of the information that I've just given

20:50

it a minute ago.

20:53

And you might notice the terminal in the

20:54

middle here basically shows the logs of

20:57

what's going on within the node. So in

20:59

this case, what happens was a chat

21:01

message was received. The data was

21:03

passed on to the AI agent and the AI

21:05

agent is dipping into the simple memory

21:08

to check what my name is or the

21:10

information that was given earlier and

21:12

then using the open AI chat model to

21:14

return the information.

21:22

Now that the chat model and the simple

21:24

memory has been set, the third thing

21:26

that we want to attach the AI agent to

21:28

or provide the AI agent with is the

21:31

email tool. So now there are various

21:33

tools that you can attach. But in this

21:35

case, because we want the AI agent to be

21:37

an email agent, we want to attach the

21:40

Gmail node.

21:43

When you're attaching a Gmail node, you

21:45

need to set up your Gmail credentials to

21:47

be associated to the account. I already

21:49

have my Gmail account set up here.

21:52

But if you haven't set up your Gmail

21:54

account before, it is pretty easy. The

21:56

way you would do it is to create a new

21:58

credential. And if you're using n8n

22:00

cloud, you would sign in with Google via

22:03

OAuth2, the recommended way,

22:06

or if you're using a self-hosted

22:08

method, you might want to choose a

22:10

service account and fill in the

22:11

necessary information that's required to

22:14

set up the Gmail account properly.

22:23

Now, as you can see, there are a couple

22:24

fields to configure here.

22:27

The first one is tool description. We're

22:29

going to leave this as set automatically

22:30

because we're going to let the AI model

22:32

determine whether it wants to use Gmail

22:35

or not.

22:37

The second field is a resource. In this

22:40

case, we want to do message because we

22:43

want to be able to send message through

22:45

the Gmail tool. And the operation of

22:47

course is send. And so you can see there

22:48

are other options as well. So when we

22:50

attach a Gmail tool, oftentimes it

22:52

doesn't mean that we just want to send

22:54

an email. It can also do many other

22:56

operations, such as replying to a specific type

22:59

of email, marking an email as read or

23:01

unread, deleting an email, or many other

23:04

options. Now going into the information

23:07

of the email itself, there are a couple

23:09

of information that we need to pass to

23:11

the AI agent for it to determine what to

23:14

do with the email. So first of course

23:16

who is the recipient of the email. Now

23:18

because this is going to change

23:20

depending on your input. Now remember

23:22

because your input is going to be in

23:24

natural language whether that's English

23:25

or any other languages this is going to

23:29

be different every time. So what you

23:31

want to do in that case is to let the

23:32

model define what the email address is.

23:35

That way whatever your input is, the

23:38

model is going to infer the information

23:40

and try to extract the necessary

23:42

information that it thinks is correct

23:44

for the email address

23:48

and the subject itself is also going to

23:50

be determined by the model. We're going

23:51

to let it do that and we can do that by

23:53

clicking the same blue button here to

23:55

let it determine the subject.

23:58

When it comes to email type, in this

24:00

case, we want to choose a text email

24:02

just for simplicity. HTML is very good

24:05

when it comes to image-rich types of

24:07

email. So, if you're looking to attach

24:09

any image or any thumbnail and you want

24:11

it to be aesthetically formatted within

24:12

the email, then you would choose HTML.

24:15

But in this case, we're going to go

24:16

ahead with text.

24:18

And then on message, we want to choose

24:20

determine by model as well because that

24:23

way we're going to let the AI determine

24:25

what kind of message body it is

24:27

going to attach.
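Under the hood, those "defined by the model" fields are filled with a special expression; in recent n8n versions it looks roughly like this (the parameter names and descriptions here are illustrative):

```
{{ $fromAI('To', 'the recipient email address extracted from the user request', 'string') }}
{{ $fromAI('Subject', 'a short subject line for the email', 'string') }}
{{ $fromAI('Message', 'the plain-text body of the email', 'string') }}
```

Each $fromAI entry tells the agent's LLM to supply that value at run time instead of using a hardcoded one.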

24:30

Okay. So now the three elements have

24:32

been configured.

24:34

We want to go back to the AI agent node

24:37

to specify certain prompts. But before

24:40

we do that, there are a couple of things

24:41

that I think it's worth highlighting. As

24:44

you probably have noticed, with every

24:45

single field that comes within a

24:48

particular node, there is a fixed and

24:50

expression toggle here. Now, I just want

24:52

to highlight the difference between fixed

24:54

and expression. So, because the AI agent

24:56

node is connected to the chat trigger

24:58

node, we've left this as connected to

25:00

chat trigger node. But what you can do

25:02

is actually to define what the input is.

25:04

So, when we click on define below, we

25:06

specify exactly what kind of input that

25:09

goes into the prompt or user message

25:11

prompt.

25:14

So in this case I can drag and drop the

25:16

chat input and it will fill it up as an

25:18

expression.

25:20

Now, on the difference between fixed and

25:22

expression: a fixed format is basically a

25:24

natural language format with which you

25:27

can tell the AI agent a specific but

25:30

permanent definition of what the user

25:32

message is. So for example, if I were to

25:34

put the user message as please send an

25:36

email, this is going to be the input for

25:38

every single run that I execute of this

25:40

workflow. And that's not ideal because

25:43

we want the AI agent to be able to get

25:45

the information dynamically depending on

25:47

my input. So in the first run, I might

25:50

put what is my name instead of please

25:52

send an email. So we want the prompt or

25:54

the user message to change together with

25:57

my prompt. So how do we do that? That's

25:59

when we toggle into expression. And what

26:02

happens is we can now drag the chat

26:04

input variable in this case and simply

26:08

drop them into the prompt user message.

26:11

And what happens is as you can see it

26:13

creates an expression in this case a

26:15

JavaScript expression of the variables

26:17

of the chat input. Meanwhile the actual

26:19

value of the variable is dictated here

26:22

which is what is my name question mark.

26:24

So, as you can see

26:27

on each separate run of the workflow,

26:29

it's going to populate the information

26:31

based on my input instead of a prefixed

26:34

input if I were to choose a fixed

26:36

format. I hope that explains the

26:39

difference between fixed and expression.
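As a concrete sketch, the difference in the user-message field looks like this (the JSON path follows the chat trigger's output we saw earlier):

```
Fixed:      please send an email       (same literal text on every run)
Expression: {{ $json.chatInput }}      (resolves per run, e.g. "what is my name?")
```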

26:40

Well, and in this case, the next thing I

26:44

want to do is to go to options and under

26:48

add option,

26:50

I'll go to the dropdown and click on

26:52

system message.

26:54

Now for those of you who are not sure

26:56

the difference between a system message

26:58

and a user message, the user message is

27:02

a set of instructions or input that

27:04

comes directly from the user.

27:07

Meanwhile, the system message describes

27:09

the core function of this particular

27:11

model. So in this case, what I want to

27:13

tell it is that you are a helpful email

27:15

assistant which helps craft effective

27:17

and succinct email based on users

27:19

instruction. You also help with sending

27:21

the email by using the attached email tool

27:23

when asked.
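To make the system-versus-user distinction concrete, here's roughly how the two messages end up in the underlying chat request (a sketch in the OpenAI chat format; n8n assembles this for you):

```js
// Sketch of the request the AI Agent node effectively builds (illustrative).
const messages = [
  {
    role: "system",   // fixed instructions: the agent's core function
    content:
      "You are a helpful email assistant which helps craft effective and " +
      "succinct emails based on the user's instruction. You also help with " +
      "sending the email by using the attached email tool when asked.",
  },
  {
    role: "user",     // dynamic input: whatever came from the chat trigger
    content: "hello, my name is Mark",
  },
];
```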

27:26

Now, this is a very simple system prompt, and I would suggest that

27:29

for a more effective system prompt

27:31

you go to ChatGPT or the LLM of your choice

27:34

to prompt engineer an effective system

27:36

prompt for a particular AI agent. That's

27:39

what I would do in a production

27:40

scenario. But in this case, because

27:42

we're just running a quick demo, we're

27:44

going to stick with a simple one just so

27:46

that you understand the principle behind

27:48

it. Cool. Now that we have the system

27:50

prompt defined as well as the user prompt

27:53

variable in place, we can now take it

27:55

for a spin. Now in this case, what I

27:58

want to do is to say the same

28:00

thing, which is hello, my name is Mark.

28:06

And as you can see, it's accessing both

28:09

the open AI chat model and a simple

28:12

memory to store the information, but

28:14

it's not necessarily using the Gmail

28:16

tool. And the reason for that is because

28:17

I did not specifically ask the agent to

28:20

send any email just yet. So as you can

28:23

see from the response, hello again Mark,

28:25

what would you like to do today? Is the

28:27

response which is also the same response

28:29

here. Now I would say I would like to

28:32

send

28:35

an email

28:38

to my boss

28:41

at

28:43

maronei@kodekloud.com,

28:47

which is not my real boss's email

28:49

address; it's actually my email address,

28:51

because I don't want to necessarily send

28:53

my boss a test email about

28:57

the upcoming marketing meeting

29:02

on

29:04

26th of August

29:08

2025.

29:11

So, I'll hit that.

29:14

As you can see, it's not necessarily

29:16

using the Gmail tool straight away

29:18

because it is intelligent enough to know

29:21

that it needs a little bit more

29:23

information. So, it says, "Sure, what

29:26

would you like the subject and the

29:27

message of the email to be?" So in this

29:30

case, because I'm too lazy to come up

29:32

with the subject and email, please,

29:35

please come up with it yourself. Okay.

29:38

So I'm going to say that

29:43

and it gives you a draft of the email.

29:45

So in this case, it says subject

29:47

upcoming marketing meeting on August 26,

29:50

2025. Dear boss's name, I hope this

29:53

message finds you well. I wanted to

29:54

remind you about the upcoming marketing

29:56

meeting scheduled for the 26th of August

29:59

2025. Let me just blow this up. We will

30:02

be discussing our strategies and plans

30:03

for the upcoming quarter. Please let me

30:05

know if there are any specific topics

30:07

you would like to address during the

30:08

meeting. Looking forward to your input,

30:10

Mark. And I would like to make changes

30:13

because it says, as you've noticed, my

30:15

boss's name. So, let's say

30:19

my boss's name is Moonshot.

30:24

Okay, so now it registers that

30:27

and gives me a new draft. Dear Moonshot,

30:30

I hope this message finds you well about

30:33

the upcoming marketing meeting. Da da

30:35

da.

30:37

And then it says, shall I go ahead and

30:38

send this email to

30:39

maronei@kodekloud.com?

30:41

Again, this is not his email. This is my

30:43

email because I don't actually want this

30:45

to be the email that sent to him. So I

30:48

would say yes, please go ahead and send

30:50

it.

30:52

So once I've confirmed that, what

30:53

happens is, as you can see, the AI agent

30:55

is now accessing the Gmail tool to send

30:58

the email to actually in this case my

31:01

own email. So let's take a look what it

31:04

looks like in my inbox.

31:06

Now this is the email that's been sent

31:07

to me. As you can see, because the Gmail

31:10

account over here

31:12

is connected to my email or my Gmail,

31:15

which is maronei@kodekloud.com,

31:18

it is essentially sending from

31:19

maronei@kodekloud.com

31:21

to me. And this is the content, of

31:24

course. Dear Moonshot, I hope this

31:26

message finds you well. Wanted to remind

31:27

you of the upcoming marketing meeting

31:29

scheduled for 26th of August 2025.

31:32

So, everything looks nice. I can almost

31:35

send this directly to Moonshot. But what

31:38

I want to do is as you can see at the

31:40

end of it, it says this email was sent

31:42

automatically by n8n. So that's not

31:44

ideal because Moonshot will find out

31:46

that I'm actually automating this email.

31:48

So what I want to do is go back to the

31:50

n8n workflow and go to the Gmail tool

31:54

and scroll down to options. Under add

31:57

options, I'm going to click on the

31:59

dropdown

32:01

and click on 'Append n8n

32:02

attribution'.

32:04

And what I want to do now is to toggle

32:06

this off. In this case, it will actually

32:08

stop attaching the n8n attribution

32:10

message in the email.

32:13

So in this case, what you can do when

32:15

you want to execute a step is to hit the

32:17

play button on top of the node so that

32:19

you don't have to rerun the entire

32:20

workflow. So let's just click on that.

32:26

Okay. So now that that's done, I want to

32:28

take a minute to explain some of the

32:30

little functions here that you see and

32:31

what they do.

32:33

The first one is the play button which

32:35

is the execute step button. So this

32:37

button is super useful when you want to

32:39

run the node without essentially running

32:41

the entire workflow again. So if you

32:43

change anything within the AI agent node

32:46

and you only want to run the AI agent

32:48

node again, you would hit this button.

32:50

It will only run the node here.

32:53

The power button as you can see here is

32:55

to deactivate the node because this is

32:57

the only node basically that does any

32:59

processing in this workflow. But as your

33:01

workflow grows more complex, you might

33:03

have five or six branches of nodes. And

33:07

some of them, if they're not in use, you

33:08

might want to deactivate. So if you

33:10

click on deactivate, what happens is

33:12

it's going to say that the node has been

33:14

deactivated. And basically, the workflow

33:16

is not going to work for any processes

33:18

that go through this particular node.

33:22

And we're going to reactivate that

33:23

again. And of course, the bin icon is to

33:25

delete the entire node together.

33:29

Now another very useful function

33:32

within n8n is pinning. Now if you were to

33:35

open up the node and go to the top right

33:37

hand corner you can see that there's an

33:39

option to pin the data. Now this is

33:41

super useful because now you don't have

33:43

to key in or populate this particular

33:45

node with new data all the time.

33:48

And this is extremely valuable when

33:50

you're running a workflow that uses API

33:52

tokens or might cost you for every run

33:55

of the workflow.

33:56

So to avoid running the workflow again

33:58

and again and using or activating the

34:00

API token, what you can do instead is to

34:03

pin the data to the node so that every

34:05

single demo run or every single trial

34:07

run is using the data that's already

34:10

been populated.

34:12

So once I pin this data, every single

34:15

run of the chat input is only going to

34:16

say yes until I push it to production.

34:20

Now once you've done all this, this

34:23

doesn't mean that your workflow has been

34:25

pushed to production. So the way you

34:27

would push to production is to toggle

34:29

the activate workflow option right at

34:31

the top here.

34:34

And when you do that, it's going to give

34:36

you a warning message to say you can now

34:38

make calls to your production chat URL.

34:41

Okay. So click on got it. And there you

34:44

go. Your workflow is now in production.

34:47

You do not have to click on execute

34:49

workflow to run the workflow. And every

34:51

single message that it receives is going

34:52

to run the way you set up the workflow.

34:54

So just to try that out, I'm going to

34:56

use hello.

34:58

I'm going to have to unpin the data and

34:59

send.

35:02

And there you go.

35:04

It's a new response. Now, of course,

35:07

this workflow isn't very useful because

35:08

the only way you can interact with the

35:10

AI agent is through the terminal here.

35:13

Now, how can we provide a publicly

35:15

available terminal so that more than one

35:17

user, or people who don't have access to

35:19

your n8n workflow can also use

35:21

this AI agent? Well, if you open up your

35:24

chat node, what you can see is that

35:26

there is a make chat publicly available

35:29

option. And what you want to do is just

35:31

to toggle that. In this case, it's going

35:34

to give you a URL

35:36

that you can copy and open up.

35:40

And you might have to wait for a couple

35:42

seconds for it to stand up, but once you

35:44

do, this is a page that is publicly

35:46

available as long as you pick the option

35:49

to make the chat publicly available

35:53

and you can access the AI agent

35:57

as any user would. Cool. And as you can

35:59

see, there are a couple options here

36:01

that you can configure if you want to

36:03

include an authentication or if you want

36:05

to have an initial message that is

36:07

crafted or customized to your needs.
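Incidentally, that public chat page is just a front end over a webhook, so you could also call it programmatically; a sketch (the URL is a placeholder and the payload shape is an assumption mirroring the chat trigger's fields we saw earlier):

```js
// Hypothetical programmatic call to the public chat endpoint (sketch only;
// placeholder URL, payload mirrors the chat trigger's sessionId/action/chatInput).
const res = await fetch("https://<your-instance>/webhook/<chat-id>/chat", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    action: "sendMessage",
    sessionId: "demo-session-1",
    chatInput: "hello",
  }),
});
console.log(await res.json()); // the agent's reply
```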

36:11

But in this case, we're going to keep it

36:12

simple because in this section, we just

36:14

want to run through the principles of

36:16

how to create your first AI agent

36:17

workflow on n8n. And I hope that has been

36:20

helpful. And just before we go, you

36:22

might notice that there are three

36:23

options to choose from here, which

36:24

is the editor version where we have been

36:27

actually building the n8n workflow on,

36:30

which obviously as the name suggests

36:32

where you edit all the nodes and

36:34

configure all the nodes in the workflow,

36:36

but also the executions button

36:40

where you can check out the logs of the

36:41

executions.

36:44

As you can see, it gives you the date,

36:46

the time, and the type of execution. The

36:49

one with the beaker icons are test

36:51

executions or basically the workflow

36:53

runs that we did when it was in demo

36:56

mode. And once I push it to production,

36:58

it is an actual execution to the

37:00

workflow in production.

37:03

And one of the things you can do once

37:05

your workflow is in production is also

37:08

to copy the data that's been populated

37:10

during the execution for troubleshooting

37:12

purposes. So if you click on copy to

37:15

editor

37:16

what happens is your editor will now

37:18

show the information that is derived

37:21

during the execution of that production

37:23

workflow.

37:26

So remember in the chat we said hello

37:29

and here it is the chat input content is

37:33

hello

37:38

and similarly the AI agent response is

37:40

hello how can I assist you today which

37:42

is the response that we got from the

37:44

live chat.

37:46

Now, this is a very useful option

37:48

because it just makes your

37:50

troubleshooting as well as your

37:51

iteration a lot easier

37:54

as you build or modify workflows in

37:56

production.

37:59

So, I hope you were able to follow

38:00

through on the demo of how to build your

38:03

first AI agent workflow on n8n. In the

38:05

next section, we're going to give you a

38:07

lab where you build the exact same AI

38:09

agent within that particular lab and see

38:12

for yourself if you're able to create an

38:14

AI agent that crafts email and sends it

38:16

on your behalf. This course comes with

38:19

free labs that could be accessed right

38:20

in your browser. So, you don't need to

38:22

set anything up yourselves or use your

38:24

credit cards. The labs are challenge

38:26

based, meaning we give you a challenge

38:29

and ask you to solve it. Use the link in

38:31

the description below to sign up for

38:33

free labs on KodeKloud. I'll now walk

38:35

you through your first lab. In the

38:37

upcoming labs, you're going to

38:39

experience hands-on building AI agents

38:41

on NAD environments. And in doing so,

38:44

you're going to need an API access to

38:46

the LLMs of your choice. And in order to

38:49

do that, you need the API keys. And

38:51

there are a couple ways to get your API

38:52

keys. Naturally, you can go to OpenAI to

38:56

get your OpenAI API keys or platforms

38:58

like OpenRouter, on which you can

39:00

obtain API keys of several different

39:03

models. But those platforms require

39:05

payment information before you're able

39:07

to get the API keys even if you're just

39:10

trying out. So, one of the things that

39:12

we've done here at KodeKloud is we've

39:14

created the KodeKloud KodeKey. And what

39:16

it is, it is the all-in-one AI

39:18

playground where you can get the API

39:21

keys to the following LLMs of your

39:23

choice. And at the time of this

39:25

recording, models like GPT-4o, GPT-4.1,

39:29

Claude Sonnet 4, Gemini 2.5, and Grok 3 are

39:33

available through the KodeKey

39:35

playground. And the KodeKey playground

39:37

is available on various different

39:39

subscription plans, but you get the most

39:41

access with AI plans and the business

39:44

plan. Now, if you were to go to this

39:45

page, which is

39:46

kodekloud.com/ai-playground/kodekey

39:49

with the right access, you can hit

39:51

launch now and click on start playground

39:54

and you'll be led to a dashboard where

39:56

you can essentially create and grab your

39:58

API keys by selecting the models as well

40:02

as the different types of ways or

40:04

methods with which you can access the

40:07

LLM model. Now, in this case, I'm going

40:09

to do an example of how to call a GPT

40:11

4.1 model through KodeKey on your n8n

40:14

environment. So, there are two ways to

40:16

do that. The most straightforward way is

40:18

to go to your n8n workflow. I'm going to

40:21

start the trigger node with a chat

40:23

trigger because I want to be able to

40:25

communicate with the agent and I'm going

40:27

to just populate the node with something

40:30

like hello. And we're going to introduce

40:34

a second node which is the OpenAI node.

40:37

In this case, we're going to choose a

40:39

message and model node. And what we want

40:42

to do here is to create a new

40:44

credential. And we call this KodeKey

40:48

demo for example.

40:50

And what you want to do under the base

40:52

URL, replace this base URL with the base

40:55

URL that you can obtain from the code

40:58

key page. Copy that. Go back to the

41:00

workflow and paste the base URL in. And

41:04

of course for the API key, we're going

41:06

to take the API key here and we're just

41:09

going to paste it in and we're going to

41:10

hit save. And you'll see that sometimes

41:12

depending on the version of the n8n cloud

41:15

that you're in, this red bar might pop

41:17

up. Don't worry too much about it

41:19

because I'm going to show you how you

41:21

can keep using it even if the red bar

41:23

shows up. And this is just simply

41:26

because the UI itself is not designed

41:29

for users to patch in their base URL

41:31

themselves like that. And sometimes that

41:33

causes some form of incompatibility

41:35

which causes the red bar to show up. In

41:38

this case, under model, what we want to

41:39

do is we're just going to define it by

41:42

ID because we're going to type in the

41:43

name of the model ourselves because this

41:46

is not going to be the models that are

41:49

in the list from OpenAI. And if you were

41:52

to try to grab it from the list, it's

41:54

not going to load at all. So, we're

41:56

going to hit 'by ID' and we're going to

41:57

manually type in openai/gpt-4.1.
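What that credential change amounts to, conceptually, is pointing a standard OpenAI-style client at a different gateway; a sketch with placeholder values (the real base URL and key come from the Keyspaces page):

```js
// Sketch: calling GPT-4.1 through an OpenAI-compatible gateway (placeholders).
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://<gateway-host>/v1",  // the base URL copied from KodeKey
  apiKey: process.env.KODEKEY_API_KEY,    // the key copied from the dashboard
});

const completion = await client.chat.completions.create({
  model: "openai/gpt-4.1",                // model referenced by ID, not from a list
  messages: [{ role: "user", content: "hello" }],
});
console.log(completion.choices[0].message.content);
```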

42:04

And that's the model that we want to

42:06

call here. As you can see, it's GPT-4.1.

42:10

And with that model, we're going to

42:12

transfer or rather drop in the chat

42:15

input variable. So, that essentially

42:17

everything is connected. And we're not

42:19

going to add any tool at this point

42:20

because we're just going to hit execute

42:22

step. And as you can see it is coming up

42:24

with an output or response to the chat

42:28

input which is hello. It says hello how

42:30

can I help you today. So with that

42:32

simple setup you can now chat with the

42:34

model here. For example I can ask why is

42:37

the color of the sea blue and as you can

42:42

see under the chat the output is here in

42:45

the response the sea appears blue

42:47

primarily because of the way water

42:48

absorbs and scatters sunlight. Um, and

42:51

that's the response for that. Now,

42:53

that's one way to gain access to an LLM

42:57

model through KodeKey. The other way to

42:59

do that is to call an HTTP Request node. So, I'm

43:02

just going to add another trigger node

43:04

here. And this time, I'm going to select

43:05

trigger manually. I'm going to move this

43:08

down right here. I'm going to add the

43:11

HTTP request node. And if you were to go

43:14

back to the KodeKey documentation, as

43:18

you can see, we can actually toggle this

43:21

to the curl command and I'm going to

43:23

copy the entire chunk here of the curl

43:26

command and go back to my workflow. I'm

43:28

going to hit import curl. I'm going to

43:29

paste the entire thing and it's a post

43:33

API call to this particular endpoint

43:35

URL. And of course there's some

43:38

authentication here which is already

43:40

prefilled actually because I pasted the

43:42

entire curl command. Ideally we want to

43:45

set up the authentication on the headers

43:47

but we'll go over that part later on in

43:49

the next section. But in this case

43:51

without doing too much what I want to do

43:53

is I want to just hit execute step and

43:56

there you go. So under JSON body you can

43:58

see that there is a preconfigured

44:01

content which is in a single sentence.

44:03

Why should someone choose KodeKloud over

44:05

other platforms to learn DevOps? That is

44:08

the user prompt that has been

44:10

prepopulated there. So you can

44:12

essentially replace this with some other

44:15

variables that you receive from the

44:17

input. Like for example, if it's a chat

44:20

trigger that's connected, you can drag

44:21

those chat trigger as the user input and

44:24

drop it in here to replace this whole

44:26

line. But here I just want to show you

44:27

that it is responding to the user prompt:

44:31

someone should choose KodeKloud over

44:33

other platforms to learn DevOps because

44:35

it offers highly interactive hands-on

44:36

labs and real world scenario based

44:38

exercises that accelerate practical

44:40

skills development far beyond

44:42

traditional video-based courses.
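For reference, that imported curl command corresponds to a plain chat-completions POST; roughly this (endpoint and key are placeholders; the actual values are prefilled by the curl snippet from the docs):

```js
// Sketch of the request the HTTP Request node sends (placeholder values).
const res = await fetch("https://<gateway-host>/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.KODEKEY_API_KEY}`,
  },
  body: JSON.stringify({
    model: "openai/gpt-4.1",
    messages: [
      {
        role: "user",
        content:
          "In a single sentence: why should someone choose KodeKloud over " +
          "other platforms to learn DevOps?",
      },
    ],
  }),
});
const data = await res.json();
console.log(data.choices[0].message.content);
```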

44:44

So cool. These are the two ways that you

44:46

can gain access to the LLM API keys via

44:50

KodeKey. And as I've shown earlier, you can

44:53

choose from these various models right

44:55

now. For the next few sections, this

44:57

might come in handy for you. Otherwise,

44:59

you can always go to the actual provider

45:01

like OpenAI or Anthropic to get those

45:04

API keys. I'll see you in the next one.

45:06

So I want to run through the differences

45:08

between running your instance on n8n

45:10

cloud versus the lab playground that we

45:12

have on the KodeKloud course. Now the

45:14

first thing you'll notice as you go into

45:15

the lab is that you still have to

45:17

provide your email and your first name

45:19

last name credentials as well as a

45:20

password in the instance. And don't

45:22

worry this is not saved anywhere. So you

45:24

don't actually have to save the

45:25

credential and the password somewhere.

45:27

You can use different emails and

45:28

password for each of these instances. So

45:30

once that's filled in you can hit next.

45:32

And there's going to be a series of

45:33

onboarding questions here which you

45:35

don't need to fill. So you can hit get

45:36

started and in the same way for the paid

45:38

features information you can just skip

45:40

and there you go. Now you're in the

45:42

admin dashboard that is very similar to

45:44

the n8n cloud environment. However,

45:46

there's still some nuances and some

45:47

differences as we go into the workflow.

45:50

So here what we're going to do is we're

45:51

going to click start from scratch. And

45:53

the very first workflow that you're

45:55

going to build is the email AI agent.

45:58

And the first trigger to that is a chat

46:00

trigger. And I'm going to speed through

46:01

this cuz this is very similar to the

46:03

workflow that we've done. But I want to

46:05

just show you quickly the differences of

46:07

using the KodeKloud Keyspaces as well as the

46:11

OpenAI API key. So if you're using the

46:13

KodeKloud Keyspaces, what you want to do

46:15

when you select your model, for example,

46:17

in this case, I'm just going to select

46:18

the OpenAI chat model. And what I want

46:21

to do is to follow the instruction in

46:23

the left hand bar right here. And you

46:25

see that there's a link to KodeKey. I'm

46:28

going to just hit that URL. I'm going to

46:30

hit launch now. and it's going to lead

46:33

me to this dashboard right here. I'm

46:35

going to click start playground and

46:37

there we go. So, this is the dashboard

46:38

on KodeKloud Keyspaces. And what I want

46:40

to pick is the OpenAI GPT-4.1. And in

46:44

this case, I'm going to just copy the

46:45

API key here. I'm going to go back to my

46:49

workflow. And here, I'm going to hit

46:50

create credential. And I'm going to

46:52

paste the same exact API key. I'm going

46:55

to skip the organization ID. And for the

46:56

base URL, I want to make sure that I'm

46:59

replacing this with the base URL that I

47:02

obtained from the KodeKloud Keyspaces. Go back

47:05

to my lab and paste the base URL right

47:08

here. Okay, so I'm going to hit save

47:10

right now. As you can see, it says

47:12

connection tested successfully. However,

47:14

there's a couple of things I want to

47:15

point out here cuz if you pick from

47:17

list, as you can see, the list doesn't

47:19

really match the known models of GPT.

47:21

This is because it's not really working

47:23

based on the UI that's been built. So if

47:25

you try to run it based on the chat ID

47:28

or chat model that we've selected, I'm

47:30

just going to run a hello message here.

47:33

It's going to go to the AI agent, but

47:34

it's going to error out. So what you

47:36

need to do is go into the chat model and

47:38

instead of picking from list, you want

47:40

to go by ID and as suggested from the

47:43

instruction on the left hand bar here,

47:45

you want to copy openai/gpt-4.1.

47:49

Just copy that and paste it all word for

47:53

word. And let's do a test run again. And

47:55

this time it should actually be able to

47:57

access the appropriate model. So that's

48:00

if you're using the KodeKloud Keyspaces API

48:03

keys. Now what I want to point out is

48:04

the difference between this and using

48:06

the OpenAI API key is that it's much

48:09

more straightforward when you use OpenAI

48:11

API keys. So to show you the difference,

48:13

what I'm going to do here is I'm going

48:14

to create another new credential and I'm

48:17

going to call it OpenAI account 2. This

48:19

time I'm going to head over to

48:20

platform.openai.com.

48:22

And what I want to do is head over to

48:24

API key section. I'm going to create a

48:26

new secret key named 'n8n email

48:29

integration'. I'm going to hit create

48:31

secret key and I'm going to copy the API

48:33

key and head back to my lab. I'm going

48:35

to paste the API key and leave the base

48:38

URL as is and I'm going to hit save. So

48:40

there you go. So connection tested

48:42

successfully. And the difference is

48:44

instead of by ID, I can now just pick

48:46

from the list and it should load up the

48:48

correct list of models that I might

48:51

possibly want to use. So for example, if

48:52

I choose GPT 4.1, it's just going to be

48:55

that. And we're going to run this node

48:57

again. And as you can see, it's calling

49:00

the correct model right here. Okay. So

49:03

the next thing I want to point out is

49:04

when you add your Gmail node on the next

49:06

workflow or Google tool for that matter

49:08

the difference between doing that in our

49:10

labs versus n8n cloud is that on n8n

49:12

cloud, you often see when you create

49:14

a new credential with Gmail that you can

49:17

actually have a button which you can

49:18

sign in directly using your Google

49:20

account if your browser happens to be a

49:22

Google Chrome browser. However, with the

49:24

labs what you need to do when you create

49:26

a new credential is you actually need to

49:28

connect it with the OAuth method. So what

49:30

you want to do is you want to head over

49:31

to console.cloud.google.com.

49:34

And the first thing you want to do is

49:36

create a new project. And for the new

49:38

project, I'm going to name it 'n8n

49:40

email app'. All right. So, I'm going to

49:42

leave this as no organization. I'm going

49:44

to hit create. And it's going to take a

49:46

couple seconds to create a project. And

49:49

once that's done, I'm going to select

49:50

the project. And as you can see, it says

49:52

n8n email app project. And the very

49:55

first thing I want to activate is I want

49:56

to go to Gmail API.

49:59

So what I'm doing right now is I'm

50:01

creating a project because that's how

50:02

Google recognizes each of these OAuth

50:04

access that we're giving it. But the way

50:06

the security works is you need to enable

50:08

the particular tool that you want to use

50:10

within the project. So in this case I

50:12

want to use Gmail. So I want to make

50:13

sure I enable the Gmail API. All right.

50:15

Once that's enabled, I want to head over

50:17

to the OAuth consent screen. And right now

50:20

there's no OAuth consent screen set up.

50:22

So I'm going to just hit get started.

50:25

And in this case, I'm going to have to

50:27

give it an app name as well. So I'm just

50:29

going to call it n8n email

50:33

app. Okay. User support email. Going to

50:35

put this.

50:37

And we're going to select external. And

50:39

by the way, each of these steps is

50:41

documented on the left hand side panel

50:43

of the lab. So you don't have to

50:45

memorize any of these. But we're going

50:47

to go to next. And under contact

50:49

information, just going to put my email

50:51

here. And once you're done, just hit

50:54

continue. and create. So, just before we

50:57

go, I just want to go to audience and I

50:59

want to add a test user here, which is

51:04

an email that you're going to use to

51:07

send out basically the emails that you

51:09

want the agent to send out. So, in this

51:11

case, it's maronei@kodekloud.com.

51:13

We're going to save that. And lastly,

51:15

we're almost there. We're going to go to

51:17

API and services again, and this time

51:19

we're going to go to credentials. And

51:20

what we want to do is to hit create

51:22

credentials with OAuth client ID, and under

51:25

application type we want to choose web

51:27

application and under name you want to

51:30

name it n8n email OAuth client, and

51:35

then we're going to add the authorized

51:37

redirect URLs which we can obtain from

51:40

our lab. So, going back into the lab

51:42

here, you see that this is the OAuth

51:44

redirect URL and we're going to copy

51:45

this and we're going to head back and

51:47

just fill this in and we're going to hit

51:50

create. As you can see, we now have the

51:52

client ID and client secret to the app

51:54

that we just created. So, we're going to

51:56

copy this and head back to client ID.

51:59

Paste it in. Client secret. Copy that.

52:01

Paste in the client secret. And you'll

52:03

see now that there is a sign in with

52:05

Google button that pops up. So, what you

52:07

want to do is just hit that and a Google

52:09

login popup will show and then you just

52:11

want to correctly select the email

52:13

address, and it'll say that Google

52:15

hasn't verified this app. But because

52:16

you're the one who created it, you know

52:18

it's safe. So, we're just going to hit

52:19

continue and we're going to select all

52:22

because we want to have the agent be

52:24

able to do all these actions with our

52:26

Gmail. So, we're going to hit continue

52:27

now. And as you can see, it says

52:29

connection successful and we're good to

52:31

go. So, just wait a couple seconds here

52:33

within the labs and it's just going to

52:34

load up. And there you go. I already

52:36

have my credentials set up here and we

52:39

just want to run an execute step to show

52:41

you that everything is working. So in

52:42

the workflow you'll see that we've

52:44

chosen to define all of this by the

52:47

model. So we're going to let the AI

52:49

agent define this and we're going to

52:51

start chatting and say hi can you send

52:54

an email to.com

52:59

to just say hello. All right so we're

53:03

going to hit this. So as you can see now

53:05

the workflow has run and it's actually

53:07

sent an email to my Gmail. So let me

53:09

take a look and there you go. It says

53:11

hello Maronei, just want to say hello,

53:12

best regards. Okay. So obviously it's

53:14

not very sophisticated because actually

53:16

in the AI agent we didn't even specify

53:18

any system prompt. So the whole point is

53:20

just to show you the main difference

53:21

between running the environment on n8n

53:23

Cloud and within our playgrounds,

53:25

specifically covering the part where

53:27

the KodeKloud key is being used, as well as when

53:30

you're going to access any Google tools,

53:32

Gmail, Google sheets and stuff like

53:33

that. You do need to go to your Google

53:35

Cloud Console to set up the project, the

53:38

app, and the OAuth clients in order to

53:40

access Google services with the

53:42

workflow. And as you explore n8n in the

53:44

course, you're going to realize that

53:45

there's going to be some differences

53:46

between running n8n Cloud and n8n

53:49

within your own self-hosted environment.

53:50

For example, the availability of

53:52

community nodes, supported versions, and

53:54

a few other features that might only be

53:56

available on n8n Cloud. So if you're

53:59

facing any issues in any part of the

54:01

build, just keep in mind that it might

54:02

be because you're using a self-hosted

54:04

version or a lab hosted version from

54:06

KodeKloud. And it's not necessarily a

54:08

limiting issue. There's always a

54:10

workaround for that. It's just something to

54:11

keep in mind. I want to take a

54:13

minute to go through the different types

54:14

of authentication approaches we can use

54:17

when we build HTTP request

54:20

nodes and the corresponding API calls.

54:23

Now, as you've seen in the previous

54:24

section, the HTTP request node is a very

54:27

convenient way for us to replicate an

54:29

API call without configuring everything.

54:31

And it's especially useful when we can

54:33

just go to an API documentation and hit

54:36

the import curl, essentially import the

54:39

entire curl command. Now, as you've seen

54:41

in the previous example, there are a few

54:42

ways that we can configure the

54:44

authentication method in these API

54:46

calls. Now, most of the API calls are

54:48

going to be header authentication. As

54:50

you can see in this section after I've

54:52

imported the curl, it automatically

54:54

configures the header section and

54:56

populates it with the name and the value

54:59

format of the API keys for

55:02

authorization. Now you can do it this

55:04

way or you can actually set up a

55:06

credential type here. Now I want to

55:07

explain the difference between the two.
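
Concretely, the hardcoded version is what you get when you import a cURL command like this one (api.example.com and the key are placeholders, just for illustration):

curl -X POST https://api.example.com/v1/search \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"query": "latest AI news"}'

With a credential instead, you remove the Authorization line from the node's header parameters and create a Header Auth credential whose name is Authorization and whose value is Bearer YOUR_API_KEY; n8n then injects that header for you at runtime, and the key never appears in the node itself.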

55:09

So there are two main reasons why you

55:11

would want to do it this way by setting

55:13

up a credential type. The number one

55:15

reason is it's just super convenient

55:17

because the moment you have set this up

55:20

with the header off credential, you can

55:22

just pick from the list of credentials

55:24

you've created in the next upcoming

55:26

nodes or even in another workflow. And

55:29

this is especially useful when you have

55:32

an API that has a POST and a GET, so

55:35

you don't have to set it up again in

55:37

the next node. Now the second reason why you

55:39

want to do it this way is for security

55:41

reasons: because setting up a credential

55:44

type allows you to approach the

55:45

authentication without hard coding the

55:47

API key into the parameters and this is

55:50

just a better practice to ensure

55:52

security especially when you're sharing

55:54

the workflow or the project with

55:56

multiple team members and individuals

55:58

and all your credentials are going to be

56:00

stored under the credentials tab. You can have

56:02

an overview of the list of credentials

56:04

that you've set up and of course you can

56:06

delete it or you can reconfigure it

56:08

from this page. Now, that's it about

56:10

authentication. I'll see you in the next

56:12

section. So, in this section, we're

56:14

going to run through how to set up your

56:15

Google Cloud Console and connect it to

56:17

your n8n. That way, you're going to be

56:19

able to start using the Google nodes

56:20

such as Google Drive, Gmail, Google

56:23

Sheet, Google Docs, and a variety of

56:25

others without setting it up over and

56:27

over again. So, the node that we're

56:29

going to start with here is the Google

56:31

Drive node. And the reason why we're

56:32

choosing this is because the way to

56:34

connect it is going to be similar across

56:36

all nodes. In fact, once you've

56:37

connected it once, everything else would

56:39

just be a simple matter of enabling APIs

56:42

through your Google Cloud console. So,

56:44

in this case, under credential, what I

56:45

want to do is to create a new

56:47

credential. And in order to get your

56:49

client ID and client secret, where you

56:51

want to go is to go to

56:52

console.cloud.google.com.

56:55

And depending on whether you've worked

56:57

with Google Cloud before, and by the

56:58

way, if you have and you already know

57:00

how to get it connected with n8n, then

57:03

feel free to skip over to the next

57:05

section. But your page might look

57:06

slightly different from what I'm seeing

57:08

right now. But in any case, where you

57:10

want to go is the top left-hand corner.

57:13

You want to pick the project here. And

57:16

again, as you can see, I already have

57:17

one set up, but I'm going to set up

57:19

another one. Click on new project. And

57:21

I'm going to call it n8n demo KKK. And

57:24

under location, I'm going to pick no

57:25

organization here. And I'm going to hit

57:27

create. And there you go. It's creating

57:29

an n8n demo KKK project. I'm going to

57:33

click on select project and I'm going to

57:35

pull up the left hand sidebar here.

57:37

Where I want to go is under APIs and

57:40

services. I want to go to the OAuth

57:42

consent screen, and nothing is set up yet.

57:44

So I'm going to hit get started. So app

57:47

information

57:48

and type in n8n demo KKK. I'm going to call it

57:52

the same name for the app and click that

57:54

as the email. And for the audience, you

57:57

have the choice of internal and external

57:59

depending on whether you have your

58:00

Google workspace set up. Internal is

58:03

only available if you have a Google

58:04

workspace. But in this case, I'm going

58:06

to hit external. And contact

58:08

information. I'm going to use the same

58:12

here. Hit next. Check on that and hit

58:15

continue. And we can hit create. All

58:17

right. So the next step, I want to hit

58:20

the create OAuth client button. And what

58:22

essentially we're doing here is that

58:24

we're creating a project even though

58:26

it's not a real project. Google just

58:28

likes to use this as a structure to

58:30

identify and keep track of the OAuth

58:31

clients that people create. So

58:33

essentially what we want to do

58:35

here is we want to pick application type

58:37

web application and in the name we're

58:40

going to choose n8n demo KKK again, and

58:44

we're going to leave this as is and

58:47

we're going to add an authorized

58:49

redirect URL here, and where we're going

58:51

to get this URL is from our workflow

58:55

open our credential here, and click on

58:58

copy and go back to our

59:02

Google console. Paste that and hit

59:05

create. So now we have the client ID.

59:07

We're going to copy that. We're going to

59:08

go back to our workflow here. So client

59:11

ID, we're going to paste that in. Go

59:13

back to our console here. And what we

59:15

want to do is to add a client secret.

59:17

And we're just going to copy that. Go

59:19

back to our workflow and paste the client

59:22

secret in. So just before we hit sign in

59:24

with Google here, which will activate

59:25

the connection, we're going to head back

59:27

to our Google Cloud Console and go to

59:30

branding and scroll down to authorize

59:32

domains. So in this case, what we want

59:34

to do, we want to add an authorized

59:36

domain, which is n8n.cloud.
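
For context, the reason the redirect URL and authorized domain have to match is that the sign-in button sends you to a Google authorization URL along these lines (a simplified sketch; the real URL n8n builds carries more parameters, and the redirect_uri is exactly the value you copied from n8n):

https://accounts.google.com/o/oauth2/v2/auth
  ?client_id=YOUR_CLIENT_ID
  &redirect_uri=<the redirect URL you copied from n8n>
  &response_type=code
  &scope=https://www.googleapis.com/auth/drive
  &access_type=offline

Google will only send the authorization code back to a redirect_uri that is registered on the OAuth client, which is why you paste it in verbatim rather than typing it.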

59:39

We're going to hit save. And the other

59:40

thing we're going to do is to type in

59:43

drive in the search bar and hit Google

59:46

Drive API because we want to enable this

59:48

API here. And of course, we're going to

59:50

have to enable Gmail API as well as

59:52

sheet and docs as well. But in this

59:55

case, we're going to just enable the

59:57

drive and we're going to try it out. And

59:59

once you've enabled the drive API, the

60:01

last thing you want to do is head over

60:02

to audience

60:04

and essentially publish the app. All

60:06

right, we're going to confirm that and

60:07

head back to our workflow here and hit

60:10

sign in with Google. and you're going to

60:12

be redirected to a Google signin page

60:14

and you're going to have to click your

60:16

email and it's going to say Google

60:17

hasn't verified this app, but it's okay

60:20

because it's you who's launching this.

60:22

And once you've done that, there's going

60:23

to be a couple permissions that you need

60:24

to grant on the credentials. So, we're

60:26

going to select all of these and we're

60:28

going to hit continue. And there you go.

60:29

Connection successful. So, we're going

60:31

to head back to our workflow here. As

60:32

you can see, account is connected. And

60:34

once you've connected your account,

60:36

you'll be able to see the list of

60:37

folders that you have on your drive. So

60:39

in this case there's only one folder in

60:41

my drive which is KK test folder. So I'm

60:44

going to hit that and essentially we can

60:46

watch for file created for example right

60:49

so anytime a file is uploaded it's going

60:52

to trigger this. So we're going to hit

60:54

fetch test event. I didn't upload any

60:57

file so it says no data with the current

60:58

filter. So this is the folder right

61:01

here. I'm going to open that up. I'm

61:04

going to upload a dummy file, which is

61:07

an SOP file. And there we go. So,

61:10

we're going to go back to workflow here

61:12

and rerun the test again. And as you can

61:15

see, now it's fetching the data of the

61:17

file that has been uploaded. So, cool.
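
Under the hood, this trigger behaves roughly like polling the Drive API for files created in that folder since the last check. Here is a minimal sketch of the idea; the folder ID, token, and timestamp are placeholders, and n8n manages the real OAuth tokens and polling schedule for you:

// Minimal sketch of what "watch for file created" boils down to conceptually.
// FOLDER_ID, ACCESS_TOKEN, and lastPollTime are placeholders.
const FOLDER_ID = "your-folder-id";
const ACCESS_TOKEN = "your-oauth-access-token";
const lastPollTime = "2025-07-29T00:00:00";

const params = new URLSearchParams({
  q: `'${FOLDER_ID}' in parents and createdTime > '${lastPollTime}'`,
  fields: "files(id, name, createdTime)",
});
const res = await fetch(`https://www.googleapis.com/drive/v3/files?${params}`, {
  headers: { Authorization: `Bearer ${ACCESS_TOKEN}` },
});
const { files } = await res.json();
// Each new file becomes one item for the rest of the workflow to process.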

61:20

For the other nodes such as Google

61:22

Sheet, Google Docs, what you need to do

61:24

is just to go back to your Google Cloud

61:27

Console and essentially enable your APIs

61:30

by typing in the Google nodes or the

61:33

Google tool that you want to use. For

61:35

example, in this case, Google Sheets

61:36

API. You want to enable that and you can

61:39

go back and connect that on your n8n

61:41

and you're good to go. All right,

61:42

I'll see you in the next section. In

61:44

this section, we're going to build a

61:46

simple workflow of an AI research agent

61:48

that goes out to the internet and searches

61:50

for the latest AI news and sends it

61:53

across to you via email and at the same

61:56

time logs those particular headlines so

61:59

that the next day the workflow is going

62:01

to check against the logs so that it

62:03

doesn't repeat the same headlines.

62:06

And this is a workflow that's going to

62:07

be super useful in this day and age when

62:10

AI news pops up every other day.

62:13

So this is what the entire workflow will

62:15

look like. We start off with a scheduled

62:18

trigger node which is preset to a

62:21

certain time of each day. Let's say 9:00

62:24

a.m. And then this goes to a Perplexity

62:27

search. So as you know, Perplexity is

62:30

one of the leading search AI agents out

62:32

there. And we're going to use that to

62:34

search the latest news or headlines that

62:36

is pertaining to AI development. And

62:39

then we're going to kick the output to a

62:42

checking agent. And what this does is it

62:44

dips into the Google sheet which logs

62:47

all the previous headlines of the news

62:48

to make sure that it doesn't repeat the

62:50

same headlines.

62:52

And after that, it's going to send a

62:54

summary of this AI news and send it out

62:57

to the intended recipient via Gmail. And

63:01

the final step is going to log those

63:03

headlines to make sure that the AI agent

63:05

stays up to date with those logs.

63:08

Now, let's build it together. The first

63:10

thing you want to do is introduce the

63:13

trigger note, which in this case is a

63:16

scheduled trigger note to kickstart the

63:18

entire workflow. With the schedule

63:20

trigger node, you can choose what the

63:22

trigger intervals that you want. In this

63:24

case, it could be days, hours, minutes,

63:26

or seconds or even longer than that. As

63:30

we want this workflow to kickstart every

63:32

single day, we want to choose days. And

63:35

the days in between triggers, we want to

63:36

keep that as one because we want it to

63:38

send out every single day. The

63:41

trigger hour is something that we can

63:43

choose based on preference. And in this

63:45

case, we're going to choose 9:00 a.m.

63:46

because we like the news to be delivered

63:48

to us in the morning. And we're going to

63:50

keep the trigger at minute zero. And

63:52

we're going to hit execute step just to

63:54

populate the data. And there you go. We

63:57

have set our first trigger node. Now the

63:59

second node that we want to introduce

64:01

here is a Perplexity node. And usually

64:04

when it comes to using thirdparty tools,

64:07

what you typically do is go to the API

64:09

documentation of the third party tools

64:12

and make a HTTP request towards those

64:15

API endpoints. However, in this case,

64:18

Perplexity has a native node that lives

64:20

within n8n, which you can choose from and

64:23

is easy to use. In this case, I'm going

64:26

to choose the message a model node from

64:28

Perplexity.

64:29

And I already have my Perplexity account

64:31

credential set up. But I'm just going to

64:33

walk you through very quickly how you

64:34

can easily do that. So in this case,

64:37

you're going to pick the dropdown and

64:39

click on create new credential. And as

64:42

you can see is as simple as keying in

64:44

the API key from Perplexity.

64:47

What you could do is go to Perplexity.ai

64:50

and in the main dashboard, go to your

64:52

account and click on settings. And under

64:56

settings, if you look at the left-hand

64:58

side tab, you want to hit API keys. And

65:02

your API keys page may not look like

65:04

this because you do need to set up your

65:06

API billing before being able to access

65:08

these keys. You can easily do that by

65:10

heading to API billing and putting in

65:12

your credit card information and topping

65:14

up the credits at a minimum of $5.

65:18

But in this case, I've already gone

65:19

ahead and done so. What we can do here is

65:21

to hit the create key button and it's

65:24

going to create an API key ready for us

65:26

to copy. Now head back to the workflow

65:30

and copy the API keys into the field.

65:34

Hit save and you have your Perplexity

65:36

account registered to n8n. And there you

65:39

go. This says connection tested

65:41

successfully and your Perplexity account is

65:43

connected to your n8n. And you only

65:46

need to set up the credentials one time.

65:49

Afterwards, you can simply pick from the

65:51

account credentials that you've set up

65:53

after you've connected your account or

65:55

your API keys in this case. What you can

65:57

do is to choose the operation that you

65:59

want perplexity to carry out. In this

66:02

case, it's message a model. And the

66:05

model that we want to work with is the

66:07

Sonar Pro in this case. And there are

66:10

other models that you can choose from

66:13

such as the Sonar Deep Research or Sonar

66:16

Reasoning Pro. And each of these are

66:19

dependent on the use case that you want

66:20

to achieve. The deep research is going

66:22

to give you a large size of output rich

66:25

with data information and is perfect if

66:28

you're looking to do a very detailed

66:30

research about a topic. But in this

66:32

case, because we're only looking for the

66:34

headlines of the AI development news, we

66:37

want to choose Sonar Pro. And as with

66:40

the other AI agent nodes, the typical

66:43

components that you have is the user

66:45

prompt as well as the system prompt. And

66:48

you can do that by adding a message and

66:51

changing the user to a system. And there

66:53

is a slight difference between the

66:55

workings of a perplexity node and those

66:58

of the other AI agent nodes. And

67:01

specifically, one of those is that

67:02

perplexity will require the system

67:04

message to come first before the user

67:06

message. So, what you want to do is to

67:08

swap this around and choose the system

67:11

prompt as the first prompt and the user

67:14

prompt as a second. Now, within the

67:17

system prompt, here's a prompt that I've

67:19

come up with. So, it says, "You are an

67:22

expert AI research analyst specialized

67:24

in tracking and summarizing the latest

67:26

developments in artificial intelligence.

67:29

Your task is to deliver concise and

67:30

up-to-date news summaries, focused on new

67:34

AI model releases, research

67:36

breakthroughs, and strategic moves by

67:37

key AI players. So that's the system

67:40

prompt. As for the user prompt, what I want

67:43

to do here is to prompt it effectively

67:45

so that it summarizes and extracts the

67:48

most relevant information or news for

67:50

me. And I'm going to blow this up so

67:52

that's easier for you guys to read. And

67:55

here I go. It says find and summarize

67:57

the most recent AI news within the last

67:59

24 hours. For reference, today is, as

68:03

you can see, this is intentionally left

68:05

blank because I want to drag the

68:08

variables of today. And in order to find

68:11

that, you would go to variables and

68:13

context. And this is applicable for

68:15

every single workflow. You're going to

68:17

have a default variable of the current

68:20

time as well as the current date. And in

68:23

this case, I want to drag the value or

68:26

the variables of the current date and

68:28

put it here. And this will dynamically

68:30

change the value depending on what the

68:33

date is. And so you can see on the right

68:35

hand side, it says "today is" followed by

68:39

today's date.
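
In n8n's expression syntax, that part of the user prompt ends up as something like the line below. This is a minimal sketch, assuming the built-in $today variable and a Luxon-style format call; the exact formatting is your choice:

For reference, today is {{ $today.toFormat('yyyy-MM-dd') }}

And the reason why I want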

68:41

to do that is because a lot of the times

68:44

the AI is prone to hallucinate the

68:46

timing of the day. So when you tell it

68:49

to get the most recent news, it doesn't

68:51

have context on what today is. So when

68:55

you tell it to get the most recent news

68:56

in the past 24 hours, often times it

68:59

might hallucinate the day to be a day

69:02

within last year or the year before. So

69:05

this way we ensure that the AI has

69:07

context of what today is and be able to

69:10

focus on news that are coming in from

69:12

the past 24 hours. And also I've added

69:15

in to make sure that it prioritizes

69:18

research breakthroughs and key

69:19

announcements by organizations like

69:21

OpenAI, Anthropic, Google, Meta,

69:23

Mistral, xAI, and Hugging Face. And this

69:25

is just some guardrails to make sure

69:27

that it focuses on the news that are

69:29

most interesting to me which is the

69:32

development and the release of new

69:33

models of AI. Okay. So now that the

69:37

system and user messages have been set,

69:39

the last thing I want to do is to go to

69:42

add option and click on the dropdown

69:44

button and perplexity has a very neat

69:47

feature which is the recency filter. So

69:50

we can set the search recency filter to

69:54

just within the day. So in this case

69:57

it's going to restrict the search to

69:58

within the most recent news that have

70:00

been put out there. Okay, now we're

70:03

ready to execute the step. And as you

70:06

can see, the output is various types of

70:09

data that has to do with the research

70:11

output as well as some of the relevant

70:15

information regarding the search. I'm

70:17

not going to go through each of these

70:18

variables. But what is interesting to us

70:22

would be the number of citations here,

70:24

which are the number of news

70:26

outlets and links that it has found to

70:28

be relevant to your search and also the

70:31

search results.

70:33

And finally, the output content which is

70:36

in natural language where perplexity

70:38

will respond to you and say there have

70:40

been several notable AI developments and

70:42

announcement for the last 24 hours

70:44

particularly concerning major strategic

70:46

moves and investments.
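
At the API level, what this node sends is roughly the following request; note the system message sitting first in the array, and the recency filter we just set (a sketch against Perplexity's chat completions endpoint, with the key as a placeholder):

// Minimal sketch of the underlying Perplexity call the node makes for us.
const res = await fetch("https://api.perplexity.ai/chat/completions", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.PERPLEXITY_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "sonar-pro",
    messages: [
      { role: "system", content: "You are an expert AI research analyst..." },
      { role: "user", content: "Find and summarize the most recent AI news..." },
    ],
    search_recency_filter: "day", // restrict the search to the last day
  }),
});
const data = await res.json();
console.log(data.choices[0].message.content); // the natural-language summary
console.log(data.citations);                  // the cited source links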

70:48

And what we want to do here is to pin

70:51

the data so that we don't have to rerun

70:54

it; every single run is going to use up

70:56

Perplexity API tokens. We want to

70:59

make sure that we're being cost

71:00

effective and prudent about it. The same

71:02

thing with the schedule trigger. I'm

71:04

going to pin this data so that every

71:07

time we run through the workflow, it's

71:09

not going to exhaust a new API token and

71:12

provide new data. After the Perplexity

71:14

node, what I want to introduce is the

71:18

formatter agent or in this case the

71:21

checking agent. We're going to use

71:23

OpenAI in this case and we're going to

71:26

pick a simple message and model action

71:28

as part of the workflow. And as you can

71:30

see, the output of the perplexity node,

71:33

which was the previous node, becomes the

71:34

input of the OpenAI node. And again, I

71:38

already have my account all set up as

71:40

we've covered in the previous sections.

71:42

So, you should have the same account set

71:45

up here as well. And we're going to

71:48

leave the resource as text, operation as

71:51

message and model. And we're going to

71:53

pick a model here. In this case, we're

71:56

going to pick a GPT

71:59

4. That should be enough to check

72:02

against the log. And just a reminder of

72:05

what we're doing here with this node is

72:07

we want this node to check against a

72:10

specific Google sheet log of past

72:12

headlines that has been recorded. And

72:15

the reason we do that is because some

72:17

headlines are going to be reported

72:19

twice, three times, four times, or

72:21

repeatedly depending on how big the

72:23

headlines is. If we don't add this

72:25

particular node within the workflow,

72:27

what happens is it's going to report

72:29

some of the headlines again and again

72:31

thinking that they're new headlines in

72:34

the past 24 hours. We're going to rename

72:36

this to news

72:39

checking agent. Now for the user prompt,

72:42

what I'm going to do is I'm going to

72:44

give it a set of prompt here and I'm

72:46

going to blow it up so that you can see

72:49

it says here's today's AI news list from

72:51

Perplexity.

72:53

So I'm going to add the summarized

72:56

content from the previous perplexity

72:58

node which is under choices under

73:01

message and the variable is called

73:03

content and we're going to just drag and

73:06

drop as you can see in the right hand

73:08

side this is the whole chunk of the

73:11

summarized news headlines that came from

73:14

perplexity

73:15

and then later on I'm going to ask

73:17

please cross check the news against the

73:19

past news log sheet and remove

73:21

duplicates

73:22

So this is the name of the news log

73:25

sheet that I'm going to create later on

73:27

so that they can compare and check

73:29

against such that there won't be any

73:32

duplicate news on the output. I've also

73:35

added here that we should ensure each

73:37

item has a clear, impactful one-to-two-

73:39

sentence summary. So there's a bit of

73:41

summarization task here. Also after each

73:44

summary, include the full source

73:46

URL. This is to make sure that if you're

73:48

interested in that particular news, you

73:50

can click on that URL and get access to

73:53

where it's been cited from. Use bold to

73:56

highlight company names or major

73:57

updates. This is just for readability

74:00

and add spacing between news items for

74:02

readability as well. And at the end,

74:05

I've added: if Perplexity's output news

74:07

are all duplicates from the past news

74:09

log sheet, respond appropriately with

74:11

no notable AI development news in the

74:13

past 24 hours. This is just to make sure that

74:16

it's only generating content when there

74:19

are new headlines and not recycling

74:22

headlines again and again. So that is

74:24

the user prompt. Meanwhile, we need to add

74:28

a system prompt. In this case, we choose

74:31

an add message and we're going to choose

74:33

a system message or a system prompt. And

74:37

in the system prompt, we're going to do

74:39

a simple prompt. And I'm going to blow

74:41

it up to say: you are an AI news formatter,

74:46

and your role is to process, check and

74:47

clean AI news results fetched from

74:50

perplexity to ensure readability and

74:52

that past headlines are not repeated.

74:54

I'm going to add: check

74:57

against the

74:59

Google Sheet tool named past

75:04

AI news log. So this is just to ensure

75:06

that it knows that the tool exists and

75:09

that you're supposed to use that to

75:11

cross-check the news headlines and make

75:14

sure they're not repeated. Okay, cool.
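
As a reference, when you drag that content field into the user prompt, n8n writes an expression along these lines for you (a sketch; the name inside $('...') must match whatever your Perplexity node is actually called):

Here's today's AI news list from Perplexity:
{{ $('Perplexity').item.json.choices[0].message.content }}

Drag-and-drop builds the expression for you, so you rarely need to type paths like this by hand.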

75:16

So now we are going to add a tool here

75:19

which is essentially the Google sheet

75:22

tool. All right, click that. And I

75:25

already have my Google Sheets account

75:26

connected, but we've covered how to

75:28

connect your Google Sheets or Google

75:30

Docs or your entire Google account

75:33

through a Google Cloud console in the

75:36

previous section. So, you should have

75:38

the same setup. Under the tool

75:40

description, we're going to set

75:41

automatically because we're going to let

75:43

the AI decide when to use this tool. At

75:46

the same time, we're going to have to

75:47

determine some of the configuration

75:49

here. And under resource, we're going to

75:51

want to to refer to a sheet within the

75:54

document. And the operation is to get

75:58

rows because we're not trying to create

76:01

new sheets. We're not trying to update

76:03

the sheets. We're simply trying to check

76:05

against the sheets if the headlines are

76:07

repeated. So, we're going to use get

76:10

rows. Now, we need to fetch a document.

76:12

But just before we do that, we need to

76:14

create the actual sheet where all the

76:16

information of the past headlines are

76:17

going to be logged. So I've created a

76:20

sheet here called the past AI news log, and

76:23

it's a simple sheet with a date column

76:26

and a headlines column. So the date is

76:28

going to correspond to the date of the

76:30

headlines. And the headlines basically

76:32

is going to say the content of the

76:34

headlines in one or two sentences. Now

76:37

let's go back to the workflow. Once you

76:39

have the sheet set up, you can then find

76:41

the sheet just by selecting from list

76:45

and clicking on the right one which is

76:47

past AI news log. And then later on you

76:50

can choose from the sheet the correct

76:52

sheet in this case which is sheet one. I

76:54

only have one sheet in the document. So

76:57

that's the correct one. You should be

76:58

able to see now

77:01

and that's it. And just before we go, we

77:04

want to rename this to past AI news log

77:09

because we want to make sure that the AI

77:11

knows that this is the particular Google

77:13

sheet that we're referring to when we

77:15

told it that there's an attached tool to

77:17

it. Now, going back to the workflow,

77:20

we're going to run this particular node

77:22

here based on the configuration that we

77:25

filled in. You can see, based on the data

77:27

that came from Perplexity, it

77:29

is checking it against the news log

77:32

which in this case is an empty sheet. So

77:34

there's no headlines to be checked

77:36

against, but it's still doing that. And

77:38

finally the output is here's the

77:41

formatted AI news which is Julius AI has

77:45

successfully raised a $10 million funding

77:46

round amongst other things. In this

77:49

case, the next node that we want to

77:51

introduce is the email node because

77:54

right now all the output lives within

77:56

the workflow. Just like in the previous

77:58

section, we're going to attach a Gmail

78:00

node. But in this case, we're going to

78:02

still choose the send a message action.

78:05

But the difference, as you can see, is

78:07

that we're not attaching the Gmail as a

78:10

tool for the checking agent to leverage

78:13

on. Now, why is that? And what's the

78:15

difference between attaching Gmail as

78:17

a tool versus Gmail as a subsequent node

78:21

to the AI agent node? And the difference

78:23

is that when we attach the Gmail tool to

78:26

an AI agent node, the AI agent has the

78:28

liberty to choose whether or not to

78:30

leverage the tool or use the tool. In

78:33

scenarios where AI agent hallucinates,

78:36

they might actually use the tool when

78:37

they don't need to. They might not use

78:39

the tool when they actually need to.

78:41

Having the Gmail node as a node that is

78:43

chained up to the workflow like this

78:45

ensures that n8n runs through the

78:47

node regardless of what the AI agent

78:50

thinks. In this case, we make sure that

78:54

the node is run no matter what. And this

78:56

is extremely useful when we're sure that

79:00

each time of the day we want to receive

79:02

an AI news output into our Gmail because

79:06

a node has to be run every time a

79:08

workflow is run. It is just best

79:10

practice to chain it up as part of the

79:12

workflow instead of connecting it to the

79:14

AI agent as part of the tools that it

79:17

may or may not use. Now, let's go back

79:19

to the Gmail node. And you can see that

79:21

in terms of configuration, there's some

79:23

slight differences between a Gmail node

79:25

that is chained up to the workflow

79:27

versus a Gmail tool that is attached to

79:29

the AI agent. You now don't have the

79:32

option to let the AI choose to define

79:35

the recipient, the subject, and the

79:37

message. you actually have to hardcode

79:39

it into the node. And I use the word

79:42

hardcode, but actually you're simply

79:44

dragging the variable and dropping it

79:46

into the field. The fields appear to be

79:48

the same as when you have a Gmail node

79:51

attached to your AI agent. So, we're

79:52

going to leave it as that. But here's

79:54

where it gets a little different. So the

79:57

recipient that you want the output of

79:59

the Gmail to be sent to should be

80:02

consistent because you want this to be a

80:04

daily operation sent to the same set of

80:07

email addresses. So in this case I'm

80:09

going to put my email and under the

80:12

subject I'm going to put daily AI news

80:18

digest.

80:20

And this doesn't need to change because

80:21

I know what it is. And email type

80:24

because it's all going to be text. I'm

80:26

going to choose text and the message is

80:29

actually from the content here. And as

80:32

you can see the content itself, it says

80:34

here is formatted AI news. So this is

80:36

not ideal in the way that I want to

80:39

receive the summary. So we can actually

80:41

prompt this a little bit better. So once

80:44

we're done with this, let's go back to

80:46

the news checking agent and under the

80:49

user prompt and we blow it up. Let's add

80:54

an output format rule: output in text

80:58

only, and start the

81:02

daily

81:03

news summary with "Here is today's

81:09

AI development news.

81:13

Today is" and I'm going to just drag the

81:17

today variable and drop it here. So, I

81:21

wanted to start this way so that it

81:23

makes sense. Every time I open up the

81:24

email, it's going to say, "Here's

81:26

today's AI development news. Today is

81:29

that particular day." So, I know that

81:30

I'm getting the latest news from the

81:33

headlines. Okay, now that's done. As you

81:36

can see, when I make any changes, it's

81:38

going to turn the node yellow. So, we're

81:40

going to run it one time and let's see

81:42

what the output now is. Here's today's AI

81:46

development news. Today is 7th, sorry,

81:49

today is 29th of July, 2025. And then

81:52

the news. So, okay, that's the format

81:54

that I want it to be. I'm going to open

81:56

that up again and pin it so that I don't

81:59

have to rerun it every time. And within

82:01

the Gmail, this is the content that I

82:05

want to drag into the message field. So,

82:06

that's good. And remember in the

82:08

previous sections, we know that it's

82:10

going to automatically come with an

82:12

added attribution within the email. So

82:14

what we want to do is just click add

82:16

option and toggle off the append

82:18

attribution and then we're going to try

82:21

to execute step. As you can see there

82:24

has been some actions and then email has

82:26

been sent. So let's go to our email

82:28

inbox and see what we've received. So

82:31

this is the email that I've received and

82:33

it says here's today's AI development

82:35

news. Today is this date. Of course this

82:37

formatting is not ideal. You can go into

82:40

the prompt engineering again and try to

82:43

format it in a more readable way, but

82:47

essentially in terms of formatting, it's

82:49

much easier to read now. And of course,

82:51

these are the news, the headlines,

82:54

and if you push it to production, you're

82:56

going to be receiving these emails day

82:59

in day out and keeping yourself updated

83:01

with the latest AI development news.

83:03

Going back to the workflow because we're

83:05

not quite done yet. Even though we're

83:07

now receiving the daily news update and

83:10

I'm going to pin this as well, remember

83:13

the AI news log that we've

83:15

introduced here. We want to make sure

83:16

that whatever news that have been

83:18

researched in the past 24 hours has been

83:21

logged into the sheet. That way the very

83:24

next day the agent or the news checking

83:27

agent can then check the log and make

83:29

sure that the headlines are not repeated

83:30

again. and the agent will always be

83:33

checking against past news to make sure

83:35

that it's not repeated. So in this case,

83:37

what we want to do is we want to

83:39

introduce a Google sheet node. Click on

83:42

Google sheet. As you can see, there are

83:45

a couple actions that we can do with

83:46

Google sheet. And what we want to do

83:48

here is to choose an append row in

83:52

sheet. And the reason why we choose

83:55

append row in sheet is because we want

83:56

to add new information on an existing

83:59

sheet. So we've got the Google Sheets

84:01

account connected already as usual as

84:03

what we've covered in the past section

84:06

and under resource we want to access the

84:08

sheet within the document and the

84:10

operation we want to do is to append

84:13

rows or append new information. Now

84:15

under document what we want to choose is

84:18

the past AI news. So just type in the

84:22

title and you can choose that. The

84:25

alternative is actually you can link it

84:27

by typing in or copy and pasting in the

84:30

URL or the ID of the document. And once

84:34

you've typed that in, you can then

84:36

choose the list of sheet. And again, we

84:38

have only one sheet. So choose sheet

84:40

one and once you've done that, you'll be

84:43

able to see the columns that now exist

84:47

within that particular sheet. And within

84:50

the sheet that we've created right now,

84:52

it's completely blank because there are

84:54

no headlines that have been logged in. We

84:56

have two columns, which is date and

84:57

headlines. And if you go back to the

84:59

workflow,

85:01

there's the date column and there's a

85:02

headlines column. So now what we do is

85:04

we want to populate these particular rows

85:07

under these particular columns with

85:09

specific data. So in this case, we want

85:12

to choose under variables and context

85:14

the today variable and drag that and

85:16

drop it under date. So that way it's

85:19

going to always populate the date column

85:21

with the value of today's date. And

85:25

under headlines, what we want to do is

85:28

we want to choose under the news

85:30

checking agent the output of the

85:32

content. And now as you can see when you

85:34

drag variables from other nodes, you

85:36

don't necessarily always have to drag it

85:38

from the previous or the immediate

85:40

previous nodes. You can drag it from

85:42

nodes that were from previous instances

85:46

in the chain. So in this case, I'm going

85:49

to drag the content variable from a news

85:51

checking agent and just drop it into the

85:54

headlines.
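
So the two mapped columns end up holding expressions roughly like these (a sketch; the node name and the exact JSON path depend on your workflow, which is why drag-and-drop is the safer way to fill them in):

Date      → {{ $today.toFormat('yyyy-MM-dd') }}
Headlines → {{ $('News Checking Agent').item.json.message.content }}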

85:55

Now with that done, I'm going to try the

85:57

execute step to see what it does. And as

86:01

you can see, it's telling you that it's

86:03

added the date component or rather

86:06

today's date into the date column and

86:09

headlines into the headlines column. And

86:12

of course, the way the logic works is

86:14

it's going to pick the topmost row that

86:16

is unpopulated within the sheet. So it's

86:19

always going to pick the next available

86:21

rows as you go. So now that that's done,

86:25

we've ensured that there is some form of

86:27

loop in terms of the way the workflow

86:31

checks for new headlines. And now with

86:34

the workflow, we've created a form of a

86:36

loop where the perplexity agent every

86:39

day would go out and search for the

86:41

latest information on AI development and

86:44

a news checking agent checks against

86:46

past logs of the headlines to make sure

86:49

that the information is not repeated

86:51

before sending it into your Gmail. All

86:54

of that gets stored within a Google

86:55

sheet every day. And then the next day,

86:58

the same operation is going to happen

87:00

again. The research agent is going to go

87:02

out into the internet and check the most

87:04

recent and relevant news for you. And

87:07

then we also have the agent checking

87:09

that against the Google sheet together

87:13

with the news that were logged in the

87:14

previous days before sending it out into

87:17

your Gmail. So that way you can be sure

87:20

that the headlines that you receive

87:22

every day are really the most relevant

87:24

and the most recent news that you can

87:26

get. There are many other options of

87:28

tools that you can use along this

87:29

workflow. Perplexity is just one of the

87:32

options even though in my opinion it's

87:33

one of the most advanced out there when

87:35

it comes to search. OpenAI can be

87:38

replaced with any other LLMs as well.

87:40

And you can replace a Gmail with a

87:42

Telegram node or Slack node or any other

87:45

platforms that you spend your most time

87:47

on. In any case, I hope this section

87:49

provides a good understanding of how a

87:52

general logging mechanism of the

87:54

workflow can possibly work. So that's

87:57

the end of the section and I'll see you

88:00

in the labs. In this section, we're

88:01

going to have a little bit of fun. So

88:03

since we now know how to create an AI

88:04

agent on n8n and how to connect it to our

88:07

Slack accounts, we're going to try to

88:09

see if we can have the AI agent fully

88:11

take over our profile and respond on our

88:12

behalf, aka do our job for us. So in

88:16

this case, this is the same workflow

88:17

that we've built in the past section.

88:19

The first thing we need to do to have

88:20

the AI agent respond on our behalf is to

88:22

go to our Slack API and reconfigure some

88:25

of the OAuth permissions.

88:28

And what we want to do here is simply go

88:30

down to the scope section. And in the

88:33

past section, we filled up all the bot

88:34

token scopes as required. But in this

88:37

case, we're going to scroll down to the

88:38

user token scopes and fill it up with

88:40

the required scopes. And these are the

88:42

list of required scopes that are

88:43

typically recommended in order for it to

88:45

function well. So as you can see we have

88:47

channels:history to read the channels,

88:49

just in case we want to respond and

88:51

interact within the channel, and

88:53

obviously we have channels:read, chat:write,

88:55

and groups:history, as well as some other

88:58

read and history permissions. And this

89:00

all depends if you want the bot to

89:01

respond on your behalf in a channel in a

89:03

direct message or in a group message.
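
For reference, a typical user token scope set for this build looks something like the list below; these are standard Slack scope names, but trim the set down if you only need the bot to reply in, say, direct messages:

channels:history   channels:read
groups:history     groups:read
im:history         im:read
mpim:history       mpim:read
chat:write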

89:07

So once you're done adding all the new OAuth

89:09

scopes under the user token scopes,

89:11

what you need to do would be to

89:12

reinstall the app. There will be a

89:14

notification popup that comes up here.

89:16

I'm just going to show you right here.

89:17

So if I were to add

89:22

let's say an admin.apps:read scope. What will

89:24

happen is it's going to ask me to

89:26

reinstall my app over here.

89:28

So just go ahead and do that and you're

89:30

good to go.

89:32

And just before we go under the bot

89:34

token scopes, you do need to add one

89:35

more permission which is essential which

89:37

is chat:write.customize.

89:40

And this just allows the bot to send

89:42

messages on your behalf.
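
Roughly speaking, what the Slack node will do with that user token is call chat.postMessage authenticated as you rather than as the bot. A minimal sketch, with the token, channel, and text as placeholders:

// Minimal sketch of posting as the user via Slack's Web API.
// SLACK_USER_TOKEN is the user OAuth token (xoxp-...) you copy in a moment.
const res = await fetch("https://slack.com/api/chat.postMessage", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.SLACK_USER_TOKEN}`,
    "Content-Type": "application/json; charset=utf-8",
  },
  body: JSON.stringify({
    channel: "C0123456789", // placeholder; made dynamic in the workflow later
    text: "Hey, what's up?",
  }),
});
const data = await res.json();
if (!data.ok) throw new Error(`Slack error: ${data.error}`);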

89:45

All right. So once you're done, you want

89:47

to copy the user OAuth token and go back to

89:50

your n8n workflow.

89:53

And here you're going to create a new

89:54

credential which I've done here.

89:57

But essentially, what you need to do is just

89:59

do the same thing: pick access token and

90:02

paste the user OAuth token here.

90:06

I'm not going to click on save because

90:07

I've already created one, but it's the

90:09

same exact process as we've done in the

90:12

previous section. And once you're done

90:13

with that, you want to toggle over to

90:14

the event subscriptions. And what you

90:16

want to do is just toggle on the

90:18

subscribe to events on behalf of users.

90:21

In this case, also add these four event

90:23

subscriptions, which are message.channels,

90:25

message.groups, message.im, and

90:28

message.mpim. This just covers most of

90:31

the use cases that you might need. And

90:33

if you want to know a little bit more

90:34

about the permissions and how it all

90:36

works, I encourage you to go into the

90:38

Slack documentation and to check it out

90:40

yourself.

90:42

But now I'm going to go back to our end

90:44

workflow and reconfigure some of these

90:46

nodes. So in the first trigger node,

90:48

previously we had the bot app_mention event.

90:51

Now I want to add on just any event as

90:54

one of the event triggers. The reason

90:56

for that is I want to make sure that I'm

90:57

capturing direct messages when my team

91:00

members are chatting with me. And I also

91:02

toggled on the watch whole workspace option. The

91:04

reason for that is again I don't want to

91:07

be just watching a single channel or a

91:09

single chat. So when I toggle this on

91:12

any message received will constitute a

91:14

trigger event which would then run the

91:16

workflow. And scrolling down under

91:17

options what I want to do is also add

91:19

the usernames or IDs to ignore.

91:22

In this case, I'm going to fetch my own

91:23

ID to ignore my own messages because

91:26

right now how it works is it's going to

91:28

trigger on any event, which means it

91:30

will also include any messages that I've

91:32

sent to any channels as a trigger event.

91:35

So, how do I get the user ID?

91:38

Well, you can get your user ID from your

91:39

Slack channel by just simply clicking on

91:42

profile

91:45

and clicking on the three dots here. And

91:47

you're going to be able to copy the

91:48

member ID. So that's the ID and you can

91:52

go back to your n8n workflow to the

91:54

trigger node

91:56

and essentially paste the ID under the

91:58

expression for the usernames or IDs to

92:00

ignore.

92:03

Cool. For this demo, I've created a

92:04

fictitious workspace. I've added one of

92:07

my colleague Michael Forester into the

92:09

channel. And obviously this is not the

92:11

real Michael. This is just a profile

92:12

that I've created. And again, as you can

92:14

see, this is just a demo workspace. And

92:17

I'm even on a free account on the left

92:19

hand side. You can see that it's 23 days

92:21

left on the trial.

92:24

And what I've done was I created another

92:26

account as a fake Michael Forester. And

92:29

it says Michael F,

92:32

which is this

92:34

account. And the left hand side, you see

92:36

this is my account. And the reason why

92:37

I'm setting up these two accounts is so

92:39

that I can show you how it all works

92:40

when my team member chats with me on a

92:42

direct message. So now I'm going to try

92:44

to complete the reconfiguration of the

92:46

entire uh workflow. And what I'm going

92:49

to do here is go into the agent node and

92:52

everything stays the same with the

92:53

exception that I've added a system

92:54

message here. So essentially what I'm

92:57

telling the agent is that you are

92:59

Maronei, a team member at KodeKloud, and

93:01

your job is to impersonate Maronei as well

93:03

as you can and respond to his team

93:04

members' messages on his behalf. Sound

93:06

friendly and natural in a typical tech

93:07

working environment. So as you can see

93:10

is a pretty broad system message. Um,

93:12

you can obviously prompt this much much

93:15

better, but I'm going to just start with

93:17

that.

93:18

But one more thing that I'm going to do

93:19

is I'm going to go to a chat model and

93:21

I'm going to just toggle this to GPT-5

93:23

because it just came out.

93:30

And GPT-5 in my experience has been great

93:32

at responding to complex queries um and

93:35

using tools when required.

93:39

So I'm going to leave it as that. The

93:41

simple memory is still attached.

93:45

And on the last node, which is a send

93:47

message node, I made two changes here.

93:49

And the first one is, as you can see,

93:50

the channel ID, I made it into dynamic

93:53

ID. It's showing an error here right now

93:55

because it doesn't have any data

93:56

populated just yet. But essentially, I

93:58

want to make sure that this is going to

94:00

be dynamic because the workflow now is

94:02

triggered by any events watching the

94:04

entire workspace. So, it's not

94:05

necessarily going to be restricted to

94:07

one single channel. So it could be a

94:08

direct message, it can be a group

94:09

message as well as a channel message. So

94:11

I just want to be able to dynamically

94:13

reply to that specific channel where I'm

94:16

interacting with people.
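
In practice, making the channel dynamic just means replacing the hardcoded ID with an expression that reads it off the trigger event, something like the line below (a sketch; the trigger node name and the field path depend on your setup, so check the trigger's output data):

{{ $('Slack Trigger').item.json.channel }}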

94:20

And the second thing that I'm going to

94:21

change is under option, I'm going to say

94:24

send as user. I'm going to put in my

94:27

username here which is Mark. And that's

94:29

it.

94:34

Okay. So I'm going to hit execute

94:35

workflow here.

94:38

As you can see, it's waiting for a

94:39

trigger event. So, I'm just going to go

94:41

back to our Slack app.

94:53

In this case, I'm going to use the

94:55

Michael Forester account here to chat with

94:57

me. So, I'm going to say, "Hey, Marone."

95:02

All right. So, let's go back to the

95:03

workflow. As you can see, it's getting a

95:05

trigger. It's hit the AI agent and

95:08

it's responding to my message. So, let's

95:10

see what kind of message or response

95:12

that I'm receiving back. It says, "Hey,

95:15

what's up? Need an update on anything

95:16

AWS/resources,

95:18

or is there something I can help

95:19

unblock?" All right, sounds pretty

95:21

helpful. And I think it threw this in

95:22

because I said, you know, pretend to be

95:24

somebody at KodeKloud. So, KodeKloud's

95:26

associated with cloud technologies, etc.

95:29

So, it's probably just throwing that in.

95:31

Okay. So, I'm going to go back to the

95:32

workflow. I'm going to hit execute

95:34

workflow again. So that it's now waiting

95:36

for my trigger.

95:38

And using Michael's profile, I'm going

95:40

to type.

95:43

How was your weekend? And I'm just going

95:45

to stay social. Not going to talk about

95:47

anything about work. So there you go.

95:50

And it's responding pretty good. Kept it

95:53

low-key. Got some rest. Did a bit of

95:54

planning over the week. I also sketched

95:56

out a couple of tweaks for the AWS and

95:58

the zero pipeline that I'll share after

96:00

the stand up. Cool. All right. Um, it's

96:02

adding its own flair. So, not sure about

96:04

that. Uh, probably need to work on the

96:06

prompt a little bit, but cool. Um, so,

96:11

so it's working quite well now.

96:13

Responding as myself, even though I

96:15

don't normally sound like that. Um, I

96:17

don't necessarily work intimately with

96:19

all the cloud stuff, but I'm just going

96:20

to go back to the workflow here. And I

96:22

want to try to make the agent a bit more

96:24

useful and give it more context uh than

96:26

it currently is. And this is where I'm

96:28

going to try to introduce a pretty

96:30

simple RAG system. And in this case,

96:32

I'm just going to add a tool and this is

96:34

just going to be a Google doc tool

96:39

because the reason why I'm doing that is

96:40

I realized that people are going to

96:42

start asking me about projects etc.

96:45

So what I did was I created a document

96:48

which is a Google doc document and

96:51

essentially this is completely

96:52

fictitious but it outlines kind of the

96:54

update of the project progress in terms

96:57

of some of the stuff that we're trying

96:58

to do and this is again completely fake

97:01

but it's just a document so that we can

97:03

see how it all works. All right cool I

97:07

have this Google doc drafted out. This

97:09

is where I put all the fictitious

97:10

updates. So I'm going to connect that

97:11

here and under the Google doc node what

97:14

I want to do is a get operation

97:18

and I'm going to go back and just copy

97:20

link which is a URL link of the cloud

97:21

project status

97:23

and I'm going to paste that URL here.

97:27

All right. So I'm not going to do

97:28

anything with this node. This is just an

97:30

attached tool. I'm going to go back to the

97:31

AI agent and under system prompt I'm

97:34

just going to add a tool description.

97:37

Use the Google doc tool when asked about

97:45

project updates.
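
If you want to tighten this up later, one option is to spell the tool out more explicitly in the system prompt, along the lines of the sketch below (the wording and tool name here are just illustrative, not the exact prompt used in the video):

You are Maronei, a team member at KodeKloud. Impersonate Maronei and
reply to his teammates in a friendly, natural tone.

Tools:
- Google Docs "Cloud Project Status": use this whenever someone asks
  about project updates, and quote figures from the document rather
  than inventing them.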

97:50

Cool. All right. So, I'm going to just

97:51

make that change. I'm going to save that

97:54

and I'm just going to click on execute

97:56

step here and go back. As Michael,

98:02

I'm gonna ask:

98:05

how

98:06

is your project going?

98:12

As you can see, it is kicking it to the

98:15

AI agent. It is thinking. It is going

98:17

into the document checking the latest

98:19

update of the project before kicking it

98:22

back into Slack. So, let's see what it

98:24

responds to me.

98:27

So, there it goes. It says: quick update on your

98:28

project: main objective interactive lab

98:31

design done, 65% complete, ETA 15th

98:34

October 2025. Now let's see if that's

98:37

correct

98:45

so this says that the lab is 65%

98:47

complete and expected completion 15th

98:50

October 2025. So it's successfully

98:52

fetching the relevant information into

98:54

the message.

99:00

So, it's giving me the right

99:01

information, but as you can see, it's

99:02

missing a little bit of a human touch.

99:04

So, that's probably something I can work

99:05

into the assistant prompt for it to

99:07

respond more like me and more humanlike.

99:09

But at the very least, this gives you a good

99:10

understanding on how you could leverage

99:13

an AI agent. Maybe not to impersonate

99:15

yourself just yet, but in a lot of other

99:17

use cases. As you can see, once you've

99:19

created a rag agent like this, this

99:21

could function as your technical support

99:23

or just an FAQ bot.

99:27

And this could work for HR, legal,

99:30

marketing, whatever the case may be, or

99:32

even customer support. But there you go.

99:34

This section is to really just

99:35

familiarize yourself with the Slack

99:37

permission logic and how you can

99:38

potentially leverage that to its full

99:40

extent.

99:42

Again, this is just for fun, and we're

99:44

not recommending you to replace your day

99:46

job with an actual AI agent just yet.

99:49

See you in the next one. In this

99:50

section, we're going to walk you through

99:52

how to set up the Slack API and connect

99:54

it to your n8n. And then we're also

99:57

going to go through the various

99:59

different permission levels as well as

100:00

the logic in terms of integrating Slack

100:02

into your n8n workflow. If you're

100:04

already familiar with how Slack API

100:06

works, feel free to skip this section

100:08

and go to the next one. So, first where

100:10

you want to head to is the Slack API

100:12

page, which is api.slack.com.

100:15

And once you've signed in, you want to

100:16

go to your app section and hit create

100:18

new app. And what you're going to do is

100:20

come up with an app name. In this case,

100:21

I'm going to call it the n8n demo bot.

100:23

And pick your workspace and hit create

100:25

app. Now, what you want to do in your

100:26

bot configuration page is first head

100:29

over to permission. So, there are a

100:30

couple things to keep in mind here in

100:32

terms of the architecture of the bot.

100:34

So, first we need to configure the

100:35

permission level that we want to grant

100:37

the bot on Slack. In this case, we want

100:39

to scroll down to the permission scope

100:41

and we're going to hit add an OAuth scope.

100:43

In this case, a couple of basic

100:44

permissions that I want to start off

100:46

with. So the first being the obvious

100:48

which is app_mentions:read. So this

100:50

is going to let the bot read any

100:51

messages that you've sent to the bot

100:53

directly by pinging it. And the second

100:55

one that we want to add will be the chat

100:58

write scope (chat:write), which allows it to send messages

101:00

as the n8n demo bot. And just to make sure

101:02

that it could read our messages on

101:04

channels, we want to add channels:read and

101:06

groups:read as well. Lastly, we want to

101:08

make sure that they could actually see

101:10

the name of the people that are sending

101:12

the messages. So we're going to add the

101:13

users.profile:read as well as users:read.

101:16

So that should be enough for now. And so

101:17

we're going to go ahead and install the

101:19

bot to our Slack workspace, which is

101:21

called the Slack automation lab. And

101:23

we're going to click allow. And again,

101:24

you need to be an administrator of your

101:26

Slack account to be able to do this. As

101:28

you can see now, we've got a bot user OAuth

101:30

token generated. So we're going to hit

101:32

copy on this because we're going to use

101:33

this to create a Slack credential back

101:35

in our n8n workflow. So going back to our

101:38

n8n canvas, the first trigger node that we

101:39

want to introduce here is the Slack

101:41

trigger node. And I want to make sure

101:43

that we pick from the selection of the

101:45

trigger nodes and not the action node.

101:47

In this case, we're going to hit the

101:48

on bot app mention. And I already have a

101:50

Slack credential preset up here, but I'm

101:52

just going to recreate another one just

101:54

so that I can show you how it's done.

101:55

So, we're going to name this Slack n8n

101:57

demo CodeCloud. And I'm going to

102:00

paste the key here and hit on save. As

102:03

you can see, the green bar shows up.

102:05

Connection tested successfully. So, I'm

102:06

going to go back to the API page here.

102:08

And right now I'm going to go to the

102:09

event subscription because I want to

102:11

make sure that's listening to the events

102:13

that I've specified. I'm going to toggle

102:14

this to on. And for the request URL is a

102:17

URL that I'm going to grab from my n8n

102:19

workflow. So I'm going to hit the web

102:21

hook URLs here. As you can see, there

102:23

are two URLs provided. The first one is

102:25

the test URL and then we've got the

102:27

production URL. The production URL is

102:29

the URL that you're going to use when

102:30

you toggle into production. But right

102:33

now, because we're doing it on staging,

102:34

we're just going to take the test URL,

102:36

copy it to our clipboard, and go back to

102:38

the API page and paste the URL. And as

102:41

you can see, when you do that, it's

102:42

going to say that your URL didn't

102:43

respond with the value of the challenge

102:45

parameter. This is because you need to

102:47

go back to your Slack trigger and

102:49

execute step and your workflow will be

102:52

ready to receive the trigger. That's

102:53

when they will send back the challenge

102:54

parameter to the Slack API.
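What Slack is doing here is a one-time URL verification: it POSTs a JSON body containing a challenge value and expects it echoed back. A minimal sketch of that handshake, assuming Flask; n8n's Slack trigger answers this for you while the workflow is waiting for the trigger.

```python
# Minimal sketch of Slack's URL-verification handshake, assuming Flask.
# n8n's Slack trigger handles this automatically while executing; this
# just shows what "echo the challenge" means.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def slack_events():
    body = request.get_json(silent=True) or {}
    if body.get("type") == "url_verification":
        # Slack expects the challenge value returned verbatim.
        return jsonify({"challenge": body["challenge"]})
    return "", 200
```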

102:56

Now, in this case, before we can hit execute step, we

102:59

need to populate the field, which is the

103:00

channel to watch. And there are a couple

103:02

ways to pick your channels. As you can

103:04

see, you can either get it from this, by

103:05

ID, or by URL. So, we're going to pick

103:08

from this. Click on all Slack automation

103:10

lab. So, this is a general channel that

103:12

anybody can join. So, I'm going to hit

103:14

execute step. I'm going to go back to

103:15

the API page and hit retry. And this

103:18

time, it's going to say verified. And just

103:20

before we go, I want to make sure that

103:21

subscribe to the right event. So, under

103:23

subscribe to bot event, I want to add

103:25

a bot user event. In this case, I want to

103:27

add app_mention, which is just making

103:29

sure that it's listening every time that

103:31

the bot's been mentioned. So, I'm going to

103:33

hit save changes, and we should be ready

103:35

to go. All right. And then, going back

103:36

to our Slack page, as you can see, it's

103:38

listening for an event. And we don't

103:40

have an event right now because

103:41

nothing's been typed into our Slack

103:43

channel. And here, for it to listen to

103:44

the channel, we need to add the app into

103:46

the channel. And the way to do that is

103:48

to go up to the member section here.

103:50

Click on that. And where you want to go

103:51

is toggle over to the integrations tab

103:54

and select the app that you want to add.

103:58

So there you go. The n8n demo bot is now

104:00

added to the channel.
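Under the hood, sending a message as the bot is one call to Slack's chat.postMessage Web API with the bot token. A rough Python equivalent; the token and channel ID are placeholders for your own values.

```python
# Rough equivalent of the Slack "send message" node: one POST to the
# real chat.postMessage endpoint. Token and channel ID are placeholders.
import requests

BOT_TOKEN = "xoxb-your-bot-token"

resp = requests.post(
    "https://slack.com/api/chat.postMessage",
    headers={"Authorization": f"Bearer {BOT_TOKEN}"},
    json={"channel": "C0123456789", "text": "Hello from the n8n demo bot"},
    timeout=10,
)
print(resp.json().get("ok"))  # True if Slack accepted the message
```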

104:03

Now we're going to just do a trial run

104:05

by hitting execute step.

104:08

And that's going to wait for any

104:09

triggers while listening to the channel.

104:12

And going back to the channel, what I'm

104:13

going to do is I'm going to ping the n8n demo bot

104:17

and I'm just going to simply say hello.

104:19

So I'm going to go back to my n8n. As you

104:22

can see, it's now populated the Slack

104:24

trigger node. Now for Slack, there's quite

104:26

an extensive amount of output payload and

104:29

the information that gets passed on to

104:31

the triggers. So of course the most

104:34

important bit is the text itself or the

104:36

content of the text which is hello.

104:39

But as you can see, you're also being

104:40

passed other types of data like the user

104:42

ID as well as the channel ID and the

104:45

event timestamp. So all of those are

104:47

going to be useful depending on the use

104:49

case of your workflow.
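For reference, an app_mention event payload looks roughly like this; all values are illustrative placeholders.

```python
# Roughly the shape of the app_mention event the Slack trigger outputs;
# every value here is an illustrative placeholder.
event = {
    "type": "app_mention",
    "text": "<@U_BOT_ID> hello",       # the message content
    "user": "U0123456789",             # who sent it
    "channel": "C0123456789",          # where it was sent
    "event_ts": "1726000000.000100",   # event timestamp
}
```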

104:53

Now once you've set it up, the next node

104:56

just to play around with this is we're

104:59

going to add an AI agent node

105:05

and we're going to pick define below

105:08

and we're going to drag from Slack

105:11

the text content

105:15

and we're going to add a chat model.

105:21

In this case, I'm just going to pick

105:22

GPT-4o mini.

105:26

I'm also going to add some memory

105:28

because I want to make sure that it's

105:30

going to be able to recall contextual

105:33

knowledge or information from previous

105:34

conversations.

105:36

So, in this case, the session ID that I

105:38

want to pass to it is a channel ID.
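Keying the memory on the channel ID simply means keeping one conversation history per channel. In plain Python the idea is just:

```python
# Sketch of memory keyed by session ID: one running message history per
# Slack channel, appended on every turn and replayed to the chat model.
from collections import defaultdict

memory = defaultdict(list)

def remember(channel_id, role, content):
    memory[channel_id].append({"role": role, "content": content})
    return memory[channel_id]  # the history to send with the next request

remember("C0123456789", "user", "hello")
remember("C0123456789", "assistant", "Hello! How can I assist you today?")
```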

105:48

I'm not going to attach any tool for now

105:50

because essentially what I want to

105:51

happen is just for the AI agent to be

105:53

able to chat with me on Slack. So I'm

105:55

going to add the third node which is the

105:57

Slack node and this time

106:01

I'm going to allow it to send message

106:03

back. So pick the right Slack

106:04

credential, in this case the Slack n8n

106:06

demo CodeCloud,

106:08

resource as message, operation as send. And

106:11

we're going to choose send message to

106:13

channel and again we have to provide the

106:15

channel ID. Of course, you can pick from

106:17

the list, but in this case, I'm going to

106:19

pick from the ID. And how do we get this

106:21

ID? In this case, we want to go back to

106:23

the Slack channel.

106:25

As you can see, if I click on the title

106:28

of the channel or the name of the

106:29

channel

106:32

and I scroll down here, you're going to

106:34

see the channel ID, C099AJ1KP46. So, I'm

106:39

going to copy that and go back to my

106:41

n8n workflow and paste that in. So

106:44

I'm just trying to show you that there

106:46

are two ways to do this which is one by

106:48

the list and the second one by the ID.

106:50

Actually there is a third one which is

106:52

by URL

106:54

but typically selecting by ID will work

106:56

for all cases.

107:03

Okay. So I don't have the parameters yet

107:05

to drag into the message text. So I'm

107:07

going to execute previous nodes.

107:11

And there you go. There's an output

107:12

which is hello. How can I assist you

107:13

today? I'm going to drag that, put it

107:15

into the message text,

107:18

and I'm just going to hit execute step.

107:20

All right, let's go back to our Slack

107:23

channel. And as you can see, it's

107:25

replying to me on the channel saying,

107:27

"Hello, how can I assist you today?" And

107:30

it also says automated with this

107:32

workflow. So, we just want to click on

107:35

add option. We want to click on include

107:37

link to workflow and toggle that off.

107:40

So, I hit execute step again.

107:45

And when you go back to a slack channel,

107:46

it says, "Hello, how can I assist you

107:48

today" without the automated n8n

107:50

workflow appended to it?"

107:53

Cool. So that's a simple way to connect

107:55

your n8n workflow to Slack. And of

107:57

course, there's a lot more cool stuff

107:58

that you can do on Slack, such as

108:00

configuring the Slack model and also

108:02

being able to reply as a user. I'll see

108:04

you in the next section. So one of the

108:06

most useful features of n8n is the

108:08

ability to tap into thirdparty tools to

108:10

be used on your workflows. And these

108:13

could include tools that are not

108:14

natively available on n8n. And the ways

108:16

you would do this would be through API

108:18

calls, MCPs, or webhook calls. So in this

108:21

section, we're going to run through the

108:23

foundational workings of a typical HTTP

108:25

request node as well as how to make API

108:28

calls in three different scenarios. So

108:30

the first scenario, we'll do a simple

108:32

API call to a public API endpoint that

108:35

doesn't need any authentication in order

108:37

to get some fun little facts about cats.

108:40

And on the second scenario, we're going

108:42

to make an API call to the open

108:43

weathermap.org

108:45

to get you the latest information about

108:47

the weather on a specific city that you

108:49

requested. And finally, we'll make an

108:52

API call to a web scraper called

108:54

Firecrawl that could intelligently

108:56

scrape any web pages that you like. So,

108:59

we're going to go ahead and build the

109:00

workflow here. And the first trigger

109:02

node that we want to choose in this case

109:04

is the trigger manually node. The reason

109:06

why we're choosing this is because it's

109:08

the simplest one. And in this case,

109:10

because there's no input that we want to

109:12

transfer into the HTTP node at this

109:14

point, we're going to start with this.

109:16

So, we're going to hit execute flow to

109:18

see what sort of data populates it.

109:20

So, we're going to open it up. And as

109:22

you can see, it's an empty payload in

109:24

the output data. And this is good enough

109:26

because the next node is just the HTTP

109:28

request node which does not need any

109:30

input. So in this case I'm going to hit

109:32

the HTTP request node. There are a

109:34

couple options that you can choose from

109:35

in terms of what type of HTTP request

109:37

that you're trying to make. The most

109:39

popular ones obviously the get and the

109:41

post. In this case we're going to stay

109:42

with get. And the public API URL

109:44

endpoint that we want to hit is this

109:47

catfact.ninja/fact.

109:49

So there's no authentication required

109:51

and we're just going to hit execute

109:52

step. And as you can see, it returns a

109:54

data output of a random fact about cats.
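For comparison, the same unauthenticated GET outside of n8n is a one-liner:

```python
# The same call the HTTP request node just made: a plain GET, no auth.
import requests

fact = requests.get("https://catfact.ninja/fact", timeout=10).json()
print(fact["fact"])  # a random cat fact string
```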

109:57

So the random fact that we fetch is two

109:59

members of the cat family are distinct

110:01

from all others. The clouded leopard and

110:03

the cheetah. The clouded leopard does

110:05

not roar like other big cats, nor does

110:07

it groom or rest like small cats. Okay,

110:09

so that's very interesting. So now we

110:11

would like to do something a little bit more

110:13

useful by calling an API on the weather

110:15

app. So in this case, we're going to

110:18

delete the HTTP node and reintroduce a

110:20

HTTP request node again. In this case,

110:23

we're going to choose get. And the

110:26

public endpoint that we want to hit in

110:27

this case can be found at

110:29

openweathermap.org/current.
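As a reference point, the documented current-weather request takes the city name and API key as query parameters; here's a sketch with placeholder values.

```python
# Sketch of the current-weather request with city name and API key as
# query parameters; YOUR_API_KEY is a placeholder.
import requests

data = requests.get(
    "https://api.openweathermap.org/data/2.5/weather",
    params={"q": "London", "appid": "YOUR_API_KEY", "units": "metric"},
    timeout=10,
).json()
print(data["main"]["temp"], data["clouds"]["all"])  # temp and cloud cover
```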

110:31

And you scroll down to the built-in API

110:33

request by city name. You'll be able to

110:35

copy and paste the API endpoints

110:38

straight into your workflow. So going

110:39

back to the workflow, we're going to

110:41

paste this. And before I do anything

110:43

else, I'm going to blow this up. And as

110:45

you can see, there are two variables

110:47

here. The first one being the city name

110:48

and the second one being the API key. So

110:51

for the city name, we want to pick a

110:53

city that we're interested in. In this

110:55

case, London. And for the API key, we

110:57

need to key in a valid API key, which

111:00

can be obtained from the website. So

111:02

going back to open weathermap.org,

111:04

the first thing you need to do is to

111:06

sign up, and it's free. And once

111:08

you've signed up and gone through a

111:09

little fun onboarding questionnaire, you

111:11

can head to my API keys and you'll be

111:13

able to view all the API keys that you

111:15

have with Open Weather. And you could

111:17

also generate a new API key by clicking

111:19

on this button over here. But in this

111:20

case, we already have one set up. So

111:22

we're just going to copy that and paste

111:24

it back in the workflow. Cool. Now that

111:26

we filled that in, we're going to

111:27

execute step and we're going to toggle

111:29

this into the schema format. As you can

111:31

see, the temperature, the maximum, the

111:33

minimum temperature, as well as a cloud

111:35

coverage situation. Now, as you might

111:37

have noticed, I've actually input the

111:39

API key in the API endpoint itself. But

111:42

a better practice within Noden is to set

111:45

up an authentication that's going to

111:46

persist even after you're done with a

111:49

node, which means you can use the same

111:52

credentials or API keys in a separate

111:54

node if you happen to need to work with

111:58

this particular app. So, I'm going to

111:59

show you how that's done. In this case,

112:01

I'm going to hit authentication, select

112:04

generic credential type, and choose

112:06

header auth. And why did I choose that?

112:08

The reason why I'm choosing header auth is

112:10

because the API key itself is contained

112:13

within the header section. Now, we want

112:14

to set the new credential. So, we're

112:16

going to name this credential open

112:18

weather map demo. Okay. And under name,

112:22

I'm going to do X-API-Key.

112:25

And under value, I'm going to paste the

112:27

same API key. I'm going to hit on save.

112:30

And there you go. My credential is saved

112:32

under the name open weather map demo.

112:34

Now after setting it up as header auth,

112:36

what I can do now is actually remove the

112:38

API key section from the API endpoint.
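The persistent-credential idea maps onto plain code as setting the header once and reusing it everywhere. A sketch follows; the X-API-Key header name mirrors the credential configured above and is an assumption rather than OpenWeatherMap's documented scheme (their docs use the appid query parameter).

```python
# Sketch of "set the auth header once, reuse it everywhere", like one
# n8n header auth credential shared across nodes. The X-API-Key header
# name mirrors the credential above and is an assumption, not the
# documented OpenWeatherMap scheme (their docs use the appid parameter).
import requests

session = requests.Session()
session.headers["X-API-Key"] = "YOUR_API_KEY"

# Every request through this session now carries the key automatically.
resp = session.get("https://api.openweathermap.org/data/2.5/weather",
                   params={"q": "London"}, timeout=10)
```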

112:41

Now let's run that. As you can see, it's

112:44

giving me back the same information. The

112:46

only difference is now my API key lies

112:49

within the preset header authentication

112:51

section here. And if I were to create

112:53

another node that works with

112:55

openweathermap.org, I can simply use

112:57

the same header auth credentials wherever

112:59

I go. So this is just a very easy way to

113:02

reuse credentials across your workflow. Great. So now

113:04

we're done with the weather app. What we

113:06

want to do is to try a different

113:07

scenario where we're actually calling an

113:10

API endpoint of a web scraper. So let's

113:12

delete this node. And what we want to do

113:14

is again to reintroduce the HTTP request

113:17

node. And this time we're going to make

113:18

a post request to a web scraper endpoint

113:21

called Firecrawl. So we're going to

113:22

rename this firecrawl scrape. So when

113:25

you head to firecrawl.dev, this may look

113:27

different depending on whether you've

113:29

signed up or not. I'm already signed in,

113:30

so this page will look slightly

113:32

different for you, but essentially to

113:34

sign up, you just have to go through a

113:36

fun little onboarding questionnaire and

113:39

pretty much it's straightforward for you

113:40

to sign in. And once you're in, you want

113:42

to head over to the dashboard and even

113:44

on a free plan, you'll be given an API

113:46

key to start scraping with. So where I

113:49

want to go from here is to go to the

113:50

docs because this is where all the API

113:53

documentation is going to rest at and go

113:55

to the API reference. In this case, I

113:57

want to scrape. So this is the post

113:59

request for scrape. I'm going to hit on

114:02

try it and copy the curl command up

114:04

here. And before copying the curl

114:05

command up here, what I'll do is I'm

114:07

going to put in the URL of the page that

114:10

I want to scrape. So in this case, the

114:11

page that I want to scrape is actually

114:13

TechCrunch, specifically the Gen AI

114:16

section of the news. So, I'm going to

114:17

copy that and go back and just paste the

114:20

URL here. As you can see, this is going to

114:22

change the JSON body of the URL to the

114:26

URL that I've intended here. Of course,

114:28

you can do this manually within the

114:30

workflow itself as you import the curl

114:32

command, but I thought it'd be easier to just

114:34

do it beforehand. And we can just copy

114:36

the whole curl chunk and go back to the

114:38

workflow here and click on import and

114:40

paste the entire curl command. As you

114:42

can see, there's a post method to this

114:44

particular URL. Every one of these

114:46

fields is configured the way we need them

114:48

to be. And under authentication, we're

114:50

going to set this up again the same way

114:52

we've set it up for the open

114:53

weathermap.org. So, we're going to hit

114:55

generic credential type and then we're

114:57

going to pick header auth because that's

114:59

where the authentication credentials

115:02

lie for this particular API call. And

115:04

then we're going to create a new

115:05

credential. So, we're going to call this

115:07

credential fire crawl codecloud demo.

115:10

And just before I go any further here, I

115:12

want to go back to the documentation and

115:14

look at the header within the curl

115:16

command. In this case, what we're going

115:18

to pay attention to is the name of the

115:20

authentication here, which is

115:21

authorization. And we're going to type

115:23

in authorization. And under value, we see

115:26

that it's got bearer space API token.

115:29

So, what we want to do is go back. I'm

115:31

going to toggle this to expression so

115:32

you can see what I'm typing in is bearer

115:34

the API token which you're going to get

115:36

by going to the dashboard and copying

115:38

the API key here and pasting that into

115:41

the field. So there you go. Hit save and

115:43

that credential is going to be saved.
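Stripped of the n8n UI, the imported curl command boils down to one authenticated POST. A sketch follows, assuming Firecrawl's v1 scrape endpoint; the token and target URL are placeholders.

```python
# Sketch of the scrape call, assuming Firecrawl's v1 scrape endpoint.
# The token and target URL are placeholders.
import requests

resp = requests.post(
    "https://api.firecrawl.dev/v1/scrape",
    headers={"Authorization": "Bearer fc-YOUR_API_KEY"},
    json={"url": "https://techcrunch.com/"},
    timeout=60,
)
print(resp.json()["data"]["markdown"][:200])  # scraped page as markdown
```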

115:45

Okay. So going back you notice that

115:47

under the send headers portion is toggled

115:49

on, and it actually has the same

115:52

thing which is authorization which we

115:54

don't need anymore because we already

115:55

have it all set up here. So we're just

115:56

going to toggle that off and then focus

115:58

on the body right here. So, we've

116:00

already set the body up earlier before

116:02

copying it over. So, there's nothing

116:04

much we need to do here. So, we're just

116:06

going to hit execute step. There you go.

116:08

So, there are a couple of output fields here.

116:09

As you can see, the main content is

116:12

really under the markdown. And I'm going

116:13

to toggle this to a table format so it's

116:15

easier to see, but it basically feeds

116:17

you the headlines of the news of the day

116:19

that has to do with Gen AI. So, as you

116:21

can see, TechCrunch covers the latest

116:23

news, da da da, headlines only, learn

116:25

more, and here are the links to those

116:27

headlines. And yeah, so this is a simple

116:30

scrape. So it's going to scrape all the

116:31

information within that particular web

116:33

page. But of course, you can format this

116:35

more within firecrawl itself. But in

116:38

this section, I just want to show how

116:40

easy it is to set up an API call to call

116:42

a web scraper or any other type of apps

116:45

that might be of interest or might be

116:46

useful to your workflow. So, building on

116:48

to that, I wanted to show what happens

116:51

when you have a post request that

116:53

requires a get for you to obtain the

116:55

result of your post request in the first

116:58

place. So, we're going to stick with

116:59

firecrawl. In this case, under fire

117:01

crawl, there are a couple of features

117:03

that we can use. So, one of them is

117:05

extract. So, how's extract different

117:06

from scrape? Well, extract is an AI

117:08

powered way for Firecrawl to extract

117:10

structured data from either single page,

117:12

multiple page, or even an entire website

117:15

and any of the subdomains or subpages

117:17

associated to those websites. So, it's

117:18

an extremely useful tool for us to use.

117:21

So, what we want to do here is head over

117:23

back to the docs. Where you want to go

117:25

is go to the documentation tab and

117:28

scroll down to extract. There's a pretty

117:30

good curl example that we can make use

117:32

of here. So I'm going to copy the entire

117:34

curl command here and just go back to

117:36

the workflow and hit import curl. You

117:38

can see all the fields here that needs

117:40

to be configured are all added. And what

117:42

I want to do first here is to select the

117:44

same authentication that I've set up

117:45

previously under generic credential type

117:48

and header o and firecrawl codecloud

117:50

demo. So as you can see since we have

117:52

already set up the header off here we

117:54

can toggle this off and move on to the

117:56

JSON body. Now the reason why I picked

117:58

the example is because it's already been

117:59

pre-filled. So it's easy for us to run

118:01

this example so that you don't have to

118:03

really fill in every single section or

118:06

variables that are required to be filled.

118:08

What's happening is that the prompt

118:10

that's given is to extract the company

118:12

mission, whether it supports SSO, whether

118:13

it is open source, or whether it is in Y

118:16

Combinator, from the page. So that's the

118:18

prompt for extract. So it's going to try

118:20

to get relevant information surrounding

118:23

that prompt. So let's go back. We're

118:24

going to stick with that and we're going

118:25

to hit execute step. So when we do that,

118:28

as you can see, the node says it is

118:30

executed successfully. What it means is

118:32

it's pushed through the post request to

118:34

the API endpoint successfully and it's

118:36

currently processing the post request.

118:38

Now because it's extracting for a couple

118:39

different pages and specific

118:41

information, it's going to take some

118:42

time. So what we want to do now is wait,

118:44

right? So and how do we wait within the

118:46

workflow? Well, you can actually

118:48

introduce what we call the wait node. So

118:50

introducing the wait node, you can

118:51

actually configure the amount of time

118:52

that you want to wait. So in this case

118:53

we're going to hit 30 seconds and of

118:55

course the wait unit is in seconds and

118:57

you can change that to minutes, hours

118:58

and days depending on your needs but

119:00

we're going to wait 30 seconds here.

119:02

I'm going to rename this node accordingly.

119:04

Okay. So I'm going to hit execute step

119:05

as well. So the data is populated here.

119:08

But what I want to do with the fire

119:10

crawl extract post is that I want to pin

119:11

the data because every time I run this

119:14

particular node, it's going to exhaust

119:15

my API token. It's going to consume my

119:17

API token and I want to make sure that

119:19

the pre-populated data stays here for

119:22

every execution run. So that's going to

119:24

be super helpful for me. As we wait for

119:26

the wait node to complete, what

119:29

we want to do here is to introduce the

119:31

HTTP get request. So going back into the

119:33

Firecrawl documentation, what you want to

119:36

do is scroll down for that example and

119:38

here's the curl get command that you can

119:40

copy and go back to your workflow. Hit

119:43

import curl. And basically all the

119:46

required fields have been set up. So if

119:49

you look at the documentation, you'll

119:50

see that the extract ID comes from the

119:52

slash after the endpoint. So what we

119:55

want to do is introduce the same here.

119:57

Use a slash and toggle that to

119:59

expression and drag the request ID, the

120:01

post request ID into the field here. And

120:04

of course, this is the same post request

120:05

ID as what the previous nodes was

120:09

passing the same data into the wait

120:11

node. and hence we're using that in the

120:13

field here. So essentially what we're

120:15

telling it to do is to get the result of

120:18

this particular post request ID and

120:20

return those results here. So I'm going

120:22

to rename this firecrawl get request.

120:26

All right, things are pretty much set up

120:28

here but we have to again select the

120:30

same authentication method in this case

120:32

the firecrawl codecloud demo and then

120:35

we're going to hit execute step. So

120:36

there you go. As you can see, move this

120:39

into a schema format and it's extracting

120:41

the information that we want, which is a

120:42

company mission. At the same time, it

120:45

also tells us the status which is

120:47

completed. And this is important because

120:49

as you wait, I know we put in 30 seconds

120:52

here. It's not necessarily going to be

120:54

completed within 30 seconds. So, let's

120:56

say after the post request, we wait 30

120:58

seconds and then the get request

121:00

launches and the status is not yet

121:02

completed. What will happen then? Well,

121:03

the workflow will break down. there will

121:05

be an error and the workflow will stop

121:07

working. So to avoid that scenario, what

121:09

we want to do is to create what we call

121:10

a loop and in this case we're going to

121:12

add an if node so that we can continue

121:15

that logic. And what we'll do here is

121:18

under firecrawl get request we want to

121:21

take the status and drag it into this

121:24

field here. And because this is going to

121:26

be strings as you can see completed is a

121:28

string. If the string is equal to

121:31

completed, as you can see, I test ran

121:33

that and basically this is true. It

121:36

falls into the true branch. It's

121:37

populated. False branch is not

121:39

populated. It's true because the JSON

121:41

status here is completed as per the

121:44

requirement. So if that's true, it's

121:47

going to continue and maybe send that

121:49

completed extracted data onto something

121:51

else such as maybe sending it to let's

121:54

say a Gmail send a message node, and we

121:57

can populate the content which is in

121:59

this case the company mission, let's say,

122:01

and for the message I'll just say here

122:03

company mission. And for the 'to' field, let's say

122:06

I'm just going to pick maronecloud.com

122:08

so it doesn't matter because this is

122:10

just an email or any other output node

122:12

that you want to set However, if the

122:15

status is false, what it means is that

122:17

after waiting for 30 seconds, when we

122:19

try to get the result of the post

122:21

request, it's still not ready. So, what

122:22

happens is under the false branch, we're

122:24

going to wait another 30 seconds because

122:27

we want to wait until it is ready. Okay?

122:29

And we're going to wait another 30

122:30

seconds and we're going to loop this

122:31

back onto the firecrawl get request. As

122:35

you can see, the yellow highlights

122:37

coming out because there are some

122:38

changes that we made to the node which

122:40

is connecting it with another potential

122:42

previous node. This loop will avoid the

122:44

breakdown of the workflow in the case

122:46

where after waiting for 30 seconds the

122:48

request is not ready and it's checking

122:50

that by the if node, because if the

122:52

request is not ready the status is going

122:54

to say in progress and therefore it's

122:58

not completed and it's false it'll wait

123:00

for another 30 seconds and try to get

123:02

the result again. And if still not ready

123:04

by then it's still going to go into the

123:06

false branch and it's going to wait

123:07

another 30 seconds and going to try to

123:09

get it again. So it's going to be in

123:10

this loop until it is ready and hence

123:13

the status is completed and becomes true

123:15

and then it sends out as an email. So

123:17

this is how we can use loop to avoid a

123:20

breakdown of workflow when it comes to

123:22

an API request that requires time to

123:24

process and it requires a second step of

123:27

getting the result from the app.
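The whole wait-and-if construction collapses into a familiar polling loop when written as plain code. A sketch follows; the endpoint shapes and fields are assumptions based on the docs shown above, and the token is a placeholder.

```python
# The wait node + if node loop as plain code: submit the extract job,
# then poll its result every 30 seconds until status == "completed".
# Endpoint shapes and fields are assumptions based on the docs above.
import time
import requests

HEADERS = {"Authorization": "Bearer fc-YOUR_API_KEY"}

job = requests.post(
    "https://api.firecrawl.dev/v1/extract",
    headers=HEADERS,
    json={"urls": ["https://firecrawl.dev"],
          "prompt": "Extract the company mission from the page."},
    timeout=30,
).json()

while True:
    result = requests.get(
        f"https://api.firecrawl.dev/v1/extract/{job['id']}",
        headers=HEADERS, timeout=30,
    ).json()
    if result.get("status") == "completed":  # the "true" branch
        break
    time.sleep(30)                           # the "false" branch: wait, retry

print(result["data"])  # ready to pass onward, e.g. into an email
```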

123:29

So we're just going to run this again just

123:30

to make sure it's populated. But

123:32

essentially that covers the principal

123:35

workings of how we would go about

123:37

building a workflow with HTTP request

123:40

node to access API endpoints of other

123:42

apps that are not necessarily native to

123:45

n8n. In this section, we're going to

123:47

build from scratch a text to image

123:48

workflow. This is what the workflow is

123:50

going to look like in the end. The goal

123:52

of this workflow is for you to begin

123:54

with a text input specifying how you

123:57

would like the image to be created in a

123:59

simple format and that will be passed on

124:01

to an image prompt agent that would then

124:03

create an effective prompt to be passed

124:05

to an image generation model in a

124:06

platform called WaveSpeed AI. Now WaveSpeed

124:08

AI is just one of the many

124:10

platforms that host video or image

124:12

generation models like V3 or Seedance. A

124:15

little caveat, it is a paid platform and

124:17

to be able to use the API calls, you'll

124:19

need to top up a minimum of $10. Now,

124:22

we'll walk you through how to set up a

124:23

post API call to WaveSpeed and how to

124:26

make a get API call to get the result of

124:29

the image from WaveSpeed and then how

124:31

to set up an if loop just to make sure

124:33

that the workflow doesn't break down

124:35

when the image or video isn't ready yet.

124:37

And finally, we're going to pass on the

124:39

result or the output of the image into a

124:42

simple Gmail. And the output node can be

124:44

replaced by any form of outputs such as

124:45

Telegram, WhatsApp or Slack. Now,

124:48

starting from scratch, the first trigger

124:49

that you want to hit in this case is the

124:51

chat trigger node. And again, we're

124:53

choosing the chat trigger node because

124:54

we want to be able to tell the workflow

124:56

a certain specification of the image

124:58

that you want to be generated. And this

125:01

doesn't have to be super detailed

125:02

because again, you will have an AI image

125:04

prompt agent that's going to help you

125:06

craft an effective prompt. But in this

125:08

case, I'm going to just populate the

125:10

chat node data with a simple create an

125:13

image of a cat flying through hoops of

125:19

rainbows. All right. And that data is

125:22

now populated in the payload. As you can

125:24

see, the chat input is create an image of

125:25

a cat flying through hoops of rainbows.

125:27

Cool. All right. And the second node

125:29

we're going to introduce here is the

125:31

OpenAI node, which is just a message

125:33

model node. In this case, I'm going to

125:35

rename this image prompt generation AI.

125:39

All right. So, the OpenAI account has

125:41

been connected as per as how we've gone

125:43

through in the previous section. And

125:45

we're going to choose a model here. And

125:47

we're going to pick GPT-4.1. And of

125:50

course, we're going to drag the chat

125:51

input into here as the user input. And

125:54

what I'm going to do here is I'm going

125:55

to add a system message just to specify

125:58

exactly what I want this node to do. And

126:00

here's the system message that I've

126:01

prewritten. I'm going to toggle this to

126:03

expression so I could blow it up. As you

126:05

can see, I did not come up with this

126:07

system prompt. What I did was I went to

126:09

chatgpt and essentially asked it to act

126:12

as a prompt engineer and come up with an

126:14

effective system prompt for a text to

126:16

image generation prompt engineer. So I

126:18

recommend that you do that for all

126:19

production cases because LLMs are just

126:23

much better at coming up with system

126:24

prompts than we are. And you don't

126:26

really have to. You can do it yourself, but

126:28

I would say that starting off with an

126:30

LLM and iterating it and improving the

126:33

version afterwards is a much easier way

126:35

to go about it. So, as you can see, it

126:37

says you're an expert text-to-image

126:39

generation prompt engineer working inside

126:40

an n8n automation. Your only task is

126:42

to generate clear, vivid, and effective

126:44

prompts to be passed to an image

126:45

generation API. I'm not going to read

126:47

the whole thing, but essentially it's

126:49

just giving it details on how we want

126:51

the output of the video or rather the

126:54

image prompt generation to be. Cool. All

126:56

right. So, I'm going to hit execute step

126:57

here and see what kind of output it

126:59

gives me. So, as you can see, the

127:01

content that has been output is an image

127:03

prompt generation, which is a playful

127:04

cat soaring gracefully through vibrant

127:06

glowing hoops made of rainbows suspended

127:08

in the sky surrounded by fluffy clouds,

127:10

dynamic motion, bright and whimsical

127:11

lighting, magical atmosphere, highly

127:14

detailed digital art fantasy

127:15

illustration style. So great. So it's

127:18

describing the image the background as

127:20

well as the style that we want it to be

127:22

in. Cool. All right. So after the image

127:24

generation prompt where we want the

127:26

output to go is to an HTTP request node to

127:29

the WaveSpeed API. So I'm going to rename

127:31

this WaveSpeed post because this is

127:34

going to be the post API that we're

127:36

going to call to WaveSpeed. So going

127:37

back to WaveSpeed, again if you guys

127:39

are new to this platform what you need

127:41

to do is sign up it's pretty easy and

127:42

straightforward there's going to be some

127:44

onboarding questionnaires and stuff like

127:46

that but it's pretty easy

127:48

straightforward to sign up and I'm

127:49

already signed in here where I will go

127:51

is I will go to the explore models here

127:54

and as you can see there are many models

127:56

that you can try and play around with

127:58

but in this case what we want to do is

128:00

we're going to checkbox the text to

128:01

image so that we are only given the text

128:04

to image models here and we're just

128:06

going to take the first one which is

128:08

Seedream by ByteDance and we're going to

128:10

toggle over to the API documentation

128:12

here. As you can see, there's a

128:14

post curl command that I can easily copy

128:17

and import to my workflow. So going back

128:19

to the workflow I'm going to click on

128:20

import curl. I'm going to paste the

128:22

entire curl command here. And there you

128:24

go. The fields are populated. The next

128:26

thing I want to set here is the

128:27

authentication method which as we've

128:29

gone through. We're going to choose the

128:30

generic credential type header auth. And

128:33

I already have a WaveSpeed credential

128:35

set up here. But what I'm going to do,

128:37

I'm going to create a new credential so

128:39

that it's clear on how to set this up.

128:41

So I'm going to call this WaveSpeed

128:44

credential demo. And under the name,

128:47

just to be sure, I'm going back to the

128:48

API documentation here. And under name,

128:51

so the name should be authorization. And

128:53

the value is bearer API key. And what I

128:56

want to do is I'm going to head over to

128:58

the API key section. And as you can see,

128:59

I already have an API key generated. But

129:02

what you can do is you can actually

129:04

create a new key and hit that create key

129:07

and it'll just give you a new API key

129:09

for you to copy from. But in this case,

129:11

I'm going to use the existing one. Going

129:13

to copy that. I'm going to go back and

129:15

under value, I'm going to toggle this to

129:16

expression so you can see it. I'm going

129:18

to type bearer, space, the API key, and I'm going

129:21

to hit save. Cool. So now that's set up.

129:24

Under the header section, I can toggle

129:25

this off because the authentication is

129:28

already contained within the header

129:31

section. So I can just remove that and

129:33

focus on the body section. Now there are

129:35

a couple fields here under the body

129:36

section and most of it is just how you

129:38

want the image to be the ratio as well

129:41

as the size here. But what we want to

129:43

pay attention to is really in this case

129:44

the prompt because we want to make sure

129:46

this prompt comes from the content

129:48

output of the image prompt generation

129:50

AI. And we're going to drag this here.

129:52

And there you go. The rest of the stuff

129:54

I'm going to just leave as is. But of

129:56

course as you were generating it you can

129:58

specify it to your requirements. But in

130:00

this case, I'm going to hit execute

130:01

step. And there we go. We got it out. So

130:04

the next node I'm going to introduce

130:05

here is just a simple wait node. Now

130:07

this is image generation. So which means

130:09

it's going to be pretty quick. As you've

130:11

seen the status here is already created.

130:13

But in this case, I'm just going to wait

130:14

15 seconds for it to complete. And I'm

130:17

going to hit execute step just to

130:18

populate this node. And just before I go

130:20

any further, I want to go back here and

130:22

pin this data so that I don't have to

130:24

rerun that every single run. It's going

130:26

to consume my API token. So I want to be

130:28

careful about that. And while that's

130:29

spinning up, what I want to do is start

130:31

setting up the get HTTP request, which

130:34

is the same node here. But in this case,

130:36

I want to go back to WaveSpeed's API

130:37

documentation and hit the curl command

130:39

for get result and go back to the

130:42

workflow import curl. Import the entire

130:44

thing. And again, all the fields are now

130:46

configured correctly. With the get API,

130:48

we want to make sure that we're pasting

130:50

in the right request ID. As you can see,

130:51

there's a variable here called the

130:53

request ID. And we just want to first of

130:55

all toggle this to expression and then

130:57

remove this with our request ID from the

131:00

wait node. And we're just going to drag

131:02

that right between the slashes here. And

131:04

what we're doing here basically is we're

131:06

calling the post API and WaveSpeed is

131:08

receiving the image request and it's

131:10

processing the image. We waited 15

131:12

seconds and now we're going to get the

131:13

image. But we've got to tell WaveSpeed

131:15

which image that we're trying to fetch

131:17

and we're doing that by dragging the ID

131:19

into the API endpoint. Here under

131:21

authentication we're going to choose

131:23

generic credential type header auth and the

131:26

same credential here and of course

131:28

similarly we can toggle off the headers

131:30

because we already have it within our

131:32

credential. Rename this to WaveSpeed

131:34

get and we're going to hit execute step.

131:36

So we're seeing a couple things in the

131:38

payload here but essentially the URL

131:40

link of the image is the output link

131:42

here. All right. So I'm just going to

131:43

copy that and paste it to see what the

131:46

image looks like. So this is what the

131:48

image looks like. It's a cat jumping

131:49

through hoops of rainbows. That looks

131:51

pretty cool. All right, now that that's

131:53

done, again, what happened here is that

131:55

the get API was called after the status

131:58

has been created. So, the result exists

132:00

and is valid. But what about the case

132:02

when WaveSpeed is taking its time to create

132:04

an image or when it's a video generation

132:07

model, it's probably going to take

132:08

longer than just 15 seconds. So, in the

132:10

scenario where the get API call is made,

132:13

when the image is not ready, it's not

132:15

going to give you any result. in fact

132:17

it's going to bug out or error out. In

132:19

that case, a workflow is going to stop

132:20

without rerunning it because essentially

132:23

it's just going to error out on this

132:24

node. So what we want to do to avoid

132:26

that scenario is we want to create an if

132:28

loop. In this case, what we want to do,

132:30

we want to drag the status string from

132:32

the WaveSpeed get HTTP request and

132:35

essentially say if it's equal to

132:37

completed, then it is true. I'm going to

132:39

hit execute step here and it goes into

132:42

the true branch because the status has

132:44

been completed. So the if branch acts

132:46

like a logic node where it can filter

132:48

out the output based on the logic that

132:51

you've set. So in this case the logic

132:52

that I've set is for the status to be

132:55

completed and that's true. It falls

132:57

under the true branch and it can

132:59

continue. In this case I'm going to set

133:01

the Gmail node: send a message. I've

133:04

already set up my Gmail account.

133:06

Essentially I'm going to send this to

133:08

myself. The subject is image generated.

133:11

And I'm going to drag the now variable

133:14

just so that we can tell by the title

133:16

when this is created. And email type can

133:18

be text because I just want the link in

133:20

the email. Nothing fancy. And the

133:22

message we're going to drag the output

133:24

here so that the link to the image is

133:27

sent to me via email. All right. So I'm

133:29

going to hit execute step. As you can

133:31

see, the email has been sent. And let's

133:32

see what the email looks like. So the

133:34

email looks like this. Nothing fancy.

133:36

It's just passing on the link to my

133:38

email address to my email inbox. I'm

133:40

going to hit that and open it up. And

133:43

here we go. That's the image. So going

133:44

back to the workflow or to the if loop.

133:46

So this is what happens if the status is

133:48

completed. The image is ready and it

133:50

gets passed on to the Gmail. However, if

133:52

the image is not ready, it's going to

133:54

fall into the false branch because the

133:56

status would now say that it's not

133:58

completed. It's probably going to say

133:59

that it's processing. It's going to fall

134:01

into the false branch. In this case, we

134:03

want to wait for another 15 seconds or

134:05

any other arbitrary wait amount. We're

134:08

just going to name that wait another 15

134:10

seconds. Okay. And I'm going to hit

134:11

execute step. In this case, nothing is

134:14

going to happen because it's under the

134:15

false branch. It waits 15 seconds. And after we

134:18

wait another 15 seconds, what we want to

134:20

do is to loop it back to the get node or

134:22

the HTTP request node here. If the image

134:26

is not ready, the status is going to say

134:28

incomplete or in progress and that's

134:30

going to fall into the false branch and

134:32

it's going to wait 15 seconds and it's

134:34

going to try to get the image again. And

134:36

then if it's still not ready, what

134:38

happens is it's still going to fall into

134:39

the false branch. It's going to wait

134:40

another 15 seconds and try to get it

134:42

again. And it's going to stay in this

134:44

loop until the image is ready. And then

134:46

it's going to fall into the true branch and

134:48

kick it out into the Gmail. Cool. I hope

134:50

that's a clear explanation of how to

134:52

build this workflow from scratch. In the

134:54

next section, we're going to build a

134:55

text-to-video workflow which is somewhat

134:58

similar to what we've built here with

134:59

the exception that it's going to

135:01

generate a video for us. I'll see you

135:03

there. In this section, we're going to

135:04

build a text to video workflow, which is

135:06

similar to the previous section, which

135:08

was a text to image workflow with some

135:11

minor but important distinctions. Now,

135:13

just like the previous workflow, we're

135:15

going to start off with a chat trigger

135:17

so that you can pass on the

135:18

specification of the video that you want

135:20

to create and that's going to be passed

135:23

on to a video prompt agent instead of

135:24

an image prompt agent. This time we're

135:27

going to do a text to video model

135:30

on V3.

135:32

In your labs, you're going to already

135:34

have these two nodes set up.

135:36

And again, the only difference in this

135:38

case from the previous node

135:40

is the system prompt that we've

135:42

pre-populated in the video prompt agent.

135:44

So you don't have to. So if you toggle

135:46

the expression, and I'll blow it up so

135:47

you can see it. It is now saying that

135:50

you're an expert text-to-video generation

135:52

prompt engineer working inside an

135:54

automation. And again, you can generate

135:55

this yourself through GPT or any LLM of

135:58

your choice to act as a prompt engineer.

136:01

I'm not going to read the whole thing,

136:04

but it is pretty similar to the text to

136:07

image generation prompt engineer before.

136:13

Now, continuing to build the workflow,

136:14

the next node that you want to hit here

136:15

on your labs will be the HTTP request

136:17

node again.

136:21

And this time, we're going to WaveSpeed

136:23

and explore models. We're going to type

136:25

in V3

136:27

and we're going to filter out text to

136:29

video category only.

136:31

And as you can see,

136:33

there are two models here. V3 fast and

136:36

regular V3. In this case, we're going to

136:38

pick V3 fast.

136:40

>> Google V3 Fast is now on WaveSpeed AI. It'll

136:44

blow your mind and go try it now.

136:48

>> So, just a warning for V3: it is

136:50

pretty expensive to run. And as you can

136:52

see, the request itself will cost me

136:54

$3.20 per run,

136:57

but as you can see from the sample

136:59

video,

137:01

it is pretty high quality. So, let's try

137:03

it out.

137:09

So, toggle to the API documentation

137:10

section. And what you want to do here,

137:13

very similar to what we've done in the

137:14

past section, is to copy the entire curl

137:16

command and go back to the workflow.

137:18

We're going to import the entire curl

137:20

command here.

137:21

And again this is a post method

137:24

and we're going to say WaveSpeed post

137:29

for the name of the node

137:31

authentication. We've set it up earlier

137:33

in the previous section. So we're going

137:35

to just select the same generic header

137:38

auth with the WaveSpeed credential demo

137:42

and we're going to toggle off the send

137:43

headers and focus on the send body

137:46

section. And just before I completely set

137:48

this up, I'm just going to populate the

137:50

previous nodes just by saying create

137:54

a video of

137:57

of five gorillas

138:00

on a boat

138:03

having a great fishing trip.

138:08

Okay, so I'm going to type that into the

138:10

chat node. It's going to pass it on to

138:11

the video prompt agent. And let's check

138:14

out what's the output from the video

138:15

prompt agent. It says a lively cinematic

138:18

scene of five gorillas on a wooden

138:19

fishing boat in the middle of a sunlit

138:21

lake. Laughing, cheering as they reel in big

138:24

thrashing fish, splashes of water in

138:26

golden sunlight. All right, looking

138:28

pretty good. Uh, so let's try that out.

138:34

And we're going to open up the HTTP

138:36

request node here. Going back to the

138:38

JSON body section of the WaveSpeed post

138:40

node.

138:42

What we want to take a look here, as you

138:43

can see, there are a couple parameters

138:45

that were not mapped correctly. And this

138:47

sometimes happens when the curl command

138:49

either wasn't formatted correctly or

138:50

wasn't parsed over in the correct way.

138:53

So, in this case, what I want to do is

138:54

I'm just going to use the raw

138:57

body JSON. And in this case, I'm just

139:00

going to copy the entire body of the

139:03

JSON curl command and go back to the

139:05

workflow and paste the entire body. And

139:09

from here, essentially what I can do is

139:10

I'm going to replace this with the

139:12

string content from the video prompt agent.

139:15

Right? So I'm just going to delete that.

139:19

And I'm going to drag this again. I need

139:22

to toggle this expression. Let's blow

139:24

that up and drag this content over in

139:28

between the prompt. And you can take a

139:30

look at the right hand side for the

139:32

result, which is: yes, the prompt is now a

139:34

lively cinematic scene of five gorillas

139:35

on a wooden fishing boat in the middle

139:37

of a sunlit lake. Cool. Aspect ratio,

139:40

duration, and whether or not to generate

139:43

audio. And these are formats and

139:45

configurations that you guys can decide

139:47

based on your requirements. But this is

139:49

good to go for now.
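For reference, the raw JSON body being sent looks roughly like this; the field names mirror the curl example and are assumptions rather than a definitive schema.

```python
# Roughly the JSON body handed to the WaveSpeed post request. Field
# names mirror the curl example and are assumptions, not a definitive
# schema; the prompt comes from the video prompt agent's output.
body = {
    "prompt": ("A lively cinematic scene of five gorillas on a wooden "
               "fishing boat in the middle of a sunlit lake, laughing and "
               "cheering as they reel in big thrashing fish"),
    "aspect_ratio": "16:9",
    "duration": 8,           # seconds of video to generate
    "generate_audio": True,  # whether the model adds sound
}
```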

139:52

So I'm going to hit execute step here.

139:57

And as you can see,

140:01

the post request is successful.

140:04

So, as what we've done with the image

140:05

generation workflow, what we want to do

140:08

here is to wait 15 seconds. And we might

140:12

need a little longer than that because

140:13

it's a video and it's high quality as

140:15

well. So, it might take longer than 15

140:17

seconds, but we're just going to put 15

140:20

seconds for now. And we're going to

140:21

execute step to populate that. Of

140:23

course, what I want to do is to pin this

140:25

data here. Again, it is quite costly

140:27

for me to run the API call. So I want to

140:30

make sure that I don't have to

140:31

repopulate this data again and again.

140:37

So the wait 15 seconds is done. The data

140:40

has been passed through. What we want to

140:43

do now is to introduce the HTTP request

140:47

get node. And again going back to the

140:50

API documentation here. I'm just going

140:51

to copy this.

140:53

I'm going to import curl

140:56

the entire thing. and it's going to just

140:59

configure all the fields that are

141:01

required here. I'm going to toggle this

141:02

to expression and the same thing as we

141:04

did before, we're going to replace this

141:05

request ID with the video ID so that it

141:09

knows which video to fetch as a result.

141:13

And then in terms of authentication,

141:15

we've already set it up before. So,

141:16

we're just going to pick the same one.

141:18

Toggle off the send headers part because

141:20

we already done the authentication on

141:22

the previous bit.

141:24

Rename this to

141:27

waist speed get

141:30

and I'm going to hit execute step.

141:35

So as you can see the status has been

141:37

completed. So we can actually go over to

141:40

the next node. But before I do that,

141:41

again, just as what we've done in the

141:43

image generation workflow, what we want

141:46

to do is introduce an if loop because

141:49

just in case that we run the get API and

141:54

the status is not completed, meaning the

141:57

video is still processing that the

141:59

workflow doesn't all break down. Right?

142:01

So, we're going to drag the status here

142:04

and we're going to say that if it's

142:05

equal to completed

142:09

that it is true. So, we're going to

142:10

execute the step and there we go. It

142:12

goes to a true branch. But what we're

142:15

interested here is that in the case

142:16

where it's not true, which means it's

142:20

not completed, that we're going to wait

142:21

for another 15 seconds.

142:30

All right, I'm going to set that as 15

142:32

seconds. Okay,

142:34

and we're going to loop this back to the

142:37

get HTTP request node. And what

142:39

essentially we've done is that in case

142:41

when the status is not completed, it's

142:43

going to go into the false branch. It's

142:45

going to wait another 15 seconds and

142:46

it's going to try to get the video. And

142:49

if it's still not done, it's going to

142:50

fall under the false branch. It's going

142:52

to wait another 15 seconds and try to

142:54

get the video again. It's going to go in

142:56

a loop until the video is ready and

142:58

becomes true. And then we can kick that

143:00

out to a Gmail node or a Telegram node

143:02

or whatever node in terms of output that

143:05

you want to send it to. All right.
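That wait-check-retry wiring is just a polling loop. Here's a minimal sketch of the same logic — the result endpoint and the status field name are placeholders modelled on what we've seen, not the exact WaveSpeed API:

```python
import time
import requests

API_KEY = "YOUR_WAVESPEED_API_KEY"
request_id = "the-id-returned-by-the-post-call"  # placeholder

# Hypothetical result endpoint for illustration only.
url = f"https://api.wavespeed.ai/api/v3/predictions/{request_id}/result"

while True:
    resp = requests.get(url, headers={"Authorization": f"Bearer {API_KEY}"})
    resp.raise_for_status()
    data = resp.json()
    if data.get("status") == "completed":  # true branch: move on
        break
    time.sleep(15)                         # false branch: wait 15 seconds and retry

# data now holds the finished video's URL, ready to send to Gmail or Telegram.
```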

143:07

So then I'm going to configure this send a

143:09

message again. It's going to be the same

143:11

thing as before.

143:22

I'm going to name it video generated

143:26

on this particular time. I'm going to

143:27

give it a time stamp.

143:29

Drag the now variable. Whoops. Toggle

143:32

that expression. Drag the now variable

143:34

here. And there we go. We're just going

143:38

to do text here because I only want to

143:39

pass the output,

143:43

which is this one over here. Okay.

143:51

And I'm going to hit execute step.

143:54

And there you go. Message sent. Let's

143:56

look at our email and check out the

143:57

video. So, this is the video link. I'm

143:59

going to click on that. It's going to

144:01

download the video. I'll open that up.

144:06

Okay.

144:16

All right. So, that was a video.

144:17

Probably could have done better with the

144:19

prompt for the videos. Could have told

144:21

it to say something or do certain

144:23

actions and stuff like that. And of

144:25

course, if you have any particular

144:26

requirements in terms of the style of

144:28

the video, you can always tell it that.

144:29

Right now, it's still a little 3Dish to

144:32

me. So I definitely can do a bit better.

144:35

Of course, this is Veo 3 Fast. If you want

144:38

higher fidelity, you can definitely pick

144:40

a higher model like the full Veo 3. But

144:42

again, it is rather costly. Veo 3 is one of

144:45

the costliest models out there for video

144:47

generation. So alternatives to that

144:49

could be Seedance or Kling or Wan, which are

144:52

models that are available on the same

144:54

platform, which is wavespeed.ai,

144:57

and of course a lot of other platforms

144:58

as well. So that's a quick build along

145:00

for the text to video workflow. And in

145:02

the next section, we're going to do an

145:04

image to video workflow just to complete

145:06

the whole set. And we can explore how to

145:08

actually implement or use that in a real

145:10

life use case such as a marketing use

145:12

case or just for fun. See you in the

145:15

next one. So in this section, we're

145:17

going to build an image to video

145:18

workflow from scratch. Just because for

145:20

an image to video workflow, we're going

145:22

to need two inputs. One being the base

145:24

image that the video is going to base

145:26

off. Second being the video generation

145:28

prompt that we want to send over to the

145:30

video generation model so that it can

145:32

effectively craft videos based on the

145:36

image as well as your requirements and

145:38

specifications. Because of the

145:39

multiple varied inputs required here,

145:41

we do need a centralized way that we can

145:43

communicate with the workflow. In this

145:44

case, a platform that we'll use for this

145:46

demo is Telegram or any other

145:48

communication platform that you spend

145:50

most of your time on. So in this case,

145:52

for this workflow, it gets triggered by

145:54

either one of two things. It could be a

145:57

Google Drive trigger, which is when you

145:59

might upload an image like this into a

146:01

designated Google Drive and that would

146:03

trigger the event and the workflow and

146:06

that's going to send a Telegram chat

146:08

message to you saying that an image has

146:11

been uploaded and please provide the

146:13

video idea that you want the video to

146:14

look like and you can chat basically to

146:17

the bot from this Telegram chat saying

146:19

create a video of a cat jumping through

146:21

the hoop based on this image and

146:22

parachute opens to safely land the cat.

146:25

So essentially when you reply to this it

146:27

would then activate the second part of

146:29

the workflow which is a telegram trigger

146:31

and it's going to kick that input into

146:33

OpenAI. So in this case a video prompt

146:35

generator that will pass the video

146:37

prompt to Seedance's API, which in this case

146:40

we're going to again use WaveSpeed as a

146:43

platform that will provide the services.

146:44

So it's going to make a post API call

146:46

based on the image that we have. And the

146:48

way it would do that is as you can see

146:50

in the first part of workflow after

146:52

we've received a telegram message. What

146:54

the workflow does is it also uploads a

146:56

particular image URL onto a Google sheet.

146:59

And in this node, the agent basically

147:00

dips into the Google sheet to get the

147:03

latest URL in case you have more than

147:04

one image and passes both the video prompt

147:07

and the image URL to Seedance's POST API. And

147:10

then the rest remains the same, which is

147:13

a wait node followed by a GET HTTP

147:16

request with an if loop just to make

147:18

sure that it doesn't break down in case

147:19

the video has not finished processing yet.

147:22

And then when it's done processing, it's

147:24

going to kick it out to either a Gmail,

147:26

but this case, I think it might actually

147:28

be better if we just replace this with a

147:30

Telegram node. So there you go. We're

147:32

going to build this from scratch because

147:34

it's an entirely new workflow. So let's

147:36

jump right into it. So the first node

147:38

you're going to start off with is the

147:39

Google Drive node. And the trigger we

147:41

want to pick is basically on changes to

147:43

a specific folder. And essentially what

147:45

this node does is monitor the

147:47

folder every minute to detect any

147:50

changes to that folder. And the changes

147:52

that we're talking about specifically in

147:53

this case is an uploaded file. So this is

147:55

when you drop a photo into that specific

147:57

drive. Right? So we're going to pick in

147:59

this case I have a folder ready which is

148:02

called image to video codecloud. All

148:04

right. So I'm going to pick that from

148:05

the list and watch for file created.

148:08

Right? So I'm going to click on fetch

148:09

test event, and because I've

148:12

already uploaded a file just a minute

148:14

ago. It's reading this as a trigger with

148:17

the corresponding output payload. As you

148:19

can see, there are a couple of data points

148:21

here, but where we really want to pay

148:23

attention to is the web URL link, which is

148:26

publicly available and the variable

148:28

title is web content link. So, this is

148:30

the link that we'll be most interested

148:32

in. And just so that everything works

148:34

well, we want to make this publicly

148:35

accessible. So, make sure that you

148:37

change the settings to be publicly

148:39

available at least during the testing

148:41

period. So the second node that we want

148:43

to introduce here is a telegram node

148:45

because what we want to do essentially

148:47

is when a photo has been dropped into

148:49

the folder, we want to get a

148:50

notification on the telegram chat. So

148:52

this is a simple send a photo message on

148:56

telegram and I already have my telegram

148:58

account set up but I'm going to show you

148:59

very quickly how you can do that on

149:00

telegram and what you want to do is type

149:02

in on the search bar on top here

149:04

BotFather, and you want to make sure that

149:05

it's the correct one which is the one

149:06

with a blue tick and it's going to lead

149:08

you to this page. And what you want to

149:10

do is hit on start and it's going to

149:12

have an automated /start

149:14

message here. And BotFather is

149:17

a place where you create your bots

149:18

within telegram. And this is not to be

149:20

confused with the AI agents that we're

149:22

creating on n8n. This is simply a bot

149:24

that listens in to the communication

149:26

within telegram and be able to pass on

149:28

that as a trigger to your n8n workflow.

149:31

Now in this case I'm going to create a

149:33

new bot. So I'm just going to choose a /

149:34

newbot and I'm going to need to name the

149:36

bot. So I'm going to say n8n test

149:40

demo one and I also need to give it a

149:42

bot username. So, the same thing, I'm going to

149:44

add 'bot' to it. So there you go. This is a

149:47

link to chat with the bot. The bot has

149:48

been created and this is the API key

149:51

that we can copy. So I'm going to click

149:52

on that. I'm going to go back to my

149:54

workflow and in this case I'm going to

149:55

create new credential under telegram.

149:58

I'm going to name it telegram demo. And

150:00

I'm going to essentially paste the

150:01

access token here and hit save. And

150:04

there we go. We're going to look out for

150:05

the green bar here. So connection has

150:07

been tested successfully. All right. So

150:09

now that's connected. The other

150:10

configuration of the fields that we want

150:12

to do here is to fill in the chat ID.

150:15

And for the chat ID, what we want to do

150:17

here is we want to call another telegram

150:20

trigger, which is the second trigger

150:22

that will activate a separate part of

150:23

the workflow where then you can input

150:26

the video ideas that you want into the

150:28

workflow. So in this case, I'm going to

150:30

scroll down. I'm going to go to triggers

150:32

and then I'm going to pick the on-message

150:34

trigger. So the credentials that it's

150:37

connected with is Telegram demo which is

150:38

the same credential and the trigger is

150:41

on message. I'm going to hit execute

150:43

step. As you can see it says there's

150:44

problem running the workflow. Please

150:46

resolve outstanding issues before you

150:47

activate it. It has nothing to do with

150:48

this node because as you've noticed the

150:51

problem actually lies in the other node

150:53

that I was setting up halfway. So I

150:55

didn't finish it because I needed the

150:56

chat ID and in order to get the chat ID,

150:58

I was going to use a trigger note to

151:00

call the chat and then paste the chat ID

151:02

over here. That's what I was trying to

151:04

do. But in the interim, what I'm going

151:05

to do, I'm just going to fill this in

151:06

with dummy content, which is just a

151:09

string, 'test test', and that

151:11

should solve the issue and let me run

151:13

this node. Right? So I'm going to

151:14

execute step and it's waiting for a

151:16

trigger event. So I'm going to go back

151:17

to my telegram chat and again I'm going

151:19

to click on this link. So it's going to

151:21

lead me to a chat with the bot and I'm

151:24

going to say test.

151:26

Oh, okay. So let's start. So, as you

151:30

can see, it's capturing the very first

151:31

message here. And because it was just a

151:34

test workflow, it stops after the first

151:36

iteration. So, the first iteration was /

151:38

start; it's not capturing the second one,

151:39

which is the test here. Cool. All right.

151:42

So, let's head back here. And so, that's

151:44

the Telegram trigger. Let's get the chat

151:46

ID here, which is the one here. And

151:48

because this is going to be the same

151:49

chat ID throughout, as that's the way

151:52

you would communicate with the bot. So,

151:54

I'm just going to copy the chat ID here,

151:55

which is the ID of interest. And because

151:57

this is going to be the exact same chat

152:00

that we're going to communicate with

152:01

this workflow, I can just basically copy

152:04

the ID and paste the ID hardcoded into

152:07

my chat ID here. So it doesn't need to

152:09

be dynamic. All right, cool. So we want

152:11

to drag now the photo variable into this

152:14

field. But again, because I ran the

152:17

Telegram trigger earlier, it is now

152:19

overwriting my previous node, which is the

152:21

Google Drive trigger node with nothing.

152:22

So, what I want to do is just go to

152:24

executions and go to my very first

152:26

execution. And as you can see, this is

152:28

populated with the previous data. And

152:30

I'm going to copy that to editor. And

152:32

now it's going to populate this. And

152:34

then what I want to do is go to the

152:35

Telegram node. And I'm going to drag the

152:38

URL of the image that rests on the

152:41

Google Drive. So, we're going to scroll

152:43

right down here. And this is basically

152:45

the URL that is publicly available now

152:47

that I have shared the permission to

152:50

anyone being able to be with the link.

152:52

So I'm going to just drag this into this

152:55

field. And what I want to do is also

152:57

just add a caption to this because what

152:59

I want to say is you've uploaded this

153:03

photo to the Google folder. Kindly

153:07

provide the video idea that you want to

153:11

generate from this image. Okay. So, I'm

153:15

going to just blow this up so that you

153:18

can see it. You have uploaded this photo

153:20

to the Google folder. Kindly provide the

153:23

video idea that you want to generate from

153:24

this. Okay, I'll type here: from this image.
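Under the hood, this node is calling the Telegram Bot API's sendPhoto method with the token from BotFather. A rough equivalent, with a placeholder token, chat ID, and file link:

```python
import requests

BOT_TOKEN = "123456:ABC-your-botfather-token"  # placeholder
CHAT_ID = "987654321"                          # the hardcoded chat ID from the trigger test

# sendPhoto accepts a public URL, such as the Google Drive
# webContentLink we dragged into the node.
resp = requests.post(
    f"https://api.telegram.org/bot{BOT_TOKEN}/sendPhoto",
    json={
        "chat_id": CHAT_ID,
        "photo": "https://drive.google.com/uc?id=FILE_ID&export=download",  # placeholder link
        "caption": "You have uploaded this photo to the Google folder. "
                   "Kindly provide the video idea that you want to generate from this image.",
    },
)
resp.raise_for_status()
```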

153:27

So, cool. I'm going to hit execute step

153:28

and see what it does. So, all right.

153:31

It's sending some stuff into

153:33

my Telegram chat. So, let's see. Now,

153:35

it's saying this: it's sending me the

153:36

correct image which is the cat going

153:38

through the hoops. It says you've

153:40

uploaded this photo to Google folder.

153:41

Kindly provide the video idea that you

153:43

want to generate from this image.

153:44

Perfect. So that's how I want to be

153:46

notified of any photos that anyone or

153:48

myself have dropped onto the folder. So

153:50

I'm going to just move this up. And the

153:52

next node, just finishing this part of the

153:54

workflow would be a Google sheet node

153:57

because what we wanted to do is we

153:59

wanted to append a row in sheet or

154:02

rather log the information about the

154:04

photos that have been uploaded into the

154:06

Google sheet. Right? So this could be

154:08

we're going to name it image log. All

154:11

right. So it is simply logging it to a

154:13

sheet that I've created here which is

154:15

called image to video log and it's just

154:18

a simple two column sheet. The first

154:20

column being the image URL. The second

154:21

column being the date when it was

154:23

uploaded or created. And I'm going to

154:25

make sure that the sharing permission is

154:27

public as well. And I'm going to head

154:29

back here and I'm going to choose from

154:30

the list image to video log. So there's

154:33

only one sheet there. So it's going to

154:34

be sheet one. And it's going to read out

154:36

the two columns which is the image URL

154:38

and date. For the image URL, what we can

154:41

do is that we can go to the Google Drive

154:43

trigger and drag the same web link from

154:46

Google Drive and just paste it here. And

154:48

for the date, we want to scroll down to

154:49

variables and we're going to drag the

154:52

now variable. All right. Cool. We're

154:53

going to hit execute step. All right.

154:55

And then we see that now the second

154:57

image has been uploaded at this

155:00

time. Cool. All right. So that's done.
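If you ever need to reproduce that append outside n8n, the same two-column write is a few lines with the gspread client — a sketch, assuming a service-account credential is set up (the n8n node handles this auth for you):

```python
from datetime import datetime, timezone

import gspread

# Assumes a service-account JSON is configured for gspread; see gspread's docs.
gc = gspread.service_account()
sheet = gc.open("image to video log").sheet1

# Same two columns as the node: the public image URL and a timestamp ($now).
sheet.append_row([
    "https://drive.google.com/uc?id=FILE_ID&export=download",  # placeholder URL
    datetime.now(timezone.utc).isoformat(),
])
```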

155:02

That part workflow is completed. What we

155:04

want to do now is to move on from create

155:07

a video agent. And in this case, I'm

155:10

going to use open message and model

155:12

node. And this is essentially a video

155:14

agent. And I've got my open air account

155:17

already connected as before. I'm going

155:19

to choose the model here which is GPT

155:21

4.1. And again I want to execute the

155:24

previous node. Okay. And right now it

155:26

says 'test', not very meaningful for me to

155:29

run the test run. So what I want to do

155:31

is I want to open up the node and

155:32

execute step so that it can get new

155:35

information. And then I'm going to go

155:36

back to my Telegram chat and I'm going

155:38

to create a video of a cat jumping

155:42

through hoops of rainbows and opening up

155:47

a parachute towards the end so that it

155:52

can land safely. Okay, so I'm going to

155:54

send that message through Telegram. As

155:57

you can see, it's getting the text which

155:59

is create a video of a cat jumping

156:00

through hoops of rainbows and opening up

156:02

a parachute towards the end so that it

156:04

can land safely. Cool. Okay. So, let's

156:06

go back to the video prompt agent. Now,

156:07

this is populated. What I want to do is

156:09

I want to direct this text into the

156:11

prompt field here, which is the user

156:13

prompt because that's the video idea that

156:15

it's going to work on for its video

156:16

prompt. So, next thing I want to do is I

156:19

want to define the system prompt for

156:21

this node. And again, it is pretty

156:22

similar to the previous sections where

156:24

it was a text to image or a text to

156:26

video model. And you can definitely make

156:28

use of ChatGPT to effectively create

156:30

and craft the system message for a video

156:33

prompt engineer. But what I want to show

156:35

here is I'm going to blow this up, put

156:37

it on expression so that you can see is

156:40

the output format. So this time I didn't

156:43

put in much effort in terms of trying to

156:45

really prompt engineer the system message.

156:48

So as you can see there are two main

156:49

tasks for this particular node which is

156:51

the first one being create an effective

156:53

video prompt to prompt a video

156:55

generation model based on the user's input,

156:57

which is great and then the second one

156:59

is to make sure that the output is

157:01

separated into two JSON objects which is

157:03

what the next node is going to expect

157:06

from this particular node. And the first

157:08

one being the video prompt itself which

157:09

is going to be in strings. And the

157:12

second one is the image URL which is the

157:14

image that the video generation model is

157:18

going to create a video based off. And

157:20

in order to fetch the image URL, we need

157:22

to make sure that we're attaching the

157:24

right tool for the agent to look at. And

157:26

in this case it's a Google Sheets attached

157:28

tool. And what I want to do is set it to

157:30

get rows. And from the list it's going to

157:33

be the same image to video log sheet.

157:36

And we're going to pick the only sheet

157:38

that exists in that project, which is

157:40

sheet one. And there we go. So

157:42

essentially what I'm telling it is to go

157:44

back to the system prompt again that is

157:46

going to fetch this value from the last

157:48

row of the attached Google Sheet log.

157:50

All right. So we just want to name it

157:52

just for clarity: Google Sheet log.

157:55

Okay, cool. So we're going to run this

157:57

and execute the step and see what the

157:59

output is. And again, even though we

158:01

have already specified the output in the

158:04

system prompt, we just want to make sure

158:05

that we toggle on the output content as

158:07

JSON just to make sure that it's

158:09

outputting it as two separate JSON

158:10

objects. And as you can see under the

158:12

content here, there's the prompt strings

158:14

and the image URL strings, which

158:17

are the two expected parameters in the

158:19

next node.
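So the contract between this agent and the next node is exactly two string fields. As a concrete example of the shape we're enforcing with the system prompt plus the output-content-as-JSON toggle (values are illustrative):

```python
import json

# The shape the video prompt agent must return: two string fields, nothing else,
# so the next HTTP node can drag each one into its request body.
agent_output = json.loads("""
{
  "prompt": "Cinematic shot of a cat leaping through glowing rainbow hoops, soft golden light",
  "image_url": "https://drive.google.com/uc?id=FILE_ID&export=download"
}
""")

assert set(agent_output) == {"prompt", "image_url"}
```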

158:21

So the next thing we want to do here is just an HTTP request node. And

158:24

again, we're going to import curl here.

158:26

Go back to WaveSpeed and under models

158:28

this time what we want to do is we want

158:30

to check the category image to video and

158:33

there are a couple of options that we

158:34

can choose from but in this case we're

158:35

going to choose the first one which is

158:36

Seedance version one. All right, cool. And

158:40

we're going to toggle into the API docs.

158:42

Copy the curl command which is the post

158:45

method here. Go back and essentially

158:47

paste the entire thing and it's going to

158:49

configure everything here.

158:50

Authentication we've set it up

158:51

previously. So we're going to just

158:54

choose the same authentication here. And

158:56

for that we can toggle off the header

158:58

part because the authentication has

159:00

already been included in the

159:01

credentials. And what we want to change

159:03

here is the same thing: we have a couple of

159:05

parameters that we can set that relate

159:07

to the type of video and what

159:09

kind of requirements are needed.

159:11

But what we want to do here is we're

159:13

going to just change the image URL. Drag

159:16

it to this value here. And under prompt,

159:18

we're going to replace that with the

159:20

prompt strings and also drag and drop it

159:22

here. We can specify the duration. I

159:24

think 5 seconds is good for now. But

159:26

yeah, bunch of other stuff that you can

159:28

configure as well. And then we're going

159:29

to rename this to Seedance.
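Roughly, the body this node now sends looks like the sketch below — the endpoint path and field names are placeholders modelled on the imported curl, so copy the real ones from the WaveSpeed docs for the Seedance image-to-video model:

```python
import requests

API_KEY = "YOUR_WAVESPEED_API_KEY"

# Hypothetical path for illustration only.
url = "https://api.wavespeed.ai/api/v3/bytedance/seedance-v1/image-to-video"

body = {
    "image": "https://drive.google.com/uc?id=FILE_ID&export=download",  # image_url from the agent
    "prompt": "Cinematic shot of a cat leaping through glowing rainbow hoops",
    "duration": 5,  # seconds -- the value we set in the node
}

resp = requests.post(url, json=body,
                     headers={"Authorization": f"Bearer {API_KEY}"})
resp.raise_for_status()
print(resp.json())  # contains the request ID for the GET/poll step
```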

159:34

And we're going to hit execute step.

159:37

And there you go. The post request is

159:39

successful. And as always, we want to

159:43

wait

159:48

until the video is ready for us to get.

159:51

Right? So, we're going to change this to

159:52

15 seconds. I'm going to hit execute

159:54

step. And as that's running, we're going

159:56

to introduce the get HTTP request node.

159:59

See, that's get. And again, for this,

160:01

we're going to go back to the

160:03

documentation and copy the get curl

160:06

command and just import curl. Paste the

160:08

entire thing and it's going to be

160:10

configured. And as usual, we're going to

160:12

replace this request ID with the ID that

160:15

we got from the wait node earlier. So

160:17

I'm just going to paste it there. And

160:18

authentication, same thing. We're going

160:20

to pick the same credentials. We're

160:22

going to toggle off the headers here.

160:24

And we're good to go. So as you can see,

160:26

it's waited 15 seconds. And we're going

160:28

to try to get and the status is

160:30

completed. So that's good. So 15 seconds

160:32

was enough to get that. But just in

160:34

case again, that's not ready yet. We

160:37

want to make sure that workflow does not

160:39

break. So we're going to pick the if

160:41

node creating an if loop. So we're going

160:43

to say that if the status is equal to

160:47

I'm just going to copy and paste make

160:48

sure that it's word for word the same

160:50

completed then it's going to fall into

160:52

the true branch and it's going to go to

160:54

the next node. But in case it's not

160:57

completed it's going to go into the

160:59

false branch. It's going to wait another

161:01

15 seconds. Right? I'm going to execute

161:03

a step. Actually it doesn't matter

161:04

because it's going to fall into the true

161:06

branch. I'm going to reconnect it back

161:08

to the get requests because in case it's

161:10

not ready yet, it's going to wait

161:12

another 15 seconds. It's going to try to

161:14

get it and it's going to wait another 15

161:15

seconds and try to get it again until

161:17

it's ready. So, in this case, it's

161:18

already done. What we want to do is

161:21

to just move on to the next node because

161:23

it's true. And the next node that we're

161:26

going to append here would be the

161:27

Telegram node. And this is going to send

161:30

a video node. Telegram send a video. And

161:34

again, chat ID, you know, it can be

161:36

dynamic or you can just hardcode it into

161:38

the thing. But in this case, because I

161:41

already have it plugged in, I can just

161:43

drag the ID here. In terms of video,

161:45

we're going to get it from the if node.

161:48

Got the output right there. Going to

161:49

drag it in here. Okay. And we're going

161:51

to hit execute step. All right. As you

161:53

can see, sending over some stuff on the

161:54

Telegram. So, let's go to Telegram and

161:56

check it out. There we go. As you can

161:59

see, there's a video of the cat jumping

162:01

through the hoops of rainbows and

162:04

parachute opens and it lands safely.
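For completeness, the raw Bot API equivalent of that send-a-video node, with the same kind of placeholders as the earlier sendPhoto sketch:

```python
import requests

BOT_TOKEN = "123456:ABC-your-botfather-token"  # placeholder
CHAT_ID = "987654321"                          # same chat ID as the earlier nodes

# sendVideo accepts a public URL, such as the output link taken from the if node.
requests.post(
    f"https://api.telegram.org/bot{BOT_TOKEN}/sendVideo",
    json={"chat_id": CHAT_ID, "video": "https://example.com/output.mp4"},  # placeholder URL
).raise_for_status()
```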

162:07

Cool. All right. Of course, a lot of

162:08

things can be done again with the

162:09

prompt. We could define what kind of

162:11

style we want it to be. This is a bit

162:13

more cartoonish. If we want it to be

162:15

hyperrealistic, that's possible as well.

162:17

So, we just need to include all of that

162:19

in the system prompt. All right. I hope

162:20

that's a clear build of the image to

162:23

video workflow. So, in the next section,

162:25

you're going to practice building it

162:26

from scratch yourself in the labs and

162:28

I'll see you in the next one. And just

162:30

very quickly before we go, I just want

162:32

to point out that I'm building this in

162:34

one single workflow or one single

162:36

worksheet. But what you can do, you

162:37

can actually separate these two

162:39

workflows into two separate worksheets

162:41

and it'll just make it much easier for

162:43

you to troubleshoot and iterate as you

162:44

go. So just in case if there's any error

162:47

and the workflow is bugging out or

162:49

erroring out, you can actually zoom in

162:52

to any one of the workflows that are

162:54

bugging out instead of looking under the

162:56

hood for these two workflows at a go. So

162:58

if you separate it into two workflows,

163:00

you're going to know which workflow is

163:02

breaking down and you could go into the

163:04

relevant workflow and find out what's

163:06

not working within the nodes. With that,

163:10

I'll see you in the next section. In the

163:11

previous section, we talked about how we

163:13

can access the WaveSpeed API to call on text

163:15

to image or text to video generation

163:17

model like Veo 3. Now, obviously that's not

163:20

the only way to access Veo 3. And perhaps

163:22

one of the most straightforward ways is

163:23

to access it through the Google Cloud

163:25

Platform itself. Now, the way you can do

163:27

that is to go to

163:27

cloud.google.com/vertex-ai

163:30

and go over to the documentation. So,

163:32

Vertex AI is the place where Veo 3 is hosted

163:34

on Google Cloud. And in the

163:35

documentation, you can scroll down to

163:37

the full HTTP request with which you can

163:39

populate the HTTP node to call on the

163:41

API. Now, in order to get started, you

163:44

can actually go down to the simplified

163:46

version in the sample request. Let's go

163:48

back to the workflow now and build a

163:50

simple workflow where we can call on the

163:51

text to video generation model, Veo 3.

163:54

So, the first node that I'm going to

163:55

call on here is just a simple manual

163:56

trigger node. And the reason for that

163:58

is because we just want to try the HTTP

164:01

post node and make sure that everything

164:03

works well before then we can replace a

164:05

trigger node with something that

164:07

actually populates the API calls with a

164:09

prompt and all the information that you

164:11

want to pass on to Veo 3. So we're going

164:13

to run this node just to populate it. So

164:15

the next thing we want to do is we want

164:17

to obviously call the HTTP request node

164:20

here and the method that we're going to

164:22

choose is post. We're going to go back

164:24

to Veo on the Vertex AI API documentation.

164:27

So, what I want to do is go down. As you

164:29

can see, there's a whole HTTP request

164:31

curl command here that you can copy and

164:33

paste. However, it's got a lot of stuff

164:35

that I might not necessarily need to

164:38

configure for my videos, at least not at

164:40

the start. We want to keep it simple as

164:41

this is the first run of the demo. Just

164:43

want to let you guys see how to set it

164:44

up simply. So what we want to do here is

164:47

to scroll down a little bit further and

164:49

you see that there is a sample request

164:52

that we can copy right here and this is

164:55

the URL endpoint which we want to copy

164:57

and let's go back to the credential here

164:59

and we're going to copy the URL and

165:02

there are two things that we need to

165:03

configure the project ID as well as the

165:06

model ID. So in fact what you can do is

165:08

you can configure this directly from the

165:11

Google cloud console. For example, if

165:12

your project ID is n8n-ak-demo,

165:16

the project ID you can get from your

165:18

Google cloud dashboard. It's not the

165:19

project number, it's the project ID. So you

165:21

can just copy this and go back there.

165:23

Actually I mistyped that. It should be

165:25

demo-k. And the second thing you need to

165:28

configure is the model ID. And you can

165:30

scroll up and see the different types of

165:32

model ID that you can choose from here.

165:34

So in this case, I am going to just copy

165:37

veo-3.0-fast-generate-001. There we go.

165:42

I'm just going to copy that and go back.

165:44

Actually, I'm just going to just paste

165:45

the entire thing and we're good to go.

165:48

Delete the post. And there we go. That's

165:50

the URL right there. Now, the way the

165:51

authentication is done is going to be

165:53

different from the way we would call

165:54

platforms like way speed or any other

165:57

platforms that use the typical header

165:58

auth API. With Google Cloud, what you

166:00

want to do is you want to choose

166:02

predefined credential type. And under

166:04

the credential type, there's a specific

166:06

credential that you want to choose which

166:07

is Google OAuth. So that's the Google OAuth2 API,

166:12

which you want to choose. And right now

166:13

I already have an account set up. But

166:15

what you want to do is to create a new

166:16

credential. And as you can see, it's

166:18

very similar to the way we've set up our

166:20

OAuth earlier. You have the OAuth redirect

166:22

URL here, which you have actually

166:24

already pasted before in the Google

166:26

Cloud Console. But just to make sure

166:27

it's the same URL, you can do that. But

166:29

what you need to grab is the client ID

166:31

and client secret. So, what you need to

166:33

do is just simply go back to your Google

166:35

Cloud Console and go to credentials and

166:39

go over to your client IDs here and

166:41

you've got your client ID. Paste that in

166:44

and go back again for your client

166:45

secret. And depending on the setting,

166:46

sometimes your client secret will be

166:47

unavailable once you've created them and

166:50

you're supposed to copy them and store

166:51

it somewhere safe. So, if you set this

166:52

up previously and you stored it

166:54

somewhere, you can use the same client

166:55

secret and paste it into your n8n

166:57

credential. However, if you haven't,

166:59

what you can do is you can delete this

167:00

and add another secret. So that's an

167:02

option as well. So again, as you can

167:03

see, there's authorized redirect URLs

167:05

here, which is essentially the same with

167:07

what we see on the n8n credentials right

167:10

here. And we're good there. So we're

167:12

just going to paste the client secret

167:13

there. And just before we go, this is

167:15

where the first run, the way we set it

167:16

up in the OT previously, we need to add

167:18

a scope here. So essentially, the scope

167:20

grants the bearer all the services in

167:22

the Google Cloud Platform that is

167:24

authorized by the IM. Since Vertex AI is

167:26

part of cloud AI platforms APIs, it

167:29

requires an OAuth token with this scope in

167:31

order to accept the request. Now again,

167:33

before we go back to the workflow and do

167:34

that, what you want to do is you want to

167:36

make sure that you go over to the Vertex AI

167:40

API

167:41

on your Google Cloud console and make

167:43

sure that you enable this API. And once

167:46

you're done, you can simply click sign

167:47

in with Google and it'll lead you to a

167:50

login page. And what you want to do is

167:52

just allow the access and you're good to

167:55

go. So you can see the green bar shows

167:57

up and the account is connected. So you

168:01

can toggle out now. And what you want to

168:02

do now is toggle over to the send body

168:04

section. And this time we want to choose

168:07

use JSON body. And using JSON body,

168:10

we're going to just go back to Vertex AI

168:13

and copy the entire JSON body which is

168:16

the simplified version here. And even

168:18

before you do that, you can configure a

168:19

couple of things in terms of the prompt

168:21

for the video and the output to the

168:23

storage URI if you are using the GCS

168:26

bucket and you want the video output to

168:28

be stored in that bucket and the sample

168:30

count. So I'm going to take that but at

168:32

the same time I'm going to configure a

168:34

couple of fields as well such as the

168:36

aspect ratio as well as how long I want

168:38

the video to be. And I'm going to just

168:41

paste the entire chunk here so you can

168:42

see. And it's basically similar to the

168:44

simplified version. It's just that I've

168:46

added a couple things such as the aspect

168:48

ratio and the duration of the video and

168:51

I have actually removed the storage URI

168:53

because I didn't set up a GCS bucket and

168:55

basically I just wanted to return the

168:57

video in Base64. So if you remove the

169:00

storage URI Vertex AI is going to output

169:03

the video in Base64 format back to n8n.
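Putting that together, the call looks roughly like the sketch below — double-check the region, endpoint path, and parameter names against the Vertex AI docs, since those details are assumptions here:

```python
import requests

ACCESS_TOKEN = "ya29...."  # OAuth2 token with the cloud-platform scope; n8n's credential fetches this
PROJECT_ID = "your-project-id"
MODEL_ID = "veo-3.0-fast-generate-001"  # pick from the model IDs listed in the docs

# Endpoint shape per the Vertex AI docs (verify region and path there).
url = (f"https://us-central1-aiplatform.googleapis.com/v1/projects/{PROJECT_ID}"
       f"/locations/us-central1/publishers/google/models/{MODEL_ID}:predictLongRunning")

body = {
    "instances": [{"prompt": "A serene walk down the beach at golden hour"}],
    "parameters": {
        "aspectRatio": "16:9",
        "durationSeconds": 8,   # 5 gets rejected for text-to-video, as we hit below
        "sampleCount": 1,
        # No storageUri -> the video comes back Base64-encoded instead of landing in a GCS bucket.
    },
}

resp = requests.post(url, json=body,
                     headers={"Authorization": f"Bearer {ACCESS_TOKEN}"})
resp.raise_for_status()
operation_name = resp.json()["name"]  # used to poll the long-running operation
```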

169:07

So now that's all set up, what I want to do

169:08

is just hit execute step and there you

169:11

go. I've got a name here and basically

169:13

this is sending over to Vertex AI

169:15

and Veo 3 is generating the video as we

169:18

speak. What we want to do next is sorry,

169:20

let me just rename this Veo 3. Okay. All

169:23

right. And we're going to wait for 15

169:26

seconds. Going to run that. And

169:28

essentially what we're trying to do is

169:30

we're waiting for the video to be

169:31

generated before we poll for the result.

169:34

Right? And we're going to create another

169:37

HTTP request node as that's waiting, called poll

169:40

for video from text. All right. And the

169:45

method is also a post in this case. And

169:49

essentially you can get the endpoint URL

169:51

from the poll long-running operation

169:53

right here in the same documentation.

169:56

But I'm just going to go back and paste

169:57

the entire URL here. And under

169:59

authentication is the same thing.

170:01

Predefined credential type. We're going

170:03

to pick the same credentials with the

170:04

same account under the body. This one's

170:08

going to be simpler because we're just

170:09

getting the results back. And basically,

170:11

we just need the operation name.
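Continuing the sketch above, polling the long-running operation looks roughly like this — again, verify the exact method name against the same docs page:

```python
import time
import requests

ACCESS_TOKEN = "ya29...."                        # same OAuth2 token as before
operation_name = "projects/.../operations/..."   # the "name" returned by predictLongRunning

# Poll endpoint per the Vertex AI docs (assumed shape; check it there).
poll_url = ("https://us-central1-aiplatform.googleapis.com/v1/projects/your-project-id"
            "/locations/us-central1/publishers/google/models/"
            "veo-3.0-fast-generate-001:fetchPredictOperation")

while True:
    resp = requests.post(poll_url, json={"operationName": operation_name},
                         headers={"Authorization": f"Bearer {ACCESS_TOKEN}"})
    resp.raise_for_status()
    op = resp.json()
    if op.get("done"):  # boolean flag -- exactly what the if node checks
        break
    time.sleep(15)

# With no storageUri set, op["response"] carries the video(s) Base64-encoded.
```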

170:17

And we're going to hit execute step. As

170:19

you can see, it's got an output, but it

170:20

says the unsupported output video

170:22

duration: 5 seconds, because I specified

170:24

it to be 5 seconds. So, supported

170:26

durations are eight seconds for the text to video

170:28

feature. So, I'm just going to go back

170:29

and change this to eight. All right. So,

170:31

I'm going to rerun this.

170:35

Run this again. And we're going to hit

170:37

execute step.

170:40

All right. So, we're getting an output

170:41

here. And it says that the node contains

170:43

6 MB of data. It's going to slow down

170:46

the browser temporarily, but we're just

170:47

going to hit show data here because I

170:49

want to show you the entire format of

170:50

it. As you can see, because we didn't

170:52

specify a bucket for the destination of

170:54

the output, it's actually returning a

170:56

Base64 format into n8n. Now what we

171:00

want to do next is obviously convert

171:02

this into binary. So we can download it

171:04

and do with it whatever we want. Send it

171:06

over to whichever platform, email etc.

171:09

But here what we want to do just before

171:11

I move on is to introduce an if loop as

171:13

usual because we want to make sure that

171:15

just in case the status isn't done,

171:18

it's going to go into a loop and try to

171:20

fetch it when the video isn't completed.

171:22

So I've actually picked the wrong type

171:23

there. So it should be boolean. We're

171:25

going to make sure that when it's true,

171:27

which is, you know, it's done, that that

171:30

goes into the true branch. And if it

171:32

falls, it goes into another wait note.

171:34

And this is going to be wait another 15

171:36

seconds. All right, it's running again.

171:39

And we're going to loop this back to the

171:41

poll. I'm going to run this. Anyway, it

171:43

wouldn't run because in this case, it

171:46

shouldn't run. It should fall into the

171:47

true branch. Yep, there you go. Goes the

171:49

true branch goes green. So this is going

171:52

to a set for edit field note because

171:57

essentially we want to set this output

171:58

which is B 64 to strings. We're going to

172:03

just call this B64 and we're going to

172:06

hit execute step there. And the output

172:08

is rather large. This is coming back as

172:09

strings. And the last step here we want

172:12

to do is to convert to file. And we're

172:15

going to pick move Base64 string to file.

172:17

So there we're going to hit show data

172:19

there. We're going to drag this

172:20

parameter right here into the Base64 input

172:23

field and we're going to output the file

172:26

as just a typical default name data.
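Outside n8n, that conversion is a one-liner: decode the Base64 string and write the bytes to disk.

```python
import base64

# What the "move Base64 string to file" step does: decode the string
# and write it out as a playable file. The value below is a truncated placeholder.
b64_string = "AAAAIGZ0eXBpc29t..."

with open("data.mp4", "wb") as f:
    f.write(base64.b64decode(b64_string))
```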

172:28

Going to hit execute step there. So, as

172:30

you can see, we're getting an output

172:31

here which is a data. Now, we can just

172:34

download the file and that's essentially

172:36

the video. Let's open that up.

172:48

All right. So, we've got now a serene

172:51

walk down the beach. It's a good video

172:52

by Veo 3. As you can see, the audio comes

172:54

with it as well. Looks pretty good.

172:56

Yeah. So, we've got the output here. And

172:58

what you want to do with the output,

172:59

obviously, you can upload it to drive or

173:01

just try to send it somewhere. But this

173:04

may not be the way you want to download

173:05

the video exactly from Vertex AI. But

173:07

what I want to show is just the way that

173:10

you would set all of this up and be able

173:12

to call Vertex AI and the Veo 3 model to

173:15

generate text to video through this

173:17

workflow. Of course, the easiest way is

173:19

to create a GCS bucket and just put the

173:21

destination storage as that bucket so

173:23

that you can just retrieve it from there

173:25

later on. This is just a simple build to

173:27

let you guys know how it all works. But

173:29

there are a couple of reasons why we're

173:31

using platforms like WaveSpeed AI instead

173:33

of just Vertex AI. The reason for that

173:35

is because there are other models that

173:37

potentially we want to use. Again, Veo 3

173:39

is one of the costlier models even

173:41

though it's one of the best out there in

173:43

producing high fidelity videos and

173:45

audio. But for everyday use models like

173:48

Seedance, Wan, and Kling might be more

173:51

suited to your use case depending on

173:52

your needs. So for that reason platforms

173:55

like WaveSpeed or fal, and there are many

173:58

other platforms like that actually

173:59

provide an easy one-stop platform where

174:01

you can access all different types of

174:03

models without the need of setting up

174:05

all the credentials and API calls just

174:08

to access that particular model. Hope

174:09

this is a good explainer. I'll see you

174:11

in the next section. Now, let's quickly

174:13

talk about a decision you'll face when

174:14

starting out with n8n. Should you use

174:16

n8n Cloud or should you self-host it?

174:19

Both have their perks and a few

174:20

trade-offs. Starting out with n8n Cloud

174:23

gives you the benefit of simplicity. You

174:26

just sign up and start building. No

174:28

servers, no Docker setup, no headaches.

174:30

Updates and security patches are handled

174:32

for you. So, you're always on the latest

174:34

version as long as you update it on your

174:35

admin panel. Plus, if you run into

174:37

trouble, you've got official support and

174:39

uptime monitoring already built in.

174:42

But the trade-off, cost and control.

174:44

With cloud, you're paying a subscription.

174:46

And while it's worth it for convenience,

174:48

you don't get full control over the

174:49

environment, and you're limited to the

174:51

system resources that n8n offers. On the

174:54

flip side, self-hosting means you get

174:56

total freedom. You decide how much power

174:58

your instance has, where it's located,

175:00

and who has access. It can be much

175:03

cheaper if you already have a server

175:04

space or even want to run it locally for

175:06

personal projects. and you have the

175:08

flexibility to integrate with internal

175:09

tools and databases that you might not

175:11

want to expose to the public cloud. To

175:13

me, this is probably one of the biggest

175:14

positive points of self-hosting is that

175:17

for some companies and businesses, it's

175:18

simply not a choice, but a security

175:21

requirement to make sure you host it on

175:23

servers that are not exposed to

175:24

external environments. But again, the

175:26

drawback would be maintenance. You're in

175:28

charge of keeping the server alive,

175:30

updating n8n, and handling security

175:32

patches. If something breaks, there's no

175:33

safety net, and you've got to fix it

175:35

yourself. So to sum it up, choose n8n

175:38

Cloud if you want the easiest path

175:39

to get started and don't want to deal

175:40

with infrastructure. Self-hosted if you

175:43

want full control, tighter integrations,

175:45

and airtight security.

175:49

Honestly, there's no wrong answer. It

175:50

depends on your use case. And if you're

175:52

just learning, cloud is usually the

175:53

fastest way to go. But if you're

175:55

building production workflows and want

175:57

flexibility, self-hosting gives you the

175:58

keys to the kingdom. I'll see you in the

176:00

next one. This section we're going to

176:01

cover the sub workflow tool and how we

176:03

can use it to segment complex

176:05

workflows into easy separate segments so

176:08

that it's simple for us to troubleshoot

176:10

and iterate on.

176:12

Now in this case I'm going to take the

176:14

workflow that we've just built in the

176:15

previous section which is the image to

176:17

video workflow. And there's a different

176:18

architecture in the way we built these

176:21

workflows. And I've said previously

176:24

the current workflow has two triggers in

176:27

one single worksheet. And that might not

176:29

be ideal for error handling because when

176:31

the workflow errors out, you have to go

176:33

into both the workflows to check which

176:36

node that the error has occurred in.

176:39

Now, one way to do that is to split the

176:41

workflow into two.

176:43

So in this case, I'm going to take this

176:45

as the main workflow.

176:47

As you can see, the trigger is a Google

176:49

Drive trigger, which is triggered when a

176:51

photo was uploaded to the drive. And

176:53

once triggered, the next node is going

176:55

to send a telegram message with the

176:57

photo from the drive and ask users for

176:59

the video prompt or the video ideas that

177:02

the user wants to generate the video based on

177:04

this image. And after that, it's going

177:06

to log the image URL in the Google

177:08

sheet. Now, what we're going to do here

177:11

is we're going to add an execute

177:13

subworkflow node.

177:16

Essentially, I'm going to show you how

177:18

to set that node up and how it links to

177:21

another workflow and how the entire

177:23

logic works when it comes to the

177:24

interface.

177:28

So, in this case, when you choose an

177:29

execute workflow node, you're going to

177:31

have to choose which workflow it's

177:32

going to activate or trigger. And in

177:35

this case, you can't actually just

177:36

choose it from the list. And here, I've

177:39

created another workflow and I've called

177:40

it sub workflow demo. Essentially what

177:42

it is is a copy paste of what we had in

177:44

our image to video workflow and just the

177:46

top part and we're going to edit that

177:49

later on. So going back to the main

177:51

workflow after you created the sub

177:52

workflow you can simply choose that

177:55

from the list

178:01

and essentially it's going to call the

178:03

sub workflow whenever this workflow is

178:05

triggered right after it logs the image

178:07

URL into the sheet.

178:11

Now what you want to do here is to go to

178:13

the sub workflow. In this case we're

178:14

going to replace a trigger node

178:18

with an execute subworkflow trigger. And as

178:21

you can see it says here when executed

178:23

by another workflow. So we want to

178:24

choose that one.

178:27

So there are a couple things that you

178:28

can configure when it comes to how you

178:30

want the information from the main

178:31

workflow to be transferred to the sub

178:33

workflow.

178:35

Now, in this case, we're just going to

178:36

choose accept all data for now, just to

178:38

show you the distinction of what happens

178:40

when you configure the data.

178:45

All right.

178:46

So, I'm going to take this I'm going to

178:50

and I'm not going to plug it into the

178:52

rest of the workflow just yet because I

178:54

just want to show you what kind of data

178:56

would be populated when this workflow is

178:58

executed.

179:01

So, let's try to run this workflow and

179:03

see what we get.

179:08

All right. So the execute workflow has

179:11

run successfully.

179:14

And one thing about the sub workflow

179:15

trigger is that when a workflow is

179:17

triggered by another workflow, you don't

179:20

actually need to publish it here. And as

179:23

you can see, it says that the execute

179:24

workflow trigger does not require

179:26

activation as it is triggered by another

179:28

workflow.

179:30

So cool. So we're going to hit

179:32

executions here.

179:34

to see the logs of the execution.

179:40

As you can see, I've executed the main

179:42

workflow and it's receiving data here in

179:45

the

179:46

in the trigger workflow. I'm going to

179:49

hit copy to editor so that I can see

179:51

what kind of data has been populated for

179:52

this node.

179:54

All right. So, I'm just going to blow

179:56

this up slightly so that you can see. So

179:59

what I've done is I picked acceptab all

180:01

data as the input data mode and

180:03

essentially the data that's been pushed

180:04

forward is the image URL and the date.

180:07

So these are the two data points that's

180:08

coming off here. And that's in part

180:11

because the data point that we've

180:13

received from the previous Google sheet

180:15

node in the output payload. We're only

180:18

saying two components that are being

180:20

pushed out

180:22

which is the image URL and the date and

180:24

it's simply passing those to the other

180:27

workflow.

180:28

So if you want more information to be

180:30

passed into the next workflow, well then

180:32

you're going to have to configure the

180:33

note to make sure that it's passing on

180:35

the relevant information that you want

180:37

to the other workflow.

180:41

Now going back to the other workflow,

180:43

I'm going to just push this back a

180:44

little bit. And in this instance, we

180:47

want to choose define using fields below

180:49

just to see what the difference is

180:50

between accept all data. And when we

180:52

choose define using the fields below, we

180:54

want to make sure that we define the

180:55

fields. So for example, in this case, we

180:58

want the image

181:00

URL as one of the data,

181:03

right? And let's say we don't really

181:05

need that and we don't really care about

181:07

it. Okay, so we're only specifying just

181:10

the image URL.

181:12

So let's go back to the main workflow

181:14

and this time we're going to only run

181:15

this execute workflow node and see what

181:18

it sends across to the subworkflow. So

181:20

as you can see, node is executed

181:22

successfully. Let's go back to the

181:23

subworkflow and check the execution

181:25

logs.

181:28

As you can see, there's a new one coming

181:29

in here.

181:32

And we're just going to open that up.

181:36

Now, you see the difference is that it's

181:38

only going to output the data in the

181:40

format that you've defined here. So,

181:42

we've defined the image URL field only

181:44

as strings and it's only going to pass

181:46

that data property in the output

181:48

payload,

181:50

which means the date output is not

181:52

included in the payload here. So that is

181:54

a main difference when you're defining

181:56

using the fields below as opposed to

181:57

just accept all data. And this is

181:59

extremely useful when you want to format

182:01

the input in a specific format or in a

182:03

specific way in the next node for the

182:05

rest of the workflow.

182:10

So all right, so let's go back to the

182:11

editor. In this case, we're going to

182:13

change back to define using fields

182:16

below, but we're going to add in this

182:17

case the date variable as well. I'm just

182:21

going to hit execute step again. And

182:23

there we go. There's no input data. So

182:25

what it's doing is it's outputting these

182:28

two variables with no content inside.

182:31

Cool. All right. So now when we look at

182:33

the architecture, what's going on here

182:35

is that the two workflows are activated

182:38

when a photo is uploaded on Google

182:40

Drive. The user is notified on Telegram

182:42

that a photo has been uploaded together

182:44

with an image of the photo and that

182:46

image is then logged into an image log

182:49

sheet and then the sub workflow is

182:50

called. So in the sub workflow, this

182:52

trigger is the start of the entire chain

182:54

of workflow here. And what we want to do

182:56

now is to chain it up with the video

183:00

prompt agent here.

183:04

What we want to do just before we chain it up

183:06

with the video prompt agent here is we

183:07

want to add a telegram note. And if you

183:10

remember what we need to do is we want

183:12

to get a text input from the user

183:13

describing what they want to do with the

183:16

video. just an idea on how they want to

183:18

generate the video because they don't

183:20

need to come up with the entire prompt.

183:21

We already have a video prompt agent as

183:23

part of the workflow. They just need to

183:24

tell us directionally what they want the

183:27

workflow to do for the video. So in this

183:30

case, we're going to select the send

183:32

message and wait for response node.

183:36

So we're going to make sure that we pick

183:37

the right Telegram account. And in this

183:40

case, I'm just literally going to go

183:41

back to the main workflow. I'm going to

183:43

copy the chat ID because it's going to

183:44

be the same chat ID that we use.

183:48

Right? Again, you can make this more

183:49

dynamic by passing it through the

183:52

execute workflow node and passing it

183:53

into the sub workflow. But in this case,

183:55

because it's going to be the same chat,

183:57

I'm just using I'm just hard coding it

183:59

into the fields. So now the previously

184:02

the telegram node will show the photo to

184:04

the user and say that you've uploaded

184:06

this photo to the Google Drive. Kindly

184:08

provide the video idea that you want to

184:10

generate from this image.

184:13

So, we're we don't need this part

184:15

anymore

184:16

because this is just going to be a

184:18

notification message. Instead, we're

184:20

going to have the message sent here in a

184:22

sub-workflow.

184:25

So, we're going to make this a question,

184:28

which is: could you provide the video idea

184:30

that you want to generate from this

184:32

image? Question mark. And we want the

184:34

response type to be free text. All

184:36

right.

184:38

Cool. So I'm going to hit execute step

184:39

here

184:41

just to see what we get on the telegram.

184:45

And as you can see the message that we

184:47

got from telegram is could you provide

184:48

the video idea that you want to generate

184:50

from this message. All right. And

184:52

there's a respond tab here which I can

184:54

click on.

184:56

I'm going to open that link

184:58

and basically type in my response for

185:00

the video generation ideas that I want.

185:03

So create a video of a cat flying

185:10

through hoops of rainbow

185:14

and landing

185:18

on a field of

185:21

sunflowers

185:24

with purple

185:27

unicorns

185:28

running in the background. All right.

185:31

So, we're just going to do that and hit

185:34

submit.

185:36

And cool. All right. So, let's go back

185:38

to the workflow. And as you can see,

185:40

receiving, and this is kind of hard to

185:43

see, but I'm going to just drag this to

185:45

the left a little bit. Um, as you can

185:48

see, the input that we're getting is

185:50

create video of a cat flying through

185:51

hoops of rainbows and landing on a field

185:52

of sunflowers with purple unicorns

185:54

running in background, which is a text

185:55

that we've input into the field there.

185:59

Cool. All right. Okay, so we've got an

186:00

output data here and what we want to do

186:02

now is to chain up the output. And let

186:05

me just blow this up again so that you

186:08

can see.

186:11

So we're going to chain that up with the

186:13

video prompt agent.

186:15

And now we're going to have to configure

186:17

this very quickly and just drag the text

186:19

from the telegram again. Drag this here

186:22

to the right — the text which contains

186:24

the video idea into the prompt here

186:31

and this is going to be the user prompt.

186:34

Essentially, in terms of system message,

186:36

it's going to be the same because we

186:38

still want it to create an effective

186:39

video prompt to prompt the video

186:41

generation model based on the user's input

186:43

and also to output only the following

186:44

two values in JSON format which is the

186:47

prompt that it generates as well as the

186:50

image URL that it fetches from the

186:51

Google Sheet. So everything else stays

186:54

the same in terms of the rest of the workflow.
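To make that concrete, the JSON the agent outputs would look roughly like this — the key names are illustrative, since the exact keys depend on how the system message defines them:

  {
    "prompt": "<the video prompt generated by the agent>",
    "image_url": "<the image URL fetched from the Google Sheet>"
  }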

186:56

The only difference is we've now changed

186:58

the trigger to this workflow from a

187:00

Telegram on-message trigger into an "executed

187:03

by another workflow" trigger.

187:07

So the rest of the workflow is going to

187:08

run the same.

187:11

And what I'm going to do now is I'm

187:13

going to do a top to bottom run of the

187:15

master workflow and the sub workflow as

187:17

well. So I'm going to hit save here and

187:19

I'm going to hit execute workflow here.

187:24

It runs the execute workflow and it's

187:26

spinning. The reason for that is, if we go

187:26

back to Telegram, it says that I've

187:30

uploaded this photo which is correct.

187:32

And again, it says, could you provide

187:33

the video idea that you want

187:35

generated from this image? All right.

187:37

So, I'm going to respond here. It's

187:39

waiting for my response.

187:41

And again, I'm going to say create a

187:43

video of a cat

187:47

flying through

187:52

hoops of rainbow

187:55

and landing on a bed of sunflowers

188:02

with purple unicorns

188:06

running in the background. Okay, I'm

188:09

going to hit submit here

188:12

and go back to the workflow.

188:22

It's not showing anything right now, but

188:23

if you go to the execution logs, you can

188:26

see that it's getting an input. It's

188:28

getting a trigger and it's running the

188:31

workflow.

188:33

So, it might take a little bit because

188:35

it's going through basically the HTTP

188:38

API POST call to the video generation service and

188:42

then it's going to wait 15 seconds. It's

188:44

going to do a get API call and then if

188:46

that fails, it's going to run the loop

188:47

again and try to get it again. And once

188:50

it has the result, it's going to send it

188:52

to the Telegram chat.
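Conceptually, that loop is the classic submit-then-poll pattern. Here's a rough shell sketch of the same logic — the endpoint paths, field names, and status value are hypothetical stand-ins, not the actual API:

  # submit the generation job and capture its task ID
  TASK_ID=$(curl -s -X POST "$VIDEO_API/generate" -d @payload.json | jq -r '.id')

  # poll every 15 seconds until the job reports it is done
  until curl -s "$VIDEO_API/status/$TASK_ID" | jq -e '.status == "done"' >/dev/null; do
    sleep 15
  done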

188:54

So, we're just going to head back to

188:55

Telegram.

188:59

As you can see, the video is ready.

189:05

And I got to say, this one is not as

189:07

good as what I expected it to be.

189:08

Probably something to do with the prompt,

189:10

but it's getting the main gist right,

189:13

which is, you know, the cat is landing

189:15

onto a bed of sunflowers with unicorns

189:17

running in the background, but the style

189:18

is a little bit inconsistent. And of

189:21

course, we'd have to go back to the

189:21

system prompt and fix that. But the

189:24

point of this is to show you guys the

189:26

difference between how the architecture

189:28

could work. Essentially it does the same

189:30

thing. But the difference is that you

189:32

have now split the one big workflow

189:35

into two separate workflows. And this

189:37

will help you when it comes to

189:38

troubleshooting and iterating because

189:39

you can literally look in and debug all

189:42

the errors occurring in the main

189:43

workflow or in the sub workflow. So it

189:46

just makes your life easier when you're

189:47

trying to handle errors if anything

189:49

goes wrong. I hope that's a pretty clear

189:52

explanation on how we can use the

189:53

Execute Sub-workflow node as well as the

189:56

executed by another workflow trigger and

189:58

how it all works. I'll see you in the

190:00

next section. In this section, I'm going

190:02

to run you through how to self-host n8n on

190:04

your local machine through Docker

190:06

Desktop. So, the easy way to get started

190:08

is to go to GitHub and grab the n8n-io/

190:12

self-hosted-ai-starter-kit repo. And what you

190:15

want to do is just go over the

190:16

documentation a little bit. And

190:18

depending on what kind of machine that

190:20

you're on, I'm currently on a Mac. So,

190:22

this is the documentation that I want to

190:24

be referring to. As you can see, it's a

190:26

simple git clone and you can get started.

190:28

But just before you go ahead and do that

190:30

from your terminal, what you want to do

190:32

is you want to make sure that you have

190:33

your Docker Desktop downloaded and

190:35

installed, and if you haven't done that

190:36

before, what you want to do is to go

190:38

over to docker.com and basically

190:40

download the Docker Desktop app into

190:44

your machine. And I already have mine set

190:45

up here. So, just go ahead and download

190:47

that if you haven't done so. And also,

190:49

you want to make sure that you have Git

190:50

installed on your local machine. So,

190:53

once you've got everything set up, and

190:54

again, I'm on a Mac machine. So what you

190:56

want to do is scroll down here where

190:58

we're going to clone the entire package

191:00

together with Ollama included, because

191:02

that's the LLM that's going to run on

191:03

your local n8n. And if you already have

191:06

Ollama set up on your machine, there is

191:08

separate documentation for that. But

191:10

in this case, what we want to run is the

191:12

command lines right here. And we're just

191:14

going to copy that to our terminal. And

191:17

it's basically a git clone. And once

191:18

we've cloned that repo, what we want to

191:20

do is cd into the self hosted AI starter

191:25

kit and set up the environment file.

191:29

Make sure that it's in there. There we go.

191:33

And what we're going to do now is to do

191:36

a docker compose command. And this time

191:39

it's the CPU one because this is a Mac

191:42

with Apple Silicon. And once you run that, it's

191:44

going to take some time to run depending

191:46

on how quick your internet connections

191:49

are. And as you can see, it's

191:50

downloading a couple of things. You

191:52

know, Postgres, Qdrant, n8n, and

191:55

Ollama, which are the basic building

191:57

blocks that you need to work with

191:59

locally.
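Putting that whole local setup in one place, the commands look roughly like this — check the repo README for the current form, since profile names and file names can change:

  git clone https://github.com/n8n-io/self-hosted-ai-starter-kit.git
  cd self-hosted-ai-starter-kit
  cp .env.example .env                # set up the environment file
  docker compose --profile cpu up     # the CPU profile, for Apple Silicon Macs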

192:01

Now, that took a little while, but now

192:03

it's completed. So, what we want to do

192:04

now is just to go to localhost

192:06

here, port 5678, which is the default

192:08

port that you can access your n8n

192:12

instance on. So on my browser, I'm going

192:13

to go over to localhost 5678. As you can

192:16

see, it's going to ask for some email

192:18

credentials to sign up. So you can use any

192:20

email you want because it's all

192:21

basically running on your local machine.

192:23

All right. And once you're signed in,

192:25

you're going to see that there is a demo

192:26

workflow because part of the repo there

192:29

is a demo workflow template that's been

192:30

loaded up. So you can just open that. As

192:32

you can see, the template loads up with

192:34

a basic LLM chain with the Ollama chat

192:38

model. And for this basic chain to work,

192:40

you have to plug in a fallback model. So

192:42

in this case, I'm just going to add

192:44

another Ollama model here. Choose the same

192:47

credential there. Just leave it as that.

192:50

And basically, we can just say, "Hey,

192:53

how's it going?" And it's going to

192:55

access the Ollama chat model. And again,

192:57

all of this is basically being powered

192:59

on your local machine and is running on

193:03

Docker Desktop. But I'll show you in a

193:05

little bit what that looks like. But

193:07

here you go, the node has executed

193:10

successfully and it's responding with

193:12

this output.

193:14

And on the docker desktop as you can see

193:16

there's one container running which is a

193:17

self-hosted AI starter kit and under

193:19

that you can see Postgres, Qdrant, and

193:21

n8n are running. And if you were to stop the

193:24

operation of a container and you head

193:27

back to the n8n instance, you can see

193:29

that the connection is lost here. So it

193:31

is entirely running on your docker

193:32

container and powered by your local

193:35

machine. So going back to our docker

193:36

desktop here I'm going to hit on start

193:38

and on the left hand bar you see there

193:39

are images and these are images that you

193:41

can download, you know, latest versions.

193:42

You can search for n8n

193:44

images and download the latest version

193:46

of n8n and basically run that. And

193:50

then there's the volumes here which is

193:51

where all the persistent data is stored

193:53

and I'm not going to go into the weeds

193:54

of how it all works together because we

193:56

do have a separate course for Docker to

193:58

address that but going back to the

194:00

container, if you want to do any

194:02

configuration changes and stuff like

194:04

that what you can do is actually open up

194:06

the configuration here and actually

194:08

configure the docker-compose file depending

194:10

on the kind of requirements that you

194:12

need, such as user credentials, persistent

194:14

storage and port configurations. But

194:17

that's it for a super quick one on how

194:19

to host n8n on your local machine with

194:21

Docker. So in the past section we talked

194:22

about how we can host n8n on our local

194:25

machine with Docker Desktop. And in this

194:27

section we're going to go through how

194:28

you can run n8n on an EC2 instance with

194:33

Docker running on top of it. So an easy

194:33

way to get started is to go to

194:34

kodekloud.com/playgrounds.

194:36

So this is where we host all our

194:37

playgrounds, including AWS, Azure, GCP, and

194:41

Azure Data. So what we want to do here

194:43

is to go to AWS and spin up the AWS

194:46

sandbox playground by hitting launch

194:48

now. And it's just going to take a

194:50

couple seconds to start up the

194:52

playground. And there we go. We've got

194:53

our credentials here. What we want to do

194:55

is just copy this to our browser. And

194:57

that's going to lead us to a sign-in

194:58

page on the AWS console. And what we

195:01

want to do is to go back to the

195:03

playground and copy out the username and

195:06

the password. And you can just fill that

195:07

in to gain access to the AWS console.

195:10

And there you go. Got a console ready to

195:13

go. And what we want to do here is head

195:15

over to EC2 and hit Instances.

195:18

And as you can see, because it's a new

195:19

session, there are no instances just

195:21

yet. So we're going to hit launch

195:23

instance. And I'm going to call this

195:24

instance n8n-demo.

195:27

And we're going to choose Ubuntu

195:29

here. And we're just going to leave

195:31

everything else as is for now. And what

195:34

we want to choose here is a T2 medium.

195:37

And again, this is just to cater for the

195:39

fact that we're going to have Ollama as

195:41

part of the package that's going to come

195:43

in. As you recall in the past section

195:44

when we did the git repo clone, it's

195:46

going to be quite sizable. So we just

195:48

want to make sure that we're catering

195:49

for that. And of course the storage

195:51

later on, we want to configure that to

195:53

be quite sizable as well. So with a key

195:56

pair login, we can create a new key

195:58

pair. And we're also just going to call

196:00

this n8n-demo-key. It's an RSA type

196:04

with .pem

196:06

format. So let's create key pair. And

196:08

that's just going to get saved in my

196:10

download folder. And under network

196:12

settings, we'll make sure that it is

196:14

allowing SSH traffic from anywhere for

196:16

the time being. And at the same time,

196:18

under configure storage, we want to

196:20

change this to 30 just to ensure that

196:22

it's got enough to cater for the package.

196:26

Right. And everything looks okay to go

196:28

right now. So, we're going to launch

196:30

instance. And there you go. The instance

196:32

was successfully launched. It takes just

196:34

about a minute. And as you can see, the

196:36

instance has been spun up. And right

196:38

before we go to our terminal to start

196:40

setting things up, what we want to do is

196:42

just go into the instance here and make

196:45

sure that the security group is

196:48

configured. So, there you go. We're

196:49

going to the security tab here and hit

196:52

the security groups. And under the

196:54

inbound rules, we would just want to

196:56

make sure that we add the inbound rule

196:59

for the n8n port. So this

197:02

is custom TCP and it's going to be port

197:05

5678, which is the default port. And

197:08

we're going to allow it to be able to be

197:12

accessed from anywhere. Now, we're going

197:14

to save rules. And there we go. That's

197:16

been added. And going back to our

197:17

terminal, we're just going to type in

197:19

chmod 400 on n8n-demo-key.pem, which

197:24

was our .pem key, just to make sure that

197:27

we can gain access with the right

197:29

key permissions. And we're going to SSH into

197:31

the instance by typing in the command

197:33

line, which includes the .pem key as well

197:36

as the Ubuntu user and the

197:38

public IP address, which we can get from

197:40

the EC2 instance console. Just copy

197:43

that and go back to the terminal. Paste

197:46

that in and we're going to run that.
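So the two commands are, roughly — assuming the key was saved to your Downloads folder, and substituting your instance's public IP:

  chmod 400 ~/Downloads/n8n-demo-key.pem
  ssh -i ~/Downloads/n8n-demo-key.pem ubuntu@<EC2_PUBLIC_IP>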

197:49

This is a typical message that's going

197:51

to show up for the first time if we're

197:52

trying to SSH into it. So, we're just

197:55

going to hit yes. It's going to just

197:57

double confirm because the key is not

197:59

known by any other names. But since

198:01

we're the ones who created the instance,

198:03

we know that this is safe. So, we're

198:05

going to hit yes. So, the next thing we

198:06

want to do is to update the system and

198:08

install Docker.

198:16

We're going to run install there. Next,

198:18

we're going to enable Docker as well as

198:21

add the user group. Make sure that we

198:24

can get access to column Docker. And

198:26

what we're going to do next is just to

198:28

install Docker Compose version two. And

198:30

just to make sure that it's installed

198:32

properly, we're going to run Docker

198:34

Compose

198:37

version command. Sorry. It should be

198:39

Docker Compose just version without the

198:42

dash. Oops, there's a typo there. So,

198:44

should be Docker Compose

198:48

version.

198:50

All right. So, that's version two. Good.
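For reference, the sequence on the Ubuntu instance looks roughly like this — package names vary a little by Ubuntu release, so treat it as a sketch:

  sudo apt update && sudo apt upgrade -y    # update the system
  sudo apt install -y docker.io             # install Docker
  sudo systemctl enable --now docker        # enable and start Docker
  sudo usermod -aG docker ubuntu            # add our user to the docker group
  sudo apt install -y docker-compose-v2     # Docker Compose version two
  docker compose version                    # verify the install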

198:52

We can move on. So, the next few steps

198:54

are going to be very similar to what we

198:55

did in the last section when we were

198:57

trying to self-host on Docker. So, we're

198:59

going to do a git clone of the git repo. And

199:03

we're going to change our directory to

199:04

the self-hosted AI starter kit. Okay.

199:09

And we're going to copy the environment

199:11

file. Let's make sure that that's

199:14

created. All right. And just before we

199:17

run the docker compose command, we're

199:20

just going to make a minor change in

199:23

the environment file because sometimes

199:26

the security settings might not allow

199:27

you to access the destination due to

199:30

certain types of security protocols. And

199:33

what we want to do is in this case

199:34

because we're just trying to show how to

199:36

set it up is to disable some of that.

199:38

So, what we want to do here is to just

199:40

paste in the line

199:43

N8N_SECURE_COOKIE=false. And once

199:46

that's done, we're going to exit. We're

199:48

going to hit yes. And there we go. So,

199:51

I've just cleared that up. And what

199:54

we're going to do here is just run the

199:56

docker compose command

200:01

and basically to pull in the same way as

200:04

the previous section all the n8n files,

200:07

the package and everything that's

200:09

contained in the git repo into the EC2

200:13

instance.
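Recapping those steps as commands — run from inside the self-hosted-ai-starter-kit directory, with the same CPU profile assumption as the local section:

  echo "N8N_SECURE_COOKIE=false" >> .env    # the secure-cookie tweak from above
  docker compose --profile cpu up -d        # pull everything and start it detached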

200:16

So it's going to take a little bit of

200:18

time as it's pulling all of that. So now

200:21

that's done. We want to check if the

200:24

container is up and running. So, we're

200:25

going to type in the docker ps command.

200:28

And as you can see, the containers are

200:31

running. So, let's go to a public IP

200:33

address. And the way you can do that is

200:35

to go to your AWS console and copy the

200:38

public IPv4 address. And once you do

200:40

that, you can simply type it in with the

200:43

port 5678 at the end.
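In other words, with <EC2_PUBLIC_IP> standing in for your instance's IPv4 address:

  docker ps    # confirm the containers are up and running
  # then browse to http://<EC2_PUBLIC_IP>:5678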

200:46

And there you go. That's your n8n

200:48

instance. And as you can see, this is

200:50

the first time logging in. So there's no

200:52

persistent data of credentials. So I can

200:55

just type in any credentials just like

200:57

this and

201:01

it's going to spin up the n8n instance.

201:04

As you can see, this is a dashboard that

201:06

you would normally see on n8n

201:08

Cloud. It's just that now it's hosted

201:10

on an EC2 instance on AWS. Everything

201:13

else works the same way. If you go into

201:15

your demo workflow, the same template is

201:18

going to spin up as a default demo

201:20

template which is attached to Ollama. And

201:22

again, Ollama was part of the package. So

201:24

it was downloaded and basically loaded

201:26

up within the EC2 instance. Cool. So if

201:30

you were to go back to your instance

201:32

right here,

201:40

you can see the usage in the instance

201:43

tab.

201:45

And that was probably when we were doing

201:46

the git clone. But of course, as you

201:49

move on and operate the n8n environment

201:54

within the EC2 instance, you're going

201:54

to see the consumption data over here.

201:58

Okay. So that's a straightforward way to

202:00

show you how you can quickly set up an

202:02

EC2 instance and get your n8n

202:04

environment connected and hosted on

202:06

there through Docker. And again, if

202:08

you're not familiar with how EC2 and

202:10

Docker works, we do have courses on

202:12

that. So you can go and check that out

202:14

for sure. And of course, if you'd rather

202:16

just skip all the hassle of not just

202:18

setting up the environment and the

202:19

infrastructure to run this uh but also

202:21

at the same time maintaining it you

202:22

could just use n8n Cloud. They do charge

202:24

a monthly pricing but at the same time

202:26

you do get a lot more convenience if

202:28

you're just starting out with automating

202:30

things through n8n and you don't need

202:32

to spend a lot of time worrying about

202:34

all the other stuff in terms of

202:35

maintenance of the infrastructure. So

202:37

let me know what you guys think and what

202:38

you guys prefer. Otherwise I'll see you

202:39

in the next section. Congratulations,

202:42

you made it to the end of the course.

202:44

Think back where we started at the

202:45

beginning of this course. Nodes and sub-

202:47

workflows may have felt abstract. Now

202:50

you've not only learned the

202:51

fundamentals, you've actually built

202:53

intelligent production-ready workflows

202:55

from scratch. That's a huge step. Along

202:58

the way, you've explored how to connect

203:00

APIs and services with the HTTP Request node

203:02

and build AI-powered agents that

203:04

automate real world tasks like email

203:06

replies, research, and even Slack

203:08

conversations on your behalf. You've

203:10

also learned how to generate images and

203:11

videos directly from text or media

203:13

inputs and also use sub workflows to

203:16

organize complex systems into reusable

203:18

modular components. And you've also

203:20

learned how to host n8n anywhere, from

203:23

n8n Cloud to Docker or AWS EC2,

203:26

giving you the flexibility of your own

203:28

environment. We then moved into new

203:31

territory: RAG agents with Pinecone-backed vector

203:33

databases, enabling memory and

203:35

context in your automations. You've

203:37

also learned how to use MCPs in your n8n

203:39

workflows to scale and reuse some of the

203:42

building blocks. And of course, we've

203:44

gone through the best practices in terms

203:45

of error handling, retries, and

203:47

leveraging the n8n template marketplace

203:49

to accelerate your builds. By now,

203:52

you've seen how automation goes beyond

203:54

just saving time. It's about

203:56

orchestrating systems, extending the

203:58

reach of AI, and freeing yourself from

204:00

repetitive work so you can focus on

204:02

higher value tasks. So, where do we go

204:05

from here? First, get building. The

204:08

workflows we've covered are a strong

204:10

foundation, but n8n is also limitless in

204:12

its flexibility. Try connecting new

204:15

APIs, experiment with custom nodes, and

204:17

layer in new additional agents. The more

204:20

you explore, the more powerful your

204:22

automations become. Second, share and

204:25

learn with others. At KodeKloud, you're

204:27

part of a global community of learners,

204:29

engineers, and automation enthusiasts.

204:32

Ask questions, showcase your workflows,

204:34

and draw inspiration from others'

204:36

projects. Collaboration is one of the

204:38

best ways to grow your skills. And if

204:40

you'd like to share feedback, ideas, or

204:42

just to show me some of the cool stuff

204:43

that you built, feel free to get in

204:45

touch with me. I'd love to hear what you

204:47

build with n8n.

204:49

Third, think about your own environment.

204:52

Could you integrate n8n into your DevOps

204:54

pipelines, replace manual reporting with

204:56

AI-driven summaries, or deploy customer

204:58

support agents that scale your

205:00

businesses? Whatever your role is,

205:02

automation is a lever that you can pull

205:04

to create real impact. A final thought,

205:07

remember automation isn't about

205:09

replacing people. It's about augmenting

205:11

what you do. By letting n8n handle

205:14

the repetitive, the mechanical, and the

205:16

time-consuming, you create more space for

205:18

creativity, strategy, and innovation.

205:21

So, keep experimenting and keep building

205:24

and keep pushing boundaries on what's

205:26

possible with n8n. Thank you for learning

205:28

with me and with KodeKloud. I'm Maronei

205:30

and I can't wait to see what you build

205:32

with n8n.

205:44

So I want to run through the differences

205:45

between running your instance on n8n

205:48

Cloud versus the lab playgrounds that we

205:50

have on the KodeKloud course. Now the

205:52

first thing you'll notice as you go into

205:53

the lab is that you still have to

205:54

provide your email and your first name,

205:56

last name credentials as well as a

205:58

password in the instance. And don't

206:00

worry, this is not saved anywhere. So

206:01

you don't actually have to save the

206:03

credential and the password somewhere.

206:04

You can use different emails and

206:05

password for each of these instances. So

206:07

once that's filled in, you can hit next.

206:10

And there's going to be a series of

206:11

onboarding questions here which you

206:12

don't need to fill. So you can hit get

206:14

started. And in the same way for the

206:16

paid features information, you can just

206:17

skip. And there you go. And now you're

206:19

in the admin dashboard that is very

206:20

similar to the n8n Cloud environment.

206:23

However, there's still some nuances and

206:24

some differences as we go into the

206:26

workflow. So here what we're going to do

206:28

is we're going to click start from

206:30

scratch. And the very first workflow

206:32

that you're going to build is the email

206:34

AI agent. And the first trigger to that

206:37

is a chat trigger. And I'm going to

206:38

speed through this because this is very

206:40

similar to the workflow that we've done.

206:41

But I want to just show you quickly the

206:44

differences of using the KodeKloud

206:46

Keyspaces key as well as the OpenAI API key.

206:50

So if you're using the KodeKloud

206:51

Keyspaces key, what you want to do when you

206:53

select your model, for example, in this

206:55

case, I'm just going to select the open

206:56

AI chat model. And what I want to do is

206:59

to follow the instruction in the left

207:01

hand bar right here. And you see that

207:03

there's a link to KodeKloud Keyspaces. I'm going to

207:05

just hit that URL. And I'm going to hit

207:08

launch now. and it's going to lead me to

207:11

this dashboard right here. I'm going to

207:13

click start playground and there we go.

207:14

So, this is the dashboard on KodeKloud

207:16

Keyspaces. And what I want to pick is the

207:19

OpenAI GPT 4.1. And in this case, I'm

207:22

going to just copy the API key here. I'm

207:25

going to go back to my workflow. And

207:27

here, I'm going to hit create credential

207:29

and I'm going to paste the same exact

207:30

API key. I'm going to skip the

207:32

organization ID. And for the base URL, I

207:35

want to make sure that I'm replacing

207:36

this with the base URL that I obtained

207:40

from the KodeKloud Keyspaces. Go back to my lab

207:44

and paste the base URL right here. Okay,

207:46

so I'm going to hit save right now. As

207:49

you can see, it says connection tested

207:50

successfully. However, there's a couple

207:52

of things I want to point out here, because

207:54

if you pick from list, as you can see,

207:56

the list doesn't really match the known

207:58

models of GPT. This is because it's not

208:00

really working based on the UI that's

208:02

been built. So if you try to run it

208:04

based on the chat ID or chat model that

208:06

we've selected, I'm just going to run a

208:09

hello message here. It's going to go to

208:11

the AI agent, but it's going to error

208:13

out. So what you need to do is go into

208:15

the chat model and instead of picking

208:17

from list, you want to go by ID and as

208:19

suggested from the instruction on the

208:21

left hand bar here, you want to copy the

208:23

OpenAI/GPT-4.1.

208:27

Just copy that and paste it all word for

208:30

word. And let's do a test run again. And

208:32

this time it should actually be able to

208:34

access the appropriate model.
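Summarizing the Keyspaces-backed credential, with placeholder values you'd fill in from your own dashboard:

  API Key:  <the key copied from the Keyspaces dashboard>
  Base URL: <the base URL copied from the Keyspaces dashboard>
  Model:    set "By ID", with the ID copied word for word from the lab instructions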

208:37

So that's if you're using the KodeKloud Keyspaces

208:40

API keys. Now what I want to point out

208:41

is the difference between this and using

208:44

the OpenAI API key is that it's much

208:46

more straightforward when you use OpenAI

208:48

API keys. So to show you the difference,

208:50

what I'm going to do here is I'm going

208:52

to create another new credential and I'm

208:54

going to call it OpenAI account too.

208:56

This time I'm going to head over to

208:57

platform.openai.com.

208:59

And what I want to do is head over to

209:01

API key section. I'm going to create a

209:03

new secret key called n8n email

209:06

integration. I'm going to hit create

209:08

secret key and I'm going to copy the API

209:10

key and head back to my lab. I'm going

209:13

to paste the API key and leave the base

209:15

URL as is and I'm going to hit save. So

209:18

there you go. So connection tested

209:19

successfully. And the difference is

209:21

instead of by ID, I can now just pick

209:23

from the list and it should load up the

209:26

correct list of models that I might

209:28

possibly want to use. So for example, if

209:30

I choose GPT 4.1, it's just going to be

209:32

that. And we're going to run this node

209:34

again. And as you can see, it's calling

209:37

the correct model right here. Okay. So

209:40

the next thing I want to point out is

209:41

when you add your Gmail node on the next

209:43

workflow or Google tool for that matter

209:46

the difference between doing that in our

209:47

labs versus n8n Cloud is that on n8n

209:52

Cloud you often see, when you create a

209:52

new credential with Gmail that you can

209:54

actually have a button which you can

209:56

sign in directly using your Google

209:57

account if your browser happens to be a

209:59

Google Chrome browser. However with the

210:01

labs what you need to do when you create

210:03

a new credential is you actually need to

210:05

connect it with the OAuth method. So

210:07

what you want to do is you want to head

210:08

over to console.cloud.google.com.

210:11

And the first thing you want to do is

210:13

create a new project. And for the new

210:16

project, I'm going to name it n8n email

210:18

app. All right. So, I'm going to leave

210:20

this as no organization. I'm going to

210:22

hit create. And it's going to take a

210:23

couple seconds to create the project.

210:26

And once that's done, I'm going to

210:28

select the project. And as you can see,

210:29

it says n8n email app project. And

210:32

the very first thing I want to activate

210:33

is I want to go to Gmail API. So what

210:36

I'm doing right now is I'm creating a

210:38

project because that's how Google

210:40

recognizes each of these OAuth accesses

210:42

that we're giving it. But the way the

210:43

security works is you need to enable the

210:46

particular tool that you want to use

210:48

within the project. So in this case I

210:49

want to use Gmail. So I want to make

210:50

sure I enable the Gmail API. All right.

210:53

Once that's enabled, I want to head over

210:55

to the OAuth consent screen. And right now

210:58

there's no OAuth consent that's set up.

211:00

So I'm going to just hit get started.

211:02

And in this case, I'm going to have to

211:04

give it an app name as well. So I'm just

211:07

going to say it is n8n email

211:10

app. Okay. User support email. Going to

211:13

put this.

211:15

And we're going to select external. And

211:17

by the way, each of these steps is

211:19

documented on the left hand side panel

211:20

of the lab. So you don't have to

211:22

memorize any of these. But we're going

211:24

to go to next. And under contact

211:26

information, just going to put my email

211:29

here. And once you're done, just hit

211:31

continue. and create. So, just before we

211:34

go, I just want to go to audience and I

211:36

want to add a test user here, which is

211:41

an email that you're going to use to

211:44

send out basically the emails that you

211:47

want the agent to send out. So, in this

211:48

case, it's maronei@kodekloud.com. We're

211:51

going to save that. And lastly, we're

211:52

almost there. We're going to go to API

211:55

and services again, and this time we're

211:56

going to go to credentials. And what we

211:58

want to do is to hit create credentials

212:01

with OAuth client ID. And for application

212:03

type we want to choose web application.

212:06

And under name you want to name it

212:08

n8n email OAuth client. And then

212:13

we're going to add the authorized

212:14

redirect URLs which we can obtain from

212:17

our lab. So going back into the lab

212:19

here, you see that this is the OAuth

212:21

redirect URL. And we're going to copy

212:23

this and we're going to head back and

212:24

just fill this in and we're going to hit create.
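For context — this is the general n8n pattern rather than something unique to this lab — the OAuth redirect URL n8n expects usually has this shape:

  https://<your-n8n-host>/rest/oauth2-credential/callback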

212:27

As you can see, we now have the

212:29

client ID and client secret to the app

212:31

that we just created. So, we're going to

212:33

copy this and head back to client ID.

212:36

Paste it in. Client secret. Copy that.

212:39

Paste in the client secret. And you'll

212:41

see now that there is a sign in with

212:43

Google button that pops up. So, what you

212:44

want to do is just hit that and a Google

212:46

login pop-up will show. And then you

212:48

just want to correctly select the email

212:50

address. And you'll see that Google

212:52

hasn't verified this app. But because

212:54

you're the one who created it, you know

212:55

it's safe. So we're just going to hit

212:57

continue. And we're going to select all

212:59

because we want to have the agent be

213:01

able to do all these actions with our

213:03

Gmail. So we're going to hit continue

213:04

now. And as you can see, it says

213:06

connection successful. And we're good to

213:08

go. So just wait a couple seconds here

213:10

within the labs and it's just going to

213:12

load up. And there you go. I already

213:13

have my credential set up here. And we

213:17

just want to run an execute step to show

213:18

you that everything is working. So in

213:20

the workflow, you'll see that we've

213:22

chosen to define all of this by the

213:24

model. So we're going to let the AI

213:26

agent define this. And we're going to

213:28

start chatting and say, "Hi, can you

213:31

send an email to maronei@kodekloud.com

213:37

to just say hello?" All right. So we're

213:40

going to hit this. So as you can see now

213:42

the workflow has run and it's actually

213:44

sent an email to my Gmail. So let me

213:46

take a look. And there you go. It says

213:48

hello Maronei, just want to say hello, best

213:50

regards. Okay. So obviously it's not

213:52

very sophisticated because actually in

213:53

the AI agent we didn't even specify any

213:55

system prompt. So the whole point is

213:57

just to show you the main difference

213:59

between running the environment on n8n

214:01

Cloud and within our playgrounds,

214:03

specifically covering the part where

214:05

the Keyspaces key is being used as well as when

214:07

you're going to access any Google tools:

214:09

Gmail, Google Sheets and stuff like

214:10

that. You do need to go to your Google

214:12

cloud console to set up the project, the

214:15

app, and the OAuth clients in order to

214:17

access Google services with the

214:19

workflow. And as you explore n8n in the

214:21

course, you're going to realize that

214:22

there's going to be some differences

214:24

between running n8n Cloud and n8n

214:26

within your own self-hosted environment.

214:28

For example, the availability of

214:29

community nodes, supported versions, and

214:31

a few other features that might only be

214:33

available on n8n Cloud. So if you're

214:36

facing any issues in any part of the

214:38

build, just keep in mind that it might

214:40

be because you're using a self-hosted

214:42

version or a lab-hosted version from

214:43

KodeKloud. And it's not necessarily a

214:45

limiting issue. There's always a

214:47

workaround for that. It's just something

214:48

to keep in mind.
