n8n Tutorial – Zero to Hero Course
FULL TRANSCRIPT
Master the future of process automation.
n8n is an incredibly powerful open-source platform that enables you to integrate APIs and orchestrate intelligent workflows without the usual coding headaches. This course from Maronei will guide you from the foundational concepts of nodes and architecture to deploying advanced real-world systems. You'll master essential skills like connecting various services, configuring API keys, and handling complex data flows. The course also covers cutting-edge AI integration, teaching you how to build advanced retrieval-augmented generation (RAG) agents and coordinate multi-agent systems. By the end, you'll be able to automate sophisticated business processes, giving you a competitive edge in DevOps, AI, and data engineering.
Automation is no longer just a buzzword. It's reshaping how teams work, collaborate, and scale. At the heart of this revolution is n8n, an open-source, extensible automation platform that lets you connect APIs, orchestrate workflows, and even build intelligent AI agents with ease. From sending simple automated emails to creating advanced multi-agent RAG systems, n8n empowers you to streamline operations, boost productivity, and unlock whole new levels of innovation. Welcome to the n8n Zero to Hero course by KodeKloud. I'm Maronei, and together we'll go from beginner to advanced as we explore how to build powerful automations with n8n. Whether you're a DevOps engineer, an AI enthusiast, or someone from a non-technical background looking to automate real-world processes, this course is designed for you. We begin with the introduction section, where you get familiar with n8n itself, the playground environment, and the course structure. You'll also understand the objectives, so you know exactly how each step builds towards your automation skill set. Next, in the foundations of n8n, we'll cover the essentials: understanding nodes, inputs and outputs; data types in n8n; and finally, how workflows run under the hood using n8n's default logic. You'll also configure API keys for services like OpenAI, Anthropic, and KodeKloud Keyspaces so you're ready to integrate AI into your workflows right from the beginning. From there, we jump into
hands-on AI agent workflows. This is where the fun begins. You'll build your first email AI agent to draft responses on autopilot, and create a multi-agent research workflow that pulls knowledge from tools like Perplexity and OpenAI, turning hours of work into minutes. You'll explore the HTTP Request node, your key to working with APIs, whether for web scraping, data fetching, or calling other external agentic tools. You'll experiment with creative workflows, from text-to-image, text-to-video, and even image-to-video automations using bleeding-edge models like Google's Veo 3 and Kling. And yes, we'll even build a Slack workflow that lets AI reply to your co-workers and your managers on your behalf. So while you're sipping coffee, n8n is already answering questions for you, aka doing your job. In the optional setup section, we'll explore how to self-host n8n with Docker and use Ollama as your local LLM, as well as how to host on AWS EC2 in our playground environments. This flexibility means you'll know how to run n8n in a way that matches your scale, budget, and preferences. Then we'll dive into RAG agents: retrieval-augmented generation. You'll learn how vector databases like Pinecone give workflows memory and context. Together, we'll build a customer support RAG agent, the kind of workflow that real businesses use to provide intelligent, context-aware support. We'll also explore MCPs and see how they compare to traditional workflows, unlocking new possibilities for reusability and scalability. In the
multi-workflow advanced build, we'll combine multiple agents into enterprise-style systems, showcasing how subflows let you manage complexity without chaos. Finally, we'll cover retries, error handling, best practices, and how to leverage the n8n workflow template marketplace to accelerate your builds. By the end of this course, you'll have learned how to build practical, real-world automations like Slack AI agents, multi-agent research workflows, multimodal automations, customer support RAG agents, and advanced multi-workflow orchestration systems. And along the way, you'll engage with hands-on labs designed to reinforce learning by doing, so every concept becomes second nature.
At KodeKloud, we believe in learning by building, and you'll be part of a vibrant community where you can share insights, ask questions, and grow together.
So, ready to move from zero to hero with n8n? Let's dive in and unlock the full potential of workflow automation.
Hey there, and welcome. In this section, we're going to walk through the foundations of n8n, an open-source workflow automation tool that makes connecting apps and services simple. Here's what we'll cover. We'll do a quick introduction to n8n and its building blocks, nodes. Then we'll explore AI agent architecture and API flow, and later on, we'll showcase how it works in an email-to-Slack use case. I'll also show you how input formats and expressions work in n8n. And finally, we'll talk about how n8n handles data types. Let's dive in. All right, so what is n8n? Well, n8n is short for "nodemation", and it's a free, open-source workflow automation tool. Think of it like a universal connector: you can link different apps, services, and even AI agents together without writing tons of custom code. The magic happens through something called nodes. You set up workflows visually, step by step, and n8n handles all the data passing in between. An n8n workflow starts with a trigger event that you define, like a new email arriving or a webhook being called, and then there are the action nodes that do the work, like sending an email, interacting with an LLM, or calling an API. By chaining these together, you can turn everyday manual tasks into smooth automated flows. Now, let's look
at AI agents inside n8n. Just like any other node, an AI agent within a workflow will be triggered by an event or a trigger node. There are three important elements that make this node an AI agent. Namely, the LLM, which is the brain that powers the agent. This could be OpenAI, Claude, Llama, or any LLM that you hook up to it, and the way you connect them is through an API access token. Then there's the context window memory, which gives the agent context of the workflow interaction, and the tools that it can use to complete tasks, like Gmail, web scrapers, or any other tools that you connect to it. And of course, you connect that node to an output, either into the next node or just as a chat response. Here's how it works. Your workflow grabs some user input, say from a simple chat trigger. The agent thinks about that input and, based on what you've asked, whether it's sending an email or just having a quick chat, it decides if it needs to call tools like Gmail or any other tools that you hook into it through their APIs. Then it passes the output back to your workflow. So it's input going into the agent, the agent deciding whether to use the tools or not, and then passing out the output. And that's how you get intelligence mixed into your automations. And here's how that looks inside n8n itself. A chat trigger kicks things off, an AI agent node hooked up to GPT does the thinking, a Gmail node can send messages, and an output node fires the result straight to Slack. It's the same logic, but now mapped visually into the workflow editor. Now, by default, n8n runs nodes sequentially.
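Conceptually, that default run order can be sketched like this. This is a hypothetical mini-runner to illustrate the idea, not n8n's actual engine:

```javascript
// Minimal sketch of sequential node execution (illustrative, not n8n's real engine).
// Each "node" is a function that receives the previous node's output;
// a throwing node stops the run, just like a failing node errors the workflow.
function runWorkflow(nodes, triggerPayload) {
  let data = triggerPayload;
  for (const node of nodes) {
    data = node(data); // if this throws, the whole workflow errors out
  }
  return data;
}

// Example chain: trigger payload -> uppercase -> wrap in an object
const result = runWorkflow(
  [
    (d) => d.toUpperCase(),
    (d) => ({ message: d }),
  ],
  "hello"
);
console.log(result); // { message: "HELLO" }
```

If any function in the chain throws, nothing after it runs, which mirrors how one failing node errors out the whole linear workflow.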
So it's trigger to the next node, to the next node, and the next node. If the nodes are connected in a line, everything runs in order until the workflow finishes, without skipping any nodes. Which means if, along the line, one of the nodes fails, it will cause the workflow to error out. In this case, you'll see that there's a Tavily tool attached to the AI agent. And because it is only a tool hooked up to the agent, it is optional for the workflow to run it; it depends on whether the AI agent decides that it is appropriate to call on this tool based on the instructions received. And in the middle, you'll see that there's an If condition node, which just ensures that the workflow doesn't error out when it doesn't get the result in time from external APIs. But we'll get to the details of that in the next few sections. Workflows don't always go in a straight line, though. You can branch them out, so one input might trigger two paths: maybe one branch posts to Slack and the other goes to Gmail. In this case, each branch runs independently. It'll run the top one first and then the second one sequentially, giving you two actions from the same trigger. AI agents
need memory, and there are two main flavors. First, there's context memory, which is short-term memory inside the AI's prompt. It's great for chat history and for getting the AI to stay in context, and it is the difference between a stateless and a stateful response from the AI agent. We'll run some examples to show you the difference in practice later on. Then there's vector database, or document RAG, memory, which is long-term, searchable memory. Here, your documents get stored as embeddings, and the agent retrieves only what's relevant when answering. This is a knowledge source outside of the LLM's training data. So simple memory is leveraged for context and continuity, to make sure that the AI agent stays in context, and RAG is used for scale and accuracy.
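To make the RAG idea concrete, here's a toy sketch of embedding-based retrieval. The documents and vectors are made up for illustration; a real setup would use an embedding model and a vector store like Pinecone, but the principle, ranking stored documents by similarity to the query, is the same:

```javascript
// Toy RAG retrieval sketch: documents stored with (made-up) embedding vectors,
// retrieved by cosine similarity to a query vector.
function cosine(a, b) {
  const dot = a.reduce((s, x, i) => s + x * b[i], 0);
  const norm = (v) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

const store = [
  { text: "Refunds are processed within 5 business days.", vec: [0.9, 0.1, 0.0] },
  { text: "Our office is closed on public holidays.",      vec: [0.1, 0.9, 0.2] },
];

// Retrieve the single most relevant document for a query vector
function retrieve(queryVec) {
  return store.reduce((best, doc) =>
    cosine(doc.vec, queryVec) > cosine(best.vec, queryVec) ? doc : best
  );
}

// A query vector "close" to the refund document retrieves it
const hit = retrieve([0.85, 0.15, 0.05]);
console.log(hit.text); // "Refunds are processed within 5 business days."
```

Only the retrieved snippet, not the whole knowledge base, is then handed to the LLM as context, which is what keeps RAG scalable and accurate.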
Every node in n8n has an input and an output. In the input panel, you see the payload data that the node is receiving from the previous node. So in this case, it is a chat message received from the user, because the previous node was a chat trigger node, and the output is basically the response of the LLM, or the AI agent. The data structure ranges from very simple, with one variable, to multi-variable with different ranges of data types and additional information. It is worth noting that every node has input and output data, and the output data of this node will be the input payload of the next node; that's how the logic strings the entire workflow together. This makes it super easy to see what's flowing through your workflow. And data doesn't always look the same. In n8n, there are three main views of the data that you'll see in the input and output panels. The first one is schema, which shows the defined data structure; the second one is a table format with rows and columns, handy for spreadsheets; and then, of course, there's the underlying JSON, which is the most common structured and flexible way to show the data. Sometimes you get other types, like binary for files and images. It's worth noting that these are different representations of the same data. When you configure a node, you can type in a fixed value, like a static string, or you can use an expression, which pulls dynamic data from earlier nodes. Expressions allow your workflow to adapt to real-time inputs instead of being hardcoded. And the cool thing about n8n is the fact that you can drag and drop the variables or parameters into the fields within its low-code UI. Now,
let's talk data types, the basic building blocks inside n8n. In n8n, there are five main data types: strings, numbers, booleans, arrays, and objects.
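As a quick illustration, here are the five types as plain JavaScript values, the language n8n expressions use. The example values are made up:

```javascript
// The five basic data types you'll see in n8n, as plain JavaScript values.
const greeting = "hello world";          // string
const answer = 42;                       // number (also 3.14, -100)
const isActive = true;                   // boolean
const names = ["Alice", "Bob", "Mark"];  // array: a list of values
const user = {                           // object: structured data, can nest
  credentials: { name: "Mark", email: "mark@example.com" },
};

// Nested objects are accessed with dot paths, just as in n8n expressions:
console.log(user.credentials.name); // "Mark"
```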
So strings are text, for example "hello world". Numbers, as the name suggests, are values like 42, 3.14, or -100. Booleans are simply true or false values, and arrays are essentially lists, like a list of numbers or a list of names. Objects are structured data, like what you see in the example, which contains user credentials including a name and an email. Objects can even nest, and you often access them with dot paths in expressions. Knowing the differences and mastering these makes everything else click. One of the coolest things about n8n is the community nodes. These are extra connectors built by users, extending n8n way beyond the official nodes. So, if you don't see a built-in integration for your favorite tool, check the community first. Chances are someone's already made it.
Once you're in the workflow, the very first step you want to take is to introduce a trigger node into the workflow. The trigger node, as its name suggests, is a node that's going to trigger your entire workflow when the event it depends on fires. So let's go ahead and hit the plus button on the top right-hand side here, and it will generate a list of options, or possible trigger nodes, that you can start your workflow with. Now, the simplest one is the trigger manually node, which triggers every time you click on it. But in this case, what we're going to choose is an on chat message trigger node.
And once you click on a node, there usually are going to be some fields that you need to configure. But in this case, because it's a simple chat message received node, what happens is that a chat panel will pop up here where you can actually interact with the chat node. In this case, I'm just going to type in "hello" to populate it with the relevant data.
Now, a quick way for you to differentiate between a trigger node and a regular node is by the lightning icon that you see here on the left.
Now let's open up the chat message received trigger node to see how the data is populated within the node. As this is a trigger node, you only have output data from the node; there is no input data.
As we covered in the previous sections, there are three representations of the same data that you'll be able to see within each node. In this case, the underlying data format is JSON, with three pieces of information contained within the payload. The first is the session ID, which identifies a session of the chat. Then we have the action, which in this case is send message, because we're sending a message through the chat input. And there's the chat input content, which in this case was what I typed in, which is "hello".
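Based on the fields just described, the chat trigger's output payload looks roughly like this. The session ID value here is made up, and the exact shape should be treated as illustrative:

```javascript
// Approximate shape of the chat trigger's JSON output, per the demo above.
const payload = {
  sessionId: "abc123",    // identifies this chat session (example value)
  action: "sendMessage",  // the chat action that fired the trigger
  chatInput: "hello",     // what the user typed into the chat
};

// The next node reads these fields from its input payload, e.g.:
console.log(payload.chatInput); // "hello"
```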
Now, when you toggle over to the table view, what you'll see is the same data in a table format: the session ID, the send message action, and the chat input, which is "hello".
And if you toggle over to schema, you can see the same representation of the data in a schema format, which makes it easy for you to drag and drop into the next node when you need to.
Now let's go back to the workflow. Once that first node has been set, the second node that you want to add is the AI agent node. In this case, I'm going to click AI and select AI Agent.
Now, when you open up a node, the data from the output of the previous node automatically populates the input of the current node. This is true for any type of node when you connect it to a previous node: the output payload of the previous node is going to be the input of the current node.
In this case, because the AI agent is already connected to the chat trigger, we can leave the source as the connected chat trigger node. In terms of the prompt, or the user message, it is correctly identifying the JSON chat input, which is the content here, which is "hello", and you can see that "hello" is represented here as the actual value of the variable.
Now, with every AI agent node, what you need to pay attention to are the three key elements that make this node an AI agent. The first is, of course, the LLM. LLMs are the large language models; basically, the brain behind the AI agent node. There's a long list of LLM options that you can choose from, and each of them is good for specific use cases. In this case, we're going to choose an OpenAI chat model as the model that we want to use.
Now, I already have an OpenAI account set up here, but I'll show you how to connect your OpenAI account, or your OpenAI API keys, pretty easily. In this case, just hit the dropdown and click on create new credential.
In this field, what you want to do is populate it with the API key, which you can obtain either from KodeKloud Keyspaces or the OpenAI developer platform. We'll cover how you can get the OpenAI API key from KodeKloud Keyspaces or the OpenAI developer platform in another section. You can also fill in your organization ID, but it's optional.
Once you've keyed in your API key, you can hit save.
And once you've connected your OpenAI account, what you can do now is pick from the list of GPT models that OpenAI has to offer. In this case, we're going to pick GPT-4o mini, because it's intelligent enough for most use cases.
Now, moving on, the second element that we want to attach to the AI agent node is a memory node.
And what does memory mean for the AI agent? Well, this is basically the persistent memory that gives the AI agent the context it requires from previous conversations.
And I'm going to show you very quickly what happens when an AI agent has no memory versus when it does. In this case, we're going to type in "hello, my name is Mark."
When I type that in, the agent accesses the OpenAI chat model to infer the information and responds to me by saying, "Hello, Mark. How can I assist you today?" Okay. And in the next message, what I'm going to ask is, "What is my name?" Since I've just told it that my name is Mark, it should be able to tell me what my name is.
But as you can see in the output, and let me just move this up, it says, "I'm sorry, but I don't have access to personal information about users unless you shared it with me in this conversation. How can I assist you today?" So, as you can see, it has no memory context of what I just said, which is that my name is Mark, even though I just typed it in a minute ago.
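That's the stateless-versus-stateful difference in a nutshell. Here's a rough sketch with a fake "LLM" (just a string matcher, purely to show what changes when conversation history is replayed as context):

```javascript
// Sketch: stateless vs stateful calls to a fake LLM.
// The "model" here just scans the messages it was given for a name.
function fakeLlm(messages) {
  const text = messages.map((m) => m.content).join(" ");
  const match = text.match(/my name is (\w+)/i);
  return match ? `Your name is ${match[1]}.` : "I don't know your name.";
}

// Stateless: each call only sees the current message.
const stateless = fakeLlm([{ role: "user", content: "What is my name?" }]);
// -> "I don't know your name."

// Stateful: previous turns are replayed as context before the new question.
const history = [{ role: "user", content: "Hello, my name is Mark" }];
const stateful = fakeLlm([...history, { role: "user", content: "What is my name?" }]);
// -> "Your name is Mark."
```

The model function is identical in both calls; only the presence of history in the input changes the answer, which is exactly what the memory node provides.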
So, now let's attach a memory. In this case, we're going to choose simple memory. And as you might have noticed, there are various types of memory that you can attach to the AI agent, but the simplest way to get started is to click on simple memory and choose that as the option.
There are a couple of fields that come with the simple memory as well, but in this case, because our connected trigger node is a chat trigger node, we can leave them as they are.
The context window length basically means how many past interactions the memory will retain for the model. In this case, we're going to keep it at five, which means it will remember the five previous interactions of the conversation that you have with the agent. So we're going to leave it as that.
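The context window length behaves like a sliding buffer: once it's full, the oldest turn is dropped to make room for the newest. A minimal sketch of that behavior (illustrative, not n8n's implementation):

```javascript
// Sketch of a sliding context window: keep only the last N interactions.
class SimpleMemory {
  constructor(windowLength = 5) {
    this.windowLength = windowLength;
    this.turns = [];
  }
  add(turn) {
    this.turns.push(turn);
    // Drop the oldest turn once the window is full
    if (this.turns.length > this.windowLength) this.turns.shift();
  }
  context() {
    return this.turns;
  }
}

const memory = new SimpleMemory(5);
for (let i = 1; i <= 7; i++) memory.add(`message ${i}`);
console.log(memory.context()); // ["message 3", "message 4", ..., "message 7"]
```

With a window of five, the first two messages have been forgotten by the time the seventh arrives; that's the trade-off you're tuning when you change this field.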
Now, you might notice that the AI agent node is showing a yellow highlight around it with a triangle icon. That means some changes have been made to the AI agent; in this case, the change was the memory node being attached, and it's indicating that you should run the workflow again so that the data can be repopulated.
I will do the same thing: I will say "hello, my name is Mark."
As you can see, it's doing the same operation, but this time it's actually dipping into the simple memory and is now able to store that information.
So I'm going to ask it again: "what is my name?"
And this time, as you can see, it says, "Your name is Mark. How can I help you today, Mark?" So now it retains the memory of the information that I gave it a minute ago.
And you might notice the panel in the middle here: these are basically the logs of what's going on within the node. In this case, what happened was: a chat message was received, the data was passed on to the AI agent, the AI agent dipped into the simple memory to check what my name is, or the information that was given earlier, and then used the OpenAI chat model to return the information.
Now that the chat model and the simple memory have been set, the third thing that we want to attach to the AI agent, or provide the AI agent with, is the email tool. There are various tools that you can attach, but in this case, because we want the AI agent to be an email agent, we want to attach the Gmail node.
When you're attaching a Gmail node, you need to set up your Gmail credentials to be associated with the account. I already have my Gmail account set up here.
But if you haven't set up your Gmail account before, it is pretty easy. The way you would do it is to create a new credential. If you're using n8n cloud, you would sign in with Google via OAuth2, the recommended way; or, if you're using a self-hosting method, you might want to choose a service account and fill in the necessary information required to set up the Gmail account properly.
Now, as you can see, there are a couple of fields to configure here.
The first one is the tool description. We're going to leave this as "set automatically", because we're going to let the AI model determine whether it wants to use Gmail or not.
The second field is the resource. In this case, we want message, because we want to be able to send messages through the Gmail tool. And the operation, of course, is send. As you can see, there are other options as well: when we attach a Gmail tool, oftentimes it doesn't mean we just want to send an email. It can also perform many other operations, such as replying to a specific email, marking an email as read or unread, deleting an email, and many more.
Now, going into the information of the email itself, there are a couple of pieces of information that we need to pass to the AI agent for it to determine what to do with the email. First, of course: who is the recipient of the email? This is going to change depending on your input. Remember, because your input is going to be in natural language, whether that's English or any other language, it's going to be different every time. So what you want to do in that case is let the model define what the email address is. That way, whatever your input is, the model is going to infer the information and try to extract what it thinks is the correct email address.
The subject is also going to be determined by the model. We can do that by clicking the same blue button here to let it determine the subject.
When it comes to email type, in this case, we want to choose a text email, just for simplicity. HTML is very good when it comes to image-rich emails. So, if you're looking to attach any image or thumbnail and you want it to be aesthetically formatted within the email, then you would choose HTML. But in this case, we're going to go ahead with text.
And then, for the message, we want to choose "determined by model" as well, because that way we're going to let the AI determine the body of the message that it's going to attach.
Okay. So now the three elements have been configured.
We want to go back to the AI agent node to specify certain prompts. But before we do that, there are a couple of things worth highlighting. As you've probably noticed, every single field that comes within a particular node has a Fixed and Expression toggle. I just want to highlight the difference between Fixed and Expression. Because the AI agent node is connected to the chat trigger node, we've left the source as the connected chat trigger node. But what you can do is actually define what the input is. So, when we click on "define below", we specify exactly what kind of input goes into the prompt, or user message.
In this case, I can drag and drop the chat input, and it will fill it in as an expression.
Now, about the difference between Fixed and Expression: a fixed value is basically a static value, with which you tell the AI agent a specific but permanent definition of what the user message is. For example, if I were to set the user message to "please send an email", this would be the input for every single run of this workflow. And that's not ideal, because we want the AI agent to get the information dynamically, depending on my input. In the first run, I might put "what is my name" instead of "please send an email". So we want the prompt, or the user message, to change together with my input. How do we do that? That's when we toggle to Expression. Now we can drag the chat input variable, in this case, and simply drop it into the prompt user message field. As you can see, it creates an expression, in this case a JavaScript expression referencing the chat input variable, while the actual value of the variable is shown here, which is "what is my name?".
So, as you can see, on each separate run of the workflow, it's going to populate the information based on my input, instead of a predefined input as with the fixed format. I hope that explains the difference between Fixed and Expression.
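Under the hood, an n8n expression like `{{ $json.chatInput }}` is re-evaluated against the incoming item on every run, while a fixed value never changes. A simplified sketch of that difference (not n8n's real evaluator; a function stands in for the expression):

```javascript
// Simplified sketch of fixed values vs expressions.
// A function stands in for an n8n expression like {{ $json.chatInput }},
// which re-reads the incoming item's JSON on every run.
function resolve(field, inputJson) {
  if (typeof field === "function") return field(inputJson); // "Expression"
  return field;                                             // "Fixed" value
}

const fixedPrompt = "please send an email";
const expressionPrompt = ($json) => $json.chatInput;

// Two runs with different chat inputs:
const run1Fixed = resolve(fixedPrompt, { chatInput: "what is my name?" });
// -> "please send an email" (same every run)
const run1Expr = resolve(expressionPrompt, { chatInput: "what is my name?" });
// -> "what is my name?"
const run2Expr = resolve(expressionPrompt, { chatInput: "hello" });
// -> "hello" (adapts to the new input)
```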
Now, the next thing I want to do is go to options, and under "add option", I'll go to the dropdown and click on system message.
For those of you who are not sure about the difference between a system message and a user message: the user message is a set of instructions or input that comes directly from the user, while the system message describes the core function of this particular model. In this case, what I want to tell it is: "You are a helpful email assistant which helps craft effective and succinct emails based on the user's instructions. You also help with sending the email by using the attached email tool when asked." Now, this is a very simple system prompt, and I would suggest that, for a more effective system prompt, you go to ChatGPT or the LLM of your choice to prompt-engineer an effective system prompt for your particular AI agent. That's what I would do in a production scenario. But in this case, because we're just running a quick demo, we're going to stick with a simple one, just so that you understand the principle behind it. Cool. Now that we have the system prompt defined, as well as the user prompt variable in place, we can take it for a spin. In this case, what I want to do is say the same thing, which is "hello, my name is Mark."
And as you can see, it's accessing both the OpenAI chat model and the simple memory to store the information, but it's not using the Gmail tool. The reason for that is that I did not specifically ask the agent to send any email just yet. So, as you can see from the response: "Hello again Mark, what would you like to do today?", which is also the same response here. Now I'll say: I would like to send an email to my boss at maronei@kodekloud.com (which is not my real boss's email address; it's actually my own email address, because I don't want to send my boss a test email) about the upcoming marketing meeting on the 26th of August 2025.
So, I'll send that.
As you can see, it's not using the Gmail tool straight away, because it is intelligent enough to know that it needs a little more information. It says, "Sure, what would you like the subject and the message of the email to be?" In this case, because I'm too lazy to come up with the subject and message, I'll say: please come up with them yourself. Okay.
So I'm going to send that, and it gives me a draft of the email. In this case, the subject is "Upcoming marketing meeting on August 26, 2025", and the body reads: "Dear [boss's name], I hope this message finds you well. I wanted to remind you about the upcoming marketing meeting scheduled for the 26th of August 2025." Let me just blow this up. "We will be discussing our strategies and plans for the upcoming quarter. Please let me know if there are any specific topics you would like to address during the meeting. Looking forward to your input, Mark." And I would like to make changes, because, as you've noticed, it says "boss's name". So, let's say: my boss's name is Moonshot.
Okay, so now it registers that and gives me a new draft: "Dear Moonshot, I hope this message finds you well..." about the upcoming marketing meeting, and so on.
Then it asks: shall I go ahead and send this email to maronei@kodekloud.com? Again, this is not his email; this is my email, because I don't actually want this to be sent to him. So I say: yes, please go ahead and send it.
Once I've confirmed that, as you can see, the AI agent is now accessing the Gmail tool to send the email, in this case to my own email. So let's take a look at what it looks like in my inbox.
This is the email that's been sent to me. Because the Gmail account over here is connected to my Gmail, which is maronei@kodekloud.com, it is essentially sending from maronei@kodekloud.com to me. And this is the content, of course: "Dear Moonshot, I hope this message finds you well. I wanted to remind you of the upcoming marketing meeting scheduled for the 26th of August 2025."
So, everything looks nice. I could almost send this directly to Moonshot. But, as you can see at the end of it, it says "This email was sent automatically by n8n". That's not ideal, because Moonshot would find out that I'm actually automating this email. So what I want to do is go back to the n8n workflow, go to the Gmail tool, and scroll down to options. Under add options, I'm going to click on the dropdown and click on "Append n8n attribution".
What I want to do now is toggle this off. This will stop attaching the n8n attribution message to the email.
In this case, when you want to execute a single step, you can hit the play button on top of the node, so that you don't have to rerun the entire workflow. So let's just click on that.
Okay. Now that that's done, I want to take a minute to explain some of the little functions that you see here and what they do.
The first one is the play button, which is the execute step button. This button is super useful when you want to run a node without running the entire workflow again. So if you change anything within the AI agent node and you only want to run the AI agent node again, you would hit this button, and it will run only this node.
The power button, as you can see here, deactivates the node. Here it doesn't matter much, because this is basically the only node that does any processing in this workflow. But as your workflow grows more complex, you might have five or six branches of nodes, and some of them, if they're not in use, you might want to deactivate. If you click on deactivate, it's going to say that the node has been deactivated, and the workflow is not going to work for any processes that go through this particular node. We're going to reactivate it again. And of course, the bin icon deletes the node entirely.
Another very useful function within n8n is pinning. If you open up the node and go to the top right-hand corner, you can see that there's an option to pin the data. This is super useful, because now you don't have to key in or populate this particular node with new data all the time.
This is extremely valuable when you're running a workflow that uses API tokens or might cost you money for every run of the workflow.
So, to avoid running the workflow again and again and consuming API tokens, what you can do instead is pin the data to the node, so that every single demo run or trial run uses the data that's already been populated.
Once I pin this data, every single run of the chat input is only going to say "yes" until I push it to production.
Now once you've done all this, this
doesn't mean that your workflow has been
pushed to production. So the way you
would push to production is to toggle
the activate workflow option right at
the top here.
And when you do that, it's going to give
you a warning message to say you can now
make calls to your production chat URL.
Okay. So click on got it. And there you
go. Your workflow is now in production.
You do not have to click on execute workflow to run it. And every single message that it receives is going to run the way you set up the workflow.
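Once the workflow is active, you can also hit the production chat URL programmatically. A minimal sketch follows; the URL is a placeholder, and the payload field name ("chatInput") is an assumption about what the chat trigger expects, so check your own trigger's settings.

```python
# Sketch: calling the production chat URL of an active workflow.
# PROD_CHAT_URL and the "chatInput" field are assumptions, not copied
# from a real instance.
import json
import urllib.request

PROD_CHAT_URL = "https://your-instance.app.n8n.cloud/webhook/abc123/chat"  # placeholder

req = urllib.request.Request(
    PROD_CHAT_URL,
    data=json.dumps({"chatInput": "hello"}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would actually send it; here we only build it.
```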
So just to try that out, I'm going to type hello and hit send.
And there you go.
It's a new response. Now, of course,
this workflow isn't very useful because
the only way you can interact with the
AI agent is through the terminal here.
Now, how can we provide a publicly available terminal so that more than one user, or people who don't have access to your n8n workflow, can also use
this AI agent? Well, if you open up your chat node, what you can see is that
there is a make chat publicly available
option. And what you want to do is just
to toggle that. In this case, it's going
to give you a URL
that you can copy and open up.
And you might have to wait for a couple
seconds for it to stand up, but once you
do, this is a page that is publicly
available as long as you pick the option
to make the chat publicly available
and you can access the AI agent
as any user would. Cool. And as you can
see, there are a couple options here
that you can configure if you want to
include an authentication or if you want
to have an initial message that is
crafted or customized to your needs.
But in this case, we're going to keep it
simple because in this section, we just
want to run through the principles of
how to create your first AI agent
workflow on n8n. And I hope that has been
helpful. And just before we go, you
might notice that there are three options to choose from here. There is the editor view, where we have been actually building the n8n workflow and where, as the name suggests, you edit and configure all the nodes in the workflow. But there is also the executions tab, where you can check out the logs of the executions.
As you can see, it gives you the date,
the time, and the type of execution. The ones with the beaker icon are test executions, basically the workflow runs that we did when it was in demo mode. And once I push it to production, each run is an actual execution of the workflow in production.
And one of the things you can do once
your workflow is in production is also
to copy the data that's been populated
during the execution for troubleshooting
purposes. So if you click on copy to
editor
what happens is your editor will now
show the information that is derived
during the execution of that production
workflow.
So remember, in the chat we said hello, and here it is: the chat input content is hello. And similarly, the AI agent response is "Hello, how can I assist you today?", which is the response that we got from the live chat.
Now, this is a very useful option
because it just makes your
troubleshooting as well as your
iteration a lot easier
as you build or modify workflows in production.
So, I hope you were able to follow
through on the demo of how to build your
first AI agent workflow on n8n. In the
next section, we're going to give you a
lab where you build the exact same AI
agent within that particular lab, and see for yourself if you're able to create an AI agent that crafts emails and sends them on your behalf. This course comes with free labs that can be accessed right in your browser. So, you don't need to set anything up yourself or use your credit card. The labs are challenge-based, meaning we give you a challenge
and ask you to solve it. Use the link in
the description below to sign up for
free labs on codecloud. I'll now walk
you through your first lab. In the
upcoming labs, you're going to
experience hands-on building AI agents
in n8n environments. And in doing so, you're going to need API access to the LLMs of your choice. And in order to do that, you need API keys. And there are a couple of ways to get your API keys. Naturally, you can go to OpenAI to
get your OpenAI API keys, or platforms like OpenRouter, on which you can obtain API keys for several different models. But those platforms require
payment information before you're able
to get the API keys, even if you're just trying them out. So, one of the things that
we've done here at CodeCloud is we've
created the CodeCloud Code Key. And what it is, is an all-in-one AI playground where you can get the API keys to the LLMs of your choice. And at the time of this recording, models like GPT-4o, GPT-4.1, Claude Sonnet 4, Gemini 2.5, and Grok 3 are available through the Code Key playground. And the Code Key playground
is available on various different
subscription plans, but you get the most
access with AI plans and the business
plan. Now, if you were to go to this
page, which is
codecloud.com/ai-playground/code
key with the right access, you can hit
launch now and click on start playground
and you'll be led to a dashboard where
you can essentially create and grab your
API keys by selecting the models as well
as the different types of ways or
methods with which you can access the
LLM model. Now, in this case, I'm going
to do an example of how to call a GPT
4.1 model through Code Key on your n8n
environment. So, there are two ways to
do that. The most straightforward way is
to go to your n8n workflow. I'm going to
start the trigger node with a chat
trigger because I want to be able to
communicate with the agent and I'm going
to just populate the node with something
like hello. And we're going to introduce
a second node which is the OpenAI node.
In this case, we're going to choose a
message and model node. And what we want
to do here is to create a new
credential. And we call this code key
demo for example.
And what you want to do under the base
URL, replace this base URL with the base
URL that you can obtain from the Code Key page. Copy that. Go back to the
workflow and paste the base URL in. And
of course for the API key, we're going
to take the API key here and we're just
going to paste it in and we're going to
hit save. And you'll see that sometimes
depending on the version of the n8n cloud
that you're in, this red bar might pop
up. Don't worry too much about it
because I'm going to show you how you
can keep using it even if the red bar
shows up. And this is just simply
because the UI itself is not designed
for users to patch in their base URL
themselves like that. And sometimes that
causes some form of incompatibility
which causes the red bar to show up. In
this case, under model, what we want to
do is define it by ID, because we're going to type in the name of the model ourselves, since it's not going to be among the models in the list from OpenAI. And if you were
to try to grab it from the list, it's
not going to load at all. So, we're going to hit by ID and we're going to manually type in openai/gpt-4.1.
And that's the model that we want to call here. As you can see, it's GPT-4.1.
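Under the hood, overriding the credential's base URL amounts to sending an OpenAI-compatible chat-completions request to the Code Key endpoint instead of api.openai.com. A sketch of that request follows; the base URL here is a placeholder, so copy your real one from the Code Key page.

```python
# Sketch of the request the node ends up making when the base URL is
# overridden. BASE_URL is a placeholder; the model string is the
# provider-prefixed "by ID" value typed into the node.
import json

BASE_URL = "https://codekey.example/v1"  # placeholder
endpoint = BASE_URL + "/chat/completions"

payload = {
    "model": "openai/gpt-4.1",  # not a bare OpenAI name from the list
    "messages": [{"role": "user", "content": "hello"}],
}
body = json.dumps(payload)
```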
And with that model, we're going to
transfer or rather drop in the chat
input variable. So, that essentially
everything is connected. And we're not
going to add any tool at this point
because we're just going to hit execute
step. And as you can see it is coming up
with an output or response to the chat
input which is hello. It says hello how
can I help you today. So with that
simple setup you can now chat with the
model here. For example I can ask why is
the color of the sea blue and as you can
see under the chat the output is here in
the response the sea appears blue
primarily because of the way water
absorbs and scatters sunlight. Um, and
that's the response for that. Now, that's one way to gain access to an LLM model through Code Key. The other way to do that is to call an HTTP request node. So, I'm
just going to add another trigger node
here. And this time, I'm going to select
trigger manually. I'm going to move this
down right here. I'm going to add the
HTTP request node. And if you were to go
back to the code key documentation, as
you can see, we can actually toggle this
to the curl command and I'm going to
copy the entire chunk here of the curl
command and go back to my workflow. I'm
going to hit import curl. I'm going to
paste the entire thing and it's a post
API call to this particular endpoint
URL. And of course there's some
authentication here which is already
prefilled actually because I pasted the
entire curl command. Ideally we want to
set up the authentication on the headers
but we'll go over that part later on in
the next section. But in this case
without doing too much what I want to do
is I want to just hit execute step and
there you go. So under JSON body you can
see that there is a preconfigured
content which is in a single sentence.
Why should someone choose codecloud over
other platforms to learn DevOps? That is
the user prompt that has been
prepopulated there. So you can
essentially replace this with some other
variables that you receive from the
input. Like for example, if it's a chat trigger that's connected, you can drag the chat trigger input as the user input and drop it in here to replace this whole
line. But here I just want to show you that it is responding to the user prompt: someone should choose CodeCloud over other platforms to learn DevOps because it offers highly interactive hands-on labs and real-world scenario-based exercises that accelerate practical skills development far beyond traditional video-based courses.
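The imported curl command boils down to a POST with a Bearer token and a JSON body carrying the prepopulated user prompt. A sketch, with the endpoint and key as placeholders rather than real Code Key values:

```python
# Rough equivalent of the imported curl command: POST, Bearer auth in the
# header, JSON body with the user prompt. Endpoint and key are placeholders.
import json
import urllib.request

req = urllib.request.Request(
    "https://codekey.example/v1/chat/completions",  # placeholder endpoint
    data=json.dumps({
        "model": "openai/gpt-4.1",
        "messages": [{"role": "user", "content":
            "In a single sentence, why should someone choose CodeCloud "
            "over other platforms to learn DevOps?"}],
    }).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer YOUR_API_KEY",  # placeholder key
    },
    method="POST",
)
```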
So cool. These are the two ways that you can gain access to LLMs via Code Key API keys. And as I've shown earlier, you can
choose from these various models right
now. For the next few sections, this
might come in handy for you. Otherwise,
you can always go to the actual provider
like OpenAI or Anthropic to get those
API keys. I'll see you in the next one.
So I want to run through the differences between running your instance on n8n cloud versus the lab playground that we have in the CodeCloud course. Now the
first thing you'll notice as you go into
the lab is that you still have to provide your email, your first and last name, as well as a password for the instance. And don't worry, this is not saved anywhere. So you don't actually have to save the credentials and the password somewhere.
You can use different emails and
password for each of these instances. So
once that's filled in you can hit next.
And there's going to be a series of
onboarding questions here which you
don't need to fill. So you can hit get
started and in the same way for the paid
features information you can just skip
and there you go. Now you're in the
admin dashboard that is very similar to
the n8n cloud environment. However,
there's still some nuances and some
differences as we go into the workflow.
So here what we're going to do is we're
going to click start from scratch. And
the very first workflow that you're
going to build is the email AI agent.
And the first trigger to that is a chat
trigger. And I'm going to speed through this because it's very similar to the workflow that we've done. But I want to
just show you quickly the differences of
using the CodeCloud Keyspace as well as the OpenAI API key. So if you're using the CodeCloud Keyspace, what you want to do
when you select your model, for example,
in this case, I'm just going to select
the OpenAI chat model. And what I want
to do is to follow the instruction in
the left hand bar right here. And you
see that there's a link to Code Key. I'm
going to just hit that URL. I'm going to
hit launch now. and it's going to lead
me to this dashboard right here. I'm
going to click start playground and
there we go. So, this is the dashboard
on CodeCloud Keyspace. And what I want
to pick is the OpenAI GPT 4.1. And in
this case, I'm going to just copy the
API key here. I'm going to go back to my
workflow. And here, I'm going to hit
create credential. And I'm going to
paste the same exact API key. I'm going
to skip the organization ID. And for the
base URL, I want to make sure that I'm
replacing this with the base URL that I
obtained from the CodeCloud Keyspace. Go back
to my lab and paste the base URL right
here. Okay, so I'm going to hit save
right now. As you can see, it says
connection tested successfully. However,
there's a couple of things I want to point out here, because if you pick from list, as you can see, the list doesn't really match the known models of GPT.
This is because it's not really working
based on the UI that's been built. So if
you try to run it based on the chat ID
or chat model that we've selected, I'm
just going to run a hello message here.
It's going to go to the AI agent, but
it's going to error out. So what you
need to do is go into the chat model and
instead of picking from list, you want
to go by ID and as suggested from the
instruction on the left hand bar here,
you want to copy the openai/gpt-4.1.
Just copy that and paste it all word for
word. And let's do a test run again. And
this time it should actually be able to
access the appropriate model. So that's if you're using the CodeCloud Keyspace API keys. Now what I want to point out is
the difference between this and using
the OpenAI API key is that it's much
more straightforward when you use OpenAI
API keys. So to show you the difference,
what I'm going to do here is I'm going
to create another new credential and I'm
going to call it OpenAI account 2. This
time I'm going to head over to platform.openai.com.
And what I want to do is head over to
the API key section. I'm going to create a new secret key named n8n email integration. I'm going to hit create secret key, and I'm going to copy the API key and head back to my lab. I'm going
to paste the API key and leave the base
URL as is and I'm going to hit save. So
there you go. So connection tested
successfully. And the difference is
instead of by ID, I can now just pick
from the list and it should load up the
correct list of models that I might
possibly want to use. So for example, if
I choose GPT 4.1, it's just going to be
that. And we're going to run this node
again. And as you can see, it's calling
the correct model right here. Okay. So
the next thing I want to point out is
when you add your Gmail node in the next workflow, or a Google tool for that matter. The difference between doing that in our labs versus n8n cloud is that on n8n cloud you often see, when you create
a new credential with Gmail, that you actually have a button with which you can sign in directly using your Google account if your browser happens to be a Google Chrome browser. However, with the
labs what you need to do when you create
a new credential is you actually need to
connect it with the OAuth method. So what
you want to do is head over to console.cloud.google.com.
And the first thing you want to do is
create a new project. And for the new project, I'm going to name it n8n email app. All right. So, I'm going to
leave this as no organization. I'm going
to hit create. And it's going to take a
couple seconds to create a project. And
once that's done, I'm going to select
the project. And as you can see, it says n8n email app project. And the very
first thing I want to activate is I want
to go to Gmail API.
So what I'm doing right now is creating a project, because that's how Google recognizes each of these OAuth accesses that we're granting. But the way
the security works is you need to enable
the particular tool that you want to use
within the project. So in this case I
want to use Gmail. So I want to make
sure I enable the Gmail API. All right.
Once that's enabled, I want to head over to the OAuth consent screen. And right now there's no OAuth consent that's set up.
So I'm going to just hit get started.
And in this case, I'm going to have to give it an app name as well. So I'm just going to call it n8n email app. Okay. User support email. Going to
put this.
And we're going to select external. And
by the way, each of these steps is
documented on the left hand side panel
of the lab. So you don't have to
memorize any of these. But we're going
to go to next. And under contact
information, just going to put my email
here. And once you're done, just hit continue and create. So, just before we
go, I just want to go to audience and I
want to add a test user here, which is
an email that you're going to use to
send out basically the emails that you
want the agent to send out. So, in this
case, it's maronei@codecloud.com.
We're going to save that. And lastly,
we're almost there. We're going to go to APIs and services again, and this time we're going to go to credentials. And
what we want to do is to hit create credentials with OAuth client ID. For application type, we want to choose web application, and under name, you want to name it n8n email OAuth client. And then we're going to add the authorized
redirect URLs which we can obtain from
our lab. So, going back into the lab
here, you see that this is the OAuth redirect URL, and we're going to copy
this and we're going to head back and
just fill this in and we're going to hit
create. As you can see, we now have the
client ID and client secret to the app
that we just created. So, we're going to
copy this and head back to client ID.
Paste it in. Client secret. Copy that.
Paste in the client secret. And you'll
see now that there is a sign in with
Google button that pops up. So, what you
want to do is just hit that and a Google
login popup will show and then you just
want to correctly select the email address, and it'll say that Google hasn't verified this app. But because
you're the one who created it, you know
it's safe. So, we're just going to hit
continue and we're going to select all
because we want to have the agent be
able to do all these actions with our
Gmail. So, we're going to hit continue
now. And as you can see, it says
connection successful and we're good to
go. So, just wait a couple seconds here
within the labs and it's just going to
load up. And there you go. I already
have my credentials set up here and we
just want to run an execute step to show
you that everything is working. So in
the workflow you'll see that we've
chosen to define all of this by the
model. So we're going to let the AI
agent define this and we're going to
start chatting and say hi can you send
an email to.com
to just say hello. All right so we're
going to hit this. So as you can see now
the workflow has run and it's actually
sent an email to my Gmail. So let me
take a look and there you go. It says
hello Mercury just want to say hello
best regards. Okay. So obviously it's
not very sophisticated because actually
in the AI agent we didn't even specify any system prompt. So the whole point is
just to show you the main difference
between running the environment on n8n
cloud and within our playgrounds
specifically covering the part where
code key is being used as well as when
you're going to access any Google tools,
Gmail, Google sheets and stuff like
that. You do need to go to your Google
Cloud Console to set up the project, the
app, and the OAuth clients in order to
access Google services with the
workflow. And as you explore n8n in the course, you're going to realize that
there's going to be some differences
between running n8n cloud and n8n within your own self-hosted environment.
For example, the availability of community nodes, supported versions, and a few other features that might only be available on n8n cloud. So if you're
facing any issues in any part of the build, just keep in mind that it might
be because you're using a self-hosted
version or a lab hosted version from
CodeCloud. And it's not necessarily a
limiting issue. There's always a workaround for that. It's just something to keep in mind. I want to take a minute to go through the different types of authentication approaches that we can use when we make HTTP request nodes and the corresponding API calls.
Now, as you've seen in the previous
section, the HTTP request node is a very
convenient way for us to replicate an
API call without configuring everything.
And it's especially useful when we can
just go to an API documentation and hit
the import curl, essentially import the
entire curl command. Now, as you've seen
in the previous example, there are a few
ways that we can configure the
authentication method in these API
calls. Now, most of the API calls are
going to be header authentication. As
you can see in this section after I've
imported the curl, it automatically configures the header section and populates it with the name and value of the API key for authorization. Now you can do it this
way or you can actually set up a
credential type here. Now I want to
explain the difference between the two.
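Either way, the request that leaves n8n carries the same header; the credential just keeps the secret out of the workflow definition. A small sketch, with the credential name and key as placeholders:

```python
# Whether the header is keyed in as a node parameter or stored as a header
# auth credential, the resulting request header is identical. Names and
# key values here are placeholders.
inline_params = {"name": "Authorization", "value": "Bearer YOUR_API_KEY"}  # hard-coded in the node

credentials = {  # stored once under the credentials tab, reusable by name
    "codekey-demo": {"name": "Authorization", "value": "Bearer YOUR_API_KEY"},
}
from_credential = credentials["codekey-demo"]

header_a = {inline_params["name"]: inline_params["value"]}
header_b = {from_credential["name"]: from_credential["value"]}
```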
So there are two main reasons why you
would want to do it this way by setting
up a credential type. The number one
reason is it's just super convenient
because the moment you have set this up
with a header auth credential, you can just pick from the list of credentials you've created in the upcoming nodes, or even in another workflow. And
this is especially useful when you have
an API call that has a POST and a GET, so you don't have to set it up again in the next node. Now the second reason why you
want to do it this way is for security
reasons, because setting up a credential type allows you to handle the authentication without hard-coding the API key into the parameters. And this is just better practice to ensure security, especially when you're sharing the workflow or the project with multiple team members and individuals. And all your credentials are going to be stored under the credentials tab, where you can have an overview of the list of credentials that you've set up. And of course you can delete them or reconfigure them from this page. Now, that's it about
authentication. I'll see you in the next
section. So, in this section, we're
going to run through how to set up your
Google Cloud Console and connect it to your n8n. That way, you're going to be able to start using the Google nodes such as Google Drive, Gmail, Google Sheets, Google Docs, and a variety of others without setting it up over and over again. So, the node that we're going to start with here is the Google Drive node. And the reason why we're
choosing this is because the way to
connect it is going to be similar across
all nodes. In fact, once you've
connected it once, everything else would
just be a simple matter of enabling APIs
through your Google Cloud console. So,
in this case, under credential, what I
want to do is to create a new
credential. And in order to get your
client ID and client secret, where you
want to go is to go to
console.cloud.google.com.
And depending on whether you've worked
with Google Cloud before, and by the
way, if you have and you already know how to get connected with n8n, then feel free to skip over to the next
section. But your page might look
slightly different from what I'm seeing
right now. But in any case, where you want to go is the top left-hand corner.
You want to pick the project here. And
again, as you can see, I already have
one set up, but I'm going to set up
another one. Click on new project. And I'm going to call it n8n demo KK. And under location, I'm going to pick no organization here. And I'm going to hit create. And there you go. It's creating an n8n demo KK project. I'm going to
click on select project and I'm going to
pull up the left hand sidebar here.
Where I want to go is under APIs and services. I want to go to the OAuth consent screen, and nothing is set up yet. So I'm going to hit get started. So app
information: n8n demo KK. I'm going to use the same name for the app and pick that as the email. And for the audience, you
have the choice of internal and external
depending on whether you have your
Google workspace set up. Internal is
only available if you have a Google
workspace. But in this case, I'm going
to hit external. And contact
information. I'm going to use the same
here. Hit next. Check on that and hit
continue. And we can go on create. All
right. So the next step, I want to hit the create OAuth client button. And what
essentially we're doing here is that
we're creating a project even though
it's not a real project. Google just
likes to use this as a structure to
identify and keep track of the OAuth clients that people create. So
essentially what we want to do here is pick application type web application, and in the name we're going to choose n8n demo KK again. And we're going to leave this as is, and we're going to add an authorized redirect URL here. And where we're going to get this URL is from our workflow, or rather our credential here. Click on copy and go back to our Google console. Paste that and hit create. So now we have the client ID.
We're going to copy that. We're going to
go back to our workflow here. So client
ID, we're going to paste that in. Go
back to our console here. And what we
want to do is to add a client secret.
And we're just going to copy that. Go back to our workflow and paste the client secret in. So just before we hit sign in
with Google here, which will activate
the connection, we're going to head back
to our Google Cloud Console and go to
branding and scroll down to authorize
domains. So in this case, what we want to do is add an authorized domain, which is n8n.cloud.
We're going to hit save. And the other
thing we're going to do is to type in
drive in the search bar and hit Google
Drive API because we want to enable this
API here. And of course, we're going to have to enable the Gmail API as well as the Sheets and Docs APIs. But in this
case, we're going to just enable the
drive and we're going to try it out. And
once you've enabled the drive API, the
last thing you want to do is head over
to audience
and essentially publish the app. All
right, we're going to confirm that and
head back to our workflow here and hit
sign in with Google. And you're going to be redirected to a Google sign-in page
and you're going to have to click your
email and it's going to say Google
hasn't verified this app, but it's okay
because it's you who's launching this.
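What's happening behind that sign-in button is a standard OAuth 2.0 authorization-code flow: n8n sends you to Google's consent page carrying the client ID and the redirect URL you registered. A sketch of that consent URL, with the client ID and instance hostname as placeholders:

```python
# Sketch of the OAuth 2.0 consent URL that "Sign in with Google" opens.
# Client ID and instance hostname are placeholders; the scope shown is
# the Drive scope used in this section.
from urllib.parse import urlencode

params = {
    "client_id": "1234567890-abc.apps.googleusercontent.com",  # placeholder
    "redirect_uri": "https://your-instance.app.n8n.cloud/rest/oauth2-credential/callback",
    "response_type": "code",
    "scope": "https://www.googleapis.com/auth/drive",
    "access_type": "offline",
}
consent_url = "https://accounts.google.com/o/oauth2/v2/auth?" + urlencode(params)
# After you approve, Google redirects back with ?code=..., which n8n
# exchanges (together with the client secret) for access and refresh tokens.
```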
And once you've done that, there are going to be a couple of permissions that you need to grant on the credentials. So, we're
going to select all of these and we're
going to hit continue. And there you go.
Connection successful. So, we're going
to head back to our workflow here. As
you can see, account is connected. And
once you've connected your account,
you'll be able to see the list of
folders that you have on your drive. So
in this case there's only one folder in
my drive which is KK test folder. So I'm
going to hit that, and essentially we can watch for file created, for example. Right, so anytime a file is uploaded, it's going to trigger this. So we're going to hit fetch test event. I didn't upload any file, so it says no data with the current filter. So this is the folder right
here. I'm going to open that up. I'm
going to upload a dummy file, which is an SOP file. And there we go. So,
we're going to go back to workflow here
and rerun the test again. And as you can
see, now it's fetching the data of the
file that has been uploaded. So, cool.
For the other nodes such as Google Sheets and Google Docs, what you need to do is just go back to your Google Cloud Console and essentially enable your APIs by typing in the Google node or the Google tool that you want to use. For example, in this case, the Google Sheets API. You want to enable that, and you can go back and connect that in your n8n, and you're good to go. All right,
I'll see you in the next section. In
this section, we're going to build a
simple workflow of an AI research agent that goes out to the internet, searches for the latest AI news, and sends it across to you via email, while at the same time logging those particular headlines so that the next day the workflow can check against the logs and not repeat the same headlines.
And this is a workflow that's going to
be super useful in this day and age when
AI news pops up every other day.
So this is what the entire workflow will
look like. We start off with a scheduled
trigger node which is preset to a
certain time of each day. Let's say 9:00
a.m. And then this goes to a Perplexity
search. So as you know, Perplexity is one of the leading AI search agents out there. And we're going to use that to search for the latest news or headlines pertaining to AI development. And then we're going to kick the output to a checking agent. And what this does is it dips into the Google Sheet, which logs all the previous headlines of the news, to make sure that it doesn't repeat the same headlines.
And after that, it's going to send a
summary of this AI news and send it out
to the intended recipient via Gmail. And
the final step is going to log those
headlines to make sure that the AI agent
stays up to date with those logs.
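The checking step at the heart of this workflow is just a set-membership filter. A sketch in plain Python, with made-up example headlines:

```python
# The dedup check in plain Python: drop any headline already logged in the
# Google Sheet, so tomorrow's email never repeats today's news.
# Headlines below are made-up examples.
logged = {"OpenAI releases GPT-4.1", "Gemini 2.5 announced"}        # rows read from the sheet
fetched = ["OpenAI releases GPT-4.1", "Grok 3 results published"]   # today's search results

fresh = [h for h in fetched if h not in logged]  # only unseen headlines go out
logged.update(fresh)                             # then log them for tomorrow's run
```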
Now, let's build it together. The first thing you want to do is introduce the trigger node, which in this case is a schedule trigger node, to kickstart the entire workflow. With the schedule
trigger node, you can choose the trigger interval that you want. In this case, it could be days, hours, minutes, or seconds, or even longer than that. As
we want this workflow to kickstart every
single day, we want to choose days. And
the days between triggers, we want to keep that as one, because we want it to send out every single day. The
trigger hour is something that we can
choose based on preference. And in this
case, we're going to choose 9:00 a.m.
because we like the news to be delivered
to us in the morning. And we're going to
keep the trigger at minute zero. And
we're going to hit execute step just to
populate the data. And there you go. We
have set our first trigger node. Now the second node that we want to introduce here is a Perplexity node. And usually
when it comes to using third-party tools, what you typically do is go to the API documentation of the third-party tool and make an HTTP request towards those API endpoints. However, in this case,
Perplexity has a native node that lives within n8n, which you can choose and is easy to use. In this case, I'm going to choose the message a model node from Perplexity.
And I already have my Perplexity account credential set up. But I'm just going to walk you through very quickly how you can easily do that. So in this case, you're going to pick the dropdown and click on create new credential. And as you can see, it is as simple as keying in the API key from Perplexity.
What you could do is go to perplexity.ai, and in the main dashboard, go to your account and click on settings. And under settings, if you look at the left-hand side tab, you want to hit API keys. And
your API keys page may not look like
this because you do need to set up your
API billing before being able to access
these keys. You can easily do that by
heading to API billing, putting in your credit card information, and topping up the credits at a minimum of $5.
But in this case, I've already gone ahead and done so. What we can do here is
to hit the create key button and it's
going to create an API key ready for us
to copy. Now head back to the workflow and copy the API key into the field. Hit save and you have your Perplexity account registered to n8n. And there you go. It says connection tested successfully, and your Perplexity account is connected to your n8n. And you only
need to set up the credentials one time.
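Roughly speaking, what the Perplexity node ends up sending is a chat-completions style POST to Perplexity's API, with the system message listed before the user message as this node requires. A sketch, with the prompts abbreviated as placeholders:

```python
# Sketch of the request the Perplexity node builds: a chat-completions
# style payload against Perplexity's API. Prompt texts are abbreviated
# placeholders; note the system message comes before the user message.
import json

payload = {
    "model": "sonar-pro",
    "messages": [
        {"role": "system", "content": "You are an expert AI research analyst..."},
        {"role": "user", "content": "Find and summarize the most recent AI news..."},
    ],
}
endpoint = "https://api.perplexity.ai/chat/completions"
body = json.dumps(payload)
```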
Afterwards, you can simply pick from the
account credentials that you've set up
after you've connected your account or
your API keys in this case. What you can do is choose the operation that you want Perplexity to carry out. In this case, it's message a model. And the model that we want to work with is Sonar Pro in this case. And there are
other models that you can choose from, such as Sonar Deep Research or Sonar Reasoning Pro. And each of these depends on the use case that you want to achieve. The deep research model is going
to give you a large output rich with data and information, and is perfect if you're looking to do very detailed research about a topic. But in this case, because we're only looking for the headlines of the AI development news, we want to choose Sonar Pro. And as with
the other AI agent nodes, the typical components that you have are the user prompt as well as the system prompt. And you can do that by adding a message and changing the user to a system. There is a slight difference between the workings of a Perplexity node and those of the other AI agent nodes. Specifically, one of those differences is that Perplexity requires the system message to come first, before the user message. So what you want to do is swap these around and choose the system prompt as the first prompt and the user prompt as the second. Now, within the
system prompt, here's a prompt that I've come up with. It says: "You are an expert AI research analyst specialized in tracking and summarizing the latest developments in artificial intelligence. Your task is to deliver concise and up-to-date news summaries, focused on new AI model releases, research breakthroughs, and strategic moves by key AI players." So that's the system prompt. As for the user prompt, what I want to do here is prompt it effectively so that it summarizes and extracts the most relevant information or news for me. And I'm going to blow this up so that it's easier for you guys to read. And
here I go. It says: find and summarize the most recent AI news within the last 24 hours. For reference, today is, and as you can see, this is intentionally left blank, because I want to drag in the variable for today. In order to find that, you would go to variables and context, and this is applicable for every single workflow: you're going to have a default variable for the current time as well as the current date. In this case, I want to drag the variable for the current date and put it here, and it will dynamically change depending on what the date is. So you can see on the right-hand side it says today is, followed by today's date. The reason I want to do that is because a lot of the time the AI is prone to hallucinating the current date. When you tell it to get the most recent news, it doesn't have context on what today is, so when you ask it for the most recent news in the past 24 hours, oftentimes it might hallucinate the day to be a day within last year or the year before. This way, we ensure that the AI has context on what today is and is able to focus on news coming in from the past 24 hours. I've also added a line to make sure that it prioritizes research breakthroughs and key announcements by organizations like OpenAI, Anthropic, Google, Meta, Mistral, xAI, and Hugging Face. This is just a guardrail to make sure that it focuses on the news that is most interesting to me, which is the development and release of new AI models. Okay. So now that the
system and user messages have been set, the last thing I want to do is go to add option and click on the dropdown button; Perplexity has a very neat feature here, which is the recency filter. We can set the search recency filter to just within the day, so in this case it's going to restrict the search to the most recent news that has been put out there. Okay, now we're ready to execute the step. And as you can see, the output is various types of data that have to do with the research output as well as some of the relevant information regarding the search. I'm not going to go through each of these variables, but what's interesting to us would be the citations, which are the news outlets and links that it has found to be relevant to your search, and also the search results.
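To make the moving parts concrete, here's a rough Python sketch of the kind of request the Perplexity node assembles, based on Perplexity's public chat-completions API. The payload shape, field names, and sample values are my assumptions for illustration, not n8n's internal code.

```python
from datetime import date

def build_perplexity_request(system_prompt: str, user_prompt: str) -> dict:
    """Approximate payload for a Perplexity chat-completions call
    (assumed shape, for illustration only)."""
    return {
        "model": "sonar-pro",  # the model picked in the node
        "messages": [
            # Perplexity expects the system message before the user message
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
        # the "neat feature": restrict web search to the last day
        "search_recency_filter": "day",
    }

user_prompt = (
    "Find and summarize the most recent AI news within the last 24 hours. "
    f"For reference, today is {date.today().isoformat()}."  # inject today's date
)
payload = build_perplexity_request(
    "You are an expert AI research analyst.", user_prompt
)
print(payload["messages"][0]["role"])  # the system message comes first
```

In the node's output, the natural-language summary then lands under choices, message, content, with the citations alongside it.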
And finally, the output content, which is in natural language, where Perplexity responds to you and says there have been several notable AI developments and announcements in the last 24 hours, particularly concerning major strategic moves and investments.
And what we want to do here is pin the data so that we don't have to rerun it, because every single run is going to use up Perplexity API tokens, and we want to make sure that we're being cost-effective and prudent about it. Same thing with the schedule trigger: I'm going to pin this data so that every time we run through the workflow, it's not going to burn a new API call and pull new data. After the Perplexity node, what I want to introduce is the formatter agent, or in this case the checking agent. We're going to use
OpenAI in this case, and we're going to pick a simple message a model action as part of the workflow. And as you can see, the output of the Perplexity node, which was the previous node, becomes the input of the OpenAI node. And again, I already have my account all set up, as we've covered in the previous sections, so you should have the same account set up here as well. We're going to leave the resource as text, the operation as message a model, and we're going to pick a model here. In this case, we're going to pick GPT-4. That should be enough to check
against the log. And just a reminder of what we're doing here with this node: we want it to check against a specific Google Sheets log of past headlines that have been recorded. The reason we do that is because some headlines are going to be reported twice, three times, four times, or repeatedly, depending on how big the headline is. If we don't add this particular node to the workflow, what happens is it's going to report some of the headlines again and again, thinking they're new headlines from the past 24 hours. We're going to rename this the news checking agent. Now, for the user prompt, what I'm going to do is give it a prompt here, and I'm going to blow it up so that you can see. It says: here's today's AI news list from Perplexity.
So I'm going to add the summarized content from the previous Perplexity node, which is under choices, under message, and the variable is called content. We're going to just drag and drop, and as you can see on the right-hand side, this is the whole chunk of summarized news headlines that came from Perplexity. And then later on, I'm going to ask: please cross-check the news against the past news log sheet and remove duplicates. This is the name of the news log sheet that I'm going to create later on, so that it can compare and check against it such that there won't be any duplicate news in the output. I've also added here that we should ensure each item has a clear, impactful one-to-two-sentence summary, so there's a bit of a summarization task here. Also, after each summary, include the full source URL. This is to make sure that if you're interested in a particular news item, you can click on that URL and get access to where it's been cited from. Use bold to highlight company names or major updates; this is just for readability. And add spacing between news items, for readability as well. At the end, I've added: if Perplexity's output news items are all duplicates from the past news log sheet, respond appropriately with "no notable AI development news in the past 24 hours." This is just to make sure that it's only generating content when there are new headlines, and not recycling headlines again and again. So that is the user prompt. Meanwhile, we need to add
a system prompt. In this case, we choose add message and we're going to choose a system message, or a system prompt. And in the system prompt, we're going to do a simple prompt, and I'm going to blow it up: you are an AI news formatter, and your role is to process, check, and clean AI news results fetched from Perplexity, to ensure readability and that past headlines are not repeated. I'm also going to add: when checking, use the attached Google Sheet tool named past AI news log. This is just to ensure that it knows that the tool exists and that it's supposed to use it to cross-check the news headlines and make sure they're not repeated. Okay, cool.
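Conceptually, the checking agent's job is a de-duplication pass. Here's a minimal sketch in plain Python, where exact case-insensitive matching stands in for the LLM's fuzzier semantic comparison; the fallback message mirrors the one in the user prompt.

```python
def remove_duplicates(todays_headlines: list, past_log: list) -> list:
    """Drop any headline already present in the past news log."""
    seen = {h.strip().lower() for h in past_log}
    fresh = [h for h in todays_headlines if h.strip().lower() not in seen]
    if not fresh:
        # same fallback the user prompt asks the agent to use
        return ["No notable AI development news in the past 24 hours."]
    return fresh

past = ["OpenAI releases a new model"]
today = ["OpenAI releases a new model", "Anthropic publishes a new paper"]
print(remove_duplicates(today, past))  # only the Anthropic item remains
```

The real agent does this comparison semantically, so rephrased versions of the same headline also get caught, which simple string matching would miss.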
So now we are going to add a tool here, which is essentially the Google Sheets tool. All right, click that. I already have my Google Sheets account connected, but we've covered how to connect your Google Sheets, or Google Docs, or your entire Google account through the Google Cloud console in the previous section, so you should have the same setup. Under the tool description, we're going to set it automatically, because we're going to let the AI decide when to use this tool. At the same time, we're going to have to determine some of the configuration here. Under resource, we want to refer to a sheet within the document. And the operation is get rows, because we're not trying to create new sheets and we're not trying to update the sheets; we're simply trying to check against the sheet whether the headlines are repeated. So we're going to use get rows. Now, we need to fetch a document.
But just before we do that, we need to create the actual sheet where all the information on the past headlines is going to be logged. So I've created a sheet here called the past AI news log, and it's a simple sheet with a date column and a headlines column. The date is going to correspond to the date of the headlines, and the headlines column is basically going to hold the content of the headlines in one or two sentences. Now let's go back to the workflow. Once you have the sheet set up, you can then find the sheet just by selecting from list and clicking on the right one, which is past AI news log. And then you can choose the correct sheet from the document, in this case sheet one; I only have one sheet in the document, so that's the correct one. You should be able to see it now, and that's it. And just before we go, we want to rename this tool to past AI news log, because we want to make sure that the AI knows that this is the particular Google Sheet that we're referring to when we told it that there's an attached tool. Now, going back to the workflow,
we're going to run this particular node based on the configuration that we filled in. You can see that, based on the data that came from Perplexity, it's checking it against the news log, which in this case is an empty sheet. So there are no headlines to be checked against, but it's still doing the check. And finally, the output is: here's the formatted AI news, which is Julius AI has successfully raised a 10 million funding round, amongst other things. In this
case, the next node that we want to introduce is the email node, because right now all the output lives within the workflow. Just like in the previous section, we're going to attach a Gmail node, and we're still going to choose the send a message action. But the difference, as you can see, is that we're not attaching the Gmail as a tool for the checking agent to leverage. Now, why is that? And what's the difference between attaching Gmail as a tool versus Gmail as a subsequent node after the AI agent node? The difference is that when we attach the Gmail tool to an AI agent node, the AI agent has the liberty to choose whether or not to use the tool. In scenarios where the AI agent hallucinates, it might use the tool when it doesn't need to, or it might not use the tool when it actually needs to. Having the Gmail node chained up to the workflow like this ensures that the workflow always runs through the node, regardless of what the AI agent thinks. In this case, we make sure that the node is run no matter what. And this is extremely useful when we know that every day we want to receive an AI news output in our Gmail, because a chained node has to run every time the workflow runs. It's just best practice to chain it up as part of the workflow instead of connecting it to the AI agent as one of the tools that it may or may not use. Now, let's go back
to the Gmail node. And you can see that, in terms of configuration, there are some slight differences between a Gmail node that is chained up to the workflow and a Gmail tool that is attached to the AI agent. You now don't have the option to let the AI define the recipient, the subject, and the message; you actually have to hardcode them into the node. And I use the word hardcode, but actually you're simply dragging the variable and dropping it into the field. The fields appear to be the same as when you have a Gmail node attached to your AI agent, so we're going to leave it at that. But here's where it gets a little different. The recipient that you want the output of the Gmail to be sent to should be consistent, because you want this to be a daily operation sent to the same set of email addresses. So in this case, I'm going to put my email, and under the subject I'm going to put daily AI news digest.
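The key point is that these fields are fixed at configuration time, while only the body stays dynamic. A tiny sketch of that shape; the address is a placeholder and the field names are illustrative, not n8n's exact internal keys.

```python
def build_email_fields(agent_content: str) -> dict:
    """Gmail node settings: recipient and subject are hardcoded,
    only the body is dragged in from the news checking agent."""
    return {
        "to": "me@example.com",             # placeholder recipient
        "subject": "Daily AI News Digest",  # fixed, never changes
        "emailType": "text",
        "message": agent_content,           # dynamic: the agent's content output
    }

fields = build_email_fields("Here is today's AI development news...")
print(fields["subject"])
```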
And this doesn't need to change, because I know what it is. And for email type, because it's all going to be text, I'm going to choose text. The message is actually the content from here, and as you can see, the content itself says here is formatted AI news. This is not ideal; it's not the way that I want to receive the summary, so we can actually prompt this a little better. Once we're done with this, let's go back to the news checking agent, and under the user prompt, we blow it up and add an output format rule: output in text only, and start the daily news summary with "here is today's AI development news, today is", and I'm going to just drag the today variable and drop it here. I wanted to start this way so that it makes sense: every time I open up the email, it's going to say, "Here's today's AI development news. Today is that particular day," so I know that I'm getting the latest news from the headlines. Okay, now that's done. As you can see, when I make any changes, it's going to turn the node yellow. So we're going to run it one time and see what the output now is. Here's today's AI development news. Today is the 7th, sorry, today is the 29th of July, 2025. And then
the news. So, okay, that's the format that I want it to be. I'm going to open that up again and pin it so that I don't have to rerun it every time. And within the Gmail node, this is the content that I want to drag into the message field. So that's good. And remember, from the previous sections, we know that it's going to automatically come with an added attribution within the email. So what we want to do is just click add option, toggle off the append attribution, and then we're going to try to execute the step. As you can see, there have been some actions, and the email has been sent. So let's go to our email inbox and see what we've received. So this is the email that I've received, and it says here's today's AI development news, today is this date. Of course, this formatting is not ideal; you can go into the prompt engineering again and try to format it in a more readable way, but essentially, in terms of formatting, it's much easier to read now. And of course, these are the news headlines, and if you push it to production, you're going to be receiving these emails day in, day out, and keeping yourself updated with the latest AI development news.
Going back to the workflow, because we're not quite done yet. Even though we're now receiving the daily news update (and I'm going to pin this as well), remember the AI news log that we introduced here. We want to make sure that whatever news has been researched in the past 24 hours gets logged into the sheet. That way, the very next day, the news checking agent can check the log and make sure that the headlines are not repeated again, and the agent will always be checking against past news to make sure nothing is repeated. So in this case,
what we want to do is introduce a Google Sheets node. Click on Google Sheets. As you can see, there are a couple of actions that we can do with Google Sheets, and what we want to do here is choose append row in sheet. The reason we choose append row in sheet is because we want to add new information to an existing sheet. We've got the Google Sheets account connected already, as covered in the past section, and under resource we want to access the sheet within the document, and the operation we want is to append rows, or append new information. Now, under document, what we want to choose is the past AI news log. So just type in the title and you can choose that; the alternative is that you can link it by typing in, or copying and pasting in, the URL or the ID of the document. Once you've typed that in, you can then choose from the list of sheets, and again, we have only one sheet, so the sheet is sheet one. Once you've done that, you'll be able to see the columns that now exist within that particular sheet. Within the sheet that we've created right now, it's completely blank, because no headlines have been logged in yet. We have two columns, which are date and headlines. And if you go back to the workflow, there's the date column and there's a headlines column. So now what we want to do is populate the rows under these particular columns with specific data. In this case, we want to choose, under variables and context, the today variable, and drag and drop it under date. That way, it's always going to populate the date column with the value of today's date. And under headlines, what we want to do is choose, under the news checking agent, the output of the content. And as you can see, when you drag variables from other nodes, you don't necessarily always have to drag them from the immediately previous node; you can drag them from nodes earlier in the chain. So in this case, I'm going to drag the content variable from the news checking agent and just drop it into the headlines.
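Under the hood, append row in sheet behaves like a spreadsheet append operation: one new row dropped into the next empty slot. A small sketch of the row being built; the column order mirrors the date and headlines columns, and the dict shape is illustrative rather than n8n's actual wire format.

```python
from datetime import date

def build_append_row(headlines_content: str) -> dict:
    """One new row for the past AI news log: today's date plus the
    agent's summarized headlines, in the sheet's column order."""
    return {"values": [[date.today().isoformat(), headlines_content]]}

row = build_append_row("OpenAI ships a new model; Anthropic posts research.")
print(len(row["values"][0]))  # two cells: date, headlines
```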
Now, with that done, I'm going to try to execute the step to see what it does. And as you can see, it's telling you that it's added the date component, or rather today's date, into the date column, and the headlines into the headlines column. And of course, the way the logic works is it's going to pick the topmost unpopulated row within the sheet, so it's always going to pick the next available row as you go. So now that that's done, we've ensured that there is some form of loop in the way the workflow checks for new headlines. And now, with the workflow, we've created a form of a loop where the Perplexity agent every day goes out and searches for the latest information on AI development, and the news checking agent checks against past logs of the headlines to make sure that the information is not repeated before sending it into your Gmail. All
of that gets stored within a Google Sheet every day. And then the next day, the same operation happens again: the research agent goes out onto the internet and checks the most recent and relevant news for you, and we also have the agent checking that against the Google Sheet, together with the news that was logged in the previous days, before sending it out to your Gmail. That way, you can be sure that the headlines you receive every day are really the most relevant and the most recent news that you can get. There are many other options for tools that you can use along this workflow. Perplexity is just one of the options, even though in my opinion it's one of the most advanced out there when it comes to search. OpenAI can be replaced with any other LLM as well. And you can replace Gmail with a Telegram node, or a Slack node, or whatever platform you spend most of your time on. In any case, I hope this section provides a good understanding of how a general logging mechanism in a workflow can work. So that's the end of the section, and I'll see you
in the labs. In this section, we're going to have a little bit of fun. Since we now know how to create an AI agent in n8n and how to connect it to our Slack accounts, we're going to see if we can have the AI agent fully take over our profile and respond on our behalf, aka do our job for us. In this case, this is the same workflow that we built in the past section. The first thing we need to do to have the AI agent respond on our behalf is go to our Slack API and reconfigure some of the OAuth permissions.
And what we want to do here is simply go down to the scopes section. In the past section, we filled up all the bot token scopes as required, but in this case, we're going to scroll down to the user token scopes and fill them in with the required scopes. These are the scopes that are typically recommended in order for it to function well. As you can see, we have channels:history to read the channels, in case we want to respond and interact within a channel, and we also have channels:read, chat:write, and groups:history, as well as some im and mpim read and history permissions. This all depends on whether you want the bot to respond on your behalf in a channel, in a direct message, or in a group message.
So once you're done adding all the new OAuth scopes under the user token scopes, what you need to do is reinstall the app. There will be a notification popup that comes up here; I'm just going to show you right here. So if I were to add, let's say, an admin apps read scope, what will happen is it's going to ask me to reinstall my app over here. So just go ahead and do that, and you're good to go.
And just before we go, under the bot token scopes, you do need to add one more permission which is essential, which is chat:write.customize. This just allows the bot to send messages on your behalf. All right. So once you're done, you want to copy the user OAuth token and go back to your n8n workflow. And here you're going to create a new credential, which I've done here. Essentially, you just do the same thing: pick access token and paste the user OAuth token here.
I'm not going to click on save because I've already created one, but it's the same exact process as we've done in the previous section. Once you're done with that, you want to toggle over to the event subscriptions, and what you want to do is expand the subscribe to events on behalf of users section. In this case, also add these four event subscriptions, which are message.channels, message.groups, message.im, and message.mpim. This just covers most of the use cases that you might need. And if you want to know a little bit more about the permissions and how it all works, I encourage you to go into the Slack documentation and check it out yourself.
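Written out as data, the configuration above looks something like this. The identifiers follow Slack's documented scope and event names, but treat the exact set as an assumption tuned to this demo.

```python
# User-token scopes discussed above (read + write on your behalf)
USER_TOKEN_SCOPES = {
    "channels:history", "channels:read", "chat:write",
    "groups:history", "im:history", "mpim:history",
}
# Extra bot-token scope needed to post under your own name
BOT_TOKEN_SCOPES = {"chat:write.customize"}
# Event subscriptions on behalf of the user
USER_EVENTS = {"message.channels", "message.groups", "message.im", "message.mpim"}

def missing_scopes(granted: set, required: set) -> set:
    """Sanity check before reinstalling the app: what's still missing?"""
    return required - granted

print(missing_scopes({"chat:write", "channels:read"}, USER_TOKEN_SCOPES))
```

Remember that any change to these sets requires reinstalling the app before the new permissions take effect.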
But now I'm going to go back to our n8n workflow and reconfigure some of these nodes. In the first trigger node, previously we had the bot/app mention event. Now I want to add any event as one of the event triggers. The reason for that is I want to make sure that I'm capturing direct messages when my team members are chatting with me. I also toggled on watch whole workspace. The reason for that, again, is that I don't want to be watching just a single channel or a single chat. When I toggle this on, any message received will constitute a trigger event, which will then run the workflow. And scrolling down under options, what I want to do is also add the usernames or IDs to ignore.
In this case, I'm going to fetch my own ID to ignore my own messages, because right now, the way it works, it's going to trigger on any event, which means it would also treat any messages that I've sent to any channel as a trigger event. So, how do I get the user ID? Well, you can get your user ID from your Slack workspace by simply clicking on your profile and clicking on the three dots here, and you'll be able to copy the member ID. So that's the ID, and you can go back to your n8n workflow, to the trigger node, and paste the ID into the expression for the usernames or IDs to ignore.
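The ignore rule can be pictured as a one-line filter over incoming events. A sketch in plain Python, not the node's actual implementation; the member ID is a made-up example.

```python
def should_trigger(event: dict, ignored_user_ids: set) -> bool:
    """Skip message events authored by ignored IDs (here, my own),
    so the bot's replies as me don't re-trigger the workflow in a loop."""
    return event.get("user") not in ignored_user_ids

MY_ID = "U0123456789"  # hypothetical member ID copied from the Slack profile
print(should_trigger({"user": "U0999AAAA", "text": "hey"}, {MY_ID}))   # True
print(should_trigger({"user": MY_ID, "text": "my own reply"}, {MY_ID}))  # False
```

Without this filter, the workflow's own outgoing messages would count as new events and the bot would keep replying to itself.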
Cool. For this demo, I've created a fictitious workspace, and I've added one of my colleagues, Michael Forester, into the channel. Obviously, this is not the real Michael; this is just a profile that I've created. And again, as you can see, this is just a demo workspace, and I'm even on a free account; on the left-hand side, you can see that there are 23 days left on the trial. What I've done is create another account as a fake Michael Forester, and it says Michael F, which is this account. On the left-hand side, you can see this is my account. The reason why I'm setting up these two accounts is so that I can show you how it all works when my team member chats with me in a direct message. So now I'm going to try
to complete the reconfiguration of the entire workflow. What I'm going to do here is go into the agent node, and everything stays the same, with the exception that I've added a system message. Essentially, what I'm telling the agent is: you are Maronei, a team member at CodeCloud, and your job is to impersonate Maronei as well as you can and respond to his team members' messages on his behalf; sound friendly and natural, in a typical tech working environment. So as you can see, it's a pretty broad system message. You can obviously prompt this much, much better, but I'm going to just start with that.
But one more thing that I'm going to do is go to the chat model and toggle this to GPT-5, because it just came out. And GPT-5, in my experience, has been great at responding to complex queries and using tools when required.
So I'm going to leave it at that. The simple memory is still attached. And on the last node, which is the send message node, I made two changes. The first one is, as you can see, the channel ID: I made it into a dynamic ID. It's showing an error here right now because it doesn't have any data populated just yet, but essentially, I want to make sure that this is dynamic, because the workflow is now triggered by any event, watching the entire workspace. So it's not necessarily going to be restricted to one single channel; it could be a direct message, a group message, or a channel message. I just want to be able to dynamically reply to the specific channel where I'm interacting with people. And the second thing that I'm going to change is under options: I'm going to set send as user, and I'm going to put in my username here, which is Mark. And that's it.
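Put together, the send message node is effectively assembling arguments for Slack's chat.postMessage: the channel pulled dynamically from the triggering event, and the username override enabled by chat:write.customize. A sketch; field names follow Slack's API, but the event values are made up.

```python
def build_reply(event: dict, reply_text: str) -> dict:
    """chat.postMessage-style arguments: reply in whichever channel,
    DM, or group the triggering message came from, labeled as me."""
    return {
        "channel": event["channel"],  # dynamic: taken from the trigger event
        "text": reply_text,
        "username": "Mark",           # send-as-user display name
    }

incoming = {"channel": "D024BE91L", "user": "U0999AAAA", "text": "Hey Maronei"}
print(build_reply(incoming, "Hey, what's up?")["channel"])  # D024BE91L
```

Because the channel comes from the event rather than being fixed, the same workflow serves channels, group messages, and DMs without any per-channel configuration.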
Okay. So I'm going to hit execute workflow here. As you can see, it's waiting for a trigger event, so I'm just going to go back to our Slack app. In this case, I'm going to use the Michael Forester account to chat with me. So I'm going to say, "Hey, Maronei." All right. So let's go back to the workflow. As you can see, it's getting a trigger; it's hit the AI agent, and it's responding to my message. So let's see what kind of response I'm receiving back. It says, "Hey, what's up? Need an update on anything AWS-related, or is there something I can help unblock?" All right, sounds pretty helpful. And I think it threw this in because I told it to pretend to be somebody at CodeCloud, and CodeCloud is associated with cloud technologies, etc. So it's probably just throwing that in.
Okay. So I'm going to go back to the workflow and hit execute workflow again, so that it's now waiting for my trigger. And using Michael's profile, I'm going to type: how was your weekend? I'm just going to stay social and not talk about anything work-related. So there you go, and it's responding pretty well. Kept it low-key, got some rest, did a bit of planning for the week. It also says it sketched out a couple of tweaks for the AWS pipeline that it'll share after the stand-up. Cool. All right. It's adding its own flair, so not sure about that; I probably need to work on the prompt a little bit, but cool. So it's working quite well now, responding as myself, even though I don't normally sound like that, and I don't necessarily work intimately with all the cloud stuff. But I'm just going to go back to the workflow here. And
want to try to make the agent a bit more
useful and give it more context uh than
it currently is. And this is where I'm
going to try to introduce a pretty
simple rack system. And in this case,
I'm just going to add a tool and this is
just going to be a Google doc tool
because the reason why I'm doing that is
I realized that people are going to
start asking me about projects etc.
So what I did was create a document, a Google Doc, and essentially this is completely fictitious, but it outlines the project's progress in terms of some of the stuff that we're trying to do. Again, this is completely fake, but it's just a document so that we can see how it all works. All right, cool, I have this Google Doc drafted out; this is where I put all the fictitious updates. So I'm going to connect that here, and under the Google Docs node, what I want to do is a get operation, and I'm going to go back and just copy the link, which is the URL of the cloud project status doc, and paste that URL here.
All right. So I'm not going to do anything else with this node; it's just an attached tool. I'm going to go back to the AI agent, and under the system prompt, I'm just going to add a tool description: use the Google Doc tool when asked about project updates.
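The agent's decision of when to reach for the doc tool is a judgment call made by the LLM, but you can picture it as a router. A toy sketch in Python, where keyword matching is my own simplification of that judgment.

```python
def needs_project_doc(question: str) -> bool:
    """Stand-in for the agent's tool-use decision: the system prompt says
    to use the Google Doc tool when asked about project updates."""
    keywords = ("project", "status", "update", "progress", "eta")
    q = question.lower()
    return any(k in q for k in keywords)

print(needs_project_doc("How is your project going?"))  # True
print(needs_project_doc("How was your weekend?"))       # False
```

In the real workflow, the LLM makes this call from the tool description, which is why writing a clear, specific description matters so much.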
Cool. All right. So I'm going to just make that change, save it, click on execute step here, and go back to ask as Michael. I'm going to ask: how is your project going? As you can see, it is kicking it to the AI agent; it is thinking, going into the document, and checking the latest update on the project before kicking it back into Slack. So let's see what it responds with. There it goes. It says: quick status on your project; main objective, interactive lab design; done, 65% complete; ETA 15th October 2025. Now let's see if that's correct. The doc says that the lab is 65% complete and expected completion is 15th October 2025, so it's successfully fetching the relevant information into the message.
So it's giving me the right information, but as you can see, it's missing a little bit of a human touch. That's probably something I can work into the system prompt, for it to respond more like me and more humanlike. But really, this gives you a good understanding of how you could leverage AI agents, maybe not to impersonate yourself just yet, but in a lot of other use cases. As you can see, once you've created a RAG agent like this, it could function as your technical support or just an FAQ bot.
And this could work for HR, legal, marketing, whatever the case may be, or even customer support. But there you go. This section is really just to familiarize yourself with the Slack permission logic and how you can potentially leverage it to its full extent. Again, this is just for fun, and we're not recommending you replace your day job with an actual AI agent just yet.
See you in the next one. In this section, we're going to walk you through how to set up the Slack API and connect it to your n8n, and then we're also going to go through the various permission levels, as well as the logic of integrating Slack into your n8n workflow. If you're already familiar with how the Slack API works, feel free to skip this section and go to the next one. So, first, where you want to head is the Slack API page, which is api.slack.com.
And once you've signed in, you want to
go to your app section and hit create
new app. And what you're going to do is
come up with an app name. In this case,
I'm going to call it the end devilbot.
And pick your workspace and hit create
app. Now, what you want to do in your bot configuration page is first head over to permissions. So, there are a couple of things to keep in mind here in terms of the architecture of the bot. First, we need to configure the permission level that we want to grant the bot on Slack. In this case, we want to scroll down to the permission scopes and we're going to hit add an OAuth scope.
In this case, there are a couple of basic permissions that I want to start off with. The first is the obvious one, which is app_mentions:read. This is going to let the bot read any messages that you've sent to the bot directly by pinging it. The second one that we want to add is chat:write, which allows it to send messages as the n8n demo bot. And just to make sure that it can read our messages on a channel, we want to add channels:read and groups:read as well. Lastly, we want to make sure that it can actually see the names of the people sending the messages, so we're going to add users.profile:read as well as users:read.
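Collected in one place, the bot token scopes added above can be sketched as a small mapping. The scope names follow Slack's OAuth naming convention; verify them against your app's Permissions page, since this list is just a sketch of what the video sets up:

```python
# Slack bot-token OAuth scopes used by the demo bot, mapped to why each is needed.
# (Names follow Slack's scope convention; confirm on your app's Permissions page.)
BOT_SCOPES = {
    "app_mentions:read": "read messages that @-mention the bot",
    "chat:write": "send messages as the bot",
    "channels:read": "read public channel metadata",
    "groups:read": "read private channel metadata",
    "users:read": "look up basic user records",
    "users.profile:read": "resolve display names of message senders",
}

def missing_scopes(granted):
    """Return which required scopes a token grant is still missing."""
    return set(BOT_SCOPES) - set(granted)
```

A helper like `missing_scopes` is handy when debugging `missing_scope` errors from the Slack API: pass in the scopes your token actually has and see what is absent.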
So that should be enough for now. And so
we're going to go ahead and install the
bot to our Slack workspace, which is
called the Slack automation lab. And
we're going to click allow. And again,
you need to be an administrator of your
Slack account to be able to do this. As you can see now, we've got a Bot User OAuth Token generated. So we're going to hit copy on this, because we're going to use this to create a Slack credential back in our n8n workflow. So going back to our n8n canvas, the first trigger node that we want to introduce here is the Slack trigger node. And I want to make sure that we pick from the selection of the trigger nodes and not the action nodes.
In this case, we're going to hit the on bot app mention event. And I already have a Slack credential set up here, but I'm just going to recreate another one just so that I can show you how it's done. So, we're going to name this Slack n8n demo CodeCloud. And I'm going to
paste the key here and hit on save. As
you can see, the green bar shows up.
Connection tested successfully. So, I'm
going to go back to the API page here.
And right now I'm going to go to the event subscriptions, because I want to make sure that it's listening to the events that I've specified. I'm going to toggle this to on. And the request URL is a URL that I'm going to grab from my n8n workflow. So I'm going to hit the webhook URLs here. As you can see, there
are two URLs provided. The first one is
the test URL and then we've got the
production URL. The production URL is
the URL that you're going to use when
you toggle into production. But right
now, because we're doing it on staging,
we're just going to take the test URL,
copy it to our clipboard, and go back to
the API page and paste the URL. And as
you can see, when you do that, it's going to say that your URL didn't respond with the value of the challenge parameter. This is because you need to go back to your Slack trigger and execute the step, and your workflow will be ready to receive the trigger. That's when it will send back the challenge parameter to the Slack API. Now, in this
case, before we can hit execute step, we
need to populate the field, which is the
channel to watch. And there are a couple
ways to pick your channels. As you can see, you can either get it from the list, by ID, or by URL. So, we're going to pick from the list and click on the all Slack automation lab channel. So, this is a general channel that
anybody can join. So, I'm going to hit
execute step. I'm going to go back to
the API page and hit retry. And this
time, it's going to say verified. And just before we go, I want to make sure that we subscribe to the right event. So, under subscribe to bot events, I want to add a bot user event. In this case, I want to add app_mention, which just makes sure that it's listening every time the bot has been mentioned. So, I'm going to hit save changes, and we should be ready to go. All right. And then, going back to our n8n page, as you can see, it's
listening for an event. And we don't
have an event right now because
nothing's been typed into our Slack
channel. And here, for it to listen to
the channel, we need to add the app into
the channel. And the way to do that is
to go up to the member section here.
Click on that. And where you want to go is to toggle over to the integrations tab and select the app that you want to add. So there you go. The n8n demo bot is now added to the channel.
Now we're going to just do a trial run
by hitting execute step.
And that's going to wait for any
triggers while listening to the channel.
And going back to the channel, what I'm going to do is ping the n8n demo bot and simply say hello.
So I'm going to go back to my n8n. As you can see, it's now populated the Slack trigger node. Now for Slack, there's quite an extensive amount of output payload and information that gets passed on to the trigger. Of course, the most important bit is the text itself, or the content of the text, which is hello.
But as you can see, you're also being
passed other types of data like the user
ID as well as the channel ID and the
event timestamp. So all of those are
going to be useful depending on the use
case of your workflow.
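As a rough sketch of those fields, here is how you might pull the useful bits out of a raw Slack app_mention event. The nesting mirrors Slack's Events API payload, where the interesting data sits under "event"; the exact shape of the n8n trigger output may differ, so treat the field paths as an assumption to verify against your own execution data:

```python
# Hedged sketch: extracting the useful fields from a Slack app_mention event.
# Field nesting follows Slack's raw Events API payload; n8n may flatten it.
sample_event = {
    "event": {
        "type": "app_mention",
        "user": "U12345",                 # who sent the message
        "channel": "C099AJ1KP46",         # channel ID, as shown in channel details
        "text": "<@UBOT> hello",          # the message content
        "event_ts": "1726000000.000100",  # event timestamp
    }
}

def extract_fields(payload):
    ev = payload["event"]
    return {k: ev[k] for k in ("user", "channel", "text", "event_ts")}
```

Pulling these four fields out is typically all a downstream AI agent or reply node needs: who wrote the message, where, what it said, and when.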
Now once you've set it up, the next node, just to play around with this, is an AI agent node. We're going to pick define below, and we're going to drag the text content from Slack, and we're going to add a chat model. In this case, I'm just going to pick GPT-4o mini.
I'm also going to add some memory because I want to make sure that it's going to be able to recall contextual knowledge or information from previous conversations.
So, in this case, the session ID that I
want to pass to it is a channel ID.
I'm not going to attach any tool for now
because essentially what I want to
happen is just for the AI agent to be
able to chat with me on Slack. So I'm going to add the third node, which is the Slack node, and this time I'm going to allow it to send a message back. So pick the right Slack credential, in this case the Slack n8n demo CodeCloud, set the resource as message and the operation as send. And
we're going to choose send message to
channel and again we have to provide the
channel ID. Of course, you can pick from
the list, but in this case, I'm going to
pick from the ID. And how do we get this ID? In this case, we want to go back to the Slack channel. As you can see, if I click on the title of the channel, or the name of the channel, and scroll down here, you're going to see the channel ID, C099AJ1KP46. So, I'm going to copy that, go back to my n8n workflow, and paste that in. So
I'm just trying to show you that there
are two ways to do this which is one by
the list and the second one by the ID.
Actually there is a third one which is
by URL
but typically selecting by ID will work
for all cases.
Okay. So I don't have the parameters yet
to drag into the message text. So I'm
going to execute previous nodes.
And there you go. There's an output
which is hello. How can I assist you
today? I'm going to drag that, put it
into the message text,
and I'm just going to hit execute step.
All right, let's go back to our Slack
channel. And as you can see, it's
replying to me on the channel saying,
"Hello, how can I assist you today?" And it also says automated with this workflow. So, we just want to click on add option, click on include link to workflow, and toggle that off. So, I hit execute step again. And when you go back to the Slack channel, it says, "Hello, how can I assist you today?" without the automated-with-this-workflow link appended to it.
Cool. So that's a simple way to connect your n8n workflow to Slack. And of course, there's a lot more cool stuff that you can do on Slack, such as configuring the Slack node further and also being able to reply as a user. I'll see
you in the next section. So one of the most useful features of n8n is the ability to tap into third-party tools to be used in your workflows. And these could include tools that are not natively available in n8n. The ways you would do this are through API calls, MCPs, or webhook calls. So in this section, we're going to run through the foundational workings of a typical HTTP request node, as well as how to make API calls in three different scenarios. So
the first scenario, we'll do a simple
API call to a public API endpoint that
doesn't need any authentication in order
to get some fun little facts about cats.
And in the second scenario, we're going to make an API call to openweathermap.org to get the latest information about the weather in a specific city that you requested. And finally, we'll make an
API call to a web scraper called
Firecrawl that could intelligently
scrape any web pages that you like. So,
we're going to go ahead and build the
workflow here. And the first trigger node that we want to choose in this case is the manual trigger node. The reason why we're choosing this is because it's the simplest one. And in this case, because there's no input that we want to transfer into the HTTP node at this point, we're going to start with this.
So, we're going to hit execute flow to
see what sort of data that populates it.
So, we're going to open it up. And as
you can see, it's an empty payload in
the output data. And this is good enough
because the next node is just the HTTP
request node which does not need any
input. So in this case I'm going to hit
the HTTP request node. There are a
couple options that you can choose from
in terms of what type of HTTP request
that you're trying to make. The most
popular ones obviously the get and the
post. In this case we're going to stay
with get. And the public API URL endpoint that we want to hit is catfact.ninja/fact.
So there's no authentication required
and we're just going to hit execute
step. And as you can see, it returns a
data output of a random fact about cats.
So the random fact that we fetched is: two members of the cat family are distinct from all others, the clouded leopard and the cheetah. The clouded leopard does not roar like other big cats, nor does it groom or rest like small cats. Okay, so that's very interesting. So now we'd like to do something a little bit more useful by calling an API on the weather app. So in this case, we're going to
delete the HTTP node and reintroduce a
HTTP request node again. In this case,
we're going to choose post. And the
public endpoint that we want to hit in
this case can be found at
openweathermap.org/current.
When you scroll down to the built-in API request by city name, you'll be able to copy and paste the API endpoint
straight into your workflow. So going
back to the workflow, we're going to
paste this. And before I do anything
else, I'm going to blow this up. And as
you can see, there are two variables
here. The first one being the city name
and the second one being the API key. So
for the city name, we want to pick a
city that we're interested in. In this
case, London. And for the API key, we
need to key in a valid API key, which
can be obtained from the website. So
going back to open weathermap.org,
the first thing you need to do is to sign up, and it's free. And once
you've signed up and gone through a
little fun onboarding questionnaire, you
can head to my API keys and you'll be
able to view all the API keys that you
have with Open Weather. And you could
also generate a new API key by clicking
on this button over here. But in this
case, we already have one set up. So
we're just going to copy that and paste
it back in the workflow. Cool. Now that we've filled that in, we're going to execute the step, and we're going to toggle this into the schema format. As you can see, we get the temperature, the maximum and minimum temperatures, as well as the cloud coverage situation. Now, as you might
have noticed, I've actually input the
API key in the API endpoint itself. But a better practice within n8n is to set up an authentication that's going to persist even after you're done with a node, which means you can use the same credentials or API keys in a separate node if you happen to need to work with this particular app. So, I'm going to
show you how that's done. In this case, I'm going to hit authentication, select generic credential type, and choose Header Auth. And why did I choose that? The reason why I'm choosing Header Auth is because the API key itself is contained within the header section. Now, we want to set the new credential. So, we're going to name this credential open weather map demo. Okay. And under name, I'm going to put X-API-Key.
And under value, I'm going to paste the
same API key. I'm going to hit on save.
And there you go. My credential is saved
under the name open weather map demo.
Now after setting it up as a Header Auth credential,
what I can do now is actually remove the
API key section from the API endpoint.
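What the Header Auth credential effectively does can be sketched as follows: the key moves out of the URL query string and into a request header, so any node using the credential gets it attached automatically. This is a minimal sketch mirroring the video; the `X-API-Key` header name is what the instructor types, so check your provider's docs for the name it actually expects:

```python
from urllib.parse import urlencode

def build_request(city, api_key):
    """Build the request the way Header Auth does: key in a header, not the URL."""
    url = "https://api.openweathermap.org/data/2.5/weather?" + urlencode({"q": city})
    headers = {"X-API-Key": api_key}  # injected by n8n's Header Auth credential
    return url, headers
```

The payoff is reuse: the URL stays clean, and the same header pair can be attached to every request that talks to this API.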
Now let's run that. As you can see, it's giving me back the same information. The only difference is that my API key now lies within the preset header authentication section here. And if I were to create another node that works with openweathermap.org, I can simply reuse the same Header Auth credential wherever I go. So this is just a very easy way to pass credentials through your workflow. Great. So now
we're done with the weather app. What we
want to do is to try a different scenario where we're actually calling the API endpoint of a web scraper. So let's delete this node. And what we want to do is again to reintroduce the HTTP request node. And this time we're going to make a post request to a web scraper endpoint called Firecrawl. So we're going to rename this firecrawl scrape. So when you head to firecrawl.dev, this may look
different depending on whether you've
signed up or not. I'm already signed in,
so this page will look slightly
different for you, but essentially to
sign up, you just have to go through a
fun little onboarding questionnaire and
pretty much it's straightforward for you
to sign in. And once you're in, you want
to head over to the dashboard and even
on a free plan, you'll be given an API
key to start scraping with. So where I
want to go from here is to the docs, because this is where all the API documentation resides, and then to the API reference. In this case, I
want to scrape. So this is the post
request for scrape. I'm going to hit on
try it and copy the curl command up
here. And before copying the curl
command up here, what I'll do is I'm
going to put in the URL of the page that
I want to scrape. So in this case, the
page that I want to scrape is actually
TechCrunch, specifically the Gen AI
section of the news. So, I'm going to
copy that and go back and just paste the
URL here. As you can see, this is going to change the URL in the JSON body to the URL that I intended here. Of course, you can do this manually within the workflow itself as you import the curl command, but I thought it was easier to just do it beforehand. And we can just copy
the whole curl chunk and go back to the
workflow here and click on import and
paste the entire curl command. As you can see, there's a post method to this particular URL, and every one of these fields is configured the way we need it to be. And under authentication, we're going to set this up again the same way we've set it up for openweathermap.org. So, we're going to hit generic credential type and then we're going to pick Header Auth, because that's where the authentication credentials lie for this particular API call. And
then we're going to create a new
credential. So, we're going to call this
credential fire crawl codecloud demo.
And just before I go any further here, I want to go back to the documentation and look at the header within the curl command. In this case, what we're going to pay attention to is the name of the authentication here, which is authorization. And we're going to type in authorization. And under value, we see that it's got Bearer, a space, and the API token.
So, what we want to do is go back. I'm going to toggle this to expression so you can see what I'm typing in, which is Bearer followed by the API token, which you're going to get by going to the dashboard, copying the API key there, and pasting it into the field. So there you go. Hit save, and that credential is going to be saved.
Okay. So going back, you notice that the send headers option is toggled on, and it actually has the same thing, which is authorization, which we don't need anymore because we already have it all set up here. So we're just going to toggle that off and then focus on the body right here. So, we've
already set the body up earlier before
copying it over. So, there's nothing
much we need to do here. So, we're just
going to hit execute step. There you go.
So, there are a couple of output fields here.
As you can see, the main content is
really under the markdown. And I'm going
to toggle this to a table format so it's
easier to see, but it basically feeds
you the headlines of the news of the day
that has to do with Gen AI. So, as you
can see, Tech Crunch covers the latest
news, da da da, headlines only, learn
more, and here are the links to those
headlines. And yeah, so this is a simple
scrape. So it's going to scrape all the
information within that particular web
page. But of course, you can format this
more within firecrawl itself. But in
this section, I just want to show how
easy it is to set up an API call to a web scraper or any other type of app that might be of interest or useful to your workflow. So, building on
to that, I wanted to show what happens
when you have a post request that
requires a get for you to obtain the
result of your post request in the first
place. So, we're going to stick with
Firecrawl. In this case, under Firecrawl, there are a couple of features that we can use. So, one of them is extract. So, how's extract different from scrape? Well, extract is an AI-powered way for Firecrawl to extract structured data from either a single page, multiple pages, or even an entire website and any of the subdomains or subpages associated with those websites. So, it's
an extremely useful tool for us to use.
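The request body for an extract call can be sketched as a small JSON object: a list of URLs (wildcards like `/*` cover subpages) plus a natural-language prompt describing what to pull out. Field names are modeled on the docs example imported later in this section and may differ between Firecrawl API versions, so treat them as an assumption:

```python
# Hedged sketch of an /extract request body, modeled on the docs example.
# Field names may differ between Firecrawl API versions; check the API reference.
extract_body = {
    "urls": ["https://firecrawl.dev/*"],  # pages/subpages to extract from
    "prompt": (
        "Extract the company mission, whether it supports SSO, whether it is "
        "open source, and whether it is in Y Combinator from the page."
    ),
}
```

The prompt is the interesting part: instead of getting raw page markdown back, you describe the structured facts you want and the service does the reading for you.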
So, what we want to do here is head over
back to the docs. Where you want to go
is go to the documentation tab and
scroll down to extract. There's a pretty
good curl example that we can make use
of here. So I'm going to copy the entire
curl command here and just go back to
the workflow and hit import curl. You can see all the fields here that need to be configured are all added. And what
I want to do first here is to select the
same authentication that I've set up
previously, under generic credential type, Header Auth, and firecrawl codecloud demo. So as you can see, since we have already set up the Header Auth here, we can toggle this off and move on to the JSON body. Now, the reason why I picked the example is because it's already been pre-filled, so it's easy for us to run the example and you don't have to fill in every single section or variable that's required.
What's happening is that the prompt given is to extract the company mission, whether it supports SSO, whether it is open source, and whether it is in Y Combinator, from the page. So that's the
prompt for extract. So it's going to try
to get relevant information surrounding
that prompt. So let's go back. We're
going to stick with that and we're going
to hit execute step. So when we do that, as you can see, the node says it executed successfully. What that means is that it pushed the post request through to the API endpoint successfully, and it's currently processing the post request.
Now because it's extracting for a couple
different pages and specific
information, it's going to take some
time. So what we want to do now is wait,
right? So how do we wait within the workflow? Well, you can actually introduce what we call the wait node. With the wait node, you can configure the amount of time that you want to wait. In this case, we're going to set 30 seconds. Of course, the wait unit is in seconds, and you can change that to minutes, hours, or days depending on your needs, but we're going to wait 30 seconds here. I'm going to rename this accordingly.
Okay. So I'm going to hit execute step
as well. So the data is populated here.
But what I want to do with the firecrawl extract post is pin the data, because every time I run this particular node, it's going to consume my API tokens, and I want to make sure that the pre-populated data stays here for every execution run. So that's going to be super helpful for me. As we wait for the wait node to complete, what
we want to do here is to introduce the
HTTP get request. So going back into the Firecrawl documentation, what you want to do is scroll down to that example, and here's the curl get command that you can copy. Go back to your workflow and hit import curl. Basically, all the required fields have been set up. So if
you look at the documentation, you'll
see that the extract ID comes from the
slash after the endpoint. So what we
want to do is introduce the same here: use a slash, toggle that to expression, and drag the request ID, the post request ID, into the field here. And of course, this is the same post request ID that the previous node was passing into the wait node, and hence we're using it in the
field here. So essentially what we're
telling it to do is to get the result of
this particular post request ID and
return those results here. So I'm going
to rename this firecrawl get request.
All right, things are pretty much set up
here but we have to again select the
same authentication method in this case
the firecrawl codecloud demo and then
we're going to hit execute step. So
there you go. As you can see, move this
into a schema format and it's extracting
the information that we want, which is a
company mission. At the same time, it
also tells us the status which is
completed. And this is important because, even though we put in 30 seconds here, it's not necessarily going to be completed within 30 seconds. So, let's say after the post request, we wait 30 seconds and then the get request launches and the status is not yet completed. What will happen then? Well, the workflow will break down. There will be an error and the workflow will stop working. So to avoid that scenario, what
we want to do is to create what we call a loop, and in this case we're going to add an If node so that we can continue that logic. And what we'll do here is, under firecrawl get request, we want to take the status and drag it into this field here. And this is going to be a string comparison; as you can see, completed is a string. If the string is equal to completed, as you can see when I test-run that, this is true: it falls into the true branch, which is populated, while the false branch is not populated. It's true because the JSON status here is completed, as per the
requirement. So if that's true, it's going to continue and maybe send that completed extracted data on to something else, such as, let's say, a Gmail send message node. We can populate the content, which in this case is the company mission, and for the message I'll just say company mission. For the recipient, I'm just going to pick maronecloud.com, and it doesn't matter because this is just an email, or any other output node that you want to set. However, if the
status is false, what it means is that
after waiting for 30 seconds, when we
try to get the result of the post
request, it's still not ready. So, what
happens is under the false branch, we're
going to wait another 30 seconds because
we want to wait until it is ready. Okay?
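The wait-and-retry logic being built here can be sketched as plain Python. `poll_status` is a stand-in for the firecrawl get request node, and `wait` for the 30-second Wait node; the real workflow loops the If node's false branch back through the Wait node in exactly this shape:

```python
# Sketch of the post -> wait -> get -> If loop, with stand-ins for the nodes.
def run_until_complete(poll_status, wait, max_polls=20):
    for _ in range(max_polls):
        status = poll_status()        # the "firecrawl get request" node
        if status == "completed":     # If node: true branch -> send the email
            return status
        wait(30)                      # false branch: Wait 30 s, then retry
    raise TimeoutError("extract did not complete in time")

# Simulated API that reports "in progress" twice, then "completed":
statuses = iter(["in progress", "in progress", "completed"])
waits = []
result = run_until_complete(lambda: next(statuses), wait=waits.append)
# result == "completed" after two 30-second waits
```

The `max_polls` cap is an addition worth copying into real workflows too: without some upper bound, a request that never completes would loop forever.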
And we're going to wait another 30 seconds and loop this back into the firecrawl get request. As you can see, the yellow highlight comes up because there are some changes that we made to the node, which is connecting it with another potential previous node. This loop will avoid the
breakdown of the workflow in the case where, after waiting for 30 seconds, the request is not ready. The If node checks for that: if the request is not ready, the status is going to say in progress, and therefore it's not completed, it's false, and it'll wait another 30 seconds and try to get the result again. And if it's still not ready by then, it's still going to go into the false branch, wait another 30 seconds, and try to get it again. So it's going to stay in this loop until it is ready, the status is completed and becomes true, and then it sends out the email. So
this is how we can use a loop to avoid a breakdown of the workflow when it comes to an API request that requires time to process and requires a second step of getting the result from the app. So
we're just going to run this again just to make sure it's populated. But essentially that covers the principal workings of how we would go about building a workflow with the HTTP request node to access API endpoints of other apps that are not necessarily native to n8n. In this section, we're going to
build from scratch a text to image
workflow. This is how the workflow is going to look in the end. The goal
of this workflow is for you to begin
with a text input specifying how you
would like the image to be created in a
simple format and that will be passed on
to an image prompt agent, which will then create an effective prompt to be passed to an image generation model on a platform called WaveSpeed AI. Now, WaveSpeed AI is just one of the many platforms that host video or image generation models like Veo 3. A
little caveat, it is a paid platform and
to be able to use the API calls, you'll
need to top up a minimum of $10. Now,
we'll walk you through how to set up a post API call to WaveSpeed, how to set up a get API call to get the resulting image from WaveSpeed, and then how to set up an if loop just to make sure that the workflow doesn't break down when the image or video isn't ready yet.
And finally, we're going to pass on the
result or the output of the image into a
simple Gmail. And the output node can be
replaced by any form of outputs such as
Telegram, WhatsApp or Slack. Now,
starting from scratch, the first trigger
that you want to hit in this case is the
chat trigger node. And again, we're
choosing the chat trigger node because
we want to be able to tell the workflow
a certain specification of the image
that you want to be generated. And this
doesn't have to be super detailed
because again, you will have an AI image
prom agent that's going to help you
craft an effective prompt. But in this
case, I'm going to just populate the
chat note data with a simple create an
image of a cat flying through oops of
rainbows. All right. And that data is
now populated in the payload. As you can
see, the chat input is create an image of a cat flying through hoops of rainbows.
Cool. All right. And the second node
we're going to introduce here is the
OpenAI node, which is just a message
model node. In this case, I'm going to
rename this image prompt generation AI.
All right. So, the OpenAI account has
been connected as per as how we've gone
through in the previous section. And
we're going to choose a model here. And
we're going to pick GPT4.1. And of
course, we're going to drag the chat
input into here as the user input. And
what I'm going to do here is I'm going
to add a system message just to specify
exactly what I want this node to do. And
here's the system message that I've
prewritten. I'm going to toggle this to
expression so I could blow it up. As you
can see, I did not come up with this system prompt. What I did was I went to ChatGPT and essentially asked it to act as a prompt engineer and come up with an effective system prompt for a text-to-image generation prompt engineer. I recommend that you do that for all production cases, because LLMs are just much better at coming up with system prompts than we are. You could do it yourself, but I would say that starting off with an LLM and then iterating on and improving the version afterwards is a much easier way to go about it. So, as you can see, it
says you're an expert text-to-image generation prompt engineer working inside an n8n automation. Your only task is
to generate clear, vivid, and effective
prompts to be passed to an image
generation API. I'm not going to read
the whole thing, but essentially it's
just giving it details on how we want
the output of the video or rather the
image prompt generation to be. Cool. All
right. So, I'm going to hit execute step
here and see what kind of output it
gives me. So, as you can see, the content that has been output is an image
prompt generation, which is a playful
cat soaring gracefully through vibrant
glowing hoops made of rainbows suspended
in the sky surrounded by fluffy clouds,
dynamic motion, bright and whimsical
lighting, magical atmosphere, highly detailed, digital art, fantasy illustration style. So great. So it's
describing the image the background as
well as the style that we want it to be
in. Cool. All right. So after the image generation prompt, we want the output to go to an HTTP request node to the WaveSpeed API. So I'm going to rename this WaveSpeed post, because this is the post API that we're going to call to WaveSpeed. So going back to WaveSpeed again, if you guys
back to waist speed again if you guys
are new to this platform what you need
to do is sign up it's pretty easy and
straightforward there's going to be some
onboarding questionnaires and stuff like
that but it's pretty easy
straightforward to sign up and I'm
already signed in here where I will go
is I will go to the explore models here
and as you can see there are many models
that you can try and play around with
but in this case what we want to do is
we're going to checkbox the text to
image so that we are only given the text
to image models here and we're just
going to take the first one which is
Cream by my dance and we're going to
toggle over to the API documentation
here. As you can see there's there's a
post curl command that I can easily copy
and import to my workflow. So going back
to the workflow I'm going to click on
import curl. I'm going to paste the
entire curl command here. And there you
go. The fields are populated. The next thing I want to set here is the authentication method, which, as we've gone through, we're going to choose the generic credential type Header Auth. And I already have a WaveSpeed credential set up here. But what I'm going to do is create a new credential so that it's clear how to set this up. So I'm going to call this WaveSpeed credential demo. And under the name, just to be sure, I'm going back to the API documentation here. And under name, the name should be authorization. And
the value is bearer API key. And what I
want to do is I'm going to head over to
the API key section. And as you can see,
I already have an API key generated. But
what you can do is you can actually
create a new key and hit that create key
and it'll just give you a new API key
for you to copy from. But in this case,
I'm going to use the existing one. Going
to copy that. I'm going to go back and
under value, I'm going to toggle this to expression so you can see it. I'm going to type Bearer, space, the API key, and I'm going to hit save. Cool. So now that's set up. Under the header section, I can toggle this off because the authentication is already contained within the Header Auth credential. So I can just remove that and
focus on the body section. Now there are
a couple of fields here under the body section, and most of them are just about how you want the image to be: the ratio as well as the size. But what we want to
pay attention to is really in this case
the prompt because we want to make sure
this prompt comes from the content
output of the image prompt generation
AI. And we're going to drag this here.
And there you go. The rest of the stuff
I'm going to just leave as is. But of
course as you were generating it you can
specify it to your requirements. But in
this case, I'm going to hit execute
step. And there we go. We got it out. So the next node I'm going to introduce here is just a simple wait node. Now, this is image generation, which means it's going to be pretty quick. As you've seen, the status here is already created. But in this case, I'm just going to wait 15 seconds for it to complete. And I'm going to hit execute step just to populate this node. And just before I go
any further, I want to go back here and
pin this data so that I don't have to
rerun that every single run. It's going
to consume my API token. So I want to be
careful about that. And while that's
spinning up, what I want to do is start
setting up the get HTTP request, which
is the same node here. But in this case,
I want to go back to waist speeds API
documentation and hit the curl command
for get result and go back to the
workflow import curl. Import the entire
thing. And again, all the fields are now
configured correctly. With the get API,
we want to make sure that we're pasting
in the right request ID. As you can see,
there's a variable here called the
request ID. And we just want to first of
all toggle this to expression and then
remove this with our request ID from the
wheat node. And we're just going to drag
that right between the slashes here. And
what we're doing here basically is we're
calling the post API and way speed is
receiving the image request and it's
processing the image. We waited 15
seconds and now we're going to get the
image. But we got to tell way speed
which image that we're trying to fetch
and we're doing that by dragging the ID
into the API endpoint. Here under
authentication we're going to choose
generic credential type header O and the
same credential here and of course
similarly we can toggle off the headers
because we already have it within our
credential. Rename this to waste feed
get and we're going to hit execute step.
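For reference, the "drag it between the slashes" step just builds a URL with the request ID in the path. A sketch of that string assembly, with a made-up path shape (check WaveSpeed's docs for the real endpoint):

```python
# Sketch only: the path segments around the ID are placeholders,
# not WaveSpeed's actual endpoint layout.
BASE = "https://api.wavespeed.ai"

def result_url(request_id: str) -> str:
    # The n8n expression drops the ID between the two slashes,
    # e.g. .../predictions/{{ $json.id }}/result
    return f"{BASE}/predictions/{request_id}/result"

print(result_url("abc123"))  # https://api.wavespeed.ai/predictions/abc123/result
```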
So we're seeing a couple of things in the payload here, but essentially the URL of the image is the output link here. All right, so I'm just going to copy that and paste it to see what the image looks like. So this is what the image looks like: it's a cat jumping through hoops of rainbows. That looks pretty cool. All right, now that that's done: again, what happened here is that the get API was called after the status was "created", so the result exists and is valid. But what about the case when WaveSpeed is taking its time to create an image, or when it's a video generation model, which will probably take longer than just 15 seconds? In the scenario where the get API call is made when the image is not ready, it's not going to give you any result; in fact, it's going to error out. In that case, the workflow is going to stop without rerunning, because essentially it just errors out on this node. So what we want to do to avoid that scenario is create an if loop. In this case, we want to drag the status string from the WaveSpeed get HTTP request and essentially say: if it's equal to "completed", then it is true. I'm going to hit execute step here, and it goes into the true branch because the status is "completed". So the if node acts like a logic node that can filter the output based on the logic that you've set. In this case, the logic I've set is for the status to be "completed", and since that's true, it falls under the true branch and can continue. In this case, I'm going to set up the Gmail node: send a message. I've already set up my Gmail account.
Essentially I'm going to send this to
myself. The subject is image generated.
And I'm going to drag the now variable
just so that we can tell by the title
when this is created. And email type can
be text because I just want the link in
the email. Nothing fancy. And the
message we're going to drag the output
here so that the link to the image is
sent to me via email. All right. So I'm
going to hit execute step. As you can
see, the email has been sent. And let's
see what the email looks like. So the
email looks like this. Nothing fancy.
It's just passing on the link to my email inbox. I'm
going to hit that and open it up. And
here we go. That's the image. So going
back to the workflow or to the if loop.
So this is what happens if the status is
completed. The image is ready and it
gets passed on to the Gmail node. However, if the image is not ready, it's going to fall into the false branch, because the status would now say that it's not completed; it's probably going to say that it's processing. In this case, we want to wait for another 15 seconds, or any other arbitrary wait amount, and we're just going to name that "wait another 15 seconds". Okay, and I'm going to hit execute step. In this case, nothing is going to happen, because it's under the false branch. And after we wait another 15 seconds, what we want to do is loop it back to the get node, the HTTP request node here. If the image is not ready, the status is going to say incomplete or in progress, that's going to fall into the false branch, it's going to wait 15 seconds, and it's going to try to get the image again. If it's still not ready, it's still going to fall into the false branch, wait another 15 seconds, and try again. It's going to stay in this loop until the image is ready, and then it's going to fall into the true branch and kick the result out to the Gmail node. Cool. I hope that's a clear explanation of how to build this workflow from scratch. In the next section, we're going to build a text-to-video workflow, which is somewhat similar to what we've built here, with the exception that it's going to generate a video for us. I'll see you there. In this section, we're going to
build a text to video workflow, which is
similar to the previous section, which
was a text to image workflow with some
minor but important distinctions. Now, just like the previous workflow, we're going to start off with a chat trigger so that you can pass in the specification of the video that you want to create, and that's going to be passed on to a video prompt agent instead of an image prompt agent. This time we're going to use a text-to-video model, Veo 3. In your labs, you're going to already have these two nodes set up. And again, the only difference in this case from the previous workflow is the system prompt that we've pre-populated in the video prompt agent, so you don't have to. If you toggle the expression, and I'll blow it up so you can see it, it now says that you're an expert text-to-video generation prompt engineer working inside an automation. And again, you can generate this yourself through GPT or any LLM of your choice, asking it to act as a prompt engineer. I'm not going to read the whole thing, but it is pretty similar to the text-to-image generation prompt engineer from before.
Now, continuing to build the workflow, the next node that you want to add here in your labs will be the HTTP request node again. And this time, we're going to WaveSpeed to explore models. We're going to type in Veo 3, and we're going to filter to the text-to-video category only. As you can see, there are two models here: Veo 3 Fast and regular Veo 3. In this case, we're going to pick Veo 3 Fast.
>> Google Veo 3 Fast is now on WaveSpeed AI. It'll blow your mind. Go try it now.
>> So, just as a warning for Veo 3: it is pretty expensive to run. As you can see, the request itself will cost me $3.20 per run. But as you can see from the sample video, it is pretty high quality. So, let's try it out.
So, toggle to the API documentation section. What you want to do here, very similar to what we've done in the past section, is copy the entire curl command, go back to the workflow, and import the entire curl command here. Again, this is a POST method, and we're going to name the node WaveSpeed post. Authentication: we set it up earlier in the previous section, so we're going to just select the same generic Header Auth with the WaveSpeed credential demo. And we're going to toggle off the send headers and focus on the send body section. Just before I completely set this up, I'm going to populate the previous nodes by saying: create a video of five gorillas on a boat having a great fishing trip.
Okay, so I'm going to type that into the chat node, and it's going to pass it on to the video prompt agent. Let's check out the output from the video prompt agent. It says: a lively cinematic scene of five gorillas on a wooden fishing boat in the middle of a sunlit lake, laughing and cheering as they reel in big, thrashing fish, splashes of water in golden sunlight. All right, looking pretty good. So let's try that out. We're going to open up the HTTP request node here, going back to the JSON body section of the WaveSpeed post node.
What we want to take a look at here, as you can see, is that there are a couple of parameters that were not mapped correctly. This sometimes happens when the curl command either wasn't formatted correctly or wasn't parsed in the correct way. So, in this case, what I want to do is just use the raw JSON body option. I'm going to copy the entire JSON body from the curl command, go back to the workflow, and paste the entire body. From here, essentially, what I can do is replace this with the string content from the video prompt agent. Right? So I'm just going to delete that, and I'm going to drag this in again. I need to toggle this to expression; let's blow that up and drag this content over in between the prompt quotes. You can take a look at the right-hand side for the result, which is, yes, the prompt is now: a lively cinematic scene of five gorillas on a wooden fishing boat in the middle of a sunlit lake. Cool. Aspect ratio, duration, and whether or not to generate audio: these are formats and configurations that you can decide based on your requirements. But this is good to go for now.
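Putting that together, the raw JSON body ends up looking roughly like this. The field names below are assumptions for illustration; copy the exact ones from WaveSpeed's curl example:

```python
import json

# Illustrative Veo 3 request body; field names are assumed,
# the real ones come from WaveSpeed's curl example.
body = {
    "prompt": "A lively cinematic scene of five gorillas on a wooden "
              "fishing boat in the middle of a sunlit lake.",
    "aspect_ratio": "16:9",   # the transcript mentions aspect ratio...
    "duration": 8,            # ...duration...
    "generate_audio": True,   # ...and whether or not to generate audio
}
print(json.dumps(body, indent=2))
```

In n8n, the `prompt` value is what gets replaced by the expression dragged from the video prompt agent's content output.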
So I'm going to hit execute step here.
And as you can see,
the post request is successful.
So, as with the image generation workflow, what we want to do here is wait 15 seconds. We might need a little longer than that, because it's a video and it's high quality as well, so it might take longer than 15 seconds, but we're just going to put 15 seconds for now. And we're going to hit execute step to populate that. Of course, what I want to do is pin this data here. Again, it is quite costly for me to run the API call, so I want to make sure that I don't have to repopulate this data again and again.
So the wait 15 seconds is done, and the data has been passed through. What we want to do now is introduce the HTTP request get node. Again, going back to the API documentation here, I'm just going to copy this, then import curl, the entire thing, and it's going to configure all the fields that are required here. I'm going to toggle this to expression, and the same as we did before, we're going to replace this request ID with the video's request ID so that it knows which video to fetch as a result. Then, in terms of authentication, we've already set it up before, so we're just going to pick the same one. Toggle off the send headers part, because the authentication is already in the credential. Rename this to WaveSpeed get, and I'm going to hit execute step.
So, as you can see, the status is "completed", so we can actually go on to the next node. But before I do that, again, just as we did in the image generation workflow, we want to introduce an if loop, so that in case we run the get API and the status is not completed, meaning the video is still processing, the workflow doesn't break down. Right? So, we're going to drag the status here, and we're going to say that if it's equal to "completed", then it is true. We're going to execute the step, and there we go: it goes into the true branch. But what we're interested in here is the case where it's not true, meaning not completed, where we're going to wait for another 15 seconds. All right, I'm going to set that as 15 seconds. Okay, and we're going to loop this back to the get HTTP request node. Essentially, what we've done is that in the case where the status is not completed, it goes into the false branch, waits another 15 seconds, and tries to get the video. If it's still not done, it falls under the false branch again, waits another 15 seconds, and tries to get the video again. It goes around this loop until the video is ready and the condition becomes true, and then we can kick that out to a Gmail node or a Telegram node, or whatever output node you want to send it to. All right. So now we're going to configure this: send a message. It's going to be the same thing as before.
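The wait / get / if-loop pattern just described is plain polling. Here's a compact sketch of that control flow, with a fake status fetcher standing in for the real HTTP get node:

```python
import itertools

def poll_until_complete(get_status, wait, max_tries=40):
    """Mirror of the n8n loop: get -> if completed, done;
    otherwise wait 15s and run the get again."""
    for _ in range(max_tries):
        status = get_status()
        if status == "completed":   # the If node's condition
            return True             # true branch -> Gmail/Telegram node
        wait(15)                    # false branch -> wait node, then loop back
    return False                    # safety cap so we never loop forever

# Fake fetcher: "processing" twice, then "completed".
statuses = itertools.chain(["processing", "processing"],
                           itertools.repeat("completed"))
waited = []
done = poll_until_complete(lambda: next(statuses), waited.append)
print(done, waited)  # True [15, 15]
```

One difference worth noting: the n8n loop as built has no retry cap, so a permanently failed job would cycle forever; the `max_tries` guard here is an extra safety not present in the walkthrough.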
I'm going to name it "video generated" at this particular time, so I'm going to give it a timestamp: drag the now variable. Whoops. Toggle that to expression, drag the now variable here, and there we go. We're just going to do text here, because I only want to pass the output, which is this one over here. Okay.
And I'm going to hit execute step.
And there you go. Message sent. Let's
look at our email and check out the
video. So, this is the video link. I'm
going to click on that. It's going to
download the video. I'll open that up.
Okay.
All right. So, that was the video. I probably could have done better with the prompt for the video: could have told it to say something or do certain actions and things like that. And of course, if you have any particular requirements in terms of the style of the video, you can always tell it that. Right now, it's still a little 3D-ish to me, so I definitely could do a bit better. Of course, this is Veo 3 Fast; if you want higher fidelity, you can definitely pick a higher model, like the full Veo 3. But again, it is rather costly. Veo 3 is one of the costliest models out there for video generation, so alternatives could be Seedance or Kling or Wan, which are models available on the same platform, wavespeed.ai, and of course on a lot of other platforms as well. So that's a quick build-along
for the text to video workflow. And in
the next section, we're going to do an
image to video workflow just to complete
the whole set. And we can explore how to
actually implement or use that in a real
life use case such as a marketing use
case or just for fun. See you in the
next one. So in this section, we're going to build an image-to-video workflow from scratch. For an image-to-video workflow, we're going to need two inputs: one being the base image that the video is going to be based on, and the second being the video generation prompt that we want to send over to the video generation model, so that it can effectively craft a video based on the image as well as your requirements and specifications. Because of the multiple inputs required here, we need a centralized way to communicate with the workflow. In this case, the platform we'll use for this demo is Telegram, but it could be any other communication platform that you spend most of your time on. So in this case,
for this workflow, it gets triggered by one of two things. It could be a Google Drive trigger: you might upload an image like this into a designated Google Drive folder, which would trigger the workflow and send you a Telegram chat message saying that an image has been uploaded, asking you to provide the video idea for what you want the video to look like. You can then chat with the bot from this Telegram chat, saying, for example: create a video of a cat jumping through the hoop based on this image, and a parachute opens to safely land the cat. So essentially, when you reply to this, it activates the second part of the workflow, which is a Telegram trigger, and that kicks your input into OpenAI: in this case, a video prompt generator that will pass the video prompt to Seedance's API, where again we're going to use WaveSpeed as the platform providing the service. So it's going to make a POST API call based on the image that we have. And the way it does that: as you can see in the first part of the workflow, after the Telegram message, the workflow also logs the particular image URL onto a Google Sheet. And in this node, the agent basically dips into the Google Sheet to get the latest URL, in case you have more than one image, and passes both the video prompt and the image URL to the Seedance post API. Then the rest remains the same: a wait node followed by a get HTTP request with an if loop, just to make sure that it doesn't break down in case the video has not finished processing. When it's done processing, it's going to kick it out to either a Gmail node, but in this case, I think it might actually be better if we just replace this with a Telegram node. So there you go. We're going to build this from scratch because it's an entirely new workflow. So let's jump right into it. So the first node
you're going to start off with is the Google Drive node, and the trigger we want to pick is "on changes to a specific folder". Essentially, what this node does is monitor the folder every minute to detect any changes to it. And the change we're talking about specifically in this case is a file upload; this is when you drop a photo into that specific drive. Right? So we're going to pick, in this case, a folder I have ready, which is called "image to video codecloud". I'm going to pick that from the list and watch for "file created". So I'm going to click on fetch test event, and because I've already uploaded a file just a minute ago, it's reading that as a trigger with the corresponding output payload. As you can see, there's a bunch of data here, but what we really want to pay attention to is the web URL link, which is publicly available; the variable is titled "web content link". This is the link we'll be most interested in. And just so that everything works well, we want to make this publicly accessible. So, make sure that you change the sharing settings to be publicly available, at least during the testing
period. So the second node that we want to introduce here is a Telegram node, because what we want, essentially, is to get a notification in the Telegram chat when a photo has been dropped into the folder. So this is a simple "send a photo message" on Telegram. I already have my Telegram account set up, but I'm going to show you very quickly how you can do that. On Telegram, what you want to do is type "BotFather" into the search bar at the top here, and you want to make sure that it's the correct one, which is the one with a blue tick. It's going to lead you to this page, and you want to hit start; it's going to show an automated /start message here. BotFather is the place where you create your bots within Telegram. This is not to be confused with the AI agents that we're creating in n8n; this is simply a bot that listens in on the communication within Telegram and can pass that on as a trigger to your n8n workflow. Now, in this case, I'm going to create a new bot, so I'm just going to choose /newbot, and I need to name the bot. I'm going to call it "n8n test demo 1", and I also need to give it a bot username, so, similarly, I'm going to make it end in "bot". And there you go: this is a link to chat with the bot, the bot has been created, and this is the API key that we can copy. So I'm going to click on that, and I'm going to go back to my
on that. I'm going to go back to my
workflow and in this case I'm going to
create new credential under telegram.
I'm going to name it telegram demo. And
I'm going to essentially paste the
access token here and hit save. And
there we go. We're going to look out for
the green bar here. So connection has
been tested successfully. All right. So
now that's connected. The other
configuration of the fields that we want
to do here is to fill in the chat ID.
And for the chat ID, what we want to do
here is we want to call another telegram
trigger, which is the second trigger
that will activate a separate part of
the workflow where then you can input
the video ideas that you want into the
workflow. So in this case, I'm going to
scroll down. I'm going to go to triggers
and then I'm going to pick the onssage
trigger. So the credentials that it's
connected with is Telegram demo which is
the same credential and the trigger is
on message. I'm going to hit execute
step. As you can see it says there's
problem running the workflow. Please
resolve outstanding issues before you
activate it. It has nothing to do with
this node because as you've noticed the
problem actually lies in the other node
that I was setting up halfway. So I
didn't finish it because I needed the
chat ID and in order to get the chat ID,
I was going to use a trigger note to
call the chat and then paste the chat ID
over here. That's what I was trying to
do. But so in the in room what I'm going
to do, I'm just going to fill this in
with a dummy content which is just a
string which is a test test and that
should solve the issue and let me run
this node. Right? So I'm going to
execute step and it's waiting for a
trigger event. So I'm going to go back
to my telegram chat and again I'm going
to click on this link. So it's going to
lead me to a chat with the bot and I'm
going to say test.
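As an aside, the chat ID the trigger captures is the same one Telegram's Bot API reports. You could also read it from the bot's getUpdates response (api.telegram.org/bot&lt;token&gt;/getUpdates); here's a sketch that parses a canned response of that shape instead of calling the network (the IDs are made up):

```python
import json

# Canned example in the shape of Telegram's getUpdates response
# (all values are made up for illustration).
raw = json.dumps({
    "ok": True,
    "result": [
        {"update_id": 1,
         "message": {"message_id": 10,
                     "chat": {"id": 123456789, "type": "private"},
                     "text": "/start"}}
    ],
})

def latest_chat_id(payload: str) -> int:
    updates = json.loads(payload)["result"]
    # The chat ID lives at message.chat.id on each update
    return updates[-1]["message"]["chat"]["id"]

print(latest_chat_id(raw))  # 123456789
```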
Oh, okay. So, as you can see, it's capturing the very first message here. And because it was just a test run, it stops after the first iteration. So the first iteration was /start; it's not capturing the second one, which is the "test" here. Cool. All right. So, let's head back here. That's the Telegram trigger; let's get the chat ID, which is this one here. Because this is going to be the same chat ID throughout, as that's the way you would communicate with the bot, I'm just going to copy the chat ID here, which is the ID of interest. And because this is going to be the exact same chat that we're going to communicate with in this workflow, I can just copy the ID and paste it hardcoded into my chat ID field here; it doesn't need to be dynamic. All right, cool. So now we want to drag the photo variable into this
field. But again, because I ran the Telegram trigger earlier, it has now overwritten my previous node, the Google Drive trigger node, with nothing. So, what I want to do is just go to executions and go to my very first execution. As you can see, this is populated with the previous data, and I'm going to copy that to the editor. Now that's populated again. Then what I want to do is go to the Telegram node, and I'm going to drag in the URL of the image that rests on Google Drive. So, we're going to scroll right down here, and this is the URL that is publicly available, now that I have shared the permission so that anyone with the link can view it. I'm going to just drag this into the field. And what I also want to do is add a caption, because what I want to say is: you've uploaded this photo to the Google folder; kindly provide the video idea that you want to generate from this image. Okay, so I'm going to blow this up so that you can see it: you have uploaded this photo to the Google folder, kindly provide the video idea that you want to generate from this image.
So, cool. I'm going to hit execute step and see what it does. All right, it's sending something to my Telegram chat, so let's see. Now it's sending me the correct image, which is the cat going through the hoops, and it says: you've uploaded this photo to the Google folder, kindly provide the video idea that you want to generate from this image. Perfect. So that's how I want to be notified of any photos that anyone, or I myself, have dropped into the folder. So I'm going to just move this up. And the next node, to finish this part of the workflow, would be a Google Sheet node, because what we want to do is append a row in a sheet, or rather, log the information about the photos that have been uploaded into the Google Sheet. Right? So we're going to name it image log. All right. It is simply logging to a sheet that I've created here, which is called "image to video log", and it's just a simple two-column sheet: the first column being the image URL, the second column being the date when it was uploaded or created. And I'm going to
make sure that the sharing permission is public as well. I'm going to head back here, and I'm going to choose from the list: image to video log. There's only one sheet there, so it's going to be sheet one, and it's going to read out the two columns, which are the image URL and the date. For the image URL, what we can do is go to the Google Drive trigger, drag the same web link from Google Drive, and just paste it here. And for the date, we want to scroll down to variables, and we're going to drag the now variable. All right. Cool. We're going to hit execute step. All right. And then we see that the second image has been uploaded at this time. Cool. All right. So that's done.
That part of the workflow is completed. What we want to do now is move on to creating the video agent. In this case, I'm going to use the OpenAI "message a model" node, and this is essentially our video prompt agent. I've got my OpenAI account already connected, as before, and I'm going to choose the model here, which is GPT-4.1. And again, I want to execute the previous node. Okay. Right now it says "test", which is not very meaningful for a test run. So what I want to do is open up the node and execute step so that it can get new information. Then I'm going to go back to my Telegram chat, and I'm going to type: create a video of a cat jumping through hoops of rainbows and opening up a parachute towards the end so that it can land safely. Okay, so I'm going to send that message through Telegram. As you can see, it's getting the text, which is: create a video of a cat jumping through hoops of rainbows and opening up a parachute towards the end so that it can land safely. Cool. Okay. So, let's
go back to the video prompt agent. Now, this is populated. What I want to do is direct this text into the prompt field here, which is the user prompt, because that's the video idea that it's going to work from for the video prompt. So, the next thing I want to do is define the system prompt for this node. Again, it is pretty similar to the previous sections, where it was a text-to-image or a text-to-video model, and you can definitely make use of ChatGPT to effectively create and craft the system message for a video prompt engineer. But what I want to show here, and I'm going to blow this up and put it on expression so that you can see it, is the output format. This time I didn't put in much effort in terms of trying to really prompt-engineer the system message.
So, as you can see, there are two main tasks for this particular node: the first one being to create an effective video prompt for a video generation model based on the user's input, which is great; and the second one being to make sure that the output is separated into two JSON objects, which is what the next node is going to expect from this particular node. The first is the video prompt itself, which is going to be a string, and the second is the image URL, which is the image that the video generation model is going to create a video from. In order to fetch the image URL, we need to make sure that we're attaching the right tool for the agent to look at. In this case it's a Google Sheet tool, and the operation I want is get rows. From the list, it's going to be the same "image to video log" sheet, and we're going to pick the only sheet that exists in that project, which is sheet one. And there we go. Essentially, what I'm telling it, going back to the system prompt again, is to fetch this value from the last row of the attached Google Sheet log. All right. We just want to name it, for clarity, Google Sheet log.
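Since the next node depends on that exact shape, it's worth pinning down what the agent should emit: one JSON object with two string fields. The field names below follow the transcript's description (prompt and image URL), so treat them as assumptions:

```python
import json

# What the video prompt agent is expected to output: the video prompt
# plus the image URL pulled from the last row of the Google Sheet log.
# Field names are assumptions based on the description above.
agent_output = json.loads("""
{
  "prompt": "A cat jumping through hoops of rainbows, parachute opening at the end.",
  "image_url": "https://drive.google.com/uc?id=EXAMPLE"
}
""")

# The next (HTTP request) node expects exactly these two string fields.
assert set(agent_output) == {"prompt", "image_url"}
assert all(isinstance(v, str) for v in agent_output.values())
print(agent_output["image_url"])
```

If the agent returns anything else (extra fields, a list, plain text), the downstream expressions that drag these values into the request body will come up empty.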
Okay, cool. So we're going to run this, execute the step, and see what the output is. And again, even though we have already specified the output in the system prompt, we just want to make sure that we toggle on "output content as JSON", to make sure that it outputs two separate JSON objects. As you can see, under the content here, there's the prompt string and the image URL string, which are the two expected parameters for the next node. So the next thing we want to do here is add an HTTP request node. Again, we're going to import curl here. Go back to WaveSpeed, and under models this time, what we want to do is check the category image-to-video. There are a couple of options that we can choose from, but in this case, we're going to choose the first one, which is Seedance version 1. All right, cool. And we're going to toggle into the API docs, copy the curl command, which is the POST method here, go back, and essentially paste the entire thing, and it's going to configure everything here.
Authentication: we set it up previously, so we're going to just choose the same authentication here. With that, we can toggle off the header part, because the authentication has already been included in the credential. And what we want to change here: same thing, we have a couple of parameters that we can set that relate to the type of video and what kind of requirements are needed. But what we want to do here is just change the image URL, dragging the URL into this value here. And under prompt, we're going to replace that with the prompt string, dragging and dropping it here. We can specify the duration; I think 5 seconds is good for now. But yeah, there's a bunch of other stuff that you can configure as well. And then we're going to rename this to Seedance post. And we're going to hit execute step.
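So the image-to-video request body differs from the text-to-video one mainly by carrying the source image alongside the prompt. Roughly, it looks like this; the field names are assumptions for illustration, so take the real ones from the imported curl command:

```python
import json

# Hypothetical image-to-video body: source image + prompt + duration.
# Real field names come from the model's curl example on WaveSpeed.
body = {
    "image": "https://drive.google.com/uc?id=EXAMPLE",     # from the agent's image URL field
    "prompt": "A cat jumping through hoops of rainbows.",  # from the agent's prompt field
    "duration": 5,  # seconds, as chosen in the walkthrough
}
print(json.dumps(body))
```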
And there you go: the post request is successful. And as always, we want to wait until the video is ready for us to get. Right? So, we're going to change this to 15 seconds, and I'm going to hit execute step. As that's running, we're going to introduce the get HTTP request node; set that to GET. Again, for this, we're going to go back to the documentation, copy the get curl command, and just import curl, pasting the entire thing, and it's going to be configured. As usual, we're going to replace this request ID with the ID that we got from the wait node earlier, so I'm just going to paste it there. Authentication: same thing, we're going to pick the same credentials, and we're going to toggle off the headers here. And we're good to go. So as you can see, it waited 15 seconds, we try the get, and the status is "completed". So that's good: 15 seconds was enough. But just in case it's not ready yet, we want to make sure the workflow does not break. So we're going to pick the if
node, creating an if loop. We're going to say that if the status is equal to, and I'm just going to copy and paste to make sure that it's word for word the same, "completed", then it falls into the true branch and goes to the next node. But in case it's not completed, it goes into the false branch and waits another 15 seconds. Right? I'm going to execute the step; actually, it doesn't matter, because it's going to fall into the true branch. I'm going to reconnect this back to the get request, because in case it's not ready yet, it's going to wait another 15 seconds, try to get it, wait another 15 seconds, and try to get it again, until it's ready. So, in this case, it's already done, and what we want to do is just move on to the next node, because it's true. The next node that we're going to append here would be the Telegram node, and this is going to be a send-a-video node: Telegram, send a video. And again, the chat ID can be dynamic, or you can just hardcode it in. But in this case, because I already have it plugged in, I can just drag the ID here. In terms of the video, we're going to get it from the if node's output right there, and drag it in here. Okay. And we're going
to hit execute step. All right. As you
can see, sending over some stuff on the
Telegram. So, let's go to Telegram and
check it out. There we go. As you can
see, there's a video of the cat jumping
through the hoops of rainbows and
parachute opens and it lands safely.
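The wait → get → if pattern we built here is a classic polling loop. A minimal sketch of the same logic in Python, where fetch_status is a hypothetical stand-in for the GET request node:

```python
import time

def wait_for_completion(fetch_status, interval=15, max_attempts=20):
    """Mirror the workflow's loop: wait, GET the status, check it with
    the If node, and repeat on the false branch until it's completed."""
    for _ in range(max_attempts):
        time.sleep(interval)                # the Wait node
        if fetch_status() == "completed":   # the If node's condition
            return True                     # true branch: move on
    return False                            # give up instead of looping forever
```

The max_attempts cap is an addition the workflow doesn't have; without it, a job that never completes would loop forever.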
Cool. All right. Of course, a lot of
things can be done again with the
prompt. We could define what kind of
style we want it to be. This is a bit
more cartoonish. If we want it to be
hyperrealistic, that's possible as well.
So, we just need to include all of that
in the system prompt. All right. I hope
that's a clear build of the image to
video workflow. So, in the next section,
you're going to practice building it
from scratch yourself in the labs and
I'll see you in the next one. And just
very quickly before we go, I just want to point out that I'm building this in one single workflow, or one single worksheet. But what you can do is separate these two workflows into two separate worksheets, and that'll make it much easier for you to troubleshoot and iterate as you go. In case there's any error and the workflow is bugging out or erroring out, you can zoom in on whichever workflow is bugging out instead of looking under the
hood of both workflows at one go. So if you separate it into two workflows, you're going to know which workflow is breaking down, and you can go into the relevant workflow and find out what's not working within the nodes. With that,
I'll see you in the next section. In the
previous section, we talked about how we can access the WaveSpeed API to call on a text-to-image or text-to-video generation model like Veo 3. Now, obviously, that's not the only way to access Veo 3, and perhaps one of the most straightforward ways is to access it through the Google Cloud Platform itself. The way you can do that is to go to cloud.google.com/vertex-ai and go over to the documentation. Vertex AI is where Veo 3 is hosted on Google Cloud. And in the
documentation, you can scroll down to
the full HTTP request with which you can
populate the HTTP node to call on the
API. Now, in order to get started, you
can actually go down to the simplified
version in the sample request. Let's go
back to the workflow now and build a
simple workflow where we can call on the Veo 3 text-to-video generation model. The first node that I'm going to call on here is just a simple manual trigger node, and the reason for that is because we just want to try the HTTP POST node and make sure that everything works well before we replace the trigger node with something that actually populates the API call with a prompt and all the information that you want to pass on to Veo 3. So we're going to run this node just to populate it. The next thing we want to do is obviously call the HTTP request node here, and the method that we're going to choose is POST. We're going to go back to the Veo documentation on Vertex AI.
So, what I want to do is go down. As you
can see, there's a whole HTTP request
curl command here that you can copy and
paste. However, it's got a lot of stuff
that I might not necessarily need to
configure for my videos, at least not at
the start. We want to keep it simple as
this is the first run of the demo. Just
want to let you guys see how to set it
up simply. So what we want to do here is
to scroll down a little bit further and
you see that there is a sample request
that we can copy right here, and this is the URL endpoint which we want to copy. Let's go back to the workflow here and copy in the URL, and there are two things that we need to
configure: the project ID as well as the model ID. In fact, you can get these directly from the Google Cloud console. For example, if your project ID is n8n-ak-demo, you can get the project ID from your Google Cloud dashboard; note that it's the project ID, not the project number. So you can just copy this and go back there. Actually, I mistyped that at first, so make sure you copy it exactly. The second thing you need to configure is the model ID, and you can scroll up and see the different model IDs that you can choose from. In this case, I am going to just copy veo-3.0-fast-generate-001. There we go.
I'm just going to copy that and go back.
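For reference, the endpoint URL you end up with has roughly this shape. Treat this as a sketch: the exact host, path, and model ID should be taken from the sample request in the Vertex AI docs, and the project ID and location here are placeholders. The scope constant is the standard Cloud Platform OAuth scope that the Google credential will need.

```python
# Placeholders -- substitute your own values from the Google Cloud console.
PROJECT_ID = "your-project-id"          # the project ID, not the project number
LOCATION = "us-central1"
MODEL_ID = "veo-3.0-fast-generate-001"  # one of the Veo model IDs in the docs

# Shape of the long-running prediction endpoint from the sample request:
url = (
    f"https://{LOCATION}-aiplatform.googleapis.com/v1"
    f"/projects/{PROJECT_ID}/locations/{LOCATION}"
    f"/publishers/google/models/{MODEL_ID}:predictLongRunning"
)

# OAuth scope the Google OAuth2 credential needs for Vertex AI:
SCOPE = "https://www.googleapis.com/auth/cloud-platform"
```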
Actually, I'm just going to paste the entire thing and we're good to go. Delete the POST part, and there we go: that's the URL right there. Now, the way the authentication is done is going to be different from the way we would call platforms like WaveSpeed or any other platform that uses the typical API key in the header. With Google Cloud, what you want to do is choose a predefined credential type, and under the credential type there's a specific one you want: that's Google OAuth2 API,
which you want to choose. And right now
I already have an account set up. But
what you want to do is to create a new
credential. And as you can see, it's very similar to the way we've set up our OAuth earlier. You have the OAuth redirect URL here, which you have actually already pasted before in the Google Cloud Console; you can double-check that it's the same URL. But
what you need to grab is the client ID
and client secret. So, what you need to
do is just simply go back to your Google
Cloud Console and go to credentials and
go over to your client IDs here and
you've got your client ID. Paste that in
and go back again for your client
secret. And depending on the settings, sometimes your client secret will be unavailable once you've created it, and you're supposed to copy it and store it somewhere safe. So, if you set this
up previously and you stored it
somewhere, you can use the same client
secret and paste it into your n8n credential. However, if you haven't,
what you can do is you can delete this
and add another secret. So that's an
option as well. So again, as you can
see, there are authorized redirect URLs here, which are essentially the same as what we see in the n8n credentials right here. And we're good there. So we're just going to paste the client secret there. And, just as on the first run when we set up OAuth previously, we need to add a scope here. Essentially, the scope grants the bearer access to the Google Cloud Platform services that are authorized by IAM. Since Vertex AI is part of the Cloud AI Platform APIs, it requires an OAuth token with this scope in order to accept the request. Now again,
before we go back to the workflow and do
that, what you want to do is you want to
make sure that you go over to the Vertex AI API on your Google Cloud console and make
sure that you enable this API. And once
you're done, you can simply click sign
in with Google and it'll lead you to a
login page. And what you want to do is
just allow the access and you're good to
go. So you can see the green bar shows
up and the account is connected. So you
can close that now. And what you want to do next is toggle on the Send Body section. This time we want to choose to use a JSON body, and for that we're going to just go back to Vertex AI and copy the entire JSON body, which is the simplified version here. And even
before you do that, you can configure a couple of things: the prompt for the video, the storage URI for the output if you are using a GCS bucket and want the video output stored in that bucket, and the sample count. So I'm going to take that, but at the same time I'm going to configure a couple of extra fields as well, such as the aspect ratio and how long I want the video to be. I'm going to just paste the entire chunk here so you can see. It's basically similar to the simplified version; it's just that I've added a couple of things, such as the aspect ratio and the duration of the video, and I have actually removed the storage URI, because I didn't set up a GCS bucket and I just wanted the video returned in base64. If you remove the storage URI, Vertex AI is going to output the video in base64 format back to n8n.
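The body described here, with the extra fields, looks roughly like this. It's a sketch: the field names follow the simplified sample request in the docs and should be verified there, and the prompt is a placeholder.

```python
# Request body for Veo text-to-video. storageUri is deliberately omitted
# so Vertex AI returns the video as base64 instead of writing to a bucket.
body = {
    "instances": [
        {"prompt": "A serene walk down the beach at sunset"},  # placeholder
    ],
    "parameters": {
        "aspectRatio": "16:9",
        "durationSeconds": 8,  # the text-to-video feature supports 8 seconds
        "sampleCount": 1,
    },
}
```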
Now that that's all set up, what I want to do is just hit execute step, and there you go: I've got an operation name here, and basically this is sending over to Vertex AI, and Veo 3 is generating the video as we speak. Let me just rename this node to Veo 3. All right. And we're going to wait for 15
seconds. Going to run that. Essentially, what we're trying to do is wait for the video to be generated before we poll for the result. While that's waiting, we're going to create another HTTP request node and call it "poll for video from text." The method is also POST in this case, and you can get the endpoint URL from the poll long-running operation section right here in the same documentation. But I'm just going to go back and paste the entire URL here. Under authentication, it's the same thing: predefined credential type, and we're going to pick the same credentials with the same account. The body, this one's going to be simpler, because we're just getting the result back; basically, we just need the operation name.
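That polling body really is one field: the operation name returned by the first request. A sketch, with a placeholder value (the field name follows the docs' sample request and should be checked there):

```python
# The first POST's response includes the operation's resource name;
# polling sends it back in a one-field body. This value is a placeholder.
operation_name = (
    "projects/your-project-id/locations/us-central1"
    "/publishers/google/models/veo-3.0-fast-generate-001"
    "/operations/some-operation-id"
)
poll_body = {"operationName": operation_name}
```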
And we're going to hit execute step. As
you can see, it's got an output, but it says "unsupported output video duration: 5 seconds," because I specified it to be 5 seconds; the supported duration is 8 for the text-to-video feature. So, I'm just going to go back and change this to 8. All right. So,
I'm going to rerun this.
Run this again. And we're going to hit
execute step.
All right. So, we're getting an output
here, and it says that the node contains 6 MB of data and that it's going to slow down the browser temporarily, but we're just going to hit show data here because I want to show you the entire format of it. As you can see, because we didn't specify a bucket for the destination of the output, it's actually returning base64 data into n8n. What we want to do next is obviously convert this into binary so we can download it and do whatever we want with it: send it over to whichever platform, email, etc. But just before I move on, we want to introduce an if loop as usual, because we want to make sure that, in case the status isn't done, it goes into a loop and tries to fetch the video again until it's completed.
I've actually picked the wrong type there; it should be boolean. We're going to make sure that when it's true, which means it's done, it goes into the true branch, and if it's false, it goes into another wait node, which is going to wait another 15 seconds. All right, it's running again, and we're going to loop this back to the poll node. I'm going to run this; anyway, it shouldn't fall into the false branch in this case. It should fall into the true branch. Yep, there you go: the true branch goes green. So this is going
to a Set (Edit Fields) node, because essentially we want to set this base64 output as a string. We're going to just call this field b64 and hit execute step there. The output is rather large, and it's coming back as a string. The last step here is the Convert to File node, and we're going to pick Move Base64 String to File. We're going to hit show data there, drag this parameter right here into the Base64 Input Field, and output the file under the typical default name, data.
Going to hit execute step there. So, as
you can see, we're getting an output here, which is the data field. Now, we can just
download the file and that's essentially
the video. Let's open that up.
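Under the hood, those last two steps (setting the base64 string, then Move Base64 String to File) amount to a plain base64 decode. A minimal Python equivalent, with an arbitrary output filename:

```python
import base64

def b64_to_file(b64_string: str, path: str = "data.mp4") -> str:
    """Decode a base64-encoded video string and write it to disk,
    which is what the Convert to File node does for us."""
    with open(path, "wb") as f:
        f.write(base64.b64decode(b64_string))
    return path
```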
All right. So, we've now got a serene walk down the beach. It's a good video by Veo 3. As you can see, the audio comes
with it as well. Looks pretty good.
Yeah. So, we've got the output here, and with the output, obviously, you can upload it to Drive or send it somewhere. This may not be exactly the way you want to download the video from Vertex AI; what I wanted to show is just the way you would set all of this up and be able to call Vertex AI and the Veo 3 model to generate text-to-video through this workflow. Of course, the easiest way is to create a GCS bucket and set the destination storage to that bucket so that you can just retrieve it from there later on. This is just a simple build to let you guys know how it all works. But there are a couple of reasons why we're using platforms like WaveSpeed AI instead of just Vertex AI. The reason for that
is because there are other models that we potentially want to use. Again, Veo 3 is one of the costlier models, even though it's one of the best out there at producing high-fidelity video and audio. For everyday use, models like Seedance, Wan, and Kling might be more suited to your use case, depending on your needs. For that reason, platforms like WaveSpeed or fal, and many other platforms like them, provide an easy one-stop shop where you can access all the different types of models without the need to set up all the credentials and API calls just to access one particular model. Hope this is a good explainer. I'll see you in the next section. Now, let's quickly
talk about a decision you'll face when starting out with n8n: should you use n8n Cloud, or should you self-host it? Both have their perks and a few trade-offs. Starting out with n8n Cloud gives you the benefit of simplicity. You just sign up and start building: no servers, no Docker setup, no headaches. Updates and security patches are handled for you, so you're always on the latest version as long as you update it in your admin panel. Plus, if you run into trouble, you've got official support and uptime monitoring already built in.
But the trade-off? Cost and control. With cloud, you're paying a subscription, and while it's worth it for convenience, you don't get full control over the environment, and you're limited to the resources that n8n offers. On the flip side, self-hosting means you get total freedom. You decide how much power your instance has, where it's located, and who has access. It can be much cheaper if you already have server space or even want to run it locally for personal projects, and you have the flexibility to integrate with internal tools and databases that you might not want to expose to the public cloud. To me, this is probably one of the biggest positives of self-hosting: for some companies and businesses, it's simply not a choice but a security requirement to host it on servers that are not exposed to external environments. But again, the drawback would be maintenance. You're in charge of keeping the server alive, updating n8n, and handling security patches. If something breaks, there's no safety net, and you've got to fix it yourself. So to sum it up: choose n8n Cloud if you want the easiest path to get started and don't want to deal with infrastructure; self-host if you want full control, tighter integrations, and airtight security.
Honestly, there's no wrong answer. It
depends on your use case. And if you're
just learning, cloud is usually the
fastest way to go. But if you're
building production workflows and want
flexibility, self-hosting gives you the
keys to the kingdom. I'll see you in the
next one. In this section, we're going to cover the sub-workflow tool and how we can use it to split complex workflows into separate, easy-to-manage segments so that it's simple for us to troubleshoot and iterate on.
Now in this case I'm going to take the
workflow that we've just built in the
previous section which is the image to
video workflow. There's a different architecture we can use to build these workflows. As I've said previously, the current workflow has two triggers in one single worksheet, and that might not be ideal for error handling, because when the workflow errors out, you have to go into both workflows to check which node the error has occurred in. One way to address that is to split the workflow into two.
So in this case, I'm going to take this
as the main workflow.
As you can see, the trigger is a Google
Drive trigger, which is triggered when a
photo was uploaded to the drive. And
once triggered the next note, it's going
to send a telegram message with the
photo from the drive and ask users for
the video prompt or the video ideas that
user want to generate the video based on
this image. And after that, it's going
to lock the image URL in the Google
sheet. Now, what we're going to do here
is we're going to add a execute
subworkflow node.
Essentially, I'm going to show you how
to set that node up and how it links to
another workflow and how the entire
logic works when it comes to the
interface.
So, in this case, when you choose an
execute workflow node, you're going to
have to choose which workflow that's
going to activate or trigger, and in this case, you can actually just choose it from the list. Here, I've created another workflow and called it sub workflow demo. Essentially, it's a copy-paste of just the top part of what we had in our image-to-video workflow, and we're going to edit that later on. So, going back to the main workflow: after you've created the sub-workflow, you can simply choose it from the list,
and essentially it's going to call the
sub workflow whenever this workflow is
triggered right after it logs the image
URL into the sheet.
Now, what you want to do here is go to the sub-workflow. In this case, we're going to replace the trigger node with an Execute Sub-workflow trigger. As you can see, it says here "when executed by another workflow," so that's the one we want to choose.
So there are a couple things that you
can configure when it comes to how you
want the information from the main
workflow to be transferred to the sub
workflow.
Now, in this case, we're just going to
choose accept all data for now, just to
show you the distinction of what happens
when you configure the data.
All right.
So, I'm going to take this, and I'm not going to plug it into the rest of the workflow just yet, because I just want to show you what kind of data gets populated when this workflow is executed.
So, let's try to run this workflow and
see what we get.
All right. So the execute workflow has
run successfully.
And one thing about the sub workflow
trigger is that when a workflow is
triggered by another workflow, you don't
actually need to publish it here. And as
you can see, it says that the execute
workflow trigger does not require
activation as it is triggered by another
workflow.
Cool. So, we're going to hit executions here to see the logs of the execution. As you can see, I've executed the main workflow, and it's receiving data here in the trigger workflow. I'm going to hit copy to editor so that I can see what kind of data has been populated for this node.
All right. So, I'm just going to blow this up slightly so that you can see. What I've done is pick accept all data as the input data mode, and essentially the data that's been pushed forward is the image URL and the date. These are the two data points coming out here, and that's because, in the output payload we received from the previous Google Sheet node, there are only two components being pushed out, which are the image URL and the date, and it's simply passing those on to the other workflow.
So, if you want more information to be passed into the next workflow, then you're going to have to configure the node to make sure it's passing on the relevant information that you want sent to the other workflow.
Now, going back to the other workflow, I'm going to just push this back a little bit. In this instance, we want to choose define using fields below, just to see what the difference is from accept all data. When we choose define using fields below, we need to define the fields. So, for example, in this case, we want the image URL as one of the data fields, and let's say we don't really need the date and don't care about it. Okay, so we're only specifying the image URL.
So let's go back to the main workflow
and this time we're going to only run
this execute workflow node and see what
it sends across to the subworkflow. So
as you can see, the node is executed successfully. Let's go back to the
subworkflow and check the execution
logs.
As you can see, there's a new one coming
in here.
And we're just going to open that up.
Now, you see, the difference is that it's only going to output the data in the format that you've defined here. We've defined only the image URL field, as a string, and it's only going to pass that data property in the output payload, which means the date is not included in the payload here. So that's the main difference between defining using the fields below and just accepting all data. This is extremely useful when you want to shape the input in a specific format or in a specific way for the next node and the rest of the workflow.
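The two input modes boil down to a simple filtering rule, sketched here in Python (pass_to_subworkflow is a hypothetical helper for illustration, not an n8n API):

```python
def pass_to_subworkflow(payload, declared_fields=None):
    """'Accept all data' forwards the whole payload; 'define using
    fields below' forwards only the declared fields."""
    if declared_fields is None:               # accept all data
        return dict(payload)
    return {k: payload[k] for k in declared_fields if k in payload}

item = {"image_url": "https://example.com/cat.png", "date": "2025-01-01"}
pass_to_subworkflow(item)                 # both keys forwarded
pass_to_subworkflow(item, ["image_url"])  # the date is dropped
```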
All right, so let's go back to the editor. In this case, we're going to keep define using fields below, but we're going to add the date variable as well. I'm just going to hit execute step again, and there we go: there's no input data, so what it's doing is outputting these two variables with no content inside. Cool. All right. So now, when we look at
the architecture, what's going on here is that the two workflows are activated when a photo is uploaded to Google Drive. The user is notified on Telegram that a photo has been uploaded, together with an image of the photo; that image is then logged into an image log sheet, and then the sub-workflow is called. In the sub-workflow, this trigger is the start of the entire chain of the workflow. And what we want to do now is chain it up with the video prompt agent here.
What we want to do just before we chain it up with the video prompt agent is add a Telegram node. If you remember, what we need is a text input from the user describing what they want to do with the video, just an idea of how they want to generate it, because they don't need to come up with the entire prompt. We already have a video prompt agent as part of the workflow; they just need to tell us directionally what they want the workflow to do for the video. So, in this case, we're going to select the send message and wait for response node.
So we're going to make sure that we pick
the right Telegram account. And in this
case, I'm just literally going to go
back to the main workflow. I'm going to
copy the chat ID because it's going to
be the same chat ID that we use.
Right? Again, you can make this more dynamic by passing it through the Execute Workflow node into the sub-workflow, but in this case, because it's going to be the same chat, I'm just hard-coding it into the fields. Now, previously, the Telegram node would show the photo to the user and say, "You've uploaded this photo to Google Drive. Kindly provide the video idea that you want to generate from this image." We don't need that part anymore, because that is just going to be a notification message. Instead, we're going to have the message sent here in the sub-workflow. So, we're going to make this a question: "Could you provide the video idea that you want to generate from this image?" And we want the response type to be free text. All right.
Cool. So I'm going to hit execute step
here
just to see what we get on the telegram.
As you can see, the message that we got from Telegram is "Could you provide the video idea that you want to generate from this image?" All right. And there's a respond tab here, which I can click on.
I'm going to open that link
and basically type in my response for
the video generation ideas that I want.
So create a video of a cat flying
through hoops of rainbow
and landing
on a field of
sunflowers
with purple
unicorns
running in the background. All right.
So, we're just going to do that and hit
submit.
And cool. All right. So, let's go back
to the workflow. As you can see, it's receiving the response. This is kind of hard to see, but I'm going to just drag this to the left a little bit. As you can
see, the input that we're getting is
create video of a cat flying through
hoops of rainbows and landing on a field
of sunflowers with purple unicorns
running in background, which is a text
that we've input into the field there.
Cool. All right. Okay, so we've got an
output data here and what we want to do
now is to chain up the output. And let
me just blow this up again so that you
can see.
So we're going to chain that up with the
video prompt agent.
Now we're going to configure this very quickly and just drag the text from the Telegram node again; drag this here to the right, the text which contains the video idea, into the prompt here, and this is going to be the user prompt. In terms of the system message, it's going to be the same, because we still want it to create an effective video prompt for the video generation model based on the user's input, and also to output only the following two values in JSON format: the prompt that it generates, as well as the image URL that it fetches from the Google Sheet. So everything else stays the same for the rest of the workflow; the only difference is that we've now changed the trigger for this workflow from a Telegram on-message trigger to an executed-by-another-workflow trigger.
So the rest of the workflow is going to
run the same.
And what I'm going to do now is I'm
going to do a top to bottom run of the
master workflow and the sub workflow as
well. So I'm going to hit save here and
I'm going to hit execute workflow here.
It runs the execute workflow and it's spinning. The reason for that: if we go back to Telegram, it says that I've uploaded this photo, which is correct.
And again, it says, "Could you provide the video idea that you want generated from this image?" All right. So, I'm going to respond here; it's waiting for my response. And again, I'm going to say: create a video of a cat flying through hoops of rainbow and landing on a bed of sunflowers with purple unicorns running in the background. Okay, I'm going to hit submit here
and go back to the workflow.
It's not showing anything right now, but
if you go to the execution logs, you can
see that it's getting an input. It's
getting a trigger and it's running the
workflow.
So, it might take a little bit, because it's basically going through the HTTP API POST call to the video model on WaveSpeed AI, and then it's going to wait 15 seconds and do a GET API call. If that fails, it runs the loop again and tries to get it again, and once it has the result, it sends it to the Telegram chat. All right, so
we're just going to head back to
Telegram.
As you can see, the video is ready.
And I've got to say, this one is not as good as I expected it to be, probably something to do with the prompt, but it's getting the main gist right: the cat is landing on a bed of sunflowers with unicorns running in the background, though the style is a little inconsistent. Of course, we'd have to go back to the system prompt and fix that. But the point of this is to show you guys how a different architecture could work. It essentially does the same thing, but the difference is that you have now split the one big workflow into two separate workflows, and this will help you when it comes to troubleshooting and iterating, because you can literally look into and debug the errors occurring in either the main workflow or the sub-workflow. It just makes your life easier when you're trying to handle errors if anything goes wrong. I hope that's a pretty clear explanation of how we can use the Execute Sub-workflow node as well as the executed-by-another-workflow trigger and how it all works. I'll see you in the
next section. In this section, I'm going to run you through how to self-host n8n on your local machine through Docker Desktop. The easy way to get started is to go to GitHub and grab the n8n-io/self-hosted-ai-starter-kit. What you want to do is just go over the documentation a little bit, and depending on what kind of machine you're on (I'm currently on a Mac), this is the documentation that I want to be referring to. As you can see, it's a simple git clone and you can get started.
But just before you go ahead and do that from your terminal, you want to make sure that you have Docker Desktop downloaded and installed. If you haven't done that before, go over to docker.com and download the Docker Desktop app onto your machine. I already have mine set up here, so just go ahead and download it if you haven't done so. You also want to make sure that you have Git installed on your local machine. So, once you've got everything set up (and again, I'm on a Mac machine), what you
want to do is scroll down here, where we're going to clone the entire package together with Ollama inside, because that's the LLM runtime that's going to run on your local n8n. If you already have Ollama set up on your machine, there is separate documentation for that. But in this case, what we want to run is the command lines right here, and we're just going to copy them to our terminal. It's basically a git clone. Once we've cloned the repo, we want to cd into the self-hosted-ai-starter-kit and set up the environment file, making sure that it's in there. There we go. What we're going to do now is run a docker compose command, and this time it's the CPU one, because this is a Mac with Apple silicon. Once you run that, it's going to take some time, depending on how quick your internet connection is. As you can see, it's downloading a couple of things: Postgres, Qdrant, n8n, and Ollama, which are the basic building blocks that you need to work with locally.
Now, that took a little while, but it's completed. What we want to do now is go to localhost here, port 5678, which is the default port you can access your n8n instance on. So, in my browser, I'm going to go over to localhost:5678. As you can see, it's going to ask for an email credential to sign up. You can use any email you want, because it's all basically running on your local machine. All right. And once you're signed in,
you're going to see that there is a demo
workflow because part of the repo there
is a demo workflow template that's been
loaded up, so you can just open that. As you can see, the template loads up with a basic LLM chain using the Ollama chat model. For this basic chain to work, you have to plug in a fallback model, so in this case, I'm just going to add another Ollama model here, choose the same credential, and just leave it at that. Basically, we can just say, "Hey, how's it going?" and it's going to access the Ollama chat model. Again, all of this is being powered by your local machine and is running on Docker Desktop; I'll show you in a little bit what that looks like. But here you go: the node has executed successfully and it's responding with this output.
And on Docker Desktop, as you can see,
there's one container running, which is
the self-hosted AI starter kit, and under
that you can see Postgres, Qdrant, and
n8n are running. And if you stop the
operation of the container and you head
back to the n8n instance, you can see
that the connection is lost here. So it
is entirely running on your Docker
container and powered by your local
machine. So going back to our docker
desktop here, I'm going to hit start.
And on the left-hand bar you see there
are images. These are images that you
can download, you know, the latest versions.
You can search for n8n
images, download the latest version
of n8n, and basically run that. And
then there are the volumes here, which is
where all the persistent data is stored.
I'm not going to go into the weeds
of how it all works together because we
do have a separate course for Docker to
address that. But going back to the
container, if you want to do any
configuration changes and stuff like
that, what you can do is actually open up
the configuration here and actually
configure the docker-compose file depending
on the kind of requirements that you
need, such as user credentials, persistent
storage, and port configurations. But
that's it for a super quick one on how
to host n8n on your local machine with
docker. So in the past section, we talked
about how we can host n8n on our local
machine with Docker Desktop. And in this
section, we're going to go through how
you can run n8n on an EC2 instance with
Docker running on top of it. So an easy
way to get started is to go to
kodekloud.com/playgrounds.
So this is where we host all our
playgrounds, including AWS, Azure, GCP,
and Azure Data. So what we want to do here
is to go to AWS and spin up the AWS
sandbox playground by hitting launch
now. And it's just going to take a
couple seconds to start up the
playground. And there we go. We've got
our credentials here. What we want to do
is just copy this to our browser. And
that's going to lead us to a sign-in
page on the AWS console. And what we
want to do is to go back to the
playground and copy out the username and
the password. And you can just fill that
in to gain access to the AWS console.
And there you go. Got a console ready to
go. And what we want to do here is head
over to EC2 instance and hit instances.
And as you can see, because it's a new
session, there are no instances just
yet. So we're going to hit launch
instance. And I'm going to call this
instance n8n-demo.
And we're going to choose Ubuntu
here. And we're just going to leave
everything else as is for now. And what
we want to choose here is a T2 medium.
And again, this is just to cater for the
fact that we're going to have Ollama as
part of the package that's going to come
in. As you recall from the past section,
when we did the git repo clone, it's
going to be quite sizable. So we just
want to make sure that we're catering
for that. And of course, the storage
later on, we want to configure that to
be quite sizable as well. So for the key
pair login, we can create a new key
pair. We're just going to call
this n8n-demo-key. It's an RSA type
with .pem
format. So let's create the key pair. And
that's just going to get saved in my
download folder. And under network
settings, we'll make sure that it is
allowing SSH traffic from anywhere for
the time being. And at the same time,
under configure storage, we want to
change this to 30 GB just to ensure that
it's got enough space to cater for the package.
Right. And everything looks okay to go
right now. So, we're going to launch
instance. And there you go. The instance
was successfully launched. It takes just
about a minute. And as you can see, the
instance has been spun up. And right
before we go to our terminal to start
setting things up, what we want to do is
just go into the instance here and make
sure that the security group is
configured. So, there you go. We're
going to the security tab here and hit
the security groups. And under the
inbound rules, we would just want to
make sure that we add an inbound rule
for the n8n port. So this
is custom TCP and it's going to be port
5678, which is the default port. And
we're going to allow it to be able to be
accessed from anywhere. Now, we're going
to save rules. And there we go. That's
been added. And going back to our
terminal, we're just going to type in
chmod 400 and then n8n-demo-key.pem, which
was our pem key, just to make sure that
we can gain access with the right
key permissions. And we're going to SSH into
the instance by typing in this command
line, which includes the pem key as well
as ubuntu@ followed by the
public IP address, which we can get from
the EC2 instance console. Just copy
that, go back to the terminal, paste
that in, and we're going to run that. So,
this is a typical message that's going
to show up for the first time if we're
trying to SSH into it. So, we're just
going to hit yes. It's going to just
double confirm because the host is not
known by any other names. But since
we're the ones who created the instance,
we know that this is safe. So, we're
going to hit yes. So, the next thing we
want to do is to update the system and
install Docker.
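On Ubuntu, the update and Docker installation steps look roughly like the following (a sketch; package names can vary between Ubuntu versions):

```shell
# Refresh the package lists and upgrade existing packages
sudo apt update && sudo apt upgrade -y

# Install Docker from the Ubuntu repositories
sudo apt install -y docker.io
```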
We're going to run the install there. Next,
we're going to enable Docker as well as
add our user to the docker group, to make
sure that we can run Docker without sudo. And
what we're going to do next is just to
install Docker Compose version two. And
just to make sure that it's installed
properly, we're going to run Docker
Compose
version command. Sorry, it should be
docker compose version, without the
dash. Oops, there's a typo there. So it
should be docker compose
version.
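The enable, group, and Compose v2 steps above can be sketched like this (assuming the docker-compose-plugin package provides Compose v2 on your Ubuntu version):

```shell
# Start Docker now and on every boot
sudo systemctl enable --now docker

# Add the current user to the docker group so we can run
# docker without sudo (log out and back in to take effect)
sudo usermod -aG docker $USER

# Install Docker Compose v2 as a CLI plugin
sudo apt install -y docker-compose-plugin

# Verify: note it's "docker compose version", no dash
docker compose version
```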
All right. So, that's version two. Good.
We can move on. So, the next few steps
are going to be very similar to what we
did in the last section when we were
trying to self-host on Docker. So, we're
going to do a git clone of the git repo. And
we're going to change our directory to
self-hosted-ai-starter-kit. Okay.
And we're going to copy the environment
file. Let's make sure that that's
created. All right. And just before we
run the docker compose command, we're
just going to make some minor changes in
the environment file, because sometimes
the security settings might not allow
you to access the destination due to
certain types of security protocols. And
since we're just trying to show how to
set it up, what we want to do in this
case is to disable some of that.
So, what we want to do here is to just
paste in the line
N8N_SECURE_COOKIE=false. And once
that's done, we're going to exit and
hit yes. And there we go. So,
I've just cleared that up. And what
we're going to do here is just run the
docker compose command, basically to
pull, in the same way as the previous
section, all the n8n files, the package,
and everything that's contained in the
git repo into the EC2
instance.
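Putting the clone, environment-file, and compose steps together, the sequence on the EC2 instance looks roughly like this (a sketch, assuming the repo ships an example env file as in the copy step above):

```shell
# Clone the n8n self-hosted AI starter kit and move into it
git clone https://github.com/n8n-io/self-hosted-ai-starter-kit.git
cd self-hosted-ai-starter-kit

# Create the environment file from the example
cp .env.example .env

# Relax the secure-cookie check so the UI can be reached over
# plain HTTP on the instance's public IP (demo setups only)
echo "N8N_SECURE_COOKIE=false" >> .env

# Pull and start everything in the background
docker compose --profile cpu up -d
```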
So it's going to take a little bit of
time as it's pulling all of that. So now
that's done. We want to check if the
container is up and running. So, we're
going to type in the docker ps command.
And as you can see, the containers are
running. So, let's go to the public IP
address. And the way you can do that is
to go to your AWS console and copy the
public IPv4 address. And once you do
that, you can simply type it in with the
port 5678 at the end.
And there you go. That's your n8n
instance. And as you can see, this is
the first time logging in, so there's no
persistent credential data. So I can
just type in any credentials, just like
this, and
it's going to spin up the n8n instance.
As you can see, this is the dashboard that
you would normally see on n8n
Cloud. It's just that now it's hosted
on an EC2 instance on AWS. Everything
else works the same way. If you go into
your demo workflow, the same template is
going to spin up as the default demo
template, which is attached to Ollama. And
again, Ollama was part of the package, so
it was downloaded and basically loaded
up within the EC2 instance. Cool. So if
you were to go back to your instance
right here,
you can see the usage in the instance
tab.
And that spike was probably when we were doing
the git clone. But of course, as you
move on and operate the n8n environment
within the EC2 instance, you're going
to see the consumption data over here.
Okay. So that's a straightforward way to
show you how you can quickly set up an
EC2 instance and get your n8n
environment connected and hosted on
there through Docker. And again, if
you're not familiar with how EC2 and
Docker work, we do have courses on
that, so you can go and check those out
for sure. And of course, if you'd rather
just skip all the hassle of not just
setting up the environment and the
infrastructure to run this, but also
maintaining it at the same time, you
could just use n8n Cloud. They do charge
monthly pricing, but at the same time
you do get a lot more convenience if
you're just starting out with automating
things through n8n, and you don't need
to spend a lot of time worrying about
all the other stuff in terms of
maintenance of the infrastructure. So
let me know what you guys think and what
you guys prefer. Otherwise I'll see you
in the next section. Congratulations,
you made it to the end of the course.
Think back to where we started at the
beginning of this course. Nodes and sub-workflows
may have felt abstract. Now
you've not only learned the
fundamentals, you've actually built
intelligent, production-ready workflows
from scratch. That's a huge step. Along
the way, you've explored how to connect
APIs and services with the HTTP Request node,
and build AI-powered agents that
automate real-world tasks like email
replies, research, and even handling
conversations on your behalf. You've
also learned how to generate images and
videos directly from text or media
inputs and also use sub workflows to
organize complex systems into reusable
modular components. And you've also
learned how to host n8n anywhere, from
n8n Cloud to Docker or AWS EC2,
giving you the flexibility of your own
environment. We then moved into new
territory: RAG agents with Pinecone
vector databases, enabling memory and
context in your automations. You've
also learned how to use MCPs in your n8n
workflows to scale and reuse some of the
building blocks. And of course, we've
gone through the best practices in terms
of error handling, retries, and
leveraging the n8n template marketplace
to accelerate your builds. By now,
you've seen how automation goes beyond
just saving time. It's about
orchestrating systems, extending the
reach of AI, and freeing yourself from
repetitive work so you can focus on
higher value tasks. So, where do we go
from here? First, get building. The
workflows we've covered are a strong
foundation, but n8n is also limitless in
its flexibility. Try connecting new
APIs, experiment with custom nodes, and
layer in additional agents. The more
you explore, the more powerful your
automations become. Second, share and
learn with others. At KodeKloud, you're
part of a global community of learners,
engineers, and automation enthusiasts.
Ask questions, showcase your workflows,
and draw inspiration from others'
projects. Collaboration is one of the
best ways to grow your skills. And if
you'd like to share feedback, ideas, or
just to show me some of the cool stuff
that you've built, feel free to get in
touch with me. I'd love to hear what you
build with n8n.
Third, think about your own environment.
Could you integrate n8n into your DevOps
pipelines, replace manual reporting with
AI-driven summaries, or deploy customer
support agents that scale your
business? Whatever your role is,
automation is a lever that you can pull
to create real impact. A final thought:
remember, automation isn't about
replacing people. It's about augmenting
what you do. By letting n8n handle
the repetitive, the mechanical, and the
time-consuming, you create more space for
creativity, strategy, and innovation.
So, keep experimenting, keep building,
and keep pushing the boundaries of what's
possible with n8n. Thank you for learning
with me and with KodeKloud. I'm Maronei,
and I can't wait to see what you build
with n8n.
So I want to run through the differences
between running your instance on n8n
Cloud versus the lab playgrounds that we
have in the KodeKloud course. Now the
first thing you'll notice as you go into
the lab is that you still have to
provide your email and your first name,
last name credentials as well as a
password in the instance. And don't
worry, this is not saved anywhere. So
you don't actually have to save the
credential and the password somewhere.
You can use different emails and
password for each of these instances. So
once that's filled in, you can hit next.
And there's going to be a series of
onboarding questions here which you
don't need to fill. So you can hit get
started. And in the same way for the
paid features information, you can just
skip. And there you go. And now you're
in the admin dashboard that is very
similar to the n8n Cloud environment.
However, there's still some nuances and
some differences as we go into the
workflow. So here what we're going to do
is we're going to click start from
scratch. And the very first workflow
that you're going to build is the email
AI agent. And the first trigger to that
is a chat trigger. And I'm going to
speed through this because this is very
similar to the workflow that we've done.
But I want to just show you quickly the
differences between using the KodeKloud
Keyspace and the OpenAI API key.
So if you're using the KodeKloud
Keyspace, what you want to do when you
select your model, for example, in this
case, I'm just going to select the OpenAI
chat model. And what I want to do is
to follow the instruction in the left
hand bar right here. And you see that
there's a link to the KodeKloud Keyspace.
I'm going to just hit that URL, and I'm
going to hit launch now, and it's going
to lead me to this dashboard right here.
I'm going to click start playground, and
there we go. So, this is the dashboard
on the KodeKloud Keyspace. And what I want
to pick is OpenAI GPT-4.1. And in this case, I'm
going to just copy the API key here. I'm
going to go back to my workflow. And
here, I'm going to hit create credential
and I'm going to paste the same exact
API key. I'm going to skip the
organization ID. And for the base URL, I
want to make sure that I'm replacing
this with the base URL that I obtained
from the KodeKloud Keyspace. Go back to
my lab and paste the base URL right here.
Okay, so I'm going to hit save right now. As
you can see, it says connection tested
successfully. However, there's a couple
of things I want to point out here.
Because if you pick from the list, as
you can see, the list doesn't really
match the known models of GPT. This is
because it's not really working based on
the UI that's been built. So if you try to run it
based on the chat ID or chat model that
we've selected, I'm just going to run a
hello message here. It's going to go to
the AI agent, but it's going to error
out. So what you need to do is go into
the chat model and instead of picking
from list, you want to go by ID and as
suggested from the instruction on the
left hand bar here, you want to copy the
OpenAI/GPT-4.1.
Just copy that and paste it all word for
word. And let's do a test run again. And
this time it should actually be able to
access the appropriate model. So that's
if you're using the CodeCloud keyspace
API keys. Now what I want to point out
is the difference between this and using
the OpenAI API key is that it's much
more straightforward when you use OpenAI
API keys. So to show you the difference,
what I'm going to do here is I'm going
to create another new credential and I'm
going to call it OpenAI account 2.
This time I'm going to head over to
platform.openai.com.
And what I want to do is head over to
API key section. I'm going to create a
new secret key named n8n email
integration. I'm going to hit create
secret key and I'm going to copy the API
key and head back to my lab. I'm going
to paste the API key and leave the base
URL as is and I'm going to hit save. So
there you go. So connection tested
successfully. And the difference is
instead of by ID, I can now just pick
from the list and it should load up the
correct list of models that I might
possibly want to use. So for example, if
I choose GPT 4.1, it's just going to be
that. And we're going to run this node
again. And as you can see, it's calling
the correct model right here. Okay. So
the next thing I want to point out is
when you add your Gmail node in the next
workflow, or a Google tool for that matter,
the difference between doing that in our
labs versus n8n Cloud is that on n8n
Cloud, when you create a new credential
with Gmail, you often have a button with
which you can sign in directly using
your Google account, if your browser
happens to be Google Chrome. However,
with the labs, what you need to do when
you create a new credential is actually
connect it with the OAuth method. So
what you want to do is head over to
console.cloud.google.com.
And the first thing you want to do is
create a new project. And for the new
project, I'm going to name it n8n email
app. All right. So, I'm going to leave
this as no organization. I'm going to
hit create. And it's going to take a
couple seconds to create the project.
And once that's done, I'm going to
select the project. And as you can see,
it says n8n email app project. And
the very first thing I want to activate
is I want to go to Gmail API. So what
I'm doing right now is I'm creating a
project, because that's how Google
recognizes each of these OAuth accesses
that we're giving it. But the way the
security works is you need to enable the
particular tool that you want to use
within the project. So in this case I
want to use Gmail. So I want to make
sure I enable the Gmail API. All right.
Once that's enabled, I want to head over
to the OAuth consent screen. And right
now there's no OAuth consent that's set
up, so I'm going to just hit get started.
And in this case, I'm going to have to
give it an app name as well. So I'm just
going to call it n8n email
app. Okay. For the user support email,
I'm going to put this one.
And we're going to select external. And
by the way, each of these steps is
documented on the left hand side panel
of the lab. So you don't have to
memorize any of these. But we're going
to go to next. And under contact
information, just going to put my email
here. And once you're done, just hit
continue. and create. So, just before we
go, I just want to go to audience and I
want to add a test user here, which is
an email that you're going to use to
send out basically the emails that you
want the agent to send out. So, in this
case, it's maronei@kodekloud.com. We're
going to save that. And lastly, we're
almost there. We're going to go to API
and services again, and this time we're
going to go to credentials. And what we
want to do is hit create credentials
with OAuth client ID. For the application
type, we want to choose web application,
and under name, we want to name it
n8n email OAuth client. And then
we're going to add the authorized
redirect URIs, which we can obtain from
our lab. So going back into the lab
here, you see that this is the OAuth
redirect URL, and we're going to copy
this and we're going to head back and
just fill this in and we're going to hit
create. As you can see, we now have the
client ID and client secret to the app
that we just created. So, we're going to
copy this and head back to client ID.
Paste it in. Client secret. Copy that.
Paste in the client secret. And you'll
see now that there is a sign in with
Google button that pops up. So, what you
want to do is just hit that and a Google
login pop-up will show. And then you
just want to select the correct email
address. And it'll say that Google
hasn't verified this app. But because
you're the one who created it, you know
it's safe. So we're just going to hit
continue. And we're going to select all,
because we want to have the agent be
able to do all these actions with our
Gmail. So we're going to hit continue
now. And as you can see, it says
connection successful. And we're good to
go. So just wait a couple seconds here
within the labs and it's just going to
load up. And there you go. I already
have my credential set up here. And we
just want to run an execute step to show
you that everything is working. So in
the workflow, you'll see that we've
chosen to define all of this by the
model. So we're going to let the AI
agent define this. And we're going to
start chatting and say, "Hi, can you
send an email to codingcloud.com
to just say hello?" All right. So we're
going to hit this. So as you can see now
the workflow has run and it's actually
sent an email to my Gmail. So let me
take a look. And there you go. It says,
"Hello Maronei, just want to say hello, best
regards." Okay. So obviously it's not
very sophisticated because actually in
the AI agent we didn't even specify any
system prompt. So the whole point is
just to show you the main difference
between running the environment on n8n
Cloud and within our playgrounds,
specifically covering the part where the
KodeKloud Keyspace is being used, as well
as the fact that when you access any
Google tools (Gmail, Google Sheets, and
stuff like that), you do need to go to
your Google Cloud console to set up the
project, the app, and the OAuth clients
in order to access Google services with
the workflow. And as you explore n8n in the
course, you're going to realize that
there are some differences between
running n8n Cloud and n8n within your
own self-hosted environment. For
example, the availability of community
nodes, supported versions, and a few
other features might only be available
on n8n Cloud. So if you're facing any
issues in any part of the build, just
keep in mind that it might be because
you're using a self-hosted version or a
lab-hosted version from KodeKloud. And
it's not necessarily a limiting issue;
there's always a workaround for that.
It's just something to keep in mind.