AWS Certified Generative AI Developer - Professional: Build an agent + Lambda tool
FULL TRANSCRIPT
Day 18, build an agent plus a Lambda
tool. Day 18 is where AWS tests whether
you understand a very strict rule. The
LLM is allowed to think. Only Lambda is
allowed to touch real systems. That
separation is the entire point of this
day. Imagine this. A large hospital
deploys an AI assistant called the
hospital bed manager. Doctors and nurses
ask it things like, "Is there a free bed
in cardiology? Reserve a bed for an
incoming patient. Notify the correct
ward." If this assistant guesses
availability, people get delayed care.
If it hallucinates reservations, the
hospital descends into chaos. So, the
hospital makes one rule very clear. The
AI can plan, but only real systems are
allowed to decide the truth. This is
what AWS means by agent plus Lambda
tool. An agent is not a single call. It
is a loop. The LLM plans the next step.
A Lambda tool executes a real action.
The tool returns structured results. The
LLM observes those results and decides
what to do next. And the loop continues
until the task is complete. The key exam
idea is this. Tool outputs are the
source of truth. The LLM must never fake
them. Now picture the clean exam-friendly
architecture. A client, web or voice,
sends a request through API Gateway.
That request goes to an agent
orchestrator, usually another Lambda or
application service. The orchestrator
talks to Bedrock where the LLM acts as
the planner. When an action is required,
the orchestrator invokes a Lambda tool.
That Lambda touches the database or
internal API. Results flow back. The
agent decides the next step. Finally, a
response is returned to the client.
Every part has a role. Nothing overlaps.
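The plan-act-observe loop the orchestrator enforces can be sketched in a few lines of Python. This is a minimal illustration only: the planner here is a stub standing in for a Bedrock model call, and the tool names and result shapes are hypothetical, not a real AWS API.

```python
# Minimal agent-loop sketch. The planner is a stub standing in for a
# Bedrock call; tool names and result shapes are illustrative only.

def stub_planner(goal, observations):
    """Decide the next step from the goal and tool results so far."""
    if not observations:
        return {"action": "tool", "tool": "check_beds", "args": {"ward": "cardiology"}}
    last = observations[-1]
    if last["tool"] == "check_beds" and last["result"]["available"] > 0:
        return {"action": "tool", "tool": "reserve_bed", "args": {"ward": "cardiology"}}
    return {"action": "finish", "answer": "Done: " + str(last["result"])}

# Tool registry: each tool returns structured JSON, never free text.
TOOLS = {
    "check_beds": lambda args: {"status": "ok", "ward": args["ward"], "available": 3},
    "reserve_bed": lambda args: {"status": "ok", "ward": args["ward"], "bed_id": "B14"},
}

def run_agent(goal, max_steps=5):
    observations = []
    for _ in range(max_steps):            # the loop, bounded for safety
        step = stub_planner(goal, observations)
        if step["action"] == "finish":    # planner decides the task is complete
            return step["answer"]
        result = TOOLS[step["tool"]](step["args"])  # a Lambda tool would execute here
        observations.append({"tool": step["tool"], "result": result})
    return "stopped: step budget exhausted"

print(run_agent("find and reserve a cardiology bed"))
```

Note the step budget: a real orchestrator caps the loop the same way, so a confused planner cannot spin forever.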
Let's be very clear about who does what.
The Bedrock model interprets the user's
goal. It decides which tool to call and
in what order. It writes the final
user-facing response. The Lambda tool
performs the real work. It queries
databases. It calls internal APIs. It
reserves resources. The orchestrator
enforces the loop. It manages retries.
It handles timeouts. It applies safety
and control. This separation is exactly
what AWS wants you to describe. Now,
let's talk about what a Lambda tool must
look like. In AWS's mental model, tools
return structured JSON, always, not
prose, not explanations, not friendly
text. For example, a bed availability
tool returns a clear result, a status, a
count of available beds, the ward name,
a timestamp. This structure is not
optional. It's how the agent stays
deterministic. AWS expects you to follow
strict tool rules. Tools never return
free form text. They always include
status and error codes. They validate
inputs. They run with least-privilege
IAM roles. They log every invocation,
and they are idempotent, so retries are
safe. These details matter in exam
questions. Now let's walk through a real
request. A nurse says find a bed for a
patient in cardiology and reserve it if
available. The agent does not guess. The
planner thinks. First check bed
availability in cardiology. If beds are
available, reserve one. Then notify the
ward. The executor, the Lambda tools,
runs those steps. One Lambda checks the
real database. Another Lambda writes the
reservation. The LLM never invents
numbers. It only reacts to tool output.
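A tool like that availability check might look like the following Lambda handler sketch. The event shape, ward names, and bed counts are assumptions for illustration; a real handler would query the hospital database or an internal API instead of the hard-coded lookup.

```python
import json

def lambda_handler(event, context):
    """Bed-availability tool sketch: validate input, return structured
    JSON only. The ward lookup dict is a stand-in for a real DB query."""
    ward = event.get("ward")
    if not isinstance(ward, str) or not ward:
        # Invalid input is a structured error, never free-form prose.
        return {"status": "error", "error_code": "INVALID_INPUT",
                "message": "ward must be a non-empty string"}
    available = {"cardiology": 3, "oncology": 0}.get(ward.lower())
    if available is None:
        return {"status": "error", "error_code": "UNKNOWN_WARD", "message": ward}
    return {"status": "ok", "ward": ward, "available_beds": available,
            "timestamp": "2024-01-01T00:00:00Z"}  # real code would emit the current UTC time

print(json.dumps(lambda_handler({"ward": "cardiology"}, None)))
```

Every branch, including the error branches, returns the same machine-readable shape, which is what keeps the agent deterministic.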
Finally, the assistant responds, "Bed
B14 in cardiology has been reserved. The
ward has been notified." That response
is grounded in real system calls. Why
does AWS love Lambda as the tool layer?
Because Lambda creates a clean security
boundary. It integrates easily. It works
cleanly with IAM. It logs automatically.
It supports retries and timeouts. It can
run inside a VPC to reach private
systems. If the exam mentions calling
internal APIs, touching databases, or
performing actions securely, Lambda
tools are a top tier answer. Error
handling is also exam relevant. If a
tool times out or fails, the
orchestrator retries, the LLM is told
the tool failed and can choose a
fallback. If there's an access denied
error, the fix is IAM, not prompting. If
a tool returns invalid JSON, it's
treated as a failure. The LLM never
makes up missing tool results. AWS also
cares deeply about observability. You
must be able to answer which tool was
called, with what parameters, how long
it took, what it returned, which user
triggered it. This is done with
CloudWatch Logs and X-Ray tracing.
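The retry and observability behavior can be sketched as an orchestrator-side wrapper. The log fields and retry policy below are illustrative; in production the records would flow to CloudWatch Logs and X-Ray rather than a local logger.

```python
import json, logging, time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent.tools")

def invoke_tool(tool_fn, tool_name, params, retries=2):
    """Invoke a tool with retries. Always return structured JSON the
    planner can observe, including failures. The log line answers:
    which tool, with what parameters, how long it took, what happened."""
    for attempt in range(retries + 1):
        start = time.monotonic()
        try:
            result = tool_fn(params)
            duration_ms = round((time.monotonic() - start) * 1000, 1)
            log.info(json.dumps({"tool": tool_name, "params": params,
                                 "duration_ms": duration_ms,
                                 "attempt": attempt, "status": "ok"}))
            return {"status": "ok", "result": result}
        except Exception as exc:  # timeout, throttle, invalid JSON, etc.
            log.warning(json.dumps({"tool": tool_name, "params": params,
                                    "attempt": attempt, "error": str(exc)}))
    # Retries exhausted: the LLM is told the tool failed so it can pick
    # a fallback; the missing result is never invented.
    return {"status": "error", "error_code": "TOOL_FAILED", "tool": tool_name}

# Usage: a flaky tool that times out once, then succeeds on retry.
calls = {"n": 0}
def flaky(params):
    calls["n"] += 1
    if calls["n"] == 1:
        raise TimeoutError("simulated timeout")
    return {"available": 2}

print(invoke_tool(flaky, "check_beds", {"ward": "cardiology"}))
```

The key design point is that failure is itself a structured observation: the planner sees `TOOL_FAILED` the same way it sees a success.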
Especially when agents perform multiple
steps, tracing becomes essential. Now,
watch for the exam traps. Never let the
LLM pretend it called a tool. Never
return unstructured text from tools.
Never put secrets into prompts. Never
give tools wildcard IAM permissions.
Never skip timeouts and retries. These
are all subtle ways to fail day 18
questions.
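One rule from earlier, idempotency, is what makes those mandatory retries safe. A common pattern is an idempotency key: replaying the same request returns the original result instead of reserving a second bed. The in-memory store and field names here are illustrative; a real tool would use something like a DynamoDB conditional write.

```python
# Idempotent reservation sketch: retries with the same request_id are
# safe because the first result is stored and replayed. The dict is a
# stand-in for a durable store with a conditional put.

_reservations = {}
_next_bed = iter(["B14", "B15", "B16"])

def reserve_bed(request_id, ward):
    if request_id in _reservations:        # replay: return the original result
        return _reservations[request_id]
    result = {"status": "ok", "ward": ward, "bed_id": next(_next_bed)}
    _reservations[request_id] = result     # record before returning
    return result

first = reserve_bed("req-123", "cardiology")
retry = reserve_bed("req-123", "cardiology")  # e.g. after an orchestrator timeout
print(first["bed_id"] == retry["bed_id"])     # same bed, no double booking
```

Without the key, a timed-out request that actually succeeded server-side would reserve two beds on retry, exactly the chaos the hospital scenario warns about.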
Here is the one sentence to lock this
in. The LLM decides. Lambda verifies and
executes. If you remember that, day 18
becomes automatic. Final self-check. A
system must plan actions using an LLM,
but only trusted code may access
databases and APIs. What architecture
should you use? An agent with Lambda
tools. That's day 18 mastered.