AgentCore / Strands / memory + state
FULL TRANSCRIPT
This day is very exam heavy because it mixes state, memory, tools, and tracing: all things that break silently if you design them wrong. AgentCore, strands, memory, state. Big idea, one sentence: an agent is not a single prompt. It's a long-lived system with state, memory, tools, and traceability.
Number one: AgentCore versus strands. Conceptual, not syntax. You'll see different names, but AWS is testing the concept, not the brand. AgentCore, conceptually: the runtime brain of the agent. It decides the next action, calls tools, updates state, and reads and writes memory. Think: the control loop. Strands: independent execution threads or reasoning paths. One strand might be booking, another validating, another fetching context. Strands let agents do multi-step reasoning, pause and resume, and branch safely. Exam mindset: you are not expected to code this, only to design for it.
Number two: agent state versus memory. This distinction matters. Agent
state: short-term. The current task, step number, tool call in progress, temporary variables. State is session-scoped, mutable, short-lived. Example: the user is halfway through booking an appointment. Agent memory: long-term. User preferences, past interactions, known entities, historical outcomes. Memory is persistent, durable, and reused across sessions. Example: the user prefers morning appointments.
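None of this requires code on the exam, but the distinction is easy to make concrete. A minimal sketch; the class names and fields are illustrative assumptions, not AWS APIs:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AgentState:
    """Session-scoped and short-lived: discarded when the session ends."""
    session_id: str
    current_step: str = "collect_date"
    tool_in_progress: Optional[str] = None
    scratch: dict = field(default_factory=dict)  # temporary variables

@dataclass
class MemoryRecord:
    """Persistent and durable: stored in something like DynamoDB, reused across sessions."""
    user_id: str
    memory_type: str  # e.g. "preference"
    value: str        # e.g. "prefers morning appointments"

state = AgentState(session_id="call-123")
memory = MemoryRecord(user_id="u-42", memory_type="preference",
                      value="prefers morning appointments")

# State mutates as the task progresses; memory outlives the session.
state.current_step = "check_availability"
```

If the call drops, `state` is thrown away; `memory` is what you would read back next week.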
Exam trap: mixing state and memory is a design flaw. Number three: where AWS expects you to store memory. The most common and exam-safe answer: Amazon DynamoDB. Why DynamoDB? Low latency, durable key-based access (user ID, agent ID), scales cleanly. Typical keys: PK = user ID, SK = memory type#timestamp.
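To see why that key design works, here is a tiny in-memory stand-in for the table (in production this would be DynamoDB `put_item` and `query`; the table layout and names are illustrative):

```python
import time

# In-memory stand-in for a DynamoDB table.
# Key design: PK = user ID, SK = "<memory type>#<timestamp>",
# so one user's memories of a given type sort in time order.
table = {}

def write_memory(user_id, memory_type, value, ts=None):
    sk = f"{memory_type}#{ts or int(time.time())}"
    table.setdefault(user_id, {})[sk] = value

def read_memories(user_id, memory_type):
    # Equivalent of a Query with begins_with(SK, "<memory type>#").
    items = table.get(user_id, {})
    return [v for sk, v in sorted(items.items())
            if sk.startswith(memory_type + "#")]

write_memory("u-42", "preference", "morning appointments", ts=1700000000)
write_memory("u-42", "preference", "dr-lee", ts=1700000100)
prefs = read_memories("u-42", "preference")
```

The sort key makes "all preferences for this user, oldest first" a single cheap query, which is exactly the access pattern agent memory needs.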
If an answer stores long-term memory only in prompts: wrong. Number four: static plus one,
agent memory edition. Static: agent logic, tool definitions, memory schema, state machine rules. Plus one: the current user interaction. The agent design stays fixed; conversations change. That's static plus one again. Number five: tool integration patterns. Very exam-heavy. Agents don't do things; they call tools. Common tool patterns: Lambda functions, APIs, databases, search and RAG tools.
Wrong design: tool logic embedded in prompts, no validation, no traceability. Exam signal: an external system, an action, or a side effect means a tool call. Number six: why agent tracing exists. This is critical. Agents fail in non-obvious ways: wrong tool order, repeated loops, hallucinated tool arguments, state corruption.
Tracing answers: what did the agent think? Which tool did it call? In what order? With what input and output? Number seven: what agent tracing implies in AWS terms. Tracing means step-by-step visibility: tool calls logged, state transitions recorded, timestamps preserved. Typically integrated with Amazon CloudWatch and AWS X-Ray. Exam signal: debug, audit, governance, explain behavior means tracing plus logs.
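What "step-by-step visibility" looks like can be sketched as structured trace events; in production these lines would be shipped to CloudWatch Logs, with X-Ray capturing the execution path. The event shape here is an assumption for illustration:

```python
import json
import time

trace = []  # in production: CloudWatch Logs / X-Ray segments, not a list

def record_step(step_type, **detail):
    """Append one structured trace event and emit it as a log line."""
    event = {"ts": time.time(), "type": step_type, **detail}
    trace.append(event)
    print(json.dumps(event))  # a log agent could ship this line as-is

# A traced tool call: reasoning, inputs, outputs, and ordering all preserved.
record_step("thought", text="need availability before booking")
record_step("tool_call", tool="check_availability", args={"date": "2024-06-01"})
record_step("tool_result", tool="check_availability", output=["09:00", "10:30"])
record_step("state_transition", frm="collect_date", to="confirm_slot")
```

With events like these you can answer every tracing question above just by reading the log back in order.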
Number eight: why tracing matters for governance, not just debugging. AWS cares about compliance, explainability, audit trails. Tracing lets you answer: why was this action taken? Which data was used? Which tool modified a record? Who approved this logic? If an exam question mentions regulated systems, tracing is mandatory. Number nine: a realistic agent architecture, exam friendly. You can see the code in our conversation history.
This loop may run multiple times per
request.
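The original code isn't reproduced here, but the loop it describes can be sketched minimally. Everything below is illustrative (the policy, tools, and state shape are toy stand-ins), including a step budget as a guard condition against the runaway-loop failure mode covered later:

```python
def run_agent(request, state, tools, max_steps=10):
    """One request may take several loop iterations: decide, call tool, update state."""
    for _ in range(max_steps):  # guard condition: a stuck state cannot loop forever
        action = decide_next_action(request, state)       # the 'brain'
        if action["type"] == "respond":
            return action["text"]
        result = tools[action["tool"]](**action["args"])  # external side effect
        state = update_state(state, action, result)       # validated transition
    raise RuntimeError("step budget exceeded; inspect the trace")

# Toy policy and transition, just to make the loop runnable.
def decide_next_action(request, state):
    if "slots" not in state:
        return {"type": "tool", "tool": "check_availability",
                "args": {"date": request["date"]}}
    return {"type": "respond", "text": f"First free slot: {state['slots'][0]}"}

def update_state(state, action, result):
    return {**state, "slots": result}

tools = {"check_availability": lambda date: ["09:00", "10:30"]}
reply = run_agent({"date": "2024-06-01"}, {}, tools)
```

Two iterations here: one tool call that updates state, then a response once the state says enough is known.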
Number ten: classic exam traps. Watch carefully: store agent memory only in prompts; no need to trace if responses look fine; state and memory are the same; tools are just prompt instructions. All of these are wrong. AWS wants systems, not clever prompts.
One memory story: the assistant with a notebook. State: what page they're currently on. Memory: notes saved in a filing cabinet. Tools: phone calls they can make. Tracing: CCTV footage of everything they did. If something goes wrong, you don't ask the assistant to remember better. You review the footage.
Exam compression rules, memorize: short-term means state; long-term memory means DynamoDB; actions mean tools; debug and audit mean tracing; design once means static plus one. If an answer treats an agent like a single prompt: wrong. What AWS is really testing: they want to know if you understand that agents are long-running, auditable systems, not chat completions. If your answer includes state, memory, tools, and tracing, you're answering at AWS professional level. Here are some very real production-style examples that map exactly to what AWS expects you to design and reason about in the exam.
Real examples. Day 13: AgentCore, memory, state, tools, and tracing.
Example one: an AI receptionist for a dental clinic. Classic agent plus memory. Scenario: an AI receptionist books appointments, answers questions, and follows up later. A user calls today, then again next week.
What is state here? State is temporary and session-bound. During one call: current step, collect preferred date; missing info, preferred provider; tool in progress, check availability. This lives only for the session. If the call drops, state is discarded.
What is memory here? Memory is long-term and reusable, stored in Amazon DynamoDB. You can see the code in our conversation history. Next week, the agent reads memory, skips unnecessary questions, and personalizes behavior. Exam takeaway, open quote: state equals what's happening now; memory equals what we know forever. Close quote. Tool integration: when booking, the agent calls a booking API (a Lambda tool), receives availability, updates state, and writes the confirmed appointment to memory. Tools are external actions, never prompt tricks.
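That booking sequence, tool call, then state update, then memory write, can be sketched with toy stand-ins (the booking function stands in for a Lambda-backed API, and the dict stands in for DynamoDB; all names are illustrative):

```python
memory_store = {}  # stand-in for DynamoDB: persistent, survives the session

def booking_tool(date, time_slot):
    """Stand-in for a Lambda-backed booking API: an external action."""
    return {"confirmed": True, "appointment": f"{date} {time_slot}"}

# Session state: what's happening right now in this call.
state = {"step": "collect_date", "date": "2024-06-01", "slot": "09:00"}

result = booking_tool(state["date"], state["slot"])            # 1. agent calls the tool
state["step"] = "confirmed"                                    # 2. updates session state
memory_store[("u-42", "appointment")] = result["appointment"]  # 3. writes durable memory
```

If the session ends now, `state` disappears, but the confirmed appointment survives in the memory store for next week's call.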
Example two: a compliance-sensitive agent, or why tracing matters. Scenario: an agent assists staff with financial compliance checks. One day it incorrectly approves a transaction. Now auditors ask, open quote: why did the agent do that? Close quote. Without tracing (bad design), you only have the final answer and no visibility. This fails governance requirements. With tracing (correct AWS design), the agent records reasoning steps, tool calls, inputs and outputs, and timestamps, stored via Amazon CloudWatch Logs, with AWS X-Ray capturing the execution path. You can now answer: which rule was applied, which data source was queried, which tool approved the transaction. Exam signal: if you see audit, compliance, regulated, or explain decision, tracing is mandatory.
Example three: a multi-step agent with strands, branching safely.
Scenario: an agent handles, open quote: book an appointment and send me a confirmation SMS. Close quote. This is not one action. How strands appear conceptually: the agent creates separate strands. Strand A: booking logic. Strand B: notification logic. Each strand has its own state and can succeed or fail independently. If booking succeeds but the SMS fails, the agent retries the SMS; it does not rebook the appointment. Exam takeaway: strands prevent cascading failures in multi-step agents. Example
four: a RAG-enabled agent with memory plus tools. Scenario: a policy assistant agent answers questions and remembers which documents a user has already reviewed. Flow: one, the user asks a policy question. Two, the agent checks memory for previously accessed topics and avoids repeating explanations. Three, the agent calls a retrieval tool (an OpenSearch knowledge base). Four, the agent responds. Five, memory is updated: user reviewed policy X. On the next question, the agent retrieves only new context, and the response is shorter and more relevant.
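The five-step flow above fits in a few lines. A sketch with toy stand-ins (the `retrieve` function stands in for an OpenSearch knowledge-base query, and the dict stands in for durable memory; all names are illustrative):

```python
# Long-term memory: topics each user has already reviewed (DynamoDB in production).
reviewed = {"u-42": {"policy-A"}}

def retrieve(topic):
    """Stand-in for a retrieval tool, e.g. an OpenSearch KB query."""
    return f"full explanation of {topic}"

def answer(user_id, topic):
    seen = reviewed.setdefault(user_id, set())
    if topic in seen:            # 2. memory check: skip repeated explanations
        return f"(short refresher on {topic})"
    seen.add(topic)              # 5. memory updated: user reviewed this topic
    return retrieve(topic)       # 3-4. retrieval tool, then respond

first = answer("u-42", "policy-B")   # new topic: full retrieval
second = answer("u-42", "policy-B")  # repeat: shorter, cheaper response
```

The second call never touches the retrieval tool, which is exactly where the UX, cost, and accuracy gains come from.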
This improves UX, cost, and accuracy. Example five: debugging an agent loop, a real failure mode. Problem: the agent keeps calling the same tool repeatedly. Without tracing, you see a slow response and high cost, with no idea why. With tracing, you see: state never transitions, the tool result is not validated, the agent re-enters the same step. Fix: update the state transition logic and add a guard condition. Exam signal: agent loops or unexpected behavior means tracing plus state inspection.
Static plus one, real-world anchoring. Static: agent logic, tool definitions, memory schema, tracing configuration.
Plus one: the current interaction. Design once, run forever. One memory story, never forget: the assistant at a desk. State: sticky notes on the desk. Memory: the filing cabinet (DynamoDB). Tools: phone calls and emails. Tracing: CCTV footage. When something goes wrong, you check the footage, not guess. Ultra-short exam cheat sheet: long-term info, DynamoDB; short-term flow, state; external actions, tools; debug and audit, tracing; agents are not just prompts. If an answer treats agents as just prompts, it's wrong.