
Ralph Wiggum Showdown w/ @GeoffreyHuntley

48m 43s · 8,524 words · 1,276 segments · English

FULL TRANSCRIPT

0:00

should probably just have this. I think

0:03

we're live. What's up, Jeff?

0:05

>> What's up, Dex?

0:07

>> Uh, I'm really jealous of your of your

0:09

DJ setup over there. That's uh pretty

0:11

incredible.

0:13

>> It's been a while. Thanks, mate. Like, I

0:15

remember

0:16

>> um when I first caught up with you in

0:19

San Fran, probably what June, July,

0:22

rocking into a meetup and like go to

0:24

Allison, it's like here's some free

0:26

alpha. If you run it in a loop, you get

0:28

crazy outcomes. And this was with Sonnet

0:32

4.5. And now we're up to Opus 4.5.

0:35

>> No, dude. This was not Sonnet 4.5. This

0:37

was in May. This would have been like

0:38

Sonnet 3.5, I think.

0:40

>> Yeah, it was. Anyway, it was cooked back

0:44

then. 6 months later, as the model gets

0:46

better, uh, the techniques —

0:51

um there's been a few attempts to turn

0:53

it into products.

0:55

>> Um but I I don't think that will work.

0:59

Um, because I see LLMs as an amplifier of

1:03

operator skill. Um and if you just set

1:07

it off and run away, you're not going to

1:10

get as great of an outcome. Um,

1:14

you really want to actually babysit this

1:16

thing and then get really curious why it

1:18

did that thing and then try to tune that

1:22

behavior in or out and really think

1:24

about it and never blame the model and

1:26

always be curious about what's going on.

1:28

So it's really highly supervised.

1:31

>> Highly supervised. Yeah. You guys were

1:32

talking with Matt today was like human

1:34

on the loop is better than human in the

1:35

loop which is like don't ask me but I'm

1:37

going to go poke it and prod it and test

1:39

it and I might stop you at certain

1:41

points but I'm not being the model's not

1:43

deciding when and how.

1:44

>> Correct. So it's it's really cute that

1:46

Anthropic has made the uh Ralph plugin

1:50

which is nice. So it's starting to cross

1:52

the chasm but I do have some concerns

1:54

that people will just like try the

1:57

official plugin and go that's not it.

1:59

And like you've you've you've poked in

2:01

the internals and I've I've we sat down

2:04

and you've done it. You you see the

2:06

concepts. It's like some of the ideas

2:09

behind HumanLayer.

2:10

>> It's um you say that it's not it. So how

2:14

is it not it Dex?

2:15

>> Okay. So I'm going to talk about what we

2:18

actually want to do today which is like

2:20

>> I have two GCP VMs

2:23

>> and in both of them we have this specs

2:25

and they both have a repo checked out.

2:28

Um, this one actually doesn't even have

2:29

a loop.sh yet. This just has the, like,

2:32

slash ralph wiggum create loop or

2:36

whatever. I forgot what the exact thing

2:36

is. We're going to go set it up today. I

2:38

haven't actually turned this on yet,

2:40

but I've created these two git repos.

2:42

One has a PROMPT.md and a loop.sh, and

2:45

it will eventually create this

2:47

implementation plan. This is like

2:49

vanilla Ralph from the Jeff recipe,

2:51

right? And so I've got in this shell I

2:55

have my loop.sh, which is literally

2:58

just run claude in yolo mode, cat the

3:00

prompt in, and let it go do its thing.
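That whole recipe fits in a handful of lines. A minimal sketch of such a loop.sh, assuming the `claude` CLI's `-p` print mode and its `--dangerously-skip-permissions` flag; the optional iteration bound is an addition for supervised runs, not part of the vanilla recipe:

```shell
#!/usr/bin/env bash
# loop.sh -- vanilla Ralph: feed the same prompt to a fresh agent, forever.
# Each pass starts a brand-new process, so the context window ("the array")
# is re-allocated deterministically from PROMPT.md every time.

ralph_loop() {
  local prompt_file="${1:-PROMPT.md}" max_iters="${2:-0}" i=0
  while :; do
    cat "$prompt_file" | claude -p --dangerously-skip-permissions
    i=$((i + 1))
    # Optional bound so you can stay "on the loop" and review each pass;
    # 0 means loop forever, as in the original recipe.
    if [ "$max_iters" -gt 0 ] && [ "$i" -ge "$max_iters" ]; then
      break
    fi
  done
}

# ralph_loop PROMPT.md      # run forever (vanilla Ralph)
# ralph_loop PROMPT.md 1    # a single supervised iteration
```

Because each pass is a fresh process, every iteration starts with a clean context window built only from PROMPT.md and whatever is on disk.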

3:03

>> Yeah. Uh, bigger, by the way. Triple

3:06

size. Let's bigger. Bigger. Bigger.

3:08

>> Yeah. Yeah. Yeah. And I'm actually going

3:09

to close some of these terminals. Um,

3:12

and then each of these have um let's see

3:15

if we can pull this down. Yeah. Each of

3:17

these have a so there's two directories.

3:19

There's two git repos I've made. One to

3:22

test the anthropic version.

3:26

uh and one to test the uh I'll call it

3:28

the Jeff version of Ralph. Um so we've

3:32

got the bash one and then we've got the

3:36

plugin one.

3:38

And so these both have received they're

3:40

just empty repos. Um I'm going to add

3:42

the loop and the prompt. We'll look at

3:43

the prompt in a sec. But then we've got

3:46

these just like specs for a project that

3:50

I was hacking on called customark which

3:52

if you remember Kubernetes and the

3:53

Kustomize world, it's sort of a, uh,

3:56

customization pipeline for like

3:59

incrementally building markdown files

4:01

with like patches and stuff. So

4:05

>> um so anyways they're both getting the

4:06

same set of specs and they're both

4:09

basically being instructed to uh

4:13

run they both get the same prompt

4:16

which is like god and actually I guess

4:19

this one will also get implementation

4:21

plan right?

4:22

>> Yeah

4:23

>> assuming we have the same prompt and the

4:25

prompt is essentially I'll just push it

4:28

and go get it. um

4:29

>> while you go get it. Now in that diagram

4:32

you have GCP

4:34

>> folks we uh we've been at AGI for a very

4:38

long time. If you define AGI as

4:40

disruptive for software engineers at

4:42

least six months now and these models

4:44

are just getting better now the GCP

4:48

thing — I see people go, oh, what about the

4:50

sandboxing? Dangerously-allow-all? Think

4:53

about it: not running dangerously-allow-all is

4:56

literally, like, deliberately

4:58

injecting humans into the loop. You don't

5:01

want to inject yourself into the loop

5:03

because that's essentially not AGI

5:06

you're dumbing it down

5:07

>> but

5:08

>> interesting it is kind of dangerous to

5:09

do things. So the fact that you're

5:11

running on a GCP VM

5:14

is key, right? You you want to you want

5:17

to enable all the tools,

5:20

>> but it remember the trifecta, right? Is

5:24

like

5:25

>> untrusted in

5:27

>> is

5:29

access to the network

5:32

and then access to private data.

5:35

So we are giving it access to do

5:37

everything which means it can search the

5:39

web which means it can accidentally

5:40

stumble on untrusted input. We're giving

5:42

it access to the network to because it

5:44

needs to do things I don't know search

5:46

web whatever it is and we're giving it

5:48

we're not giving it access to private

5:49

data. So there here's why we're safe is

5:51

this is running in like a dev cluster in

5:52

GCP and I think the only thing on there

5:55

is like the default IAM key, which can

5:58

literally like look up information about

6:00

the instance. You can look at this as

6:03

layers of onion,

6:05

layers of the security onion.

6:07

>> Uh so like if you run dangerously allow

6:11

all from your local laptop, congrats.

6:13

They they go nab your Bitcoin wallet if

6:16

it's on your computer. They steal your

6:17

Slack authentication cookies, GitHub

6:20

cookies, and they pivot, right? That

6:21

that's that's terrible. But if you

6:25

create a custom-purpose VM or an

6:27

ephemeral instance just for this, you

6:31

start restricting its network

6:34

connectivity and you do all the

6:36

things that you should do as a security

6:38

engineer. The next thing you know is

6:40

like, okay, it's not "what if it

6:44

gets popped," it's "when it gets popped." I develop

6:48

on the basis that it's a when. So the

6:51

blast radius is, at worst, that GCP

6:54

VM, cuz this is not a public

6:56

IP.

6:58

>> Yep.

6:58

>> Um there is no really absolutely

7:01

terrible thing. Okay.

7:02

>> We've restricted it. The only

7:04

permissions on this box are my Claude API

7:06

keys and deploy keys to push to the two

7:09

GitHub repos.

7:10

>> Correct. Proper security engineering.

7:13

>> It's not if it

7:15

gets popped. It's when it gets popped.

7:17

And what is the blast radius? So

7:20

>> this is however not an invitation to go

7:22

pop my GCP VMs. I will not be sharing

7:25

the IP addresses.

7:27

>> If you want to uh share API keys with

7:29

me, Dex, I always need some.

7:32

>> Uh you know what, man? I think you have

7:34

I heard you got a lot of tokens popping

7:36

around over there. If anything, you

7:38

should be bringing me some tokens.

7:40

>> Facts.

7:41

>> Um all right, let's look at this prompt.

7:42

Yeah.

7:44

>> Yeah, let's look at the concept of the

7:45

prompt. Um, look, look at the prompt.

7:48

>> So, here's what I'm using. This is my

7:50

take on the original Ralph prompt.

7:52

Sorry, let me — I have tmux

7:54

inside tmux here, so it's getting a

7:56

little weird.

7:56

>> That's fine.

7:57

>> Okay, let's look at zero A, right?

8:00

>> Yep.

8:00

>> So, this is you got to think like a like

8:03

a like a C or C++ engineer and you got

8:07

to think of context windows as arrays

8:10

cuz they are they're literally arrays.

8:12

>> Context windows are arrays.

8:15

When you chat

8:19

with the LLM, you allocate to the array.

8:23

When it executes bash or

8:25

another tool, it auto-allocates the array.

8:29

>> Yep.

8:30

>> So getting into something like context

8:35

engineering I heard there's a guy who

8:37

knows a thing or two about that

8:39

definition. Hey, I just talked to people

8:41

like you who, uh, knew things, and

8:44

put a name on a thing that hundreds of

8:46

people were doing.

8:47

>> But yeah, context engineering is all

8:49

about designing this array.

8:51

>> It's all about the array. So, and

8:54

thinking about how LLMs are essentially

8:56

a sliding window over the array and the

9:00

less that that window needs to slide,

9:03

the better. There is no memory server

9:05

side.

9:07

It's it's literally that an array. The

9:10

array is the memory. So you want to

9:12

allocate less.

9:14

So let's go back to the prompt.

9:16

>> Yep.

9:17

>> Okay. We're deliberately

9:19

allocating. This is the key. Deliberately

9:22

malloc'ing,

9:24

uh, context about your application.

9:27

>> We're going to say we're just going to

9:28

have 5,000ish tokens that are dedicated

9:30

for like here's what we're building and

9:32

we want that in every time.

9:34

>> Yeah. This could be, uh, an index.md

9:38

or README.md, which is a whole bunch of

9:41

hyperlinks out to different specs enough

9:43

to like tease and tickle the latent

9:46

space that like there are files there.

9:49

>> So you can either go for an index or if

9:53

Ralph starts being dumb you can go for

9:55

like deliberate injection.

9:58

>> So you can @ specs, right, and the

10:01

default behavior is just to list

10:03

that out. Correct. You mention a file

10:06

name. The tool registration for

10:08

read-file is going to go: is there a file

10:10

at that path? I'm going to read it.

10:12

>> Mhm.

10:13

>> So you can give a directory path. You

10:15

can give it like a direct file. So that

10:18

is the key. So if we go back to your

10:20

context window diagram.

10:22

>> Yeah.

10:23

>> Right. Think about this. So it's kind of

10:26

like you're allocating the array

10:28

deliberately. So the first couple first

10:31

first couple allocations uh is about the

10:33

actual application.

10:36

>> Mhm. And every loop

10:39

that allocation is always there. LLM

10:42

engineering is kind of tarot

10:46

card reading. It's not really a

10:48

science. But to me on vibes it felt like

10:51

it was a little bit more deterministic.

10:53

if I allocated the first couple things

10:58

deterministically.

10:59

>> Yeah.

11:00

>> Um now once you've got that, we go on to

11:04

essentially our next level line in the

11:07

spec. So like the first one is like

11:08

deliberate malloc'ing on every loop's new

11:11

array.

11:12

>> Yep.

11:14

>> Okay. So now we got like a to-do list

11:17

type thing, like an implementation plan.

11:20

>> Yeah.

11:21

Now, something that's kind of missing in

11:24

there is like pick one.

11:26

>> Oh, it says implement the single highest

11:28

priority feature. Oh,

11:30

>> yeah. Okay. Yeah, I see that. Sorry. Um,

11:32

>> I'll say

11:33

>> that's the idea.
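Pulling those pieces together, a hypothetical PROMPT.md in this spirit might read as follows — the wording is an illustrative guess at the shape, not the actual file from the stream:

```shell
# Write an illustrative PROMPT.md in the Ralph style: deliberate context
# allocation up front, then a single highest-priority item per loop.
# Wording is an assumption, not the actual prompt from the stream.
cat > PROMPT.md <<'EOF'
Study @specs/ to understand what we are building.
Study IMPLEMENTATION_PLAN.md; if it does not exist, create it from the specs.
Implement the single highest priority item from the plan.
Run the tests and fix anything you broke.
Update IMPLEMENTATION_PLAN.md, then commit your work.
EOF
```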

11:34

>> Yeah. So, a lot of the people they do

11:37

these multi-stage things. Let's go back

11:38

to the context window diagram. They do

11:40

these multi-stage things. Well, what you

11:45

want to do is for each one item,

11:48

reset the goal. Re-malloc the

11:51

objective.

11:52

>> Yes.

11:52

>> Cuz when you re-malloc the objective,

11:55

>> imagine you have your somewhere down

11:56

here is the line of like where like

11:58

performance degrades noticeably.

12:01

>> Correct. There is a dumb zone. You

12:03

should stay out of it.

12:04

>> If the dumb zone is down here and it's

12:07

very dependent on where this line is

12:08

depending on what you're doing and if

12:09

how your trajectory is and how much

12:11

you're reading and all of this. Um, but

12:13

if you ask it to do too much in the

12:15

working context, then some of your

12:17

results are going to be dumb. And

12:18

especially the important part where it's

12:19

like, okay, I've made all the changes.

12:21

Let me run the tests. And then the tests

12:23

are failing and it's like scrambling and

12:24

flailing to try to get everything

12:25

working. You kind of want to have this

12:28

and then like a little bit of headroom

12:30

also for like finalizing like doing the

12:33

git commits and the pushes and making

12:34

sure that all works. You want to have

12:36

that all happening in the smart zone.

12:38

>> This is the human and the loop. a human

12:41

on the loop not in the loop. So I I I

12:44

you we we we set this up we architect

12:48

this loop in in this way and you can

12:51

either go complete AFK or you can be on

12:56

the loop. The what you just draw drew

12:58

there is on the loop. I when I'm doing

13:01

this I always leave myself a little bit

13:04

of space for juicing like I like when

13:06

I'm reviewing the work. This is when

13:08

software, instead of Lego bricks, is now

13:10

clay. So this is where I'll do my, like,

13:13

final wrap-up steering or I just throw

13:15

it away and then I get reset hard and I

13:19

adjust my technique and let it rip

13:21

again.

13:22

>> So you're saying you might even in the

13:23

early days you might just run one

13:25

iteration of this loop and then actually

13:27

sit here and check it like have it

13:29

basically wait for input between looping

13:31

again. Right.

13:33

So, like there's a reason I do the live

13:36

the live streams. It's literally I use

13:38

it as a cheeky portable monitor on my

13:40

phone. I'm doing like housework and

13:43

stuff and it's like as like a portable

13:45

monitor and I check in, I watch it. You

13:48

start to notice patterns like and you

13:50

start to anthropomorphize

13:52

certain tendencies. Opus 4.5 doesn't have

13:56

high anxiety as the context window

13:59

fills, but it does seem to be forgetful

14:03

of some objectives. Um but

14:07

>> so I want to I want to quickly um

14:09

because I know you have a limited amount

14:11

of time. I want to quickly go through

14:13

the architecture of the anthropic plugin

14:15

and how it's different and I really want

14:17

to get these things kicked off because I

14:18

want people to start seeing how they how

14:20

they actually work.

14:21

Um, and so in the Ralph Wiggum plugin,

14:24

rather than like do the very first

14:26

thing.

14:28

Um, so it's it we're going to use the

14:30

exact same prompt for both of them

14:32

because we want to, like, change

14:33

as little as possible. But what's going

14:36

to happen in the

14:38

uh anthropic plugin is basically

14:41

whenever it — I forget where the

14:43

performance line is, but whenever it

14:45

gets to the end and you have your like

14:47

final user message / assistant

14:49

message, it uses a promise. So the

14:52

user's got to do a promise and it it

14:55

relies on the LLM

14:56

>> to promise that it's completed.

14:59

>> Yeah. So you have your final message and

15:02

then basically unless unless this

15:05

contains the promise, sorry, let's just

15:07

drop this in. If it's no, then we

15:10

basically inject like the the hook

15:12

injects a new user message that is just

15:15

like PROMPT.md again, which is then

15:18

going to cause this stuff to be

15:19

reallocated and like happen again.
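Mechanically, the same promise-gated idea can be sketched from outside the harness. The actual plugin implements this as a hook inside Claude Code, and the promise string below is an invented placeholder, not the plugin's real token:

```shell
# Promise-gated loop: keep re-injecting the prompt until the final
# output contains the completion promise. The token is an illustrative
# placeholder, not the plugin's actual string.
PROMISE_TOKEN="COMPLETION PROMISE: everything in the plan is done"

promise_loop() {
  local prompt_file="$1" out
  while :; do
    out="$(cat "$prompt_file" | claude -p --dangerously-skip-permissions)"
    printf '%s\n' "$out"
    case "$out" in
      *"$PROMISE_TOKEN"*) break ;;  # the model promised it's finished
    esac
    # No promise yet: go around again, re-allocating the context
    # deterministically from the prompt file.
  done
}
```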

15:22

>> And then you get things like compaction

15:24

and all this stuff. I want compaction is

15:27

the devil. Dex.

15:28

>> Yeah. At some point you get compacted

15:30

here and then instead of having all of

15:32

the context, you end up with okay, you

15:34

were running some tools and then you get

15:37

compacted

15:38

>> and then you have the model summary.

15:41

>> Yeah.

15:41

>> Of like what the model thinks is

15:43

important and then you keep going.

15:46

until you get your final message and

15:48

then this process repeats. And so in

15:50

these points it has a very different

15:51

behavior.

15:52

>> It's a it turns this is why I say

15:55

deterministic

15:56

because it's essentially

15:59

uh one is one model has zero auto

16:02

compaction ever. The other one is using

16:05

auto compaction. So the model auto

16:06

compaction is lossy. It could remove the

16:10

specs.

16:11

um it can remove the task and goal and

16:14

objective and with this with the Ralph

16:16

loop the idea is you set one goal one

16:19

objective in that context window and so

16:21

it knows when it's done if you keep

16:24

extending the context window forever the

16:27

>> you you lose your deterministic

16:29

allocation

16:30

>> you lose your deterministic allocation

16:32

and more more so let's assume the

16:34

garbage collection hasn't run it hasn't

16:37

been compacted

16:38

that window has to slide over two or

16:41

three goals

16:43

and some of those goals have already

16:44

been been actually completed.

16:47

>> Mhm.

16:48

>> One context window, one activity, one

16:51

goal and that goal can be very fine

16:53

grain like do a refactor, add structured

16:56

logging, what else have you like, and

16:57

you can have multiple of these running.

16:59

You can have multiple Ralph loops

17:00

running.

17:02

>> Mhm. Um, okay. So, I'm on my Ralph

17:05

plug-in one. I'm going to run Claude and

17:07

I'm going to kick off this loop for the

17:09

um for the other one. So, we're going to

17:11

do Ralph Wiggum, Ralph loop, read, and

17:14

then our what is it? Uh, prom. What is

17:16

the name of the flag?

17:18

>> Sorry, I'm going to something.

17:20

>> Yeah,

17:21

>> completion promise.

17:23

>> Completion promise. Yeah.

17:24

>> And this is going to turn on the hook

17:25

and it's going to start working. And

17:27

over here, I'm going to kick off our

17:30

loop.sh. Oh, I think I might have — uh, I

17:32

think I might need to grab the prompt.

17:35

>> Yeah. All right. So another thing to

17:36

think about this is this is essentially

17:39

the Ralph plugin is

17:41

>> um, running within Claude Code, and the

17:46

non-plugin one — like, the keep-it-really-simple

17:49

one — is the idea

17:52

of an orchestrator running Claude Code.

17:55

Or running a harness.

17:59

>> Yeah. You have the outer harness and

18:00

then the inner harness. Right.

18:01

>> This is the idea of between the inner

18:03

harness and the outer harness. So

18:05

remember I said Opus is forgetful — the

18:07

current Opus is forgetful. For example,

18:10

when I'm doing Loom, building Loom, I

18:13

see that it always forgets translations.

18:16

>> So, cool, you got this Ralph loop to do

18:19

what it's meant to do you got a

18:21

supervisor on top which

18:24

asks if it did translations, and if the

18:26

translations don't work you run another

18:27

Ralph loop to nudge it: hey, did you do

18:30

translations? So the idea behind Ralph is

18:33

an outer layer orchestrator,

18:36

not an inner loop.

18:38

>> So it doesn't it doesn't just have to be

18:40

loop and do it forever. Your loop could

18:42

actually have like you know run

18:45

the main prompt and then you could have

18:48

another one which is like

18:51

classify if X was done.

18:54

>> Correct. We're entering

18:55

>> and then you can jump out to other

18:56

prompts like add the test and fix the

18:58

tests or like do the translations or

19:01

whatever it is. Yeah, we're engineering

19:04

into places that don't even have names

19:06

for these concepts yet, Dex. [laughter]
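The shape being described might be sketched like this; the prompt file names and the "MISSING" verdict convention are invented for illustration, not anyone's actual setup:

```shell
# Outer-orchestrator sketch: one main pass, then a cheap classification
# pass in a fresh context window that decides whether to jump out to a
# follow-up prompt. File names and the MISSING convention are
# illustrative assumptions.

run_prompt() {
  cat "$1" | claude -p --dangerously-skip-permissions
}

orchestrate_once() {
  run_prompt PROMPT.md                           # the main Ralph pass
  verdict="$(run_prompt check-translations.md)"  # classify: was X done?
  case "$verdict" in
    *MISSING*) run_prompt do-translations.md ;;  # nudge: go do the thing
  esac
}
```

One context window, one activity, one goal: the classification pass never shares a window with the work it is judging.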

19:09

>> Yeah, you can front Anthropic on this

19:12

one.

19:13

>> Yeah. What do you want to I saw I was

19:15

thinking uh there was some conversation

19:17

on Twitter which was like, okay, if

19:18

Claude Code is the harness, what is the

19:20

name you give for engineering the slash

19:24

commands and plugins and Claude Code and

19:26

prompts and maybe the bash loop that you

19:27

wrap around it? Because like you could

19:29

say that the Ralph loop script is

19:32

becomes part of the harness and you've

19:33

created a new harness on the building

19:35

block that is Claude Code or Amp or opencode

19:37

or whatever. But someone else

19:39

posted, like, well, if Claude Code is

19:42

the harness — if the coding model

19:44

agent CLI tool is the harness then the

19:46

things you build to control it are the

19:48

reins. And so now I'm like, what about

19:51

what is reins engineering? But I I hope

19:53

that one doesn't catch on because it

19:54

sounds really dumb.

19:55

>> No, no, I have some ideas. It's spicy. It's

20:00

called software engineering.

20:02

>> It's called software engineering.

20:04

>> So,

20:04

>> I like it.

20:05

>> We need the new term because

20:08

um there are so many people who just

20:10

don't get it right now and in denialism

20:13

that this is good. They're in their cope

20:14

land and people want a way to

20:16

differentiate

20:18

>> they want to differentiate

20:21

their skills. Like, we had, like,

20:23

sysadmins and DevOps and SREs. They

20:26

created these new titles to

20:27

differentiate, and eventually those

20:29

titles got muddied.

20:31

Um

20:33

cuz people go oh I'm I'm DevOps now

20:35

because I know Kubernetes. Oh, I am I'm

20:38

an AI engineer now because I know like

20:41

like how to malloc the array, um, or how

20:44

the inferencing loop works. No, no, no.

20:47

These are just fundamental new skills

20:49

and if you don't have what we're talking

20:51

about in a year, I think it's going to

20:53

be really rough in the employment market

20:56

for high performance companies. Like I'm

20:58

already seeing things at, like, FAANG-ish

21:01

companies. Won't go into specifics

21:03

because they're live, but like like if

21:06

you're a software engineering manager

21:07

right now, um axes are coming out like

21:11

they want your team, which you have no

21:14

control over really

21:17

>> because they're humans — to get good at AI.

21:21

>> Um so it's kind of got to be kind of

21:23

brutal. It's kind of kind of brutal.

21:25

Like everyone wants people to get good

21:27

at AI, but really comes down to if

21:29

someone's curious or not. Really, did

21:31

you make the the right hire originally?

21:34

>> Yep.

21:36

>> Um, so I think it's software

21:38

engineering, Dex.

21:39

>> I think it's just literally software

21:41

engineering, but what it means to be

21:42

software engineer changes.

21:45

>> I did realize that um I think we can get

21:48

push. I just want to make sure that

21:49

we're allowed to commit because I know

21:51

you have to do some

21:52

>> Yeah, gh auth login. So I have

21:55

deploy keys on both these boxes. Um

21:59

[clears throat] let's see if we can I'm

22:01

like, tmux within tmux is great. I'm

22:03

really lucky I changed my default

22:05

tmux prefix key. So now I have to

22:07

remember what the default one is on

22:09

the new boxes. Um

22:12

>> while we're on the tangent folks.

22:14

>> Yeah.

22:15

>> Um you should be thinking about loop

22:16

backs. Um, any way that the LLM

22:21

can automatically scrape uh context. So

22:25

the LLMs know how to drive tmux. So

22:28

instead of doing some background Claude

22:30

Code agent, etc., just tell it to spawn

22:32

a tmux session, split the pane, and

22:35

scrape the pane. It does it really well.

22:37

If you got like a web server log and

22:39

then a backend API log, create it in two

22:42

like in two splits

22:44

um and just tell Claude or the model to

22:48

go grab the pane, and then your automatic

22:50

loop back for troubleshooting. And this

22:52

you don't need to be in the loop. You're

22:54

on the loop and you're programming the

22:56

loop and this is all Ralph.
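A sketch of that tmux loopback — the session name, log files, and pane layout here are illustrative assumptions:

```shell
# tmux loopback sketch: run each log stream in its own pane of a
# detached session, then scrape panes on demand with capture-pane.
# Session name and log files are illustrative assumptions.

start_log_panes() {
  # One detached session, two panes: web server log and backend API log.
  tmux new-session  -d -s ralph-logs "tail -f $1"
  tmux split-window -t ralph-logs    "tail -f $2"
}

scrape_pane() {
  # -p prints the pane contents to stdout -- this is the loopback the
  # model can call instead of a background-agent dance.
  tmux capture-pane -t "ralph-logs:0.$1" -p
}

# Usage (assumes web.log / api.log exist):
# start_log_panes web.log api.log
# scrape_pane 0 | tail -n 50   # web server pane
# scrape_pane 1 | tail -n 50   # backend API pane
```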

22:58

>> Um yes. Uh we actually did on last week

23:01

uh a couple weeks ago on AI that works

23:02

we did a session on git

23:04

worktrees, and we figured out that, uh, we did

23:08

some demos of like having one Ralph

23:10

running over here — not

23:12

Ralph, but having one Claude running over

23:13

here and using tmux to, like, scrape the

23:16

panes of the other ones, and then, like,

23:18

merge in the results from the worktrees

23:20

and resolve the conflicts.

23:23

>> Yeah. Well, whilst that kicks off, we're

23:25

also on another tangent. This is a

23:27

concept that you coined. Damn it.

23:30

Because I [snorts] just didn't write it.

23:31

You— you beat me.

23:34

>> That's why I invite you on my streams. I

23:36

want you to come up with fun words and I

23:38

I'll just be there while you do it,

23:39

which is just recording what happened

23:42

anyway.

23:43

>> Most test runners are trash. They output

23:45

too many tokens. You only want to output

23:47

the failing test case.

23:49

>> I wrote a blog post on this. Did you see

23:50

this?

23:50

>> You did. And it's golden, Dex.

23:53

It's golden. Most test runners are

23:56

trash.

23:57

>> This is actually based on a bunch of

23:59

work that I think the first person to

24:00

write this stuff in our codebase was

24:03

when um Allison was hacking. Like this

24:05

is a version of a script that like

24:07

Allison and Claude built a while ago

24:09

because it was just like why would you

24:11

want to output like a million tokens of

24:14

like go spew like JSON test output if

24:17

the test is passing? What happens is

24:21

normally the test output's so

24:23

large that what it does is go

24:25

tail -100, but if the error is at the top,

24:27

the tail misses it.
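One way to sketch the fix: a wrapper that emits a single pass line on success, and the whole log (not a blind head/tail slice) on failure. This is a generic illustration, not the actual script from the blog post:

```shell
# Token-frugal test wrapper: on success, one line; on failure, the full
# log -- never a blind `tail -100` that can miss an error at the top.
run_tests_quiet() {
  local log status
  log="$(mktemp)"
  "$@" > "$log" 2>&1
  status=$?
  if [ "$status" -eq 0 ]; then
    echo "PASS: $*"           # a passing suite allocates ~1 line of context
  else
    echo "FAIL (exit $status): $*"
    cat "$log"                # the whole output, wherever the error is
  fi
  rm -f "$log"
  return "$status"
}

# run_tests_quiet go test ./...   # example invocation
```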

24:30

Yeah. No, this is the thing that

24:31

happened all the time where it's uh

24:33

yeah, it's just head -n 50

24:35

and then yeah, if your tests take 30

24:36

seconds, then you're fine. But most

24:38

people that we work with are like teams

24:40

with 50, hundred, thousands of engineers

24:42

and their test suites, if you run them

24:43

wrong, they can take hours. And so like

24:46

there's some work to be done to like

24:49

if it runs the head and then something

24:51

fails but it doesn't see it and then it

24:52

has to run it again, it's like — that's

24:54

not just wasted tokens. It is wasted tokens

24:57

and it is wasted time. But like if in

24:59

most cases most people aren't doing this

25:00

super hands-off Ralph Wiggum thing. And

25:02

so what just happened is I finished my

25:04

code and I the human I'm sitting there

25:06

waiting for it to run this 5minute test

25:07

suite again.

25:08

>> That's the key. And I'm like why would I

25:10

ever use this tool?

25:12

>> That's a that's the key. Like like I'm

25:15

not in the loop bashing the array

25:18

and manually allocating and, like,

25:20

trying to steer it like most people use

25:22

cursor. Instead, I I try to oneshot it

25:26

at the top and then I watch it and then

25:29

if you watch it enough, you notice

25:30

stupid patterns and then you make

25:32

discoveries like the testr runner thing

25:34

that you just showed and you go, "Ooh,

25:35

that's a trick that works."

25:37

>> I've also I've also

25:38

>> discoveries are found by treating Claude

25:40

code as a fireplace.

25:42

>> As a fireplace that you just sit there,

25:44

watch it.

25:45

>> You just sit there and watch it. You

25:46

like you're out camping. You're sitting

25:48

sitting there watching the fire going. I

25:50

actually I had a I had a party on uh

25:52

Tuesday, a little like pre-New Year's

25:54

event, and I wanted to set this up, but

25:56

I just didn't have time. But I really

25:58

wanted to have one of the attractions at

26:01

the party is a uh laptop hooked up to

26:04

the TV and one there's a terminal in a

26:07

web app and you can see Ralph working

26:09

and then anyone at the party can go up

26:10

and edit the specs and like control the

26:13

trajectory of the loop. Uh so next time

26:16

you come to one of my parties, we'll

26:17

have we'll have that.

26:20

Mate, I've still got a couple

26:21

pre-planned trips, so it's just a matter

26:23

of when I come to SF.

26:25

>> Okay. When you come to SF, we're doing

26:26

we're doing a Cursed Lang hackathon.

26:29

We could probably also do a Ralph plus

26:31

Cursed Lang hackathon. I think that

26:33

would be really really fun. Uh, and

26:36

yeah, just like how do you make this

26:37

it's it's deeply technical and you can

26:39

change the world. you could build

26:40

incredibly useful things that actually

26:41

make many people's lives better, but

26:44

also just like the perspective of like

26:46

some of this is just art and like how do

26:49

you how do you bridge the gap between

26:50

art and and and utility and yeah, it's a

26:53

fun time.

26:54

>> Yeah, it's a it's a crazy time. So, I'm

26:56

down for that. Um,

26:59

let me get Loom done because I think

27:01

Loom is the encapsulation of some of

27:04

these ideas into

27:06

uh, essentially what is a remote

27:09

ephemeral sandbox coding harness.

27:13

>> so the ability for a self-hosted

27:16

platform to actually create its own uh

27:20

remote agents — weavers — and then it's just

27:24

like your standard uh like agentic

27:27

harness which is 300 lines of code. If

27:29

people think Claude Code's amazing, it's

27:31

not. It's literally the model that does

27:34

all the work. Go look at my "how to build

27:37

an agent harness." All

27:41

right. So you got this harness, you got

27:44

this remote like provisioner on

27:46

infrastructure.

27:48

>> The next step there is really like how

27:50

do you — like, how could you codify Ralph

27:56

and, like, all these little nudges

27:57

and all these pokes and what happens if

28:00

it's the source control? It's also

28:02

source control. Like I I've been wanting

28:04

to get off GitHub for a long time and

28:06

evolve SCM.

28:08

>> Did you build your own

28:09

>> now?

28:10

>> Yeah. the last 3 days like AFK I now

28:14

have a remote provisioner I now have

28:17

full, like, RBAC, uh, device login flows,

28:21

OAuth login flows, Tailwind UI

28:25

>> uh it's got full SCM hosting full SCM

28:28

mirroring we've got a harness so I've

28:30

got this CLI now that can like spawn

28:33

remote infrastructure

28:35

kick off an agent and then when it says

28:37

that it thinks it's done then then I can

28:39

set up, like, almost like a chain

28:42

reaction of agent pokes agent. So this

28:46

is like do did you do the translation do

28:48

all these things and

28:50

>> if you control the entire stack

28:53

>> from source code you can modify and

28:55

change that stack to your needs includes

28:59

like source control as like a memory for

29:01

agents.

29:02

>> I love it. Um, I've realized one other

29:05

thing here, which is that I did not put

29:08

a push command in my prompt, and so the

29:10

agents didn't push their stuff.

29:12

>> Yeah. So, that's that's another thing we

29:15

haven't covered off yet is the idea of

29:19

if you have a a shell script on the

29:22

outside or an orchestrator over the

29:25

harness.

29:26

>> That's true. You could just do the push

29:28

in the orchestrator.

29:29

>> Correct. Which makes it deterministic.

29:31

But you can also add deterministic push,

29:33

deterministic commit. You could add uh

29:36

deterministic

29:38

like evaluation, whether it meets your

29:41

criteria. Does it do a git reset hard?

29:43

Does it run Ralph further on what you've

29:46

already got? Does it bake it more? Or

29:48

does it just reset and try again?
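The orchestrator idea above can be sketched: the nondeterministic agent runs inside the loop, while commit, push, and the pass/fail evaluation are deterministic steps wrapped around it. A hedged sketch, not anyone's real tooling; `run_agent`, `run`, and `evaluate` are injected stand-ins (swap in `subprocess.run` and your test suite for real use):

```python
# Deterministic orchestrator over a nondeterministic agent: commit every
# iteration, push only when the work meets your criteria, otherwise
# reset hard and try again (or keep baking).

def orchestrate(run_agent, run, evaluate, max_iters=10):
    for i in range(max_iters):
        run_agent()                          # one pass of the inner agent loop
        run("git add -A && git commit -m 'ralph iteration'")
        if evaluate():                       # deterministic gate: tests, lint, spec checks
            run("git push")                  # only push work that clears the bar
            return i + 1
        run("git reset --hard HEAD~1")       # or skip the reset to bake further
    return None

# Toy demo: the "agent" clears the bar on its third attempt.
attempts = []
iters = orchestrate(
    run_agent=lambda: attempts.append(1),
    run=lambda cmd: None,                    # stand-in command runner
    evaluate=lambda: len(attempts) >= 3,
)
```

The point of the sketch: the push never depends on the model remembering to push.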

29:50

>> Yeah.

29:51

>> But if you run into the harness, you're

29:54

just going to get you're just going to

29:55

get like steak that's either blue or

29:58

it's charred. Okay, so here's what's

30:00

interesting is we are back to

30:02

non-determinism. So you see this one

30:04

over here started running the thing and

30:06

it actually emitted the completion promise because

30:08

it read the prompt and it said okay

30:10

everything is done with the first thing

30:14

like it answer it finished the prompt

30:15

and it did the first thing but it's now

30:17

not looping

30:19

>> and so the kind of

30:21

>> like if I tell you not to think about an

30:22

elephant what are you thinking about Dex

30:25

>> elephants

30:27

>> exactly like this is another thing about

30:30

prompt engineering like people go it's

30:32

important that you do not do XYZ right

30:36

next in the context window. I'm going to

30:38

think about XYZ. And if it keeps the

30:40

"important" but loses the "not"...

30:43

>> the less that's in that context window,

30:46

the better your outcomes.

30:48

That includes trying to treat it like a

30:51

little kid.

30:53

>> Mhm. I want to actually edit this

30:55

because I haven't worked with this

30:57

plugin much. So, it's like a little bit

30:59

of this is my uh Huh.

31:03

Uh,

31:04

a little bit of this is my um like just

31:07

learning the the tricks of these of this

31:10

plugin. But it looks like the Ralph loop

31:11

is finished. So, I'm going to make

31:14

another one.

31:16

Let's see.

31:18

Or what is it? Completion promise. I'm

31:20

just going to try to run it without a

31:21

completion promise and see if this will

31:23

just run forever.
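The completion-promise mechanic being tested here can be sketched: a stop hook only ends the loop when the agent's final output contains an exact promised phrase, and with no promise set it loops until you kill it. This is an illustrative sketch, not the plugin's real interface; the function name and promise string are assumptions:

```python
# Sketch of a completion-promise stop hook: the loop halts only when the
# agent's last message contains the exact promised phrase.

def should_stop(final_output, completion_promise):
    if completion_promise is None:
        return False  # no promise set: loop forever (until manually stopped)
    return completion_promise in final_output

assert should_stop("All milestones complete. DONE-DONE-DONE", "DONE-DONE-DONE")
assert not should_stop("still working on M3", "DONE-DONE-DONE")
assert not should_stop("anything at all", None)
```

The exact-phrase check is what makes "project complete"-style self-congratulation in the context window insufficient to end the loop.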

31:25

>> Yeah.

31:26

I hope uh people stumble upon this video

31:29

and they um they're able to disconnect

31:31

the two between like the official

31:35

product implementation and go, "Oh, wow.

31:37

It's by Anthropic."

31:39

Verse

31:40

uh learning the fundamentals

31:43

of like

31:45

>> why

31:46

>> why it works and how does it work and

31:49

like actually watching it like I have

31:51

AFKed it for 3 months but I wasn't

31:53

paying for tokens. I saw it rewrite the

31:55

lexer and parser, like,

31:59

so many times and I thought the model

32:02

was the issue. It wasn't the model.

32:05

>> Hey Dex, do you know someone who uh said

32:07

that you should spend some time reading

32:09

the specs and like more time on the spec

32:12

because one bad spec equals

32:15

uh like one bad line of code is one bad

32:17

line of code. One bad spec is like 10

32:21

new product features, 10,000 lines of

32:23

like crap. and junk because in the case

32:26

of Cursed

32:27

>> Yeah. In the case of Cursed, my spec was

32:31

wrong. So it was tearing down the lexer

32:33

and the parser, like, because I declared the

32:39

same keyword for 'and' and 'or' to be the

32:42

same keyword

32:44

>> because you had a mistake in the

32:46

list of you couldn't come up

32:48

>> and I was saying that the model was bad

32:50

and loop it was literally garbage in

32:52

garbage out like you got

32:54

>> because you didn't know enough you

32:56

didn't know enough Gen Z slang to do a

32:58

good job. Yeah, I never... I ran out of

33:01

Gen Z words. [laughter]

33:03

>> I ran out of Gen Z.

33:05

>> I'm just going to show this real quick

33:06

for people who are not familiar, but

33:08

this is a programming language that was

33:10

built with Ralph uh three times over in

33:12

three different it was C and then Rust

33:14

and then Zig, right?

33:16

>> Yeah. Playing with the notion of back

33:17

pressure and what like what's in the

33:19

training data sets and all that stuff.

33:23

>> Yeah, this is cool. Um anyways, I'm I'm

33:26

going to leave this running for a while.

33:27

I'm probably not going to be sitting

33:28

here, but I hope if you're watching, uh,

33:30

you had fun and you learned some stuff.

33:32

And Geoff, I know you got to head into

33:33

work in a minute, but

33:34

>> I got to head into work.

33:35

>> Any any final thoughts? Any last words?

33:38

I mean, you kind of said your advice,

33:39

which is like don't just jump on the

33:41

plugin and the name and the cartoon

33:43

character, but like actually it's it's

33:45

kind of as much of anything as a

33:46

teaching tool and like go learn why it

33:48

works and why it was designed the way it

33:50

was.

33:50

>> Yeah. Think like a C or C++

33:53

engineer. Think that you got this array.

33:55

There's no memory on the server side.

33:58

It's a sliding window over the

34:00

array. You want to set only one goal and

34:03

objective in that array. And um you want

34:07

to leave some like uh headroom if you're

34:12

>> if you're not complete afking, you want

34:14

to leave some headroom because sometimes

34:15

you got this beautiful context window

34:17

that you just fall in love with

34:19

>> and then you're like, "Oh, can I squeeze

34:20

some more out? Maybe it's not a new

34:22

loop. Maybe, like, you get just

34:24

you get these golden windows. Um

34:27

>> yeah where it's like the trajectory is

34:28

perfect and it's running the test

34:30

properly and you get into the right

34:32

>> you want to save it. You want to save it

34:34

like

34:35

>> that's something I think that we

34:37

an area of research uh in agentic harnesses

34:40

is like the ability to say this is the

34:43

perfect context when I want to go back

34:45

to it.

34:46

>> Yeah. Deliberate malloc-ing.

34:48

>> Yeah. Deliberate malloc-ing. Um, and less

34:52

is more.

34:54

Holy crap. Um, take [snorts] your Claude

34:56

Code rules and tokenize them.

34:59

[laughter]

35:00

Go [snorts] like grab tiktoken

35:02

off GitHub. Run it through the

35:04

tokenizer, or the OpenAI tokenizer.
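Tokenizing your rules file, as suggested above, takes a few lines. tiktoken is OpenAI's open-source tokenizer (`pip install tiktoken`; Claude's tokenizer differs, so treat any count as a ballpark); the chars-divided-by-four fallback is a crude rule of thumb for English prose, labeled as such:

```python
# Rough token count for a rules/AGENTS.md file. Exact counts need the
# model's own tokenizer; tiktoken gives GPT-flavored counts, and the
# len/4 heuristic is only an approximation when tiktoken is unavailable.

def count_tokens(text):
    try:
        import tiktoken  # third-party; may also need a cached encoding file
        enc = tiktoken.get_encoding("cl100k_base")
        return len(enc.encode(text))
    except Exception:
        return len(text) // 4  # ~4 chars per token, English prose rule of thumb

rules = "Always run the tests before committing. Keep AGENTS.md under 60 lines."
print(count_tokens(rules), "tokens of rules")
```

Run it over your actual rules file and you see immediately how much of the context window your boilerplate eats before the agent does any work.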

35:07

>> Read the harness

35:11

guides.

35:12

Um, read the harness guides. Like

35:15

anthropic says it's important to shout

35:17

at the LLM. GPT-5 says if you shout at

35:20

it, it becomes timid.

35:22

>> You [laughter] detune the model. Yeah.

35:24

It stops being good. But yeah, you can

35:25

look at the look at the tokenizer. I

35:27

mean, this is easy because it's this,

35:28

but like yeah, we talk about this all

35:29

the time as if like you should go look

35:31

at how the model sees what you say

35:33

because when you type JSON into here,

35:35

you see like there are so many extra

35:37

characters. Like, this is way denser than

35:39

just feeding the model words. And so you

35:41

should turn the JSON deterministically,

35:43

turn it into words or XML or something

35:44

more token efficient.
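One way to do the deterministic JSON-to-words conversion described above: flatten the structure into `path: value` lines, dropping the braces, quotes, and commas the model doesn't need. A minimal sketch (the output shape is one arbitrary choice; XML or key-value pairs work too):

```python
import json

# Deterministically flatten JSON into plain "dotted.path: value" lines
# before it enters the context window; punctuation is tokens too.

def json_to_lines(obj, prefix=""):
    if isinstance(obj, dict):
        for k, v in obj.items():
            yield from json_to_lines(v, f"{prefix}{k}.")
    elif isinstance(obj, list):
        for i, v in enumerate(obj):
            yield from json_to_lines(v, f"{prefix}{i}.")
    else:
        yield f"{prefix[:-1]}: {obj}"

data = json.loads('{"user": {"name": "dex", "repos": ["loom", "ralph"]}}')
flat = "\n".join(json_to_lines(data))
# user.name: dex
# user.repos.0: loom
# user.repos.1: ralph
```

Because the transform is deterministic, you can apply it in the harness or orchestrator rather than hoping the model copes with raw JSON.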

35:46

Yeah, I'll leave you with a quip.

35:49

>> Yeah, let's go.

35:51

>> So,

35:52

you could only fit about a

35:55

Oh, actually, maybe you will. Here's the

35:57

quip.

35:59

I remember someone coming to me and

36:02

wanting to do an analysis on some data

36:04

using our labs.

36:06

>> Mhm. And I go, "How big is the data

36:07

set?" And that that person went, "Oh,

36:09

it's small. It's only a terabyte."

36:13

So, I had to

36:15

pull up the chair and go, "Oh, this is

36:18

only a Commodore 64

36:21

worth of memory." So, if you want to

36:24

know how big like 200k of tokens is,

36:28

>> yeah,

36:29

>> it's tiny. You've got, like,

36:32

the model gets about a 16k token

36:35

overhead.

36:36

>> The harness gets about a 16k overhead.

36:39

You only got about 176k usable, not the

36:42

full 200 because there's overheads,

36:45

>> right? There's the there's the system

36:46

messages that come in, right?

36:48

>> Yeah. Yeah. Yeah. So, for that person

36:53

um, I downloaded the Star Wars Episode 1 movie

36:56

script.

36:57

>> Mhm.

36:58

>> And I tokenized it.

37:00

>> Okay.

37:01

>> And that that worked out to be about 60K

37:03

of tokens or about 136 KB on disk.

37:08

You can fit a max of

37:11

one or two movie scripts into the context

37:13

window.
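The back-of-envelope math behind the "movies per context window" unit, using the ballpark figures quoted in the conversation (overheads are approximate and vary by model and harness):

```python
# Context budget in movie scripts, using the rough numbers above.
context_window = 200_000
usable = 176_000                  # stated ballpark after model + harness overhead

script_tokens = 60_000            # one Star Wars Episode I script (~136 KB on disk)
movies = usable // script_tokens  # the proposed unit of measurement

assert movies == 2                # a max of two scripts fit
```

Tool output, spec, and initial prompt all come out of that same two-movie budget, which is why it "goes by fast."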

37:14

>> Here's a new measurement.

37:16

>> How many movies can you fit into the context window? To

37:19

get people thinking about like visually

37:22

from when we talk about tokens? It's

37:25

it's just this weird concept like you

37:28

can only fit about 136 KB and people go

37:30

what's 136 KB? It's Star Wars movie

37:33

script.

37:33

>> Amazing. And that includes the tool

37:36

output. And then if you apply the

37:38

same math back, that includes your tool

37:39

output, your spec allocation, it

37:42

includes your initial prompts. It goes

37:45

by fast. Goes by fast.

37:48

>> Yeah,

37:49

>> Both are true: you can do a ton, but it's

37:51

also incredibly small, and uh

37:53

the engineering and being thoughtful

37:55

about how you use this stuff uh can make

37:57

a huge impact.

37:59

>> Correct. And your best learnings will

38:00

come by treating it like a Twitch stream

38:03

or sitting by the fireplace and then

38:05

asking the all these questions and

38:06

trying to figure out why it does certain

38:09

behaviors and there's no explainable

38:11

reason. But then you notice patterns and

38:13

then you you tune things like your

38:16

AGENTS.md, which should only be about 60

38:19

lines, uh, by the way.

38:21

>> Yeah. AGENTS.md should be small.

38:22

Everything should be small. You want to

38:24

maximize everything should be so small

38:26

>> useful working time in the smart zone.

38:29

Uh, this was super fun. I decided to do

38:30

this as a bit, and Geoff texted

38:32

me. I was like, I'm gonna come hang out

38:33

and talk about Ralph. I was like,

38:35

incredible. So, thank you so much for

38:37

joining.

38:38

>> Anytime, man.

38:38

>> Post the video somewhere. Uh, if you

38:40

want to do a recap or or a

38:42

retrospective, I'm I'm happy to dive

38:44

deeper once this thing is like cooked

38:45

for a couple hours.

38:47

>> Peace until I'm next in San Fran, mate.

38:50

>> All right, sir. Enjoy.

38:51

>> See you.

38:52

>> Okay, that was Geoff Huntley. I am now

38:55

going to uh get back to work. Uh and

38:58

we're going to let this thing cook and

38:59

we will just leave it online on the

39:00

stream for a bit. So, I'm going to turn

39:03

off the OBS camera. I'm gone now. And uh

39:09

yeah, enjoy. We'll check back in in a

39:11

little bit. Cheers.

39:13

>> So, it appears that the anthropic one is

39:15

completely dead.

39:17

I'm wondering

39:20

what is going on here. Oops. I didn't

39:22

mean to cancel that one. Let's get that

39:24

one going again. So, just keeps

39:28

running the hook over and over again.

39:32

Let's see if we can control C this. I'm

39:34

The only thing I can assume is

39:36

stop says iteration 110. No completion

39:39

promise set. It's coming from my phone

39:42

like hearing the stream on my phone. Um,

39:45

yeah. I wonder what's going on here. Did

39:46

it delete the prompt? Prompt is here.

39:49

What happened? Stop hook error.

39:53

Stop says, "Yeah, what happened here?"

39:56

That's my wrong team session. What was

39:58

the last thing that happened before that

40:00

got stuck in this loop? Oh my god, look

40:03

at all these iterations. So, all

40:05

milestones complete. 202 test passing.

40:08

Complete. Project is feature complete.

40:11

Features listed and out of scope are

40:12

intentionally deferred or not planned.

40:14

Milestones complete. All milestones

40:16

complete.

40:18

Project is complete. Project complete.

40:21

All milestones done. Okay. So, this one

40:22

says it's baked.

40:25

[clears throat] All done.

40:27

Complete. All milestones complete. Done.

40:32

Project complete.

40:35

This is funny because it's all in one

40:37

context window. It's just seeing its own

40:39

context and just returning this check

40:41

mark. Whereas like Ralph starts fresh

40:44

and it's like learning again and then

40:45

it's more likely to come up with new

40:47

things to do.

40:49

Um, but what we're going to do is we're

40:51

going to actually pull down this repo

40:53

and we'll see how it goes. So,

40:58

let's just pop this guy open. Oops. All

41:01

right. I'm going to get logged in here

41:04

and we're going to explore this and see

41:05

how it did. H. [clears throat] So, this

41:07

was the plug-in one.

41:10

get poll and then tell me about this

41:14

project

41:16

and teach me how to use it. We'll see

41:19

how this goes.

41:22

All right, fine. I will get Oh, because

41:24

we opened it from not the CLI. All

41:26

right, we're going to pull this down

41:27

over here and see what fast Haiku can

41:29

do. I guess we'll check out the other

41:30

one, too.

41:32

See how far we've gotten. Teach me how

41:34

to use it and let's run through some

41:36

examples. Let's go fast Haiku. This guy

41:39

on DSP. Clean up some of these other

41:41

ones and check out the code. All right,

41:43

we got CLI, we got the core, config,

41:45

parser, front matter, parser, get URL

41:48

parser. Oh, this is sick we got here.

41:51

Okay, tests all failing. Perfect. 100

41:53

pass nine failures. We got some good

41:56

GitHub URL parsing.

41:59

Cannot find package. Ah, so we should

42:01

install.

42:03

[clears throat] This one appears to have

42:05

more tests. Nope. Less tests on the

42:07

plug-in one. We'll have to see source

42:10

and how it differs slash

42:14

adheres to the specs. [clears throat] I

42:18

would guess that we're not going to get

42:19

the emergent behavior here

42:23

that we are used to getting from the

42:26

Ralph in a loop which is going to just

42:28

keep finding stuff to do if we've done

42:30

everything. One thing we could do is we

42:31

could extend it so that Oh, that's why.

42:37

Okay,

42:39

it's parsing that as a comment. Um,

42:42

booting anything in the out of scope

42:46

slash future work that's now in scope

42:52

and then we can go launch it again.

42:54

Okay. And we'll do the same one over

42:56

here. [clears throat]

43:02

I should probably

43:05

make sure they match. All right. Next

43:07

time it runs, it will pull that out of

43:09

scope stuff into scope. [clears throat]

43:12

Excellent. The tests were fixed by the

43:14

agent.

43:16

[clears throat and cough] Really um

43:18

really should have told this to not use

43:20

background agents. Really wish it would

43:23

just run all the agents in the

43:24

foreground, but such is life. Okay, so

43:29

in our Ralph plugin,

43:33

M3 partially implemented

43:36

not implemented. The spec has this but

43:38

it's not there. I wonder if our

43:39

implementation plan was broken. Let's

43:42

see this one. M3 remote sources partial

43:45

offline update lock files still coming

43:49

along. Well, this one didn't actually

43:50

say it was done. So, and this says M4

43:53

was not started. So, you could allege

43:56

that this one actually finished faster,

43:59

but at the end of the day, the real

44:00

answer is how do they work? So, we're

44:02

going to jump into

44:06

let's just do this guy. Customer Val

44:08

plugin.

44:11

Well, that's not good. This was for the

44:13

plug-in one. Human on the loop, folks.

44:16

Build it. Here we go. Okay. Interesting.

44:18

We really really taken the customization

44:22

flag. Uh, now what do I do?

44:25

[clears throat]

44:25

Okay, I built our stuff.

44:28

So, we want to do something like slow

44:31

down, buddy. We got to understand what

44:32

we did here. What I really want to do,

44:34

what's it called? I don't remember the

44:35

ticket number. Try new things about

44:37

find. We got lazy GP going on here.

44:39

We're not going to worry too much about

44:40

this. All these fun watch hooks

44:44

on build, on error, on shell,

44:47

committing with get, pushing up. Getting

44:49

stuff, folks. I think neither of these

44:50

are really fully baked yet, but um we'll

44:54

let this cook for a little bit longer

44:55

and uh see where we get to. This one has

44:59

way more tests. Worth noting. Um

45:03

actually, what I want is to compare both

45:05

implementations. All right, we're going

45:07

to kick that off. We're going to let

45:08

them keep keep cooking. Um

45:12

should probably start a Ralph loop

45:13

locally to run through and just explore

45:15

and compare these things. But

45:17

[clears throat] anyways, we're going to

45:18

let this cook for a little bit more.

45:19

We'll be back. Okay, so we started at 2.

45:21

It's about 4:20 again. This one's been

45:23

done for a while, it feels like.

45:28

But we added some more stuff to scope

45:30

just so it could keep going. Um, I'm

45:32

actually going to go check out the

45:33

repos. [cough and clears throat]

45:36

So, here we'll be able to see here was

45:38

the plug-in one. A little bummed this

45:40

didn't get a README. Um, we can come

45:42

through and like look at the

45:43

implementation plan here. Seems like

45:47

everything is done. And here's our

45:48

progress log. Apparently, we have

45:51

parallel flags. Um, I like that this one

45:55

made a readme that explains how it works

45:58

and how to use it. Um, so I am on my

46:01

other workstation uh that is not being

46:03

used for streaming and uh I am testing

46:06

the actual use case for this which was

46:09

to take a bunch of skills from a claude

46:11

plugin and patch them with repo specific

46:15

rules. So, we'll be back to talk through

46:18

how that went. Um, and we'll be trying

46:20

it with both versions of the plugin. So,

46:23

um I'm going to go put the put the

46:24

Ralphs back on and we will uh keep

46:26

rocking. Okay. So, we did uh start

46:28

implementing this over there. We did

46:30

find one issue um which I from my other

46:34

workstation have opened

46:36

um because we really want it to,

46:40

like, maintain the

46:42

directory structure but instead it like

46:44

flattens it um and so it's done some

46:47

root causing over there because it had

46:48

access to the source code. So we're

46:50

going to update our Ralph loop to not

46:51

only read from the specs but also um

46:55

from the known issues folder

46:58

um and see how that works. I've done

47:01

this a couple times before. Um so I'm

47:04

going to

47:07

pop this one open over here.

47:09

Um how can we the best way to install

47:13

the GitHub CLI on a

47:16

workstation? I'm asking there are there

47:18

are very easy questions to these

47:21

question. There are very easy answers to

47:22

these questions or you can install

47:24

whatever you want. Guess we'll do this

47:25

on both of these. I hope we can do this

47:27

unauthed.

47:31

Okay. Oh, we made the same

47:32

mistake deterministically

47:35

in a

47:36

nondeterministic world.

47:39

That's fun. They both made the exact same

47:41

mistake. Let's see if we can fetch

47:43

issues without logging in. Amazing. It

47:45

worked. This one's still having trouble.

47:47

We're going to add this to the original

47:48

prompt. We get rid of this. I guess we

47:50

better make sure there's an issue on the

47:52

other one, too. Okay, we got an issue

47:54

there. Now, let's see. Guess we should

47:57

make sure we have JQ. Nice. Thank you,

48:00

Ubuntu, for including JQ.

48:04

Or maybe it came with Claude. Who knows?

48:06

Okay, sick. My stop hook hit, so we

48:07

should be good here. I'm going to just

48:09

cancel the loop and make sure it's able

48:11

to do this stuff. This is the other nice

48:12

thing is you can just kill these things

48:14

and they'll pick back up wherever they

48:15

left off.

48:18

[clears throat] Oh, and this one is now

48:20

going along. Stop this one. We cannot

48:22

have two of these working on the same

48:24

thing. Yeah, it's fine. We could have

48:25

done that as our own separate step in

48:28

the loop script, but uh I think that's

48:29

fine for now.

48:31

Uh so now these things will pull in any

48:33

GitHub issues that get open. So go open

48:35

GitHub issues and see if you can uh

48:37

prompt inject my Ralphs into doing

48:39

something weird. Have fun.
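Folding open GitHub issues into the loop, as set up above, splits into two steps: fetching (a public repo's issues need no auth, e.g. via `gh issue list --json number,title` or the REST API) and deterministically formatting them into the prompt. A sketch of the formatting step only; the heading text and field names are illustrative:

```python
import json

# Sketch: turn a repo's open issues (as JSON) into a prompt section the
# Ralph loop reads alongside the specs. Issue text is untrusted input,
# which is exactly the prompt-injection surface joked about above.

def issues_to_prompt(issues_json):
    issues = json.loads(issues_json)
    lines = ["## Known issues (address before new features)"]
    for issue in issues:
        lines.append(f"- #{issue['number']}: {issue['title']}")
    return "\n".join(lines)

sample = '[{"number": 7, "title": "flattens directory structure instead of preserving it"}]'
print(issues_to_prompt(sample))
```

Doing the fetch in the outer loop script, rather than asking the agent to do it, keeps that step deterministic too.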
