
PID Tuning. With AI!

20m 21s · 3,398 words · 494 segments · English

FULL TRANSCRIPT

0:00

Every so often somebody tells me that they used ChatGPT or some other AI tool to PID tune their quadcopter. And my first reaction is: that's dumb. That doesn't seem like it's even possible. And I suspect that ChatGPT is just saying smart-sounding things like it always does, and then giving you terrible advice, and then, like, placebo effect: maybe it's better, maybe it's worse, maybe it's a coin toss. I don't think it can really PID tune. But I've got this quad and I want to PID tune it. And any time somebody makes a radical claim that makes me go, "That's dumb. That's impossible," I could at least test it out, right? That's what we're doing in this video. We're going to use AI to PID tune this quad. I'm Joshua Bardwell, and you're going to learn something today.

0:43

Before we get into the video, let me make my case for why I'm skeptical that AI or large language models can PID tune a quadcopter. It's not just me being a sort of curmudgeon, with new technology coming along and me shaking my fist at it, you know, old-man-yells-at-cloud fashion.

1:04

Large language models, and all sorts of artificial intelligence and machine learning, work by being trained on a certain data set. So let's say you wanted to have a model that could tell the difference between pictures of dogs and cats. And in fact, one of the earliest AI projects was a project of visual identification similar to that: telling dogs from cats. The way it works is you feed the model a data set, which is a bunch of labeled pictures: this is a picture of a dog, this is a picture of a cat, this is a dog, this is a cat. And after it sees so many of them, it can tell the difference. It knows dogs look like this, cats look like that. That's very, very general and very simplistic, but it is more or less how large language models and AI work.

1:52

The problem arises when the AI is presented with a piece of data that isn't consistent with the data set it was trained on. So, for example, large language models are very, very good at coding, and one of the reasons is that they have a massive body of knowledge. There are so many projects that are public on GitHub; the source code is there for everyone to see. Every website is public; you just download the website and there's the whole thing. So there are these massive data sets that they can learn from. So when you ask them to do something, there's a pretty good chance that they have seen something like it. And they're very, very good at putting together those pieces into what seems like completely original, completely novel solutions, but they're not.
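To make that "train on labeled examples, then classify" loop concrete, here is a toy sketch: a nearest-centroid classifier over made-up two-number "images." It is vastly simpler than a real vision model or an LLM (the feature values, labels, and function names are all invented for illustration), but it has the same train-then-predict shape.

```python
# Toy "labeled data set" training, as described above: average the labeled
# examples per class, then classify new inputs by the nearest class average.
# Features here might be (ear length, whisker length) -- purely illustrative.

def train(examples):
    """examples: list of (features, label). Returns per-label centroids."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Return the label whose centroid is closest (squared distance)."""
    return min(centroids,
               key=lambda label: sum((a - b) ** 2
                                     for a, b in zip(centroids[label], features)))

labeled = [([9.0, 2.0], "dog"), ([8.0, 3.0], "dog"),
           ([3.0, 8.0], "cat"), ([2.0, 9.0], "cat")]
model = train(labeled)
print(predict(model, [8.5, 2.5]))  # classifies a dog-like input
```

The failure mode the transcript describes falls out of this shape: an input unlike anything in the training data still gets forced into the nearest known class.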

2:36

I'm skeptical that large language models have been trained on Betaflight Blackbox logs, or INAV or whatever blackbox logs, and I'm skeptical that the things they have been trained on are generalizable enough, or similar enough, that they're going to be able to produce good results. Now, there may be some things that they can do. For example, one of the things you do when you tune a quadcopter is you tune the filters, and the concept of filters, the concept of taking gyro data and breaking it down into frequency components and then filtering it: those are concepts that I could see a large language model being successful at.
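That frequency-decomposition step can be sketched in a few lines. The sample rate, signal frequencies, and the 30 Hz cutoff below are made-up illustrative values, not anything INAV or Betaflight actually uses; the point is just how a gyro trace breaks into frequency components and how a noise peak stands out as a filter target.

```python
# A minimal sketch of the idea above: take a (synthetic) gyro trace, break it
# into frequency components with a discrete Fourier transform, and spot the
# noise peak a filter would target. All numbers are illustrative assumptions.
import math, cmath

FS = 400                 # sample rate in Hz (illustrative)
N = 400                  # one second of samples
# "Real" stick motion at 5 Hz plus motor-vibration noise at 120 Hz:
gyro = [math.sin(2 * math.pi * 5 * n / FS)
        + 0.4 * math.sin(2 * math.pi * 120 * n / FS)
        for n in range(N)]

def dft_magnitude(samples, k):
    """Magnitude of frequency bin k (k cycles per window) of a naive DFT."""
    return abs(sum(x * cmath.exp(-2j * math.pi * k * n / len(samples))
                   for n, x in enumerate(samples)))

# Scan bins above the band where real craft motion lives (here, > 30 Hz)
# and report the strongest component -- the one a notch filter would target.
peak_hz = max(range(30, FS // 2), key=lambda k: dft_magnitude(gyro, k))
print(f"dominant noise component: {peak_hz} Hz")
```

This is conceptually what a dynamic notch filter does: find the dominant vibration frequency in the gyro spectrum and attenuate it.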

3:12

PID tuning, I think, is going to be much harder. I'm frankly skeptical that it could even understand the contents of a blackbox log. Oh well, I guess I was right. They can't. Huh.

3:25

Video over. Far from it. It says yes, it can parse and analyze INAV blackbox logs, but it admits that it cannot directly ingest a raw binary TXT file, which I wouldn't have thought that it could. Okay. But then it gives me some tips for how I can get the information to it. And one of the things it says is to give it the header information. So here is the blackbox file opened in a text editor. And if I just grab this header information here, boom, Ctrl+V.
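The header being pasted here is ordinary text at the top of the log file: lines of the form `H name:value` ahead of the binary frame data, which is why it survives a trip through a text editor and a chat box. A small sketch of pulling those lines into a dictionary; the sample lines below are invented for illustration, not copied from a real log.

```python
# Extract the plain-text "H name:value" header lines from the start of a
# blackbox log, stopping at the first line that isn't a header line.
# The sample bytes are invented for illustration.
sample_log_start = b"""H Product:Blackbox flight data recorder
H Data version:2
H firmwareType:INAV
H looptime:500
H rollPID:40,30,23
\x00\x01binary frame data begins here..."""

def parse_header(raw: bytes) -> dict:
    """Collect 'H name:value' lines into a dict; stop at non-header data."""
    header = {}
    for line in raw.splitlines():
        try:
            text = line.decode("ascii")
        except UnicodeDecodeError:
            break                      # hit undecodable binary data
        if not text.startswith("H "):
            break                      # headers are over
        name, _, value = text[2:].partition(":")
        header[name] = value
    return header

header = parse_header(sample_log_start)
print(header["rollPID"])
```

Pasting that header text into a chat is effectively handing the model this same dictionary of settings, which is why it can comment on PIDs and filter values without ever reading the binary gyro frames.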

3:57

I'm going to guess that it's going to do an okay job of parsing this. This is correct. This is correct. This is correct. And this is correct. These are... I don't know, are they standard starting points? This is the starting PIDs for a 7-inch; it's the INAV 7-inch preset. They wouldn't be standard starting points for a 5-inch, and it doesn't know what size drone I have, but they're a fairly standard starting point. It's one of the INAV presets. It says the D term is moderate. Well, it's literally the starting preset, so, you know, it's probably pretty normal. Well, I should hope that's correct. That is the default preset.

4:39

"This is relatively low. If this is a 5-inch build, this might feel... if it's a larger 7- to 10-inch..." That is correct: that 75 Hz gyro LPF was set based on INAV's recommendation in their quick tune for 10-inch props. D-term LPF: this is a neutral observation. Again, I didn't change this. Dynamic notch is enabled. Fine.

5:02

This is an interesting observation. With bidirectional DSHOT, motor RPM is reported immediately after the flight controller sends the DSHOT packet to the ESC: the ESC responds with a report of its current RPM. In addition, it is reported every single packet time. Every single time the flight controller sends a DSHOT packet, the ESC responds with its RPM. With ESC-telemetry-based RPM reporting, a separate process does the polling, and that process can be delayed relative to the motor outputs. In addition, ESC telemetry is polled in round-robin fashion: the flight controller polls ESC 1, 2, 3, 4, then 1, 2, 3, 4. So the update frequency is much slower.
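The rate difference being described is easy to put rough numbers on. The packet rate and polling rate below are illustrative assumptions, not measurements from real hardware; the point is the per-motor update-rate gap between the two schemes.

```python
# Back-of-envelope comparison of the two RPM-reporting paths described above.
# The rates are illustrative assumptions, not measured from real hardware.

motors = 4

# Bidirectional DSHOT: every DSHOT packet to a motor gets an RPM reply,
# so each motor's RPM updates at the full packet rate.
dshot_packet_rate_hz = 4000                  # e.g. a 4 kHz PID/output loop
bidir_rpm_rate_per_motor = dshot_packet_rate_hz

# ESC telemetry: a separate, slower process polls one ESC at a time in
# round-robin, so each individual motor only gets every Nth poll.
telemetry_poll_rate_hz = 100                 # total polls/s across all ESCs
telemetry_rpm_rate_per_motor = telemetry_poll_rate_hz / motors

print(f"bidirectional DSHOT: {bidir_rpm_rate_per_motor} updates/s per motor")
print(f"ESC telemetry:       {telemetry_rpm_rate_per_motor:.0f} updates/s per motor")
print(f"ratio: {bidir_rpm_rate_per_motor / telemetry_rpm_rate_per_motor:.0f}x")
```

Even with generous assumptions for the telemetry path, the per-motor update rate is orders of magnitude lower than per-packet reporting, which is the crux of the RPM-filter argument that follows.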

5:44

The Betaflight devs' position on the way INAV does it (I'm not trying to put words in their mouth, but this is my impression) is that they deliberately didn't do it that way. They invented bidirectional DSHOT specifically so they could do it the way they do it. They could have done it the way INAV did it and decided not to, presumably because they felt it was inadequate. INAV's position, obviously, is that it works fine. I've always left RPM filtering off on INAV for that reason, because I kind of believe the Betaflight devs when they say that the way INAV does it basically doesn't work.

6:15

Well, Gemini's found another mistake that I made. On Betaflight, the raw gyro data, the unfiltered gyro data, is recorded automatically in recent versions. I forgot that INAV doesn't do that, and you have to set the flight controller to record that data manually. Oh well, here's the first major flub for the AI.

6:31

"set debug_mode =" ... and, uh, GYRO_SCALED is not there. Is there anything that might resemble GYRO_SCALED? There isn't. I'm going to guess it's because it records it automatically, but let me just ask it. That's what Betaflight calls it, by the way. I think maybe it's a little confusing, or... aha. Yes, it's a new INAV 9 thing: GYRO_RAW captures gyro data direct from the driver, before software filtering. "set debug_mode = GYRO_RAW." Except I'm pretty sure it's already doing that. Uh, debug mode is set to NONE. No, GYRO_RAW isn't there either.

7:08

However, if we look at the file and we try to add a custom graph, we actually see gyro raw is there. So I think what's happened is that INAV also automatically records the raw driver data, because it knows that we're going to want it. Is there data here? Yeah. Yeah. So it's already done that. I don't need to set up the debug mode. And this is the kind of place where AI kind of trips over its own...


    PID Tuning. With AI! - Full Transcript | YouTubeTranscript.dev