Is There Any AI That Can Tune A Betaflight Drone? Could You Train One? - FPV Questions
Full Transcript
Clipsy asks, "Is there any AI that can tune from blackbox info?" I actually addressed that recently on the Joshua Bardwell Live Stream Clips channel. That's where Blunty clips out segments from the live stream and uploads them for you to enjoy. Let's just search: "Can ChatGPT tune your drone?" No, it cannot. If you want the details, you can go watch that clip on the Joshua Bardwell Live Stream Clips channel. The short version is that I think the answer is no.
I'm pretty sure that in order to do PID tuning, you have to be able to parse the blackbox data in ways that I don't think ChatGPT can. Specifically, to take the gyro data and convert it into a frequency plot, you have to do a fast Fourier transform, which is the mathematical function that does exactly that, and ChatGPT can't do a fast Fourier transform. So when ChatGPT tells you to do something with your filters and it works for you, I think it's just hallucinating and got the answer right by chance.
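To make that concrete, here's a minimal sketch of the kind of math a tuning tool does with the gyro trace, assuming the log has already been decoded to CSV. The file name, the "gyroADC[0]" column, and the 8 kHz sample rate are assumptions for illustration, not something ChatGPT does on its own.

```python
# Minimal sketch: turn a decoded gyro trace into a frequency plot,
# the kind of analysis a tuning tool performs with an FFT.
# Assumes the blackbox log was already decoded to CSV; the column name
# "gyroADC[0]", the file name, and the 8 kHz rate are assumptions.
import numpy as np
import pandas as pd
from scipy.signal import welch

log = pd.read_csv("flight_log.csv")        # decoded blackbox data (placeholder file)
gyro_roll = log["gyroADC[0]"].to_numpy()   # roll-axis gyro samples
sample_rate_hz = 8000                      # depends on the log's looptime

# Welch's method averages FFTs over windows to get a smoother noise spectrum
freqs, power = welch(gyro_roll, fs=sample_rate_hz, nperseg=1024)

# Report the dominant noise peak, e.g. a frame resonance or motor noise band
peak_hz = freqs[np.argmax(power[1:]) + 1]  # skip the DC bin
print(f"Dominant gyro noise around {peak_hz:.0f} Hz")
```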
And I also don't think ChatGPT has the ability to calculate step response the way PID Toolbox does.
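For what it's worth, here's a hedged sketch of one way a step response can be estimated from a decoded log: deconvolve the gyro trace by the setpoint trace and integrate the result. I'm not claiming this is exactly what PID Toolbox does internally; the column names, the regularization constant, and the window length are assumptions.

```python
# Hedged sketch: estimate a step response from decoded blackbox data by
# FFT-based deconvolution of gyro (output) over setpoint (input).
# Column names and the regularization term are assumptions.
import numpy as np
import pandas as pd

log = pd.read_csv("flight_log.csv")                    # placeholder file
setpoint = log["setpoint[0]"].to_numpy(dtype=float)    # commanded roll rate
gyro = log["gyroADC[0]"].to_numpy(dtype=float)         # measured roll rate

n = len(setpoint)
S = np.fft.rfft(setpoint, n)
G = np.fft.rfft(gyro, n)

# Regularized (Wiener-style) deconvolution: H ≈ G * conj(S) / (|S|^2 + eps)
eps = 1e-3 * np.max(np.abs(S)) ** 2
H = G * np.conj(S) / (np.abs(S) ** 2 + eps)

impulse = np.fft.irfft(H, n)                 # estimated impulse response
step_response = np.cumsum(impulse)[:500]     # first ~500 samples of the step estimate
print("Rough peak of step response (overshoot indicator):", step_response.max())
```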
So if you're handing ChatGPT a blackbox log, I don't think it knows how to parse a blackbox log. It's possible that if you give it CSV data, it will be able to parse the CSV data, and you can convert a blackbox log to CSV, but I still don't think it has the core logic it needs to interpret the data and make recommendations on it. And people, it drives me crazy. If I'm wrong about this, I will happily admit that I'm wrong. ChatGPT can do some cool [ __ ], you know, but I just don't think it can parse blackbox data.
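On the CSV point, the conversion itself happens outside the LLM. A sketch, assuming the blackbox_decode tool from Betaflight's blackbox-tools is installed; the file names are placeholders and the exact output naming may vary by version.

```python
# Hedged sketch: convert a raw blackbox log to CSV before any analysis.
# Assumes the blackbox_decode command-line tool (Betaflight blackbox-tools)
# is on the PATH; file names are placeholders.
import subprocess
import pandas as pd

# blackbox_decode writes one CSV per log session found in the file,
# typically alongside the input (output naming may vary by version).
subprocess.run(["blackbox_decode", "LOG00001.BFL"], check=True)

log = pd.read_csv("LOG00001.01.csv")     # decoded frames as plain columns
print(log.columns.tolist()[:10])         # e.g. loopIteration, time, PID terms, gyro...
```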
Just to be clear, you're using ChatGPT as a blanket term for AI and LLMs in general, even if people in chat maybe aren't. Yeah, LLMs in general. Is there any LLM that can do a fast Fourier transform? Well, any of them could do it with code, right? Like, if you're in Cursor and you prompted one, you could get a fast Fourier transform through code. If you had an LLM that could access a Python function that could do a fast Fourier transform, then yes. Yeah, but it would have to know that it needed to do that. You could go into Cursor and prompt the AI to build you something that could tune a drone and explain it; that's sort of the idea. But then you would have to do that work, right? That's not what people are doing. So here, for example: "Please write me a Python function to calculate FFT." An AI could do that.
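Something like this is well within an LLM's reach when you ask for code explicitly. This is generic signal math, not a drone tuner; the window choice and the test tone are just illustrative.

```python
# The kind of thing an LLM can produce on request: a small FFT helper.
# Generic signal math only; the Hann window and test tone are illustrative choices.
import numpy as np

def fft_spectrum(samples, sample_rate_hz):
    """Return (frequencies, magnitudes) for a real-valued signal."""
    samples = np.asarray(samples, dtype=float)
    windowed = samples * np.hanning(len(samples))   # reduce spectral leakage
    spectrum = np.fft.rfft(windowed)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate_hz)
    return freqs, np.abs(spectrum)

# Example: a 200 Hz test tone sampled at 8 kHz shows a peak near 200 Hz
t = np.arange(8000) / 8000.0
freqs, mags = fft_spectrum(np.sin(2 * np.pi * 200 * t), 8000)
print(f"Peak at ~{freqs[np.argmax(mags)]:.0f} Hz")
```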
But what people are doing is just saying, "I dumped a blackbox log into ChatGPT." I can't find an example right now. And then ChatGPT goes, "Cool, I will help you tune your blackbox log. We will get maximum step response and, you know, properly tune your filters." And it just says a bunch of [ __ ], and then it's like, "Based on what I see in your blackbox log, I recommend that you increase your P gain." And it's like, you know what? I'll put on a white lab coat and a stethoscope and hold a clipboard and say, "Cool, I've looked at your blood test results and it seems that your cholesterol is 132. I recommend that you get that number up. A good cholesterol is between 187 and 221." Don't I sound confident? I'm completely just talking out of my ass.

I think that when ChatGPT, or any AI, pretends like that, what's really happening is that it knows terms like P gain, D gain, filters, step response, overshoot, and oscillation are associated with blackbox logging, and it makes sentences that sound convincing and makes recommendations, which you then follow, and maybe they work. But it's not because it actually understood what was in your blackbox log. I don't think it can possibly know that.
So yeah, and this also isn't to say that somebody can't eventually train something to do this specifically. One of the things we're seeing now, and somebody mentioned it in the chat, is agentic AI. I think that's how DeepSeek handled it: you have one big LLM, but it's basically asking individual agents that are good at certain jobs. It'll have an agent that's good at math, and if the request involves math, it'll go ask the math agent to solve it and bring the answer back. So the idea is that you would have specific agents that understand these pieces, and the model can't get lost as easily, because it knows to hand the work off to something that doesn't need as much context.
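A purely hypothetical sketch of that agent idea, with no real LLM API involved: a router hands the math off to a deterministic tool instead of letting the model guess at the numbers. Every name here is made up for illustration.

```python
# Illustrative sketch of the "agent" idea: route math work to a tool
# instead of letting the language model guess. All names are hypothetical;
# no real LLM API is being called.
import numpy as np

def fft_tool(samples, sample_rate_hz):
    """A deterministic tool the model can delegate to."""
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate_hz)
    return freqs, np.abs(np.fft.rfft(samples))

TOOLS = {"fft": fft_tool}

def route_request(task_name, **kwargs):
    """Stand-in for an agent router: run the named tool and return its result."""
    if task_name not in TOOLS:
        raise ValueError(f"No agent available for task: {task_name}")
    return TOOLS[task_name](**kwargs)

# The model would decide to call the tool rather than invent the numbers
samples = np.sin(2 * np.pi * 150 * np.arange(4000) / 4000.0)
freqs, mags = route_request("fft", samples=samples, sample_rate_hz=4000)
print(f"Tool-computed peak: {freqs[mags.argmax()]:.0f} Hz")
```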
But here's the problem with that: the training data has to include examples that let the large language model learn what "right" looks like. Does that make sense? Of course, that's how LLMs work. Obviously, a treatise on how LLMs work is not something I'm qualified to give, but the short version is that you feed the LLM a lot of data that is correct, or from which you want to draw inferences, and then you ask the LLM questions about the data and it talks to you about the data. But garbage in, garbage out.

Did any LLM get trained on blackbox logs? Can it even parse a blackbox log? If I hand it a blackbox log from Betaflight, it's just a bunch of ones and zeros. Does it know how to interpret that? How would it even know? Yeah, you'd build an interpreter for that; you would have to build it, but no one's done that. My point is no one's done that.
Yeah. So, number one, you would have to build an interpreter. Basically, you would have to build PID Toolbox into ChatGPT, or give it an API that lets it access PID Toolbox, so that it could look at the blackbox log and go: okay, your frequency response is X, your step response is Y, here are your current PIDs. And then, based on that, it has to make a recommendation. So again, you would have to train it on: when P goes up, here's what happens; when P goes down, here's what happens; and here is what we're looking for, this is the end state we want. And no one's done that. No one's done that.
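To illustrate what that would even look like, here's a hypothetical sketch: an interpreter produces a structured summary, and only that summary, plus rules about what "good" looks like, gets handed to the model. Every field, threshold, and recommendation below is an assumption for illustration; nothing like this exists today, which is the point.

```python
# Hypothetical sketch of "building PID Toolbox into the model": an interpreter
# hands the LLM a structured summary (not raw binary), and some notion of what
# a good tune looks like. All fields and thresholds are made up for illustration.
from dataclasses import dataclass

@dataclass
class TuneSummary:
    axis: str
    current_p: int
    current_d: int
    step_overshoot_pct: float    # from the deconvolved step response
    noise_peak_hz: float         # from the gyro spectrum
    noise_peak_db: float

def naive_recommendation(s: TuneSummary) -> str:
    """Toy rule of thumb, standing in for the training a real assistant would need."""
    if s.step_overshoot_pct > 15:
        return f"{s.axis}: overshoot {s.step_overshoot_pct:.0f}%, consider less P or more D"
    if s.noise_peak_db > 10:
        return f"{s.axis}: strong noise at {s.noise_peak_hz:.0f} Hz, check filters/hardware"
    return f"{s.axis}: looks reasonable"

print(naive_recommendation(TuneSummary("roll", 45, 30, 22.0, 180.0, 6.0)))
```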
I think the alternative question would be: is there enough data out there? If it combs all the forums and everything, and if it gets transcripts of YouTube videos and things like that, is there enough data out there to tell it how to tune a drone? Or, like you were saying, you'd need to specifically train something, and like you said, you'd also need a blackbox interpreter so you can get the data out of it. So, you know, that's something they're doing more of now though, is like now their image