TRANSCRIPTION — English

Svet s superinteligenco, robotizacija in globalna AI tekma (Marko Grobelnik) – AIDEA Podkast 208

2h 33m 47s · 23,604 words · 1,840 segments · English

FULL TRANSCRIPTION

0:00

Yes, thank you. Well, good. Yes, thank you for the invitation. Yes,

0:06

I've been here several times, since I've been interested in this field, in this

0:11

Slovenian space, for a long time. Yes, I've been at it since high school, that's just

0:15

what I've been doing. So, you know what I was wondering? Just this

0:19

morning I was thinking about what it feels like when you spend most of your

0:26

life dealing with a certain field.

0:30

Then the whole world starts to be interested in this field.

0:35

What does that feel like? Yes, it's like a wave

0:39

coming from behind and you're sort of surfing on it, right? So actually, at least

0:44

for me, it's a pleasure. I can see that for some people it might not be, because

0:49

they would like to

0:54

keep functioning on that small wave, or small waves. Now this wave, of course,

1:00

requires activity, right? I mean, it's like surfing: you get into a wave

1:04

and of course you have to be very, very active to stay on the wave, to

1:10

also enjoy this wave. Mhm.

1:14

But it's every day, or several times a day, that you have to check what's happening,

1:18

try things out and so on. And that's something that not everyone can do.

1:24

Even my colleagues who work with AI, right? Um, it's just too much.

1:32

But it's incredible. Yes, of course. Then you start to wonder

1:37

why they're all doing this now, why it's

1:40

important, right? But we'll probably say something about that

1:45

later. But the feeling, for me, is fantastic. Yes, for my colleagues too, so

1:51

we didn't believe that this would happen in our lifetime, what

1:55

is happening now. That is, this level that

1:59

technology is at right now — you didn't think it would reach this level?

2:04

No, no, no. The creators, the creators didn't

2:08

expect it either. It was a

2:13

step that wasn't clear, roughly speaking.

2:20

The success of artificial intelligence now was mainly driven

2:25

by the enormous computing power that today

2:31

especially large companies can spin up in a short time. That means within an hour, right away.

2:37

Mhm.

2:38

Well, and we knew that somewhere along the way there was probably such a result.

2:45

But we didn't know where the critical mass of this computing power was that would enable what

2:50

we have now. Well, that happened somewhere in 2022,

2:55

let's say. That's when this leap occurred, and I know the people who made it

3:01

at Google. That's where it happened first, and when I asked these people about it, I said,

3:07

how do you understand it now? Why now, why does it even work? They

3:14

said, we don't know. Mhm.

3:16

We don't know. We just know that if we run it longer, push it harder, put more

3:21

data into the machine, the results get even better. Why it works so well,

3:26

we don't know. Now, after three or four years, so to speak,

3:31

the details are slowly starting to be revealed — why

3:37

this thing actually works, how these machines think — because in the background

3:43

it's very simple, it's some high-school mathematics, very

3:46

simple. Well,

3:51

I can explain this to any student, almost an elementary school student,

3:55

how it works. That's no problem. But when we multiply this into, I don't know,

4:01

hundreds of billions of these simple Lego bricks, well, then the thing starts

4:07

to take on other shapes. And it wasn't clear when these shapes would

4:13

come. It's like a fog, right? Mhm. Because you never know when

4:17

the moment will come when the fog thins enough that you see the outlines of something —

4:24

say, some mountains or some landscape behind that fog — something

4:29

like that. Well, and that fog lifted somewhere in '22. But most people in

4:35

your field didn't believe in these laws of scaling, or what

4:41

do we call it in Slovenian? Yes, scaling, yes, because scaling somehow

4:44

Yes.

4:47

Very few people believed in this direction. And probably most of those people were

4:53

at OpenAI, which was still a non-profit organization at the time,

4:59

but they went in this direction at all. Actually, as it happened,

5:03

if we were to try to reconstruct the sequence of events: these ideas had

5:09

been in the air for a long time, somewhere since, well, we can start from the 1950s

5:17

on, when we can follow them. Somewhere after 2010, 2011, there was

5:24

this leap, when these very powerful processors came out

5:30

and all of a sudden computers started to see, hear, speak,

5:36

and there was one big thorn left, which was not quite clear. That's

5:41

language. Whether we could master language to the extent that

5:46

a computer is at least an approximate conversation partner — that wasn't clear.

5:52

Mm. Well, and this thing kind of shifted somewhere after 2020,

5:59

these outlines started to appear, that this actually looks like it could work.

6:04

And when they kept increasing the computational power —

6:11

that's the scaling. Well, then at some point the machines started

6:16

to give answers that surprised us. You might remember that time in

6:21

June '22, when a Google engineer got so carried away that he

6:28

felt there was a conscious intelligence on the other side, and then he went and did

6:33

some interviews and made a whole fuss. Well, basically, Google fired him

6:38

because he was so scared that he was basically doing negative publicity.

6:42

Let's say that was an example. There was a language model at Google

6:46

that he interacted with. Like that. This was

6:48

before GPT-3.5. So. Yes, yes, yes, yes. This, this was

6:53

something that was called LaMDA at the time.

6:55

Mhm. Even now, if you look up Google

7:00

LaMDA, you would find that it's something, but they never

7:07

advertised this brand, this name, as a system. It sat kind of half-dormant

7:13

at Google for a while. Now this Gemini has come —

7:17

actually the same line of ideas; the same people are still doing it. Only now, three or

7:22

four years later. Yes. So there was, for example,

7:27

the case of this engineer who was there the whole time, and for him it was such

7:31

a surprise that he was perhaps left a little mentally

7:34

shaken. Basically, he then had this fear that now something had

7:41

happened that means you need

7:47

to start looking at everything a little differently.

7:51

Well, but his team — the boss and the whole team, because it's a big team —

7:56

well, I know these people somehow, and they were also quite confident;

8:04

they were okay with it at the time. Then Google got a little

8:07

scared that they couldn't just release it to

8:12

everyone now. Well, there was OpenAI somewhere in the background,

8:18

who said, we'll release it. Well, and Google then

8:23

practically had internal problems for another year or two over how

8:29

to release products that would be safe and at the same time in line with

8:33

Google's philosophy. Well, and then at some point they had

8:38

a lot of products — I talked to quite a few of these people — which

8:43

their internal controls somehow didn't let through. Well, and then

8:48

at some point they said, now it's over, now it has to go out. Now, in

8:52

the last year more or less — now in '25 — they have actually, let's say,

8:59

as I would say in English, unleashed their own. At the same time, of course, there are others.

9:05

There are no secrets, no secret recipes. There are only three components: data,

9:11

algorithms and processing power. We all have data, more or less.

9:17

Algorithms are known. There, I would say, the last big improvements came somewhere

9:23

around 2016. Well, after that, the only question left is who has

9:28

the computing power. Well, Microsoft has the computing power, and Google and

9:34

Amazon. Not quite — OpenAI went to Microsoft.

9:38

Google had its own. Amazon didn't go in that direction; it just distributes

9:44

other people's models. Mhm. I think they are investors in

9:49

Anthropic, right? Anthropic. That's also — yes, I happen

9:52

to know the founder,

9:56

not the main one — that was Dario —

10:00

but Jack Clark, OK.

10:03

He was at the OECD; we were together, leading a working

10:10

group. But at one point he said, oh, now I have to say goodbye,

10:15

I won't be at OpenAI anymore. He was at OpenAI at the time. He said, I have to say goodbye,

10:20

I'll be working on a startup. Well, and that startup was Anthropic, yes.

10:26

But they do very well, they do very interesting things. Well, I have to

10:30

say, practically, for software engineering, for

10:35

programmers this has somehow become the main thing. They are so much better than

10:41

the others — efficiency, friendliness, support.

10:46

So the programming business has changed a lot in the last two years,

10:50

mainly thanks to, I would say,

10:53

Anthropic. Yes. Mhm. Mhm. Yes, I agree.

10:57

I follow this area quite a lot because of the nature of my work, but philosophically it really

11:02

drew me into it and I'm really happy to be able to talk to you today.

11:08

Now I had a couple of directions in mind anyway, but now that we've

11:12

started, it occurred to me that it might be

11:16

interesting — because we've never done this on the show — to

11:21

elaborate a little more on these components of language models, or of language model training.

11:26

Mhm. Mhm.

11:27

I think we train the model first, then comes reinforcement learning,

11:34

then inference. So it's possible that someone — I don't want to say a layman,

11:40

but someone who is interested in this but has never gone that deep.

11:44

Well, let's say, if I were to put it in a very colloquial way —

11:49

look, if we have an empty space, like this table right now,

11:55

and now we put down one line, one straight line,

11:59

this space now gets a structure. It already has two sides, doesn't it?

12:04

Mhm. Left and right, right? Then we put a new straight line across. It already has four

12:09

sides, it already has more structure.

12:14

And we put in another — I don't know, a third, then a fourth, then a 10th, then a 100th, then a millionth, then

12:20

a billionth — and with each one we introduce more structure into this

12:24

space. Well, these language models that we use now, Chat

12:29

GPT and so on, have about 500 billion of these lines, right?

12:36

Otherwise, it's the same, right? Mhm.

12:38

Well, and now — these lines have to be placed just right, right?

12:43

Yes, right? And placing them right is

12:46

complicated. So, you know, maybe from school: every

12:49

line has two parameters — how much it is inclined and how high it sits.

12:53

Mhm. Well, that has to be calculated. That has to be calculated for 500 billion

12:59

lines — how they stack on each other just right — and then this space gets so much

13:05

structure that — well, that's what we didn't know:

13:10

where the critical mass of this structure is, so that we can master language. Let's say, to

13:14

master images, it took less. It took significantly less. For

13:18

sound, speech recognition, it also took significantly less. Language

13:23

was by far the most difficult. I mean, that was, I would say, the last great

13:27

success. Well, and that's how this great big language model is built.
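
The "stacking simple lines" picture above can be made concrete with a toy sketch. This is my own illustration, not the speaker's code, and it adds one detail he elides: real networks give each line a small kink (a ReLU) so that stacking them produces new shapes rather than just another line. Each "Lego brick" still has exactly the two parameters he names — a slope and a height.

```python
def relu(x):
    """The small kink: negative values are clipped to zero."""
    return x if x > 0 else 0.0

def brick(x, w, b):
    """One unit: a kinked line with slope w and offset b."""
    return relu(w * x + b)

def model(x, params):
    """Sum of bricks. Large models stack hundreds of billions of
    such parameters; here a handful already carve out structure."""
    return sum(brick(x, w, b) for (w, b) in params)

# Two bricks suffice to build the V-shape |x|: relu(x) + relu(-x).
params = [(1.0, 0.0), (-1.0, 0.0)]
print(model(2.0, params), model(-3.0, params))  # 2.0 3.0
```

Finding the right slope and offset for every brick is the "calculating" the speaker mentions — training is exactly the search for those 500 billion parameter values.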

13:33

So then we feed in data. Language, documents — what we have

13:38

on our own disks, on document servers, or even a copy

13:43

of the web. Everything that is digitized, more or less, goes in. Google clearly has

13:48

a whole copy of the web, and they just fed it all in together. Mm.

13:52

Well, and now it all translates to this:

13:57

if we have one word, which word is very

14:01

likely the next? It all translates to just this tiny problem. And

14:07

when we have two words, which is the third? When we have three words, which is the fourth?

14:12

And so on. Well, it sounds simple. We've been able

14:17

to do this forever. That, that wasn't a problem. That is, it's

14:20

text prediction — basically next-word autocomplete. Yes. What we didn't

14:25

have — we didn't have the context. Mhm.

14:27

And this copy of the web, or practically all the documents that are

14:33

available digitally — they give so much context that this

14:37

next word is so good. Mhm.

14:40

That's what it all translates to, actually. So,

14:46

well, and this placing of lines that I explained earlier actually

14:50

does nothing other than produce a set of suggestions for

14:56

the next probable word — which is more heavily weighted, which is more suitable,

15:01

which is less. Well, and then it rolls a die

15:07

and chooses one of them, so that those with greater weight are more

15:12

likely to be picked. And that's it. Mhm.
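
The "roll a die" step just described is ordinary weighted sampling. A minimal sketch, with a made-up toy distribution (real models score tens of thousands of candidate tokens, not three):

```python
import random

def pick_next_word(candidates, rng=random):
    """Draw one word at random, in proportion to its weight.
    candidates: dict mapping word -> unnormalized probability."""
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

# Hypothetical weights after some prefix like "the cat sat on the ...":
candidates = {"mat": 0.70, "sofa": 0.25, "moon": 0.05}
counts = {w: 0 for w in candidates}
for _ in range(10_000):
    counts[pick_next_word(candidates)] += 1
print(counts)  # "mat" wins roughly 70% of the draws
```

Because the draw is random rather than always taking the top word, the same prompt can yield different continuations each time — which is also why the heavier words dominate without being the only possible output.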

15:15

There's nothing more to it. That was what I was

15:20

talking to Boris Crgol about — in 2019 I think, or 2020 — and he

15:26

showed me GPT-2 at the time. Like that. Yeah. And

15:30

in the demo that he showed me, we put some text in and it sort of continues the

15:35

story. That is, the beginning of a story — my friend's name was Kleman and he lived in

15:41

this village — and then we gave this LM, that is, the language part,

15:46

a task. Or rather, it just did everything it knew how to do to finish this story,

15:52

or continue it, right? It continued

15:54

in some kind of meaningful way. It was otherwise dry, but meaningful,

16:00

grammatically quite correct — in English, that is, not in other languages —

16:03

but here, if we touch on intelligence: now this is

16:09

an intelligent task, OK, a narrow one. For example, I

16:14

have to finish a story and this system finishes it for me —

16:18

maybe this is some kind of intelligence,

16:21

or is this some kind of illusion of intelligence?

16:25

An illusion
