TRANSCRIPT (English)

3.6 Principles of Congestion Control

14m 49s · 2,386 words · 366 segments · English

FULL TRANSCRIPT

0:00

[Music]

0:09

In this section we're going to take a big-picture view of congestion control, really looking for a principled understanding of both the causes and the costs of congestion, and we'll identify two basic approaches for dealing with congestion in the first place. In the next section we're going to take a look at TCP and how it actually implements these principles, but for now we're going to step back and take the big-picture view. So you can put your pens and pencils down, sit back, think, listen, put your thinking caps on, as the saying goes. I think you're going to really enjoy this. And remember, congestion control really is a top-five issue in computer networking, so pay attention.

0:52

Well, let's start by asking the most basic question: what do we mean by congestion? Informally, congestion is all about too many sources sending too many packets too fast for a congested network link somewhere in the network to handle. Now remember that links have a transmission rate, and when the arrival rate of packets exceeds the transmission rate, queues will start to build up and delays will get larger and larger; and if the link's packet buffers fill up completely, arriving packets will be dropped and lost. It's important to remember that congestion control is really all about multiple senders sending too fast in aggregate, while flow control, which we studied in the last section, is really about speed matching between a single sender and a single receiver.
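The queue dynamics described here (arrivals exceeding the transmission rate make the queue grow until the finite buffer overflows and packets are dropped) can be sketched as a toy discrete-time model. This is my own illustration, not from the lecture; the rates and buffer size are made-up numbers.

```python
def step_queue(queue_len, arrivals, departures, buffer_size):
    """One time step of a toy router queue: add arrivals, drop whatever
    exceeds the finite buffer, then drain up to `departures` packets."""
    queue_len += arrivals
    dropped = max(0, queue_len - buffer_size)
    queue_len -= dropped
    queue_len = max(0, queue_len - departures)
    return queue_len, dropped

# Arrival rate (3 packets/step) exceeds transmission rate (2 packets/step):
# the backlog grows by 1 each step until the 5-packet buffer fills,
# after which the router drops 1 packet per step.
q, dropped_total = 0, 0
for _ in range(10):
    q, d = step_queue(q, arrivals=3, departures=2, buffer_size=5)
    dropped_total += d
print(q, dropped_total)  # queue settles at 3, with 7 packets dropped
```

Flow control would slow one sender to match one receiver; nothing in this model does that, which is exactly why the aggregate overload persists.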

1:42

So with that as background, let's dive into the causes and the costs of congestion. We'll do this by starting with a simple, idealized scenario (you know, idealized scenarios are something that professors love to think about), and then we'll start adding increasingly more realistic assumptions. So let's start with the scenario shown here. We'll look at the case of a single router where congestion will occur, and first assume that there's an infinite amount of buffering; that's idealized, of course. The router has input and output links of capacity R bits per second, and there are two flows, the red flow and the blue flow shown here. Let's take a careful look at the flow rates associated with the red flow. On the sending side there's a lambda in; that's the rate at which data is being sent from the application layer down to the transport layer. And at the receiving side there's a lambda out, which we'll refer to as the throughput; that's the rate at which data is actually delivered up to the application layer at the receiver. And we'll want to ask the question: what happens to the throughput as we dial up the arrival rate on the sending side?

2:52

Assuming that we increase the arrival rate for the red and the blue flows similarly: well, with the case of infinite buffering, no packets are lost; every packet that's sent is eventually received, and so the throughput at the receiver equals the sender's sending rate, as shown in the plot here. Note, though, that the x-axis ends at R/2, since if each sender were sending at a rate faster than R/2, the throughput would simply max out at R/2 per flow. That's because the router's input and output links can't carry more than R bits per second of traffic: R/2 for each of the two flows. Well, this looks pretty good, at least from the throughput standpoint. But remember, we learned in Chapter 1 that when the arrival rate to a link comes close to the link's transmission rate, large queuing delays can occur, as shown here. And so we see already that there are costs, in this case the delay costs, even in this idealized scenario. Let's see now what happens when we drop this unrealistic assumption about infinite buffering.
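The two takeaways from this first scenario can be captured numerically: per-flow throughput tracks the sending rate up to the R/2 ceiling, and average delay blows up as the arrival rate approaches capacity. The 1/(1 − rho) delay form below is a standard queueing approximation, my addition rather than something derived in the lecture:

```python
def per_flow_throughput(lambda_in: float, R: float) -> float:
    """Scenario 1: two flows share a router whose links carry R bits/sec,
    so each flow's throughput is capped at R/2."""
    return min(lambda_in, R / 2)

def relative_delay(utilization: float) -> float:
    """Standard queueing approximation (an assumption, not from the lecture):
    average delay grows like 1/(1 - rho) as utilization rho approaches 1."""
    assert 0 <= utilization < 1
    return 1.0 / (1.0 - utilization)

print(per_flow_throughput(0.3, 1.0))  # 0.3: below the ceiling, all delivered
print(per_flow_throughput(0.9, 1.0))  # 0.5: maxes out at R/2
print(relative_delay(0.5), relative_delay(0.99))  # delay explodes near capacity
```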

4:01

So let's consider the same scenario, except that now the amount of buffering is going to be finite. Recall from our study of reliable data transfer that senders are going to retransmit in the face of possible packet loss due to buffer overflows or corruption. So we're going to need to look a little bit more carefully now at the sending rate. In particular, we'll want to make a distinction between the rate of original data that's being passed down from the application, which we'll denote lambda in, and the overall rate at which the transport layer is sending data, including retransmissions, which we'll denote lambda in prime. The rate at which packets arrive to the router is lambda in prime, not lambda in. Make sure you've got this; the distinction between lambda in and lambda in prime is really important. And as noted here, lambda in equals lambda out, and lambda in prime is going to be greater than or equal to lambda in, since it's going to include the retransmissions.

5:06

Alright, for this case of finite buffers, let's start with an idealized scenario still, and imagine that somehow, magically, the sender knows whether or not there will be free buffer space for a transmitted packet. In this case no packets are going to be lost: the source sends a packet, it's just buffered at the router, eventually transmitted, and received at the receiver. Since no packets are lost, the throughput lambda out, here on the y-axis, equals lambda in. No problems here, well, except for the queuing delays as lambda in approaches R/2, just as in our first scenario.

5:44

So let's next relax this assumption that the sender somehow magically knows when there's going to be free buffer space, and consider what happens when packets can be lost at the router when they arrive and there's no free buffer space. Here we see a packet arriving to find the buffers full; it's lost, and so a copy of the packet is eventually retransmitted by the sender, and in this case finds free buffer space, advances into the buffer, and is eventually transmitted by the router and received at the destination. In this case, when retransmissions occur because of known loss, the plot showing the overall arrival rate versus throughput looks something like this. At a low arrival rate, buffers will almost always be available, and every originally transmitted packet is going to make it through. So in the low-arrival region, the arrival rate including retransmissions, which really don't occur very often, essentially equals the receiver throughput: a unit increase in the overall arrival rate lambda in prime equals a unit increase in the throughput lambda out. That's to say that the slope of this red curve here is close to one. What's much more interesting is the high-arrival-rate region here. Here the arriving packets increasingly include retransmitted packets, and so the receiver throughput no longer increases one-for-one with an increase in the overall arrival rate, and we see that as the overall arrival rate on the x-axis approaches R/2, which it can't exceed, the maximum throughput at the receiver is actually less than R/2. This gap here, and this is important to note, is because n retransmitted copies of a packet take up to n times the transmission capacity of a single packet, but in the end these n packet transmissions contribute only a single packet to the throughput. Well, now let's see what happens if we drop the unrealistic assumption that all of the retransmissions are actually needed.
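The gap just described comes down to a little arithmetic: if each delivered packet has to be transmitted some average number of times, only that fraction of the arriving traffic is new data. This is my own toy calculation; the numbers are illustrative, not from the lecture's plot.

```python
def receiver_throughput(arrival_rate: float, avg_transmissions: float) -> float:
    """If each delivered packet takes avg_transmissions sends on average,
    the receiver sees only 1/avg_transmissions of the arriving traffic
    as new data; the rest of the link's work carries retransmissions."""
    return arrival_rate / avg_transmissions

R = 1.0
# Link saturated at its R/2 share, but each packet is sent twice on average:
print(receiver_throughput(R / 2, 2.0))  # 0.25, well below the R/2 ceiling
# With no retransmissions, the full R/2 would be useful throughput:
print(receiver_throughput(R / 2, 1.0))  # 0.5
```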

7:42

That's to say, let's assume the sender may time out prematurely, as shown in the example here, and actually deliver two copies of a packet to the receiver. In this example the first packet's delayed, and so the sender eventually times out and retransmits, and both packets here eventually make it to the receiver. The receiver, of course, only delivers one segment's worth of data up to the application layer, even though two duplicate segments were received. With these unneeded retransmissions, this just means that there are more overall retransmitted segments in the flow, some needed, some duplicates, and so the maximum throughput drops even further, from here to here.

8:26

Well, we can summarize what we've learned from the second scenario by noting that congestion loss requires retransmission, and that n retransmitted packets take up to n times the resources, that means buffers and transmission capacity, of a single packet, but in the end these n packets contribute only a single packet to the throughput. And so the maximum end-to-end throughput can actually be significantly less than the actual transmission capacity of a congested router.

8:55

In our third scenario we'll consider four senders and four receivers, with a sender-receiver pair separated by two routers. The red flow now crosses two hops. There are again retransmissions, so we'll want to be able to distinguish between lambda in and lambda in prime as before, and again we'll be interested in the red flow's end-to-end throughput, lambda out. Now the red flow shares a link with the blue flow here and the green flow here, and this is critical. For a red flow packet to be successfully transmitted from host A to host C, it must be successfully forwarded by both routers. And this is important, because if a red packet makes it through the first router but is lost at the second router, it'll have to be retransmitted again by the red sender and cross that first router again.

9:48

Another way to look at this example is that the first packet transmission, which was lost at the second hop, will have essentially wasted the link buffering and transmission capacity getting successfully through that first hop, only to be lost at the second-hop router. So given this, the question we want to ask ourselves is: what happens as lambda in prime increases for all of the flows?

10:13

Here's one way to visualize and think about answering this question. As lambda in prime increases, the arrival rate to the first-hop router on the path will increase, increasing the loss rate. And now think about it asymptotically. Asymptotically, as the sending rate of the first-hop sender, say the red sender here, goes higher than R/2, this first-hop traffic will crowd out all of the second-hop traffic, and this dropping of second-hop traffic will happen at all routers, meaning that the end-to-end throughput for all flows will go to zero as we dial up the sending rate. Zero throughput! Things really can't get any worse than zero throughput. And so another lesson we learn is that in the multi-hop scenario, all of the upstream network resources, the buffers, the bandwidth, used to carry a packet to a router where it's ultimately dropped are going to be wasted. And since that packet won't make it to the destination in any case, it'll have to be retransmitted again and attempt to reuse all of those resources again on the end-to-end path.

11:19

So let's summarize what we've learned from these three scenarios. We learned, from a throughput point of view, that the best that can happen is that the throughput equals the rate at which traffic's being passed down from the senders' application layers, and once a link reaches its capacity, sending more traffic can't increase the throughput beyond the link capacity. We saw again that as the link utilization approaches 1, as the arrival rate of traffic to the link approaches the link's capacity, delays can become very large. When there is packet loss, we saw that retransmitted segments, whether they're needed or not, can reduce the maximum end-to-end throughput, since links are carrying both original and retransmitted traffic. Remember, n retransmitted packets will use up to n times the transmission capacity and buffering of a single packet, but in the end contribute only a single packet to the throughput. And finally, we saw that in the multi-hop case, upstream resources, buffers and bandwidth, used for packets that are eventually lost downstream are really wasted resources, and that in overly congested scenarios this can cause the end-to-end throughput to actually go to zero, a phenomenon that's been referred to as congestion collapse.
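The crowd-out effect behind congestion collapse can be illustrated with a toy model. This is entirely my construction, with simplifying assumptions: each hop has capacity R, each hop splits its capacity in proportion to the offered load, and retransmissions aren't modeled.

```python
def red_two_hop_throughput(lam: float, R: float) -> float:
    """Red offers `lam` at hop 1 against a competing flow also offering
    `lam`; red's survivors then compete at hop 2 with a fresh flow
    offering `lam`. Toy proportional-sharing model, not a real router."""
    # Hop 1: proportional share of R between two equal flows is R/2.
    after_hop1 = min(lam, R / 2)
    # Hop 2: red's (at most R/2) survivors versus the other flow's full lam;
    # as lam grows, red's share of hop 2 shrinks toward zero.
    after_hop2 = min(after_hop1, R * after_hop1 / (after_hop1 + lam))
    return after_hop2

R = 1.0
print(red_two_hop_throughput(0.1, R))    # light load: everything gets through
print(red_two_hop_throughput(100.0, R))  # heavy load: throughput collapses toward 0
```

The second print shows the collapse: red's first-hop survivors are capped at R/2, while the competing fresh traffic at hop 2 grows without bound, so red's end-to-end throughput tends to zero even though hop 1 carried (and wasted) R/2 worth of red traffic.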

12:35

Well, so we've seen that congestion is definitely a bad thing; that's why we want congestion control in the first place, so we can avoid those costs that we just saw. In the next section we're going to take a look at how TCP actually does congestion control, but since we're looking at big principles here, let's step back and identify two basic approaches towards congestion control, and as we'll see, TCP implements both of these. Well, the basic approach to congestion control is simple: when a sender detects congestion, it should decrease its sending rate; that would be lambda in prime in our previous examples. In the first approach towards congestion control, the sender infers congestion by lost-packet indications, that might be timeouts or triple duplicate ACKs, or by the measured RTT.
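A minimal sketch of this first, end-to-end style, a toy rate controller of my own in the spirit of what's described (not TCP's actual algorithm; class name and constants are illustrative): the sender watches only for loss indications and adjusts its rate.

```python
class InferringSender:
    """Toy end-to-end congestion control: infer congestion from loss
    events (timeout or triple duplicate ACK) and cut the sending rate;
    otherwise probe for more bandwidth. Constants are illustrative."""
    def __init__(self, rate: float = 1.0):
        self.rate = rate
    def on_ack(self) -> None:
        self.rate += 1.0                      # additive increase: probe gently
    def on_loss_indication(self) -> None:
        self.rate = max(1.0, self.rate / 2)   # multiplicative decrease

s = InferringSender()
for _ in range(4):
    s.on_ack()
print(s.rate)            # 5.0 after four ACKs
s.on_loss_indication()
print(s.rate)            # 2.5 after an inferred loss
```

Note that nothing here requires any help from the network: the only inputs are events the sender can observe on its own.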

13:23

In this case the network layer provides no explicit help in indicating congestion; it simply drops or delays packets, and it's from these events that congestion is implicitly inferred at the sender. This is the original approach towards congestion control taken by the TCP/IP protocol stack, and it remains a central part of TCP's congestion control algorithm today.

13:46

In the second approach towards congestion control, the network layer provides explicit feedback to the transport layer indicating congestion, and this can happen before there's actually loss or excessive delay. As we'll see, this feedback can be provided in several different ways.
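The second, network-assisted style can be sketched the same way. Again this is a toy of my own, loosely modeled on explicit congestion marking; the field and function names are made up for illustration. A router sets a congestion bit before it has to drop, and the sender reacts to that explicit signal rather than to loss.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    congestion_experienced: bool = False  # made-up stand-in for a congestion mark

def router_forward(pkt: Packet, queue_len: int, mark_threshold: int) -> Packet:
    """Toy router: mark, rather than drop, once the queue is getting long."""
    if queue_len >= mark_threshold:
        pkt.congestion_experienced = True
    return pkt

def sender_react(rate: float, pkt: Packet) -> float:
    """Sender halves its rate on explicit feedback, before any loss occurs;
    otherwise it keeps probing for more bandwidth."""
    return rate / 2 if pkt.congestion_experienced else rate + 1.0

rate = 8.0
pkt = router_forward(Packet(), queue_len=12, mark_threshold=10)
print(sender_react(rate, pkt))  # 4.0: explicit feedback, rate halved
```

The contrast with the previous sketch is the point: here the signal arrives while the packet is still delivered, so the sender can back off without paying the cost of a loss and retransmission.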

14:06

Some more recent versions of TCP implement network-assisted congestion control together with TCP's original end-to-end congestion control, which is actually required.

14:15

Well, I hope you found this material on the principles of congestion control to be interesting. We looked at the causes and also the costs of congestion, and we identified two basic approaches towards dealing with congestion: a network-assisted approach and an end-to-end approach. And armed with these insights, I think we're ready now to dive into the details of how TCP actually performs congestion control. We'll do that in the next section.

14:45

[Music]
