Re: What's the best configuration for this setup?

From: Dancer <[email protected]>
Date: Sun, 24 Nov 1996 13:39:12 +1100

The TCP protocol is heavy wizardry, certainly. There are three major
factors, as I understand them (YMMV, and I may be misinformed. If I am, I
would appreciate clarification and correction, and not abuse. Thanks):

1) Large MTU values cause uneven bandwidth division. Many implementations
default to 1500. This number is usually fine for high-speed, low-latency
links (say, Ethernet). Slower links get better (and more even) bandwidth
sharing from lower MTU values. Rule of thumb: pick a power of 2 and add 40
(552 [=512+40], 296 [=256+40], etc.).
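
(If you want to see why, here's a back-of-the-envelope sketch in Python.
It's purely my own illustration, the 28.8k link speed is a made-up example,
and the point is just how long one full-sized packet monopolises a slow
wire. The "+40" is the usual 20-byte IP header plus 20-byte TCP header on
top of the payload.)

  # How long does one packet tie up a slow link? (Illustrative numbers only.)
  LINK_BPS = 28800  # e.g. a 28.8k modem link (assumption for the example)

  def serialisation_ms(packet_bytes, link_bps=LINK_BPS):
      """Milliseconds the wire is busy sending one packet of this size."""
      return packet_bytes * 8 * 1000.0 / link_bps

  for payload in (1460, 512, 256):   # 1460+40 gives the common 1500 default
      mtu = payload + 40             # payload plus IP and TCP headers
      print("MTU %4d holds the link for %5.1f ms"
            % (mtu, serialisation_ms(mtu)))

  # Output: 1500 -> ~417 ms, 552 -> ~153 ms, 296 -> ~82 ms. The smaller the
  # packet, the more often competing connections get a turn on the wire.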

2) High-delivery (read: low-latency) connections will swamp low-delivery
connections, given the same routing priorities for both. Say we have:

  A <-narrow link(s)-> B <- wider link -> C <- bulk link-> WORLD
                                                       \
                                                         --> D

Most of us probably live in a situation like this. We manage B, and the
majority of users are at A(n). C and D are at your upstream provider. WORLD
is that odd three-dimensional thing that's rumoured to exist.

Now, C is, say, your provider's WWW proxy, or maybe just the WWW server.
Meanwhile, D is their we-have-everything-in-the-hemisphere-mirrored ftp
server, and is usually reasonably busy. D is probably on a subnet through a
gateway; C is on the mainline.

Now, start transferring something from D to your workstation hanging off
the LAN at B. Hmm... how about Quake? Yeah, you can picture the boss's face
on the ogre... remember? When he gave you that unreasonable deadline? Seven
days for a complete product rewrite, requiring around 400 man-hours. All
that aside, it's coming down at a fair speed, say 2.4 Kbytes/sec.

(D may be close, but it _is_ busy, and there are other people doing stuff
on the link between B and C as well. Even examples get easier without
users.)

Now, we'll pull something else off the WWW server at C. Something big. C's
just sitting there, cooling its heels, so to speak... There's oodles of
free bandwidth around it, and it starts to send. Wham! The file transfer
from D goes through the floor until the file from C completes transmission.
Why? Well, it's obvious that the connection with D has backed off. Latency
counts. Every millisecond of turnaround time scores against a connection
when it comes to pecking order.

If the latency between C and B is half the latency between D and B, then C
can get twice as much of the bandwidth as D. (Yeah, that's a slightly
simplistic view, but a lot of the skewing factors tend to cancel out
anyway.) Now, how does the latency between a user at A(n) and a source site
out at WORLD(n) compare to those? How much will that connection suffer when
you start pulling the big files down from C?
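
(Where does the "twice as much" come from? Roughly, a TCP connection can
have at most one window's worth of data in flight per round trip, so its
throughput ceiling is about window/RTT. A quick Python sketch of my own;
the 8 KB window and the 50 ms / 100 ms round-trip times are made-up numbers
purely for illustration.)

  WINDOW = 8 * 1024       # assume both connections run an 8 KB window

  def ceiling_kBps(window_bytes, rtt_ms):
      """Rough throughput ceiling: one window per round trip."""
      return window_bytes / (rtt_ms / 1000.0) / 1024.0

  rtt_c = 50.0            # B <-> C, the idle WWW server (made-up figure)
  rtt_d = 100.0           # B <-> D, the busy ftp mirror (made-up figure)

  print("C can move up to %.0f kB/s, D up to %.0f kB/s"
        % (ceiling_kBps(WINDOW, rtt_c), ceiling_kBps(WINDOW, rtt_d)))
  # -> C: 160 kB/s, D: 80 kB/s. Half the turnaround, twice the ceiling; once
  #    the shared link fills up, C's share squeezes D's in about that ratio.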

3) (You'd almost forgotten there were three points, didn't you?) UDP
traffic. Bets are off as soon as you throw streaming protocols (or indeed,
any bulk UDP traffic) into the mix. That's FSP, RealAudio, Xing, CU-SeeMe,
and the shit-hot-new-bandwidth-wasting-protocol-of-the-week(tm).
These sorts of things are designed without most of the mechanisms that TCP
uses (congestion back-off, retransmission, and so on). They're like pouring
sand over the end of a pipe: anything that doesn't fit is lost, and nobody
much cares (except that you already paid for it when it arrived at your
provider, even if it couldn't fit down your pipe. Technology mirrors
commerce). Paint a few grains of sand red. They're your TCP packets. How
long do they take to get through there? And remember, we hold up the
connection until we know that they did (window sizes and all permitting).
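
(To put toy numbers on the sand-in-the-pipe picture, here's a sketch. The
figures are entirely made up, not measurements: the pipe drops whatever
doesn't fit, the UDP stream shrugs and keeps shovelling, and every lost red
grain makes TCP stall and back off.)

  PIPE = 4000          # bytes/sec the narrow link can carry (made-up)
  UDP_OFFERED = 3500   # bytes/sec the streaming app shovels in, regardless
  TCP_OFFERED = 2000   # bytes/sec the TCP transfer would like to move

  # The pipe doesn't play favourites: when the offered load exceeds its
  # capacity, everything gets squeezed proportionally and the excess is lost.
  total = UDP_OFFERED + TCP_OFFERED
  udp_through = UDP_OFFERED * PIPE // total
  tcp_through = TCP_OFFERED * PIPE // total

  print("UDP got %d B/s through, TCP got %d B/s; %d B/s hit the floor"
        % (udp_through, tcp_through, total - PIPE))
  # -> UDP 2545, TCP 1454, with 1500 B/s dropped (~27% loss). The UDP stream
  #    doesn't notice or care; TCP sees every lost packet, retransmits it,
  #    and backs its window off, so next second TCP offers even less while
  #    UDP offers exactly the same.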

Err, and no... I can't remember who started this thread either, but yes...
due to the factors above, a small number of connections can hog the lion's
share of the bandwidth, although each connection ends up occupying less
actual time in total (we hope).

D

----------
> From: Duane Wessels <wessels@nlanr.net>
> To: squid-users@nlanr.net
> Subject: Re: What's the best configuration for this setup?
> Date: Sunday, November 24, 1996 12:27 PM
>
> dancer@brisnet.org.au writes:
>
> >Squid is told something like: 'max_bandwidth 44Kbps' and makes sure that it
> >does not read more than 44kilobaud from document-fetches in any second
> >(neighbours don't count towards this number. They're presumably free, being
> >on the inside of the link we're trying to protect, we hope). Delivery of
>
> I think this brings up the most important point. Assuming some kind
> of rate-limiting feature is added to the code, it MUST be implemented in
> a manner flexible enough to meet everyone's needs. If we hardcode
> "neighbours don't count towards this number" then two days after the
> release three people will want just the opposite (only neighbors count).
>
> Whoever started this thread (sorry I forgot!) claimed that a single
> (FTP?) transfer seemed to consume the entire link bandwidth. TCP is
> supposed to have this nice load-sharing algorithm, so it's not clear to
> me how this could happen.
>
> I think the first thing to try would be lowering the TCP receive buffer
> size ('tcp_recv_bufsize', squid-1.1).
>
> Duane W.