Re: Redirection and Load Balancing

From: John Todd <[email protected]>
Date: Thu, 06 Aug 1998 11:14:58 -0400

At 09:15 PM 8/6/98 +1000, Peter Marelas wrote:
>
>On Thu, 6 Aug 1998, John Cougar wrote:
>
>> On Tue, 30 Jun 1998, Stephen Ollis wrote:
>>
>> > I'm just curious: what does everyone else use for Redirection
>> > and Load Balancing on a farm of proxy servers?

[snip stuff about Alteon]

>> I've played around with a few different configurations with it, but
>> haven't yet successfully got it to act as a single point Squid proxy.
>> What I mean is: it'll load balance TCP/HTTP sessions across multiple
>> servers fine, but I couldn't get it to forward ICP reliably, since the
>> proxy ports must be configured along with the proxy IP address.

[snip]

>Given that the alteon switch does not eliminate all single points of failure,
>I wonder whether a PC running Unix that balances packets between systems
>behind
>it would be a cheaper alternative. (i.e. Similar to cisco's local director,
>but not so pricey).
>--
>Regards
>Peter Marelas

  I've got a system working, more or less, that has many of the features
you describe. I was really trying to get it to market, but the slow progress
of getting the programs written has discouraged me greatly (I'm not a
programmer, so I've had to hire things out), and the project has stagnated
for the last few months while other projects I'm working on have taken
precedence.

  Anyway, it's a Linux box (32MB RAM/P166) with two 100BaseT NICs that has
been stripped down and is running modified bridge code. All packets to port
80 get intercepted and handed off to a fairly standard Squid (with the
"rewrite header" patch). Squid is then configured in proxy-only mode.
Requests for particular URLs are always directed towards particular caches,
with backups configured for each list of URLs or range of IP addresses.
There is no ICP involved. The redirector box is actually what the end users
are "connecting" to, even though they don't know it; the redirector then
contacts upstream caches via TCP and requests objects.
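
  To make the parent-selection idea concrete: in later squid.conf syntax
(a sketch only - the cache names and domain list are hypothetical, and my
box actually does this mapping in front of Squid), pinning URL lists to
particular parents with no ICP and proxy-only behavior looks roughly like:

  # Parents only; icp-port 0 plus no-query disables ICP entirely.
  # proxy-only keeps the redirector from storing copies itself.
  cache_peer cache1.example.net parent 3128 0 no-query proxy-only
  cache_peer cache2.example.net parent 3128 0 no-query proxy-only

  # Hypothetical URL list: .com traffic goes to cache1, everything
  # else (and cache1's overflow) falls back to cache2.
  acl comsites dstdomain .com
  cache_peer_access cache1.example.net allow comsites
  cache_peer_access cache2.example.net allow all

  # Never fetch directly; always go through a parent.
  never_direct allow all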

  The beauty is that this works with ANY cache, and it handles failures of
caches quite gracefully (though it does not understand load or latency -
another "SMOP", a Simple Matter Of Programming, that didn't get taken care
of). Additionally, no client configuration is necessary, and NO
configuration of caches or your local network is necessary. You give the box
an IP address (for it to make outgoing connections) and stick it in the line
of traffic - presto!
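
  For anyone who wants to try just the interception half on a stock Linux
box (not my bridge code, which isn't released): on the newer 2.1-series
kernels the rule looks roughly like the ipchains sketch below; 2.0 kernels
do the same thing with ipfwadm. The Squid port (3128) is just the default -
treat this as an assumption-laden sketch, not my exact setup.

  # Sketch: redirect all inbound port-80 TCP to the local Squid on 3128.
  # Requires a kernel built with transparent-proxy support, and a Squid
  # configured to accept transparently redirected requests.
  ipchains -A input -p tcp -d 0.0.0.0/0 80 -j REDIRECT 3128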

  I'm using PC104+ hardware (a P200 with a 2-port SMC 100BaseT Ethernet
card), so this all fits into a box the size of a desk phone. It runs off a
flash disk, so there are no spinning or mechanical parts. The whole system
fits into about 4MB of space, give or take, in its current form. Total cost
to manufacture: ~$1500 in a crude box (a custom box would be better, but
that's for later).

  What I had also wanted to do (and did design) was a board that does
failover based on a watchdog timer. Since this system runs in bridge mode,
one could (in theory) mechanically connect the two Ethernets and there would
be no interruption of data. A set of 4PDT mercury-wetted relays, linked to
the watchdog, would handle this process. In the event of a hardware failure,
the system would instantly go to "passthrough" mode. You'd lose current
connections, but all future links would go through unmolested. There is
still the "single point of failure" problem, but this may be acceptable to
smaller ISPs, who have lots of those anyway. This failover board would
increase the price by $200 or so.
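
  The software side of that is tiny. A minimal sketch of the userland half,
assuming a conventional Linux /dev/watchdog-style driver wired to the relay
board (the device name and one-second interval are assumptions; on my board
design the relays hang off the timer in hardware):

  /* Sketch: pet a hardware watchdog from userland. If this process
   * (or the kernel, or the whole box) dies, the timer expires and the
   * relays drop into passthrough. /dev/watchdog is the conventional
   * Linux interface; the relay wiring is specific to the board. */
  #include <fcntl.h>
  #include <unistd.h>

  int main(void)
  {
      int fd = open("/dev/watchdog", O_WRONLY);
      if (fd < 0)
          return 1;             /* no watchdog driver loaded */

      for (;;) {
          write(fd, "\0", 1);   /* any write resets the countdown */
          sleep(1);             /* must beat the hardware timeout */
      }
      /* never reached: on many drivers, exiting without a clean
       * shutdown leaves the timer armed, which is what we want */
  }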

  Before you start barraging me with "it won't work because..." problems,
remember that this is a leaf-node box and not a core redirector. My planned
throughput for total traffic is a T3, and of that, only about 30Mbps would
be web. This is for end-user dial or leased lines - it's not an accelerator
for web farms. I'm not proposing this as the ultimate solution - it's a
stopgap, and a cheap one at that.

  Preliminary tests have shown that the bridge code on the Linux box can
handle ~70Mbps of raw traffic and at least 7Mbps of HTTP traffic from
clients (aggregate to/from) - probably much more, but 7Mbps was the limit of
the testbed at the time (e.g., 10Mb Ethernet starts crapping out past that
point).

  Anyway, my hope for this post is that lightbulbs will go off over someone
else's head. I'm still convinced this is a good idea, but I also know it's
going to be some time before I get around to this project again, and it's
needed Right Now. Anyone interested in doing some programming? :)

JT