Re: Features

From: Gregory Maxwell <[email protected]>
Date: Mon, 24 Nov 1997 09:37:50 -0500 (EST)

On Tue, 25 Nov 1997, Donovan Baarda wrote:

> On Mon, 24 Nov 1997, Gregory Maxwell wrote:
>
> > On Mon, 24 Nov 1997, Donovan Baarda wrote:
> >
> [snip]
> > > If squid could be hacked to store and forward compressed objects, you
> > > would get both benefits with reduced CPU overhead. Just an idea...
> > >
> > > ABO
> >
> > Personally, I don't think that full object compression would be that hot..
> > In a big cache it wouldn't be needed.. In a small cache, even with it,
> > objects won't stay around long enough to do any good.. Remember, that
> > compression is always more expensive than decompression..
> >
> > I think that link compression would be of a bigger advantage.. :)
> >
> Storing the objects compressed and forwarding them, without decompressing,
> to other proxies (or clients if they can handle it) gives the same
> benefits as link compression, plus other benefits:
>
> 1) the object is only compressed once for the first direct fetch.
>
> 2) In a multi-level hierarchy it is only compressed once and passed without
> further compression or decompression between proxies.
>
> 3) the object is only decompressed before sending to the final client, or
> even better, by the client itself.
>
> 4) reduced disk/memory usage
>
> You said yourself that compression is more expensive than decompression.
> Link compression requires that the object be compressed (and decompressed)
> for each hit.
>
> As for how you would implement it... I dunno :-)

Wow.. That is impressive.. I didn't think of it that way..

If we only use LZO then this would be quite good...

An option would be added: "Store compressed"
Then objects would only be decompressed before going out on a
non-compressed pipe..

The compression would only be done for text files..
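
Roughly what I have in mind, as a sketch only -- the struct and function
names below are made up for illustration, not real Squid internals. The
idea is: only bother compressing text types before storing, and only
decompress when the next hop (client or peer proxy) can't take the
compressed object:

/* Sketch only -- hypothetical names, not actual Squid code. */
#include <string.h>

struct stored_object {
    const char *content_type;   /* e.g. "text/html" */
    int is_compressed;          /* stored compressed on disk? */
};

/* Only text objects are worth compressing before storing. */
static int should_store_compressed(const struct stored_object *obj)
{
    return strncmp(obj->content_type, "text/", 5) == 0;
}

/* Decompress only if the receiver can't handle the compressed
 * encoding; otherwise forward the stored object as-is. */
static int must_decompress_for(const struct stored_object *obj,
                               int receiver_accepts_compressed)
{
    return obj->is_compressed && !receiver_accepts_compressed;
}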

Here's a benchmark:
yahoo.tar is a partial recursive wget of Yahoo, HTML only (actually two
small GIFs snuck in there..). System is Linux pre2.1.66 on a dual 166/MMX
with 64 MB of RAM. yahoo.tar is probably entirely in cache.

yahoo.tar      42301440 bytes  (uncompressed)
yahoo.tar.lzo  11400223 bytes  lzop -1 (fast compression), 6s on one CPU of
                               the dual 166/MMX
yahoo.tar.lzo  11226695 bytes  lzop -6, 14s
yahoo.tar.lzo   8287105 bytes  lzop -9 (slowest), 2m24s on the same machine
yahoo.tar.lzo  decompression (compression level irrelevant): 1.5s to
               /dev/null, one CPU of the dual 166/MMX
yahoo.tar.gz    8994035 bytes  gzip -1 (fastest), 22s
yahoo.tar.gz    7122507 bytes  gzip -9 (slowest), 1m11.5s
yahoo.tar.gz   decompression: 10.5s
yahoo.tar.bz2   5118748 bytes  bzip2 -9 (default), 4m55s
yahoo.tar.bz2  decompression: 48.1s

Bzip2 was just included as a reference; it's much too slow for our use, but
its compression is the best you can find...

I direct your attention to LZO's decompression speed.. On a single CPU of
my dual 166 it's pulling about 28.2 MB/s (42301440 bytes / 1.5s).. That's
far faster than 100 Mbit/s Ethernet, which tops out around 12.5 MB/s..

Gzip gets better compression and isn't a lot worse than LZO.. But its
decompression is about 7x slower (10.5s vs 1.5s).. (zlib = gz)

LZO is the only one here capable of saturating 100 Mbit/s Ethernet on
decompression speed.. Also, LZO has much lower memory requirements..
(Decompression needs something like 10 bytes, as I recall)..

I would suggest that Squid use LZO.. On the server side it can select the
compression level depending on its CPU load.. Even at the worst it's
getting about 4:1...
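
To give an idea of what the compression call might look like, here is a
rough sketch against the LZO library's lzo1x interface as I understand it
(lzo1x_1_compress for the fast level, lzo1x_999_compress for the best
ratio); the load_is_high() policy is made up, and the header name and
exact types may differ between LZO versions:

/* Sketch: pick an LZO compression level based on CPU load. */
#include <stdlib.h>
#include <lzo1x.h>

static int load_is_high(void) { return 0; }   /* placeholder policy */

/* Compress 'in' (in_len bytes) into a malloc'd buffer; sets *out_len.
 * Returns the buffer, or NULL on failure. */
static unsigned char *compress_object(unsigned char *in, lzo_uint in_len,
                                      lzo_uint *out_len)
{
    /* LZO's suggested worst-case output size: in_len + in_len/16 + 64 + 3 */
    unsigned char *out = malloc(in_len + in_len / 16 + 64 + 3);
    lzo_voidp wrkmem = malloc(LZO1X_999_MEM_COMPRESS);  /* enough for either level */
    int r;

    if (!out || !wrkmem || lzo_init() != LZO_E_OK) {
        free(out); free(wrkmem);
        return NULL;
    }
    if (load_is_high())
        r = lzo1x_1_compress(in, in_len, out, out_len, wrkmem);    /* fast */
    else
        r = lzo1x_999_compress(in, in_len, out, out_len, wrkmem);  /* best ratio */
    free(wrkmem);
    if (r != LZO_E_OK) { free(out); return NULL; }
    return out;
}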

Earlier I was against complete compression... But now, I am completely for
it.. :)

Anyone interested in my solution for images? I've come up with a solution
for them too.. :)