Re: Features

From: Alex Rousskov <[email protected]>
Date: Mon, 24 Nov 1997 09:51:43 -0600 (CST)

Hi there,

        Just a few numbers for the "compression" discussion. When
preparing for the NLANR Caching Workshop, I did some measurements using
logs from primary servers and NLANR root proxies.

        As far as I remember, if you compress _every_ file with "gzip -9",
you get about 20% savings in bandwidth; "gzip -6" gives about 18%. For
each file, the minimum of the compressed and uncompressed sizes is used,
so compression is never allowed to make a file larger. In reality, since
about 20% of the traffic through proxies is not cacheable, one would see
savings of around 10-15%. There was a research study at DEC that
reported similar numbers, I think (they also analyzed "delta-compression").
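
For concreteness, a minimal Python sketch of that per-file measurement
might look like the following; the "./samples" directory is just a
placeholder for a set of sample response bodies, and zlib stands in for
gzip:

    import os
    import sys
    import zlib

    # Sketch of the measurement described above: compress every file, take
    # the minimum of compressed and uncompressed sizes, and report savings.
    # The "./samples" default is a placeholder; zlib stands in for gzip.
    def savings(root, level):
        total = saved = 0
        for dirpath, _, names in os.walk(root):
            for name in names:
                with open(os.path.join(dirpath, name), "rb") as f:
                    data = f.read()
                best = min(len(data), len(zlib.compress(data, level)))
                total += len(data)
                saved += len(data) - best
        return 100.0 * saved / total if total else 0.0

    if __name__ == "__main__":
        root = sys.argv[1] if len(sys.argv) > 1 else "./samples"
        for level in (9, 6):  # roughly "gzip -9" and "gzip -6"
            print("level %d: %.1f%% savings" % (level, savings(root, level)))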

        BTW, I would be very careful about estimating the average
compression ratio from a single site (e.g., www.yahoo.com). A particular
site may have an unusually good or bad ratio because of its content
properties. A better test would be to "gzip -c" your cache swap content
at night. That is still not perfect, because we are interested in what
is transferred, not in what is cached.
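
A minimal sketch of that nightly test, assuming a Squid-style cache
directory (the /var/spool/squid default is a guess, and swap files also
carry some metadata, so the result is only approximate):

    import os
    import sys
    import zlib

    # Walk a cache directory, compress each swap file, and print the
    # overall compressed/original ratio.  The default path is a guess;
    # point it at your actual cache_dir.
    def cache_ratio(cache_dir):
        original = compressed = 0
        for dirpath, _, names in os.walk(cache_dir):
            for name in names:
                try:
                    with open(os.path.join(dirpath, name), "rb") as f:
                        data = f.read()
                except OSError:
                    continue  # skip unreadable or vanished files
                original += len(data)
                compressed += len(zlib.compress(data, 9))
        return compressed, original

    if __name__ == "__main__":
        c, o = cache_ratio(sys.argv[1] if len(sys.argv) > 1 else "/var/spool/squid")
        if o:
            print("ratio %.2f, savings %.1f%%" % (c / o, 100.0 * (o - c) / o))

Running it from cron during the night keeps the CPU cost away from peak
hours.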

Alex.

P.S. _If_ we really need good compression, in most cases it could be done
"statically" during low-load periods (usually at night), when disk, CPU,
and memory are idle. That way, it does not matter how slow the compression
algorithm is. (This implies storing compressed files rather than
compressing docs on the fly.)
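
To make the idea concrete, here is a hypothetical sketch of such a
nightly pass; the directory layout and the ".gz" sibling convention are
assumptions, and the point is only that the expensive compression
happens off the request path:

    import gzip
    import os
    import sys

    # Hypothetical nightly pass for "static" compression: compress each
    # stored object once, off the request path, and keep the .gz copy only
    # when it is actually smaller than the original.
    def compress_store(root):
        for dirpath, _, names in os.walk(root):
            for name in names:
                if name.endswith(".gz"):
                    continue  # compressed on a previous pass
                path = os.path.join(dirpath, name)
                with open(path, "rb") as f:
                    data = f.read()
                packed = gzip.compress(data, compresslevel=9)  # slow is fine at night
                if len(packed) < len(data):
                    with open(path + ".gz", "wb") as out:
                        out.write(packed)

    if __name__ == "__main__":
        compress_store(sys.argv[1] if len(sys.argv) > 1 else "./objects")

The server could then hand out the .gz copy to clients that accept gzip
and fall back to the original otherwise.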