Re: Can I *not* have an on-disk cache?

From: Clifton Royston <[email protected]>
Date: Tue, 13 Jul 1999 11:14:25 -1000 (HST)

Scott Hess writes:
> > Or optimize the algorithms for main RAM storage - I admit I'm still
> > shaken by the revelation that the code for deciding what Squid will
> > keep cached in main RAM is entirely and admittedly sub-optimal. I
> > think that explains a lot of performance bottlenecks there.
>
> What you'd really want in this case is not a RAM-optimized cache, but
> a RAM-optimized accelerator. Everything I've ever heard or seen about
> web access patterns says that temporal locality is not nearly good
> enough to make a RAM-based cache worthwhile, except for very specific
> access patterns. In a CPU cache, you expect to hit the cache in excess
> of 90% of the time - with a Squid cache, you're talking more like 30%
> of the time.

I think this has been the underlying assumption since the days of very
limited main RAM, but I'm not sure it really holds any longer. The
reason is that I've read a number of reports of people getting a
significant hit rate (claims of up to 40+%) with a couple hundred MB
of disk devoted to their Squid cache. I've got 512MB of RAM in the
machine I'm testing on, and 256MB is about the smallest RAM size we're
likely to buy new servers with. If one can get a significant hit rate
with 200MB of disk, one ought to get exactly the same hit rate - but
deliver it many times faster - with 200+MB of RAM.

A thought experiment: assume one gets x% hits with a 400MB cache, and
y% with a 40GB cache. In round numbers, off the cuff, assume x=25% and
y=50% (i.e. one can double the hit rate only by increasing the cache
size 100 times). Now configure one Squid with all its cache in 400MB
of RAM, and have it fetch misses from a parent Squid with 40GB of disk
and minimal RAM; assume network delays between the two are negligible.
This combination ought to deliver x% of the hits at very high speed,
and the remaining y-x% at more typical, lower (disk-driven) speeds; in
fact, the *average* delivery rate over the y% total hits ought to be
nearly twice that of the all-disk cache, because of the x% of hits
which are delivered many times faster.
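
To make the arithmetic concrete, here's a quick back-of-the-envelope
sketch in Python. The 2ms RAM-hit and 50ms disk-hit service times are
purely illustrative assumptions, not measurements from any real box:

# Rough check of the two-tier thought experiment.  Assumed numbers:
# x = 25% of requests hit the 400MB RAM tier, y = 50% hit somewhere
# (RAM tier or the 40GB disk parent); service times are made up.
x, y = 0.25, 0.50
t_ram, t_disk = 0.002, 0.050     # assumed per-hit service times (sec)

# Average service time over all y hits in the two-tier setup:
t_two_tier = (x * t_ram + (y - x) * t_disk) / y

# Same y hits served entirely from disk:
t_disk_only = t_disk

print("disk-only hit time: %.1f ms" % (t_disk_only * 1000))
print("two-tier hit time : %.1f ms" % (t_two_tier * 1000))
print("speedup on hits   : %.2fx" % (t_disk_only / t_two_tier))

With those made-up numbers it prints a speedup of about 1.9x on the
hits, which is where "nearly twice" comes from.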

I would then argue that a single Squid which effectively does two-tier
caching from a large amount of RAM down to disk is just the degenerate
case of the above, and if the aging/purging algorithms for RAM use are
properly implemented, it should perform even faster than the two-box
scenario, because it has less communication and other general
overhead.
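
In case it helps to see what I mean by two-tier caching in the small,
here's a minimal sketch (hypothetical class and names, plain LRU in
both tiers just to keep it short - not Squid's actual replacement
policy):

from collections import OrderedDict

class TwoTierCache:
    """Sketch of a RAM-over-disk cache: look in the small fast tier
    first, fall back to the large slow tier, and promote hits so the
    hot set migrates into RAM."""

    def __init__(self, ram_slots, disk_slots):
        self.ram = OrderedDict()      # small, fast tier
        self.disk = OrderedDict()     # large, slow tier
        self.ram_slots = ram_slots
        self.disk_slots = disk_slots

    def get(self, key):
        if key in self.ram:           # RAM hit: the cheap case
            self.ram.move_to_end(key)
            return self.ram[key]
        if key in self.disk:          # disk hit: promote into RAM
            return self._put_ram(key, self.disk.pop(key))
        return None                   # miss: caller fetches from origin

    def put(self, key, value):
        self._put_ram(key, value)

    def _put_ram(self, key, value):
        self.ram[key] = value
        self.ram.move_to_end(key)
        if len(self.ram) > self.ram_slots:
            old_key, old_val = self.ram.popitem(last=False)
            self.disk[old_key] = old_val        # demote coldest object
            if len(self.disk) > self.disk_slots:
                self.disk.popitem(last=False)   # evict from disk tier
        return value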

Am I missing something? (Note, by the way, that I would not expect
Polygraph to report this - because its tests are deliberately
constructed to exercise the entire disk cache; i.e. the queries do not
follow a *truly* Zipf-distributed pattern, which would be harder to
generate.)
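
To illustrate why the shape of the request pattern matters so much,
here's a rough sketch (the Zipf exponent of 1.0 and the million-object
catalogue are assumptions purely for illustration): under a Zipf-ish
popularity law, a tiny fraction of the objects draws a large fraction
of the requests, which is exactly what a small RAM tier would exploit.

# Fraction of requests landing on the most popular objects when object
# popularity follows a Zipf law with exponent s; catalogue size and
# exponent are made up for illustration, not taken from a real trace.
def harmonic(n, s=1.0):
    return sum(1.0 / (rank ** s) for rank in range(1, n + 1))

N = 1000000                       # hypothetical distinct cacheable objects
H_N = harmonic(N)

for top in (1000, 10000, 100000):
    share = harmonic(top) / H_N   # share of requests to the top objects
    print("top %6d objects (%.2f%% of catalogue) draw %4.1f%% of requests"
          % (top, 100.0 * top / N, 100.0 * share))

With those assumptions the top 0.1% of the objects draws roughly half
of all the requests - so a flat, whole-cache-exercising workload would
understate what a small, fast tier can do.
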
  -- Clifton

-- 
 Clifton Royston  --  LavaNet Systems Architect --  cliftonr@lava.net
        "An absolute monarch would be absolutely wise and good.  
           But no man is strong enough to have no interest.  
             Therefore the best king would be Pure Chance.  
              It is Pure Chance that rules the Universe; 
          therefore, and only therefore, life is good." - AC