Features

From: Gregory Maxwell <[email protected]>
Date: Sun, 23 Nov 1997 19:54:08 -0500 (EST)

Haven't been on the squid lists for a while (got unsubscribed automagically a while back), so I apologize if any of this has been discussed before.

Some ideas for 1.2:

reload_into_ims for 1.1. This is really helpful for me; many users are
unwilling to take the reload button lightly. It would be nice if this
feature were further enhanced so that more than X reloads by client C in Y
time units cause a real reload (for servers with broken IMS). This could be
taken even further: Z reloads in Y time units would force going direct
(in case the hierarchy is stale).
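
A minimal sketch of that thresholding logic, in C since that's what squid
is written in. The X/Y/Z values and every name here are made up for
illustration; nothing like this exists in squid.conf:

    #include <time.h>

    /* Illustrative thresholds -- not real squid.conf directives. */
    #define MAX_IMS_RELOADS 3     /* X: reloads before honouring a real reload */
    #define MAX_ANY_RELOADS 6     /* Z: reloads before going direct            */
    #define WINDOW_SECS     300   /* Y: length of the counting window          */

    enum reload_action { RELOAD_AS_IMS, RELOAD_FOR_REAL, RELOAD_DIRECT };

    struct client_state {
        time_t window_start;      /* when this client's window opened */
        int reloads;              /* reloads seen within the window   */
    };

    /* Decide how to treat one reload request from a given client.
     * The caller keeps a per-client table; this just applies the
     * X/Y/Z thresholds. */
    enum reload_action
    classify_reload(struct client_state *c, time_t now)
    {
        if (now - c->window_start > WINDOW_SECS) {
            c->window_start = now;   /* window expired: start over */
            c->reloads = 0;
        }
        c->reloads++;
        if (c->reloads > MAX_ANY_RELOADS)
            return RELOAD_DIRECT;    /* hierarchy may be stale      */
        if (c->reloads > MAX_IMS_RELOADS)
            return RELOAD_FOR_REAL;  /* origin may have broken IMS  */
        return RELOAD_AS_IMS;        /* normal case: downgrade to IMS */
    }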

Also, options to ignore or multiply HTTP/1.1 Cache-Control variables would
be great. For small in-house caches with SLOW connections this is a must
(I've got my home refresh rules set to keep many file types for a week
before IMSing; on the rare occasion we're concerned about freshness we
just hit reload to do an IMS). Hard drives are cheap, T1s cost tons!
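
The knob I have in mind is roughly this; the policy struct and names are
invented, but it shows the two modes (ignore the header outright, or
stretch the server's lifetime):

    /* Hypothetical per-cache policy; neither field exists in squid. */
    struct cc_policy {
        int ignore_cache_control;   /* nonzero: pretend the header was absent */
        double ttl_multiplier;      /* e.g. 7.0 stretches a day toward a week */
    };

    /* Freshness lifetime (seconds) the cache should actually honour,
     * given the server-supplied max-age (-1 if none was sent). */
    long
    effective_ttl(const struct cc_policy *p, long max_age, long default_ttl)
    {
        if (p->ignore_cache_control || max_age < 0)
            return default_ttl;                      /* fall back to refresh rules */
        return (long)(max_age * p->ttl_multiplier);  /* scale the server's hint    */
    }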

Compressed inter-cache connections. I've made a lame hack of this for my
home squid: I use ssh to set up a non-encrypted compressed pipe and use
cache_host_acl to direct .html, .txt, and .htm files through it. This
speeds up page loads GREATLY! This was one of the features I was working
on when I was working on the cache for the Mnemonic browser. Both
gzip-style and LZO-type compression methods would work well here
(especially now with persistent connections, since the compressor no
longer takes the 'hit' at the start).
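
To give a feel for the win, here's a throwaway zlib demo (link with -lz)
that one-shots a scrap of markup through DEFLATE; a real inter-cache link
would keep one streaming compressor per persistent connection so the
dictionary stays warm across objects:

    #include <stdio.h>
    #include <string.h>
    #include <zlib.h>

    int
    main(void)
    {
        const char *body =
            "<html><head><title>example</title></head>"
            "<body><p>Markup like this compresses extremely well.</p></body></html>";
        uLong src_len = (uLong)strlen(body);
        uLongf dst_len = compressBound(src_len);
        Bytef dst[1024];

        /* One-shot DEFLATE of a response body. */
        if (compress(dst, &dst_len, (const Bytef *)body, src_len) != Z_OK) {
            fprintf(stderr, "compress failed\n");
            return 1;
        }
        printf("%lu -> %lu bytes (%.0f%% of original)\n",
               (unsigned long)src_len, (unsigned long)dst_len,
               100.0 * dst_len / src_len);
        return 0;
    }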

Improved object dumping rules: when squid runs out of disk swap space, it
dumps the oldest loaded objects. So if I have an object that gets loaded,
gets lots of hits, and whose IMS always says it has not changed, it will
be flushed out before an object that gets far fewer hits and comes up
stale every time there is an IMS. This is not good, especially for smaller
caches. With the Mnemonic cache I came up with a complicated formula for
computing the goodness of objects, but really, just toss the objects that
are both the oldest and have the fewest HITS, rather than simply the oldest.
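
One concrete (and deliberately simple) way to blend the two. This is not
the Mnemonic formula, just an illustration where each hit buys the object
a fixed amount of extra grace:

    #include <time.h>

    struct objmeta {
        time_t last_ref;    /* last hit (or load) time */
        int hits;           /* lifetime hit count      */
    };

    #define HIT_BONUS_SECS 3600     /* tunable: grace bought per hit */

    /* Higher is better.  Hits shave time off the effective age, so a
     * much-hit old object outlives a never-hit slightly-younger one. */
    double
    goodness(const struct objmeta *o, time_t now)
    {
        double age = difftime(now, o->last_ref);
        return -(age - (double)o->hits * HIT_BONUS_SECS);
    }

    /* Eviction then walks the index and tosses the lowest-goodness
     * object instead of simply the oldest one. */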

In the Mnemonic cache I had also decided on implementing a 'blacklist'.
A blacklisted object is fetched proxy-only.
Every time an object is found stale when IMSing (or gets purged because
it has low goodness), the cache keeps track of that. If, over a while, an
object's stale/good ratio is VERY poor (like a reload every time), it
gets added to the blacklist. The blacklist entry records the time it
entered the blacklist and how many times the rule has been hit. The
blacklist has a settable fixed size and purges out the low-hit, old
entries. This probably wouldn't be of much benefit to a decent-sized
cache, but on an undersized cache it would most likely be of great
benefit. The cost of implementing this should be tiny.
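
Sketched out, it's little more than a fixed array plus a ratio test. The
sizes and thresholds below are guesses, and the per-object stale/total
counters are assumed to live alongside the cached object's metadata:

    #include <string.h>
    #include <time.h>

    #define BLACKLIST_SIZE 256      /* the settable fixed size           */
    #define MIN_SAMPLES    8        /* don't judge on too few IMS checks */
    #define STALE_RATIO    0.9      /* "VERY poor": stale 90%+ of checks */

    struct bl_entry {
        char url[256];
        time_t added;               /* when it entered the blacklist */
        int hits;                   /* times this entry has matched  */
    };

    static struct bl_entry blacklist[BLACKLIST_SIZE];
    static int bl_count;

    /* Called after each IMS check completes; stale/total are that
     * object's lifetime counters. */
    void
    maybe_blacklist(const char *url, int stale, int total, time_t now)
    {
        int i, victim;

        if (total < MIN_SAMPLES || (double)stale / total < STALE_RATIO)
            return;
        if (bl_count == BLACKLIST_SIZE) {
            /* Full: purge the lowest-hit entry, breaking ties by age. */
            victim = 0;
            for (i = 1; i < BLACKLIST_SIZE; i++)
                if (blacklist[i].hits < blacklist[victim].hits ||
                    (blacklist[i].hits == blacklist[victim].hits &&
                     blacklist[i].added < blacklist[victim].added))
                    victim = i;
            blacklist[victim] = blacklist[--bl_count];
        }
        strncpy(blacklist[bl_count].url, url, sizeof blacklist[0].url - 1);
        blacklist[bl_count].url[sizeof blacklist[0].url - 1] = '\0';
        blacklist[bl_count].added = now;
        blacklist[bl_count].hits = 0;
        bl_count++;
    }

    /* Lookups that match a blacklisted URL bump its hit counter and
     * force the fetch proxy-only (never stored). */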

If anyone is interested in my Mnemonic cache document, I'm sure I can dig
it up. It's a real mess (I wrote most of it while without sleep), but I
think it has some good ideas like the ones above. It's mostly applicable
to the small cache of a web browser, but some of its ideas could be useful
in squid (though I might have already covered the good ones above! :) ).

:)