Re: [squid-users] Sub-minute caching

From: Henrik Nordstrom <[email protected]>
Date: Sun, 4 Nov 2001 23:05:56 +0100

Sub-minute caching of content that cannot be revalidated only makes sense in
accelerators. But in accelerators it can make a lot of sense if you have a
handful of such pages that are accessed very frequently.

Another alternative to hacking the source is to have the origin server
provide a Last-Modified: <now> header, combined with a suitable Expires
header. This will completely bypass the sub-minute check in Squid.
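
As a rough sketch (the timestamps below are only illustrative), a reply
generated at 22:00:00 that should stay cacheable for 20 seconds could carry:

    HTTP/1.1 200 OK
    Date: Sun, 04 Nov 2001 22:00:00 GMT
    Last-Modified: Sun, 04 Nov 2001 22:00:00 GMT
    Expires: Sun, 04 Nov 2001 22:00:20 GMT

i.e. Last-Modified set to the moment the page was generated, and Expires set
a suitable number of seconds into the future.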

Also, make sure the clocks are properly in sync. Use NTP on all your servers
and accelerators. When you are dealing with sub-minute caching, even a few
seconds of clock drift can be quite significant.
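
If you want a quick sanity check of a box's clock (the host name here is
just a placeholder for whatever time source you use), ntpdate in query-only
mode reports the offset without touching the clock:

    ntpdate -q ntp.example.com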

As this content is dynamically generated, there may be other reasons why it
is not being cached besides the sub-minute limit. If you want help
diagnosing why, enable log_mime_hdrs in squid.conf and send two lines from
access.log for the same object where you think the second request should have
been a cache hit but was not.
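
In squid.conf this is a single directive (it defaults to off):

    log_mime_hdrs on

With that enabled, the request and reply headers are appended to each
access.log entry, so you can see which Expires/Cache-Control values actually
reach Squid.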

Regards
Henrik Nordström
Squid Hacker

On Sunday 04 November 2001 13.45, Troels Arvin wrote:
> Hello,
>
> At my work, we have been using Squid as an accelerating cache for ½ a
> year now, with great results.
>
> I would like to make Squid cache URLs which will time out within e.g. 20
> seconds. However, Squid seems to be programmed to forget about URLs
> which will not live for at least a minute.
>
> Therefore, I used this patch with STABLE1:
>
> --- squid-2.4.STABLE1-orig/src/refresh.c  Sun May 13 03:38:24 2001
> +++ squid-2.4.STABLE1/src/refresh.c       Sun May 13 03:38:50 2001
> @@ -327,7 +327,7 @@
>       * 60 seconds delta, to avoid objects which expire almost
>       * immediately, and which can't be refreshed.
>       */
> -    int reason = refreshCheck(entry, NULL, 60);
> +    int reason = refreshCheck(entry, NULL, 1);
>      refreshCounts[rcStore].total++;
>      refreshCounts[rcStore].status[reason]++;
>      if (reason < 200)
>
> The patch doesn't seem to work with STABLE2. I'm about to investigate
> the problem, but before I do that: is there a better way to obtain
> sub-minute caching with Squid?
>
> And before someone tells me that sub-minute caching is stupid: I don't
> think so. If the web site receives 20 requests per second for a given
> (potentially database-backed) URL, then a lot of Apache (and database)
> work may be saved if Squid caches the URL for, say, 20 seconds.
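>
> (To put a rough number on it: 20 requests/second for 20 seconds is about
> 400 client requests answered from the cache for every one request that
> actually reaches Apache and the database.)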
Received on Sun Nov 04 2001 - 15:17:20 MST
