Hi,
I have three memory-only caches set up with 7 GB of memory each (the
machines have 12 GB of physical memory each). Throughput is fairly
high, and this setup works well at reducing the number of requests for
smaller files hitting my backend storage, with lower latency than a
disk-and-memory solution. However, the caches on the machines fill up
every 2-3 days, and Squid's CPU usage subsequently goes up to 100%
(these are all dual-SMP machines, and the system load average remains
around 0.7). File descriptors, the number of connections, and swap all
look fine when the CPU usage spikes, so the culprit is most likely
cache replacement.
I am using heap GDSF as the replacement policy, and the maximum object
size in memory is set to 96 KB. I am running squid-2.6.STABLE6-4.el5
on Linux 2.6.
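For reference, the relevant squid.conf directives look roughly like
this (a sketch: the null cache_dir path and the exact cache_mem value
are illustrative, and the null store requires Squid to be built with
it enabled):

    # no disk store; keep everything in memory
    cache_dir null /tmp
    # memory for the in-memory object cache (illustrative value)
    cache_mem 7168 MB
    # evict by Greedy-Dual Size Frequency
    memory_replacement_policy heap GDSF
    # only hold objects up to 96 KB in memory
    maximum_object_size_in_memory 96 KB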
Is there anything I can do to reduce the cost of cache replacement,
apart from stopping and restarting Squid every day?
J
Received on Mon Nov 26 2007 - 04:39:30 MST