Re: too many page faults

From: Kendall Lister <[email protected]>
Date: Thu, 6 Jan 2000 18:03:47 +1100 (EST)

On Wed, 5 Jan 2000, Clifton Royston wrote:

> On Wed, Jan 05, 2000 at 07:35:02PM -0600, Jay Wilson wrote:
> > I am running RH6.0 and Squid 2.3.DEVEL3. I have two machines set up as
> > siblings. Both are K6-300s w/128MB and a 13GB hard drive. I am noticing
> > a VERY large number of page faults. I read in the Squid FAQ that the
> > ratio of page faults to HTTP requests should be between 0.0 and 0.1 for
> > best performance, but my ratios hover around 0.35 or more. I have
> > cache_mem set to 24MB and cache_swap_low set to 60. What else can I do
> > to improve performance? I can see noticeable latency under heavy load.
>
> It sounds like the 128MB you have in there is just barely enough for
> the Squid process itself, and not for anything else, including the
> kernel, so it's swapping. Here I'm using Henrik's rule of thumb of
> Squid RAM = cache_mem + (8MB per GB of cache) = 24 + (8*13) = 128MB. I'd
> try sticking another 64MB in each machine as the simplest solution.
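
(For reference, that rule of thumb spelled out as a quick sanity check. This
is a rough sketch only: the 8MB-per-GB figure is an approximation, and it
assumes, as Clifton does, that essentially the whole 13GB disk is given over
to cache_dir:

    # Henrik's rule of thumb: resident Squid RAM is roughly cache_mem
    # plus about 8 MB of in-core index metadata per GB of disk cache.
    def estimated_squid_ram_mb(cache_mem_mb, cache_disk_gb):
        return cache_mem_mb + 8 * cache_disk_gb

    # With the poster's settings: cache_mem of 24 MB, 13 GB of disk cache.
    print(estimated_squid_ram_mb(24, 13))  # prints 128, all the installed RAM

so the Squid process alone is sized to eat all 128MB before the kernel and
everything else get a byte.)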

Your solution is effective, but surely the "simplest" solution would be to
reduce cache_mem by 16MB, which should free up plenty of RAM for the
operating system to run in. This can be done without even interrupting the
operation of the machines, and if it doesn't reclaim quite enough RAM,
_then_ go through the process of buying more and installing it.
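
Concretely, that is a one-line change in squid.conf plus a reconfigure,
which a running Squid picks up without a restart (the 8 MB value below just
reflects the 16MB reduction suggested above; tune it to your own setup):

    # in squid.conf, replacing the old "cache_mem 24 MB" line:
    cache_mem 8 MB

    # then tell the running Squid to re-read its configuration:
    % squid -k reconfigure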

--
 Kendall Lister, Systems Operator for Charon I.S. - kendall@charon.net.au
  Charon Information Services - Friendly, Cheap Melbourne ISP: 9589 7781
Received on Thu Jan 06 2000 - 11:26:55 MST
