Re: FD leak, except, not really

From: Apiset Tananchai <[email protected]>
Date: Tue, 16 Jun 1998 14:35:11 +0700 (ICT)

On 15 Jun 1998, Michael O'Reilly wrote:

> Eric Stern <estern@packetstorm.on.ca> writes:
> > I noticed this strange thing happening to my squid1.2beta22 recently.
> >
> > File descriptor usage for squid:
> > Maximum number of file descriptors: 3000
> > Largest file desc currently in use: 1007
> > Number of file desc currently in use: 116
> > Available number of file descriptors: 2884
> > Reserved number of file descriptors: 100
> >
> > The # in use is staying pretty stable, but the "Largest file desc
> > currently in use" keeps going up, like it is not reusing FD #'s. It will
> > keep going up until it reaches the max, then squid will jump the reserved
> > number way up, and things stop working, requiring a restart.
>
> Yes, seeing the same thing. Squid reports ~450 fd's used, the kernel
> reports closer to 2500. (in fact the squid eventually dies due to
> running out of FDs).

Looks like I'm seeing the same thing here. :) Squid now reports ~540 fds
in use, but 'ls /proc/<squid id>/fd | wc -l' reports ~2970. Response time
is terrible (refreshing cachemgr.cgi takes 7-10 secs).
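The check above can be sketched as a small shell snippet; a minimal sketch assuming a Linux /proc filesystem (it uses /proc/self here as a stand-in for squid's PID, which you would substitute in practice):

```shell
# Count a process's open file descriptors via /proc (Linux).
# In practice, replace "self" with squid's PID,
# e.g. pid=$(pidof squid) and then ls /proc/$pid/fd.
fd_count=$(ls /proc/self/fd | wc -l)
echo "open fds: $fd_count"
```

Comparing this kernel-side count against squid's own "Number of file desc currently in use" is what exposes the discrepancy.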

As of now, squid reports 2979 as the largest file desc currently in use.
I'll see if it ever reaches 3000 and whether squid dies.

I'm running squid-1.2beta2[02] on Linux 2.0.34 with the 3000fd patch,
on a PII 300MHz with 384MB RAM; squid now uses ~280MB.

--
aet
Received on Tue Jun 16 1998 - 00:25:54 MDT

This archive was generated by hypermail pre-2.1.9 : Tue Dec 09 2003 - 16:40:43 MST