Re: [squid-users] All url_rewriter processes are busy x Too many open files

From: Marcio Augusto Stocco <[email protected]>
Date: Thu, 3 Apr 2008 09:15:09 -0300

On Wed, Apr 2, 2008 at 1:18 AM, Amos Jeffries <squid3@treenet.co.nz> wrote:

> > Is there any way to increase SQUID_MAXFD from 8192 to 65536, so I can
> > try using the suggested number of url_rewriter processes?
> >
>
> Squid 2.6: --with-maxfd=65536
> Squid 3.x: --with-filedescriptors=65536

At the time I was not sure this would work, but I recompiled Squid
with this option and it is working now.
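For reference, a minimal sketch of that rebuild (the ulimit step and the
restart commands are assumptions about a typical setup, not taken from the
thread; adjust paths and any extra ./configure flags to your own build):

```shell
# Raise the OS per-process fd limit first, or the compiled-in
# limit alone will not help (run as root, or set it in limits.conf):
ulimit -HSn 65536

# Rebuild with the larger compiled-in fd table:
./configure --with-maxfd=65536          # Squid 2.6
# ./configure --with-filedescriptors=65536   # Squid 3.x equivalent
make && make install

# Restart so the new binary takes over:
squid -k shutdown
squid
```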

> For our info, you say you are handling thousands of users;
> and what release of squid is it?

Gentoo Linux 2007.0
Kernel 2.6.20.14
TProxy 2.0.6
Squid 2.6.STABLE17

> what request/sec load is your squid maxing out at?

   Number of clients accessing cache: 3493
   Average HTTP requests per minute since start: 1882.1
   client_http.requests = 367.345547/sec
   Maximum number of file descriptors: 16384
   Largest file desc currently in use: 13115
   Number of file desc currently in use: 12885
   Available number of file descriptors: 3499
   cpu_usage = 20.541035%
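(These figures come from the cache manager's "info" page, e.g. via
squidclient mgr:info. As a quick sanity check of the numbers above, the
"Available" line is just the maximum minus the descriptors in use:)

```shell
# Cache-manager fd figures quoted above (Squid 2.6, after the rebuild
# the compiled-in maximum here is 16384):
max=16384
in_use=12885
echo $((max - in_use))   # prints 3499, matching "Available number of file descriptors"
```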

> Please use Squid 2.6STABLE19 or 3.0STABLE4

I had lots of problems matching kernel, Squid and TProxy versions, but
I will try to upgrade to 2.6.STABLE19.

Thanks for your help,
Marcio.
Received on Thu Apr 03 2008 - 06:15:18 MDT