Hi Adrian,
What would cause it to fail only after a specific time, though? If the
cache_mem is already full and it's using the drives, I would have thought it
would fail immediately?
Also, there are no log messages about failures or anything...
Thanks
Dave
-----Original Message-----
From: Adrian Chadd [mailto:adrian@creative.net.au]
Sent: Thursday, November 08, 2007 8:05 PM
To: Dave Raven
Cc: 'Adrian Chadd'; squid-users@squid-cache.org
Subject: Re: [squid-users] Squid Performance (with Polygraph)
On Thu, Nov 08, 2007, Dave Raven wrote:
> Hi Adrian,
> I've got diskd configured to be used for objects over 500k - the
> datacomm run is all 13K objects so essentially it's doing nothing.
> Interestingly though I see the same stuff if I use ufs only, or just
> diskd.
Ok.
> I am using kqueue - I will try to get you stats on what that shows. If I
> push it too far (1800 RPS) I can see squid visibly failing - error
> messages, too much drive load etc. But at 1200 RPS it runs fine for > 10
> minutes - I'd really like to get this solved as I think there is potential
> for a lot of performance.
>
> I've just run a test now at 300RPS and it failed after 80 minutes -- very
> weird...
Well, firstly rule out the disk subsystem. Configure a null cache_dir and
say 128 MB of RAM. Run Squid and see if it falls over.
There's plenty of reasons the disk subsystem may be slow, especially if the
hardware chipsets are commodity in any way. But Squid won't get you more
than about 80-120 req/sec out of commodity hard disks, perhaps even less if
you start trying to use modern enormous disks.
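For reference, that memory-only test might be sketched in squid.conf roughly
as below (assuming a Squid build that includes the null storage module; the
path argument to the null cache_dir is required by the parser but unused):

```conf
# Memory-only test: take the disks out of the picture entirely,
# so any failure that remains cannot be the disk subsystem.
cache_mem 128 MB

# The null store keeps no objects on disk; the directory argument
# is a placeholder the null store ignores.
cache_dir null /tmp
```

If Squid still falls over at the same request rate with this configuration,
the disks are exonerated and the bottleneck is elsewhere (network stack,
event loop, CPU).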
Adrian
--
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -
Received on Thu Nov 08 2007 - 14:26:46 MST