[SQU] SMP & Performance when disk i/o is not an issue

From: Costas Tavernarakis <[email protected]>
Date: Mon, 18 Dec 2000 13:21:29 +0200

Hi all,

In my case, the hardware is speeding up disk i/o so much that
it's not the bottleneck with squid - CPU utilization is.

I run squid with plenty of hardware:
My company insists on running everything on Solaris/SPARC, so
my platform options are severely limited. I've grabbed a Sun 420R,
with 4x450MHz CPUs (4MB cache each) and 4GB RAM, and added a
Gigabit Etherchannel at 2Gbps and an external A3500FC for the
cache_dirs. This is an external RAID controller, with 128MB of
write-back battery-backed cache and 24 U2W 10Krpm SCSI disks,
communicating with the server over two FibreChannel loops.
I configured it as 10 two-disk RAID-1 drive groups, all
mounted noatime and used as 10 separate 18GB cache_dirs
for squid. I'm using async i/o for them.
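
(For reference, not from my actual config files: in a 2.4-era squid.conf
that cache_dir layout would look roughly like the lines below. Paths and
sizes are illustrative, and the store type depends on the build, "aufs"
with --enable-storeio=aufs on newer snapshots, plain "ufs" plus
--enable-async-io on older trees.)

    # one cache_dir per RAID-1 pair, each mounted noatime
    # (paths and Mbyte sizes illustrative, leaving headroom on the 18GB fs)
    cache_dir aufs /cache01 16384 16 256
    cache_dir aufs /cache02 16384 16 256
    cache_dir aufs /cache03 16384 16 256
    # ... and seven more like these, /cache04 through /cache10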

Using either a s2_-hno SourceForge snapshot or a s2_4 snapshot,
I get performance rates peaking at ~100 reqs/sec, with hit_svc
times increasing severely at peak hours. It's clear the system,
although it has plenty of hardware, cannot cope with the load
it receives.

It seems that disks never get even close to melting (at least,
this is what iostat reports, and also what my eyes see when
looking at the disk LEDs). Also, network bandwidth to the Internet
is not an issue (my 155Mbps link peaks at ~50% utilization).
System memory really is FAR from filling up (Solaris 8 has
improved a lot in that area), and my current bottleneck is
the CPU utilization of ONE of the system CPUs.
It seems squid gets to use only a little more than 25% of
the total system CPU time (on a 4-CPU system, that is 100% of
ONE CPU's time), never getting above 26% or so.

It seems that, since the disks are not melting, async i/o load
sharing over the CPUs is not much of an improvement.

Does anybody have any ideas about how to spread the CPU load
across the multiple CPUs? One idea is to use multiple squid
processes on different ports, using CARP to spread the load
between them. I don't know how good this approach is, because
this means memory segmentation, redundant ACL checks ...
Anything better?
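
To make the multi-process idea concrete, here is a very rough sketch of
what I have in mind (ports, paths and sizes are made up, and the exact
cache_peer spelling differs between versions: 2.x builds need
--enable-carp and use carp-load-factor=, later releases use a plain
carp option instead):

    # front-end instance: no cache_dirs of its own, CARP-hashes requests
    # across four back-end instances on the same box
    http_port 3128
    cache_peer 127.0.0.1 parent 3129 0 carp-load-factor=0.25 no-query no-digest
    cache_peer 127.0.0.1 parent 3130 0 carp-load-factor=0.25 no-query no-digest
    cache_peer 127.0.0.1 parent 3131 0 carp-load-factor=0.25 no-query no-digest
    cache_peer 127.0.0.1 parent 3132 0 carp-load-factor=0.25 no-query no-digest

    # each back-end instance: its own port, pid file and a share of the
    # cache_dirs, e.g. for the one on port 3129:
    http_port 3129
    pid_filename /var/run/squid-3129.pid
    cache_dir aufs /cache01 16384 16 256
    cache_dir aufs /cache02 16384 16 256

The obvious downsides are the ones mentioned above: each instance gets its
own cache_mem, and every request going through the front-end is ACL-checked
twice.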

--
To unsubscribe, see http://www.squid-cache.org/mailing-lists.html
Received on Mon Dec 18 2000 - 04:24:22 MST
