RE: [squid-users] Httpd Accelerator

From: Jon <[email protected]>
Date: Thu, 28 Apr 2005 15:08:58 -0400

Thanks for the great advice. My Squid process is using ~1747 MB of RAM.
The CPU and disks are pretty fast: a 3.06 GHz Xeon with Ultra320 SCSI
drives. CPU load is pretty high during peak, ~60%.

I recently purchased a new server to try putting each cache directory on
its own disk. It has 4 SCSI drives and Opteron 250 CPUs with 4 GB of RAM.
Thanks for the advice, I learned a lot.

-----Original Message-----
From: Matus UHLAR - fantomas [mailto:uhlar@fantomas.sk]
Sent: Thursday, April 28, 2005 6:53 AM
To: Squid Users
Subject: Re: [squid-users] Httpd Accelerator

Hello,

please set up quoting in your mail client properly...

> On 26.04 21:01, Jon wrote:
> > I've been using Squid for a couple of months as a server accelerator
> > and it was great. But recently our site traffic has increased. Now I'm
> > having issues where Squid exits and restarts during heavy load. At most
> > it can serve ~84 Mbps before it crashes. My server has 4 GB of RAM; I
> > tweaked the kernel for message queues and shared memory, and increased
> > nmbclusters and file descriptors. Are there other settings I can tune
> > to increase its performance? I know my description is a little vague,
> > but I'll be happy to submit my settings if anyone is interested. Maybe
> > it has reached its limit and I need to add another Squid?

> From: Matus UHLAR - fantomas [mailto:uhlar@fantomas.sk]

> What is your cache_mem setting and maximum_object_size_in_memory? What
> memory replacement policy do you use? Do you use a disk cache? If so,
> what disk layout do you use, what storage system, and what are your
> maximum_object_size and disk replacement policy?

On 27.04 15:27, Jon wrote:
> cache_mem 512 MB

How much memory does Squid use? If it's under 2 GB, you can increase
cache_mem and get a better memory hit rate, and thus less disk I/O.
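
For example, a quick way to check (a sketch; it assumes squidclient is
installed and the cache manager is reachable on localhost, and the
cache_mem value is only an example):

    # how big is the squid process right now?
    ps -axo rss,vsz,command | grep squid

    # squid's own view of its memory usage
    squidclient mgr:info | grep -i memory

    # then, in squid.conf, raise cache_mem while leaving headroom
    # for the OS and for squid's in-core disk cache index:
    cache_mem 1024 MB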

...with FreeBSD on the ia32 architecture your processes can eat up to 2 GB
of RAM (if not more... check it), but you'll probably have to recompile your
kernel to allow such a big data segment size.

(do not check the memory usage immediately after start; wait a few days
until the memory cache fills up)
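
On FreeBSD the data segment limits come from the MAXDSIZ and DFLDSIZ kernel
options; roughly like this in the kernel configuration file (the values are
only examples, size them to your RAM, then rebuild the kernel and reboot):

    options  MAXDSIZ="(2560UL*1024*1024)"  # maximum data segment size
    options  DFLDSIZ="(2048UL*1024*1024)"  # default data segment size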

> maximum_object_size_in_memory 1024 KB

Probably too much; I'd set a lower size to get more objects into memory,
and thus have less disk I/O for small files.
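
Something like this in squid.conf (the threshold is a judgement call; a few
tens of KB still covers the vast majority of objects):

    # keep only small objects in the memory cache so more of them fit
    maximum_object_size_in_memory 64 KB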

> maximum_object_size 2048 KB
> cache_replacement_policy heap GDSF
> memory_replacement_policy heap GDSF
>
> I use diskd with 3 cache directories on a RAID 0

Oh, you did NOT read the FAQ before you installed the machine, did you?

You should NOT run the Squid cache on RAID0 disks; it will not benefit from
striping across more disks. Running one cache_dir on each drive is more
effective, and you won't lose your whole cache if one of the disks fails.
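
For example, with the RAID broken up into three separate drives, something
like this in squid.conf (the mount points and sizes are just examples):

    # one diskd cache_dir per physical disk
    cache_dir diskd /cache1 8192 16 256
    cache_dir diskd /cache2 8192 16 256
    cache_dir diskd /cache3 8192 16 256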

Another question is how heavily your CPU and disks are loaded. I have
mentioned disk I/O twice now; if that is the bottleneck, you can get faster
disks (but I'd only try that after splitting the RAID into 3 drives and
tuning the other parameters).
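
The standard FreeBSD tools are enough to see where the time goes (run them
during peak traffic):

    top -S        # is squid eating the CPU, or is the system mostly idle?
    iostat 5      # per-disk activity; saturated disks mean I/O is the bottleneck
    vmstat 5      # paging activity; any swapping means cache_mem is too high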

If the CPU is your problem, you should check whether you have inefficient
ACLs (quite a common reason why squid is slow), and then buy a better CPU...
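
By "inefficient ACLs" I mean things like regex ACLs checked on every
request. A sketch (the names and the file are made up): prefer indexed ACL
types, and put the rules that match most of your traffic first, since
http_access is evaluated top to bottom:

    # slow: a case-insensitive regex scanned for every request
    #acl bad_urls url_regex -i "/usr/local/etc/squid/bad_urls.regex"

    # fast: dstdomain is an indexed lookup, and the rule that matches
    # most traffic comes first
    acl our_site dstdomain .example.com
    http_access allow our_site
    http_access deny all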

-- 
Matus UHLAR - fantomas, [email protected] ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
(R)etry, (A)bort, (C)ancer