Re: Performance problem on Solaris 2.5.1

From: John Sloan <[email protected]>
Date: Tue, 9 Jun 1998 14:47:05 +0100 (BST)

On Tue, 9 Jun 1998, Ery Punta Hendraswara wrote:

> Dear,
>
> We have Squid v1.1.21 running on an Ultra SPARC 2, RAM 512MB, hard disk
> 24GB, with OS Solaris 2.5.1. We encounter performance degradation after
> running Squid for about 10-12 hours. We have already followed the
> instructions that we got from the mailing list digest, but we still
> encounter the problem.
> Right now we use the GNU malloc package for the malloc routine and GCC
> 2.7.2.1 as the compiler. The statistics from the cache information show
> an average of about 50,000-80,000 connections per hour. There
> are 20,000 to 30,000 users connected to the internet via
> that Solaris box (actually we have 3 identical machines).
> The main configuration in squid.conf looks like this:
>
> cache_swap 17000
> cache_mem 200
>
> httpd_accel 202.134.0.227 80
> httpd_accel_with_proxy on
> httpd_accel_uses_host_header on
>
> We run the machine in transparent proxy mode in conjunction with a
> Cisco 7500 router. IP Filter 3.2.7 has been installed on the Solaris
> box as the redirector. The main problem is that over a 10-hour period
> the free memory shown by vmstat keeps decreasing (it seems like a
> memory leak) — we notice almost 70% of the swap memory used. Paging
> activity is very high, and the disk activity shown by iostat is at
> 100%!
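
[For reference, a transparent redirect of this sort is usually expressed
as an ipnat rdr rule. A minimal sketch — the interface name (le0) and the
Squid listening port (3128) are assumptions; substitute your own:]

```
# /etc/ipnat.conf -- redirect inbound web traffic to the local Squid.
# Interface (le0) and Squid port (3128) are assumed values.
rdr le0 0.0.0.0/0 port 80 -> 127.0.0.1 port 3128 tcp
```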

The number to look for on vmstat is the scanrate (sr). This is the rate
at which memory is being scanned for pages which can be swapped out.

 procs       memory            page              disk         faults       cpu
 r b w   swap  free re mf  pi po fr de sr m1  m1 m1 s0   in   sy  cs us sy id
 0 0 0 480520 16976  0  0 190 48 48  0  0 69 113 22  0 1034 2222 418 20 27 53
 0 0 0 480520 16376  0  0 153 24 24  0  0 43  47  3  0  706 2101 342 20 30 50
 0 0 0 480512 15408  0  0 115 35 35  0  0 39  44  4  0  723 2101 308 23 38 40
 0 0 0 480512 14600  0  0  83 51 51  0  0 37  36  4  0  671 2082 284 21 43 37

As you can see, this machine has an sr of 0, so while it has some pi and
po activity, it's not actually swapping.
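
A quick way to keep an eye on the scan rate is to pull the sr column out
of the vmstat output. A minimal sketch, assuming the column layout shown
above (sr is the 12th field) — demonstrated here on a captured sample
line rather than a live vmstat:

```shell
# Print the sr column from vmstat-style output.  Field 12 is sr under
# the header layout above -- verify against your own vmstat header.
awk 'NR > 1 { print "sr=" $12 }' <<'EOF'
 r b w swap free re mf pi po fr de sr m1 m1 m1 s0 in sy cs us sy id
 0 0 0 480520 16976 0 0 190 48 48 0 0 69 113 22 0 1034 2222 418 20 27 53
EOF
```

On a live box you would run something like `vmstat 5 | awk 'NR > 2 { print $12 }'`
and watch for sustained non-zero values.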

> Could the memory allocation problem be solved by using another malloc
> library, for example DL malloc? (Although we had problems compiling
> the DL malloc library for our Squid 1.1.21.) Or could the Metadisk
> daemon we use be eating all the I/O? (iostat shows very high numbers;
> average disk service time is about 20ms ...)
> Is there anybody who can help us? We really, desperately want to get
> the Squid machines fine tuned :(

We're running squid 1.1.20 on an Ultra 1/170 with 512Mb memory and 10Gb of
cache. I use one of the mallocs linked from the Squid pages, which claims
to be a precursor to the GNU malloc. With this setup, and 1.6 million
hits/day, we see fairly low swap activity. Top reports the process size
as 295Mb.

Peak connection rates were about 90,000 requests/hour (ICP and HTTP).

My conclusion from earlier configurations was that if squid starts
swapping more than minimally, performance will go into the toilet, hence
the large amount of memory configured. I did try the NOVM version for a
while, but the squid process ran out of filehandles. Tuning the kernel to
allow 4096 filehandles and recompiling squid seemed to produce a squid
binary which would crash periodically. I haven't tried with a more recent
version of squid, though. [This was 1.1.18 or so].

If you stick with the VM version, I would tune cache_mem right down. The
benefit you get from it is fairly minimal, after all. We ran with
cache_mem of 24Mb for a long time. I have upped it to 128Mb on the above
machine, since it wasn't swapping at all.
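
In squid.conf terms that means something like the following — the 24
mirrors the figure above; pick whatever value keeps the squid process
comfortably inside physical RAM:

```
# squid.conf -- keep the in-memory object cache small so the squid
# process stays well inside physical memory (value in MB for 1.1.x).
cache_mem 24
```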

John
Received on Tue Jun 09 1998 - 06:56:04 MDT

This archive was generated by hypermail pre-2.1.9 : Tue Dec 09 2003 - 16:40:39 MST