[squid-users] kernel: Out of Memory: Killed process 16482 (squid).

From: Bin Liu <[email protected]>
Date: Tue, 23 Nov 2004 11:24:49 +0800

Hi, folks,

My Squid box has stopped serving twice in the past two days. Before
that, it had been running well for about two weeks without any config
change. It handles a load of about 200 req/s (peak) and 130 req/s
(mean).

--------- System information ------------
- AMD Opteron 248 * 2
- S2882 Thunder K8s Pro
- RAM 1GB
- 2 SCSI Seagate 10k for RAID-0

# uname -a
Linux NGate 2.4.21-20.EL.NGate #2 SMP Mon Nov 8 13:26:37 CST 2004
i686 athlon i386 GNU/Linux

# /usr/local/squid/sbin/squid -v
Squid Cache: Version 2.5.STABLE7
configure options: --prefix=/usr/local/squid --with-aufs-threads=32
--with-pthreads --with-aio --with-dl --enable-storeio=ufs,aufs,diskd
--enable-removal-policies=lru,heap --enable-kill-parent-hack
--enable-snmp --enable-poll --disable-ident-lookups
--disable-hostname-checks --enable-underscores --enable-stacktraces
--enable-dl-malloc --enable-wccpv2

# cat /usr/local/squid/etc/squid.conf
visible_hostname NGate.com

hierarchy_stoplist cgi-bin ?

acl QUERY urlpath_regex cgi-bin \? .cgi .pl .php .asp .cfm
no_cache deny QUERY

refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i .gif 1440 90% 129600 reload-into-ims
refresh_pattern -i .swf 1440 90% 129600 reload-into-ims
refresh_pattern -i .jpg 1440 90% 129600 reload-into-ims
refresh_pattern -i .bmp 1440 90% 129600 reload-into-ims
refresh_pattern -i .pdf 0 90% 129600 reload-into-ims
refresh_pattern -i .zip 0 90% 129600 reload-into-ims
refresh_pattern -i .rar 0 90% 129600 reload-into-ims
refresh_pattern -i .exe 0 90% 129600 reload-into-ims
refresh_pattern . 1 20% 4320

acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8

acl SSL_ports port 443
acl Safe_ports port 80
acl CONNECT method CONNECT

http_access allow manager localhost
http_access allow CONNECT SSL_ports
http_access deny !Safe_ports
http_access deny to_localhost
http_access allow all

acl snmppublic snmp_community public
snmp_access allow snmppublic all
snmp_incoming_address 0.0.0.0
snmp_outgoing_address 0.0.0.0

dns_nameservers 127.0.0.1

half_closed_clients off

logfile_rotate 0

maximum_object_size 51200 KB
minimum_object_size 0 KB
maximum_object_size_in_memory 100 KB

store_avg_object_size 13 KB
store_objects_per_bucket 128
cache_dir aufs /cache 30720 128 256

cache_mem 128 MB
cache_replacement_policy heap LFUDA
memory_replacement_policy heap LRU

coredump_dir /cache
cache_access_log /var/log/cache/access.log
cache_log /var/log/cache/cache.log
cache_store_log none

forwarded_for off

httpd_accel_host virtual
httpd_accel_port 80
httpd_accel_with_proxy on
httpd_accel_uses_host_header on
httpd_accel_single_host off

http_port 3333
icp_port 0
snmp_port 3401

wccp_router xxx.xxx.xxx.xxx

And I found this in /var/log/messages:

Nov 23 03:20:02 NGate kernel: Mem-info:
Nov 23 03:20:02 NGate kernel: Zone:DMA freepages: 1055 min: 1056
low: 1088 high: 1120
Nov 23 03:20:02 NGate kernel: Zone:Normal freepages: 702 min: 703
low: 3968 high: 5728
Nov 23 03:20:02 NGate kernel: Zone:HighMem freepages: 0 min: 0
low: 0 high: 0
Nov 23 03:20:02 NGate kernel: Free pages: 1757 ( 0 HighMem)
Nov 23 03:20:02 NGate kernel: ( Active: 216824/176, inactive_laundry:
465, inactive_clean: 0, free: 1757 )
Nov 23 03:20:02 NGate kernel: aa:1666 ac:14 id:3 il:0 ic:0 fr:1055
Nov 23 03:20:02 NGate kernel: aa:213121 ac:2113 id:547 il:0 ic:0 fr:702
Nov 23 03:20:02 NGate kernel: aa:0 ac:0 id:0 il:0 ic:0 fr:0
Nov 23 03:20:02 NGate kernel: 1*4kB 1*8kB 1*16kB 1*32kB 1*64kB 0*128kB
0*256kB 0*512kB 0*1024kB 0*2048kB 1*4096kB = 4220kB)
Nov 23 03:20:02 NGate kernel: 2*4kB 2*8kB 0*16kB 9*32kB 1*64kB 1*128kB
1*256kB 0*512kB 0*1024kB 1*2048kB 0*4096kB = 2808kB)
Nov 23 03:20:02 NGate kernel: Swap cache: add 8508114, delete 8508114,
find 4226233/6345266, race 0+25
Nov 23 03:20:02 NGate kernel: 2992 pages of slabcache
Nov 23 03:20:03 NGate kernel: 256 pages of kernel stacks
Nov 23 03:20:03 NGate kernel: 677 lowmem pagetables, 0 highmem pagetables
Nov 23 03:20:03 NGate kernel: Free swap: 0kB
Nov 23 03:20:03 NGate kernel: 229376 pages of RAM
Nov 23 03:20:03 NGate kernel: 0 pages of HIGHMEM
Nov 23 03:20:03 NGate kernel: 4702 reserved pages
Nov 23 03:20:03 NGate kernel: 2671 pages shared
Nov 23 03:20:03 NGate kernel: 0 pages swap cached
Nov 23 03:20:03 NGate kernel: Out of Memory: Killed process 16482 (squid).
Nov 23 03:20:03 NGate kernel: Out of Memory: Killed process 16484 (squid).
Nov 23 03:20:03 NGate kernel: Out of Memory: Killed process 16485 (squid).
Nov 23 03:20:03 NGate kernel: Out of Memory: Killed process 16486 (squid).

I've read the relevant sections of the Squid FAQ, which say that "as
the software has matured, we believe almost all of Squid's memory
leaks have been eliminated, and new ones are least easy to identify."
So this probably isn't a memory leak? Maybe something is wrong in my
squid.conf? Are maximum_object_size and cache_mem set a bit too high?
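For what it's worth, here is a rough back-of-envelope estimate of the
box's Squid memory footprint, assuming the commonly cited rule of thumb
of roughly 10 MB of in-core index metadata per 1 GB of cache_dir (an
assumption on my part, not a measured figure):

```python
# Rough estimate of Squid memory use on this box (sketch, not measured).
# Assumption: ~10 MB of index metadata per GB of cache_dir, a common
# rule of thumb; the real per-object overhead varies by build.

cache_dir_mb = 30720          # cache_dir aufs /cache 30720 128 256
cache_mem_mb = 128            # cache_mem 128 MB
index_mb_per_gb = 10          # assumed index overhead per GB of disk cache

index_mb = cache_dir_mb / 1024 * index_mb_per_gb
total_mb = index_mb + cache_mem_mb

print(f"index ~{index_mb:.0f} MB + cache_mem {cache_mem_mb} MB "
      f"= ~{total_mb:.0f} MB, before per-request overhead, on 1024 MB RAM")
```

If that estimate is anywhere near right, ~300 MB of index plus 128 MB
of cache_mem plus per-request overhead on a 1 GB machine with 0 kB of
free swap (as the kernel log shows) could plausibly exhaust memory
without any leak at all.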
Received on Mon Nov 22 2004 - 20:24:50 MST

This archive was generated by hypermail pre-2.1.9 : Wed Dec 01 2004 - 12:00:01 MST