[squid-users] Unexpected memory consumption using multicast ICP cache_peer with proxy-only

From: John Craws <john.craws_at_gmail.com>
Date: Tue, 14 Dec 2010 17:50:33 -0500

Hi,

I am seeing a level of memory consumption that I do not understand
from a squid instance configured to use a single cache_peer over
multicast ICP.

Please disregard the setup details unless they are actually relevant;
this is for testing purposes and not meant to be used as is.
The same behavior has been observed on a more standard, multi-machine
setup, and with Squid 3.1.4, 3.1.9 and 3.2.0.3.

[Instance 1 (SQUID1 / 127.0.0.11)] squid1.conf

visible_hostname squid1
debug_options ALL,1
http_access allow all
acl manager proto cache_object
acl localhost src 127.0.0.0/24
http_access allow manager localhost
cache_mgr cachemgr
cachemgr_passwd cachemgr
cache deny all
http_port 127.0.0.11:3128
http_port 127.0.0.11:3129 transparent
coredump_dir .
cache_mem 8 MB
icp_port 3130
icp_access allow all
cache_peer 239.255.99.8 multicast 3128 3130 ttl=64
icp_query_timeout 2000
cache_peer 172.16.5.69 sibling 3128 3130 multicast-responder proxy-only
access_log access.log
cache_log cache.log
cache_store_log none
pid_filename squid1.pid

[Instance 2 (SQUID2 / 172.16.5.69)] squid2.conf

visible_hostname squid2
debug_options ALL,1
http_access allow all
acl manager proto cache_object
acl localhost src 127.0.0.0/24
http_access allow manager localhost
cache_mgr cachemgr
cachemgr_passwd cachemgr
http_port 172.16.5.69:3128
http_port 172.16.5.69:3129 transparent
coredump_dir .
cache_mem 512 MB
icp_port 3130
icp_access allow all
mcast_groups 239.255.99.8
icp_query_timeout 2000
access_log access.log
cache_log cache.log
cache_store_log none
pid_filename squid2.pid

---------------------------------------

A lighttpd origin server listens at 127.0.0.33 and is used strictly to
prime the SQUID2 cache.
JMeter is used to perform the following steps (the behavior was also
validated with squidclient and wget).

Step 1:
SQUID2 is primed with a large number (20,000) of small (20 KB) distinct objects.
SQUID1 has not yet received a single request.
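
In case it helps to reproduce, the priming step boils down to something
like the following (a Python sketch standing in for the JMeter test plan;
the URL pattern and object count mirror the description above and are
only illustrative):

# Prime SQUID2's cache: fetch 20,000 distinct small objects from the
# lighttpd origin (127.0.0.33) through SQUID2 (172.16.5.69:3128).
import urllib.request

proxy = urllib.request.ProxyHandler({'http': 'http://172.16.5.69:3128'})
opener = urllib.request.build_opener(proxy)

for n in range(20000):
    url = 'http://127.0.0.33/media/20k.jpg-%d' % n
    with opener.open(url) as resp:
        resp.read()  # discard the body; the point is to populate the cache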

Step 2:
SQUID1 receives a continuous flow of requests for objects that have
already been cached by SQUID2 (from the origin server).
All requests lead to an entry in access.log that is similar to

1292361190.281      1 127.0.0.11 TCP_MISS/200 20848 GET http://127.0.0.33/media/20k.jpg-20362 - SIBLING_HIT/172.16.5.69 application/octet-stream
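
The request flow in this step is roughly the following (again a Python
sketch in place of JMeter; addresses and ports are those from the
configs above):

# Step 2: continuously request the already-primed objects through SQUID1
# (127.0.0.11:3128). Each request is expected to be resolved via ICP as a
# SIBLING_HIT from SQUID2, as in the access.log entry above.
import itertools
import urllib.request

proxy = urllib.request.ProxyHandler({'http': 'http://127.0.0.11:3128'})
opener = urllib.request.build_opener(proxy)

for n in itertools.cycle(range(20000)):   # loops until interrupted
    url = 'http://127.0.0.33/media/20k.jpg-%d' % n
    with opener.open(url) as resp:
        resp.read()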

Result:
Memory consumption by SQUID1 increases rapidly and continuously.
Below is the output of a simple script that prints the result of the
ps command every 5 seconds:

[ETIME, CMD, RSS, VSZ, %CPU, %MEM]
01:16 ./3.1.9/sbin/squid -d 2 -N 11156 50540 0.6 0.2
01:21 ./3.1.9/sbin/squid -d 2 -N 11156 50540 0.6 0.2
01:26 ./3.1.9/sbin/squid -d 2 -N 11156 50540 0.6 0.2
01:31 ./3.1.9/sbin/squid -d 2 -N 15948 55028 2.8 0.4
01:36 ./3.1.9/sbin/squid -d 2 -N 22256 61220 7.4 0.5
01:41 ./3.1.9/sbin/squid -d 2 -N 28704 67684 11.8 0.7
01:46 ./3.1.9/sbin/squid -d 2 -N 34944 74016 15.6 0.8
01:51 ./3.1.9/sbin/squid -d 2 -N 41424 80472 19.2 1.0
01:56 ./3.1.9/sbin/squid -d 2 -N 47784 86796 22.5 1.2
02:01 ./3.1.9/sbin/squid -d 2 -N 54272 93248 25.6 1.3
02:06 ./3.1.9/sbin/squid -d 2 -N 60716 99704 28.3 1.5
02:11 ./3.1.9/sbin/squid -d 2 -N 66844 105908 31.0 1.7
02:16 ./3.1.9/sbin/squid -d 2 -N 73004 111976 33.3 1.8
02:21 ./3.1.9/sbin/squid -d 2 -N 79644 118700 35.6 2.0
02:26 ./3.1.9/sbin/squid -d 2 -N 85916 124888 37.6 2.1
02:31 ./3.1.9/sbin/squid -d 2 -N 91932 130948 39.5 2.3
02:36 ./3.1.9/sbin/squid -d 2 -N 98268 137284 41.3 2.5
02:41 ./3.1.9/sbin/squid -d 2 -N 104452 143476 43.0 2.6
02:46 ./3.1.9/sbin/squid -d 2 -N 111108 150188 44.6 2.8
02:51 ./3.1.9/sbin/squid -d 2 -N 117404 156392 46.2 2.9
02:56 ./3.1.9/sbin/squid -d 2 -N 123960 162984 47.6 3.1
03:01 ./3.1.9/sbin/squid -d 2 -N 129896 168912 48.9 3.3
03:06 ./3.1.9/sbin/squid -d 2 -N 136360 175364 50.2 3.4
03:11 ./3.1.9/sbin/squid -d 2 -N 142368 181416 51.3 3.6
03:16 ./3.1.9/sbin/squid -d 2 -N 148480 187464 52.4 3.7
03:21 ./3.1.9/sbin/squid -d 2 -N 154592 193660 53.5 3.9
03:27 ./3.1.9/sbin/squid -d 2 -N 160800 199856 54.2 4.1
03:32 ./3.1.9/sbin/squid -d 2 -N 166948 205920 55.2 4.2
03:37 ./3.1.9/sbin/squid -d 2 -N 172984 211980 56.1 4.4
03:42 ./3.1.9/sbin/squid -d 2 -N 179048 218044 57.0 4.5
03:47 ./3.1.9/sbin/squid -d 2 -N 184972 223976 57.8 4.7
03:52 ./3.1.9/sbin/squid -d 2 -N 191468 230432 58.6 4.8
03:57 ./3.1.9/sbin/squid -d 2 -N 197524 236496 59.4 5.0
04:02 ./3.1.9/sbin/squid -d 2 -N 203284 242300 60.1 5.1
04:07 ./3.1.9/sbin/squid -d 2 -N 209296 248352 60.8 5.3
04:12 ./3.1.9/sbin/squid -d 2 -N 215384 254412 61.5 5.4
04:17 ./3.1.9/sbin/squid -d 2 -N 221652 260612 62.2 5.6
04:22 ./3.1.9/sbin/squid -d 2 -N 227964 266944 62.9 5.8
04:27 ./3.1.9/sbin/squid -d 2 -N 233808 272876 63.5 5.9
04:32 ./3.1.9/sbin/squid -d 2 -N 239676 278668 64.0 6.1
04:37 ./3.1.9/sbin/squid -d 2 -N 245720 284732 64.6 6.2
04:42 ./3.1.9/sbin/squid -d 2 -N 251520 290532 65.1 6.4
04:47 ./3.1.9/sbin/squid -d 2 -N 257524 296592 65.7 6.5
04:52 ./3.1.9/sbin/squid -d 2 -N 263496 302524 66.2 6.7
04:57 ./3.1.9/sbin/squid -d 2 -N 269452 308464 66.7 6.8
05:02 ./3.1.9/sbin/squid -d 2 -N 275408 314396 67.1 7.0
05:07 ./3.1.9/sbin/squid -d 2 -N 281600 320592 67.6 7.1
05:12 ./3.1.9/sbin/squid -d 2 -N 287572 326636 68.1 7.3
05:17 ./3.1.9/sbin/squid -d 2 -N 293692 332700 68.5 7.4
05:22 ./3.1.9/sbin/squid -d 2 -N 299728 338764 68.9 7.6
05:27 ./3.1.9/sbin/squid -d 2 -N 305676 344700 69.3 7.8
05:32 ./3.1.9/sbin/squid -d 2 -N 311668 350632 69.8 7.9
05:37 ./3.1.9/sbin/squid -d 2 -N 317704 356692 70.1 8.1
05:42 ./3.1.9/sbin/squid -d 2 -N 323652 362624 70.5 8.2
05:47 ./3.1.9/sbin/squid -d 2 -N 329700 368676 70.9 8.4
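
For reference, the monitoring script is just a loop around ps, along
these lines (an approximation; the exact script does not matter):

# Print ETIME, CMD (with arguments), RSS, VSZ, %CPU and %MEM for the
# running squid process every 5 seconds.
import subprocess
import time

while True:
    out = subprocess.check_output(
        ['ps', '-C', 'squid', '-o', 'etime,args,rss,vsz,%cpu,%mem',
         '--no-headers'])
    print(out.decode().strip())
    time.sleep(5)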

I would definitely appreciate any help in understanding this issue.

Thank you,

John Craws