On Wednesday, 11 September 1996, at 9:39, the keyboard of Duane Wessels
<wessels@nlanr.net> wrote:
> I think what happens is that squid fork()'s a number of times before
> it exec()'s the dnsservers. So you end up with a number of large
> squid processes hogging all the VM.
Daniel Azuelos <dan@pasteur.fr> made a similar suggestion. Running Squid
with debug(14,3) shows that the DNS servers are started quite quickly:
[12/Sep/1996:09:08:28 +0200] ipcache.c:1053: ipcacheOpenServers: Starting 5 'dns_server' processes
[12/Sep/1996:09:08:28 +0200] ipcache.c:1093: ipcacheOpenServers: 'dns_server' 0 started
[12/Sep/1996:09:08:28 +0200] ipcache.c:1093: ipcacheOpenServers: 'dns_server' 1 started
[12/Sep/1996:09:08:28 +0200] ipcache.c:1093: ipcacheOpenServers: 'dns_server' 2 started
[12/Sep/1996:09:08:28 +0200] ipcache.c:1093: ipcacheOpenServers: 'dns_server' 3 started
[12/Sep/1996:09:08:28 +0200] ipcache.c:1093: ipcacheOpenServers: 'dns_server' 4 started
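For illustration, here is a rough sketch of the pattern Duane describes
(this is not the actual ipcache.c code, and the "dnsserver" helper name
is only taken from the log above):

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

#define N_DNS_SERVERS 5

/* Fork-then-exec: between each fork() and the child's exec, the child
 * holds a copy of the whole parent image, and a kernel without lazy
 * allocation must reserve backing store for every such copy. */
static void start_dns_servers(void)
{
    int i;
    for (i = 0; i < N_DNS_SERVERS; i++) {
        pid_t pid = fork();
        if (pid < 0) {
            perror("fork");            /* ENOMEM or EAGAIN expected here */
            exit(1);
        }
        if (pid == 0) {
            /* child: replace the big Squid image with the small helper */
            execlp("dnsserver", "dnsserver", (char *) NULL);
            perror("execlp");          /* only reached if the exec fails */
            _exit(1);
        }
        /* parent: no pause, so several full-size copies may be pending */
    }
}

int main(void)
{
    start_dns_servers();
    return 0;
}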
So maybe, if the exec takes a while, I end up with five Squids, each
weighing 200 megabytes... Since I have 320 MB of RAM but only 600 MB of
swap, and we do not use lazy allocation, this could be the problem. That
would explain why another Alpha with the same software (OS and Squid)
has no problem: its Squid is much smaller. How does NLANR manage? What
is their Squid memory size?
The only mystery is that fork() is supposed to fail with ENOMEM in that
case.
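With strict allocation, a 200 MB parent plus up to five more 200 MB
copies needs more backing store than the 920 MB of RAM plus swap we
have, so a refused fork would be the expected symptom. Strictly
speaking, fork() does not return ENOMEM: it returns -1 and sets errno
to ENOMEM (or EAGAIN on some systems). A rough sketch of the check I
would expect around each fork, so the failure shows up in the log
instead of as a thrashing machine (again an illustration, not Squid
code):

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

/* Wrap fork() so a refused copy is reported with its errno instead of
 * being silently ignored (hypothetical helper). */
pid_t fork_dns_server(void)
{
    pid_t pid = fork();
    if (pid < 0)
        fprintf(stderr, "fork dns_server failed: %s (errno=%d)\n",
                strerror(errno), errno);
    return pid;
}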
> I think I have fixed this for v1.1. You might try adding a
> sleep() or usleep() call just before the function returns
> in the parent process.
We'll wait for the problem to reproduce itself at the higher debug
level, then we'll try it. One second between each server seems fine
(even if you launch 20 of them) and avoids portability problems with
usleep().
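If it comes to that, the sketch I have in mind is simply a sleep(1) in
the parent's loop after each successful fork (assumed placement, and
certainly not Duane's actual v1.1 fix):

#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

/* Same start-up loop as before, but the parent waits one second after
 * each fork so the child can finish its exec and shrink before the
 * next full-size copy is made. */
void start_dns_servers_slowly(int n_dns_servers)
{
    int i;
    for (i = 0; i < n_dns_servers; i++) {
        pid_t pid = fork();
        if (pid < 0) {
            perror("fork");
            return;
        }
        if (pid == 0) {
            execlp("dnsserver", "dnsserver", (char *) NULL); /* assumed name */
            _exit(1);
        }
        sleep(1);       /* one second between servers, as suggested */
    }
}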
Received on Thu Sep 12 1996 - 00:25:05 MDT