RE: squid.1.NOVM.20+retry memory usage report bug

From: Mike Mitchell <[email protected]>
Date: Thu, 26 Feb 1998 11:55:38 -0500

> -----Original Message-----
> From: Michael Pelletier [SMTP:mikep@comshare.com]
> Sent: Wednesday, February 25, 1998 3:57 PM
> To: Lionel Bouton
> Cc: squid-users@nlanr.net
> Subject: Re: squid.1.NOVM.20+retry memory usage report bug
>
> On Wed, 25 Feb 1998, Lionel Bouton wrote:
>
> > Since I upgraded from NOVM.17+retry to NOVM.20+retry, I noticed that
> > the amount of memory reported by cachemgr.cgi for 'Pool for disk I/O'
> > started to grow. It was usually under 200-300 KB with NOVM.17 but now
> > it is over 20 MB. I suspected that the real amount of memory used was
> > nearly the same between 17 and 20 because the process's size was
> > around 32-35 MB for both versions. Now I'm pretty sure it is the case
> > because the amount of memory reported by cachemgr.cgi is greater than
> > the size of the process. And the difference keeps growing...
>
> Based on what I recall Mike Mitchell saying, the 1.NOVM.20+retry patch
> is pretty much identical to the 1.NOVM.17+retry patch, just with a few
> tweaks to make it apply properly. I'm sifting through the details of
> the new #ifdef-ified NOVM patch right now to ensure that there are no
> unintentional differences... Nothing aside from the extra #ifdef lines
> and a couple of spaces here and there is present in the new ifdef
> patch.
>
> Has anyone running the pre-February 23 version of the 1.NOVM.20+retry
> patch observed this problem?
>
> My VM version running the #ifdef'd patch is clicking along just fine,
> with low disk I/O pools, for what that's worth. Not quite sure how the
> VM version compares with the NOVM version in this regard.
>
> -Mike Pelletier.
        [Mike Mitchell]

        It looks like this is caused by a bug in the routine
        'destroy_MemObject()' in the file 'store.c'. It isn't directly
        related to the retry patch. The one line that reads

             safe_free(mem->e_abort_msg);

        should read

             if (mem->e_abort_msg != NULL)
                 put_free_8k_page(mem->e_abort_msg);

        The safe_free() routine doesn't keep any statistics; it just
        frees the data. The 'mem->e_abort_msg' pointer was set via
        'get_free_8k_page()', which increments the counters for the disk
        usage pool. Since the page was freed without decrementing the
        counters, it looks like a memory leak. There really isn't a
        memory leak; the counters just aren't being kept properly.
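
        To make the accounting clearer, here is a rough sketch of how
        the 8k-page pool bookkeeping behaves. This is not the actual
        Squid source; the counter name and the safe_free() macro body
        are made up for illustration:

             /* Simplified sketch of the 8k-page pool accounting (names are
              * illustrative, not the real Squid code). */
             #include <stdlib.h>

             static int disk_io_pages_in_use = 0;  /* what cachemgr.cgi reports */

             void *get_free_8k_page(void)
             {
                 disk_io_pages_in_use++;           /* counted toward 'Pool for disk I/O' */
                 return malloc(8192);
             }

             void put_free_8k_page(void *p)
             {
                 disk_io_pages_in_use--;           /* balances the counter */
                 free(p);
             }

             /* safe_free() just frees the data and clears the pointer; it never
              * touches disk_io_pages_in_use, so releasing an 8k page with it
              * makes the reported pool size creep up even though no memory is
              * actually leaked. */
             #define safe_free(x) do { free(x); (x) = NULL; } while (0)

        With the one-line change above, the counter is decremented when
        the page is released, so the 'Pool for disk I/O' figure stays in
        line with the memory actually in use.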
Received on Thu Feb 26 1998 - 09:01:27 MST
