Re: [squid-users] squid not storing objects to disk and getting RELEASED on the fly

From: Rajkumar Seenivasan <rkcp613_at_gmail.com>
Date: Mon, 1 Nov 2010 22:25:18 -0400

Hi,
Can someone please help fix my two issues?
I wish there were an equivalent of "reference_age" in 3.1

thanks.

On Thu, Oct 28, 2010 at 12:28 PM, Rajkumar Seenivasan <rkcp613_at_gmail.com> wrote:
> Switching from LFUDA to GDSF didn't make much of a difference.
>
> I assume the following is happening...
> I pre-cache around 2 to 3GB of data everyday and get 40 to 50% HITS everyday.
> Once the cache_dir size reaches the cache_swap_low threshold, squid is
> not aggressive enough in removing old objects. In fact, I think
> squid is not doing anything to remove the old objects.
>
> So the pre-caching requests are not getting into the store and the HIT
> rate goes down big time.
> When this happens, if I increase the store size, I can see better HIT rates.
>
> What can be done to resolve this issue? Is there an equivalent of
> "reference_age" for squid v3.1.8?
> The cache manager always reports swap.files_cleaned = 0.
> My understanding is that this counter shows the number of objects
> removed from the disk store by the replacement policy.
>
> I changed the cache_replacement_policy from "heap GDSF" to "lru"
> yesterday to see if it makes a difference.
> Removal policy: lru
> LRU reference age: 1.15 days
>
>
> issues with memory usage:
> Both squids are running at 100% memory usage (15GB). Nothing else is
> running on these 2 servers.
> Stopping and restarting squid doesn't bring the memory usage down.
>
> The only way to release memory is to stop squid, move the cache dir to
> something_old, recreate it, and start squid with an empty cache,
> AND DELETE the old cache dirs.
> If I don't delete the old cache dirs, the memory is not released.
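That reset procedure, written out as a rough shell sketch (the cache path matches the squid.conf quoted later in the thread; the use of `squid -k shutdown` and `squid -z` is an assumption about how this box is run):

```shell
#!/bin/sh
# Sketch of the only memory-release procedure that works here.
# CACHE_DIR and the plain "squid" commands are assumptions for this setup.
set -e
CACHE_DIR=/squid/var/cache

squid -k shutdown                    # stop squid cleanly
mv "$CACHE_DIR" "${CACHE_DIR}_old"   # set the old cache aside
mkdir "$CACHE_DIR"
squid -z                             # create empty swap directories
squid                                # start with an empty cache
rm -rf "${CACHE_DIR}_old"            # memory is only freed once this is deleted
```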
>
> Squid runs in accel mode and serves only SQLite and XML files, nothing else.
>
> Squid Cache: Version 3.1.8
> configure options: '--enable-icmp'
> '--enable-removal-policies=heap,lru' '--enable-useragent-log'
> '--enable-referer-log' '--enable-follow-x-forwarded-for'
> '--enable-default-hostsfile=/etc/hosts' '--enable-x-accelerator-vary'
> '--disable-ipv6' '--enable-htcp' '--enable-icp'
> '--enable-storeio=diskd,aufs' '--with-large-files'
> '--enable-http-violations' '--disable-translation'
> '--disable-auto-locale' '--enable-async-io'
> --with-squid=/root/downlaods/squid/squid-3.1.8
> --enable-ltdl-convenience
>
> Please help.
>
> thanks.
>
> On Fri, Sep 24, 2010 at 1:16 PM, Rajkumar Seenivasan <rkcp613_at_gmail.com> wrote:
>> Hello Amos,
>> see below for my responses... thx.
>>
>>>>>> ? 50% empty cache required so as not to fill RAM? => cache is too big or RAM not enough.
>> cache usage size is approx. 6GB per day.
>> We have 15GB of physical memory on each box and the cache_dir is set for 20GB.
>> I had cache_swap_low 65 and cache_swap_high 70% and the available
>> memory went down to 50MB out of 15GB when the cache_dir used was 14GB
>> (reached the high threshold).
>>
>>>>>> What was the version in use before this happened? 3.1.8 okay for a while? or did it start discarding right at the point of upgrade from another?
>> We started testing with 3.1.6 and then used 3.1.8 in production. This
>> issue was noticed even during the QA. We didn't have any caching
>> servers before.
>>
>>>>>> Server advertised the content-length as unknown then sent 279307 bytes. (-1/279307) Squid is forced to store it to disk immediately (could be a TB
>>>>>> about to arrive for all Squid knows).
>> I looked further into the logs and the log entry I pointed out was
>> from the SIBLING request. sorry about that.
>>
>>>>>> These tell squid 50% of the cache allocated disk space MUST be empty at all times. Erase content if more is used. The defaults for these are less
>>>>>> than 100% in order to leave some small buffer of space for use by line-speed stuff still arriving while squid purged old objects to fit them.
>> Since our data changes every day, I don't need a cache dir with more
>> than 11GB to give enough buffer. On an average, 6GB of disk cache is
>> used per day.
>>
>>>>>> filesystem is reiserfs with RAID-0. only 11GB used for the cache.
>>>>>> Used or available?
>> 11GB used out of 20GB.
>>
>>>>>> The 10MB/GB of RAM usage by the in-memory index is calculated from an average object size around 4KB. You can check your available RAM roughly
>>>>>> meets Squid needs with: 10MB/GB of disk cache + the size of cache_mem + 10MB/GB of cache_mem + about 256 KB per number of concurrent clients at
>>>>>> peak traffic. This will give you a rough ceiling.
>>
>> Yesterday morning, we changed the cache_replacement_policy from "heap
>> LFUDA" to "heap GDSF", cleaned up the cache_dir and started squid
>> fresh.
>>
>> current disk cache usage is 8GB (out of 20GB). ie. after 30 hours.
>> Free memory is 1.7GB out of 15GB.
>>
>> Based on your math, the memory usage shouldn't be more than 3 or 4GB.
>> In this case, the used mem is far too high.
>>
>>
>> On Thu, Sep 23, 2010 at 12:21 AM, Amos Jeffries <squid3_at_treenet.co.nz> wrote:
>>> On Wed, 22 Sep 2010 15:09:31 -0400, "Chad Naugle"
>>> <Chad.Naugle_at_travimp.com>
>>> wrote:
>>>> With that large array of RAM I would increase those maximum numbers to,
>>>> let's say, 8 MB, 16 MB, 32 MB, especially if you plan on using heap
>>>> LFUDA, which is optimized for storing larger objects and trashes
>>>> smaller objects faster, whereas heap GDSF is the opposite, using LRU
>>>> in memory for the large objects to offset the difference.
>>>>
>>>> ---------------------------------------------
>>>> Chad E. Naugle
>>>> Tech Support II, x. 7981
>>>> Travel Impressions, Ltd.
>>>>
>>>>
>>>>
>>>>>>> Rajkumar Seenivasan <rkcp613_at_gmail.com> 9/22/2010 3:01 PM >>>
>>>> Thanks for the tip. I will try with "heap GDSF" to see if it makes a
>>>> difference.
>>>> Any idea why the object is not considered as a hot-object and stored in
>>>> memory?
>>>
>>> see below.
>>>
>>>>
>>>> I have...
>>>> minimum_object_size 0 bytes
>>>> maximum_object_size 5120 KB
>>>>
>>>> maximum_object_size_in_memory 1024 KB
>>>>
>>>> Earlier we had cache_swap_low and high at 80 and 85% and the physical
>>>> memory usage went high leaving only 50MB free out of 15GB.
>>>> To fix this issue, the high and low were set to 50 and 55%.
>>>
>>> ? 50% empty cache required so as not to fill RAM? => cache is too big or
>>> RAM not enough.
>>>
>>>>
>>>> Does this change in "cache_replacement_policy" and the "cache_swap_low
>>>> / high" require a restart or just a -k reconfigure will do it?
>>>>
>>>> Current usage: top
>>>> top - 14:33:39 up 12 days, 21:44,  3 users,  load average: 0.03, 0.03, 0.00
>>>> Tasks:  83 total,   1 running,  81 sleeping,   1 stopped,   0 zombie
>>>> Cpu(s):  0.0%us,  0.1%sy,  0.0%ni, 99.3%id,  0.0%wa,  0.0%hi,  0.0%si,  0.6%st
>>>> Mem:  15736360k total, 14175056k used,  1561304k free,   283140k buffers
>>>> Swap: 25703960k total,       92k used, 25703868k free, 10692796k cached
>>>>
>>>>   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
>>>> 17442 squid     15   0 1821m 1.8g  14m S  0.3 11.7   4:03.23 squid
>>>>
>>>>
>>>> #free
>>>>              total       used       free     shared    buffers     cached
>>>> Mem:      15736360   14175164    1561196          0     283160   10692864
>>>> -/+ buffers/cache:    3199140   12537220
>>>> Swap:     25703960         92   25703868
>>>>
>>>>
>>>> Thanks.
>>>>
>>>>
>>>> On Wed, Sep 22, 2010 at 2:16 PM, Chad Naugle <Chad.Naugle_at_travimp.com>
>>>> wrote:
>>>>> Perhaps you can try switching to heap GDSF instead of heap LFUDA.
>>>>> What are your minimum_object_size and maximum_object_size set to?
>>>>>
>>>>> Perhaps you can also try setting the cache_swap_low / high back to
>>>>> default (90 - 95) to see if that will make a difference.
>>>>>
>>>>> ---------------------------------------------
>>>>> Chad E. Naugle
>>>>> Tech Support II, x. 7981
>>>>> Travel Impressions, Ltd.
>>>>>
>>>>>
>>>>>
>>>>>>>> Rajkumar Seenivasan <rkcp613_at_gmail.com> 9/22/2010 2:05 PM >>>
>>>>> I have the following for replacement policy...
>>>>>
>>>>> cache_replacement_policy heap LFUDA
>>>>> memory_replacement_policy lru
>>>>>
>>>>> thanks.
>>>>>
>>>>> On Wed, Sep 22, 2010 at 2:00 PM, Chad Naugle <Chad.Naugle_at_travimp.com>
>>>>> wrote:
>>>>>> What is your cache_replacement_policy directive set to?
>>>>>>
>>>>>> ---------------------------------------------
>>>>>> Chad E. Naugle
>>>>>> Tech Support II, x. 7981
>>>>>> Travel Impressions, Ltd.
>>>>>>
>>>>>>
>>>>>>
>>>>>>>>> Rajkumar Seenivasan <rkcp613_at_gmail.com> 9/22/2010 1:55 PM >>>
>>>>>> I have a strange issue happening with my squid (v 3.1.8)
>>>>>> 2 squid servers with sibling - sibling setup in accel mode.
>>>
>>> What was the version in use before this happened? 3.1.8 okay for a while?
>>> or did it start discarding right at the point of upgrade from another?
>>>
>>>>>>
>>>>>> after running squid for 2 to 3 days, the HIT rate has gone down:
>>>>>> from 50% to 34% for TCP HITs and from 34% to 12% for UDP HITs.
>>>>>>
>>>>>> store.log shows that even fresh requests are NOT getting stored to
>>>>>> disk and are getting RELEASED right away.
>>>>>> This issue is with both squids...
>>>>>>
>>>>>> store.log entry:
>>>>>> 1285176036.341 RELEASE -1 FFFFFFFF 7801460962DF9DCA15DE95562D3997CB
>>>>>> 200 1285158415 -1 1285230415 application/x-download -1/279307
>>>>>> GET http://....
>>>>>> requests have a max-age of 20 hrs.
>>>
>>> Server advertised the content-length as unknown then sent 279307 bytes.
>>> (-1/279307) Squid is forced to store it to disk immediately (could be a TB
>>> about to arrive for all Squid knows).
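Entries like this can be spotted mechanically: the expect/actual size field in store.log shows "-1" for an unknown advertised length. A small sketch (field positions assume the standard store.log layout: time, action, dir number, file number, hash, status, date, last-modified, expires, content type, expect/actual size, method, URL; the log line is the one quoted above with a placeholder URL):

```python
# Flag store.log entries whose advertised Content-Length was unknown,
# i.e. the expect/actual field looks like "-1/<bytes-actually-sent>".
line = ("1285176036.341 RELEASE -1 FFFFFFFF 7801460962DF9DCA15DE95562D3997CB "
        "200 1285158415 -1 1285230415 application/x-download -1/279307 "
        "GET http://example.invalid/file")

fields = line.split()
expect, actual = fields[10].split("/")  # 11th field is expect/actual size
if expect == "-1":
    print(f"unknown advertised length, {actual} bytes actually sent")
```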
>>>
>>>>>>
>>>>>> squid.conf:
>>>>>> cache_dir aufs /squid/var/cache 20480 16 256
>>>>>> cache_mem 1536 MB
>>>>>> memory_pools off
>>>>>> cache_swap_low 50
>>>>>> cache_swap_high 55
>>>
>>> These tell squid that 50% of the cache's allocated disk space MUST be
>>> empty at all times, and to erase content if more is used. The defaults
>>> are below 100% in order to leave a small buffer of space for line-speed
>>> traffic still arriving while squid purges old objects to make room.
>>>
>>> The 90%/95% numbers were created back when large HDDs were measured in MB.
>>>
>>> 50%/55% with a 20GB cache only makes sense if you have something greater
>>> than 250Mbps of new cacheable HTTP data flowing through this one Squid
>>> instance. In which case I'd suggest a bigger cache.
>>>
>>> (My estimate of the bandwidth is calculated from: % of cache needed free /
>>> 5 minute interval lag in purging.)
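That estimate can be reproduced with a quick back-of-the-envelope calculation (the 5-minute purge lag is the assumption stated above; the cache size and low watermark are from the squid.conf quoted earlier):

```python
# Rough reproduction of the bandwidth estimate: the free-space margin
# demanded by cache_swap_low must absorb all new data arriving during
# one purge interval.
cache_dir_gb = 20            # cache_dir ... 20480
free_fraction = 0.50         # 100% - cache_swap_low (50)
purge_interval_s = 5 * 60    # assumed lag before purging catches up

margin_mb = cache_dir_gb * 1024 * free_fraction      # 10240 MB must stay free
mbits_per_s = margin_mb * 8 / purge_interval_s       # MB over 300s -> Mbit/s
print(round(mbits_per_s), "Mbps")                    # roughly 270 Mbps
```

That lands just above the 250Mbps figure in the text, which is why a 50% watermark only makes sense for a very hot feed.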
>>>
>>>
>>>>>> refresh_pattern . 0 20% 1440
>>>>>>
>>>>>>
>>>>>> filesystem is reiserfs with RAID-0. only 11GB used for the cache.
>>>
>>> Used or available?
>>>
>>> cache_dir...20480 = 20GB allocated for the cache.
>>>
>>> 11GB is roughly 50% (cache_swap_low) of the 20GB, so that seems to be
>>> working.
>>>
>>>
>>> The 10MB/GB of RAM usage by the in-memory index is calculated from an
>>> average object size around 4KB. You can check your available RAM roughly
>>> meets Squid needs with: 10MB/GB of disk cache + the size of cache_mem +
>>> 10MB/GB of cache_mem + about 256 KB per number of concurrent clients at
>>> peak traffic. This will give you a rough ceiling.
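Plugging this thread's numbers into that formula gives a rough ceiling (the peak concurrent client count is an assumption, since it is not stated in the thread):

```python
# Amos's rule of thumb with this thread's values:
# 10MB index per GB of disk cache + cache_mem + 10MB per GB of cache_mem
# + ~256KB per concurrent client at peak.
disk_cache_gb = 20       # cache_dir ... 20480
cache_mem_mb = 1536      # cache_mem 1536 MB
peak_clients = 1000      # assumed peak concurrency

ceiling_mb = (10 * disk_cache_gb             # disk cache index
              + cache_mem_mb                 # in-memory object cache
              + 10 * cache_mem_mb / 1024     # cache_mem index
              + 256 * peak_clients / 1024)   # per-client buffers
print(round(ceiling_mb), "MB")               # about 2 GB
```

That is roughly 2GB, consistent with the "shouldn't be more than 3 or 4GB" reading above, and nowhere near the 13GB+ actually being consumed.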
>>>
>>> Amos
>>>
>>
>
Received on Tue Nov 02 2010 - 02:25:21 MDT

This archive was generated by hypermail 2.2.0 : Tue Nov 02 2010 - 12:00:03 MDT