RE: [squid-users] Cache that will not grow in size

From: Mark Engels <[email protected]>
Date: Mon, 30 Apr 2012 06:44:50 +0000

Thanks for the reply, Eliezer.

I've had a read through the more readable explanation, and it definitely was more readable. I think I'll need to re-read the percentage field a few more times before I grasp it completely (rather busy here). However, I've taken your advice on board and modified the values to read 8640 90% 43800. I also added the ipa and dmg file extensions to the original flv pattern, as they were among our users' more frequent requests. I had originally copied the values from a blog somewhere on the net, so I wasn't aware there was a maximum value.
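For reference, the combined pattern now reads roughly as follows (the trailing options carried over from the original flv line):

refresh_pattern -i \.(flv|ipa|dmg)$ 8640 90% 43800 ignore-no-cache override-expire ignore-private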

Tracking our users' web history per site is rather tricky: we have 1,300 users consuming approximately 700 GB per month, and the web filters change constantly, so what shows as a high-use site may already be blocked by the time I go to change any refresh patterns. That said, I get fairly good reports from our ISA server, which sits one step up.

Would you have any other suggestions?


-----Original Message-----
From: Eliezer Croitoru
Sent: Friday, 27 April 2012 4:13 PM
On 27/04/2012 08:37, Mark Engels wrote:
> Hello everyone,
>
> I've been working on a Squid cache appliance for a few weeks now (on and off) and things appear to be working. However, I have an issue where the cache simply refuses to grow. My first attempt had the cache stall at 2.03 GB, and with this latest build it stalls at 803 MB.
>
> I haven't a clue where to go or what to look at to determine what could be wrong, and I'm hoping you could be of assistance ☺ Also, any tips for better performance or improved caching would be greatly appreciated. (Yes, I have googled, and I think I've applied what I could, but it's a little over my head a few weeks in, with no deep Linux experience.)
>
>
> Some facts:
>
> I've been determining the cache size with the following command: du -hs /var/spool/squid
> Squid is version 3.1.10, running on a CentOS 6.2 machine.
> CentOS runs in a Hyper-V virtual machine with integration services installed.
> The VM has 4 GB RAM and a 60 GB HDD allocated.
> Squid acts as a cache/error-page handler box only. The main proxy sits one step downstream, with Squid set up in a "T" network (the main cache can skip Squid and go direct to the net if Squid falls over on me; a Hyper-V issue).
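> As a cross-check on du, Squid's cache manager can report its own view of the store. A minimal sketch, assuming squidclient is installed and Squid is listening on port 8080:
>
> squidclient -p 8080 mgr:info | grep -i "storage swap size"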
>
>
> Config file:
>
> acl downstream src 192.168.1.2/32
> http_access allow downstream
>
> cache_mgr protectedemail_at_moc.sa.edu.au
>
> < all the standard acl rules here>
>
> http_access allow localnet
> http_access allow localhost
> http_access deny all
>
> # Squid normally listens to port 3128
> http_port 8080
>
> # We recommend you to use at least the following line.
> hierarchy_stoplist cgi-bin ?
>
> # Uncomment and adjust the following to add a disk cache directory.
> cache_dir ufs /var/spool/squid 30000 16 256
>
> # Leave coredumps in the first cache dir
> coredump_dir /var/spool/squid
>
> # Change maximum object size
> maximum_object_size 4 GB
>
> # Define max cache_mem
> cache_mem 512 MB
>
> #Lousy attempt at youtube caching
> quick_abort_min -1 KB
> acl youtube dstdomain .youtube.com
> cache allow youtube
>
From the refresh patterns below, it seems you might not quite understand the meaning of the pattern syntax and options.
The first thing I suggest is to look at:
http://www.squid-cache.org/Doc/config/refresh_pattern/
A more "readable" place is:
http://etutorials.org/Server+Administration/Squid.+The+definitive+guide/Chapter+7.+Disk+Cache+Basics/7.7+refresh_pattern/
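In short, each line has the form "refresh_pattern [-i] regex min percent max [options]", with min and max given in minutes. A commented sketch (the pattern and the numbers here are purely illustrative):

refresh_pattern -i \.jpg$ 1440 50% 10080
# min 1440:   absent explicit expiry info, a matching object younger than
#             1 day is served as fresh
# percent 50: beyond min, the object stays fresh until its age exceeds
#             50% of the time since it was last modified
# max 10080:  past 1 week the object is always considered stale and revalidated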

Try to read it once or twice so you will know how to benefit from it.
Also try to read some info about caching here:
http://www.mnot.net/cache_docs/
And a tool that will help you analyze pages for cacheability is REDbot:
http://redbot.org/

There is a maximum time that an object can stay in the cache, since it's a cache server and not a hosting service.
If I remember right, the maximum is 365 days (525,600 minutes in total), so it's useless to use "999999" as the maximum freshness time for an object.
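As a sketch, your flv line capped at that real maximum would read something like this (keeping the options from your original line):

refresh_pattern -i \.flv$ 10080 90% 525600 ignore-no-cache override-expire ignore-private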
If you want to cache YouTube videos, you're missing a little bit of the knowledge needed for that, so just start with basic caching tweaks.
You should also check your users' browsing habits in order to get maximum cache efficiency; until you have solid caching goals, you won't need to shoot so hard.
One very good tool for analyzing your users' habits is "sarg".
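For example, a minimal sarg run might look like this (assuming the usual log location and a web-served output directory; adjust both paths for your system):

sarg -l /var/log/squid/access.log -o /var/www/html/squid-reports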

If you need some help, I can assist you with it.
Just as an example, this site: http://djmaza.com/ is heaven for a caching proxy server, but until you analyze it you won't know what to do with it.
You can see how the page is built at this link:
http://redbot.org/?descend=True&uri=http://djmaza.com/

Regards,
Eliezer

> # Add any of your own refresh_pattern entries above these.
> refresh_pattern -i \.flv$ 10080 90% 999999 ignore-no-cache override-expire ignore-private
> refresh_pattern -i \.(gif|png|jpg|jpeg|ico|bmp)$ 40320 90% 40330
> refresh_pattern -i \.(iso|avi|wav|mp3|mp4|mpeg|swf|x-flv|mpg|wma|ogg|wmv|asx|asf|dmg|zip|exe|rar)$ 40320 90% 40330
> refresh_pattern ^ftp: 1440 20% 10080
> refresh_pattern ^gopher: 1440 0% 1440
> refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
> refresh_pattern . 0 20% 4320
