RE: [squid-users] Changing cache location

From: Lightfoot.Michael <[email protected]>
Date: Tue, 11 Mar 2003 14:21:51 +1100

> -----Original Message-----
> From: Gary Hostetler [mailto:whostet@nccvt.k12.de.us]
> Sent: Tuesday, 11 March 2003 11:21 AM
> To: squid-users@squid-cache.org
> Subject: [squid-users] Changing cache location
>
>
> I want to change my cache from the default squid location to
> /home/cache, where the partition size is 31 gigs, and I want to
> reduce the cache size from 25 gigs to 21 gigs. When I tried
> it, it failed. When I set it back to the default it wouldn't
> work until I put the original cache size back in, which was 25
> gigs. I want to reduce it to 21 gigs as recommended by some
> people (70% of the partition) and set it to /home/cache/. Did I
> need to delete the default cache directory, and how will that
> affect the swap file there?
>
The simplest approach is to blow away the current cache after changing
squid.conf and restart it all with squid -z. Others may confirm this,
but I suspect that you can't change the name of the cache directory and
preserve the existing cache unless you delete the swap.state file and
let squid rebuild it by reading every cache directory.
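
For what it's worth, a rough sketch of that rebuild sequence follows
(the old default cache location shown here is only an example; adjust
the paths to your own install and Squid version):

  squid -k shutdown                   # stop squid
  # edit squid.conf so cache_dir points at the new location and size
  rm -rf /usr/local/squid/cache/*     # or wherever the old cache_dir lived
  squid -z                            # create the new cache directory tree
  squid                               # start squid again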

You can reduce your cache size by changing the size in squid.conf and
then waiting until squid unlinks enough objects to bring the cache down
to the new limit. This naturally takes some time; squid will not simply
start deleting objects wholesale, it will gradually adjust itself to the
new regime.
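
For reference, the figure squid works to is the third field of the
cache_dir line, in megabytes, so capping the cache at roughly 21GB on
/home/cache would look something like this (the numbers are
illustrative):

  cache_dir ufs /home/cache 21000 16 256

If I remember correctly, squid keeps usage between the cache_swap_low
and cache_swap_high watermarks (percentages of that figure), which is
why the shrink happens gradually rather than all at once.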

The relationship between partition size and cache size is complex. If
you are using Berkeley-style filesystems such as UFS (on Unixes such as
Solaris) or ext2 (on Linux), the cache filesystems will gradually
fragment, with free fragments slowly taking over the free list until
the number of free blocks drops to zero. This can be mitigated by using
only 70-80% of the filesystem space for cache files. I have my current
caches set to 80% of the space (on Solaris UFS filesystems) and monitor
them each night with a script that runs fsck -n and emails me the last
line:

474247 files, 5062380 used, 929041 free (34209 frags, 111854 blocks,
0.5% fragmentation)
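
The script itself is not reproduced here; a minimal sketch of that sort
of nightly check (the device name and address are made up, and fsck
syntax varies between Unixes) would be something like:

  #!/bin/sh
  # read-only fragmentation report on the cache filesystem
  FS=/dev/rdsk/c0t1d0s0
  fsck -n $FS 2>/dev/null | tail -1 | \
      mailx -s "cache fragmentation report" admin@example.com

run from cron once a night.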

In the output above the fragmentation is not causing a problem at the
moment, but something may have to be done in a month or so. At that
time I will tar the whole thing onto tape, rebuild the filesystem and
tar it all back.
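
That offline rebuild is along these lines (again only a sketch; the
device, tape and mount point are examples, and squid has to be stopped
for the duration):

  squid -k shutdown
  cd /cache && tar cf /dev/rmt/0 .
  cd / && umount /cache
  newfs /dev/rdsk/c0t1d0s0
  mount /cache
  cd /cache && tar xf /dev/rmt/0
  squid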

If you are using an extent-based filesystem such as VxFS (the Veritas
Filesystem on Solaris and a number of other commercial Unixes), you do
not suffer from fragmentation nearly as much, and you have the
advantage of being able to defragment the filesystem while it is
mounted. At a previous site I recommended that all our top-level proxy
caches be changed to VxFS for this very reason (they were all 8GB x 4
except one, which was 8GB x 2.)
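
From memory, the online defragmentation on VxFS is done with fsadm
while the filesystem stays mounted; the exact path and options depend
on the VxFS release, but it is along the lines of:

  fsadm -F vxfs -E /cache       # report extent fragmentation
  fsadm -F vxfs -e -d /cache    # reorganise extents and directories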

Michael Lightfoot
Unix Consultant
ISG Host Systems
Comcare
+61 2 62750680
Apologies for the rubbish that follows...
------------------------------------------------------------------------
NOTICE: This e-mail message and attachments may contain confidential
information. If you are not the intended recipient you should not use or
disclose any information in the message or attachments. If received in
error, please notify the sender by return email immediately. Comcare
does not waive any confidentiality or privilege.
Received on Mon Mar 10 2003 - 20:21:59 MST
