[squid-users] faster to not cache large or streamed files?

From: Adam <[email protected]>
Date: Tue, 29 Apr 2003 19:05:19 -0700

Hello again,

We are in discussions about which would be a more effective use of our
2.5STABLE2 Squid cache. We guesstimate that the majority of FTP
downloads, .exe, .mp3, .asf, etc. are files that only one user will
ever access. Therefore it seems to us that there is little advantage in
writing such an item to disk and saving it for some potential future
user. However, there are some file types that do get downloaded a lot
(e.g. the desktops are all Win2K boxes and admins run Microsoft Update
over and over). We have a single 18GB SCSI disk (in a diskpack) on its
own controller, but we've only allocated 7GB of it; the rest of the
disk is, for now, unused:
      cache_dir aufs /cache 707256 16 256
      cache_mem 256 MB

I've tuned the filesystem for "space" and have mounted it with the
noatime and logging mount options. The server has 1GB of RAM, of which
only about 500-600MB is ever used (per top). We bumped up cache_mem
because the extra RAM was available.
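
For concreteness, that tuning amounts to roughly the following (this
assumes a Solaris-style UFS setup; the device name is just a
placeholder):

      # optimize block allocation for space rather than time
      tunefs -o space /dev/rdsk/cXtYdZs0
      # mount with access-time updates off and UFS logging on
      mount -F ufs -o noatime,logging /dev/dsk/cXtYdZs0 /cache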

Additionally, we have selected squid's "heap GDSF" cache replacement
policy so as to get a higher hit rate (at the expense of larger
objects), and we have delay_pools so any streaming stuff gets throttled
to 8KB/s during the day and 16KB/s after hours.
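
The relevant squid.conf pieces look roughly like the sketch below (the
acl names and the 08:00-18:00 window are only illustrative, not our
exact config):

      cache_replacement_policy heap GDSF

      acl streaming urlpath_regex -i \.asf$ \.asx$ \.wmv$ \.rm$ \.mov$
      acl daytime time MTWHF 08:00-18:00

      # pool 1: ~8KB/s aggregate for streaming during the working day
      # pool 2: ~16KB/s aggregate for streaming after hours
      delay_pools 2
      delay_class 1 1
      delay_class 2 1
      delay_parameters 1 8000/8000
      delay_parameters 2 16000/16000
      delay_access 1 allow streaming daytime
      delay_access 1 deny all
      delay_access 2 allow streaming !daytime
      delay_access 2 deny all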

So I guess I am asking: what would be the optimum configuration to
minimize unnecessary disk writes (I/O probably being our main
bottleneck, if we have one)? Let "heap GDSF" do its job and leave the
caching mechanisms in place, or add something like this?:

acl streamorlarge urlpath_regex -i \.zip$ \.asf$ \.tar$ \.asx$ \.wmv$
acl streamorlarge urlpath_regex -i \.mpg$ \.rm$ \.mov$ \.iso$ \.mpeg$
no_cache deny streamorlarge

The idea was to avoid the slowdown of squid having to write those
objects to disk. Or perhaps something else? Just fishing for fresh
ideas/perspectives.
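
Another knob along the same lines (just a sketch; the 32MB figure is
arbitrary) would be to cap the size of objects squid will store at
all, so big one-off downloads never hit the disk in the first place:

      # refuse to cache anything larger than ~32MB
      maximum_object_size 32768 KB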

thanks,

Adam