On 13.10 13:45, Brennon Church wrote:
> Has anyone here come across problems when using squid to cache larger
> files? I know about the upper limit on the size of objects when using
> Squid 2.x, and I have the maximum_object_size set to 1048576KB (1G), so
> that shouldn't be a concern. I'm coming across two problems:
The maximum object size with Squid 2.5 is 2 GB - 1 byte. However, I'm not sure
how you can set such a value, other than by using 2147483647 (bytes) or 2097151 KB.
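For reference, a minimal squid.conf sketch of the limit discussed above. The values follow from 2^31 - 1 bytes being the 2.5 ceiling (2097151 KB is the largest whole-KB value that still fits under it); whether your build accepts the bytes form without a unit may vary, so the KB form is the safer assumption:

```
# squid.conf (sketch) -- raise the per-object cap to the Squid 2.5 maximum.
# 2097151 KB = 2147482624 bytes, just under the 2^31 - 1 byte ceiling.
maximum_object_size 2097151 KB
```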
> 2) After those times where a larger file (again, 600M or so) succeeds,
> the object is there, and I'm able to download the file again from the
> cache rather than directly from the site. Shortly afterwards, however,
> the object is overwritten by something else. I am using the ufs cache
> type, and it's been given 10 Gigs of space, plenty for the tests I'm
> running. I've also upped the sub directories to 256 and 256, so there
> should be plenty of object "placeholders" available. In fact, when I
> look in the cache directories only the first few subdirectories within
> the first 00 directory are being used.
It's probably because your cache has filled up and the file has been 'expired'
(evicted). You should probably increase your cache size, not the number of
files it can store. Using the heap LFUDA replacement policy should also help.
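A sketch of the two suggestions above in squid.conf form. The 10 GB size and the 256/256 directory split come from the original message; the cache path is an assumption, and heap LFUDA is only available if Squid was built with removal-policy support:

```
# squid.conf (sketch) -- /var/spool/squid is an assumed path, adjust to yours.
# 10240 MB cache, 256 first-level and 256 second-level subdirectories.
cache_dir ufs /var/spool/squid 10240 256 256

# Heap-based LFUDA keeps large, popular objects in the cache longer than
# the default LRU. Requires a build with --enable-removal-policies=heap.
cache_replacement_policy heap LFUDA
```

LFUDA (Least Frequently Used with Dynamic Aging) favors frequently-hit objects regardless of size, which is why it helps keep a popular 600 MB object from being evicted by churn of small objects.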
--
Matus UHLAR - fantomas, [email protected] ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
"They say when you play that M$ CD backward you can hear satanic messages."
"That's nothing. If you play it forward it will install Windows."

Received on Wed Oct 13 2004 - 15:08:13 MDT