Hi Eliezer,
Thanks a lot for your detailed message.
It's good to know that L1 and L2 are only there to avoid reaching the file system's limit on files per directory.
I think I need to redesign the whole thing from the bottom up as you outlined. Balancing the load between two Squid nodes that use the hardware efficiently is better than one Squid node that cannot make use of all the available resources. Using WCCP to redirect traffic and balance the load looks like a good choice and has worked perfectly for me so far; it also gives good reliability in case one node goes down.
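For anyone reading this later, the Squid side of a WCCPv2 setup along
those lines looks roughly like the snippet below. The router address is
only a placeholder and the router itself needs matching "ip wccp"
configuration, so treat it as a sketch rather than my exact config:

    # squid.conf - register with the core router over WCCPv2 (example address)
    wccp2_router 192.0.2.1
    wccp2_forwarding_method gre
    wccp2_return_method gre
    # standard web-cache service group (HTTP port 80)
    wccp2_service standard 0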
Thanks once again for your support.
Best regards,
Firas
----- Original Message -----
From: Eliezer Croitoru <eliezer_at_ngtech.co.il>
To: squid-users_at_squid-cache.org
Cc:
Sent: Tuesday, September 3, 2013 1:29 AM
Subject: Re: [squid-users] Number of Objects Inside aufs Cache Directory
Hey there,
Since Squid holds an internal DB, no lookup needs to be done at the
file system level (such as "ls | grep file_name") in order to find a
file and fetch all its details from the inode.
The L1 and L2 levels are there so you do not reach the FS limit of
files per directory. Say that with a single directory level you have a
limit of 65k files on an ext FS; with an L1 > L2 > files layout you
reach a much higher upper limit. Instead of a 1 x 65k limit you get
128 x 256 x 65k, which is about 2,129,920,000.
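The arithmetic is easy to check from a shell (the 65k-per-directory
figure is only a rough ext example, not a hard specification):

    # 128 L1 dirs x 256 L2 dirs x ~65k files per leaf directory
    $ echo $((128 * 256 * 65000))
    2129920000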
The FS by itself won't be the limit; the CPU, RAM and so on will be.
I have seen a comparison between XFS and ext4, and to me XFS shows
that there is a limit to what you can expect a FS to do. GlusterFS
also has a very smart way of handling some of this, with hash-based
distribution of files over a couple of nodes.
Using 15k SAS drives you can see there is a limit to the speed the
HDD can deliver, but when you have enough RAM and CPU you can let the
OS handle both the current request and the next reads/writes that are
scheduled for the disk. In any case there is a limit to how much I/O
one CPU and one HDD can handle at the same time.
When a RAID system is used and it has more RAM than the local system,
I can understand using such a RAID device, but as I said before, a
Squid deployment should be taken one step at a time, and a
routing-based load balancer should be used to distribute the load
between a couple of systems.
Now let's build the system up from layer 1 to layer 7.
Layer 1 would be the copper, where the limit is 1 Gbps, or 10 Gbps in
the optical case.
I would assume you have a core router that needs to know about the
load of each instance periodically (every 30 seconds or so).
A Juniper router can handle that with a 600-800 MHz CPU while doing
routing only; a Linux server with a 2-3 GHz CPU can take a bit more
than that Juniper can, if designed right.
A small keepalived setup and some load-balancer magic on any of the
enterprise-class OSes would do the trick in a basic routing mode.
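As a very rough illustration of the keepalived part (interface name,
VRID and the VIP are placeholders, not a recommendation for your
network):

    vrrp_instance CACHE_VIP {
        state MASTER              # BACKUP on the second LB
        interface eth0            # client-facing interface (example)
        virtual_router_id 51
        priority 150
        advert_int 1
        virtual_ipaddress {
            192.0.2.10/24         # example VIP the clients use as gateway
        }
    }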
Once layers 2-3 are up you can work on layer 4 and above, which is
where Squid lives.
From the load balancer to the proxies, create a small internal
network and make sure the traffic is marked on the way in and out, so
that the LB sends the egress traffic to the edge and not back to the
clients.
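Something along these lines with iptables and policy routing on the
LB would do the marking; the subnet, mark value and gateway are
examples only:

    # mark traffic coming back from the proxy subnet (example subnet)
    iptables -t mangle -A PREROUTING -s 10.10.10.0/24 -j MARK --set-mark 2
    # send marked (egress) traffic towards the edge router, not the clients
    ip rule add fwmark 2 table 100
    ip route add default via 192.0.2.1 dev eth1 table 100   # example edge gateway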
Now think it over and add TPROXY to the proxies step by step, so that
the system ends up as 1-2 LBs that can take the full network load,
with each proxy taking only its 1-2 Gbps share of the balanced
traffic.
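The TPROXY step on each proxy box usually ends up close to the
commonly documented recipe below; the port and mark values are
examples:

    # squid.conf - interception port using TPROXY
    http_port 3129 tproxy

    # iptables/iproute2 on the proxy host
    iptables -t mangle -N DIVERT
    iptables -t mangle -A DIVERT -j MARK --set-mark 1
    iptables -t mangle -A DIVERT -j ACCEPT
    iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
    iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY \
        --tproxy-mark 0x1/0x1 --on-port 3129
    ip rule add fwmark 1 lookup 100
    ip route add local 0.0.0.0/0 dev lo table 100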
It is nice to have one server with 64 or 128 cores, but the OS needs
to know about and use all those resources in such a way that the
application handles only layers 4-7 and leaves all the rest to the
kernel.
For now that is a dream, yet Tier-1, Tier-2 and other providers use
Squid and are happy with it, so it's not the software that is to blame
when something doesn't work as expected.
Regards,
Eliezer
On 09/03/2013 12:45 AM, Golden Shadow wrote:
> Hi Amos,
> Thanks for your reply.
>
> I've read the "few hundreds" thing in "Squid: The Definitive Guide" by Duane Wessels, and I think his recommendation relates more to Squid's performance than to file system constraints. I quoted the following from the book:
>
> "Some people think that Squid performs better, or worse, depending on the particular values for
> L1 and L2. It seems to make sense, intuitively, that small directories can be searched faster
> than large ones. Thus, L1 and L2 should probably be large enough so that each L2 directory has
> no more than a few hundred files."
>
> Best regards,
> Firas
>
> ________________________________
> From: Amos Jeffries <squid3_at_treenet.co.nz>
> To: squid-users_at_squid-cache.org
> Sent: Monday, September 2, 2013 3:33 AM
> Subject: Re: [squid-users] Number of Objects Inside aufs Cache Directory
>
>
> On 2/09/2013 8:40 a.m., Golden Shadow wrote:
>> Hello there!
>>
>> I've read that the number of first level and second level aufs subdirectories should be selected so that the number of objects inside each second level subdirectory is no more than a few hundred.
>
> Not sure where that came from. "few hundreds" sounds like advice for
> working with FAT-16 formatted disks - or worse. Most modern
> filesystems can handle several thousand easily.
>
> The real reason for these parameters is that some filesystems start
> producing errors. For example: Squid stores 2^24 objects in its cache_dir,
> but FAT16 fails with more than 2^14 files in one directory, and IIRC
> ext2 and/or ext3 start giving me trouble around 2^16 files in one
> directory. I'm not sure about other OSes, I've not hit their limits myself.
>
>> In one of the subdirectories of the following cache_dir on my 3.3.8 squid, there are more than 15000 objects! In other subdirectories there are ZERO objects, is this normal?!
>
> Yes. It depends entirely on how many objects are in the cache. The
> earlier fileno entries fill up first, so the directories those filenos
> map to will show lots of files while later ones do not.
>
> NOTE: when changing these L1/L2 values the entire cache_dir fileno->file
> mapping gets screwed up. So you need to erase the contents and use an
> empty location/directory to build the new structure shape inside.
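> A minimal sequence for that rebuild might look like this; the path is
> just the one from your cache_dir line, so adjust as needed:
>
>     squid -k shutdown
>     # edit squid.conf with the new L1/L2 values, pointing at an empty directory
>     rm -rf /mnt/cachedrive2/small/*
>     squid -z        # builds the new L1/L2 directory structure
>     squid           # start squid again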
>
>> cache_dir aufs /mnt/cachedrive2/small 250000 64 256 min-size=32000 max-size=200000
>
> If you want to increase that spread you can change the 64 to 128. It
> should halve the number of files in the fullest directory.
> With 250GB storing 32KB-200KB sized objects you are looking at a total
> of between 1,310,720 and 8,192,000 objects in that particular cache.
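> For reference the adjusted line would be something like the one below,
> and the object-count figures come straight from dividing 250 GB by the
> max/min object sizes:
>
>     cache_dir aufs /mnt/cachedrive2/small 250000 128 256 min-size=32000 max-size=200000
>     # 250 GB / 200 KB ~= 1,310,720 objects; 250 GB / 32 KB ~= 8,192,000 objects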
>
> Amos
>