Re: [squid-users] Ignoring query string from url

From: Amos Jeffries <squid3_at_treenet.co.nz>
Date: Sat, 01 Nov 2008 17:26:11 +1300

nitesh naik wrote:
> Henrik,
>
> The url rewrite helper script works fine for a few requests (100 req/sec)
> but the response slows down as the number of requests increases, and it
> takes 10+ seconds to deliver the objects.
>
> Is there a way to optimise it further?
>
> url_rewrite_program /home/zdn/bin/redirect_parallel.pl
> url_rewrite_children 2000
> url_rewrite_concurrency 5
>
> Regards
> Nitesh

Perhaps increase the concurrency and reduce the children?
Each child takes up a memory footprint and CPU time to switch between,
traded against a small wait in the batch queue for some objects. You
should see a better marginal speed gain with maybe 20 concurrent
requests and 500 children.

But check the stats on how many children are actually being used, so you
can reduce that number further and cut the absolute resource usage.
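
Something like the following rearrangement of the directives you posted is
what I have in mind (only a rough sketch; the exact numbers still need
tuning against your real load):

  url_rewrite_program /home/zdn/bin/redirect_parallel.pl
  url_rewrite_children 500
  url_rewrite_concurrency 20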

Amos

>
> On Thu, Oct 30, 2008 at 3:16 PM, nitesh naik <niteshnaik_at_gmail.com> wrote:
>> There was a mistake on my part; I should have used the following script to
>> process concurrent requests. It's working properly now.
>>
>> #!/usr/bin/perl -an
>> BEGIN { $|=1; }        # unbuffered output so Squid gets each reply at once
>> $id = $F[0];           # concurrency channel ID
>> $url = $F[1];          # requested URL
>> $url =~ s/\?.*//;      # strip the query string
>> print "$id $url\n";    # hand the rewritten URL back on the same channel
>> next;
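>>
>> For example (made-up URL and client details), with url_rewrite_concurrency
>> set Squid sends the helper lines prefixed with a channel ID, such as
>>
>>   0 http://www.example.com/file.ext?session=42 192.0.2.1/- - GET
>>
>> and the script above answers
>>
>>   0 http://www.example.com/file.ext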
>>
>> Regards
>> Nitesh
>>
>> On Thu, Oct 30, 2008 at 12:15 PM, nitesh naik <niteshnaik_at_gmail.com> wrote:
>>> Henrik,
>>>
>>> With this approach I see that only one redirector process is being
>>> used and requests are processed in serial order. This causes a delay in
>>> serving the objects, and even the response for cached objects is slower.
>>>
>>> I tried changing url_rewrite_concurrency to 1 but with this setting
>>> squid is not caching the object. I guess I need to use a url rewrite
>>> program which will process requests in parallel to handle the load of
>>> 5000 req/sec.
>>>
>>> Regards
>>> Nitesh
>>>
>>> On Mon, Oct 27, 2008 at 5:18 PM, Henrik Nordstrom
>>> <henrik_at_henriknordstrom.net> wrote:
>>>> See earlier response.
>>>>
>>>> On mån, 2008-10-27 at 16:59 +0530, nitesh naik wrote:
>>>>> Henrik,
>>>>>
>>>>> What if I use the following code? The logic is the same as in your program.
>>>>>
>>>>>
>>>>> #!/usr/bin/perl
>>>>> $|=1;
>>>>> while (<>) {
>>>>>     s|(.*)\?(.*$)|$1|;
>>>>>     print;
>>>>>     next;
>>>>> }
>>>>>
>>>>> Regards
>>>>> Nitesh
>>>>>
>>>>> On Mon, Oct 27, 2008 at 4:25 PM, Henrik Nordstrom
>>>>> <henrik_at_henriknordstrom.net> wrote:
>>>>>> Sorry, I forgot the following important line in both scripts:
>>>>>>
>>>>>> BEGIN { $|=1; }
>>>>>>
>>>>>> It should be inserted as the second line in each script (just after the
>>>>>> #! line); without it Perl buffers its output, so Squid does not see the
>>>>>> helper's replies promptly.
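>>>>>>
>>>>>> i.e. the first example would then begin (the rest is unchanged from the
>>>>>> script quoted below):
>>>>>>
>>>>>>   #!/usr/bin/perl -an
>>>>>>   BEGIN { $|=1; }
>>>>>>   $id = $F[0];
>>>>>>   ...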
>>>>>>
>>>>>>
>>>>>> On mån, 2008-10-27 at 11:48 +0100, Henrik Nordstrom wrote:
>>>>>>
>>>>>>> Example script removing query strings from any file ending in .ext:
>>>>>>>
>>>>>>> #!/usr/bin/perl -an
>>>>>>> $id = $F[0];               # concurrency channel ID
>>>>>>> $url = $F[1];              # requested URL
>>>>>>> if ($url =~ m#\.ext\?#) {
>>>>>>>     $url =~ s/\?.*//;      # strip the query string
>>>>>>>     print "$id $url\n";    # return the rewritten URL
>>>>>>>     next;
>>>>>>> }
>>>>>>> print "$id\n";             # no rewrite for anything else
>>>>>>> next;
>>>>>>>
>>>>>>>
>>>>>>> Or if you want to keep it real simple:
>>>>>>>
>>>>>>> #!/usr/bin/perl -p
>>>>>>> s%\.ext\?.*%.ext%;
>>>>>>>
>>>>>>> but it doesn't illustrate the principle as well, and it causes a bit
>>>>>>> more work for Squid (though not much).
>>>>>>>
>>>>>>>> I am still not clear on how to write a
>>>>>>>> helper program which will process requests in parallel using perl. Do
>>>>>>>> you think squirm with 1500 child processes works differently
>>>>>>>> compared to the solution you are talking about?
>>>>>>> Yes.
>>>>>>>
>>>>>>> Regards
>>>>>>> Henrik

-- 
Please be using
   Current Stable Squid 2.7.STABLE5 or 3.0.STABLE10
   Current Beta Squid 3.1.0.1
Received on Sat Nov 01 2008 - 04:26:16 MDT
