Re: [SQU] setting INCOMING_HTTP_MAX

From: Markus Stumpf <[email protected]>
Date: Wed, 6 Sep 2000 20:36:14 +0200

Hoi Raul,

On Tue, Sep 05, 2000 at 07:12:24PM -0500, Raul Alvarez Venegas wrote:
> Is there any special tool you are using to stress-test squid?. I'm using
> Squid 2.4.DEVEL4

Basically I do the transition of data from our old cache to the new one
in the same step.
I took a recent snapshot of the "log" file (where the list of all the
objects is kept) and used the Unix "split" command to make (in our case)
8 equally sized chunks out of it (xaa - xah).
Then I copied
        xaa xab xac -> host1
        xad xae xaf -> host2
        xag xah -> host3
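
For reference, the split step might look like the sketch below (a
portable approximation; it assumes the snapshot is in a file named
"log" and computes the lines per chunk so that 8 pieces come out):

```shell
# Split the swap log snapshot into 8 roughly equal pieces (xaa, xab, ...).
# Round the per-chunk line count up so no ninth leftover file appears.
lines=`wc -l < log`
per=`expr \( $lines + 7 \) / 8`
split -l $per log
```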
On each host I start the following shell script for each of the files:

------------------------------------------------------------------------
#!/bin/ksh
# Usage: ./stress.sh <chunk-file>
i=0
while read a b c d e url
do
    ( ./client -h put.your.caches.ip.here -p 80 -s "$url" & )
    let i=i+1
    [ 0 = `expr $i % 100` ] && echo -n $i
    echo -n "."

    # start four more background fetches
    for j in 1 2 3 4
    do
        read a b c d e url || break
        ( ./client -h put.your.caches.ip.here -p 80 -s "$url" & )
        let i=i+1
        [ 0 = `expr $i % 100` ] && echo -n $i
        echo -n "."
    done
    sleep 1
done < "$1"
------------------------------------------------------------------------
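
On a host with three chunks you could launch one instance of the script
per file, something like this (a sketch; "stress.sh" is a hypothetical
name for the script above, and the per-file output names are my own):

```shell
# One script instance per chunk file, with the progress dots
# logged per file so the runs can be watched independently.
for f in xad xae xaf
do
    ./stress.sh "$f" > "$f.out" 2>&1 &
done
wait
```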

The "client" program is the one that comes with squid.
The script basically starts 5 background processes that each load
a URL into the cache, then sleeps for 1 second.
So I get about 30 HTTP requests per second (according to the stats
created by the squid vitals scripts).
You can decrease/increase this by starting fewer/more "client" processes
in the "for j" loop.

We have configured the old squid as a sibling to the new one and raised
the icp_query_timeout to 3000. That slows the new one down a bit, but
gives the old (and heavily loaded) one a better chance to answer and
thus increases the SIBLING_HITs.
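
In squid.conf terms the sibling setup on the new cache might look
roughly like this (the peer hostname and ports are placeholders, not
from the original mail; 3128/3130 are squid's usual HTTP/ICP ports,
and icp_query_timeout is in milliseconds):

```
# On the new cache: treat the old cache as a sibling, queried via ICP.
cache_peer old-cache.example.net sibling 3128 3130
# Give the heavily loaded sibling up to 3 seconds to answer ICP queries.
icp_query_timeout 3000
```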

This stress test is kinda unfair to the new one, as it has to fetch nearly
everything either DIRECT or from the old cache. But you can e.g. restart
the script on host3 (which has one file less to process) for one of the
files, with an offset of a few minutes; this should give you a
HIT rate of ... aehm ... 1/9th ;-)

        \Maex

-- 
SpaceNet GmbH             |   http://www.Space.Net/   | Stress is when you wake
Research & Development    | mailto:maex-sig@Space.Net | up screaming and you
Joseph-Dollinger-Bogen 14 |  Tel: +49 (89) 32356-0    | realize you haven't
D-80807 Muenchen          |  Fax: +49 (89) 32356-299  | fallen asleep yet.
--
To unsubscribe, see http://www.squid-cache.org/mailing-lists.html
Received on Wed Sep 06 2000 - 12:41:17 MDT
