RE: [squid-users] Caching in front of webserver

From: Aldo S. Lagana <[email protected]>
Date: Wed, 30 Jan 2002 16:36:32 -0500

A few other options:

1 - run multiple web servers on a DMZ, get them to replicate content to
one another (using rsync or Windows replication), and use DNS
round-robin for the public addresses.

2 - run a load-balancing server in front of the multiple web servers
(LVS for Linux) and again use rsync or Windows replication to keep the
content in sync across the web servers. This method lets you get by
with a single public IP address, but it is more complicated and
requires an additional server to act as the load balancer. Rough
sketches of both setups follow below.
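
A rough sketch of what both setups could look like; the hostnames and
addresses are made up for illustration (192.0.2.x and 203.0.113.x are
documentation ranges):

    # Option 1: push the master's document root to a second box with
    # rsync (e.g. from cron), and publish one A record per server so
    # DNS hands out the addresses round-robin.
    rsync -az -e ssh --delete /var/www/ web2.example.com:/var/www/

    ; zone file fragment for example.com (BIND): two A records for the
    ; same name give you round-robin for free
    www    IN  A   192.0.2.10
    www    IN  A   192.0.2.11

    # Option 2: LVS with ipvsadm in NAT mode - one public VIP, two real
    # servers behind it, content still kept in sync with rsync as above.
    ipvsadm -A -t 203.0.113.1:80 -s rr
    ipvsadm -a -t 203.0.113.1:80 -r 192.168.0.10:80 -m
    ipvsadm -a -t 203.0.113.1:80 -r 192.168.0.11:80 -m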

aldo

-----Original Message-----
From: Eric Persson [mailto:eric@persson.tm]
Sent: Wednesday, January 30, 2002 4:25 PM
To: squid
Subject: [squid-users] Caching in front of webserver

Hi !

I have a webserver which is under heavy load, and I have already done
quite a lot to optimize it. I heard from a friend that it is possible to
lighten the load on a webserver by putting a cache like Squid in front
of it, so that all visitors talk to Squid and Squid talks to the
webserver. Is the above true? For the rest of this mail I will assume it is.

I plan to set this up with Squid on port 80 and Apache on port 81, maybe
bound to the lo interface.
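
For reference, a minimal sketch of that kind of accelerator setup using
the Squid 2.x httpd_accel_* directives; the loopback address and the
Apache 1.3 lines are just one way to arrange it:

    # squid.conf: Squid listens on port 80 and forwards cache misses
    # to the real Apache listening on 127.0.0.1:81
    http_port 80
    httpd_accel_host 127.0.0.1
    httpd_accel_port 81
    httpd_accel_single_host on
    httpd_accel_uses_host_header on

    # httpd.conf (Apache 1.3): move Apache off port 80, onto loopback
    Port 81
    Listen 127.0.0.1:81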

So, my questions are the following:

Will this work at all?

How does Squid handle query strings? On this site, almost all traffic
goes through index.php, which returns different pages depending on the
query string. Can Squid handle this?
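
Worth noting: the stock squid.conf of that era ships with an ACL that
refuses to cache any URL containing a '?', so a site driven entirely by
index.php?... gets proxied but not cached unless those lines are relaxed
and the PHP side sends explicit freshness headers (Expires or
Cache-Control). The relevant default lines look like this:

    # default squid.conf: anything with cgi-bin or '?' in the path is
    # never cached; comment these out (or narrow the regex) if the
    # index.php output is safe to cache
    acl QUERY urlpath_regex cgi-bin \?
    no_cache deny QUERY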

How will cookies sent by the webserver be handled? Can a login function
on the webserver be handled correctly through Squid?
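
One common pattern here, independent of Squid itself: have the login and
other per-user pages declare themselves uncacheable, so the proxy always
passes them through to Apache while the anonymous pages stay cacheable.
Those responses would carry headers roughly like these (the cookie value
is a placeholder):

    HTTP/1.1 200 OK
    Cache-Control: private, no-cache
    Set-Cookie: PHPSESSID=placeholder; path=/
    Content-Type: text/html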

Thanks for your time..

        Eric

-- 
[ eric persson | eric@persson.tm | www.persson.tm ]