Re: Slow webserver and request flood

From: Duane Wessels <[email protected]>
Date: Fri, 31 Mar 2000 11:02:19 -0700

On Tue, 28 Mar 2000, Christian Rozsenich wrote:

> We are operating a database-backed website.
> Due to the nature of the SQL queries, some pages take a few seconds
> before the webserver delivers them.
>
> When pages in the Squid cache expire, we experience a request flood on
> the webserver/database: many Squid threads hit the database with the
> same request, because the cache entry has expired and the new page
> takes some time to be calculated.
> Is this a feature or a bug?
> Is there a way to make subsequent threads requesting the same URL
> block until the first thread has the result delivered from the
> webserver and stored in the cache?

This is by design.

Squid doesn't know whether a response is sharable until it has read
the HTTP reply headers. Thus, if 100 requests for a URL come in
at once and the response is slow to arrive, Squid sends 100
requests to the origin server.

One way to speed it up is to have the origin server send back
the headers quickly and then take a long time to send back
the body. If you can do that, it reduces the number of
requests that Squid makes to the origin server.
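A modern sketch of that workaround, written as a Python WSGI generator
(WSGI postdates this thread and is used here only for illustration):
the application emits the status line and headers immediately, then
streams the body once the slow work finishes, so a cache in front can
decide sharability from the headers without waiting for the body:

```python
import time

def slow_report_app(environ, start_response):
    """WSGI sketch: send HTTP headers at once, stream the body later.
    The hypothetical time.sleep stands in for the slow SQL query."""
    start_response("200 OK", [("Content-Type", "text/html")])
    yield b""                    # flush: headers can go out immediately
    time.sleep(0.01)             # stands in for the slow SQL query
    yield b"<html><body>report</body></html>"
```

Whether this helps depends on the server flushing the headers rather
than buffering them until the first non-empty body chunk.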

Duane W.
Received on Fri Mar 31 2000 - 11:05:04 MST

This archive was generated by hypermail pre-2.1.9 : Tue Dec 09 2003 - 16:52:31 MST