To troubleshoot this you need to start collecting some data. The very
first thing is to check your cache.log in case Squid has logged some
extraordinary condition there. If something is logged in cache.log then
it is almost certainly an interesting hint to what is going on.
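For example, assuming the default log location (adjust the path to
match your installation), a quick scan for recent warnings and errors
could look something like:

  grep -E "WARNING|ERROR" /var/log/squid/cache.log | tail -50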
The next thing you need to determine is whether these "broken image"
requests reached Squid at all and got logged in access.log, or if they
were caused by communication errors preventing the requests from
reaching Squid.
To determine this you need to isolate the specific page request in
access.log and then see if the missing images are logged shortly after
it from the same user, or if there are any other access.log entries
from that user which may be related.
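A minimal sketch of this, assuming the client's IP address is 10.0.0.1
(a placeholder) and the default log location:

  grep 10.0.0.1 /var/log/squid/access.log | less

The native access.log is already in chronological order (the first
column is a Unix timestamp), so any image requests should show up
right after the page request itself.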
If the requests are logged then look at how they were logged. The
interesting columns are (see the annotated example below):
* The URL
* The Squid result code (TCP_MISS/HIT/...)
* The reply size
* The time it took to deliver
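For reference, here is a made-up entry in Squid's native access.log
format with those columns pointed out:

  1046100000.123   2345 10.0.0.1 TCP_MISS/200 14382 GET http://example.com/img/logo.gif - DIRECT/192.0.2.5 image/gif

The second column (2345) is the time it took to deliver in
milliseconds, the fourth (TCP_MISS/200) is the Squid result code plus
HTTP status, the fifth (14382) is the reply size in bytes, and the
seventh is the URL.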
If the requests never got recorded in Squid's access.log then the
errors are likely caused by communication problems. Here you can try
disabling persistent client connections, among other actions.
Increasing the read timeout may also help, but to get it corrected
properly, further analysis of the actual traffic is required.
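A sketch of the relevant squid.conf directives; the values here are
only illustrative, not tuned recommendations:

  # disable persistent connections on the client side
  client_persistent_connections off
  # and, if needed, on the server side as well
  server_persistent_connections off
  # raise the read timeout from its 15 minute default
  read_timeout 30 minutes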
The very best method to diagnose what really happened is to try to
catch the error in a network dump of the traffic. If you can set up a
mirror port on your switch then this can be done completely
non-intrusively on another box; if not, you need to save the packet
traces on the proxy server itself, which adds a bit of CPU and disk
usage.
To save packet traces use "tcpdump -s 1600 -w packets.pcap".
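If the full trace grows too fast, you can narrow the capture already
at this stage, for example (eth0 and port 3128 are assumptions; use
your actual interface and http_port):

  tcpdump -i eth0 -s 1600 -w packets.pcap port 3128 or port 80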
To analyse the packet traces, use "ngrep -I packets.pcap" with
suitable filters to identify the session once you know you have
captured the event in a trace.
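For example, to find the session fetching a particular image (the URL
path and client IP here are placeholders):

  ngrep -I packets.pcap "GET /img/logo.gif" host 10.0.0.1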
To filter down a pcap dump into more manageable pieces, like all
traffic relating to client IP X, use "tcpdump -r packets.pcap -w
filtered_packets.pcap host ip.of.client" or a similar filter.
Regards
Henrik
Brian Kondalski wrote:
>
> It's really hard to attach a log entry to these users as I have no control
> over them (this is a high traffic website pushing terabytes of data
> in a month) and at any time I have 4000+ people hitting the site, all of
> which I do not know or have access to.
>
> What should I look for in the log files? I normally have access.log off
> as it gets way too large quickly.
>
> Brian
>
> On Mon, Feb 24, 2003 at 11:39:56AM +0100, Henrik Nordstrom wrote:
> > Sat 2003-02-22 at 21.01, Brian Kondalski wrote:
> >
> > > Does any one have any ideas on what I can look at to trouble shoot this problem further?
> > > What can I ask the users? I have been unable to reproduce this problem at all on
> > > local browsers (various types, various OSes). Any help or clues here would be greatly
> > > appreciated. Currently we are only using squid to cache a portion of the website and
> > > I want to turn it loose on it all. But until I solve this problem it's hard to justify
> > > doing this.
> >
> > Do you get anything strange in access.log in response to the requests
> > from these users? Pay specific attention to the HTTP status codes.
> >
> > --
> > Henrik Nordstrom <hno@squid-cache.org>
> > MARA Systems AB, Sweden