RE: [squid-users] log server

From: George Hong <[email protected]>
Date: Sun, 22 Aug 2004 08:17:52 +0800

>
> > I have several squid cache servers set up as reverse proxies for a
> > large website. We need to provide a single log file every day.
> > Instead of spending hours combining several huge access.log files
> > into one, I'm wondering whether I can use a log server and point the
> > log file to it so that the log file is already well organized. If I
> > want to implement it, where should I start?
>
> I would merge the logs after the fact. As each logfile is already
> sorted, the process of merging the logs is very lightweight apart from
> the disk I/O in reading/writing the log data. (sort -m)
>
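As a sketch of that approach (assuming the default native access.log format, whose first field is a UNIX timestamp, and with made-up sample data standing in for the real per-server logs):

```shell
#!/bin/sh
# Merge already-sorted Squid access logs from several cache servers into
# one chronologically ordered file. The first field of the native
# access.log format is a UNIX timestamp, so a numeric merge on field 1
# preserves time order. The sample lines below are hypothetical.
printf '1093132672.100 GET /a\n1093132675.300 GET /c\n' > access.log.cache1
printf '1093132673.200 GET /b\n1093132676.400 GET /d\n' > access.log.cache2
sort -m -n -k1,1 access.log.cache1 access.log.cache2 > access.log.merged
```

Because `sort -m` only merges inputs that are each already sorted, it streams the files rather than re-sorting them, so it stays I/O-bound even on very large logs.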
Since it's a busy website, the log files are quite large and we need to
provide the unified log within a relatively short time (1-2 hours).
Merging once every 24 hours will not meet that limit, because
transferring the files takes too much of the time. One workaround I can
think of is to rotate the log files and merge them every hour, but I'd
still prefer the clean way of using a log server.
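If you go the hourly route, the per-server side could be as small as a cron entry (the paths, the loghost name, and the five-minute offset are all illustrative; `squid -k rotate` is Squid's standard log-rotation signal):

```shell
# Hypothetical crontab on each cache server: close/reopen the logs at
# the top of every hour, then copy the freshly closed access.log.0 to
# the central log host a few minutes later for merging there.
0 * * * * /usr/local/squid/sbin/squid -k rotate
5 * * * * scp /var/log/squid/access.log.0 loghost:/logs/`hostname`.`date +\%Y\%m\%d\%H`
```

Note the escaped `%` signs: cron treats an unescaped `%` as a newline in the command field.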
> > One way to solve the issue I can think of is to mount the log
> > server's disk on the cache servers. But mount is not reliable and I
> > don't like it. It might have write-lock issues since multiple
> > servers are writing to it at the same time.
>
> This won't work very well. NFS is not very keen on multiple writers
> to the same file, and you may well end up with corrupted records
> and/or lost information.
>
> Using syslog for the access log has been considered, but
> unfortunately syslog is a bit limited in both performance and the
> allowed record size, and in addition the network transport is quite
> unreliable.
You might want to consider multilog
(http://cr.yp.to/daemontools/multilog.html), which is better than
syslog.
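For context, multilog reads log lines on stdin and writes them into an automatically rotated directory. A daemontools log run script for it might look like the following sketch (how Squid's access log reaches multilog's stdin, e.g. via a fifo, is an assumption here, since Squid does not pipe its access log out of the box):

```shell
#!/bin/sh
# Hypothetical daemontools log/run script: multilog prepends a tai64n
# timestamp ("t"), caps each file at ~10 MB ("s10000000"), and keeps at
# most 10 rotated files ("n10") in the given directory.
exec multilog t s10000000 n10 /var/log/squid-multilog
```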

>
> Regards
> Henrik
Received on Sat Aug 21 2004 - 18:18:27 MDT

This archive was generated by hypermail pre-2.1.9 : Wed Sep 01 2004 - 12:00:02 MDT