LOG_DB_DAEMON
NAME
SYNOPSIS
OPTIONS
DESCRIPTION
CONFIGURATION
VERSION INFORMATION
DATA EXTRACTION
TODO
AUTHOR
COPYRIGHT AND LICENSE
NAME
log_db_daemon − Database logging daemon for Squid
SYNOPSIS
log_db_daemon DSN [options]
OPTIONS
This program writes Squid access.log entries to a database. Presently it accepts only the squid native log format.
DSN
Database DSN encoded as a path. This is sent as the access_log file path.
Sample configuration:
access_log daemon:/host/database/table/username/password squid
To leave a parameter unspecified use a double slash:
access_log daemon://database/table/username/password squid
Default: "DBI:mysql:database=squid"
--debug
Write debug messages to Squid stderr or cache.log.
DESCRIPTION
This module uses the logfile daemon support available in Squid 2.7 and 3.2 to store access log entries in a MySQL database.
CONFIGURATION
Squid configuration
access_log directive
The path to the access log file is used to provide the database connection parameters.
access_log daemon:/mysql_host:port/database/table/username/password squid
The 'daemon' prefix is mandatory and tells Squid that the logfile_daemon helper is to be used instead of the normal file logging.
The last parameter tells Squid which log format to use when writing lines to the log daemon. Presently only the squid format is supported.
mysql_host:port
Host where the mysql server is running. If left empty, ’localhost’ is assumed.
database
Name of the database to connect to. If left empty, ’squid_log’ is assumed.
table
Name of the database table where log lines are stored. If left empty, ’access_log’ is assumed.
username
Username to use when connecting to the database. If left empty, ’squid’ is assumed.
password
Password to use when connecting to the database. If left empty, no password is used.
To leave all fields to their default values, you can use a single slash:
access_log daemon:/ squid
To specify only the database password, which is empty by default, you must leave all the other parameters unspecified by using empty strings:
access_log daemon://///password squid
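As a further illustration of the path encoding (the host name 'dbhost' here is hypothetical), a non-default MySQL server can be selected while keeping all other parameters at their defaults by filling in only the first field:

    access_log daemon:/dbhost:3306//// squid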
logfile_daemon directive
This is the current way of telling squid where the logfile daemon resides.
logfile_daemon /path/to/squid/libexec/logfile-daemon_mysql.pl
The script must be copied to the location specified in the directive.
Database configuration
Let's call the database 'squid_log' and the log table 'access_log'. The username and password for the db connection will both be 'squid'.
Database
Create the database:
CREATE DATABASE squid_log;
User
Create the user:
GRANT INSERT,SELECT,CREATE ON squid_log.* TO 'squid'@'localhost' IDENTIFIED BY 'squid'; FLUSH PRIVILEGES;
Note that only CREATE, INSERT and SELECT privileges are granted to the 'squid' user. This ensures that the logfile daemon script cannot modify or delete existing log entries.
Table
The daemon will attempt to initialize this table if none exists when it starts.
The table created should look like:
    CREATE TABLE access_log (
        id INTEGER NOT NULL AUTO_INCREMENT PRIMARY KEY,
        time_since_epoch DECIMAL(15,3),
        time_response INTEGER,
        ip_client CHAR(15),
        ip_server CHAR(15),
        http_status_code VARCHAR(10),
        http_reply_size INTEGER,
        http_method VARCHAR(20),
        http_url TEXT,
        http_username VARCHAR(20),
        http_mime_type VARCHAR(50),
        squid_hier_status VARCHAR(20),
        squid_request_status VARCHAR(20)
    );
VERSION INFORMATION
This document refers to "log_db_daemon" script version 0.5.
The script has been developed and tested in the following environment:
Squid 2.7 and Squid 3.2
mysql 5.0.26 and 5.1
perl 5.8.8
OpenSUSE 10.2
DATA EXTRACTION
Sample queries.
Clients accessing the cache
SELECT DISTINCT ip_client FROM access_log;
Number of requests per day
    SELECT DATE(FROM_UNIXTIME(time_since_epoch)) AS date_day,
           COUNT(*) AS num_of_requests
    FROM access_log
    GROUP BY 1
    ORDER BY 1;
Request status count
To obtain the raw count of each request status:
    SELECT squid_request_status, COUNT(*) AS n
    FROM access_log
    GROUP BY squid_request_status
    ORDER BY 2 DESC;
To calculate the percentage of each request status:
    SELECT squid_request_status,
           (COUNT(*)/(SELECT COUNT(*) FROM access_log)*100) AS percentage
    FROM access_log
    GROUP BY squid_request_status
    ORDER BY 2 DESC;
To distinguish only between HITs and MISSes:
    SELECT 'hits',
           (SELECT COUNT(*) FROM access_log WHERE squid_request_status LIKE '%HIT%')
           / (SELECT COUNT(*) FROM access_log)*100 AS percentage
    UNION
    SELECT 'misses',
           (SELECT COUNT(*) FROM access_log WHERE squid_request_status LIKE '%MISS%')
           / (SELECT COUNT(*) FROM access_log)*100 AS percentage;
Response time ranges
    SELECT '0..500', COUNT(*)/(SELECT COUNT(*) FROM access_log)*100 AS percentage
    FROM access_log WHERE time_response >= 0 AND time_response < 500
    UNION
    SELECT '500..1000', COUNT(*)/(SELECT COUNT(*) FROM access_log)*100 AS percentage
    FROM access_log WHERE time_response >= 500 AND time_response < 1000
    UNION
    SELECT '1000..2000', COUNT(*)/(SELECT COUNT(*) FROM access_log)*100 AS percentage
    FROM access_log WHERE time_response >= 1000 AND time_response < 2000
    UNION
    SELECT '>= 2000', COUNT(*)/(SELECT COUNT(*) FROM access_log)*100 AS percentage
    FROM access_log WHERE time_response >= 2000;
Traffic by mime type
    SELECT http_mime_type, SUM(http_reply_size) AS total_bytes
    FROM access_log
    GROUP BY http_mime_type
    ORDER BY 2 DESC;
Traffic by client
    SELECT ip_client, SUM(http_reply_size) AS total_bytes
    FROM access_log
    GROUP BY 1
    ORDER BY 2 DESC;
Speed issues
The MyISAM storage engine is known to be faster than InnoDB, so although it doesn't support transactions and referential integrity, it might be more appropriate in this scenario. You might want to append "ENGINE=MYISAM" to the end of the table creation code in the above SQL script.
Indexes should be created according to the queries that are more frequently run. The DDL script only creates an implicit index for the primary key column.
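As an illustration (the column choices here are an assumption based on the sample queries above, not something the script creates), indexes supporting the per-day and per-status queries could look like:

    CREATE INDEX idx_time_since_epoch ON access_log (time_since_epoch);
    CREATE INDEX idx_squid_request_status ON access_log (squid_request_status);

Each index speeds up SELECTs that filter or group on its column, at the cost of slightly slower INSERTs, so add only those matching your actual reporting queries.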
TODO
Table cleanup
This script currently implements only the "L" (i.e. "append a line to the log") command, therefore the log lines are never purged from the table. This approach has an obvious scalability problem.
One solution would be to implement e.g. the "rotate log" command in a way that would calculate some summary values, put them in a "summary table" and then delete the lines used to calculate those values.
Similar cleanup code could be implemented in an external script and run periodically, independently of Squid's log commands.
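A minimal sketch of such an external cleanup, assuming a hypothetical summary table 'daily_summary' and a 30-day retention window (neither is part of the script):

    CREATE TABLE daily_summary (
        date_day DATE PRIMARY KEY,
        num_of_requests INTEGER,
        total_bytes BIGINT
    );
    REPLACE INTO daily_summary
        SELECT DATE(FROM_UNIXTIME(time_since_epoch)), COUNT(*), SUM(http_reply_size)
        FROM access_log
        GROUP BY 1;
    DELETE FROM access_log
        WHERE time_since_epoch < UNIX_TIMESTAMP(NOW() - INTERVAL 30 DAY);

Note that the 'squid' account as granted above deliberately lacks DELETE privilege, so a cleanup script would need to run under a separate, more privileged account.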
Testing
This script has only been tested in low-volume scenarios (single client, less than 10 req/s). Tests in high-volume environments could reveal performance bottlenecks and bugs.
AUTHOR
Marcello Romani, [email protected]
Amos Jeffries, [email protected]
COPYRIGHT AND LICENSE
Copyright (C) 2008 by Marcello Romani
This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself, either Perl version 5.8.8 or, at your option, any later version of Perl 5 you may have available.