RE: [squid-users] Reading Cache of my Squid

From: Chris Perreault <[email protected]>
Date: Mon, 7 Jun 2004 07:39:19 -0400

This may or may not help. There are programs that will capture a whole
website: they grab the content of a starting webpage and then follow its
links, fetching those linked pages as well. Generally you can set the
software to go X number of links deep off each page. I have no idea how
long it would take to write something like that yourself, but given your
log files you could use such a tool to grab the content of the pages
listed in the logs.
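
As a rough sketch of that idea (untested, assuming Squid's native
access.log format with the request URL in the seventh field, that wget
is installed, and with the log path just a guess), something like this
Perl script would replay the logged URLs:

    #!/usr/bin/perl
    # Sketch: pull the request URLs out of a Squid access.log
    # (native log format assumed) and re-fetch each one with wget.
    use strict;
    use warnings;

    my $log = shift @ARGV || '/var/log/squid/access.log';  # assumed path
    open my $fh, '<', $log or die "cannot open $log: $!";

    my %seen;
    while (my $line = <$fh>) {
        my @f   = split ' ', $line;
        my $url = $f[6] or next;          # field 7 is the request URL
        next unless $url =~ m{^http://};  # skip CONNECT hosts etc.
        next if $seen{$url}++;            # fetch each URL only once
        system 'wget', '--quiet', '--timestamping', $url;
    }
    close $fh;

wget's --timestamping option also keeps it from re-downloading files
that haven't changed if you run the script more than once.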

A recent post gave the setting to retrieve cached hits, which someone
wanted to use for disaster recovery. That will only let you get the pages
that are actually cached, though, not every webpage someone requested
(not all pages get cached).
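
If that's the route you want, one client-side trick (a hedged sketch;
localhost:3128 and the example URL are only placeholders) is to send a
Cache-Control: only-if-cached header through the proxy. Squid then
answers only from its cache and returns 504 when it doesn't have the
object:

    #!/usr/bin/perl
    # Sketch: ask Squid for an object from its cache only.
    use strict;
    use warnings;
    use LWP::UserAgent;

    my $ua = LWP::UserAgent->new;
    $ua->proxy('http', 'http://localhost:3128/');   # assumed proxy address

    my $res = $ua->get('http://www.example.com/',
                       'Cache-Control' => 'only-if-cached');

    # A 504 here means Squid does not have the object cached.
    print $res->status_line, "\n";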

Chris Perreault

-----Original Message-----
From: Harry Prasetyo K [mailto:ary_prast@inf.its-sby.edu]
Sent: Monday, June 07, 2004 5:23 AM
To: squid-users@squid-cache.org
Subject: RE: [squid-users] Reading Cache of my Squid

Well, forgive my poor knowledge; I misused some words when explaining my
problem. How about fetching the URL objects that have been cached in the
Squid cache? How does Squid decide whether a request needs to be forwarded
to the Internet or can be served from the cache? And does anyone know of a
program (source code, in C or Perl) which can do this?

I'll be thankful for any help ...
HP

>
> Squid doesn't know what a webpage is; only browsers have an idea
> about that.
> Keep that in mind: Squid caches URL objects (only).
> Perhaps this viewpoint implies changing your requirement(s) or
> whatever your needs are.
>
> M.
>
>

-- 
** Architecture And Network Laboratory **