Re: [squid-users] wrong diskspace count and assertion failed: diskd/store_dir_diskd.c:1930: "buf"

From: Henrik Nordstrom <[email protected]>
Date: Sat, 11 Feb 2006 15:08:47 +0100 (CET)

On Thu, 19 Jan 2006, H wrote:

> let's see if I understand what you say here: you say that if either q1 or q2
> is set to a value high enough to use all 96 buffers, the assertion fails

Correct.

> and otherwise the messages are held when the q1 or q2 value is reached and
> the assertion does not fail?

Correct.

As long as the q1/q2 values are configured within reasonable limits, Squid
should work fine (or at least not fail with this assert).
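
For illustration only, a diskd cache_dir line that keeps both queue
watermarks safely below the 96 shared message buffers looks roughly like the
following (the path and sizes are placeholders, and the defaults in your
Squid version may differ):

  # Q1/Q2 kept well below the number of shared request buffers
  cache_dir diskd /var/spool/squid 10000 16 256 Q1=64 Q2=72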

There has not been any effort to address this, as an I/O queue of 96 is way
more than sufficient to fully saturate most I/O systems around.
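
To make the mechanism concrete, here is a rough C sketch of the pattern
being described. This is NOT the actual Squid source; the names, sizes and
structure are illustrative only. One shared-memory buffer is handed out per
outstanding diskd request from a fixed pool, and the caller asserts that it
actually got one:

  #include <assert.h>
  #include <stddef.h>

  #define SHM_BUFFERS 96                /* the 96 buffers mentioned above */
  #define SHM_BUF_SIZE 8192

  static char shm_pool[SHM_BUFFERS][SHM_BUF_SIZE];
  static int shm_inuse[SHM_BUFFERS];

  /* Hand out a free buffer from the pool, or NULL if none are left. */
  static void *shm_get(void)
  {
      int i;
      for (i = 0; i < SHM_BUFFERS; i++) {
          if (!shm_inuse[i]) {
              shm_inuse[i] = 1;
              return shm_pool[i];
          }
      }
      return NULL;
  }

  /* Queue one write request towards the diskd helper process. */
  static void queue_disk_write(const void *data, size_t len)
  {
      void *buf = shm_get();
      /* If Q1/Q2 allow more outstanding requests than there are buffers,
       * shm_get() returns NULL and this is where the process aborts with
       * an assertion failure on "buf". */
      assert(buf);
      /* ... copy data into buf and send the request to diskd ... */
      (void)data;
      (void)len;
  }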

> even so, I mean even if the error message really means what you tell us
> here, I can not accept that squid ***crashes*** because of that and then
> empties the cache dirs

You are more than welcome to submit a patch for this.

I am quite busy with other items, and as I am mainly a Linux user, diskd is
not high on my priorities.

Squid is a community effort.

> If this error really means no buffers are left, then the diskd processes
> should also crash whenever the message queues are busy

Well, the diskd processes get restarted when Squid is.

> - but in this case squid gets it right, fails only for this particular
> transaction, and can go on with disk I/O as soon as the message queues
> are free
>
> It makes no sense to me that the disk space count goes wrong because the
> buffers run out, but if so it is really necessary to check and repair this
> particular thing; for me this is a very serious bug

Indeed. The disk usage indication should not go wrong, but Squid can get
quite confused if it gets restarted while rebuilding the swap.state index.

This is a separate issue from the diskd I/O queue limits, but can be
triggered by the diskd assertion.

> since this happens independent of usage - it happens on low traffic
> (<1000Kbit/s), on heavier traffic (2-4MB/s), and also on heavy servers with
> 15-20MB/s - after a certain amount of time rather than under a certain
> load, I am inclined to discard "buffers out" as the cause. For me there is
> a leak somewhere which causes the problem; I cannot observe a temporary
> buffer shortage on my system when monitoring this particular event.

Could be.

> Obviously I should see more requests on a heavy server, so the problem
> should happen more often there, but it does not. So if that is your answer,
> then I should get some relevant numbers with ipcs -o|a, but the system is
> clear. Or is there something else?

The I/O queue limitation is purely internal to Squid, and gets cleared
when Squid is restarted (either manually or automatically on crash).
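
If you want to look at the SysV IPC objects diskd allocates from the
outside, something along these lines lists the message queues and shared
memory segments on most systems. The exact flags and output vary per OS,
and the internal per-request counters discussed above will not show up
there:

  ipcs -q    # message queues between Squid and the diskd helpers
  ipcs -m    # shared memory segments holding the request buffers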

Regards
Henrik
Received on Sat Feb 11 2006 - 07:08:52 MST
