<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<meta content="text/html;charset=ISO-8859-15"
http-equiv="Content-Type">
</head>
<body bgcolor="#ffffff" text="#000000">
But I'm talking about much higher numbers; I have had a log_fifo_size of
20000 and it still dropped several thousand messages.<br>
<br>
I have just taken the shotgun approach to the math and increased the
log_fifo_size on the log server by the maximum number of dropped messages
in the stats log. So now I've got a log_fifo_size of 50000 on my
logserver and 30000 on each agent syslog-ng.<br>
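<br>
For reference, this is roughly what the change looks like in syslog-ng.conf
(just a sketch from memory; the values are my guesses rather than tuned
numbers, so check them against your own stats):<br>
<pre wrap="">
# central log server - global option in syslog-ng.conf
options { log_fifo_size(50000); };

# each agent syslog-ng
options { log_fifo_size(30000); };
</pre>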
<br>
I will check the stats for drops in a few days to see how it's
going...<br>
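<br>
Going by the arithmetic in the quoted mail below, the sizing works out
roughly like this (the number of local programs is just an example figure
on my part, not something I've actually counted):<br>
<pre wrap="">
# worst case hitting a destination fifo in one poll iteration:
#   programs writing to /dev/log:          50   (example figure)
#   messages fetched per source, per poll: 30   (1.6.x; 2.0 defaults to 10)
#   messages queued in one iteration:      50 * 30 = 1500
#
# so log_fifo_size() needs to be at least (sources * per-poll fetch),
# far above the old default of 100, e.g.:
options { log_fifo_size(2000); };   # example value only
</pre>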
<br>
-h<br>
<br>
<pre class="moz-signature" cols="72">Hari Sekhon
</pre>
<br>
<br>
Balazs Scheidler wrote:
<blockquote cite="mid1170406383.6020.24.camel@bzorp.balabit" type="cite">
<pre wrap="">On Thu, 2007-02-01 at 18:24 +0000, Hari Sekhon wrote:
</pre>
<blockquote type="cite">
<pre wrap="">yes this is what I was thinking but the server also dropped some. I will
try to increase the fifo by the amount of dropped stats on the server,
make it 40000 instead of 20000 and then on the agents give them a fifo
of 30000 just to be safe. From what I've measure it doesn't seem to take
much ram so this should be ok. Although this wasn't done at the times of
heavier load when the machine's fifo is filling up.
If anybody knows more on this or has some info on the mem consumption
related to doing this then please let me know
</pre>
</blockquote>
<pre wrap=""><!---->
We have investigated a similar issue in-house, and as it seems the
performance improvements in 1.6.10 and .11 and 2.0.0 increase the load
on the destination buffers significantly, which in turn can cause
message loss in cases where it did not occur before.
For example, previously file destinations could only drop messages, if
there were at least 100 log connections, producing a log message in a
single poll iteration. (assuming log_fifo_size(100), the default).
Later versions have a much higher probability of internal message drops.
This can be solved with increasing log_fifo_size(), the out-of-the-box
defaults seem to be unadequate right now.
The problem is:
* message sources read a maximum of 30 messages in a single poll iteration
* each socket connection on /dev/log is a message source in this sense
  (e.g. if there are 10 programs connecting to /dev/log, then a single
  input iteration can produce as many as 10*30 = 300 messages)
* these messages are put into the destination's fifo, which has a default
  size of 100 messages.

Previously each source generated at most 1 message per poll cycle; now
it can generate up to 30. It is easy to see that the default
log_fifo_size() value is not adequate.
I'm thinking about increasing the default log_fifo_size to 1000 in 1.6.x.

syslog-ng 2.0 is also affected, albeit a bit differently:
* the number of messages fetched in a single go can be controlled by a
  configuration option (the per-source fetch_limit() option)
* its default value is 10 instead of 30

I also think the fifo size should be forced to a minimum value of 1000.
</pre>
</blockquote>
</body>
</html>