[syslog-ng] Profiling syslog-ng

Balazs Scheidler bazsi at balabit.hu
Fri Feb 17 18:12:12 CET 2006


On Thu, 2006-02-16 at 08:11 -0500, John Morrissey wrote:
> On Thu, Feb 16, 2006 at 09:56:30AM +0100, Balazs Scheidler wrote:

> I've been tinkering with different delay times. 40 msec greatly increases
> the number of descriptors per poll(), to 30-50, but decreases CPU
> utilization under very heavy log load (950 descriptors) to ~12%. Do you have
> a guesstimate at what point I should stop, where the latency of the delay
> plus processing might be too much?

Each channel in syslog-ng (at least in 1.6.x) is read-polled once the
connection is opened. The increase in latency does not really affect the
sender itself, as no application-layer acknowledgement is made (though
I'm planning on adding one); the only result of waiting more is
increased memory load, as received messages are stored in socket
buffers.

If you are using datagram sockets this might mean some more dropped
messages; for stream sockets it might block sending applications once
the window is filled.

The other effect is on output. Outputs are also polled, and files are
written when poll() indicates writability; this also gets delayed by
the sleep, i.e. messages are kept in memory a bit longer as well. I'm
not sure this is really a problem, as messages accumulate in the input
buffers anyway...

I think the output side is affected only because the input becomes
burstier: a lot more messages are processed in a single poll loop, so
you have to make sure your output fifo sizes are set accordingly.
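For reference, the output fifo size can be raised in the syslog-ng 1.6
configuration; log_fifo_size() is the relevant option, but the value
below is only illustrative, not a recommendation:

```
options {
    # Larger output fifo to absorb the burstier input caused by the
    # poll delay; 1000 entries is just an example value.
    log_fifo_size(1000);
};
```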

So I'd say for stream connections the negative impact is negligible,
apart from the additional latency (timestamps + file writes).

I would not recommend more than 0.1 sec (100 ms), as that might skew
local timestamps significantly, and it would mean 100 ms worth of logs
sitting in your input buffers.

-- 
Bazsi


