[syslog-ng] time_sleep in high udp environment?
Balazs Scheidler
bazsi at balabit.hu
Tue Dec 22 20:50:55 CET 2009
On Tue, 2009-12-22 at 09:50 -0600, Tim Rupp wrote:
> Hi, we were debugging message drops on our syslog-ng server (rpm
> syslog-ng-3.0.5-1.rhel5.x86_64) and found out that if we commented out
> the time_sleep global option we had set (value of 20) that all the
> drops went away.
>
> Initially we had set this value while reading through the pdf
> documentation (page 136) figuring that since we had potentially a lot
> of traffic that this would boost performance.
>
> So I guess I'd like to know why we seemed to see the opposite behavior
> with this option enabled. We don't really do tcp connections, so maybe
> this option was useless to begin with. Anyways, it made me curious, so
> if anyone can enlighten me I'd appreciate it.
time_sleep() is only useful if you have large numbers of file
descriptors to be polled. In the case of TCP, this may happen if you
have a lot of incoming connections. In the case of UDP, this is not
the case: only a single fd is polled for incoming packets.
The reason is simple: the main poll iteration of syslog-ng may become
quite expensive for a large number of fds, and in this case it is
beneficial to wake up less often (i.e. run the main poll iteration fewer
times) and let incoming messages accumulate in the socket receive
buffer.
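
For reference, this knob lives in the global options block of
syslog-ng.conf; a minimal sketch, using the 20 ms value quoted above:

```
options {
    # pause 20 milliseconds between main-loop iterations,
    # letting packets accumulate in the socket receive buffers
    time_sleep(20);
};
```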
However this delay naturally increases the latency between polls,
which may mean that your UDP receive buffer fills up, at which point
drops start to happen.
If you still have a large number of file descriptors, you may want to
increase the UDP socket buffer instead; that might also solve the
drops problem.
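
A minimal sketch of what that could look like; the 1 MiB value is
illustrative, and the kernel honours the so_rcvbuf() request only up
to net.core.rmem_max, which may need raising via sysctl:

```
source s_udp {
    # request a 1 MiB kernel receive buffer for this socket
    # (capped by net.core.rmem_max)
    udp(so_rcvbuf(1048576));
};
```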
But if your syslog-ng server is not overloaded, you may not need
time_sleep() at all.
The time_sleep() option should be used with caution and only when
there is a problem to solve; that's why it is not set by default.
--
Bazsi