[syslog-ng] UDP drops with syslog-ng 3.24.1-1
brian hoffman
brianhoffman at yahoo.com
Mon May 11 16:08:06 UTC 2020
Ok, that helps. Knowing that the hashing is less than perfect could explain why one buffer is filling up while others seem to have capacity. I'll also look into setting log-fetch-limit(). I did have a couple of follow-up questions.
- How does the kernel receive buffer limit (rmem_max) relate to the so-rcvbuf() setting in syslog-ng, especially when using so-reuseport()? Should rmem_max be the sum of all the so-rcvbuf() settings in syslog-ng? Is rmem_max a per-CPU buffer? In general, how should these variables be scaled in unison? In our case, our so-rcvbuf() is 441,326,592 across 4 so-reuseport sources, while our rmem_max is 268,435,456.
- We record all of our syslog-ng stats at 5-minute intervals with syslog-ng-ctl. If this never shows any drops, would that imply that this is more of an issue with the kernel buffers than with syslog-ng itself?
- Is there a syslog-ng command that will show its queues, to see how those are being utilized?
On Monday, May 11, 2020, 06:50:55 AM EDT, László Várady (lvarady) <laszlo.varady at oneidentity.com> wrote:
Hello,
so-reuseport(1) is a good way of scaling UDP sources.
In that case, datagrams are distributed to the appropriate UDP source using a hash based on (peer IP address, port) and (local IP address, port). This is the default balancing method of the SO_REUSEPORT kernel feature. Due to the hash implementation, there is also a high chance of hash collisions, which results in an uneven distribution of datagrams among sources. If that's a problem, custom BPF programs can be written that use a different balancing method. AFAIK, this is not an option on RHEL 7 because of the old kernel version. (We have such a feature called udp-balancer(), but it is not available in the open-source edition: https://support.oneidentity.com/technical-documents/syslog-ng-premium-edition/7.0.19/administration-guide/33#TOPIC-1387384)
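For reference, the scaled-out pattern looks roughly like this in syslog-ng configuration (a minimal two-listener sketch with made-up persist names; each listener gets its own persist-name(), as in your config):

    source s_udp_scaled {
        # The kernel picks one of these sockets per datagram by hashing
        # (peer IP address, port) and (local IP address, port), so the
        # split between them is not guaranteed to be even.
        network(ip("0.0.0.0") port(514) transport("udp") so-reuseport(1) persist-name("udp-a"));
        network(ip("0.0.0.0") port(514) transport("udp") so-reuseport(1) persist-name("udp-b"));
    };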
Other than that, the following syslog-ng options can affect performance:
- so-rcvbuf()/so-sndbuf(): Sets the maximum socket receive/send buffer in the kernel. The Linux kernel doubles this value (to allow space for bookkeeping). If syslog-ng stops processing new messages (for example, because flags(flow-control) is set and destinations are slow, or simply because it can't read datagrams fast enough), then the kernel buffers fill up and new datagrams are dropped by the kernel.
- log-iw-size(): The size of the initial source window. This value is used during flow control to stop reading new messages when more than log-iw-size() messages are kept in memory.
- log-fetch-limit(): The maximum number of messages fetched from a source during a single poll loop. This is a performance-only option; the default is 10. You might want to increase this (to 100, for example) and test your performance. Please set this option together with log-iw-size() so that log-iw-size() >= log-fetch-limit() (see the sketch after this list).
- flags(flow-control): This is a log path option that enables flow-controlled message processing. You should probably NOT use this for UDP logs; details are below.
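To make these concrete, here is a rough sketch of how the options could sit on a single UDP listener (the values are placeholders for illustration, not tuned recommendations):

    source s_udp_example {
        network(
            ip("0.0.0.0") port(514) transport("udp")
            so-reuseport(1)
            persist-name("udp-example")
            so-rcvbuf(268435456)    # requested kernel socket buffer; typically capped by net.core.rmem_max
            log-fetch-limit(100)    # messages read per poll iteration (default 10)
            log-iw-size(1000)       # flow-control window; keep >= log-fetch-limit()
        );
    };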
UDP sources are non-blocking/event-based in syslog-ng, but they act like single-threaded entities, so you can consider a so-reuseport(1) source instance as an object that is bound to a single CPU core. This is why an even datagram distribution is important to achieve the best "speed".
Enabling flags(flow-control) on log paths containing your UDP sources probably won't help you avoid message loss, because it only makes sure that syslog-ng itself will not drop messages; once log-iw-size() messages are queued in syslog-ng, the UDP sources will stop reading new messages. Due to this, the kernel buffers will fill up, so datagrams will eventually be dropped by the kernel instead of syslog-ng.
When flow-control is NOT configured in any of the log paths that contain your UDP sources, syslog-ng reads datagrams as fast as possible (note the "single CPU core per source" limitation). In this case, syslog-ng will drop new messages when its destination queues are filled, for example when destinations are slow or connections are lost. The destination queue size can be set with the log-fifo-size() option. You can monitor dropped messages using the 'syslog-ng-ctl stats' command.
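To illustrate (the destination name, address, and queue size below are made-up placeholders), a log path without flow control would look something like this:

    destination d_central {
        # per-destination output queue, measured in messages
        network("192.0.2.10" port(601) transport("tcp") log-fifo-size(100000));
    };

    log {
        source(s_network);
        destination(d_central);
        # No flags(flow-control): the UDP sources are drained as fast as possible,
        # and messages that overflow d_central's fifo are dropped by syslog-ng.
        # Those drops show up in the "dropped" counters of 'syslog-ng-ctl stats'.
    };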
--
László Várady
From: syslog-ng <syslog-ng-bounces at lists.balabit.hu> on behalf of brian hoffman <brianhoffman at yahoo.com>
Sent: Thursday, May 7, 2020 17:21
To: syslog-ng at lists.balabit.hu <syslog-ng at lists.balabit.hu>
Subject: [syslog-ng] UDP drops with syslog-ng 3.24.1-1
We've been centrally logging with syslog-ng for about 5 years now. Over that time, the number of sources has grown significantly, and at some point we crossed a line where drops were happening (a quick survey of 3 million syslog packets yielded 420 unique currently sending hosts). After much research and experimentation, we've been able to get to the point where throughout the day there are 0 drops for the most part. This was achieved by installing the latest syslog-ng (not the RedHat packaged one) and creating a source for each CPU. Occasionally, though, we still have periods of drops so I'm trying to eliminate these last few.
Here are some relevant configuration items:

2 RedHat 7.8 VMs (load balanced via an F5) with 16GB memory and 4 CPUs each, running syslog-ng-3.24.1-1.el7.x86_64

net.core.rmem_default = 212992
net.core.rmem_max = 268435456
log_fifo_size(268435456);
source s_network {
    network(ip("0.0.0.0") port(514) transport("udp") so_rcvbuf(441326592) so-reuseport(1) persist-name("udp1"));
    network(ip("0.0.0.0") port(514) transport("udp") so_rcvbuf(441326592) so-reuseport(1) persist-name("udp2"));
    network(ip("0.0.0.0") port(514) transport("udp") so_rcvbuf(441326592) so-reuseport(1) persist-name("udp3"));
    network(ip("0.0.0.0") port(514) transport("udp") so_rcvbuf(441326592) so-reuseport(1) persist-name("udp4"));
    network(ip("0.0.0.0") port(514) transport("tcp") max_connections(200) keep_alive(yes) so_rcvbuf(67108864));
};
We are limited to UDP, unfortunately, because we do not have control over the devices/networks/etc. that are sending to us, but we have changed as many of the internal senders and destinations to TCP as we can.
The issue can be illustrated with a script I created that shows the packet counts, including drops, as well as the individual RECVQs.
Here's what things look like normally:

Thu May 7 10:48:15 EDT 2020, 27003 IP pkts rcvd, 26980 IP pkts sent, 24951 UDP pkts rcvd, 28075 UDP pkts sent, 0 UDP pkt rcv err
RECVQ-1=2176
RECVQ-2=0
RECVQ-3=0
RECVQ-4=0
Thu May 7 10:48:16 EDT 2020, 28453 IP pkts rcvd, 28426 IP pkts sent, 26185 UDP pkts rcvd, 29180 UDP pkts sent, 0 UDP pkt rcv err
RECVQ-1=0
RECVQ-2=0
RECVQ-3=4352
RECVQ-4=0
Thu May 7 10:48:17 EDT 2020, 28294 IP pkts rcvd, 28276 IP pkts sent, 26277 UDP pkts rcvd, 28709 UDP pkts sent, 0 UDP pkt rcv err
RECVQ-1=2176
RECVQ-2=0
RECVQ-3=0
RECVQ-4=0
The RECVQs are sparsely used, and there are no errors.
Around 9pm every night, the packet counts go up significantly (probably due to backup-related logs):

Wed May 6 21:00:08 EDT 2020, 66382 IP pkts rcvd, 66366 IP pkts sent, 39405 UDP pkts rcvd, 67592 UDP pkts sent, 0 UDP pkt rcv err
RECVQ-1=1595008
RECVQ-2=106217088
RECVQ-3=53694976
RECVQ-4=31858816
Wed May 6 21:00:09 EDT 2020, 69317 IP pkts rcvd, 69338 IP pkts sent, 44446 UDP pkts rcvd, 75958 UDP pkts sent, 0 UDP pkt rcv err
RECVQ-1=13056
RECVQ-2=126397312
RECVQ-3=75568128
RECVQ-4=41626880
Wed May 6 21:00:10 EDT 2020, 71205 IP pkts rcvd, 71227 IP pkts sent, 43657 UDP pkts rcvd, 74603 UDP pkts sent, 0 UDP pkt rcv err
RECVQ-1=920448
RECVQ-2=146122752
RECVQ-3=100951168
RECVQ-4=52622208
Wed May 6 21:00:12 EDT 2020, 69578 IP pkts rcvd, 69454 IP pkts sent, 124465 UDP pkts rcvd, 163367 UDP pkts sent, 0 UDP pkt rcv err
RECVQ-1=13140864
RECVQ-2=44494848
RECVQ-3=125579136
RECVQ-4=0
Still, though, it's handling it with no errors. But then at some point a threshold is reached and errors start piling up:

Wed May 6 21:00:20 EDT 2020, 63177 IP pkts rcvd, 63291 IP pkts sent, 0 UDP pkts rcvd, 0 UDP pkts sent, 38011 UDP pkt rcv err
RECVQ-1=536871424
RECVQ-2=200357376
RECVQ-3=292948352
RECVQ-4=28890752
Wed May 6 21:00:21 EDT 2020, 69501 IP pkts rcvd, 69464 IP pkts sent, 0 UDP pkts rcvd, 1 UDP pkts sent, 42158 UDP pkt rcv err
RECVQ-1=536871424
RECVQ-2=223551360
RECVQ-3=314995584
RECVQ-4=41735680
Wed May 6 21:00:23 EDT 2020, 69962 IP pkts rcvd, 69978 IP pkts sent, 0 UDP pkts rcvd, 2 UDP pkts sent, 43775 UDP pkt rcv err
RECVQ-1=536871424
RECVQ-2=244732544
RECVQ-3=338239616
RECVQ-4=53858176
Wed May 6 21:00:24 EDT 2020, 68266 IP pkts rcvd, 68216 IP pkts sent, 0 UDP pkts rcvd, 0 UDP pkts sent, 43118 UDP pkt rcv err
RECVQ-1=536871424
RECVQ-2=265258752
RECVQ-3=360643712
RECVQ-4=65362688
The common denominator I've found is that one of the RECVQs hits 536,871,424. This number is almost exactly double the rmem_max/log_fifo_size value of 268,435,456 (2 × 268,435,456 = 536,870,912, just 512 short of 536,871,424). Even though there seems to be capacity in the other RECVQs, just one of them hitting that magic number seems to be enough to throw things out of whack. During this time, the CPU usage for syslog-ng also drops:
top - 21:00:11 up 2 days, 8:40, 2 users, load average: 1.09, 1.41, 1.32
  PID USER PR NI    VIRT  RES  SHR S  %CPU %MEM   TIME+ COMMAND
 7214 root 20  0 2111388 1.7g 5392 S 123.6 10.9 3080:46 syslog-ng
top - 21:00:14 up 2 days, 8:40, 2 users, load average: 1.00, 1.39, 1.31
  PID USER PR NI    VIRT  RES  SHR S  %CPU %MEM   TIME+ COMMAND
 7214 root 20  0 2111388 1.7g 5392 S  24.6 10.9 3080:46 syslog-ng
top - 21:00:17 up 2 days, 8:40, 2 users, load average: 1.00, 1.39, 1.31
  PID USER PR NI    VIRT  RES  SHR S  %CPU %MEM   TIME+ COMMAND
 7214 root 20  0 2111388 1.7g 5392 S   7.6 10.9 3080:47 syslog-ng
As best I can tell, based on the reading I've done, the message counts we're getting should be doable, but it's still not clear exactly how to size some of these options as they relate to UDP (like log_fetch_limit, which we do not have set).
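For reference, here is a sketch of where log_fetch_limit and log_iw_size would slot into one of the existing source lines (the values are placeholders, not tested recommendations):

    source s_network {
        network(ip("0.0.0.0") port(514) transport("udp")
                so_rcvbuf(441326592) so-reuseport(1) persist-name("udp1")
                log_fetch_limit(100)   # placeholder; the default is 10
                log_iw_size(1000));    # placeholder; keep >= log_fetch_limit
        # ... udp2-udp4 and the tcp listener unchanged ...
    };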
Anything else I should try?