[syslog-ng] UDP drops with syslog-ng 3.24.1-1

brian hoffman brianhoffman at yahoo.com
Thu May 7 15:21:01 UTC 2020


We've been centrally logging with syslog-ng for about 5 years now.  Over that time, the number of sources has grown significantly, and at some point we crossed a line where drops started happening (a quick survey of 3 million syslog packets showed 420 unique hosts currently sending).  After much research and experimentation, we've gotten to the point where there are zero drops for most of the day.  This was achieved by installing the latest syslog-ng (not the Red Hat packaged one) and creating a UDP source for each CPU.  Occasionally, though, we still have periods of drops, so I'm trying to eliminate these last few.
Here are some relevant configuration items:

2 RedHat 7.8 VMs (load balanced via an F5) with 16GB memory and 4 CPUs, each running syslog-ng-3.24.1-1.el7.x86_64

net.core.rmem_default = 212992
net.core.rmem_max = 268435456
log_fifo_size(268435456);
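
(For completeness: the kernel limits above are just set with sysctl; the file name below is illustrative, not necessarily where we actually keep them.)

# /etc/sysctl.d/90-rcvbuf.conf -- raise the UDP receive buffer ceiling
net.core.rmem_default = 212992
net.core.rmem_max = 268435456

# apply without a reboot
sysctl --system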

source s_network {
    network(ip("0.0.0.0") port(514) transport("udp") so_rcvbuf(441326592) so-reuseport(1) persist-name("udp1"));
    network(ip("0.0.0.0") port(514) transport("udp") so_rcvbuf(441326592) so-reuseport(1) persist-name("udp2"));
    network(ip("0.0.0.0") port(514) transport("udp") so_rcvbuf(441326592) so-reuseport(1) persist-name("udp3"));
    network(ip("0.0.0.0") port(514) transport("udp") so_rcvbuf(441326592) so-reuseport(1) persist-name("udp4"));
    network(ip("0.0.0.0") port(514) transport("tcp") max_connections(200) keep_alive(yes) so_rcvbuf(67108864));
};
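
A quick way to confirm the four so-reuseport sockets really exist (command is just an example, port as above):

ss -ulpn | grep ':514'
# expect four separate UDP sockets bound to 0.0.0.0:514, all owned by the syslog-ng process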
We are limited to UDP, unfortunately, because we do not have control over the devices/networks/etc. that are sending to us, but we have changed as many of the internal senders and destinations to TCP as we can.
I created a script that shows the packet counts (including drops) along with the individual RECVQs, and its output illustrates the issue.
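
The script is basically a once-a-second loop over the kernel's UDP counters plus the Recv-Q of each socket on port 514; a rough sketch of the idea (not the exact script, and the real one turns the cumulative counters into per-interval deltas):

#!/bin/bash
while true; do
    printf '%s\n' "$(date)"
    # cumulative UDP totals and errors (the real script diffs these per second)
    netstat -su | grep -E 'packets received|packets sent|packet receive errors'
    # bytes currently queued in each UDP socket bound to port 514
    ss -uln sport = :514 | awk 'NR>1 {print "RECVQ-" NR-1 "=" $2}'
    sleep 1
done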
Here's what things look like normally:

Thu May  7 10:48:15 EDT 2020, 27003 IP pkts rcvd, 26980 IP pkts sent, 24951 UDP pkts rcvd, 28075 UDP pkts sent, 0 UDP pkt rcv err
RECVQ-1=2176
RECVQ-2=0
RECVQ-3=0
RECVQ-4=0
Thu May  7 10:48:16 EDT 2020, 28453 IP pkts rcvd, 28426 IP pkts sent, 26185 UDP pkts rcvd, 29180 UDP pkts sent, 0 UDP pkt rcv err
RECVQ-1=0
RECVQ-2=0
RECVQ-3=4352
RECVQ-4=0
Thu May  7 10:48:17 EDT 2020, 28294 IP pkts rcvd, 28276 IP pkts sent, 26277 UDP pkts rcvd, 28709 UDP pkts sent, 0 UDP pkt rcv err
RECVQ-1=2176
RECVQ-2=0
RECVQ-3=0
RECVQ-4=0
The RECVQs are sparsely used, and there are no errors.
Around 9pm every night, the packet counts go up significantly (probably due to backup-related logs):

Wed May  6 21:00:08 EDT 2020, 66382 IP pkts rcvd, 66366 IP pkts sent, 39405 UDP pkts rcvd, 67592 UDP pkts sent, 0 UDP pkt rcv err
RECVQ-1=1595008
RECVQ-2=106217088
RECVQ-3=53694976
RECVQ-4=31858816
Wed May  6 21:00:09 EDT 2020, 69317 IP pkts rcvd, 69338 IP pkts sent, 44446 UDP pkts rcvd, 75958 UDP pkts sent, 0 UDP pkt rcv err
RECVQ-1=13056
RECVQ-2=126397312
RECVQ-3=75568128
RECVQ-4=41626880
Wed May  6 21:00:10 EDT 2020, 71205 IP pkts rcvd, 71227 IP pkts sent, 43657 UDP pkts rcvd, 74603 UDP pkts sent, 0 UDP pkt rcv err
RECVQ-1=920448
RECVQ-2=146122752
RECVQ-3=100951168
RECVQ-4=52622208
Wed May  6 21:00:12 EDT 2020, 69578 IP pkts rcvd, 69454 IP pkts sent, 124465 UDP pkts rcvd, 163367 UDP pkts sent, 0 UDP pkt rcv err
RECVQ-1=13140864
RECVQ-2=44494848
RECVQ-3=125579136
RECVQ-4=0
Still, though, it's handling the load with no errors.  But then at some point a threshold is reached and errors start piling up:

Wed May  6 21:00:20 EDT 2020, 63177 IP pkts rcvd, 63291 IP pkts sent, 0 UDP pkts rcvd, 0 UDP pkts sent, 38011 UDP pkt rcv err
RECVQ-1=536871424
RECVQ-2=200357376
RECVQ-3=292948352
RECVQ-4=28890752
Wed May  6 21:00:21 EDT 2020, 69501 IP pkts rcvd, 69464 IP pkts sent, 0 UDP pkts rcvd, 1 UDP pkts sent, 42158 UDP pkt rcv err
RECVQ-1=536871424
RECVQ-2=223551360
RECVQ-3=314995584
RECVQ-4=41735680
Wed May  6 21:00:23 EDT 2020, 69962 IP pkts rcvd, 69978 IP pkts sent, 0 UDP pkts rcvd, 2 UDP pkts sent, 43775 UDP pkt rcv err
RECVQ-1=536871424
RECVQ-2=244732544
RECVQ-3=338239616
RECVQ-4=53858176
Wed May  6 21:00:24 EDT 2020, 68266 IP pkts rcvd, 68216 IP pkts sent, 0 UDP pkts rcvd, 0 UDP pkts sent, 43118 UDP pkt rcv err
RECVQ-1=536871424
RECVQ-2=265258752
RECVQ-3=360643712
RECVQ-4=65362688
The common denominator I've found is that one of the RECVQs hits 536,871,424.  That is almost exactly double our net.core.rmem_max / log_fifo_size value of 268,435,456 (it overshoots 2x by 512 bytes).  Even though there still appears to be capacity in the other RECVQs, just one of them hitting that magic number seems to be enough to throw things out of whack.  During this time, syslog-ng's CPU usage also drops:
top - 21:00:11 up 2 days,  8:40,  2 users,  load average: 1.09, 1.41, 1.32
  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 7214 root      20   0 2111388   1.7g   5392 S 123.6 10.9   3080:46 syslog-ng
top - 21:00:14 up 2 days,  8:40,  2 users,  load average: 1.00, 1.39, 1.31
  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 7214 root      20   0 2111388   1.7g   5392 S  24.6 10.9   3080:46 syslog-ng
top - 21:00:17 up 2 days,  8:40,  2 users,  load average: 1.00, 1.39, 1.31
  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 7214 root      20   0 2111388   1.7g   5392 S   7.6 10.9   3080:47 syslog-ng
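
One possibly relevant detail: socket(7) says the kernel doubles the SO_RCVBUF value an application sets (to allow for bookkeeping overhead), which would explain the ceiling landing at roughly 2x rmem_max.  The actual per-socket buffer limit and how full it is can be checked with ss (command illustrative):

# rb = the socket's receive buffer limit, r = bytes currently queued
ss -ulmn sport = :514
# with rmem_max = 268435456 I'd expect rb to show up as about 536870912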
As best I can tell from the reading I've done, the message rates we're seeing should be doable, but it's still not clear exactly how to size some of the options that affect UDP sources (like log_fetch_limit, which we currently don't set at all).
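
For concreteness, the kind of tuning I mean would look something like this (the numbers are placeholders, not values we've tested):

source s_network {
    network(ip("0.0.0.0") port(514) transport("udp")
            so_rcvbuf(441326592) so-reuseport(1) persist-name("udp1")
            log_fetch_limit(1000));
    # ...repeated for udp2-udp4 and the tcp source...
};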
Anything else I should try?