[syslog-ng]performance test questions

Roberto Nibali syslog-ng@lists.balabit.hu
Thu, 06 Mar 2003 16:33:46 +0100


New round of testing. This time I've tested:

syslog-ng-1.6.0rc1+20030306
libol-0.3.9
macros.c generated by gperf from Achim

and Dmitry's suggestion of increasing the SO_RCVBUF receive buffer. I've simply 
set the rmem entries under /proc/sys/net/core to 512M and this alone already 
helped a lot.
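For the record, the knobs in question are /proc/sys/net/core/rmem_default and 
/proc/sys/net/core/rmem_max. Per socket the same thing can be asked for with 
setsockopt(SO_RCVBUF); just as a minimal sketch of the idea (not syslog-ng's 
actual code, and the 8 MB figure is an arbitrary example):

    /* Sketch: ask the kernel for a larger UDP receive buffer on one socket.
     * The kernel clamps the request at net.core.rmem_max, so the proc-fs
     * tuning above is what makes a big per-socket buffer possible at all. */
    #include <stdio.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        int rcvbuf = 8 * 1024 * 1024;        /* example: ask for 8 MB */
        socklen_t len = sizeof(rcvbuf);

        if (fd < 0) {
            perror("socket");
            return 1;
        }
        if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof(rcvbuf)) < 0)
            perror("setsockopt(SO_RCVBUF)");

        /* read back what the kernel actually granted (it clamps at rmem_max
         * and doubles the stored value for its own bookkeeping) */
        getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, &len);
        printf("effective SO_RCVBUF: %d bytes\n", rcvbuf);
        return 0;
    }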

It looks a _lot_ nicer now:

> Using 1 client (5000 messages total):
> --------------
> syslogd as a client (UDP transfer)
>    no   MACRO         : ~5000 messages (100%) #loss encountered but minimal
>    file MACRO         : ~4250 messages (83%)
>    file+template MACRO: ~3200 messages (64%)
> 
> syslog-ng as a client (UDP transfer)
>    no   MACRO         : ~3500 messages (100% / 70%)
>    file MACRO         : ~2800 messages (80%  / 66%)
>    file+template MACRO: ~2250 messages (64%  / 70%)
> 
> syslog-ng as a client (TCP transfer)
>    no   MACRO         : ~5000 messages (100%)
>    file MACRO         : ~4300 messages (83%)  #between 3400 and 5000 :(
>    file+template MACRO: ~3600 messages (72%)

Using 1 client (5000 messages total):
--------------
syslogd as a client (UDP transfer)
    no   MACRO         :  5000 messages (100%) #no loss at all
    file MACRO         : ~4979 messages (99%)  #loss: packet receive errors
    file+template MACRO: ~4408 messages (88%)  #loss: packet receive errors

syslog-ng as a client (UDP transfer)
    no   MACRO         : ~3905 messages (78%) \
    file MACRO         : ~3400 messages (68%)  > #loss: packet receive errors!!
    file+template MACRO: ~3362 messages (67%) /

syslog-ng as a client (TCP transfer)
    no   MACRO         :  5000 messages (100%)
    file MACRO         :  5000 messages (100%)
    file+template MACRO:  5000 messages (100%)

What strikes me as particularly odd is that syslog-ng as a client in UDP mode 
still performs noticeably worse than syslogd under the same burst load.

In case you're interested: the 5000 messages are 142 bytes each and are sent in 
0.9 s without any sleeping between sends. That works out to roughly 0.79 MByte/s 
of payload (5000 * 142 bytes / 0.9 s), or close to 1 MByte/s on the wire once 
UDP/IP/Ethernet framing is added, which is near link saturation (no IRQ or RX 
queue starvation noticed from the NIC driver code) on my 10 Mbit/s test network.
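For anyone who wants to reproduce the burst, it boils down to something like the 
following sketch (not the exact generator I use; the collector address, port 514 
and the dummy payload are placeholders):

    /* Sketch of a UDP burst generator: 5000 fixed-size datagrams, no sleep
     * in between.  Destination address and payload are placeholders. */
    #include <stdio.h>
    #include <string.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    #define MSG_COUNT 5000
    #define MSG_SIZE  142

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in dst;
        char msg[MSG_SIZE];
        int i;

        if (fd < 0) {
            perror("socket");
            return 1;
        }
        memset(&dst, 0, sizeof(dst));
        dst.sin_family = AF_INET;
        dst.sin_port = htons(514);                      /* syslog/UDP */
        inet_pton(AF_INET, "192.168.1.1", &dst.sin_addr); /* placeholder collector */

        memset(msg, 'x', sizeof(msg));                  /* dummy 142-byte payload */

        for (i = 0; i < MSG_COUNT; i++)
            if (sendto(fd, msg, sizeof(msg), 0,
                       (struct sockaddr *)&dst, sizeof(dst)) < 0)
                perror("sendto");
        return 0;
    }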

If I find time I'll run the tests with 3 clients too. But meanwhile I'm all for 
the inclusion of the gperf'd code (Achim's macros.c), since a rewrite doesn't 
look like a feasible option at the current stage of development (shortly before 
the stable release).

Best regards,
Roberto Nibali, ratz
-- 
echo '[q]sa[ln0=aln256%Pln256/snlbx]sb3135071790101768542287578439snlbxq' | dc