[syslog]$ du
15289740        ./11-29

I'm chucking 15 gigs of syslog per day, to the tune of:

[11-29]$ cat * | nice wc -l
40784743

On top of that, the entirety of it is thrown to the mercy of a Perl-based log analyzer (single threaded, no less), which in turn filters and logs to a db, at an average rate of 472 lines per second (40,784,743 lines / 86,400 seconds).

Cheers. =)

- billn

On Tue, 30 Nov 2004, Jay Guerette wrote:
( I hope I don't offend anyone with an attachment, it's only 8k. )
My largest single-server syslog-ng implementation currently handles over 13M lines per day, totalling about 1.8GB per day. I've only recently been able to gather this data by creating a process to count incoming lines, sum their lengths, and report via syslog at 1 minute intervals. See attached graph ( if the attachment survived ).
I added this configuration before all other entries to make sure it sees everything:
<syslog-ng.conf>
destination syslog-perf { program(syslog-perf); };
log { source(syslog); destination(syslog-perf); };
</syslog-ng.conf>
I originally tried this in Perl, then Bash, but neither could keep up with the incoming messages. This works like a champ. It compiles on Linux. The output format is specific to my syslog-to-rrd implementation, but you get the idea. Note that it is only suitable for an installation that is assured of at least 1 message per reporting interval, since a report is emitted only when a new line arrives (see the timer-based sketch after the code).
<syslog-perf.c>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <syslog.h>

#define BUFFER_SIZE     8192
#define REPORT_INTERVAL 60

int main(void)
{
    char buf[BUFFER_SIZE];
    long count = 0, bytes = 0;
    time_t lastupdate;

    lastupdate = time(NULL);
    while (fgets(buf, BUFFER_SIZE, stdin)) {
        count++;
        bytes += (long)strlen(buf) - 1;     /* don't count the trailing newline */
        if (time(NULL) > (lastupdate + REPORT_INTERVAL)) {
            openlog("127.0.0.1", LOG_NDELAY, LOG_LOCAL3);
            syslog(LOG_INFO, "Syslog-ng\\Lines=%ld Syslog-ng\\Bytes=%ld",
                   count, bytes);
            closelog();
            lastupdate += REPORT_INTERVAL;
            count = 0;
            bytes = 0;
        }
    }
    return 0;
}
</syslog-perf.c>
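For anyone who can't guarantee traffic in every interval, here is a minimal sketch (not from the original post) of one way to lift that restriction: drive the report from SIGALRM instead of from incoming lines, so a report goes out even for a quiet minute. The file name syslog-perf-alarm.c is purely illustrative; the output format is kept the same as above.

<syslog-perf-alarm.c>
#include <stdio.h>
#include <string.h>
#include <signal.h>
#include <unistd.h>
#include <syslog.h>

#define BUFFER_SIZE     8192
#define REPORT_INTERVAL 60

/* Set by the SIGALRM handler; checked from the main loop. */
static volatile sig_atomic_t tick = 0;

static void on_alarm(int signum)
{
    (void)signum;
    tick = 1;
}

int main(void)
{
    char buf[BUFFER_SIZE];
    long count = 0, bytes = 0;
    struct sigaction sa;

    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_alarm;
    /* No SA_RESTART: the alarm interrupts a blocking fgets() so we can
     * report even when no messages arrive during the interval. */
    sigaction(SIGALRM, &sa, NULL);
    alarm(REPORT_INTERVAL);

    for (;;) {
        if (fgets(buf, BUFFER_SIZE, stdin) != NULL) {
            count++;
            bytes += (long)strlen(buf) - 1;   /* don't count the newline */
        } else if (feof(stdin)) {
            break;                            /* syslog-ng closed the pipe */
        } else {
            /* fgets() was interrupted by the alarm; in the worst case a
             * line split across reads is dropped -- fine for a rate counter. */
            clearerr(stdin);
        }

        if (tick) {
            tick = 0;
            openlog("127.0.0.1", LOG_NDELAY, LOG_LOCAL3);
            syslog(LOG_INFO, "Syslog-ng\\Lines=%ld Syslog-ng\\Bytes=%ld",
                   count, bytes);
            closelog();
            count = 0;
            bytes = 0;
            alarm(REPORT_INTERVAL);           /* rearm for the next interval */
        }
    }
    return 0;
}
</syslog-perf-alarm.c>

The only intended behavioural difference is that an idle interval still yields a Lines=0 report, which should keep the graph contiguous during quiet periods.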