On Wed, 2008-02-20 at 14:32 +0000, AndyH@nominet.org.uk wrote:
Hi,
I've been running 2.0.6 on Solaris 10 and it all seemed to be working fine. With a new application, however, I have found that it loses messages. This application sends 3-7000 messages/sec, and the messages contain a unique, incrementing ID (which makes dropped messages easy to spot).
When I run the syslogd as supplied with Solaris 10 then all messages get logged, but when I use syslog-ng then it loses messages. On a Sun V210 I see these messages
message overflow on /dev/log minor #6 -- is syslogd(1M) running?
message overflow on /dev/log minor #6 -- is syslogd(1M) running?
message overflow on /dev/log minor #6 -- is syslogd(1M) running?
on the console, and eventually there is a hardware reset and the server reboots.
I have tried running syslogd so that it just passes messages on to syslog-ng; this avoids the hardware reset, but some messages are still lost. I have also tried the time_sleep() and log_fetch_limit() options, with no success.
I have noticed that syslogd is multi-threaded while syslog-ng is single-threaded. Is this why messages are being missed? Are there any plans for a threaded version?
Anyone have any ideas as to how to get all messages logged with syslog-ng?
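For reference, this is roughly how I have those options set (a sketch, not my exact config; the values 20 and 100 are just placeholders I experimented with):

options {
  time_sleep(20);        # ms to idle between poll() iterations
  log_fetch_limit(100);  # max messages to read per iteration
};

source s_local {
  sun-streams("/dev/log" door("/etc/.syslog_door"));
};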
I don't think it is related to the multithreaded/single-threaded question. On Solaris, syslog-ng itself is multithreaded too, as the door() mechanism used to transport messages from applications to syslogd requires that.

However, syslog-ng issues only a single getmsg() call for each poll() iteration, so log_fetch_limit() is ineffective in this case. Can you check whether this patch solves/improves the behaviour? Thanks.

diff --git a/src/afstreams.c b/src/afstreams.c
index 009b074..d0a76f3 100644
--- a/src/afstreams.c
+++ b/src/afstreams.c
@@ -134,7 +134,7 @@ afstreams_sd_init(LogPipe *s, GlobalConfig *cfg, PersistentConfig *persist)
       close(fd);
       return FALSE;
     }
-  self->reader = log_reader_new(streams_read_new(fd), LR_LOCAL | LR_NOMREAD | LR_PKTTERM, s, &self->reader_options);
+  self->reader = log_reader_new(streams_read_new(fd), LR_LOCAL | LR_PKTTERM, s, &self->reader_options);
   log_pipe_append(self->reader, s);
 
   if (self->door_filename)

This will make the log-fetch-limit() option effective, so several messages are fetched in every iteration, which can easily multiply performance. Please also check whether local messages get mangled in any way; I seriously doubt they would, but messing with message transports always carries some risk.

-- 
Bazsi