[syslog-ng] syslog-ng loses messages

Aaron Watkins syslog-ng@lists.balabit.hu
Fri, 1 Oct 2004 12:38:56 +0100


Hi there,

I have been doing some stress testing of syslog-ng as a possible
transport for event messages on our network. I have set up the
following system:

      Event Process
          |    
         \_/
      FIFO/pipe
          |
    _____\_/_______
    [              ]
    [  Machine 1   ]
    [   running    ] 
    [  syslog-ng   ]
    [______________] 
          |
         \_/
      TCP socket
          |
    _____\_/_______
    [              ]
    [  Machine 2   ]
    [   running    ]
    [  syslog-ng   ]
    [______________] 
          |
         \_/
       FIFO/pipe
          |
         \_/
         cat

Or, to describe it: I have a process writing to a pipe on Machine 1;
syslog-ng forwards those messages over a TCP socket to syslog-ng
running on Machine 2, which then writes them to another pipe that I
read with cat.
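
For completeness, the cat end on Machine 2 is just a named pipe that I
create by hand before starting syslog-ng; roughly this (the path
matches the pipe() destination in the Machine 2 config below):

# Machine 2: create the output FIFO, then read whatever syslog-ng writes to it
mkdir -p /tmp/prepress
mkfifo /tmp/prepress/nievents_pipe
cat /tmp/prepress/nievents_pipe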

My event process runs in an infinite loop, writing a batch of 41
numbered messages to the pipe and then sleeping for 1 second.

The problem I have discovered is that if I kill (-TERM or -9) the
syslog-ng running on Machine 2, some messages are lost before
syslog-ng on Machine 1 starts buffering them. If I kill it while my
event process is sleeping and then restart it, the first two messages
(once the event process resumes transmitting) are lost. Alternatively,
if I kill it while my event process is writing data (and syslog-ng on
Machine 1 is transmitting over the socket), about five messages are
normally lost.
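
For reference, this is roughly how I run the test and count the loss
(a sketch only; the restart path is just illustrative, and the copy
files are the dst_localEventCopy destinations from the configs below):

# on Machine 2: kill syslog-ng mid-test, then bring it back a few seconds later
pkill -TERM syslog-ng                 # or kill -9 <pid>; messages are lost either way
sleep 5
/usr/local/sbin/syslog-ng             # restart syslog-ng (install path assumed)

# once the event process has been stopped, compare the counts on each machine
wc -l /data/gen/log/nievent_copy.log  # run on both Machine 1 and Machine 2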

Is this a recognised problem, and is there a way to work around it?
Obviously, I would like to be able to get all of my messages through
(up to the buffer limit) without having to reprocess them from the
local event copy file.

I am running this system on Solaris 5.8.

The code for my event process is:

#!/bin/sh
# Write a batch of 41 numbered test messages to the FIFO, then sleep.
# The path matches the pipe() source in the Machine 1 config below.
while true
do
  count=0
  while [ $count -le 40 ]
  do
    echo "${count}: Hello World" >> /tmp/event_pipe
    count=`expr ${count} + 1`
  done
  sleep 1
done
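
The FIFO itself is created before syslog-ng and the script start
(just a sketch; the script name is illustrative):

# Machine 1: create the input FIFO, then start syslog-ng and the generator
mkfifo /tmp/event_pipe
./event_gen.sh &    # the generator script above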




My Machine 1 config file:

options { 
  sync(0); # Flush messages to destinations as soon as they are received
  time_reopen(10); # Wait 10 seconds before reopening a closed connection
  log_fifo_size(500); # The number of messages that can go in the output queue
  create_dirs(yes); # Create destination directories if required
};

source src_internalEvents { 
  internal(); 
};

source src_InputPipe {
  pipe( "/tmp/event_pipe" );
};

destination dst_localLog { 
  file( "/data/gen/log/nievent_all.log" 
	  template( "$YEAR-$MONTH-$DAY--$HOUR:$MIN:$SEC ($HOST:$PROGRAM): $MSG\n" ) ); 
};

destination dst_localEventCopy {
  file( "/data/gen/log/nievent_copy.log" 
	  template("$MSG\n") ); 
};

destination dst_OutputSocket { 
  tcp( "imgwrkfl-inp" port(1515) ); 
};

log { 
  source( src_InputPipe ); 
  destination( dst_localLog ); 
  destination( dst_localEventCopy );
  destination( dst_OutputSocket ); 
};

log { 
  source( src_internalEvents );
  source( src_InputPipe );
  destination( dst_localLog );
  flags( fallback );
};



My Machine 2 config file:

options {
  sync(0); # Flush messages to destinations as soon as they are received
  time_reopen(10); # Wait 10 seconds before reopening a closed connection
  log_fifo_size(500); # The number of messages that can go in the output queue
  create_dirs(yes); # Create destination directories if required
};

source src_internalEvents { 
  internal(); 
};

source src_InputSocket { 
  tcp( port(1515) );
};

destination dst_localLog { 
  file( "/data/gen/log/nievent_all.log" 
	  template( "$YEAR-$MONTH-$DAY--$HOUR:$MIN:$SEC ($HOST:$PROGRAM): $MSG\n" ) ); 
};

destination dst_localEventCopy {
  file( "/data/gen/log/nievent_copy.log" 
	  template( "$MSG\n") ); 
};

destination dst_OutputPipe { 
  pipe( "/tmp/prepress/nievents_pipe" 
	  template( "$MSG\n") ); 
};

log { 
  source( src_InputSocket );
  destination( dst_localLog ); 
  destination( dst_localEventCopy );
  destination( dst_OutputPipe ); 
};

log {
  source( src_internalEvents );
  source( src_InputSocket );
  destination( dst_localLog );
  flags( fallback );
};


Thanks in advance for your help.

Aaron Watkins