[syslog-ng] Using a perl script with the pipe destination driver

Jim Mozley syslog-ng@lists.balabit.hu
Tue, 27 Apr 2004 19:48:28 +0100


I have a perl script that reads from a pipe using the following construct:

while ( 1 ) {
    die "$pipe is not a pipe!"  unless ( -p $pipe );
    open PIPE, $pipe or die "Cannot open $pipe: $!";
    my @lines = <PIPE>;
    close PIPE;

    for ( @lines ) {
       # do some stuff
    }
}

The "do some stuff part" loads info into a database (the messages are 
processed before being put into the database and two hashes keep cached 
information relevant to this in order to speed up the process).
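
Roughly, the processing part looks something like the sketch below 
(heavily simplified; made-up table and column names, a guessed-at DBI 
connection, and only one of the two cache hashes shown, so this is not 
the real code):

use strict;
use warnings;
use DBI;

# Placeholder connection details, not the real ones.
my $dbh = DBI->connect('dbi:Pg:dbname=syslog', 'user', 'pass',
                       { RaiseError => 1, AutoCommit => 1 });

my %host_cache;   # hostname -> hosts.id, so repeat hosts skip the SELECT

sub host_id {
    my ($host) = @_;
    return $host_cache{$host} if exists $host_cache{$host};
    my ($id) = $dbh->selectrow_array(
        'SELECT id FROM hosts WHERE name = ?', undef, $host);
    unless (defined $id) {
        $dbh->do('INSERT INTO hosts (name) VALUES (?)', undef, $host);
        ($id) = $dbh->selectrow_array(
            'SELECT id FROM hosts WHERE name = ?', undef, $host);
    }
    return $host_cache{$host} = $id;
}

sub process_line {
    my ($line) = @_;
    # Very rough parse: "Apr 27 19:48:28 somehost rest of message"
    my ($stamp, $host, $msg) =
        $line =~ /^(\w{3}\s+\d+\s+[\d:]{8})\s+(\S+)\s+(.*)$/
        or return;
    $dbh->do('INSERT INTO messages (logged_at, host_id, msg)
              VALUES (?, ?, ?)',
             undef, $stamp, host_id($host), $msg);
}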

I can test using:

cat syslog > pipe

and the script does its stuff, loading single syslog messages or a 
day's worth (around 3000 messages in 8 seconds).

I then try to add a destination within syslog-ng.conf:

destination d_pipe {
        pipe("/path/to/pipe");
};

and

log {
        source(all);
        destination(d_pipe);
};

This doesn't work as I expected. It seems as if I have some kind of 
locking problem: as I send test syslog messages nothing happens, but if 
I stop syslog-ng, all the test messages I have added (using logger) are 
then processed by the script. I can restart syslog-ng and send more 
messages, and again they are not read by the script until syslog-ng is 
stopped.

I was able to use the construct:

open PIPE, $pipe or die "Cannot open $pipe: $!";
while ( <PIPE> ) {
    # do something, e.g. a test write to a file
}
close PIPE;

as a test and this worked OK, seeming to process each message as I 
entered it (although I didn't try a volume test).
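
If the second construct is the right way to go, I imagine wrapping it 
in an outer loop, something like the sketch below (just a sketch; as I 
understand it, the inner while loop sees EOF whenever the last writer 
closes the FIFO, e.g. when syslog-ng is restarted, so the reader has to 
reopen):

use strict;
use warnings;

my $pipe = '/path/to/pipe';

while ( 1 ) {
    die "$pipe is not a pipe!" unless -p $pipe;

    # Opening a FIFO for reading blocks until a writer (syslog-ng)
    # opens the other end.
    open my $fh, '<', $pipe or die "Cannot open $pipe: $!";

    # Read a line at a time; this loop only ends when all writers have
    # closed the pipe, at which point we loop round and reopen.
    while ( my $line = <$fh> ) {
        chomp $line;
        process_line($line);   # the "do some stuff" code, as sketched above
    }
    close $fh;
}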

Based on what I'd found so far I had a few questions:

- Can anyone explain what looks like a locking problem with the first 
approach (apologies if I'm really asking a general unix or perl 
question, but testing the pipe outside of syslog-ng seemed to work OK)?

- If I use the second approach, are log messages written atomically, 
such that I will get them one at a time?

- One of the reasons I adopted the first approach is that I can throw 
a lot of messages at the pipe and still have them processed and put in 
the DB in a short space of time. I was concerned that if I used the 
second approach and syslog-ng was passing messages into the pipe faster 
than they could be processed, I would end up losing messages. I'm sure 
there is either an OS or syslog-ng buffer that is configurable 
(log_fifo_size() perhaps, plus the OS pipe buffer), but I assume it's 
finite. We are logging a lot of events from network equipment, and in 
the event of a "message storm" when network problems occur I wanted to 
ensure all of the messages are captured (a rough sketch of what I had 
in mind follows below).
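
To stop the database being the bottleneck during such a storm, the idea 
would be to batch the inserts, something like this (again only a 
sketch, reusing $dbh and process_line() from the first sketch and the 
open pipe handle $fh from the reader above; the batch size is an 
arbitrary number that would need tuning):

# Drain the pipe a line at a time, but commit to the DB in batches so
# a burst of messages doesn't cost one transaction each.
my $batch_size = 500;
my @batch;

sub flush_batch {
    return unless @batch;
    $dbh->begin_work;                 # one transaction per batch
    process_line($_) for @batch;
    $dbh->commit;
    @batch = ();
}

while ( my $line = <$fh> ) {
    chomp $line;
    push @batch, $line;
    flush_batch() if @batch >= $batch_size;
}
# Note: lines can sit in @batch during quiet spells with this scheme;
# a real version would also flush on a timer (alarm() or similar).
flush_batch();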

I'm using perl 5.8 and syslog-ng 1.6.2 on Solaris 8.

Jim Mozley