Problems writing to files with different templates/ownership in 3.x
Hello,

I'm using syslog-ng as a remote syslog server for several of our internal groups. When a syslog message arrives, I use the source address to determine the ownership of the log file. Here is how I'm currently doing this:

    source net { udp(ip(0.0.0.0) port(514) flags(no-multi-line)); };

    filter examplefilter  { netmask("10.0.0.0/255.255.0.0"); };
    filter example2filter { netmask("192.168.0.0/255.255.0.0"); };

    destination examplelog  { file("/var/log/remote/$HOST.log" group(examplegroup)); };
    destination example2log { file("/var/log/remote/$HOST.log" group(example2group)); };

    log { source(net); filter(examplefilter);  destination(examplelog);  flags(final); };
    log { source(net); filter(example2filter); destination(example2log); flags(final); };

This worked fine in 1.6.x and appears to work fine in 3.2.x, but when logrotate pokes syslog-ng I get lines like this:

    syslog-ng: Internal error, duplicate configuration elements refer to the same persistent config; name='affile_dd_writers(/var/log/remote/$HOST.log)'

Is there a better way to do this that produces only one reference to "/var/log/remote/$HOST.log"?

I'm also using a similar set of filters and destinations to apply different templates to text written to a single file. This produces the same error, but I'm more concerned that the combined file might misbehave if the config isn't supposed to have multiple destinations pointing at the same location. Is there a way to use filters to apply templates while still defining only one destination? Could my current setup lose log lines?

Thanks,
Brian
On Mon, 2011-10-31 at 12:00 -0700, Brian De Wolf wrote:
> Hello,
>
> I'm using syslog-ng as a remote syslog server for several of our internal groups. When a syslog message arrives, I use the source address to determine the ownership of the log file. Here is how I'm currently doing this:
>
>     source net { udp(ip(0.0.0.0) port(514) flags(no-multi-line)); };
>
>     filter examplefilter  { netmask("10.0.0.0/255.255.0.0"); };
>     filter example2filter { netmask("192.168.0.0/255.255.0.0"); };
>
>     destination examplelog  { file("/var/log/remote/$HOST.log" group(examplegroup)); };
>     destination example2log { file("/var/log/remote/$HOST.log" group(example2group)); };
>
>     log { source(net); filter(examplefilter);  destination(examplelog);  flags(final); };
>     log { source(net); filter(example2filter); destination(example2log); flags(final); };
>
> This worked fine in 1.6.x and appears to work fine in 3.2.x, but when logrotate pokes syslog-ng I get lines like this:
>
>     syslog-ng: Internal error, duplicate configuration elements refer to the same persistent config; name='affile_dd_writers(/var/log/remote/$HOST.log)'
>
> Is there a better way to do this that produces only one reference to "/var/log/remote/$HOST.log"?
>
> I'm also using a similar set of filters and destinations to apply different templates to text written to a single file. This produces the same error, but I'm more concerned that the combined file might misbehave if the config isn't supposed to have multiple destinations pointing at the same location. Is there a way to use filters to apply templates while still defining only one destination? Could my current setup lose log lines?
Well, syslog-ng keeps some data structures around for file destinations when reloading itself. Since the configuration can change between reloads, these data structures are identified by a name, generated from the parameters of the destination in question. E.g. the set of currently open files for the /var/log/remote/$HOST.log file destination is filed under:

    affile_dd_writers(/var/log/remote/$HOST.log)

as shown in the error message. You have two file destinations generating the same id, which causes problems at reload time.

My not-yet-implemented solution for the problem would be to allow the user to explicitly define the ID to be used for this purpose. We have some half-baked designs for that, but nothing more concrete.

The effect of this configuration is that at reload time the set of open files can get mixed up, and likewise the statistics counters. All in all, it's not a good situation.

The workaround is to create several symlinks pointing back to the same directory and use a different path in each of your file destinations; since the symlinks resolve to the same directory, the end result is the same as with your current config:

    ln -s /var/log/remote /var/log/remote/examplegroup
    ln -s /var/log/remote /var/log/remote/example2group

And then use:

    destination examplelog  { file("/var/log/remote/examplegroup/$HOST.log"  group(examplegroup)); };
    destination example2log { file("/var/log/remote/example2group/$HOST.log" group(example2group)); };

This should solve the issue until there's a better fix around.

-- 
Bazsi
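The symlink workaround above can be sanity-checked in a throwaway sandbox. This sketch uses placeholder paths under a temp directory rather than the real /var/log layout, and ordinary `echo` writes in place of syslog-ng; it only demonstrates that writes through either symlinked path land in the same underlying file:

```shell
# Sandbox mimicking the workaround layout; paths are placeholders, not /var/log.
dir=$(mktemp -d)
mkdir -p "$dir/remote"

# Each group gets its own symlink pointing back at the shared directory.
ln -s "$dir/remote" "$dir/remote/examplegroup"
ln -s "$dir/remote" "$dir/remote/example2group"

# Writing through either symlinked path hits the same underlying file,
# just as the two distinct file() destinations would after the workaround.
echo "line-from-group1" >> "$dir/remote/examplegroup/host1.log"
echo "line-from-group2" >> "$dir/remote/example2group/host1.log"

cat "$dir/remote/host1.log"
```

Because each destination now names a distinct path string, syslog-ng generates distinct persist names, while the symlinks keep all logs physically in one directory.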
participants (2)
- Balazs Scheidler
- Brian De Wolf