A technique we use in an in-house application I wrote -- one that writes log entries to a database -- would, I think, be very useful for syslog-ng in high-performance setups. The idea is to commit as frequently as the backend database can handle: you commit whenever your queue is completely empty. With a low volume of logs coming in, that means committing after every log entry. This is quite demanding on the database, so at high volumes, as the database loses the ability to keep up, the queue in syslog-ng starts to build and syslog-ng commits less often. With more log entries per commit/transaction, database throughput increases. So you end up writing to the database as fast as the database can handle, with as little data loss as possible should syslog-ng die somehow. You would also need some sort of max-transaction-size so you don't end up with too many uncommitted log entries if the database is being really slow.

As mentioned, I've been using this technique in our in-house application for a couple of years now and it performs beautifully: very low latency between syslog-ng receiving a message and it being available (to a SELECT query) in the database, without overwhelming the database. In theory you could apply the same principle to file destinations as well (with flush/sync in place of commit), but I don't think it'd be as useful there. But anyway, just an idea :-)
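
To make the idea concrete, here's a minimal sketch of the writer loop in Python, assuming a DB-API style connection (e.g. psycopg2) and a thread-safe queue feeding it. The table name, the MAX_BATCH value, and the function names are all hypothetical illustrations, not anything from syslog-ng itself:

    import queue

    MAX_BATCH = 1000  # hypothetical cap on uncommitted entries per transaction

    def writer_loop(conn, log_queue):
        # Drain log_queue into the database, committing adaptively:
        # - low volume: the queue empties after each entry, so we
        #   commit per entry (lowest latency);
        # - high volume: the queue stays non-empty, so many entries
        #   accumulate per commit (highest throughput).
        cur = conn.cursor()
        while True:
            entry = log_queue.get()  # block until at least one entry arrives
            batch = 0
            while True:
                cur.execute("INSERT INTO logs (msg) VALUES (%s)", (entry,))
                batch += 1
                if batch >= MAX_BATCH:
                    break  # cap uncommitted work if the database is very slow
                try:
                    entry = log_queue.get_nowait()
                except queue.Empty:
                    break  # queue drained: commit what we have
            conn.commit()

Note that the commit size is never chosen explicitly; it falls out of how far the queue has built up, which is exactly the self-tuning behaviour described above.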