I don't have the expertise or the budget to design and deploy a database that can handle billions of records a month and still answer queries in reasonable time. I tried the MySQL approach, and I ended up with queries that would never finish because of the size of the database.

On 1/28/10 2:17 PM, Martin Holste wrote:
I've done this before using program() and a Perl script that multiplexes the logs as necessary. Syslog-NG won't do round-robin as far as I know. On the other hand, a simple Perl script run from program() can keep the number of open filehandles low, if that's your concern rather than raw throughput: open, write, then close immediately after each write.
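For what it's worth, a minimal sketch of that pattern might look like the following. The script path, the template, and the per-host file layout are my assumptions for illustration, not from any particular deployment:

    #!/usr/bin/perl
    # Minimal sketch of a program() multiplexer. Assumes syslog-ng invokes it
    # with something like:
    #   destination d_mux { program("/usr/local/bin/logmux.pl" template("$HOST $MSG\n")); };
    # The template, script path, and per-host file layout are assumptions.
    use strict;
    use warnings;

    $| = 1;    # flush on every write

    while (my $line = <STDIN>) {
        # First field is the host name, per the assumed template above.
        my ($host, $msg) = split / /, $line, 2;
        next unless defined $msg;

        # Open, write, close immediately: only one filehandle is ever open,
        # trading some throughput for a tiny open-filehandle footprint.
        open my $fh, '>>', "/var/log/hosts/$host.log" or next;
        print $fh $msg;
        close $fh;
    }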
I'm genuinely curious: what design issue are you addressing with 10,000 simultaneously open log files? Is it that you prefer file-based tools (grep, etc.) versus dumping into a database?
--Martin
On Thu, Jan 28, 2010 at 3:08 PM, Rory Toma <rory@ooma.com> wrote:
OK, I have a question. I've found that running several syslog-ng processes works well when dealing with tens of thousands of log files.
So I'd like to receive everything in one syslog-ng process and forward it, via localhost, to about a hundred different syslog-ng processes on the same machine. Is there a good way to write a (log; filter; dest;) statement that would round-robin among a bunch of destinations in a single line?
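Per Martin's reply above, syslog-ng has no native round-robin, but a program() script can fake it. A rough sketch, assuming roughly a hundred worker processes listening on UDP ports 5141-5240 on localhost (the port range and count are made up for illustration):

    #!/usr/bin/perl
    # Hypothetical round-robin forwarder, run as a program() destination in
    # the front-end syslog-ng process. Port range 5141..5240 is an assumption.
    use strict;
    use warnings;
    use IO::Socket::INET;

    # One UDP socket per worker process on localhost.
    my @workers = map {
        IO::Socket::INET->new(
            PeerAddr => '127.0.0.1',
            PeerPort => $_,
            Proto    => 'udp',
        ) or die "cannot create socket to port $_: $!"
    } (5141 .. 5240);

    my $i = 0;
    while (my $line = <STDIN>) {
        # Hand each message to the next worker in turn.
        $workers[$i]->send($line);
        $i = ($i + 1) % @workers;
    }

Each worker would then listen with something along the lines of source s_worker { udp(ip(127.0.0.1) port(5141)); }; with its own port.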