I am developing a reporting application based on syslog-ng log files. The input is a firewall log file; a filter in the syslog-ng configuration matches the "teardown" keyword and writes the matching entries to a pipe, and my Perl code runs in the background, parsing the logs and inserting them into a database. When I count matches in the log file with grep -c -i teardown and compare against my DB entries, there is a difference of 0.3%.

Do you think using program() in syslog-ng is more efficient than reading a pipe? Or, instead of reading a pipe and filtering with syslog-ng, could I tail the log file with Perl and do the filtering in Perl? Any ideas?

The maximum log rate is around 500 entries per second, and the filtered results should be about half of that.

Yüce SUNGUR
Türkiye İş Bankası A.Ş. Genel Müdürlük Bilgi İşlem Müdürlüğü
İş Kuleleri Kule-1 Kat 3, 34330 Levent/İstanbul
Tel: +90 (212) 316 88 49  Fax: +90 (212) 316 0938  Mobile: +90 532 748 5466
Yuce.sungur@isbank.com.tr
On Wed, 2007-10-10 at 11:43 +0300, Yüce Sungur wrote:
I am developing a reporting application based on syslog-ng log files. The input is a firewall log file; a filter in the syslog-ng configuration matches the "teardown" keyword and writes the matching entries to a pipe, and my Perl code runs in the background, parsing the logs and inserting them into a database.
When I count matches in the log file with grep -c -i teardown and compare against my DB entries, there is a difference of 0.3%.
Do you think using program() in syslog-ng is more efficient than reading a pipe? Or, instead of reading a pipe and filtering with syslog-ng, could I tail the log file with Perl and do the filtering in Perl?
Any ideas?
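The pipe-based setup described in the question could look roughly like this in syslog-ng configuration terms. This is a minimal sketch; the object names (s_fw, f_teardown, d_pipe) and the pipe path are assumptions for illustration, not taken from the actual configuration:

```
# Sketch of the pipe-based setup: a filter matching "teardown"
# feeds a named pipe that the background Perl parser reads.
# Names and paths here are hypothetical.
source s_fw { udp(ip(0.0.0.0) port(514)); };

filter f_teardown { match("teardown"); };

# syslog-ng writes matching entries to a named pipe;
# the Perl reader consumes them from the other end.
destination d_pipe { pipe("/var/run/teardown.pipe"); };

log { source(s_fw); filter(f_teardown); destination(d_pipe); };
```

One caveat of this layout is that syslog-ng and the Perl reader are coupled only through the pipe: if the reader is not keeping up or is not attached, entries can go missing at that boundary.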
Filtering in syslog-ng should perform better, as you filter out the unneeded entries without having to write them to a pipe first.
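For comparison, the program() alternative raised in the question would look roughly like this. In this sketch the script path is hypothetical, and the source and filter names refer to whatever the existing configuration defines; syslog-ng starts the program itself and writes the filtered entries to its standard input, which removes the named pipe from the picture:

```
# Hypothetical program() variant: syslog-ng spawns the Perl
# parser once at startup and feeds the filtered entries to
# its stdin, instead of going through a named pipe.
destination d_prog { program("/usr/local/bin/teardown2db.pl"); };

log { source(s_fw); filter(f_teardown); destination(d_prog); };
```

With program(), syslog-ng also restarts the child process if it exits, so the reader side cannot silently be absent the way a detached pipe reader can.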
-- Bazsi
participants (2)
- Balazs Scheidler
- Yüce Sungur