Hari Sekhon wrote:
I'm also interested in something like this.
The other alternative is to have a second destination which is text based; you can then run an analyzer on that. Unfortunately, when I tried this using logwatch on the text files, logwatch was so inefficient that it took more than a day to analyze one day's logs (a single file of around 11MB)!
What kind of processor/memory specs do you have? 11MB is not that bad; I have maillogs much bigger than that run against logwatch every day:

    [mgt@bell ~]$ du -sh /var/log/maillog
    295M    /var/log/maillog

Last night's logwatch run took 3 minutes on a dual Xeon 2.4GHz with 2.5GB of RAM.

I have a feeling this is going to stray off topic, but... it is possible to use a "wrapper" script for logwatch against the database (I know because I have one). The concept is to query the database for the time range and facility you want, save the result to a text file, and then run logwatch against that. Example:

    select date,time,host,msg from current
    where host = "sirius" and facility = "mail"
    and date = FROM_DAYS(TO_DAYS(curdate()) - 1)

That gets the mail logs for my host sirius from yesterday. Save the output in /tmp as maillog, do the same for each facility you need, and then run logwatch against that directory using the --logdir switch.

If you need more help, I think you should bring this to the logwatch users list.

Good luck.

-Mike
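For anyone wanting to script the above, here is a minimal sketch of such a wrapper. The table and column names (current, date, time, host, msg, facility) follow the query in the post; the HOST value, the facility list, the output directory, and the commented-out mysql invocation are assumptions you would adapt to your own setup. The sketch only builds and prints each query; uncomment the mysql line to actually dump the logs.

```shell
#!/bin/sh
# Hypothetical logwatch wrapper sketch: for each syslog facility, build a
# query that selects yesterday's records for one host, dump the result to
# a text file, then point logwatch at that directory with --logdir.

HOST="sirius"                 # assumption: host to report on
OUTDIR="/tmp/logwatch-dump"   # assumption: where the per-facility files go
mkdir -p "$OUTDIR"

for FACILITY in mail cron authpriv; do
    # Same shape as the query in the post, parameterized per facility.
    QUERY="select date,time,host,msg from current \
where host = '$HOST' and facility = '$FACILITY' \
and date = FROM_DAYS(TO_DAYS(curdate()) - 1)"

    echo "$QUERY"
    # A real run would pipe the query into mysql, e.g. (untested, adjust
    # database name and credentials to your install):
    # mysql -N -B syslog -e "$QUERY" > "$OUTDIR/${FACILITY}log"
done

# Then analyze the dumped files:
# logwatch --logdir "$OUTDIR"
```

The per-facility file names matter: logwatch decides which service filters to apply based on the file name, so writing the mail facility to a file called maillog keeps its mail parsers happy.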