Hi Wolfi + Gero,

Your projects look interesting, because there seems to be little
publicly available information in that area.

:o how much net load does syslog create?
:
:o how much data does a host generate?

The overhead of the syslog protocol itself is rather small. The problem
is the volume of logging done by the hosts/applications. Normal
operations cause no significant load, but when things go wrong, the
amount of log data ranges from zero to infinity. I have seen 1 GB /var
filesystems filled in no time flat by syslogd. You don't want that in
your DB, do you? So a serious architecture requires some preprocessing
before messages are sent to the central host.

:o common tools
:
:i'm aware of several analyzing tools (swatch, logsurfer ... from
:

Not to forget the big commercial framework packages: CA Unicenter, HP
OpenView, Tivoli TME, BMC Patrol + Command/Post. A smaller product with
a very good reputation is Netcool Omnibus. All of these provide agents
which scan and preprocess arbitrary logfiles. Yes, with central
configuration and predefined templates for standard log entries. No,
long-term analysis is not [yet] easy with these, but service reporting
functionality is sprouting up everywhere and will hopefully be useful
in the near future.

:as said, i'd be very interested in your thoughts on the topic, your
:experiences and maybe references to papers/tools.

Although everybody has to work on this, there is not much public
material on it AFAIK. Maybe www.summitonline.com has a bit.

For the analysis of complex event patterns, a rather new approach is to
visualize the conceptual relationships between events. E.g.:

Girardin98: "A Visual Approach for Monitoring Logs"
(http://www.usenix.org/events/lisa98/girardin.html)

Hellerstein99: "EventBrowser: A Flexible Tool for Scalable Analysis of
Event Data" (http://www.research.ibm.com/PM/perf_mgt.html)

HTH
Peter
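P.S. The preprocessing step I mentioned (filtering log floods before
they reach the central host) could look roughly like this minimal
sketch. It is only an illustration, not any particular product's
method; the repeat threshold and the dedup key (timestamp stripped,
digits normalized) are my own assumptions.

```python
# Minimal sketch of a log-preprocessing filter: suppress floods of
# near-identical messages before forwarding them to the central host.
# MAX_REPEATS and the normalization rule below are assumptions.
import re
from collections import Counter

MAX_REPEATS = 5  # forward at most this many copies of one message


def dedup_key(line):
    # Drop the first three syslog fields (month, day, time) and
    # normalize digits, so "retry 1", "retry 2", ... collapse into
    # a single key.
    parts = line.split(None, 3)
    msg = parts[3] if len(parts) == 4 else line
    return re.sub(r"\d+", "N", msg)


def preprocess(lines):
    seen = Counter()
    out = []
    for line in lines:
        key = dedup_key(line)
        seen[key] += 1
        if seen[key] <= MAX_REPEATS:
            out.append(line)
        elif seen[key] == MAX_REPEATS + 1:
            # emit one marker instead of the rest of the flood
            out.append("filter: suppressing further copies of: " + key)
    return out
```

A runaway daemon that logs the same error thousands of times would then
contribute only MAX_REPEATS lines plus one suppression notice to the
central database, instead of filling it.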