Some thoughts on this. I refer to the machine collecting logs as the server, and the machines writing logs to it as the clients.

1) TCP connections

syslog-ng clients keep persistent connections to the server they log to. It has been argued that TCP has some overhead compared to UDP. Now imagine you have 300 clients logging to your web server/CGI. Using HTTP, a client has to set up a new connection every time it wants to write logs to your server. If persistent TCP has _some_ overhead, a full connection setup and teardown per message can only be called *huge*.

2) Calling the CGI

Every time a client logs to the server, the server has to exec your CGI. With 300 clients logging simultaneously, each one forces the server to exec the CGI over and over again. There's a reason people came up with ASP, PHP, embedded <script> and servlets: the overhead of loading, linking and executing the called program. It gets even worse with an interpreted language, because then the interpreter has to be loaded again every time as well.

3) File handling

syslog opens its local files once, keeps the fds, and flushes its message queues from time to time. With CGI you have to open, write and close your files every time a client delivers its logs.

4) Managing logs on the server

With generic syslog you have considerable freedom in deciding which logs get written to which file, based on priority, facility or even regexps. With the CGI approach, by contrast, you're very limited in where you can write to. If you let each instance of your CGI write to its own file, I imagine it could get a bit messy.

Even if you build the system as proposed (HTTP+CGI), and even if the server can handle all that overhead, the question remains whether your homegrown system will prove anywhere near as reliable as a piece of software that was specifically designed for the task. A few rough sketches illustrating points 1-4 follow at the end of this post.

--
Those who do not understand Unix are condemned to reinvent it, poorly.
		-- Henry Spencer
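P.S. To make point 1 concrete, here is a rough Python sketch of the two connection models. The host, port and the one-line-per-message framing are made up for illustration; real syslog over TCP has its own framing.

    import socket

    MESSAGES = ["message %d" % i for i in range(300)]

    # Persistent-connection model (what syslog-ng does): one TCP
    # handshake, then every message travels over the same socket.
    def log_persistent(host, port):
        with socket.create_connection((host, port)) as sock:
            for msg in MESSAGES:
                sock.sendall(msg.encode() + b"\n")

    # Connection-per-message model (what HTTP+CGI amounts to): a full
    # TCP handshake and teardown for every single log line.
    def log_per_message(host, port):
        for msg in MESSAGES:
            with socket.create_connection((host, port)) as sock:
                sock.sendall(msg.encode() + b"\n")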
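Point 2 is easy to measure yourself. A sketch for a Unix box, with /bin/true standing in for the CGI binary (actual numbers will vary by machine):

    import subprocess
    import timeit

    # The CGI model: fork+exec a fresh process for every log delivery.
    exec_cost = timeit.timeit(
        lambda: subprocess.run(["/bin/true"]), number=100)

    # The resident-daemon model: the handler is already loaded, so
    # "handling" a message is just a function call.
    call_cost = timeit.timeit(lambda: None, number=100)

    print("100 fork+execs:        %.3fs" % exec_cost)
    print("100 in-process calls:  %.6fs" % call_cost)

And that only measures exec'ing a trivial compiled binary; an interpreted CGI adds interpreter startup on top.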
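For point 3, the two file-handling patterns side by side, again as a rough sketch (the path is made up):

    # syslog model: open the log file once at startup, keep the fd,
    # and let buffering batch the writes.
    logfile = open("/var/log/myapp.log", "a")

    def write_syslog_style(line):
        logfile.write(line + "\n")      # no open/close per message

    # CGI model: every single delivery pays open() + write() + close().
    def write_cgi_style(line):
        with open("/var/log/myapp.log", "a") as f:
            f.write(line + "\n")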
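And for point 4, the kind of routing syslog-ng gives you in its config file. This is only a sketch: the source/filter/destination names are mine, and you would adapt the facilities, levels and paths to your setup.

    source s_remote { tcp(ip("0.0.0.0") port(514)); };

    filter f_mail   { facility(mail); };
    filter f_errors { level(err..emerg); };

    destination d_mail   { file("/var/log/mail.log"); };
    destination d_errors { file("/var/log/errors.log"); };

    log { source(s_remote); filter(f_mail);   destination(d_mail); };
    log { source(s_remote); filter(f_errors); destination(d_errors); };

Getting the same routing out of a pile of CGI instances means reimplementing all of this yourself.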