I have just set up syslog-ng to log about 15 machines to a central log host, and I absolutely love it, but I have a question. Right now I'm splitting the logs like this on the loghost:

destination hosts {
    file("/mnt/backups/logs/$HOST/$YEAR/$MONTH/$FACILITY$YEAR$MONTH"
         owner(root) group(root) perm(0600) dir_perm(0700) create_dirs(yes));
};

but that only splits the logs by facility. There are more logs being written on the client machines, which are configured like this:

destination messages { file("/var/log/messages"); };
destination ssh      { file("/var/log/ssh.log"); };
destination cron     { file("/var/log/cron.log"); };
destination auth     { file("/var/log/auth.log"); };
destination syslog   { file("/var/log/syslog.log"); };
destination xinetd   { file("/var/log/xinetd.log"); };
destination rsync    { file("/var/log/rsync.log"); };
destination cfengine { file("/var/log/cfengine.log"); };

filter f_ssh      { program("sshd"); };
filter f_cron     { program("cron"); };
filter f_auth     { program("su") or program("sudo"); };
filter f_syslog   { program("syslog-ng"); };
filter f_xinetd   { program("xinetd"); };
filter f_rsync    { program("rsyncd"); };
filter f_cfengine { program("cfengine"); };
filter f_messages { ... };    # messages gets everything else

log { source(src); filter(f_ssh);      destination(ssh); };
log { source(src); filter(f_cron);     destination(cron); };
log { source(src); filter(f_auth);     destination(auth); };
log { source(src); filter(f_syslog);   destination(syslog); };
log { source(src); filter(f_xinetd);   destination(xinetd); };
log { source(src); filter(f_rsync);    destination(rsync); };
log { source(src); filter(f_cfengine); destination(cfengine); };
log { source(src); filter(f_messages); destination(messages); };

What I would like to do is have the loghost split the messages into the same files that are being written locally on the client machines. Do I have to change the destination? How do I get the same file names, etc., on the loghost? Any help is greatly appreciated.

-Jeffrey

--
--------------------------
Jeffrey Forman
Gentoo Infrastructure Team
jforman@gentoo.org
--------------------------
I'm not sure I understand you correctly, but I think you have two options:

1) move the filtering/destinations to the loghost

2) create separate destinations for each of your filters which send the log information on a specific port (e.g. send all logs about ssh to port 10001, cron to 10002, auth to 10003) and use this port information on the loghost to sort messages into the various destinations

--
Bazsi
PGP info: KeyID 9AF8D0A9  Fingerprint CD27 CFB0 802C 0944 9CFD 804E C82C 8EB1
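To make option 1 concrete, here is a rough sketch of what the loghost side could look like, reusing the clients' program() filters and putting $HOST into the path. The source name s_net, the UDP transport on port 514, and the exact file layout are assumptions for illustration, not taken from the configs in this thread; only two of the filters are shown.

source s_net { udp(ip(0.0.0.0) port(514)); };

# same program() filters as on the clients
filter f_ssh  { program("sshd"); };
filter f_cron { program("cron"); };
# ... and so on for the other filters ...

destination d_ssh {
    file("/mnt/backups/logs/$HOST/$YEAR/$MONTH/ssh.log"
         owner(root) group(root) perm(0600) dir_perm(0700) create_dirs(yes));
};
destination d_cron {
    file("/mnt/backups/logs/$HOST/$YEAR/$MONTH/cron.log"
         owner(root) group(root) perm(0600) dir_perm(0700) create_dirs(yes));
};

log { source(s_net); filter(f_ssh);  destination(d_ssh); };
log { source(s_net); filter(f_cron); destination(d_cron); };
# ... one log statement per filter, plus a catch-all for everything else ...

This keeps all the sorting knowledge on the loghost, so each client only needs one extra destination that forwards everything to it, while still writing its own local files.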
Bazsi,

I'd like to have the client machines log to their own file structure under /var/log and send copies to the loghost; that is why I have the filtering on the client side. About your second suggestion: setting up separate ports per machine is going to be a logistical and security nightmare, as we will be adding more and more machines as time progresses. I tried specifying just a directory on the loghost via /mnt/scratch/logs/$HOST, but that didn't quite work; it either complained about no file/directory found or just logged everything into one file. Any suggestions?

-Jeffrey
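On the /mnt/scratch/logs/$HOST attempt: one likely explanation is that the file() driver wants a full filename, not just a directory, and that new directories are only created when create_dirs(yes) is set. A minimal sketch of a per-host, per-program destination follows, assuming the syslog-ng version in use supports the $PROGRAM macro in file paths (s_net is the same assumed source name as in the sketch above; if $PROGRAM is not available, per-filter destinations as in option 1 give a similar layout):

destination d_perhost {
    # $HOST and $PROGRAM expand per message; create_dirs(yes) creates the
    # per-host directories as they are first needed
    file("/mnt/scratch/logs/$HOST/$PROGRAM.log"
         owner(root) group(root) perm(0600) dir_perm(0700) create_dirs(yes));
};

log { source(s_net); destination(d_perhost); };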
I believe your program filter will only work on the local host. The central server will only have the log level and facility to sort with.

Russell
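For completeness, a rough sketch of the per-port approach Bazsi described as option 2; the hostname loghost.example.com is a placeholder, and only the ssh and cron ports he mentioned (10001 and 10002) are shown:

# client side: one forwarding destination per filter
destination d_ssh_remote  { tcp("loghost.example.com" port(10001)); };
destination d_cron_remote { tcp("loghost.example.com" port(10002)); };
log { source(src); filter(f_ssh);  destination(d_ssh_remote); };
log { source(src); filter(f_cron); destination(d_cron_remote); };

# loghost side: one listening source per port, no program() filtering needed
source s_ssh  { tcp(ip(0.0.0.0) port(10001)); };
source s_cron { tcp(ip(0.0.0.0) port(10002)); };
destination d_ssh_port  { file("/mnt/backups/logs/$HOST/$YEAR/$MONTH/ssh.log"  create_dirs(yes)); };
destination d_cron_port { file("/mnt/backups/logs/$HOST/$YEAR/$MONTH/cron.log" create_dirs(yes)); };
log { source(s_ssh);  destination(d_ssh_port); };
log { source(s_cron); destination(d_cron_port); };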
participants (3)
- Balazs Scheidler
- Jeffrey Forman
- Russell Adams