incorrect firewall logs with syslog-ng 2.0.0 [security error?]
Hello,

After upgrading from syslog-ng 1.6.11 to 2.0.0 I've got strange messages in the /var/log/messages file:

Nov 29 08:14:05 denise WINDOW=16985 RES=0x00 ACK URGP=0
Nov 29 11:10:19 denise PT=34536 WINDOW=16985 RES=0x00 ACK URGP=0
Nov 29 16:42:16 denise W=0 RES=0x00 RST URGP=0
Nov 29 21:37:15 denise 116 ID=34901 DF PROTO=TCP SPT=3584 DPT=80 WINDOW=65535 RES=0x00 ACK FIN URGP=0

These are truncated iptables logs. Because the firewall.log file contains entries similar to the truncated ones, I think the full line should be:

Nov 29 11:10:19 denise firewallp=INVALID:1 a=DROP IN=eth1 OUT= MAC=00:30:4f:36:2b:dc:00:04:9a:2c:7f:20:08:00 SRC=193.41.230.81 DST=xxx.xxx.xxx.x LEN=576 TOS=0x00 PREC=0x00 TTL=117 ID=3495 PROTO=TCP SPT=443 DPT=34536 WINDOW=16985 RES=0x00 ACK URGP=0

(xxx.xxx.xxx.x is the real server IP address - one 'x' is one digit)

Config file with the interesting lines:

options { dir_perm(0700); perm(0600); chain_hostnames(no); stats(1200); keep_hostname(yes); dir_owner(root); dir_group(root); owner(root); group(root); };

source local {
    unix-stream("/dev/log" max_connections(300));
    file("/proc/kmsg");
    # udp(ip(0.0.0.0) port(514));
    internal();
};

# .....
# authpriv.none;cron.none;mail.none;news.none    -/var/log/messages
filter f_2 { not facility(news); };
filter f_3 { not facility(local5); };
filter f_4 { not facility(local6); };
filter f_5 { not facility(authpriv); };
filter f_6 { not facility(cron); };
filter f_7 { not facility(mail); };
filter f_notlocal7 { not facility(local7); };
filter f_notfirewall { not match("firewallp="); };
destination d_2 { file("/var/log/messages" create_dirs(yes)); };
log { source(local); filter(f_2); filter(f_4); filter(f_5); filter(f_6); filter(f_7); filter(f_notimapd); filter(f_notpopa3d); filter(f_notnamed); filter(f_notfirewall); destination(d_2); };

# .....
# firewallp.*    -/var/log/firewall.log
filter f_16 { level(warning) and match("firewallp="); };
destination d_12 { file("/var/log/firewall.log" create_dirs(yes)); };
log { source(local); filter(f_16); destination(d_12); };
# .....

Do you know where the error could be? It is a production server, so I can't debug syslog-ng on this machine (sorry).

NOTE: the only truncated lines I can find are iptables entries (AFAIR they are logged via /proc/kmsg). All other lines are complete. And it is probably a critical security problem.

PS: a simple statistics line from syslog-ng:

Nov 30 00:00:06 denise syslog-ng[32682]: Log statistics; dropped='pipe(/dev/tty12)=0', processed='center(queued)=758289', processed='center(received)=369020', processed='destination(d_maillogimapd)=4245', processed='destination(d_5)=1', processed='destination(d_4)=0', processed='destination(d_3)=14259', processed='destination(d_2)=14259', processed='destination(d_13)=0', processed='destination(d_1)=6000', processed='destination(d_12)=6000', processed='destination(d_11)=369020', processed='destination(d_10)=36', processed='destination(d_8)=0', processed='destination(d_maillogpopa3d)=103224', processed='destination(d_7)=241245', processed='destination(d_6)=0', processed='source(local)=369020'

I think you can see that it could be a little hard to debug (with so many messages parsed by syslog-ng) ;)

Jacek
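For what it's worth, a quick way to count the truncated fragments is sketched below; the grep patterns are an assumption based on the sample lines above, not something from the original post. Fragments still carry iptables fields such as URGP= or WINDOW= but have lost the leading "firewallp=" tag, so they slip past f_notfirewall and land in /var/log/messages.

# count truncated iptables fragments that leaked into /var/log/messages
grep -cE 'URGP=|WINDOW=' /var/log/messages
# complete entries should only appear in firewall.log
grep -c 'firewallp=' /var/log/firewall.log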
On Thu, 2006-11-30 at 00:31 +0100, Jacek Kalinski wrote:
Hello,
After upgrading from syslog-ng 1.6.11 to 2.0.0 I've got strange messages in the /var/log/messages file:
Nov 29 08:14:05 denise WINDOW=16985 RES=0x00 ACK URGP=0
Nov 29 11:10:19 denise PT=34536 WINDOW=16985 RES=0x00 ACK URGP=0
Nov 29 16:42:16 denise W=0 RES=0x00 RST URGP=0
Nov 29 21:37:15 denise 116 ID=34901 DF PROTO=TCP SPT=3584 DPT=80 WINDOW=65535 RES=0x00 ACK FIN URGP=0
These are truncated iptables logs. Because the firewall.log file contains entries similar to the truncated ones, I think the full line should be:
Nov 29 11:10:19 denise firewallp=INVALID:1 a=DROP IN=eth1 OUT= MAC=00:30:4f:36:2b:dc:00:04:9a:2c:7f:20:08:00 SRC=193.41.230.81 DST=xxx.xxx.xxx.x LEN=576 TOS=0x00 PREC=0x00 TTL=117 ID=3495 PROTO=TCP SPT=443 DPT=34536 WINDOW=16985 RES=0x00 ACK URGP=0
(xxx.xxx.xxx.x is the real server IP address - one 'x' is one digit)
It is not necessarily syslog-ng that is at fault here: if the kernel ring buffer overflows (because of higher traffic, for example), the kernel may hand out partial lines when /proc/kmsg is read.

You can increase the kernel ring buffer size by raising the kernel config option CONFIG_LOG_BUF_SHIFT.

-- Bazsi
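For reference, a sketch of how the current buffer size could be checked and raised; the config path and the boot parameter are assumptions about a typical setup, not details from this thread.

# size the running kernel was built with (the value is an exponent: 17 = 128 KiB)
grep CONFIG_LOG_BUF_SHIFT /boot/config-$(uname -r)
# or, if the kernel exposes its configuration:
zgrep CONFIG_LOG_BUF_SHIFT /proc/config.gz
# raising it means setting a larger value in the kernel .config and rebuilding;
# many kernels also accept a boot-time override such as log_buf_len=512k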
On 2006-11-30 10:53, Balazs Scheidler wrote:
After upgrading from syslog-ng 1.6.11 to 2.0.0 I've got strange messages in the /var/log/messages file:
Nov 29 08:14:05 denise WINDOW=16985 RES=0x00 ACK URGP=0
Nov 29 11:10:19 denise PT=34536 WINDOW=16985 RES=0x00 ACK URGP=0
Nov 29 16:42:16 denise W=0 RES=0x00 RST URGP=0
Nov 29 21:37:15 denise 116 ID=34901 DF PROTO=TCP SPT=3584 DPT=80 WINDOW=65535 RES=0x00 ACK FIN URGP=0
It is not necessarily syslog-ng that is at fault here: if the kernel ring buffer overflows (because of higher traffic, for example), the kernel may hand out partial lines when /proc/kmsg is read.
You can increase the kernel ring buffer size by raising the kernel config option CONFIG_LOG_BUF_SHIFT.
I don't think so. syslog-ng 1.6.11 had been installed for three months before this (since 1 August, to be exact), and the kernel hasn't been changed for at least the last month or two either. There were no problems of this kind before, even under much higher network traffic, and I'm sure there were no "truncated" entries, because every strange line in the system logs is reported to me immediately.

Since the upgrade (27 November), truncated lines have been showing up in the logs every 3-10 hours, so it happens very often. All I can do in this situation is downgrade to syslog-ng 1.6 (yes, I still have the previous binary in a backup) and check whether the errors recur.

PS: I'm not sure it is a syslog-ng error, but the circumstances are very, very strange and tied to the upgrade.

Jacek
On Thu, 2006-11-30 at 19:23 +0100, Jacek Kalinski wrote:
On 2006-11-30 10:53, Balazs Scheidler wrote:
After upgrading from syslog-ng 1.6.11 to 2.0.0 I've got strange messages in the /var/log/messages file:
Nov 29 08:14:05 denise WINDOW=16985 RES=0x00 ACK URGP=0
Nov 29 11:10:19 denise PT=34536 WINDOW=16985 RES=0x00 ACK URGP=0
Nov 29 16:42:16 denise W=0 RES=0x00 RST URGP=0
Nov 29 21:37:15 denise 116 ID=34901 DF PROTO=TCP SPT=3584 DPT=80 WINDOW=65535 RES=0x00 ACK FIN URGP=0
It is not necessarily syslog-ng that is at fault here: if the kernel ring buffer overflows (because of higher traffic, for example), the kernel may hand out partial lines when /proc/kmsg is read.
You can increase the kernel ring buffer size by raising the kernel config option CONFIG_LOG_BUF_SHIFT.
I don't think so. syslog-ng 1.6.11 had been installed for three months before this (since 1 August, to be exact), and the kernel hasn't been changed for at least the last month or two either. There were no problems of this kind before, even under much higher network traffic, and I'm sure there were no "truncated" entries, because every strange line in the system logs is reported to me immediately.
Since the upgrade (27 November), truncated lines have been showing up in the logs every 3-10 hours, so it happens very often. All I can do in this situation is downgrade to syslog-ng 1.6 (yes, I still have the previous binary in a backup) and check whether the errors recur.
PS: I'm not sure it is a syslog-ng error, but the circumstances are very, very strange and tied to the upgrade.
Syslog-ng is software, and as such it can certainly have bugs of its own. I just said that the symptoms are very similar to kernel ring buffer overflows. This can happen if kernel log messages are generated faster than syslog-ng is able to process them.

The possible reasons for syslog-ng not being able to keep up with messages:
1) the rate of kernel messages got higher
2) syslog-ng got slower in your specific scenario
3) there is some kind of external cause that makes syslog-ng block (DNS, for instance), which increases latency

A possible reason for #3 might be that 2.0.0 lacked a DNS cache, which I have since added to my TLA tree.

What could help track down the issue is to strace syslog-ng with timestamps (strace -ttT -s 4096 -p <pid>), wait for a message to be garbled, and analyze the results (see the command sketch below). I would:
- identify the garbled message in the strace output (it should appear in a read() call)
- check the file descriptor of the /proc/kmsg file
- identify the read() call that comes before the previously identified read call and refers to the same file descriptor
- check the timestamps: how much time is spent between the two read() calls, and what happened between them - clearly syslog-ng was working on something instead of reading the kernel message source

If you already know where the time is being spent (possible causes: DNS lookups, time_sleep, etc.), you can work on decreasing those latencies. If nothing jumps out:
1) either syslog-ng 2.0.0 got slower in your specific setup (although I have had reports that syslog-ng 2.0.0 is significantly faster than 1.6.x); to improve that I'd need a profile generated by oprofile
2) or increase the kernel ring buffer size

-- Bazsi
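A concrete sketch of that procedure; the PID is taken from the statistics line quoted earlier and the descriptor number is an assumption, so adjust both to what you actually see.

# attach to the running syslog-ng, with timestamps and long string dumps
strace -ttT -s 4096 -p 32682 -o /tmp/syslog-ng.strace
# in another shell, find which descriptor /proc/kmsg is open on
ls -l /proc/32682/fd | grep kmsg
# once a truncated line shows up in /var/log/messages, list the read() calls on
# that descriptor (here assumed to be fd 4) and look at the time spent between
# the read that returned the fragment and the one before it
grep 'read(4,' /tmp/syslog-ng.strace | less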
syslog-ng 2.0.8

I am just starting to use the "file" source to follow files and would like to make a suggestion as to how this mechanism should work.

Currently, as I understand it, on startup syslog-ng reads the persistent data file to determine where it "left off" in a file and resumes from that location. My concern is what it does if the file is no longer the same file. It seems that in the case where the file has changed, none of the existing data in the file is processed by syslog-ng, and only new data arriving in the file is processed.

I propose that the information saved in the persistent file include the offset within the file; upon starting, if the offset does not fall on a line break, the file is no longer the same file and should be processed from the beginning. Also, if the file is smaller than the offset, it is a new file and should be processed from the beginning.

Evan Rempel
On Wed, 2008-03-12 at 08:33 -0700, Evan Rempel wrote:
syslog-ng 2.0.8
I am just starting to use the "file" source to follow files and would like to make a suggestion as to how this mechanism should work.
Currently, as I understand it, on startup syslog-ng reads the persistent data file to determine where it "left off" in a file and resumes from that location. My concern is what it does if the file is no longer the same file.
It seems that in the case where the file has changed, none of the existing data in the file is processed by syslog-ng, and only new data arriving in the file is processed.
I propose that the information saved in the persistent file include the offset within the file; upon starting, if the offset does not fall on a line break, the file is no longer the same file and should be processed from the beginning. Also, if the file is smaller than the offset, it is a new file and should be processed from the beginning.
With the new serialize framework in 2.1 I can add the inode number of the file to the persistent information. This way I can decide whether it's the same file (same inode) or not. Does that sound ok for you? -- Bazsi
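Roughly the decision this would allow, sketched in shell terms; the file name and the stored values are made up for illustration.

# values syslog-ng would have remembered in its persist file (hypothetical)
STORED_INODE=123456
STORED_OFFSET=40960
# state of the file as found on startup
CURRENT_INODE=$(stat -c '%i' /var/log/followed.log)
CURRENT_SIZE=$(stat -c '%s' /var/log/followed.log)
if [ "$CURRENT_INODE" != "$STORED_INODE" ] || [ "$CURRENT_SIZE" -lt "$STORED_OFFSET" ]; then
    echo "different or truncated file: process it from the beginning"
else
    echo "same file: resume at the stored offset"
fi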
Balazs Scheidler wrote:
On Wed, 2008-03-12 at 08:33 -0700, Evan Rempel wrote:
syslog-ng 2.0.8
I am just starting to use the "file" source to follow files and would like to make a suggestion as to how this mechanism should work.
Currently, as I understand it, on startup syslog-ng reads the persistent data file to determine where it "left off" in a file and resumes from that location. My concern is what it does if the file is no longer the same file.
It seems that in the case where the file has changed, none of the existing data in the file is processed by syslog-ng, and only new data arriving in the file is processed.
I propose that the information saved in the persistent file include the offset within the file; upon starting, if the offset does not fall on a line break, the file is no longer the same file and should be processed from the beginning. Also, if the file is smaller than the offset, it is a new file and should be processed from the beginning.
With the new serialize framework in 2.1 I can add the inode number of the file to the persistent information.
This way I can decide whether it's the same file (same inode) or not.
Does that sound ok for you?
Hmmm. Mostly good.

I wouldn't do this, but if someone overwrote the file on startup and then started syslog-ng to read it, the first group of messages would still be missed by syslog-ng. Perhaps /var/log/dmesg is an example; another might be a redirected console.

I think some logic that looks at the size of the file, and at whether the persistent offset makes sense given the data at that offset, would be more conclusive. Using the inode may still be the most efficient way to detect a change, but this is a startup cost, not a normal running cost, so efficiency may not be that important for this operation.

To be really sure, you could CRC the last couple of lines you have read from that source and save that in your persistent data. Then, on startup, read back two lines and CRC them to match your persistent data. That way you can be very sure whether or not the file is the same one you were using when syslog-ng shut down.

Evan Rempel
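A sketch of that checksum idea in shell terms, just to make the sequence concrete; the file and persist paths are illustrative only, and in the real case the offset would come from the persist file rather than a live shell variable.

# at shutdown: record the offset reached plus a checksum of the last two lines read
OFFSET=$(stat -c '%s' /var/log/followed.log)
head -c "$OFFSET" /var/log/followed.log | tail -n 2 | cksum > /var/lib/syslog-ng/followed.crc
# at startup: recompute the checksum over the same byte range and compare;
# a mismatch means the file was replaced or rewritten since the offset was
# recorded, so it should be read from the beginning rather than resumed
if head -c "$OFFSET" /var/log/followed.log | tail -n 2 | cksum | cmp -s - /var/lib/syslog-ng/followed.crc; then
    echo "same file: resume at offset $OFFSET"
else
    echo "file changed: start from the beginning"
fi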
participants (3)
- Balazs Scheidler
- Evan Rempel
- Jacek Kalinski