[Bug 142] New: --worker-threads=0 support
https://bugzilla.balabit.com/show_bug.cgi?id=142

           Summary: --worker-threads=0 support
           Product: syslog-ng
           Version: 3.3.x
          Platform: PC
        OS/Version: Linux
            Status: NEW
          Severity: normal
          Priority: unspecified
         Component: syslog-ng
        AssignedTo: bazsi@balabit.hu
        ReportedBy: arekm@maven.pl
Type of the Report: enhancement
   Estimated Hours: 0.0

Hello,

I was hoping that threading support would solve the problem I described in
https://bugzilla.balabit.com/show_bug.cgi?id=113. Unfortunately it seems
impossible to create a configuration in which each target uses one thread by
default (and only such a configuration is safe when some targets block on
I/O).

This feature request is about adding an option like:

--worker-threads=0

that would set the number of worker threads to the number of sources plus the
number of destinations, so that each potentially blocking I/O operation would
happen in a separate thread. Yes, that is more threads than CPU cores, but
this configuration is about reliability, not performance.

(Right now I have only one network logging target that needs to always work,
so disabling threading globally and enabling it only for this network target
would probably get me "always working" logging via the network, even if some
infinite I/O blocking happens. What would be best is the ability to have such
a configuration on by default for all targets.)

-- 
Configure bugmail: https://bugzilla.balabit.com/userprefs.cgi?tab=email
------- You are receiving this mail because: -------
You are watching all bug changes.
https://bugzilla.balabit.com/show_bug.cgi?id=142

--- Comment #1 from Balazs Scheidler <bazsi@balabit.hu> 2011-10-30 05:48:23 ---
hmm... this patch against the bundled ivykis should do the trick:

$ git diff
diff --git a/modules/iv_work.c b/modules/iv_work.c
index e941745..e034fb1 100644
--- a/modules/iv_work.c
+++ b/modules/iv_work.c
@@ -334,7 +334,7 @@ iv_work_pool_submit_work(struct iv_work_pool *this, struct iv_work_item *work)
 				       struct work_pool_thread, list);
 		thr->kicked = 1;
 		iv_event_post(&thr->kick);
-	} else if (pool->started_threads < this->max_threads) {
+	} else if (!this->max_threads || pool->started_threads < this->max_threads) {
 		iv_work_start_thread(pool);
 	}

Can you check whether it is indeed ok? What it does: if max_threads is set to
0, it allows the creation of threads indefinitely.

This one is needed against the syslog-ng core:

diff --git a/lib/mainloop.c b/lib/mainloop.c
index f092536..681148b 100644
--- a/lib/mainloop.c
+++ b/lib/mainloop.c
@@ -292,6 +292,7 @@ main_loop_io_worker_thread_start(void *cookie)
    * since the ID map is stored in a single 64 bit integer. If we ever need
    * more threads than that, we can generalize this algorithm further. */
+  main_loop_io_worker_id = 0;
   for (id = 0; id < main_loop_io_workers.max_threads; id++)
     {
       if ((main_loop_io_workers_idmap & (1 << id)) == 0)
@@ -311,8 +312,11 @@ main_loop_io_worker_thread_stop(void *cookie)
 {
   g_static_mutex_lock(&main_loop_io_workers_idmap_lock);
   dns_cache_destroy();
-  main_loop_io_workers_idmap &= ~(1 << (main_loop_io_worker_id - 1));
-  main_loop_io_worker_id = 0;
+  if (main_loop_io_worker_id)
+    {
+      main_loop_io_workers_idmap &= ~(1 << (main_loop_io_worker_id - 1));
+      main_loop_io_worker_id = 0;
+    }
   g_static_mutex_unlock(&main_loop_io_workers_idmap_lock);
 }

Threads above 64 will not scale very well in the log_queue_fifo_push_tail()
function call, as they don't get a unique ID for per-thread structures.

I'd appreciate some testing to see whether this works out for you. Thanks.
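The condition change in the ivykis hunk above can be illustrated in isolation. A minimal sketch, with a hypothetical helper name (this is not syslog-ng code):

```c
#include <stdbool.h>

/* Hypothetical helper mirroring the patched condition in
 * iv_work_pool_submit_work(): a max_threads value of 0 is treated as
 * "no limit", so a new worker thread may always be started. */
static bool should_start_thread(int started_threads, int max_threads)
{
    return !max_threads || started_threads < max_threads;
}
```

With max_threads == 0 the first operand short-circuits to true, so the pool keeps spawning a thread for every pending job that finds no idle worker.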
https://bugzilla.balabit.com/show_bug.cgi?id=142

--- Comment #2 from Arkadiusz Miśkiewicz <arekm@maven.pl> 2011-10-30 10:54:45 ---
Hmmm. Actually I don't know why I didn't simply try to set
--worker-threads=1024. 1024 should be more than enough for most
configurations. That should work, too - right?

I'm only worried about the main_loop_io_worker_thread_start() comment limiting
the idmap to 64. Could this use a dynamically allocated array instead?
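The 64-thread limit comes from the ID map being a single 64-bit integer: each worker claims one bit, and a worker that finds the map full simply runs without a unique ID. A minimal sketch of that allocation scheme, with illustrative names rather than the actual syslog-ng variables:

```c
#include <stdint.h>

/* Illustrative bitmap allocator: bit N set means worker ID N+1 is taken.
 * IDs are 1-based so that 0 can mean "no unique ID assigned". */
static uint64_t worker_idmap;

static int alloc_worker_id(void)
{
    for (int id = 0; id < 64; id++)
        if ((worker_idmap & ((uint64_t)1 << id)) == 0)
        {
            worker_idmap |= (uint64_t)1 << id;
            return id + 1;
        }
    return 0;   /* map full: caller gets no per-thread slot */
}

static void free_worker_id(int id)
{
    if (id)
        worker_idmap &= ~((uint64_t)1 << (id - 1));
}
```

A dynamically sized bitmap would lift the 64 cap, at the cost of resizing it under the same mutex that already guards the map.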
https://bugzilla.balabit.com/show_bug.cgi?id=142

--- Comment #3 from Arkadiusz Miśkiewicz <arekm@maven.pl> 2011-10-30 11:04:58 ---
Looks like 1024 is not enough to get "one thread handles no more than one
destination" behaviour. Blocking several destinations at the I/O level also
caused a destination that wasn't blocked to stop working.
https://bugzilla.balabit.com/show_bug.cgi?id=142

Balazs Scheidler <bazsi@balabit.hu> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |bazsi@balabit.hu

--- Comment #4 from Balazs Scheidler <bazsi@balabit.hu> 2011-10-31 05:46:33 ---
hmm, I'm not sure why that happens. Either:

- the D state in the kernel (uninterruptible sleep) applies to the whole
  process, not just to a single thread, or
- a lock is being held across the write syscall, which causes a deadlock
  later on.

You can validate the first one using "ps axuw -L", which shows all threads
along with their states. If that's the case, the only way to solve this is to
split the blocking portion into a separate syslog-ng process.

To diagnose the other, we need further information:

* it'd be useful to check what syslog-ng is doing in its main thread using
  strace
* it'd also be useful to check where each thread is by printing a backtrace
  for each thread in gdb
https://bugzilla.balabit.com/show_bug.cgi?id=142

--- Comment #5 from Arkadiusz Miśkiewicz <arekm@maven.pl> 2011-11-04 09:03:54 ---
Is _everything_ related to a destination running in a separate thread? Are
there no things like stat() happening before thread creation when a new log
message is going to be sent to the destination?

I've tried to look with gdb at what's happening, but as soon as I get a
D-state I'm unable to interrupt the process in gdb. I've tried to send SIGABRT
to syslog-ng, but it only worked once I made the D-state go away; it then
logged this below.

(gdb) set follow-fork-mode child
(gdb) r
Starting program: /sbin/syslog-ng -f /etc/syslog-ng/syslog-ng.conf --worker-threads=1024
[Thread debugging using libthread_db enabled]
[New process 5375]
[Thread debugging using libthread_db enabled]
[New process 5376]
[Thread debugging using libthread_db enabled]
Missing separate debuginfo for /lib/libwrap.so.0
Try to install package that provides `/usr/lib/debug/.build-id/22/bae42ce43bbb05b35ba52797f0f05a38548c86.debug' file
WARNING: window sizing for tcp sources were changed in syslog-ng 3.3, the configuration value was divided by the value of max-connections(). The result was too small, clamping to 100 entries. Ensure you have a proper log_fifo_size setting to avoid message loss.; orig_log_iw_size='25', new_log_iw_size='100', min_log_fifo_size='100000'
[New Thread 0xb7a89b70 (LWP 5377)]

Program received signal SIGABRT, Aborted.
[Switching to Thread 0xb7d74180 (LWP 5376)]
0xb7fde430 in __kernel_vsyscall ()
(gdb) bt
#0  0xb7fde430 in __kernel_vsyscall ()
#1  0xb7e38c66 in __xstat64 () from /lib/libc.so.6
#2  0xb7d6a3ef in stat64 () from /lib/syslog-ng/libaffile.so
#3  0xb7d655c7 in affile_open_file (name=0x81a7888 "/mnt/test/messages", flags=35137, uid=0, gid=124, mode=416, dir_uid=0, dir_gid=124, dir_mode=488, create_dirs=1, privileged=0, is_pipe=0, fd=0xbffff594) at affile.c:76
#4  0xb7d66793 in affile_dw_reopen (self=0x81a77e0) at affile.c:540
#5  0xb7d66afa in affile_dw_init (s=0x81a77e0) at affile.c:605
#6  0xb7d6523c in log_pipe_init (s=0x81a77e0, cfg=0x81a1910) at ../../lib/logpipe.h:239
#7  0xb7d67a3a in affile_dd_open_writer (args=0xb7a86f1c) at affile.c:1011
#8  0xb7f75ff4 in main_loop_call_handler (user_data=0x0) at mainloop.c:199
#9  0x0810c7ac in iv_event_run_pending_events (_dummy=0x0) at iv_event.c:67
#10 0x0810cb6d in iv_event_raw_got_event (_this=0xb7d7409c) at iv_event_raw.c:82
#11 0x08109cc6 in iv_run_active_list (active=0xbffffb44) at iv_main.c:219
#12 0x08109e85 in iv_main () at iv_main.c:269
#13 0xb7f771da in main_loop_run () at mainloop.c:722
#14 0x0805ea0a in main (argc=1, argv=0xbffffc44) at main.c:260
(gdb) info threads
  4 Thread 0xb7a89b70 (LWP 5377)  0xb7fde430 in __kernel_vsyscall ()
* 3 Thread 0xb7d74180 (LWP 5376)  0xb7fde430 in __kernel_vsyscall ()
(gdb) thread 4
[Switching to thread 4 (Thread 0xb7a89b70 (LWP 5377))]#0  0xb7fde430 in __kernel_vsyscall ()
(gdb) bt
#0  0xb7fde430 in __kernel_vsyscall ()
#1  0xb7f0231c in pthread_cond_wait@@GLIBC_2.3.2 () from /lib/libpthread.so.0
#2  0xb7f75e8a in main_loop_call (func=0xb7d679ad <affile_dd_open_writer>, user_data=0xb7a86f1c, wait=1) at mainloop.c:180
#3  0xb7d67deb in affile_dd_queue (s=0x81bf640, msg=0xb7105d10, path_options=0xb7a86fcc, user_data=0x0) at affile.c:1092
#4  0xb7f5c52e in log_pipe_queue (s=0x81bf640, msg=0xb7105d10, path_options=0xb7a86fcc) at logpipe.h:288
#5  0xb7f5c94b in log_dest_group_queue (s=0x81bf790, msg=0xb7105d10, path_options=0xb7a86fcc, user_data=0x0) at dgroup.c:97
#6  0xb7f643bc in log_pipe_queue (s=0x81bf790, msg=0xb7105d10, path_options=0xb7a86fcc) at logpipe.h:288
#7  0xb7f645fe in log_multiplexer_queue (s=0x81a5c48, msg=0xb7105d10, path_options=0xb7a8703c, user_data=0x0) at logmpx.c:125
#8  0xb7f643bc in log_pipe_queue (s=0x81a5c48, msg=0xb7105d10, path_options=0xb7a8703c) at logpipe.h:288
#9  0xb7f645fe in log_multiplexer_queue (s=0x81a21e0, msg=0xb7105d10, path_options=0xb7a871e8, user_data=0x0) at logmpx.c:125
#10 0xb7f7ea76 in log_pipe_queue (s=0x81a21e0, msg=0xb7105d10, path_options=0xb7a871e8) at logpipe.h:288
#11 0xb7f7ea26 in log_pipe_forward_msg (self=0x81bf348, msg=0xb7105d10, path_options=0xb7a871e8) at logpipe.h:275
#12 0xb7f7eee4 in log_source_group_queue (s=0x81bf348, msg=0xb7105d10, path_options=0xb7a871e8, user_data=0x0) at sgroup.c:102
#13 0xb7f70718 in log_pipe_queue (s=0x81bf348, msg=0xb7105d10, path_options=0xb7a871e8) at logpipe.h:288
#14 0xb7f706c8 in log_pipe_forward_msg (self=0x81bf158, msg=0xb7105d10, path_options=0xb7a871e8) at logpipe.h:275
#15 0xb7f70733 in log_pipe_queue (s=0x81bf158, msg=0xb7105d10, path_options=0xb7a871e8) at logpipe.h:292
#16 0xb7f706c8 in log_pipe_forward_msg (self=0x81a8690, msg=0xb7105d10, path_options=0xb7a871e8) at logpipe.h:275
#17 0xb7f70733 in log_pipe_queue (s=0x81a8690, msg=0xb7105d10, path_options=0xb7a871e8) at logpipe.h:292
#18 0xb7f706c8 in log_pipe_forward_msg (self=0x81a8848, msg=0xb7105d10, path_options=0xb7a871e8) at logpipe.h:275
#19 0xb7f712b1 in log_source_queue (s=0x81a8848, msg=0xb7105d10, path_options=0xb7a87254, user_data=0x0) at logsource.c:288
#20 0xb7f6e3ec in log_pipe_queue (s=0x81a8848, msg=0xb7105d10, path_options=0xb7a87254) at logpipe.h:288
#21 0xb7f6ef9f in log_reader_handle_line (self=0x81a8848, line=0xb7106048 "<189>Nov  4 08:55:44 crond: crond shutdown succeeded", length=53, saddr=0x0) at logreader.c:503
#22 0xb7f6f0d1 in log_reader_fetch_log (self=0x81a8848) at logreader.c:561
#23 0xb7f6e495 in log_reader_work_perform (s=0x81a8848) at logreader.c:115
#24 0xb7f766eb in main_loop_io_worker_job_start (self=0x81a89ac) at mainloop.c:358
#25 0x0810e8d0 in iv_work_thread_got_event (_thr=0x81a8210) at iv_work.c:113
#26 0x0810c7ac in iv_event_run_pending_events (_dummy=0x0) at iv_event.c:67
#27 0x0810cb6d in iv_event_raw_got_event (_this=0xb7a89a8c) at iv_event_raw.c:82
#28 0x08109cc6 in iv_run_active_list (active=0xb7a87834) at iv_main.c:219
#29 0x08109e85 in iv_main () at iv_main.c:269
#30 0x0810ebb1 in iv_work_thread (_thr=0x81a8210) at iv_work.c:196
#31 0x0810dad3 in iv_thread_handler (_thr=0x81a8278) at iv_thread.c:100
#32 0xb7efea30 in start_thread () from /lib/libpthread.so.0
#33 0xb7e4ad9e in clone () from /lib/libc.so.6
(gdb)

ps. Here is how I'm testing this: /mnt/test is mounted over NFS from another
machine; /var/log/user is on local disk. I expect /var/log/user to always
work, even if /mnt/test gets into a D-state. I'm logging things to it via
"logger qq". The D-state is produced by "iptables -I INPUT -s
IP-of-syslog-ng-host -j DROP" on the NFS server.

Now the scenario:

- logger qq -> gets logged into /var/log/user
- iptables .. -j DROP
- logger qq2 -> gets logged fine (most likely because nothing tried to log
  into /mnt/test yet)
- service crond restart -> tries to log into /mnt/test/...
- logger qq3 -> nothing gets logged into /var/log/user -> things are waiting
  for something

# more /etc/syslog-ng/syslog-ng.conf
@version: 3.3
#
# Syslog-ng configuration for PLD Linux
#
# See syslog-ng(8) and syslog-ng.conf(5) for more information.
#
options {
	flush_lines(0);
	owner(root);
	group(logs);
	perm(0640);
	create_dirs(yes);
	dir_owner(root);
	dir_group(logs);
	dir_perm(0750);
	stats_freq(3600);
	time_reopen(10);
	time_reap(360);
	mark_freq(600);
	threaded(yes);
};

source s_sys {
	file ("/proc/kmsg" program_override("kernel"));
	unix-stream("/dev/log" max-connections(1000));
	internal();
};

# uncomment the line below if you want to setup syslog server
#source s_net { udp(); };
#destination d_loghost { udp("loghost" port(514)); };

destination d_kern { file("/mnt/test/kernel"); };
destination d_messages { file("/mnt/test/messages"); };
destination d_authlog { file("/mnt/test/secure"); };
destination d_mail { file("/mnt/test/maillog"); };
destination d_uucp { file("/mnt/test/spooler"); };
destination d_debug { file("/mnt/test/debug"); };
#destination d_cron { file("/var/log/cron" owner(root) group(crontab) perm(0660)); };
destination d_cron { file("/mnt/test/cron" owner(root) group(crontab) perm(0660)); };
destination d_syslog { file("/mnt/test/syslog"); };
destination d_daemon { file("/mnt/test/daemon"); };
destination d_lpr { file("/var/log/lpr"); };
destination d_user { file("/var/log/user"); };
destination d_ppp { file("/var/log/ppp"); };
destination d_ftp { file("/var/log/xferlog"); };
destination d_audit { file("/var/log/audit"); };
destination d_postgres { file("/var/log/pgsql"); };
destination d_freshclam { file("/var/log/freshclam.log"); };

# Log iptables messages to separate file
destination d_iptables { file("/mnt/test/iptables"); };

destination d_console { usertty("root"); };
#destination d_console_all { file("/dev/tty12"); };
destination d_xconsole { pipe("/dev/xconsole"); };

destination d_newscrit { file("/var/log/news/news.crit" owner(news) group(news)); };
destination d_newserr { file("/var/log/news/news.err" owner(news) group(news)); };
destination d_newsnotice { file("/var/log/news/news.notice" owner(news) group(news)); };

# Filters for standard syslog(3) facilities
#filter f_audit { facility(audit); };
filter f_authpriv { facility(authpriv, auth); };
filter f_cron { facility(cron); };
filter f_daemon { facility(daemon); };
filter f_ftp { facility(ftp); };
filter f_kern { facility(kern); };
filter f_lpr { facility(lpr); };
filter f_mail { facility(mail); };
filter f_news { facility(news); };
filter f_syslog { facility(syslog); };
filter f_user { facility(user); };
filter f_uucp { facility(uucp); };
filter f_local0 { facility(local0); };
filter f_local1 { facility(local1); };
filter f_local2 { facility(local2); };
filter f_local3 { facility(local3); };
filter f_local4 { facility(local4); };
filter f_local5 { facility(local5); };
filter f_local6 { facility(local6); };
filter f_local7 { facility(local7); };

# Filters for standard syslog(3) priorities
filter p_debug { level(debug); };
filter p_info { level(info); };
filter p_notice { level(notice); };
filter p_warn { level(warn); };
filter p_err { level(err); };
filter p_alert { level(alert); };
filter p_crit { level(crit); };
filter p_emergency { level(emerg); };

# Additional filters for specific programs/use
filter f_freshclam { program(freshclam); };
filter f_ppp { program(pppd) or program(chat); };
filter f_postgres { program(postgres); };
filter f_iptables { match("IN=[A-Za-z0-9\.]* OUT=[A-Za-z0-9\.]*" value("MESSAGE")); };

log { source(s_sys); filter(f_authpriv); destination(d_authlog); };
log { source(s_sys); filter(f_cron); destination(d_cron); };
log { source(s_sys); filter(f_daemon); destination(d_daemon); };
log { source(s_sys); filter(f_ftp); destination(d_ftp); };
log { source(s_sys); filter(f_kern); destination(d_kern); };
log { source(s_sys); filter(f_lpr); destination(d_lpr); };
log { source(s_sys); filter(f_mail); destination(d_mail); };
log { source(s_sys); filter(f_news); filter(p_crit); destination(d_uucp); };
log { source(s_sys); filter(f_news); filter(p_crit); destination(d_newscrit); };
log { source(s_sys); filter(f_news); filter(p_err); destination(d_newserr); };
log { source(s_sys); filter(f_news); filter(p_warn); destination(d_newsnotice); };
log { source(s_sys); filter(f_news); filter(p_notice); destination(d_newsnotice); };
log { source(s_sys); filter(f_news); filter(p_info); destination(d_newsnotice); };
log { source(s_sys); filter(f_news); filter(p_debug); destination(d_newsnotice); };
log { source(s_sys); filter(f_syslog); destination(d_syslog); };
log { source(s_sys); filter(f_user); destination(d_user); };
log { source(s_sys); filter(f_uucp); destination(d_uucp); };
log { source(s_sys); filter(p_debug); destination(d_debug); };
log { source(s_sys); filter(f_daemon); filter(f_ppp); destination(d_ppp); };
log { source(s_sys); filter(f_local6); filter(f_freshclam); destination(d_freshclam); };
log { source(s_sys); filter(f_local0); filter(f_postgres); destination(d_postgres); };
#log { source(s_sys); filter(f_iptables); destination(d_iptables); };
log { source(s_sys); filter(p_emergency); destination(d_console); };
#log { source(s_sys); destination(d_console_all); };

# This is a catchall statement, and should catch all messages which were not
# accepted by any of the previous statements.
log { source(s_sys); destination(d_messages); flags(fallback); };

# Network syslogging
#log { source(s_sys); destination(d_loghost); };
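The two backtraces above fit together: thread 4 (a worker) is parked in pthread_cond_wait() inside main_loop_call(..., wait=1), while thread 3 (the main thread) executes the requested affile_dd_open_writer() and blocks in stat() on the NFS path. The handoff pattern can be sketched with pthreads; all names here are illustrative, not syslog-ng's actual API:

```c
#include <pthread.h>

/* Sketch of a synchronous cross-thread call in the style of
 * main_loop_call(func, data, wait=1): the worker queues a function for the
 * main thread and sleeps on a condition variable until it has run. If func
 * blocks (e.g. stat()/open() on a dead NFS mount), the worker stays blocked
 * as well -- and so does every other worker making such a call. */
struct sync_call {
    pthread_mutex_t lock;
    pthread_cond_t  done_cond;
    void (*func)(void *);
    void *arg;
    int done;
};

/* Main-loop side: run the queued function, then wake the waiter. */
static void sync_call_run(struct sync_call *c)
{
    c->func(c->arg);                    /* may block indefinitely */
    pthread_mutex_lock(&c->lock);
    c->done = 1;
    pthread_cond_signal(&c->done_cond);
    pthread_mutex_unlock(&c->lock);
}

/* Worker side: wait until the main loop has executed the function. */
static void sync_call_wait(struct sync_call *c)
{
    pthread_mutex_lock(&c->lock);
    while (!c->done)
        pthread_cond_wait(&c->done_cond, &c->lock);
    pthread_mutex_unlock(&c->lock);
}

static int demo_flag;
static void demo_func(void *arg) { (void)arg; demo_flag = 1; }

/* Single-threaded demo: run the call first, so the wait returns at once. */
static int sync_call_demo(void)
{
    struct sync_call c = {
        PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER,
        demo_func, 0, 0
    };
    sync_call_run(&c);
    sync_call_wait(&c);
    return demo_flag && c.done;
}
```

This is why a destination that never touches the NFS mount can still stall: its worker waits for the main thread, and the main thread is stuck in stat() on the dead mount.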
https://bugzilla.balabit.com/show_bug.cgi?id=142

Balazs Scheidler <bazsi@balabit.hu> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
         Resolution|                            |INVALID
             Status|NEW                         |RESOLVED

--- Comment #6 from Balazs Scheidler <bazsi@balabit.hu> 2011-11-12 13:15:34 ---
Reading your diagnosis, it seems that the D state applies to the whole
process: even if only one thread is in an uninterruptible sleep, the whole
process, with all its other threads, is suspended too.

This means that you'll need to run two syslog-ng processes to solve this:

- one that is unlikely to become blocked by invalid I/O (e.g. with the
  network destination)
- another one that writes stuff to the local disk (e.g. with the file
  destinations)

One instance can pass messages to the other using unix domain sockets or
pipes. In your situation the first one should read the local logs and pass
them on to the 2nd process. This would ensure that you get messages via the
network with better probability.

Sorry, I'm setting this to INVALID again, as I don't intend to change
syslog-ng to use a multi-process architecture; this can certainly be done
with the configuration outlined above.
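A sketch of the two-instance split suggested above, in syslog-ng configuration syntax. The socket path, hostname, port and option values are illustrative assumptions, untested:

```
# Instance A: reads local logs, never touches the NFS mount.
# Forwards everything to the network and to instance B.
source s_local { unix-stream("/dev/log"); internal(); };
destination d_net { tcp("loghost" port(514)); };
destination d_relay { unix-stream("/var/run/syslog-relay.sock"); };
log { source(s_local); destination(d_net); };
log { source(s_local); destination(d_relay); };

# Instance B (separate process, separate config file): reads from the
# relay socket and owns every file destination that may block on NFS.
source s_relay { unix-stream("/var/run/syslog-relay.sock" max-connections(10)); };
destination d_messages { file("/mnt/test/messages"); };
log { source(s_relay); destination(d_messages); };
```

Each instance would need its own config, pid file and persist state so they don't clash; see syslog-ng(8) for the exact command-line flags.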
https://bugzilla.balabit.com/show_bug.cgi?id=142

Arkadiusz Miśkiewicz <arekm@maven.pl> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
         Resolution|INVALID                     |
             Status|RESOLVED                    |REOPENED

--- Comment #7 from Arkadiusz Miśkiewicz <arekm@maven.pl> 2011-11-12 14:04:54 ---
The test program below uses two threads, one writing to the local disk, the
other writing to the NFS server that's blocked by the firewall (emulating the
D-state, the same way I was testing syslog-ng). It writes to the local disk
all the time without any problem, even when /mnt/test/cron is blocked. This
is the same setup where I tested syslog-ng.

So it looks like the D-state doesn't apply to the whole process.

#!/usr/bin/python
import time
import threading
import os

def log():
    # on local disk
    f = open('/var/log/user', 'a')
    while 1:
        f.write("%d: user: %s\n" % (os.getpid(), time.ctime()))
        f.flush()
        time.sleep(1)
    f.close()

def log_stuck():
    # /mnt/test is on NFS which is blocked on firewall
    f = open('/mnt/test/cron', 'a')
    while 1:
        f.write("%d: cron: %s\n" % (os.getpid(), time.ctime()))
        f.flush()
        time.sleep(1)
    f.close()

t1 = threading.Thread(target=log)
t2 = threading.Thread(target=log_stuck)
t1.start()
t2.start()
t1.join()
t2.join()
https://bugzilla.balabit.com/show_bug.cgi?id=142

--- Comment #8 from Balazs Scheidler <bazsi@balabit.hu> 2011-11-12 20:12:35 ---
ok, the reopen is valid then. If syslog-ng stalls, then the main thread is
doing something on the blocked filesystem, which blocks it too.

Can you run syslog-ng under strace in a mode that displays threads as well
and see why the main thread is getting stalled? Please make sure that the
strace is complete; syslog-ng goes into the background, and stracing only the
startup doesn't really help. Something like:

strace -ttT -o syslog-ng.log -s 256 -f /sbin/syslog-ng -Fe

Then reproduce the problem. The interesting portion of the strace is what the
main thread is doing once the D state hits. It is either blocking on a
mutex/condvar or it is doing something on the broken filesystem. I've quickly
browsed the related code, and no obvious thing stood out.
https://bugzilla.balabit.com/show_bug.cgi?id=142

--- Comment #9 from Arkadiusz Miśkiewicz <arekm@maven.pl> 2011-11-12 21:00:19 ---
There seem to be two cases:

1) when /mnt/test is unavailable from the beginning
2) when /mnt/test goes away some time after syslog-ng was started

1) hangs at create_containing_directory; 2) seems to hang at close() of the
/mnt/test/syslog descriptor. Is that done from the main thread?

Note that it didn't hang immediately. For some time after blocking /mnt/test,
logging to the local disk was still working.

[root@farm ~]# tail -f WYNIK
3556  20:49:11.591857 madvise(0xb43fe000, 8364032, MADV_DONTNEED <unfinished ...>
3210  20:49:11.591866 read(6,  <unfinished ...>
3556  20:49:11.591873 <... madvise resumed> ) = 0 <0.000010>
3210  20:49:11.591881 <... read resumed> "\1\0\0\0\0\0\0\0", 8) = 8 <0.000009>
3556  20:49:11.591892 _exit(0)          = ?
3210  20:49:11.591913 clock_gettime(CLOCK_MONOTONIC_RAW, {657, 626272341}) = 0 <0.000010>
3210  20:49:11.591944 epoll_wait(3, {}, 12, 14260) = 0 <14.261360>
3210  20:49:25.853354 clock_gettime(CLOCK_MONOTONIC_RAW, {671, 888411310}) = 0 <0.000011>
3210  20:49:25.853418 gettimeofday({1321127365, 853433}, NULL) = 0 <0.000011>
3210  20:49:25.853474 close(14^C
[root@farm ~]# grep "= 14" WYNIK
3210  20:43:25.845335 open("/etc/resolv.conf", O_RDONLY) = 14 <0.000007>
3210  20:43:25.846378 open("/etc/resolv.conf", O_RDONLY) = 14 <0.000008>
3210  20:43:25.846645 socket(PF_FILE, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 14 <0.000011>
3210  20:43:25.846747 socket(PF_FILE, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 14 <0.000006>
3210  20:43:25.846842 open("/etc/host.conf", O_RDONLY) = 14 <0.000007>
3210  20:43:25.847043 open("/etc/hosts", O_RDONLY|O_CLOEXEC) = 14 <0.000007>
3210  20:43:25.877494 open("/mnt/test/syslog", O_WRONLY|O_CREAT|O_NOCTTY|O_NONBLOCK|O_LARGEFILE, 0640) = 14 <0.027377>
https://bugzilla.balabit.com/show_bug.cgi?id=142

--- Comment #10 from Arkadiusz Miśkiewicz <arekm@maven.pl> 2011-11-12 21:01:13 ---
Created an attachment (id=44)
 --> (https://bugzilla.balabit.com/attachment.cgi?id=44)
syslog-ng strace
https://bugzilla.balabit.com/show_bug.cgi?id=142

--- Comment #11 from Balazs Scheidler <bazsi@balabit.hu> 2011-12-21 14:26:30 ---
It may very well be that the main thread is also invoking some syscalls on
the monitored files, as the code wasn't audited for this kind of usage.

I'm afraid doing this in a way that meets your requirements is going to be
very fragile, and it would often break as features get added (and new
syscalls get added to non-worker-thread code).

It might be doable now to delegate all I/O operations to the worker threads,
but such a rule fades from mind fast while coding, and might one day make a
feature impossible to implement. Testing it is not easy either.

I could integrate patches if they are not very intrusive, but you'd probably
have to take care of keeping this feature working, as I quite possibly
couldn't do it alone. This feature is only useful if it doesn't break every
now and then.

All this boils down to: I can't promise anything, but if patches are
available and you do regular testing of this specific feature, I would do my
best. That would probably still mean breakage every once in a while, and the
feature might intentionally be broken if implementing something new would be
much more expensive with this rule in mind.

Please let me know what you think.
https://bugzilla.balabit.com/show_bug.cgi?id=142

DIVYA <nayitaldeeva1331@gmail.com> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |nayitaldeeva1331@gmail.com