Zorp 2.1.5.5 can't handle load
Hi folks,

I'm worried. I'm in a situation where I've put a proxy cluster into production without adequately testing the Zorp component under load. I spent a lot of time testing load balancing, but didn't check that Zorp could cope with a large number of concurrent connections.

We're running zorp-2.1.5.5 on a Linux 2.4.25 (Gentoo) kernel with glibc-2.3.2 (Gentoo r9).

The HTTP proxy dies with sig11 (all registers printed zero in the stack dump sent to syslog) when it reaches some small number of concurrent threads over 130. So we tried using just the TCP Plug proxy, even for HTTP connections, but can't get a single instance using more than 1020 threads.

We have 4 Zorp boxes handling a 100Mbps uplink, load-balanced with LVS. LVS's ipvsadm also shows that the Zorp boxes aren't handling more than about 1000 concurrent connections. The visible symptom of all this is that some connection attempts aren't even accepted, while others are accepted but never serviced.

I've done a lot of Googling, and all the advice on increasing the number of threads allowed per process doesn't seem to apply: PTHREAD_THREADS_MAX is already large in the glibc sources, and NR_TASKS doesn't exist in the kernel source. I've bumped up ulimits for file descriptors and processes per user, but these don't help.

Help. I realise I went into production prematurely, but now that I'm here, it's a horrible place and I'm worried that I overestimated Zorp's ability to cope with load. Am I expecting too much from Zorp, or is this just something that more experienced Linux folks would know about?

Any ideas on how to get Zorp to handle the kind of concurrency other people on the list must be getting[1] would be greatly appreciated. Either I need to get Zorp to service a larger number of concurrent requests, or I need to know why it's not coping when it reaches its limit on concurrent requests. I tried lowering --threads to 200, but my connection attempts still either aren't accepted or time out waiting for a response.

Thanks,
Sheldon.

[1] I base this assumption on a posting in the archives, where the poster said he needed about 4 Zorp proxy hosts to handle 100Mbps.
2004-05-20, cs keltezéssel 22:51-kor Sheldon Hearn ezt írta:
Hi folks,
I'm worried. I'm in a situation where I've put a proxy cluster into production without adequately testing the Zorp component under load.
I spent a lot of time testing load balancing, but didn't check that Zorp could cope with a large number of concurrent connections.
We're running zorp-2.1.5.5 on a Linux 2.4.25 (Gentoo) kernel with glibc-2.3.2 (Gentoo r9).
Chances are that you are using python2.3 and zorp 2.1. There was a locking issue that looked similar to what you're seeing; the fix was included in 2.1.7.1. I'm not in the office right now, but I can provide a broken-out patch if you need it (though upgrading to 2.1.7.1 should help too).

-- Bazsi
PGP info: KeyID 9AF8D0A9 Fingerprint CD27 CFB0 802C 0944 9CFD 804E C82C 8EB1
2004-05-21, p keltezéssel 08:55-kor Balazs Scheidler ezt írta:
2004-05-20, cs keltezéssel 22:51-kor Sheldon Hearn ezt írta:
Hi folks,
I'm worried. I'm in a situation where I've put a proxy cluster into production without adequately testing the Zorp component under load.
I spent a lot of time testing load balancing, but didn't check that Zorp could cope with a large number of concurrent connections.
We're running zorp-2.1.5.5 on a Linux 2.4.25 (Gentoo) kernel with glibc-2.3.2 (Gentoo r9).
Chances are that you are using python2.3 and zorp 2.1. There was a locking issue that looked similar to what you're seeing; the fix was included in 2.1.7.1. I'm not in the office right now, but I can provide a broken-out patch if you need it (though upgrading to 2.1.7.1 should help too).
So the above fix should solve your SIGSEGV issue with HTTP; I will explain your possibilities with load in the next mail.

-- Bazsi
PGP info: KeyID 9AF8D0A9 Fingerprint CD27 CFB0 802C 0944 9CFD 804E C82C 8EB1
On Fri, 2004-05-21 at 10:37, Balazs Scheidler wrote:
Chances are that you are using python2.3 and zorp 2.1. There was a locking issue that looked similar to what you're seeing; the fix was included in 2.1.7.1. I'm not in the office right now, but I can provide a broken-out patch if you need it (though upgrading to 2.1.7.1 should help too).
So the above fix should solve your SIGSEGV issue with HTTP, I will explain your possibilities with load in the next mail.
I see only 2.1.7 available for download at http://www.balabit.com/downloads/zorp/2.1/src/, and it does not fix the SIGSEGV issue with HTTP.

# /usr/lib/zorp/zorp --version
Zorp 2.1.7
Compile-Date: May 21 2004 08:32:20
Config-Date: 2004/05/21
Trace: on
Debug: on
IPOptions: off
IPFilter-Tproxy: off
Netfilter-Tproxy: on
Netfilter-Linux22-Fallback: on
Linux22-Tproxy: off
Conntrack: on

Zorplib 2.1.12.3
Compile-Date: May 21 2004 08:30:53
Trace: on
MemTrace: off
Caps: on
Debug: on
StackDump: on

Ciao,
Sheldon.
2004-05-21, p keltezéssel 19:53-kor Sheldon Hearn ezt írta:
On Fri, 2004-05-21 at 10:37, Balazs Scheidler wrote:
Chances are that you are using python2.3 and zorp 2.1. There was a locking issue that looked similar to what you're seeing; the fix was included in 2.1.7.1. I'm not in the office right now, but I can provide a broken-out patch if you need it (though upgrading to 2.1.7.1 should help too).
So the above fix should solve your SIGSEGV issue with HTTP, I will explain your possibilities with load in the next mail.
I see only 2.1.7 available for download at http://www.balabit.com/downloads/zorp/2.1/src/, and it does not fix the SIGSEGV issue with HTTP.
# /usr/lib/zorp/zorp --version
Zorp 2.1.7
Compile-Date: May 21 2004 08:32:20
Config-Date: 2004/05/21
Trace: on
Debug: on
IPOptions: off
IPFilter-Tproxy: off
Netfilter-Tproxy: on
Netfilter-Linux22-Fallback: on
Linux22-Tproxy: off
Conntrack: on
This is the patch that was incorporated in 2.1.7.1, but the GPL version has not been released yet.

Index: zorp-core/lib/proxyvars.c
diff -u zorp-core/lib/proxyvars.c:1.10.2.1 zorp-core/lib/proxyvars.c:1.10.2.2
--- zorp-core/lib/proxyvars.c:1.10.2.1	Wed Aug  6 09:46:06 2003
+++ zorp-core/lib/proxyvars.c	Fri Apr 30 13:00:08 2004
@@ -742,19 +742,21 @@
       break;
     case Z_VAR_TYPE_METHOD:
       {
-        ZorpMethod *zm;
         ZProxy *proxy;

         proxy = va_arg(l, ZProxy *);
-        zm = z_py_zorp_method_new(proxy, va_arg(l, ZProxyMethodFunc));
-        e->value = zm;
+        z_python_lock();
+        e->value = z_py_zorp_method_new(proxy, va_arg(l, ZProxyMethodFunc));
+        z_python_unlock();
         e->free = z_proxy_vars_method_free;
         e->getvar = z_proxy_vars_method_get;
         break;
       }
     case Z_VAR_TYPE_HASH:
+      z_python_lock();
       e->value = z_py_zorp_hash_new((GHashTable *) va_arg(l, gpointer));
+      z_python_unlock();
       e->free = z_proxy_vars_hash_free;
       e->getvar = z_proxy_vars_hash_get;
       break;
@@ -766,8 +768,9 @@
       e->free = va_arg(l, gpointer);
       break;
     case Z_VAR_TYPE_DIMHASH:
-
+      z_python_lock();
       e->value = z_py_zorp_dimhash_new((ZDimHashTable *) va_arg(l, gpointer));
+      z_python_unlock();
       e->free = z_proxy_vars_dimhash_free;
       e->getvar = z_proxy_vars_dimhash_get;
       break;

-- Bazsi
PGP info: KeyID 9AF8D0A9 Fingerprint CD27 CFB0 802C 0944 9CFD 804E C82C 8EB1
On Sat, 2004-05-22 at 00:11, Balazs Scheidler wrote:
So the above fix should solve your SIGSEGV issue with HTTP, I will explain your possibilities with load in the next mail.
I see only 2.1.7 available for download at http://www.balabit.com/downloads/zorp/2.1/src/, and it does not fix the SIGSEGV issue with HTTP.
This is the patch that was incorporated in 2.1.7.1, but the GPL version has not been released yet. Index: zorp-core/lib/proxyvars.c diff -u [...] zorp-core/lib/proxyvars.c:1.10.2.2 [...]
That patch doesn't stop Zorp's HTTP proxy dying under load, still with a sig 11. The Plug proxy doesn't exhibit this behaviour. I'm going to try downgrading to 2.0.9, which also claims to have fixed this bug. Failing that, I give up. I should probably cut my time losses and write a URL filtering patch for FK's HTTP proxy and use that instead. This has been a disappointing experience, but I only have myself to blame for putting untested software into production. Ciao, Sheldon.
2004-05-22, szo keltezéssel 18:20-kor Sheldon Hearn ezt írta:
On Sat, 2004-05-22 at 00:11, Balazs Scheidler wrote:
So the above fix should solve your SIGSEGV issue with HTTP, I will explain your possibilities with load in the next mail.
I see only 2.1.7 available for download at http://www.balabit.com/downloads/zorp/2.1/src/, and it does not fix the SIGSEGV issue with HTTP.
This is the patch that was incorporated in 2.1.7.1, but the GPL version has not been released yet. Index: zorp-core/lib/proxyvars.c diff -u [...] zorp-core/lib/proxyvars.c:1.10.2.2 [...]
That patch doesn't stop Zorp's HTTP proxy dying under load, still with a sig 11. The Plug proxy doesn't exhibit this behaviour.
I'm going to try downgrading to 2.0.9, which also claims to have fixed this bug.
Failing that, I give up. I should probably cut my time losses and write a URL filtering patch for FK's HTTP proxy and use that instead. This has been a disappointing experience, but I only have myself to blame for putting untested software into production.
Sorry to hear that. We have successfully deployed Zorp in an environment similar to yours; however, we are running it on our Debian-based OS. Could you please post the segmentation fault dump (at least in a private mail)? I'd really like to have a look.

-- Bazsi
PGP info: KeyID 9AF8D0A9 Fingerprint CD27 CFB0 802C 0944 9CFD 804E C82C 8EB1
On Sat, 2004-05-22 at 18:20, Sheldon Hearn wrote:
That patch doesn't stop Zorp's HTTP proxy dying under load, still with a sig 11. The Plug proxy doesn't exhibit this behaviour.
I'm going to try downgrading to 2.0.9, which also claims to have fixed this bug.
Downgrading to zorp-2.0.9 with libzorpll-2.0.26.24 doesn't improve the situation.

# /usr/lib/zorp/zorp --version
Zorp 2.0.9
Compile-Date: May 22 2004 16:56:47
Config-Date: 2004/05/22
Trace: on
Debug: on
IPOptions: off
IPFilter-Tproxy: off
Netfilter-Tproxy: on
Netfilter-Linux22-Fallback: on
Linux22-Tproxy: off
Conntrack: on

Zorplib 2.0.26.24
Compile-Date: May 22 2004 16:50:51
Trace: on
MemTrace: off
Caps: on
Debug: on
StackDump: on

Again, the Plug proxy doesn't have a problem, which suggests that this isn't flaky memory causing the SIGSEGV:

(gdb) back
#0  0x4011ad0d in PyObject_Malloc () from /usr/lib/libpython2.3.so.1.0
#1  0x00000001 in ?? ()
#2  0x00000028 in ?? ()
#3  0x401b1b60 in _Py_NotImplementedStruct () from /usr/lib/libpython2.3.so.1.0
#4  0x4003a0fc in __JCR_LIST__ () from /usr/lib/libzorp.so.2
#5  0x40039660 in z_py_zorp_sockaddr_funcs () from /usr/lib/libzorp.so.2
#6  0x40d04470 in ?? ()
#7  0xbffff4c8 in ?? ()
#8  0x40020856 in z_py_zorp_sockaddr_new (sa=0x1) at pysockaddr.c:215
Previous frame inner to this frame (corrupt stack?)

So neither the stable nor the development branch of zorp-gpl copes with any significant level of concurrency. I'm testing with Apache Benchmark (ab) with a concurrency limit of only 100.

Ciao,
Sheldon.
On Sat, 2004-05-22 at 20:00, Sheldon Hearn wrote:
So neither the stable nor the development branch of zorp-gpl copes with any significant level of concurrency. I'm testing with Apache Benchmark (ab) with a concurrency limit of only 100.
Turns out, neither the stable nor the development branch of zorp-gpl copes with any significant level of concurrency when linked against libpython2.3. I just tested the stable (2.0.9) version linked against libpython2.1, and it works fine. I'll test 2.1.7 with your patch tomorrow.

For the archives: Zorp with libpython2.3 (python-2.3) crashes / core dumps / dumps core on SIGSEGV / sig11 / signal 11.

Ciao,
Sheldon.
On Sat, 2004-05-22 at 18:20, Sheldon Hearn wrote:
This is the patch that was incorporated in 2.1.7.1, but the GPL version has not been released yet. Index: zorp-core/lib/proxyvars.c diff -u [...] zorp-core/lib/proxyvars.c:1.10.2.2 [...]
That patch doesn't stop Zorp's HTTP proxy dying under load, still with a sig 11. The Plug proxy doesn't exhibit this behaviour.
My humble apologies! The patch does resolve the HTTP proxy SIGSEGV problem, but only if I actually apply the patch. *blush*

I planned to test 2.1.7 with python2.1 today, and while patching the configure script to support multiple installed versions of Python, I noticed that I wasn't actually applying the patch you sent yesterday.

Sorry for the time my oversight cost you.

Ciao,
Sheldon.
2004-05-20, cs keltezéssel 22:51-kor Sheldon Hearn ezt írta:
Hi folks,
I'm worried. I'm in a situation where I've put a proxy cluster into production without adequately testing the Zorp component under load.
I spent a lot of time testing load balancing, but didn't check that Zorp could cope with a large number of concurrent connections.
We're running zorp-2.1.5.5 on a Linux 2.4.25 (Gentoo) kernel with glibc-2.3.2 (Gentoo r9).
The http proxy dies with sig11 (all registers printed zero in the stack dump sent to syslog) when it reaches some small number of concurrent threads over 130.
So we tried using just the TCP plug proxy, even for HTTP connections, but can't get a single instance using more than 1020 threads.
We have 4 zorp boxes handling a 100Mbps uplink, load-balanced with LVS. LVS ipvsadm also shows that the Zorp boxes aren't handling more than about 1000 concurrent connections.
The visible symptom of all this is that some connection attempts aren't even accepted, while others are accepted but not serviced.
I've done a lot of Googling, and all the advice on increasing the number of threads allowed per process doesn't seem to apply: PTHREAD_THREADS_MAX is already large in the glibc sources, and NR_TASKS doesn't exist in the kernel source.
I've bumped up ulimits for file descriptors and processes per user, but these don't help.
Help. I realise I went into production prematurely, but now that I'm here, it's a horrible place and I'm worried that I overestimated Zorp's ability to cope with load. Am I expecting too much from Zorp, or is this just something that more experienced Linux folks would know about?
Any ideas on how to get Zorp to handle the kind of concurrency other people on the list must be getting[1] would be greatly appreciated.
Either I need to get Zorp to service a larger number of concurrent requests, or I need to know why it's not coping when it reaches the limit on concurrent requests. I tried lowering --threads to 200, but my connection attempts still either aren't accepted or time out waiting for a response.
The solution is to split your single Zorp instance into smaller instances working on the same set of connections. This can be achieved by running, for example, 16 instances of HTTP listening on different ports (say 50080-50095), then using 16 packet filter rules to distribute the load between processes based on, for example, source port. How this can be achieved:

def define_services():
    Service("http", HttpProxy, ...)

def instance1():
    define_services()
    Listener(SockAddrInet('1.2.3.4', 50080), 'http')

def instance2():
    define_services()
    Listener(SockAddrInet('1.2.3.4', 50081), 'http')

def instance3():
    define_services()
    Listener(SockAddrInet('1.2.3.4', 50082), 'http')

etc.

You could use the stock --sport match with ranges to distribute the load, but it's better to use u32, where you can do things like "source port modulo 16 decides which listener to redirect to":

iptables -t tproxy -A PREROUTING -p tcp -m u32 --u32 '0>>22&0x3C@0>>16&0xF=0' -j TPROXY --on-port 50080
iptables -t tproxy -A PREROUTING -p tcp -m u32 --u32 '0>>22&0x3C@0>>16&0xF=1' -j TPROXY --on-port 50081
iptables -t tproxy -A PREROUTING -p tcp -m u32 --u32 '0>>22&0x3C@0>>16&0xF=2' -j TPROXY --on-port 50082
iptables -t tproxy -A PREROUTING -p tcp -m u32 --u32 '0>>22&0x3C@0>>16&0xF=3' -j TPROXY --on-port 50083

and so on. Creating 16 processes will probably suffice.

How many connections do you have in a second? We have somewhere between 500-600 new connections/sec distributed over 4 computers running 16 processes each, and latency is OK.

And by the way: which tproxy version are you using? Do you have more system or userspace CPU time? (vmstat will tell you that.)

-- Bazsi
PGP info: KeyID 9AF8D0A9 Fingerprint CD27 CFB0 802C 0944 9CFD 804E C82C 8EB1
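[Editor's aside, not from the thread: the u32 expression above skips the variable-length IP header and then compares the low four bits of the TCP source port against a constant, i.e. it selects on source port modulo 16. A minimal Python sketch of the resulting distribution, using the listener ports from the example config:]

```python
# Sketch of the load distribution the u32 rules implement: the match
# '0>>22&0x3C@0>>16&0xF=N' jumps over the IP header to the TCP header
# and compares the low 4 bits of the source port (sport & 0xF == sport
# mod 16) against N, redirecting to listener port 50080 + N.

BASE_PORT = 50080  # first listener port from the example config

def listener_for(source_port, base_port=BASE_PORT):
    """Return the local TPROXY port a connection would be redirected to."""
    return base_port + (source_port & 0xF)

if __name__ == "__main__":
    for sport in (33000, 33001, 50080):
        print(sport, "->", listener_for(sport))
```

With 16 listeners this spreads clients' ephemeral source ports roughly evenly, since the low bits of ephemeral ports are close to uniform.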
On Fri, 2004-05-21 at 10:58, Balazs Scheidler wrote:
The solution is to split your single Zorp instances to smaller instances working on the same set of connections. This can be achieved by running for example 16 instances of HTTP listening on different ports. (for example 50080 - 50095) then use 16 packet filter rules to distribute the load between processes based on source port for example.
This works very well, thank you.

When I push a single instance to its maximum threads limit, I soon get the following:

(zorp_default_http_00@default@balrog/nosession): Too many running threads, waiting for one to become free; num_threads='1000', max_threads='1000'
zorp_default_http_00[12741]: (Log thread):
zorp_default_http_00[12741]: (Log thread): GLib-ERROR **: Cannot create pipe main loop wake-up: Too many open files

Is this one of the serious problems you warned me about with GLib, for which you have a patched version of GLib as part of your Debian packages? Do you have the patches available?
We have somewhere between 500-600 new connections/sec distributed on 4 computers running 16 processes each. And latency is ok. And btw: which tproxy version are you using?
IP_TPROXY: Transparent proxy support initialized 1.2.0
Do you have more system or userspace CPU time? (vmstat will tell you that)
It's about equally split, with system outweighing userspace 4:3 as total utilization approaches 50% (about the highest I get with the limited load I've been able to produce in testing). Ciao, Sheldon.
2004-05-25, k keltezéssel 10:59-kor Sheldon Hearn ezt írta:
On Fri, 2004-05-21 at 10:58, Balazs Scheidler wrote:
The solution is to split your single Zorp instances to smaller instances working on the same set of connections. This can be achieved by running for example 16 instances of HTTP listening on different ports. (for example 50080 - 50095) then use 16 packet filter rules to distribute the load between processes based on source port for example.
This works very well, thank you.
When I push a single instance to its maximum threads limit, I soon get the following:
(zorp_default_http_00@default@balrog/nosession): Too many running threads, waiting for one to become free; num_threads='1000', max_threads='1000'
zorp_default_http_00[12741]: (Log thread):
zorp_default_http_00[12741]: (Log thread): GLib-ERROR **: Cannot create pipe main loop wake-up: Too many open files
Is this one of the serious problems you warned me about with Glib, for which you have a patched version of GLib as part of your Debian packages? Do you have the patches available?
There's a --threads command line argument for Zorp; it will not create more than that number of threads. It also seems you may be running out of file descriptors, so you might need to increase your resource limits.

The thread limit of 1024 per process is inherent to libc 2.2.5; later libcs (at least 2.3.2) with NPTL support and kernel 2.6 (or 2.4 with backported O(1) scheduler/futex patches) can run with more than this limit. I think the better solution is to use as many instances for the same traffic as you need (separating the load using packet filter rules).
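[Editor's aside, not from the thread: the two limits discussed here are easy to inspect from a script. A minimal sketch using Python's resource module; the printed values obviously depend on the box and the shell's ulimits:]

```python
# Sketch: show the per-process resource limits that cap a proxy's
# thread count -- open file descriptors, and (on LinuxThreads, where
# every thread is a separate process) the process limit.
import resource

def show_limits():
    limits = {}
    for name, which in (("nofile", resource.RLIMIT_NOFILE),
                        ("nproc", resource.RLIMIT_NPROC)):
        soft, hard = resource.getrlimit(which)
        limits[name] = (soft, hard)
        print("%s: soft=%s hard=%s" % (name, soft, hard))
    return limits

if __name__ == "__main__":
    show_limits()
```

A value of -1 (RLIM_INFINITY) means unlimited; otherwise the soft limit is what the process actually hits first.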
Do you have more system or userspace CPU time? (vmstat will tell you that)
It's about equally split, with system outweighing userspace 4:3 as total utilization approaches 50% (about the highest I get with the limited load I've been able to produce in testing).
Hmm, that system load is quite high; it should not be more than 20-30%. How many interrupts/context switches do you have?

-- Bazsi
PGP info: KeyID 9AF8D0A9 Fingerprint CD27 CFB0 802C 0944 9CFD 804E C82C 8EB1
On Tue, 2004-05-25 at 13:26, Balazs Scheidler wrote:
There's a --threads command line argument for Zorp; it will not create more than that number of threads. It also seems you may be running out of file descriptors, so you might need to increase your resource limits.
I thought the resource limits were adjusted by zorpctl. I edited the script to print them, and they looked fine.
The thread limit of 1024 per process is inherent to libc 2.2.5, later libcs (at least 2.3.2) with NPTL support and kernel 2.6 (or 2.4 with backported O1/futex patch) can run with more than this limit.
Right. I have --threads set to 1000, and when I reach the maximum I see a log message that says "Too many running threads, ...". So that's fine.

The thing I think is a problem is that it falls over even though it's limited in this way. Why do I get these errors (which cause the entire instance to die) when I've limited the number of threads per instance to 1000?

(Log thread): GLib-ERROR **: Cannot create pipe main loop wake-up: Too many open files

That's why I asked about your glib patches.
I think the better solution is to use as many instances for the same traffic as you need (separating the load using packet filter rules).
Yes, I've already taken your advice here.
Hmm, that system load is quite high; it should not be more than 20-30%. How many interrupts/context switches do you have?
I'll take a look, but I'm really not so worried about the CPU. I'm worried about being unable to limit the number of threads a single Zorp instance will start. If I can't do that, an attacker can easily DOS the Zorp cluster by creating lots of TCP connections. So this GLib-ERROR seems quite serious to me. Ciao, Sheldon.
2004-05-25, k keltezéssel 13:45-kor Sheldon Hearn ezt írta:
On Tue, 2004-05-25 at 13:26, Balazs Scheidler wrote:
There's a --threads command line argument for Zorp. It will not create more than that number of threads. As it seems you might also have more than the maximum number of file descriptors. You might need to increase your resource limits.
I thought the resource limits were adjusted by zorpctl. I edited the script to print them, and they looked fine.
The thread limit of 1024 per process is inherent to libc 2.2.5, later libcs (at least 2.3.2) with NPTL support and kernel 2.6 (or 2.4 with backported O1/futex patch) can run with more than this limit.
Right. I have --threads set to 1000. And when I reach the maximum, I see a log message that says "Too many running threads, ...". So that's fine.
The thing I think is a problem is that it falls over even though it's limited in this way. Why do I get these errors (which cause the entire instance to die) when I've limited the number of threads per instance to 1000?
Because the fd limit you set is too low for the number of threads you need. zorpctl tries to guess the necessary fd limit for a given number of threads, but you can change its default. For example:

intra_http -v3 -p /etc/zorp/policy.py -- --fd-limit 4096
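[Editor's aside, not from the thread: a back-of-the-envelope version of such a guess. The constants below are assumptions chosen to match the 4160-fd default Sheldon reports zorpctl picking for --threads 1000 later in this thread; they are NOT zorpctl's documented formula. Each proxied connection needs at least a client-side and a server-side socket, plus internal pipes, so budget several fds per thread and a fixed per-instance overhead:]

```python
# Rough estimate of the fd limit needed for a given --threads value.
# Assumed constants for illustration only, not zorpctl's actual formula.

FDS_PER_THREAD = 4    # assumed: client socket + server socket + pipes
FIXED_OVERHEAD = 160  # assumed: listeners, logging, Python runtime

def fd_limit_estimate(max_threads):
    return max_threads * FDS_PER_THREAD + FIXED_OVERHEAD

print(fd_limit_estimate(1000))  # -> 4160
```

Whatever the exact formula, the point stands: if the fd limit is sized for fewer threads than --threads allows, the process hits EMFILE before it hits the thread cap.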
(Log thread): GLib-ERROR **: Cannot create pipe main loop wake-up: Too many open files
That's why I asked about your glib patches.
I see, but the GLib error is triggered by the fact that pipe() failed because it could not allocate enough file descriptors; that's why I think your fd limit is too low.

We have three glib patches on our binaries currently:

glib-abort-recursed.diff -- moves a recursion check in g_log() to make it possible to get a backtrace (g_log() is used when an assertion is triggered, SIGABRT). This is not so important, but makes debugging SIGABRT problems easier.

glib-memchunk-race.diff -- this is a MUST HAVE; it fixes a serious glib race condition, said to be fixed in glib 2.4 (IIRC). Without this patch Zorp will crash under load.

glib-eintr.diff -- glib thrashes errno in g_strerror(), which results in a) ugly error messages and b) incorrect behaviour. This should be applied, though it's not absolutely necessary.

We did not touch the error message above, though it would be better to fail gracefully.
So this GLib-ERROR seems quite serious to me.
See above.

-- Bazsi
PGP info: KeyID 9AF8D0A9 Fingerprint CD27 CFB0 802C 0944 9CFD 804E C82C 8EB1
On Tue, 2004-05-25 at 14:09, Balazs Scheidler wrote:
because the fd limit you set is too low for the thread number you need. zorpctl tries to guess the necessary fd limit for a given number of threads, but you can change its default. For example:
intra_http -v3 -p /etc/zorp/policy.py -- --fd-limit 4096
Well, that's 64 less than the default that zorpctl would have chosen with --threads 1000, but I'll give it a try.
We have three glib patches on our binaries currently:
glib-abort-recursed.diff [...]
glib-memchunk-race.diff [...]
glib-eintr.diff [...]
I found glib-memchunk-race.diff with Google. I think I've found glib-eintr.diff; is this it:

http://mail.gnome.org/archives/gtk-devel-list/2000-November/msg00116.html

Don't you make your patches to GPL software available for download somewhere?

Ciao,
Sheldon.
2004-05-25, k keltezéssel 14:44-kor Sheldon Hearn ezt írta:
On Tue, 2004-05-25 at 14:09, Balazs Scheidler wrote:
We have three glib patches on our binaries currently:
glib-abort-recursed.diff [...]
glib-memchunk-race.diff [...]
glib-eintr.diff [...]
I found glib-memchunk-race.diff with google. I think I've found glib-eintr.diff; is this it:
http://mail.gnome.org/archives/gtk-devel-list/2000-November/msg00116.html
Don't you make your patches to GPL software available for download somewhere?
Sorry, I forgot to include the URL. Of course we publicly distribute GPL-derived software; try this one:

http://www.balabit.com/downloads/zorp/zorp-os/pool/g/glib2.0/

-- Bazsi
PGP info: KeyID 9AF8D0A9 Fingerprint CD27 CFB0 802C 0944 9CFD 804E C82C 8EB1
On Tue, 2004-05-25 at 13:26, Balazs Scheidler wrote:
Hmm, that system load is quite high; it should not be more than 20-30%. How many interrupts/context switches do you have?
At about 50% CPU utilization, I'm seeing about 6,000 interrupts per second and as many as 11,000 context switches per second.

I've got NAPI enabled for the e1000 driver, so there's at least an upper limit on interrupts per second. As for context switches, there doesn't seem to be much I can do about that. If I'm going to have thousands of threads running on the system, there's going to be a lot of context switching, no?

Thanks very much for all your help on this. It's great to have finally gotten to the point where Zorp stays up even when its configured limits are reached. Now I can get on with optimisation. :-)

Ciao,
Sheldon.
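[Editor's aside, not from the thread: the interrupt and context-switch figures vmstat reports come from /proc/stat on Linux, so a quick sampler is easy to script. A minimal sketch:]

```python
# Sketch: sample interrupts ('intr') and context switches ('ctxt')
# per second from /proc/stat -- roughly vmstat's 'in' and 'cs' columns.
import time

def _counters():
    counts = {}
    with open("/proc/stat") as f:
        for line in f:
            fields = line.split()
            if fields[0] in ("intr", "ctxt"):
                # field 1 is the cumulative count since boot
                counts[fields[0]] = int(fields[1])
    return counts

def rates(interval=1.0):
    """Return interrupts/sec and context switches/sec over the interval."""
    before = _counters()
    time.sleep(interval)
    after = _counters()
    return dict((k, (after[k] - before[k]) / interval) for k in before)

if __name__ == "__main__":
    for name, rate in sorted(rates().items()):
        print("%s/sec: %.0f" % (name, rate))
```

Because the counters are cumulative since boot, any two samples give a rate; sampling over a longer interval smooths out bursts.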
participants (2)
-
Balazs Scheidler
-
Sheldon Hearn