hi,

Recently I tested tproxy with Avalanche (about 800M/s of stress) and got the results below:

1. Deadlock when ip_conntrack_ftp is loaded. Please see my last post and the explanation from Balazs Scheidler.

2. I tested again without nat_reservation (the deadlock disappeared). After a 10-hour stress test, the kernel kept printing exactly the same message repeatedly:

IP_TPROXY: socket already assigned, reuse=1, 0a0ba8c0:4c86, sr->faddr=e80ba8c0:0000, flags=10000, sr->tv_hashed=1181425010:475244912

My questions: Is this sockref leaked? And what is the situation when a sockref is leaked?

Thanks!

Daniel
2007-06-10
On Sun, 2007-06-10 at 20:25 +0800, Daniel wrote:
Can you tell me your kernel/tproxy version?

The error message above means that the application tried to assign an address that is already in the tproxy hash table (i.e. one that was allocated before). This should never happen, as it would indicate that you have two sockets bound to the same local ip:port.

The details of the already registered entry are included in the log message: the conflicting local address is 192.168.11.10:34380, and you wanted to assign 192.168.11.232 with a random port (port 0). Your flags have only ITP_ONCE set, and the entry in the table was registered 1181425010.475 seconds after the UNIX epoch.

A minor note: ITP_ONCE was removed from the latest versions of tproxy, so you should not use it (I don't think removing it would help in your case, however).

Please give me the exact versions you are testing with.

-- Bazsi
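The hex fields in the message are the raw 32-bit address and 16-bit port, stored in network byte order but printed as host-order integers on little-endian x86, so their bytes read reversed (which is how 0a0ba8c0:4c86 becomes 192.168.11.10:34380 above). A minimal standalone sketch, not part of tproxy itself, that decodes them:

#include <stdio.h>

/* Decode an "aabbccdd:ppqq" address field from a tproxy log line.
 * Assumption: the kernel printed the raw values on little-endian x86,
 * so the bytes of both the address and the port appear reversed. */
static void decode(const char *hexaddr)
{
    unsigned int ip, port;

    if (sscanf(hexaddr, "%x:%x", &ip, &port) != 2)
        return;
    /* The least significant byte of ip is the first dotted-quad octet. */
    printf("%s -> %u.%u.%u.%u:%u\n", hexaddr,
           ip & 0xff, (ip >> 8) & 0xff,
           (ip >> 16) & 0xff, (ip >> 24) & 0xff,
           ((port & 0xff) << 8) | ((port >> 8) & 0xff));
}

int main(void)
{
    decode("0a0ba8c0:4c86");  /* -> 192.168.11.10:34380 (local)   */
    decode("e80ba8c0:0000");  /* -> 192.168.11.232:0    (foreign) */
    return 0;
}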
Sorry about my last encoded email.

I use tproxy-2.0.1 with kernel 2.6.9 and an HTTP proxy, and I can't move to newer versions for various reasons.

Today, some time after I stopped the stress test, I checked /proc/net/tproxy and found 3 entries that were always there:

cat /proc/net/tproxy
00006 470ba8c0:0000 0a0ba8c0:0581 00000000:0000 00010000 00000 000001 1181498811:997708128
00006 700ba8c0:0000 0a0ba8c0:5a9e 00000000:0000 00010000 00000 000001 1181489066:124306088
00006 e80ba8c0:0000 0a0ba8c0:4c86 00000000:0000 00010000 00000 000001 1181425010:475244912

Maybe I missed some important fix? Thanks for your quick reply.

Daniel
tooldcas@163.com
2007-06-12
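Applying the decode() sketch from earlier in the thread to these entries (assuming the second and third columns are the foreign and local addresses, in the same format as the log message):

decode("470ba8c0:0000");  /* -> 192.168.11.71:0     foreign, entry 1 */
decode("700ba8c0:0000");  /* -> 192.168.11.112:0    foreign, entry 2 */
decode("e80ba8c0:0000");  /* -> 192.168.11.232:0    foreign, entry 3 */
decode("0a0ba8c0:4c86");  /* -> 192.168.11.10:34380 local,   entry 3 */

Note that entry 3's addresses and its 1181425010:475244912 timestamp match the repeated log message exactly, so these stuck sockrefs are evidently what keeps triggering the "socket already assigned" warning.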
On Tue, 2007-06-12 at 11:21 +0800, Daniel wrote:
Was the process which opened these sockets killed, or is it still running?

-- Bazsi
hi,

I think it was the HTTP proxy that initialized these sockets, and it was still running at the time.

regards
Daniel
tooldcas@163.com
2007-06-20
On Wed, 2007-06-20 at 19:16 +0800, Daniel wrote:
Hmm... tproxy closes these entries if:
1) the proxy closes the associated fd, or
2) the process exits (which implies 1 above).

FYI: I'm working on releasing tproxy4, which will resolve all of the fundamental issues (like this one).

-- Bazsi
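For reference, a sketch of the assignment lifecycle being described, using the tproxy 2.x setsockopt interface; the struct and constant names here are assumed from linux/netfilter_ipv4/ip_tproxy.h and should be verified against the actual header:

#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>
#include <linux/netfilter_ipv4/ip_tproxy.h>

/* Sketch: struct in_tproxy, TPROXY_ASSIGN, TPROXY_FLAGS and ITP_CONNECT
 * are assumed from ip_tproxy.h; check your header for the exact names. */
int tproxy_socket(const char *foreign_ip)
{
    int s = socket(AF_INET, SOCK_STREAM, 0);
    struct in_tproxy itp;

    if (s < 0)
        return -1;

    /* Register the foreign address; this creates the sockref that
     * shows up in /proc/net/tproxy. */
    itp.op = TPROXY_ASSIGN;
    itp.v.addr.faddr.s_addr = inet_addr(foreign_ip);
    itp.v.addr.fport = 0;          /* 0 = kernel picks a random port */
    if (setsockopt(s, SOL_IP, IP_TPROXY, &itp, sizeof(itp)) < 0)
        goto err;

    /* Enable spoofed connects; no ITP_ONCE, which was removed upstream. */
    itp.op = TPROXY_FLAGS;
    itp.v.flags = ITP_CONNECT;
    if (setsockopt(s, SOL_IP, IP_TPROXY, &itp, sizeof(itp)) < 0)
        goto err;

    return s;   /* close(s), or process exit, releases the sockref */
err:
    close(s);
    return -1;
}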
Good news!

I read the post on netfilter.org, and I think that approach is much better than the NAT-based one: http://lists.netfilter.org/pipermail/netfilter-devel/2007-January/026472.html

Regards
Daniel
tooldcas@163.com
2007-06-29
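The approach described in that post replaces NAT reservations with a plain socket-level option. As a sketch of what that interface looks like in the form that eventually went upstream (the IP_TRANSPARENT socket option, kernels 2.6.28 and later; transparent_bind is a hypothetical helper name):

#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <string.h>

#ifndef IP_TRANSPARENT
#define IP_TRANSPARENT 19   /* value from linux/in.h */
#endif

/* Sketch of a tproxy4-style non-local bind: no NAT reservation and no
 * sockref table, just a per-socket flag. Requires CAP_NET_ADMIN. */
int transparent_bind(const char *foreign_ip)
{
    int s = socket(AF_INET, SOCK_STREAM, 0);
    int one = 1;
    struct sockaddr_in sin;

    if (s < 0)
        return -1;

    /* Permit binding to an address not configured on this host. */
    if (setsockopt(s, SOL_IP, IP_TRANSPARENT, &one, sizeof(one)) < 0)
        return -1;

    memset(&sin, 0, sizeof(sin));
    sin.sin_family = AF_INET;
    sin.sin_addr.s_addr = inet_addr(foreign_ip);
    sin.sin_port = 0;   /* random port, as with TPROXY_ASSIGN above */
    if (bind(s, (struct sockaddr *)&sin, sizeof(sin)) < 0)
        return -1;

    return s;
}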
On Friday 29 June 2007, Daniel wrote:
Indeed, that seems to be good news, especially since the patches were received positively as far as I can see. What is the current status of the patch set? I couldn't find any further mention of it on the netfilter list.

Regards, David
participants (3)
- Balazs Scheidler
- Daniel
- David Schmitt