[tproxy] connection goes to CLOSE_WAIT after sending FIN
KOVACS Krisztian
hidden at balabit.hu
Mon Dec 11 15:27:45 CET 2006
Hi,
On Monday 11 December 2006 12:39, Eyal Rundstein wrote:
> I am using kernel 2.4.32 with tproxy version 2.0.2.
> My client is a transparent proxy.
> My client opens a transparent connection to the server, sends a message
> and then closes the connection with FIN. The server replies with an
> ACK, WITHOUT sending a FIN.
> Now I see that the connection stays in the ip_conntrack table in
> CLOSE_WAIT state. During that time I can not reuse the connection.
> (SYNs to the same dest are not sent).
This symptom is probably caused by the TCP window tracking patch
included in newer TProxy tarballs (i.e. a new TCP connection tracking
engine with a revamped state machine). So it has nothing to do with
tproxy directly; you'd have the same experience without transparent
proxying on recent 2.6 kernels.
However, I do not think it is a bug. After sending the FIN and receiving
an ACK you have a half-closed connection: you're waiting for the other
end to close its side of the (now half-duplex) pipe. If you simply close
the socket on your side instead, the other endpoint becomes
unsynchronized -- that is, it still thinks you're waiting for data from
it, while in reality you're not.
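Just to illustrate what "waiting for the other end" means at the socket
level, here is a minimal sketch (not Eyal's actual setup; the address,
port and payload are made up) of a client that half-closes with
shutdown() and only calls close() after the server's FIN has arrived:

/* Minimal sketch, not from the original report: a plain TCP client that
 * half-closes with shutdown() and waits for the server's FIN instead of
 * calling close() right away. Address, port and payload are examples. */
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    struct sockaddr_in srv;
    char buf[4096];
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    memset(&srv, 0, sizeof(srv));
    srv.sin_family = AF_INET;
    srv.sin_port = htons(8080);                       /* example port */
    inet_pton(AF_INET, "192.0.2.10", &srv.sin_addr);  /* example server */

    if (fd < 0 || connect(fd, (struct sockaddr *)&srv, sizeof(srv)) < 0)
        return 1;

    write(fd, "request\n", 8);   /* send the message */
    shutdown(fd, SHUT_WR);       /* send our FIN: connection is now half-closed */

    /* Drain until read() returns 0, i.e. until the server has sent its
     * own FIN, so both endpoints (and conntrack) see a complete close. */
    while (read(fd, buf, sizeof(buf)) > 0)
        ;

    close(fd);
    return 0;
}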
Page 33 of RFC 793 has a whole section dealing with exactly this
unsynchronized situation.
It's not simple: unfortunately the conntrack code has no way of knowing
that you've closed and reopened your socket, so it simply ignores your
SYN. Moreover, the SYN will probably also fail the in-window check, and
because of that the packet will be considered invalid. You could try
enabling the net.ipv4.netfilter.ip_conntrack_tcp_be_liberal sysctl and
see whether that lets the SYN through. (However, I suspect this still
isn't enough; more about that later.)
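For reference, flipping that sysctl just means writing its /proc/sys
entry; this minimal sketch does the same thing as
"sysctl -w net.ipv4.netfilter.ip_conntrack_tcp_be_liberal=1":

/* Minimal sketch: enable the "liberal" in-window handling by writing
 * the corresponding /proc/sys entry (needs root). */
#include <stdio.h>

int main(void)
{
    const char *path =
        "/proc/sys/net/ipv4/netfilter/ip_conntrack_tcp_be_liberal";
    FILE *f = fopen(path, "w");

    if (!f) {
        perror(path);
        return 1;
    }
    fputs("1\n", f);   /* non-zero enables liberal mode */
    fclose(f);
    return 0;
}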
> 1) Isn't the correct behavior for that connection to go to the
> FIN_WAIT_2 state? Is it a bug?
> 2) The CLOSE_WAIT timeout is 500 seconds. Is there a way I can still
> open a new connection to the same destination?
The connection tracking state machine is not exactly the same as the one
defined in the RFC for TCP endpoints. It does not have a FIN_WAIT_2
state, just to begin with.
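If you want to double-check which state the tracking code actually has
the connection in, you can read the table straight from
/proc/net/ip_conntrack; here is a minimal sketch (assuming the stock
ip_conntrack proc interface) that prints only the TCP entries:

/* Minimal sketch, assuming the stock /proc/net/ip_conntrack interface:
 * print only the tracked TCP entries, so you can see whether the
 * connection really is sitting in CLOSE_WAIT (or TIME_WAIT, etc.). */
#include <stdio.h>
#include <string.h>

int main(void)
{
    char line[1024];
    FILE *f = fopen("/proc/net/ip_conntrack", "r");

    if (!f) {
        perror("/proc/net/ip_conntrack");
        return 1;
    }
    while (fgets(line, sizeof(line), f))
        if (strncmp(line, "tcp", 3) == 0)
            fputs(line, stdout);
    fclose(f);
    return 0;
}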
> - When I use an older kernel (2.4.18) with old tproxy (version 23) I
> don't see this problem.
That's because it still had the old TCP conntrack code. That code had an
even less accurate state machine and did not include strict TCP checks.
(For example: if you take a look at the state machine of the 2.4 TCP
conntrack code, you'll notice that the LAST_ACK state is unreachable.)
The lack of strict checking is what makes it _look_ like it's working:
it simply goes from TIME_WAIT to SYN_SENT if you send a SYN.
However, this is still no good: since conntrack does not treat it as a
new connection, TProxy won't apply the new "foreign" address to the
conntrack entry. Instead, the entry simply keeps the address of the
"old" connection! This can lead to very mysterious and hard-to-debug
problems...
It's a tough problem -- looks like the unsynchronized nature of the
connection tracking state table is biting us again.
--
Regards,
Krisztian Kovacs