Discussion:
IPv6 TCP sessions hang when using PF keep state
iMil
2013-01-06 13:24:36 UTC


Hi,

I've been using PF as my gateway's packet filter for a while, and I've
noticed weird behaviour since I upgraded that server to NetBSD 6.0.

That machine has two NICs, one connected to an ADSL modem, the other
one connected to two tagged VLANs. IPv6 connectivity is provided via
a gre(4) tunnel over the public (ADSL) interface.

From the gateway itself, everything runs fast and smooth; I can
access every IPv6 service without any lag. But from the machines
located on the private VLAN, behind the gateway, I am seeing
"hangs" while receiving HTTP data, and also during SSH sessions.

I spent quite a lot of time trying various techniques, like
reducing the MTU on different interfaces, and finally I tried
flushing the PF rules, which led to fully functional IPv6
sessions.
Digging a little further, a friend suggested adding a pass rule
with "no state" on the private interface of the gateway, and
indeed, with that rule IPv6 traffic keeps working. But as soon
as I get rid of it, the hangs come back.
Also, even when the only rules loaded by PF are plain pass
rules, IPv6 TCP sessions still hang.
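
For reference, the stateless workaround rule looks something like this
(a sketch only; the interface name is a placeholder, not my actual
configuration):

```pf
# hypothetical pf.conf fragment: pass IPv6 on the private interface
# without creating state, which avoids the hangs described above
pass quick on vlan1 inet6 no state
```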

Does this behaviour ring a bell for anyone?

------------------------------------------------------------------
Emile "iMil" Heitor .°. <imil@{home.imil.net,NetBSD.org,gcu.info}>
http://imil.net | http://www.NetBSD.org | http://gcu.info
ASCII ribbon campaign against HTML email & vCards


--
Posted automagically by a mail2news gateway at muc.de e.V.
Please direct questions, flames, donations, etc. to news-***@muc.de
Greg Troxel
2013-01-06 13:43:39 UTC
It very dimly rings a bell. I think I saw a problem where ipfilter did
range checks on sequence numbers, but this depended on seeing window
scaling options.

So I would use ipfstat -inh6 (and -onh6) and find the blocked packets,
and add log statements, and also dump your state entries. It wouldn't
surprise me if there's a bug.
Greg Troxel
2013-01-06 13:49:36 UTC
Post by Greg Troxel
It very dimly rings a bell. I think I saw a problem where ipfilter did
range checks on sequence numbers, but this depended on seeing window
scaling options.
So I would use ipfstat -inh6 (and -onh6) and find the blocked packets,
and add log statements, and also dump your state entries. It wouldn't
surprise me if there's a bug.
I don't mean to dis you, but recommending the use of ipfstat to someone
that is using pf won't help.
Indeed, a fair point. But surely there is some pfstat equivalent.

(FWIW, I am successfully using ipfilter on IPv6, from netbsd-5.)
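
For pf, the rough equivalents are something like this (standard
pfctl(8)/tcpdump(8) usage; add "log" to the rules you want to watch):

```sh
pfctl -ss                         # dump the state table entries
pfctl -si                         # show filter statistics and counters
tcpdump -n -e -ttt -i pflog0 ip6  # watch logged (e.g. blocked) IPv6 packets
```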
Darren Reed
2013-01-06 14:01:54 UTC
Post by Greg Troxel
It very dimly rings a bell. I think I saw a problem where ipfilter did
range checks on sequence numbers, but this depended on seeing window
scaling options.
So I would use ipfstat -inh6 (and -onh6) and find the blocked packets,
and add log statements, and also dump your state entries. It wouldn't
surprise me if there's a bug.
I don't mean to dis you, but recommending the use of ipfstat to someone
that is using pf won't help.

To me, this sounds like an MSS problem.

Darren


Anthony Mallet
2013-01-07 10:50:56 UTC
| I've been using PF as my gateway's packet filter for a while, and I've
| noticed weird behaviour since I upgraded that server to NetBSD 6.0.

Do you see any "cksum: out of data" kernel messages?
See http://mail-index.netbsd.org/tech-userlevel/2012/12/21/msg007030.html

Now I'm using npf. It does not have all the features of pf (yet?), but it does
the job - although npf from after Dec 24 was unusable (it panics).

Anthony Mallet
2013-01-07 16:19:25 UTC
On Monday, at 11:50, Anthony Mallet wrote:
| | I've been using PF as my gateway's packet filter for a while, and I've
| | noticed weird behaviour since I upgraded that server to NetBSD 6.0.
|

Another issue that I just discovered (man pf.conf): currently, only IPv4
fragments are supported, and IPv6 fragments are blocked unconditionally.

This means that if you initiate a TCP connection with a big MSS (e.g. 1460 on a
gif tunnel with a 1480 MTU), the connection will eventually stall once the
other side starts sending ipv6-frag packets.

You might want to try clamping the MSS of outgoing connections with "max-mss
1420" if you have an MTU of 1480.
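
In pf.conf, the clamp is a scrub rule, something like the following
(a sketch; gif0 stands in for your tunnel interface):

```pf
# 1480 (tunnel MTU) - 40 (IPv6 header) - 20 (TCP header) = 1420
scrub out on gif0 inet6 proto tcp max-mss 1420
```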

Greg Troxel
2013-01-07 17:28:00 UTC
Post by Anthony Mallet
This means that if you initiate a TCP connection with a big MSS
(e.g. 1460 on a gif tunnel with a 1480 MTU), the connection will
eventually stall once the other side starts sending ipv6-frag packets.
But, the default mtu on gif(4) is 1280. So you should not be seeing
IPv6 fragmentation.
Anthony Mallet
2013-01-07 17:44:48 UTC
On Monday, at 12:28, Greg Troxel wrote:
| > This means that if you initiate a TCP connection with a big MSS
| > (e.g. 1460 on a gif tunnel with a 1480 MTU), the connection will
| > eventually stall once the other side starts sending ipv6-frag packets.
|
| But, the default mtu on gif(4) is 1280. So you should not be seeing
| IPv6 fragmentation.

However, I do see it... Not even talking about UDP, I see fragmentation
between a NetBSD and a Solaris 10 host through a gif tunnel. For some reason
the MSS of my outgoing packets is initially set to 1440. I'm currently
investigating :)

Also, if you are on a host on the LAN behind the gateway, is it possible that
the host starts TCP with the correct MSS?

I have a related question: ifconfig gif0 does not show any MTU value. Is this a
bug? I can do "ifconfig gif0 mtu 1480" without error, but I'm not sure if this
is silently ignored or not.

Greg Troxel
2013-01-08 03:26:19 UTC
Post by Anthony Mallet
| But, the default mtu on gif(4) is 1280. So you should not be seeing
| IPv6 fragmentation.
However, I do see it... Not even talking about UDP, I see
fragmentation between a NetBSD and a Solaris 10 host through a gif
tunnel. For some reason the MSS of my outgoing packets is initially
set to 1440. I'm currently investigating :)
I am seeing fragmentation too (from doing an ftp from funet.fi to a
host on a LAN behind a tunneling gateway). I thought that v6 by default
limited itself to 1280, but now I can't remember why I think that.
Post by Anthony Mallet
Also, if you are on a host on the LAN behind the gateway, is it
possible that the host starts tcp with the correct MSS?
I don't follow what you are asking. I saw outgoing SYN packets with mss
1440. That's ok, except that it's too high for the tunnel.

What I don't understand is why the far side doesn't drop its packet size
when it gets the packet-too-big message.
Post by Anthony Mallet
I have a related question: ifconfig gif0 does not show any MTU
value. Is this a bug? I can do "ifconfig gif0 mtu 1480" without error,
but I'm not sure if this is silently ignored or not.
On what system/version? On a netbsd-5 system, where I did not try to
set the MTU, 'ifconfig -a' included the line:
gif0: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1280
Anthony Mallet
2013-01-08 10:01:35 UTC
On Monday, at 22:26, Greg Troxel wrote:
| > Also, if you are on a host on the LAN behind the gateway, is it
| > possible that the host starts tcp with the correct MSS?
|
| I don't follow what you are asking. I saw outgoing SYN packets with mss
| 1440. That's ok, except that it's too high for the tunnel.

I meant this:
Host A --> GW --> Host B
MTU 1500 MTU 1480 MTU 1500

Host A sends SYN with MSS 1440, which is not optimal, since for every
connection this will eventually drop to 1420 after some ICMP message. From
what I understand of PMTUD, host A should be able to figure out the 1480 MTU.
I don't see this happening.

Even when connecting from GW to Host B, SYN packets from GW are sent with MSS
1440. I don't understand why.


| What I don't understand is why the far side doesn't drop when it gets
| the packet-too-big messsage.

In this situation:
NetBSD --> Internet --> Solaris 10
MTU 1480 MTU 1500

A SYN packet with MSS 1440 is sent from NetBSD to Solaris. Solaris eventually
gets a "packet too big" from the internet. It then starts sending fragments of
size 1494, which is correct IMHO. If PF is running, those fragments will be
dropped.

I mention Solaris explicitly because Linux behaves differently: when it
gets the "packet too big", it seems to send smaller regular TCP packets
instead of using fragments.

All in all, none of this would be happening if NetBSD sent the SYN with
MSS 1420 in the first place.
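
The 1420 figure is just the tunnel MTU minus the fixed IPv6 and TCP header
sizes:

```shell
# maximum TCP segment that fits in one IPv6 packet on the tunnel:
# tunnel MTU minus IPv6 header (40 bytes) minus TCP header (20 bytes)
tunnel_mtu=1480
mss=$((tunnel_mtu - 40 - 20))
echo "$mss"   # 1420
```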


| On what system/version? On a netbsd-5 system, where I did not try to
| set the MTU, 'ifconfig -a' included the line:
| gif0: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1280

This is on i386 -current. I filed http://gnats.netbsd.org/47419

Lars Schotte
2013-01-08 11:59:41 UTC
MSS clamping is, for example, an option you have with PPP on DSL, where it is
enabled by default; but when you are not using PPP(oE), or when you are using
the kernel version of PPPoE, as on NetBSD, then you need to do MSS clamping
in your packet filter.

On NetBSD this is not PF!

I know that iptables on Linux has an option for MSS clamping; OpenWRT, for
example, uses it by default for PPP(oE)/DSL connections.
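
The usual Linux rule looks something like this (the standard TCPMSS target
from iptables; needs root, and the FORWARD chain assumes a router setup):

```sh
# clamp the MSS of forwarded TCP SYNs to the discovered path MTU
iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN \
         -j TCPMSS --clamp-mss-to-pmtu
```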

PF drops fragments because it explicitly tells the other side not to
fragment packets, using the DF flag. This is meant to prevent some attacks
that use fragmented packets to get inside of something, I believe. Maybe
that can be disabled too, and maybe you can even get MSS clamping working
in your PF. That's where I would start searching.

I had the same problems with a BEFSR41 from Linksys on my ADSL
connection, so I know about it.

I even have a link for you:
http://www.netbsd.org/docs/network/pppoe/#clamping

On Tue, 8 Jan 2013 11:01:35 +0100
Post by Anthony Mallet
A SYN packet with MSS 1440 is sent from NetBSD to Solaris. Solaris
eventually gets a "packet too big" from the internet. It then starts
sending fragments of size 1494, which is correct IMHO. If PF is running,
those fragments will be dropped.
I mention Solaris explicitly because Linux behaves differently: when it
gets the "packet too big", it seems to send smaller regular TCP packets
instead of using fragments.
All in all, none of this would be happening if NetBSD sent the SYN with
MSS 1420 in the first place.
Anthony Mallet
2013-01-08 13:24:14 UTC
On Tuesday, at 12:59, Lars Schotte wrote:
| PF is dropping fragments, because it's explicitly telling the other
| side not to fragment packets using the DF flag.

AFAIK, there is no DF bit in IPv6, since it is implicit. And anyway, this will
not prevent the other side from sending its own fragmented packets if it so
desires.

| i even got a link for you:
| http://www.netbsd.org/docs/network/pppoe/#clamping

Thanks for the link. npf has a max-mss parameter extension as well. I wanted to
first understand whether I had misconfigured something before hacking the packets :)
