Discussion:
IPv6 task list
Loganaden Velvindron
2013-10-26 12:28:13 UTC
Permalink
Hi guys,

I'm currently drafting a list of tasks that need to be done for the
IPv6 stack in NetBSD.

I would welcome feedback from everybody.

Kind regards,
//logan

--
Posted automagically by a mail2news gateway at muc.de e.V.
Please direct questions, flames, donations, etc. to news-***@muc.de
Jonathan A. Kollasch
2013-10-26 12:46:55 UTC
Permalink
Post by Loganaden Velvindron
Hi guys,
I'm currently drafting a list of tasks that need to be done for the
IPv6 stack in NetBSD.
I would welcome feedback from everybody.
Off the top of my head:

- RFC3484 address selection
- working privacy addresses

Jonathan Kollasch

Christos Zoulas
2013-10-27 14:30:14 UTC
Permalink
Post by Jonathan A. Kollasch
Post by Loganaden Velvindron
Hi guys,
I'm currently drafting a list of tasks that need to be done for the
IPv6 stack in NetBSD.
I would welcome feedback from everybody.
off the top of my head;
- RFC3484 address selection
Easy to port FreeBSD's http://nixdoc.net/man-pages/FreeBSD/ip6addrctl.8.html

christos


Takahiro Kambe
2013-10-27 16:02:32 UTC
Permalink
Hi,

In message <l4j81m$j5j$***@ger.gmane.org>
on Sun, 27 Oct 2013 14:30:14 +0000 (UTC),
Post by Christos Zoulas
Post by Jonathan A. Kollasch
Post by Loganaden Velvindron
Hi guys,
I'm currently drafting a list of tasks that need to be done for the
IPv6 stack in NetBSD.
I would welcome feedback from everybody.
off the top of my head;
- RFC3484 address selection
Easy to port FreeBSD's http://nixdoc.net/man-pages/FreeBSD/ip6addrctl.8.html
Just FYI: RFC 3484 is obsoleted by RFC 6724.
--
Takahiro Kambe <***@back-street.net>

David Young
2013-10-28 15:50:25 UTC
Permalink
Post by Christos Zoulas
Post by Jonathan A. Kollasch
Post by Loganaden Velvindron
Hi guys,
I'm currently drafting a list of tasks that need to be done for the
IPv6 stack in NetBSD.
I would welcome feedback from everybody.
off the top of my head;
- RFC3484 address selection
Easy to port FreeBSD's http://nixdoc.net/man-pages/FreeBSD/ip6addrctl.8.html
Look at 'options IPSELSRC' / in_getifa(9); it's an RFC 3484 analogue for
IPv4. It should be easy to translate to IPv6.

Dave
--
David Young
***@pobox.com Urbana, IL (217) 721-9981

Robert Swindells
2013-10-26 13:12:32 UTC
Permalink
Post by Loganaden Velvindron
I'm currently drafting a list of tasks that need to be done for the
IPv6 stack in NetBSD.
There are several things that were in KAME that have not been updated
to work in NetBSD-current - SCTP, DCCP and Mobile IP.

I have done some work on getting SCTP and Mobile IP to work.

Robert Swindells

Mindaugas Rasiukevicius
2013-10-26 13:24:21 UTC
Permalink
Post by Loganaden Velvindron
Hi guys,
I'm currently drafting a list of tasks that need to be done for the
IPv6 stack in NetBSD.
I would welcome feedback from everybody.
Unfortunately, the IPv6 stack was often written by copy-pasting the IPv4
code and converting it to use struct in6_addr, with other necessary
adjustments. The result is: 1) massive code duplication; 2) many
duplicated branches under #ifdef in the TCP code. There are multiple
IPv4 and IPv6 code paths which could and should be merged. A few
examples:

- Replace struct in_addr and struct in6_addr with a new structure and use
it to convert the code to be IP-version agnostic, thus removing many
#ifdefs. Possible approach: NPF has npf_addr_t, and only the lower-level
primitives separate v4 and v6 handling, while the rest of the code is
completely IP-version agnostic. This has proven to be a very successful
approach.

- Merge IPv4 and IPv6 PCB interfaces into one (rpaulo-netinet-merge-pcb
branch was an attempt to do that in the past, see doc/BRANCHES).

- For something simpler - merge IPv4 and IPv6 reassembly, since they mostly
match (see ip_reass.c and frag6.c).

... etc. The list can go on. This is major work, though.
--
Mindaugas

Mouse
2013-10-26 17:53:28 UTC
Permalink
Post by Loganaden Velvindron
I'm currently drafting a list of tasks that need to be done for the
IPv6 stack in NetBSD.
I would welcome feedback from everybody.
I don't know whether this is what you mean - indeed, I'm not certain
it's actually a fault in the v6 stack - but there's a v6-affecting
issue which has been bothering me for a while.

I have a machine with two interfaces (rtk0 and wm0 in my case - I don't
know whether that's relevant). I create a bridge and put them in the
bridge. I then configure both v4 and v6 on rtk0 and don't configure
anything on wm0 - all wm0 gets is "up".

A machine plugged into wm0 can then reach the v4 address just fine, but
fails when trying to reach the v6 address.

In my case, this is 4.0.1 amd64. I haven't tried it on anything else;
there _might_ be nothing to do here (possibly excepting my backporting
changes). But it sure looks to me like a bug; it certainly violates
the expectation that a bridge amounts to an in-machine network switch.

/~\ The ASCII Mouse
\ / Ribbon Campaign
X Against HTML ***@rodents-montreal.org
/ \ Email! 7D C8 61 52 5D E7 2D 39 4E F1 31 3E E8 B3 27 4B

Lloyd Parkes
2013-10-26 19:44:55 UTC
Permalink
You are (probably) being affected by PR 48104.

The bridge sends the multicast NDP packets down rtk0 towards layer 1, but not up rtk0 towards the IP stack. The patch I included in the PR is against the NetBSD 6 branch.

Cheers,
Lloyd
Post by Mouse
Post by Loganaden Velvindron
I'm currently drafting a list of tasks that need to be done for the
IPv6 stack in NetBSD.
I would welcome feedback from everybody.
I don't know whether this is what you mean - indeed, I'm not certain
it's actually a fault in the v6 stack - but there's a v6-affecting
issue which has been bothering me for a while.
I have a machine with two interfaces (rtk0 and wm0 in my case - I don't
know whether that's relevant). I create a bridge and put them in the
bridge. I then configure both v4 and v6 on rtk0 and don't configure
anything on wm0 - all wm0 gets is "up".
A machine plugged into wm0 can then reach the v4 address just fine, but
fails when trying to reach the v6 address.
In my case, this is 4.0.1 amd64. I haven't tried it on anything else;
there _might_ be nothing to do here (possibly excepting my backporting
changes). But it sure looks to me like a bug; it certainly violates
the expectation that a bridge amounts to an in-machine network switch.
/~\ The ASCII Mouse
\ / Ribbon Campaign
/ \ Email! 7D C8 61 52 5D E7 2D 39 4E F1 31 3E E8 B3 27 4B
Lloyd Parkes
2013-10-28 21:34:49 UTC
Permalink
Post by Robert Swindells
Has your multicast fix for IPv6 been committed yet ?
No, and Dieter Roelants has reported a panic with the patch in combination with xennet. In the last few days I have freed up a machine that I can use to test this, so I will get on to that.

Cheers,
Lloyd
Mouse
2013-10-28 17:56:14 UTC
Permalink
Post by Lloyd Parkes
You are (probably) being affected by PR 48104.
Thank you! The symptoms do seem to match.
Post by Lloyd Parkes
The bridge sends the multicast NDP packets down rtk0 towards layer 1, but not up rtk0 towards the IP stack.
And v4 isn't affected because...? I'm going to guess that it's because
ARPs are broadcast rather than non-broadcast multicast - is that guess
correct? (That is, is that the actual reason?)
Post by Lloyd Parkes
That patch I included in the PR is against the NetBSD 6 branch.
It looks fairly easy to backport to the versions I care about.

/~\ The ASCII Mouse
\ / Ribbon Campaign
X Against HTML ***@rodents-montreal.org
/ \ Email! 7D C8 61 52 5D E7 2D 39 4E F1 31 3E E8 B3 27 4B

Lloyd Parkes
2013-10-28 20:59:09 UTC
Permalink
Post by Mouse
Post by Lloyd Parkes
The bridge sends the multicast NDP packets down rtk0 towards layer 1, but not up rtk0 towards the IP stack.
And v4 isn't affected because...? I'm going to guess that it's because
ARPs are broadcast rather than non-broadcast multicast - is that guess
correct? (That is, is that the actual reason?)
I’m not sure what’s going on. I managed to build a test configuration that showed failure in both IPv4 and IPv6, but I don’t know of any real world example of this problem where IPv4 fails.

Cheers,
Lloyd
Robert Swindells
2013-10-28 21:22:19 UTC
Permalink
Post by Mouse
Post by Lloyd Parkes
The bridge sends the multicast NDP packets down rtk0 towards layer 1, but not up rtk0 towards the IP stack.
And v4 isn't affected because...? I'm going to guess that it's because
ARPs are broadcast rather than non-broadcast multicast - is that guess
correct? (That is, is that the actual reason?)
I'm not sure what's going on. I managed to build a test configuration
that showed failure in both IPv4 and IPv6, but I don't know of any
real world example of this problem where IPv4 fails.
There were some reports a few years ago of IPv4 broadcast packets
getting lost across a bridge.

I have been running with a local change to bridge that adds static
cache entries for each of the bridge members. I think it should help
with this, or at least shouldn't hurt anything.

Has your multicast fix for IPv6 been committed yet ?

Robert Swindells


Beverly Schwartz
2013-10-27 15:41:36 UTC
Permalink
Two IPsec-tunnel-related items: one a clear bug (which also exists in Linux, FYI), the other a little more controversial.

Clear Bug:
Do source fragmentation BEFORE applying tunnel code. Right now, all packets, regardless of size, are put through the IPsec code before source fragmentation is applied. So the packet gets encapsulated, then split up, and then reassembled at the other end of the tunnel. If the endpoint of the tunnel is different from the ultimate destination of the packet, the reassembled packet will then be dropped (because it is too large for the MTU), and a packet-too-large message will be returned to the source - all to have the same thing happen again.

More controversial issue...
If an IPsec tunnel appears mid-path and applying the tunnel code makes the packet too big for the MTU, the packet is then fragmented. Some people have argued that because a new outer header is applied, it is a new packet, and therefore source fragmentation is allowed. The primary author of the IPsec RFC disagrees with this interpretation - he feels a packet-too-large message should be returned. The point of source fragmentation is to get the sizing right in the first place so that we don't end up with a bunch of tiny fragments, and that is exactly what happens in this scenario. The new header adds a handful of bytes over the MTU, so every full-size packet (and full-size packets will be common in a large transfer) will be fragmented at the tunnel - into one full-size packet and one tiny packet.

There are other weird behaviors with MTUs and IPsec tunnels that, for the most part, won't occur in real life. And fixing some of these odd borderline cases would involve changes to specifications.

-Bev
Post by Loganaden Velvindron
Hi guys,
I'm currently drafting a list of tasks that need to be done for the
IPv6 stack in NetBSD.
I would welcome feedback from everybody.
Kind regards,
//logan
Hugo Silva
2013-10-27 20:29:37 UTC
Permalink
Post by Loganaden Velvindron
Hi guys,
I'm currently drafting a list of tasks that need to be done for the
IPv6 stack in NetBSD.
I would welcome feedback from everybody.
Kind regards,
//logan
Not sure if this is a NetBSD problem yet, but I can't connect to my
NetBSD Xen dom0 over IPv6 unless I do some network operation from
it first.

ssh hangs in connect() from my client machine. Say I turn the screen on
and run a "dig netbsd.org aaaa" (anything that sends v6 pkts, really),
and then retry the ssh connection. It works then.


At this point I am not sure whether this is a NetBSD issue or not. What
I can say is that the FreeBSD and OpenBSD virtual machines running
alongside it do not have this issue, but that is not conclusive by itself.




Greg Troxel
2013-10-28 20:38:01 UTC
Permalink
Post by Hugo Silva
Not sure if this is a NetBSD problem yet, but I can't connect to my
NetBSD Xen dom0 running on IPv6 unless I do some network operation from
it first.
ssh hangs in connect() from my client machine. Say I turn the screen on
and run a "dig netbsd.org aaaa" (anything that sends v6 pkts, really),
and then retry the ssh connection. It works then.
I believe there is a bug in the handling of broadcast when bridging to
xen; I think the fix has been applied to the tree, but I'm not sure.

http://mail-index.netbsd.org/tech-net/2013/08/05/msg004160.html
Dennis Ferguson
2013-10-27 23:17:54 UTC
Permalink
Post by Beverly Schwartz
Two IPsec tunnel related items - one a clear bug (also exists in Linux, FYI), the other a little more controversial.
Do source fragmentation BEFORE applying tunnel code. Right now, what happens, is all packets, regardless of size, are put through the IPsec code BEFORE having source fragmentation applied. So packet gets encapsulated, then split up, then at the other end of the tunnel, the packet is reassembled. If the endpoint of the tunnel is different from the ultimate destination of the packet, then the packet will then be dropped (because it is too large for the MTU), and a packet-too-large message will be returned to the source - all to have the same thing happen again.
This is clearly a bug. For IPv6, fragmentation size depends on the
end-to-end path (unlike IPv4, where it depends solely on the next-hop
interface) and must be tracked and applied at the very start of the
path. The encapsulation tunnel is generally only a path segment, i.e.
not the full end-to-end path, so fragmentation needs to be done before
you get there.
Post by Beverly Schwartz
More controversial issue...
If an IPsec tunnel appears mid flight and applying tunnel code causes the packet to be too big for the MTU, the packet is then fragmented. Some people have argued that because a new outer header is applied, it is a new packet, therefore source fragmentation is allowed. The primary author of the IPsec RFC disagrees with this interpretation - he feels a packet-too-large message should be returned. The point of source fragmentation is to get the sizing right in the first place so that we don't end up with a bunch of tiny fragments. That is exactly what happens in this scenario. The new header adds a handful of bytes over the MTU, so every full-size packet (and full-size packets will be common in a large transfer) will be fragmented at the tunnel - one full-size packet, and one tiny packet.
There are other weird behaviors with MTUs and IPsec tunnels that, for the most part, won't occur in real life. And fixing some of these odd borderline cases would involve changes to specifications.
I think it might be controversial to argue that fragmenting the
tunnel-encapsulated packet is not permitted as an implementation choice.
But I think it is non-controversial to argue that the alternative -
using the information about the effective tunnel MTU to persuade the
packet originator to send packets small enough to fit through it
without (additional) fragmentation - is the functionally superior
option. Anything that reduces, or eliminates, the need for mid-path
routers to do packet reassembly is better on functional grounds alone,
and while there are always MTU-related warts associated with tunnelling
(even with IPv4), I believe you end up with fewer of them by pushing
packet size issues (whether fragmentation, segmentation or reassembly)
as close to the ends of the end-to-end path as you can.

The reasons generic IPv4 tunnel implementations generally choose to
send encapsulated packets with the DF bit clear in the outer header
(i.e. to allow fragmentation of the encapsulated packet) are historical.
IPv6 doesn't share that history and needn't (and shouldn't) emulate this.

Dennis Ferguson