Discussion:
kernel level multilink PPP and maybe (re)porting FreeBSD netgraph
Greg A. Woods
2010-01-30 00:06:07 UTC
Permalink
At Fri, 29 Jan 2010 14:43:38 -0600, David Young <***@pobox.com> wrote:
Subject: Re: kernel level multilink PPP and maybe (re)porting FreeBSD netgraph
I need advanced kernel-level multilink PPP (MLPPP) support, including
the ability to create bundle links via UDP (and maybe TCP) over IP.
Why do you need "kernel-level multilink PPP" support? Do you need to
interoperate with existing multilink PPP systems?
Partly, but the biggest concern is performance.

I.e.:

1. We absolutely do need to use MLPPP. We do control both ends of the
connection, and we may someday look at other protocols, but our current
production head-end concentrators are using MLPPP.

2. We also need to do it over multiple connections that are up to many
tens of megabits/sec each, perhaps sometimes even 100 Mbps each. Home
cable connections are now 10-50 Mbps down or more in many places, and
truly high-speed ADSL2 is also growing in availability. We aggregate
such connections for both speed and reliability reasons.

Our current low-end FreeBSD-based CPE device, which has a board with a
500 MHz AMD Geode LX800 on it, when connected to a 50 Mbps down / 2 Mbps
up cable connection that has been split into two tunnels, can achieve
8 Mbps max (download) with userland MLPPP, period; but as much as 34 Mbps
with MPD using Netgraph MLPPP via UDP, and that was just a quick-and-dirty
test without tuning anything or using truly independent connections.

As I'm sure you know, it's just not feasible to move data fast enough in
and out of userland to split and reassemble packets on commodity
CPE devices. We also need to do ipsec (with hardware crypto), ipfilter,
ethernet bridging and vlans, etc., all on the same little processors.
--
Greg A. Woods
Planix, Inc.

<***@planix.com> +1 416 218 0099 http://www.planix.com/
Greg A. Woods
2010-01-30 20:59:29 UTC
Permalink
At Sat, 30 Jan 2010 11:37:41 +0900, Masao Uebayashi <***@tombi.co.jp> wrote:
Subject: Re: kernel level multilink PPP and maybe (re)porting FreeBSD netgraph
What you need is something like npppd/pipex which OpenBSD has just imported?
Not as it is, as far as I can tell. (I don't see any new documentation
imported for it -- just a couple of kernel files and the usr.sbin/npppd
stuff, also without manual pages it seems, sigh.)

Does it actually do MLPPP? I only find mention of Multilink PPP (which
they abbreviate "MP" for some silly reason) in usr.sbin/npppd/npppd/ppp.h.

usr.sbin/npppd seems to be server-only. I need client code first, then
eventually server support.

The kernel code (if indeed it has any client code -- not sure yet)
doesn't seem to allow forwarding through UDP or TCP. It does mention
PPTP and PPPoE in places, but those don't really help me directly.

The document I eventually found here:

http://www.seil.jp/download/eng/doc/npppd_pipex.pdf

confirms that this seems to be server/concentrator only. (that link
sure would have helped me figure this out faster!)

The more I think about it, the more I value the simple way Netgraph
modules can be composed into any graph that meets one's current
requirements, all without recompiling anything.
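To make that concrete, here is a rough sketch of wiring a UDP-transported
PPP graph by hand, assuming a FreeBSD box with the netgraph modules loaded.
The node name "mlppp" and the address 192.0.2.1:5000 are illustrative, each
line is a separate one-shot ngctl(8) invocation, and the ng_ppp "setconfig"
step that actually enables the link (which mpd performs for you) is omitted:

```shell
# Create an ng_iface network interface (auto-named ng0); "dummy" is a
# throwaway hook on ngctl's own socket node, freed when ngctl exits.
ngctl mkpeer . iface dummy inet

# Hang an ng_ppp node off the interface's "inet" hook and name it.
ngctl mkpeer ng0: ppp inet inet
ngctl name ng0:inet mlppp

# Give the PPP bundle a UDP transport on member link0 via ng_ksocket.
ngctl mkpeer mlppp: ksocket link0 inet/dgram/udp

# Point the socket at the remote concentrator (illustrative address).
# A real setup would also send ng_ppp a setconfig message here to
# enable link0 and turn on multilink before traffic will flow.
ngctl msg mlppp:link0 connect inet/192.0.2.1:5000
```

Adding a second member link is just another mkpeer/connect pair on
"link1" -- the graph is rewired at run time, nothing is recompiled.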
--
Greg A. Woods
Planix, Inc.

<***@planix.com> +1 416 218 0099 http://www.planix.com/
Yasuoka Masahiko
2010-01-31 01:35:44 UTC
Permalink
On Sat, 30 Jan 2010 15:59:29 -0500
Post by Greg A. Woods
Does it actually do MLPPP? I only find mention of Multilink PPP (which
they abbreviate "MP" for some silly reason) in usr.sbin/npppd/npppd/ppp.h.
npppd and pipex don't support multilink PPP. The "MP" in ppp.h was
derived from RFC 3145.
Post by Greg A. Woods
usr.sbin/npppd seems to be server-only. I need client code first, then
eventually server support.
Yes, npppd supports the server side only. For client-side PPP we already
have pppd(8). pppd needs one process per PPP connection, but I think that
is not a problem on the client side in most cases. pppd itself doesn't
have in-kernel framing for PPTP, PPPoE, or L2TP, but we could add it by
adding pipex ioctl hooks to pppd(8) and pipex hooks to ppp(4).

--yasuoka

--
Posted automagically by a mail2news gateway at muc.de e.V.
Please direct questions, flames, donations, etc. to news-***@muc.de
Yasuoka Masahiko
2010-01-31 02:52:52 UTC
Permalink
On Sat, 30 Jan 2010 18:38:53 -0500
Is MLPPD necessary/desirable for some reason?
I'm not sure what MLPPD is -- Did you mean MLPPP? If so, then yes,
MLPPP is, currently, a core feature of the project I'm working on.
Does the project use "The PPP Multilink Protocol" described in RFC 1717?

If so: as far as I know, MP (RFC 1717) is for combining multiple PPP
links into one logical data link. Which tunnelling protocol does the
project use? Why does that tunnelling protocol need MP?

--yasuoka

Greg A. Woods
2010-01-31 20:47:43 UTC
Permalink
At Sat, 30 Jan 2010 19:35:47 -0500, Thor Lancelot Simon <***@panix.com> wrote:
Subject: Re: kernel level multilink PPP and maybe (re)porting FreeBSD netgraph
As far as I know, the standard *is* "MP". MLPPP -- in my years-ago
experience anyway -- was Livingston's proprietary predecessor of the
standard protocol; they don't interoperate.
Well, long ago there was RFC 1717, which was written by authors from
Newbridge, UCB, and Lloyd Internetworking, and indeed the title of that
RFC appears to abbreviate "PPP Multilink Protocol" to "MP" (though
perhaps it should be called "PPP-MP"). There was also a protocol from
Ascend called Multichannel Protocol Plus (MP+) and I don't know if/how
it was related to PPP-MP. Livingston did support RFC 1717 and they also
called it "MP", or sometimes "multi-line load balancing". If I remember
correctly Lucent bought Livingston, then Ascend.

Initially I need to inter-operate with a concentrator running MPD on
FreeBSD using Netgraph, thus ng_ppp(4), which implements RFC 1990 PPP
Multilink Protocol, probably using UDP encapsulation. (RFC1990 obsoletes
RFC1717)
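For anyone unfamiliar with what RFC 1990 MP actually does on the wire,
here is a toy sketch (not mpd's or ng_ppp's actual code) of its
fragmentation and reassembly, using the long 24-bit sequence-number
fragment format; real MP also negotiates MRRU, endpoint discriminators,
the short-sequence format, etc., all omitted here:

```python
# Toy RFC 1990 multilink PPP fragmentation/reassembly sketch,
# long (24-bit) sequence-number format: a 4-byte fragment header
# carrying B/E flags and a sequence number, then a payload slice.

B_FLAG = 0x80  # (B)eginning fragment of a frame
E_FLAG = 0x40  # (E)nding fragment of a frame

def fragment(frame: bytes, first_seq: int, frag_size: int) -> list[bytes]:
    """Split one PPP frame into MP fragments with consecutive
    sequence numbers; B marks the first slice, E marks the last."""
    chunks = [frame[i:i + frag_size]
              for i in range(0, len(frame), frag_size)] or [b""]
    frags = []
    for i, chunk in enumerate(chunks):
        flags = ((B_FLAG if i == 0 else 0)
                 | (E_FLAG if i == len(chunks) - 1 else 0))
        seq = (first_seq + i) & 0xFFFFFF
        frags.append(bytes([flags]) + seq.to_bytes(3, "big") + chunk)
    return frags

def reassemble(frags: list[bytes]) -> bytes:
    """Sort fragments (possibly received out of order across the
    member links) by sequence number and rebuild the frame."""
    parsed = sorted((int.from_bytes(f[1:4], "big"), f[0], f[4:])
                    for f in frags)
    assert parsed[0][1] & B_FLAG and parsed[-1][1] & E_FLAG, "lost fragment"
    return b"".join(payload for _, _, payload in parsed)
```

The bundle's aggregate throughput comes from spraying these fragments
round-robin (or weighted by link speed) across the member links; the
receiver's sequence-ordered reassembly is exactly the part that is too
expensive to do in userland at tens of megabits on small CPE hardware.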

Porting Netgraph still seems to be the best solution all round, though
perhaps not the fastest to deliver, unless I can get help on the FreeBSD
side at making the code more portable.
--
Greg A. Woods
Planix, Inc.

<***@planix.com> +1 416 218 0099 http://www.planix.com/