Greg A. Woods
2010-01-30 00:06:07 UTC
At Fri, 29 Jan 2010 14:43:38 -0600, David Young <***@pobox.com> wrote:
Subject: Re: kernel level multilink PPP and maybe (re)porting FreeBSD netgraph

> > I need advanced kernel-level multilink PPP (MLPPP) support, including
> > the ability to create bundle links via UDP (and maybe TCP) over IP.
>
> Why do you need "kernel-level multilink PPP" support? Do you need to
> interoperate with existing multilink PPP systems?

Partly, but the biggest concern is performance.
I.e.:
1. We absolutely do need to use MLPPP. We do control both ends of the
connection, and we may someday look at other protocols, but our current
production head-end concentrators are using MLPPP.
2. We also need to do it over multiple connections that are up to many
tens of megabits/sec each, perhaps sometimes even 100 Mbps each. Home
cable connections are now 10-50 Mbps down or more in many places, and
truly high-speed ADSL2 is also growing in availability. We aggregate
such connections for both speed and reliability reasons.
Our current low-end FreeBSD-based CPE device, which has a board with a
500 MHz AMD Geode LX800 on it, when connected to a 50 Mbps down / 2 Mbps
up cable connection that has been split into two tunnels, can achieve
8 Mbps max (download) with userland MLPPP, period; but as much as
34 Mbps with MPD using Netgraph MLPPP via UDP, and that was just a
quick-and-dirty test without tuning anything or using truly independent
connections.
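
For anyone curious why MPD gets those numbers: it builds the whole data
path inside the kernel by wiring netgraph nodes together, so packets
never cross into userland once the links are up. Roughly, and from
memory rather than from mpd's actual source, the wiring for one UDP
bundle link looks something like the sketch below, using FreeBSD's
libnetgraph. Error handling for the second link, LCP negotiation, and
the ng_ppp configuration messages are all omitted; the node name, hook
name "ctl", port, and addresses are placeholders of my own.

#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <string.h>
#include <err.h>

#include <netgraph.h>
#include <netgraph/ng_message.h>
#include <netgraph/ng_ksocket.h>

int
main(void)
{
	int cs, ds;			/* control and data sockets */
	struct ngm_mkpeer mkp;
	struct sockaddr_in sin;

	/* Create our own ng_socket node to issue control messages from. */
	if (NgMkSockNode("mlpppdemo", &cs, &ds) < 0)
		err(1, "NgMkSockNode");

	/* Hang an ng_ppp node off our "ctl" hook, attached to the ppp
	 * node's "bypass" hook; a daemon like mpd uses bypass to
	 * exchange LCP and other control frames with the node. */
	memset(&mkp, 0, sizeof(mkp));
	strlcpy(mkp.type, "ppp", sizeof(mkp.type));
	strlcpy(mkp.ourhook, "ctl", sizeof(mkp.ourhook));
	strlcpy(mkp.peerhook, "bypass", sizeof(mkp.peerhook));
	if (NgSendMsg(cs, ".:", NGM_GENERIC_COOKIE, NGM_MKPEER,
	    &mkp, sizeof(mkp)) < 0)
		err(1, "mkpeer ppp");

	/* Attach an ng_ksocket UDP socket as bundle link 0.  The
	 * ksocket's hook name encodes family/type/protocol. */
	memset(&mkp, 0, sizeof(mkp));
	strlcpy(mkp.type, "ksocket", sizeof(mkp.type));
	strlcpy(mkp.ourhook, "link0", sizeof(mkp.ourhook));
	strlcpy(mkp.peerhook, "inet/dgram/udp", sizeof(mkp.peerhook));
	if (NgSendMsg(cs, ".:ctl", NGM_GENERIC_COOKIE, NGM_MKPEER,
	    &mkp, sizeof(mkp)) < 0)
		err(1, "mkpeer ksocket");

	/* Point the UDP socket at the head-end concentrator
	 * (address and port here are placeholders). */
	memset(&sin, 0, sizeof(sin));
	sin.sin_len = sizeof(sin);
	sin.sin_family = AF_INET;
	sin.sin_port = htons(5005);
	sin.sin_addr.s_addr = inet_addr("192.0.2.1");
	if (NgSendMsg(cs, ".:ctl.link0", NGM_KSOCKET_COOKIE,
	    NGM_KSOCKET_CONNECT, &sin, sizeof(sin)) < 0)
		err(1, "ksocket connect");

	/* A real daemon would now run LCP over the bypass hook and
	 * send NGM_PPP_SET_CONFIG to enable multilink on the bundle. */
	return 0;
}

From there every data packet stays in the kernel: ng_ppp fragments it
across link0, link1, etc., and each ng_ksocket sends its share as UDP.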
As I'm sure you know, it's just not feasible to move data fast enough in
and out of userland to split and reassemble packets on commodity
CPE devices. We also need to do IPsec (with hardware crypto), ipfilter,
Ethernet bridging and VLANs, etc., all on the same little processors.
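
To make concrete what that per-packet work is: MLPPP (RFC 1990)
prepends a small fragment header carrying begin/end flags and a
sequence number, and the receiver has to buffer and re-order fragments
arriving on all the links before it can rebuild each packet. Here is a
minimal, illustrative sketch of just the sender-side split, using the
long (24-bit) sequence-number format; the round-robin link choice and
all the names are my own invention, not anyone's real implementation.

#include <stdint.h>
#include <stddef.h>
#include <string.h>

#define MP_HDR_LEN	4	/* long-sequence-number format (RFC 1990) */
#define MP_FLAG_BEGIN	0x80	/* B bit: first fragment of a packet */
#define MP_FLAG_END	0x40	/* E bit: last fragment of a packet */

/* Hypothetical per-link transmit hook: in a kernel stack this would
 * enqueue the fragment on one member link of the bundle. */
typedef void (*mp_xmit_fn)(int link, const uint8_t *frag, size_t len);

/*
 * Split one PPP payload into fragments, prepend the multilink header,
 * and spread the fragments round-robin across `nlinks` member links.
 * `*seqp` is the bundle's shared 24-bit send sequence number.
 */
static void
mp_fragment(const uint8_t *pkt, size_t len, size_t frag_payload,
    int nlinks, uint32_t *seqp, mp_xmit_fn xmit)
{
	uint8_t frag[MP_HDR_LEN + 1500];
	size_t off = 0;
	int link = 0;

	if (frag_payload > 1500)	/* keep within our stack buffer */
		frag_payload = 1500;

	while (off < len) {
		size_t chunk = len - off;
		if (chunk > frag_payload)
			chunk = frag_payload;

		uint32_t seq = (*seqp)++ & 0xffffff;	/* 24-bit wrap */
		frag[0] = (off == 0 ? MP_FLAG_BEGIN : 0) |
		    (off + chunk == len ? MP_FLAG_END : 0);
		frag[1] = (seq >> 16) & 0xff;	/* sequence, big-endian */
		frag[2] = (seq >> 8) & 0xff;
		frag[3] = seq & 0xff;
		memcpy(frag + MP_HDR_LEN, pkt + off, chunk);

		xmit(link, frag, MP_HDR_LEN + chunk);
		link = (link + 1) % nlinks;	/* round-robin */
		off += chunk;
	}
	/* The receiver re-orders by sequence number across all links
	 * and concatenates B..E runs back into whole packets; that is
	 * the work that's too costly to bounce through userland at
	 * tens of megabits on a 500 MHz Geode. */
}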
--
Greg A. Woods
Planix, Inc.
<***@planix.com> +1 416 218 0099 http://www.planix.com/