Discussion: [patch] ixg(4): Intel 82599 10-gigabit ethernet
Bernd Ernesti
2011-08-13 07:01:06 UTC
Post by David Young
Hi,
Here is a NetBSD port of the Intel 82599 10-gigabit ethernet driver from
FreeBSD,
<ftp://elmendorf.ojctech.com/users/netbsd-08c5a58c/ixgbe.patch>.
To try it out, apply the patch and add this line to your kernel
ixg* at pci? dev ? function ? # Intel 8259x 10 gigabit
The driver which you just committed has files which are named ixgbe*.
Is there any reason why the driver name couldn't then be ixgbe(4) too?

I would find it more natural for the source file names to match the
driver name.

Bernd

P.S. Thank you for the NetBSD port of the driver.


David Young
2011-08-13 17:55:55 UTC
Post by Bernd Ernesti
Hi,
Here is a NetBSD port of the Intel 82599 10-gigabit ethernet driver from
FreeBSD,
<ftp://elmendorf.ojctech.com/users/netbsd-08c5a58c/ixgbe.patch>.
To try it out, apply the patch and add this line to your kernel
ixg* at pci? dev ? function ? # Intel 8259x 10 gigabit
The driver which you just committed has files which are named ixgbe*.
Is there any reason why the driver name couldn't then be ixgbe(4) too?
I believe users expect a correspondence between the device unit names
(ixg0..ixgN) and the manual name. I know I do. That is the reason for
the manual name.

I have found that changing FreeBSD filenames/paths too severely creates
a lot of pain down the road, so I did not consider that.

BTW, in FreeBSD, the driver attaches as ix0, but the ix(4) name is taken
in NetBSD by the "Intel EtherExpress/16 Ethernet ISA bus NIC driver," so
I used the next shortest name, ixg(4).
Post by Bernd Ernesti
P.S. Thank you for the NetBSD port of the driver.
Thanks to Coyote Point Systems, Inc., for sponsoring the port!

Dave
--
David Young OJC Technologies
***@ojctech.com Urbana, IL * (217) 344-0444 x24

Thor Lancelot Simon
2011-08-15 19:00:05 UTC
Post by David Young
With all of the hardware acceleration options turned on and with
iperf(1) bound to the 0th CPU and running 4 threads, TCP transmission
speeds of 5 Gb/s are possible (the receiver was a FreeBSD box). The
maximum TCP receive speed I have observed is 3.6 Gb/s (the transmitter
was FreeBSD).
I am curious how performance changes with large frames. In particular,
I would expect receive performance might be somewhat better, since we
don't have any form of aggregated handling of packets from the same TCP
stream, as Linux does (they call it "Large Receive Offload", which is
rather misleading), though on send we can use this device's large-send
support.

(Note that 9K frames are a *bad* idea. Try a frame size that actually
fits neatly in two pages -- say, 8K minus overhead.)
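
To make that arithmetic concrete (assuming 4 KB pages and ordinary
untagged Ethernet framing; adjust for your configuration): two pages
hold 8192 bytes, and the on-the-wire frame is the MTU plus 14 bytes of
Ethernet header (plus 4 bytes of FCS if the hardware stores it), so an
MTU of roughly 8192 - 18 = 8174 bytes, or a rounder figure such as
8000, keeps each received frame within two page-sized buffers, whereas
a 9000-byte MTU spills every frame into a third page.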
Post by David Young
If I don't bind iperf(1) to one CPU, then lock contention on my 2-CPU
test boxes gobbles up a lot of time and TCP transmission performance
plummets. Our network stack doesn't do SMP well. :-/
If you have a few minutes to try it, I am curious whether reverting my
change to suck all packets off the input queue under one hold of the lock
(in ip_input.c, about a year ago) has any effect -- and if so, what effect.
I'd expect that reverting it might make things much worse, but I would
think we finally have a case where we should see _some_ effect, anyway.
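
In case it helps to see the shape of that change without digging through
the history, here is a self-contained sketch of the two patterns -- not
the actual ip_input.c code; the queue, lock, and handler names are made
up for illustration:

/*
 * Not the real NetBSD code: an illustration of per-packet locking
 * versus draining the whole queue under a single hold of the lock.
 */
#include <pthread.h>
#include <stddef.h>

struct pkt {
        struct pkt *next;
        /* ... packet contents ... */
};

static pthread_mutex_t q_lock = PTHREAD_MUTEX_INITIALIZER;
static struct pkt *q_head;      /* shared input queue (illustrative) */

static void
process_one(struct pkt *p)
{
        /* stand-in for the per-packet protocol input work */
        (void)p;
}

/* Old shape: one lock/unlock round trip for every packet. */
void
drain_per_packet(void)
{
        struct pkt *p;

        for (;;) {
                pthread_mutex_lock(&q_lock);
                if ((p = q_head) != NULL)
                        q_head = p->next;
                pthread_mutex_unlock(&q_lock);
                if (p == NULL)
                        break;
                process_one(p);
        }
}

/* New shape: splice the whole queue off under one hold of the lock,
 * then process the packets from the local list with the lock dropped. */
void
drain_in_one_hold(void)
{
        struct pkt *p, *list;

        pthread_mutex_lock(&q_lock);
        list = q_head;
        q_head = NULL;
        pthread_mutex_unlock(&q_lock);

        while ((p = list) != NULL) {
                list = p->next;
                process_one(p);
        }
}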

Thor

David Laight
2011-08-15 19:04:22 UTC
Post by David Young
If I don't bind iperf(1) to one CPU, then lock contention on my 2-CPU
test boxes gobbles up a lot of time and TCP transmission performance
plummets. Our network stack doesn't do SMP well. :-/
Lock contention, or massive amounts of cache snooping (etc.) because
the process keeps being scheduled on the 'other' CPU?
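
Either way, the pinned-to-one-CPU case is easy to reproduce. Below is a
minimal sketch of binding the calling thread to CPU 0 with NetBSD's
cpuset(3) and pthread_setaffinity_np(3) interfaces -- written from
memory, so check the man pages before trusting the details. Link with
-lpthread; changing affinity may also require appropriate privileges.

#include <sched.h>
#include <pthread.h>
#include <err.h>
#include <stdio.h>

int
main(void)
{
        cpuset_t *cset;
        int error;

        cset = cpuset_create();
        if (cset == NULL)
                err(1, "cpuset_create");
        cpuset_zero(cset);
        cpuset_set(0, cset);            /* allow CPU 0 only */

        error = pthread_setaffinity_np(pthread_self(),
            cpuset_size(cset), cset);
        if (error != 0)
                errx(1, "pthread_setaffinity_np: error %d", error);
        cpuset_destroy(cset);

        /* ... do the benchmark work on this thread, now bound to CPU 0 ... */
        printf("bound to CPU 0\n");
        return 0;
}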

David
--
David Laight: ***@l8s.co.uk
