David Young
2009-03-19 17:47:01 UTC
I am concerned that the lengthy Tx queues/rings in NetBSD lead
to lengthy delays and unfairness in typical home/office gateway
applications. What do you think?
NetBSD's network queues and Tx DMA rings seem rather long. For example,
tlp(4) reserves 1024 Tx descriptors, enough for 64 packets, each with at
most 16 DMA segments. 256 packets is the default maximum length both
for network queues (e.g., ipintrq, ip6intrq) and for interface queues.
If a NetBSD box is the Internet gateway for several 10-, 100-, or
1000-megabit clients, and the clients share a 1-, 2-, or 3-megabit
Internet pipe, it is easy for some outbound stream to fill both the
Tx ring (max 64 packets) and the output queues (max 256 packets) to
capacity with full-size (Ethernet MTU) packets. Once the ring + queue
capacity is reached, every additional packet of outbound traffic that
the LAN offers will linger in the gateway for between 1.3 and 3.8 seconds.
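For concreteness, here is the arithmetic behind those figures as a
small sketch (assuming full 1500-byte frames and ignoring link-layer
overhead, so the real numbers are a little worse):

/*
 * Back-of-the-envelope buffering delay for a saturated gateway uplink.
 * Assumes 320 full-size frames queued (64 on the Tx ring + 256 on the
 * interface queue) and ignores link-layer overhead.
 */
#include <stdio.h>

int
main(void)
{
	const double frame_bits = 1500.0 * 8;	/* full Ethernet MTU */
	const int queued = 64 + 256;		/* Tx ring + ifqueue, in packets */
	const double uplink_bps[] = { 1e6, 2e6, 3e6 };
	size_t i;

	for (i = 0; i < sizeof(uplink_bps) / sizeof(uplink_bps[0]); i++)
		printf("%.0f Mbit/s uplink: %.2f s of queued data\n",
		    uplink_bps[i] / 1e6, queued * frame_bits / uplink_bps[i]);
	return 0;
}

At 1 Mbit/s that works out to about 3.84 s and at 3 Mbit/s to about
1.28 s, which is where the 1.3-to-3.8-second range comes from.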
Now, suppose that we shorten the interface queue, or else we "shape"
traffic using ALTQ. Outbound traffic nevertheless spends 1/4 to 3/4
second on the Tx ring, which may defeat ALTQ prioritization in some
instances.
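The ring-only figure follows from the same arithmetic: 64 packets x
1500 bytes x 8 bits = 768 kbit, which drains in roughly 0.77 s at
1 Mbit/s and 0.26 s at 3 Mbit/s.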
This is getting a bit long, so I am going to hastily draw some
conclusions. Please tell me if I am way off base:
1. In order for ALTQ to be really effective at controlling latency for
   delay-sensitive traffic, it has to feed a very short Tx ring.
2. Maximum queue/ring lengths in NetBSD are currently tuned for very
   high-speed networks; the maximums should adapt to hold down the
   expected delay while still absorbing momentary bursts (see the
   sketch below).
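To make conclusion 2 concrete, here is a minimal sketch of one way the
maximums could adapt: derive the queue limit from a target delay and
the link's drain rate, with a small floor so momentary bursts are still
absorbed. The function, its name, the 100 ms target and the 8-packet
floor are all hypothetical; nothing like this exists in NetBSD today.

#include <stdio.h>

/*
 * Hypothetical illustration only: choose an output-queue limit so
 * that, at a given drain rate, a queue full of MTU-size packets adds
 * at most target_ms of delay, but never go below a small floor that
 * can absorb a momentary burst.
 */
static unsigned int
ifq_adaptive_maxlen(unsigned long link_bps, unsigned int mtu,
    unsigned int target_ms, unsigned int floor_pkts)
{
	unsigned long long bits_per_pkt = (unsigned long long)mtu * 8;
	unsigned long long budget_bits =
	    (unsigned long long)link_bps * target_ms / 1000;
	unsigned long long pkts = budget_bits / bits_per_pkt;

	return (pkts > floor_pkts) ? (unsigned int)pkts : floor_pkts;
}

int
main(void)
{
	/* 100 ms delay budget, 1500-byte MTU, never fewer than 8 packets. */
	printf("3 Mbit/s uplink: %u packets\n",
	    ifq_adaptive_maxlen(3000000UL, 1500, 100, 8));
	printf("1 Gbit/s LAN:    %u packets\n",
	    ifq_adaptive_maxlen(1000000000UL, 1500, 100, 8));
	return 0;
}

With those made-up numbers the slow uplink gets a 25-packet limit
instead of 256, while the gigabit interface keeps a limit in the
thousands, which is roughly the behaviour conclusion 2 asks for.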
Dave
--
David Young OJC Technologies
***@ojctech.com Urbana, IL * (217) 278-3933