Discussion:
download rate too slow with 1000BaseT
Philip Miller
2013-02-26 18:03:40 UTC
Permalink
If I upload a large binary file from NetBSD, the speed is around
40MB/s, which is normal. But when downloading a large binary file, the
speed is only around 11MB/s, which is too low.
My network card configuration:
-bash-4.2$ ifconfig nfe0
nfe0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
address: 00:25:22:8a:a9:5c
media: Ethernet autoselect (1000baseT full-duplex)
status: active
inet 192.168.2.103 netmask 0xffffff00 broadcast 192.168.2.255
inet6 fe80::225:22ff:fe8a:a95c%nfe0 prefixlen 64 scopeid 0x1

--
Posted automagically by a mail2news gateway at muc.de e.V.
Please direct questions, flames, donations, etc. to news-***@muc.de
Philip Miller
2013-02-26 19:23:07 UTC
Permalink
netbsd:
iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 32.0 KByte (default)
------------------------------------------------------------
[ 4] local 192.168.2.103 port 5001 connected with 192.168.2.129 port 43759
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 1.09 GBytes 939 Mbits/sec
^C-bash-4.2$ iperf -c 192.168.2.129
------------------------------------------------------------
Client connecting to 192.168.2.129, TCP port 5001
TCP window size: 32.5 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.2.103 port 63872 connected with 192.168.2.129 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 1006 MBytes 843 Mbits/sec

Linux:
iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 32.0 KByte (default)
------------------------------------------------------------
[ 4] local 192.168.2.103 port 5001 connected with 192.168.2.129 port 43759
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 1.09 GBytes 939 Mbits/sec
^C-bash-4.2$ iperf -c 192.168.2.129
------------------------------------------------------------
Client connecting to 192.168.2.129, TCP port 5001
TCP window size: 32.5 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.2.103 port 63872 connected with 192.168.2.129 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 1006 MBytes 843 Mbits/sec

Philip Miller
2013-02-26 19:33:52 UTC
Permalink
that's all OK
On Tue, 26 Feb 2013 19:23:07 +0000
Post by Philip Miller
843 Mbits/s
I believe you, but that is not the whole story.
When I receive a binary file on NetBSD over ftp or ssh, it is nowhere
near 40MB/s.
Should I show you the output?

Philip Miller
2013-02-27 13:33:49 UTC
Permalink
/dev/zero > samplefile
On the hdd raid:
dd if=/dev/zero of=tempfile bs=1024k count=1024 conv=notrunc,sync
1024+0 records in
1024+0 records out
1073741824 bytes transferred in 84.552 secs (12699188 bytes/sec)

On ssd:
sudo dd if=/dev/zero of=tempfile bs=1024k count=1024 conv=notrunc,sync
Password:
1024+0 records in
1024+0 records out
1073741824 bytes transferred in 30.612 secs (35075846 bytes/sec)

But scp to the NetBSD SSD is only around 14MB/s:
scp Minix.vdi <netbsd>:
Minix.vdi 72% 1387MB 13.6MB/s
00:37 ETA

But all in all it is too slow; both of these storage devices can do better.

Lloyd Parkes
2013-02-27 20:09:22 UTC
Permalink
I recommend analysing the problem in stages, and you have done this already. Do not start with testing ssh/scp because it virtualises the network connection and older versions of ssh have a lot of significant performance problems.

1) Run a network benchmarking tool like ttcp or iperf. You have done this.

2) Run ftp, and make sure the file being written is /dev/null. This tests how fast one end of the ftp connection can get the data off the disk and on to the network.

3) Run ftp, and write the file to disk. This tests how fast you can write to the disk from the network. I expect that ftp isn't very sophisticated and while it is writing to the disk, it won't be reading from the network.

4) Run scp with /dev/null as your destination.

5) Run scp as normal.
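
The point of stages 2 and 3 is to take one side out of the picture at a
time, and the same separation can be measured locally before involving
the network at all. A minimal sketch (file name and sizes are made up):

```shell
#!/bin/sh
# Make a 64 MB test file, then read it back.  Writing to /dev/null
# discards the data, so the second dd measures only the
# disk/filesystem read path, with no network in the way.
dd if=/dev/zero of=tempfile bs=1024k count=64 conv=sync 2>/dev/null
dd if=tempfile of=/dev/null bs=1024k
rm -f tempfile
```

If the local read rate is already well below wire speed, stage 3 can
never do any better.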

Cheers,
Lloyd


David Laight
2013-02-27 22:23:03 UTC
Permalink
Post by Lloyd Parkes
I recommend analysing the problem in stages, and you have done
this already. Do not start with testing ssh/scp because it
virtualises the network connection and older versions of ssh
have a lot of significant performance problems.
...
Post by Lloyd Parkes
2) Run ftp, and make sure the file being written is /dev/null.
This tests how fast one end of the ftp connection can get the
data off the disk and on to the network.
2a) Run ftp from a sparse file to /dev/null. Use both 'put' and 'get'
by requesting the transfer from both ends.
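
A sparse file for that costs almost nothing to create: dd's seek
operand extends the file without writing any data blocks. A sketch
(the 1 GB size and file name are arbitrary):

```shell
#!/bin/sh
# seek=1024 with bs=1024k moves the write pointer 1 GB forward, and
# count=0 writes nothing, leaving a 1 GB file that is all holes.
# Reads of a hole return zeros without touching the disk, so
# transferring it measures the network path, not the source disk.
dd if=/dev/zero of=sparse.bin bs=1024k seek=1024 count=0 2>/dev/null
ls -l sparse.bin      # apparent size: 1073741824 bytes
du -k sparse.bin      # actual blocks used: (nearly) none
rm -f sparse.bin
```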

David
--
David Laight: ***@l8s.co.uk

Lloyd Parkes
2013-02-27 20:14:30 UTC
Permalink
/dev/zero > samplefile
...
...
Be careful, some SSD controllers will compress the data being stored (making it faster) and some may even optimise the writing of blocks of zero. I would dd from /dev/urandom to an SSD while running top to make sure that the CPU was coping with having to generate all those pseudo-random numbers. Your numbers are probably correct, but SSDs are an immature technology and their performance figures vary widely.
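
For the /dev/urandom variant, something like this (sizes arbitrary;
watch top in another terminal while it runs):

```shell
#!/bin/sh
# Random bytes defeat any compression or zero-detection in the SSD
# controller, so this write rate is closer to the true worst case.
dd if=/dev/urandom of=rand.bin bs=1024k count=8 2>/dev/null
# Sanity check: random data should be essentially incompressible.
gzip -c rand.bin | wc -c
rm -f rand.bin
```

If the dd rate drops sharply versus /dev/zero while the CPU is pegged,
the bottleneck is random-number generation, not the SSD.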

Cheers,
Lloyd


Mouse
2013-02-27 22:37:36 UTC
Permalink
Be careful, some SSD controllers will compress the data being stored (making it faster)
/dev/urandom will be slow compared to doing something like arc4 in
userland - even if the kernel implementation is fast (which it may well
be), arc4 is cheap enough that just the syscall overhead of reading
from /dev/urandom will probably cost more cycles than doing arc4 in
userland.

Besides, for this even a multiplicative congruential RNG quite possibly
would be enough. You don't need cryptographic strength; you just need
something random enough to defeat the SSD's compression algorithm(s).
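
One way to get a cheap userland keystream without writing any code is
to run a stream cipher over /dev/zero with openssl. A sketch using
AES-128-CTR as a stand-in (arc4/RC4 is often disabled in newer openssl
builds, and any fast stream cipher does the job here; the key and IV
are throwaway values):

```shell
#!/bin/sh
# Encrypting zeros with a stream cipher produces pseudo-random bytes
# entirely in userland -- large buffered reads instead of paying
# syscall overhead to /dev/urandom for every block.
head -c 8388608 /dev/zero |
  openssl enc -aes-128-ctr -nosalt \
    -K 000102030405060708090a0b0c0d0e0f \
    -iv 00000000000000000000000000000000 > stream.bin
wc -c stream.bin            # same size as the input: CTR adds no padding
gzip -c stream.bin | wc -c  # incompressible, as desired
rm -f stream.bin
```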

/~\ The ASCII Mouse
\ / Ribbon Campaign
X Against HTML ***@rodents-montreal.org
/ \ Email! 7D C8 61 52 5D E7 2D 39 4E F1 31 3E E8 B3 27 4B

David Laight
2013-02-27 22:57:20 UTC
Permalink
Post by Mouse
Be careful, some SSD controllers will compress the data being stored (making it faster)
/dev/urandom will be slow compared to doing something like arc4 in
userland - even if the kernel implementation is fast (which it may well
be), arc4 is cheap enough that just the syscall overhead of reading
from /dev/urandom will probably cost more cycles than doing arc4 in
userland.
Besides, for this even a multiplicative congruential RNG quite possibly
would be enough. You don't need cryptographic strength; you just need
something random enough to defeat the SSD's compression algorithm(s).
Or one of the ones that is a bit like a CRC - but using an
array of ints and addition (instead of single bits and xor).
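
A multiplicative congruential generator really is enough for this. A
sketch in awk using the classic MINSTD constants (seed and output size
arbitrary; bytes mapped into 1..255 to avoid emitting NULs):

```shell
#!/bin/sh
# MINSTD: x = 16807 * x mod (2^31 - 1).  Weak by cryptographic
# standards, but far too random for an SSD's compressor to exploit.
# LC_ALL=C keeps awk's %c emitting single bytes.
LC_ALL=C awk 'BEGIN {
    x = 12345
    for (i = 0; i < 1048576; i++) {
        x = (16807 * x) % 2147483647
        printf "%c", (x % 255) + 1
    }
}' > mcg.bin
gzip -c mcg.bin | wc -c   # roughly the input size: nothing to compress
rm -f mcg.bin
```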

David
--
David Laight: ***@l8s.co.uk
