Discussion:
5.x filesystem performance regression
Edgar Fuß
2011-06-04 14:40:27 UTC
Having fixed my performance-critical RAID configuration, I think there's a
serious filesystem performance regression from 4.x to 5.x.

I've tested every possible combination of 4.0.1 vs. 5.1, softdep vs. WAPBL,
parity maps enabled vs. disabled, bare disc vs. RAID 1 vs. RAID 5.
The test case was extracting the 5.1 src.tgz set onto the filesystem under test.
The extraction was done twice (having deleted the extracted tree in between);
in some cases the time for the first run is missing because I forgot to time
the tar command.
All tests are on identical hardware, a 4 GB amd64 system with three Seagate
ST336607LW discs on an Adaptec 19160 SCSI controller.
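For reference, each timed run amounted to roughly the following (the mount
point and tarball path are only placeholders, not the exact paths I used):

    cd /mnt/test                        # filesystem under test
    time tar -xzf /path/to/src.tgz      # first run
    rm -rf ./usr                        # delete the extracted tree
    time tar -xzf /path/to/src.tgz      # second run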

In the following table, the two figures in each column are elapsed seconds
for the two runs.

                              plain disc    RAID 1        RAID 5 16k    RAID 5 32k
4.0.1 softdep                 64s  12s      ?    11s      ?    17s      54s  12s
5.1 softdep                   51s  42s      65s  60s      330s 347s     218s 250s
5.1 log                       66s  30s      84s  25s      ?    426s     194s 190s
5.1 softdep, no parity map    n/a           63s  61s      339s 331s     not measured
5.1 log, no parity map        n/a           88s  26s      ?    340s     not measured

Both RAIDs have 32 sectPerSU.
The filesystem on the RAID 1 has a 16k bsize, on RAID 5, I tested both 16k/32k.
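For completeness: the stripe-unit size comes from the layout line of the
raidctl(8) config files, and the filesystems were created with newfs -b,
roughly like this (device names are just placeholders):

    # in each raidN.conf the layout line is
    # sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level, e.g. for RAID 5:
    #   32 1 1 5

    newfs -b 16384 /dev/rraid1a    # RAID 1, 16k block size
    newfs -b 16384 /dev/rraid2a    # RAID 5, 16k block size
    newfs -b 32768 /dev/rraid2a    # RAID 5, 32k block size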

So, almost everywhere, 4.0.1 is three to fifteen times as fast as 5.1.

Any ideas where to look further? Can anyone confirm my measurements?

Christos Zoulas
2011-06-04 14:56:19 UTC
Post by Edgar Fuß
Having fixed my performance-critical RAID configuration, I think there's a
serious filesystem performance regression from 4.x to 5.x.
I've tested every possible combination of 4.0.1 vs. 5.1, softdep vs. WAPBL,
parity maps enabled vs. disabled, bare disc vs. RAID 1 vs. RAID 5.
The test case was extracting the 5.1 src.tgz set onto the filesystem under test.
The extraction was done twice (having deleted the extracted tree in between);
in some cases the time for the first run is missing because I forgot to time
the tar command.
All tests are on identical hardware, a 4 GB amd64 system with three Seagate
ST336607LW discs on an Adaptec 19160 SCSI controller.
In the following table, the two figures in each column are elapsed seconds
for the two runs.
                              plain disc    RAID 1        RAID 5 16k    RAID 5 32k
4.0.1 softdep                 64s  12s      ?    11s      ?    17s      54s  12s
5.1 softdep                   51s  42s      65s  60s      330s 347s     218s 250s
5.1 log                       66s  30s      84s  25s      ?    426s     194s 190s
5.1 softdep, no parity map    n/a           63s  61s      339s 331s     not measured
5.1 log, no parity map        n/a           88s  26s      ?    340s     not measured
Both RAIDs have 32 sectPerSU.
The filesystem on the RAID 1 has a 16k bsize, on RAID 5, I tested both 16k/32k.
So, almost everywhere, 4.0.1 is three to fifteen times as fast as 5.1.
Any ideas where to look further? Can anyone confirm my measurements?
No, but can you try current? It would be much more useful to look at what
we are planning to release, so we can fix it before release time.

christos



Martin Husemann
2011-06-04 22:14:09 UTC
Have you compared raw disk throughput (without filesystem)?
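Something along the lines of a sequential dd from the raw devices on both
releases would give a baseline (device names below are just examples, adjust
to your setup):

    # raw sequential read from the RAID set, bypassing the filesystem
    dd if=/dev/rraid0d of=/dev/null bs=1m count=1024
    # same thing against a bare disc for comparison
    dd if=/dev/rsd0d of=/dev/null bs=1m count=1024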

Martin

Lloyd Parkes
2011-06-05 02:34:08 UTC
??? Why is this in tech-net?
Post by Edgar Fuß
Having fixed my performace-critical RAID configuration, I think there's some
serious filesystem performance regression from 4.x to 5.x.
I've tested every possible combination of 4.0.1 vs. 5.1, softdep vs. WAPBL,
parity maps enabled vs. disabled, bare disc vs. RAID 1 vs. RAID 5.
Excellent.
Post by Edgar Fuß
The test case was extracting the 5.1 src.tgz set onto the filesystem under test.
The extraction was done twice (having deleted the extracted tree in between);
I always reboot between such tests to ensure that the buffer cache has been cleared out. If I ever get around to running RAID benchmarks again, I'll script it all in /etc/rc.d with reboots between each run so that I can get a number of runs without having to run anything by hand.
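A rough sketch of what I have in mind: an rc.d script that does one pass per
boot and reboots until enough runs have been collected (all paths, the run
count and the script name below are made up):

    #!/bin/sh
    #
    # PROVIDE: benchmark
    # REQUIRE: DAEMON

    . /etc/rc.subr

    name="benchmark"
    rcvar=$name
    start_cmd="benchmark_start"

    benchmark_start()
    {
        results=/var/benchmark/results
        runs=0
        [ -f "$results" ] && runs=$(wc -l < "$results")
        if [ "$runs" -ge 5 ]; then              # enough samples collected
            return 0
        fi
        cd /mnt/test || return 1
        rm -rf ./usr                            # start from an empty tree
        /usr/bin/time tar -xzf /var/benchmark/src.tgz 2>> "$results"
        shutdown -r now
    }

    load_rc_config $name
    run_rc_command "$1"

(plus benchmark=YES in rc.conf to enable it)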
Post by Edgar Fuß
in some cases the time for the first run is missing because I forgot to time
the tar command.
That's a problem, because the first-run times are what's needed to show the effect of the buffer cache.
Post by Edgar Fuß
So, almost everywhere, 4.0.1 is three to fiveteen times as fast as 5.1.
I'm afraid it isn't even close to almost everywhere, because there are so many missing measurements. If we ignore all of the second runs because of the buffer-cache issue, we only have two columns that contain enough data. The first is the plain disc column, and it shows things looking pretty good for 5.1. The second is RAID 5 32k, which doesn't look so good. For some reason, RAID 5 appears to be very slow and needs looking at.

If we want to look at the second runs in order to work out why 5.1 looks so much worse there, we still only have enough data in the plain disc and RAID 5 32k columns. For the plain disc, 5.1 does perform better in the second run than the first, just nowhere near as well as 4.0.1. My guess is that the VM parameters changed between 4.0.1 and 5.1 (they did change, I just can't remember when). Try comparing the output of "sysctl vm" on the two versions of NetBSD. My experience is that the VM settings need adjusting to get acceptable performance from any specialised workload, and I suspect that under 4.0.1 your file set fits in memory, but under 5.1 it doesn't fit in the allowed file memory. Once again, RAID 5 appears to be very slow and needs looking at.
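Concretely, something like this on both machines, then diff the output (the
individual knobs listed at the end are just the file-cache ones I'd look at
first):

    sysctl vm > /tmp/vm-4.0.1.txt       # on the 4.0.1 box
    sysctl vm > /tmp/vm-5.1.txt         # on the 5.1 box
    diff -u /tmp/vm-4.0.1.txt /tmp/vm-5.1.txt

    # the limits on file-backed memory are the usual suspects
    sysctl vm.filemin vm.filemax vm.bufcache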

Cheers,
Lloyd


Edgar Fuß
2011-06-05 10:09:41 UTC
Post by Lloyd Parkes
??? Why is this in tech-net?
EMISTAKE. Sorry.
