PRIMERGY RX2540 M2 PRAID

Karlostavitch
Posts: 1
Joined: Fri Mar 03, 2017 7:44
Product(s): PRIMERGY RX2540 M2

PRIMERGY RX2540 M2 PRAID

Post by Karlostavitch » Sun Mar 05, 2017 0:02

I have recently set up our shiny new PRIMERGY RX2540 M2 server and have attempted to optimise the RAID configuration for my Hyper-V VMs:

6 x 600 GB HDDs in RAID 10 for the SQL database (D:)
Remaining HDDs in RAID 5 for the VM Windows operating systems (E:)

The RAID 10 volume is formatted with a 64 KB allocation unit size, and I have been testing performance from the host OS without any VMs running. According to my testing there is only a marginal performance difference between the two RAID volumes, whereas I was really expecting a significant write-speed improvement on the RAID 10 volume.
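For reference, the volume was prepared roughly like this from an elevated PowerShell prompt (a sketch, not the exact commands used; the disk number and label are just examples), and fsutil confirms the cluster size actually in effect:

# Partition and format the RAID 10 virtual drive with a 64 KB allocation unit (disk number assumed)
Get-Disk -Number 1 | Initialize-Disk -PartitionStyle GPT -PassThru |
New-Partition -DriveLetter D -UseMaximumSize |
Format-Volume -FileSystem NTFS -AllocationUnitSize 65536 -NewFileSystemLabel "SQL"

# Verify: the output should show "Bytes Per Cluster : 65536"
fsutil fsinfo ntfsinfo D: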

Any advice is appreciated (see results below).

--------------------------------------RAID 10 (D:)---------------------------------------------
C:\tmp\Diskspd-v2.0.17\amd64fre>Diskspd.exe -b64K -d60 -h -L -o2 -t4 -r -w30 -c50M d:\io.dat

Command Line: Diskspd.exe -b64K -d60 -h -L -o2 -t4 -r -w30 -c50M d:\io.dat

Input parameters:

timespan: 1
-------------
duration: 60s
warm up time: 5s
cool down time: 0s
measuring latency
random seed: 0
path: 'd:\io.dat'
think time: 0ms
burst size: 0
software cache disabled
hardware write cache disabled, writethrough on
performing mix test (read/write ratio: 70/30)
block size: 65536
using random I/O (alignment: 65536)
number of outstanding I/O operations: 2
thread stride size: 0
threads per file: 4
using I/O Completion Ports
IO priority: normal



Results for timespan 1:
*******************************************************************************

actual test time: 60.00s
thread count: 4
proc count: 40

CPU | Usage | User | Kernel | Idle
-------------------------------------------
0| 64.35%| 4.95%| 59.40%| 35.65%
1| 63.80%| 5.83%| 57.97%| 36.20%
2| 63.49%| 5.16%| 58.33%| 36.51%
3| 63.12%| 6.02%| 57.11%| 36.87%
4| 0.62%| 0.34%| 0.29%| 99.37%
5| 0.18%| 0.03%| 0.16%| 99.82%
6| 0.39%| 0.16%| 0.23%| 99.61%
7| 0.05%| 0.03%| 0.03%| 99.95%
8| 0.08%| 0.03%| 0.05%| 99.92%
9| 0.08%| 0.05%| 0.03%| 99.92%
10| 0.57%| 0.00%| 0.57%| 99.43%
11| 0.08%| 0.00%| 0.08%| 99.92%
12| 0.52%| 0.08%| 0.44%| 99.48%
13| 9.79%| 1.67%| 8.12%| 90.21%
14| 0.18%| 0.00%| 0.18%| 99.82%
15| 0.00%| 0.00%| 0.00%| 100.00%
16| 0.18%| 0.16%| 0.03%| 99.82%
17| 0.05%| 0.00%| 0.05%| 99.95%
18| 0.05%| 0.05%| 0.00%| 99.95%
19| 0.05%| 0.00%| 0.05%| 99.95%
20| 1.17%| 0.83%| 0.34%| 98.83%
21| 0.00%| 0.00%| 0.00%| 100.00%
22| 0.29%| 0.16%| 0.13%| 99.71%
23| 0.13%| 0.10%| 0.03%| 99.87%
24| 0.23%| 0.18%| 0.05%| 99.76%
25| 5.26%| 4.82%| 0.44%| 94.74%
26| 1.41%| 1.04%| 0.36%| 98.59%
27| 0.21%| 0.13%| 0.08%| 99.79%
28| 0.83%| 0.60%| 0.23%| 99.16%
29| 0.78%| 0.34%| 0.44%| 99.22%
30| 0.47%| 0.26%| 0.21%| 99.53%
31| 0.26%| 0.13%| 0.13%| 99.74%
32| 0.49%| 0.18%| 0.31%| 99.50%
33| 0.31%| 0.05%| 0.26%| 99.69%
34| 0.21%| 0.05%| 0.16%| 99.79%
35| 0.23%| 0.21%| 0.03%| 99.76%
36| 0.13%| 0.10%| 0.03%| 99.87%
37| 0.13%| 0.13%| 0.00%| 99.87%
38| 0.26%| 0.18%| 0.08%| 99.74%
39| 0.16%| 0.08%| 0.08%| 99.84%
-------------------------------------------
avg.| 7.02%| 0.85%| 6.16%| 92.98%

Total IO
thread | bytes | I/Os | MB/s | I/O per s | AvgLat | LatStdDev | file
-----------------------------------------------------------------------------------------------------
0 | 67872555008 | 1035653 | 1078.79 | 17260.56 | 0.114 | 0.050 | d:\io.dat (50MB)
1 | 67398139904 | 1028414 | 1071.24 | 17139.91 | 0.115 | 0.277 | d:\io.dat (50MB)
2 | 67756556288 | 1033883 | 1076.94 | 17231.06 | 0.114 | 0.027 | d:\io.dat (50MB)
3 | 67824648192 | 1034922 | 1078.02 | 17248.38 | 0.114 | 0.020 | d:\io.dat (50MB)
-----------------------------------------------------------------------------------------------------
total: 270851899392 | 4132872 | 4304.99 | 68879.91 | 0.114 | 0.142

Read IO
thread | bytes | I/Os | MB/s | I/O per s | AvgLat | LatStdDev | file
-----------------------------------------------------------------------------------------------------
0 | 47507177472 | 724902 | 755.09 | 12081.47 | 0.115 | 0.058 | d:\io.dat (50MB)
1 | 47143649280 | 719355 | 749.31 | 11989.03 | 0.116 | 0.237 | d:\io.dat (50MB)
2 | 47477227520 | 724445 | 754.62 | 12073.86 | 0.115 | 0.029 | d:\io.dat (50MB)
3 | 47449702400 | 724025 | 754.18 | 12066.86 | 0.115 | 0.020 | d:\io.dat (50MB)
-----------------------------------------------------------------------------------------------------
total: 189577756672 | 2892727 | 3013.20 | 48211.22 | 0.115 | 0.123

Write IO
thread | bytes | I/Os | MB/s | I/O per s | AvgLat | LatStdDev | file
-----------------------------------------------------------------------------------------------------
0 | 20365377536 | 310751 | 323.69 | 5179.09 | 0.112 | 0.020 | d:\io.dat (50MB)
1 | 20254490624 | 309059 | 321.93 | 5150.89 | 0.113 | 0.355 | d:\io.dat (50MB)
2 | 20279328768 | 309438 | 322.33 | 5157.20 | 0.112 | 0.021 | d:\io.dat (50MB)
3 | 20374945792 | 310897 | 323.84 | 5181.52 | 0.112 | 0.019 | d:\io.dat (50MB)
-----------------------------------------------------------------------------------------------------
total: 81274142720 | 1240145 | 1291.79 | 20668.70 | 0.112 | 0.178


%-ile | Read (ms) | Write (ms) | Total (ms)
----------------------------------------------
min | 0.056 | 0.054 | 0.054
25th | 0.102 | 0.099 | 0.101
50th | 0.114 | 0.111 | 0.113
75th | 0.126 | 0.123 | 0.125
90th | 0.139 | 0.135 | 0.138
95th | 0.150 | 0.145 | 0.148
99th | 0.173 | 0.169 | 0.172
3-nines | 0.209 | 0.205 | 0.208
4-nines | 0.307 | 0.303 | 0.307
5-nines | 1.458 | 1.333 | 1.385
6-nines | 31.888 | 2.581 | 18.874
7-nines | 196.854 | 196.927 | 196.927
8-nines | 196.854 | 196.927 | 196.927
9-nines | 196.854 | 196.927 | 196.927
max | 196.854 | 196.927 | 196.927

----------------------------------------RAID 5 (E:)-----------------------------------------------------------
C:\tmp\Diskspd-v2.0.17\amd64fre>Diskspd.exe -b64K -d60 -h -L -o2 -t4 -r -w30 -c50M e:\io.dat

Command Line: Diskspd.exe -b64K -d60 -h -L -o2 -t4 -r -w30 -c50M e:\io.dat

Input parameters:

timespan: 1
-------------
duration: 60s
warm up time: 5s
cool down time: 0s
measuring latency
random seed: 0
path: 'e:\io.dat'
think time: 0ms
burst size: 0
software cache disabled
hardware write cache disabled, writethrough on
performing mix test (read/write ratio: 70/30)
block size: 65536
using random I/O (alignment: 65536)
number of outstanding I/O operations: 2
thread stride size: 0
threads per file: 4
using I/O Completion Ports
IO priority: normal



Results for timespan 1:
*******************************************************************************

actual test time: 60.02s
thread count: 4
proc count: 40

CPU | Usage | User | Kernel | Idle
-------------------------------------------
0| 63.92%| 4.32%| 59.59%| 36.08%
1| 62.77%| 5.21%| 57.56%| 37.23%
2| 64.05%| 5.18%| 58.87%| 35.95%
3| 62.28%| 4.66%| 57.62%| 37.72%
4| 0.31%| 0.16%| 0.16%| 99.69%
5| 0.08%| 0.00%| 0.08%| 99.92%
6| 0.03%| 0.00%| 0.03%| 99.97%
7| 0.00%| 0.00%| 0.00%| 100.00%
8| 0.03%| 0.00%| 0.03%| 99.97%
9| 0.03%| 0.00%| 0.03%| 99.97%
10| 0.55%| 0.29%| 0.26%| 99.45%
11| 0.00%| 0.00%| 0.00%| 100.00%
12| 0.00%| 0.00%| 0.00%| 100.00%
13| 4.56%| 0.70%| 3.85%| 95.44%
14| 0.99%| 0.13%| 0.86%| 99.01%
15| 0.00%| 0.00%| 0.00%| 100.00%
16| 0.18%| 0.13%| 0.05%| 99.82%
17| 0.00%| 0.00%| 0.00%| 100.00%
18| 0.08%| 0.03%| 0.05%| 99.92%
19| 0.26%| 0.00%| 0.26%| 99.74%
20| 0.78%| 0.57%| 0.21%| 99.22%
21| 0.05%| 0.05%| 0.00%| 99.95%
22| 1.30%| 1.15%| 0.16%| 98.70%
23| 0.08%| 0.05%| 0.03%| 99.92%
24| 0.47%| 0.18%| 0.29%| 99.53%
25| 4.30%| 4.01%| 0.29%| 95.70%
26| 0.94%| 0.60%| 0.34%| 99.06%
27| 0.10%| 0.10%| 0.00%| 99.90%
28| 0.05%| 0.05%| 0.00%| 99.95%
29| 87.82%| 77.53%| 10.28%| 12.18%
30| 3.15%| 2.60%| 0.55%| 96.85%
31| 0.00%| 0.00%| 0.00%| 100.00%
32| 0.52%| 0.47%| 0.05%| 99.48%
33| 1.43%| 0.60%| 0.83%| 98.57%
34| 0.94%| 0.57%| 0.36%| 99.06%
35| 0.03%| 0.00%| 0.03%| 99.97%
36| 0.62%| 0.34%| 0.29%| 99.38%
37| 0.08%| 0.05%| 0.03%| 99.92%
38| 0.18%| 0.16%| 0.03%| 99.82%
39| 0.26%| 0.00%| 0.26%| 99.74%
-------------------------------------------
avg.| 9.08%| 2.75%| 6.33%| 90.92%

Total IO
thread | bytes | I/Os | MB/s | I/O per s | AvgLat | LatStdDev | file
-----------------------------------------------------------------------------------------------------
0 | 67966795776 | 1037091 | 1080.02 | 17280.40 | 0.114 | 0.024 | e:\io.dat (50MB)
1 | 68248797184 | 1041394 | 1084.51 | 17352.10 | 0.114 | 0.022 | e:\io.dat (50MB)
2 | 68136665088 | 1039683 | 1082.72 | 17323.59 | 0.114 | 0.022 | e:\io.dat (50MB)
3 | 68029120512 | 1038042 | 1081.02 | 17296.25 | 0.114 | 0.023 | e:\io.dat (50MB)
-----------------------------------------------------------------------------------------------------
total: 272381378560 | 4156210 | 4328.27 | 69252.33 | 0.114 | 0.023

Read IO
thread | bytes | I/Os | MB/s | I/O per s | AvgLat | LatStdDev | file
-----------------------------------------------------------------------------------------------------
0 | 47573303296 | 725911 | 755.96 | 12095.40 | 0.115 | 0.024 | e:\io.dat (50MB)
1 | 47742058496 | 728486 | 758.64 | 12138.31 | 0.114 | 0.023 | e:\io.dat (50MB)
2 | 47741599744 | 728479 | 758.64 | 12138.19 | 0.114 | 0.023 | e:\io.dat (50MB)
3 | 47589097472 | 726152 | 756.21 | 12099.42 | 0.115 | 0.023 | e:\io.dat (50MB)
-----------------------------------------------------------------------------------------------------
total: 190646059008 | 2909028 | 3029.46 | 48471.32 | 0.114 | 0.023

Write IO
thread | bytes | I/Os | MB/s | I/O per s | AvgLat | LatStdDev | file
-----------------------------------------------------------------------------------------------------
0 | 20393492480 | 311180 | 324.06 | 5185.00 | 0.113 | 0.024 | e:\io.dat (50MB)
1 | 20506738688 | 312908 | 325.86 | 5213.79 | 0.112 | 0.021 | e:\io.dat (50MB)
2 | 20395065344 | 311204 | 324.09 | 5185.40 | 0.112 | 0.021 | e:\io.dat (50MB)
3 | 20440023040 | 311890 | 324.80 | 5196.83 | 0.113 | 0.021 | e:\io.dat (50MB)
-----------------------------------------------------------------------------------------------------
total: 81735319552 | 1247182 | 1298.81 | 20781.01 | 0.113 | 0.022


%-ile | Read (ms) | Write (ms) | Total (ms)
----------------------------------------------
min | 0.056 | 0.056 | 0.056
25th | 0.100 | 0.100 | 0.100
50th | 0.112 | 0.111 | 0.112
75th | 0.125 | 0.123 | 0.125
90th | 0.141 | 0.136 | 0.139
95th | 0.153 | 0.146 | 0.151
99th | 0.175 | 0.167 | 0.174
3-nines | 0.244 | 0.241 | 0.243
4-nines | 0.627 | 0.612 | 0.624
5-nines | 1.275 | 1.143 | 1.272
6-nines | 2.865 | 2.785 | 2.785
7-nines | 3.780 | 2.854 | 3.780
8-nines | 3.780 | 2.854 | 3.780
9-nines | 3.780 | 2.854 | 3.780
max | 3.780 | 2.854 | 3.780

me@work
Posts: 1024
Joined: Thu Jun 22, 2006 15:32
Product(s): Scaleo Pi2662, Primergy

Re: PRIMERGY RX2540 M2 PRAID

Post by me@work » Sun Mar 05, 2017 7:53

As a first step, you might want to switch from a 50M test file size to 50G and verify the results: a 50 MB file fits entirely into the RAID controller's cache, so both runs are largely measuring the cache rather than the disk arrays, which would explain the nearly identical figures.
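For example, the same test against a file large enough to exceed the controller cache would only change the -c parameter (adjust the path to the volume under test):

Diskspd.exe -b64K -d60 -h -L -o2 -t4 -r -w30 -c50G d:\io.dat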

Then, have you already examined Fujitsu's related publications, such as:
https://sp.ts.fujitsu.com/dmsp/Publications/public/wp-performance-report-primergy-rx2540-m2-ww-en.pdf
https://sp.ts.fujitsu.com/dmsp/Publications/public/wp-basics-of-disk-io-performance-ww-en.pdf
https://sp.ts.fujitsu.com/dmsp/Publications/public/wp-raid-controller-performance-2016-ww-en.pdf
https://sp.ts.fujitsu.com/dmsp/Publications/public/wp-py-hdd-ssd-en.pdf

These should provide some initial guidance. Good luck.

