Difficulty interpreting results of disk I/O test

Asked by dpfarrer

I have two identically outfitted Dell 2950 III servers running Ubuntu 14.04: gb2a and gb2b. Both systems have 32 GB of RAM and six 2 TB disks in a hardware RAID 6, and both have dual Xeon X5450 CPUs @ 3.00 GHz.

When I time the sysbench fileio prepare step, these are the results:

root@gb2b:~# time sysbench --test=fileio --file-total-size=150G prepare
sysbench 0.4.12: multi-threaded system evaluation benchmark

128 files, 1228800Kb each, 153600Mb total
Creating files for the test...

real 11m44.316s
user 0m0.700s
sys 3m12.292s
root@gb2b:~#

root@gb2a:~# time sysbench --test=fileio --file-total-size=150G prepare
sysbench 0.4.12: multi-threaded system evaluation benchmark

128 files, 1228800Kb each, 153600Mb total
Creating files for the test...

real 42m3.393s
user 0m8.691s
sys 39m52.104s
root@gb2a:~#
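
While prepare is running, a second shell shows whether the time is going to the disks or to kernel CPU; a minimal sketch using the standard sysstat/procps tools (not output from these machines, 1-second interval):

iostat -x 1    # per-device stats; high %util points at the array itself
vmstat 1       # a high "sy" column points at kernel CPU time instead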

As you can see, gb2a takes roughly four times longer in wall-clock time, and nearly all of its elapsed time is kernel (sys) CPU time: 39m52s, versus 3m12s on gb2b. When I run the actual test, the results are nearly identical:

root@gb2a:~# sysbench --test=fileio --file-total-size=150G --file-test-mode=rndrw --init-rng=on --max-time=300 --max-requests=0 run
sysbench 0.4.12: multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1
Initializing random number generator from timer.

Extra file open flags: 0
128 files, 1.1719Gb each
150Gb total file size
Block size 16Kb
Number of random requests for random IO: 0
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Threads started!
Time limit exceeded, exiting...
Done.

Operations performed: 31500 Read, 21000 Write, 67193 Other = 119693 Total
Read 492.19Mb Written 328.12Mb Total transferred 820.31Mb (2.729Mb/sec)
  174.66 Requests/sec executed

Test execution summary:
    total time: 300.5868s
    total number of events: 52500
    total time taken by event execution: 271.8952
    per-request statistics:
         min: 0.01ms
         avg: 5.18ms
         max: 1326.14ms
         approx. 95 percentile: 11.21ms

Threads fairness:
    events (avg/stddev): 52500.0000/0.00
    execution time (avg/stddev): 271.8952/0.00

----------- and here is gb2b -----------------
root@gb2b:~# sysbench --test=fileio --file-total-size=150G --file-test-mode=rndrw --init-rng=on --max-time=300 --max-requests=0 run
sysbench 0.4.12: multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1
Initializing random number generator from timer.

Extra file open flags: 0
128 files, 1.1719Gb each
150Gb total file size
Block size 16Kb
Number of random requests for random IO: 0
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Threads started!
Time limit exceeded, exiting...
Done.

Operations performed: 33109 Read, 22072 Write, 70528 Other = 125709 Total
Read 517.33Mb Written 344.88Mb Total transferred 862.2Mb (2.874Mb/sec)
  183.93 Requests/sec executed

Test execution summary:
    total time: 300.0030s
    total number of events: 55181
    total time taken by event execution: 278.4508
    per-request statistics:
         min: 0.01ms
         avg: 5.05ms
         max: 1252.96ms
         approx. 95 percentile: 10.79ms

Threads fairness:
    events (avg/stddev): 55181.0000/0.00
    execution time (avg/stddev): 278.4508/0.00

I ran sysbench in the first place to try and figure out why gb2a is so much slower than its identical twin. Does the slowness in file prep on gb2a tell you anything?

Many thanks,

DP

Question information

Language: English
Status: Answered
For: sysbench
Assignee: No assignee
Alexey Kopytov (akopytov) said (#1):

I guess the difference between prepare and run is that prepare creates new files (pure writes), while 'run' with --file-test-mode=rndrw generates a mixed read/write workload, which also does cached reads by default. So the I/O performance difference between the systems is amortized by the cache.
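
One way to take the page cache out of the picture between runs is to flush it as root before re-running the test; this is a general Linux technique, not a sysbench option:

sync
echo 3 > /proc/sys/vm/drop_caches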

Things to try:

--file-test-mode=rndwr (i.e. a write-only workload)
or
--file-test-mode=rndrw --file-extra-flags=direct (i.e. a mixed read-write workload, but with OS caching disabled via O_DIRECT).
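
Spelled out against the original invocation, those two variants would look like this (same file set and time limit as above; the flags are standard sysbench 0.4 fileio options):

sysbench --test=fileio --file-total-size=150G --file-test-mode=rndwr --init-rng=on --max-time=300 --max-requests=0 run

sysbench --test=fileio --file-total-size=150G --file-test-mode=rndrw --file-extra-flags=direct --init-rng=on --max-time=300 --max-requests=0 run

If gb2a really is slower at the device level, the gap you saw in the prepare phase should reappear in either of these.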
