fio_vm_ub14.04_ssd_r0: 4 x Intel Core (Haswell no TSX) tested with a QEMU Standard PC (i440FX + PIIX, 1996) and Cirrus Logic GD 5446 graphics on Ubuntu 14.04 via the Phoronix Test Suite.
Compare your own system(s) to this result file with the Phoronix Test Suite by running the command:

  phoronix-test-suite benchmark 1507238-BE-FIOVMUB1403

ceph vs ssd:

Processor: 4 x Intel Core (Haswell no TSX) @ 2.30GHz (4 Cores), Motherboard: QEMU Standard PC (i440FX + PIIX, 1996), Chipset: Intel 440FX - 82441FX PMC, Memory: 1 x 4096 MB RAM QEMU, Disk: 9GB, Graphics: Cirrus Logic GD 5446, Network: Red Hat Virtio device
OS: Ubuntu 14.04, Kernel: 3.13.0-32-generic (x86_64), Compiler: GCC 4.8.4, File-System: ext4, Screen Resolution: 1024x768
Compiler Notes: --build=x86_64-linux-gnu --disable-browser-plugin --disable-libmudflap --disable-werror --enable-checking=release --enable-clocale=gnu --enable-gnu-unique-object --enable-gtk-cairo --enable-java-awt=gtk --enable-java-home --enable-languages=c,c++,java,go,d,fortran,objc,obj-c++ --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-objc-gc --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-arch-directory=amd64 --with-multilib-list=m32,m64,mx32 --with-tune=generic -v
Disk Mount Options Notes: data=ordered,errors=remount-ro,relatime,rw
fio_vm_ub14.04_ssd_r0 results overview (ceph vs ssd) - Flexible IO Tester, IO Engine: Libaio, Block Size: 64KB, Disk Target: /

  Test              Buffered  Direct     MB/s   IOPS
  Sequential Write  Yes       Yes     1141.93  18228
  Sequential Write  Yes       No      1156.13  18384
  Sequential Write  No        Yes      794.37  11738
  Sequential Write  No        No       798.44  11886
  Sequential Read   Yes       Yes     1106.93  17997
  Sequential Read   Yes       No      1016.80  16406
  Sequential Read   No        Yes     3986.67  64257
  Sequential Read   No        No      3955.77  64582
  Random Write      Yes       Yes     1011.87  16184
  Random Write      Yes       No      1014.27  16384
  Random Write      No        Yes      794.47  12052
  Random Write      No        No       791.51  12366
  Random Read       Yes       Yes      251.53   3976
  Random Read       Yes       No       192.40   2992
  Random Read       No        Yes     3391.63  53994
  Random Read       No        No      3395.83  54594
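As a quick consistency check on these figures, throughput should roughly equal IOPS multiplied by the 64KB block size. The numbers line up if fio's MB/s column is read as 1024-based (MiB/s); that interpretation is an assumption about this fio version's reporting, not something stated in the result file. A minimal sketch:

```python
# Sanity check: MB/s ~= IOPS * block size.
# Values taken from the results overview above; the 1024-based unit
# for fio's MB/s column is an assumption, not stated in this file.
BLOCK_KB = 64

def expected_mb_per_s(iops):
    return iops * BLOCK_KB / 1024.0

# Sequential Write, Buffered: Yes, Direct: Yes -> reported 1141.93 MB/s
print(round(expected_mb_per_s(18228), 2))  # 1139.25, close to 1141.93
```

The small gap between the derived 1139.25 and the reported 1141.93 is expected: the reported MB/s and IOPS are each averaged over separate runs rather than derived from one another.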
Detailed results - Flexible IO Tester 2.1.13, OpenBenchmarking.org (MB/s and IOPS; more is better)
All tests: IO Engine: Libaio, Block Size: 64KB, Disk Target: /
Compiler options (all tests): (CC) gcc options: -rdynamic -std=gnu99 -O3 -ffast-math -include -lrt -laio -lm -lpthread -ldl

Sequential Write - Buffered: Yes - Direct: Yes:  1141.93 MB/s (SE +/- 12.34, N = 3);  18228 IOPS (SE +/- 181.50, N = 3)
Sequential Write - Buffered: Yes - Direct: No:   1156.13 MB/s (SE +/- 5.87, N = 3);   18384 IOPS (SE +/- 154.48, N = 3)
Sequential Write - Buffered: No - Direct: Yes:   794.37 MB/s (SE +/- 3.55, N = 3);    11738 IOPS (SE +/- 177.28, N = 5)
Sequential Write - Buffered: No - Direct: No:    798.44 MB/s (SE +/- 30.31, N = 6);   11886 IOPS (SE +/- 181.48, N = 5)
Sequential Read - Buffered: Yes - Direct: Yes:   1106.93 MB/s (SE +/- 7.80, N = 3);   17997 IOPS (SE +/- 136.93, N = 3)
Sequential Read - Buffered: Yes - Direct: No:    1016.80 MB/s (SE +/- 1.36, N = 3);   16406 IOPS (SE +/- 83.23, N = 3)
Sequential Read - Buffered: No - Direct: Yes:    3986.67 MB/s (SE +/- 43.03, N = 3);  64257 IOPS (SE +/- 1025.92, N = 3)
Sequential Read - Buffered: No - Direct: No:     3955.77 MB/s (SE +/- 73.72, N = 3);  64582 IOPS (SE +/- 905.12, N = 3)
Random Write - Buffered: Yes - Direct: Yes:      1011.87 MB/s (SE +/- 4.64, N = 3);   16184 IOPS (SE +/- 48.00, N = 3)
Random Write - Buffered: Yes - Direct: No:       1014.27 MB/s (SE +/- 6.75, N = 3);   16384 IOPS (SE +/- 139.71, N = 3)
Random Write - Buffered: No - Direct: Yes:       794.47 MB/s (SE +/- 0.80, N = 3);    12052 IOPS (SE +/- 205.47, N = 4)
Random Write - Buffered: No - Direct: No:        791.51 MB/s (SE +/- 3.24, N = 3);    12366 IOPS (SE +/- 88.70, N = 3)
Random Read - Buffered: Yes - Direct: Yes:       251.53 MB/s (SE +/- 2.23, N = 3);    3976 IOPS (SE +/- 70.48, N = 6)
Random Read - Buffered: Yes - Direct: No:        192.40 MB/s (SE +/- 0.93, N = 3);    2992 IOPS (SE +/- 8.99, N = 3)
Random Read - Buffered: No - Direct: Yes:        3391.63 MB/s (SE +/- 26.49, N = 3);  53994 IOPS (SE +/- 380.76, N = 3)
Random Read - Buffered: No - Direct: No:         3395.83 MB/s (SE +/- 13.78, N = 3);  54594 IOPS (SE +/- 160.79, N = 3)
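The SE figures above can be turned into relative standard errors to judge how stable each result is; the rows with elevated N (4 to 6 runs instead of 3) coincide with the noisier configurations, consistent with the Phoronix Test Suite's behavior of repeating a test when run-to-run variance is high (an interpretation, not stated in this file). A small sketch using three rows from the listing:

```python
# Relative standard error (SE as a percentage of the mean) for a few
# results copied from the detailed listing above.
results = {
    "Seq Write buf=N dir=N (MB/s)": (798.44, 30.31, 6),
    "Seq Write buf=Y dir=Y (MB/s)": (1141.93, 12.34, 3),
    "Rand Read buf=Y dir=Y (IOPS)": (3976, 70.48, 6),
}
for name, (mean, se, n) in results.items():
    rel = 100 * se / mean
    print(f"{name}: SE = {rel:.1f}% of mean, N = {n}")
```

The unbuffered, non-direct sequential write stands out at roughly 3.8% relative SE over 6 runs, versus about 1.1% for the buffered direct write, so its 798.44 MB/s figure carries noticeably more uncertainty than the others.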
Testing initiated at 23 July 2015 07:36 by user root.