Docker testing on Ubuntu 20.04.3 LTS via the Phoronix Test Suite.
EXT4:
Processor: 2 x Intel Xeon E5-2630 v3 @ 3.20GHz (16 Cores / 32 Threads), Motherboard: Dell 0CNCJW (2.2.5 BIOS), Memory: 64GB, Disk: 731GB PERC H730P Mini, Graphics: mgadrmfb
OS: Ubuntu 20.04.3 LTS, Kernel: 3.10.0-1160.36.2.el7.x86_64 (x86_64), Compiler: GCC 9.3.0, File-System: ext4, Screen Resolution: 1024x768, System Layer: Docker
Kernel Notes: Transparent Huge Pages: never
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-HskZEa/gcc-9-9.3.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_pstate performance - CPU Microcode: 0x44
Disk Scheduler Notes: DEADLINE
Java Notes: OpenJDK Runtime Environment (build 11.0.13+8-Ubuntu-0ubuntu1.20.04)
Security Notes: itlb_multihit: KVM: Mitigation of Split huge pages + l1tf: Mitigation of PTE Inversion; VMX: conditional cache flushes SMT vulnerable + mds: Mitigation of Clear buffers; SMT vulnerable + meltdown: Mitigation of PTI + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of Load fences usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full retpoline IBPB + srbds: Not affected + tsx_async_abort: Not affected
XFS:
OS: Ubuntu 20.04.3 LTS, Kernel: 3.10.0-1160.36.2.el7.x86_64 (x86_64), Compiler: GCC 9.3.0, File-System: xfs, Screen Resolution: 1024x768, System Layer: Docker
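The kernel notes above report Transparent Huge Pages set to never, the DEADLINE disk scheduler, and the intel_pstate performance governor. A minimal sketch for re-checking those tunables from sysfs before a rerun, assuming the standard sysfs layout; the device name sda and cpu0 are placeholders for whichever disk and CPU you care about.

    # Sketch: read the kernel tunables reported in the notes above straight from
    # sysfs, so a rerun can confirm it matches this configuration. "sda"/"cpu0"
    # are placeholder names; adjust for the device under test.
    from pathlib import Path

    TUNABLES = {
        "Transparent Huge Pages": "/sys/kernel/mm/transparent_hugepage/enabled",
        "Disk scheduler (sda)": "/sys/block/sda/queue/scheduler",
        "Scaling governor (cpu0)": "/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor",
    }

    for name, path in TUNABLES.items():
        p = Path(path)
        value = p.read_text().strip() if p.exists() else "<not available>"
        # For THP and the scheduler, the active option is shown in [brackets].
        print(f"{name}: {value}")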
[EXT4 vs. XFS comparison graph (OpenBenchmarking.org summary): per-test percentage differences, baseline-normalized. The largest deltas are in Apache HBase (Random Read - 128: ~194%; Random Write - 16: 126%; Random Read - 64: ~113%; most other HBase write and read configurations between roughly 18% and 96%), followed by PostgreSQL pgbench 1000 - 500 - Read Write at ~18.7%; the remaining pgbench, Redis, Memtier, Cassandra, SQLite, RocksDB, and HBase results differ by roughly 2-6%.]
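The percentages in the comparison summary above are per-test spreads between the EXT4 and XFS results. A minimal sketch of that arithmetic, assuming it is computed as (larger / smaller - 1) x 100 for each metric; the sample pair below is illustrative and not taken from a specific row.

    # Sketch of the percentage-delta arithmetic assumed for the summary graph:
    # how much larger the bigger of the two results is, in percent.
    def pct_delta(a: float, b: float) -> float:
        hi, lo = max(a, b), min(a, b)
        return (hi / lo - 1.0) * 100.0

    # Illustrative only: a 2,940 vs 1,000 ops/sec pair shows as a ~194% delta,
    # the same scale as the largest HBase gap in the summary above.
    print(round(pct_delta(2940, 1000), 1))  # -> 194.0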
db-fs-results leveldb: Hot Read leveldb: Fill Sync leveldb: Fill Sync leveldb: Overwrite leveldb: Overwrite leveldb: Rand Fill leveldb: Rand Fill leveldb: Rand Read leveldb: Seek Rand leveldb: Rand Delete leveldb: Seq Fill leveldb: Seq Fill sqlite: 1 couchdb: 100 - 1000 - 24 keydb: pgbench: 1 - 1 - Read Only pgbench: 1 - 1 - Read Only - Average Latency pgbench: 1 - 1 - Read Write pgbench: 1 - 1 - Read Write - Average Latency pgbench: 1 - 50 - Read Only pgbench: 1 - 50 - Read Only - Average Latency pgbench: 1 - 100 - Read Only pgbench: 1 - 100 - Read Only - Average Latency pgbench: 1 - 250 - Read Only pgbench: 1 - 250 - Read Only - Average Latency pgbench: 1 - 50 - Read Write pgbench: 1 - 50 - Read Write - Average Latency pgbench: 1 - 500 - Read Only pgbench: 1 - 500 - Read Only - Average Latency pgbench: 100 - 1 - Read Only pgbench: 100 - 1 - Read Only - Average Latency pgbench: 1 - 100 - Read Write pgbench: 1 - 100 - Read Write - Average Latency pgbench: 1 - 250 - Read Write pgbench: 1 - 250 - Read Write - Average Latency pgbench: 1 - 500 - Read Write pgbench: 1 - 500 - Read Write - Average Latency pgbench: 100 - 1 - Read Write pgbench: 100 - 1 - Read Write - Average Latency pgbench: 100 - 50 - Read Only pgbench: 100 - 50 - Read Only - Average Latency pgbench: 1000 - 1 - Read Only pgbench: 1000 - 1 - Read Only - Average Latency pgbench: 100 - 100 - Read Only pgbench: 100 - 100 - Read Only - Average Latency pgbench: 100 - 250 - Read Only pgbench: 100 - 250 - Read Only - Average Latency pgbench: 100 - 50 - Read Write pgbench: 100 - 50 - Read Write - Average Latency pgbench: 100 - 500 - Read Only pgbench: 100 - 500 - Read Only - Average Latency pgbench: 1000 - 1 - Read Write pgbench: 1000 - 1 - Read Write - Average Latency pgbench: 1000 - 50 - Read Only pgbench: 1000 - 50 - Read Only - Average Latency pgbench: 100 - 100 - Read Write pgbench: 100 - 100 - Read Write - Average Latency pgbench: 100 - 250 - Read Write pgbench: 100 - 250 - Read Write - Average Latency pgbench: 100 - 500 - Read Write pgbench: 100 - 500 - Read Write - Average Latency pgbench: 1000 - 100 - Read Only pgbench: 1000 - 100 - Read Only - Average Latency pgbench: 1000 - 250 - Read Only pgbench: 1000 - 250 - Read Only - Average Latency pgbench: 1000 - 50 - Read Write pgbench: 1000 - 50 - Read Write - Average Latency pgbench: 1000 - 500 - Read Only pgbench: 1000 - 500 - Read Only - Average Latency pgbench: 1000 - 100 - Read Write pgbench: 1000 - 100 - Read Write - Average Latency pgbench: 1000 - 250 - Read Write pgbench: 1000 - 250 - Read Write - Average Latency pgbench: 1000 - 500 - Read Write pgbench: 1000 - 500 - Read Write - Average Latency sqlite-speedtest: Timed Time - Size 1,000 memtier-benchmark: Redis redis: LPOP redis: SADD redis: LPUSH redis: GET redis: SET cassandra: Reads cassandra: Writes cassandra: Mixed 1:1 cassandra: Mixed 1:3 rocksdb: Rand Fill rocksdb: Rand Read rocksdb: Update Rand rocksdb: Seq Fill rocksdb: Rand Fill Sync rocksdb: Read While Writing rocksdb: Read Rand Write Rand hbase: Increment - 1 hbase: Increment - 1 hbase: Increment - 4 hbase: Increment - 4 hbase: Increment - 16 hbase: Increment - 16 hbase: Increment - 32 hbase: Increment - 32 hbase: Increment - 64 hbase: Increment - 64 hbase: Increment - 128 hbase: Increment - 128 hbase: Rand Read - 1 hbase: Rand Read - 1 hbase: Rand Read - 4 hbase: Rand Read - 4 hbase: Rand Read - 16 hbase: Rand Read - 16 hbase: Rand Read - 32 hbase: Rand Read - 32 hbase: Rand Read - 64 hbase: Rand Read - 64 hbase: Rand Write - 1 hbase: Rand Write - 1 hbase: Rand 
Write - 4 hbase: Rand Write - 4 hbase: Rand Read - 128 hbase: Rand Read - 128 hbase: Rand Write - 16 hbase: Rand Write - 16 hbase: Rand Write - 32 hbase: Rand Write - 32 hbase: Rand Write - 64 hbase: Rand Write - 64 hbase: Rand Write - 128 hbase: Rand Write - 128 hbase: Seq Read - 1 hbase: Seq Read - 1 hbase: Seq Read - 4 hbase: Seq Read - 4 hbase: Seq Read - 16 hbase: Seq Read - 16 hbase: Seq Read - 32 hbase: Seq Read - 32 hbase: Seq Read - 64 hbase: Seq Read - 64 hbase: Seq Write - 1 hbase: Seq Write - 1 hbase: Seq Write - 4 hbase: Seq Write - 4 hbase: Async Rand Read - 1 hbase: Async Rand Read - 1 hbase: Async Rand Read - 4 hbase: Async Rand Read - 4 hbase: Seq Read - 128 hbase: Seq Read - 128 hbase: Seq Write - 16 hbase: Seq Write - 16 hbase: Seq Write - 32 hbase: Seq Write - 32 hbase: Seq Write - 64 hbase: Seq Write - 64 hbase: Async Rand Read - 16 hbase: Async Rand Read - 16 hbase: Async Rand Read - 32 hbase: Async Rand Read - 32 hbase: Async Rand Read - 64 hbase: Async Rand Read - 64 hbase: Async Rand Write - 1 hbase: Async Rand Write - 1 hbase: Async Rand Write - 4 hbase: Async Rand Write - 4 hbase: Seq Write - 128 hbase: Seq Write - 128 hbase: Async Rand Read - 128 hbase: Async Rand Read - 128 hbase: Async Rand Write - 16 hbase: Async Rand Write - 16 hbase: Async Rand Write - 32 hbase: Async Rand Write - 32 hbase: Async Rand Write - 64 hbase: Async Rand Write - 64 hbase: Async Rand Write - 128 hbase: Async Rand Write - 128 influxdb: 4 - 10000 - 2,5000,1 - 10000 influxdb: 64 - 10000 - 2,5000,1 - 10000 influxdb: 1024 - 10000 - 2,5000,1 - 10000 EXT4 XFS 35.284 6.4 543.771 12.2 288.497 12.3 287.328 35.393 44.143 288.116 12.2 288.300 3.159 112.380 321216.81 13247 0.076 1880 0.532 224084 0.223 224756 0.445 239272 1.045 3627 13.784 213610 2.341 12877 0.078 2733 36.597 1433 177.014 771 668.755 1751 0.571 215937 0.232 10983 0.091 211082 0.474 224047 1.116 12061 4.148 195781 2.554 1584 0.631 196267 0.255 12050 8.300 11614 21.527 11047 45.280 191884 0.521 197740 1.264 2855 17.514 169958 2.942 2852 35.072 3242 77.109 4223 118.403 149.386 1549937.94 880911.58 899392.79 877540.98 873340.33 878810.46 69078 78980 67250 66998 146594 48097925 133336 148183 65263 2410987 571707 4452 223 12905 308 33959 468 42608 746 42684 1493 41978 3038 5980 166 23872 166 70930 223 90063 352 100261 633 53879 18 162144 45 107030 1185 147758 165 124449 318 110431 667 110151 1316 6681 148 21846 182 60689 262 73558 432 81765 778 65319 14 197735 30 6392 155 21404 185 95539 1332 347807 60 317972 133 282855 246 54206 294 67261 473 46162 1381 2597 383 8403 474 259064 499 34559 3720 21724 734 27839 1146 30182 2115 30000 4254 859395.6 886748.9 867557.9 35.252 6.5 539.802 12.1 292.072 12.2 288.485 35.529 44.138 289.401 12.2 288.983 3.242 110.569 319106.41 13398 0.075 1888 0.530 225698 0.222 225630 0.443 242059 1.033 3587 13.939 218102 2.292 12840 0.078 2738 36.527 1479 169.336 772 671.078 1743 0.574 217222 0.230 10911 0.092 215740 0.464 226582 1.103 12322 4.059 197404 2.533 1598 0.626 196550 0.254 12364 8.088 11991 20.852 11387 43.921 194230 0.515 194410 1.286 2847 17.560 176325 2.836 2888 34.632 3698 67.610 5014 99.725 149.718 1490596.16 873824.56 883258.92 876195.46 882156.81 857147.73 70798 78269 67488 65175 149915 48239829 138575 149787 65252 2395428 645769 3782 263 12244 325 32349 491 41038 775 42998 1482 41725 3058 6648 149 22927 173 60216 263 67957 468 47175 1352 54359 17 156727 50 36719 3484 289850 73 219181 175 152697 449 149520 855 21176 192 49038 325 63518 502 77940 819 65732 14 201341 33 6214 160 21565 184 97724 
1300 398393 51 348681 98 274321 243 55966 569 46059 1387 2572 387 8556 466 284045 449 34113 3778 21717 734 27685 1153 29806 2141 29866 4273 848341.6 877126.2 860201.8
OpenBenchmarking.org
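As a quick consistency check on the pgbench figures in the listing above: in a closed-loop run, average latency is essentially clients divided by throughput. The sketch below applies that to three EXT4 read-write pairs as I read them off the flattened export (scaling factor 1000; TPS followed by average latency in ms); the pairing is my interpretation of the export, not an authoritative table.

    # Sanity check: for a closed-loop pgbench run, avg latency (ms) ~= clients / TPS * 1000.
    # Pairs below are read off the EXT4 values in the flattened listing above
    # (scaling factor 1000, read-write); the pairing is my reading of the export.
    ext4_rw = {
        100: (2852, 35.072),   # clients: (TPS, reported avg latency in ms)
        250: (3242, 77.109),
        500: (4223, 118.403),
    }

    for clients, (tps, reported_ms) in ext4_rw.items():
        predicted_ms = clients / tps * 1000.0
        print(f"{clients:>3} clients: predicted {predicted_ms:6.1f} ms vs reported {reported_ms:6.1f} ms")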
Clients: 8
EXT4: ./mysqlslap: 3: ./bin/mysqlslap: not found
XFS: ./mysqlslap: 3: ./bin/mysqlslap: not found
Clients: 16
EXT4: ./mysqlslap: 3: ./bin/mysqlslap: not found
XFS: ./mysqlslap: 3: ./bin/mysqlslap: not found
Clients: 32
EXT4: ./mysqlslap: 3: ./bin/mysqlslap: not found
XFS: ./mysqlslap: 3: ./bin/mysqlslap: not found
Clients: 64
EXT4: ./mysqlslap: 3: ./bin/mysqlslap: not found
XFS: ./mysqlslap: 3: ./bin/mysqlslap: not found
Clients: 128
EXT4: ./mysqlslap: 3: ./bin/mysqlslap: not found
XFS: ./mysqlslap: 3: ./bin/mysqlslap: not found
Clients: 256
EXT4: ./mysqlslap: 3: ./bin/mysqlslap: not found
XFS: ./mysqlslap: 3: ./bin/mysqlslap: not found
Clients: 512
EXT4: ./mysqlslap: 3: ./bin/mysqlslap: not found
XFS: ./mysqlslap: 3: ./bin/mysqlslap: not found
Clients: 1024
EXT4: ./mysqlslap: 3: ./bin/mysqlslap: not found
XFS: ./mysqlslap: 3: ./bin/mysqlslap: not found
Clients: 2048
EXT4: ./mysqlslap: 3: ./bin/mysqlslap: not found
XFS: ./mysqlslap: 3: ./bin/mysqlslap: not found
Clients: 4096
EXT4: ./mysqlslap: 3: ./bin/mysqlslap: not found
XFS: ./mysqlslap: 3: ./bin/mysqlslap: not found
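Every mysqlslap entry above fails with the same shell error: the test profile's ./bin/mysqlslap binary is not present, so no client-count result was produced on either filesystem. Below is a minimal pre-flight sketch that checks for the expected binary before launching a run; the relative path mirrors the error message, and the rest of the local layout is assumed.

    # Pre-flight sketch: verify the benchmark binary exists before starting a run,
    # so a missing build (as in the mysqlslap failures above) is caught up front.
    # The relative path mirrors "./bin/mysqlslap: not found"; the exact install
    # location on a given system is an assumption.
    import os
    import sys

    MYSQLSLAP = os.path.join(".", "bin", "mysqlslap")

    if not (os.path.isfile(MYSQLSLAP) and os.access(MYSQLSLAP, os.X_OK)):
        sys.exit(f"{MYSQLSLAP} is missing or not executable; rebuild/reinstall the test profile first.")
    print(f"{MYSQLSLAP} found; proceeding with the client-count sweep.")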
Scaling Factor: 10000 - Clients: 1 - Mode: Read Only
EXT4: pgbench: error: client 0 aborted in command 0 (set) of script 0; evaluation of meta-command failed
XFS: pgbench: error: client 0 aborted in command 0 (set) of script 0; evaluation of meta-command failed
Scaling Factor: 10000 - Clients: 1 - Mode: Read Write
EXT4: pgbench: error: client 0 aborted in command 0 (set) of script 0; evaluation of meta-command failed
XFS: pgbench: error: client 0 aborted in command 0 (set) of script 0; evaluation of meta-command failed
Scaling Factor: 10000 - Clients: 50 - Mode: Read Only
EXT4: pgbench: error: client 33 aborted in command 0 (set) of script 0; evaluation of meta-command failed
XFS: pgbench: error: client 37 aborted in command 0 (set) of script 0; evaluation of meta-command failed
Scaling Factor: 10000 - Clients: 100 - Mode: Read Only
EXT4: pgbench: error: client 11 aborted in command 0 (set) of script 0; evaluation of meta-command failed
XFS: pgbench: error: client 93 aborted in command 0 (set) of script 0; evaluation of meta-command failed
Scaling Factor: 10000 - Clients: 250 - Mode: Read Only
EXT4: pgbench: error: client 15 aborted in command 0 (set) of script 0; evaluation of meta-command failed
XFS: pgbench: error: client 191 aborted in command 0 (set) of script 0; evaluation of meta-command failed
Scaling Factor: 10000 - Clients: 50 - Mode: Read Write
EXT4: pgbench: error: client 23 aborted in command 0 (set) of script 0; evaluation of meta-command failed
XFS: pgbench: error: client 3 aborted in command 0 (set) of script 0; evaluation of meta-command failed
Scaling Factor: 10000 - Clients: 500 - Mode: Read Only
EXT4: pgbench: error: client 207 aborted in command 0 (set) of script 0; evaluation of meta-command failed
XFS: pgbench: error: client 143 aborted in command 0 (set) of script 0; evaluation of meta-command failed
Scaling Factor: 10000 - Clients: 100 - Mode: Read Write
EXT4: pgbench: error: client 15 aborted in command 0 (set) of script 0; evaluation of meta-command failed
XFS: pgbench: error: client 11 aborted in command 0 (set) of script 0; evaluation of meta-command failed
Scaling Factor: 10000 - Clients: 250 - Mode: Read Write
EXT4: pgbench: error: client 55 aborted in command 0 (set) of script 0; evaluation of meta-command failed
XFS: pgbench: error: client 15 aborted in command 0 (set) of script 0; evaluation of meta-command failed
Scaling Factor: 10000 - Clients: 500 - Mode: Read Write
EXT4: pgbench: error: client 159 aborted in command 0 (set) of script 0; evaluation of meta-command failed
XFS: pgbench: error: client 63 aborted in command 0 (set) of script 0; evaluation of meta-command failed
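All of the scaling-factor-10000 pgbench configurations abort in the first meta-command (command 0, a \set) of the built-in script on both filesystems, so none of them produced a result. A minimal sketch for re-running one failing configuration outside the harness and capturing its error output; the database name, connection defaults, and 60-second duration are assumptions, while the pgbench flags used (-i, -s, -S, -c, -j, -T) are standard.

    # Sketch: re-run one failing pgbench configuration directly and capture stderr,
    # so the meta-command failure above can be inspected outside the test harness.
    # Database name, connection defaults, and the 60 s duration are assumptions.
    import subprocess

    DB = "pgbench_test"                 # assumed database name
    SCALE, CLIENTS, THREADS = 10000, 50, 16

    def run(cmd):
        print("$", " ".join(cmd))
        result = subprocess.run(cmd, capture_output=True, text=True)
        print(result.stdout)
        if result.returncode != 0:
            print("stderr:", result.stderr)  # e.g. "client N aborted in command 0 (set) ..."
        return result.returncode

    run(["pgbench", "-i", "-s", str(SCALE), DB])                   # initialize at scale 10000
    run(["pgbench", "-S", "-c", str(CLIENTS), "-j", str(THREADS),  # read-only run
         "-T", "60", DB])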
Memtier_benchmark
Memtier_benchmark is a NoSQL Redis/Memcache traffic generation and benchmarking tool. This test profile currently stresses only the Redis protocol with basic options: a 1:1 SET/GET ratio, a pipeline depth of 30, 100 clients per thread, and a thread count equal to the number of CPU cores/threads present. Patches to extend the test are welcome as always. Learn more via the OpenBenchmarking.org test page.
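A minimal sketch of driving memtier_benchmark with the configuration described above; the Redis host/port and the CPU-count lookup are assumptions, and --ratio, --pipeline, --clients, and --threads are standard memtier_benchmark options.

    # Sketch: invoke memtier_benchmark with the options described above
    # (1:1 SET/GET, pipeline 30, 100 clients per thread, threads = CPU count).
    # Host/port assume a local Redis instance.
    import os
    import subprocess

    threads = os.cpu_count() or 1

    cmd = [
        "memtier_benchmark",
        "--server", "127.0.0.1", "--port", "6379",
        "--protocol", "redis",
        "--ratio", "1:1",        # SET:GET ratio
        "--pipeline", "30",
        "--clients", "100",      # clients per thread
        "--threads", str(threads),
    ]
    subprocess.run(cmd, check=True)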
Test: Increment - Clients: 256
EXT4: Test failed to run.
XFS: Test failed to run.
Test: Random Read - Clients: 256
EXT4: Test failed to run.
XFS: Test failed to run.