Benchmarks for a future article.
Compare your own system(s) to this result file with the Phoronix Test Suite by running:
phoronix-test-suite benchmark 2301271-NE-XEONPLATI60
Xeon Platinum 8380 DODT Mitigation Impact
HTML result view exported from: https://openbenchmarking.org/result/2301271-NE-XEONPLATI60&sor&gru
Xeon Platinum 8380 DODT Mitigation Impact - System Under Test (identical hardware for both the "Linux w DODT" and "doitm=off" runs):
Processor: 2 x Intel Xeon Platinum 8380 @ 3.40GHz (80 Cores / 160 Threads)
Motherboard: Intel M50CYP2SB2U (SE5C6200.86B.0022.D08.2103221623 BIOS)
Chipset: Intel Ice Lake IEH
Memory: 512GB
Disk: 7682GB INTEL SSDPF2KX076TZ
Graphics: ASPEED
Monitor: VE228
Network: 2 x Intel X710 for 10GBASE-T + 2 x Intel E810-C for QSFP
OS: Ubuntu 22.10
Kernel: 6.2.0-rc5-phx-dodt (x86_64)
Desktop: GNOME Shell
Display Server: X Server 1.21.1.3
Vulkan: 1.3.211
Compiler: GCC 12.2.0
File-System: ext4
Screen Resolution: 1920x1080

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-Wbc0TK/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-Wbc0TK/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: intel_pstate performance (EPP: performance); CPU Microcode: 0xd000375
Python Details: Python 3.10.6
Security Details:
Linux w DODT: dodt: Mitigation of DOITM; itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Mitigation of Clear buffers, SMT vulnerable; retbleed: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Enhanced IBRS, IBPB: conditional, RSB filling, PBRSB-eIBRS: SW sequence; srbds: Not affected; tsx_async_abort: Not affected
doitm=off: identical to the above except dodt: Vulnerable
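The Security Details line above packs one status per vulnerability into a single string. As a minimal sketch (not part of the Phoronix Test Suite itself), the following parses a string in that " + "-separated format into a dictionary; the field names match the files the kernel exposes under /sys/devices/system/cpu/vulnerabilities/.

```python
# Hedged sketch: parse a Phoronix-style "Security Details" string into a
# {vulnerability: status} dict. The " + " separator and "name: status"
# layout mirror the Security Details line of this result file.
def parse_security_details(s):
    entries = {}
    for field in s.split(" + "):
        # partition on the first ": " so statuses containing colons
        # (e.g. "Mitigation of Enhanced IBRS IBPB: conditional") survive
        name, _, status = field.partition(": ")
        entries[name.strip()] = status.strip()
    return entries

details = ("dodt: Mitigation of DOITM + itlb_multihit: Not affected + "
           "l1tf: Not affected + meltdown: Not affected")
parsed = parse_security_details(details)
print(parsed["dodt"])  # Mitigation of DOITM
```

The same function applied to the doitm=off string would report "dodt: Vulnerable", which is the only status that differs between the two runs.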
Results overview: the exported side-by-side summary table is omitted here; every value in it is reproduced in the per-test sections below.
miniBUDE 20210901 - Implementation: OpenMP, Input Deck: BM1 (Billion Interactions/s, more is better): doitm=off: 94.94 (SE +/- 0.43, N = 3); Linux w DODT: 94.73 (SE +/- 0.72, N = 3). 1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm
Stress-NG 0.14.06 (Bogo Ops/s, more is better):
Test: Futex - doitm=off: 958957.41 (SE +/- 56672.45, N = 15); Linux w DODT: 699147.73 (SE +/- 6957.32, N = 3)
Test: MEMFD - doitm=off: 3624.42 (SE +/- 1.55, N = 3); Linux w DODT: 3592.88 (SE +/- 2.24, N = 3)
Test: Mutex - doitm=off: 41978074.41 (SE +/- 34997.86, N = 3); Linux w DODT: 41364067.75 (SE +/- 162165.00, N = 3)
Test: Crypto - Linux w DODT: 85533.02 (SE +/- 445.46, N = 3); doitm=off: 84975.84 (SE +/- 991.74, N = 3)
Test: Malloc - doitm=off: 201328390.34 (SE +/- 454146.57, N = 3); Linux w DODT: 195951844.81 (SE +/- 655475.74, N = 3)
Test: IO_uring - doitm=off: 26532.30 (SE +/- 1.30, N = 3); Linux w DODT: 26508.09 (SE +/- 16.56, N = 3)
Test: SENDFILE - doitm=off: 1146358.16 (SE +/- 8323.86, N = 3); Linux w DODT: 1128077.91 (SE +/- 8091.69, N = 3)
Test: x86_64 RdRand - Linux w DODT: 669198.10 (SE +/- 68.80, N = 3); doitm=off: 669178.00 (SE +/- 67.49, N = 3)
1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread
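The Futex result above shows the largest gap in this file. A quick calculation of the relative change (values taken from this result file) puts it in perspective, though the large standard error on the doitm=off side means the gap is noisy:

```python
# Relative impact of the DODT mitigation on the Stress-NG Futex result.
# Values are from this result file; note the doitm=off side carries a
# standard error of +/- 56672.45 over 15 runs, so treat the gap with care.
baseline = 958957.41   # doitm=off, Bogo Ops/s
mitigated = 699147.73  # Linux w DODT, Bogo Ops/s
delta_pct = (mitigated - baseline) / baseline * 100
print(f"{delta_pct:.1f}% change with DODT mitigation")  # -27.1% change ...
```

Most other results in this file differ by well under 1%, so the same calculation on, say, the MEMFD or Mutex numbers yields changes within run-to-run noise.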
nekRS 22.0 - Input: TurboPipe Periodic (FLOP/s, more is better): doitm=off: 298310333333 (SE +/- 1474277940.03, N = 3); Linux w DODT: 295621666667 (SE +/- 500223394.54, N = 3). 1. (CXX) g++ options: -fopenmp -O2 -march=native -mtune=native -ftree-vectorize -lmpi_cxx -lmpi
Kvazaar 2.2 (Frames Per Second, more is better):
Video Input: Bosphorus 4K - Video Preset: Super Fast - Linux w DODT: 47.06 (SE +/- 0.46, N = 15); doitm=off: 46.47 (SE +/- 0.63, N = 3)
Video Input: Bosphorus 4K - Video Preset: Ultra Fast - Linux w DODT: 47.95 (SE +/- 0.37, N = 10); doitm=off: 47.75 (SE +/- 0.60, N = 3)
1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt
uvg266 0.4.1 (Frames Per Second, more is better):
Video Input: Bosphorus 4K - Video Preset: Super Fast - doitm=off: 42.72 (SE +/- 0.36, N = 8); Linux w DODT: 42.68 (SE +/- 0.33, N = 10)
Video Input: Bosphorus 4K - Video Preset: Ultra Fast - doitm=off: 42.62 (SE +/- 0.18, N = 3); Linux w DODT: 42.61 (SE +/- 0.45, N = 3)
miniBUDE 20210901 - Implementation: OpenMP, Input Deck: BM1 (GFInst/s, more is better): doitm=off: 2373.49 (SE +/- 10.72, N = 3); Linux w DODT: 2368.12 (SE +/- 17.94, N = 3). 1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm
Xmrig 6.18.1 (H/s, more is better):
Variant: Monero - Hash Count: 1M - doitm=off: 26578.0 (SE +/- 48.01, N = 3); Linux w DODT: 26451.3 (SE +/- 76.73, N = 3)
Variant: Wownero - Hash Count: 1M - Linux w DODT: 41450.1 (SE +/- 149.86, N = 3); doitm=off: 41317.2 (SE +/- 35.05, N = 3)
1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc
Neural Magic DeepSparse 1.3.2 - Scenario: Asynchronous Multi-Stream (items/sec, more is better):
Model: NLP Document Classification, oBERT base uncased on IMDB - Linux w DODT: 48.51 (SE +/- 0.01, N = 3); doitm=off: 48.45 (SE +/- 0.02, N = 3)
Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - doitm=off: 902.06 (SE +/- 3.12, N = 3); Linux w DODT: 890.95 (SE +/- 2.39, N = 3)
Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Linux w DODT: 219.31 (SE +/- 0.42, N = 3); doitm=off: 218.78 (SE +/- 2.51, N = 3)
Model: CV Detection, YOLOv5s COCO - Linux w DODT: 316.86 (SE +/- 0.14, N = 3); doitm=off: 316.82 (SE +/- 1.00, N = 3)
Model: CV Classification, ResNet-50 ImageNet - doitm=off: 832.39 (SE +/- 0.57, N = 3); Linux w DODT: 830.57 (SE +/- 0.98, N = 3)
Model: NLP Text Classification, DistilBERT mnli - doitm=off: 455.10 (SE +/- 0.57, N = 3); Linux w DODT: 454.95 (SE +/- 0.28, N = 3)
Model: CV Segmentation, 90% Pruned YOLACT Pruned - doitm=off: 82.71 (SE +/- 0.60, N = 3); Linux w DODT: 81.88 (SE +/- 0.12, N = 3)
Model: NLP Text Classification, BERT base uncased SST2 - doitm=off: 224.05 (SE +/- 0.28, N = 3); Linux w DODT: 223.65 (SE +/- 0.20, N = 3)
Model: NLP Token Classification, BERT base uncased conll2003 - doitm=off: 48.49 (SE +/- 0.03, N = 3); Linux w DODT: 48.46 (SE +/- 0.18, N = 3)
Cryptsetup (more is better):
PBKDF2-sha512 (Iterations Per Second) - doitm=off: 1396866 (SE +/- 2233.59, N = 3); Linux w DODT: 1392546 (SE +/- 3203.15, N = 3)
PBKDF2-whirlpool (Iterations Per Second) - Linux w DODT: 583195 (SE +/- 1294.67, N = 3); doitm=off: 582551 (SE +/- 1624.74, N = 3)
AES-XTS 256b Encryption (MiB/s) - doitm=off: 3876.2 (SE +/- 7.45, N = 3); Linux w DODT: 3864.3 (SE +/- 16.70, N = 3)
AES-XTS 256b Decryption (MiB/s) - Linux w DODT: 3891.5 (SE +/- 17.78, N = 3); doitm=off: 3886.1 (SE +/- 3.34, N = 3)
Serpent-XTS 256b Encryption (MiB/s) - Linux w DODT: 559.5 (SE +/- 1.27, N = 3); doitm=off: 559.1 (SE +/- 1.35, N = 3)
Serpent-XTS 256b Decryption (MiB/s) - Linux w DODT: 527.6 (SE +/- 0.27, N = 3); doitm=off: 527.3 (SE +/- 0.09, N = 3)
Twofish-XTS 256b Encryption (MiB/s) - doitm=off: 351.5 (SE +/- 0.88, N = 3); Linux w DODT: 351.4 (SE +/- 0.92, N = 3)
Twofish-XTS 256b Decryption (MiB/s) - doitm=off: 358.5 (SE +/- 0.21, N = 3); Linux w DODT: 357.7 (SE +/- 0.17, N = 3)
AES-XTS 512b Encryption (MiB/s) - doitm=off: 3458.6 (SE +/- 3.15, N = 3); Linux w DODT: 3445.4 (SE +/- 13.51, N = 3)
AES-XTS 512b Decryption (MiB/s) - Linux w DODT: 3449.8 (SE +/- 13.35, N = 3); doitm=off: 3446.3 (SE +/- 2.54, N = 3)
Serpent-XTS 512b Encryption (MiB/s) - Linux w DODT: 561.0 (SE +/- 0.45, N = 2); doitm=off: 560.5 (SE +/- 0.06, N = 3)
Twofish-XTS 512b Encryption (MiB/s) - doitm=off: 352.2 (SE +/- 0.03, N = 3); Linux w DODT: 352.2 (SE +/- 0.12, N = 3)
Twofish-XTS 512b Decryption (MiB/s) - doitm=off: 358.1 (SE +/- 0.23, N = 3); Linux w DODT: 357.5 (SE +/- 0.25, N = 2)
Serpent-XTS 512b Decryption (MiB/s) - doitm=off: 527.3; Linux w DODT: 527.2
Crypto++ 8.2 (MiB/second, more is better):
Test: Keyed Algorithms - Linux w DODT: 565.09 (SE +/- 0.07, N = 3); doitm=off: 564.75 (SE +/- 0.20, N = 3)
Test: Unkeyed Algorithms - doitm=off: 361.88 (SE +/- 0.02, N = 3); Linux w DODT: 361.67 (SE +/- 0.04, N = 3)
1. (CXX) g++ options: -g2 -O3 -fPIC -pthread -pipe
WebP Image Encode 1.2.4 (MP/s, more is better):
Encode Settings: Default - doitm=off: 13.09 (SE +/- 0.01, N = 3); Linux w DODT: 13.09 (SE +/- 0.01, N = 3)
Encode Settings: Quality 100 - doitm=off: 8.43 (SE +/- 0.01, N = 3); Linux w DODT: 8.42 (SE +/- 0.00, N = 3)
Encode Settings: Quality 100, Highest Compression - doitm=off: 2.81 (SE +/- 0.00, N = 3); Linux w DODT: 2.81 (SE +/- 0.00, N = 3)
1. (CC) gcc options: -fvisibility=hidden -O2 -lm
RocksDB 7.9.2 (Op/s, more is better):
Test: Random Read - doitm=off: 278886510 (SE +/- 2757403.57, N = 3); Linux w DODT: 278841535 (SE +/- 2251545.82, N = 3)
Test: Update Random - Linux w DODT: 592576 (SE +/- 1503.74, N = 3); doitm=off: 588787 (SE +/- 3603.95, N = 3)
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
CockroachDB 22.2 (ops/s, more is better; Linux w DODT result only):
Workload: MoVR - Concurrency: 256 - 974.8 (SE +/- 3.64, N = 3)
Workload: KV, 10% Reads - Concurrency: 256 - 81359.8 (SE +/- 849.68, N = 15)
Workload: KV, 60% Reads - Concurrency: 256 - 102448.9 (SE +/- 1847.96, N = 15)
Workload: KV, 95% Reads - Concurrency: 256 - 119187.4 (SE +/- 2483.59, N = 15)
ClickHouse 22.12.3.5 - 100M Rows Hits Dataset (Queries Per Minute, Geo Mean, more is better):
First Run / Cold Cache - Linux w DODT: 398.75 (SE +/- 3.64, N = 3, MIN: 34.82 / MAX: 4615.38); doitm=off: 392.39 (SE +/- 1.76, N = 3, MIN: 33.75 / MAX: 5000)
Second Run - Linux w DODT: 423.13 (SE +/- 1.99, N = 3, MIN: 35.89 / MAX: 5000); doitm=off: 421.39 (SE +/- 1.56, N = 3, MIN: 34.86 / MAX: 5454.55)
Third Run - Linux w DODT: 427.42 (SE +/- 3.25, N = 3, MIN: 35.99 / MAX: 5454.55); doitm=off: 424.96 (SE +/- 0.79, N = 3, MIN: 36.19 / MAX: 5454.55)
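The ClickHouse figures are reported as "Queries Per Minute, Geo Mean": a geometric mean over the per-query rates of the run, which damps the influence of outlier queries. A minimal sketch of that aggregation, using hypothetical per-query values (not taken from this result file):

```python
# Geometric mean as used for the "Queries Per Minute, Geo Mean" metric.
# The qpm values below are hypothetical, for illustration only.
import math

def geo_mean(values):
    # exp of the mean of logs; equivalent to the n-th root of the product
    return math.exp(sum(math.log(v) for v in values) / len(values))

qpm = [120.0, 480.0, 960.0]  # hypothetical per-query rates
print(round(geo_mean(qpm), 2))
```

Note how the result sits far below the arithmetic mean (520.0 for these values): one slow query pulls a geometric mean down much harder, which is why the MIN/MAX spread reported alongside each result is so wide.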
OpenSSL (sign/s, more is better): doitm=off: 17837.9 (SE +/- 20.72, N = 3); Linux w DODT: 17781.6 (SE +/- 63.70, N = 3). 1. OpenSSL 3.0.5 5 Jul 2022 (Library: OpenSSL 3.0.5 5 Jul 2022)
spaCy 3.4.1 (tokens/sec, more is better):
Model: en_core_web_lg - Linux w DODT: 10971 (SE +/- 10.12, N = 3); doitm=off: 10888 (SE +/- 69.82, N = 3)
Model: en_core_web_trf - doitm=off: 3225 (SE +/- 48.89, N = 3); Linux w DODT: 3169 (SE +/- 30.17, N = 3)
PostgreSQL 15 (TPS, more is better):
Scaling Factor: 100 - Clients: 1000 - Mode: Read Only - Linux w DODT: 2021337 (SE +/- 89751.30, N = 12); doitm=off: 2020130 (SE +/- 80874.14, N = 12)
Scaling Factor: 100 - Clients: 800 - Mode: Read Write - Linux w DODT: 75223 (SE +/- 197.73, N = 3); doitm=off: 75123 (SE +/- 295.95, N = 3)
Scaling Factor: 100 - Clients: 1000 - Mode: Read Write - Linux w DODT: 68328 (SE +/- 95.69, N = 3); doitm=off: 68055 (SE +/- 295.57, N = 3)
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
OpenSSL (verify/s, more is better): doitm=off: 1188684.8 (SE +/- 1026.50, N = 3); Linux w DODT: 1188255.4 (SE +/- 1656.25, N = 3). 1. OpenSSL 3.0.5 5 Jul 2022 (Library: OpenSSL 3.0.5 5 Jul 2022)
PostgreSQL 15 (Average Latency, ms, fewer is better):
Scaling Factor: 100 - Clients: 1000 - Mode: Read Only - doitm=off: 0.504 (SE +/- 0.021, N = 12); Linux w DODT: 0.506 (SE +/- 0.024, N = 12)
Scaling Factor: 100 - Clients: 800 - Mode: Read Write - Linux w DODT: 10.64 (SE +/- 0.03, N = 3); doitm=off: 10.65 (SE +/- 0.04, N = 3)
Scaling Factor: 100 - Clients: 1000 - Mode: Read Write - Linux w DODT: 14.64 (SE +/- 0.02, N = 3); doitm=off: 14.70 (SE +/- 0.06, N = 3)
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
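The TPS and average-latency numbers above are two views of the same closed-loop pgbench run: with a fixed client count, average latency is approximately clients / TPS. A quick sanity check against the Read Only result (values from this file, doitm=off side; the small residual is overhead the approximation ignores):

```python
# Closed-loop approximation: avg latency (ms) ~= clients / TPS * 1000.
# Values from this result file (100 - 1000 - Read Only, doitm=off).
clients = 1000
tps = 2020130
expected_latency_ms = clients / tps * 1000
print(f"{expected_latency_ms:.3f} ms")  # 0.495 ms; the file reports 0.504
```

The same check on the Read Write rows (e.g. 1000 / 68055 * 1000 = 14.69 ms vs the reported 14.70 ms) lines up even more closely.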
Neural Magic DeepSparse 1.3.2 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better):
Model: NLP Document Classification, oBERT base uncased on IMDB - doitm=off: 821.52 (SE +/- 0.15, N = 3); Linux w DODT: 823.50 (SE +/- 0.30, N = 3)
Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - doitm=off: 44.31 (SE +/- 0.16, N = 3); Linux w DODT: 44.86 (SE +/- 0.12, N = 3)
Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Linux w DODT: 182.20 (SE +/- 0.39, N = 3); doitm=off: 182.70 (SE +/- 2.12, N = 3)
Model: CV Detection, YOLOv5s COCO - Linux w DODT: 125.91 (SE +/- 0.09, N = 3); doitm=off: 125.97 (SE +/- 0.38, N = 3)
Model: CV Classification, ResNet-50 ImageNet - doitm=off: 48.01 (SE +/- 0.03, N = 3); Linux w DODT: 48.12 (SE +/- 0.06, N = 3)
Model: NLP Text Classification, DistilBERT mnli - doitm=off: 87.82 (SE +/- 0.10, N = 3); Linux w DODT: 87.84 (SE +/- 0.05, N = 3)
Model: CV Segmentation, 90% Pruned YOLACT Pruned - doitm=off: 481.39 (SE +/- 3.55, N = 3); Linux w DODT: 487.36 (SE +/- 0.57, N = 3)
Model: NLP Text Classification, BERT base uncased SST2 - doitm=off: 178.28 (SE +/- 0.20, N = 3); Linux w DODT: 178.62 (SE +/- 0.19, N = 3)
Model: NLP Token Classification, BERT base uncased conll2003 - doitm=off: 818.30 (SE +/- 2.36, N = 3); Linux w DODT: 821.21 (SE +/- 0.79, N = 3)
OpenRadioss 2022.10.13 (Seconds, fewer is better):
Model: Bumper Beam - doitm=off: 84.63 (SE +/- 0.06, N = 3); Linux w DODT: 85.13 (SE +/- 0.28, N = 3)
Model: Bird Strike on Windshield - Linux w DODT: 138.61 (SE +/- 0.54, N = 3); doitm=off: 139.19 (SE +/- 0.51, N = 3)
Model: Rubber O-Ring Seal Installation - doitm=off: 78.00 (SE +/- 0.08, N = 3); Linux w DODT: 78.06 (SE +/- 0.01, N = 3)
libavif avifenc 0.11 (Seconds, fewer is better):
Encoder Speed: 0 - Linux w DODT: 79.48 (SE +/- 0.33, N = 3); doitm=off: 79.68 (SE +/- 0.31, N = 3)
Encoder Speed: 2 - doitm=off: 44.00 (SE +/- 0.23, N = 3); Linux w DODT: 44.04 (SE +/- 0.03, N = 3)
Encoder Speed: 6 - doitm=off: 3.499 (SE +/- 0.014, N = 3); Linux w DODT: 3.519 (SE +/- 0.031, N = 15)
1. (CXX) g++ options: -O3 -fPIC -lm
Timed Godot Game Engine Compilation 3.2.3 - Time To Compile (Seconds, fewer is better): Linux w DODT: 44.65 (SE +/- 0.20, N = 3); doitm=off: 44.95 (SE +/- 0.31, N = 3)
Timed Linux Kernel Compilation 6.1 (Seconds, fewer is better):
Build: defconfig - doitm=off: 34.39 (SE +/- 0.40, N = 4); Linux w DODT: 34.52 (SE +/- 0.39, N = 4)
Build: allmodconfig - Linux w DODT: 261.03 (SE +/- 0.63, N = 3); doitm=off: 261.45 (SE +/- 0.84, N = 3)
Timed LLVM Compilation 13.0 - Build System: Ninja (Seconds, fewer is better): Linux w DODT: 146.96 (SE +/- 0.07, N = 3); doitm=off: 147.34 (SE +/- 0.50, N = 3)
Timed Node.js Compilation 18.8 - Time To Compile (Seconds, fewer is better): doitm=off: 170.36 (SE +/- 0.77, N = 3); Linux w DODT: 170.93 (SE +/- 1.24, N = 3)
Blender 3.4 (Seconds, fewer is better):
Blend File: BMW27 - Compute: CPU-Only - doitm=off: 23.30 (SE +/- 0.04, N = 3); Linux w DODT: 23.44 (SE +/- 0.07, N = 3)
Blend File: Classroom - Compute: CPU-Only - doitm=off: 62.68 (SE +/- 0.06, N = 3); Linux w DODT: 62.71 (SE +/- 0.10, N = 3)
Phoronix Test Suite v10.8.4