Xeon Platinum 8380 DODT Mitigation Impact

Benchmarks for a future article.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2301271-NE-XEONPLATI60

Tests covered, by category:

- Timed Code Compilation: 4 tests
- C/C++ Compiler Tests: 4 tests
- CPU Massive: 6 tests
- Creator Workloads: 6 tests
- Cryptography: 3 tests
- Database Test Suite: 4 tests
- Encoding: 3 tests
- Game Development: 2 tests
- HPC - High Performance Computing: 5 tests
- Imaging: 2 tests
- Common Kernel Benchmarks: 3 tests
- Machine Learning: 2 tests
- Multi-Core: 9 tests
- OpenMPI Tests: 2 tests
- Programmer / Developer System Benchmarks: 5 tests
- Python Tests: 5 tests
- Server: 4 tests
- Server CPU Tests: 4 tests
- Video Encoding: 3 tests


Run Management

Highlight
Result
Hide
Result
Result
Identifier
View Logs
Performance Per
Dollar
Date
Run
  Test
  Duration
Linux w DODT
January 26 2023
  6 Hours, 40 Minutes
doitm=off
January 26 2023
  5 Hours, 15 Minutes
Invert Hiding All Results Option
  5 Hours, 57 Minutes
Only show results matching title/arguments (delimit multiple options with a comma):
Do not show results matching title/arguments (delimit multiple options with a comma):


System Details (identical for both runs)

Processor: 2 x Intel Xeon Platinum 8380 @ 3.40GHz (80 Cores / 160 Threads)
Motherboard: Intel M50CYP2SB2U (SE5C6200.86B.0022.D08.2103221623 BIOS)
Chipset: Intel Ice Lake IEH
Memory: 512GB
Disk: 7682GB INTEL SSDPF2KX076TZ
Graphics: ASPEED
Monitor: VE228
Network: 2 x Intel X710 for 10GBASE-T + 2 x Intel E810-C for QSFP
OS: Ubuntu 22.10
Kernel: 6.2.0-rc5-phx-dodt (x86_64)
Desktop: GNOME Shell
Display Server: X Server 1.21.1.3
Vulkan: 1.3.211
Compiler: GCC 12.2.0
File-System: ext4
Screen Resolution: 1920x1080

System Logs
- Transparent Huge Pages: madvise
- Compiler configure: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-Wbc0TK/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-Wbc0TK/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: intel_pstate performance (EPP: performance)
- CPU Microcode: 0xd000375
- Python 3.10.6
- Security, Linux w DODT: dodt: Mitigation of DOITM; itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Mitigation of Clear buffers, SMT vulnerable; retbleed: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Enhanced IBRS, IBPB: conditional, RSB filling, PBRSB-eIBRS: SW sequence; srbds: Not affected; tsx_async_abort: Not affected
- Security, doitm=off: identical to the above except dodt: Vulnerable
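The only security difference between the two runs is the dodt line. A minimal sketch of diffing such per-run mitigation status dumps programmatically; the sample strings are abbreviated copies of the sysfs-style status lines above (on a live system they come from /sys/devices/system/cpu/vulnerabilities/), and the function names are illustrative, not part of any tool used here.

```python
# Diff the kernel-reported vulnerability status between two runs.
# On a live system these lines come from the files under
# /sys/devices/system/cpu/vulnerabilities/ (one "name: status" per file).

def parse_vulns(dump: str) -> dict:
    """Parse 'name: status' lines into a dict."""
    out = {}
    for line in dump.strip().splitlines():
        name, _, status = line.partition(":")
        out[name.strip()] = status.strip()
    return out

def diff_vulns(a: dict, b: dict) -> dict:
    """Return the entries whose status differs between the two runs."""
    return {k: (a[k], b.get(k)) for k in a if a[k] != b.get(k)}

# Abbreviated status dumps copied from the two runs above.
dodt_run = """\
dodt: Mitigation of DOITM
meltdown: Not affected
spec_store_bypass: Mitigation of SSB disabled via prctl
"""
doitm_off_run = """\
dodt: Vulnerable
meltdown: Not affected
spec_store_bypass: Mitigation of SSB disabled via prctl
"""

print(diff_vulns(parse_vulns(dodt_run), parse_vulns(doitm_off_run)))
# Only 'dodt' differs between the two runs.
```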

Result Summary (Linux w DODT | doitm=off; Seconds and ms entries are lower-is-better, all others higher-is-better)

Crypto++: Keyed Algorithms (MiB/s): 565.09 | 564.75
Crypto++: Unkeyed Algorithms (MiB/s): 361.67 | 361.88
miniBUDE: OpenMP - BM1 (GFInst/s): 2368.12 | 2373.49
miniBUDE: OpenMP - BM1 (Billion Interactions/s): 94.73 | 94.94
nekRS: TurboPipe Periodic (FLOP/s): 295621666667 | 298310333333
OpenRadioss: Bumper Beam (Seconds): 85.13 | 84.63
OpenRadioss: Bird Strike on Windshield (Seconds): 138.61 | 139.19
OpenRadioss: Rubber O-Ring Seal Installation (Seconds): 78.06 | 78.00
Xmrig: Monero - 1M (H/s): 26451.3 | 26578.0
Xmrig: Wownero - 1M (H/s): 41450.1 | 41317.2
WebP: Default (MP/s): 13.09 | 13.09
WebP: Quality 100 (MP/s): 8.42 | 8.43
WebP: Quality 100, Highest Compression (MP/s): 2.81 | 2.81
Kvazaar: Bosphorus 4K - Super Fast (FPS): 47.06 | 46.47
Kvazaar: Bosphorus 4K - Ultra Fast (FPS): 47.95 | 47.75
uvg266: Bosphorus 4K - Super Fast (FPS): 42.68 | 42.72
uvg266: Bosphorus 4K - Ultra Fast (FPS): 42.61 | 42.62
avifenc: Speed 0 (Seconds): 79.482 | 79.684
avifenc: Speed 2 (Seconds): 44.038 | 43.995
avifenc: Speed 6 (Seconds): 3.519 | 3.499
Timed Godot Compilation (Seconds): 44.646 | 44.951
Timed Linux Kernel Compilation: defconfig (Seconds): 34.515 | 34.393
Timed Linux Kernel Compilation: allmodconfig (Seconds): 261.028 | 261.454
Timed LLVM Compilation: Ninja (Seconds): 146.956 | 147.340
Timed Node.js Compilation (Seconds): 170.929 | 170.361
OpenSSL (sign/s): 17781.6 | 17837.9
OpenSSL (verify/s): 1188255.4 | 1188684.8
ClickHouse: First Run / Cold Cache (QPM, Geo Mean): 398.75 | 392.39
ClickHouse: Second Run (QPM, Geo Mean): 423.13 | 421.39
ClickHouse: Third Run (QPM, Geo Mean): 427.42 | 424.96
CockroachDB: MoVR - 256 (ops/s): 974.8 | n/a
CockroachDB: KV, 10% Reads - 256 (ops/s): 81359.8 | n/a
CockroachDB: KV, 60% Reads - 256 (ops/s): 102448.9 | n/a
CockroachDB: KV, 95% Reads - 256 (ops/s): 119187.4 | n/a
Cryptsetup: PBKDF2-sha512 (Iterations/s): 1392546 | 1396866
Cryptsetup: PBKDF2-whirlpool (Iterations/s): 583195 | 582551
Cryptsetup: AES-XTS 256b Encryption (MiB/s): 3864.3 | 3876.2
Cryptsetup: AES-XTS 256b Decryption (MiB/s): 3891.5 | 3886.1
Cryptsetup: Serpent-XTS 256b Encryption (MiB/s): 559.5 | 559.1
Cryptsetup: Serpent-XTS 256b Decryption (MiB/s): 527.6 | 527.3
Cryptsetup: Twofish-XTS 256b Encryption (MiB/s): 351.4 | 351.5
Cryptsetup: Twofish-XTS 256b Decryption (MiB/s): 357.7 | 358.5
Cryptsetup: AES-XTS 512b Encryption (MiB/s): 3445.4 | 3458.6
Cryptsetup: AES-XTS 512b Decryption (MiB/s): 3449.8 | 3446.3
Cryptsetup: Serpent-XTS 512b Encryption (MiB/s): 561.0 | 560.5
Cryptsetup: Twofish-XTS 512b Encryption (MiB/s): 352.2 | 352.2
Cryptsetup: Twofish-XTS 512b Decryption (MiB/s): 357.5 | 358.1
Cryptsetup: Serpent-XTS 512b Decryption (MiB/s): 527.2 | 527.3
pgbench: 100 - 1000 - Read Only (TPS): 2021337 | 2020130
pgbench: 100 - 1000 - Read Only - Average Latency (ms): 0.506 | 0.504
pgbench: 100 - 800 - Read Write (TPS): 75223 | 75123
pgbench: 100 - 800 - Read Write - Average Latency (ms): 10.635 | 10.650
pgbench: 100 - 1000 - Read Write (TPS): 68328 | 68055
pgbench: 100 - 1000 - Read Write - Average Latency (ms): 14.635 | 14.695
DeepSparse (Asynchronous Multi-Stream; items/s then ms/batch per test):
  NLP Document Classification, oBERT base uncased on IMDB: 48.5123 / 823.5023 | 48.4485 / 821.5239
  NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased: 890.9486 / 44.8577 | 902.0563 / 44.3071
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90: 219.3059 / 182.2006 | 218.7781 / 182.6993
  CV Detection, YOLOv5s COCO: 316.8622 / 125.9126 | 316.8167 / 125.9736
  CV Classification, ResNet-50 ImageNet: 830.5710 / 48.1234 | 832.3903 / 48.0139
  NLP Text Classification, DistilBERT mnli: 454.9543 / 87.8377 | 455.1038 / 87.8216
  CV Segmentation, 90% Pruned YOLACT Pruned: 81.8802 / 487.3610 | 82.7051 / 481.3864
  NLP Text Classification, BERT base uncased SST2: 223.6458 / 178.6233 | 224.0542 / 178.2763
  NLP Token Classification, BERT base uncased conll2003: 48.4603 / 821.2112 | 48.4895 / 818.3017
stress-ng: Futex: 699147.73 | 958957.41
stress-ng: MEMFD: 3592.88 | 3624.42
stress-ng: Mutex: 41364067.75 | 41978074.41
stress-ng: Crypto: 85533.02 | 84975.84
stress-ng: Malloc: 195951844.81 | 201328390.34
stress-ng: IO_uring: 26508.09 | 26532.30
stress-ng: SENDFILE: 1128077.91 | 1146358.16
stress-ng: x86_64 RdRand: 669198.10 | 669178.00
spaCy: en_core_web_lg (tokens/s): 10971 | 10888
spaCy: en_core_web_trf (tokens/s): 3169 | 3225
Blender: BMW27 - CPU-Only (Seconds): 23.44 | 23.30
Blender: Classroom - CPU-Only (Seconds): 62.71 | 62.68
RocksDB: Rand Read (ops/s): 278841535 | 278886510
RocksDB: Update Rand (ops/s): 592576 | 588787
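To put the run-to-run deltas above into percentage terms, here is a small sketch. The values are copied from the result summary for three of the higher-is-better throughput tests; the helper name is illustrative.

```python
# Percent change of the doitm=off run relative to the mitigated
# "Linux w DODT" run, for a few higher-is-better results above.

def pct_change(baseline: float, other: float) -> float:
    """Signed percent change of `other` relative to `baseline`."""
    return (other - baseline) / baseline * 100.0

results = {
    # test: (Linux w DODT, doitm=off)
    "Xmrig: Monero - 1M (H/s)": (26451.3, 26578.0),
    "nekRS: TurboPipe Periodic (FLOP/s)": (295_621_666_667, 298_310_333_333),
    "Crypto++: Keyed Algorithms (MiB/s)": (565.09, 564.75),
}

for name, (dodt, off) in results.items():
    print(f"{name}: {pct_change(dodt, off):+.2f}% with doitm=off")
```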

Crypto++

Crypto++ is a C++ class library of cryptographic algorithms. Learn more via the OpenBenchmarking.org test page.

Crypto++ 8.2 - Test: Keyed Algorithms (MiB/second, more is better)
  Linux w DODT: 565.09 (SE +/- 0.07, N = 3; Min 564.98 / Max 565.23)
  doitm=off: 564.75 (SE +/- 0.20, N = 3; Min 564.37 / Max 565.04)
  1. (CXX) g++ options: -g2 -O3 -fPIC -pthread -pipe

Crypto++ 8.2 - Test: Unkeyed Algorithms (MiB/second, more is better)
  Linux w DODT: 361.67 (SE +/- 0.04, N = 3; Min 361.6 / Max 361.72)
  doitm=off: 361.88 (SE +/- 0.02, N = 3; Min 361.86 / Max 361.92)
  1. (CXX) g++ options: -g2 -O3 -fPIC -pthread -pipe
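Each result is reported as an average with "SE +/- x, N = y". For reference, a sketch of how such a standard error and min/avg/max summary falls out of raw samples; the three sample values here are hypothetical, not taken from the result file.

```python
import statistics
from math import sqrt

def summarize(samples: list[float]) -> tuple[float, float, float, float]:
    """Return (min, mean, max, standard error) for a list of samples."""
    se = statistics.stdev(samples) / sqrt(len(samples))  # sample stdev / sqrt(N)
    return min(samples), statistics.fmean(samples), max(samples), se

# Three hypothetical throughput samples from repeated runs of one test.
lo, mean, hi, se = summarize([564.98, 565.06, 565.23])
print(f"Min: {lo} / Avg: {mean:.2f} / Max: {hi} (SE +/- {se:.2f}, N = 3)")
```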

miniBUDE

miniBUDE is a mini-application for the core computation of the Bristol University Docking Engine (BUDE). This test profile currently makes use of the OpenMP implementation of miniBUDE for CPU benchmarking. Learn more via the OpenBenchmarking.org test page.

miniBUDE 20210901 - Implementation: OpenMP - Input Deck: BM1 (GFInst/s, more is better)
  Linux w DODT: 2368.12 (SE +/- 17.94, N = 3; Min 2344.9 / Max 2403.42)
  doitm=off: 2373.49 (SE +/- 10.72, N = 3; Min 2353.37 / Max 2389.95)
  1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm

miniBUDE 20210901 - Implementation: OpenMP - Input Deck: BM1 (Billion Interactions/s, more is better)
  Linux w DODT: 94.73 (SE +/- 0.72, N = 3; Min 93.8 / Max 96.14)
  doitm=off: 94.94 (SE +/- 0.43, N = 3; Min 94.14 / Max 95.6)
  1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm

nekRS

nekRS is an open-source Navier-Stokes solver based on the spectral element method. nekRS supports both CPU and GPU/accelerator execution, though this test profile is currently configured for CPU execution. nekRS is part of Nek5000 of the Mathematics and Computer Science (MCS) division at Argonne National Laboratory. This nekRS benchmark is primarily relevant to large core count HPC servers and otherwise may be very time consuming. Learn more via the OpenBenchmarking.org test page.

nekRS 22.0 - Input: TurboPipe Periodic (FLOP/s, more is better)
  Linux w DODT: 295621666667 (SE +/- 500223394.54, N = 3; Min 294627000000 / Max 296212000000)
  doitm=off: 298310333333 (SE +/- 1474277940.03, N = 3; Min 296604000000 / Max 301246000000)
  1. (CXX) g++ options: -fopenmp -O2 -march=native -mtune=native -ftree-vectorize -lmpi_cxx -lmpi

OpenRadioss

OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss and was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2022.10.13 - Model: Bumper Beam (Seconds, fewer is better)
  Linux w DODT: 85.13 (SE +/- 0.28, N = 3; Min 84.78 / Max 85.68)
  doitm=off: 84.63 (SE +/- 0.06, N = 3; Min 84.54 / Max 84.75)

OpenRadioss 2022.10.13 - Model: Bird Strike on Windshield (Seconds, fewer is better)
  Linux w DODT: 138.61 (SE +/- 0.54, N = 3; Min 137.56 / Max 139.33)
  doitm=off: 139.19 (SE +/- 0.51, N = 3; Min 138.2 / Max 139.88)

OpenRadioss 2022.10.13 - Model: Rubber O-Ring Seal Installation (Seconds, fewer is better)
  Linux w DODT: 78.06 (SE +/- 0.01, N = 3; Min 78.04 / Max 78.07)
  doitm=off: 78.00 (SE +/- 0.08, N = 3; Min 77.83 / Max 78.08)

Xmrig

Xmrig is an open-source, cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight and AstroBWT. This test profile is set up to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.

Xmrig 6.18.1 - Variant: Monero - Hash Count: 1M (H/s, more is better)
  Linux w DODT: 26451.3 (SE +/- 76.73, N = 3; Min 26317.9 / Max 26583.7)
  doitm=off: 26578.0 (SE +/- 48.01, N = 3; Min 26485.9 / Max 26647.5)
  1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Xmrig 6.18.1 - Variant: Wownero - Hash Count: 1M (H/s, more is better)
  Linux w DODT: 41450.1 (SE +/- 149.86, N = 3; Min 41255.8 / Max 41744.9)
  doitm=off: 41317.2 (SE +/- 35.05, N = 3; Min 41247.3 / Max 41356.5)
  1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.2.4 - Encode Settings: Default (MP/s, more is better)
  Linux w DODT: 13.09 (SE +/- 0.01, N = 3; Min 13.08 / Max 13.1)
  doitm=off: 13.09 (SE +/- 0.01, N = 3; Min 13.08 / Max 13.1)
  1. (CC) gcc options: -fvisibility=hidden -O2 -lm

WebP Image Encode 1.2.4 - Encode Settings: Quality 100 (MP/s, more is better)
  Linux w DODT: 8.42 (SE +/- 0.00, N = 3; Min 8.42 / Max 8.43)
  doitm=off: 8.43 (SE +/- 0.01, N = 3; Min 8.42 / Max 8.44)
  1. (CC) gcc options: -fvisibility=hidden -O2 -lm

WebP Image Encode 1.2.4 - Encode Settings: Quality 100, Highest Compression (MP/s, more is better)
  Linux w DODT: 2.81 (SE +/- 0.00, N = 3; Min 2.81 / Max 2.81)
  doitm=off: 2.81 (SE +/- 0.00, N = 3; Min 2.81 / Max 2.81)
  1. (CC) gcc options: -fvisibility=hidden -O2 -lm

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.2 - Video Input: Bosphorus 4K - Video Preset: Super Fast (Frames Per Second, more is better)
  Linux w DODT: 47.06 (SE +/- 0.46, N = 15; Min 44.36 / Max 49.16)
  doitm=off: 46.47 (SE +/- 0.63, N = 3; Min 45.59 / Max 47.68)
  1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Kvazaar 2.2 - Video Input: Bosphorus 4K - Video Preset: Ultra Fast (Frames Per Second, more is better)
  Linux w DODT: 47.95 (SE +/- 0.37, N = 10; Min 46.46 / Max 49.84)
  doitm=off: 47.75 (SE +/- 0.60, N = 3; Min 46.84 / Max 48.89)
  1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

uvg266

uvg266 is an open-source VVC/H.266 (Versatile Video Coding) encoder based on Kvazaar as part of the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

uvg266 0.4.1 - Video Input: Bosphorus 4K - Video Preset: Super Fast (Frames Per Second, more is better)
  Linux w DODT: 42.68 (SE +/- 0.33, N = 10; Min 41.43 / Max 44.06)
  doitm=off: 42.72 (SE +/- 0.36, N = 8; Min 40.98 / Max 44.05)

uvg266 0.4.1 - Video Input: Bosphorus 4K - Video Preset: Ultra Fast (Frames Per Second, more is better)
  Linux w DODT: 42.61 (SE +/- 0.45, N = 3; Min 42 / Max 43.49)
  doitm=off: 42.62 (SE +/- 0.18, N = 3; Min 42.36 / Max 42.97)

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.11 - Encoder Speed: 0 (Seconds, fewer is better)
  Linux w DODT: 79.48 (SE +/- 0.33, N = 3; Min 78.82 / Max 79.86)
  doitm=off: 79.68 (SE +/- 0.31, N = 3; Min 79.26 / Max 80.29)
  1. (CXX) g++ options: -O3 -fPIC -lm

libavif avifenc 0.11 - Encoder Speed: 2 (Seconds, fewer is better)
  Linux w DODT: 44.04 (SE +/- 0.03, N = 3; Min 43.97 / Max 44.07)
  doitm=off: 44.00 (SE +/- 0.23, N = 3; Min 43.56 / Max 44.34)
  1. (CXX) g++ options: -O3 -fPIC -lm

libavif avifenc 0.11 - Encoder Speed: 6 (Seconds, fewer is better)
  Linux w DODT: 3.519 (SE +/- 0.031, N = 15; Min 3.39 / Max 3.75)
  doitm=off: 3.499 (SE +/- 0.014, N = 3; Min 3.47 / Max 3.52)
  1. (CXX) g++ options: -O3 -fPIC -lm

Timed Godot Game Engine Compilation

This test times how long it takes to compile the Godot Game Engine. Godot is a popular, open-source, cross-platform 2D/3D game engine and is built using the SCons build system, targeting the X11 platform. Learn more via the OpenBenchmarking.org test page.

Timed Godot Game Engine Compilation 3.2.3 - Time To Compile (Seconds, fewer is better)
  Linux w DODT: 44.65 (SE +/- 0.20, N = 3; Min 44.43 / Max 45.05)
  doitm=off: 44.95 (SE +/- 0.31, N = 3; Min 44.57 / Max 45.56)

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested or alternatively an allmodconfig for building all possible kernel modules for the build. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 6.1 - Build: defconfig (Seconds, fewer is better)
  Linux w DODT: 34.52 (SE +/- 0.39, N = 4; Min 34.03 / Max 35.68)
  doitm=off: 34.39 (SE +/- 0.40, N = 4; Min 33.96 / Max 35.58)

Timed Linux Kernel Compilation 6.1 - Build: allmodconfig (Seconds, fewer is better)
  Linux w DODT: 261.03 (SE +/- 0.63, N = 3; Min 260.2 / Max 262.25)
  doitm=off: 261.45 (SE +/- 0.84, N = 3; Min 260.59 / Max 263.14)

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.

Timed LLVM Compilation 13.0 - Build System: Ninja (Seconds, fewer is better)
  Linux w DODT: 146.96 (SE +/- 0.07, N = 3; Min 146.87 / Max 147.09)
  doitm=off: 147.34 (SE +/- 0.50, N = 3; Min 146.58 / Max 148.27)

Timed Node.js Compilation

This test profile times how long it takes to build/compile Node.js itself from source. Node.js is a JavaScript run-time built on the Chrome V8 JavaScript engine, while Node.js itself is written in C/C++. Learn more via the OpenBenchmarking.org test page.

Timed Node.js Compilation 18.8 - Time To Compile (Seconds, fewer is better)
  Linux w DODT: 170.93 (SE +/- 1.24, N = 3; Min 169.48 / Max 173.39)
  doitm=off: 170.36 (SE +/- 0.77, N = 3; Min 169.16 / Max 171.8)

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. The system/openssl test profile relies on benchmarking the system/OS-supplied openssl binary rather than the pts/openssl test profile that uses the locally-built OpenSSL for benchmarking. Learn more via the OpenBenchmarking.org test page.

OpenSSL (sign/s, more is better)
  Linux w DODT: 17781.6 (SE +/- 63.70, N = 3; Min 17707 / Max 17908.3)
  doitm=off: 17837.9 (SE +/- 20.72, N = 3; Min 17800.7 / Max 17872.3)
  1. OpenSSL 3.0.5 5 Jul 2022 (Library: OpenSSL 3.0.5 5 Jul 2022)

OpenSSL (verify/s, more is better)
  Linux w DODT: 1188255.4 (SE +/- 1656.25, N = 3; Min 1185175 / Max 1190850.5)
  doitm=off: 1188684.8 (SE +/- 1026.50, N = 3; Min 1187214.9 / Max 1190661)
  1. OpenSSL 3.0.5 5 Jul 2022 (Library: OpenSSL 3.0.5 5 Jul 2022)

ClickHouse

ClickHouse is an open-source, high performance OLAP data management system. This test profile uses ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ and https://github.com/ClickHouse/ClickBench/tree/main/clickhouse with the 100 million rows web analytics dataset. The reported value is the query processing time using the geometric mean of all separate queries performed as an aggregate. Learn more via the OpenBenchmarking.org test page.
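The reported ClickHouse figure is an aggregate: the geometric mean of the individual query times, expressed as queries per minute. A sketch of that aggregation with hypothetical per-query times; the function name is illustrative, not ClickHouse's own.

```python
from math import prod

def qpm_geo_mean(query_seconds: list[float]) -> float:
    """Geometric mean of per-query times, expressed as queries per minute."""
    geo_time = prod(query_seconds) ** (1.0 / len(query_seconds))
    return 60.0 / geo_time

# Hypothetical per-query times: one fast, one medium, one slow query.
print(round(qpm_geo_mean([0.05, 0.2, 1.25]), 1))  # roughly 258.5 queries/minute
```

A geometric mean keeps one pathologically slow query from dominating the score the way an arithmetic mean would.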

ClickHouse 22.12.3.5 - 100M Rows Hits Dataset, First Run / Cold Cache (Queries Per Minute, Geo Mean, more is better)
  Linux w DODT: 398.75 (SE +/- 3.64, N = 3; run Min 393.56 / Max 405.76; per-query MIN 34.82 / MAX 4615.38)
  doitm=off: 392.39 (SE +/- 1.76, N = 3; run Min 388.89 / Max 394.44; per-query MIN 33.75 / MAX 5000)

ClickHouse 22.12.3.5 - 100M Rows Hits Dataset, Second Run (Queries Per Minute, Geo Mean, more is better)
  Linux w DODT: 423.13 (SE +/- 1.99, N = 3; run Min 419.18 / Max 425.4; per-query MIN 35.89 / MAX 5000)
  doitm=off: 421.39 (SE +/- 1.56, N = 3; run Min 418.85 / Max 424.22; per-query MIN 34.86 / MAX 5454.55)

ClickHouse 22.12.3.5 - 100M Rows Hits Dataset, Third Run (Queries Per Minute, Geo Mean, more is better)
  Linux w DODT: 427.42 (SE +/- 3.25, N = 3; run Min 422.2 / Max 433.38; per-query MIN 35.99 / MAX 5454.55)
  doitm=off: 424.96 (SE +/- 0.79, N = 3; run Min 423.59 / Max 426.34; per-query MIN 36.19 / MAX 5454.55)

CockroachDB

CockroachDB is a cloud-native, distributed SQL database for data-intensive applications. This test profile uses a serverless CockroachDB configuration to test various CockroachDB workloads on the local host with a single node. Learn more via the OpenBenchmarking.org test page.

CockroachDB 22.2 (ops/s, more is better; results recorded for the Linux w DODT run only)
  Workload: MoVR - Concurrency: 256
    Linux w DODT: 974.8 (SE +/- 3.64, N = 3)
  Workload: KV, 10% Reads - Concurrency: 256
    Linux w DODT: 81359.8 (SE +/- 849.68, N = 15)
  Workload: KV, 60% Reads - Concurrency: 256
    Linux w DODT: 102448.9 (SE +/- 1847.96, N = 15)
  Workload: KV, 95% Reads - Concurrency: 256
    Linux w DODT: 119187.4 (SE +/- 2483.59, N = 15)

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgIterations Per Second, More Is BetterCryptsetupPBKDF2-sha512Linux w DODTdoitm=off300K600K900K1200K1500KSE +/- 3203.15, N = 3SE +/- 2233.59, N = 313925461396866
OpenBenchmarking.orgIterations Per Second, More Is BetterCryptsetupPBKDF2-sha512Linux w DODTdoitm=off200K400K600K800K1000KMin: 1387005 / Avg: 1392545.67 / Max: 1398101Min: 1392531 / Avg: 1396866.33 / Max: 1399967

OpenBenchmarking.orgIterations Per Second, More Is BetterCryptsetupPBKDF2-whirlpoolLinux w DODTdoitm=off120K240K360K480K600KSE +/- 1294.67, N = 3SE +/- 1624.74, N = 3583195582551
OpenBenchmarking.orgIterations Per Second, More Is BetterCryptsetupPBKDF2-whirlpoolLinux w DODTdoitm=off100K200K300K400K500KMin: 580606 / Avg: 583195.33 / Max: 584490Min: 579323 / Avg: 582550.67 / Max: 584490

OpenBenchmarking.orgMiB/s, More Is BetterCryptsetupAES-XTS 256b EncryptionLinux w DODTdoitm=off8001600240032004000SE +/- 16.70, N = 3SE +/- 7.45, N = 33864.33876.2
OpenBenchmarking.orgMiB/s, More Is BetterCryptsetupAES-XTS 256b EncryptionLinux w DODTdoitm=off7001400210028003500Min: 3831 / Avg: 3864.33 / Max: 3882.7Min: 3861.4 / Avg: 3876.2 / Max: 3885.1

Cryptsetup - AES-XTS 256b Decryption (MiB/s, more is better):
  Linux w DODT: 3891.5 (SE +/- 17.78, N = 3; min 3856.2 / avg 3891.47 / max 3913)
  doitm=off:    3886.1 (SE +/- 3.34, N = 3; min 3879.6 / avg 3886.1 / max 3890.7)

Cryptsetup - Serpent-XTS 256b Encryption (MiB/s, more is better):
  Linux w DODT: 559.5 (SE +/- 1.27, N = 3; min 557 / avg 559.5 / max 561.1)
  doitm=off:    559.1 (SE +/- 1.35, N = 3; min 556.4 / avg 559.1 / max 560.6)

Cryptsetup - Serpent-XTS 256b Decryption (MiB/s, more is better):
  Linux w DODT: 527.6 (SE +/- 0.27, N = 3; min 527.1 / avg 527.63 / max 528)
  doitm=off:    527.3 (SE +/- 0.09, N = 3; min 527.1 / avg 527.27 / max 527.4)

Cryptsetup - Twofish-XTS 256b Encryption (MiB/s, more is better):
  Linux w DODT: 351.4 (SE +/- 0.92, N = 3; min 349.6 / avg 351.43 / max 352.5)
  doitm=off:    351.5 (SE +/- 0.88, N = 3; min 349.7 / avg 351.47 / max 352.4)

Cryptsetup - Twofish-XTS 256b Decryption (MiB/s, more is better):
  Linux w DODT: 357.7 (SE +/- 0.17, N = 3; min 357.4 / avg 357.7 / max 358)
  doitm=off:    358.5 (SE +/- 0.21, N = 3; min 358.1 / avg 358.5 / max 358.8)

Cryptsetup - AES-XTS 512b Encryption (MiB/s, more is better):
  Linux w DODT: 3445.4 (SE +/- 13.51, N = 3; min 3418.4 / avg 3445.4 / max 3459.9)
  doitm=off:    3458.6 (SE +/- 3.15, N = 3; min 3453.4 / avg 3458.63 / max 3464.3)

Cryptsetup - AES-XTS 512b Decryption (MiB/s, more is better):
  Linux w DODT: 3449.8 (SE +/- 13.35, N = 3; min 3423.1 / avg 3449.8 / max 3463.5)
  doitm=off:    3446.3 (SE +/- 2.54, N = 3; min 3441.8 / avg 3446.33 / max 3450.6)

Cryptsetup - Serpent-XTS 512b Encryption (MiB/s, more is better):
  Linux w DODT: 561.0 (SE +/- 0.45, N = 2; min 560.5 / avg 560.95 / max 561.4)
  doitm=off:    560.5 (SE +/- 0.06, N = 3; min 560.4 / avg 560.5 / max 560.6)

Cryptsetup - Twofish-XTS 512b Encryption (MiB/s, more is better):
  Linux w DODT: 352.2 (SE +/- 0.12, N = 3; min 352 / avg 352.23 / max 352.4)
  doitm=off:    352.2 (SE +/- 0.03, N = 3; min 352.1 / avg 352.17 / max 352.2)

Cryptsetup - Twofish-XTS 512b Decryption (MiB/s, more is better):
  Linux w DODT: 357.5 (SE +/- 0.25, N = 2; min 357.2 / avg 357.45 / max 357.7)
  doitm=off:    358.1 (SE +/- 0.23, N = 3; min 357.7 / avg 358.13 / max 358.5)

Cryptsetup - Serpent-XTS 512b Decryption (MiB/s, more is better):
  Linux w DODT: 527.2
  doitm=off:    527.3
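With deltas this small, it is worth checking whether the gap between the two runs sits inside the measurement noise. A minimal sketch using the AES-XTS 256b decryption numbers above (the two-standard-error overlap check is a rough heuristic, not a formal significance test):

```python
# AES-XTS 256b decryption, MiB/s: (average, standard error) from the results above
dodt = (3891.5, 17.78)
off  = (3886.1, 3.34)

# Percent difference between the two runs
delta_pct = (dodt[0] - off[0]) / off[0] * 100

# Rough noise check: is the gap smaller than the combined +/- 2*SE spread?
within_noise = abs(dodt[0] - off[0]) < 2 * (dodt[1] + off[1])

print(f"delta: {delta_pct:+.2f}%  within noise: {within_noise}")
# → delta: +0.14%  within noise: True
```

The same check applies to any of the result pairs above; for nearly all of them the gap is well inside the error bars.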

PostgreSQL

This is a benchmark of PostgreSQL using its integrated pgbench tool to run the database workloads. Learn more via the OpenBenchmarking.org test page.

PostgreSQL 15 - Scaling Factor: 100 - Clients: 1000 - Mode: Read Only (TPS, more is better):
  Linux w DODT: 2021337 (SE +/- 89751.30, N = 12; min 1601299.71 / avg 2021337.36 / max 2324977.73)
  doitm=off:    2020130 (SE +/- 80874.14, N = 12; min 1702181.71 / avg 2020129.51 / max 2317807.35)

PostgreSQL 15 - Scaling Factor: 100 - Clients: 1000 - Mode: Read Only - Average Latency (ms, fewer is better):
  Linux w DODT: 0.506 (SE +/- 0.024, N = 12; min 0.43 / avg 0.51 / max 0.62)
  doitm=off:    0.504 (SE +/- 0.021, N = 12; min 0.43 / avg 0.50 / max 0.59)

PostgreSQL 15 - Scaling Factor: 100 - Clients: 800 - Mode: Read Write (TPS, more is better):
  Linux w DODT: 75223 (SE +/- 197.73, N = 3; min 74833.35 / avg 75223.46 / max 75474.7)
  doitm=off:    75123 (SE +/- 295.95, N = 3; min 74689.8 / avg 75123.1 / max 75688.95)

PostgreSQL 15 - Scaling Factor: 100 - Clients: 800 - Mode: Read Write - Average Latency (ms, fewer is better):
  Linux w DODT: 10.64 (SE +/- 0.03, N = 3; min 10.6 / avg 10.64 / max 10.69)
  doitm=off:    10.65 (SE +/- 0.04, N = 3; min 10.57 / avg 10.65 / max 10.71)

PostgreSQL 15 - Scaling Factor: 100 - Clients: 1000 - Mode: Read Write (TPS, more is better):
  Linux w DODT: 68328 (SE +/- 95.69, N = 3; min 68169.45 / avg 68328.09 / max 68500.11)
  doitm=off:    68055 (SE +/- 295.57, N = 3; min 67463.62 / avg 68054.55 / max 68364)

PostgreSQL 15 - Scaling Factor: 100 - Clients: 1000 - Mode: Read Write - Average Latency (ms, fewer is better):
  Linux w DODT: 14.64 (SE +/- 0.02, N = 3; min 14.6 / avg 14.64 / max 14.67)
  doitm=off:    14.70 (SE +/- 0.06, N = 3; min 14.63 / avg 14.7 / max 14.82)

Compiled with: (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
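The TPS and average-latency figures are two views of the same run: in a closed-loop pgbench workload each client issues one transaction at a time, so average latency is roughly clients / TPS. A quick consistency check against the read-write numbers above:

```python
# (clients, reported TPS, reported average latency in ms), "Linux w DODT" run, from above
runs = [
    (800, 75223, 10.64),   # 100 - 800 - Read Write
    (1000, 68328, 14.64),  # 100 - 1000 - Read Write
]

for clients, tps, reported_ms in runs:
    # Closed-loop approximation: each client has one transaction in flight
    est_ms = clients / tps * 1000.0
    print(f"{clients} clients: estimated {est_ms:.2f} ms vs reported {reported_ms} ms")
```

Both estimates land within about 0.01 ms of the reported averages, which confirms the latency and throughput graphs are internally consistent.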

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

Neural Magic DeepSparse 1.3.2 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream:
  items/sec (more is better):
    Linux w DODT: 48.51 (SE +/- 0.01, N = 3; min 48.49 / avg 48.51 / max 48.53)
    doitm=off:    48.45 (SE +/- 0.02, N = 3; min 48.43 / avg 48.45 / max 48.48)
  ms/batch (fewer is better):
    Linux w DODT: 823.50 (SE +/- 0.30, N = 3; min 822.98 / avg 823.5 / max 824.03)
    doitm=off:    821.52 (SE +/- 0.15, N = 3; min 821.33 / avg 821.52 / max 821.82)

Neural Magic DeepSparse 1.3.2 - Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream:
  items/sec (more is better):
    Linux w DODT: 890.95 (SE +/- 2.39, N = 3; min 886.81 / avg 890.95 / max 895.1)
    doitm=off:    902.06 (SE +/- 3.12, N = 3; min 897.17 / avg 902.06 / max 907.87)
  ms/batch (fewer is better):
    Linux w DODT: 44.86 (SE +/- 0.12, N = 3; min 44.65 / avg 44.86 / max 45.05)
    doitm=off:    44.31 (SE +/- 0.16, N = 3; min 44.01 / avg 44.31 / max 44.55)

Neural Magic DeepSparse 1.3.2 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream:
  items/sec (more is better):
    Linux w DODT: 219.31 (SE +/- 0.42, N = 3; min 218.46 / avg 219.31 / max 219.73)
    doitm=off:    218.78 (SE +/- 2.51, N = 3; min 215.67 / avg 218.78 / max 223.75)
  ms/batch (fewer is better):
    Linux w DODT: 182.20 (SE +/- 0.39, N = 3; min 181.74 / avg 182.2 / max 182.98)
    doitm=off:    182.70 (SE +/- 2.12, N = 3; min 178.49 / avg 182.7 / max 185.28)

Neural Magic DeepSparse 1.3.2 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream:
  items/sec (more is better):
    Linux w DODT: 316.86 (SE +/- 0.14, N = 3; min 316.58 / avg 316.86 / max 317.05)
    doitm=off:    316.82 (SE +/- 1.00, N = 3; min 315.04 / avg 316.82 / max 318.51)
  ms/batch (fewer is better):
    Linux w DODT: 125.91 (SE +/- 0.09, N = 3; min 125.78 / avg 125.91 / max 126.09)
    doitm=off:    125.97 (SE +/- 0.38, N = 3; min 125.35 / avg 125.97 / max 126.68)

Neural Magic DeepSparse 1.3.2 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream:
  items/sec (more is better):
    Linux w DODT: 830.57 (SE +/- 0.98, N = 3; min 828.63 / avg 830.57 / max 831.73)
    doitm=off:    832.39 (SE +/- 0.57, N = 3; min 831.41 / avg 832.39 / max 833.39)
  ms/batch (fewer is better):
    Linux w DODT: 48.12 (SE +/- 0.06, N = 3; min 48.06 / avg 48.12 / max 48.24)
    doitm=off:    48.01 (SE +/- 0.03, N = 3; min 47.96 / avg 48.01 / max 48.07)

Neural Magic DeepSparse 1.3.2 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream:
  items/sec (more is better):
    Linux w DODT: 454.95 (SE +/- 0.28, N = 3; min 454.41 / avg 454.95 / max 455.32)
    doitm=off:    455.10 (SE +/- 0.57, N = 3; min 453.96 / avg 455.1 / max 455.72)
  ms/batch (fewer is better):
    Linux w DODT: 87.84 (SE +/- 0.05, N = 3; min 87.74 / avg 87.84 / max 87.92)
    doitm=off:    87.82 (SE +/- 0.10, N = 3; min 87.72 / avg 87.82 / max 88.02)

Neural Magic DeepSparse 1.3.2 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream:
  items/sec (more is better):
    Linux w DODT: 81.88 (SE +/- 0.12, N = 3; min 81.67 / avg 81.88 / max 82.08)
    doitm=off:    82.71 (SE +/- 0.60, N = 3; min 81.97 / avg 82.71 / max 83.89)
  ms/batch (fewer is better):
    Linux w DODT: 487.36 (SE +/- 0.57, N = 3; min 486.22 / avg 487.36 / max 488.03)
    doitm=off:    481.39 (SE +/- 3.55, N = 3; min 474.52 / avg 481.39 / max 486.42)

Neural Magic DeepSparse 1.3.2 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream:
  items/sec (more is better):
    Linux w DODT: 223.65 (SE +/- 0.20, N = 3; min 223.36 / avg 223.65 / max 224.03)
    doitm=off:    224.05 (SE +/- 0.28, N = 3; min 223.52 / avg 224.05 / max 224.48)
  ms/batch (fewer is better):
    Linux w DODT: 178.62 (SE +/- 0.19, N = 3; min 178.33 / avg 178.62 / max 178.97)
    doitm=off:    178.28 (SE +/- 0.20, N = 3; min 177.95 / avg 178.28 / max 178.64)

Neural Magic DeepSparse 1.3.2 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream:
  items/sec (more is better):
    Linux w DODT: 48.46 (SE +/- 0.18, N = 3; min 48.1 / avg 48.46 / max 48.69)
    doitm=off:    48.49 (SE +/- 0.03, N = 3; min 48.44 / avg 48.49 / max 48.54)
  ms/batch (fewer is better):
    Linux w DODT: 821.21 (SE +/- 0.79, N = 3; min 819.74 / avg 821.21 / max 822.42)
    doitm=off:    818.30 (SE +/- 2.36, N = 3; min 813.63 / avg 818.3 / max 821.23)
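For the asynchronous multi-stream scenario, items/sec and ms/batch are linked by the number of items in flight: throughput times latency equals concurrency (Little's law). Multiplying the two columns gives roughly 40 for every model, which suggests the benchmark kept about 40 concurrent work items on this system; that figure is inferred from the numbers, not stated anywhere in the results:

```python
# (items/sec, ms/batch) for the "Linux w DODT" run, taken from the results above
results = {
    "oBERT IMDB":       (48.51, 823.50),
    "Sentiment BERT":   (890.95, 44.86),
    "QA BERT Pruned90": (219.31, 182.20),
    "YOLOv5s COCO":     (316.86, 125.91),
    "ResNet-50":        (830.57, 48.12),
}

for name, (ips, ms) in results.items():
    # Little's law: L = lambda * W (throughput * latency = items in flight)
    in_flight = ips * ms / 1000.0
    print(f"{name}: ~{in_flight:.1f} items in flight")
```

Every product lands between 39.8 and 40.0, so the two metrics are redundant views of the same measurement rather than independent results.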

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14.06 (Bogo Ops/s, more is better):

  Futex:
    Linux w DODT: 699147.73 (SE +/- 6957.32, N = 3; min 685679.41 / avg 699147.73 / max 708909.45)
    doitm=off:    958957.41 (SE +/- 56672.45, N = 15; min 645300.55 / avg 958957.41 / max 1186392.77)

  MEMFD:
    Linux w DODT: 3592.88 (SE +/- 2.24, N = 3; min 3588.48 / avg 3592.88 / max 3595.84)
    doitm=off:    3624.42 (SE +/- 1.55, N = 3; min 3621.52 / avg 3624.42 / max 3626.8)

  Mutex:
    Linux w DODT: 41364067.75 (SE +/- 162165.00, N = 3; min 41056559.01 / avg 41364067.75 / max 41607103.91)
    doitm=off:    41978074.41 (SE +/- 34997.86, N = 3; min 41908181.12 / avg 41978074.41 / max 42016299.3)

  Crypto:
    Linux w DODT: 85533.02 (SE +/- 445.46, N = 3; min 84642.3 / avg 85533.02 / max 85994.65)
    doitm=off:    84975.84 (SE +/- 991.74, N = 3; min 83196.83 / avg 84975.84 / max 86624.93)

  Malloc:
    Linux w DODT: 195951844.81 (SE +/- 655475.74, N = 3; min 195064630.12 / avg 195951844.81 / max 197231264.89)
    doitm=off:    201328390.34 (SE +/- 454146.57, N = 3; min 200857160.75 / avg 201328390.34 / max 202236466.53)

  IO_uring:
    Linux w DODT: 26508.09 (SE +/- 16.56, N = 3; min 26478.74 / avg 26508.09 / max 26536.05)
    doitm=off:    26532.30 (SE +/- 1.30, N = 3; min 26530.55 / avg 26532.3 / max 26534.84)

  SENDFILE:
    Linux w DODT: 1128077.91 (SE +/- 8091.69, N = 3; min 1113908.45 / avg 1128077.91 / max 1141933.63)
    doitm=off:    1146358.16 (SE +/- 8323.86, N = 3; min 1131275.45 / avg 1146358.16 / max 1160002.28)

  x86_64 RdRand:
    Linux w DODT: 669198.10 (SE +/- 68.80, N = 3; min 669060.51 / avg 669198.1 / max 669268.82)
    doitm=off:    669178.00 (SE +/- 67.49, N = 3; min 669077.57 / avg 669178 / max 669306.33)

Compiled with: (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread
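The Futex result is the one large outlier in this group, and its error bars say why: the doitm=off run took 15 iterations (likely because the Phoronix Test Suite kept re-running it due to high variance) and still shows a relative standard error near 6%, versus about 1% for the other run. A sketch of that check from the numbers above:

```python
# Stress-NG Futex, Bogo Ops/s: (mean, standard error, N) from the results above
futex = {
    "Linux w DODT": (699147.73, 6957.32, 3),
    "doitm=off":    (958957.41, 56672.45, 15),
}

for name, (mean, se, n) in futex.items():
    # Relative standard error: SE as a fraction of the mean
    rel_se = se / mean
    print(f"{name}: N={n}, relative SE {rel_se:.1%}")
```

The futex stressor is highly scheduler-sensitive, so the apparent 37% gap between the runs should be read with that 6% noise floor (and a min of 645300 that overlaps the other run's range) in mind.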

spaCy

spaCy is an open-source, Python-based library for advanced natural language processing (NLP). This test profile times spaCy's CPU performance with various models. Learn more via the OpenBenchmarking.org test page.

spaCy 3.4.1 - Model: en_core_web_lg (tokens/sec, more is better):
  Linux w DODT: 10971 (SE +/- 10.12, N = 3; min 10954 / avg 10971 / max 10989)
  doitm=off:    10888 (SE +/- 69.82, N = 3; min 10789 / avg 10888.33 / max 11023)

spaCy 3.4.1 - Model: en_core_web_trf (tokens/sec, more is better):
  Linux w DODT: 3169 (SE +/- 30.17, N = 3; min 3138 / avg 3168.67 / max 3229)
  doitm=off:    3225 (SE +/- 48.89, N = 3; min 3168 / avg 3224.67 / max 3322)

Blender

Blender is an open-source 3D creation and modeling software project. This test measures the performance of Blender's Cycles render engine with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported, as is HIP for AMD Radeon GPUs and Intel oneAPI for Intel graphics. Learn more via the OpenBenchmarking.org test page.

Blender 3.4 - Blend File: BMW27 - Compute: CPU-Only (Seconds, fewer is better):
  Linux w DODT: 23.44 (SE +/- 0.07, N = 3; min 23.37 / avg 23.44 / max 23.57)
  doitm=off:    23.30 (SE +/- 0.04, N = 3; min 23.23 / avg 23.3 / max 23.37)

Blender 3.4 - Blend File: Classroom - Compute: CPU-Only (Seconds, fewer is better):
  Linux w DODT: 62.71 (SE +/- 0.10, N = 3; min 62.54 / avg 62.71 / max 62.89)
  doitm=off:    62.68 (SE +/- 0.06, N = 3; min 62.57 / avg 62.68 / max 62.78)

RocksDB

This is a benchmark of Meta/Facebook's RocksDB, an embeddable, persistent key-value store for fast storage derived from Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

RocksDB 7.9.2 - Test: Random Read (Op/s, more is better):
  Linux w DODT: 278841535 (SE +/- 2251545.82, N = 3; min 274417365 / avg 278841534.67 / max 281780543)
  doitm=off:    278886510 (SE +/- 2757403.57, N = 3; min 273546917 / avg 278886509.67 / max 282750620)

RocksDB 7.9.2 - Test: Update Random (Op/s, more is better):
  Linux w DODT: 592576 (SE +/- 1503.74, N = 3; min 589902 / avg 592576.33 / max 595105)
  doitm=off:    588787 (SE +/- 3603.95, N = 3; min 581601 / avg 588787.33 / max 592863)

Compiled with: (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

86 Results Shown

Crypto++:
  Keyed Algorithms
  Unkeyed Algorithms
miniBUDE:
  OpenMP - BM1:
    GFInst/s
    Billion Interactions/s
nekRS
OpenRadioss:
  Bumper Beam
  Bird Strike on Windshield
  Rubber O-Ring Seal Installation
Xmrig:
  Monero - 1M
  Wownero - 1M
WebP Image Encode:
  Default
  Quality 100
  Quality 100, Highest Compression
Kvazaar:
  Bosphorus 4K - Super Fast
  Bosphorus 4K - Ultra Fast
uvg266:
  Bosphorus 4K - Super Fast
  Bosphorus 4K - Ultra Fast
libavif avifenc:
  0
  2
  6
Timed Godot Game Engine Compilation
Timed Linux Kernel Compilation:
  defconfig
  allmodconfig
Timed LLVM Compilation
Timed Node.js Compilation
OpenSSL:
 
 
ClickHouse:
  100M Rows Hits Dataset, First Run / Cold Cache
  100M Rows Hits Dataset, Second Run
  100M Rows Hits Dataset, Third Run
CockroachDB:
  MoVR - 256
  KV, 10% Reads - 256
  KV, 60% Reads - 256
  KV, 95% Reads - 256
Cryptsetup:
  PBKDF2-sha512
  PBKDF2-whirlpool
  AES-XTS 256b Encryption
  AES-XTS 256b Decryption
  Serpent-XTS 256b Encryption
  Serpent-XTS 256b Decryption
  Twofish-XTS 256b Encryption
  Twofish-XTS 256b Decryption
  AES-XTS 512b Encryption
  AES-XTS 512b Decryption
  Serpent-XTS 512b Encryption
  Twofish-XTS 512b Encryption
  Twofish-XTS 512b Decryption
  Serpent-XTS 512b Decryption
PostgreSQL:
  100 - 1000 - Read Only
  100 - 1000 - Read Only - Average Latency
  100 - 800 - Read Write
  100 - 800 - Read Write - Average Latency
  100 - 1000 - Read Write
  100 - 1000 - Read Write - Average Latency
Neural Magic DeepSparse:
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
Stress-NG:
  Futex
  MEMFD
  Mutex
  Crypto
  Malloc
  IO_uring
  SENDFILE
  x86_64 RdRand
spaCy:
  en_core_web_lg
  en_core_web_trf
Blender:
  BMW27 - CPU-Only
  Classroom - CPU-Only
RocksDB:
  Rand Read
  Update Rand