Core i7 7900X 2021

Intel Core i7-7900X testing with an ASRock X299 Extreme4 (P1.50 BIOS) and a Zotac NVIDIA GeForce GT 610 1GB on Ubuntu 19.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2102017-HA-COREI779055

Tests in this result file fall within the following suites/categories:

C++ Boost Tests 3 Tests
Timed Code Compilation 3 Tests
C/C++ Compiler Tests 3 Tests
Compression Tests 3 Tests
CPU Massive 7 Tests
Creator Workloads 4 Tests
Cryptography 3 Tests
Finance 2 Tests
HPC - High Performance Computing 11 Tests
Machine Learning 5 Tests
Molecular Dynamics 3 Tests
MPI Benchmarks 2 Tests
Multi-Core 9 Tests
NVIDIA GPU Compute 2 Tests
OpenMPI Tests 5 Tests
Programmer / Developer System Benchmarks 6 Tests
Python Tests 2 Tests
Scientific Computing 6 Tests
Server CPU Tests 4 Tests
Single-Threaded 3 Tests

Test Runs

Result 1: January 31 2021 - Test Duration: 5 Hours, 7 Minutes
Result 2: February 01 2021 - Test Duration: 5 Hours, 11 Minutes
Result 3: February 01 2021 - Test Duration: 4 Hours, 57 Minutes
Average Test Duration: 5 Hours, 5 Minutes


Core i7 7900X 2021 - System Details (identical across runs 1, 2 and 3)

Processor: Intel Core i7-7900X @ 4.50GHz (10 Cores / 20 Threads)
Motherboard: ASRock X299 Extreme4 (P1.50 BIOS)
Chipset: Intel Sky Lake-E DMI3 Registers
Memory: 16GB
Disk: 120GB Corsair Force MP500
Graphics: Zotac NVIDIA GeForce GT 610 1GB
Audio: Realtek ALC1220
Monitor: LG Ultra HD
Network: Intel I219-V
OS: Ubuntu 19.04
Kernel: 5.0.0-38-generic (x86_64)
Display Server: X Server 1.20.4
Display Driver: zotac
Compiler: GCC 11.0.0 20200929
File-System: ext4
Screen Resolution: 1920x1080

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --disable-multilib --enable-checking=release --enable-languages=c,c++,fortran
Processor Details: Scaling Governor: intel_pstate powersave - CPU Microcode: 0x2000064
Python Details: Python 2.7.16 + Python 3.7.3
Security Details: itlb_multihit: KVM: Mitigation of Split huge pages + l1tf: Mitigation of PTE Inversion; VMX: conditional cache flushes SMT vulnerable + mds: Mitigation of Clear buffers; SMT vulnerable + meltdown: Mitigation of PTI + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full generic retpoline IBPB: conditional IBRS_FW STIBP: conditional RSB filling + tsx_async_abort: Mitigation of Clear buffers; SMT vulnerable

Result Overview (Phoronix Test Suite): relative performance of runs 1, 2 and 3 spans roughly 100% to 109% across Redis, Unpacking Firefox, oneDNN, LULESH, Timed Godot Game Engine Compilation, NCNN, LAMMPS Molecular Dynamics Simulator, Algebraic Multi-Grid Benchmark, Kripke, Build2, QuantLib, QMCPACK, GnuPG, Mobile Neural Network, 7-Zip Compression, Gcrypt Library, lzbench, Google SynthMark, Zstd Compression, Timed Eigen Compilation, ONNX Runtime, Coremark, rav1e, Cryptsetup, OpenFOAM, FinanceBench, and TNN.

Core i7 7900X 2021 - Results Table: the condensed per-test values for runs 1, 2 and 3 (QuantLib, lzbench, AMG, QMCPACK, OpenFOAM, LAMMPS, LULESH, Zstd, oneDNN, rav1e, CoreMark, 7-Zip, Timed Godot/Build2/Eigen compilation, Gcrypt, SynthMark, FinanceBench, Cryptsetup, Redis, MNN, NCNN, TNN, ONNX Runtime, GnuPG, Unpacking Firefox, Kripke) are broken out test-by-test in the sections that follow.

QuantLib

QuantLib is an open-source library/framework for quantitative finance, covering modeling, trading and risk management scenarios. QuantLib is written in C++ with Boost, and its built-in benchmark reports a QuantLib Benchmark Index score. Learn more via the OpenBenchmarking.org test page.

QuantLib 1.21 (MFLOPS, more is better) - Run 1: 2765.1, Run 2: 2737.5, Run 3: 2744.6. Compiled with (CXX) g++ options: -O3 -march=native -rdynamic -lboost_timer -lboost_system -lboost_chrono

lzbench

lzbench is an in-memory benchmark of various compressors. The file used for compression is a Linux kernel source tree tarball. Learn more via the OpenBenchmarking.org test page.
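
The MB/s figures below are plain throughput numbers: input bytes processed divided by wall-clock time. As a rough illustration of how such a figure is derived (this is not lzbench's harness, it uses Python's bundled zlib rather than the codecs benchmarked here, and the file path is hypothetical), a minimal sketch:

import time
import zlib

def compression_throughput(path, level=6):
    """Measure one-shot, in-memory compression throughput in MB/s for a file."""
    data = open(path, "rb").read()           # load the whole file into memory
    start = time.perf_counter()
    compressed = zlib.compress(data, level)  # compress once, entirely in memory
    elapsed = time.perf_counter() - start
    return len(data) / elapsed / 1e6, len(compressed) / len(data)

if __name__ == "__main__":
    # Hypothetical input; lzbench itself uses a Linux kernel source tarball.
    mbps, ratio = compression_throughput("linux-5.10.tar")
    print(f"{mbps:.1f} MB/s, compression ratio {ratio:.3f}")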

lzbench 1.8 (MB/s, more is better; values for Run 1 / Run 2 / Run 3):
XZ 0 - Compression: 46 / 46 / 45
XZ 0 - Decompression: 124 / 124 / 124
Zstd 1 - Compression: 534 / 532 / 535
Zstd 1 - Decompression: 1709 / 1707 / 1709
Zstd 8 - Compression: 90 / 89 / 89
Zstd 8 - Decompression: 1723 / 1720 / 1721
Crush 0 - Compression: 110 / 109 / 110
Crush 0 - Decompression: 532 / 532 / 532
Brotli 0 - Compression: 483 / 484 / 484
Brotli 0 - Decompression: 661 / 662 / 660
Brotli 2 - Compression: 199 / 199 / 199
Brotli 2 - Decompression: 766 / 766 / 766
Libdeflate 1 - Compression: 268 / 268 / 268
Compiled with (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

Algebraic Multi-Grid Benchmark

AMG is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. The driver provided with AMG builds linear systems for various 3-dimensional problems. Learn more via the OpenBenchmarking.org test page.
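
The figure of merit below scales with how quickly such sparse linear systems are assembled and solved. Purely as a toy illustration of the underlying problem class (a small diagonally dominant system solved with plain Jacobi iteration using NumPy), and not the AMG algorithm or its driver, a sketch:

import numpy as np

def jacobi(A, b, iters=200):
    """Plain Jacobi iteration for A x = b (toy stand-in for a multigrid solve)."""
    x = np.zeros_like(b)
    d = np.diag(A)              # diagonal entries of A
    R = A - np.diagflat(d)      # off-diagonal remainder
    for _ in range(iters):
        x = (b - R @ x) / d
    return x

if __name__ == "__main__":
    # Tridiagonal, diagonally dominant system standing in for a discretized 3-D problem.
    n = 100
    A = 4 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    b = np.ones(n)
    x = jacobi(A, b)
    print("residual norm:", np.linalg.norm(A @ x - b))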

Algebraic Multi-Grid Benchmark 1.2 (Figure Of Merit, more is better) - Run 1: 429666875, Run 2: 435646267, Run 3: 435619400. Compiled with (CC) gcc options: -lparcsr_ls -lparcsr_mv -lseq_mv -lIJ_mv -lkrylov -lHYPRE_utilities -lm -fopenmp -pthread -lmpi

QMCPACK

QMCPACK is a modern, high-performance, open-source Quantum Monte Carlo (QMC) simulation code; this benchmark uses MPI to run the H2O example. QMCPACK is a production-level many-body ab initio Quantum Monte Carlo code for computing the electronic structure of atoms, molecules, and solids, and is supported by the U.S. Department of Energy. Learn more via the OpenBenchmarking.org test page.

QMCPACK 3.10, Input: simple-H2O (Total Execution Time in Seconds, fewer is better) - Run 1: 29.22, Run 2: 29.07, Run 3: 29.03. Compiled with (CXX) g++ options: -fopenmp -finline-limit=1000 -fstrict-aliasing -funroll-all-loops -march=native -O3 -fomit-frame-pointer -ffast-math -lm -pthread

OpenFOAM

OpenFOAM is the leading free, open source software for computational fluid dynamics (CFD). Learn more via the OpenBenchmarking.org test page.

OpenFOAM 8 (Seconds, fewer is better; values for Run 1 / Run 2 / Run 3):
Input: Motorbike 30M - 139.55 / 139.50 / 139.17
Input: Motorbike 60M - 701.37 / 701.05 / 701.62
Compiled with (CXX) g++ options: -std=c++11 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 29Oct2020, Model: Rhodopsin Protein (ns/day, more is better) - Run 1: 7.425, Run 2: 7.323, Run 3: 7.420. Compiled with (CXX) g++ options: -O3 -pthread -lm

LULESH

LULESH is the Livermore Unstructured Lagrangian Explicit Shock Hydrodynamics. Learn more via the OpenBenchmarking.org test page.

LULESH 2.0.3 (z/s, more is better) - Run 1: 6332.80, Run 2: 6440.08, Run 3: 6401.29. Compiled with (CXX) g++ options: -O3 -fopenmp -lm -pthread -lmpi_cxx -lmpi

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.
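
Levels 3 and 19 trade compression speed for compression ratio. A minimal sketch of exercising both levels from Python, assuming the third-party zstandard bindings are installed (the test profile itself drives Zstd directly, and the sample file name here is hypothetical):

import time
import zstandard as zstd  # third-party bindings: pip install zstandard

def compress_at_level(data: bytes, level: int):
    """Compress a buffer at the given Zstd level; return MB/s and ratio."""
    cctx = zstd.ZstdCompressor(level=level)
    start = time.perf_counter()
    out = cctx.compress(data)
    elapsed = time.perf_counter() - start
    return len(data) / elapsed / 1e6, len(out) / len(data)

if __name__ == "__main__":
    payload = open("sample.iso", "rb").read()  # hypothetical sample file
    for level in (3, 19):
        mbps, ratio = compress_at_level(payload, level)
        print(f"level {level}: {mbps:.0f} MB/s, ratio {ratio:.3f}")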

Zstd Compression 1.4.5 (MB/s, more is better; values for Run 1 / Run 2 / Run 3):
Compression Level 3 - 4359.7 / 4365.4 / 4364.8
Compression Level 19 - 54.2 / 54.3 / 54.4
Compiled with (CC) gcc options: -O3 -pthread -lz -llzma

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 (ms, fewer is better; values for Run 1 / Run 2 / Run 3; Engine: CPU throughout):
Harness: IP Shapes 1D - Data Type: f32 - 3.22527 / 3.17683 / 3.61773
Harness: IP Shapes 3D - Data Type: f32 - 6.20063 / 5.80972 / 6.20432
Harness: IP Shapes 1D - Data Type: u8s8f32 - 1.23217 / 1.19520 / 1.24672
Harness: IP Shapes 3D - Data Type: u8s8f32 - 1.67560 / 1.64785 / 1.81677
Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - 8.07550 / 8.07076 / 8.09521
Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - 3.63392 / 3.60667 / 3.76017
Harness: Convolution Batch Shapes Auto - Data Type: f32 - 10.61 / 10.61 / 10.52
Harness: Deconvolution Batch shapes_1d - Data Type: f32 - 3.27351 / 3.23948 / 3.36635
Harness: Deconvolution Batch shapes_3d - Data Type: f32 - 3.50970 / 3.50573 / 3.69979
Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - 9.99665 / 9.98915 / 9.88730
Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - 1.46755 / 1.46542 / 1.48004
Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - 2.20381 / 2.16313 / 2.40359
Harness: Recurrent Neural Network Training - Data Type: f32 - 2104.77 / 2088.20 / 2187.26
Harness: Recurrent Neural Network Inference - Data Type: f32 - 1170.55 / 1157.22 / 1239.50
Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - 2104.79 / 2110.87 / 2146.86
Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - 12.58 / 12.59 / 12.57
Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - 15.74 / 15.74 / 15.72
Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - 16.40 / 16.35 / 16.87
Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - 1168.78 / 1154.72 / 1218.95
Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - 2.06308 / 2.06901 / 2.03981
Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - 2104.21 / 2096.25 / 2164.69
Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - 1172.55 / 1154.54 / 1214.47
Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - 0.864136 / 0.894479 / 0.844465
Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - 2.86817 / 2.89487 / 2.83983
Compiled with (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4 (Frames Per Second, more is better; values for Run 1 / Run 2 / Run 3):
Speed: 1 - 0.394 / 0.394 / 0.394
Speed: 5 - 1.093 / 1.092 / 1.093
Speed: 6 - 1.437 / 1.438 / 1.439
Speed: 10 - 3.112 / 3.119 / 3.110

Coremark

This is a test of the EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.

Coremark 1.0, CoreMark Size 666 (Iterations Per Second, more is better) - Run 1: 393486.16, Run 2: 392945.70, Run 3: 393426.97. Compiled with (CC) gcc options: -O2 -lrt" -lrt

7-Zip Compression

This is a test of 7-Zip using p7zip with its integrated benchmark feature or upstream 7-Zip for the Windows x64 build. Learn more via the OpenBenchmarking.org test page.

7-Zip Compression 16.02, Compress Speed Test (MIPS, more is better) - Run 1: 60523, Run 2: 60395, Run 3: 60267. Compiled with (CXX) g++ options: -pipe -lpthread

Timed Godot Game Engine Compilation

This test times how long it takes to compile the Godot Game Engine. Godot is a popular, open-source, cross-platform 2D/3D game engine and is built using the SCons build system and targeting the X11 platform. Learn more via the OpenBenchmarking.org test page.

Timed Godot Game Engine Compilation 3.2.3, Time To Compile (Seconds, fewer is better) - Run 1: 142.10, Run 2: 144.28, Run 3: 142.02

Build2

This test profile measures the time to bootstrap/install the build2 C++ build toolchain from source. Build2 is a cross-platform build toolchain for C/C++ code with Cargo-like features. Learn more via the OpenBenchmarking.org test page.

Build2 0.13, Time To Compile (Seconds, fewer is better) - Run 1: 139.47, Run 2: 137.88, Run 3: 138.15

Timed Eigen Compilation

This test times how long it takes to build all Eigen examples. The Eigen examples are compiled serially. Eigen is a C++ template library for linear algebra. Learn more via the OpenBenchmarking.org test page.

Timed Eigen Compilation 3.3.9, Time To Compile (Seconds, fewer is better) - Run 1: 87.93, Run 2: 88.05, Run 3: 87.87

Gcrypt Library

Libgcrypt is a general purpose cryptographic library developed as part of the GnuPG project. This is a benchmark of libgcrypt's integrated benchmark, measuring the time to run the benchmark command with the cipher/MAC/hash repetition count set to 50, as a simple, high-level look at the overall crypto performance of the system under test. Learn more via the OpenBenchmarking.org test page.

Gcrypt Library 1.9 (Seconds, fewer is better) - Run 1: 196.67, Run 2: 196.53, Run 3: 196.15. Compiled with (CC) gcc options: -O2 -fvisibility=hidden

Google SynthMark

SynthMark is a cross platform tool for benchmarking CPU performance under a variety of real-time audio workloads. It uses a polyphonic synthesizer model to provide standardized tests for latency, jitter and computational throughput. Learn more via the OpenBenchmarking.org test page.

Google SynthMark 20201109, Test: VoiceMark_100 (Voices, more is better) - Run 1: 617.58, Run 2: 618.25, Run 3: 616.74. Compiled with (CXX) g++ options: -lm -lpthread -std=c++11 -Ofast

FinanceBench

FinanceBench is a collection of financial program benchmarks with support for benchmarking on the GPU via OpenCL and on the CPU with OpenMP. The FinanceBench test cases focus on the Black-Scholes-Merton process with an analytic European option engine, the QMC (Sobol) Monte-Carlo method (equity option example), Bonds (fixed-rate bond with flat forward curve), and Repo (securities repurchase agreement). FinanceBench was originally written by the Cavazos Lab at the University of Delaware. Learn more via the OpenBenchmarking.org test page.
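
For context on the arithmetic these kernels perform, the sketch below evaluates the closed-form Black-Scholes-Merton European call price in plain Python. It is illustrative only; FinanceBench's OpenMP kernels batch large numbers of such evaluations in compiled code.

from math import erf, exp, log, sqrt

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bsm_call(spot, strike, rate, vol, t):
    """Black-Scholes-Merton price of a European call option."""
    d1 = (log(spot / strike) + (rate + 0.5 * vol * vol) * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    return spot * norm_cdf(d1) - strike * exp(-rate * t) * norm_cdf(d2)

if __name__ == "__main__":
    # Example parameters (illustrative values, not taken from the benchmark).
    print(f"{bsm_call(spot=100, strike=105, rate=0.01, vol=0.25, t=1.0):.4f}")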

FinanceBench 2016-07-25 (ms, fewer is better; values for Run 1 / Run 2 / Run 3):
Benchmark: Repo OpenMP - 37075.50 / 36848.84 / 36914.42
Benchmark: Bonds OpenMP - 52224.90 / 53124.96 / 52234.41
Compiled with (CXX) g++ options: -O3 -march=native -fopenmp

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.
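
The figures below come from cryptsetup's built-in benchmark subcommand, which reports PBKDF iterations per second and cipher throughput. A minimal sketch of capturing that report from Python (assumes cryptsetup is installed and on the PATH; the exact output layout varies by version and may require suitable privileges):

import subprocess

def run_cryptsetup_benchmark() -> str:
    """Run `cryptsetup benchmark` and return its textual report."""
    result = subprocess.run(
        ["cryptsetup", "benchmark"],  # built-in speed test, no device needed
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout

if __name__ == "__main__":
    print(run_cryptsetup_benchmark())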

Cryptsetup (more is better; values for Run 1 / Run 2 / Run 3):
PBKDF2-sha512 (Iterations Per Second) - 1802731 / 1801708 / 1805829
PBKDF2-whirlpool (Iterations Per Second) - 769880 / 769505 / 769505
AES-XTS 256b Encryption (MiB/s) - 2488.3 / 2494.4 / 2492.4
AES-XTS 256b Decryption (MiB/s) - 2415.0 / 2407.6 / 2410.0
Serpent-XTS 256b Encryption (MiB/s) - 813.9 / 817.8 / 817.0
Serpent-XTS 256b Decryption (MiB/s) - 838.8 / 837.7 / 837.4
Twofish-XTS 256b Encryption (MiB/s) - 452.0 / 454.0 / 453.7
Twofish-XTS 256b Decryption (MiB/s) - 455.9 / 456.1 / 455.9
AES-XTS 512b Encryption (MiB/s) - 2312.5 / 2306.8 / 2307.5
AES-XTS 512b Decryption (MiB/s) - 2274.4 / 2264.7 / 2266.1
Serpent-XTS 512b Encryption (MiB/s) - 818.1 / 817.9 / 816.7
Serpent-XTS 512b Decryption (MiB/s) - 836.3 / 836.8 / 837.5
Twofish-XTS 512b Encryption (MiB/s) - 454.1 / 453.8 / 452.8
Twofish-XTS 512b Decryption (MiB/s) - 455.9 / 454.8 / 455.7

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.
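
The requests-per-second figures below reflect how quickly a local Redis server services bulk LPOP/SADD/LPUSH/GET/SET traffic. As a rough illustration only (not the benchmark's own driver), a sketch using the third-party redis Python client against a server assumed to be listening on localhost:6379:

import time
import redis  # third-party client: pip install redis

def set_ops_per_second(client: redis.Redis, n: int = 100_000) -> float:
    """Time n pipelined SET commands and return requests per second."""
    pipe = client.pipeline(transaction=False)
    start = time.perf_counter()
    for i in range(n):
        pipe.set(f"key:{i}", "value")
        if i % 1000 == 999:   # flush the pipeline in batches of 1000
            pipe.execute()
    pipe.execute()
    return n / (time.perf_counter() - start)

if __name__ == "__main__":
    r = redis.Redis(host="localhost", port=6379)
    print(f"SET: {set_ops_per_second(r):,.0f} requests/sec")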

Redis 6.0.9 (Requests Per Second, more is better; values for Run 1 / Run 2 / Run 3):
Test: LPOP - 2987638.08 / 1984735.79 / 2013855.59
Test: SADD - 2420991.58 / 2410939.67 / 2433452.33
Test: LPUSH - 1889307.21 / 1905897.88 / 1927407.21
Test: GET - 2861475.42 / 2750396.00 / 2765324.33
Test: SET - 2149992.08 / 2181880.08 / 2174653.00
Compiled with (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Mobile Neural Network

MNN is the Mobile Neural Network as a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.1.1 (ms, fewer is better; values for Run 1 / Run 2 / Run 3):
Model: SqueezeNetV1.0 - 7.351 / 7.340 / 7.347
Model: resnet-v2-50 - 40.17 / 40.20 / 40.25
Model: MobileNetV2_224 - 4.558 / 4.524 / 4.455
Model: mobilenet-v1-1.0 - 4.985 / 4.968 / 4.976
Model: inception-v3 - 41.07 / 41.02 / 41.02
Compiled with (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.
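
The NCNN figures below are average per-model inference times in milliseconds, with the footnoted MIN/MAX values recording the fastest and slowest individual passes observed. As a rough, generic sketch of how such latency statistics are typically gathered (this is not NCNN's actual benchmark harness; run_forward_pass is a hypothetical stand-in for a single model invocation):

    import time

    def run_forward_pass():
        # Hypothetical placeholder for one model inference call.
        time.sleep(0.005)

    def measure_latency_ms(fn, warmup=8, iterations=50):
        # Warm up thread pools and caches before timing.
        for _ in range(warmup):
            fn()
        samples = []
        for _ in range(iterations):
            start = time.perf_counter()
            fn()
            samples.append((time.perf_counter() - start) * 1000.0)
        return min(samples), sum(samples) / len(samples), max(samples)

    lo, avg, hi = measure_latency_ms(run_forward_pass)
    print(f"MIN: {lo:.2f} ms / AVG: {avg:.2f} ms / MAX: {hi:.2f} ms")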

NCNN 20201218 (ms, Fewer Is Better)
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Target: CPU - Model: mobilenet
  1: 17.44  (SE +/- 0.14, N = 3; Min: 17.28 / Avg: 17.44 / Max: 17.71; MIN: 17.18 / MAX: 19.14)
  2: 17.70  (SE +/- 0.11, N = 3; Min: 17.58 / Avg: 17.7 / Max: 17.92; MIN: 17.51 / MAX: 18.43)
  3: 17.41  (SE +/- 0.11, N = 3; Min: 17.29 / Avg: 17.41 / Max: 17.63; MIN: 17.21 / MAX: 17.72)

Target: CPU-v2-v2 - Model: mobilenet-v2
  1: 5.40  (SE +/- 0.10, N = 3; Min: 5.28 / Avg: 5.4 / Max: 5.59; MIN: 5.12 / MAX: 6.21)
  2: 5.24  (SE +/- 0.02, N = 3; Min: 5.21 / Avg: 5.24 / Max: 5.26; MIN: 5.05 / MAX: 6.67)
  3: 5.25  (SE +/- 0.03, N = 3; Min: 5.2 / Avg: 5.25 / Max: 5.3; MIN: 5.12 / MAX: 8.36)

Target: CPU-v3-v3 - Model: mobilenet-v3
  1: 4.53  (SE +/- 0.13, N = 3; Min: 4.39 / Avg: 4.53 / Max: 4.8; MIN: 4.36 / MAX: 4.86)
  2: 4.32  (SE +/- 0.04, N = 3; Min: 4.28 / Avg: 4.32 / Max: 4.4; MIN: 4.24 / MAX: 4.46)
  3: 4.36  (SE +/- 0.06, N = 3; Min: 4.29 / Avg: 4.36 / Max: 4.47; MIN: 4.25 / MAX: 5.81)

Target: CPU - Model: shufflenet-v2
  1: 6.26  (SE +/- 0.02, N = 3; Min: 6.24 / Avg: 6.26 / Max: 6.29; MIN: 6.2 / MAX: 6.36)
  2: 6.10  (SE +/- 0.09, N = 3; Min: 6.01 / Avg: 6.1 / Max: 6.28; MIN: 5.95 / MAX: 6.36)
  3: 6.09  (SE +/- 0.06, N = 3; Min: 6.02 / Avg: 6.09 / Max: 6.22; MIN: 5.97 / MAX: 6.3)

Target: CPU - Model: mnasnet
  1: 4.93  (SE +/- 0.07, N = 3; Min: 4.82 / Avg: 4.93 / Max: 5.07; MIN: 4.7 / MAX: 5.15)
  2: 4.86  (SE +/- 0.05, N = 3; Min: 4.78 / Avg: 4.86 / Max: 4.94; MIN: 4.61 / MAX: 5.17)
  3: 4.82  (SE +/- 0.06, N = 3; Min: 4.72 / Avg: 4.82 / Max: 4.94; MIN: 4.65 / MAX: 5.05)

Target: CPU - Model: efficientnet-b0
  1: 6.73  (SE +/- 0.12, N = 3; Min: 6.61 / Avg: 6.73 / Max: 6.98; MIN: 6.46 / MAX: 8.32)
  2: 6.63  (SE +/- 0.03, N = 3; Min: 6.6 / Avg: 6.63 / Max: 6.69; MIN: 6.5 / MAX: 7.34)
  3: 6.69  (SE +/- 0.02, N = 3; Min: 6.67 / Avg: 6.69 / Max: 6.72; MIN: 6.54 / MAX: 6.85)

Target: CPU - Model: blazeface
  1: 2.45  (SE +/- 0.09, N = 3; Min: 2.32 / Avg: 2.45 / Max: 2.61; MIN: 2.28 / MAX: 2.68)
  2: 2.33  (SE +/- 0.06, N = 3; Min: 2.27 / Avg: 2.33 / Max: 2.46; MIN: 2.25 / MAX: 2.65)
  3: 2.36  (SE +/- 0.05, N = 3; Min: 2.28 / Avg: 2.36 / Max: 2.45; MIN: 2.25 / MAX: 8.7)

Target: CPU - Model: googlenet
  1: 13.81  (SE +/- 0.07, N = 3; Min: 13.74 / Avg: 13.81 / Max: 13.94; MIN: 13.67 / MAX: 14.03)
  2: 13.25  (SE +/- 0.02, N = 3; Min: 13.23 / Avg: 13.25 / Max: 13.28; MIN: 13.16 / MAX: 13.37)
  3: 13.52  (SE +/- 0.24, N = 3; Min: 13.28 / Avg: 13.52 / Max: 13.99; MIN: 13.16 / MAX: 14.54)

Target: CPU - Model: vgg16
  1: 41.02  (SE +/- 0.21, N = 3; Min: 40.8 / Avg: 41.02 / Max: 41.44; MIN: 40.71 / MAX: 42.29)
  2: 41.29  (SE +/- 0.31, N = 3; Min: 40.67 / Avg: 41.29 / Max: 41.62; MIN: 40.61 / MAX: 48.77)
  3: 40.41  (SE +/- 0.52, N = 3; Min: 39.46 / Avg: 40.41 / Max: 41.26; MIN: 39.37 / MAX: 42.53)

Target: CPU - Model: resnet18
  1: 11.73  (SE +/- 0.16, N = 3; Min: 11.4 / Avg: 11.73 / Max: 11.91; MIN: 11.32 / MAX: 13.34)
  2: 11.78  (SE +/- 0.19, N = 3; Min: 11.41 / Avg: 11.78 / Max: 11.97; MIN: 11.35 / MAX: 12.58)
  3: 11.81  (SE +/- 0.22, N = 3; Min: 11.38 / Avg: 11.81 / Max: 12.12; MIN: 11.28 / MAX: 15.06)

Target: CPU - Model: alexnet
  1: 9.00  (SE +/- 0.02, N = 3; Min: 8.98 / Avg: 9 / Max: 9.03; MIN: 8.94 / MAX: 11.7)
  2: 9.00  (SE +/- 0.02, N = 3; Min: 8.97 / Avg: 9 / Max: 9.03; MIN: 8.93 / MAX: 9.34)
  3: 8.99  (SE +/- 0.01, N = 3; Min: 8.96 / Avg: 8.99 / Max: 9.01; MIN: 8.92 / MAX: 9.61)

Target: CPU - Model: resnet50
  1: 23.03  (SE +/- 0.06, N = 3; Min: 22.93 / Avg: 23.03 / Max: 23.14; MIN: 22.81 / MAX: 23.45)
  2: 23.08  (SE +/- 0.09, N = 3; Min: 22.91 / Avg: 23.08 / Max: 23.2; MIN: 22.8 / MAX: 27.95)
  3: 22.82  (SE +/- 0.23, N = 3; Min: 22.36 / Avg: 22.82 / Max: 23.12; MIN: 22.23 / MAX: 24.44)

Target: CPU - Model: yolov4-tiny
  1: 25.86  (SE +/- 0.25, N = 3; Min: 25.61 / Avg: 25.86 / Max: 26.36; MIN: 25.54 / MAX: 26.92)
  2: 25.86  (SE +/- 0.22, N = 3; Min: 25.61 / Avg: 25.86 / Max: 26.31; MIN: 25.5 / MAX: 27.62)
  3: 25.87  (SE +/- 0.21, N = 3; Min: 25.62 / Avg: 25.87 / Max: 26.3; MIN: 25.52 / MAX: 27.64)

Target: CPU - Model: squeezenet_ssd
  1: 19.52  (SE +/- 0.19, N = 3; Min: 19.3 / Avg: 19.52 / Max: 19.9; MIN: 19.22 / MAX: 20.82)
  2: 19.49  (SE +/- 0.13, N = 3; Min: 19.24 / Avg: 19.49 / Max: 19.69; MIN: 19.18 / MAX: 20.34)
  3: 19.26  (SE +/- 0.14, N = 3; Min: 19.04 / Avg: 19.26 / Max: 19.52; MIN: 18.97 / MAX: 19.61)

Target: CPU - Model: regnety_400m
  1: 19.39  (SE +/- 0.04, N = 3; Min: 19.31 / Avg: 19.39 / Max: 19.44; MIN: 19.13 / MAX: 20.04)
  2: 19.41  (SE +/- 0.09, N = 3; Min: 19.31 / Avg: 19.41 / Max: 19.6; MIN: 19.11 / MAX: 21.01)
  3: 19.49  (SE +/- 0.06, N = 3; Min: 19.42 / Avg: 19.49 / Max: 19.6; MIN: 19.22 / MAX: 35.84)

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.2.3 (ms, Fewer Is Better)
(CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl

Target: CPU - Model: MobileNet v2
  1: 314.95  (SE +/- 0.08, N = 3; Min: 314.79 / Avg: 314.94 / Max: 315.04; MIN: 314.32 / MAX: 315.9)
  2: 315.02  (SE +/- 0.04, N = 3; Min: 314.97 / Avg: 315.02 / Max: 315.09; MIN: 314.45 / MAX: 316.38)
  3: 315.07  (SE +/- 0.04, N = 3; Min: 314.99 / Avg: 315.07 / Max: 315.12; MIN: 314.61 / MAX: 315.97)

Target: CPU - Model: SqueezeNet v1.1
  1: 296.12  (SE +/- 0.10, N = 3; Min: 296.01 / Avg: 296.12 / Max: 296.32; MIN: 295.65 / MAX: 297.87)
  2: 296.02  (SE +/- 0.03, N = 3; Min: 295.98 / Avg: 296.02 / Max: 296.07; MIN: 295.7 / MAX: 297.1)
  3: 296.09  (SE +/- 0.05, N = 3; Min: 296.04 / Avg: 296.09 / Max: 296.2; MIN: 295.68 / MAX: 296.99)

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.
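
For orientation, a minimal Python sketch of loading and running one ONNX model with the onnxruntime package is shown below; the model path and input shape are placeholders, and this is not the exact harness used by the test profile:

    import numpy as np
    import onnxruntime as ort

    # Hypothetical model path; any ONNX model with a single tensor input works.
    session = ort.InferenceSession("super-resolution-10.onnx")

    input_meta = session.get_inputs()[0]
    # Placeholder input shape/dtype; adjust to the model's declared input.
    dummy = np.random.rand(1, 1, 224, 224).astype(np.float32)

    outputs = session.run(None, {input_meta.name: dummy})
    print("Output tensor shapes:", [o.shape for o in outputs])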

ONNX Runtime 1.6 (Inferences Per Minute, More Is Better)
(CXX) g++ options: -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt

Model: yolov4 - Device: OpenMP CPU
  1: 483  (SE +/- 1.45, N = 3; Min: 480.5 / Avg: 483.17 / Max: 485.5)
  2: 486  (SE +/- 0.88, N = 3; Min: 484.5 / Avg: 485.83 / Max: 487.5)
  3: 486  (SE +/- 0.67, N = 3; Min: 484.5 / Avg: 485.83 / Max: 486.5)

Model: bertsquad-10 - Device: OpenMP CPU
  1: 728  (SE +/- 0.73, N = 3; Min: 727 / Avg: 728.33 / Max: 729.5)
  2: 720  (SE +/- 0.93, N = 3; Min: 718.5 / Avg: 720.33 / Max: 721.5)
  3: 725  (SE +/- 2.35, N = 3; Min: 722 / Avg: 724.83 / Max: 729.5)

Model: fcn-resnet101-11 - Device: OpenMP CPU
  1: 103  (SE +/- 0.29, N = 3; Min: 102.5 / Avg: 103 / Max: 103.5)
  2: 103  (SE +/- 0.44, N = 3; Min: 102 / Avg: 102.67 / Max: 103.5)
  3: 103  (SE +/- 0.17, N = 3; Min: 103 / Avg: 103.33 / Max: 103.5)

Model: shufflenet-v2-10 - Device: OpenMP CPU
  1: 11550  (SE +/- 10.09, N = 3; Min: 11531 / Avg: 11550.33 / Max: 11565)
  2: 11588  (SE +/- 16.55, N = 3; Min: 11565 / Avg: 11587.83 / Max: 11620)
  3: 11540  (SE +/- 52.90, N = 3; Min: 11455 / Avg: 11539.83 / Max: 11637)

Model: super-resolution-10 - Device: OpenMP CPU
  1: 7081  (SE +/- 9.39, N = 3; Min: 7062 / Avg: 7080.5 / Max: 7092.5)
  2: 7110  (SE +/- 17.72, N = 3; Min: 7074.5 / Avg: 7109.67 / Max: 7131)
  3: 7133  (SE +/- 22.59, N = 3; Min: 7109.5 / Avg: 7133.33 / Max: 7178.5)

GnuPG

This test times how long it takes to encrypt a sample file using GnuPG. Learn more via the OpenBenchmarking.org test page.
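
Below is a hedged sketch of timing a symmetric GnuPG encryption of a large file from Python; the file path and passphrase are placeholders, and the exact gpg invocation used by the test profile may differ:

    import subprocess
    import time

    # Hypothetical inputs; the test profile supplies its own 2.7GB sample file.
    sample_file = "sample.bin"
    passphrase = "benchmark"

    start = time.perf_counter()
    subprocess.run(
        ["gpg", "--batch", "--yes", "--pinentry-mode", "loopback",
         "--passphrase", passphrase, "--symmetric",
         "--output", sample_file + ".gpg", sample_file],
        check=True,
    )
    print(f"Encryption took {time.perf_counter() - start:.2f} seconds")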

GnuPG 2.2.27 - 2.7GB Sample File Encryption (Seconds, Fewer Is Better)
  1: 66.81  (SE +/- 0.41, N = 3; Min: 66.35 / Avg: 66.81 / Max: 67.63)
  2: 66.46  (SE +/- 0.12, N = 3; Min: 66.31 / Avg: 66.46 / Max: 66.69)
  3: 66.63  (SE +/- 0.22, N = 3; Min: 66.36 / Avg: 66.63 / Max: 67.07)
  (CC) gcc options: -O2

Unpacking Firefox

This simple test profile measures how long it takes to extract the .tar.xz source package of the Mozilla Firefox Web Browser. Learn more via the OpenBenchmarking.org test page.
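
A comparable measurement can be sketched in Python with the standard library's tarfile module, as below; the archive path is a placeholder and the test profile's own extraction command may differ:

    import tarfile
    import tempfile
    import time

    # Placeholder path to the firefox-84.0.source.tar.xz archive.
    archive = "firefox-84.0.source.tar.xz"

    with tempfile.TemporaryDirectory() as dest:
        start = time.perf_counter()
        with tarfile.open(archive, mode="r:xz") as tar:
            tar.extractall(path=dest)
        print(f"Extraction took {time.perf_counter() - start:.2f} seconds")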

Unpacking Firefox 84.0 - Extracting: firefox-84.0.source.tar.xz (Seconds, Fewer Is Better)
  1: 21.43  (SE +/- 0.96, N = 20; Min: 19.37 / Avg: 21.43 / Max: 35.43)
  2: 20.08  (SE +/- 0.12, N = 4; Min: 19.83 / Avg: 20.08 / Max: 20.31)
  3: 19.81  (SE +/- 0.07, N = 4; Min: 19.72 / Avg: 19.81 / Max: 20.01)

Kripke

Kripke is a simple, scalable, 3D Sn deterministic particle transport code. Its primary purpose is to research how data layout, programming paradigms and architectures affect the implementation and performance of Sn transport. Kripke is developed by LLNL. Learn more via the OpenBenchmarking.org test page.

Kripke 1.2.4 (Throughput FoM, More Is Better)
  1: 58192643  (SE +/- 266395.21, N = 3; Min: 57725400 / Avg: 58192643.33 / Max: 58647990)
  2: 57582573  (SE +/- 36554.18, N = 3; Min: 57511150 / Avg: 57582573.33 / Max: 57631800)
  3: 57484260  (SE +/- 233375.54, N = 3; Min: 57083070 / Avg: 57484260 / Max: 57891440)
  (CXX) g++ options: -O3 -fopenmp
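
The three configurations land within about 1.2% of one another on Kripke. One quick way to quantify such a spread is to normalize each average Throughput FoM against the fastest result, as in this small sketch using the averages reported above:

    # Kripke Throughput FoM averages for result identifiers 1, 2 and 3 (from above).
    results = {"1": 58192643.33, "2": 57582573.33, "3": 57484260.0}

    best = max(results.values())
    for identifier, fom in results.items():
        pct = 100.0 * fom / best
        print(f"{identifier}: {fom:,.0f} FoM ({pct:.2f}% of fastest)")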

108 Results Shown

QuantLib
lzbench:
  XZ 0 - Compression
  XZ 0 - Decompression
  Zstd 1 - Compression
  Zstd 1 - Decompression
  Zstd 8 - Compression
  Zstd 8 - Decompression
  Crush 0 - Compression
  Crush 0 - Decompression
  Brotli 0 - Compression
  Brotli 0 - Decompression
  Brotli 2 - Compression
  Brotli 2 - Decompression
  Libdeflate 1 - Compression
Algebraic Multi-Grid Benchmark
QMCPACK
OpenFOAM:
  Motorbike 30M
  Motorbike 60M
LAMMPS Molecular Dynamics Simulator
LULESH
Zstd Compression:
  3
  19
oneDNN:
  IP Shapes 1D - f32 - CPU
  IP Shapes 3D - f32 - CPU
  IP Shapes 1D - u8s8f32 - CPU
  IP Shapes 3D - u8s8f32 - CPU
  IP Shapes 1D - bf16bf16bf16 - CPU
  IP Shapes 3D - bf16bf16bf16 - CPU
  Convolution Batch Shapes Auto - f32 - CPU
  Deconvolution Batch shapes_1d - f32 - CPU
  Deconvolution Batch shapes_3d - f32 - CPU
  Convolution Batch Shapes Auto - u8s8f32 - CPU
  Deconvolution Batch shapes_1d - u8s8f32 - CPU
  Deconvolution Batch shapes_3d - u8s8f32 - CPU
  Recurrent Neural Network Training - f32 - CPU
  Recurrent Neural Network Inference - f32 - CPU
  Recurrent Neural Network Training - u8s8f32 - CPU
  Convolution Batch Shapes Auto - bf16bf16bf16 - CPU
  Deconvolution Batch shapes_1d - bf16bf16bf16 - CPU
  Deconvolution Batch shapes_3d - bf16bf16bf16 - CPU
  Recurrent Neural Network Inference - u8s8f32 - CPU
  Matrix Multiply Batch Shapes Transformer - f32 - CPU
  Recurrent Neural Network Training - bf16bf16bf16 - CPU
  Recurrent Neural Network Inference - bf16bf16bf16 - CPU
  Matrix Multiply Batch Shapes Transformer - u8s8f32 - CPU
  Matrix Multiply Batch Shapes Transformer - bf16bf16bf16 - CPU
rav1e:
  1
  5
  6
  10
Coremark
7-Zip Compression
Timed Godot Game Engine Compilation
Build2
Timed Eigen Compilation
Gcrypt Library
Google SynthMark
FinanceBench:
  Repo OpenMP
  Bonds OpenMP
Cryptsetup:
  PBKDF2-sha512
  PBKDF2-whirlpool
  AES-XTS 256b Encryption
  AES-XTS 256b Decryption
  Serpent-XTS 256b Encryption
  Serpent-XTS 256b Decryption
  Twofish-XTS 256b Encryption
  Twofish-XTS 256b Decryption
  AES-XTS 512b Encryption
  AES-XTS 512b Decryption
  Serpent-XTS 512b Encryption
  Serpent-XTS 512b Decryption
  Twofish-XTS 512b Encryption
  Twofish-XTS 512b Decryption
Redis:
  LPOP
  SADD
  LPUSH
  GET
  SET
Mobile Neural Network:
  SqueezeNetV1.0
  resnet-v2-50
  MobileNetV2_224
  mobilenet-v1-1.0
  inception-v3
NCNN:
  CPU - mobilenet
  CPU-v2-v2 - mobilenet-v2
  CPU-v3-v3 - mobilenet-v3
  CPU - shufflenet-v2
  CPU - mnasnet
  CPU - efficientnet-b0
  CPU - blazeface
  CPU - googlenet
  CPU - vgg16
  CPU - resnet18
  CPU - alexnet
  CPU - resnet50
  CPU - yolov4-tiny
  CPU - squeezenet_ssd
  CPU - regnety_400m
TNN:
  CPU - MobileNet v2
  CPU - SqueezeNet v1.1
ONNX Runtime:
  yolov4 - OpenMP CPU
  bertsquad-10 - OpenMP CPU
  fcn-resnet101-11 - OpenMP CPU
  shufflenet-v2-10 - OpenMP CPU
  super-resolution-10 - OpenMP CPU
GnuPG
Unpacking Firefox
Kripke