Xeon Platinum 8380 AVX-512 Workloads

Benchmarks for a future article. 2 x Intel Xeon Platinum 8380 testing with an Intel M50CYP2SB2U motherboard (SE5C6200.86B.0022.D08.2103221623 BIOS) and ASPEED graphics on Ubuntu 22.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2308099-NE-XEONPLATI49

Tests in this result file span the following categories:

AV1: 2 Tests
Bioinformatics: 2 Tests
C/C++ Compiler Tests: 5 Tests
CPU Massive: 10 Tests
Creator Workloads: 12 Tests
Encoding: 5 Tests
Fortran Tests: 4 Tests
Game Development: 3 Tests
HPC - High Performance Computing: 12 Tests
Machine Learning: 6 Tests
Molecular Dynamics: 3 Tests
MPI Benchmarks: 4 Tests
Multi-Core: 14 Tests
NVIDIA GPU Compute: 3 Tests
Intel oneAPI: 6 Tests
OpenMPI Tests: 10 Tests
Python Tests: 5 Tests
Renderers: 2 Tests
Scientific Computing: 5 Tests
Server CPU Tests: 6 Tests
Video Encoding: 5 Tests

Run Management

Highlight
Result
Hide
Result
Result
Identifier
Performance Per
Dollar
Date
Run
  Test
  Duration
0xd000390
August 06 2023
  11 Hours, 56 Minutes
0xd0003a5
August 08 2023
  15 Hours, 40 Minutes
Invert Hiding All Results Option
  13 Hours, 48 Minutes
Only show results matching title/arguments (delimit multiple options with a comma):
Do not show results matching title/arguments (delimit multiple options with a comma):


Xeon Platinum 8380 AVX-512 Workloads (OpenBenchmarking.org, Phoronix Test Suite)

Processor: 2 x Intel Xeon Platinum 8380 @ 3.40GHz (80 Cores / 160 Threads)
Motherboard: Intel M50CYP2SB2U (SE5C6200.86B.0022.D08.2103221623 BIOS)
Chipset: Intel Ice Lake IEH
Memory: 512GB
Disk: 7682GB INTEL SSDPF2KX076TZ
Graphics: ASPEED
Monitor: VE228
Network: 2 x Intel X710 for 10GBASE-T + 2 x Intel E810-C for QSFP
OS: Ubuntu 22.10
Kernels: 6.5.0-060500rc4daily20230804-generic (x86_64), 6.5.0-rc5-phx-tues (x86_64)
Desktop: GNOME Shell 43.0
Display Server: X Server 1.21.1.3
Vulkan: 1.3.224
Compiler: GCC 12.2.0
File-System: ext4
Screen Resolution: 1920x1080

Xeon Platinum 8380 AVX-512 Workloads Performance - System Logs:
- Transparent Huge Pages: madvise
- Compiler configure: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-U8K4Qv/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-U8K4Qv/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- 0xd000390: Scaling Governor: intel_pstate performance (EPP: performance); CPU Microcode: 0xd000390
- 0xd0003a5: Scaling Governor: intel_pstate performance (EPP: performance); CPU Microcode: 0xd0003a5
- Python 3.10.7
- Security (0xd000390): itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected
- Security (0xd0003a5): gather_data_sampling: Mitigation of Microcode + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + retbleed: Not affected + spec_rstack_overflow: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected

[0xd000390 vs. 0xd0003a5 Comparison (Phoronix Test Suite): chart of per-test percentage differences between the two microcode runs. The largest deltas include OSPRay particle_volume/scivis/real_time (52.3%) and particle_volume/ao/real_time (50.3%), Cpuminer-Opt Garlicoin (32.2%), Neural Magic DeepSparse BERT-Large NLP Question Answering Sparse INT8 (21%), and QMCPACK FeCO6_b3lyp_gms (20.8%), with a long tail of smaller swings across OSPRay, DeepSparse, TensorFlow, NCNN, ONNX Runtime, simdjson, OpenVKL, HeFFTe, Embree, miniBUDE, SPECFEM3D, OpenVINO, and other tests.]

[Complete side-by-side result table for 0xd000390 and 0xd0003a5 across all tests; the individual results are broken out per test below.]

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.
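
For context on what these benchmarks exercise, below is a minimal sketch of the OSPRay 2.x C API, assuming the utility helpers from ospray_util.h: it creates the same "scivis" renderer used by the scivis benchmarks and renders into a framebuffer. The empty world and 512x512 resolution are placeholders; real scenes attach volume and geometry objects to the world.

    #include <ospray/ospray.h>
    #include <ospray/ospray_util.h>

    int main(int argc, char** argv) {
        ospInit(&argc, (const char**)argv);      // load the CPU device

        OSPCamera camera = ospNewCamera("perspective");
        ospSetFloat(camera, "aspect", 1.0f);
        ospCommit(camera);

        OSPWorld world = ospNewWorld();          // real runs attach volumes here
        ospCommit(world);

        OSPRenderer renderer = ospNewRenderer("scivis");
        ospCommit(renderer);

        OSPFrameBuffer fb = ospNewFrameBuffer(512, 512, OSP_FB_SRGBA, OSP_FB_COLOR);
        ospRenderFrameBlocking(fb, renderer, camera, world);   // one frame

        ospRelease(camera); ospRelease(world); ospRelease(renderer); ospRelease(fb);
        ospShutdown();
    }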

OSPRay 2.12, Benchmark: particle_volume/scivis/real_time (Items Per Second, More Is Better)
0xd000390: 24.95 (SE +/- 0.01, N = 3; Min: 24.93 / Max: 24.97)
0xd0003a5: 16.38 (SE +/- 0.01, N = 3; Min: 16.37 / Max: 16.4)

OSPRay 2.12, Benchmark: particle_volume/ao/real_time (Items Per Second, More Is Better)
0xd000390: 24.75 (SE +/- 0.11, N = 3; Min: 24.55 / Max: 24.93)
0xd0003a5: 16.46 (SE +/- 0.01, N = 3; Min: 16.44 / Max: 16.48)

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5, Model: BERT-Large, NLP Question Answering, Sparse INT8, Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
0xd000390: 1051.79 (SE +/- 1.88, N = 3; Min: 1049.78 / Max: 1055.54)
0xd0003a5: 869.04 (SE +/- 1.83, N = 3; Min: 865.38 / Max: 870.89)

Neural Magic DeepSparse 1.5, Model: BERT-Large, NLP Question Answering, Sparse INT8, Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
0xd000390: 37.99 (SE +/- 0.07, N = 3; Min: 37.86 / Max: 38.07)
0xd0003a5: 45.96 (SE +/- 0.10, N = 3; Min: 45.85 / Max: 46.15)

QMCPACK

QMCPACK is a modern high-performance open-source Quantum Monte Carlo (QMC) simulation code making use of MPI for this benchmark of the H2O example code. QMCPACK is an open-source production-level many-body ab initio Quantum Monte Carlo code for computing the electronic structure of atoms, molecules, and solids. QMCPACK is supported by the U.S. Department of Energy. Learn more via the OpenBenchmarking.org test page.
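
This is not QMCPACK source code, only a minimal MPI sketch of the pattern such QMC benchmarks rely on: each rank advances its own share of Monte Carlo walkers, then the per-walker energies are reduced to rank 0. The walker count and the constant placeholder energy are invented for illustration.

    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const long total_walkers = 1 << 20;       // hypothetical walker count
        long local = total_walkers / size;        // walkers owned by this rank
        double local_energy = 0.0;
        for (long w = 0; w < local; ++w)
            local_energy += -0.5;                 // placeholder per-walker energy

        double global_energy = 0.0;
        MPI_Reduce(&local_energy, &global_energy, 1, MPI_DOUBLE, MPI_SUM, 0,
                   MPI_COMM_WORLD);
        if (rank == 0)
            printf("mean energy: %f\n", global_energy / total_walkers);
        MPI_Finalize();
    }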

QMCPACK 3.16, Input: FeCO6_b3lyp_gms (Total Execution Time - Seconds, Fewer Is Better)
0xd000390: 147.51 (SE +/- 0.12, N = 3; Min: 147.28 / Max: 147.64)
0xd0003a5: 178.19 (SE +/- 0.31, N = 3; Min: 177.74 / Max: 178.79)
1. (CXX) g++ options: -fopenmp -foffload=disable -finline-limit=1000 -fstrict-aliasing -funroll-all-loops -ffast-math -march=native -O3 -lm -ldl

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.
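
As a reference point for what the parser's API looks like, here is a minimal sketch of simdjson's On-Demand interface, assuming a local twitter.json file (the field names match simdjson's bundled sample document):

    #include <iostream>
    #include "simdjson.h"

    int main() {
        using namespace simdjson;
        ondemand::parser parser;
        // Load the JSON into a padded buffer (simdjson requires trailing padding).
        padded_string json = padded_string::load("twitter.json");
        // Parse lazily; fields are decoded only when accessed.
        ondemand::document tweets = parser.iterate(json);
        std::cout << uint64_t(tweets["search_metadata"]["count"])
                  << " results." << std::endl;
    }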

simdjson 2.0, Throughput Test: LargeRandom (GB/s, More Is Better)
0xd000390: 0.85 (SE +/- 0.00, N = 3; Min: 0.85 / Max: 0.85)
0xd0003a5: 0.96 (SE +/- 0.00, N = 3; Min: 0.96 / Max: 0.96)
1. (CXX) g++ options: -O3

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.12, Benchmark: gravity_spheres_volume/dim_512/ao/real_time (Items Per Second, More Is Better)
0xd000390: 21.08 (SE +/- 0.05, N = 3; Min: 21.02 / Max: 21.18)
0xd0003a5: 18.89 (SE +/- 0.02, N = 3; Min: 18.86 / Max: 18.93)

OSPRay 2.12, Benchmark: gravity_spheres_volume/dim_512/scivis/real_time (Items Per Second, More Is Better)
0xd000390: 20.58 (SE +/- 0.08, N = 3; Min: 20.46 / Max: 20.72)
0xd0003a5: 18.46 (SE +/- 0.03, N = 3; Min: 18.42 / Max: 18.51)

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5, Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8, Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
0xd000390: 17.35 (SE +/- 0.01, N = 3; Min: 17.34 / Max: 17.37)
0xd0003a5: 19.31 (SE +/- 0.03, N = 3; Min: 19.25 / Max: 19.35)

Neural Magic DeepSparse 1.5, Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8, Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
0xd000390: 2301.16 (SE +/- 1.14, N = 3; Min: 2299.03 / Max: 2302.92)
0xd0003a5: 2068.30 (SE +/- 3.15, N = 3; Min: 2063.88 / Max: 2074.4)

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.12, Device: CPU, Batch Size: 512, Model: AlexNet (images/sec, More Is Better)
0xd000390: 760.15 (SE +/- 3.77, N = 3; Min: 752.64 / Max: 764.55)
0xd0003a5: 839.41 (SE +/- 2.61, N = 3; Min: 835.77 / Max: 844.47)

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 2.0, Throughput Test: Kostya (GB/s, More Is Better)
0xd000390: 2.61 (SE +/- 0.00, N = 3; Min: 2.61 / Max: 2.61)
0xd0003a5: 2.87 (SE +/- 0.00, N = 3; Min: 2.87 / Max: 2.88)
1. (CXX) g++ options: -O3

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.12, Benchmark: particle_volume/pathtracer/real_time (Items Per Second, More Is Better)
0xd000390: 150.28 (SE +/- 0.54, N = 3; Min: 149.29 / Max: 151.15)
0xd0003a5: 136.85 (SE +/- 0.45, N = 3; Min: 136.07 / Max: 137.64)

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.12, Device: CPU, Batch Size: 256, Model: AlexNet (images/sec, More Is Better)
0xd000390: 723.27 (SE +/- 2.99, N = 3; Min: 719.1 / Max: 729.07)
0xd0003a5: 781.25 (SE +/- 4.27, N = 3; Min: 776.4 / Max: 789.77)

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.
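
A minimal sketch of NCNN's C++ inference flow: the googlenet.param/googlenet.bin file names, the thread count, and the blob names ("data", "prob") are assumptions that depend on the actual model files.

    #include <cstdio>
    #include "net.h"   // ncnn

    int main() {
        ncnn::Net net;
        net.opt.num_threads = 160;          // assumption: use all logical CPUs
        // Hypothetical file names; NCNN models ship as a .param/.bin pair.
        if (net.load_param("googlenet.param") || net.load_model("googlenet.bin"))
            return 1;

        ncnn::Mat in(224, 224, 3);          // dummy 224x224 3-channel input
        in.fill(0.5f);

        ncnn::Extractor ex = net.create_extractor();
        ex.input("data", in);               // input blob name depends on the model
        ncnn::Mat out;
        ex.extract("prob", out);            // output blob name depends on the model
        printf("output blob: %d elements\n", out.w);
    }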

NCNN 20230517, Target: CPU, Model: googlenet (ms, Fewer Is Better)
0xd000390: 15.36 (SE +/- 0.17, N = 3; run Min: 15.03 / Max: 15.62; latency MIN: 14.6 / MAX: 182.22)
0xd0003a5: 16.58 (SE +/- 0.48, N = 3; run Min: 15.78 / Max: 17.44; latency MIN: 15.29 / MAX: 39.63)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20230517, Target: CPU, Model: vgg16 (ms, Fewer Is Better)
0xd000390: 23.86 (SE +/- 0.25, N = 3; run Min: 23.36 / Max: 24.17; latency MIN: 23.05 / MAX: 47.41)
0xd0003a5: 25.71 (SE +/- 0.19, N = 3; run Min: 25.46 / Max: 26.09; latency MIN: 24.88 / MAX: 62.96)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5, Model: ResNet-50, Sparse INT8, Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
0xd000390: 5.2210 (SE +/- 0.0056, N = 3; Min: 5.21 / Max: 5.23)
0xd0003a5: 5.6210 (SE +/- 0.0018, N = 3; Min: 5.62 / Max: 5.62)

Neural Magic DeepSparse 1.5, Model: ResNet-50, Sparse INT8, Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
0xd000390: 7633.26 (SE +/- 8.14, N = 3; Min: 7617.56 / Max: 7644.82)
0xd0003a5: 7092.50 (SE +/- 2.44, N = 3; Min: 7087.81 / Max: 7096.03)

libxsmm

Libxsmm is an open-source library for specialized dense and sparse matrix operations and deep learning primitives. Libxsmm supports making use of Intel AMX, AVX-512, and other modern CPU instruction set capabilities. Learn more via the OpenBenchmarking.org test page.
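
To illustrate the kind of specialized small-kernel code at issue (this is not the libxsmm API), here is a hand-written AVX-512 micro-kernel in the same spirit as the small dense kernels libxsmm JIT-generates; the shapes are invented for the example and it compiles with -mavx512f:

    #include <immintrin.h>

    // Computes C(16x1) += A(16xK) * B(Kx1) for column-major A with
    // leading dimension 16, using one 16-wide FMA per column of A.
    void gemv16(const float* A, const float* B, float* C, int K) {
        __m512 c = _mm512_loadu_ps(C);                 // 16 accumulators
        for (int k = 0; k < K; ++k) {
            __m512 a = _mm512_loadu_ps(A + 16 * k);    // one column of A
            __m512 b = _mm512_set1_ps(B[k]);           // broadcast B[k]
            c = _mm512_fmadd_ps(a, b, c);              // fused multiply-add
        }
        _mm512_storeu_ps(C, c);
    }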

libxsmm 2-1.17-3645, M N K: 64 (GFLOPS/s, More Is Better)
0xd000390: 1098.8 (SE +/- 8.60, N = 3; Min: 1081.8 / Max: 1109.6)
0xd0003a5: 1177.1 (SE +/- 13.09, N = 15; Min: 1080.8 / Max: 1235.2)
1. (CXX) g++ options: -dynamic -Bstatic -static-libgcc -lgomp -lm -lrt -ldl -lquadmath -lstdc++ -pthread -fPIC -std=c++14 -O2 -fopenmp-simd -funroll-loops -ftree-vectorize -fdata-sections -ffunction-sections -fvisibility=hidden -msse4.2

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5, Model: CV Detection, YOLOv5s COCO, Sparse INT8, Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
0xd000390: 93.54 (SE +/- 0.07, N = 3; Min: 93.42 / Max: 93.64)
0xd0003a5: 100.19 (SE +/- 1.28, N = 4; Min: 98.8 / Max: 104.02)

Neural Magic DeepSparse 1.5, Model: CV Detection, YOLOv5s COCO, Sparse INT8, Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
0xd000390: 426.95 (SE +/- 0.21, N = 3; Min: 426.57 / Max: 427.3)
0xd0003a5: 398.72 (SE +/- 4.97, N = 4; Min: 383.81 / Max: 404.35)

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library that offers high-performance volume computation kernels and is part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenVKL 1.3.1, Benchmark: vklBenchmark ISPC (Items / Sec, More Is Better)
0xd000390: 912 (SE +/- 1.53, N = 3; run Min: 909 / Max: 914; MIN: 140 / MAX: 7236)
0xd0003a5: 856 (SE +/- 0.88, N = 3; run Min: 854 / Max: 857; MIN: 137 / MAX: 7211)

HeFFTe - Highly Efficient FFT for Exascale

HeFFTe is the Highly Efficient FFT for Exascale software developed as part of the Exascale Computing Project. This test profile uses HeFFTe's built-in speed benchmarks under a variety of configuration options and currently catering to CPU/processor testing. Learn more via the OpenBenchmarking.org test page.
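
A minimal sketch of heFFTe's C++ interface, under the assumption of a single rank owning an entire 128^3 grid with the FFTW backend (mirroring the 128 configurations tested here); distributed runs instead split the inbox/outbox boxes across MPI ranks:

    #include <mpi.h>
    #include <vector>
    #include <complex>
    #include "heffte.h"

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        // One rank owns the whole 128^3 grid in this sketch.
        heffte::box3d<> inbox({0, 0, 0}, {127, 127, 127});
        heffte::box3d<> outbox = inbox;
        heffte::fft3d<heffte::backend::fftw> fft(inbox, outbox, MPI_COMM_WORLD);

        std::vector<std::complex<double>> input(fft.size_inbox());
        std::vector<std::complex<double>> output(fft.size_outbox());
        fft.forward(input.data(), output.data());   // c2c forward transform
        MPI_Finalize();
    }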

HeFFTe - Highly Efficient FFT for Exascale 2.3, Test: r2c, Backend: FFTW, Precision: double, X Y Z: 128 (GFLOP/s, More Is Better)
0xd000390: 144.94 (SE +/- 1.81, N = 4; Min: 140.6 / Max: 148.82)
0xd0003a5: 153.79 (SE +/- 1.71, N = 3; Min: 151.39 / Max: 157.09)
1. (CXX) g++ options: -O3

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs (and GPUs via SYCL) and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.
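
A minimal sketch of the Embree 4 C API: build a one-triangle scene and trace a single ray against it (error checking omitted).

    #include <embree4/rtcore.h>
    #include <cstdio>
    #include <limits>

    int main() {
        RTCDevice device = rtcNewDevice(nullptr);
        RTCScene scene = rtcNewScene(device);
        RTCGeometry geom = rtcNewGeometry(device, RTC_GEOMETRY_TYPE_TRIANGLE);

        // One triangle in the z = 0 plane.
        float* v = (float*)rtcSetNewGeometryBuffer(geom, RTC_BUFFER_TYPE_VERTEX,
                     0, RTC_FORMAT_FLOAT3, 3 * sizeof(float), 3);
        v[0]=0; v[1]=0; v[2]=0;  v[3]=1; v[4]=0; v[5]=0;  v[6]=0; v[7]=1; v[8]=0;
        unsigned* idx = (unsigned*)rtcSetNewGeometryBuffer(geom,
                     RTC_BUFFER_TYPE_INDEX, 0, RTC_FORMAT_UINT3,
                     3 * sizeof(unsigned), 1);
        idx[0]=0; idx[1]=1; idx[2]=2;
        rtcCommitGeometry(geom);
        rtcAttachGeometry(scene, geom);
        rtcReleaseGeometry(geom);
        rtcCommitScene(scene);

        // One ray from z = -1 straight toward the triangle.
        RTCRayHit rh{};
        rh.ray.org_x = 0.2f; rh.ray.org_y = 0.2f; rh.ray.org_z = -1.f;
        rh.ray.dir_z = 1.f;
        rh.ray.tnear = 0.f;
        rh.ray.tfar = std::numeric_limits<float>::infinity();
        rh.ray.mask = 0xFFFFFFFFu;
        rh.hit.geomID = RTC_INVALID_GEOMETRY_ID;
        rtcIntersect1(scene, &rh);   // Embree 4: the context argument is optional
        printf("hit: %s\n", rh.hit.geomID != RTC_INVALID_GEOMETRY_ID ? "yes" : "no");

        rtcReleaseScene(scene);
        rtcReleaseDevice(device);
    }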

Embree 4.1, Binary: Pathtracer ISPC, Model: Crown (Frames Per Second, More Is Better)
0xd000390: 88.19 (SE +/- 0.07, N = 3; run Min: 88.09 / Max: 88.34; MIN: 85.24 / MAX: 92.72)
0xd0003a5: 83.46 (SE +/- 0.32, N = 3; run Min: 83.01 / Max: 84.09; MIN: 80.27 / MAX: 87.69)

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5, Model: NLP Text Classification, BERT base uncased SST2, Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
0xd000390: 307.87 (SE +/- 0.57, N = 3; Min: 306.76 / Max: 308.64)
0xd0003a5: 293.12 (SE +/- 1.05, N = 3; Min: 291.74 / Max: 295.19)

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20230517, Target: CPU, Model: resnet18 (ms, Fewer Is Better)
0xd000390: 8.97 (SE +/- 0.14, N = 3; run Min: 8.82 / Max: 9.25; latency MIN: 8.63 / MAX: 27.27)
0xd0003a5: 9.42 (SE +/- 0.09, N = 2; run Min: 9.33 / Max: 9.51; latency MIN: 9.21 / MAX: 32.8)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20230517, Target: CPU, Model: squeezenet_ssd (ms, Fewer Is Better)
0xd000390: 15.34 (SE +/- 0.12, N = 3; run Min: 15.16 / Max: 15.58; latency MIN: 14.63 / MAX: 39.65)
0xd0003a5: 16.10 (SE +/- 0.25, N = 3; run Min: 15.77 / Max: 16.58; latency MIN: 15.43 / MAX: 48.02)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5, Model: NLP Text Classification, BERT base uncased SST2, Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
0xd000390: 129.81 (SE +/- 0.25, N = 3; Min: 129.5 / Max: 130.3)
0xd0003a5: 136.18 (SE +/- 0.60, N = 3; Min: 134.98 / Max: 136.91)

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.
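
A minimal sketch of the ONNX Runtime C++ API for a CPU session; model.onnx, the 1x3x224x224 shape, and the "input"/"output" tensor names are placeholders that depend on which Model Zoo model is being run.

    #include <onnxruntime_cxx_api.h>
    #include <vector>
    #include <cstdio>

    int main() {
        Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "bench");
        Ort::SessionOptions opts;
        opts.SetIntraOpNumThreads(0);   // 0 = let the runtime choose
        Ort::Session session(env, "model.onnx", opts);   // placeholder path

        std::vector<int64_t> shape{1, 3, 224, 224};      // placeholder shape
        std::vector<float> data(1 * 3 * 224 * 224, 0.f); // dummy input
        Ort::MemoryInfo mem = Ort::MemoryInfo::CreateCpu(OrtArenaAllocator,
                                                         OrtMemTypeDefault);
        Ort::Value input = Ort::Value::CreateTensor<float>(
            mem, data.data(), data.size(), shape.data(), shape.size());

        const char* in_names[] = {"input"};     // placeholder I/O names
        const char* out_names[] = {"output"};
        auto outputs = session.Run(Ort::RunOptions{nullptr}, in_names, &input, 1,
                                   out_names, 1);
        printf("got %zu output tensor(s)\n", outputs.size());
    }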

ONNX Runtime 1.14, Model: yolov4, Device: CPU, Executor: Standard (Inferences Per Second, More Is Better)
0xd000390: 11.66 (SE +/- 0.11, N = 7; Min: 11.3 / Max: 11.98)
0xd0003a5: 11.15 (SE +/- 0.13, N = 15; Min: 10.41 / Max: 12.01)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20230517, Target: CPU, Model: blazeface (ms, Fewer Is Better)
0xd000390: 4.35 (SE +/- 0.06, N = 3; run Min: 4.25 / Max: 4.47; latency MIN: 4.16 / MAX: 4.97)
0xd0003a5: 4.54 (SE +/- 0.07, N = 3; run Min: 4.47 / Max: 4.69; latency MIN: 4.37 / MAX: 5.17)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5, Model: BERT-Large, NLP Question Answering, Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
0xd000390: 98.34 (SE +/- 0.12, N = 3; Min: 98.11 / Max: 98.47)
0xd0003a5: 94.26 (SE +/- 0.51, N = 3; Min: 93.26 / Max: 94.95)

QMCPACK

QMCPACK is a modern high-performance open-source Quantum Monte Carlo (QMC) simulation code making use of MPI for this benchmark of the H2O example code. QMCPACK is an open-source production-level many-body ab initio Quantum Monte Carlo code for computing the electronic structure of atoms, molecules, and solids. QMCPACK is supported by the U.S. Department of Energy. Learn more via the OpenBenchmarking.org test page.

QMCPACK 3.16, Input: simple-H2O (Total Execution Time - Seconds, Fewer Is Better)
0xd000390: 39.56 (SE +/- 0.12, N = 3; Min: 39.39 / Max: 39.8)
0xd0003a5: 41.25 (SE +/- 0.02, N = 3; Min: 41.2 / Max: 41.28)
1. (CXX) g++ options: -fopenmp -foffload=disable -finline-limit=1000 -fstrict-aliasing -funroll-all-loops -ffast-math -march=native -O3 -lm -ldl

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5, Model: NLP Text Classification, DistilBERT mnli, Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
0xd000390: 62.01 (SE +/- 0.23, N = 3; Min: 61.62 / Max: 62.42)
0xd0003a5: 64.44 (SE +/- 0.78, N = 3; Min: 62.91 / Max: 65.43)

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.12, Device: CPU, Batch Size: 256, Model: GoogLeNet (images/sec, More Is Better)
0xd000390: 309.63 (SE +/- 1.89, N = 3; Min: 306.14 / Max: 312.63)
0xd0003a5: 321.72 (SE +/- 2.10, N = 3; Min: 317.53 / Max: 323.84)

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5, Model: NLP Text Classification, DistilBERT mnli, Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
0xd000390: 644.44 (SE +/- 2.45, N = 3; Min: 640.17 / Max: 648.66)
0xd0003a5: 620.25 (SE +/- 7.47, N = 3; Min: 610.87 / Max: 635.02)

Neural Magic DeepSparse 1.5, Model: BERT-Large, NLP Question Answering, Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
0xd000390: 405.97 (SE +/- 0.42, N = 3; Min: 405.31 / Max: 406.76)
0xd0003a5: 421.75 (SE +/- 2.15, N = 3; Min: 418.23 / Max: 425.64)

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20230517, Target: CPU, Model: vision_transformer (ms, Fewer Is Better)
0xd000390: 46.92 (SE +/- 1.17, N = 3; run Min: 44.59 / Max: 48.21; latency MIN: 43.11 / MAX: 881.49)
0xd0003a5: 45.23 (SE +/- 0.31, N = 3; run Min: 44.7 / Max: 45.76; latency MIN: 43.43 / MAX: 73.39)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.12, Device: CPU, Batch Size: 256, Model: ResNet-50 (images/sec, More Is Better)
0xd000390: 83.89 (SE +/- 0.37, N = 3; Min: 83.45 / Max: 84.63)
0xd0003a5: 86.93 (SE +/- 0.80, N = 9; Min: 84.03 / Max: 90.03)

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs (and GPUs via SYCL) and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 4.1, Binary: Pathtracer ISPC, Model: Asian Dragon (Frames Per Second, More Is Better)
0xd000390: 104.68 (SE +/- 0.43, N = 3; run Min: 104.05 / Max: 105.52; MIN: 101.9 / MAX: 109.48)
0xd0003a5: 101.10 (SE +/- 0.19, N = 3; run Min: 100.74 / Max: 101.38; MIN: 98.59 / MAX: 105.53)

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 2.0, Throughput Test: DistinctUserID (GB/s, More Is Better)
0xd000390: 5.52 (SE +/- 0.02, N = 3; Min: 5.5 / Max: 5.56)
0xd0003a5: 5.71 (SE +/- 0.00, N = 3; Min: 5.71 / Max: 5.71)
1. (CXX) g++ options: -O3

simdjson 2.0, Throughput Test: PartialTweets (GB/s, More Is Better)
0xd000390: 4.62 (SE +/- 0.01, N = 3; Min: 4.6 / Max: 4.64)
0xd0003a5: 4.77 (SE +/- 0.01, N = 3; Min: 4.76 / Max: 4.78)
1. (CXX) g++ options: -O3

SPECFEM3D

SPECFEM3D simulates acoustic (fluid), elastic (solid), coupled acoustic/elastic, poroelastic, or seismic wave propagation in any type of conforming mesh of hexahedra. This test profile currently relies on CPU-based execution for SPECFEM3D and uses a variety of the built-in examples/models for benchmarking. Learn more via the OpenBenchmarking.org test page.

SPECFEM3D 4.0, Model: Tomographic Model (Seconds, Fewer Is Better)
0xd000390: 14.57 (SE +/- 0.02, N = 3; Min: 14.54 / Max: 14.62)
0xd0003a5: 14.15 (SE +/- 0.15, N = 3; Min: 13.86 / Max: 14.36)
1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

miniBUDE

MiniBUDE is a mini application for the core computation of the Bristol University Docking Engine (BUDE). This test profile currently makes use of the OpenMP implementation of miniBUDE for CPU benchmarking. Learn more via the OpenBenchmarking.org test page.
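
This is not miniBUDE source, just a minimal OpenMP sketch of the pattern it benchmarks: an embarrassingly parallel loop over ligand poses, where each pose scores every protein/ligand atom pair. The sizes and the scoring term are placeholders; compile with -fopenmp.

    #include <cstdio>
    #include <vector>

    int main() {
        const int poses = 65536, protein = 938, ligand = 26;  // placeholder sizes
        std::vector<float> energy(poses);

        #pragma omp parallel for schedule(static)
        for (int p = 0; p < poses; ++p) {
            float e = 0.f;
            for (int i = 0; i < protein; ++i)        // all atom pairs for pose p
                for (int j = 0; j < ligand; ++j)
                    e += 1.0f / (1.0f + (float)(i + j + p % 7)); // placeholder term
            energy[p] = e;
        }
        printf("energy[0] = %f\n", energy[0]);
    }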

miniBUDE 20210901, Implementation: OpenMP, Input Deck: BM2 (GFInst/s, More Is Better)
0xd000390: 2526.89 (SE +/- 9.78, N = 3; Min: 2508 / Max: 2540.77)
0xd0003a5: 2601.21 (SE +/- 9.53, N = 3; Min: 2589.09 / Max: 2620.01)
1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm

miniBUDE 20210901, Implementation: OpenMP, Input Deck: BM2 (Billion Interactions/s, More Is Better)
0xd000390: 101.08 (SE +/- 0.39, N = 3; Min: 100.32 / Max: 101.63)
0xd0003a5: 104.05 (SE +/- 0.38, N = 3; Min: 103.56 / Max: 104.8)
1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm

HeFFTe - Highly Efficient FFT for Exascale

HeFFTe is the Highly Efficient FFT for Exascale software developed as part of the Exascale Computing Project. This test profile uses HeFFTe's built-in speed benchmarks under a variety of configuration options and currently catering to CPU/processor testing. Learn more via the OpenBenchmarking.org test page.

HeFFTe - Highly Efficient FFT for Exascale 2.3, Test: c2c, Backend: FFTW, Precision: float, X Y Z: 128 (GFLOP/s, More Is Better)
0xd000390: 154.73 (SE +/- 1.53, N = 5; Min: 151.93 / Max: 160.45)
0xd0003a5: 159.10 (SE +/- 1.01, N = 3; Min: 157.87 / Max: 161.1)
1. (CXX) g++ options: -O3

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 2.0, Throughput Test: TopTweet (GB/s, More Is Better)
0xd000390: 5.60 (SE +/- 0.03, N = 3; Min: 5.56 / Max: 5.65)
0xd0003a5: 5.75 (SE +/- 0.01, N = 3; Min: 5.74 / Max: 5.76)
1. (CXX) g++ options: -O3

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5, Model: CV Detection, YOLOv5s COCO, Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
0xd000390: 94.61 (SE +/- 0.07, N = 3; Min: 94.48 / Max: 94.7)
0xd0003a5: 96.94 (SE +/- 0.29, N = 3; Min: 96.4 / Max: 97.4)

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.14, Model: bertsquad-12, Device: CPU, Executor: Standard (Inferences Per Second, More Is Better)
0xd000390: 16.71 (SE +/- 0.08, N = 3; Min: 16.57 / Max: 16.84)
0xd0003a5: 16.31 (SE +/- 0.26, N = 12; Min: 13.58 / Max: 16.87)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5, Model: CV Detection, YOLOv5s COCO, Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
0xd000390: 422.12 (SE +/- 0.36, N = 3; Min: 421.71 / Max: 422.84)
0xd0003a5: 412.05 (SE +/- 1.24, N = 3; Min: 410.22 / Max: 414.41)

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.
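
A minimal sketch, assuming libvpx headers are installed, of how a "Speed 5" VP9 encode is configured through the library: the VP8E_SET_CPUUSED control (shared by the VP9 encoder) selects the speed/quality trade-off, and the resolution and thread count here are placeholders. Frame I/O is omitted.

    #include <cstdio>
    #include "vpx/vpx_encoder.h"
    #include "vpx/vp8cx.h"

    int main() {
        vpx_codec_ctx_t codec;
        vpx_codec_enc_cfg_t cfg;
        vpx_codec_iface_t* iface = vpx_codec_vp9_cx();

        if (vpx_codec_enc_config_default(iface, &cfg, 0)) return 1;
        cfg.g_w = 3840;            // Bosphorus 4K
        cfg.g_h = 2160;
        cfg.g_threads = 16;        // placeholder thread count

        if (vpx_codec_enc_init(&codec, iface, &cfg, 0)) return 1;
        vpx_codec_control(&codec, VP8E_SET_CPUUSED, 5);   // "Speed 5"

        // ... feed frames with vpx_codec_encode() and drain packets ...
        vpx_codec_destroy(&codec);
        printf("encoder configured\n");
    }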

VP9 libvpx Encoding 1.13, Speed: Speed 5, Input: Bosphorus 4K (Frames Per Second, More Is Better)
0xd000390: 12.63 (SE +/- 0.12, N = 3; Min: 12.39 / Max: 12.8)
0xd0003a5: 12.33 (SE +/- 0.13, N = 3; Min: 12.1 / Max: 12.54)
1. (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=gnu++11

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20230517, Target: CPU, Model: mnasnet (ms, Fewer Is Better)
0xd000390: 7.57 (SE +/- 0.10, N = 3; run Min: 7.45 / Max: 7.77; latency MIN: 7.28 / MAX: 30.26)
0xd0003a5: 7.41 (SE +/- 0.04, N = 3; run Min: 7.35 / Max: 7.49; latency MIN: 7.15 / MAX: 31.17)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.14, Model: ResNet50 v1-12-int8, Device: CPU, Executor: Standard (Inferences Per Second, More Is Better)
0xd000390: 221.02 (SE +/- 1.39, N = 3; Min: 218.26 / Max: 222.64)
0xd0003a5: 216.48 (SE +/- 1.91, N = 8; Min: 211.9 / Max: 224.15)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.12, Device: CPU, Batch Size: 512, Model: GoogLeNet (images/sec, More Is Better)
0xd000390: 317.27 (SE +/- 0.87, N = 3; Min: 316.02 / Max: 318.94)
0xd0003a5: 323.79 (SE +/- 0.43, N = 3; Min: 323.22 / Max: 324.64)

QMCPACK

QMCPACK is a modern high-performance open-source Quantum Monte Carlo (QMC) simulation code making use of MPI for this benchmark of the H2O example code. QMCPACK is an open-source production-level many-body ab initio Quantum Monte Carlo code for computing the electronic structure of atoms, molecules, and solids. QMCPACK is supported by the U.S. Department of Energy. Learn more via the OpenBenchmarking.org test page.

QMCPACK 3.16, Input: FeCO6_b3lyp_gms (Total Execution Time - Seconds, Fewer Is Better)
0xd000390: 268.56 (SE +/- 3.60, N = 3; Min: 261.97 / Max: 274.38)
0xd0003a5: 263.23 (SE +/- 2.19, N = 3; Min: 259.25 / Max: 266.81)
1. (CXX) g++ options: -fopenmp -foffload=disable -finline-limit=1000 -fstrict-aliasing -funroll-all-loops -ffast-math -march=native -O3 -lm -ldl

libxsmm

Libxsmm is an open-source library for specialized dense and sparse matrix operations and deep learning primitives. Libxsmm supports making use of Intel AMX, AVX-512, and other modern CPU instruction set capabilities. Learn more via the OpenBenchmarking.org test page.

libxsmm 2-1.17-3645, M N K: 128 (GFLOPS/s, More Is Better)
0xd000390: 1941.1 (SE +/- 32.53, N = 7; Min: 1808.5 / Max: 2050.2)
0xd0003a5: 1978.9 (SE +/- 20.92, N = 3; Min: 1938.1 / Max: 2007.3)
1. (CXX) g++ options: -dynamic -Bstatic -static-libgcc -lgomp -lm -lrt -ldl -lquadmath -lstdc++ -pthread -fPIC -std=c++14 -O2 -fopenmp-simd -funroll-loops -ftree-vectorize -fdata-sections -ffunction-sections -fvisibility=hidden -msse4.2

HeFFTe - Highly Efficient FFT for Exascale

HeFFTe is the Highly Efficient FFT for Exascale software developed as part of the Exascale Computing Project. This test profile uses HeFFTe's built-in speed benchmarks under a variety of configuration options and currently catering to CPU/processor testing. Learn more via the OpenBenchmarking.org test page.

HeFFTe - Highly Efficient FFT for Exascale 2.3, Test: r2c, Backend: FFTW, Precision: float, X Y Z: 128 (GFLOP/s, More Is Better)
0xd000390: 195.20 (SE +/- 0.92, N = 3; Min: 193.69 / Max: 196.86)
0xd0003a5: 198.87 (SE +/- 1.08, N = 3; Min: 196.71 / Max: 200.09)
1. (CXX) g++ options: -O3

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5, Model: CV Classification, ResNet-50 ImageNet, Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
0xd000390: 1005.58 (SE +/- 1.41, N = 3; Min: 1003.93 / Max: 1008.39)
0xd0003a5: 987.55 (SE +/- 2.17, N = 3; Min: 983.81 / Max: 991.34)

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format; this test encodes a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.6 - Encoder Mode: Preset 12 - Input: Bosphorus 4K (Frames Per Second, More Is Better):
  0xd000390: 180.97 (SE +/- 1.29, N = 3; Min: 178.73 / Avg: 180.97 / Max: 183.19)
  0xd0003a5: 177.72 (SE +/- 1.20, N = 3; Min: 175.43 / Avg: 177.72 / Max: 179.5)
  1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better):
  0xd000390: 39.73 (SE +/- 0.06, N = 3; Min: 39.61 / Avg: 39.73 / Max: 39.79)
  0xd0003a5: 40.45 (SE +/- 0.10, N = 3; Min: 40.29 / Avg: 40.45 / Max: 40.63)

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.
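The built-in benchmarking support is OpenVINO's benchmark_app utility; a minimal CPU run, with the model path as a placeholder, looks roughly like:

    benchmark_app -m model.xml -d CPU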

OpenVINO 2022.3 - Model: Machine Translation EN To DE FP16 - Device: CPU (ms, Fewer Is Better):
  0xd000390: 79.47 (SE +/- 0.22, N = 3; Min: 79.16 / Avg: 79.47 / Max: 79.9; run MIN: 62.57 / MAX: 232.01)
  0xd0003a5: 78.18 (SE +/- 0.19, N = 3; Min: 77.8 / Avg: 78.18 / Max: 78.43; run MIN: 66.08 / MAX: 194.38)
  1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.
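NCNN's timings here come from its bundled benchncnn tool, whose positional arguments are loop count, thread count, powersave mode, GPU device (-1 for CPU only), and cooling-down; a representative CPU invocation might be:

    ./benchncnn 10 80 0 -1 0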

NCNN 20230517 - Target: CPU - Model: yolov4-tiny (ms, Fewer Is Better):
  0xd000390: 23.71 (SE +/- 0.22, N = 3; Min: 23.27 / Avg: 23.71 / Max: 23.95; run MIN: 22.57 / MAX: 46.11)
  0xd0003a5: 24.10 (SE +/- 0.17, N = 3; Min: 23.77 / Avg: 24.1 / Max: 24.31; run MIN: 23.25 / MAX: 51.47)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3 - Model: Machine Translation EN To DE FP16 - Device: CPU (FPS, More Is Better):
  0xd000390: 251.00 (SE +/- 0.68, N = 3; Min: 249.68 / Avg: 251 / Max: 251.96)
  0xd0003a5: 255.09 (SE +/- 0.63, N = 3; Min: 254.24 / Avg: 255.09 / Max: 256.33)
  1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

SPECFEM3D

SPECFEM3D simulates acoustic (fluid), elastic (solid), coupled acoustic/elastic, poroelastic, or seismic wave propagation in any type of conforming mesh of hexahedra. This test profile currently relies on CPU-based execution for SPECFEM3D, using a variety of its built-in examples/models for benchmarking. Learn more via the OpenBenchmarking.org test page.
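Each model below goes through SPECFEM3D's usual three-stage MPI workflow; a sketch, assuming the standard binary names from a SPECFEM3D build:

    mpirun -np 160 ./bin/xmeshfem3D
    mpirun -np 160 ./bin/xgenerate_databases
    mpirun -np 160 ./bin/xspecfem3D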

SPECFEM3D 4.0 - Model: Mount St. Helens (Seconds, Fewer Is Better):
  0xd000390: 13.15 (SE +/- 0.18, N = 3; Min: 12.94 / Avg: 13.15 / Max: 13.52)
  0xd0003a5: 12.95 (SE +/- 0.12, N = 3; Min: 12.75 / Avg: 12.95 / Max: 13.18)
  1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

GROMACS

The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing with the water_GMX50 data. This test profile allows selecting between CPU and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.
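The MPI CPU result below boils down to an mdrun call against the water_GMX50_bare topology; a minimal sketch (the .tpr filename and step count are placeholders):

    gmx mdrun -s water_GMX50_bare.tpr -nsteps 1000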

GROMACS 2023 - Implementation: MPI CPU - Input: water_GMX50_bare (Ns Per Day, More Is Better):
  0xd000390: 9.234 (SE +/- 0.021, N = 3; Min: 9.19 / Avg: 9.23 / Max: 9.26)
  0xd0003a5: 9.094 (SE +/- 0.026, N = 3; Min: 9.04 / Avg: 9.09 / Max: 9.13)
  1. (CXX) g++ options: -O3

SPECFEM3D

SPECFEM3D simulates acoustic (fluid), elastic (solid), coupled acoustic/elastic, poroelastic, or seismic wave propagation in any type of conforming mesh of hexahedra. This test profile currently relies on CPU-based execution for SPECFEM3D, using a variety of its built-in examples/models for benchmarking. Learn more via the OpenBenchmarking.org test page.

SPECFEM3D 4.0 - Model: Homogeneous Halfspace (Seconds, Fewer Is Better):
  0xd000390: 18.02 (SE +/- 0.15, N = 3; Min: 17.86 / Avg: 18.02 / Max: 18.33)
  0xd0003a5: 17.75 (SE +/- 0.16, N = 3; Min: 17.54 / Avg: 17.75 / Max: 18.06)
  1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3 - Model: Person Vehicle Bike Detection FP16 - Device: CPU (FPS, More Is Better):
  0xd000390: 2039.63 (SE +/- 1.77, N = 3; Min: 2036.14 / Avg: 2039.63 / Max: 2041.89)
  0xd0003a5: 2070.72 (SE +/- 3.75, N = 3; Min: 2064.98 / Avg: 2070.72 / Max: 2077.78)
  1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

OpenVINO 2022.3 - Model: Person Vehicle Bike Detection FP16 - Device: CPU (ms, Fewer Is Better):
  0xd000390: 9.77 (SE +/- 0.01, N = 3; Min: 9.76 / Avg: 9.77 / Max: 9.79; run MIN: 7.83 / MAX: 20.01)
  0xd0003a5: 9.63 (SE +/- 0.02, N = 3; Min: 9.6 / Avg: 9.63 / Max: 9.66; run MIN: 8.33 / MAX: 19.33)
  1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

Palabos

The Palabos library is a framework for general purpose Computational Fluid Dynamics (CFD). Palabos uses a kernel based on the Lattice Boltzmann method. This test profile uses the Palabos MPI-based Cavity3D benchmark. Learn more via the OpenBenchmarking.org test page.
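The Cavity3D benchmark takes the cubic grid size as its main input; a hedged sketch of an MPI launch matching the Grid Size 400 result below, assuming the benchmark binary is named cavity3d:

    mpirun -np 160 ./cavity3d 400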

Palabos 2.3 - Grid Size: 400 (Mega Site Updates Per Second, More Is Better):
  0xd000390: 388.48 (SE +/- 0.94, N = 3; Min: 386.65 / Avg: 388.48 / Max: 389.75)
  0xd0003a5: 393.84 (SE +/- 0.06, N = 3; Min: 393.72 / Avg: 393.84 / Max: 393.93)
  1. (CXX) g++ options: -std=c++17 -pedantic -O3 -rdynamic -lcrypto -lcurl -lsz -lz -ldl -lm

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20230517 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, Fewer Is Better):
  0xd000390: 8.88 (SE +/- 0.05, N = 3; Min: 8.79 / Avg: 8.88 / Max: 8.95; run MIN: 8.69 / MAX: 32.33)
  0xd0003a5: 8.76 (SE +/- 0.12, N = 3; Min: 8.54 / Avg: 8.76 / Max: 8.95; run MIN: 8.3 / MAX: 32.6)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

HeFFTe - Highly Efficient FFT for Exascale

HeFFTe is the Highly Efficient FFT for Exascale software developed as part of the Exascale Computing Project. This test profile uses HeFFTe's built-in speed benchmarks under a variety of configuration options, currently catering to CPU/processor testing. Learn more via the OpenBenchmarking.org test page.

HeFFTe 2.3 - Test: c2c - Backend: FFTW - Precision: double - X Y Z: 256 (GFLOP/s, More Is Better):
  0xd000390: 46.35 (SE +/- 0.25, N = 3; Min: 45.85 / Avg: 46.35 / Max: 46.65)
  0xd0003a5: 46.98 (SE +/- 0.61, N = 3; Min: 45.76 / Avg: 46.98 / Max: 47.63)
  1. (CXX) g++ options: -O3

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20230517 - Target: CPU - Model: shufflenet-v2 (ms, Fewer Is Better):
  0xd000390: 9.89 (SE +/- 0.10, N = 3; Min: 9.74 / Avg: 9.89 / Max: 10.09; run MIN: 9.61 / MAX: 33.59)
  0xd0003a5: 9.76 (SE +/- 0.13, N = 3; Min: 9.51 / Avg: 9.76 / Max: 9.91; run MIN: 9.32 / MAX: 33.4)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that the Phoronix Test Suite also offers pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.
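The reference benchmark behind the result below is invoked along these lines, using tf_cnn_benchmarks.py's documented options:

    python tf_cnn_benchmarks.py --device=cpu --model=resnet50 --batch_size=512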

TensorFlow 2.12 - Device: CPU - Batch Size: 512 - Model: ResNet-50 (images/sec, More Is Better):
  0xd000390: 85.97 (SE +/- 1.13, N = 3; Min: 83.72 / Avg: 85.97 / Max: 87.24)
  0xd0003a5: 84.84 (SE +/- 1.17, N = 3; Min: 82.71 / Avg: 84.84 / Max: 86.76)

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20230517 - Target: CPU - Model: alexnet (ms, Fewer Is Better):
  0xd000390: 5.39 (SE +/- 0.16, N = 3; Min: 5.07 / Avg: 5.39 / Max: 5.59; run MIN: 4.83 / MAX: 151.52)
  0xd0003a5: 5.46 (SE +/- 0.15, N = 3; Min: 5.19 / Avg: 5.46 / Max: 5.7; run MIN: 5.01 / MAX: 29.14)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20230517 - Target: CPU - Model: mobilenet (ms, Fewer Is Better):
  0xd000390: 15.46 (SE +/- 0.09, N = 3; Min: 15.28 / Avg: 15.46 / Max: 15.56; run MIN: 14.85 / MAX: 106.56)
  0xd0003a5: 15.66 (SE +/- 0.07, N = 3; Min: 15.56 / Avg: 15.66 / Max: 15.8; run MIN: 15.24 / MAX: 38.75)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Remhos

Remhos (REMap High-Order Solver) is a miniapp that solves the pure advection equations that are used to perform monotonic and conservative discontinuous field interpolation (remap) as part of the Eulerian phase in Arbitrary Lagrangian-Eulerian (ALE) simulations. Learn more via the OpenBenchmarking.org test page.
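Remhos is driven from the command line with a mesh and problem selection; the exact options behind the Sample Remap Example are not recorded in this file, so the following is only a shape-of-the-command sketch with placeholder flags:

    mpirun -np 160 ./remhos -m ./data/cube01_hex.mesh -p 1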

Remhos 1.0 - Test: Sample Remap Example (Seconds, Fewer Is Better):
  0xd000390: 12.25 (SE +/- 0.03, N = 3; Min: 12.2 / Avg: 12.25 / Max: 12.28)
  0xd0003a5: 12.40 (SE +/- 0.04, N = 3; Min: 12.34 / Avg: 12.4 / Max: 12.47)
  1. (CXX) g++ options: -O3 -std=c++11 -lmfem -lHYPRE -lmetis -lrt -lmpi_cxx -lmpi

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format; this test encodes a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.6 - Encoder Mode: Preset 13 - Input: Bosphorus 4K (Frames Per Second, More Is Better):
  0xd000390: 175.10 (SE +/- 0.79, N = 3; Min: 173.72 / Avg: 175.1 / Max: 176.46)
  0xd0003a5: 177.20 (SE +/- 2.02, N = 3; Min: 174.87 / Avg: 177.2 / Max: 181.22)
  1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

HeFFTe - Highly Efficient FFT for Exascale

HeFFTe is the Highly Efficient FFT for Exascale software developed as part of the Exascale Computing Project. This test profile uses HeFFTe's built-in speed benchmarks under a variety of configuration options, currently catering to CPU/processor testing. Learn more via the OpenBenchmarking.org test page.

HeFFTe 2.3 - Test: r2c - Backend: FFTW - Precision: double - X Y Z: 256 (GFLOP/s, More Is Better):
  0xd000390: 93.75 (SE +/- 1.09, N = 3; Min: 91.58 / Avg: 93.75 / Max: 95.11)
  0xd0003a5: 94.86 (SE +/- 1.13, N = 3; Min: 92.94 / Avg: 94.86 / Max: 96.86)
  1. (CXX) g++ options: -O3

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format; this test encodes a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.6 - Encoder Mode: Preset 8 - Input: Bosphorus 4K (Frames Per Second, More Is Better):
  0xd000390: 67.17 (SE +/- 0.47, N = 3; Min: 66.24 / Avg: 67.17 / Max: 67.7)
  0xd0003a5: 66.46 (SE +/- 0.46, N = 3; Min: 65.62 / Avg: 66.46 / Max: 67.19)
  1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

HeFFTe - Highly Efficient FFT for Exascale

HeFFTe is the Highly Efficient FFT for Exascale software developed as part of the Exascale Computing Project. This test profile uses HeFFTe's built-in speed benchmarks under a variety of configuration options, currently catering to CPU/processor testing. Learn more via the OpenBenchmarking.org test page.

HeFFTe 2.3 - Test: r2c - Backend: FFTW - Precision: float - X Y Z: 256 (GFLOP/s, More Is Better):
  0xd000390: 224.42 (SE +/- 2.25, N = 3; Min: 219.98 / Avg: 224.42 / Max: 227.31)
  0xd0003a5: 226.78 (SE +/- 2.85, N = 3; Min: 221.5 / Avg: 226.78 / Max: 231.26)
  1. (CXX) g++ options: -O3

Palabos

The Palabos library is a framework for general purpose Computational Fluid Dynamics (CFD). Palabos uses a kernel based on the Lattice Boltzmann method. This test profile uses the Palabos MPI-based Cavity3D benchmark. Learn more via the OpenBenchmarking.org test page.

Palabos 2.3 - Grid Size: 500 (Mega Site Updates Per Second, More Is Better):
  0xd000390: 413.21 (SE +/- 0.42, N = 3; Min: 412.59 / Avg: 413.21 / Max: 414.01)
  0xd0003a5: 417.48 (SE +/- 0.72, N = 3; Min: 416.07 / Avg: 417.48 / Max: 418.42)
  1. (CXX) g++ options: -std=c++17 -pedantic -O3 -rdynamic -lcrypto -lcurl -lsz -lz -ldl -lm

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.
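ONNX Runtime ships an onnxruntime_perf_test utility for this style of measurement; a sketch, assuming its common flags (-e for execution provider, -r for repetitions) and a placeholder model path:

    onnxruntime_perf_test -e cpu -r 100 arcfaceresnet100-8.onnx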

ONNX Runtime 1.14 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better):
  0xd000390: 39.12 (SE +/- 0.46, N = 3; Min: 38.44 / Avg: 39.12 / Max: 40)
  0xd0003a5: 38.72 (SE +/- 0.38, N = 15; Min: 35.67 / Avg: 38.72 / Max: 40.33)
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better):
  0xd000390: 84.02 (SE +/- 0.32, N = 3; Min: 83.39 / Avg: 84.02 / Max: 84.4)
  0xd0003a5: 83.20 (SE +/- 0.18, N = 3; Min: 82.85 / Avg: 83.2 / Max: 83.46)

libxsmm

Libxsmm is an open-source library for specialized dense and sparse matrix operations and deep learning primitives. Libxsmm supports making use of Intel AMX, AVX-512, and other modern CPU instruction set capabilities. Learn more via the OpenBenchmarking.org test page.

libxsmm 2-1.17-3645 - M N K: 256 (GFLOPS/s, More Is Better):
  0xd000390: 594.6 (SE +/- 2.33, N = 3; Min: 590.9 / Avg: 594.57 / Max: 598.9)
  0xd0003a5: 600.2 (SE +/- 2.54, N = 3; Min: 596.2 / Avg: 600.17 / Max: 604.9)
  1. (CXX) g++ options: -dynamic -Bstatic -static-libgcc -lgomp -lm -lrt -ldl -lquadmath -lstdc++ -pthread -fPIC -std=c++14 -O2 -fopenmp-simd -funroll-loops -ftree-vectorize -fdata-sections -ffunction-sections -fvisibility=hidden -msse4.2

HeFFTe - Highly Efficient FFT for Exascale

HeFFTe is the Highly Efficient FFT for Exascale software developed as part of the Exascale Computing Project. This test profile uses HeFFTe's built-in speed benchmarks under a variety of configuration options, currently catering to CPU/processor testing. Learn more via the OpenBenchmarking.org test page.

HeFFTe 2.3 - Test: c2c - Backend: FFTW - Precision: float - X Y Z: 256 (GFLOP/s, More Is Better):
  0xd000390: 101.98 (SE +/- 0.27, N = 3; Min: 101.59 / Avg: 101.98 / Max: 102.5)
  0xd0003a5: 102.92 (SE +/- 0.20, N = 3; Min: 102.55 / Avg: 102.92 / Max: 103.23)
  1. (CXX) g++ options: -O3

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format; this test encodes a sample YUV video file. Learn more via the OpenBenchmarking.org test page.
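A tuning-10 4K encode of the kind below maps onto an SvtHevcEncApp command roughly as follows (flag names per the SVT-HEVC application's conventions; the raw YUV input is a placeholder):

    SvtHevcEncApp -encMode 10 -w 3840 -h 2160 -i Bosphorus_3840x2160.yuv -b output.265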

SVT-HEVC 1.5.0 - Tuning: 10 - Input: Bosphorus 4K (Frames Per Second, More Is Better):
  0xd000390: 184.38 (SE +/- 2.03, N = 3; Min: 181 / Avg: 184.38 / Max: 188.03)
  0xd0003a5: 182.74 (SE +/- 0.38, N = 3; Min: 181.98 / Avg: 182.74 / Max: 183.21)
  1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

HeFFTe - Highly Efficient FFT for Exascale

HeFFTe is the Highly Efficient FFT for Exascale software developed as part of the Exascale Computing Project. This test profile uses HeFFTe's built-in speed benchmarks under a variety of configuration options, currently catering to CPU/processor testing. Learn more via the OpenBenchmarking.org test page.

HeFFTe 2.3 - Test: c2c - Backend: FFTW - Precision: double - X Y Z: 128 (GFLOP/s, More Is Better):
  0xd000390: 93.09 (SE +/- 0.15, N = 3; Min: 92.83 / Avg: 93.09 / Max: 93.33)
  0xd0003a5: 93.92 (SE +/- 0.28, N = 3; Min: 93.37 / Avg: 93.92 / Max: 94.31)
  1. (CXX) g++ options: -O3

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better):
  0xd000390: 72.18 (SE +/- 0.15, N = 3; Min: 71.92 / Avg: 72.18 / Max: 72.43)
  0xd0003a5: 71.54 (SE +/- 0.12, N = 3; Min: 71.39 / Avg: 71.54 / Max: 71.78)

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20230517 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, Fewer Is Better):
  0xd000390: 8.03 (SE +/- 0.09, N = 3; Min: 7.9 / Avg: 8.03 / Max: 8.21; run MIN: 7.8 / MAX: 31.34)
  0xd0003a5: 7.96 (SE +/- 0.04, N = 3; Min: 7.9 / Avg: 7.96 / Max: 8.03; run MIN: 7.75 / MAX: 31.75)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better):
  0xd000390: 474.69 (SE +/- 1.00, N = 3; Min: 473.52 / Avg: 474.69 / Max: 476.67)
  0xd0003a5: 478.85 (SE +/- 1.39, N = 3; Min: 476.4 / Avg: 478.85 / Max: 481.21)

Neural Magic DeepSparse 1.5 - Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better):
  0xd000390: 922.93 (SE +/- 11.47, N = 3; Min: 902.3 / Avg: 922.93 / Max: 941.92)
  0xd0003a5: 930.75 (SE +/- 1.22, N = 3; Min: 928.58 / Avg: 930.75 / Max: 932.79)

Neural Magic DeepSparse 1.5 - Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better):
  0xd000390: 43.30 (SE +/- 0.53, N = 3; Min: 42.43 / Avg: 43.3 / Max: 44.27)
  0xd0003a5: 42.94 (SE +/- 0.06, N = 3; Min: 42.85 / Avg: 42.94 / Max: 43.04)

miniBUDE

MiniBUDE is a mini application for the core computation of the Bristol University Docking Engine (BUDE). This test profile currently makes use of the OpenMP implementation of miniBUDE for CPU benchmarking. Learn more via the OpenBenchmarking.org test page.
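miniBUDE's OpenMP build is pointed at an input deck directory; a hedged sketch for the BM1 runs below, assuming the bude binary and the --deck/-i options from the miniBUDE repository:

    ./bude --deck ../data/bm1 -i 8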

miniBUDE 20210901 - Implementation: OpenMP - Input Deck: BM1 (GFInst/s, More Is Better):
  0xd000390: 2353.39 (SE +/- 3.60, N = 3; Min: 2346.18 / Avg: 2353.39 / Max: 2357.09)
  0xd0003a5: 2372.42 (SE +/- 10.90, N = 3; Min: 2360.68 / Avg: 2372.42 / Max: 2394.19)
  1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm

miniBUDE 20210901 - Implementation: OpenMP - Input Deck: BM1 (Billion Interactions/s, More Is Better):
  0xd000390: 94.14 (SE +/- 0.14, N = 3; Min: 93.85 / Avg: 94.14 / Max: 94.28)
  0xd0003a5: 94.90 (SE +/- 0.44, N = 3; Min: 94.43 / Avg: 94.9 / Max: 95.77)
  1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm

QMCPACK

QMCPACK is a modern high-performance open-source Quantum Monte Carlo (QMC) simulation code making use of MPI for this benchmark of the H2O example code. QMCPACK is an open-source production-level many-body ab initio Quantum Monte Carlo code for computing the electronic structure of atoms, molecules, and solids. QMCPACK is supported by the U.S. Department of Energy. Learn more via the OpenBenchmarking.org test page.
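Each input below is an XML driver file passed to the MPI build of QMCPACK; schematically (the exact input filename is an assumption):

    mpirun -np 160 qmcpack Li2_STO_ae.xml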

QMCPACK 3.16 - Input: Li2_STO_ae (Total Execution Time - Seconds, Fewer Is Better):
  0xd000390: 124.23 (SE +/- 1.55, N = 3; Min: 121.55 / Avg: 124.23 / Max: 126.91)
  0xd0003a5: 123.26 (SE +/- 1.03, N = 3; Min: 121.46 / Avg: 123.26 / Max: 125.01)
  1. (CXX) g++ options: -fopenmp -foffload=disable -finline-limit=1000 -fstrict-aliasing -funroll-all-loops -ffast-math -march=native -O3 -lm -ldl

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20230517 - Target: CPU - Model: efficientnet-b0 (ms, Fewer Is Better):
  0xd000390: 11.62 (SE +/- 0.38, N = 3; Min: 11.09 / Avg: 11.62 / Max: 12.35; run MIN: 10.82 / MAX: 21.03)
  0xd0003a5: 11.71 (SE +/- 0.11, N = 3; Min: 11.52 / Avg: 11.71 / Max: 11.91; run MIN: 11.15 / MAX: 19.87)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3 - Model: Weld Porosity Detection FP16 - Device: CPU (ms, Fewer Is Better):
  0xd000390: 33.89 (SE +/- 0.05, N = 3; Min: 33.8 / Avg: 33.89 / Max: 33.97; run MIN: 29.83 / MAX: 113.13)
  0xd0003a5: 33.63 (SE +/- 0.06, N = 3; Min: 33.51 / Avg: 33.63 / Max: 33.69; run MIN: 29.54 / MAX: 113.55)
  1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

OpenVINO 2022.3 - Model: Weld Porosity Detection FP16 - Device: CPU (FPS, More Is Better):
  0xd000390: 2344.97 (SE +/- 3.01, N = 3; Min: 2339.78 / Avg: 2344.97 / Max: 2350.2)
  0xd0003a5: 2362.84 (SE +/- 3.91, N = 3; Min: 2358.54 / Avg: 2362.84 / Max: 2370.65)
  1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

SPECFEM3D

SPECFEM3D simulates acoustic (fluid), elastic (solid), coupled acoustic/elastic, poroelastic, or seismic wave propagation in any type of conforming mesh of hexahedra. This test profile currently relies on CPU-based execution for SPECFEM3D, using a variety of its built-in examples/models for benchmarking. Learn more via the OpenBenchmarking.org test page.

SPECFEM3D 4.0 - Model: Water-layered Halfspace (Seconds, Fewer Is Better):
  0xd000390: 31.15 (SE +/- 0.24, N = 3; Min: 30.77 / Avg: 31.15 / Max: 31.58)
  0xd0003a5: 31.38 (SE +/- 0.31, N = 5; Min: 30.75 / Avg: 31.38 / Max: 32.41)
  1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed of the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.
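Hash rates such as those below come from cpuminer-opt's offline benchmark mode; for example, for the Myriad-Groestl result that follows (myr-gr being cpuminer-opt's name for that algorithm; the thread count is chosen to match this system):

    ./cpuminer --benchmark -a myr-gr -t 160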

Cpuminer-Opt 3.20.3 - Algorithm: Myriad-Groestl (kH/s, More Is Better):
  0xd000390: 43127 (SE +/- 386.00, N = 15; Min: 40650 / Avg: 43126.67 / Max: 46540)
  0xd0003a5: 43450 (SE +/- 406.32, N = 3; Min: 42680 / Avg: 43450 / Max: 44060)
  1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

libxsmm

Libxsmm is an open-source library for specialized dense and sparse matrix operations and deep learning primitives. Libxsmm supports making use of Intel AMX, AVX-512, and other modern CPU instruction set capabilities. Learn more via the OpenBenchmarking.org test page.

libxsmm 2-1.17-3645 - M N K: 32 (GFLOPS/s, More Is Better):
  0xd000390: 604.7 (SE +/- 8.71, N = 15; Min: 522 / Avg: 604.69 / Max: 641.8)
  0xd0003a5: 609.2 (SE +/- 4.92, N = 3; Min: 602.8 / Avg: 609.23 / Max: 618.9)
  1. (CXX) g++ options: -dynamic -Bstatic -static-libgcc -lgomp -lm -lrt -ldl -lquadmath -lstdc++ -pthread -fPIC -std=c++14 -O2 -fopenmp-simd -funroll-loops -ftree-vectorize -fdata-sections -ffunction-sections -fvisibility=hidden -msse4.2

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better):
  0xd000390: 1006.34 (SE +/- 0.56, N = 3; Min: 1005.29 / Avg: 1006.34 / Max: 1007.19)
  0xd0003a5: 1013.82 (SE +/- 0.31, N = 3; Min: 1013.43 / Avg: 1013.82 / Max: 1014.43)

Neural Magic DeepSparse 1.5 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better):
  0xd000390: 173.79 (SE +/- 0.34, N = 3; Min: 173.17 / Avg: 173.79 / Max: 174.33)
  0xd0003a5: 172.56 (SE +/- 1.64, N = 3; Min: 169.3 / Avg: 172.56 / Max: 174.42)

Neural Magic DeepSparse 1.5 - Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better):
  0xd000390: 39.70 (SE +/- 0.02, N = 3; Min: 39.68 / Avg: 39.7 / Max: 39.73)
  0xd0003a5: 39.41 (SE +/- 0.01, N = 3; Min: 39.39 / Avg: 39.41 / Max: 39.43)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result reported is the total perf time. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.
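benchdnn encodes each harness below in driver flags; a hedged sketch for the bf16 inner-product (IP) case, following benchdnn's general syntax (the exact batch-file naming behind "IP Shapes 3D" is an assumption):

    ./benchdnn --ip --mode=P --cfg=bf16bf16bf16 --batch=inputs/ip/shapes_3d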

oneDNN 3.1 - Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better):
  0xd000390: 2.59271 (SE +/- 0.03265, N = 15; Min: 2.3 / Avg: 2.59 / Max: 2.77; run MIN: 1.91)
  0xd0003a5: 2.57437 (SE +/- 0.02489, N = 15; Min: 2.4 / Avg: 2.57 / Max: 2.76; run MIN: 1.99)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

Timed MrBayes Analysis

This test performs a Bayesian analysis of a set of primate genome sequences in order to estimate their phylogeny. Learn more via the OpenBenchmarking.org test page.
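The timed run simply executes a scripted MrBayes analysis over the primate alignment; schematically (the NEXUS filename is a placeholder):

    mb primates.nex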

Timed MrBayes Analysis 3.2.7 - Primate Phylogeny Analysis (Seconds, Fewer Is Better):
  0xd000390: 166.53 (SE +/- 1.36, N = 3; Min: 164.75 / Avg: 166.53 / Max: 169.2)
  0xd0003a5: 165.42 (SE +/- 0.98, N = 3; Min: 163.96 / Avg: 165.42 / Max: 167.28)
  1. (CC) gcc options: -mmmx -msse -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msha -maes -mavx -mfma -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -mrdrnd -mbmi -mbmi2 -madx -mabm -O3 -std=c99 -pedantic -lm -lreadline

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed of the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

Cpuminer-Opt 3.20.3 - Algorithm: Skeincoin (kH/s, More Is Better):
  0xd000390: 613333 (SE +/- 1652.89, N = 3; Min: 610640 / Avg: 613333.33 / Max: 616340)
  0xd0003a5: 617130 (SE +/- 3788.20, N = 3; Min: 612640 / Avg: 617130 / Max: 624660)
  1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better):
  0xd000390: 230.05 (SE +/- 0.46, N = 3; Min: 229.32 / Avg: 230.05 / Max: 230.89)
  0xd0003a5: 231.43 (SE +/- 2.37, N = 3; Min: 228.94 / Avg: 231.43 / Max: 236.16)

Neural Magic DeepSparse 1.5 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better):
  0xd000390: 551.82 (SE +/- 0.62, N = 3; Min: 550.61 / Avg: 551.82 / Max: 552.65)
  0xd0003a5: 555.09 (SE +/- 1.12, N = 3; Min: 552.85 / Avg: 555.09 / Max: 556.34)

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3 - Model: Face Detection FP16 - Device: CPU (FPS, More Is Better):
  0xd000390: 24.04 (SE +/- 0.01, N = 3; Min: 24.02 / Avg: 24.04 / Max: 24.07)
  0xd0003a5: 24.18 (SE +/- 0.02, N = 3; Min: 24.14 / Avg: 24.18 / Max: 24.2)
  1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

OpenVINO 2022.3 - Model: Vehicle Detection FP16-INT8 - Device: CPU (FPS, More Is Better):
  0xd000390: 4419.17 (SE +/- 3.29, N = 3; Min: 4412.62 / Avg: 4419.17 / Max: 4423.01)
  0xd0003a5: 4442.98 (SE +/- 2.64, N = 3; Min: 4438.67 / Avg: 4442.98 / Max: 4447.77)
  1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

OpenVINO 2022.3 - Model: Face Detection FP16 - Device: CPU (ms, Fewer Is Better):
  0xd000390: 827.51 (SE +/- 0.42, N = 3; Min: 826.68 / Avg: 827.51 / Max: 828.07; run MIN: 628.21 / MAX: 980.48)
  0xd0003a5: 823.09 (SE +/- 0.71, N = 3; Min: 821.98 / Avg: 823.09 / Max: 824.4; run MIN: 550.41 / MAX: 926.21)
  1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

Blender

Blender 3.6 - Blend File: Fishy Cat - Compute: CPU-Only (Seconds, Fewer Is Better):
  0xd000390: 30.74 (SE +/- 0.06, N = 3; Min: 30.67 / Avg: 30.74 / Max: 30.87)
  0xd0003a5: 30.90 (SE +/- 0.02, N = 3; Min: 30.86 / Avg: 30.9 / Max: 30.94)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result reported is the total perf time. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.1 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better):
  0xd000390: 524.38 (SE +/- 7.19, N = 3; Min: 510.41 / Avg: 524.38 / Max: 534.32; run MIN: 499.75)
  0xd0003a5: 521.74 (SE +/- 2.82, N = 3; Min: 517.01 / Avg: 521.74 / Max: 526.75; run MIN: 505.72)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

CloverLeaf

CloverLeaf is a Lagrangian-Eulerian hydrodynamics benchmark. This test profile currently makes use of CloverLeaf's OpenMP version and is benchmarked with the clover_bm.in input file (Problem 5). Learn more via the OpenBenchmarking.org test page.
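CloverLeaf reads its problem definition from a clover.in file in the working directory, so a run amounts to copying the bundled deck into place and launching the binary (paths are assumptions based on the CloverLeaf repository layout):

    cp InputDecks/clover_bm.in clover.in && ./clover_leaf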

CloverLeaf - Lagrangian-Eulerian Hydrodynamics (Seconds, Fewer Is Better):
  0xd000390: 12.04 (SE +/- 0.06, N = 3; Min: 11.93 / Avg: 12.04 / Max: 12.14)
  0xd0003a5: 11.98 (SE +/- 0.09, N = 3; Min: 11.8 / Avg: 11.98 / Max: 12.08)
  1. (F9X) gfortran options: -O3 -march=native -funroll-loops -fopenmp

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed of the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

Cpuminer-Opt 3.20.3 - Algorithm: Quad SHA-256, Pyrite (kH/s, More Is Better):
  0xd000390: 921730 (SE +/- 3352.41, N = 3; Min: 918160 / Avg: 921730 / Max: 928430)
  0xd0003a5: 926277 (SE +/- 1690.96, N = 3; Min: 923140 / Avg: 926276.67 / Max: 928940)
  1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

VVenC

VVenC is the Fraunhofer Versatile Video Encoder, a fast/efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe). The vvenc software is published under the Clear BSD License. Learn more via the OpenBenchmarking.org test page.
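The vvencapp front end exposes the presets used below directly; a minimal sketch with placeholder input/output names:

    vvencapp --preset faster -i Bosphorus_3840x2160.y4m -o output.266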

VVenC 1.9 - Video Input: Bosphorus 4K - Video Preset: Faster (Frames Per Second, More Is Better):
  0xd000390: 10.36 (SE +/- 0.09, N = 3; Min: 10.18 / Avg: 10.36 / Max: 10.49)
  0xd0003a5: 10.42 (SE +/- 0.06, N = 3; Min: 10.3 / Avg: 10.42 / Max: 10.49)
  1. (CXX) g++ options: -O3 -flto=auto -fno-fat-lto-objects

Blender

Blender 3.6 - Blend File: BMW27 - Compute: CPU-Only (Seconds, Fewer Is Better):
  0xd000390: 23.83 (SE +/- 0.06, N = 3; Min: 23.74 / Avg: 23.83 / Max: 23.93)
  0xd0003a5: 23.72 (SE +/- 0.03, N = 3; Min: 23.69 / Avg: 23.72 / Max: 23.77)

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3 - Model: Vehicle Detection FP16-INT8 - Device: CPU (ms, Fewer Is Better):
  0xd000390: 4.51 (SE +/- 0.00, N = 3; Min: 4.51 / Avg: 4.51 / Max: 4.52; run MIN: 4.02 / MAX: 13.95)
  0xd0003a5: 4.49 (SE +/- 0.00, N = 3; Min: 4.48 / Avg: 4.49 / Max: 4.49; run MIN: 4.04 / MAX: 15.85)
  1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

SPECFEM3D

SPECFEM3D simulates acoustic (fluid), elastic (solid), coupled acoustic/elastic, poroelastic, or seismic wave propagation in any type of conforming mesh of hexahedra. This test profile currently relies on CPU-based execution for SPECFEM3D, using a variety of its built-in examples/models for benchmarking. Learn more via the OpenBenchmarking.org test page.

SPECFEM3D 4.0 - Model: Layered Halfspace (Seconds, Fewer Is Better):
  0xd000390: 29.50 (SE +/- 0.05, N = 3; Min: 29.4 / Avg: 29.5 / Max: 29.59)
  0xd0003a5: 29.37 (SE +/- 0.18, N = 3; Min: 29.04 / Avg: 29.37 / Max: 29.65)
  1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3 - Model: Vehicle Detection FP16 - Device: CPU (FPS, More Is Better):
  0xd000390: 1121.56 (SE +/- 0.88, N = 3; Min: 1120.19 / Avg: 1121.56 / Max: 1123.2)
  0xd0003a5: 1117.09 (SE +/- 1.14, N = 3; Min: 1115.13 / Avg: 1117.09 / Max: 1119.08)
  1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

OpenVINO 2022.3 - Model: Vehicle Detection FP16 - Device: CPU (ms, Fewer Is Better):
  0xd000390: 17.80 (SE +/- 0.01, N = 3; Min: 17.77 / Avg: 17.8 / Max: 17.82; run MIN: 12.83 / MAX: 32.64)
  0xd0003a5: 17.87 (SE +/- 0.02, N = 3; Min: 17.83 / Avg: 17.87 / Max: 17.9; run MIN: 12.24 / MAX: 38.45)
  1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better):
  0xd000390: 551.12 (SE +/- 0.77, N = 3; Min: 550.25 / Avg: 551.12 / Max: 552.66)
  0xd0003a5: 553.27 (SE +/- 1.19, N = 3; Min: 551.33 / Avg: 553.27 / Max: 555.44)

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3 - Model: Person Detection FP16 - Device: CPU (ms, Fewer Is Better):
  0xd000390: 1490.76 (SE +/- 2.35, N = 3; Min: 1488.05 / Avg: 1490.76 / Max: 1495.44; run MIN: 1074.22 / MAX: 1692.48)
  0xd0003a5: 1496.19 (SE +/- 0.81, N = 3; Min: 1494.71 / Avg: 1496.19 / Max: 1497.48; run MIN: 1043.7 / MAX: 1711.44)
  1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed of the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

Cpuminer-Opt 3.20.3 - Algorithm: LBC, LBRY Credits (kH/s, More Is Better):
  0xd000390: 421660 (SE +/- 313.42, N = 3; Min: 421230 / Avg: 421660 / Max: 422270)
  0xd0003a5: 423130 (SE +/- 860.95, N = 3; Min: 422200 / Avg: 423130 / Max: 424850)
  1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Cpuminer-Opt 3.20.3 - Algorithm: Deepcoin (kH/s, More Is Better):
  0xd000390: 64677 (SE +/- 89.69, N = 3; Min: 64550 / Avg: 64676.67 / Max: 64850)
  0xd0003a5: 64897 (SE +/- 187.02, N = 3; Min: 64690 / Avg: 64896.67 / Max: 65270)
  1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3 - Model: Person Detection FP16 - Device: CPU (FPS, More Is Better):
  0xd000390: 13.29 (SE +/- 0.02, N = 3; Min: 13.25 / Avg: 13.29 / Max: 13.31)
  0xd0003a5: 13.25 (SE +/- 0.01, N = 3; Min: 13.23 / Avg: 13.25 / Max: 13.26)
  1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

VVenC

VVenC is the Fraunhofer Versatile Video Encoder, a fast/efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe). The vvenc software is published under the Clear BSD License. Learn more via the OpenBenchmarking.org test page.

VVenC 1.9 - Video Input: Bosphorus 4K - Video Preset: Fast (Frames Per Second, More Is Better):
  0xd000390: 5.722 (SE +/- 0.033, N = 3; Min: 5.69 / Avg: 5.72 / Max: 5.79)
  0xd0003a5: 5.705 (SE +/- 0.029, N = 3; Min: 5.65 / Avg: 5.71 / Max: 5.75)
  1. (CXX) g++ options: -O3 -flto=auto -fno-fat-lto-objects

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better):
  0xd000390: 72.07 (SE +/- 0.09, N = 3; Min: 71.97 / Avg: 72.07 / Max: 72.24)
  0xd0003a5: 71.89 (SE +/- 0.10, N = 3; Min: 71.69 / Avg: 71.89 / Max: 72.01)

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3 - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (FPS, more is better)
  0xd000390: 9396.52 (SE +/- 3.74, N = 3; Min: 9389.56 / Max: 9402.37)
  0xd0003a5: 9419.77 (SE +/- 2.19, N = 3; Min: 9415.76 / Max: 9423.32)
  Compiler: (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn functionality. The result is the total performance time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.
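
benchdnn is a command-line harness bundled with oneDNN; the sketch below shows roughly how a bf16 deconvolution timing run might be launched. The driver flags and the shape-batch path are assumptions modeled on benchdnn's documented usage and may differ between oneDNN versions.

    import subprocess

    # Performance-mode benchdnn run over oneDNN's bundled 1D deconvolution
    # shape list in bf16. Flag spellings and the batch file path are
    # assumptions; see the benchdnn docs shipped with your oneDNN tree.
    subprocess.run(
        ["./benchdnn", "--deconv",
         "--mode=P",                       # P = performance timing mode
         "--dt=bf16:bf16:bf16",            # src:weights:dst data types
         "--batch=inputs/deconv/shapes_1d"],
        check=True)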

oneDNN 3.1 - Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
  0xd000390: 3.90360 (SE +/- 0.00202, N = 3; Min: 3.9 / Max: 3.91; MIN: 3.68)
  0xd0003a5: 3.91322 (SE +/- 0.00080, N = 3; Min: 3.91 / Max: 3.91; MIN: 3.69)
  Compiler: (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

dav1d

Dav1d is an open-source, speedy AV1 video decoder supporting modern SIMD CPU features. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.
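
The profile times dav1d's command-line decoder over a sample clip; a sketch of the equivalent call follows. The clip name is a placeholder, and "--threads 0" (auto-sizing the worker pool) is an assumption based on recent dav1d releases.

    import subprocess

    # Decode an AV1 bitstream as fast as possible, discarding the output;
    # dav1d reports the achieved decode FPS on completion. The input name
    # is a placeholder and "--threads 0" lets dav1d size its own pool.
    subprocess.run(
        ["dav1d", "-i", "chimera_1080p.ivf",
         "-o", "/dev/null", "--threads", "0"],
        check=True)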

dav1d 1.2.1 - Video Input: Chimera 1080p (FPS, more is better)
  0xd000390: 515.81 (SE +/- 0.80, N = 3; Min: 514.88 / Max: 517.4)
  0xd0003a5: 514.58 (SE +/- 0.51, N = 3; Min: 513.61 / Max: 515.33)
  Compiler: (CC) gcc options: -pthread -lm

OpenVINO

OpenVINO 2022.3 - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (ms, fewer is better)
  0xd000390: 8.50 (SE +/- 0.00, N = 3; Min: 8.5 / Max: 8.51; MIN: 7.17 / MAX: 18.39)
  0xd0003a5: 8.48 (SE +/- 0.00, N = 3; Min: 8.48 / Max: 8.48; MIN: 7.15 / MAX: 22.53)
  Compiler: (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

Laghos

Laghos (LAGrangian High-Order Solver) is a miniapp that solves the time-dependent Euler equations of compressible gas dynamics in a moving Lagrangian frame using unstructured high-order finite element spatial discretization and explicit high-order time-stepping. Learn more via the OpenBenchmarking.org test page.
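
Laghos is an MPI miniapp, so a run amounts to an mpirun launch with a problem selector and a mesh; a hedged sketch follows. The rank count, the "-p 3" problem index for the triple-point setup, and the mesh path are assumptions to verify against the Laghos README.

    import subprocess

    # Launch Laghos under MPI across all 160 threads of this system. The
    # problem index (-p 3, assumed to select the 3D triple-point setup)
    # and the mesh file are assumptions based on the Laghos examples.
    subprocess.run(
        ["mpirun", "-np", "160", "./laghos",
         "-p", "3", "-m", "data/box01_hex.mesh"],
        check=True)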

Laghos 3.1 - Test: Triple Point Problem (Major Kernels Total Rate, more is better)
  0xd000390: 256.27 (SE +/- 0.32, N = 3; Min: 255.86 / Max: 256.9)
  0xd0003a5: 256.87 (SE +/- 0.94, N = 3; Min: 255.04 / Max: 258.16)
  Compiler: (CXX) g++ options: -O3 -std=c++11 -lmfem -lHYPRE -lmetis -lrt -lmpi_cxx -lmpi

OpenVINO

OpenVINO 2022.3 - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (FPS, more is better)
  0xd000390: 67604.00 (SE +/- 17.48, N = 3; Min: 67574.1 / Max: 67634.65)
  0xd0003a5: 67754.34 (SE +/- 29.72, N = 3; Min: 67713.95 / Max: 67812.3)
  Compiler: (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.
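
The encoder is exercised through its SvtHevcEncApp front end; below is a hedged sketch of a run matching the "Tuning: 7" 4K result. The raw clip name and dimensions are placeholders, and -encMode is SVT-HEVC's preset/tuning knob.

    import subprocess

    # 4K HEVC encode with SVT-HEVC at tuning/preset level 7. The input
    # name is a placeholder for the sample YUV clip; flag spellings
    # follow SvtHevcEncApp conventions but should be checked with --help.
    subprocess.run(
        ["SvtHevcEncApp",
         "-i", "Bosphorus_3840x2160.yuv",
         "-w", "3840", "-h", "2160",
         "-encMode", "7",
         "-b", "bosphorus.265"],
        check=True)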

SVT-HEVC 1.5.0 - Tuning: 7 - Input: Bosphorus 4K (Frames Per Second, more is better)
  0xd000390: 138.75 (SE +/- 0.60, N = 3; Min: 137.68 / Max: 139.76)
  0xd0003a5: 138.49 (SE +/- 0.44, N = 3; Min: 137.61 / Max: 138.95)
  Compiler: (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

dav1d

dav1d 1.2.1 - Video Input: Summer Nature 4K (FPS, more is better)
  0xd000390: 281.36 (SE +/- 1.06, N = 3; Min: 279.24 / Max: 282.49)
  0xd0003a5: 280.84 (SE +/- 0.81, N = 3; Min: 279.23 / Max: 281.74)
  Compiler: (CC) gcc options: -pthread -lm

Xcompact3d Incompact3d

Xcompact3d Incompact3d is a Fortran-MPI, finite-difference high-performance code for solving the incompressible Navier-Stokes equations along with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.
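
Being Fortran-MPI code, Xcompact3d is launched under mpirun with its .i3d case file; a hedged sketch follows. The binary name, rank count, and the convention of passing the case file as an argument are assumptions for a typical build.

    import subprocess

    # Run the incompressible Navier-Stokes solver on the 193-cells-per-
    # direction case under MPI. Binary name, rank count, and the
    # case-file argument convention are assumptions.
    subprocess.run(
        ["mpirun", "-np", "160", "./xcompact3d", "input.i3d"],
        check=True)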

Xcompact3d Incompact3d 2021-03-11 - Input: input.i3d 193 Cells Per Direction (Seconds, fewer is better)
  0xd000390: 11.02 (SE +/- 0.02, N = 3; Min: 11 / Max: 11.06)
  0xd0003a5: 11.00 (SE +/- 0.02, N = 3; Min: 10.98 / Max: 11.03)
  Compiler: (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

OpenVINO

OpenVINO 2022.3 - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (FPS, more is better)
  0xd000390: 59274.06 (SE +/- 17.38, N = 3; Min: 59253.6 / Max: 59308.63)
  0xd0003a5: 59377.96 (SE +/- 18.68, N = 3; Min: 59346.35 / Max: 59411.02)
  Compiler: (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

OpenVINO 2022.3 - Model: Person Detection FP32 - Device: CPU (ms, fewer is better)
  0xd000390: 1517.69 (SE +/- 0.52, N = 3; Min: 1516.91 / Max: 1518.68; MIN: 1081.6 / MAX: 1690.6)
  0xd0003a5: 1519.84 (SE +/- 0.84, N = 3; Min: 1518.19 / Max: 1520.96; MIN: 1074.08 / MAX: 1721.59)
  Compiler: (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

OpenVINO 2022.3 - Model: Face Detection FP16-INT8 - Device: CPU (ms, fewer is better)
  0xd000390: 209.31 (SE +/- 0.03, N = 3; Min: 209.26 / Max: 209.35; MIN: 160.46 / MAX: 249.09)
  0xd0003a5: 209.02 (SE +/- 0.14, N = 3; Min: 208.75 / Max: 209.17; MIN: 152.83 / MAX: 234.28)
  Compiler: (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

OpenVINO 2022.3 - Model: Face Detection FP16-INT8 - Device: CPU (FPS, more is better)
  0xd000390: 95.42 (SE +/- 0.01, N = 3; Min: 95.41 / Max: 95.44)
  0xd0003a5: 95.54 (SE +/- 0.07, N = 3; Min: 95.47 / Max: 95.67)
  Compiler: (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

Palabos

The Palabos library is a framework for general purpose Computational Fluid Dynamics (CFD). Palabos uses a kernel based on the Lattice Boltzmann method. This test profile uses the Palabos MPI-based Cavity3D benchmark. Learn more via the OpenBenchmarking.org test page.
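
The Cavity3D benchmark is an MPI program whose main parameter is the cubic grid resolution; a hedged launch sketch follows, with the binary name and rank count as assumptions.

    import subprocess

    # Lid-driven cavity benchmark on a 100^3 lattice (the "Grid Size:
    # 100" result below). The cavity3d binary name, rank count, and the
    # positional grid-size argument are assumptions for a typical build.
    subprocess.run(
        ["mpirun", "-np", "160", "./cavity3d", "100"],
        check=True)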

Palabos 2.3 - Grid Size: 100 (Mega Site Updates Per Second, more is better)
  0xd000390: 312.20 (SE +/- 0.89, N = 3; Min: 310.41 / Max: 313.1)
  0xd0003a5: 312.53 (SE +/- 0.20, N = 3; Min: 312.24 / Max: 312.91)
  Compiler: (CXX) g++ options: -std=c++17 -pedantic -O3 -rdynamic -lcrypto -lcurl -lsz -lz -ldl -lm

Cpuminer-Opt

Cpuminer-Opt 3.20.3 - Algorithm: scrypt (kH/s, more is better)
  0xd000390: 2319.31 (SE +/- 1.08, N = 3; Min: 2317.62 / Max: 2321.33)
  0xd0003a5: 2321.74 (SE +/- 8.35, N = 3; Min: 2308.54 / Max: 2337.21)
  Compiler: (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Cpuminer-Opt 3.20.3 - Algorithm: Blake-2 S (kH/s, more is better)
  0xd000390: 4462327 (SE +/- 8325.80, N = 3; Min: 4446020 / Max: 4473400)
  0xd0003a5: 4466653 (SE +/- 8676.85, N = 3; Min: 4453340 / Max: 4482950)
  Compiler: (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

SVT-HEVC

SVT-HEVC 1.5.0 - Tuning: 1 - Input: Bosphorus 4K (Frames Per Second, more is better)
  0xd000390: 10.46 (SE +/- 0.06, N = 3; Min: 10.39 / Max: 10.58)
  0xd0003a5: 10.45 (SE +/- 0.01, N = 3; Min: 10.43 / Max: 10.47)
  Compiler: (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

OpenVINO

OpenVINO 2022.3 - Model: Person Detection FP32 - Device: CPU (FPS, more is better)
  0xd000390: 13.03 (SE +/- 0.00, N = 3; Min: 13.03 / Max: 13.04)
  0xd0003a5: 13.04 (SE +/- 0.01, N = 3; Min: 13.02 / Max: 13.05)
  Compiler: (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

oneDNN

oneDNN 3.1 - Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
  0xd000390: 3.62526 (SE +/- 0.00896, N = 3; Min: 3.61 / Max: 3.64; MIN: 3.54)
  0xd0003a5: 3.62266 (SE +/- 0.00622, N = 3; Min: 3.62 / Max: 3.64; MIN: 3.53)
  Compiler: (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

Cpuminer-Opt

Cpuminer-Opt 3.20.3 - Algorithm: Triple SHA-256, Onecoin (kH/s, more is better)
  0xd000390: 1332237 (SE +/- 7105.40, N = 3; Min: 1324150 / Max: 1346400)
  0xd0003a5: 1333117 (SE +/- 7235.75, N = 3; Min: 1322790 / Max: 1347060)
  Compiler: (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Laghos

Laghos 3.1 - Test: Sedov Blast Wave, ube_922_hex.mesh (Major Kernels Total Rate, more is better)
  0xd000390: 385.89 (SE +/- 0.23, N = 3; Min: 385.44 / Max: 386.16)
  0xd0003a5: 386.08 (SE +/- 0.80, N = 3; Min: 384.76 / Max: 387.52)
  Compiler: (CXX) g++ options: -O3 -std=c++11 -lmfem -lHYPRE -lmetis -lrt -lmpi_cxx -lmpi

Cpuminer-Opt

Cpuminer-Opt 3.20.3 - Algorithm: Magi (kH/s, more is better)
  0xd000390: 2309.47 (SE +/- 3.91, N = 3; Min: 2301.77 / Max: 2314.44)
  0xd0003a5: 2308.66 (SE +/- 1.08, N = 3; Min: 2306.6 / Max: 2310.23)
  Compiler: (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

oneDNN

oneDNN 3.1 - Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
  0xd000390: 2.06936 (SE +/- 0.00038, N = 3; Min: 2.07 / Max: 2.07; MIN: 2.03)
  0xd0003a5: 2.06967 (SE +/- 0.00191, N = 3; Min: 2.07 / Max: 2.07; MIN: 2.03)
  Compiler: (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

Cpuminer-Opt

Cpuminer-Opt 3.20.3 - Algorithm: x25x (kH/s, more is better)
  0xd000390: 2659.17 (SE +/- 4.75, N = 3; Min: 2651.79 / Max: 2668.04)
  0xd0003a5: 2659.55 (SE +/- 5.54, N = 3; Min: 2649.7 / Max: 2668.88)
  Compiler: (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Intel Open Image Denoise

Open Image Denoise is a denoising library for ray-tracing and part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.
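
The results come from OIDN's bundled oidnBenchmark utility; a hedged sketch of invoking one of the runs measured below follows. The binary path and flag spellings are assumptions to check against `oidnBenchmark --help`.

    import subprocess

    # Time the RT.ldr_alb_nrm.3840x2160 denoising run on the CPU device.
    # Binary path and flag spellings are assumptions for the
    # oidnBenchmark tool shipped with Open Image Denoise 2.x.
    subprocess.run(
        ["./oidnBenchmark", "-d", "cpu",
         "-r", "RT.ldr_alb_nrm.3840x2160"],
        check=True)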

Intel Open Image Denoise 2.0 - Run: RTLightmap.hdr.4096x4096 - Device: CPU-Only (Images / Sec, more is better)
  0xd000390: 1.46 (SE +/- 0.00, N = 3; Min: 1.46 / Max: 1.46)
  0xd0003a5: 1.46 (SE +/- 0.00, N = 3; Min: 1.46 / Max: 1.47)

Intel Open Image Denoise 2.0 - Run: RT.ldr_alb_nrm.3840x2160 - Device: CPU-Only (Images / Sec, more is better)
  0xd000390: 3.03 (SE +/- 0.00, N = 3; Min: 3.02 / Max: 3.03)
  0xd0003a5: 3.03 (SE +/- 0.00, N = 3; Min: 3.02 / Max: 3.03)

OpenVINO

OpenVINO 2022.3 - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (ms, fewer is better)
  0xd000390: 1.16 (SE +/- 0.00, N = 3; Min: 1.16 / Max: 1.16; MIN: 0.88 / MAX: 12.61)
  0xd0003a5: 1.16 (SE +/- 0.00, N = 3; Min: 1.16 / Max: 1.16; MIN: 0.86 / MAX: 17.98)
  Compiler: (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

OpenVINO 2022.3 - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (ms, fewer is better)
  0xd000390: 1.33 (SE +/- 0.00, N = 3; Min: 1.33 / Max: 1.33; MIN: 0.97 / MAX: 13)
  0xd0003a5: 1.33 (SE +/- 0.00, N = 3; Min: 1.33 / Max: 1.33; MIN: 0.97 / MAX: 13.43)
  Compiler: (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.
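
NCNN's numbers come from its bundled benchncnn tool, whose positional arguments select loop count, thread count, power-save mode, GPU device (-1 for CPU) and cool-down seconds; a sketch follows, with the specific values illustrative.

    import subprocess

    # benchncnn positional arguments: [loop count] [num threads]
    # [powersave] [gpu device, -1 = CPU] [cooling down seconds].
    # 160 threads matches this system; the other values are illustrative.
    subprocess.run(
        ["./benchncnn", "8", "160", "0", "-1", "0"],
        check=True)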

NCNN 20230517 - Target: CPU - Model: FastestDet (ms, fewer is better)
  0xd000390: 10.20 (SE +/- 0.64, N = 3; Min: 9.48 / Max: 11.48; MIN: 9.01 / MAX: 500.17)
  0xd0003a5: 9.71 (SE +/- 0.07, N = 3; Min: 9.63 / Max: 9.84; MIN: 9.22 / MAX: 27.29)
  Compiler: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20230517 - Target: CPU - Model: regnety_400m (ms, fewer is better)
  0xd000390: 45.54 (SE +/- 7.69, N = 3; Min: 37.74 / Max: 60.92; MIN: 36.01 / MAX: 3343.68)
  0xd0003a5: 38.85 (SE +/- 0.50, N = 3; Min: 37.87 / Max: 39.51; MIN: 37.13 / MAX: 233.44)
  Compiler: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20230517 - Target: CPU - Model: resnet50 (ms, fewer is better)
  0xd000390: 17.15 (SE +/- 0.52, N = 3; Min: 16.38 / Max: 18.14; MIN: 16.19 / MAX: 41.83)
  0xd0003a5: 18.51 (SE +/- 0.69, N = 3; Min: 17.62 / Max: 19.87; MIN: 17.32 / MAX: 299.93)
  Compiler: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.
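
As a rough illustration of what the harness measures, here is a minimal synchronous latency probe using the onnxruntime Python API; the model file name is a placeholder for one of the Model Zoo networks below, and the real runs average many iterations.

    import time
    import numpy as np
    import onnxruntime as ort

    # Load a Model Zoo network on the CPU execution provider and time one
    # synchronous inference. "model.onnx" is a placeholder; dynamic input
    # dimensions are pinned to 1 for the random test tensor.
    sess = ort.InferenceSession("model.onnx",
                                providers=["CPUExecutionProvider"])
    inp = sess.get_inputs()[0]
    shape = [d if isinstance(d, int) else 1 for d in inp.shape]
    data = np.random.rand(*shape).astype(np.float32)

    start = time.perf_counter()
    sess.run(None, {inp.name: data})
    print(f"Inference time: {(time.perf_counter() - start) * 1e3:.2f} ms")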

ONNX Runtime 1.14 - Model: super-resolution-10 - Device: CPU - Executor: Standard (Inference Time Cost (ms), fewer is better)
  0xd000390: 6.31428 (SE +/- 0.00502, N = 3; Min: 6.31 / Max: 6.32)
  0xd0003a5: 6.71968 (SE +/- 0.43315, N = 15; Min: 5 / Max: 8.63)
  Compiler: (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: super-resolution-10 - Device: CPU - Executor: Standard (Inferences Per Second, more is better)
  0xd000390: 158.36 (SE +/- 0.13, N = 3; Min: 158.1 / Max: 158.51)
  0xd0003a5: 158.01 (SE +/- 10.25, N = 15; Min: 115.82 / Max: 200.11)
  Compiler: (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard (Inference Time Cost (ms), fewer is better)
  0xd000390: 110.32 (SE +/- 1.29, N = 15; Min: 105.54 / Max: 120.05)
  0xd0003a5: 117.55 (SE +/- 3.29, N = 15; Min: 105.06 / Max: 133.83)
  Compiler: (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard (Inferences Per Second, more is better)
  0xd000390: 9.08067 (SE +/- 0.10283, N = 15; Min: 8.33 / Max: 9.47)
  0xd0003a5: 8.59908 (SE +/- 0.23491, N = 15; Min: 7.47 / Max: 9.52)
  Compiler: (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard (Inference Time Cost (ms), fewer is better)
  0xd000390: 1.43407 (SE +/- 0.00633, N = 3; Min: 1.42 / Max: 1.45)
  0xd0003a5: 1.51029 (SE +/- 0.02722, N = 15; Min: 1.4 / Max: 1.64)
  Compiler: (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard (Inferences Per Second, more is better)
  0xd000390: 696.73 (SE +/- 3.08, N = 3; Min: 690.83 / Max: 701.24)
  0xd0003a5: 664.54 (SE +/- 12.01, N = 15; Min: 610.7 / Max: 715.69)
  Compiler: (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: GPT-2 - Device: CPU - Executor: Standard (Inference Time Cost (ms), fewer is better)
  0xd000390: 5.54783 (SE +/- 0.00245, N = 3; Min: 5.54 / Max: 5.55)
  0xd0003a5: 5.28095 (SE +/- 0.11956, N = 15; Min: 4.7 / Max: 5.83)
  Compiler: (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: GPT-2 - Device: CPU - Executor: Standard (Inferences Per Second, more is better)
  0xd000390: 180.16 (SE +/- 0.08, N = 3; Min: 180.08 / Max: 180.32)
  0xd0003a5: 190.57 (SE +/- 4.37, N = 15; Min: 171.28 / Max: 212.67)
  Compiler: (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

Cpuminer-Opt

Cpuminer-Opt 3.20.3 - Algorithm: Garlicoin (kH/s, more is better)
  0xd000390: 29203.00 (SE +/- 330.17, N = 3; Min: 28660 / Max: 29800)
  0xd0003a5: 22086.25 (SE +/- 3833.17, N = 12; Min: 1428.76 / Max: 33020)
  Compiler: (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

oneDNN

oneDNN 3.1 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
  0xd000390: 832.45 (SE +/- 16.59, N = 15; Min: 739.17 / Max: 943.33; MIN: 714.88)
  0xd0003a5: 816.51 (SE +/- 14.20, N = 12; Min: 759.67 / Max: 902.57; MIN: 723.44)
  Compiler: (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

169 Results Shown

OSPRay:
  particle_volume/scivis/real_time
  particle_volume/ao/real_time
Neural Magic DeepSparse:
  BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
QMCPACK
simdjson
OSPRay:
  gravity_spheres_volume/dim_512/ao/real_time
  gravity_spheres_volume/dim_512/scivis/real_time
Neural Magic DeepSparse:
  NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
TensorFlow
simdjson
OSPRay
TensorFlow
NCNN:
  CPU - googlenet
  CPU - vgg16
Neural Magic DeepSparse:
  ResNet-50, Sparse INT8 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
libxsmm
Neural Magic DeepSparse:
  CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
OpenVKL
HeFFTe - Highly Efficient FFT for Exascale
Embree
Neural Magic DeepSparse
NCNN:
  CPU - resnet18
  CPU - squeezenet_ssd
Neural Magic DeepSparse
ONNX Runtime
NCNN
Neural Magic DeepSparse
QMCPACK
Neural Magic DeepSparse
TensorFlow
Neural Magic DeepSparse:
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream
  BERT-Large, NLP Question Answering - Asynchronous Multi-Stream
NCNN
TensorFlow
Embree
simdjson:
  DistinctUserID
  PartialTweets
SPECFEM3D
miniBUDE:
  OpenMP - BM2:
    GFInst/s
    Billion Interactions/s
HeFFTe - Highly Efficient FFT for Exascale
simdjson
Neural Magic DeepSparse
ONNX Runtime
Neural Magic DeepSparse
VP9 libvpx Encoding
NCNN
ONNX Runtime
TensorFlow
QMCPACK
libxsmm
HeFFTe - Highly Efficient FFT for Exascale
Neural Magic DeepSparse
SVT-AV1
Neural Magic DeepSparse
OpenVINO
NCNN
OpenVINO
SPECFEM3D
GROMACS
SPECFEM3D
OpenVINO:
  Person Vehicle Bike Detection FP16 - CPU:
    FPS
    ms
Palabos
NCNN
HeFFTe - Highly Efficient FFT for Exascale
NCNN
TensorFlow
NCNN:
  CPU - alexnet
  CPU - mobilenet
Remhos
SVT-AV1
HeFFTe - Highly Efficient FFT for Exascale
SVT-AV1
HeFFTe - Highly Efficient FFT for Exascale
Palabos
ONNX Runtime
Neural Magic DeepSparse
libxsmm
HeFFTe - Highly Efficient FFT for Exascale
SVT-HEVC
HeFFTe - Highly Efficient FFT for Exascale
Neural Magic DeepSparse
NCNN
Neural Magic DeepSparse:
  CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream
  NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Asynchronous Multi-Stream
  NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Asynchronous Multi-Stream
miniBUDE:
  OpenMP - BM1:
    GFInst/s
    Billion Interactions/s
QMCPACK
NCNN
OpenVINO:
  Weld Porosity Detection FP16 - CPU:
    ms
    FPS
SPECFEM3D
Cpuminer-Opt
libxsmm
Neural Magic DeepSparse:
  ResNet-50, Baseline - Asynchronous Multi-Stream
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream
  ResNet-50, Baseline - Asynchronous Multi-Stream
oneDNN
Timed MrBayes Analysis
Cpuminer-Opt
Neural Magic DeepSparse:
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream
OpenVINO:
  Face Detection FP16 - CPU
  Vehicle Detection FP16-INT8 - CPU
  Face Detection FP16 - CPU
Blender
oneDNN
CloverLeaf
Cpuminer-Opt
VVenC
Blender
OpenVINO
SPECFEM3D
OpenVINO:
  Vehicle Detection FP16 - CPU:
    FPS
    ms
Neural Magic DeepSparse
OpenVINO
Cpuminer-Opt:
  LBC, LBRY Credits
  Deepcoin
OpenVINO
VVenC
Neural Magic DeepSparse
OpenVINO
oneDNN
dav1d
OpenVINO
Laghos
OpenVINO
SVT-HEVC
dav1d
Xcompact3d Incompact3d
OpenVINO:
  Age Gender Recognition Retail 0013 FP16 - CPU
  Person Detection FP32 - CPU
  Face Detection FP16-INT8 - CPU
  Face Detection FP16-INT8 - CPU
Palabos
Cpuminer-Opt:
  scrypt
  Blake-2 S
SVT-HEVC
OpenVINO
oneDNN
Cpuminer-Opt
Laghos
Cpuminer-Opt
oneDNN
Cpuminer-Opt
Intel Open Image Denoise:
  RTLightmap.hdr.4096x4096 - CPU-Only
  RT.ldr_alb_nrm.3840x2160 - CPU-Only
OpenVINO:
  Age Gender Recognition Retail 0013 FP16-INT8 - CPU
  Age Gender Recognition Retail 0013 FP16 - CPU
NCNN:
  CPU - FastestDet
  CPU - regnety_400m
  CPU - resnet50
ONNX Runtime:
  super-resolution-10 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  fcn-resnet101-11 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  CaffeNet 12-int8 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  GPT-2 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
Cpuminer-Opt
oneDNN