AMD EPYC Turin AVX-512 Comparison

AMD EPYC 9755 AVX-512 comparison by Michael Larabel for a future article.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2410104-NE-TURINAVX566

Run Management

Result Identifier    Date Run        Test Duration
AVX-512 Off          September 29    5 Hours, 55 Minutes
AVX-512 256b DP      September 28    6 Hours, 28 Minutes
AVX-512 512b DP      September 30    6 Hours, 26 Minutes



AMD EPYC Turin AVX-512 Comparison - OpenBenchmarking.org / Phoronix Test Suite

Processor: AMD EPYC 9755 128-Core @ 2.70GHz (128 Cores / 256 Threads)
Motherboard: AMD VOLCANO (RVOT1000D BIOS)
Chipset: AMD Device 153a
Memory: 12 x 64GB DDR5-6000MT/s Samsung M321R8GA0PB1-CCPKC
Disk: 2 x 1920GB KIOXIA KCD8XPUG1T92
Graphics: ASPEED
Network: Broadcom NetXtreme BCM5720 PCIe
OS: Ubuntu 24.04
Kernel: 6.10.0-phx (x86_64)
Compiler: GCC 13.2.0
File-System: ext4
Screen Resolution: 1920x1200

System Logs:
- Transparent Huge Pages: madvise
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-backtrace --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-OiuXZC/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-OiuXZC/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: acpi-cpufreq performance (Boost: Enabled)
- CPU Microcode: 0xb002110
- Python 3.12.2
- Security: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + reg_file_data_sampling: Not affected + retbleed: Not affected + spec_rstack_overflow: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS; IBPB: conditional; STIBP: always-on; RSB filling; PBRSB-eIBRS: Not affected; BHI: Not affected + srbds: Not affected + tsx_async_abort: Not affected

Result Overview (Phoronix Test Suite, normalized geometric means, 100% to 249%) across: oneDNN, NAMD, miniBUDE, OpenVINO, OSPRay, TensorFlow, simdjson, GROMACS, ONNX Runtime, Y-Cruncher, Mobile Neural Network, Xmrig, OSPRay Studio, OpenVKL, PyTorch, libxsmm, SVT-AV1, Embree, Numpy Benchmark, SMHasher, OpenFOAM.
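The overview percentages are geometric means of the normalized per-test results, which keeps one outlier test from dominating the summary. As an illustrative sketch (the normalization baseline and sample values here are assumptions, not taken from the result file), a geometric mean can be computed as:

```python
import math

def geometric_mean(values):
    """Geometric mean: the n-th root of the product of n positive values."""
    return math.prod(values) ** (1.0 / len(values))

# Hypothetical speedups of one configuration relative to a baseline:
speedups = [2.0, 8.0]
print(geometric_mean(speedups))  # 4.0 (vs. an arithmetic mean of 5.0)
```

Note how the geometric mean (4.0) sits below the arithmetic mean (5.0): a single large speedup is damped rather than averaged in linearly.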

Performance Per Watt Result Overview (normalized geometric means, 100% to 215%) across: miniBUDE, NAMD, TensorFlow, OSPRay, GROMACS, libxsmm, simdjson, PyTorch, OpenVKL, Xmrig, Embree, SVT-AV1, Numpy Benchmark.

Full side-by-side results table (AVX-512 256b DP vs. AVX-512 Off vs. AVX-512 512b DP): the same data is presented per test in the individual result graphs below.

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library, which offers high-performance volume computation kernels and is part of the Intel oneAPI Rendering Toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - OpenVKL 2.0.0 - Benchmark: vklBenchmarkCPU ISPC (Items / Sec, More Is Better)
  AVX-512 256b DP: 3560 (SE +/- 1.76, N = 3; MIN: 284 / MAX: 41727)
  AVX-512 Off:     3099 (SE +/- 0.33, N = 3; MIN: 245 / MAX: 36357)
  AVX-512 512b DP: 3660 (SE +/- 0.58, N = 3; MIN: 293 / MAX: 42710)
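Each result in this file is a mean over N runs together with its standard error (SE). Assuming the conventional definition, SE is the sample standard deviation divided by the square root of the run count (the sample values below are hypothetical):

```python
import math
import statistics

def standard_error(samples):
    """Standard error of the mean: sample standard deviation / sqrt(N)."""
    return statistics.stdev(samples) / math.sqrt(len(samples))

runs = [3558.0, 3560.0, 3562.0]  # hypothetical per-run scores
print(round(standard_error(runs), 2))  # 1.15
```

A small SE relative to the mean, as in most of the graphs here, indicates the run-to-run variance is low enough for the configuration differences to be meaningful.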

Xmrig

Xmrig is an open-source cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight, and AstroBWT. This test profile is set up to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.

Xmrig 6.21 - Variant: GhostRider - Hash Count: 1M (H/s, More Is Better)
  AVX-512 256b DP: 17480.4 (SE +/- 973.55, N = 15)
  AVX-512 Off:     16826.7 (SE +/- 973.00, N = 15)
  AVX-512 512b DP: 20005.2 (SE +/- 779.44, N = 15)
(CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that the Phoronix Test Suite also offers pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if complementary metrics are desired. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.16.1 - Device: CPU - Batch Size: 512 - Model: ResNet-50 (images/sec, More Is Better)
  AVX-512 256b DP: 221.25 (SE +/- 0.23, N = 3)
  AVX-512 Off:     179.25 (SE +/- 0.20, N = 3)
  AVX-512 512b DP: 246.10 (SE +/- 0.45, N = 3)

ONNX Runtime

ONNX Runtime 1.19 - Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Standard (Inference Time Cost (ms), Fewer Is Better)
  AVX-512 256b DP: 177.49 (SE +/- 4.12, N = 15)
  AVX-512 Off:     181.74 (SE +/- 2.26, N = 4)
  AVX-512 512b DP: 143.94 (SE +/- 2.10, N = 15)
(CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.19 - Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better)
  AVX-512 256b DP: 5.67626 (SE +/- 0.12978, N = 15)
  AVX-512 Off:     5.50491 (SE +/- 0.06677, N = 4)
  AVX-512 512b DP: 6.96791 (SE +/- 0.10140, N = 15)
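The ONNX Runtime results are reported both as inference time cost in milliseconds and as inferences per second. For a single stream these two metrics are approximately reciprocal; the small mismatches above presumably come from each metric being averaged over the runs separately. A minimal conversion sketch:

```python
def ms_to_inferences_per_second(latency_ms):
    """Convert a per-inference latency in milliseconds to a throughput."""
    return 1000.0 / latency_ms

print(ms_to_inferences_per_second(177.49))  # roughly 5.63 inferences per second
```

So a drop from 177.49 ms to 143.94 ms corresponds directly to the throughput gain seen in the companion graph.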

ONNX Runtime 1.19 - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard (Inference Time Cost (ms), Fewer Is Better)
  AVX-512 256b DP: 112.54 (SE +/- 2.11, N = 15)
  AVX-512 Off:     127.02 (SE +/- 1.75, N = 15)
  AVX-512 512b DP: 96.81 (SE +/- 0.16, N = 3)

ONNX Runtime 1.19 - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better)
  AVX-512 256b DP: 8.92861 (SE +/- 0.16447, N = 15)
  AVX-512 Off:     7.89424 (SE +/- 0.11217, N = 15)
  AVX-512 512b DP: 10.32990 (SE +/- 0.01681, N = 3)

ONNX Runtime 1.19 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard (Inference Time Cost (ms), Fewer Is Better)
  AVX-512 256b DP: 22.44 (SE +/- 0.19, N = 15)
  AVX-512 Off:     25.75 (SE +/- 0.01, N = 3)
  AVX-512 512b DP: 22.85 (SE +/- 0.22, N = 15)

ONNX Runtime 1.19 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better)
  AVX-512 256b DP: 44.61 (SE +/- 0.38, N = 15)
  AVX-512 Off:     38.84 (SE +/- 0.02, N = 3)
  AVX-512 512b DP: 43.82 (SE +/- 0.43, N = 15)

ONNX Runtime 1.19 - Model: bertsquad-12 - Device: CPU - Executor: Standard (Inference Time Cost (ms), Fewer Is Better)
  AVX-512 256b DP: 55.82 (SE +/- 0.65, N = 4)
  AVX-512 Off:     59.03 (SE +/- 1.68, N = 12)
  AVX-512 512b DP: 46.34 (SE +/- 0.38, N = 15)

ONNX Runtime 1.19 - Model: bertsquad-12 - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better)
  AVX-512 256b DP: 17.92 (SE +/- 0.21, N = 4)
  AVX-512 Off:     17.06 (SE +/- 0.38, N = 12)
  AVX-512 512b DP: 21.60 (SE +/- 0.17, N = 15)

simdjson

simdjson 3.10 - Throughput Test: PartialTweets (GB/s, More Is Better)
  AVX-512 256b DP: 8.60 (SE +/- 0.09, N = 6)
  AVX-512 Off:     7.30 (SE +/- 0.04, N = 3)
  AVX-512 512b DP: 9.46 (SE +/- 0.09, N = 15)
(CXX) g++ options: -O3 -lrt

SVT-AV1

SVT-AV1 2.2 - Encoder Mode: Preset 3 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  AVX-512 256b DP: 13.76 (SE +/- 0.05, N = 3)
  AVX-512 Off:     13.04 (SE +/- 0.08, N = 3)
  AVX-512 512b DP: 14.28 (SE +/- 0.05, N = 3)
Per-result build notes: -mavx2 -mavx512f -mavx512bw -mavx512dq (two of the runs). (CXX) g++ options: -march=native -mno-avx

TensorFlow


TensorFlow 2.16.1 - Device: CPU - Batch Size: 256 - Model: ResNet-50 (images/sec, More Is Better)
  AVX-512 256b DP: 190.08 (SE +/- 0.40, N = 3)
  AVX-512 Off:     158.25 (SE +/- 0.47, N = 3)
  AVX-512 512b DP: 203.33 (SE +/- 0.98, N = 3)

Numpy Benchmark

This is a test of general NumPy performance. Learn more via the OpenBenchmarking.org test page.

Numpy Benchmark (Score, More Is Better)
  AVX-512 256b DP: 794.98 (SE +/- 2.12, N = 3)
  AVX-512 Off:     739.69 (SE +/- 1.08, N = 3)
  AVX-512 512b DP: 795.31 (SE +/- 2.30, N = 3)

Mobile Neural Network

Mobile Neural Network 2.9.b11b7037d - Model: resnet-v2-50 (ms, Fewer Is Better)
  AVX-512 256b DP: 8.851 (SE +/- 0.037, N = 3; MIN: 8.53 / MAX: 10.37)
  AVX-512 Off:     10.053 (SE +/- 0.040, N = 3; MIN: 9.74 / MAX: 12.33)
  AVX-512 512b DP: 7.674 (SE +/- 0.113, N = 3; MIN: 7.38 / MAX: 8.78)
(CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl

Mobile Neural Network 2.9.b11b7037d - Model: mobilenetV3 (ms, Fewer Is Better)
  AVX-512 256b DP: 1.980 (SE +/- 0.033, N = 3; MIN: 1.83 / MAX: 2.34)
  AVX-512 Off:     2.055 (SE +/- 0.012, N = 3; MIN: 1.91 / MAX: 2.32)
  AVX-512 512b DP: 1.872 (SE +/- 0.008, N = 3; MIN: 1.73 / MAX: 2.43)

simdjson

simdjson 3.10 - Throughput Test: Kostya (GB/s, More Is Better)
  AVX-512 256b DP: 5.66 (SE +/- 0.01, N = 3)
  AVX-512 Off:     4.69 (SE +/- 0.01, N = 3)
  AVX-512 512b DP: 5.90 (SE +/- 0.01, N = 3)

OpenFOAM

OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.

OpenFOAM 10 - Input: drivaerFastback, Medium Mesh Size - Execution Time (Seconds, Fewer Is Better)
  AVX-512 256b DP: 163.51
  AVX-512 Off:     161.95
  AVX-512 512b DP: 161.00
(CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfiniteVolume -lmeshTools -lparallel -llagrangian -lregionModels -lgenericPatchFields -lOpenFOAM -ldl -lm

simdjson

simdjson 3.10 - Throughput Test: DistinctUserID (GB/s, More Is Better)
  AVX-512 256b DP: 8.99 (SE +/- 0.04, N = 3)
  AVX-512 Off:     7.64 (SE +/- 0.02, N = 3)
  AVX-512 512b DP: 9.92 (SE +/- 0.12, N = 4)

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 1.0 - Camera: 2 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU (ms, Fewer Is Better)
  AVX-512 256b DP: 21646 (SE +/- 8.65, N = 3)
  AVX-512 Off:     22955 (SE +/- 25.87, N = 3)
  AVX-512 512b DP: 20438 (SE +/- 8.88, N = 3)

OSPRay Studio 1.0 - Camera: 1 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU (ms, Fewer Is Better)
  AVX-512 256b DP: 21539 (SE +/- 6.39, N = 3)
  AVX-512 Off:     22869 (SE +/- 34.33, N = 3)
  AVX-512 512b DP: 20379 (SE +/- 25.38, N = 3)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.4 - Harness: Recurrent Neural Network Training - Engine: CPU (ms, Fewer Is Better)
  AVX-512 256b DP: 482.88 (SE +/- 0.27, N = 3; MIN: 478.35)
  AVX-512 Off:     447.80 (SE +/- 0.35, N = 3; MIN: 443.23)
  AVX-512 512b DP: 425.22 (SE +/- 0.60, N = 3; MIN: 418.82)
(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

simdjson

simdjson 3.10 - Throughput Test: LargeRandom (GB/s, More Is Better)
  AVX-512 256b DP: 1.57 (SE +/- 0.00, N = 3)
  AVX-512 Off:     1.40 (SE +/- 0.00, N = 3)
  AVX-512 512b DP: 1.59 (SE +/- 0.00, N = 3)

ONNX Runtime

ONNX Runtime 1.19 - Model: super-resolution-10 - Device: CPU - Executor: Standard (Inference Time Cost (ms), Fewer Is Better)
  AVX-512 256b DP: 6.13926 (SE +/- 0.06960, N = 5)
  AVX-512 Off:     6.17395 (SE +/- 0.00589, N = 3)
  AVX-512 512b DP: 4.83481 (SE +/- 0.00683, N = 3)

ONNX Runtime 1.19 - Model: super-resolution-10 - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better)
  AVX-512 256b DP: 162.96 (SE +/- 1.79, N = 5)
  AVX-512 Off:     161.96 (SE +/- 0.15, N = 3)
  AVX-512 512b DP: 206.82 (SE +/- 0.29, N = 3)

simdjson

simdjson 3.10 - Throughput Test: TopTweet (GB/s, More Is Better)
  AVX-512 256b DP: 8.84 (SE +/- 0.02, N = 3)
  AVX-512 Off:     7.49 (SE +/- 0.01, N = 3)
  AVX-512 512b DP: 9.87 (SE +/- 0.01, N = 3)

OSPRay

OSPRay 3.2 - Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time (Items Per Second, More Is Better)
  AVX-512 256b DP: 49.35 (SE +/- 0.05, N = 3)
  AVX-512 Off:     42.13 (SE +/- 0.03, N = 3)
  AVX-512 512b DP: 50.90 (SE +/- 0.02, N = 3)

OSPRay Studio


OSPRay Studio 1.0 - Camera: 3 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU (ms, Fewer Is Better)
  AVX-512 256b DP: 12693 (SE +/- 7.80, N = 3)
  AVX-512 Off:     13357 (SE +/- 13.35, N = 3)
  AVX-512 512b DP: 11963 (SE +/- 7.06, N = 3)

OSPRay Studio 1.0 - Camera: 3 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU (ms, Fewer Is Better)
  AVX-512 256b DP: 25393 (SE +/- 2.00, N = 3)
  AVX-512 Off:     30820 (SE +/- 109.14, N = 3)
  AVX-512 512b DP: 24030 (SE +/- 20.00, N = 3)

libxsmm

Libxsmm is an open-source library for specialized dense and sparse matrix operations and deep learning primitives. Libxsmm can make use of Intel AMX, AVX-512, and other modern CPU instruction set capabilities. Learn more via the OpenBenchmarking.org test page.

libxsmm 2-1.17-3645 - M N K: 128 (GFLOPS/s, More Is Better)
  AVX-512 256b DP: 3652.9 (SE +/- 5.61, N = 3)
  AVX-512 Off:     3431.7 (SE +/- 12.94, N = 3)
  AVX-512 512b DP: 3764.4 (SE +/- 9.17, N = 3)
Per-result build notes: -msse4.2; -pedantic -fopenmp -march=core-avx2; -msse4.2. (CXX) g++ options: -dynamic -Bstatic -static-libgcc -lgomp -lm -lrt -ldl -lquadmath -lstdc++ -pthread -fPIC -std=c++14 -O2 -fopenmp-simd -funroll-loops -ftree-vectorize -fdata-sections -ffunction-sections -fvisibility=hidden
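The libxsmm figure measures small dense matrix multiplies at M = N = K = 128, reported in GFLOPS. A GFLOPS number of this kind is derived by dividing the operation count of the multiply (2*M*N*K, one multiply plus one add per inner iteration) by the elapsed time. A naive sketch, using pure Python purely for illustration (libxsmm itself runs hand-tuned SIMD kernels and will be orders of magnitude faster):

```python
import time

def matmul_gflops(n=128):
    """Time a naive n x n x n matrix multiply and report GFLOPS."""
    a = [[1.0] * n for _ in range(n)]
    b = [[1.0] * n for _ in range(n)]
    c = [[0.0] * n for _ in range(n)]
    start = time.perf_counter()
    for i in range(n):
        for k in range(n):
            aik = a[i][k]
            for j in range(n):
                c[i][j] += aik * b[k][j]
    elapsed = time.perf_counter() - start
    flops = 2 * n ** 3  # 2*M*N*K with M = N = K = n
    return flops / elapsed / 1e9

print(matmul_gflops())  # pure Python lands far below the libxsmm results above
```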

OSPRay

OSPRay 3.2 - Benchmark: gravity_spheres_volume/dim_512/scivis/real_time (Items Per Second, More Is Better)
  AVX-512 256b DP: 44.79 (SE +/- 0.13, N = 3)
  AVX-512 Off:     30.61 (SE +/- 0.02, N = 3)
  AVX-512 512b DP: 46.33 (SE +/- 0.11, N = 3)

Y-Cruncher

Y-Cruncher 0.8.5 - Pi Digits To Calculate: 10B (Seconds, Fewer Is Better)
  AVX-512 256b DP: 47.27 (SE +/- 0.04, N = 3)
  AVX-512 Off:     61.26 (SE +/- 0.04, N = 3)
  AVX-512 512b DP: 45.86 (SE +/- 0.02, N = 3)

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support and analyzing the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2024.0 - Model: Face Detection FP16-INT8 - Device: CPU (ms, Fewer Is Better)
  AVX-512 256b DP: 463.88 (SE +/- 0.07, N = 3; MIN: 405.2 / MAX: 482.04)
  AVX-512 Off:     544.38 (SE +/- 0.31, N = 3; MIN: 261.15 / MAX: 569.93)
  AVX-512 512b DP: 329.05 (SE +/- 0.76, N = 3; MIN: 146.73 / MAX: 360.28)
(CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO 2024.0 - Model: Face Detection FP16-INT8 - Device: CPU (FPS, More Is Better)
  AVX-512 256b DP: 137.51 (SE +/- 0.02, N = 3)
  AVX-512 Off:     117.14 (SE +/- 0.07, N = 3)
  AVX-512 512b DP: 194.04 (SE +/- 0.46, N = 3)

OSPRay

OSPRay 3.2 - Benchmark: gravity_spheres_volume/dim_512/ao/real_time (Items Per Second, More Is Better)
  AVX-512 256b DP: 45.41 (SE +/- 0.03, N = 3)
  AVX-512 Off:     31.72 (SE +/- 0.01, N = 3)
  AVX-512 512b DP: 46.97 (SE +/- 0.16, N = 3)

OpenVINO


OpenVINO 2024.0 - Model: Noise Suppression Poconet-Like FP16 - Device: CPU (ms, Fewer Is Better)
  AVX-512 256b DP: 11.79 (SE +/- 0.06, N = 3; MIN: 6.96 / MAX: 33.85)
  AVX-512 Off:     42.42 (SE +/- 0.24, N = 3; MIN: 15.51 / MAX: 60.87)
  AVX-512 512b DP: 10.75 (SE +/- 0.07, N = 3; MIN: 6.13 / MAX: 31.81)

OpenVINO 2024.0 - Model: Noise Suppression Poconet-Like FP16 - Device: CPU (FPS, More Is Better)
  AVX-512 256b DP: 10112.34 (SE +/- 47.84, N = 3)
  AVX-512 Off:     3010.63 (SE +/- 16.72, N = 3)
  AVX-512 512b DP: 10450.98 (SE +/- 29.86, N = 3)

OpenVINO 2024.0 - Model: Person Detection FP16 - Device: CPU (ms, Fewer Is Better)
  AVX-512 256b DP: 89.90 (SE +/- 0.08, N = 3; MIN: 40.69 / MAX: 160.4)
  AVX-512 Off:     167.07 (SE +/- 0.02, N = 3; MIN: 75.83 / MAX: 256.32)
  AVX-512 512b DP: 83.32 (SE +/- 0.04, N = 3; MIN: 35.6 / MAX: 146.57)

OpenVINO 2024.0 - Model: Person Detection FP16 - Device: CPU (FPS, More Is Better)
  AVX-512 256b DP: 710.98 (SE +/- 0.61, N = 3)
  AVX-512 Off:     382.43 (SE +/- 0.04, N = 3)
  AVX-512 512b DP: 766.99 (SE +/- 0.42, N = 3)

OpenVINO 2024.0 - Model: Machine Translation EN To DE FP16 - Device: CPU (ms, Fewer Is Better)
  AVX-512 256b DP: 73.73 (SE +/- 0.11, N = 3; MIN: 35.26 / MAX: 113)
  AVX-512 Off:     160.34 (SE +/- 0.34, N = 3; MIN: 88.77 / MAX: 245.28)
  AVX-512 512b DP: 57.47 (SE +/- 0.30, N = 3; MIN: 26.92 / MAX: 106.12)

OpenVINO 2024.0 - Model: Machine Translation EN To DE FP16 - Device: CPU (FPS, More Is Better)
  AVX-512 256b DP: 866.83 (SE +/- 1.32, N = 3)
  AVX-512 Off:     398.57 (SE +/- 0.88, N = 3)
  AVX-512 512b DP: 1111.69 (SE +/- 5.73, N = 3)

OpenVINO 2024.0 - Model: Person Vehicle Bike Detection FP16 - Device: CPU (ms, Fewer Is Better)
  AVX-512 256b DP: 7.90 (SE +/- 0.01, N = 3; MIN: 5.02 / MAX: 29.4)
  AVX-512 Off:     15.32 (SE +/- 0.03, N = 3; MIN: 7.91 / MAX: 45.71)
  AVX-512 512b DP: 7.21 (SE +/- 0.02, N = 3; MIN: 4.13 / MAX: 25.4)

OpenVINO 2024.0 - Model: Person Vehicle Bike Detection FP16 - Device: CPU (FPS, More Is Better)
  AVX-512 256b DP: 8024.13 (SE +/- 14.15, N = 3)
  AVX-512 Off:     4150.45 (SE +/- 9.55, N = 3)
  AVX-512 512b DP: 8755.75 (SE +/- 25.40, N = 3)

OpenVINO 2024.0 - Model: Person Re-Identification Retail FP16 - Device: CPU (ms, Fewer Is Better)
  AVX-512 256b DP: 6.29 (SE +/- 0.00, N = 3; MIN: 3.16 / MAX: 20.63)
  AVX-512 Off:     13.35 (SE +/- 0.00, N = 3; MIN: 6.22 / MAX: 36.5)
  AVX-512 512b DP: 4.49 (SE +/- 0.00, N = 3; MIN: 1.96 / MAX: 22.35)

OpenVINO 2024.0 - Model: Person Re-Identification Retail FP16 - Device: CPU (FPS, More Is Better)
  AVX-512 256b DP: 10058.84 (SE +/- 6.22, N = 3)
  AVX-512 Off:     4727.35 (SE +/- 1.32, N = 3)
  AVX-512 512b DP: 13925.76 (SE +/- 7.20, N = 3)

OpenVINO 2024.0 - Model: Road Segmentation ADAS FP16-INT8 - Device: CPU (ms, Fewer Is Better)
  AVX-512 256b DP: 22.36 (SE +/- 0.03, N = 3; MIN: 11.44 / MAX: 40.97)
  AVX-512 Off:     24.30 (SE +/- 0.01, N = 3; MIN: 13.82 / MAX: 53.77)
  AVX-512 512b DP: 19.29 (SE +/- 0.03, N = 3; MIN: 9.84 / MAX: 44.85)

OpenVINO 2024.0 - Model: Road Segmentation ADAS FP16-INT8 - Device: CPU (FPS, More Is Better)
  AVX-512 256b DP: 2855.34 (SE +/- 3.53, N = 3)
  AVX-512 Off:     2626.83 (SE +/- 0.80, N = 3)
  AVX-512 512b DP: 3297.85 (SE +/- 5.22, N = 3)

OpenVINO 2024.0 - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (ms, Fewer Is Better)
  AVX-512 256b DP: 0.55 (SE +/- 0.00, N = 3; MIN: 0.18 / MAX: 22.43)
  AVX-512 Off:     0.57 (SE +/- 0.00, N = 3; MIN: 0.2 / MAX: 26.73)
  AVX-512 512b DP: 0.48 (SE +/- 0.00, N = 3; MIN: 0.13 / MAX: 25.07)

OpenVINO 2024.0 - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (FPS, More Is Better)
  AVX-512 256b DP: 175156.04 (SE +/- 517.64, N = 3)
  AVX-512 Off:     165052.36 (SE +/- 339.72, N = 3)
  AVX-512 512b DP: 192118.56 (SE +/- 544.06, N = 3)

OpenVINO 2024.0 - Model: Face Detection Retail FP16-INT8 - Device: CPU (ms, Fewer Is Better)
  AVX-512 256b DP: 5.04 (SE +/- 0.00, N = 3; MIN: 2.51 / MAX: 19.14)
  AVX-512 Off:     6.29 (SE +/- 0.01, N = 3; MIN: 3.06 / MAX: 24.12)
  AVX-512 512b DP: 3.87 (SE +/- 0.00, N = 3; MIN: 1.69 / MAX: 21.99)

OpenVINO 2024.0 - Model: Face Detection Retail FP16-INT8 - Device: CPU (FPS, More Is Better)
  AVX-512 256b DP: 24619.04 (SE +/- 4.94, N = 3)
  AVX-512 Off:     19828.79 (SE +/- 6.94, N = 3)
  AVX-512 512b DP: 31620.58 (SE +/- 23.78, N = 3)

OpenVINO 2024.0 - Model: Handwritten English Recognition FP16-INT8 - Device: CPU (ms, Fewer Is Better)
  AVX-512 256b DP: 30.35 (SE +/- 0.02, N = 3; MIN: 17.71 / MAX: 42.78)
  AVX-512 Off:     68.68 (SE +/- 0.04, N = 3; MIN: 40.62 / MAX: 87.32)
  AVX-512 512b DP: 26.63 (SE +/- 0.01, N = 3; MIN: 15.61 / MAX: 47.58)

OpenVINO 2024.0 - Model: Handwritten English Recognition FP16-INT8 - Device: CPU (FPS, More Is Better)
  AVX-512 256b DP: 4203.21 (SE +/- 2.89, N = 3)
  AVX-512 Off:     1861.72 (SE +/- 1.11, N = 3)
  AVX-512 512b DP: 4780.24 (SE +/- 1.88, N = 3)

ONNX Runtime

ONNX Runtime 1.19 - Model: GPT-2 - Device: CPU - Executor: Standard (Inference Time Cost (ms), Fewer Is Better)
  AVX-512 256b DP: 5.22862  (SE +/- 0.00487, N = 3)
  AVX-512 Off:     5.50578  (SE +/- 0.00462, N = 3)
  AVX-512 512b DP: 5.18369  (SE +/- 0.02005, N = 3)
  1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.19 - Model: GPT-2 - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better)
  AVX-512 256b DP: 191.18  (SE +/- 0.18, N = 3)
  AVX-512 Off:     181.56  (SE +/- 0.16, N = 3)
  AVX-512 512b DP: 192.84  (SE +/- 0.74, N = 3)
  1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt
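The per-stream inference time and inferences-per-second figures ONNX Runtime reports are, to a good approximation, reciprocals of each other. A quick sanity check on the GPT-2 numbers above (values copied from this result file; this is illustrative arithmetic, not part of the test profile):

```python
# Reciprocal check: throughput (inferences/sec) ~= 1000 / latency (ms).
# Latency and reported throughput per configuration, from the GPT-2 results above.
results = {
    "AVX-512 256b DP": (5.22862, 191.18),
    "AVX-512 Off":     (5.50578, 181.56),
    "AVX-512 512b DP": (5.18369, 192.84),
}

for config, (latency_ms, reported_ips) in results.items():
    derived_ips = 1000.0 / latency_ms
    # The derived rate lands within a fraction of a percent of the reported one.
    print(f"{config}: derived {derived_ips:.2f}/s vs reported {reported_ips}/s")
```

The small residual difference comes from averaging across runs, not from any extra overhead.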

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2024.0 - Model: Vehicle Detection FP16-INT8 - Device: CPU (ms, Fewer Is Better)
  AVX-512 256b DP: 7.48  (SE +/- 0.00, N = 3; MIN: 4.14 / MAX: 23.69)
  AVX-512 Off:     10.26  (SE +/- 0.00, N = 3; MIN: 6.21 / MAX: 38.75)
  AVX-512 512b DP: 5.61  (SE +/- 0.00, N = 3; MIN: 2.24 / MAX: 30.88)
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO 2024.0 - Model: Vehicle Detection FP16-INT8 - Device: CPU (FPS, More Is Better)
  AVX-512 256b DP: 8450.68  (SE +/- 0.88, N = 3)
  AVX-512 Off:     6189.69  (SE +/- 1.18, N = 3)
  AVX-512 512b DP: 11239.49  (SE +/- 0.53, N = 3)
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
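For a more-is-better metric, the relative uplift of each AVX-512 build over the baseline is simply the ratio of throughputs. A minimal sketch using the Vehicle Detection FP16-INT8 FPS figures above (values copied from this result file):

```python
# Throughput speedup relative to the AVX-512 Off baseline,
# using the OpenVINO Vehicle Detection FP16-INT8 FPS results above.
baseline = 6189.69  # AVX-512 Off
fps = {"AVX-512 256b DP": 8450.68, "AVX-512 512b DP": 11239.49}

speedups = {cfg: value / baseline for cfg, value in fps.items()}
for cfg, s in speedups.items():
    print(f"{cfg}: {s:.2f}x over AVX-512 Off")
```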

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 1.0 - Camera: 3 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU (ms, Fewer Is Better)
  AVX-512 256b DP: 791  (SE +/- 0.58, N = 3)
  AVX-512 Off:     836  (SE +/- 0.33, N = 3)
  AVX-512 512b DP: 752  (SE +/- 0.33, N = 3)

OpenVINO


OpenVINO 2024.0 - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (ms, Fewer Is Better)
  AVX-512 256b DP: 9.25  (SE +/- 0.01, N = 3; MIN: 4.08 / MAX: 21.6)
  AVX-512 Off:     10.75  (SE +/- 0.01, N = 3; MIN: 4.66 / MAX: 32.11)
  AVX-512 512b DP: 6.60  (SE +/- 0.01, N = 3; MIN: 2.24 / MAX: 26.72)
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO 2024.0 - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (FPS, More Is Better)
  AVX-512 256b DP: 13579.90  (SE +/- 5.42, N = 3)
  AVX-512 Off:     11702.44  (SE +/- 1.96, N = 3)
  AVX-512 512b DP: 18690.32  (SE +/- 12.83, N = 3)
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OSPRay Studio


OSPRay Studio 1.0 - Camera: 1 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU (ms, Fewer Is Better)
  AVX-512 256b DP: 671  (SE +/- 0.33, N = 3)
  AVX-512 Off:     714  (SE +/- 0.00, N = 3)
  AVX-512 512b DP: 639  (SE +/- 0.58, N = 3)

OSPRay Studio 1.0 - Camera: 2 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU (ms, Fewer Is Better)
  AVX-512 256b DP: 675  (SE +/- 0.33, N = 3)
  AVX-512 Off:     719  (SE +/- 0.33, N = 3)
  AVX-512 512b DP: 642  (SE +/- 0.00, N = 3)

PyTorch

This is a benchmark of PyTorch making use of pytorch-benchmark [https://github.com/LukasHedegaard/pytorch-benchmark]. Learn more via the OpenBenchmarking.org test page.

PyTorch 2.2.1 - Device: CPU - Batch Size: 512 - Model: ResNet-50 (batches/sec, More Is Better)
  AVX-512 256b DP: 43.50  (SE +/- 0.34, N = 3; MIN: 41.09 / MAX: 44.92)
  AVX-512 Off:     38.82  (SE +/- 0.08, N = 3; MIN: 37.4 / MAX: 39.86)
  AVX-512 512b DP: 43.39  (SE +/- 0.52, N = 4; MIN: 40.31 / MAX: 44.91)

OSPRay Studio


OSPRay Studio 1.0 - Camera: 2 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU (ms, Fewer Is Better)
  AVX-512 256b DP: 10816  (SE +/- 11.85, N = 3)
  AVX-512 Off:     11451  (SE +/- 2.85, N = 3)
  AVX-512 512b DP: 10231  (SE +/- 10.11, N = 3)

OSPRay Studio 1.0 - Camera: 1 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU (ms, Fewer Is Better)
  AVX-512 256b DP: 10733  (SE +/- 5.86, N = 3)
  AVX-512 Off:     11396  (SE +/- 7.69, N = 3)
  AVX-512 512b DP: 10169  (SE +/- 9.28, N = 3)

PyTorch


PyTorch 2.2.1 - Device: CPU - Batch Size: 256 - Model: ResNet-50 (batches/sec, More Is Better)
  AVX-512 256b DP: 43.55  (SE +/- 0.31, N = 3; MIN: 41.56 / MAX: 44.75)
  AVX-512 Off:     38.38  (SE +/- 0.43, N = 3; MIN: 36.82 / MAX: 39.67)
  AVX-512 512b DP: 43.85  (SE +/- 0.55, N = 3; MIN: 41.39 / MAX: 45.93)

SVT-AV1

SVT-AV1 2.2 - Encoder Mode: Preset 5 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  AVX-512 256b DP: 48.60  (SE +/- 0.20, N = 3)  [-mavx2 -mavx512f -mavx512bw -mavx512dq]
  AVX-512 Off:     46.25  (SE +/- 0.13, N = 3)
  AVX-512 512b DP: 50.74  (SE +/- 0.02, N = 3)  [-mavx2 -mavx512f -mavx512bw -mavx512dq]
  1. (CXX) g++ options: -march=native -mno-avx

Y-Cruncher

Y-Cruncher 0.8.5 - Pi Digits To Calculate: 5B (Seconds, Fewer Is Better)
  AVX-512 256b DP: 24.60  (SE +/- 0.03, N = 3)
  AVX-512 Off:     31.25  (SE +/- 0.04, N = 3)
  AVX-512 512b DP: 24.07  (SE +/- 0.15, N = 3)

SVT-AV1

SVT-AV1 2.2 - Encoder Mode: Preset 13 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  AVX-512 256b DP: 388.54  (SE +/- 6.38, N = 15)  [-mavx2 -mavx512f -mavx512bw -mavx512dq]
  AVX-512 Off:     378.43  (SE +/- 0.63, N = 6)
  AVX-512 512b DP: 400.90  (SE +/- 6.48, N = 15)  [-mavx2 -mavx512f -mavx512bw -mavx512dq]
  1. (CXX) g++ options: -march=native -mno-avx

NAMD

NAMD 3.0b6 - Input: STMV with 1,066,628 Atoms (ns/day, More Is Better)
  AVX-512 256b DP: 4.17026  (SE +/- 0.00881, N = 4)
  AVX-512 Off:     2.28097  (SE +/- 0.00213, N = 3)
  AVX-512 512b DP: 4.62565  (SE +/- 0.00436, N = 4)
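NAMD reports nanoseconds of simulated time per day of wall clock, so the figures above convert directly into wall-clock hours per simulated nanosecond. A minimal sketch of that unit arithmetic (values copied from the STMV result above):

```python
# Convert NAMD's ns/day metric into wall-clock hours per simulated ns.
# STMV (1,066,628 atoms) figures from the result file above.
ns_per_day = {
    "AVX-512 256b DP": 4.17026,
    "AVX-512 Off":     2.28097,
    "AVX-512 512b DP": 4.62565,
}

for cfg, nd in ns_per_day.items():
    hours_per_ns = 24.0 / nd  # 24 wall-clock hours divided by ns simulated per day
    print(f"{cfg}: {hours_per_ns:.2f} hours per simulated ns")
```

With AVX-512 512b datapath enabled, each simulated nanosecond takes roughly half the wall time of the AVX-512 Off build.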

PyTorch


PyTorch 2.2.1 - Device: CPU - Batch Size: 1 - Model: ResNet-50 (batches/sec, More Is Better)
  AVX-512 256b DP: 51.63  (SE +/- 0.12, N = 3; MIN: 48.86 / MAX: 52.93)
  AVX-512 Off:     45.05  (SE +/- 0.23, N = 3; MIN: 43.23 / MAX: 46.3)
  AVX-512 512b DP: 52.33  (SE +/- 0.38, N = 3; MIN: 49.93 / MAX: 54.1)

SVT-AV1

SVT-AV1 2.2 - Encoder Mode: Preset 8 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  AVX-512 256b DP: 108.12  (SE +/- 0.47, N = 3)  [-mavx2 -mavx512f -mavx512bw -mavx512dq]
  AVX-512 Off:     101.84  (SE +/- 0.40, N = 3)
  AVX-512 512b DP: 114.98  (SE +/- 0.52, N = 3)  [-mavx2 -mavx512f -mavx512bw -mavx512dq]
  1. (CXX) g++ options: -march=native -mno-avx

miniBUDE

MiniBUDE is a mini application for the core computation of the Bristol University Docking Engine (BUDE). This test profile currently makes use of the OpenMP implementation of miniBUDE for CPU benchmarking. Learn more via the OpenBenchmarking.org test page.

miniBUDE 20210901 - Implementation: OpenMP - Input Deck: BM2 (Billion Interactions/s, More Is Better)
  AVX-512 256b DP: 316.66  (SE +/- 0.70, N = 4)
  AVX-512 Off:     191.61  (SE +/- 0.64, N = 3)
  AVX-512 512b DP: 387.05  (SE +/- 4.22, N = 4)
  1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm

miniBUDE 20210901 - Implementation: OpenMP - Input Deck: BM2 (GFInst/s, More Is Better)
  AVX-512 256b DP: 7916.41  (SE +/- 17.43, N = 4)
  AVX-512 Off:     4790.13  (SE +/- 15.88, N = 3)
  AVX-512 512b DP: 9676.25  (SE +/- 105.52, N = 4)
  1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm
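The two miniBUDE metrics track each other exactly: dividing GFInst/s by billion interactions/s gives about 25 for every configuration, i.e. the harness counts roughly 25 instructions per pairwise interaction. This factor is inferred here from the data in this result file, not taken from miniBUDE's documentation; a quick check on the BM2 figures above:

```python
# BM2 figures from the result file: (billion interactions/s, GFInst/s).
bm2 = {
    "AVX-512 256b DP": (316.66, 7916.41),
    "AVX-512 Off":     (191.61, 4790.13),
    "AVX-512 512b DP": (387.05, 9676.25),
}

for cfg, (giga_interactions, gfinst) in bm2.items():
    # The ratio is ~25 instructions per interaction across all three builds.
    print(f"{cfg}: {gfinst / giga_interactions:.3f} instructions per interaction")
```

Because the two metrics differ only by this constant, the relative speedups between configurations are identical whichever one is plotted.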

GROMACS

The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing with the water_GMX50 data. This test profile allows selecting between CPU and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.

GROMACS 2024 - Implementation: MPI CPU - Input: water_GMX50_bare (Ns Per Day, More Is Better)
  AVX-512 256b DP: 19.54  (SE +/- 0.00, N = 3)
  AVX-512 Off:     18.35  (SE +/- 0.01, N = 3)
  AVX-512 512b DP: 22.84  (SE +/- 0.02, N = 3)
  1. (CXX) g++ options: -O3 -lm

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs (and GPUs via SYCL) and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 4.3 - Binary: Pathtracer ISPC - Model: Asian Dragon Obj (Frames Per Second, More Is Better)
  AVX-512 256b DP: 190.12  (SE +/- 0.11, N = 5; MIN: 186.67 / MAX: 194.33)
  AVX-512 Off:     178.48  (SE +/- 0.17, N = 5; MIN: 174.73 / MAX: 183.09)
  AVX-512 512b DP: 191.64  (SE +/- 0.08, N = 5; MIN: 188.18 / MAX: 196.08)

Y-Cruncher

Y-Cruncher 0.8.5 - Pi Digits To Calculate: 1B (Seconds, Fewer Is Better)
  AVX-512 256b DP: 7.751  (SE +/- 0.005, N = 5)
  AVX-512 Off:     8.020  (SE +/- 0.014, N = 5)
  AVX-512 512b DP: 7.789  (SE +/- 0.012, N = 5)

SMHasher

SMHasher is a hash function tester supporting various algorithms and able to make use of AVX and other modern CPU instruction set extensions. Learn more via the OpenBenchmarking.org test page.

SMHasher 2022-08-22 - Hash: FarmHash32 x86_64 AVX (cycles/hash, Fewer Is Better)
  AVX-512 256b DP: 26.89  (SE +/- 0.38, N = 6)
  AVX-512 Off:     26.51  (SE +/- 0.26, N = 6)
  AVX-512 512b DP: 26.49  (SE +/- 0.01, N = 6)
  1. (CXX) g++ options: -march=native -O3 -flto=auto -fno-fat-lto-objects

SMHasher 2022-08-22 - Hash: FarmHash32 x86_64 AVX (MiB/sec, More Is Better)
  AVX-512 256b DP: 34422.41  (SE +/- 20.30, N = 6)
  AVX-512 Off:     32099.03  (SE +/- 19.92, N = 6)
  AVX-512 512b DP: 34394.98  (SE +/- 23.52, N = 6)
  1. (CXX) g++ options: -march=native -O3 -flto=auto -fno-fat-lto-objects

NAMD

NAMD 3.0b6 - Input: ATPase with 327,506 Atoms (ns/day, More Is Better)
  AVX-512 256b DP: 13.31002  (SE +/- 0.02496, N = 7)
  AVX-512 Off:     7.00068  (SE +/- 0.01866, N = 3)
  AVX-512 512b DP: 14.18293  (SE +/- 0.06451, N = 7)

OpenFOAM

OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.

OpenFOAM 10 - Input: drivaerFastback, Small Mesh Size - Execution Time (Seconds, Fewer Is Better)
  AVX-512 256b DP: 19.98
  AVX-512 Off:     20.42
  AVX-512 512b DP: 20.25
  1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfiniteVolume -lmeshTools -lparallel -llagrangian -lregionModels -lgenericPatchFields -lOpenFOAM -ldl -lm

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.4 - Harness: IP Shapes 3D - Engine: CPU (ms, Fewer Is Better)
  AVX-512 256b DP: 0.323221  (SE +/- 0.000924, N = 5)
  AVX-512 Off:     3.141980  (SE +/- 0.003218, N = 5; MIN: 3.07)
  AVX-512 512b DP: 0.322331  (SE +/- 0.000544, N = 5)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.4 - Harness: Convolution Batch Shapes Auto - Engine: CPU (ms, Fewer Is Better)
  AVX-512 256b DP: 0.350034  (SE +/- 0.000363, N = 7; MIN: 0.33)
  AVX-512 Off:     0.373925  (SE +/- 0.000549, N = 7; MIN: 0.35)
  AVX-512 512b DP: 0.257632  (SE +/- 0.000532, N = 7; MIN: 0.25)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
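For a lower-is-better metric such as these oneDNN times, speedup is the baseline time divided by the new time, not the other way around. A minimal sketch on the IP Shapes 3D figures above, where the gap is largest in this comparison (values copied from the result file):

```python
# Lower-is-better speedup: baseline_time / new_time.
# oneDNN IP Shapes 3D times (ms) from the results above.
times_ms = {
    "AVX-512 256b DP": 0.323221,
    "AVX-512 Off":     3.141980,
    "AVX-512 512b DP": 0.322331,
}

baseline = times_ms["AVX-512 Off"]
for cfg in ("AVX-512 256b DP", "AVX-512 512b DP"):
    print(f"{cfg}: {baseline / times_ms[cfg]:.2f}x faster than AVX-512 Off")
```

Both AVX-512 datapath modes land within a fraction of a percent of each other here; the order-of-magnitude gap is against the AVX-512 Off build.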

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs (and GPUs via SYCL) and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 4.3 - Binary: Pathtracer ISPC - Model: Crown (Frames Per Second, More Is Better)
  AVX-512 256b DP: 178.13  (SE +/- 0.12, N = 8; MIN: 173.22 / MAX: 184.71)
  AVX-512 Off:     167.00  (SE +/- 0.18, N = 8; MIN: 162.73 / MAX: 173.25)
  AVX-512 512b DP: 179.68  (SE +/- 0.09, N = 8; MIN: 174.97 / MAX: 186.29)

Embree 4.3 - Binary: Pathtracer ISPC - Model: Asian Dragon (Frames Per Second, More Is Better)
  AVX-512 256b DP: 221.38  (SE +/- 0.05, N = 8; MIN: 217.86 / MAX: 225.89)
  AVX-512 Off:     205.95  (SE +/- 0.08, N = 8; MIN: 202.63 / MAX: 210.88)
  AVX-512 512b DP: 223.14  (SE +/- 0.09, N = 8; MIN: 218.68 / MAX: 229)

oneDNN


oneDNN 3.4 - Harness: Deconvolution Batch shapes_3d - Engine: CPU (ms, Fewer Is Better)
  AVX-512 256b DP: 0.700379  (SE +/- 0.000342, N = 9; MIN: 0.68)
  AVX-512 Off:     1.312140  (SE +/- 0.001041, N = 9; MIN: 1.28)
  AVX-512 512b DP: 0.509019  (SE +/- 0.000468, N = 9; MIN: 0.48)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

miniBUDE


miniBUDE 20210901 - Implementation: OpenMP - Input Deck: BM1 (Billion Interactions/s, More Is Better)
  AVX-512 256b DP: 328.87  (SE +/- 0.13, N = 10)
  AVX-512 Off:     195.31  (SE +/- 0.12, N = 8)
  AVX-512 512b DP: 395.37  (SE +/- 0.11, N = 11)
  1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm

miniBUDE 20210901 - Implementation: OpenMP - Input Deck: BM1 (GFInst/s, More Is Better)
  AVX-512 256b DP: 8221.74  (SE +/- 3.23, N = 10)
  AVX-512 Off:     4882.81  (SE +/- 3.11, N = 8)
  AVX-512 512b DP: 9884.25  (SE +/- 2.65, N = 11)
  1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm

CPU Temperature Monitor

CPU Temperature Monitor (Celsius) - Phoronix Test Suite System Monitoring
  AVX-512 256b DP: Min: 25.13 / Avg: 49.34 / Max: 63.75
  AVX-512 Off:     Min: 26.13 / Avg: 49.06 / Max: 63.5
  AVX-512 512b DP: Min: 23.88 / Avg: 50.93 / Max: 66

CPU Power Consumption Monitor

CPU Power Consumption Monitor (Watts) - Phoronix Test Suite System Monitoring
  AVX-512 256b DP: Min: 22.32 / Avg: 305.93 / Max: 503.55
  AVX-512 Off:     Min: 22.25 / Avg: 297.71 / Max: 505.2
  AVX-512 512b DP: Min: 22.21 / Avg: 292.98 / Max: 502.06

CPU Peak Freq (Highest CPU Core Frequency) Monitor

CPU Peak Freq (Highest CPU Core Frequency) Monitor (Megahertz) - Phoronix Test Suite System Monitoring
  AVX-512 256b DP: Min: 2172 / Avg: 3712.06 / Max: 4195
  AVX-512 Off:     Min: 2294 / Avg: 3647.31 / Max: 4647
  AVX-512 512b DP: Min: 1886 / Avg: 3621.72 / Max: 4224
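The viewer's "Show Overall Geometric Mean" option summarizes a run with a geometric mean, which is the standard way to aggregate benchmark ratios because it does not let one outsized result dominate. The same aggregation can be sketched by hand over per-test speedups; the subset below is illustrative only (five results picked from this file, with the lower-is-better Y-Cruncher time inverted so that every ratio means "512b DP over Off"):

```python
import math

# Per-test speedup of AVX-512 512b DP over AVX-512 Off, from the results above.
# Lower-is-better results are inverted so a value > 1 always means "faster".
speedups = [
    395.37 / 195.31,    # miniBUDE BM1, billion interactions/s
    22.84 / 18.35,      # GROMACS water_GMX50_bare, ns/day
    223.14 / 205.95,    # Embree Pathtracer ISPC, Asian Dragon FPS
    8.020 / 7.789,      # Y-Cruncher 1B, seconds (inverted)
    4.62565 / 2.28097,  # NAMD STMV, ns/day
]

# Geometric mean = exp of the arithmetic mean of the logs.
geo_mean = math.exp(sum(map(math.log, speedups)) / len(speedups))
print(f"Geometric mean speedup over this subset: {geo_mean:.2f}x")
```

Note how the geometric mean sits well below the arithmetic mean of the same ratios: the two ~2x wins (miniBUDE, NAMD) are balanced against the near-ties rather than averaged linearly.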

92 Results Shown

OpenVKL
Xmrig
TensorFlow
ONNX Runtime:
  ResNet101_DUC_HDC-12 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  fcn-resnet101-11 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  ArcFace ResNet-100 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  bertsquad-12 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
simdjson
SVT-AV1
TensorFlow
Numpy Benchmark
Mobile Neural Network:
  resnet-v2-50
  mobilenetV3
simdjson
OpenFOAM
simdjson
OSPRay Studio:
  2 - 4K - 32 - Path Tracer - CPU
  1 - 4K - 32 - Path Tracer - CPU
oneDNN
simdjson
ONNX Runtime:
  super-resolution-10 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
simdjson
OSPRay
OSPRay Studio:
  3 - 4K - 16 - Path Tracer - CPU
  3 - 4K - 32 - Path Tracer - CPU
libxsmm
OSPRay
Y-Cruncher
OpenVINO:
  Face Detection FP16-INT8 - CPU:
    ms
    FPS
OSPRay
OpenVINO:
  Noise Suppression Poconet-Like FP16 - CPU:
    ms
    FPS
  Person Detection FP16 - CPU:
    ms
    FPS
  Machine Translation EN To DE FP16 - CPU:
    ms
    FPS
  Person Vehicle Bike Detection FP16 - CPU:
    ms
    FPS
  Person Re-Identification Retail FP16 - CPU:
    ms
    FPS
  Road Segmentation ADAS FP16-INT8 - CPU:
    ms
    FPS
  Age Gender Recognition Retail 0013 FP16-INT8 - CPU:
    ms
    FPS
  Face Detection Retail FP16-INT8 - CPU:
    ms
    FPS
  Handwritten English Recognition FP16-INT8 - CPU:
    ms
    FPS
ONNX Runtime:
  GPT-2 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
OpenVINO:
  Vehicle Detection FP16-INT8 - CPU:
    ms
    FPS
OSPRay Studio
OpenVINO:
  Weld Porosity Detection FP16-INT8 - CPU:
    ms
    FPS
OSPRay Studio:
  1 - 4K - 1 - Path Tracer - CPU
  2 - 4K - 1 - Path Tracer - CPU
PyTorch
OSPRay Studio:
  2 - 4K - 16 - Path Tracer - CPU
  1 - 4K - 16 - Path Tracer - CPU
PyTorch
SVT-AV1
Y-Cruncher
SVT-AV1
NAMD
PyTorch
SVT-AV1
miniBUDE:
  OpenMP - BM2:
    Billion Interactions/s
    GFInst/s
GROMACS
Embree
Y-Cruncher
SMHasher:
  FarmHash32 x86_64 AVX:
    cycles/hash
    MiB/sec
NAMD
OpenFOAM
oneDNN:
  IP Shapes 3D - CPU
  Convolution Batch Shapes Auto - CPU
Embree:
  Pathtracer ISPC - Crown
  Pathtracer ISPC - Asian Dragon
oneDNN
miniBUDE:
  OpenMP - BM1:
    Billion Interactions/s
    GFInst/s
CPU Temperature Monitor:
  Phoronix Test Suite System Monitoring:
    Celsius
CPU Power Consumption Monitor:
  Phoronix Test Suite System Monitoring:
    Watts
CPU Peak Freq (Highest CPU Core Frequency) Monitor:
  Phoronix Test Suite System Monitoring:
    Megahertz