7713 2P

Tests for a future article. 2 x AMD EPYC 7303 16-Core testing with an AMD DAYTONA_X (RYM1009B BIOS) motherboard and ASPEED graphics on Ubuntu 22.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2310231-NE-77132P99738
Run Management

Identifier               Date               Test Duration
a                        October 03 2023    50 Minutes
b                        October 03 2023    50 Minutes
c                        October 03 2023    50 Minutes
AMD EPYC 7303 16-Core    October 08 2023    1 Hour, 37 Minutes
d                        October 18 2023    2 Hours, 29 Minutes
e                        October 23 2023    1 Hour, 53 Minutes
f                        October 23 2023    1 Hour, 53 Minutes
g                        October 23 2023    1 Hour, 53 Minutes

7713 2P System Details

Runs a, b, c:
  Processor: 2 x AMD EPYC 7713 64-Core @ 2.00GHz (128 Cores / 256 Threads); Memory: 256GB; Screen Resolution: 1920x1080
Run AMD EPYC 7303 16-Core:
  Processor: AMD EPYC 7303 16-Core @ 2.40GHz (16 Cores / 32 Threads)
Run d:
  Processor: 2 x AMD EPYC 7203 8-Core @ 2.80GHz (16 Cores / 32 Threads); Memory: 512GB; Screen Resolution: 1024x768
Runs e, f, g:
  Processor: 2 x AMD EPYC 7303 16-Core @ 2.40GHz (32 Cores / 64 Threads)

Common to all runs:
  Motherboard: AMD DAYTONA_X (RYM1009B BIOS); Chipset: AMD Starship/Matisse; Disk: 3841GB Micron_9300_MTFDHAL3T8TDP; Graphics: ASPEED; Monitor: VE228; Network: 2 x Mellanox MT27710; OS: Ubuntu 22.04; Kernel: 5.15.0-47-generic (x86_64); Desktop: GNOME Shell 42.4; Display Server: X Server 1.21.1.3; Vulkan: 1.2.204; Compiler: GCC 11.2.0; File-System: ext4

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: acpi-cpufreq performance (Boost: Enabled); CPU Microcode: 0xa001173
Python Details: Python 3.10.6
Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected

[Result overview chart (Phoronix Test Suite): relative performance of runs a, b, c, the AMD EPYC 7303 16-Core run, and runs d, e, f, g across Blender, 7-Zip Compression, Timed Linux Kernel Compilation, Remhos, OpenVINO, libxsmm, OpenRadioss, Laghos, and libavif avifenc.]

[Combined results table omitted: the flattened per-test result matrix for all eight runs, covering OpenVKL, Blender, Timed Linux Kernel Compilation, OpenRadioss, Laghos, easyWave, libavif avifenc, oneDNN, Intel Open Image Denoise, OpenVINO, Embree, 7-Zip Compression, Remhos, and libxsmm, is presented test by test in the individual result listings that follow.]
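
OpenBenchmarking.org summarizes result files with overall geometric means across tests. As an illustrative sketch only (the run labels and the two sample results are taken from this file; the calculation is the standard geometric mean, not PTS source code), a geometric-mean speedup of run a (2 x EPYC 7713) over the AMD EPYC 7303 16-Core run can be computed from the Blender Barbershop and kernel defconfig times:

```python
from statistics import geometric_mean

# Times in seconds (fewer is better), taken from the results below.
# Two tests are used purely to illustrate the calculation.
barbershop = {"a": 153.78, "EPYC 7303 16-Core": 871.17}   # Blender Barbershop
defconfig  = {"a": 23.88,  "EPYC 7303 16-Core": 67.41}    # Kernel defconfig

# Per-test speedup of run "a" over the single EPYC 7303.
speedups = [
    barbershop["EPYC 7303 16-Core"] / barbershop["a"],
    defconfig["EPYC 7303 16-Core"] / defconfig["a"],
]

overall = geometric_mean(speedups)
print(f"geometric-mean speedup: {overall:.2f}x")  # ~4.00x
```

The geometric mean is used rather than the arithmetic mean so that no single test with a large spread dominates the summary figure.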

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library, which offers high-performance volume computation kernels and is part of the Intel oneAPI Rendering Toolkit. Learn more via the OpenBenchmarking.org test page.

OpenVKL 2.0.0, Benchmark: vklBenchmarkCPU Scalar (Items / Sec; more is better):
  g: 274, f: 275, e: 273, d: 144

OpenVKL 2.0.0, Benchmark: vklBenchmarkCPU ISPC (Items / Sec; more is better):
  g: 495, f: 497, e: 495, d: 267

Blender

Blender 3.6, Blend File: Barbershop - Compute: CPU-Only (Seconds; fewer is better):
  g: 506.55, f: 504.18, e: 505.43, d: 974.94, c: 152.50, b: 152.94, a: 153.78, AMD EPYC 7303 16-Core: 871.17

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested, or alternatively an allmodconfig build that enables all possible kernel modules. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 6.1, Build: allmodconfig (Seconds; fewer is better):
  g: 462.70, f: 462.60, e: 462.14, d: 830.49, c: 179.54, b: 179.99, a: 182.49, AMD EPYC 7303 16-Core: 815.40

OpenRadioss

OpenRadioss is an open-source AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss, which was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/ and https://github.com/OpenRadioss/ModelExchange/tree/main/Examples. This test currently uses a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2023.09.15, Model: Chrysler Neon 1M (Seconds; fewer is better):
  g: 303.58, f: 306.37, e: 304.03, d: 489.75, c: 174.45, b: 173.59, a: 174.49, AMD EPYC 7303 16-Core: 551.24

OpenRadioss 2023.09.15, Model: INIVOL and Fluid Structure Interaction Drop Container (Seconds; fewer is better):
  g: 307.24, f: 305.55, e: 302.03, d: 429.22, c: 132.80, b: 129.61, a: 132.37, AMD EPYC 7303 16-Core: 357.27

OpenRadioss 2023.09.15, Model: Bird Strike on Windshield (Seconds; fewer is better):
  g: 192.49, f: 192.62, e: 192.24, d: 235.98, c: 146.31, b: 146.17, a: 147.64, AMD EPYC 7303 16-Core: 202.93

Blender

Blender 3.6, Blend File: Pabellon Barcelona - Compute: CPU-Only (Seconds; fewer is better):
  g: 157.59, f: 157.08, e: 155.52, d: 296.74, c: 48.96, b: 49.32, a: 49.66, AMD EPYC 7303 16-Core: 280.03

Laghos

Laghos (LAGrangian High-Order Solver) is a miniapp that solves the time-dependent Euler equations of compressible gas dynamics in a moving Lagrangian frame using unstructured high-order finite element spatial discretization and explicit high-order time-stepping. Learn more via the OpenBenchmarking.org test page.
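
For orientation, the time-dependent Euler equations of compressible gas dynamics that Laghos solves can be written in Lagrangian (material-derivative) form, in standard notation rather than as taken from the Laghos sources:

```latex
\frac{d\rho}{dt} = -\rho\,\nabla\cdot\mathbf{v}, \qquad
\rho\,\frac{d\mathbf{v}}{dt} = -\nabla p, \qquad
\rho\,\frac{de}{dt} = -p\,\nabla\cdot\mathbf{v}
```

where \(\rho\) is density, \(\mathbf{v}\) velocity, \(p\) pressure, \(e\) specific internal energy, and \(d/dt\) the material derivative following the moving mesh.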

Laghos 3.1, Test: Sedov Blast Wave, ube_922_hex.mesh (Major Kernels Total Rate; more is better):
  g: 233.05, f: 233.54, e: 234.14, d: 132.46, c: 279.20, b: 282.08, a: 282.55, AMD EPYC 7303 16-Core: 148.49

easyWave

The easyWave software allows simulating tsunami generation and propagation in the context of early warning systems. easyWave supports making use of OpenMP for CPU multi-threading; GPU ports are also available but are not currently incorporated as part of this test profile. The easyWave tsunami generation software is run with one of the example/reference input files for measuring the CPU execution time. Learn more via the OpenBenchmarking.org test page.

easyWave r34, Input: e2Asean Grid + BengkuluSept2007 Source - Time: 2400 (Seconds; fewer is better):
  g: 133.18, f: 130.45, e: 140.30, d: 149.20

OpenRadioss

OpenRadioss is an open-source AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss, which was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/ and https://github.com/OpenRadioss/ModelExchange/tree/main/Examples. This test currently uses a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2023.09.15, Model: Rubber O-Ring Seal Installation (Seconds; fewer is better):
  g: 122.81, f: 120.33, e: 118.07, d: 129.36, c: 131.25, b: 134.61, a: 130.40, AMD EPYC 7303 16-Core: 102.80

Blender

Blender 3.6, Blend File: Classroom - Compute: CPU-Only (Seconds; fewer is better):
  g: 118.34, f: 117.05, e: 116.68, d: 237.87, c: 40.04, b: 40.21, a: 40.28, AMD EPYC 7303 16-Core: 226.06

OpenRadioss

OpenRadioss is an open-source AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss, which was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/ and https://github.com/OpenRadioss/ModelExchange/tree/main/Examples. This test currently uses a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2023.09.15, Model: Bumper Beam (Seconds; fewer is better):
  g: 107.04, f: 105.68, e: 105.44, d: 134.32, c: 116.30, b: 116.73, a: 115.89, AMD EPYC 7303 16-Core: 118.33

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 1.0, Encoder Speed: 0 (Seconds; fewer is better):
  g: 92.53, f: 91.91, e: 92.45, d: 125.92, c: 75.84, b: 75.78, a: 75.60, AMD EPYC 7303 16-Core: 120.85

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.3, Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms; fewer is better):
  g: 3556.44, f: 3281.24, e: 3379.25, d: 3037.72

oneDNN 3.3, Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms; fewer is better):
  g: 3451.09, f: 3341.93, e: 3494.51, d: 3019.31

oneDNN 3.3, Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms; fewer is better):
  g: 3272.36, f: 3462.90, e: 3216.13, d: 3028.36

oneDNN 3.3, Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms; fewer is better):
  g: 1149.15, f: 1128.30, e: 1137.11, d: 1466.25

oneDNN 3.3, Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms; fewer is better):
  g: 1145.62, f: 1153.45, e: 1161.80, d: 1445.86

oneDNN 3.3, Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms; fewer is better):
  g: 1129.54, f: 1139.54, e: 1114.67, d: 1451.20

Intel Open Image Denoise

Intel Open Image Denoise 2.1, Run: RTLightmap.hdr.4096x4096 - Device: CPU-Only (Images / Sec; more is better):
  g: 0.48, f: 0.48, e: 0.48, d: 0.28

Laghos

Laghos (LAGrangian High-Order Solver) is a miniapp that solves the time-dependent Euler equations of compressible gas dynamics in a moving Lagrangian frame using unstructured high-order finite element spatial discretization and explicit high-order time-stepping. Learn more via the OpenBenchmarking.org test page.

Laghos 3.1, Test: Triple Point Problem (Major Kernels Total Rate; more is better):
  g: 180.93, f: 181.55, e: 182.23, d: 114.18, c: 167.28, b: 162.65, a: 164.19, AMD EPYC 7303 16-Core: 132.65

OpenVINO

OpenVINO 2023.1, Model: Face Detection FP16-INT8 - Device: CPU (ms; fewer is better):
  g: 462.57, f: 462.78, e: 463.57, d: 478.92, c: 623.60, b: 623.64, a: 623.90, AMD EPYC 7303 16-Core: 921.14

OpenVINO 2023.1, Model: Face Detection FP16-INT8 - Device: CPU (FPS; more is better):
  g: 17.28, f: 17.27, e: 17.20, d: 8.35, c: 51.20, b: 51.17, a: 51.12, AMD EPYC 7303 16-Core: 8.66

OpenVINO 2023.1, Model: Person Detection FP16 - Device: CPU (ms; fewer is better):
  g: 110.71, f: 109.16, e: 109.89, d: 107.81, c: 156.74, b: 157.40, a: 157.75, AMD EPYC 7303 16-Core: 189.92

OpenVINO 2023.1, Model: Person Detection FP16 - Device: CPU (FPS; more is better):
  g: 72.14, f: 73.23, e: 72.71, d: 37.06, c: 203.96, b: 203.14, a: 202.68, AMD EPYC 7303 16-Core: 42.11

OpenVINO 2023.1, Model: Machine Translation EN To DE FP16 - Device: CPU (ms; fewer is better):
  g: 94.17, f: 93.50, e: 93.45, d: 92.53, c: 143.55, b: 141.24, a: 141.33, AMD EPYC 7303 16-Core: 168.30

OpenVINO 2023.1, Model: Machine Translation EN To DE FP16 - Device: CPU (FPS; more is better):
  g: 84.86, f: 85.46, e: 85.51, d: 43.18, c: 222.73, b: 226.27, a: 226.15, AMD EPYC 7303 16-Core: 47.46

OpenVINO 2023.1, Model: Person Vehicle Bike Detection FP16 - Device: CPU (ms; fewer is better):
  g: 9.11, f: 9.21, e: 9.13, d: 10.80, c: 11.86, b: 12.03, a: 11.89, AMD EPYC 7303 16-Core: 15.34

OpenVINO 2023.1, Model: Person Vehicle Bike Detection FP16 - Device: CPU (FPS; more is better):
  g: 875.64, f: 865.28, e: 873.34, d: 369.70, c: 2694.27, b: 2655.09, a: 2687.19, AMD EPYC 7303 16-Core: 520.89

OpenVINO 2023.1, Model: Road Segmentation ADAS FP16-INT8 - Device: CPU (ms; fewer is better):
  g: 19.25, f: 19.25, e: 19.25, d: 20.44, c: 23.86, b: 23.85, a: 23.81, AMD EPYC 7303 16-Core: 33.62

OpenVINO 2023.1, Model: Road Segmentation ADAS FP16-INT8 - Device: CPU (FPS; more is better):
  g: 415.15, f: 415.12, e: 415.32, d: 195.50, c: 1340.12, b: 1340.29, a: 1342.68, AMD EPYC 7303 16-Core: 237.79

OpenVINO 2023.1, Model: Handwritten English Recognition FP16-INT8 - Device: CPU (ms; fewer is better):
  g: 105.33, f: 105.23, e: 105.22, d: 93.16, c: 142.29, b: 142.42, a: 142.22, AMD EPYC 7303 16-Core: 102.82

OpenVINO 2023.1, Model: Handwritten English Recognition FP16-INT8 - Device: CPU (FPS; more is better):
  g: 303.59, f: 303.83, e: 303.80, d: 171.61, c: 898.69, b: 897.77, a: 899.15, AMD EPYC 7303 16-Core: 155.50

easyWave

The easyWave software allows simulating tsunami generation and propagation in the context of early warning systems. easyWave supports making use of OpenMP for CPU multi-threading; GPU ports are also available but are not currently incorporated as part of this test profile. The easyWave tsunami generation software is run with one of the example/reference input files for measuring the CPU execution time. Learn more via the OpenBenchmarking.org test page.

easyWave r34, Input: e2Asean Grid + BengkuluSept2007 Source - Time: 1200 (Seconds; fewer is better):
  g: 56.34, f: 60.05, e: 55.34, d: 65.73

OpenVINO

OpenVINO 2023.1, Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (ms; fewer is better):
  g: 0.66, f: 0.70, e: 0.67, d: 0.66, c: 0.86, b: 0.85, a: 0.85, AMD EPYC 7303 16-Core: 0.68

OpenVINO 2023.1, Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (FPS; more is better):
  g: 47926.34, f: 45126.45, e: 46823.73, d: 23725.57, c: 101985.21, b: 101883.13, a: 98494.41, AMD EPYC 7303 16-Core: 23110.20

OpenVINO 2023.1, Model: Vehicle Detection FP16-INT8 - Device: CPU (ms; fewer is better):
  g: 7.29, f: 7.29, e: 7.29, d: 8.49, c: 9.67, b: 9.68, a: 9.66, AMD EPYC 7303 16-Core: 14.00

OpenVINO 2023.1, Model: Vehicle Detection FP16-INT8 - Device: CPU (FPS; more is better):
  g: 1095.68, f: 1095.27, e: 1096.25, d: 470.46, c: 3303.59, b: 3302.83, a: 3308.28, AMD EPYC 7303 16-Core: 570.58

OpenVINO 2023.1, Model: Weld Porosity Detection FP16-INT8 - Device: CPU (ms; fewer is better):
  g: 18.48, f: 18.67, e: 18.46, d: 18.65, c: 23.97, b: 23.98, a: 23.98, AMD EPYC 7303 16-Core: 18.63

OpenVINO 2023.1, Model: Weld Porosity Detection FP16-INT8 - Device: CPU (FPS; more is better):
  g: 1725.37, f: 1708.45, e: 1727.34, d: 856.63, c: 5335.15, b: 5334.87, a: 5333.24, AMD EPYC 7303 16-Core: 858.21

OpenVINO 2023.1, Model: Face Detection Retail FP16-INT8 - Device: CPU (ms; fewer is better):
  g: 2.53, f: 2.53, e: 2.53, d: 3.24, c: 3.35, b: 3.34, a: 3.34, AMD EPYC 7303 16-Core: 4.55

OpenVINO 2023.1, Model: Face Detection Retail FP16-INT8 - Device: CPU (FPS; more is better):
  g: 3148.97, f: 3148.82, e: 3153.90, d: 1232.95, c: 9527.76, b: 9544.95, a: 9543.23, AMD EPYC 7303 16-Core: 1754.67

Blender

Blender 3.6, Blend File: Fishy Cat - Compute: CPU-Only (Seconds; fewer is better):
  g: 62.31, f: 62.18, e: 62.52, d: 116.66, c: 20.45, b: 20.73, a: 20.44, AMD EPYC 7303 16-Core: 109.21

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 1.0, Encoder Speed: 2 (Seconds; fewer is better):
  g: 48.73, f: 48.84, e: 48.72, d: 64.15, c: 42.41, b: 40.84, a: 41.42, AMD EPYC 7303 16-Core: 60.19

OpenRadioss

OpenRadioss is an open-source AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss, which was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/ and https://github.com/OpenRadioss/ModelExchange/tree/main/Examples. This test currently uses a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2023.09.15, Model: Cell Phone Drop Test (Seconds; fewer is better):
  g: 40.85, f: 41.01, e: 41.05, d: 63.71, c: 26.53, b: 26.35, a: 26.54, AMD EPYC 7303 16-Core: 65.55

Blender

Blender 3.6, Blend File: BMW27 - Compute: CPU-Only (Seconds; fewer is better):
  g: 50.09, f: 50.25, e: 50.21, d: 94.86, c: 16.19, b: 16.08, a: 16.08, AMD EPYC 7303 16-Core: 89.45

Embree

Embree 4.3, Binary: Pathtracer ISPC - Model: Asian Dragon Obj (Frames Per Second; more is better):
  g: 26.16, f: 26.27, e: 26.07, d: 13.84

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested, or alternatively an allmodconfig build that enables all possible kernel modules. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 6.1, Build: defconfig (Seconds; fewer is better):
  g: 43.32, f: 43.48, e: 43.41, d: 68.68, c: 23.94, b: 23.97, a: 23.88, AMD EPYC 7303 16-Core: 67.41

Embree

Embree 4.3, Binary: Pathtracer - Model: Asian Dragon Obj (Frames Per Second; more is better):
  g: 29.59, f: 29.52, e: 29.62, d: 15.74

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.
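
Since the runs span very different thread counts, the raw MIPS ratings can also be normalized per hardware thread. This is a back-of-the-envelope sketch, not a PTS metric; the ratings and thread counts are taken from this result file's decompression results and system table:

```python
# 7-Zip decompression ratings (MIPS) and hardware thread counts
# from this result file; the per-thread normalization is illustrative.
runs = {
    "a (2 x EPYC 7713, 256 threads)": (643823, 256),
    "g (2 x EPYC 7303, 64 threads)":  (218417, 64),
}

for name, (mips, threads) in runs.items():
    # ~2515 MIPS/thread for run a, ~3413 MIPS/thread for run g
    print(f"{name}: {mips / threads:.0f} MIPS per thread")
```

The higher per-thread figure for the smaller system reflects the usual scaling picture: total throughput grows with core count while per-thread throughput drops.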

7-Zip Compression 22.01, Test: Decompression Rating (MIPS; more is better):
  g: 218417, f: 216258, e: 214511, d: 106826, c: 678765, b: 647995, a: 643823, AMD EPYC 7303 16-Core: 116102

7-Zip Compression 22.01, Test: Compression Rating (MIPS; more is better):
  g: 209887, f: 212139, e: 210620, d: 113384, c: 499347, b: 505036, a: 507611, AMD EPYC 7303 16-Core: 135892

Intel Open Image Denoise

Intel Open Image Denoise 2.1, Run: RT.hdr_alb_nrm.3840x2160 - Device: CPU-Only (Images / Sec; more is better):
  g: 0.95, f: 0.95, e: 0.95, d: 0.56

Intel Open Image Denoise 2.1, Run: RT.ldr_alb_nrm.3840x2160 - Device: CPU-Only (Images / Sec; more is better):
  g: 0.95, f: 0.95, e: 0.95, d: 0.56

Embree

Embree 4.3, Binary: Pathtracer ISPC - Model: Crown (Frames Per Second; more is better):
  g: 26.40, f: 26.61, e: 26.52, d: 14.40

Embree 4.3, Binary: Pathtracer ISPC - Model: Asian Dragon (Frames Per Second; more is better):
  g: 30.03, f: 30.13, e: 29.98, d: 15.87

Embree 4.3, Binary: Pathtracer - Model: Crown (Frames Per Second; more is better):
  g: 30.26, f: 30.34, e: 30.10, d: 16.13

Embree 4.3 - Binary: Pathtracer - Model: Asian Dragon (Frames Per Second, More Is Better)
  g: 32.97 (min 32.78 / max 33.29) | f: 33.04 (min 32.83 / max 33.53) | e: 32.97 (min 32.77 / max 33.49) | d: 17.48 (min 17.31 / max 17.65)

Remhos

Remhos (REMap High-Order Solver) is a miniapp that solves the pure advection equations that are used to perform monotonic and conservative discontinuous field interpolation (remap) as part of the Eulerian phase in Arbitrary Lagrangian Eulerian (ALE) simulations. Learn more via the OpenBenchmarking.org test page.
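The "pure advection" being remapped is the linear transport equation; in its textbook form (my sketch of the standard statement, not pulled from the Remhos documentation):

```latex
\frac{\partial u}{\partial t} + \mathbf{v} \cdot \nabla u = 0
```

where u is the transported field and \mathbf{v} the prescribed velocity along which the field is remapped during the ALE mesh motion.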

Remhos 1.0 - Test: Sample Remap Example (Seconds, Fewer Is Better)
  g: 23.72 | f: 23.49 | e: 23.46 | d: 39.71 | c: 11.71 | b: 11.02 | a: 11.28 | AMD EPYC 7303 16-Core: 37.31
  1. (CXX) g++ options: -O3 -std=c++11 -lmfem -lHYPRE -lmetis -lrt -lmpi_cxx -lmpi

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.
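The data-type labels in the harness names (f32, u8s8f32, bf16bf16bf16) describe the source/weight/destination types used by each primitive. A stdlib-only sketch of what u8s8f32 means for an inner-product (IP) primitive: uint8 activations times int8 weights, accumulated in int32, dequantized to float32. The scale values here are illustrative assumptions, not oneDNN defaults:

```python
# Toy model of oneDNN's "u8s8f32" type combination for an inner-product
# primitive. Scales are made-up illustration values.

def ip_u8s8f32(src_u8, wei_s8, src_scale, wei_scale):
    # Integer dot product, as an int32 accumulator would compute it.
    acc = sum(a * w for a, w in zip(src_u8, wei_s8))
    # Dequantize the accumulator back to float32.
    return acc * src_scale * wei_scale

src = [12, 200, 7, 255]    # uint8 activations
wei = [3, -5, 127, -128]   # int8 weights
out = ip_u8s8f32(src, wei, src_scale=0.02, wei_scale=0.1)
print(out)
```

The integer path is why u8s8f32 harnesses typically report lower times than their f32 counterparts in the tables below.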

oneDNN 3.3 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  g: 5.63319 (min 4.7) | f: 5.66435 (min 4.73) | e: 5.67018 (min 4.65) | d: 5.54136 (min 4.33)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN 3.3 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
  g: 1.95024 (min 1.78) | f: 1.95220 (min 1.79) | e: 1.93352 (min 1.72) | d: 1.94173 (min 1.77)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

libxsmm

Libxsmm is an open-source library for specialized dense and sparse matrix operations and deep learning primitives. Libxsmm can make use of Intel AMX, AVX-512, and other modern CPU instruction set capabilities. Learn more via the OpenBenchmarking.org test page.
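For the small-GEMM sizes tested here (M = N = K of 64 or 32), the GFLOPS/s figure follows from the usual GEMM operation count of 2*M*N*K floating-point operations per matrix multiply. A quick sketch of that bookkeeping (the 980.5 figure is result "a" from the table below; everything else is arithmetic):

```python
# GEMM flop bookkeeping behind a GFLOPS/s figure: C = A @ B with
# A (M x K) and B (K x N) takes 2*M*N*K floating-point operations
# (K multiplies plus K adds per output element).

def gemm_gflops(m, n, k, seconds):
    return 2 * m * n * k / seconds / 1e9

flops_64 = 2 * 64 * 64 * 64   # 524288 flops per 64x64x64 GEMM
# Sustaining 980.5 GFLOPS/s would mean on the order of 1.9 million
# such small GEMMs per second.
gemms_per_sec = 980.5e9 / flops_64
print(flops_64, round(gemms_per_sec))
```

At these sizes the matrices fit in cache, so dispatch overhead and core count dominate, which is why the 128-core systems lead despite the tiny problem.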

libxsmm 2-1.17-3645 - M N K: 64 (GFLOPS/s, More Is Better)
  g: 723.4 | f: 737.3 | e: 727.2 | d: 442.5 | c: 920.8 | b: 923.9 | a: 980.5 | AMD EPYC 7303 16-Core: 406.9
  1. (CXX) g++ options: -dynamic -Bstatic -static-libgcc -lgomp -lm -lrt -ldl -lquadmath -lstdc++ -pthread -fPIC -std=c++14 -O2 -fopenmp-simd -funroll-loops -ftree-vectorize -fdata-sections -ffunction-sections -fvisibility=hidden -msse4.2

oneDNN


oneDNN 3.3 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  g: 2.12070 (min 1.65) | f: 1.77064 (min 1.42) | e: 1.89405 (min 1.51) | d: 2.34565 (min 2.11)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN 3.3 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
  g: 2.28469 (min 1.48) | f: 1.92348 (min 1.39) | e: 1.44749 (min 1.01) | d: 1.65090 (min 1.3)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

libxsmm


libxsmm 2-1.17-3645 - M N K: 32 (GFLOPS/s, More Is Better)
  g: 470.9 | f: 482.9 | e: 484.7 | d: 356.5 | c: 467.3 | b: 471.0 | a: 540.3 | AMD EPYC 7303 16-Core: 273.2
  1. (CXX) g++ options: -dynamic -Bstatic -static-libgcc -lgomp -lm -lrt -ldl -lquadmath -lstdc++ -pthread -fPIC -std=c++14 -O2 -fopenmp-simd -funroll-loops -ftree-vectorize -fdata-sections -ffunction-sections -fvisibility=hidden -msse4.2

oneDNN


oneDNN 3.3 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  g: 3.94733 (min 3.55) | f: 4.18974 (min 3.58) | e: 3.84916 (min 3.54) | d: 4.51333 (min 2.92)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN 3.3 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
  g: 1.086610 (min 0.86) | f: 1.143790 (min 0.83) | e: 1.053240 (min 0.81) | d: 0.768449 (min 0.6)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

libavif avifenc

This is a test of the AOMedia libavif library, encoding a JPEG image to the AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.
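Reproducing a run outside the test harness looks roughly like the following; the flag spellings are from avifenc's help output as I recall them, so verify with `avifenc --help` before relying on them:

```shell
# Encode a JPEG to AVIF at effort level 6, losslessly
# (corresponds to the "Encoder Speed: 6, Lossless" result below).
avifenc --speed 6 --lossless input.jpg output.avif
```

Lower speed values trade encode time for compression efficiency, which is why the Speed 6 results take noticeably longer than Speed 10.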

libavif avifenc 1.0 - Encoder Speed: 6, Lossless (Seconds, Fewer Is Better)
  g: 7.825 | f: 7.824 | e: 7.801 | d: 10.133 | c: 6.929 | b: 6.881 | a: 6.786 | AMD EPYC 7303 16-Core: 9.186
  1. (CXX) g++ options: -O3 -fPIC -lm

oneDNN


oneDNN 3.3 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
  g: 4.05768 (min 2.86) | f: 3.83995 (min 3.1) | e: 4.10412 (min 3.26) | d: 4.27338 (min 3.64)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN 3.3 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  g: 1.84652 (min 1.33) | f: 1.69817 (min 1.41) | e: 1.74244 (min 1.45) | d: 2.67723 (min 2.34)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

libavif avifenc


libavif avifenc 1.0 - Encoder Speed: 10, Lossless (Seconds, Fewer Is Better)
  g: 5.815 | f: 5.817 | e: 5.855 | d: 6.186 | c: 5.535 | b: 5.485 | a: 5.474 | AMD EPYC 7303 16-Core: 5.779
  1. (CXX) g++ options: -O3 -fPIC -lm

easyWave

The easyWave software simulates tsunami generation and propagation in the context of early warning systems. EasyWave supports OpenMP for CPU multi-threading; GPU ports also exist but are not currently incorporated into this test profile. The easyWave tsunami generation software is run with one of the example/reference input files for measuring the CPU execution time. Learn more via the OpenBenchmarking.org test page.
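Tsunami propagation codes of this kind are built on shallow-water theory; in the linear long-wave form (a textbook statement, my paraphrase rather than easyWave's own documentation):

```latex
\frac{\partial \eta}{\partial t}
  + \frac{\partial (D u)}{\partial x}
  + \frac{\partial (D v)}{\partial y} = 0, \qquad
\frac{\partial u}{\partial t} = -g\,\frac{\partial \eta}{\partial x}, \qquad
\frac{\partial v}{\partial t} = -g\,\frac{\partial \eta}{\partial y}
```

with η the sea-surface elevation, D the still-water depth, (u, v) the depth-averaged velocity components, and g the gravitational acceleration.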

easyWave r34 - Input: e2Asean Grid + BengkuluSept2007 Source - Time: 240 (Seconds, Fewer Is Better)
  g: 2.766 | f: 2.672 | e: 2.831 | d: 3.291
  1. (CXX) g++ options: -O3 -fopenmp

libavif avifenc


libavif avifenc 1.0 - Encoder Speed: 6 (Seconds, Fewer Is Better)
  g: 3.962 | f: 3.976 | e: 3.965 | d: 5.771 | c: 3.309 | b: 3.331 | a: 3.248 | AMD EPYC 7303 16-Core: 5.282
  1. (CXX) g++ options: -O3 -fPIC -lm

oneDNN


oneDNN 3.3 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  g: 3.14105 (min 2.86) | f: 3.06111 (min 2.86) | e: 3.14371 (min 2.85) | d: 4.63185 (min 4.26)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN 3.3 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
  g: 1.17023 (min 1.1) | f: 1.16545 (min 1.11) | e: 1.16856 (min 1.1) | d: 1.92811 (min 1.74)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU

d, e, f, g: The test run did not produce a result.

Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU

d, e, f, g: The test run did not produce a result.

Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU

d, e, f, g: The test run did not produce a result.

Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU

d, e, f, g: The test run did not produce a result.

Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU

d, e, f, g: The test run did not produce a result.

75 Results Shown

OpenVKL:
  vklBenchmarkCPU Scalar
  vklBenchmarkCPU ISPC
Blender
Timed Linux Kernel Compilation
OpenRadioss:
  Chrysler Neon 1M
  INIVOL and Fluid Structure Interaction Drop Container
  Bird Strike on Windshield
Blender
Laghos
easyWave
OpenRadioss
Blender
OpenRadioss
libavif avifenc
oneDNN:
  Recurrent Neural Network Training - u8s8f32 - CPU
  Recurrent Neural Network Training - f32 - CPU
  Recurrent Neural Network Training - bf16bf16bf16 - CPU
  Recurrent Neural Network Inference - bf16bf16bf16 - CPU
  Recurrent Neural Network Inference - u8s8f32 - CPU
  Recurrent Neural Network Inference - f32 - CPU
Intel Open Image Denoise
Laghos
OpenVINO:
  Face Detection FP16-INT8 - CPU:
    ms
    FPS
  Person Detection FP16 - CPU:
    ms
    FPS
  Machine Translation EN To DE FP16 - CPU:
    ms
    FPS
  Person Vehicle Bike Detection FP16 - CPU:
    ms
    FPS
  Road Segmentation ADAS FP16-INT8 - CPU:
    ms
    FPS
  Handwritten English Recognition FP16-INT8 - CPU:
    ms
    FPS
easyWave
OpenVINO:
  Age Gender Recognition Retail 0013 FP16-INT8 - CPU:
    ms
    FPS
  Vehicle Detection FP16-INT8 - CPU:
    ms
    FPS
  Weld Porosity Detection FP16-INT8 - CPU:
    ms
    FPS
  Face Detection Retail FP16-INT8 - CPU:
    ms
    FPS
Blender
libavif avifenc
OpenRadioss
Blender
Embree
Timed Linux Kernel Compilation
Embree
7-Zip Compression:
  Decompression Rating
  Compression Rating
Intel Open Image Denoise:
  RT.hdr_alb_nrm.3840x2160 - CPU-Only
  RT.ldr_alb_nrm.3840x2160 - CPU-Only
Embree:
  Pathtracer ISPC - Crown
  Pathtracer ISPC - Asian Dragon
  Pathtracer - Crown
  Pathtracer - Asian Dragon
Remhos
oneDNN:
  Deconvolution Batch shapes_1d - f32 - CPU
  Deconvolution Batch shapes_1d - u8s8f32 - CPU
libxsmm
oneDNN:
  IP Shapes 1D - f32 - CPU
  IP Shapes 1D - u8s8f32 - CPU
libxsmm
oneDNN:
  IP Shapes 3D - f32 - CPU
  IP Shapes 3D - u8s8f32 - CPU
libavif avifenc
oneDNN:
  Convolution Batch Shapes Auto - u8s8f32 - CPU
  Convolution Batch Shapes Auto - f32 - CPU
libavif avifenc
easyWave
libavif avifenc
oneDNN:
  Deconvolution Batch shapes_3d - f32 - CPU
  Deconvolution Batch shapes_3d - u8s8f32 - CPU