7713 2P

Tests for a future article. 2 x AMD EPYC 7303 16-Core testing with an AMD DAYTONA_X (RYM1009B BIOS) motherboard and ASPEED graphics on Ubuntu 22.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2310231-NE-77132P99738

Run Management

Result Identifier | Date Run | Test Duration
a | October 03 2023 | 50 Minutes
b | October 03 2023 | 50 Minutes
c | October 03 2023 | 50 Minutes
AMD EPYC 7303 16-Core | October 08 2023 | 1 Hour, 37 Minutes
d | October 18 2023 | 2 Hours, 29 Minutes
e | October 23 2023 | 1 Hour, 53 Minutes
f | October 23 2023 | 1 Hour, 53 Minutes
g | October 23 2023 | 1 Hour, 53 Minutes



System Details

a, b, c:
  Processor: 2 x AMD EPYC 7713 64-Core @ 2.00GHz (128 Cores / 256 Threads)
  Motherboard: AMD DAYTONA_X (RYM1009B BIOS)
  Chipset: AMD Starship/Matisse
  Memory: 256GB
  Disk: 3841GB Micron_9300_MTFDHAL3T8TDP
  Graphics: ASPEED
  Monitor: VE228
  Network: 2 x Mellanox MT27710
  OS: Ubuntu 22.04
  Kernel: 5.15.0-47-generic (x86_64)
  Desktop: GNOME Shell 42.4
  Display Server: X Server 1.21.1.3
  Vulkan: 1.2.204
  Compiler: GCC 11.2.0
  File-System: ext4
  Screen Resolution: 1920x1080

AMD EPYC 7303 16-Core:
  Processor: AMD EPYC 7303 16-Core @ 2.40GHz (16 Cores / 32 Threads); otherwise as above

d:
  Processor: 2 x AMD EPYC 7203 8-Core @ 2.80GHz (16 Cores / 32 Threads); Memory: 512GB; Screen Resolution: 1024x768

e, f, g:
  Processor: 2 x AMD EPYC 7303 16-Core @ 2.40GHz (32 Cores / 64 Threads)

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa001173
Python Details: Python 3.10.6
Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Result Overview (Phoronix Test Suite; performance relative to a 100% baseline, across runs a, b, c, AMD EPYC 7303 16-Core, d, e, f, g): Blender, 7-Zip Compression, Timed Linux Kernel Compilation, Remhos, OpenVINO, libxsmm, OpenRadioss, Laghos, libavif avifenc
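The overview chart normalizes each configuration against a 100% baseline; summarizing many such relative scores is typically done with a geometric mean, one of the statistics the Phoronix Test Suite can report. A minimal sketch of that calculation, using illustrative relative scores rather than values from this result file:

```python
from math import prod

def geometric_mean(values):
    """Geometric mean: the n-th root of the product of n values.

    Preferred over the arithmetic mean for relative benchmark scores,
    since a 2x win and a 2x loss cancel out exactly.
    """
    return prod(values) ** (1.0 / len(values))

# Hypothetical relative scores (1.0 = baseline) across four tests.
relative_scores = [2.5, 3.0, 1.8, 2.2]
print(f"{geometric_mean(relative_scores):.2f}x overall")
```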

A condensed side-by-side table of all results for runs a, b, c, AMD EPYC 7303 16-Core, d, e, f, and g follows in the original result file; the same figures are presented per test below.

OpenVINO

OpenVINO 2023.1 - Model: Face Detection Retail FP16-INT8 - Device: CPU (FPS, more is better)
g: 3148.97 | f: 3148.82 | e: 3153.90 | d: 1232.95 | AMD EPYC 7303 16-Core: 1754.67 | c: 9527.76 | b: 9544.95 | a: 9543.23
(CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO 2023.1 - Model: Person Vehicle Bike Detection FP16 - Device: CPU (FPS, more is better)
g: 875.64 | f: 865.28 | e: 873.34 | d: 369.70 | AMD EPYC 7303 16-Core: 520.89 | c: 2694.27 | b: 2655.09 | a: 2687.19

OpenVINO 2023.1 - Model: Vehicle Detection FP16-INT8 - Device: CPU (FPS, more is better)
g: 1095.68 | f: 1095.27 | e: 1096.25 | d: 470.46 | AMD EPYC 7303 16-Core: 570.58 | c: 3303.59 | b: 3302.83 | a: 3308.28

OpenVINO 2023.1 - Model: Road Segmentation ADAS FP16-INT8 - Device: CPU (FPS, more is better)
g: 415.15 | f: 415.12 | e: 415.32 | d: 195.50 | AMD EPYC 7303 16-Core: 237.79 | c: 1340.12 | b: 1340.29 | a: 1342.68

Blender

Blender 3.6 - Blend File: Barbershop - Compute: CPU-Only (Seconds, fewer is better)
g: 506.55 | f: 504.18 | e: 505.43 | d: 974.94 | AMD EPYC 7303 16-Core: 871.17 | c: 152.50 | b: 152.94 | a: 153.78

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.

7-Zip Compression 22.01 - Test: Decompression Rating (MIPS, more is better)
g: 218417 | f: 216258 | e: 214511 | d: 106826 | AMD EPYC 7303 16-Core: 116102 | c: 678765 | b: 647995 | a: 643823
(CXX) g++ options: -lpthread -ldl -O2 -fPIC
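Decompression throughput can also be viewed per hardware thread, using the thread counts from the system details table; a quick sketch with the MIPS figures from this chart:

```python
# Decompression rating (MIPS) and hardware-thread count per configuration,
# taken from the chart above and the system details table.
decompression = {
    "a (2 x EPYC 7713, 256 threads)": (643823, 256),
    "e (2 x EPYC 7303, 64 threads)": (214511, 64),
    "d (2 x EPYC 7203, 32 threads)": (106826, 32),
}

for label, (mips, threads) in decompression.items():
    print(f"{label}: {mips / threads:.0f} MIPS/thread")
```

By this measure the EPYC 7303 runs deliver more throughput per thread than the 128-core 7713 2P system, consistent with the 7713's lower 2.00GHz base clock shown in the system table.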

OpenVINO

OpenVINO 2023.1 - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (FPS, more is better)
g: 1725.37 | f: 1708.45 | e: 1727.34 | d: 856.63 | AMD EPYC 7303 16-Core: 858.21 | c: 5335.15 | b: 5334.87 | a: 5333.24

OpenVINO 2023.1 - Model: Face Detection FP16-INT8 - Device: CPU (FPS, more is better)
g: 17.28 | f: 17.27 | e: 17.20 | d: 8.35 | AMD EPYC 7303 16-Core: 8.66 | c: 51.20 | b: 51.17 | a: 51.12

Blender

Blender 3.6 - Blend File: Pabellon Barcelona - Compute: CPU-Only (Seconds, fewer is better)
g: 157.59 | f: 157.08 | e: 155.52 | d: 296.74 | AMD EPYC 7303 16-Core: 280.03 | c: 48.96 | b: 49.32 | a: 49.66

Blender 3.6 - Blend File: Classroom - Compute: CPU-Only (Seconds, fewer is better)
g: 118.34 | f: 117.05 | e: 116.68 | d: 237.87 | AMD EPYC 7303 16-Core: 226.06 | c: 40.04 | b: 40.21 | a: 40.28

Blender 3.6 - Blend File: BMW27 - Compute: CPU-Only (Seconds, fewer is better)
g: 50.09 | f: 50.25 | e: 50.21 | d: 94.86 | AMD EPYC 7303 16-Core: 89.45 | c: 16.19 | b: 16.08 | a: 16.08

OpenVINO

OpenVINO 2023.1 - Model: Handwritten English Recognition FP16-INT8 - Device: CPU (FPS, more is better)
g: 303.59 | f: 303.83 | e: 303.80 | d: 171.61 | AMD EPYC 7303 16-Core: 155.50 | c: 898.69 | b: 897.77 | a: 899.15

Blender

Blender 3.6 - Blend File: Fishy Cat - Compute: CPU-Only (Seconds, fewer is better)
g: 62.31 | f: 62.18 | e: 62.52 | d: 116.66 | AMD EPYC 7303 16-Core: 109.21 | c: 20.45 | b: 20.73 | a: 20.44

OpenVINO

OpenVINO 2023.1 - Model: Person Detection FP16 - Device: CPU (FPS, more is better)
g: 72.14 | f: 73.23 | e: 72.71 | d: 37.06 | AMD EPYC 7303 16-Core: 42.11 | c: 203.96 | b: 203.14 | a: 202.68

OpenVINO 2023.1 - Model: Machine Translation EN To DE FP16 - Device: CPU (FPS, more is better)
g: 84.86 | f: 85.46 | e: 85.51 | d: 43.18 | AMD EPYC 7303 16-Core: 47.46 | c: 222.73 | b: 226.27 | a: 226.15

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested, or alternatively an allmodconfig build of all possible kernel modules. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 6.1 - Build: allmodconfig (Seconds, fewer is better)
g: 462.70 | f: 462.60 | e: 462.14 | d: 830.49 | AMD EPYC 7303 16-Core: 815.40 | c: 179.54 | b: 179.99 | a: 182.49

7-Zip Compression


7-Zip Compression 22.01 - Test: Compression Rating (MIPS, more is better)
g: 209887 | f: 212139 | e: 210620 | d: 113384 | AMD EPYC 7303 16-Core: 135892 | c: 499347 | b: 505036 | a: 507611

OpenVINO

OpenVINO 2023.1 - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (FPS, more is better)
g: 47926.34 | f: 45126.45 | e: 46823.73 | d: 23725.57 | AMD EPYC 7303 16-Core: 23110.20 | c: 101985.21 | b: 101883.13 | a: 98494.41

Remhos

Remhos (REMap High-Order Solver) is a miniapp that solves the pure advection equations that are used to perform monotonic and conservative discontinuous field interpolation (remap) as part of the Eulerian phase in Arbitrary Lagrangian Eulerian (ALE) simulations. Learn more via the OpenBenchmarking.org test page.

Remhos 1.0 - Test: Sample Remap Example (Seconds, fewer is better)
g: 23.72 | f: 23.49 | e: 23.46 | d: 39.71 | AMD EPYC 7303 16-Core: 37.31 | c: 11.71 | b: 11.02 | a: 11.28
(CXX) g++ options: -O3 -std=c++11 -lmfem -lHYPRE -lmetis -lrt -lmpi_cxx -lmpi

OpenRadioss

OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. It is based on Altair Radioss, which was open-sourced in 2022. The solver is benchmarked with various example models available from https://www.openradioss.org/models/ and https://github.com/OpenRadioss/ModelExchange/tree/main/Examples. This test currently uses a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2023.09.15 - Model: INIVOL and Fluid Structure Interaction Drop Container (Seconds, fewer is better)
g: 307.24 | f: 305.55 | e: 302.03 | d: 429.22 | AMD EPYC 7303 16-Core: 357.27 | c: 132.80 | b: 129.61 | a: 132.37

OpenRadioss 2023.09.15 - Model: Chrysler Neon 1M (Seconds, fewer is better)
g: 303.58 | f: 306.37 | e: 304.03 | d: 489.75 | AMD EPYC 7303 16-Core: 551.24 | c: 174.45 | b: 173.59 | a: 174.49

Timed Linux Kernel Compilation


Timed Linux Kernel Compilation 6.1 - Build: defconfig (Seconds, fewer is better)
g: 43.32 | f: 43.48 | e: 43.41 | d: 68.68 | AMD EPYC 7303 16-Core: 67.41 | c: 23.94 | b: 23.97 | a: 23.88
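From the defconfig build times above, the relative speedups against the single EPYC 7303 work out as follows; a small sketch using values from this chart:

```python
# defconfig build times in seconds, from the chart above.
times = {
    "a (2 x EPYC 7713, 128 cores)": 23.88,
    "g (2 x EPYC 7303, 32 cores)": 43.32,
    "AMD EPYC 7303 16-Core": 67.41,
}
baseline = times["AMD EPYC 7303 16-Core"]

for label, seconds in times.items():
    print(f"{label}: {baseline / seconds:.2f}x vs. single EPYC 7303")
```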

OpenRadioss


OpenRadioss 2023.09.15 - Model: Cell Phone Drop Test (Seconds, fewer is better)
g: 40.85 | f: 41.01 | e: 41.05 | d: 63.71 | AMD EPYC 7303 16-Core: 65.55 | c: 26.53 | b: 26.35 | a: 26.54

libxsmm

Libxsmm is an open-source library for specialized dense and sparse matrix operations and deep learning primitives. Libxsmm can make use of Intel AMX, AVX-512, and other modern CPU instruction set capabilities. Learn more via the OpenBenchmarking.org test page.

libxsmm 2-1.17-3645 - M N K: 64 (GFLOPS/s, more is better)
g: 723.4 | f: 737.3 | e: 727.2 | d: 442.5 | AMD EPYC 7303 16-Core: 406.9 | c: 920.8 | b: 923.9 | a: 980.5
(CXX) g++ options: -dynamic -Bstatic -static-libgcc -lgomp -lm -lrt -ldl -lquadmath -lstdc++ -pthread -fPIC -std=c++14 -O2 -fopenmp-simd -funroll-loops -ftree-vectorize -fdata-sections -ffunction-sections -fvisibility=hidden -msse4.2

Laghos

Laghos (LAGrangian High-Order Solver) is a miniapp that solves the time-dependent Euler equations of compressible gas dynamics in a moving Lagrangian frame using unstructured high-order finite element spatial discretization and explicit high-order time-stepping. Learn more via the OpenBenchmarking.org test page.

Laghos 3.1 - Test: Sedov Blast Wave, ube_922_hex.mesh (Major Kernels Total Rate, more is better)
g: 233.05 | f: 233.54 | e: 234.14 | d: 132.46 | AMD EPYC 7303 16-Core: 148.49 | c: 279.20 | b: 282.08 | a: 282.55
(CXX) g++ options: -O3 -std=c++11 -lmfem -lHYPRE -lmetis -lrt -lmpi_cxx -lmpi

OpenVINO

OpenVINO 2023.1 - Model: Face Detection FP16-INT8 - Device: CPU (ms, fewer is better)
g: 462.57 | f: 462.78 | e: 463.57 | d: 478.92 | AMD EPYC 7303 16-Core: 921.14 | c: 623.60 | b: 623.64 | a: 623.90
Min / Max (same order): 455.99/481.91 | 456.11/479.6 | 456.35/480.41 | 474.19/500.74 | 883.65/932.87 | 599.33/668.6 | 593.36/672.04 | 594.87/669.12

libxsmm


libxsmm 2-1.17-3645 - M N K: 32 (GFLOPS/s, more is better)
g: 470.9 | f: 482.9 | e: 484.7 | d: 356.5 | AMD EPYC 7303 16-Core: 273.2 | c: 467.3 | b: 471.0 | a: 540.3

OpenVINO

OpenVINO 2023.1 - Model: Vehicle Detection FP16-INT8 - Device: CPU (ms, fewer is better)
g: 7.29 | f: 7.29 | e: 7.29 | d: 8.49 | AMD EPYC 7303 16-Core: 14.00 | c: 9.67 | b: 9.68 | a: 9.66
Min / Max (same order): 7.15/16.28 | 7.14/16.05 | 7.14/16.48 | 7.96/14.96 | 7.54/24.09 | 7.95/41.68 | 7.94/44.33 | 7.98/45.57

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library, which offers high-performance volume computation kernels and is part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenVKL 2.0.0 - Benchmark: vklBenchmarkCPU Scalar (Items / Sec, more is better)
g: 274 | f: 275 | e: 273 | d: 144
Min / Max (same order): 21/5053 | 21/5065 | 21/5037 | 11/2667

Embree

Embree 4.3 - Binary: Pathtracer ISPC - Model: Asian Dragon (Frames Per Second, more is better)
g: 30.03 | f: 30.13 | e: 29.98 | d: 15.87
Min / Max (same order): 29.84/30.34 | 29.94/30.64 | 29.79/30.37 | 15.72/16.2

Embree 4.3 - Binary: Pathtracer ISPC - Model: Asian Dragon Obj (Frames Per Second, more is better)
g: 26.16 | f: 26.27 | e: 26.07 | d: 13.84
Min / Max (same order): 25.98/26.75 | 26.05/26.62 | 25.87/26.79 | 13.7/14.06

Embree 4.3 - Binary: Pathtracer - Model: Asian Dragon (Frames Per Second, more is better)
g: 32.97 | f: 33.04 | e: 32.97 | d: 17.48
Min / Max (same order): 32.78/33.29 | 32.83/33.53 | 32.77/33.49 | 17.31/17.65

Embree 4.3 - Binary: Pathtracer - Model: Crown (Frames Per Second, more is better)
g: 30.26 | f: 30.34 | e: 30.10 | d: 16.13
Min / Max (same order): 29.92/30.97 | 30.01/30.87 | 29.78/30.77 | 15.95/16.45

Embree 4.3 - Binary: Pathtracer - Model: Asian Dragon Obj (Frames Per Second, more is better)
g: 29.59 | f: 29.52 | e: 29.62 | d: 15.74
Min / Max (same order): 29.36/30.31 | 29.27/30.16 | 29.37/30.21 | 15.64/15.92

OpenVKL


OpenVKL 2.0.0 - Benchmark: vklBenchmarkCPU ISPC (Items / Sec, more is better)
g: 495 | f: 497 | e: 495 | d: 267
Min / Max (same order): 43/6166 | 43/6162 | 44/6142 | 23/3404

Embree

Embree 4.3 - Binary: Pathtracer ISPC - Model: Crown (Frames Per Second, more is better)
g: 26.40 | f: 26.61 | e: 26.52 | d: 14.40
Min / Max (same order): 26.05/27.08 | 26.29/27.11 | 26.18/27.13 | 14.25/14.69

OpenVINO

OpenVINO 2023.1 - Model: Machine Translation EN To DE FP16 - Device: CPU (ms, fewer is better)
g: 94.17 | f: 93.50 | e: 93.45 | d: 92.53 | AMD EPYC 7303 16-Core: 168.30 | c: 143.55 | b: 141.24 | a: 141.33
Min / Max (same order): 85.17/160.36 | 82.68/155.42 | 77.59/139.64 | 87.83/184.88 | 141.17/184.72 | 114.18/525.63 | 114.93/488.84 | 113.19/547.49

OpenVINO 2023.1 - Model: Face Detection Retail FP16-INT8 - Device: CPU (ms, fewer is better)
g: 2.53 | f: 2.53 | e: 2.53 | d: 3.24 | AMD EPYC 7303 16-Core: 4.55 | c: 3.35 | b: 3.34 | a: 3.34
Min / Max (same order): 2.47/10.12 | 2.48/10.15 | 2.48/9.57 | 2.94/8.15 | 2.67/16.04 | 2.81/23.74 | 2.79/25.65 | 2.82/23.64

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 1.0 - Encoder Speed: 6 (Seconds, fewer is better)
g: 3.962 | f: 3.976 | e: 3.965 | d: 5.771 | AMD EPYC 7303 16-Core: 5.282 | c: 3.309 | b: 3.331 | a: 3.248
(CXX) g++ options: -O3 -fPIC -lm

OpenVINO

OpenVINO 2023.1 - Model: Person Detection FP16 - Device: CPU (ms, fewer is better)
g: 110.71 | f: 109.16 | e: 109.89 | d: 107.81 | AMD EPYC 7303 16-Core: 189.92 | c: 156.74 | b: 157.40 | a: 157.75
Min / Max (same order): 98.56/144.8 | 99.46/143.57 | 93.05/140.85 | 102.18/124.72 | 169.75/200.94 | 131.77/297.92 | 128.42/281.86 | 108.19/274.81

OpenVINO 2023.1 - Model: Road Segmentation ADAS FP16-INT8 - Device: CPU (ms, fewer is better)
g: 19.25 | f: 19.25 | e: 19.25 | d: 20.44 | AMD EPYC 7303 16-Core: 33.62 | c: 23.86 | b: 23.85 | a: 23.81
Min / Max (same order): 17.49/27.05 | 17.46/28.16 | 17.55/32.52 | 18.84/37.99 | 25.37/42.95 | 20.96/60.47 | 20.69/64.82 | 20.24/65.85

Intel Open Image Denoise

Intel Open Image Denoise 2.1 - Run: RTLightmap.hdr.4096x4096 - Device: CPU-Only (Images / Sec, more is better)
g: 0.48 | f: 0.48 | e: 0.48 | d: 0.28

Intel Open Image Denoise 2.1 - Run: RT.ldr_alb_nrm.3840x2160 - Device: CPU-Only (Images / Sec, more is better)
g: 0.95 | f: 0.95 | e: 0.95 | d: 0.56

Intel Open Image Denoise 2.1 - Run: RT.hdr_alb_nrm.3840x2160 - Device: CPU-Only (Images / Sec, more is better)
g: 0.95 | f: 0.95 | e: 0.95 | d: 0.56

OpenVINO

OpenVINO 2023.1 - Model: Person Vehicle Bike Detection FP16 - Device: CPU (ms, fewer is better)
g: 9.11 | f: 9.21 | e: 9.13 | d: 10.80 | AMD EPYC 7303 16-Core: 15.34 | c: 11.86 | b: 12.03 | a: 11.89
Min / Max (same order): 8.35/22.58 | 8.24/23.34 | 8.44/24.1 | 9.8/19.62 | 9.22/27.56 | 9.49/57.72 | 10.4/55.41 | 10.03/55.71

libavif avifenc


libavif avifenc 1.0 - Encoder Speed: 0 (Seconds, fewer is better)
g: 92.53 | f: 91.91 | e: 92.45 | d: 125.92 | AMD EPYC 7303 16-Core: 120.85 | c: 75.84 | b: 75.78 | a: 75.60
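Speed 0 is avifenc's slowest, highest-effort setting, and the numbers above show the encode scales poorly with core count; a quick check using run a (2 x EPYC 7713) against the single EPYC 7303 run:

```python
# Encode times in seconds and hardware-thread counts, from the chart
# above and the system details table.
epyc_7303_time, epyc_7303_threads = 120.85, 32
dual_7713_time, dual_7713_threads = 75.60, 256

speedup = epyc_7303_time / dual_7713_time
thread_ratio = dual_7713_threads / epyc_7303_threads
print(f"{speedup:.2f}x faster with {thread_ratio:.0f}x the hardware threads")
```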

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.3 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
g: 1.17023 | f: 1.16545 | e: 1.16856 | d: 1.92811
Min (same order): 1.1 | 1.11 | 1.1 | 1.74
(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenRadioss


OpenRadioss 2023.09.15 - Model: Bird Strike on Windshield (Seconds, fewer is better)
g: 192.49 | f: 192.62 | e: 192.24 | d: 235.98 | AMD EPYC 7303 16-Core: 202.93 | c: 146.31 | b: 146.17 | a: 147.64

Laghos

Laghos (LAGrangian High-Order Solver) is a miniapp that solves the time-dependent Euler equations of compressible gas dynamics in a moving Lagrangian frame using unstructured high-order finite element spatial discretization and explicit high-order time-stepping. Learn more via the OpenBenchmarking.org test page.

Laghos 3.1 - Test: Triple Point Problem (Major Kernels Total Rate, more is better)
g: 180.93 | f: 181.55 | e: 182.23 | d: 114.18 | AMD EPYC 7303 16-Core: 132.65 | c: 167.28 | b: 162.65 | a: 164.19

oneDNN


oneDNN 3.3 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
g: 2.28469 | f: 1.92348 | e: 1.44749 | d: 1.65090
Min (same order): 1.48 | 1.39 | 1.01 | 1.3

oneDNN 3.3 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, fewer is better)
g: 1.84652 | f: 1.69817 | e: 1.74244 | d: 2.67723
Min (same order): 1.33 | 1.41 | 1.45 | 2.34

libavif avifenc


libavif avifenc 1.0 - Encoder Speed: 2 (Seconds, fewer is better)
g: 48.73 | f: 48.84 | e: 48.72 | d: 64.15 | AMD EPYC 7303 16-Core: 60.19 | c: 42.41 | b: 40.84 | a: 41.42

OpenVINO

OpenVINO 2023.1 - Model: Handwritten English Recognition FP16-INT8 - Device: CPU (ms, fewer is better)
g: 105.33 | f: 105.23 | e: 105.22 | d: 93.16 | AMD EPYC 7303 16-Core: 102.82 | c: 142.29 | b: 142.42 | a: 142.22
Min / Max (same order): 94.87/111.06 | 92.98/112.15 | 91.67/111.86 | 86.16/102.67 | 69.31/117.98 | 110.33/173.87 | 106.02/184.44 | 111.28/182.41

oneDNN


oneDNN 3.3 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, fewer is better)
g: 3.14105 | f: 3.06111 | e: 3.14371 | d: 4.63185
Min (same order): 2.86 | 2.86 | 2.85 | 4.26

libavif avifenc


libavif avifenc 1.0 - Encoder Speed: 6, Lossless (Seconds, fewer is better):
  g: 7.825
  f: 7.824
  e: 7.801
  d: 10.133
  AMD EPYC 7303 16-Core: 9.186
  c: 6.929
  b: 6.881
  a: 6.786
1. (CXX) g++ options: -O3 -fPIC -lm

oneDNN


oneDNN 3.3 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
  g: 1.086610  (MIN: 0.86)
  f: 1.143790  (MIN: 0.83)
  e: 1.053240  (MIN: 0.81)
  d: 0.768449  (MIN: 0.6)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN 3.3 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms, fewer is better):
  g: 2.12070  (MIN: 1.65)
  f: 1.77064  (MIN: 1.42)
  e: 1.89405  (MIN: 1.51)
  d: 2.34565  (MIN: 2.11)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenRadioss

OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss and was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/ and https://github.com/OpenRadioss/ModelExchange/tree/main/Examples. This test currently uses a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2023.09.15 - Model: Rubber O-Ring Seal Installation (Seconds, fewer is better):
  g: 122.81
  f: 120.33
  e: 118.07
  d: 129.36
  AMD EPYC 7303 16-Core: 102.80
  c: 131.25
  b: 134.61
  a: 130.40

OpenVINO

OpenVINO 2023.1 - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (ms, fewer is better):
  g: 0.66  (MIN: 0.61 / MAX: 10.25)
  f: 0.70  (MIN: 0.61 / MAX: 10.2)
  e: 0.67  (MIN: 0.61 / MAX: 10.58)
  d: 0.66  (MIN: 0.61 / MAX: 6.63)
  AMD EPYC 7303 16-Core: 0.68  (MIN: 0.4 / MAX: 11.17)
  c: 0.86  (MIN: 0.71 / MAX: 40.39)
  b: 0.85  (MIN: 0.72 / MAX: 21.71)
  a: 0.85  (MIN: 0.72 / MAX: 27.72)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

oneDNN


oneDNN 3.3 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, fewer is better):
  g: 1129.54  (MIN: 1079.67)
  f: 1139.54  (MIN: 1087.78)
  e: 1114.67  (MIN: 1074.44)
  d: 1451.20  (MIN: 1412.26)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN 3.3 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better):
  g: 1149.15  (MIN: 1097.47)
  f: 1128.30  (MIN: 1073.3)
  e: 1137.11  (MIN: 1093.13)
  d: 1466.25  (MIN: 1422.8)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenVINO

OpenVINO 2023.1 - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (ms, fewer is better):
  g: 18.48  (MIN: 17.82 / MAX: 33.34)
  f: 18.67  (MIN: 17.84 / MAX: 33.76)
  e: 18.46  (MIN: 17.85 / MAX: 32.35)
  d: 18.65  (MIN: 17.81 / MAX: 33.21)
  AMD EPYC 7303 16-Core: 18.63  (MIN: 8.99 / MAX: 30.48)
  c: 23.97  (MIN: 21.4 / MAX: 33.21)
  b: 23.98  (MIN: 21.31 / MAX: 35.8)
  a: 23.98  (MIN: 21.19 / MAX: 37.06)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenRadioss


OpenRadioss 2023.09.15 - Model: Bumper Beam (Seconds, fewer is better):
  g: 107.04
  f: 105.68
  e: 105.44
  d: 134.32
  AMD EPYC 7303 16-Core: 118.33
  c: 116.30
  b: 116.73
  a: 115.89

oneDNN


oneDNN 3.3 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
  g: 1145.62  (MIN: 1102.84)
  f: 1153.45  (MIN: 1089.33)
  e: 1161.80  (MIN: 1098.59)
  d: 1445.86  (MIN: 1399.09)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

easyWave

The easyWave software simulates tsunami generation and propagation in the context of early warning systems. EasyWave supports OpenMP for CPU multi-threading; GPU ports are also available but are not currently incorporated into this test profile. The easyWave tsunami generation software is run with one of the example/reference input files to measure CPU execution time. Learn more via the OpenBenchmarking.org test page.

easyWave r34 - Input: e2Asean Grid + BengkuluSept2007 Source - Time: 240 (Seconds, fewer is better):
  g: 2.766
  f: 2.672
  e: 2.831
  d: 3.291
1. (CXX) g++ options: -O3 -fopenmp

easyWave r34 - Input: e2Asean Grid + BengkuluSept2007 Source - Time: 1200 (Seconds, fewer is better):
  g: 56.34
  f: 60.05
  e: 55.34
  d: 65.73
1. (CXX) g++ options: -O3 -fopenmp
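Comparing the two easyWave settings above is instructive: the 1200 run simulates 5x as much time as the 240 run, yet for every system the wall-clock cost grows far more than 5x, so, at least for these runs, cost does not scale linearly with simulated time. A small illustrative check using run g's values transcribed from the charts:

```python
# easyWave wall-clock seconds for run "g" at two simulated-time settings
# (e2Asean Grid + BengkuluSept2007 Source), transcribed from the charts above.
t240, t1200 = 2.766, 56.34

sim_ratio = 1200 / 240     # 5x more simulated time
cost_ratio = t1200 / t240  # how much more wall-clock time it actually took

print(f"{sim_ratio:.0f}x simulated time -> {cost_ratio:.1f}x wall-clock time")
```

For run g that works out to roughly a 20x wall-clock increase for a 5x longer simulated window; the same pattern holds for the d, e, and f runs.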

oneDNN


oneDNN 3.3 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, fewer is better):
  g: 3.94733  (MIN: 3.55)
  f: 4.18974  (MIN: 3.58)
  e: 3.84916  (MIN: 3.54)
  d: 4.51333  (MIN: 2.92)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN 3.3 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
  g: 3556.44  (MIN: 3506.87)
  f: 3281.24  (MIN: 3185.12)
  e: 3379.25  (MIN: 3265.73)
  d: 3037.72  (MIN: 2984.39)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN 3.3 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, fewer is better):
  g: 3451.09  (MIN: 3397.7)
  f: 3341.93  (MIN: 3231.53)
  e: 3494.51  (MIN: 3430.83)
  d: 3019.31  (MIN: 2971.6)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

easyWave


easyWave r34 - Input: e2Asean Grid + BengkuluSept2007 Source - Time: 2400 (Seconds, fewer is better):
  g: 133.18
  f: 130.45
  e: 140.30
  d: 149.20
1. (CXX) g++ options: -O3 -fopenmp

oneDNN


oneDNN 3.3 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better):
  g: 3272.36  (MIN: 3161.82)
  f: 3462.90  (MIN: 3392.2)
  e: 3216.13  (MIN: 3111.44)
  d: 3028.36  (MIN: 2984.7)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

libavif avifenc


libavif avifenc 1.0 - Encoder Speed: 10, Lossless (Seconds, fewer is better):
  g: 5.815
  f: 5.817
  e: 5.855
  d: 6.186
  AMD EPYC 7303 16-Core: 5.779
  c: 5.535
  b: 5.485
  a: 5.474
1. (CXX) g++ options: -O3 -fPIC -lm

oneDNN


oneDNN 3.3 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
  g: 4.05768  (MIN: 2.86)
  f: 3.83995  (MIN: 3.1)
  e: 4.10412  (MIN: 3.26)
  d: 4.27338  (MIN: 3.64)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN 3.3 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, fewer is better):
  g: 5.63319  (MIN: 4.7)
  f: 5.66435  (MIN: 4.73)
  e: 5.67018  (MIN: 4.65)
  d: 5.54136  (MIN: 4.33)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN 3.3 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
  g: 1.95024  (MIN: 1.78)
  f: 1.95220  (MIN: 1.79)
  e: 1.93352  (MIN: 1.72)
  d: 1.94173  (MIN: 1.77)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU

d, e, f, g: The test run did not produce a result.

Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU

d, e, f, g: The test run did not produce a result.

Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU

d, e, f, g: The test run did not produce a result.

Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU

d, e, f, g: The test run did not produce a result.

Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU

d, e, f, g: The test run did not produce a result.

75 Results Shown

OpenVINO:
  Face Detection Retail FP16-INT8 - CPU
  Person Vehicle Bike Detection FP16 - CPU
  Vehicle Detection FP16-INT8 - CPU
  Road Segmentation ADAS FP16-INT8 - CPU
Blender
7-Zip Compression
OpenVINO:
  Weld Porosity Detection FP16-INT8 - CPU
  Face Detection FP16-INT8 - CPU
Blender:
  Pabellon Barcelona - CPU-Only
  Classroom - CPU-Only
  BMW27 - CPU-Only
OpenVINO
Blender
OpenVINO:
  Person Detection FP16 - CPU
  Machine Translation EN To DE FP16 - CPU
Timed Linux Kernel Compilation
7-Zip Compression
OpenVINO
Remhos
OpenRadioss:
  INIVOL and Fluid Structure Interaction Drop Container
  Chrysler Neon 1M
Timed Linux Kernel Compilation
OpenRadioss
libxsmm
Laghos
OpenVINO
libxsmm
OpenVINO
OpenVKL
Embree:
  Pathtracer ISPC - Asian Dragon
  Pathtracer ISPC - Asian Dragon Obj
  Pathtracer - Asian Dragon
  Pathtracer - Crown
  Pathtracer - Asian Dragon Obj
OpenVKL
Embree
OpenVINO:
  Machine Translation EN To DE FP16 - CPU
  Face Detection Retail FP16-INT8 - CPU
libavif avifenc
OpenVINO:
  Person Detection FP16 - CPU
  Road Segmentation ADAS FP16-INT8 - CPU
Intel Open Image Denoise:
  RTLightmap.hdr.4096x4096 - CPU-Only
  RT.ldr_alb_nrm.3840x2160 - CPU-Only
  RT.hdr_alb_nrm.3840x2160 - CPU-Only
OpenVINO
libavif avifenc
oneDNN
OpenRadioss
Laghos
oneDNN:
  IP Shapes 1D - u8s8f32 - CPU
  Convolution Batch Shapes Auto - f32 - CPU
libavif avifenc
OpenVINO
oneDNN
libavif avifenc
oneDNN:
  IP Shapes 3D - u8s8f32 - CPU
  IP Shapes 1D - f32 - CPU
OpenRadioss
OpenVINO
oneDNN:
  Recurrent Neural Network Inference - f32 - CPU
  Recurrent Neural Network Inference - bf16bf16bf16 - CPU
OpenVINO
OpenRadioss
oneDNN
easyWave:
  e2Asean Grid + BengkuluSept2007 Source - 240
  e2Asean Grid + BengkuluSept2007 Source - 1200
oneDNN:
  IP Shapes 3D - f32 - CPU
  Recurrent Neural Network Training - u8s8f32 - CPU
  Recurrent Neural Network Training - f32 - CPU
easyWave
oneDNN
libavif avifenc
oneDNN:
  Convolution Batch Shapes Auto - u8s8f32 - CPU
  Deconvolution Batch shapes_1d - f32 - CPU
  Deconvolution Batch shapes_1d - u8s8f32 - CPU