AMD EPYC 9334 32-Core testing with a Supermicro H13SSW (1.1 BIOS) and astdrmfb on AlmaLinux 9.2 via the Phoronix Test Suite.
a
Kernel Notes: Transparent Huge Pages: always
Compiler Notes: --build=x86_64-redhat-linux --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-host-bind-now --enable-host-pie --enable-initfini-array --enable-languages=c,c++,fortran,lto --enable-link-serialization=1 --enable-multilib --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=x86-64 --with-arch_64=x86-64-v2 --with-build-config=bootstrap-lto --with-gcc-major-version-only --with-linker-hash-style=gnu --with-tune=generic --without-cuda-driver --without-isl
Processor Notes: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa10113e
Java Notes: OpenJDK Runtime Environment (Red_Hat-11.0.20.0.8-1) (build 11.0.20+8-LTS)
Python Notes: Python 3.9.16
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
b c Processor: 2 x AMD EPYC 9254 24-Core @ 2.90GHz (48 Cores / 96 Threads), Motherboard: Supermicro H13DSH (1.5 BIOS), Memory: 24 x 32 GB DDR5-4800MT/s Samsung M321R4GA3BB6-CQKET, Disk: 2 x 1920GB SAMSUNG MZQL21T9HCJR-00A07, Graphics: astdrmfb
OS: AlmaLinux 9.2, Kernel: 5.14.0-284.25.1.el9_2.x86_64 (x86_64), Compiler: GCC 11.3.1 20221121, File-System: ext4, Screen Resolution: 1024x768
d
Kernel Notes: Transparent Huge Pages: always
Compiler Notes: --build=x86_64-redhat-linux --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-host-bind-now --enable-host-pie --enable-initfini-array --enable-languages=c,c++,fortran,lto --enable-link-serialization=1 --enable-multilib --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=x86-64 --with-arch_64=x86-64-v2 --with-build-config=bootstrap-lto --with-gcc-major-version-only --with-linker-hash-style=gnu --with-tune=generic --without-cuda-driver --without-isl
Processor Notes: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa101111
Java Notes: OpenJDK Runtime Environment (Red_Hat-11.0.20.0.8-1) (build 11.0.20+8-LTS)
Python Notes: Python 3.9.16
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
e f g Changed Processor to AMD EPYC 9124 16-Core @ 3.00GHz (16 Cores / 32 Threads).
Changed Motherboard to Supermicro H13SSW (1.1 BIOS).
Changed Memory to 12 x 64 GB DDR5-4800MT/s HMCG94MEBRA123N.
h i j Processor: AMD EPYC 9334 32-Core @ 2.70GHz (32 Cores / 64 Threads), Motherboard: Supermicro H13SSW (1.1 BIOS), Memory: 12 x 64 GB DDR5-4800MT/s HMCG94MEBRA123N, Disk: 2 x 1920GB SAMSUNG MZQL21T9HCJR-00A07, Graphics: astdrmfb, Monitor: DELL E207WFP
OS: AlmaLinux 9.2, Kernel: 5.14.0-284.25.1.el9_2.x86_64 (x86_64), Compiler: GCC 11.3.1 20221121, File-System: ext4, Screen Resolution: 1680x1050
OpenRadioss
OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss and was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/ and https://github.com/OpenRadioss/ModelExchange/tree/main/Examples. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.
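For background on what such a dynamic-event solve entails, an explicit finite element code of this kind advances a semi-discrete equation of motion in time. The following is a generic textbook sketch rather than the exact OpenRadioss formulation, with M the mass matrix, u the nodal displacements, and f_ext / f_int the external and internal force vectors:

    M \ddot{u} = f_{\mathrm{ext}}(t) - f_{\mathrm{int}}(u, \dot{u})

A central-difference update of the form \dot{u}^{n+1/2} = \dot{u}^{n-1/2} + \Delta t\, M^{-1}\left(f_{\mathrm{ext}}^{n} - f_{\mathrm{int}}^{n}\right), \; u^{n+1} = u^{n} + \Delta t\, \dot{u}^{n+1/2} is the typical explicit time-stepping scheme used by solvers in this class.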
Model: Bumper Beam
a - j: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory
Model: Chrysler Neon 1M
a - j: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory
Model: Cell Phone Drop Test
a - j: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory
Model: Bird Strike on Windshield
a - j: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory
Model: Rubber O-Ring Seal Installation
a - j: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory
Model: INIVOL and Fluid Structure Interaction Drop Container
a - j: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory
Remhos
Remhos (REMap High-Order Solver) is a miniapp that solves the pure advection equations that are used to perform monotonic and conservative discontinuous field interpolation (remap) as part of the Eulerian phase in Arbitrary Lagrangian Eulerian (ALE) simulations. Learn more via the OpenBenchmarking.org test page.
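For reference, the pure advection equation that the remap phase is built around can be stated generically as follows (a textbook form, not taken from the Remhos sources), with u the advected field and v the prescribed velocity:

    \frac{\partial u}{\partial t} + \mathbf{v} \cdot \nabla u = 0

Remhos discretizes this with high-order finite elements so that the remapped field remains monotonic and conservative.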
OpenBenchmarking.org Seconds, Fewer Is Better Remhos 1.0 Test: Sample Remap Example c a b i j h f g d e 7 14 21 28 35 16.24 16.35 16.79 20.30 20.36 20.44 30.73 30.75 30.76 30.85 1. (CXX) g++ options: -O3 -std=c++11 -lmfem -lHYPRE -lmetis -lrt -lmpi_cxx -lmpi
SPECFEM3D
SPECFEM3D simulates acoustic (fluid), elastic (solid), coupled acoustic/elastic, poroelastic or seismic wave propagation in any type of conforming mesh of hexahedra. This test profile currently relies on CPU-based execution for SPECFEM3D and uses a variety of its built-in examples/models for benchmarking. Learn more via the OpenBenchmarking.org test page.
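As context for these models, the elastic wave equation that seismic codes of this type solve can be sketched generically as follows (not the exact SPECFEM3D formulation), with rho the density, u the displacement field, sigma the stress tensor, and f the seismic source term:

    \rho\, \frac{\partial^{2} \mathbf{u}}{\partial t^{2}} = \nabla \cdot \boldsymbol{\sigma}(\mathbf{u}) + \mathbf{f}

The acoustic (fluid) and poroelastic cases replace the constitutive relation accordingly.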
OpenBenchmarking.org Seconds, Fewer Is Better SPECFEM3D 4.0 Model: Mount St. Helens a b c h j i d e f g 7 14 21 28 35 11.02 11.32 11.33 15.03 15.10 15.19 26.74 26.80 26.87 27.70 1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi
OpenBenchmarking.org Seconds, Fewer Is Better SPECFEM3D 4.0 Model: Layered Halfspace a c b i j h g e f d 16 32 48 64 80 26.89 27.49 28.65 39.83 39.87 40.33 69.96 70.19 70.54 71.61 1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi
OpenBenchmarking.org Seconds, Fewer Is Better SPECFEM3D 4.0 Model: Tomographic Model c b a i j h f d e g 7 14 21 28 35 12.04 12.10 12.31 15.59 15.84 15.99 26.97 27.33 27.46 27.75 1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi
OpenBenchmarking.org Seconds, Fewer Is Better SPECFEM3D 4.0 Model: Homogeneous Halfspace b c a i j h e g f d 8 16 24 32 40 14.46 14.81 15.11 19.62 19.73 19.96 35.03 35.38 35.54 35.57 1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi
OpenBenchmarking.org Seconds, Fewer Is Better SPECFEM3D 4.0 Model: Water-layered Halfspace a c b h i j f e d g 14 28 42 56 70 26.99 27.06 29.46 36.45 37.51 37.81 61.28 62.33 62.44 62.81 1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi
nekRS
nekRS is an open-source Navier-Stokes solver based on the spectral element method. nekRS supports both CPU and GPU/accelerator execution, though this test profile is currently configured for CPU execution. nekRS is part of the Nek5000 family from the Mathematics and Computer Science (MCS) division at Argonne National Laboratory. This nekRS benchmark is primarily relevant to large core count HPC servers and otherwise may be very time consuming on smaller systems. Learn more via the OpenBenchmarking.org test page.
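For context, the incompressible Navier-Stokes equations that a solver in this class advances can be written generically as follows (density normalized to one; not the exact nekRS formulation), with u the velocity, p the pressure, nu the kinematic viscosity, and f a body force:

    \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\mathbf{u} = -\nabla p + \nu \nabla^{2} \mathbf{u} + \mathbf{f}, \qquad \nabla \cdot \mathbf{u} = 0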
OpenBenchmarking.org flops/rank, More Is Better nekRS 23.0 Input: Kershaw b a c g d e f i j h 2000M 4000M 6000M 8000M 10000M 11240300000 11106900000 10826700000 10500600000 10318900000 10264000000 9976450000 9269890000 9242080000 9145900000 1. (CXX) g++ options: -fopenmp -O2 -march=native -mtune=native -ftree-vectorize -rdynamic -lmpi_cxx -lmpi
OpenBenchmarking.org flops/rank, More Is Better nekRS 23.0 Input: TurboPipe Periodic g f d e j i a h b c 2000M 4000M 6000M 8000M 10000M 7964910000 7955790000 7934570000 7931010000 6835360000 6768070000 6767710000 6761270000 6757360000 6754170000 1. (CXX) g++ options: -fopenmp -O2 -march=native -mtune=native -ftree-vectorize -rdynamic -lmpi_cxx -lmpi
Embree
Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs (and GPUs via SYCL), supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Frames Per Second, More Is Better Embree 4.1 Binary: Pathtracer - Model: Crown c b a j i h f g d e 12 24 36 48 60 55.40 55.39 54.90 43.31 43.27 42.89 21.59 21.58 21.48 21.44 MIN: 53.71 / MAX: 58.99 MIN: 54.02 / MAX: 57.64 MIN: 53.27 / MAX: 57.28 MIN: 42.83 / MAX: 44.44 MIN: 42.82 / MAX: 44.23 MIN: 42.47 / MAX: 43.88 MIN: 21.45 / MAX: 21.84 MIN: 21.43 / MAX: 21.89 MIN: 21.32 / MAX: 21.8 MIN: 21.3 / MAX: 21.78
OpenBenchmarking.org Frames Per Second, More Is Better Embree 4.1 Binary: Pathtracer ISPC - Model: Crown c b a h i j g f d e 13 26 39 52 65 56.81 56.46 56.09 45.48 45.40 45.32 22.77 22.66 22.59 22.57 MIN: 55.27 / MAX: 59.91 MIN: 54.53 / MAX: 59.89 MIN: 54.05 / MAX: 59.82 MIN: 44.92 / MAX: 46.64 MIN: 44.87 / MAX: 46.45 MIN: 44.74 / MAX: 46.66 MIN: 22.57 / MAX: 23.16 MIN: 22.45 / MAX: 22.99 MIN: 22.39 / MAX: 22.98 MIN: 22.39 / MAX: 22.93
OpenBenchmarking.org Frames Per Second, More Is Better Embree 4.1 Binary: Pathtracer - Model: Asian Dragon a b c i h j g e f d 13 26 39 52 65 60.14 59.91 59.79 48.19 48.12 48.10 24.82 24.73 24.70 24.69 MIN: 58.97 / MAX: 62 MIN: 58.66 / MAX: 61.96 MIN: 58.46 / MAX: 62.03 MIN: 47.96 / MAX: 48.63 MIN: 47.91 / MAX: 48.91 MIN: 47.86 / MAX: 48.8 MIN: 24.74 / MAX: 25 MIN: 24.67 / MAX: 24.86 MIN: 24.63 / MAX: 24.84 MIN: 24.62 / MAX: 24.84
OpenBenchmarking.org Frames Per Second, More Is Better Embree 4.1 Binary: Pathtracer - Model: Asian Dragon Obj b c a i h j d g e f 12 24 36 48 60 53.81 53.69 53.57 43.57 43.51 43.42 22.26 22.19 22.16 22.15 MIN: 52.72 / MAX: 55.86 MIN: 52.63 / MAX: 55.24 MIN: 52.17 / MAX: 55.38 MIN: 43.34 / MAX: 44.03 MIN: 43.26 / MAX: 44.02 MIN: 43.14 / MAX: 43.89 MIN: 22.18 / MAX: 22.42 MIN: 22.12 / MAX: 22.33 MIN: 22.08 / MAX: 22.35 MIN: 22.07 / MAX: 22.32
OpenBenchmarking.org Frames Per Second, More Is Better Embree 4.1 Binary: Pathtracer ISPC - Model: Asian Dragon c a b j i h g d f e 15 30 45 60 75 67.50 67.34 67.20 54.52 54.51 54.39 28.48 28.36 28.32 28.31 MIN: 65.64 / MAX: 71.17 MIN: 65.61 / MAX: 70.54 MIN: 65.48 / MAX: 70.41 MIN: 54.24 / MAX: 55.1 MIN: 54.22 / MAX: 55.08 MIN: 54.12 / MAX: 55.15 MIN: 28.37 / MAX: 28.69 MIN: 28.26 / MAX: 28.59 MIN: 28.23 / MAX: 28.55 MIN: 28.21 / MAX: 28.56
OpenBenchmarking.org Frames Per Second, More Is Better Embree 4.1 Binary: Pathtracer ISPC - Model: Asian Dragon Obj c b a j h i e f g d 13 26 39 52 65 56.93 56.69 56.49 46.43 46.38 46.36 23.94 23.94 23.88 23.87 MIN: 55.56 / MAX: 59.67 MIN: 55.42 / MAX: 58.97 MIN: 55.29 / MAX: 58.38 MIN: 46.17 / MAX: 47.23 MIN: 46.09 / MAX: 47.08 MIN: 46.13 / MAX: 46.98 MIN: 23.84 / MAX: 24.18 MIN: 23.84 / MAX: 24.16 MIN: 23.79 / MAX: 24.08 MIN: 23.78 / MAX: 24.08
SVT-AV1
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.7 Encoder Mode: Preset 4 - Input: Bosphorus 4K a j b i h c g f e d 1.1707 2.3414 3.5121 4.6828 5.8535 5.203 5.160 5.149 5.079 5.075 5.049 4.143 4.138 4.114 4.107 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.7 Encoder Mode: Preset 8 - Input: Bosphorus 4K j h i b a c g e f d 20 40 60 80 100 99.35 99.10 98.57 91.32 90.81 90.42 67.81 67.72 67.39 66.99 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.7 Encoder Mode: Preset 12 - Input: Bosphorus 4K h i j b a d c e f g 50 100 150 200 250 230.03 227.87 224.41 166.38 163.46 163.19 163.06 162.61 161.85 160.32 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.7 Encoder Mode: Preset 13 - Input: Bosphorus 4K j i h b a e d c g f 50 100 150 200 250 228.77 227.21 223.42 166.69 163.01 162.05 161.85 161.50 161.32 160.80 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.7 Encoder Mode: Preset 4 - Input: Bosphorus 1080p c b a i h j g e d f 3 6 9 12 15 12.62 12.59 12.48 12.26 12.23 12.18 11.02 10.98 10.91 10.74 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.7 Encoder Mode: Preset 8 - Input: Bosphorus 1080p h j i c a b e d f g 30 60 90 120 150 151.44 149.45 149.05 143.55 141.22 138.34 119.31 118.95 118.49 118.48 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.7 Encoder Mode: Preset 12 - Input: Bosphorus 1080p i j h g d e f c b a 130 260 390 520 650 591.32 584.12 580.16 528.53 526.22 525.17 521.52 431.90 427.69 422.99 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.7 Encoder Mode: Preset 13 - Input: Bosphorus 1080p i h j d e g f b c a 160 320 480 640 800 728.54 726.89 726.50 604.99 597.01 586.75 585.37 542.61 516.91 510.36 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OSPRay
Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Items Per Second, More Is Better OSPRay 2.12 Benchmark: particle_volume/ao/real_time c a b h i j g d f e 4 8 12 16 20 15.98720 15.98600 15.97850 10.79880 10.74300 10.69320 5.57553 5.57469 5.57320 5.54107
OpenBenchmarking.org Items Per Second, More Is Better OSPRay 2.12 Benchmark: particle_volume/scivis/real_time b c a i h j d g e f 4 8 12 16 20 15.98880 15.97780 15.95280 10.79040 10.78320 10.76640 5.57001 5.56539 5.56353 5.55581
OpenBenchmarking.org Items Per Second, More Is Better OSPRay 2.12 Benchmark: particle_volume/pathtracer/real_time a c b j h i d f g e 50 100 150 200 250 215.10 214.14 214.07 192.70 192.65 192.40 151.91 151.78 151.68 151.51
OpenBenchmarking.org Items Per Second, More Is Better OSPRay 2.12 Benchmark: gravity_spheres_volume/dim_512/ao/real_time a b c i j h g e f d 4 8 12 16 20 14.23690 14.17830 14.13990 10.88290 10.87430 10.85240 5.62278 5.62040 5.61454 5.60747
OpenBenchmarking.org Items Per Second, More Is Better OSPRay 2.12 Benchmark: gravity_spheres_volume/dim_512/scivis/real_time a c b h i j g e d f 4 8 12 16 20 13.87390 13.83170 13.76660 10.59130 10.58480 10.57400 5.47725 5.46153 5.45329 5.45227
OpenBenchmarking.org Items Per Second, More Is Better OSPRay 2.12 Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time c b a h j i g f d e 4 8 12 16 20 16.53500 16.43650 16.34680 12.54380 12.51040 12.49870 6.60085 6.59563 6.58745 6.58270
Build: allmodconfig
a - j: The test quit with a non-zero exit status.
Liquid-DSP
LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.
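To illustrate the kind of FIR filtering work that the buffer-length and filter-length parameters below refer to, here is a minimal C sketch against liquid-dsp's firfilt_crcf interface. The tap values and the toy input are invented for illustration, and the snippet is not the benchmark kernel itself; it only shows the push/execute pattern the library exposes (built roughly as gcc fir_example.c -O3 -lliquid -lm, matching the -lliquid linkage shown in the results below).

    /* Minimal liquid-dsp FIR filtering sketch (hypothetical example, not the benchmark kernel). */
    #include <complex.h>
    #include <liquid/liquid.h>

    int main(void)
    {
        /* 57 taps to mirror the "Filter Length: 57" configurations; simple averaging taps. */
        unsigned int h_len = 57;
        float h[57];
        for (unsigned int i = 0; i < h_len; i++)
            h[i] = 1.0f / (float)h_len;

        firfilt_crcf q = firfilt_crcf_create(h, h_len);

        /* Push one 256-sample buffer through the filter ("Buffer Length: 256"). */
        float complex y;
        for (unsigned int i = 0; i < 256; i++) {
            float complex x = (i % 2) ? 1.0f : -1.0f;   /* toy alternating input signal */
            firfilt_crcf_push(q, x);
            firfilt_crcf_execute(q, &y);
        }

        firfilt_crcf_destroy(q);
        return 0;
    }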
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 1 - Buffer Length: 256 - Filter Length: 32 a b c h i j e f g d 8M 16M 24M 32M 40M 39499000 39486000 39453000 37145000 37141000 37120000 35315000 35271000 35236000 35228000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 1 - Buffer Length: 256 - Filter Length: 57 a b c i j f g e d h 13M 26M 39M 52M 65M 59401000 59296000 57519000 55841000 55715000 52879000 52854000 52827000 52665000 51443000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 2 - Buffer Length: 256 - Filter Length: 32 a b c i h j f e g d 17M 34M 51M 68M 85M 77181000 77019000 76924000 72491000 72468000 72400000 68861000 68846000 68678000 67054000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 2 - Buffer Length: 256 - Filter Length: 57 c a b j i f d e h g 30M 60M 90M 120M 150M 118550000 117490000 114010000 111370000 109700000 105740000 105650000 105480000 105280000 104800000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 4 - Buffer Length: 256 - Filter Length: 32 a b c i h j e d f g 30M 60M 90M 120M 150M 153850000 153690000 153670000 145920000 145820000 145740000 138620000 138600000 138580000 138460000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 4 - Buffer Length: 256 - Filter Length: 57 h j i b a c e g f d 40M 80M 120M 160M 200M 200930000 200120000 198640000 196590000 196220000 194510000 191230000 190750000 189880000 188930000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 8 - Buffer Length: 256 - Filter Length: 32 a c b h j i d e g f 70M 140M 210M 280M 350M 307540000 306760000 305110000 292620000 292490000 292430000 278030000 277780000 277410000 276390000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 8 - Buffer Length: 256 - Filter Length: 57 j h i a c b d e g f 80M 160M 240M 320M 400M 381660000 378330000 377250000 369430000 366990000 366930000 363310000 357990000 357810000 350450000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 1 - Buffer Length: 256 - Filter Length: 512 c b a i h j d f e g 3M 6M 9M 12M 15M 14225000 14021000 13909000 13363000 12909000 12899000 12683000 12681000 12366000 12256000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 16 - Buffer Length: 256 - Filter Length: 32 c b a h j i d e f g 130M 260M 390M 520M 650M 603650000 602470000 594230000 585850000 585670000 583870000 545360000 545140000 545020000 543050000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 16 - Buffer Length: 256 - Filter Length: 57 h j i a f e b d g c 160M 320M 480M 640M 800M 741420000 737120000 735800000 699740000 693340000 692920000 692760000 689150000 682070000 674930000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 2 - Buffer Length: 256 - Filter Length: 512 c a b j i h e f d g 6M 12M 18M 24M 30M 28227000 27901000 27736000 26378000 25910000 25648000 25207000 25199000 24627000 22727000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 32 - Buffer Length: 256 - Filter Length: 32 b c a i h j g d e f 300M 600M 900M 1200M 1500M 1190300000 1184800000 1183500000 1172900000 1172400000 1169900000 1047100000 1047100000 1046600000 1041900000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 32 - Buffer Length: 256 - Filter Length: 57 i j h c b a d g e f 300M 600M 900M 1200M 1500M 1394100000 1369800000 1352900000 1254800000 1214200000 1192100000 1035000000 1033400000 1032000000 1024600000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 4 - Buffer Length: 256 - Filter Length: 512 b c a h i e d f j g 12M 24M 36M 48M 60M 55588000 55165000 52911000 52890000 52129000 50380000 50258000 49977000 49781000 49556000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 64 - Buffer Length: 256 - Filter Length: 32 b a c h j i d e f g 500M 1000M 1500M 2000M 2500M 2212100000 2207700000 2206800000 2057500000 2056500000 2052600000 1059500000 1057500000 1057100000 1056200000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 64 - Buffer Length: 256 - Filter Length: 57 c b a j h i g e f d 400M 800M 1200M 1600M 2000M 2010300000 2001900000 1994400000 1922300000 1916800000 1899700000 1099300000 1095400000 1094600000 1093300000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 8 - Buffer Length: 256 - Filter Length: 512 a c b i h j g d f e 20M 40M 60M 80M 100M 109870000 109140000 108080000 104780000 104440000 104220000 100170000 99594000 99441000 97005000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 96 - Buffer Length: 256 - Filter Length: 32 a c b j i h g f d e 600M 1200M 1800M 2400M 3000M 3005800000 2999800000 2995400000 2071000000 2069900000 2068300000 1065700000 1065300000 1065200000 1065100000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 96 - Buffer Length: 256 - Filter Length: 57 b c a j i h d f g e 600M 1200M 1800M 2400M 3000M 2571100000 2564900000 2559800000 1999500000 1997000000 1995500000 1120800000 1120500000 1118200000 1117800000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 16 - Buffer Length: 256 - Filter Length: 512 b a c i j h e g f d 50M 100M 150M 200M 250M 216150000 216080000 214910000 209720000 209490000 207590000 196040000 194670000 194500000 193850000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 32 - Buffer Length: 256 - Filter Length: 512 b a c j i h g d e f 90M 180M 270M 360M 450M 429620000 425810000 424400000 393270000 393200000 391680000 274070000 273760000 273480000 273390000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 64 - Buffer Length: 256 - Filter Length: 512 c a b i h j f d e g 130M 260M 390M 520M 650M 622630000 622560000 610950000 512310000 512020000 511070000 283030000 282920000 281830000 281730000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 96 - Buffer Length: 256 - Filter Length: 512 b c a i h j g d f e 150M 300M 450M 600M 750M 718140000 715030000 711640000 520140000 519700000 519440000 286530000 286250000 285920000 285880000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
TiDB Community Server
This is a PingCAP TiDB Community Server benchmark facilitated using the sysbench OLTP database benchmarks. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Queries Per Second, More Is Better TiDB Community Server 7.3 Test: oltp_read_write - Threads: 1 i h j f e g a b c 700 1400 2100 2800 3500 3485 3480 3479 3218 3209 3195 2540 2510 2485
Test: oltp_read_write - Threads: 1
d: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
OpenBenchmarking.org Queries Per Second, More Is Better TiDB Community Server 7.3 Test: oltp_read_write - Threads: 64 h i b a c d g f e 20K 40K 60K 80K 100K 95579 94261 80183 79090 78469 55334 55301 54956 53893
Test: oltp_read_write - Threads: 64
j: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
OpenBenchmarking.org Queries Per Second, More Is Better TiDB Community Server 7.3 Test: oltp_point_select - Threads: 1 i h e f d c b a 1300 2600 3900 5200 6500 6165 6125 5976 5954 5898 4471 4405 4331
Test: oltp_point_select - Threads: 1
g: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
j: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
OpenBenchmarking.org Queries Per Second, More Is Better TiDB Community Server 7.3 Test: oltp_read_write - Threads: 128 j i h b a f e g d 20K 40K 60K 80K 100K 105802 104620 104180 89099 85757 60310 60145 59944 59727
Test: oltp_read_write - Threads: 128
c: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
OpenBenchmarking.org Queries Per Second, More Is Better TiDB Community Server 7.3 Test: oltp_update_index - Threads: 1 h j i e f g d a c 400 800 1200 1600 2000 1666 1660 1656 1490 1483 1481 1479 1212 1189
Test: oltp_update_index - Threads: 1
b: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
OpenBenchmarking.org Queries Per Second, More Is Better TiDB Community Server 7.3 Test: oltp_point_select - Threads: 16 j h i e f g b c 20K 40K 60K 80K 100K 87412 87218 86471 70250 70105 69923 67515 65406
Test: oltp_point_select - Threads: 16
a: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
d: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
OpenBenchmarking.org Queries Per Second, More Is Better TiDB Community Server 7.3 Test: oltp_point_select - Threads: 32 h i j b a d f e g 30K 60K 90K 120K 150K 138538 138173 137618 106180 104627 98149 97368 96907 96840
Test: oltp_point_select - Threads: 32
c: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
OpenBenchmarking.org Queries Per Second, More Is Better TiDB Community Server 7.3 Test: oltp_point_select - Threads: 64 j i b a f e g d 40K 80K 120K 160K 200K 180581 180179 130802 127567 119092 118657 118549 115675
Test: oltp_point_select - Threads: 64
c: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
h: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
OpenBenchmarking.org Queries Per Second, More Is Better TiDB Community Server 7.3 Test: oltp_update_index - Threads: 16 j h i f c g d e a 4K 8K 12K 16K 20K 16972 16965 16817 12692 12681 12627 12622 12567 12558
Test: oltp_update_index - Threads: 16
b: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
OpenBenchmarking.org Queries Per Second, More Is Better TiDB Community Server 7.3 Test: oltp_update_index - Threads: 32 j h i a b d c g e 5K 10K 15K 20K 25K 24366 24286 23773 18361 17817 17612 17565 17135 17117
Test: oltp_update_index - Threads: 32
f: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
OpenBenchmarking.org Queries Per Second, More Is Better TiDB Community Server 7.3 Test: oltp_update_index - Threads: 64 h j i b c e d f 7K 14K 21K 28K 35K 31332 30638 30522 24371 23324 21271 21108 21067
Test: oltp_update_index - Threads: 64
a: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
g: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
OpenBenchmarking.org Queries Per Second, More Is Better TiDB Community Server 7.3 Test: oltp_point_select - Threads: 128 h i j b a c f e d 40K 80K 120K 160K 200K 200327 198137 197738 159728 159242 149962 130389 129904 129492
Test: oltp_point_select - Threads: 128
g: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
OpenBenchmarking.org Queries Per Second, More Is Better TiDB Community Server 7.3 Test: oltp_update_index - Threads: 128 h j i b a c f e g 8K 16K 24K 32K 40K 37126 36644 36141 27464 27087 26546 24830 24611 24574
Test: oltp_update_index - Threads: 128
d: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
OpenBenchmarking.org Queries Per Second, More Is Better TiDB Community Server 7.3 Test: oltp_update_non_index - Threads: 1 h i e g f d c a b 400 800 1200 1600 2000 1861 1848 1708 1705 1697 1693 1381 1328 1312
Test: oltp_update_non_index - Threads: 1
j: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
OpenBenchmarking.org Queries Per Second, More Is Better TiDB Community Server 7.3 Test: oltp_update_non_index - Threads: 16 h j i g d e a b 5K 10K 15K 20K 25K 23794 23543 23541 18735 18563 18557 18095 18068
Test: oltp_update_non_index - Threads: 16
c: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
f: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
OpenBenchmarking.org Queries Per Second, More Is Better TiDB Community Server 7.3 Test: oltp_update_non_index - Threads: 32 h j i b a f e d 8K 16K 24K 32K 40K 36041 35655 35650 28914 28735 26695 26285 26273
Test: oltp_update_non_index - Threads: 32
c: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
g: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
OpenBenchmarking.org Queries Per Second, More Is Better TiDB Community Server 7.3 Test: oltp_update_non_index - Threads: 128 h j c a e g f 14K 28K 42K 56K 70K 65816 64066 52865 51105 42138 41695 41424
Test: oltp_update_non_index - Threads: 128
b: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
d: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
i: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
Neural Magic DeepSparse
OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.5 Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream a b c j h i f g d e 9 18 27 36 45 39.50 39.47 39.45 25.73 25.71 25.63 13.09 13.07 13.07 12.94
OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.5 Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream a b c d g f e i h j 130 260 390 520 650 605.04 605.73 605.92 606.10 607.16 607.82 607.91 612.98 613.05 614.23
OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.5 Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream c a b h j i e g f d 300 600 900 1200 1500 1418.90 1417.07 1403.07 1003.52 1002.81 999.46 511.41 509.14 508.21 508.09
OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.5 Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream e g f d h j i c a b 4 8 12 16 20 15.62 15.69 15.72 15.72 15.92 15.94 15.99 16.89 16.91 17.07
OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.5 Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream a b c i h j e f g d 150 300 450 600 750 672.46 672.37 671.26 508.36 507.63 507.36 257.89 257.50 257.28 257.27
OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.5 Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream e f d g i h j a b c 8 16 24 32 40 30.99 31.03 31.05 31.06 31.44 31.49 31.50 35.63 35.64 35.68
OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.5 Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream c a b h i j e d f g 40 80 120 160 200 201.54 201.39 201.25 136.96 136.68 136.57 71.27 71.14 71.04 70.93
OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.5 Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream e d f g h j i a c b 30 60 90 120 150 112.06 112.25 112.41 112.48 116.51 116.85 116.89 118.75 118.78 118.95
OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.5 Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream c b a i h j g f d e 1100 2200 3300 4400 5500 5153.66 5138.83 5137.01 3327.19 3324.82 3324.70 1602.52 1600.53 1599.21 1599.15
OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.5 Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream c b a h i j g e f d 1.1241 2.2482 3.3723 4.4964 5.6205 4.6348 4.6476 4.6508 4.8005 4.8015 4.8016 4.9787 4.9859 4.9877 4.9960
OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.5 Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream a b d j f e i h g c 110 220 330 440 550 485.72 487.36 493.60 494.01 494.22 494.26 494.50 495.22 495.60 507.48
OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.5 Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream b a c h j i d g e f 110 220 330 440 550 489.45 489.12 487.05 326.93 326.90 326.78 163.56 163.23 162.93 162.90
OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.5 Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream b c a i h j g e f d 50 100 150 200 250 219.53 218.52 218.15 145.65 145.48 145.37 72.69 72.66 72.57 72.46
OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.5 Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream b c i h a j g e f d 20 40 60 80 100 109.23 109.58 109.74 109.79 109.80 109.86 109.90 109.97 110.00 110.11
OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.5 Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream a c b j h i g e f d 70 140 210 280 350 322.25 321.51 321.18 216.51 216.24 216.22 109.22 109.09 109.09 108.91
OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.5 Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream f g e d j i h b c a 80 160 240 320 400 324.96 325.51 325.74 325.88 336.68 336.77 337.96 347.22 347.37 347.66
OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.5 Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream a b c h j i d e f g 160 320 480 640 800 718.92 717.97 716.14 488.06 487.23 486.71 240.55 240.23 240.16 239.52
OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.5 Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream c b a i j h d f e g 40 80 120 160 200 164.61 159.06 158.92 109.46 109.41 109.03 55.61 55.54 55.46 55.43
OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.5 Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream f d e g c i j h a b 30 60 90 120 150 143.69 143.76 144.10 144.11 145.26 145.83 145.90 146.12 150.59 150.61
OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.5 Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream b a c h j i d e f g 9 18 27 36 45 39.45 39.44 39.42 25.79 25.77 25.70 13.13 13.12 13.09 13.06
OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.5 Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream a c d b e f g j h i 130 260 390 520 650 605.76 605.88 606.58 606.67 606.76 606.79 608.72 612.52 613.16 613.64
Blender
OpenBenchmarking.org Seconds, Fewer Is Better Blender 3.6 Blend File: BMW27 - Compute: CPU-Only c a b h j i e f d g 16 32 48 64 80 26.12 26.20 26.24 38.40 38.51 38.57 71.44 71.96 72.00 72.01
OpenBenchmarking.org Seconds, Fewer Is Better Blender 3.6 Blend File: Classroom - Compute: CPU-Only a b c j i h f e d g 40 80 120 160 200 66.42 66.64 66.72 99.29 99.35 99.54 181.70 182.56 182.99 183.29
OpenBenchmarking.org Seconds, Fewer Is Better Blender 3.6 Blend File: Fishy Cat - Compute: CPU-Only c b a i j h d f e g 20 40 60 80 100 33.03 33.17 33.22 48.69 48.82 49.10 90.03 90.26 90.31 90.63
OpenBenchmarking.org Seconds, Fewer Is Better Blender 3.6 Blend File: Barbershop - Compute: CPU-Only c a b j i h f g e d 140 280 420 560 700 254.72 254.88 255.30 351.38 351.66 352.40 667.87 669.09 670.64 670.87
OpenBenchmarking.org Seconds, Fewer Is Better Blender 3.6 Blend File: Pabellon Barcelona - Compute: CPU-Only c a b j h i f e g d 50 100 150 200 250 80.41 80.54 80.76 119.04 119.30 119.42 223.95 224.10 224.12 224.15
OpenVINO
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.1 Model: Face Detection FP16 - Device: CPU b c a j i h g f e d 7 14 21 28 35 30.44 30.43 30.41 19.84 19.83 19.82 10.48 10.48 10.47 10.47 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.1 Model: Face Detection FP16 - Device: CPU b c a g f e d h i j 200 400 600 800 1000 393.23 393.37 393.60 759.92 760.57 761.16 761.59 804.58 804.75 805.38 MIN: 360.87 / MAX: 433.13 MIN: 362.57 / MAX: 433.51 MIN: 363.29 / MAX: 431.61 MIN: 737.63 / MAX: 771.07 MIN: 741.4 / MAX: 770.88 MIN: 741.99 / MAX: 776.56 MIN: 738.34 / MAX: 772.36 MIN: 772.52 / MAX: 820.63 MIN: 776.93 / MAX: 819.19 MIN: 783.22 / MAX: 819.23 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.1 Model: Person Detection FP16 - Device: CPU b c a h j i f e g d 60 120 180 240 300 284.22 282.67 282.55 197.94 194.82 193.80 107.39 107.27 107.04 107.02 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.1 Model: Person Detection FP16 - Device: CPU b c a f e d g h j i 20 40 60 80 100 42.20 42.43 42.44 74.43 74.50 74.71 74.71 80.77 82.09 82.49 MIN: 36.84 / MAX: 61.97 MIN: 36.31 / MAX: 62.36 MIN: 36.14 / MAX: 61.98 MIN: 65.68 / MAX: 83.49 MIN: 66.5 / MAX: 80.32 MIN: 66.12 / MAX: 81.09 MIN: 66.29 / MAX: 79.68 MIN: 69.54 / MAX: 95.42 MIN: 68.73 / MAX: 91.87 MIN: 70.77 / MAX: 94.62 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.1 Model: Person Detection FP32 - Device: CPU b c a i j h g e d f 60 120 180 240 300 284.99 284.31 283.97 197.66 196.26 196.07 107.24 107.24 106.90 106.76 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.1 Model: Person Detection FP32 - Device: CPU b c a e g d f i j h 20 40 60 80 100 42.09 42.19 42.24 74.54 74.58 74.81 74.87 80.88 81.50 81.58 MIN: 37.13 / MAX: 58.71 MIN: 36.21 / MAX: 65.64 MIN: 36.59 / MAX: 61.56 MIN: 65.97 / MAX: 82.9 MIN: 67.63 / MAX: 78.73 MIN: 66.88 / MAX: 80.7 MIN: 66.72 / MAX: 80.96 MIN: 39.72 / MAX: 92.54 MIN: 68.9 / MAX: 92.66 MIN: 68.74 / MAX: 95.81 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.1 Model: Vehicle Detection FP16 - Device: CPU a c b i j h d g e f 400 800 1200 1600 2000 2033.17 2029.79 2028.01 1488.04 1483.25 1481.71 797.64 793.90 793.75 791.74 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.1 Model: Vehicle Detection FP16 - Device: CPU a c b d e g f i j h 3 6 9 12 15 5.89 5.90 5.91 10.01 10.06 10.06 10.09 10.74 10.77 10.78 MIN: 4.67 / MAX: 18.4 MIN: 4.83 / MAX: 13.4 MIN: 4.84 / MAX: 12.9 MIN: 5.7 / MAX: 19.52 MIN: 5.29 / MAX: 19.07 MIN: 5.2 / MAX: 19.38 MIN: 5.4 / MAX: 19.17 MIN: 5.92 / MAX: 24.44 MIN: 6 / MAX: 18.16 MIN: 5.59 / MAX: 21.13 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.1 Model: Face Detection FP16-INT8 - Device: CPU b c a h i j g d f e 13 26 39 52 65 56.06 56.02 56.01 37.82 37.57 37.48 20.05 20.03 20.01 20.00 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.1 Model: Face Detection FP16-INT8 - Device: CPU b c a g d e f h i j 90 180 270 360 450 213.62 213.79 213.94 398.13 398.52 398.91 399.24 421.80 425.68 425.88 MIN: 197.2 / MAX: 235.23 MIN: 197.29 / MAX: 236.32 MIN: 201.64 / MAX: 242.71 MIN: 379.09 / MAX: 404.71 MIN: 382.1 / MAX: 404.98 MIN: 386.2 / MAX: 407.29 MIN: 387.9 / MAX: 408.93 MIN: 269.94 / MAX: 598.22 MIN: 402.91 / MAX: 432.03 MIN: 404.76 / MAX: 434.06 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.1 Model: Face Detection Retail FP16 - Device: CPU a c b j i h d e g f 1300 2600 3900 5200 6500 5882.91 5840.53 5836.27 4892.27 4848.42 4803.65 2564.78 2562.54 2557.66 2539.97 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.1 Model: Face Detection Retail FP16 - Device: CPU a b c d e g f j i h 0.747 1.494 2.241 2.988 3.735 2.03 2.05 2.05 3.11 3.11 3.12 3.14 3.26 3.29 3.32 MIN: 1.66 / MAX: 7.51 MIN: 1.6 / MAX: 7 MIN: 1.62 / MAX: 6.96 MIN: 1.94 / MAX: 11.57 MIN: 1.93 / MAX: 9.72 MIN: 1.88 / MAX: 11.92 MIN: 1.93 / MAX: 11.65 MIN: 2.1 / MAX: 14.3 MIN: 1.89 / MAX: 12.75 MIN: 2.11 / MAX: 12.58 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.1 Model: Road Segmentation ADAS FP16 - Device: CPU c b a j h i d f e g 160 320 480 640 800 757.38 750.49 748.44 648.27 643.80 642.90 344.67 343.49 342.81 341.36 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.1 Model: Road Segmentation ADAS FP16 - Device: CPU c b a d f e g j h i 6 12 18 24 30 15.83 15.98 16.02 23.20 23.28 23.32 23.42 24.67 24.84 24.87 MIN: 12.38 / MAX: 32.97 MIN: 12.74 / MAX: 33.34 MIN: 12.5 / MAX: 33.94 MIN: 15.1 / MAX: 31.6 MIN: 15.73 / MAX: 30.77 MIN: 19.49 / MAX: 30.99 MIN: 20.46 / MAX: 32.43 MIN: 20.15 / MAX: 37.89 MIN: 16.93 / MAX: 33.96 MIN: 17 / MAX: 33.34 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.1 Model: Vehicle Detection FP16-INT8 - Device: CPU c b a i j h f d g e 600 1200 1800 2400 3000 2881.14 2880.58 2873.24 2264.93 2254.72 2252.70 1180.85 1175.67 1175.58 1174.60 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.1 Model: Vehicle Detection FP16-INT8 - Device: CPU b c a f d g e i j h 2 4 6 8 10 4.16 4.16 4.17 6.76 6.79 6.79 6.80 7.05 7.08 7.09 MIN: 3.42 / MAX: 11.2 MIN: 3.43 / MAX: 10.26 MIN: 3.39 / MAX: 10.07 MIN: 4.04 / MAX: 15.47 MIN: 3.8 / MAX: 15.48 MIN: 3.79 / MAX: 15.41 MIN: 4.04 / MAX: 15.37 MIN: 4.44 / MAX: 16.57 MIN: 4.35 / MAX: 16.86 MIN: 4.43 / MAX: 16.88 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.1 Model: Weld Porosity Detection FP16 - Device: CPU c b a j i h e d g f 600 1200 1800 2400 3000 2987.33 2986.46 2945.26 1964.71 1964.61 1963.90 1039.82 1039.61 1039.37 1038.47 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.1 Model: Weld Porosity Detection FP16 - Device: CPU d e g f b c a h i j 4 8 12 16 20 15.36 15.36 15.37 15.38 16.02 16.02 16.26 16.27 16.27 16.27 MIN: 8.08 / MAX: 24.34 MIN: 8.02 / MAX: 23.81 MIN: 7.99 / MAX: 23.98 MIN: 7.99 / MAX: 24 MIN: 14.41 / MAX: 30.55 MIN: 14.63 / MAX: 33.79 MIN: 14.71 / MAX: 28.14 MIN: 8.92 / MAX: 25.52 MIN: 8.5 / MAX: 25.86 MIN: 8.44 / MAX: 25.48 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.1 Model: Face Detection Retail FP16-INT8 - Device: CPU b c a h i j f e d g 2K 4K 6K 8K 10K 9849.07 9845.27 9837.58 6653.86 6646.91 6638.24 3548.78 3544.18 3540.88 3533.64 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.1 Model: Face Detection Retail FP16-INT8 - Device: CPU f d e g h i j b a c 1.0935 2.187 3.2805 4.374 5.4675 4.50 4.51 4.51 4.52 4.80 4.81 4.81 4.85 4.86 4.86 MIN: 2.98 / MAX: 13.86 MIN: 2.98 / MAX: 13.05 MIN: 2.96 / MAX: 16.06 MIN: 2.77 / MAX: 13.57 MIN: 3.23 / MAX: 14.95 MIN: 3.23 / MAX: 15.04 MIN: 3.23 / MAX: 14.45 MIN: 4.25 / MAX: 12.86 MIN: 4.23 / MAX: 12.81 MIN: 4.34 / MAX: 12.27 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.1 Model: Road Segmentation ADAS FP16-INT8 - Device: CPU b c a h j i e g d f 200 400 600 800 1000 854.51 849.30 842.91 710.69 709.75 709.73 373.64 372.26 370.57 369.26 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.1 Model: Road Segmentation ADAS FP16-INT8 - Device: CPU b c a e g d f h i j 5 10 15 20 25 14.03 14.12 14.23 21.40 21.47 21.57 21.65 22.50 22.53 22.53 MIN: 11.59 / MAX: 26.04 MIN: 11.51 / MAX: 26.04 MIN: 11.51 / MAX: 25.86 MIN: 19.07 / MAX: 25.3 MIN: 17.62 / MAX: 28.13 MIN: 19.5 / MAX: 24.76 MIN: 19.48 / MAX: 24.27 MIN: 13.76 / MAX: 30.22 MIN: 19.09 / MAX: 30.15 MIN: 18.74 / MAX: 31.08 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.1 Model: Machine Translation EN To DE FP16 - Device: CPU c b a h j i f d e g 70 140 210 280 350 317.33 317.28 317.22 234.18 233.88 233.65 124.30 124.12 123.61 123.41 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.1 Model: Machine Translation EN To DE FP16 - Device: CPU b c a f d e g h j i 15 30 45 60 75 37.79 37.79 37.80 64.31 64.41 64.68 64.77 68.27 68.35 68.42 MIN: 32.97 / MAX: 53.7 MIN: 33.29 / MAX: 54.88 MIN: 33.35 / MAX: 56.45 MIN: 50.85 / MAX: 70.77 MIN: 37.44 / MAX: 73.04 MIN: 38.02 / MAX: 72.52 MIN: 55.8 / MAX: 69.46 MIN: 56.41 / MAX: 79.96 MIN: 55.82 / MAX: 74.84 MIN: 56.13 / MAX: 75.77 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.1 Model: Weld Porosity Detection FP16-INT8 - Device: CPU c b a j h i d e g f 1200 2400 3600 4800 6000 5802.65 5780.44 5776.94 3783.65 3780.80 3777.75 2013.77 2007.53 2006.09 2004.76 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.1 Model: Weld Porosity Detection FP16-INT8 - Device: CPU d e g f c b a j h i 2 4 6 8 10 7.93 7.96 7.96 7.97 8.24 8.27 8.28 8.45 8.46 8.46 MIN: 4.2 / MAX: 16.92 MIN: 4.19 / MAX: 16.59 MIN: 4.19 / MAX: 14.2 MIN: 4.37 / MAX: 16.86 MIN: 7.62 / MAX: 23.32 MIN: 7.37 / MAX: 25.18 MIN: 7.44 / MAX: 23.35 MIN: 4.46 / MAX: 17.31 MIN: 4.67 / MAX: 18 MIN: 4.49 / MAX: 17.8 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.1 Model: Person Vehicle Bike Detection FP16 - Device: CPU c a b i j h f d g e 500 1000 1500 2000 2500 2455.51 2454.09 2450.26 2224.08 2194.36 2180.83 1041.87 1036.99 1031.60 1028.64 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.1 Model: Person Vehicle Bike Detection FP16 - Device: CPU a c b i j h f d g e 2 4 6 8 10 4.88 4.88 4.89 7.18 7.28 7.33 7.67 7.70 7.74 7.77 MIN: 3.95 / MAX: 16.05 MIN: 3.9 / MAX: 14.94 MIN: 3.93 / MAX: 13.44 MIN: 4.98 / MAX: 16.11 MIN: 5.53 / MAX: 15.78 MIN: 5.45 / MAX: 15.94 MIN: 5.32 / MAX: 16.6 MIN: 5.51 / MAX: 16.06 MIN: 6.06 / MAX: 12.66 MIN: 5.42 / MAX: 16.35 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.1 Model: Handwritten English Recognition FP16 - Device: CPU a c b i h j g f d e 300 600 900 1200 1500 1560.03 1551.63 1546.02 1036.11 1034.35 1012.98 538.01 533.74 532.59 530.99 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.1 Model: Handwritten English Recognition FP16 - Device: CPU g f d e a i c h b j 7 14 21 28 35 29.72 29.95 30.02 30.10 30.72 30.87 30.89 30.92 31.00 31.57 MIN: 19.46 / MAX: 38.99 MIN: 19.01 / MAX: 38.08 MIN: 18.78 / MAX: 38.72 MIN: 22.61 / MAX: 39.15 MIN: 29.51 / MAX: 35.07 MIN: 20.13 / MAX: 42.34 MIN: 29.48 / MAX: 36.29 MIN: 25.94 / MAX: 41.77 MIN: 29.59 / MAX: 36.33 MIN: 20.39 / MAX: 39.22 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.1 Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU b a c i h j e g d f 20K 40K 60K 80K 100K 87359.23 86884.64 86789.80 59654.98 59615.88 59505.78 32032.06 32008.03 32002.62 31951.64 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.1 Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU d e f g h i j a b c 0.1215 0.243 0.3645 0.486 0.6075 0.49 0.49 0.49 0.49 0.53 0.53 0.53 0.54 0.54 0.54 MIN: 0.3 / MAX: 9.28 MIN: 0.3 / MAX: 9.07 MIN: 0.3 / MAX: 8.2 MIN: 0.3 / MAX: 8.84 MIN: 0.31 / MAX: 10.06 MIN: 0.31 / MAX: 10.06 MIN: 0.31 / MAX: 7.18 MIN: 0.45 / MAX: 7.64 MIN: 0.45 / MAX: 7.81 MIN: 0.45 / MAX: 5.03 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.1 Model: Handwritten English Recognition FP16-INT8 - Device: CPU a b c i j h e g f d 300 600 900 1200 1500 1244.69 1239.67 1237.29 815.13 812.76 810.71 432.32 432.20 431.94 395.66 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.1 Model: Handwritten English Recognition FP16-INT8 - Device: CPU e g f a b c i j h d 9 18 27 36 45 36.98 36.98 37.01 38.50 38.66 38.75 39.23 39.34 39.44 40.40 MIN: 32.02 / MAX: 44.78 MIN: 32.61 / MAX: 41.91 MIN: 32.25 / MAX: 43.6 MIN: 36.77 / MAX: 44.23 MIN: 37.22 / MAX: 43.52 MIN: 37.46 / MAX: 43.52 MIN: 34.71 / MAX: 47.63 MIN: 25.19 / MAX: 46.89 MIN: 33.14 / MAX: 45.28 MIN: 26.93 / MAX: 74.83 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.1 Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU c b a i h j g f d e 30K 60K 90K 120K 150K 123484.28 120728.22 120606.38 68945.32 68931.35 68895.50 45097.99 44968.43 44958.07 44933.27 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.1 Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU a b c d e f g h i j 0.0788 0.1576 0.2364 0.3152 0.394 0.34 0.34 0.34 0.35 0.35 0.35 0.35 0.35 0.35 0.35 MIN: 0.29 / MAX: 7.33 MIN: 0.29 / MAX: 10.87 MIN: 0.29 / MAX: 7.09 MIN: 0.23 / MAX: 9.09 MIN: 0.23 / MAX: 8.84 MIN: 0.23 / MAX: 9.15 MIN: 0.23 / MAX: 8.63 MIN: 0.21 / MAX: 8.91 MIN: 0.22 / MAX: 8.62 MIN: 0.22 / MAX: 8.35 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org Ops per sec, More Is Better Apache Hadoop 3.3.6 Operation: Open - Threads: 100 - Files: 100000 i d h f g a b c j e 120K 240K 360K 480K 600K 555556 529101 526316 523560 460829 420168 404858 403226 370370 294985
OpenBenchmarking.org Ops per sec, More Is Better Apache Hadoop 3.3.6 Operation: Open - Threads: 50 - Files: 1000000 j h f a b c g d i e 300K 600K 900K 1200K 1500K 1251564 1233046 1221001 1126126 1020408 683995 654022 278319 253678 251004
OpenBenchmarking.org Ops per sec, More Is Better Apache Hadoop 3.3.6 Operation: Open - Threads: 100 - Files: 1000000 f d e g i h j a c b 300K 600K 900K 1200K 1500K 1303781 1248439 1204819 1107420 382995 323729 289101 215332 185874 173822
OpenBenchmarking.org Ops per sec, More Is Better Apache Hadoop 3.3.6 Operation: Delete - Threads: 50 - Files: 1000000 j h e f d g i a b c 20K 40K 60K 80K 100K 113572 113404 113327 111198 111012 110828 110436 98932 97314 90147
OpenBenchmarking.org Ops per sec, More Is Better Apache Hadoop 3.3.6 Operation: Delete - Threads: 100 - Files: 1000000 i g e d h f j c a b 20K 40K 60K 80K 100K 114692 113895 113225 112613 111782 110803 110693 97031 90114 86715
OpenBenchmarking.org Ops per sec, More Is Better Apache Hadoop 3.3.6 Operation: File Status - Threads: 50 - Files: 100000 h i b j f c d g a e 200K 400K 600K 800K 1000K 925926 869565 862069 751880 709220 657895 632911 561798 529101 389105
OpenBenchmarking.org Ops per sec, More Is Better Apache Hadoop 3.3.6 Operation: File Status - Threads: 100 - Files: 100000 i c j e h d a g f b 200K 400K 600K 800K 1000K 847458 729927 684932 613497 595238 591716 515464 487805 478469 458716
OpenBenchmarking.org Ops per sec, More Is Better Apache Hadoop 3.3.6 Operation: File Status - Threads: 50 - Files: 1000000 a g j b i d f h e c 500K 1000K 1500K 2000K 2500K 2173913 2036660 1941748 1941748 1930502 1818182 1795332 426439 320924 284252
OpenBenchmarking.org Ops per sec, More Is Better Apache Hadoop 3.3.6 Operation: File Status - Threads: 100 - Files: 1000000 i h g f c a d j e b 500K 1000K 1500K 2000K 2500K 2506266 2352941 2049180 1964637 1893939 1886792 600601 558036 235627 161970
Kripke
Kripke is a simple, scalable, 3D Sn deterministic particle transport code. Its primary purpose is to research how data layout, programming paradigms, and architectures affect the implementation and performance of Sn transport. Kripke is developed by LLNL. Learn more via the OpenBenchmarking.org test page.
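These Kripke figures come from the Phoronix Test Suite, so a comparable run can usually be reproduced with a single command; the test profile name used below is assumed to match the current one on OpenBenchmarking.org:
phoronix-test-suite benchmark kripke
Per the footnote on the result graph, the benchmark binary is built with g++ using -O3 -fopenmp, so its thread scaling follows the OpenMP runtime (for example, the OMP_NUM_THREADS environment variable).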
OpenBenchmarking.org Throughput FoM, More Is Better Kripke 1.2.6 h i j d g f e 80M 160M 240M 320M 400M 354808000 350151200 349019800 240994500 237175700 236591000 236243900 1. (CXX) g++ options: -O3 -fopenmp -ldl
a: The test quit with a non-zero exit status.
b: The test quit with a non-zero exit status.
c: The test quit with a non-zero exit status.
BRL-CAD
BRL-CAD is a cross-platform, open-source solid modeling system with a built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.
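For orientation, BRL-CAD's benchmark mode drives its rt ray-tracer over a fixed set of reference scenes and reports the aggregate VGR Performance Metric charted below. Outside the Phoronix Test Suite it is typically launched with the benchmark utility that installs alongside BRL-CAD, though the exact invocation can vary by version and packaging, so treat that as an assumption rather than a documented command line.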
OpenBenchmarking.org VGR Performance Metric, More Is Better BRL-CAD 7.36 VGR Performance Metric a b c h i j d e f g 170K 340K 510K 680K 850K 772162 768517 762529 572500 570458 569066 298064 296125 295603 295522 1. (CXX) g++ options: -std=c++14 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -ltcl8.6 -lregex_brl -lz_brl -lnetpbm -ldl -lm -ltk8.6
easyWave
The easyWave software simulates tsunami generation and propagation in the context of early warning systems. It supports OpenMP for CPU multi-threading, and GPU ports are also available but are not currently incorporated as part of this test profile. The easyWave tsunami generation software is run with one of the example/reference input files for measuring the CPU execution time. Learn more via the OpenBenchmarking.org test page.
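As a rough illustration of the kind of work easyWave parallelizes with OpenMP, the sketch below updates a 2D grid with a simple stencil inside an OpenMP parallel-for loop. This is not easyWave code: the grid size, step count, and update rule are hypothetical stand-ins, and it is only meant to show why core count dominates the timings above. It builds in the same style as the footnotes, e.g. g++ -O3 -fopenmp.

// Illustrative OpenMP time-stepping loop; not taken from the easyWave sources.
#include <omp.h>
#include <cstdio>
#include <vector>

int main() {
    const int nx = 2000, ny = 1400, steps = 240;   // hypothetical grid and step count
    std::vector<double> h(nx * ny, 0.0), h_new(nx * ny, 0.0);
    h[(ny / 2) * nx + nx / 2] = 1.0;               // initial disturbance

    for (int step = 0; step < steps; ++step) {
        // Rows are independent within a step, so the outer loop is split across threads.
        #pragma omp parallel for schedule(static)
        for (int j = 1; j < ny - 1; ++j) {
            for (int i = 1; i < nx - 1; ++i) {
                const int c = j * nx + i;
                // Simple 5-point averaging stencil as a stand-in for the real shallow-water update.
                h_new[c] = 0.25 * (h[c - 1] + h[c + 1] + h[c - nx] + h[c + nx]);
            }
        }
        h.swap(h_new);
    }
    std::printf("threads: %d, center: %g\n", omp_get_max_threads(), h[(ny / 2) * nx + nx / 2]);
    return 0;
}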
OpenBenchmarking.org Seconds, Fewer Is Better easyWave r34 Input: e2Asean Grid + BengkuluSept2007 Source - Time: 240 j h i g e d f 0.3728 0.7456 1.1184 1.4912 1.864 1.245 1.284 1.288 1.648 1.654 1.657 1.657 1. (CXX) g++ options: -O3 -fopenmp
OpenBenchmarking.org Seconds, Fewer Is Better easyWave r34 Input: e2Asean Grid + BengkuluSept2007 Source - Time: 1200 j h i g f e d 9 18 27 36 45 25.56 26.11 26.39 37.95 38.02 38.07 38.11 1. (CXX) g++ options: -O3 -fopenmp
OpenBenchmarking.org Seconds, Fewer Is Better easyWave r34 Input: e2Asean Grid + BengkuluSept2007 Source - Time: 2400 i j h g f d e 20 40 60 80 100 68.24 68.52 68.57 97.53 97.99 98.98 99.42 1. (CXX) g++ options: -O3 -fopenmp
Embree
OpenBenchmarking.org Frames Per Second, More Is Better Embree 4.3 Binary: Pathtracer - Model: Asian Dragon h i j g f d e 11 22 33 44 55 48.91 48.72 48.71 24.96 24.89 24.85 24.83 MIN: 48.64 / MAX: 49.47 MIN: 48.48 / MAX: 49.3 MIN: 48.45 / MAX: 49.47 MIN: 24.9 / MAX: 25.13 MIN: 24.81 / MAX: 25.06 MIN: 24.78 / MAX: 25 MIN: 24.76 / MAX: 24.96
OpenBenchmarking.org Frames Per Second, More Is Better Embree 4.3 Binary: Pathtracer - Model: Asian Dragon Obj j h i d e f g 10 20 30 40 50 43.90 43.84 43.77 22.35 22.29 22.27 22.26 MIN: 43.66 / MAX: 44.27 MIN: 43.64 / MAX: 44.38 MIN: 43.51 / MAX: 44.16 MIN: 22.28 / MAX: 22.5 MIN: 22.22 / MAX: 22.46 MIN: 22.2 / MAX: 22.44 MIN: 22.18 / MAX: 22.43
OpenBenchmarking.org Frames Per Second, More Is Better Embree 4.3 Binary: Pathtracer - Model: Crown j i h e d g f 10 20 30 40 50 43.93 43.81 43.57 21.99 21.89 21.83 21.77 MIN: 43.46 / MAX: 45.01 MIN: 43.36 / MAX: 45.05 MIN: 43.11 / MAX: 44.65 MIN: 21.84 / MAX: 22.32 MIN: 21.74 / MAX: 22.23 MIN: 21.69 / MAX: 22.17 MIN: 21.63 / MAX: 22.18
OpenBenchmarking.org Frames Per Second, More Is Better Embree 4.3 Binary: Pathtracer ISPC - Model: Asian Dragon h i j g e f d 12 24 36 48 60 54.22 54.19 54.15 27.91 27.83 27.83 27.74 MIN: 53.93 / MAX: 54.77 MIN: 53.91 / MAX: 54.97 MIN: 53.87 / MAX: 54.79 MIN: 27.81 / MAX: 28.17 MIN: 27.72 / MAX: 28.1 MIN: 27.73 / MAX: 28.13 MIN: 27.64 / MAX: 27.98
OpenBenchmarking.org Frames Per Second, More Is Better Embree 4.3 Binary: Pathtracer ISPC - Model: Asian Dragon Obj i j h g e f d 10 20 30 40 50 46.09 46.02 45.92 23.71 23.53 23.50 23.35 MIN: 45.82 / MAX: 46.6 MIN: 45.75 / MAX: 46.57 MIN: 45.64 / MAX: 46.53 MIN: 23.61 / MAX: 23.93 MIN: 23.43 / MAX: 23.73 MIN: 23.4 / MAX: 23.74 MIN: 23.26 / MAX: 23.57
OpenBenchmarking.org Frames Per Second, More Is Better Embree 4.3 Binary: Pathtracer ISPC - Model: Crown i j h f g d e 10 20 30 40 50 45.71 45.40 45.19 22.44 22.42 22.39 22.34 MIN: 45.11 / MAX: 47.49 MIN: 44.88 / MAX: 46.68 MIN: 44.65 / MAX: 46.39 MIN: 22.25 / MAX: 22.78 MIN: 22.22 / MAX: 22.85 MIN: 22.2 / MAX: 22.85 MIN: 22.15 / MAX: 22.75
OpenVKL
OpenVKL is the Intel Open Volume Kernel Library, which offers high-performance volume computation kernels and is part of the Intel oneAPI Rendering Toolkit. Learn more via the OpenBenchmarking.org test page.
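The vklBenchmarkCPU numbers below come from OpenVKL's bundled benchmark binaries, but for context the snippet here shows the basic device setup a host application performs before sampling volumes. It follows the 1.x-style C API (the vklLoadModule / vklNewDevice names); details may differ in OpenVKL 2.0, so treat this as an assumption-laden sketch rather than a reference.

// Minimal OpenVKL device setup (1.x-style API); error handling omitted.
#include <openvkl/openvkl.h>
#include <cstdio>

int main() {
    vklLoadModule("cpu_device");              // load the CPU device module
    VKLDevice device = vklNewDevice("cpu");   // create a CPU device instance
    vklCommitDevice(device);                  // finalize device parameters

    // Volumes and samplers would be created against this device; the benchmark
    // binaries exercise those sampling paths in both scalar and ISPC-vectorized form.
    std::printf("OpenVKL CPU device ready\n");
    vklReleaseDevice(device);
    return 0;
}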
OpenBenchmarking.org Items / Sec, More Is Better OpenVKL 2.0.0 Benchmark: vklBenchmarkCPU Scalar j i h g f d e 80 160 240 320 400 363 363 363 191 191 191 190 MIN: 24 / MAX: 6613 MIN: 24 / MAX: 6577 MIN: 24 / MAX: 6610 MIN: 13 / MAX: 3483 MIN: 13 / MAX: 3484 MIN: 13 / MAX: 3471 MIN: 13 / MAX: 3484
OpenBenchmarking.org Items / Sec, More Is Better OpenVKL 2.0.0 Benchmark: vklBenchmarkCPU ISPC h j i g f e d 200 400 600 800 1000 926 922 922 489 488 487 487 MIN: 67 / MAX: 12416 MIN: 67 / MAX: 12356 MIN: 67 / MAX: 12374 MIN: 36 / MAX: 6969 MIN: 36 / MAX: 6952 MIN: 36 / MAX: 6956 MIN: 36 / MAX: 6949
oneDNN
This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.
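The harnesses below are driven through benchdnn, but the general shape of a oneDNN call from application code is useful context. The following is a minimal sketch against the oneDNN 3.x C++ API: it builds a CPU engine and stream, wraps a small f32 tensor, and runs a ReLU eltwise primitive. The tensor dimensions are arbitrary and nothing here reproduces the benchmark shapes.

// Minimal oneDNN 3.x usage sketch: CPU engine + stream + one eltwise (ReLU) primitive.
#include <oneapi/dnnl/dnnl.hpp>
#include <cstdio>

int main() {
    using namespace dnnl;

    engine eng(engine::kind::cpu, 0);   // CPU engine, matching the "Engine: CPU" runs below
    stream strm(eng);

    // A small 2D f32 tensor; dimensions are arbitrary for illustration.
    const memory::dims dims = {8, 16};
    auto md = memory::desc(dims, memory::data_type::f32, memory::format_tag::nc);
    auto src = memory(md, eng);
    auto dst = memory(md, eng);

    // Fill the source buffer directly (valid for a CPU engine).
    float *p = static_cast<float *>(src.get_data_handle());
    for (int i = 0; i < 8 * 16; ++i)
        p[i] = static_cast<float>(i) - 64.0f;

    // Forward-inference ReLU primitive: negative values are clamped to zero.
    auto relu_pd = eltwise_forward::primitive_desc(
            eng, prop_kind::forward_inference, algorithm::eltwise_relu, md, md, 0.f);
    auto relu = eltwise_forward(relu_pd);

    relu.execute(strm, {{DNNL_ARG_SRC, src}, {DNNL_ARG_DST, dst}});
    strm.wait();

    std::printf("dst[0] = %g\n", static_cast<float *>(dst.get_data_handle())[0]);
    return 0;
}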
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU h i j g e f d 0.48 0.96 1.44 1.92 2.4 1.14749 1.15012 1.15578 2.11813 2.12570 2.13062 2.13332 MIN: 1.01 MIN: 1 MIN: 1.03 MIN: 1.99 MIN: 2.01 MIN: 1.97 MIN: 2 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU j h i e g d f 0.3539 0.7078 1.0617 1.4156 1.7695 0.768314 0.778543 0.798540 1.549110 1.551180 1.558240 1.572820 MIN: 0.71 MIN: 0.71 MIN: 0.7 MIN: 1.51 MIN: 1.52 MIN: 1.51 MIN: 1.53 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU j h i g d e f 0.3019 0.6038 0.9057 1.2076 1.5095 0.731618 0.734461 0.735094 1.335640 1.337890 1.338610 1.341830 MIN: 0.66 MIN: 0.66 MIN: 0.66 MIN: 1.31 MIN: 1.31 MIN: 1.31 MIN: 1.31 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU i j h d f g e 0.8649 1.7298 2.5947 3.4596 4.3245 3.65907 3.68087 3.72247 3.81576 3.81823 3.82381 3.84421 MIN: 2.81 MIN: 2.85 MIN: 2.83 MIN: 3.26 MIN: 3.25 MIN: 3.29 MIN: 3.27 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU h i j d g f e 0.1426 0.2852 0.4278 0.5704 0.713 0.426426 0.427512 0.430270 0.628236 0.629108 0.630325 0.633975 MIN: 0.38 MIN: 0.39 MIN: 0.38 MIN: 0.6 MIN: 0.6 MIN: 0.6 MIN: 0.6 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU h j i g f d e 0.6893 1.3786 2.0679 2.7572 3.4465 1.78170 1.78691 1.78876 3.05458 3.05674 3.05991 3.06370 MIN: 1.64 MIN: 1.66 MIN: 1.65 MIN: 2.97 MIN: 2.97 MIN: 2.96 MIN: 2.97 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU h i j d f g e 0.7615 1.523 2.2845 3.046 3.8075 1.73381 1.73499 1.73501 3.37782 3.37956 3.38156 3.38436 MIN: 1.64 MIN: 1.65 MIN: 1.64 MIN: 3.33 MIN: 3.33 MIN: 3.33 MIN: 3.33 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU h j i g e d f 0.1914 0.3828 0.5742 0.7656 0.957 0.440006 0.440156 0.440368 0.843492 0.844434 0.847805 0.850691 MIN: 0.41 MIN: 0.41 MIN: 0.41 MIN: 0.83 MIN: 0.83 MIN: 0.83 MIN: 0.83 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU h i j g d f e 0.4315 0.863 1.2945 1.726 2.1575 1.04100 1.04312 1.04333 1.91274 1.91374 1.91422 1.91781 MIN: 0.94 MIN: 0.94 MIN: 0.94 MIN: 1.88 MIN: 1.88 MIN: 1.88 MIN: 1.88 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU i h j d f g e 0.5772 1.1544 1.7316 2.3088 2.886 1.64478 1.74203 1.75453 2.49408 2.49714 2.51441 2.56522 MIN: 1.42 MIN: 1.51 MIN: 1.52 MIN: 2.3 MIN: 2.26 MIN: 2.3 MIN: 2.32 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU g d f e j h i 0.4369 0.8738 1.3107 1.7476 2.1845 0.647700 0.652259 0.653182 0.657610 0.880016 0.892701 1.941900 MIN: 0.57 MIN: 0.57 MIN: 0.57 MIN: 0.57 MIN: 0.78 MIN: 0.79 MIN: 0.87 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU f d g e i j h 0.4722 0.9444 1.4166 1.8888 2.361 1.00136 1.03749 1.12723 1.14432 1.24308 1.94941 2.09880 MIN: 0.92 MIN: 0.92 MIN: 0.93 MIN: 1.07 MIN: 1.04 MIN: 1.26 MIN: 1.29 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU h i j f d g e 0.2881 0.5762 0.8643 1.1524 1.4405 0.926283 0.931793 0.936001 1.206530 1.257580 1.279180 1.280430 MIN: 0.85 MIN: 0.86 MIN: 0.86 MIN: 1.18 MIN: 1.21 MIN: 1.24 MIN: 1.24 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU i h j e f d g 0.1378 0.2756 0.4134 0.5512 0.689 0.301535 0.302460 0.309278 0.575794 0.600834 0.603950 0.612320 MIN: 0.28 MIN: 0.27 MIN: 0.28 MIN: 0.52 MIN: 0.53 MIN: 0.53 MIN: 0.53 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU i h j d g e f 0.2388 0.4776 0.7164 0.9552 1.194 0.644252 0.704550 0.714970 1.028750 1.045670 1.054250 1.061440 MIN: 0.61 MIN: 0.66 MIN: 0.67 MIN: 0.96 MIN: 0.98 MIN: 0.97 MIN: 0.98 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU h j i f g e d 400 800 1200 1600 2000 986.96 988.59 991.07 1636.76 1637.37 1641.00 1641.92 MIN: 949.02 MIN: 950.96 MIN: 953.96 MIN: 1585.98 MIN: 1584.58 MIN: 1595.55 MIN: 1584.81 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU i h j g f e d 400 800 1200 1600 2000 987.36 993.56 994.61 1631.99 1636.44 1639.36 1642.51 MIN: 952.16 MIN: 955.42 MIN: 960.2 MIN: 1581.62 MIN: 1585.81 MIN: 1581.93 MIN: 1593.16 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU i h j g f e d 400 800 1200 1600 2000 985.74 987.84 991.12 1641.40 1642.35 1643.97 1643.99 MIN: 949.93 MIN: 952.67 MIN: 954.92 MIN: 1589.91 MIN: 1586.17 MIN: 1590.89 MIN: 1588.03 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU h i j d g e f 200 400 600 800 1000 564.12 566.18 569.80 838.52 848.03 849.71 851.49 MIN: 545.13 MIN: 544.56 MIN: 548.08 MIN: 796.3 MIN: 807.34 MIN: 805.98 MIN: 807.97 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU j i h g d f e 200 400 600 800 1000 563.58 564.45 568.75 837.60 849.16 849.34 851.66 MIN: 543.65 MIN: 545.57 MIN: 546.79 MIN: 796.61 MIN: 806.44 MIN: 805.8 MIN: 809.45 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU i j h e f d g 200 400 600 800 1000 563.33 566.39 568.65 841.08 845.31 847.38 847.42 MIN: 544.04 MIN: 542.7 MIN: 547.26 MIN: 798.46 MIN: 803.78 MIN: 806.33 MIN: 806.72 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl