AMD EPYC 9334 32-Core testing with a Supermicro H13SSW (1.1 BIOS) and astdrmfb on AlmaLinux 9.2 via the Phoronix Test Suite.
a
Kernel Notes: Transparent Huge Pages: always
Compiler Notes: --build=x86_64-redhat-linux --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-host-bind-now --enable-host-pie --enable-initfini-array --enable-languages=c,c++,fortran,lto --enable-link-serialization=1 --enable-multilib --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=x86-64 --with-arch_64=x86-64-v2 --with-build-config=bootstrap-lto --with-gcc-major-version-only --with-linker-hash-style=gnu --with-tune=generic --without-cuda-driver --without-isl
Processor Notes: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa10113e
Java Notes: OpenJDK Runtime Environment (Red_Hat-11.0.20.0.8-1) (build 11.0.20+8-LTS)
Python Notes: Python 3.9.16
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
b c Processor: 2 x AMD EPYC 9254 24-Core @ 2.90GHz (48 Cores / 96 Threads), Motherboard: Supermicro H13DSH (1.5 BIOS), Memory: 24 x 32 GB DDR5-4800MT/s Samsung M321R4GA3BB6-CQKET, Disk: 2 x 1920GB SAMSUNG MZQL21T9HCJR-00A07, Graphics: astdrmfb
OS: AlmaLinux 9.2, Kernel: 5.14.0-284.25.1.el9_2.x86_64 (x86_64), Compiler: GCC 11.3.1 20221121, File-System: ext4, Screen Resolution: 1024x768
d
Kernel Notes: Transparent Huge Pages: always
Compiler Notes: --build=x86_64-redhat-linux --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-host-bind-now --enable-host-pie --enable-initfini-array --enable-languages=c,c++,fortran,lto --enable-link-serialization=1 --enable-multilib --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=x86-64 --with-arch_64=x86-64-v2 --with-build-config=bootstrap-lto --with-gcc-major-version-only --with-linker-hash-style=gnu --with-tune=generic --without-cuda-driver --without-isl
Processor Notes: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa101111
Java Notes: OpenJDK Runtime Environment (Red_Hat-11.0.20.0.8-1) (build 11.0.20+8-LTS)
Python Notes: Python 3.9.16
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
e f g Changed Processor to AMD EPYC 9124 16-Core @ 3.00GHz (16 Cores / 32 Threads).
Changed Motherboard to Supermicro H13SSW (1.1 BIOS).
Changed Memory to 12 x 64 GB DDR5-4800MT/s HMCG94MEBRA123N.
h i j Processor: AMD EPYC 9334 32-Core @ 2.70GHz (32 Cores / 64 Threads), Motherboard: Supermicro H13SSW (1.1 BIOS), Memory: 12 x 64 GB DDR5-4800MT/s HMCG94MEBRA123N, Disk: 2 x 1920GB SAMSUNG MZQL21T9HCJR-00A07, Graphics: astdrmfb, Monitor: DELL E207WFP
OS: AlmaLinux 9.2, Kernel: 5.14.0-284.25.1.el9_2.x86_64 (x86_64), Compiler: GCC 11.3.1 20221121, File-System: ext4, Screen Resolution: 1680x1050
OpenRadioss
OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss and was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/ and https://github.com/OpenRadioss/ModelExchange/tree/main/Examples. This test currently uses a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.
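As general background (and not specific to this test profile or the OpenRadioss sources), explicit dynamic finite element solvers of this kind advance a semi-discrete equation of motion with a central-difference time integrator, roughly:

\[
M \, \ddot{u}_n = f^{\mathrm{ext}}_n - f^{\mathrm{int}}(u_n)
\]
\[
v_{n+\frac{1}{2}} = v_{n-\frac{1}{2}} + \Delta t \, M^{-1}\!\left(f^{\mathrm{ext}}_n - f^{\mathrm{int}}(u_n)\right),
\qquad
u_{n+1} = u_n + \Delta t \, v_{n+\frac{1}{2}}
\]

where M is the (typically lumped) mass matrix and the stable time step is limited by the smallest element size.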
Model: Bumper Beam
a - j: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory
Model: Chrysler Neon 1M
a - j: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory
Model: Cell Phone Drop Test
a - j: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory
Model: Bird Strike on Windshield
a - j: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory
Model: Rubber O-Ring Seal Installation
a - j: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory
Model: INIVOL and Fluid Structure Interaction Drop Container
a - j: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory
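The failures above indicate the reference OpenRadioss engine binary could not resolve its Open MPI runtime dependency (libmpi.so.40) on these systems. As an illustrative check only (not part of the test profile), the same loader failure can be reproduced programmatically with the standard dlopen interface:

// Illustrative only: attempt to load the Open MPI library the engine binary
// reported as missing. Build with: g++ check_libmpi.cpp -ldl
#include <dlfcn.h>
#include <cstdio>

int main() {
  void* handle = dlopen("libmpi.so.40", RTLD_NOW);   // same library name as in the error above
  if (!handle) {
    std::printf("dlopen failed: %s\n", dlerror());   // prints the loader's reason for the failure
    return 1;
  }
  std::printf("libmpi.so.40 resolved successfully\n");
  dlclose(handle);
  return 0;
}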
Remhos
Remhos (REMap High-Order Solver) is a miniapp that solves the pure advection equations that are used to perform monotonic and conservative discontinuous field interpolation (remap) as part of the Eulerian phase in Arbitrary Lagrangian Eulerian (ALE) simulations. Learn more via the OpenBenchmarking.org test page.
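For reference, the pure advection equation that the remap step solves can be written in its generic form (not taken from the Remhos sources) as:

\[
\frac{\partial u}{\partial t} + \mathbf{v} \cdot \nabla u = 0
\]

where u is the field being remapped and v is the prescribed transport (mesh motion) velocity; the benchmark exercises monotonic, conservative discretizations of this equation.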
OpenBenchmarking.org Seconds, Fewer Is Better Remhos 1.0 Test: Sample Remap Example a b c d e f g h i j 7 14 21 28 35 16.35 16.79 16.24 30.76 30.85 30.73 30.75 20.44 20.30 20.36 1. (CXX) g++ options: -O3 -std=c++11 -lmfem -lHYPRE -lmetis -lrt -lmpi_cxx -lmpi
SPECFEM3D
SPECFEM3D simulates acoustic (fluid), elastic (solid), coupled acoustic/elastic, poroelastic or seismic wave propagation in any type of conforming mesh of hexahedra. This test profile currently relies on CPU-based execution for SPECFEM3D and uses a variety of its built-in examples/models for benchmarking. Learn more via the OpenBenchmarking.org test page.
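As background, the governing equations that SPECFEM3D discretizes with its spectral-element method take the standard acoustic and elastic forms:

\[
\frac{1}{c^{2}} \frac{\partial^{2} p}{\partial t^{2}} = \nabla^{2} p + s \qquad \text{(acoustic)}
\]
\[
\rho \, \frac{\partial^{2} \mathbf{u}}{\partial t^{2}} = \nabla \cdot \boldsymbol{\sigma}(\mathbf{u}) + \mathbf{f} \qquad \text{(elastic)}
\]

where p is pressure, u the displacement field, sigma the stress tensor, c the sound speed, and s, f are source terms.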
OpenBenchmarking.org Seconds, Fewer Is Better SPECFEM3D 4.0 Model: Mount St. Helens a b c d e f g h i j 7 14 21 28 35 11.02 11.32 11.33 26.74 26.80 26.87 27.70 15.03 15.19 15.10 1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi
OpenBenchmarking.org Seconds, Fewer Is Better SPECFEM3D 4.0 Model: Layered Halfspace a b c d e f g h i j 16 32 48 64 80 26.89 28.65 27.49 71.61 70.19 70.54 69.96 40.33 39.83 39.87 1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi
OpenBenchmarking.org Seconds, Fewer Is Better SPECFEM3D 4.0 Model: Tomographic Model a b c d e f g h i j 7 14 21 28 35 12.31 12.10 12.04 27.33 27.46 26.97 27.75 15.99 15.59 15.84 1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi
OpenBenchmarking.org Seconds, Fewer Is Better SPECFEM3D 4.0 Model: Homogeneous Halfspace a b c d e f g h i j 8 16 24 32 40 15.11 14.46 14.81 35.57 35.03 35.54 35.38 19.96 19.62 19.73 1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi
OpenBenchmarking.org Seconds, Fewer Is Better SPECFEM3D 4.0 Model: Water-layered Halfspace a b c d e f g h i j 14 28 42 56 70 26.99 29.46 27.06 62.44 62.33 61.28 62.81 36.45 37.51 37.81 1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi
nekRS
nekRS is an open-source Navier-Stokes solver based on the spectral element method. nekRS supports both CPU and GPU/accelerator execution, though this test profile is currently configured for CPU execution. nekRS is part of the Nek5000 project from the Mathematics and Computer Science (MCS) division at Argonne National Laboratory. This nekRS benchmark is primarily relevant to large core count HPC servers and otherwise may be very time consuming on smaller systems. Learn more via the OpenBenchmarking.org test page.
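For reference, the incompressible Navier-Stokes equations at the core of such solvers are, in generic form:

\[
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\mathbf{u}
= -\frac{1}{\rho} \nabla p + \nu \nabla^{2} \mathbf{u} + \mathbf{f},
\qquad
\nabla \cdot \mathbf{u} = 0
\]

with velocity u, pressure p, kinematic viscosity nu, and body force f; nekRS discretizes these with high-order spectral elements.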
OpenBenchmarking.org flops/rank, More Is Better nekRS 23.0 Input: Kershaw a b c d e f g h i j 2000M 4000M 6000M 8000M 10000M 11106900000 11240300000 10826700000 10318900000 10264000000 9976450000 10500600000 9145900000 9269890000 9242080000 1. (CXX) g++ options: -fopenmp -O2 -march=native -mtune=native -ftree-vectorize -rdynamic -lmpi_cxx -lmpi
OpenBenchmarking.org flops/rank, More Is Better nekRS 23.0 Input: TurboPipe Periodic a b c d e f g h i j 2000M 4000M 6000M 8000M 10000M 6767710000 6757360000 6754170000 7934570000 7931010000 7955790000 7964910000 6761270000 6768070000 6835360000 1. (CXX) g++ options: -fopenmp -O2 -march=native -mtune=native -ftree-vectorize -rdynamic -lmpi_cxx -lmpi
Embree
Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs (and GPUs via SYCL), with support for instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.
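To make the workload concrete, below is a minimal single-ray sketch against the public Embree 4 C API. It is illustrative only: it is not the code path of this test profile, and the header name and two-argument rtcIntersect1 call are assumptions based on the upstream Embree 4 documentation.

// Minimal Embree 4 sketch: build a one-triangle scene and cast a single ray.
// Assumed API per upstream Embree 4 docs; link against libembree4.
#include <embree4/rtcore.h>
#include <cstdio>
#include <limits>

int main() {
  RTCDevice device = rtcNewDevice(nullptr);            // default CPU device
  RTCScene  scene  = rtcNewScene(device);

  // One triangle: three vertices, one index triple.
  RTCGeometry geom = rtcNewGeometry(device, RTC_GEOMETRY_TYPE_TRIANGLE);
  float* v = (float*)rtcSetNewGeometryBuffer(geom, RTC_BUFFER_TYPE_VERTEX, 0,
                                             RTC_FORMAT_FLOAT3, 3 * sizeof(float), 3);
  unsigned* idx = (unsigned*)rtcSetNewGeometryBuffer(geom, RTC_BUFFER_TYPE_INDEX, 0,
                                                     RTC_FORMAT_UINT3, 3 * sizeof(unsigned), 1);
  v[0]=0; v[1]=0; v[2]=0;  v[3]=1; v[4]=0; v[5]=0;  v[6]=0; v[7]=1; v[8]=0;
  idx[0]=0; idx[1]=1; idx[2]=2;
  rtcCommitGeometry(geom);
  rtcAttachGeometry(scene, geom);
  rtcReleaseGeometry(geom);
  rtcCommitScene(scene);

  // Cast one ray straight at the triangle.
  RTCRayHit rh;
  rh.ray.org_x = 0.2f; rh.ray.org_y = 0.2f; rh.ray.org_z = -1.f;
  rh.ray.dir_x = 0.f;  rh.ray.dir_y = 0.f;  rh.ray.dir_z = 1.f;
  rh.ray.tnear = 0.f;  rh.ray.tfar  = std::numeric_limits<float>::infinity();
  rh.ray.mask  = ~0u;  rh.ray.flags = 0;  rh.ray.time = 0.f;  rh.ray.id = 0;
  rh.hit.geomID    = RTC_INVALID_GEOMETRY_ID;
  rh.hit.instID[0] = RTC_INVALID_GEOMETRY_ID;
  rtcIntersect1(scene, &rh);                            // optional arguments default in C++

  std::printf("hit: %s (t = %f)\n",
              rh.hit.geomID != RTC_INVALID_GEOMETRY_ID ? "yes" : "no", rh.ray.tfar);

  rtcReleaseScene(scene);
  rtcReleaseDevice(device);
  return 0;
}

The benchmark scenes (Crown, Asian Dragon) exercise the same intersect kernels at far larger scale, in both plain C++ and ISPC-compiled variants.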
OpenBenchmarking.org Frames Per Second, More Is Better Embree 4.1 Binary: Pathtracer - Model: Crown a b c d e f g h i j 12 24 36 48 60 54.90 55.39 55.40 21.48 21.44 21.59 21.58 42.89 43.27 43.31 MIN: 53.27 / MAX: 57.28 MIN: 54.02 / MAX: 57.64 MIN: 53.71 / MAX: 58.99 MIN: 21.32 / MAX: 21.8 MIN: 21.3 / MAX: 21.78 MIN: 21.45 / MAX: 21.84 MIN: 21.43 / MAX: 21.89 MIN: 42.47 / MAX: 43.88 MIN: 42.82 / MAX: 44.23 MIN: 42.83 / MAX: 44.44
OpenBenchmarking.org Frames Per Second, More Is Better Embree 4.1 Binary: Pathtracer ISPC - Model: Crown a b c d e f g h i j 13 26 39 52 65 56.09 56.46 56.81 22.59 22.57 22.66 22.77 45.48 45.40 45.32 MIN: 54.05 / MAX: 59.82 MIN: 54.53 / MAX: 59.89 MIN: 55.27 / MAX: 59.91 MIN: 22.39 / MAX: 22.98 MIN: 22.39 / MAX: 22.93 MIN: 22.45 / MAX: 22.99 MIN: 22.57 / MAX: 23.16 MIN: 44.92 / MAX: 46.64 MIN: 44.87 / MAX: 46.45 MIN: 44.74 / MAX: 46.66
OpenBenchmarking.org Frames Per Second, More Is Better Embree 4.1 Binary: Pathtracer - Model: Asian Dragon a b c d e f g h i j 13 26 39 52 65 60.14 59.91 59.79 24.69 24.73 24.70 24.82 48.12 48.19 48.10 MIN: 58.97 / MAX: 62 MIN: 58.66 / MAX: 61.96 MIN: 58.46 / MAX: 62.03 MIN: 24.62 / MAX: 24.84 MIN: 24.67 / MAX: 24.86 MIN: 24.63 / MAX: 24.84 MIN: 24.74 / MAX: 25 MIN: 47.91 / MAX: 48.91 MIN: 47.96 / MAX: 48.63 MIN: 47.86 / MAX: 48.8
OpenBenchmarking.org Frames Per Second, More Is Better Embree 4.1 Binary: Pathtracer - Model: Asian Dragon Obj a b c d e f g h i j 12 24 36 48 60 53.57 53.81 53.69 22.26 22.16 22.15 22.19 43.51 43.57 43.42 MIN: 52.17 / MAX: 55.38 MIN: 52.72 / MAX: 55.86 MIN: 52.63 / MAX: 55.24 MIN: 22.18 / MAX: 22.42 MIN: 22.08 / MAX: 22.35 MIN: 22.07 / MAX: 22.32 MIN: 22.12 / MAX: 22.33 MIN: 43.26 / MAX: 44.02 MIN: 43.34 / MAX: 44.03 MIN: 43.14 / MAX: 43.89
OpenBenchmarking.org Frames Per Second, More Is Better Embree 4.1 Binary: Pathtracer ISPC - Model: Asian Dragon a b c d e f g h i j 15 30 45 60 75 67.34 67.20 67.50 28.36 28.31 28.32 28.48 54.39 54.51 54.52 MIN: 65.61 / MAX: 70.54 MIN: 65.48 / MAX: 70.41 MIN: 65.64 / MAX: 71.17 MIN: 28.26 / MAX: 28.59 MIN: 28.21 / MAX: 28.56 MIN: 28.23 / MAX: 28.55 MIN: 28.37 / MAX: 28.69 MIN: 54.12 / MAX: 55.15 MIN: 54.22 / MAX: 55.08 MIN: 54.24 / MAX: 55.1
OpenBenchmarking.org Frames Per Second, More Is Better Embree 4.1 Binary: Pathtracer ISPC - Model: Asian Dragon Obj a b c d e f g h i j 13 26 39 52 65 56.49 56.69 56.93 23.87 23.94 23.94 23.88 46.38 46.36 46.43 MIN: 55.29 / MAX: 58.38 MIN: 55.42 / MAX: 58.97 MIN: 55.56 / MAX: 59.67 MIN: 23.78 / MAX: 24.08 MIN: 23.84 / MAX: 24.18 MIN: 23.84 / MAX: 24.16 MIN: 23.79 / MAX: 24.08 MIN: 46.09 / MAX: 47.08 MIN: 46.13 / MAX: 46.98 MIN: 46.17 / MAX: 47.23
SVT-AV1
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.7 Encoder Mode: Preset 4 - Input: Bosphorus 4K a b c d e f g h i j 1.1707 2.3414 3.5121 4.6828 5.8535 5.203 5.149 5.049 4.107 4.114 4.138 4.143 5.075 5.079 5.160 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.7 Encoder Mode: Preset 8 - Input: Bosphorus 4K a b c d e f g h i j 20 40 60 80 100 90.81 91.32 90.42 66.99 67.72 67.39 67.81 99.10 98.57 99.35 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.7 Encoder Mode: Preset 12 - Input: Bosphorus 4K a b c d e f g h i j 50 100 150 200 250 163.46 166.38 163.06 163.19 162.61 161.85 160.32 230.03 227.87 224.41 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.7 Encoder Mode: Preset 13 - Input: Bosphorus 4K a b c d e f g h i j 50 100 150 200 250 163.01 166.69 161.50 161.85 162.05 160.80 161.32 223.42 227.21 228.77 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.7 Encoder Mode: Preset 4 - Input: Bosphorus 1080p a b c d e f g h i j 3 6 9 12 15 12.48 12.59 12.62 10.91 10.98 10.74 11.02 12.23 12.26 12.18 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.7 Encoder Mode: Preset 8 - Input: Bosphorus 1080p a b c d e f g h i j 30 60 90 120 150 141.22 138.34 143.55 118.95 119.31 118.49 118.48 151.44 149.05 149.45 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.7 Encoder Mode: Preset 12 - Input: Bosphorus 1080p a b c d e f g h i j 130 260 390 520 650 422.99 427.69 431.90 526.22 525.17 521.52 528.53 580.16 591.32 584.12 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.7 Encoder Mode: Preset 13 - Input: Bosphorus 1080p a b c d e f g h i j 160 320 480 640 800 510.36 542.61 516.91 604.99 597.01 585.37 586.75 726.89 728.54 726.50 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OSPRay
Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Items Per Second, More Is Better OSPRay 2.12 Benchmark: particle_volume/ao/real_time a b c d e f g h i j 4 8 12 16 20 15.98600 15.97850 15.98720 5.57469 5.54107 5.57320 5.57553 10.79880 10.74300 10.69320
OpenBenchmarking.org Items Per Second, More Is Better OSPRay 2.12 Benchmark: particle_volume/scivis/real_time a b c d e f g h i j 4 8 12 16 20 15.95280 15.98880 15.97780 5.57001 5.56353 5.55581 5.56539 10.78320 10.79040 10.76640
OpenBenchmarking.org Items Per Second, More Is Better OSPRay 2.12 Benchmark: particle_volume/pathtracer/real_time a b c d e f g h i j 50 100 150 200 250 215.10 214.07 214.14 151.91 151.51 151.78 151.68 192.65 192.40 192.70
OpenBenchmarking.org Items Per Second, More Is Better OSPRay 2.12 Benchmark: gravity_spheres_volume/dim_512/ao/real_time a b c d e f g h i j 4 8 12 16 20 14.23690 14.17830 14.13990 5.60747 5.62040 5.61454 5.62278 10.85240 10.88290 10.87430
OpenBenchmarking.org Items Per Second, More Is Better OSPRay 2.12 Benchmark: gravity_spheres_volume/dim_512/scivis/real_time a b c d e f g h i j 4 8 12 16 20 13.87390 13.76660 13.83170 5.45329 5.46153 5.45227 5.47725 10.59130 10.58480 10.57400
OpenBenchmarking.org Items Per Second, More Is Better OSPRay 2.12 Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time a b c d e f g h i j 4 8 12 16 20 16.34680 16.43650 16.53500 6.58745 6.58270 6.59563 6.60085 12.54380 12.49870 12.51040
Build: allmodconfig
a - j: The test quit with a non-zero exit status.
Liquid-DSP
LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.
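As an illustration of what the "Buffer Length" and "Filter Length" parameters below correspond to, here is a minimal FIR-filter sketch against liquid-dsp's firfilt API. This is an assumption for illustration only; the test profile's actual benchmark kernel may differ, and the header path may vary by installation.

// Minimal liquid-dsp FIR filter sketch (illustrative; not the profile's kernel).
// Build with: g++ fir_sketch.cpp -lliquid
#include <liquid/liquid.h>   // typical install path; may simply be <liquid.h>
#include <cmath>
#include <complex>
#include <cstdio>
#include <vector>

int main() {
  const unsigned int h_len = 57;               // filter length (taps), as in one run above
  const unsigned int n     = 256;              // buffer length, as in the runs above

  std::vector<float> h(h_len, 1.0f / h_len);   // trivial moving-average taps for illustration
  firfilt_crcf q = firfilt_crcf_create(h.data(), h_len);

  std::vector<std::complex<float>> x(n), y(n);
  for (unsigned int i = 0; i < n; i++)         // dummy complex input signal
    x[i] = std::complex<float>(std::cos(0.1f * i), std::sin(0.1f * i));

  for (unsigned int i = 0; i < n; i++) {       // push one sample, compute one output sample
    firfilt_crcf_push(q, x[i]);
    firfilt_crcf_execute(q, &y[i]);
  }

  std::printf("y[0] = %f + %fi\n", y[0].real(), y[0].imag());
  firfilt_crcf_destroy(q);
  return 0;
}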
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 1 - Buffer Length: 256 - Filter Length: 32 a b c d e f g h i j 8M 16M 24M 32M 40M 39499000 39486000 39453000 35228000 35315000 35271000 35236000 37145000 37141000 37120000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 1 - Buffer Length: 256 - Filter Length: 57 a b c d e f g h i j 13M 26M 39M 52M 65M 59401000 59296000 57519000 52665000 52827000 52879000 52854000 51443000 55841000 55715000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 2 - Buffer Length: 256 - Filter Length: 32 a b c d e f g h i j 17M 34M 51M 68M 85M 77181000 77019000 76924000 67054000 68846000 68861000 68678000 72468000 72491000 72400000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 2 - Buffer Length: 256 - Filter Length: 57 a b c d e f g h i j 30M 60M 90M 120M 150M 117490000 114010000 118550000 105650000 105480000 105740000 104800000 105280000 109700000 111370000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 4 - Buffer Length: 256 - Filter Length: 32 a b c d e f g h i j 30M 60M 90M 120M 150M 153850000 153690000 153670000 138600000 138620000 138580000 138460000 145820000 145920000 145740000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 4 - Buffer Length: 256 - Filter Length: 57 a b c d e f g h i j 40M 80M 120M 160M 200M 196220000 196590000 194510000 188930000 191230000 189880000 190750000 200930000 198640000 200120000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 8 - Buffer Length: 256 - Filter Length: 32 a b c d e f g h i j 70M 140M 210M 280M 350M 307540000 305110000 306760000 278030000 277780000 276390000 277410000 292620000 292430000 292490000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 8 - Buffer Length: 256 - Filter Length: 57 a b c d e f g h i j 80M 160M 240M 320M 400M 369430000 366930000 366990000 363310000 357990000 350450000 357810000 378330000 377250000 381660000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 1 - Buffer Length: 256 - Filter Length: 512 a b c d e f g h i j 3M 6M 9M 12M 15M 13909000 14021000 14225000 12683000 12366000 12681000 12256000 12909000 13363000 12899000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 16 - Buffer Length: 256 - Filter Length: 32 a b c d e f g h i j 130M 260M 390M 520M 650M 594230000 602470000 603650000 545360000 545140000 545020000 543050000 585850000 583870000 585670000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 16 - Buffer Length: 256 - Filter Length: 57 a b c d e f g h i j 160M 320M 480M 640M 800M 699740000 692760000 674930000 689150000 692920000 693340000 682070000 741420000 735800000 737120000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 2 - Buffer Length: 256 - Filter Length: 512 a b c d e f g h i j 6M 12M 18M 24M 30M 27901000 27736000 28227000 24627000 25207000 25199000 22727000 25648000 25910000 26378000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 32 - Buffer Length: 256 - Filter Length: 32 a b c d e f g h i j 300M 600M 900M 1200M 1500M 1183500000 1190300000 1184800000 1047100000 1046600000 1041900000 1047100000 1172400000 1172900000 1169900000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 32 - Buffer Length: 256 - Filter Length: 57 a b c d e f g h i j 300M 600M 900M 1200M 1500M 1192100000 1214200000 1254800000 1035000000 1032000000 1024600000 1033400000 1352900000 1394100000 1369800000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 4 - Buffer Length: 256 - Filter Length: 512 a b c d e f g h i j 12M 24M 36M 48M 60M 52911000 55588000 55165000 50258000 50380000 49977000 49556000 52890000 52129000 49781000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 64 - Buffer Length: 256 - Filter Length: 32 a b c d e f g h i j 500M 1000M 1500M 2000M 2500M 2207700000 2212100000 2206800000 1059500000 1057500000 1057100000 1056200000 2057500000 2052600000 2056500000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 64 - Buffer Length: 256 - Filter Length: 57 a b c d e f g h i j 400M 800M 1200M 1600M 2000M 1994400000 2001900000 2010300000 1093300000 1095400000 1094600000 1099300000 1916800000 1899700000 1922300000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 8 - Buffer Length: 256 - Filter Length: 512 a b c d e f g h i j 20M 40M 60M 80M 100M 109870000 108080000 109140000 99594000 97005000 99441000 100170000 104440000 104780000 104220000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 96 - Buffer Length: 256 - Filter Length: 32 a b c d e f g h i j 600M 1200M 1800M 2400M 3000M 3005800000 2995400000 2999800000 1065200000 1065100000 1065300000 1065700000 2068300000 2069900000 2071000000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 96 - Buffer Length: 256 - Filter Length: 57 a b c d e f g h i j 600M 1200M 1800M 2400M 3000M 2559800000 2571100000 2564900000 1120800000 1117800000 1120500000 1118200000 1995500000 1997000000 1999500000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 16 - Buffer Length: 256 - Filter Length: 512 a b c d e f g h i j 50M 100M 150M 200M 250M 216080000 216150000 214910000 193850000 196040000 194500000 194670000 207590000 209720000 209490000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 32 - Buffer Length: 256 - Filter Length: 512 a b c d e f g h i j 90M 180M 270M 360M 450M 425810000 429620000 424400000 273760000 273480000 273390000 274070000 391680000 393200000 393270000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 64 - Buffer Length: 256 - Filter Length: 512 a b c d e f g h i j 130M 260M 390M 520M 650M 622560000 610950000 622630000 282920000 281830000 283030000 281730000 512020000 512310000 511070000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 96 - Buffer Length: 256 - Filter Length: 512 a b c d e f g h i j 150M 300M 450M 600M 750M 711640000 718140000 715030000 286250000 285880000 285920000 286530000 519700000 520140000 519440000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
TiDB Community Server
This is a PingCAP TiDB Community Server benchmark facilitated using the sysbench OLTP database benchmarks. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Queries Per Second, More Is Better TiDB Community Server 7.3 Test: oltp_read_write - Threads: 1 a b c e f g h i j 700 1400 2100 2800 3500 2540 2510 2485 3209 3218 3195 3480 3485 3479
Test: oltp_read_write - Threads: 1
d: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
OpenBenchmarking.org Queries Per Second, More Is Better TiDB Community Server 7.3 Test: oltp_read_write - Threads: 64 a b c d e f g h i 20K 40K 60K 80K 100K 79090 80183 78469 55334 53893 54956 55301 95579 94261
Test: oltp_read_write - Threads: 64
j: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
OpenBenchmarking.org Queries Per Second, More Is Better TiDB Community Server 7.3 Test: oltp_point_select - Threads: 1 a b c d e f h i 1300 2600 3900 5200 6500 4331 4405 4471 5898 5976 5954 6125 6165
Test: oltp_point_select - Threads: 1
g: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
j: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
OpenBenchmarking.org Queries Per Second, More Is Better TiDB Community Server 7.3 Test: oltp_read_write - Threads: 128 a b d e f g h i j 20K 40K 60K 80K 100K 85757 89099 59727 60145 60310 59944 104180 104620 105802
Test: oltp_read_write - Threads: 128
c: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
OpenBenchmarking.org Queries Per Second, More Is Better TiDB Community Server 7.3 Test: oltp_update_index - Threads: 1 a c d e f g h i j 400 800 1200 1600 2000 1212 1189 1479 1490 1483 1481 1666 1656 1660
Test: oltp_update_index - Threads: 1
b: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
OpenBenchmarking.org Queries Per Second, More Is Better TiDB Community Server 7.3 Test: oltp_point_select - Threads: 16 b c e f g h i j 20K 40K 60K 80K 100K 67515 65406 70250 70105 69923 87218 86471 87412
Test: oltp_point_select - Threads: 16
a: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
d: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
OpenBenchmarking.org Queries Per Second, More Is Better TiDB Community Server 7.3 Test: oltp_point_select - Threads: 32 a b d e f g h i j 30K 60K 90K 120K 150K 104627 106180 98149 96907 97368 96840 138538 138173 137618
Test: oltp_point_select - Threads: 32
c: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
OpenBenchmarking.org Queries Per Second, More Is Better TiDB Community Server 7.3 Test: oltp_point_select - Threads: 64 a b d e f g i j 40K 80K 120K 160K 200K 127567 130802 115675 118657 119092 118549 180179 180581
Test: oltp_point_select - Threads: 64
c: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
h: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
OpenBenchmarking.org Queries Per Second, More Is Better TiDB Community Server 7.3 Test: oltp_update_index - Threads: 16 a c d e f g h i j 4K 8K 12K 16K 20K 12558 12681 12622 12567 12692 12627 16965 16817 16972
Test: oltp_update_index - Threads: 16
b: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
OpenBenchmarking.org Queries Per Second, More Is Better TiDB Community Server 7.3 Test: oltp_update_index - Threads: 32 a b c d e g h i j 5K 10K 15K 20K 25K 18361 17817 17565 17612 17117 17135 24286 23773 24366
Test: oltp_update_index - Threads: 32
f: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
OpenBenchmarking.org Queries Per Second, More Is Better TiDB Community Server 7.3 Test: oltp_update_index - Threads: 64 b c d e f h i j 7K 14K 21K 28K 35K 24371 23324 21108 21271 21067 31332 30522 30638
Test: oltp_update_index - Threads: 64
a: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
g: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
OpenBenchmarking.org Queries Per Second, More Is Better TiDB Community Server 7.3 Test: oltp_point_select - Threads: 128 a b c d e f h i j 40K 80K 120K 160K 200K 159242 159728 149962 129492 129904 130389 200327 198137 197738
Test: oltp_point_select - Threads: 128
g: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
OpenBenchmarking.org Queries Per Second, More Is Better TiDB Community Server 7.3 Test: oltp_update_index - Threads: 128 a b c e f g h i j 8K 16K 24K 32K 40K 27087 27464 26546 24611 24830 24574 37126 36141 36644
Test: oltp_update_index - Threads: 128
d: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
OpenBenchmarking.org Queries Per Second, More Is Better TiDB Community Server 7.3 Test: oltp_update_non_index - Threads: 1 a b c d e f g h i 400 800 1200 1600 2000 1328 1312 1381 1693 1708 1697 1705 1861 1848
Test: oltp_update_non_index - Threads: 1
j: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
OpenBenchmarking.org Queries Per Second, More Is Better TiDB Community Server 7.3 Test: oltp_update_non_index - Threads: 16 a b d e g h i j 5K 10K 15K 20K 25K 18095 18068 18563 18557 18735 23794 23541 23543
Test: oltp_update_non_index - Threads: 16
c: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
f: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
OpenBenchmarking.org Queries Per Second, More Is Better TiDB Community Server 7.3 Test: oltp_update_non_index - Threads: 32 a b d e f h i j 8K 16K 24K 32K 40K 28735 28914 26273 26285 26695 36041 35650 35655
Test: oltp_update_non_index - Threads: 32
c: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
g: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
OpenBenchmarking.org Queries Per Second, More Is Better TiDB Community Server 7.3 Test: oltp_update_non_index - Threads: 128 a c e f g h j 14K 28K 42K 56K 70K 51105 52865 42138 41424 41695 65816 64066
Test: oltp_update_non_index - Threads: 128
b: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
d: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
i: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
Neural Magic DeepSparse
OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.5 Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream a b c d e f g h i j 9 18 27 36 45 39.50 39.47 39.45 13.07 12.94 13.09 13.07 25.71 25.63 25.73
OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.5 Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream a b c d e f g h i j 130 260 390 520 650 605.04 605.73 605.92 606.10 607.91 607.82 607.16 613.05 612.98 614.23
OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.5 Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream a b c d e f g h i j 300 600 900 1200 1500 1417.07 1403.07 1418.90 508.09 511.41 508.21 509.14 1003.52 999.46 1002.81
OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.5 Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream a b c d e f g h i j 4 8 12 16 20 16.91 17.07 16.89 15.72 15.62 15.72 15.69 15.92 15.99 15.94
OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.5 Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream a b c d e f g h i j 150 300 450 600 750 672.46 672.37 671.26 257.27 257.89 257.50 257.28 507.63 508.36 507.36
OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.5 Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream a b c d e f g h i j 8 16 24 32 40 35.63 35.64 35.68 31.05 30.99 31.03 31.06 31.49 31.44 31.50
OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.5 Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream a b c d e f g h i j 40 80 120 160 200 201.39 201.25 201.54 71.14 71.27 71.04 70.93 136.96 136.68 136.57
OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.5 Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream a b c d e f g h i j 30 60 90 120 150 118.75 118.95 118.78 112.25 112.06 112.41 112.48 116.51 116.89 116.85
OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.5 Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream a b c d e f g h i j 1100 2200 3300 4400 5500 5137.01 5138.83 5153.66 1599.21 1599.15 1600.53 1602.52 3324.82 3327.19 3324.70
OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.5 Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream a b c d e f g h i j 1.1241 2.2482 3.3723 4.4964 5.6205 4.6508 4.6476 4.6348 4.9960 4.9859 4.9877 4.9787 4.8005 4.8015 4.8016
OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.5 Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream a b c d e f g h i j 110 220 330 440 550 485.72 487.36 507.48 493.60 494.26 494.22 495.60 495.22 494.50 494.01
OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.5 Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream a b c d e f g h i j 110 220 330 440 550 489.12 489.45 487.05 163.56 162.93 162.90 163.23 326.93 326.78 326.90
OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.5 Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream a b c d e f g h i j 50 100 150 200 250 218.15 219.53 218.52 72.46 72.66 72.57 72.69 145.48 145.65 145.37
OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.5 Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream a b c d e f g h i j 20 40 60 80 100 109.80 109.23 109.58 110.11 109.97 110.00 109.90 109.79 109.74 109.86
OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.5 Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream a b c d e f g h i j 70 140 210 280 350 322.25 321.18 321.51 108.91 109.09 109.09 109.22 216.24 216.22 216.51
OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.5 Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream a b c d e f g h i j 80 160 240 320 400 347.66 347.22 347.37 325.88 325.74 324.96 325.51 337.96 336.77 336.68
OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.5 Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream a b c d e f g h i j 160 320 480 640 800 718.92 717.97 716.14 240.55 240.23 240.16 239.52 488.06 486.71 487.23
OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.5 Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream a b c d e f g h i j 40 80 120 160 200 158.92 159.06 164.61 55.61 55.46 55.54 55.43 109.03 109.46 109.41
OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.5 Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream a b c d e f g h i j 30 60 90 120 150 150.59 150.61 145.26 143.76 144.10 143.69 144.11 146.12 145.83 145.90
OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.5 Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream a b c d e f g h i j 9 18 27 36 45 39.44 39.45 39.42 13.13 13.12 13.09 13.06 25.79 25.70 25.77
OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.5 Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream a b c d e f g h i j 130 260 390 520 650 605.76 606.67 605.88 606.58 606.76 606.79 608.72 613.16 613.64 612.52
Blender
OpenBenchmarking.org Seconds, Fewer Is Better Blender 3.6 Blend File: BMW27 - Compute: CPU-Only a b c d e f g h i j 16 32 48 64 80 26.20 26.24 26.12 72.00 71.44 71.96 72.01 38.40 38.57 38.51
OpenBenchmarking.org Seconds, Fewer Is Better Blender 3.6 Blend File: Classroom - Compute: CPU-Only a b c d e f g h i j 40 80 120 160 200 66.42 66.64 66.72 182.99 182.56 181.70 183.29 99.54 99.35 99.29
OpenBenchmarking.org Seconds, Fewer Is Better Blender 3.6 Blend File: Fishy Cat - Compute: CPU-Only a b c d e f g h i j 20 40 60 80 100 33.22 33.17 33.03 90.03 90.31 90.26 90.63 49.10 48.69 48.82
OpenBenchmarking.org Seconds, Fewer Is Better Blender 3.6 Blend File: Barbershop - Compute: CPU-Only a b c d e f g h i j 140 280 420 560 700 254.88 255.30 254.72 670.87 670.64 667.87 669.09 352.40 351.66 351.38
OpenBenchmarking.org Seconds, Fewer Is Better Blender 3.6 Blend File: Pabellon Barcelona - Compute: CPU-Only a b c d e f g h i j 50 100 150 200 250 80.54 80.76 80.41 224.15 224.10 223.95 224.12 119.30 119.42 119.04
OpenVINO
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.1 Model: Face Detection FP16 - Device: CPU a b c d e f g h i j 7 14 21 28 35 30.41 30.44 30.43 10.47 10.47 10.48 10.48 19.82 19.83 19.84 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.1 Model: Face Detection FP16 - Device: CPU a b c d e f g h i j 200 400 600 800 1000 393.60 393.23 393.37 761.59 761.16 760.57 759.92 804.58 804.75 805.38 MIN: 363.29 / MAX: 431.61 MIN: 360.87 / MAX: 433.13 MIN: 362.57 / MAX: 433.51 MIN: 738.34 / MAX: 772.36 MIN: 741.99 / MAX: 776.56 MIN: 741.4 / MAX: 770.88 MIN: 737.63 / MAX: 771.07 MIN: 772.52 / MAX: 820.63 MIN: 776.93 / MAX: 819.19 MIN: 783.22 / MAX: 819.23 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.1 Model: Person Detection FP16 - Device: CPU a b c d e f g h i j 60 120 180 240 300 282.55 284.22 282.67 107.02 107.27 107.39 107.04 197.94 193.80 194.82 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.1 Model: Person Detection FP16 - Device: CPU a b c d e f g h i j 20 40 60 80 100 42.44 42.20 42.43 74.71 74.50 74.43 74.71 80.77 82.49 82.09 MIN: 36.14 / MAX: 61.98 MIN: 36.84 / MAX: 61.97 MIN: 36.31 / MAX: 62.36 MIN: 66.12 / MAX: 81.09 MIN: 66.5 / MAX: 80.32 MIN: 65.68 / MAX: 83.49 MIN: 66.29 / MAX: 79.68 MIN: 69.54 / MAX: 95.42 MIN: 70.77 / MAX: 94.62 MIN: 68.73 / MAX: 91.87 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.1 Model: Person Detection FP32 - Device: CPU a b c d e f g h i j 60 120 180 240 300 283.97 284.99 284.31 106.90 107.24 106.76 107.24 196.07 197.66 196.26 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.1 Model: Person Detection FP32 - Device: CPU a b c d e f g h i j 20 40 60 80 100 42.24 42.09 42.19 74.81 74.54 74.87 74.58 81.58 80.88 81.50 MIN: 36.59 / MAX: 61.56 MIN: 37.13 / MAX: 58.71 MIN: 36.21 / MAX: 65.64 MIN: 66.88 / MAX: 80.7 MIN: 65.97 / MAX: 82.9 MIN: 66.72 / MAX: 80.96 MIN: 67.63 / MAX: 78.73 MIN: 68.74 / MAX: 95.81 MIN: 39.72 / MAX: 92.54 MIN: 68.9 / MAX: 92.66 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.1 Model: Vehicle Detection FP16 - Device: CPU a b c d e f g h i j 400 800 1200 1600 2000 2033.17 2028.01 2029.79 797.64 793.75 791.74 793.90 1481.71 1488.04 1483.25 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.1 Model: Vehicle Detection FP16 - Device: CPU a b c d e f g h i j 3 6 9 12 15 5.89 5.91 5.90 10.01 10.06 10.09 10.06 10.78 10.74 10.77 MIN: 4.67 / MAX: 18.4 MIN: 4.84 / MAX: 12.9 MIN: 4.83 / MAX: 13.4 MIN: 5.7 / MAX: 19.52 MIN: 5.29 / MAX: 19.07 MIN: 5.4 / MAX: 19.17 MIN: 5.2 / MAX: 19.38 MIN: 5.59 / MAX: 21.13 MIN: 5.92 / MAX: 24.44 MIN: 6 / MAX: 18.16 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.1 Model: Face Detection FP16-INT8 - Device: CPU a b c d e f g h i j 13 26 39 52 65 56.01 56.06 56.02 20.03 20.00 20.01 20.05 37.82 37.57 37.48 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.1 Model: Face Detection FP16-INT8 - Device: CPU a b c d e f g h i j 90 180 270 360 450 213.94 213.62 213.79 398.52 398.91 399.24 398.13 421.80 425.68 425.88 MIN: 201.64 / MAX: 242.71 MIN: 197.2 / MAX: 235.23 MIN: 197.29 / MAX: 236.32 MIN: 382.1 / MAX: 404.98 MIN: 386.2 / MAX: 407.29 MIN: 387.9 / MAX: 408.93 MIN: 379.09 / MAX: 404.71 MIN: 269.94 / MAX: 598.22 MIN: 402.91 / MAX: 432.03 MIN: 404.76 / MAX: 434.06 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.1 Model: Face Detection Retail FP16 - Device: CPU a b c d e f g h i j 1300 2600 3900 5200 6500 5882.91 5836.27 5840.53 2564.78 2562.54 2539.97 2557.66 4803.65 4848.42 4892.27 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.1 Model: Face Detection Retail FP16 - Device: CPU a b c d e f g h i j 0.747 1.494 2.241 2.988 3.735 2.03 2.05 2.05 3.11 3.11 3.14 3.12 3.32 3.29 3.26 MIN: 1.66 / MAX: 7.51 MIN: 1.6 / MAX: 7 MIN: 1.62 / MAX: 6.96 MIN: 1.94 / MAX: 11.57 MIN: 1.93 / MAX: 9.72 MIN: 1.93 / MAX: 11.65 MIN: 1.88 / MAX: 11.92 MIN: 2.11 / MAX: 12.58 MIN: 1.89 / MAX: 12.75 MIN: 2.1 / MAX: 14.3 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.1 Model: Road Segmentation ADAS FP16 - Device: CPU a b c d e f g h i j 160 320 480 640 800 748.44 750.49 757.38 344.67 342.81 343.49 341.36 643.80 642.90 648.27 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.1 Model: Road Segmentation ADAS FP16 - Device: CPU a b c d e f g h i j 6 12 18 24 30 16.02 15.98 15.83 23.20 23.32 23.28 23.42 24.84 24.87 24.67 MIN: 12.5 / MAX: 33.94 MIN: 12.74 / MAX: 33.34 MIN: 12.38 / MAX: 32.97 MIN: 15.1 / MAX: 31.6 MIN: 19.49 / MAX: 30.99 MIN: 15.73 / MAX: 30.77 MIN: 20.46 / MAX: 32.43 MIN: 16.93 / MAX: 33.96 MIN: 17 / MAX: 33.34 MIN: 20.15 / MAX: 37.89 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.1 Model: Vehicle Detection FP16-INT8 - Device: CPU a b c d e f g h i j 600 1200 1800 2400 3000 2873.24 2880.58 2881.14 1175.67 1174.60 1180.85 1175.58 2252.70 2264.93 2254.72 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.1 Model: Vehicle Detection FP16-INT8 - Device: CPU a b c d e f g h i j 2 4 6 8 10 4.17 4.16 4.16 6.79 6.80 6.76 6.79 7.09 7.05 7.08 MIN: 3.39 / MAX: 10.07 MIN: 3.42 / MAX: 11.2 MIN: 3.43 / MAX: 10.26 MIN: 3.8 / MAX: 15.48 MIN: 4.04 / MAX: 15.37 MIN: 4.04 / MAX: 15.47 MIN: 3.79 / MAX: 15.41 MIN: 4.43 / MAX: 16.88 MIN: 4.44 / MAX: 16.57 MIN: 4.35 / MAX: 16.86 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.1 Model: Weld Porosity Detection FP16 - Device: CPU a b c d e f g h i j 600 1200 1800 2400 3000 2945.26 2986.46 2987.33 1039.61 1039.82 1038.47 1039.37 1963.90 1964.61 1964.71 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.1 Model: Weld Porosity Detection FP16 - Device: CPU a b c d e f g h i j 4 8 12 16 20 16.26 16.02 16.02 15.36 15.36 15.38 15.37 16.27 16.27 16.27 MIN: 14.71 / MAX: 28.14 MIN: 14.41 / MAX: 30.55 MIN: 14.63 / MAX: 33.79 MIN: 8.08 / MAX: 24.34 MIN: 8.02 / MAX: 23.81 MIN: 7.99 / MAX: 24 MIN: 7.99 / MAX: 23.98 MIN: 8.92 / MAX: 25.52 MIN: 8.5 / MAX: 25.86 MIN: 8.44 / MAX: 25.48 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.1 Model: Face Detection Retail FP16-INT8 - Device: CPU a b c d e f g h i j 2K 4K 6K 8K 10K 9837.58 9849.07 9845.27 3540.88 3544.18 3548.78 3533.64 6653.86 6646.91 6638.24 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.1 Model: Face Detection Retail FP16-INT8 - Device: CPU a b c d e f g h i j 1.0935 2.187 3.2805 4.374 5.4675 4.86 4.85 4.86 4.51 4.51 4.50 4.52 4.80 4.81 4.81 MIN: 4.23 / MAX: 12.81 MIN: 4.25 / MAX: 12.86 MIN: 4.34 / MAX: 12.27 MIN: 2.98 / MAX: 13.05 MIN: 2.96 / MAX: 16.06 MIN: 2.98 / MAX: 13.86 MIN: 2.77 / MAX: 13.57 MIN: 3.23 / MAX: 14.95 MIN: 3.23 / MAX: 15.04 MIN: 3.23 / MAX: 14.45 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.1 Model: Road Segmentation ADAS FP16-INT8 - Device: CPU a b c d e f g h i j 200 400 600 800 1000 842.91 854.51 849.30 370.57 373.64 369.26 372.26 710.69 709.73 709.75 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.1 Model: Road Segmentation ADAS FP16-INT8 - Device: CPU a b c d e f g h i j 5 10 15 20 25 14.23 14.03 14.12 21.57 21.40 21.65 21.47 22.50 22.53 22.53 MIN: 11.51 / MAX: 25.86 MIN: 11.59 / MAX: 26.04 MIN: 11.51 / MAX: 26.04 MIN: 19.5 / MAX: 24.76 MIN: 19.07 / MAX: 25.3 MIN: 19.48 / MAX: 24.27 MIN: 17.62 / MAX: 28.13 MIN: 13.76 / MAX: 30.22 MIN: 19.09 / MAX: 30.15 MIN: 18.74 / MAX: 31.08 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.1 Model: Machine Translation EN To DE FP16 - Device: CPU a b c d e f g h i j 70 140 210 280 350 317.22 317.28 317.33 124.12 123.61 124.30 123.41 234.18 233.65 233.88 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.1 Model: Machine Translation EN To DE FP16 - Device: CPU a b c d e f g h i j 15 30 45 60 75 37.80 37.79 37.79 64.41 64.68 64.31 64.77 68.27 68.42 68.35 MIN: 33.35 / MAX: 56.45 MIN: 32.97 / MAX: 53.7 MIN: 33.29 / MAX: 54.88 MIN: 37.44 / MAX: 73.04 MIN: 38.02 / MAX: 72.52 MIN: 50.85 / MAX: 70.77 MIN: 55.8 / MAX: 69.46 MIN: 56.41 / MAX: 79.96 MIN: 56.13 / MAX: 75.77 MIN: 55.82 / MAX: 74.84 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.1 Model: Weld Porosity Detection FP16-INT8 - Device: CPU a b c d e f g h i j 1200 2400 3600 4800 6000 5776.94 5780.44 5802.65 2013.77 2007.53 2004.76 2006.09 3780.80 3777.75 3783.65 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.1 Model: Weld Porosity Detection FP16-INT8 - Device: CPU a b c d e f g h i j 2 4 6 8 10 8.28 8.27 8.24 7.93 7.96 7.97 7.96 8.46 8.46 8.45 MIN: 7.44 / MAX: 23.35 MIN: 7.37 / MAX: 25.18 MIN: 7.62 / MAX: 23.32 MIN: 4.2 / MAX: 16.92 MIN: 4.19 / MAX: 16.59 MIN: 4.37 / MAX: 16.86 MIN: 4.19 / MAX: 14.2 MIN: 4.67 / MAX: 18 MIN: 4.49 / MAX: 17.8 MIN: 4.46 / MAX: 17.31 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.1 Model: Person Vehicle Bike Detection FP16 - Device: CPU a b c d e f g h i j 500 1000 1500 2000 2500 2454.09 2450.26 2455.51 1036.99 1028.64 1041.87 1031.60 2180.83 2224.08 2194.36 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.1 Model: Person Vehicle Bike Detection FP16 - Device: CPU a b c d e f g h i j 2 4 6 8 10 4.88 4.89 4.88 7.70 7.77 7.67 7.74 7.33 7.18 7.28 MIN: 3.95 / MAX: 16.05 MIN: 3.93 / MAX: 13.44 MIN: 3.9 / MAX: 14.94 MIN: 5.51 / MAX: 16.06 MIN: 5.42 / MAX: 16.35 MIN: 5.32 / MAX: 16.6 MIN: 6.06 / MAX: 12.66 MIN: 5.45 / MAX: 15.94 MIN: 4.98 / MAX: 16.11 MIN: 5.53 / MAX: 15.78 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.1 Model: Handwritten English Recognition FP16 - Device: CPU a b c d e f g h i j 300 600 900 1200 1500 1560.03 1546.02 1551.63 532.59 530.99 533.74 538.01 1034.35 1036.11 1012.98 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.1 Model: Handwritten English Recognition FP16 - Device: CPU a b c d e f g h i j 7 14 21 28 35 30.72 31.00 30.89 30.02 30.10 29.95 29.72 30.92 30.87 31.57 MIN: 29.51 / MAX: 35.07 MIN: 29.59 / MAX: 36.33 MIN: 29.48 / MAX: 36.29 MIN: 18.78 / MAX: 38.72 MIN: 22.61 / MAX: 39.15 MIN: 19.01 / MAX: 38.08 MIN: 19.46 / MAX: 38.99 MIN: 25.94 / MAX: 41.77 MIN: 20.13 / MAX: 42.34 MIN: 20.39 / MAX: 39.22 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.1 Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU a b c d e f g h i j 20K 40K 60K 80K 100K 86884.64 87359.23 86789.80 32002.62 32032.06 31951.64 32008.03 59615.88 59654.98 59505.78 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.1 Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU a b c d e f g h i j 0.1215 0.243 0.3645 0.486 0.6075 0.54 0.54 0.54 0.49 0.49 0.49 0.49 0.53 0.53 0.53 MIN: 0.45 / MAX: 7.64 MIN: 0.45 / MAX: 7.81 MIN: 0.45 / MAX: 5.03 MIN: 0.3 / MAX: 9.28 MIN: 0.3 / MAX: 9.07 MIN: 0.3 / MAX: 8.2 MIN: 0.3 / MAX: 8.84 MIN: 0.31 / MAX: 10.06 MIN: 0.31 / MAX: 10.06 MIN: 0.31 / MAX: 7.18 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.1 Model: Handwritten English Recognition FP16-INT8 - Device: CPU a b c d e f g h i j 300 600 900 1200 1500 1244.69 1239.67 1237.29 395.66 432.32 431.94 432.20 810.71 815.13 812.76 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.1 Model: Handwritten English Recognition FP16-INT8 - Device: CPU a b c d e f g h i j 9 18 27 36 45 38.50 38.66 38.75 40.40 36.98 37.01 36.98 39.44 39.23 39.34 MIN: 36.77 / MAX: 44.23 MIN: 37.22 / MAX: 43.52 MIN: 37.46 / MAX: 43.52 MIN: 26.93 / MAX: 74.83 MIN: 32.02 / MAX: 44.78 MIN: 32.25 / MAX: 43.6 MIN: 32.61 / MAX: 41.91 MIN: 33.14 / MAX: 45.28 MIN: 34.71 / MAX: 47.63 MIN: 25.19 / MAX: 46.89 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.1 Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU a b c d e f g h i j 30K 60K 90K 120K 150K 120606.38 120728.22 123484.28 44958.07 44933.27 44968.43 45097.99 68931.35 68945.32 68895.50 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.1 Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU a b c d e f g h i j 0.0788 0.1576 0.2364 0.3152 0.394 0.34 0.34 0.34 0.35 0.35 0.35 0.35 0.35 0.35 0.35 MIN: 0.29 / MAX: 7.33 MIN: 0.29 / MAX: 10.87 MIN: 0.29 / MAX: 7.09 MIN: 0.23 / MAX: 9.09 MIN: 0.23 / MAX: 8.84 MIN: 0.23 / MAX: 9.15 MIN: 0.23 / MAX: 8.63 MIN: 0.21 / MAX: 8.91 MIN: 0.22 / MAX: 8.62 MIN: 0.22 / MAX: 8.35 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org Ops per sec, More Is Better Apache Hadoop 3.3.6 Operation: Open - Threads: 100 - Files: 100000 a b c d e f g h i j 120K 240K 360K 480K 600K 420168 404858 403226 529101 294985 523560 460829 526316 555556 370370
OpenBenchmarking.org Ops per sec, More Is Better Apache Hadoop 3.3.6 Operation: Open - Threads: 50 - Files: 1000000 a b c d e f g h i j 300K 600K 900K 1200K 1500K 1126126 1020408 683995 278319 251004 1221001 654022 1233046 253678 1251564
OpenBenchmarking.org Ops per sec, More Is Better Apache Hadoop 3.3.6 Operation: Open - Threads: 100 - Files: 1000000 a b c d e f g h i j 300K 600K 900K 1200K 1500K 215332 173822 185874 1248439 1204819 1303781 1107420 323729 382995 289101
OpenBenchmarking.org Ops per sec, More Is Better Apache Hadoop 3.3.6 Operation: Delete - Threads: 50 - Files: 1000000 a b c d e f g h i j 20K 40K 60K 80K 100K 98932 97314 90147 111012 113327 111198 110828 113404 110436 113572
OpenBenchmarking.org Ops per sec, More Is Better Apache Hadoop 3.3.6 Operation: Delete - Threads: 100 - Files: 1000000 a b c d e f g h i j 20K 40K 60K 80K 100K 90114 86715 97031 112613 113225 110803 113895 111782 114692 110693
OpenBenchmarking.org Ops per sec, More Is Better Apache Hadoop 3.3.6 Operation: File Status - Threads: 50 - Files: 100000 a b c d e f g h i j 200K 400K 600K 800K 1000K 529101 862069 657895 632911 389105 709220 561798 925926 869565 751880
OpenBenchmarking.org Ops per sec, More Is Better Apache Hadoop 3.3.6 Operation: File Status - Threads: 100 - Files: 100000 a b c d e f g h i j 200K 400K 600K 800K 1000K 515464 458716 729927 591716 613497 478469 487805 595238 847458 684932
OpenBenchmarking.org Ops per sec, More Is Better Apache Hadoop 3.3.6 Operation: File Status - Threads: 50 - Files: 1000000 a b c d e f g h i j 500K 1000K 1500K 2000K 2500K 2173913 1941748 284252 1818182 320924 1795332 2036660 426439 1930502 1941748
OpenBenchmarking.org Ops per sec, More Is Better Apache Hadoop 3.3.6 Operation: File Status - Threads: 100 - Files: 1000000 a b c d e f g h i j 500K 1000K 1500K 2000K 2500K 1886792 161970 1893939 600601 235627 1964637 2049180 2352941 2506266 558036
Kripke Kripke is a simple, scalable, 3D Sn deterministic particle transport code. Its primary purpose is to research how data layout, programming paradigms, and architectures affect the implementation and performance of Sn transport. Kripke is developed by LLNL. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Throughput FoM, More Is Better Kripke 1.2.6 d e f g h i j 80M 160M 240M 320M 400M 240994500 236243900 236591000 237175700 354808000 350151200 349019800 1. (CXX) g++ options: -O3 -fopenmp -ldl
a: The test quit with a non-zero exit status.
b: The test quit with a non-zero exit status.
c: The test quit with a non-zero exit status.
BRL-CAD BRL-CAD is a cross-platform, open-source solid modeling system with a built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org VGR Performance Metric, More Is Better BRL-CAD 7.36 VGR Performance Metric a b c d e f g h i j 170K 340K 510K 680K 850K 772162 768517 762529 298064 296125 295603 295522 572500 570458 569066 1. (CXX) g++ options: -std=c++14 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -ltcl8.6 -lregex_brl -lz_brl -lnetpbm -ldl -lm -ltk8.6
easyWave The easyWave software simulates tsunami generation and propagation in the context of early warning systems. easyWave supports OpenMP for CPU multi-threading; GPU ports are also available but are not currently incorporated into this test profile. The easyWave tsunami generation software is run with one of the example/reference input files to measure the CPU execution time. Learn more via the OpenBenchmarking.org test page.
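To make the scaling in the easyWave numbers below easier to interpret, here is a minimal, hypothetical C++/OpenMP sketch (not easyWave source code; the file name, grid size, and update rule are invented for illustration) of the kind of parallel grid-update loop such a propagation code spends its time in:

// sketch.cpp - illustrative only; assumes g++ with OpenMP support
// Build: g++ -O3 -fopenmp sketch.cpp -o sketch
#include <omp.h>
#include <cstdio>
#include <vector>

int main() {
    const int nx = 2048, ny = 2048, steps = 100;
    std::vector<double> h(nx * ny, 0.0), h_new(nx * ny, 0.0);
    h[(ny / 2) * nx + nx / 2] = 1.0;                     // initial disturbance

    const double t0 = omp_get_wtime();
    for (int s = 0; s < steps; ++s) {
        // Each time step updates every interior cell from its neighbours;
        // rows are distributed across the available CPU threads.
        #pragma omp parallel for schedule(static)
        for (int j = 1; j < ny - 1; ++j)
            for (int i = 1; i < nx - 1; ++i)
                h_new[j * nx + i] = 0.25 * (h[j * nx + i - 1] + h[j * nx + i + 1]
                                          + h[(j - 1) * nx + i] + h[(j + 1) * nx + i]);
        h.swap(h_new);
    }
    std::printf("elapsed: %.3f s on %d threads\n",
                omp_get_wtime() - t0, omp_get_max_threads());
    return 0;
}

Because nearly all of the work sits inside such parallel loops, wall-clock time drops as more cores and memory bandwidth become available, which is consistent with the shorter times reported for the h/i/j configurations below.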
OpenBenchmarking.org Seconds, Fewer Is Better easyWave r34 Input: e2Asean Grid + BengkuluSept2007 Source - Time: 240 d e f g h i j 0.3728 0.7456 1.1184 1.4912 1.864 1.657 1.654 1.657 1.648 1.284 1.288 1.245 1. (CXX) g++ options: -O3 -fopenmp
OpenBenchmarking.org Seconds, Fewer Is Better easyWave r34 Input: e2Asean Grid + BengkuluSept2007 Source - Time: 1200 d e f g h i j 9 18 27 36 45 38.11 38.07 38.02 37.95 26.11 26.39 25.56 1. (CXX) g++ options: -O3 -fopenmp
OpenBenchmarking.org Seconds, Fewer Is Better easyWave r34 Input: e2Asean Grid + BengkuluSept2007 Source - Time: 2400 d e f g h i j 20 40 60 80 100 98.98 99.42 97.99 97.53 68.57 68.24 68.52 1. (CXX) g++ options: -O3 -fopenmp
Embree Intel Embree is an open-source library of high-performance ray tracing kernels. Learn more via the OpenBenchmarking.org test page. OpenBenchmarking.org Frames Per Second, More Is Better Embree 4.3 Binary: Pathtracer - Model: Asian Dragon d e f g h i j 11 22 33 44 55 24.85 24.83 24.89 24.96 48.91 48.72 48.71 MIN: 24.78 / MAX: 25 MIN: 24.76 / MAX: 24.96 MIN: 24.81 / MAX: 25.06 MIN: 24.9 / MAX: 25.13 MIN: 48.64 / MAX: 49.47 MIN: 48.48 / MAX: 49.3 MIN: 48.45 / MAX: 49.47
OpenBenchmarking.org Frames Per Second, More Is Better Embree 4.3 Binary: Pathtracer - Model: Asian Dragon Obj d e f g h i j 10 20 30 40 50 22.35 22.29 22.27 22.26 43.84 43.77 43.90 MIN: 22.28 / MAX: 22.5 MIN: 22.22 / MAX: 22.46 MIN: 22.2 / MAX: 22.44 MIN: 22.18 / MAX: 22.43 MIN: 43.64 / MAX: 44.38 MIN: 43.51 / MAX: 44.16 MIN: 43.66 / MAX: 44.27
OpenBenchmarking.org Frames Per Second, More Is Better Embree 4.3 Binary: Pathtracer - Model: Crown d e f g h i j 10 20 30 40 50 21.89 21.99 21.77 21.83 43.57 43.81 43.93 MIN: 21.74 / MAX: 22.23 MIN: 21.84 / MAX: 22.32 MIN: 21.63 / MAX: 22.18 MIN: 21.69 / MAX: 22.17 MIN: 43.11 / MAX: 44.65 MIN: 43.36 / MAX: 45.05 MIN: 43.46 / MAX: 45.01
OpenBenchmarking.org Frames Per Second, More Is Better Embree 4.3 Binary: Pathtracer ISPC - Model: Asian Dragon d e f g h i j 12 24 36 48 60 27.74 27.83 27.83 27.91 54.22 54.19 54.15 MIN: 27.64 / MAX: 27.98 MIN: 27.72 / MAX: 28.1 MIN: 27.73 / MAX: 28.13 MIN: 27.81 / MAX: 28.17 MIN: 53.93 / MAX: 54.77 MIN: 53.91 / MAX: 54.97 MIN: 53.87 / MAX: 54.79
OpenBenchmarking.org Frames Per Second, More Is Better Embree 4.3 Binary: Pathtracer ISPC - Model: Asian Dragon Obj d e f g h i j 10 20 30 40 50 23.35 23.53 23.50 23.71 45.92 46.09 46.02 MIN: 23.26 / MAX: 23.57 MIN: 23.43 / MAX: 23.73 MIN: 23.4 / MAX: 23.74 MIN: 23.61 / MAX: 23.93 MIN: 45.64 / MAX: 46.53 MIN: 45.82 / MAX: 46.6 MIN: 45.75 / MAX: 46.57
OpenBenchmarking.org Frames Per Second, More Is Better Embree 4.3 Binary: Pathtracer ISPC - Model: Crown d e f g h i j 10 20 30 40 50 22.39 22.34 22.44 22.42 45.19 45.71 45.40 MIN: 22.2 / MAX: 22.85 MIN: 22.15 / MAX: 22.75 MIN: 22.25 / MAX: 22.78 MIN: 22.22 / MAX: 22.85 MIN: 44.65 / MAX: 46.39 MIN: 45.11 / MAX: 47.49 MIN: 44.88 / MAX: 46.68
OpenVKL OpenVKL is the Intel Open Volume Kernel Library, which offers high-performance volume computation kernels and is part of the Intel oneAPI Rendering Toolkit. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Items / Sec, More Is Better OpenVKL 2.0.0 Benchmark: vklBenchmarkCPU Scalar d e f g h i j 80 160 240 320 400 191 190 191 191 363 363 363 MIN: 13 / MAX: 3471 MIN: 13 / MAX: 3484 MIN: 13 / MAX: 3484 MIN: 13 / MAX: 3483 MIN: 24 / MAX: 6610 MIN: 24 / MAX: 6577 MIN: 24 / MAX: 6613
OpenBenchmarking.org Items / Sec, More Is Better OpenVKL 2.0.0 Benchmark: vklBenchmarkCPU ISPC d e f g h i j 200 400 600 800 1000 487 487 488 489 926 922 922 MIN: 36 / MAX: 6949 MIN: 36 / MAX: 6956 MIN: 36 / MAX: 6952 MIN: 36 / MAX: 6969 MIN: 67 / MAX: 12416 MIN: 67 / MAX: 12374 MIN: 67 / MAX: 12356
oneDNN This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn benchmarking functionality. The result is the total perf time reported by benchdnn. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.
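For readers unfamiliar with the library, the following is a minimal, hypothetical sketch of typical oneDNN 3.x C++ usage (an illustration only; the numbers below come from the benchdnn harness driving convolution, deconvolution, inner-product, and RNN shapes, not from code like this). It creates a CPU engine and runs a single in-place ReLU primitive:

// onednn_sketch.cpp - illustrative only; assumes oneDNN 3.x is installed
// Build: g++ -O3 -std=c++17 onednn_sketch.cpp -ldnnl
#include <dnnl.hpp>
#include <cstdio>
#include <cstring>
#include <vector>

int main() {
    using namespace dnnl;
    engine eng(engine::kind::cpu, 0);                    // CPU engine, as in these tests
    stream strm(eng);

    memory::dims dims = {1, 64, 56, 56};                 // arbitrary NCHW tensor shape
    auto md  = memory::desc(dims, memory::data_type::f32, memory::format_tag::nchw);
    auto mem = memory(md, eng);

    std::vector<float> data(1 * 64 * 56 * 56, -1.0f);    // fill with negative values
    std::memcpy(mem.get_data_handle(), data.data(), data.size() * sizeof(float));

    // In-place forward ReLU; benchdnn measures primitives like this (and the
    // convolution/RNN/inner-product shapes listed below) in timed loops.
    auto pd   = eltwise_forward::primitive_desc(eng, prop_kind::forward_inference,
                                                algorithm::eltwise_relu, md, md, 0.f);
    auto relu = eltwise_forward(pd);
    relu.execute(strm, {{DNNL_ARG_SRC, mem}, {DNNL_ARG_DST, mem}});
    strm.wait();

    std::memcpy(data.data(), mem.get_data_handle(), data.size() * sizeof(float));
    std::printf("first element after ReLU: %f\n", data[0]);  // expect 0.000000
    return 0;
}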
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU d e f g h i j 0.48 0.96 1.44 1.92 2.4 2.13332 2.12570 2.13062 2.11813 1.14749 1.15012 1.15578 MIN: 2 MIN: 2.01 MIN: 1.97 MIN: 1.99 MIN: 1.01 MIN: 1 MIN: 1.03 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU d e f g h i j 0.3539 0.7078 1.0617 1.4156 1.7695 1.558240 1.549110 1.572820 1.551180 0.778543 0.798540 0.768314 MIN: 1.51 MIN: 1.51 MIN: 1.53 MIN: 1.52 MIN: 0.71 MIN: 0.7 MIN: 0.71 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU d e f g h i j 0.3019 0.6038 0.9057 1.2076 1.5095 1.337890 1.338610 1.341830 1.335640 0.734461 0.735094 0.731618 MIN: 1.31 MIN: 1.31 MIN: 1.31 MIN: 1.31 MIN: 0.66 MIN: 0.66 MIN: 0.66 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU d e f g h i j 0.8649 1.7298 2.5947 3.4596 4.3245 3.81576 3.84421 3.81823 3.82381 3.72247 3.65907 3.68087 MIN: 3.26 MIN: 3.27 MIN: 3.25 MIN: 3.29 MIN: 2.83 MIN: 2.81 MIN: 2.85 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU d e f g h i j 0.1426 0.2852 0.4278 0.5704 0.713 0.628236 0.633975 0.630325 0.629108 0.426426 0.427512 0.430270 MIN: 0.6 MIN: 0.6 MIN: 0.6 MIN: 0.6 MIN: 0.38 MIN: 0.39 MIN: 0.38 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU d e f g h i j 0.6893 1.3786 2.0679 2.7572 3.4465 3.05991 3.06370 3.05674 3.05458 1.78170 1.78876 1.78691 MIN: 2.96 MIN: 2.97 MIN: 2.97 MIN: 2.97 MIN: 1.64 MIN: 1.65 MIN: 1.66 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU d e f g h i j 0.7615 1.523 2.2845 3.046 3.8075 3.37782 3.38436 3.37956 3.38156 1.73381 1.73499 1.73501 MIN: 3.33 MIN: 3.33 MIN: 3.33 MIN: 3.33 MIN: 1.64 MIN: 1.65 MIN: 1.64 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU d e f g h i j 0.1914 0.3828 0.5742 0.7656 0.957 0.847805 0.844434 0.850691 0.843492 0.440006 0.440368 0.440156 MIN: 0.83 MIN: 0.83 MIN: 0.83 MIN: 0.83 MIN: 0.41 MIN: 0.41 MIN: 0.41 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU d e f g h i j 0.4315 0.863 1.2945 1.726 2.1575 1.91374 1.91781 1.91422 1.91274 1.04100 1.04312 1.04333 MIN: 1.88 MIN: 1.88 MIN: 1.88 MIN: 1.88 MIN: 0.94 MIN: 0.94 MIN: 0.94 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU d e f g h i j 0.5772 1.1544 1.7316 2.3088 2.886 2.49408 2.56522 2.49714 2.51441 1.74203 1.64478 1.75453 MIN: 2.3 MIN: 2.32 MIN: 2.26 MIN: 2.3 MIN: 1.51 MIN: 1.42 MIN: 1.52 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU d e f g h i j 0.4369 0.8738 1.3107 1.7476 2.1845 0.652259 0.657610 0.653182 0.647700 0.892701 1.941900 0.880016 MIN: 0.57 MIN: 0.57 MIN: 0.57 MIN: 0.57 MIN: 0.79 MIN: 0.87 MIN: 0.78 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU d e f g h i j 0.4722 0.9444 1.4166 1.8888 2.361 1.03749 1.14432 1.00136 1.12723 2.09880 1.24308 1.94941 MIN: 0.92 MIN: 1.07 MIN: 0.92 MIN: 0.93 MIN: 1.29 MIN: 1.04 MIN: 1.26 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU d e f g h i j 0.2881 0.5762 0.8643 1.1524 1.4405 1.257580 1.280430 1.206530 1.279180 0.926283 0.931793 0.936001 MIN: 1.21 MIN: 1.24 MIN: 1.18 MIN: 1.24 MIN: 0.85 MIN: 0.86 MIN: 0.86 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU d e f g h i j 0.1378 0.2756 0.4134 0.5512 0.689 0.603950 0.575794 0.600834 0.612320 0.302460 0.301535 0.309278 MIN: 0.53 MIN: 0.52 MIN: 0.53 MIN: 0.53 MIN: 0.27 MIN: 0.28 MIN: 0.28 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU d e f g h i j 0.2388 0.4776 0.7164 0.9552 1.194 1.028750 1.054250 1.061440 1.045670 0.704550 0.644252 0.714970 MIN: 0.96 MIN: 0.97 MIN: 0.98 MIN: 0.98 MIN: 0.66 MIN: 0.61 MIN: 0.67 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU d e f g h i j 400 800 1200 1600 2000 1641.92 1641.00 1636.76 1637.37 986.96 991.07 988.59 MIN: 1584.81 MIN: 1595.55 MIN: 1585.98 MIN: 1584.58 MIN: 949.02 MIN: 953.96 MIN: 950.96 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU d e f g h i j 400 800 1200 1600 2000 1642.51 1639.36 1636.44 1631.99 993.56 987.36 994.61 MIN: 1593.16 MIN: 1581.93 MIN: 1585.81 MIN: 1581.62 MIN: 955.42 MIN: 952.16 MIN: 960.2 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU d e f g h i j 400 800 1200 1600 2000 1643.99 1643.97 1642.35 1641.40 987.84 985.74 991.12 MIN: 1588.03 MIN: 1590.89 MIN: 1586.17 MIN: 1589.91 MIN: 952.67 MIN: 949.93 MIN: 954.92 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU d e f g h i j 200 400 600 800 1000 838.52 849.71 851.49 848.03 564.12 566.18 569.80 MIN: 796.3 MIN: 805.98 MIN: 807.97 MIN: 807.34 MIN: 545.13 MIN: 544.56 MIN: 548.08 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU d e f g h i j 200 400 600 800 1000 849.16 851.66 849.34 837.60 568.75 564.45 563.58 MIN: 806.44 MIN: 809.45 MIN: 805.8 MIN: 796.61 MIN: 546.79 MIN: 545.57 MIN: 543.65 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU d e f g h i j 200 400 600 800 1000 847.38 841.08 845.31 847.42 568.65 563.33 566.39 MIN: 806.33 MIN: 798.46 MIN: 803.78 MIN: 806.72 MIN: 547.26 MIN: 544.04 MIN: 542.7 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl