AMD EPYC 9334 32-Core testing with a Supermicro H13SSW (1.1 BIOS) and astdrmfb on AlmaLinux 9.2 via the Phoronix Test Suite.
a
Kernel Notes: Transparent Huge Pages: always
Compiler Notes: --build=x86_64-redhat-linux --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-host-bind-now --enable-host-pie --enable-initfini-array --enable-languages=c,c++,fortran,lto --enable-link-serialization=1 --enable-multilib --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=x86-64 --with-arch_64=x86-64-v2 --with-build-config=bootstrap-lto --with-gcc-major-version-only --with-linker-hash-style=gnu --with-tune=generic --without-cuda-driver --without-isl
Processor Notes: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa10113e
Java Notes: OpenJDK Runtime Environment (Red_Hat-11.0.20.0.8-1) (build 11.0.20+8-LTS)
Python Notes: Python 3.9.16
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
b c Processor: 2 x AMD EPYC 9254 24-Core @ 2.90GHz (48 Cores / 96 Threads), Motherboard: Supermicro H13DSH (1.5 BIOS), Memory: 24 x 32 GB DDR5-4800MT/s Samsung M321R4GA3BB6-CQKET, Disk: 2 x 1920GB SAMSUNG MZQL21T9HCJR-00A07, Graphics: astdrmfb
OS: AlmaLinux 9.2, Kernel: 5.14.0-284.25.1.el9_2.x86_64 (x86_64), Compiler: GCC 11.3.1 20221121, File-System: ext4, Screen Resolution: 1024x768
d
Kernel Notes: Transparent Huge Pages: always
Compiler Notes: --build=x86_64-redhat-linux --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-host-bind-now --enable-host-pie --enable-initfini-array --enable-languages=c,c++,fortran,lto --enable-link-serialization=1 --enable-multilib --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=x86-64 --with-arch_64=x86-64-v2 --with-build-config=bootstrap-lto --with-gcc-major-version-only --with-linker-hash-style=gnu --with-tune=generic --without-cuda-driver --without-isl
Processor Notes: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa101111
Java Notes: OpenJDK Runtime Environment (Red_Hat-11.0.20.0.8-1) (build 11.0.20+8-LTS)
Python Notes: Python 3.9.16
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
e f g Changed Processor to AMD EPYC 9124 16-Core @ 3.00GHz (16 Cores / 32 Threads).
Changed Motherboard to Supermicro H13SSW (1.1 BIOS).
Changed Memory to 12 x 64 GB DDR5-4800MT/s HMCG94MEBRA123N.
h i j Processor: AMD EPYC 9334 32-Core @ 2.70GHz (32 Cores / 64 Threads), Motherboard: Supermicro H13SSW (1.1 BIOS), Memory: 12 x 64 GB DDR5-4800MT/s HMCG94MEBRA123N, Disk: 2 x 1920GB SAMSUNG MZQL21T9HCJR-00A07, Graphics: astdrmfb, Monitor: DELL E207WFP
OS: AlmaLinux 9.2, Kernel: 5.14.0-284.25.1.el9_2.x86_64 (x86_64), Compiler: GCC 11.3.1 20221121, File-System: ext4, Screen Resolution: 1680x1050
OpenRadioss OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss and was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/ and https://github.com/OpenRadioss/ModelExchange/tree/main/Examples. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.
Model: Bumper Beam
a through j: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory
Model: Chrysler Neon 1M
a through j: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory
Model: Cell Phone Drop Test
a through j: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory
Model: Bird Strike on Windshield
a through j: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory
Model: Rubber O-Ring Seal Installation
a through j: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory
Model: INIVOL and Fluid Structure Interaction Drop Container
a through j: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory
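Every OpenRadioss run failed for the same reason: the dynamic linker could not resolve the Open MPI runtime library (libmpi.so.40) that the reference engine_linux64_gf_ompi binary links against. A minimal way to check whether that library is resolvable on the current search path is a small C probe using dlopen, as sketched below; the library name comes verbatim from the error above, while the program itself is a hypothetical helper rather than part of the test profile.

```c
/* mpi_lib_check.c - probe whether libmpi.so.40 can be resolved.
 * Build: gcc mpi_lib_check.c -o mpi_lib_check -ldl
 * Hypothetical diagnostic helper, not part of the OpenRadioss test profile. */
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    /* Library name taken from the OpenRadioss loader error. */
    void *handle = dlopen("libmpi.so.40", RTLD_NOW);
    if (handle == NULL) {
        /* dlerror() reports why resolution failed (not found, wrong ABI, ...). */
        fprintf(stderr, "libmpi.so.40 not resolvable: %s\n", dlerror());
        return 1;
    }
    printf("libmpi.so.40 resolved successfully\n");
    dlclose(handle);
    return 0;
}
```

If the probe fails, the usual remedies would be installing an Open MPI runtime that provides libmpi.so.40 or pointing LD_LIBRARY_PATH at the directory shipping that library alongside the reference binary.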
Remhos Remhos (REMap High-Order Solver) is a miniapp that solves the pure advection equations that are used to perform monotonic and conservative discontinuous field interpolation (remap) as part of the Eulerian phase in Arbitrary Lagrangian Eulerian (ALE) simulations. Learn more via the OpenBenchmarking.org test page.
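For reference, the pure advection equation that the remap phase integrates can be written in its textbook (non-conservative) form as below, where u is the field being remapped and v the prescribed velocity; this is the generic equation, not a statement about Remhos' particular discretization.

```latex
\frac{\partial u}{\partial t} + \mathbf{v} \cdot \nabla u = 0
```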
OpenBenchmarking.org Seconds, Fewer Is Better Remhos 1.0 Test: Sample Remap Example j i h g f e d c b a 7 14 21 28 35 20.36 20.30 20.44 30.75 30.73 30.85 30.76 16.24 16.79 16.35 1. (CXX) g++ options: -O3 -std=c++11 -lmfem -lHYPRE -lmetis -lrt -lmpi_cxx -lmpi
SPECFEM3D SPECFEM3D simulates acoustic (fluid), elastic (solid), coupled acoustic/elastic, poroelastic or seismic wave propagation in any type of conforming mesh of hexahedra. This test profile currently relies on CPU-based execution for SPECFEM3D and uses a variety of its built-in examples/models for benchmarking. Learn more via the OpenBenchmarking.org test page.
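As a reminder of what these models solve, the elastic wave equation governing the solid regions can be written in its standard strong form as below (density ρ, displacement u, stress tensor σ, elastic tensor c, source term f); the acoustic, poroelastic, and coupled cases add the corresponding constitutive laws. This is the generic formulation, not SPECFEM3D's spectral-element discretization.

```latex
\rho \, \frac{\partial^2 \mathbf{u}}{\partial t^2} = \nabla \cdot \boldsymbol{\sigma} + \mathbf{f},
\qquad \boldsymbol{\sigma} = \mathbf{c} : \nabla \mathbf{u}
```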
OpenBenchmarking.org Seconds, Fewer Is Better SPECFEM3D 4.0 Model: Mount St. Helens j i h g f e d c b a 7 14 21 28 35 15.10 15.19 15.03 27.70 26.87 26.80 26.74 11.33 11.32 11.02 1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi
OpenBenchmarking.org Seconds, Fewer Is Better SPECFEM3D 4.0 Model: Layered Halfspace j i h g f e d c b a 16 32 48 64 80 39.87 39.83 40.33 69.96 70.54 70.19 71.61 27.49 28.65 26.89 1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi
OpenBenchmarking.org Seconds, Fewer Is Better SPECFEM3D 4.0 Model: Tomographic Model j i h g f e d c b a 7 14 21 28 35 15.84 15.59 15.99 27.75 26.97 27.46 27.33 12.04 12.10 12.31 1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi
OpenBenchmarking.org Seconds, Fewer Is Better SPECFEM3D 4.0 Model: Homogeneous Halfspace j i h g f e d c b a 8 16 24 32 40 19.73 19.62 19.96 35.38 35.54 35.03 35.57 14.81 14.46 15.11 1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi
OpenBenchmarking.org Seconds, Fewer Is Better SPECFEM3D 4.0 Model: Water-layered Halfspace j i h g f e d c b a 14 28 42 56 70 37.81 37.51 36.45 62.81 61.28 62.33 62.44 27.06 29.46 26.99 1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi
nekRS nekRS is an open-source Navier-Stokes solver based on the spectral element method. nekRS supports both CPU and GPU/accelerator execution, though this test profile is currently configured for CPU execution. nekRS is part of Nek5000 from the Mathematics and Computer Science (MCS) division at Argonne National Laboratory. This nekRS benchmark is primarily relevant to large-core-count HPC servers and may otherwise be very time consuming on smaller systems. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org flops/rank, More Is Better nekRS 23.0 Input: Kershaw j i h g f e d c b a 2000M 4000M 6000M 8000M 10000M 9242080000 9269890000 9145900000 10500600000 9976450000 10264000000 10318900000 10826700000 11240300000 11106900000 1. (CXX) g++ options: -fopenmp -O2 -march=native -mtune=native -ftree-vectorize -rdynamic -lmpi_cxx -lmpi
OpenBenchmarking.org flops/rank, More Is Better nekRS 23.0 Input: TurboPipe Periodic j i h g f e d c b a 2000M 4000M 6000M 8000M 10000M 6835360000 6768070000 6761270000 7964910000 7955790000 7931010000 7934570000 6754170000 6757360000 6767710000 1. (CXX) g++ options: -fopenmp -O2 -march=native -mtune=native -ftree-vectorize -rdynamic -lmpi_cxx -lmpi
Embree Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs (and GPUs via SYCL), with support for instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Frames Per Second, More Is Better Embree 4.1 Binary: Pathtracer - Model: Crown j i h g f e d c b a 12 24 36 48 60 43.31 43.27 42.89 21.58 21.59 21.44 21.48 55.40 55.39 54.90 MIN: 42.83 / MAX: 44.44 MIN: 42.82 / MAX: 44.23 MIN: 42.47 / MAX: 43.88 MIN: 21.43 / MAX: 21.89 MIN: 21.45 / MAX: 21.84 MIN: 21.3 / MAX: 21.78 MIN: 21.32 / MAX: 21.8 MIN: 53.71 / MAX: 58.99 MIN: 54.02 / MAX: 57.64 MIN: 53.27 / MAX: 57.28
OpenBenchmarking.org Frames Per Second, More Is Better Embree 4.1 Binary: Pathtracer ISPC - Model: Crown j i h g f e d c b a 13 26 39 52 65 45.32 45.40 45.48 22.77 22.66 22.57 22.59 56.81 56.46 56.09 MIN: 44.74 / MAX: 46.66 MIN: 44.87 / MAX: 46.45 MIN: 44.92 / MAX: 46.64 MIN: 22.57 / MAX: 23.16 MIN: 22.45 / MAX: 22.99 MIN: 22.39 / MAX: 22.93 MIN: 22.39 / MAX: 22.98 MIN: 55.27 / MAX: 59.91 MIN: 54.53 / MAX: 59.89 MIN: 54.05 / MAX: 59.82
OpenBenchmarking.org Frames Per Second, More Is Better Embree 4.1 Binary: Pathtracer - Model: Asian Dragon j i h g f e d c b a 13 26 39 52 65 48.10 48.19 48.12 24.82 24.70 24.73 24.69 59.79 59.91 60.14 MIN: 47.86 / MAX: 48.8 MIN: 47.96 / MAX: 48.63 MIN: 47.91 / MAX: 48.91 MIN: 24.74 / MAX: 25 MIN: 24.63 / MAX: 24.84 MIN: 24.67 / MAX: 24.86 MIN: 24.62 / MAX: 24.84 MIN: 58.46 / MAX: 62.03 MIN: 58.66 / MAX: 61.96 MIN: 58.97 / MAX: 62
OpenBenchmarking.org Frames Per Second, More Is Better Embree 4.1 Binary: Pathtracer - Model: Asian Dragon Obj j i h g f e d c b a 12 24 36 48 60 43.42 43.57 43.51 22.19 22.15 22.16 22.26 53.69 53.81 53.57 MIN: 43.14 / MAX: 43.89 MIN: 43.34 / MAX: 44.03 MIN: 43.26 / MAX: 44.02 MIN: 22.12 / MAX: 22.33 MIN: 22.07 / MAX: 22.32 MIN: 22.08 / MAX: 22.35 MIN: 22.18 / MAX: 22.42 MIN: 52.63 / MAX: 55.24 MIN: 52.72 / MAX: 55.86 MIN: 52.17 / MAX: 55.38
OpenBenchmarking.org Frames Per Second, More Is Better Embree 4.1 Binary: Pathtracer ISPC - Model: Asian Dragon j i h g f e d c b a 15 30 45 60 75 54.52 54.51 54.39 28.48 28.32 28.31 28.36 67.50 67.20 67.34 MIN: 54.24 / MAX: 55.1 MIN: 54.22 / MAX: 55.08 MIN: 54.12 / MAX: 55.15 MIN: 28.37 / MAX: 28.69 MIN: 28.23 / MAX: 28.55 MIN: 28.21 / MAX: 28.56 MIN: 28.26 / MAX: 28.59 MIN: 65.64 / MAX: 71.17 MIN: 65.48 / MAX: 70.41 MIN: 65.61 / MAX: 70.54
OpenBenchmarking.org Frames Per Second, More Is Better Embree 4.1 Binary: Pathtracer ISPC - Model: Asian Dragon Obj j i h g f e d c b a 13 26 39 52 65 46.43 46.36 46.38 23.88 23.94 23.94 23.87 56.93 56.69 56.49 MIN: 46.17 / MAX: 47.23 MIN: 46.13 / MAX: 46.98 MIN: 46.09 / MAX: 47.08 MIN: 23.79 / MAX: 24.08 MIN: 23.84 / MAX: 24.16 MIN: 23.84 / MAX: 24.18 MIN: 23.78 / MAX: 24.08 MIN: 55.56 / MAX: 59.67 MIN: 55.42 / MAX: 58.97 MIN: 55.29 / MAX: 58.38
SVT-AV1 OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.7 Encoder Mode: Preset 4 - Input: Bosphorus 4K j i h g f e d c b a 1.1707 2.3414 3.5121 4.6828 5.8535 5.160 5.079 5.075 4.143 4.138 4.114 4.107 5.049 5.149 5.203 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.7 Encoder Mode: Preset 8 - Input: Bosphorus 4K j i h g f e d c b a 20 40 60 80 100 99.35 98.57 99.10 67.81 67.39 67.72 66.99 90.42 91.32 90.81 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.7 Encoder Mode: Preset 12 - Input: Bosphorus 4K j i h g f e d c b a 50 100 150 200 250 224.41 227.87 230.03 160.32 161.85 162.61 163.19 163.06 166.38 163.46 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.7 Encoder Mode: Preset 13 - Input: Bosphorus 4K j i h g f e d c b a 50 100 150 200 250 228.77 227.21 223.42 161.32 160.80 162.05 161.85 161.50 166.69 163.01 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.7 Encoder Mode: Preset 4 - Input: Bosphorus 1080p j i h g f e d c b a 3 6 9 12 15 12.18 12.26 12.23 11.02 10.74 10.98 10.91 12.62 12.59 12.48 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.7 Encoder Mode: Preset 8 - Input: Bosphorus 1080p j i h g f e d c b a 30 60 90 120 150 149.45 149.05 151.44 118.48 118.49 119.31 118.95 143.55 138.34 141.22 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.7 Encoder Mode: Preset 12 - Input: Bosphorus 1080p j i h g f e d c b a 130 260 390 520 650 584.12 591.32 580.16 528.53 521.52 525.17 526.22 431.90 427.69 422.99 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.7 Encoder Mode: Preset 13 - Input: Bosphorus 1080p j i h g f e d c b a 160 320 480 640 800 726.50 728.54 726.89 586.75 585.37 597.01 604.99 516.91 542.61 510.36 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OSPRay Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Items Per Second, More Is Better OSPRay 2.12 Benchmark: particle_volume/ao/real_time j i h g f e d c b a 4 8 12 16 20 10.69320 10.74300 10.79880 5.57553 5.57320 5.54107 5.57469 15.98720 15.97850 15.98600
OpenBenchmarking.org Items Per Second, More Is Better OSPRay 2.12 Benchmark: particle_volume/scivis/real_time j i h g f e d c b a 4 8 12 16 20 10.76640 10.79040 10.78320 5.56539 5.55581 5.56353 5.57001 15.97780 15.98880 15.95280
OpenBenchmarking.org Items Per Second, More Is Better OSPRay 2.12 Benchmark: particle_volume/pathtracer/real_time j i h g f e d c b a 50 100 150 200 250 192.70 192.40 192.65 151.68 151.78 151.51 151.91 214.14 214.07 215.10
OpenBenchmarking.org Items Per Second, More Is Better OSPRay 2.12 Benchmark: gravity_spheres_volume/dim_512/ao/real_time j i h g f e d c b a 4 8 12 16 20 10.87430 10.88290 10.85240 5.62278 5.61454 5.62040 5.60747 14.13990 14.17830 14.23690
OpenBenchmarking.org Items Per Second, More Is Better OSPRay 2.12 Benchmark: gravity_spheres_volume/dim_512/scivis/real_time j i h g f e d c b a 4 8 12 16 20 10.57400 10.58480 10.59130 5.47725 5.45227 5.46153 5.45329 13.83170 13.76660 13.87390
OpenBenchmarking.org Items Per Second, More Is Better OSPRay 2.12 Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time j i h g f e d c b a 4 8 12 16 20 12.51040 12.49870 12.54380 6.60085 6.59563 6.58270 6.58745 16.53500 16.43650 16.34680
Build: allmodconfig
a through j: The test quit with a non-zero exit status.
Liquid-DSP LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.
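The Liquid-DSP results that follow report multi-threaded sample throughput at the listed buffer and filter lengths. As a rough illustration of the kind of per-sample FIR work such a filter length implies, a single-threaded complex FIR filter in liquid-dsp looks roughly like the sketch below; the 57-tap length and 256-sample buffer match benchmarked parameters, while the coefficient values and input samples are placeholders and this is not the test profile's actual kernel.

```c
/* fir_sketch.c - minimal liquid-dsp complex FIR filter loop.
 * Build (assuming liquid-dsp is installed): gcc fir_sketch.c -o fir_sketch -lliquid -lm
 * Illustrative only; taps and input are placeholders. */
#include <complex.h>
#include <liquid/liquid.h>

int main(void)
{
    unsigned int h_len   = 57;   /* filter length, as in one benchmarked case  */
    unsigned int buf_len = 256;  /* buffer length, as in the benchmarked cases */
    float h[57];                 /* real coefficients (crcf = complex in/out, real taps) */
    float complex x[256];
    float complex y;

    /* Placeholder taps (simple moving average) and ramp input. */
    for (unsigned int i = 0; i < h_len; i++)   h[i] = 1.0f / (float)h_len;
    for (unsigned int i = 0; i < buf_len; i++) x[i] = (float)i;

    /* Create the filter, push one buffer of samples through it, tear it down. */
    firfilt_crcf q = firfilt_crcf_create(h, h_len);
    for (unsigned int i = 0; i < buf_len; i++) {
        firfilt_crcf_push(q, x[i]);   /* shift a new input sample into the window */
        firfilt_crcf_execute(q, &y);  /* compute one filtered output sample       */
    }
    firfilt_crcf_destroy(q);
    return 0;
}
```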
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 1 - Buffer Length: 256 - Filter Length: 32 j i h g f e d c b a 8M 16M 24M 32M 40M 37120000 37141000 37145000 35236000 35271000 35315000 35228000 39453000 39486000 39499000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 1 - Buffer Length: 256 - Filter Length: 57 j i h g f e d c b a 13M 26M 39M 52M 65M 55715000 55841000 51443000 52854000 52879000 52827000 52665000 57519000 59296000 59401000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 2 - Buffer Length: 256 - Filter Length: 32 j i h g f e d c b a 17M 34M 51M 68M 85M 72400000 72491000 72468000 68678000 68861000 68846000 67054000 76924000 77019000 77181000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 2 - Buffer Length: 256 - Filter Length: 57 j i h g f e d c b a 30M 60M 90M 120M 150M 111370000 109700000 105280000 104800000 105740000 105480000 105650000 118550000 114010000 117490000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 4 - Buffer Length: 256 - Filter Length: 32 j i h g f e d c b a 30M 60M 90M 120M 150M 145740000 145920000 145820000 138460000 138580000 138620000 138600000 153670000 153690000 153850000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 4 - Buffer Length: 256 - Filter Length: 57 j i h g f e d c b a 40M 80M 120M 160M 200M 200120000 198640000 200930000 190750000 189880000 191230000 188930000 194510000 196590000 196220000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 8 - Buffer Length: 256 - Filter Length: 32 j i h g f e d c b a 70M 140M 210M 280M 350M 292490000 292430000 292620000 277410000 276390000 277780000 278030000 306760000 305110000 307540000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 8 - Buffer Length: 256 - Filter Length: 57 j i h g f e d c b a 80M 160M 240M 320M 400M 381660000 377250000 378330000 357810000 350450000 357990000 363310000 366990000 366930000 369430000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 1 - Buffer Length: 256 - Filter Length: 512 j i h g f e d c b a 3M 6M 9M 12M 15M 12899000 13363000 12909000 12256000 12681000 12366000 12683000 14225000 14021000 13909000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 16 - Buffer Length: 256 - Filter Length: 32 j i h g f e d c b a 130M 260M 390M 520M 650M 585670000 583870000 585850000 543050000 545020000 545140000 545360000 603650000 602470000 594230000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 16 - Buffer Length: 256 - Filter Length: 57 j i h g f e d c b a 160M 320M 480M 640M 800M 737120000 735800000 741420000 682070000 693340000 692920000 689150000 674930000 692760000 699740000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 2 - Buffer Length: 256 - Filter Length: 512 j i h g f e d c b a 6M 12M 18M 24M 30M 26378000 25910000 25648000 22727000 25199000 25207000 24627000 28227000 27736000 27901000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 32 - Buffer Length: 256 - Filter Length: 32 j i h g f e d c b a 300M 600M 900M 1200M 1500M 1169900000 1172900000 1172400000 1047100000 1041900000 1046600000 1047100000 1184800000 1190300000 1183500000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 32 - Buffer Length: 256 - Filter Length: 57 j i h g f e d c b a 300M 600M 900M 1200M 1500M 1369800000 1394100000 1352900000 1033400000 1024600000 1032000000 1035000000 1254800000 1214200000 1192100000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 4 - Buffer Length: 256 - Filter Length: 512 j i h g f e d c b a 12M 24M 36M 48M 60M 49781000 52129000 52890000 49556000 49977000 50380000 50258000 55165000 55588000 52911000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 64 - Buffer Length: 256 - Filter Length: 32 j i h g f e d c b a 500M 1000M 1500M 2000M 2500M 2056500000 2052600000 2057500000 1056200000 1057100000 1057500000 1059500000 2206800000 2212100000 2207700000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 64 - Buffer Length: 256 - Filter Length: 57 j i h g f e d c b a 400M 800M 1200M 1600M 2000M 1922300000 1899700000 1916800000 1099300000 1094600000 1095400000 1093300000 2010300000 2001900000 1994400000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 8 - Buffer Length: 256 - Filter Length: 512 j i h g f e d c b a 20M 40M 60M 80M 100M 104220000 104780000 104440000 100170000 99441000 97005000 99594000 109140000 108080000 109870000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 96 - Buffer Length: 256 - Filter Length: 32 j i h g f e d c b a 600M 1200M 1800M 2400M 3000M 2071000000 2069900000 2068300000 1065700000 1065300000 1065100000 1065200000 2999800000 2995400000 3005800000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 96 - Buffer Length: 256 - Filter Length: 57 j i h g f e d c b a 600M 1200M 1800M 2400M 3000M 1999500000 1997000000 1995500000 1118200000 1120500000 1117800000 1120800000 2564900000 2571100000 2559800000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 16 - Buffer Length: 256 - Filter Length: 512 j i h g f e d c b a 50M 100M 150M 200M 250M 209490000 209720000 207590000 194670000 194500000 196040000 193850000 214910000 216150000 216080000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 32 - Buffer Length: 256 - Filter Length: 512 j i h g f e d c b a 90M 180M 270M 360M 450M 393270000 393200000 391680000 274070000 273390000 273480000 273760000 424400000 429620000 425810000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 64 - Buffer Length: 256 - Filter Length: 512 j i h g f e d c b a 130M 260M 390M 520M 650M 511070000 512310000 512020000 281730000 283030000 281830000 282920000 622630000 610950000 622560000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 96 - Buffer Length: 256 - Filter Length: 512 j i h g f e d c b a 150M 300M 450M 600M 750M 519440000 520140000 519700000 286530000 285920000 285880000 286250000 715030000 718140000 711640000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
TiDB Community Server This is a PingCAP TiDB Community Server benchmark facilitated using the sysbench OLTP database benchmarks. Learn more via the OpenBenchmarking.org test page.
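TiDB is wire-compatible with MySQL, so the sysbench OLTP workloads below drive it through an ordinary MySQL client connection. The sketch that follows shows, in C with libmysqlclient, the shape of one point-select round trip of the sort these tests repeat; the host, credentials, and sbtest-style table and column names are placeholders, not values from this benchmark run.

```c
/* point_select_sketch.c - one MySQL-protocol point select against a TiDB server.
 * Build (assuming libmysqlclient is installed): gcc point_select_sketch.c -o point_select_sketch -lmysqlclient
 * Connection details and the sbtest-style schema are placeholders. */
#include <mysql/mysql.h>
#include <stdio.h>

int main(void)
{
    MYSQL *conn = mysql_init(NULL);
    if (conn == NULL) return 1;

    /* TiDB speaks the MySQL protocol; 4000 is its default port. */
    if (mysql_real_connect(conn, "127.0.0.1", "root", "", "sbtest", 4000, NULL, 0) == NULL) {
        fprintf(stderr, "connect failed: %s\n", mysql_error(conn));
        mysql_close(conn);
        return 1;
    }

    /* The kind of single-row lookup an OLTP point-select workload repeats. */
    if (mysql_query(conn, "SELECT c FROM sbtest1 WHERE id = 42") == 0) {
        MYSQL_RES *res = mysql_store_result(conn);
        if (res != NULL) {
            MYSQL_ROW row = mysql_fetch_row(res);
            if (row != NULL) printf("c = %s\n", row[0]);
            mysql_free_result(res);
        }
    }

    mysql_close(conn);
    return 0;
}
```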
OpenBenchmarking.org Queries Per Second, More Is Better TiDB Community Server 7.3 Test: oltp_read_write - Threads: 1 j i h g f e c b a 700 1400 2100 2800 3500 3479 3485 3480 3195 3218 3209 2485 2510 2540
Test: oltp_read_write - Threads: 1
d: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
OpenBenchmarking.org Queries Per Second, More Is Better TiDB Community Server 7.3 Test: oltp_read_write - Threads: 64 i h g f e d c b a 20K 40K 60K 80K 100K 94261 95579 55301 54956 53893 55334 78469 80183 79090
Test: oltp_read_write - Threads: 64
j: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
OpenBenchmarking.org Queries Per Second, More Is Better TiDB Community Server 7.3 Test: oltp_point_select - Threads: 1 i h f e d c b a 1300 2600 3900 5200 6500 6165 6125 5954 5976 5898 4471 4405 4331
Test: oltp_point_select - Threads: 1
g: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
j: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
OpenBenchmarking.org Queries Per Second, More Is Better TiDB Community Server 7.3 Test: oltp_read_write - Threads: 128 j i h g f e d b a 20K 40K 60K 80K 100K 105802 104620 104180 59944 60310 60145 59727 89099 85757
Test: oltp_read_write - Threads: 128
c: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
OpenBenchmarking.org Queries Per Second, More Is Better TiDB Community Server 7.3 Test: oltp_update_index - Threads: 1 j i h g f e d c a 400 800 1200 1600 2000 1660 1656 1666 1481 1483 1490 1479 1189 1212
Test: oltp_update_index - Threads: 1
b: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
OpenBenchmarking.org Queries Per Second, More Is Better TiDB Community Server 7.3 Test: oltp_point_select - Threads: 16 j i h g f e c b 20K 40K 60K 80K 100K 87412 86471 87218 69923 70105 70250 65406 67515
Test: oltp_point_select - Threads: 16
a: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
d: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
OpenBenchmarking.org Queries Per Second, More Is Better TiDB Community Server 7.3 Test: oltp_point_select - Threads: 32 j i h g f e d b a 30K 60K 90K 120K 150K 137618 138173 138538 96840 97368 96907 98149 106180 104627
Test: oltp_point_select - Threads: 32
c: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
OpenBenchmarking.org Queries Per Second, More Is Better TiDB Community Server 7.3 Test: oltp_point_select - Threads: 64 j i g f e d b a 40K 80K 120K 160K 200K 180581 180179 118549 119092 118657 115675 130802 127567
Test: oltp_point_select - Threads: 64
c: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
h: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
OpenBenchmarking.org Queries Per Second, More Is Better TiDB Community Server 7.3 Test: oltp_update_index - Threads: 16 j i h g f e d c a 4K 8K 12K 16K 20K 16972 16817 16965 12627 12692 12567 12622 12681 12558
Test: oltp_update_index - Threads: 16
b: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
OpenBenchmarking.org Queries Per Second, More Is Better TiDB Community Server 7.3 Test: oltp_update_index - Threads: 32 j i h g e d c b a 5K 10K 15K 20K 25K 24366 23773 24286 17135 17117 17612 17565 17817 18361
Test: oltp_update_index - Threads: 32
f: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
OpenBenchmarking.org Queries Per Second, More Is Better TiDB Community Server 7.3 Test: oltp_update_index - Threads: 64 j i h f e d c b 7K 14K 21K 28K 35K 30638 30522 31332 21067 21271 21108 23324 24371
Test: oltp_update_index - Threads: 64
a: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
g: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
OpenBenchmarking.org Queries Per Second, More Is Better TiDB Community Server 7.3 Test: oltp_point_select - Threads: 128 j i h f e d c b a 40K 80K 120K 160K 200K 197738 198137 200327 130389 129904 129492 149962 159728 159242
Test: oltp_point_select - Threads: 128
g: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
OpenBenchmarking.org Queries Per Second, More Is Better TiDB Community Server 7.3 Test: oltp_update_index - Threads: 128 j i h g f e c b a 8K 16K 24K 32K 40K 36644 36141 37126 24574 24830 24611 26546 27464 27087
Test: oltp_update_index - Threads: 128
d: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
OpenBenchmarking.org Queries Per Second, More Is Better TiDB Community Server 7.3 Test: oltp_update_non_index - Threads: 1 i h g f e d c b a 400 800 1200 1600 2000 1848 1861 1705 1697 1708 1693 1381 1312 1328
Test: oltp_update_non_index - Threads: 1
j: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
OpenBenchmarking.org Queries Per Second, More Is Better TiDB Community Server 7.3 Test: oltp_update_non_index - Threads: 16 j i h g e d b a 5K 10K 15K 20K 25K 23543 23541 23794 18735 18557 18563 18068 18095
Test: oltp_update_non_index - Threads: 16
c: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
f: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
OpenBenchmarking.org Queries Per Second, More Is Better TiDB Community Server 7.3 Test: oltp_update_non_index - Threads: 32 j i h f e d b a 8K 16K 24K 32K 40K 35655 35650 36041 26695 26285 26273 28914 28735
Test: oltp_update_non_index - Threads: 32
c: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
g: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
OpenBenchmarking.org Queries Per Second, More Is Better TiDB Community Server 7.3 Test: oltp_update_non_index - Threads: 128 j h g f e c a 14K 28K 42K 56K 70K 64066 65816 41695 41424 42138 52865 51105
Test: oltp_update_non_index - Threads: 128
b: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
d: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
i: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!
Neural Magic DeepSparse OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.5 Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream j i h g f e d c b a 9 18 27 36 45 25.73 25.63 25.71 13.07 13.09 12.94 13.07 39.45 39.47 39.50
OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.5 Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream j i h g f e d c b a 130 260 390 520 650 614.23 612.98 613.05 607.16 607.82 607.91 606.10 605.92 605.73 605.04
OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.5 Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream j i h g f e d c b a 300 600 900 1200 1500 1002.81 999.46 1003.52 509.14 508.21 511.41 508.09 1418.90 1403.07 1417.07
OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.5 Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream j i h g f e d c b a 4 8 12 16 20 15.94 15.99 15.92 15.69 15.72 15.62 15.72 16.89 17.07 16.91
OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.5 Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream j i h g f e d c b a 150 300 450 600 750 507.36 508.36 507.63 257.28 257.50 257.89 257.27 671.26 672.37 672.46
OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.5 Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream j i h g f e d c b a 8 16 24 32 40 31.50 31.44 31.49 31.06 31.03 30.99 31.05 35.68 35.64 35.63
OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.5 Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream j i h g f e d c b a 40 80 120 160 200 136.57 136.68 136.96 70.93 71.04 71.27 71.14 201.54 201.25 201.39
OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.5 Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream j i h g f e d c b a 30 60 90 120 150 116.85 116.89 116.51 112.48 112.41 112.06 112.25 118.78 118.95 118.75
OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.5 Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream j i h g f e d c b a 1100 2200 3300 4400 5500 3324.70 3327.19 3324.82 1602.52 1600.53 1599.15 1599.21 5153.66 5138.83 5137.01
OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.5 Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream j i h g f e d c b a 1.1241 2.2482 3.3723 4.4964 5.6205 4.8016 4.8015 4.8005 4.9787 4.9877 4.9859 4.9960 4.6348 4.6476 4.6508
OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.5 Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream j i h g f e d c b a 110 220 330 440 550 494.01 494.50 495.22 495.60 494.22 494.26 493.60 507.48 487.36 485.72
OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.5 Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream j i h g f e d c b a 110 220 330 440 550 326.90 326.78 326.93 163.23 162.90 162.93 163.56 487.05 489.45 489.12
OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.5 Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream j i h g f e d c b a 50 100 150 200 250 145.37 145.65 145.48 72.69 72.57 72.66 72.46 218.52 219.53 218.15
OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.5 Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream j i h g f e d c b a 20 40 60 80 100 109.86 109.74 109.79 109.90 110.00 109.97 110.11 109.58 109.23 109.80
OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.5 Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream j i h g f e d c b a 70 140 210 280 350 216.51 216.22 216.24 109.22 109.09 109.09 108.91 321.51 321.18 322.25
OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.5 Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream j i h g f e d c b a 80 160 240 320 400 336.68 336.77 337.96 325.51 324.96 325.74 325.88 347.37 347.22 347.66
OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.5 Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream j i h g f e d c b a 160 320 480 640 800 487.23 486.71 488.06 239.52 240.16 240.23 240.55 716.14 717.97 718.92
OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.5 Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream j i h g f e d c b a 40 80 120 160 200 109.41 109.46 109.03 55.43 55.54 55.46 55.61 164.61 159.06 158.92
OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.5 Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream j i h g f e d c b a 30 60 90 120 150 145.90 145.83 146.12 144.11 143.69 144.10 143.76 145.26 150.61 150.59
OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.5 Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream j i h g f e d c b a 9 18 27 36 45 25.77 25.70 25.79 13.06 13.09 13.12 13.13 39.42 39.45 39.44
OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.5 Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream j i h g f e d c b a 130 260 390 520 650 612.52 613.64 613.16 608.72 606.79 606.76 606.58 605.88 606.67 605.76
Blender OpenBenchmarking.org Seconds, Fewer Is Better Blender 3.6 Blend File: BMW27 - Compute: CPU-Only j i h g f e d c b a 16 32 48 64 80 38.51 38.57 38.40 72.01 71.96 71.44 72.00 26.12 26.24 26.20
OpenBenchmarking.org Seconds, Fewer Is Better Blender 3.6 Blend File: Classroom - Compute: CPU-Only j i h g f e d c b a 40 80 120 160 200 99.29 99.35 99.54 183.29 181.70 182.56 182.99 66.72 66.64 66.42
OpenBenchmarking.org Seconds, Fewer Is Better Blender 3.6 Blend File: Fishy Cat - Compute: CPU-Only j i h g f e d c b a 20 40 60 80 100 48.82 48.69 49.10 90.63 90.26 90.31 90.03 33.03 33.17 33.22
OpenBenchmarking.org Seconds, Fewer Is Better Blender 3.6 Blend File: Barbershop - Compute: CPU-Only j i h g f e d c b a 140 280 420 560 700 351.38 351.66 352.40 669.09 667.87 670.64 670.87 254.72 255.30 254.88
OpenBenchmarking.org Seconds, Fewer Is Better Blender 3.6 Blend File: Pabellon Barcelona - Compute: CPU-Only j i h g f e d c b a 50 100 150 200 250 119.04 119.42 119.30 224.12 223.95 224.10 224.15 80.41 80.76 80.54
OpenVINO OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.1 Model: Face Detection FP16 - Device: CPU j i h g f e d c b a 7 14 21 28 35 19.84 19.83 19.82 10.48 10.48 10.47 10.47 30.43 30.44 30.41 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.1 Model: Face Detection FP16 - Device: CPU j i h g f e d c b a 200 400 600 800 1000 805.38 804.75 804.58 759.92 760.57 761.16 761.59 393.37 393.23 393.60 MIN: 783.22 / MAX: 819.23 MIN: 776.93 / MAX: 819.19 MIN: 772.52 / MAX: 820.63 MIN: 737.63 / MAX: 771.07 MIN: 741.4 / MAX: 770.88 MIN: 741.99 / MAX: 776.56 MIN: 738.34 / MAX: 772.36 MIN: 362.57 / MAX: 433.51 MIN: 360.87 / MAX: 433.13 MIN: 363.29 / MAX: 431.61 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.1 Model: Person Detection FP16 - Device: CPU j i h g f e d c b a 60 120 180 240 300 194.82 193.80 197.94 107.04 107.39 107.27 107.02 282.67 284.22 282.55 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.1 Model: Person Detection FP16 - Device: CPU j i h g f e d c b a 20 40 60 80 100 82.09 82.49 80.77 74.71 74.43 74.50 74.71 42.43 42.20 42.44 MIN: 68.73 / MAX: 91.87 MIN: 70.77 / MAX: 94.62 MIN: 69.54 / MAX: 95.42 MIN: 66.29 / MAX: 79.68 MIN: 65.68 / MAX: 83.49 MIN: 66.5 / MAX: 80.32 MIN: 66.12 / MAX: 81.09 MIN: 36.31 / MAX: 62.36 MIN: 36.84 / MAX: 61.97 MIN: 36.14 / MAX: 61.98 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.1 Model: Person Detection FP32 - Device: CPU j i h g f e d c b a 60 120 180 240 300 196.26 197.66 196.07 107.24 106.76 107.24 106.90 284.31 284.99 283.97 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.1 Model: Person Detection FP32 - Device: CPU j i h g f e d c b a 20 40 60 80 100 81.50 80.88 81.58 74.58 74.87 74.54 74.81 42.19 42.09 42.24 MIN: 68.9 / MAX: 92.66 MIN: 39.72 / MAX: 92.54 MIN: 68.74 / MAX: 95.81 MIN: 67.63 / MAX: 78.73 MIN: 66.72 / MAX: 80.96 MIN: 65.97 / MAX: 82.9 MIN: 66.88 / MAX: 80.7 MIN: 36.21 / MAX: 65.64 MIN: 37.13 / MAX: 58.71 MIN: 36.59 / MAX: 61.56 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.1 Model: Vehicle Detection FP16 - Device: CPU j i h g f e d c b a 400 800 1200 1600 2000 1483.25 1488.04 1481.71 793.90 791.74 793.75 797.64 2029.79 2028.01 2033.17 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.1 Model: Vehicle Detection FP16 - Device: CPU j i h g f e d c b a 3 6 9 12 15 10.77 10.74 10.78 10.06 10.09 10.06 10.01 5.90 5.91 5.89 MIN: 6 / MAX: 18.16 MIN: 5.92 / MAX: 24.44 MIN: 5.59 / MAX: 21.13 MIN: 5.2 / MAX: 19.38 MIN: 5.4 / MAX: 19.17 MIN: 5.29 / MAX: 19.07 MIN: 5.7 / MAX: 19.52 MIN: 4.83 / MAX: 13.4 MIN: 4.84 / MAX: 12.9 MIN: 4.67 / MAX: 18.4 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.1 Model: Face Detection FP16-INT8 - Device: CPU j i h g f e d c b a 13 26 39 52 65 37.48 37.57 37.82 20.05 20.01 20.00 20.03 56.02 56.06 56.01 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.1 Model: Face Detection FP16-INT8 - Device: CPU j i h g f e d c b a 90 180 270 360 450 425.88 425.68 421.80 398.13 399.24 398.91 398.52 213.79 213.62 213.94 MIN: 404.76 / MAX: 434.06 MIN: 402.91 / MAX: 432.03 MIN: 269.94 / MAX: 598.22 MIN: 379.09 / MAX: 404.71 MIN: 387.9 / MAX: 408.93 MIN: 386.2 / MAX: 407.29 MIN: 382.1 / MAX: 404.98 MIN: 197.29 / MAX: 236.32 MIN: 197.2 / MAX: 235.23 MIN: 201.64 / MAX: 242.71 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.1 Model: Face Detection Retail FP16 - Device: CPU j i h g f e d c b a 1300 2600 3900 5200 6500 4892.27 4848.42 4803.65 2557.66 2539.97 2562.54 2564.78 5840.53 5836.27 5882.91 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenVINO 2023.1 - Model: Face Detection Retail FP16 - Device: CPU - ms, fewer is better: j 3.26, i 3.29, h 3.32, g 3.12, f 3.14, e 3.11, d 3.11, c 2.05, b 2.05, a 2.03 (min/max, same order: 2.1/14.3, 1.89/12.75, 2.11/12.58, 1.88/11.92, 1.93/11.65, 1.93/9.72, 1.94/11.57, 1.62/6.96, 1.6/7, 1.66/7.51)
OpenVINO 2023.1 - Model: Road Segmentation ADAS FP16 - Device: CPU - FPS, more is better: j 648.27, i 642.90, h 643.80, g 341.36, f 343.49, e 342.81, d 344.67, c 757.38, b 750.49, a 748.44
OpenVINO 2023.1 - Model: Road Segmentation ADAS FP16 - Device: CPU - ms, fewer is better: j 24.67, i 24.87, h 24.84, g 23.42, f 23.28, e 23.32, d 23.20, c 15.83, b 15.98, a 16.02 (min/max, same order: 20.15/37.89, 17/33.34, 16.93/33.96, 20.46/32.43, 15.73/30.77, 19.49/30.99, 15.1/31.6, 12.38/32.97, 12.74/33.34, 12.5/33.94)
OpenVINO 2023.1 - Model: Vehicle Detection FP16-INT8 - Device: CPU - FPS, more is better: j 2254.72, i 2264.93, h 2252.70, g 1175.58, f 1180.85, e 1174.60, d 1175.67, c 2881.14, b 2880.58, a 2873.24
OpenVINO 2023.1 - Model: Vehicle Detection FP16-INT8 - Device: CPU - ms, fewer is better: j 7.08, i 7.05, h 7.09, g 6.79, f 6.76, e 6.80, d 6.79, c 4.16, b 4.16, a 4.17 (min/max, same order: 4.35/16.86, 4.44/16.57, 4.43/16.88, 3.79/15.41, 4.04/15.47, 4.04/15.37, 3.8/15.48, 3.43/10.26, 3.42/11.2, 3.39/10.07)
OpenVINO 2023.1 - Model: Weld Porosity Detection FP16 - Device: CPU - FPS, more is better: j 1964.71, i 1964.61, h 1963.90, g 1039.37, f 1038.47, e 1039.82, d 1039.61, c 2987.33, b 2986.46, a 2945.26
OpenVINO 2023.1 - Model: Weld Porosity Detection FP16 - Device: CPU - ms, fewer is better: j 16.27, i 16.27, h 16.27, g 15.37, f 15.38, e 15.36, d 15.36, c 16.02, b 16.02, a 16.26 (min/max, same order: 8.44/25.48, 8.5/25.86, 8.92/25.52, 7.99/23.98, 7.99/24, 8.02/23.81, 8.08/24.34, 14.63/33.79, 14.41/30.55, 14.71/28.14)
OpenVINO 2023.1 - Model: Face Detection Retail FP16-INT8 - Device: CPU - FPS, more is better: j 6638.24, i 6646.91, h 6653.86, g 3533.64, f 3548.78, e 3544.18, d 3540.88, c 9845.27, b 9849.07, a 9837.58
OpenVINO 2023.1 - Model: Face Detection Retail FP16-INT8 - Device: CPU - ms, fewer is better: j 4.81, i 4.81, h 4.80, g 4.52, f 4.50, e 4.51, d 4.51, c 4.86, b 4.85, a 4.86 (min/max, same order: 3.23/14.45, 3.23/15.04, 3.23/14.95, 2.77/13.57, 2.98/13.86, 2.96/16.06, 2.98/13.05, 4.34/12.27, 4.25/12.86, 4.23/12.81)
OpenVINO 2023.1 - Model: Road Segmentation ADAS FP16-INT8 - Device: CPU - FPS, more is better: j 709.75, i 709.73, h 710.69, g 372.26, f 369.26, e 373.64, d 370.57, c 849.30, b 854.51, a 842.91
OpenVINO 2023.1 - Model: Road Segmentation ADAS FP16-INT8 - Device: CPU - ms, fewer is better: j 22.53, i 22.53, h 22.50, g 21.47, f 21.65, e 21.40, d 21.57, c 14.12, b 14.03, a 14.23 (min/max, same order: 18.74/31.08, 19.09/30.15, 13.76/30.22, 17.62/28.13, 19.48/24.27, 19.07/25.3, 19.5/24.76, 11.51/26.04, 11.59/26.04, 11.51/25.86)
OpenVINO 2023.1 - Model: Machine Translation EN To DE FP16 - Device: CPU - FPS, more is better: j 233.88, i 233.65, h 234.18, g 123.41, f 124.30, e 123.61, d 124.12, c 317.33, b 317.28, a 317.22
OpenVINO 2023.1 - Model: Machine Translation EN To DE FP16 - Device: CPU - ms, fewer is better: j 68.35, i 68.42, h 68.27, g 64.77, f 64.31, e 64.68, d 64.41, c 37.79, b 37.79, a 37.80 (min/max, same order: 55.82/74.84, 56.13/75.77, 56.41/79.96, 55.8/69.46, 50.85/70.77, 38.02/72.52, 37.44/73.04, 33.29/54.88, 32.97/53.7, 33.35/56.45)
OpenVINO 2023.1 - Model: Weld Porosity Detection FP16-INT8 - Device: CPU - FPS, more is better: j 3783.65, i 3777.75, h 3780.80, g 2006.09, f 2004.76, e 2007.53, d 2013.77, c 5802.65, b 5780.44, a 5776.94
OpenVINO 2023.1 - Model: Weld Porosity Detection FP16-INT8 - Device: CPU - ms, fewer is better: j 8.45, i 8.46, h 8.46, g 7.96, f 7.97, e 7.96, d 7.93, c 8.24, b 8.27, a 8.28 (min/max, same order: 4.46/17.31, 4.49/17.8, 4.67/18, 4.19/14.2, 4.37/16.86, 4.19/16.59, 4.2/16.92, 7.62/23.32, 7.37/25.18, 7.44/23.35)
OpenVINO 2023.1 - Model: Person Vehicle Bike Detection FP16 - Device: CPU - FPS, more is better: j 2194.36, i 2224.08, h 2180.83, g 1031.60, f 1041.87, e 1028.64, d 1036.99, c 2455.51, b 2450.26, a 2454.09
OpenVINO 2023.1 - Model: Person Vehicle Bike Detection FP16 - Device: CPU - ms, fewer is better: j 7.28, i 7.18, h 7.33, g 7.74, f 7.67, e 7.77, d 7.70, c 4.88, b 4.89, a 4.88 (min/max, same order: 5.53/15.78, 4.98/16.11, 5.45/15.94, 6.06/12.66, 5.32/16.6, 5.42/16.35, 5.51/16.06, 3.9/14.94, 3.93/13.44, 3.95/16.05)
OpenVINO 2023.1 - Model: Handwritten English Recognition FP16 - Device: CPU - FPS, more is better: j 1012.98, i 1036.11, h 1034.35, g 538.01, f 533.74, e 530.99, d 532.59, c 1551.63, b 1546.02, a 1560.03
OpenVINO 2023.1 - Model: Handwritten English Recognition FP16 - Device: CPU - ms, fewer is better: j 31.57, i 30.87, h 30.92, g 29.72, f 29.95, e 30.10, d 30.02, c 30.89, b 31.00, a 30.72 (min/max, same order: 20.39/39.22, 20.13/42.34, 25.94/41.77, 19.46/38.99, 19.01/38.08, 22.61/39.15, 18.78/38.72, 29.48/36.29, 29.59/36.33, 29.51/35.07)
OpenVINO 2023.1 - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU - FPS, more is better: j 59505.78, i 59654.98, h 59615.88, g 32008.03, f 31951.64, e 32032.06, d 32002.62, c 86789.80, b 87359.23, a 86884.64
OpenVINO 2023.1 - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU - ms, fewer is better: j 0.53, i 0.53, h 0.53, g 0.49, f 0.49, e 0.49, d 0.49, c 0.54, b 0.54, a 0.54 (min/max, same order: 0.31/7.18, 0.31/10.06, 0.31/10.06, 0.3/8.84, 0.3/8.2, 0.3/9.07, 0.3/9.28, 0.45/5.03, 0.45/7.81, 0.45/7.64)
OpenVINO 2023.1 - Model: Handwritten English Recognition FP16-INT8 - Device: CPU - FPS, more is better: j 812.76, i 815.13, h 810.71, g 432.20, f 431.94, e 432.32, d 395.66, c 1237.29, b 1239.67, a 1244.69
OpenVINO 2023.1 - Model: Handwritten English Recognition FP16-INT8 - Device: CPU - ms, fewer is better: j 39.34, i 39.23, h 39.44, g 36.98, f 37.01, e 36.98, d 40.40, c 38.75, b 38.66, a 38.50 (min/max, same order: 25.19/46.89, 34.71/47.63, 33.14/45.28, 32.61/41.91, 32.25/43.6, 32.02/44.78, 26.93/74.83, 37.46/43.52, 37.22/43.52, 36.77/44.23)
OpenVINO 2023.1 - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU - FPS, more is better: j 68895.50, i 68945.32, h 68931.35, g 45097.99, f 44968.43, e 44933.27, d 44958.07, c 123484.28, b 120728.22, a 120606.38
OpenVINO 2023.1 - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU - ms, fewer is better: j 0.35, i 0.35, h 0.35, g 0.35, f 0.35, e 0.35, d 0.35, c 0.34, b 0.34, a 0.34 (min/max, same order: 0.22/8.35, 0.22/8.62, 0.21/8.91, 0.23/8.63, 0.23/9.15, 0.23/8.84, 0.23/9.09, 0.29/7.09, 0.29/10.87, 0.29/7.33)
Compiler options for all OpenVINO 2023.1 results: (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
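The OpenVINO results above are CPU-only throughput (FPS) and latency (ms) measurements across the listed pre-trained models. For orientation, the following is a minimal sketch of the OpenVINO 2023 C++ runtime compiling a model for the CPU device with a throughput hint and timing repeated inferences; the model path and iteration count are placeholders, and this is not the exact harness the Phoronix Test Suite profile runs.

```cpp
// Minimal sketch (not the PTS harness): compile a model for the CPU plugin
// with a throughput hint and time a batch of synchronous inferences.
// "model.xml" is a placeholder IR file path.
#include <openvino/openvino.hpp>
#include <chrono>
#include <iostream>

int main() {
    ov::Core core;
    auto model = core.read_model("model.xml");
    auto compiled = core.compile_model(model, "CPU",
        ov::hint::performance_mode(ov::hint::PerformanceMode::THROUGHPUT));
    auto request = compiled.create_infer_request();

    const int iterations = 1000;                       // placeholder run length
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < iterations; ++i)
        request.infer();                               // inputs left at their default-allocated contents
    auto stop = std::chrono::steady_clock::now();

    double ms = std::chrono::duration<double, std::milli>(stop - start).count();
    std::cout << "avg latency: " << ms / iterations << " ms, "
              << "throughput: " << iterations * 1000.0 / ms << " FPS\n";
    return 0;
}
```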
Apache Hadoop 3.3.6 - Operation: Open - Threads: 100 - Files: 100000 - Ops per sec, more is better: j 370370, i 555556, h 526316, g 460829, f 523560, e 294985, d 529101, c 403226, b 404858, a 420168
Apache Hadoop 3.3.6 - Operation: Open - Threads: 50 - Files: 1000000 - Ops per sec, more is better: j 1251564, i 253678, h 1233046, g 654022, f 1221001, e 251004, d 278319, c 683995, b 1020408, a 1126126
Apache Hadoop 3.3.6 - Operation: Open - Threads: 100 - Files: 1000000 - Ops per sec, more is better: j 289101, i 382995, h 323729, g 1107420, f 1303781, e 1204819, d 1248439, c 185874, b 173822, a 215332
Apache Hadoop 3.3.6 - Operation: Delete - Threads: 50 - Files: 1000000 - Ops per sec, more is better: j 113572, i 110436, h 113404, g 110828, f 111198, e 113327, d 111012, c 90147, b 97314, a 98932
Apache Hadoop 3.3.6 - Operation: Delete - Threads: 100 - Files: 1000000 - Ops per sec, more is better: j 110693, i 114692, h 111782, g 113895, f 110803, e 113225, d 112613, c 97031, b 86715, a 90114
Apache Hadoop 3.3.6 - Operation: File Status - Threads: 50 - Files: 100000 - Ops per sec, more is better: j 751880, i 869565, h 925926, g 561798, f 709220, e 389105, d 632911, c 657895, b 862069, a 529101
Apache Hadoop 3.3.6 - Operation: File Status - Threads: 100 - Files: 100000 - Ops per sec, more is better: j 684932, i 847458, h 595238, g 487805, f 478469, e 613497, d 591716, c 729927, b 458716, a 515464
Apache Hadoop 3.3.6 - Operation: File Status - Threads: 50 - Files: 1000000 - Ops per sec, more is better: j 1941748, i 1930502, h 426439, g 2036660, f 1795332, e 320924, d 1818182, c 284252, b 1941748, a 2173913
Apache Hadoop 3.3.6 - Operation: File Status - Threads: 100 - Files: 1000000 - Ops per sec, more is better: j 558036, i 2506266, h 2352941, g 2049180, f 1964637, e 235627, d 600601, c 1893939, b 161970, a 1886792
Kripke
Kripke is a simple, scalable, 3D Sn deterministic particle transport code. Its primary purpose is to research how data layout, programming paradigms and architectures affect the implementation and performance of Sn transport. Kripke is developed by LLNL. Learn more via the OpenBenchmarking.org test page. A generic sketch of the data-layout point is included after the results below.
Kripke 1.2.6 - Throughput FoM, more is better: j 349019800, i 350151200, h 354808000, g 237175700, f 236591000, e 236243900, d 240994500. (CXX) g++ options: -O3 -fopenmp -ldl
a: The test quit with a non-zero exit status.
b: The test quit with a non-zero exit status.
c: The test quit with a non-zero exit status.
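The Kripke description above highlights data layout as a key performance variable. As a generic illustration of that point only (this is not Kripke's code), the sketch below contrasts an array-of-structures reduction with a structure-of-arrays reduction over the same fields; the SoA form keeps the summed field contiguous, which typically vectorizes and streams better on wide CPUs such as these EPYC parts. The struct fields and sizes are made up for illustration.

```cpp
// Generic AoS vs SoA illustration (not Kripke code): the same reduction,
// laid out two ways. SoA keeps the summed field contiguous, which helps
// vectorization and memory bandwidth.
#include <vector>
#include <cstddef>

struct ParticleAoS { double flux, weight, mu, eta; };  // array of structures

struct ParticlesSoA {                                  // structure of arrays
    std::vector<double> flux, weight, mu, eta;
};

double sum_flux_aos(const std::vector<ParticleAoS>& p) {
    double s = 0.0;
    #pragma omp parallel for reduction(+:s)
    for (std::ptrdiff_t i = 0; i < static_cast<std::ptrdiff_t>(p.size()); ++i)
        s += p[i].flux;                                // strided loads: 1 of every 4 doubles used
    return s;
}

double sum_flux_soa(const ParticlesSoA& p) {
    double s = 0.0;
    #pragma omp parallel for reduction(+:s)
    for (std::ptrdiff_t i = 0; i < static_cast<std::ptrdiff_t>(p.flux.size()); ++i)
        s += p.flux[i];                                // unit-stride loads: whole cache line used
    return s;
}

int main() {
    std::vector<ParticleAoS> aos(1 << 20, {1.0, 1.0, 0.0, 0.0});
    ParticlesSoA soa{std::vector<double>(1 << 20, 1.0), {}, {}, {}};
    return (sum_flux_aos(aos) == sum_flux_soa(soa)) ? 0 : 1;
}
```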
BRL-CAD
BRL-CAD is a cross-platform, open-source solid modeling system with a built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.
BRL-CAD 7.36 - VGR Performance Metric, more is better: j 569066, i 570458, h 572500, g 295522, f 295603, e 296125, d 298064, c 762529, b 768517, a 772162. (CXX) g++ options: -std=c++14 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -ltcl8.6 -lregex_brl -lz_brl -lnetpbm -ldl -lm -ltk8.6
easyWave
The easyWave software allows simulating tsunami generation and propagation in the context of early warning systems. easyWave supports OpenMP for CPU multi-threading; GPU ports also exist but are not currently incorporated into this test profile. The easyWave tsunami generation software is run with one of the example/reference input files for measuring the CPU execution time. Learn more via the OpenBenchmarking.org test page. A generic OpenMP sketch is included after the results below.
easyWave r34 - Input: e2Asean Grid + BengkuluSept2007 Source - Time: 240 - Seconds, fewer is better: j 1.245, i 1.288, h 1.284, g 1.648, f 1.657, e 1.654, d 1.657
easyWave r34 - Input: e2Asean Grid + BengkuluSept2007 Source - Time: 1200 - Seconds, fewer is better: j 25.56, i 26.39, h 26.11, g 37.95, f 38.02, e 38.07, d 38.11
easyWave r34 - Input: e2Asean Grid + BengkuluSept2007 Source - Time: 2400 - Seconds, fewer is better: j 68.52, i 68.24, h 68.57, g 97.53, f 97.99, e 99.42, d 98.98
Compiler options for all easyWave results: (CXX) g++ options: -O3 -fopenmp
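Because the easyWave runs are OpenMP-threaded, their execution times scale mainly with the available cores and threads. As a generic, hedged illustration (not easyWave's actual source), a single explicit update over a 2D grid might be parallelized as below; the grid dimensions, coefficient, and update formula are placeholders, and the thread count would be controlled with OMP_NUM_THREADS as in any OpenMP program.

```cpp
// Generic OpenMP stencil sketch (not easyWave's source): one simplified
// explicit update of a wave height field h[] on an nx-by-ny grid.
// Coefficients and sizes are placeholders for illustration only.
#include <vector>
#include <cstddef>

void step(std::vector<double>& h, const std::vector<double>& h_prev,
          std::size_t nx, std::size_t ny, double c) {
    #pragma omp parallel for collapse(2)
    for (std::ptrdiff_t j = 1; j < static_cast<std::ptrdiff_t>(ny) - 1; ++j) {
        for (std::ptrdiff_t i = 1; i < static_cast<std::ptrdiff_t>(nx) - 1; ++i) {
            const std::size_t idx = j * nx + i;
            // 5-point Laplacian of the previous field drives the update.
            const double lap = h_prev[idx - 1] + h_prev[idx + 1]
                             + h_prev[idx - nx] + h_prev[idx + nx]
                             - 4.0 * h_prev[idx];
            h[idx] = h_prev[idx] + c * lap;
        }
    }
}

int main() {
    const std::size_t nx = 2000, ny = 2000;          // placeholder grid size
    std::vector<double> h(nx * ny, 0.0), h_prev(nx * ny, 0.0);
    h_prev[(ny / 2) * nx + nx / 2] = 1.0;            // point disturbance
    step(h, h_prev, nx, ny, 0.1);                    // one time step
    return 0;
}
```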
Embree
Embree 4.3 - Binary: Pathtracer - Model: Asian Dragon - Frames Per Second, more is better: j 48.71, i 48.72, h 48.91, g 24.96, f 24.89, e 24.83, d 24.85 (min/max, same order: 48.45/49.47, 48.48/49.3, 48.64/49.47, 24.9/25.13, 24.81/25.06, 24.76/24.96, 24.78/25)
Embree 4.3 - Binary: Pathtracer - Model: Asian Dragon Obj - Frames Per Second, more is better: j 43.90, i 43.77, h 43.84, g 22.26, f 22.27, e 22.29, d 22.35 (min/max, same order: 43.66/44.27, 43.51/44.16, 43.64/44.38, 22.18/22.43, 22.2/22.44, 22.22/22.46, 22.28/22.5)
Embree 4.3 - Binary: Pathtracer - Model: Crown - Frames Per Second, more is better: j 43.93, i 43.81, h 43.57, g 21.83, f 21.77, e 21.99, d 21.89 (min/max, same order: 43.46/45.01, 43.36/45.05, 43.11/44.65, 21.69/22.17, 21.63/22.18, 21.84/22.32, 21.74/22.23)
Embree 4.3 - Binary: Pathtracer ISPC - Model: Asian Dragon - Frames Per Second, more is better: j 54.15, i 54.19, h 54.22, g 27.91, f 27.83, e 27.83, d 27.74 (min/max, same order: 53.87/54.79, 53.91/54.97, 53.93/54.77, 27.81/28.17, 27.73/28.13, 27.72/28.1, 27.64/27.98)
Embree 4.3 - Binary: Pathtracer ISPC - Model: Asian Dragon Obj - Frames Per Second, more is better: j 46.02, i 46.09, h 45.92, g 23.71, f 23.50, e 23.53, d 23.35 (min/max, same order: 45.75/46.57, 45.82/46.6, 45.64/46.53, 23.61/23.93, 23.4/23.74, 23.43/23.73, 23.26/23.57)
Embree 4.3 - Binary: Pathtracer ISPC - Model: Crown - Frames Per Second, more is better: j 45.40, i 45.71, h 45.19, g 22.42, f 22.44, e 22.34, d 22.39 (min/max, same order: 44.88/46.68, 45.11/47.49, 44.65/46.39, 22.22/22.85, 22.25/22.78, 22.15/22.75, 22.2/22.85)
OpenVKL
OpenVKL is the Intel Open Volume Kernel Library, which offers high-performance volume computation kernels and is part of the Intel oneAPI Rendering Toolkit. Learn more via the OpenBenchmarking.org test page.
OpenVKL 2.0.0 - Benchmark: vklBenchmarkCPU Scalar - Items / Sec, more is better: j 363, i 363, h 363, g 191, f 191, e 190, d 191 (min/max, same order: 24/6613, 24/6577, 24/6610, 13/3483, 13/3484, 13/3484, 13/3471)
OpenVKL 2.0.0 - Benchmark: vklBenchmarkCPU ISPC - Items / Sec, more is better: j 922, i 922, h 926, g 489, f 488, e 487, d 487 (min/max, same order: 67/12356, 67/12374, 67/12416, 36/6969, 36/6952, 36/6956, 36/6949)
oneDNN
This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page. A minimal API sketch is included after the results below.
oneDNN 3.3 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU - ms, fewer is better: j 1.15578, i 1.15012, h 1.14749, g 2.11813, f 2.13062, e 2.12570, d 2.13332 (min, same order: 1.03, 1, 1.01, 1.99, 1.97, 2.01, 2)
oneDNN 3.3 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU - ms, fewer is better: j 0.768314, i 0.798540, h 0.778543, g 1.551180, f 1.572820, e 1.549110, d 1.558240 (min, same order: 0.71, 0.7, 0.71, 1.52, 1.53, 1.51, 1.51)
oneDNN 3.3 - Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU - ms, fewer is better: j 0.731618, i 0.735094, h 0.734461, g 1.335640, f 1.341830, e 1.338610, d 1.337890 (min, same order: 0.66, 0.66, 0.66, 1.31, 1.31, 1.31, 1.31)
oneDNN 3.3 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU - ms, fewer is better: j 3.68087, i 3.65907, h 3.72247, g 3.82381, f 3.81823, e 3.84421, d 3.81576 (min, same order: 2.85, 2.81, 2.83, 3.29, 3.25, 3.27, 3.26)
oneDNN 3.3 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU - ms, fewer is better: j 0.430270, i 0.427512, h 0.426426, g 0.629108, f 0.630325, e 0.633975, d 0.628236 (min, same order: 0.38, 0.39, 0.38, 0.6, 0.6, 0.6, 0.6)
oneDNN 3.3 - Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU - ms, fewer is better: j 1.78691, i 1.78876, h 1.78170, g 3.05458, f 3.05674, e 3.06370, d 3.05991 (min, same order: 1.66, 1.65, 1.64, 2.97, 2.97, 2.97, 2.96)
oneDNN 3.3 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU - ms, fewer is better: j 1.73501, i 1.73499, h 1.73381, g 3.38156, f 3.37956, e 3.38436, d 3.37782 (min, same order: 1.64, 1.65, 1.64, 3.33, 3.33, 3.33, 3.33)
oneDNN 3.3 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU - ms, fewer is better: j 0.440156, i 0.440368, h 0.440006, g 0.843492, f 0.850691, e 0.844434, d 0.847805 (min, same order: 0.41, 0.41, 0.41, 0.83, 0.83, 0.83, 0.83)
oneDNN 3.3 - Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU - ms, fewer is better: j 1.04333, i 1.04312, h 1.04100, g 1.91274, f 1.91422, e 1.91781, d 1.91374 (min, same order: 0.94, 0.94, 0.94, 1.88, 1.88, 1.88, 1.88)
oneDNN 3.3 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU - ms, fewer is better: j 1.75453, i 1.64478, h 1.74203, g 2.51441, f 2.49714, e 2.56522, d 2.49408 (min, same order: 1.52, 1.42, 1.51, 2.3, 2.26, 2.32, 2.3)
oneDNN 3.3 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU - ms, fewer is better: j 0.880016, i 1.941900, h 0.892701, g 0.647700, f 0.653182, e 0.657610, d 0.652259 (min, same order: 0.78, 0.87, 0.79, 0.57, 0.57, 0.57, 0.57)
oneDNN 3.3 - Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU - ms, fewer is better: j 1.94941, i 1.24308, h 2.09880, g 1.12723, f 1.00136, e 1.14432, d 1.03749 (min, same order: 1.26, 1.04, 1.29, 0.93, 0.92, 1.07, 0.92)
oneDNN 3.3 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU - ms, fewer is better: j 0.936001, i 0.931793, h 0.926283, g 1.279180, f 1.206530, e 1.280430, d 1.257580 (min, same order: 0.86, 0.86, 0.85, 1.24, 1.18, 1.24, 1.21)
oneDNN 3.3 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU - ms, fewer is better: j 0.309278, i 0.301535, h 0.302460, g 0.612320, f 0.600834, e 0.575794, d 0.603950 (min, same order: 0.28, 0.28, 0.27, 0.53, 0.53, 0.52, 0.53)
oneDNN 3.3 - Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU - ms, fewer is better: j 0.714970, i 0.644252, h 0.704550, g 1.045670, f 1.061440, e 1.054250, d 1.028750 (min, same order: 0.67, 0.61, 0.66, 0.98, 0.98, 0.97, 0.96)
oneDNN 3.3 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU - ms, fewer is better: j 988.59, i 991.07, h 986.96, g 1637.37, f 1636.76, e 1641.00, d 1641.92 (min, same order: 950.96, 953.96, 949.02, 1584.58, 1585.98, 1595.55, 1584.81)
oneDNN 3.3 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU - ms, fewer is better: j 994.61, i 987.36, h 993.56, g 1631.99, f 1636.44, e 1639.36, d 1642.51 (min, same order: 960.2, 952.16, 955.42, 1581.62, 1585.81, 1581.93, 1593.16)
oneDNN 3.3 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU - ms, fewer is better: j 991.12, i 985.74, h 987.84, g 1641.40, f 1642.35, e 1643.97, d 1643.99 (min, same order: 954.92, 949.93, 952.67, 1589.91, 1586.17, 1590.89, 1588.03)
oneDNN 3.3 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU - ms, fewer is better: j 569.80, i 566.18, h 564.12, g 848.03, f 851.49, e 849.71, d 838.52 (min, same order: 548.08, 544.56, 545.13, 807.34, 807.97, 805.98, 796.3)
oneDNN 3.3 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU - ms, fewer is better: j 563.58, i 564.45, h 568.75, g 837.60, f 849.34, e 851.66, d 849.16 (min, same order: 543.65, 545.57, 546.79, 796.61, 805.8, 809.45, 806.44)
oneDNN 3.3 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU - ms, fewer is better: j 566.39, i 563.33, h 568.65, g 847.42, f 845.31, e 841.08, d 847.38 (min, same order: 542.7, 544.04, 547.26, 806.72, 803.78, 798.46, 806.33)
Compiler options for all oneDNN results: (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
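To make the harness names above a little more concrete, here is a minimal sketch of the oneDNN 3.x C++ API creating and executing a single f32 inner-product ("IP") primitive on the CPU engine. The shapes are placeholders, and this is not what benchdnn does internally (benchdnn drives many problem descriptors from batch files and reports an aggregate perf time); the sketch only shows the style of API the benchmark exercises.

```cpp
// Minimal oneDNN 3.x sketch (not benchdnn itself): one f32 inner-product
// ("IP") primitive executed on the CPU engine. Shapes are placeholders and
// the buffers are left uninitialized for brevity.
#include <dnnl.hpp>

int main() {
    using namespace dnnl;
    engine eng(engine::kind::cpu, 0);
    stream strm(eng);

    const memory::dim N = 32, IC = 1024, OC = 256;   // placeholder problem size
    auto src_md = memory::desc({N, IC},  memory::data_type::f32, memory::format_tag::nc);
    auto wei_md = memory::desc({OC, IC}, memory::data_type::f32, memory::format_tag::oi);
    auto dst_md = memory::desc({N, OC},  memory::data_type::f32, memory::format_tag::nc);

    // oneDNN 3.x builds the primitive descriptor directly from the engine.
    auto pd = inner_product_forward::primitive_desc(
        eng, prop_kind::forward_inference, src_md, wei_md, dst_md);

    memory src(src_md, eng), wei(wei_md, eng), dst(dst_md, eng);
    inner_product_forward(pd).execute(strm, {{DNNL_ARG_SRC, src},
                                             {DNNL_ARG_WEIGHTS, wei},
                                             {DNNL_ARG_DST, dst}});
    strm.wait();
    return 0;
}
```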