Tests for a future article. AMD Ryzen Threadripper 3970X 32-Core testing with an ASUS ROG ZENITH II EXTREME (1603 BIOS) and AMD Radeon RX 5700 8GB on Ubuntu 22.04 via the Phoronix Test Suite.
a
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0x830107a
Python Notes: Python 3.10.12
Security Notes: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Mitigation of untrained return thunk; SMT enabled with STIBP protection + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
b Processor: AMD Ryzen Threadripper 3970X 32-Core @ 3.70GHz (32 Cores / 64 Threads), Motherboard: ASUS ROG ZENITH II EXTREME (1603 BIOS), Chipset: AMD Starship/Matisse, Memory: 64GB, Disk: Samsung SSD 980 PRO 500GB, Graphics: AMD Radeon RX 5700 8GB (1750/875MHz), Audio: AMD Navi 10 HDMI Audio, Monitor: ASUS VP28U, Network: Aquantia AQC107 NBase-T/IEEE + Intel I211 + Intel Wi-Fi 6 AX200
OS: Ubuntu 22.04, Kernel: 6.2.0-32-generic (x86_64), Desktop: GNOME Shell 42.2, Display Server: X Server + Wayland, OpenGL: 4.6 Mesa 22.0.1 (LLVM 13.0.1 DRM 3.49), Vulkan: 1.2.204, Compiler: GCC 11.4.0, File-System: ext4, Screen Resolution: 3840x2160
novvy ("Novvy Benchmarks") - OpenBenchmarking.org system information and system logs table; the hardware, software, and notes entries duplicate the configuration details listed above.
a vs. b Comparison (Phoronix Test Suite) - largest differences relative to the baseline:
C-Blosc: blosclz shuffle - 256MB: 10.1%
C-Blosc: blosclz bitshuffle - 256MB: 9%
oneDNN: IP Shapes 1D - f32 - CPU: 8.4%
C-Blosc: blosclz noshuffle - 256MB: 8.3%
C-Blosc: blosclz shuffle - 128MB: 6.1%
C-Blosc: blosclz noshuffle - 128MB: 6%
C-Blosc: blosclz bitshuffle - 128MB: 5.9%
C-Blosc: blosclz noshuffle - 64MB: 4.2%
CloverLeaf: clover_bm: 4.2%
easyWave: e.G.B.S - 2400: 3.8%
C-Blosc: blosclz noshuffle - 8MB: 3.8%
easyWave: e.G.B.S - 1200: 3.8%
Timed Gem5 Compilation: Time To Compile: 3.7%
oneDNN: IP Shapes 1D - u8s8f32 - CPU: 3.2%
C-Blosc: blosclz noshuffle - 32MB: 2.9%
QMCPACK: LiH_ae_MSD: 2.6%
C-Blosc: blosclz bitshuffle - 16MB: 2.3%
easyWave: e.G.B.S - 240: 2.2%
C-Blosc: blosclz shuffle - 16MB: 2.2%
C-Blosc: blosclz shuffle - 32MB: 2%
C-Blosc: blosclz bitshuffle - 32MB: 2%
(e.G.B.S = e2Asean Grid + BengkuluSept2007 Source)
novvy - Result Overview (a vs. b)
Test | a | b
blosc: blosclz shuffle - 8MB | 14524.6 | 14553.4
blosc: blosclz shuffle - 16MB | 14681.1 | 15007
blosc: blosclz shuffle - 32MB | 13648.4 | 13927.4
blosc: blosclz shuffle - 64MB | 11647.2 | 11499.4
blosc: blosclz noshuffle - 8MB | 11333.4 | 10921.7
blosc: blosclz shuffle - 128MB | 9128.9 | 8600.5
blosc: blosclz shuffle - 256MB | 6794.6 | 6173.9
blosc: blosclz bitshuffle - 8MB | 12902.5 | 12733.1
blosc: blosclz noshuffle - 16MB | 11117.3 | 10930.2
blosc: blosclz noshuffle - 32MB | 10114.3 | 9830.1
blosc: blosclz noshuffle - 64MB | 8272 | 7935.6
blosc: blosclz bitshuffle - 16MB | 13502.4 | 13204.9
blosc: blosclz bitshuffle - 32MB | 12653.7 | 12401.9
blosc: blosclz bitshuffle - 64MB | 10766.7 | 10607.5
blosc: blosclz noshuffle - 128MB | 6839.2 | 6452.2
blosc: blosclz noshuffle - 256MB | 5388.7 | 4975.7
blosc: blosclz bitshuffle - 128MB | 8726 | 8241.7
blosc: blosclz bitshuffle - 256MB | 6511.1 | 5971.1
quantlib: Multi-Threaded | 110940.4 | 111297.1
quantlib: Single-Threaded | 2935.3 | 2962.1
cloverleaf: clover_bm | 26.36 | 27.46
cloverleaf: clover_bm16 | 1297.63 | 1307.50
cloverleaf: clover_bm64_short | 150.81 | 151.11
qmcpack: H4_ae | 14.7 | 14.49
qmcpack: Li2_STO_ae | 132.36 | 132.12
qmcpack: LiH_ae_MSD | 113.41 | 110.56
qmcpack: simple-H2O | 27.056 | 26.898
qmcpack: O_ae_pyscf_UHF | 195.27 | 195.12
qmcpack: FeCO6_b3lyp_gms | 172.3 | 172.23
easywave: e2Asean Grid + BengkuluSept2007 Source - 240 | 3.321 | 3.395
easywave: e2Asean Grid + BengkuluSept2007 Source - 1200 | 148.244 | 153.816
easywave: e2Asean Grid + BengkuluSept2007 Source - 2400 | 412.537 | 428.292
embree: Pathtracer - Crown | 38.0919 | 38.0387
embree: Pathtracer ISPC - Crown | 33.8293 | 33.7705
embree: Pathtracer - Asian Dragon | 38.059 | 37.5066
embree: Pathtracer - Asian Dragon Obj | 33.6235 | 33.1848
embree: Pathtracer ISPC - Asian Dragon | 36.6992 | 36.4016
embree: Pathtracer ISPC - Asian Dragon Obj | 31.6532 | 31.2808
oidn: RT.hdr_alb_nrm.3840x2160 - CPU-Only | 1.14 | 1.14
oidn: RT.ldr_alb_nrm.3840x2160 - CPU-Only | 1.14 | 1.14
oidn: RTLightmap.hdr.4096x4096 - CPU-Only | 0.57 | 0.57
openvkl: vklBenchmarkCPU ISPC | 554 | 552
openvkl: vklBenchmarkCPU Scalar | 311 | 311
build-gem5: Time To Compile | 301.837 | 291.051
onednn: IP Shapes 1D - f32 - CPU | 1.51732 | 1.40003
onednn: IP Shapes 3D - f32 - CPU | 5.30723 | 5.30471
onednn: IP Shapes 1D - u8s8f32 - CPU | 1.35399 | 1.39743
onednn: IP Shapes 3D - u8s8f32 - CPU | 0.972836 | 0.970336
onednn: Convolution Batch Shapes Auto - f32 - CPU | 6.75167 | 6.79235
onednn: Deconvolution Batch shapes_1d - f32 - CPU | 5.59343 | 5.5548
onednn: Deconvolution Batch shapes_3d - f32 - CPU | 2.74341 | 2.74744
onednn: Convolution Batch Shapes Auto - u8s8f32 - CPU | 7.3594 | 7.3635
onednn: Deconvolution Batch shapes_1d - u8s8f32 - CPU | 1.54036 | 1.55034
onednn: Deconvolution Batch shapes_3d - u8s8f32 - CPU | 1.54437 | 1.56101
onednn: Recurrent Neural Network Training - f32 - CPU | 3894.96 | 3914.95
onednn: Recurrent Neural Network Inference - f32 - CPU | 1090.13 | 1094.58
onednn: Recurrent Neural Network Training - u8s8f32 - CPU | 3938.81 | 3925.64
onednn: Recurrent Neural Network Inference - u8s8f32 - CPU | 1097.99 | 1101.56
onednn: Recurrent Neural Network Training - bf16bf16bf16 - CPU | 3924.43 | 3935.1
onednn: Recurrent Neural Network Inference - bf16bf16bf16 - CPU | 1101.21 | 1094.99
ospray-studio: 1 - 4K - 1 - Path Tracer - CPU | 4191 | 4196
ospray-studio: 2 - 4K - 1 - Path Tracer - CPU | 4289 | 4299
ospray-studio: 3 - 4K - 1 - Path Tracer - CPU | 4963 | 4981
ospray-studio: 1 - 4K - 16 - Path Tracer - CPU | 73331 | 73227
ospray-studio: 1 - 4K - 32 - Path Tracer - CPU | 140574 | 140056
ospray-studio: 2 - 4K - 16 - Path Tracer - CPU | 75018 | 74992
ospray-studio: 2 - 4K - 32 - Path Tracer - CPU | 143476 | 143606
ospray-studio: 3 - 4K - 16 - Path Tracer - CPU | 85890 | 86000
ospray-studio: 3 - 4K - 32 - Path Tracer - CPU | 165158 | 165029
ospray-studio: 1 - 1080p - 1 - Path Tracer - CPU | 1057 | 1058
ospray-studio: 2 - 1080p - 1 - Path Tracer - CPU | 1081 | 1076
ospray-studio: 3 - 1080p - 1 - Path Tracer - CPU | 1251 | 1252
ospray-studio: 1 - 1080p - 16 - Path Tracer - CPU | 16836 | 16814
ospray-studio: 1 - 1080p - 32 - Path Tracer - CPU | 40015 | 40024
ospray-studio: 2 - 1080p - 16 - Path Tracer - CPU | 17147 | 17193
ospray-studio: 2 - 1080p - 32 - Path Tracer - CPU | 40607 | 40704
ospray-studio: 3 - 1080p - 16 - Path Tracer - CPU | 20005 | 19971
ospray-studio: 3 - 1080p - 32 - Path Tracer - CPU | 46279 | 46238
cpuminer-opt: Magi | 973.65 | 969.25
cpuminer-opt: scrypt | 335.46 | 335.25
cpuminer-opt: Deepcoin | 11310 | 11310
cpuminer-opt: Ringcoin | 4585.89 | 4575.65
cpuminer-opt: Blake-2 S | 206350 | 206270
cpuminer-opt: Garlicoin | 1440.1 | 1431.7
cpuminer-opt: Skeincoin | 49520 | 49200
cpuminer-opt: Myriad-Groestl | 16380 | 16390
cpuminer-opt: LBC, LBRY Credits | 21810 | 21810
cpuminer-opt: Quad SHA-256, Pyrite | 75090 | 75110
cpuminer-opt: Triple SHA-256, Onecoin | 106820 | 106780
openvino: Face Detection FP16 - CPU | 8.98 | 8.98
openvino: Face Detection FP16 - CPU | 1769.17 | 1768.09
openvino: Person Detection FP16 - CPU | 63.37 | 62.81
openvino: Person Detection FP16 - CPU | 252.4 | 254.29
openvino: Person Detection FP32 - CPU | 63.23 | 63.29
openvino: Person Detection FP32 - CPU | 252.67 | 252.37
openvino: Vehicle Detection FP16 - CPU | 465.19 | 466.04
openvino: Vehicle Detection FP16 - CPU | 34.33 | 34.27
openvino: Face Detection FP16-INT8 - CPU | 11.43 | 11.42
openvino: Face Detection FP16-INT8 - CPU | 1394.76 | 1397.89
openvino: Face Detection Retail FP16 - CPU | 2556.07 | 2559.91
openvino: Face Detection Retail FP16 - CPU | 6.24 | 6.23
openvino: Road Segmentation ADAS FP16 - CPU | 81.55 | 81.71
openvino: Road Segmentation ADAS FP16 - CPU | 196 | 195.62
openvino: Vehicle Detection FP16-INT8 - CPU | 900.08 | 902.81
openvino: Vehicle Detection FP16-INT8 - CPU | 17.76 | 17.71
openvino: Weld Porosity Detection FP16 - CPU | 851.68 | 847.73
openvino: Weld Porosity Detection FP16 - CPU | 18.76 | 18.85
openvino: Face Detection Retail FP16-INT8 - CPU | 2914.91 | 2907.71
openvino: Face Detection Retail FP16-INT8 - CPU | 5.48 | 5.49
openvino: Road Segmentation ADAS FP16-INT8 - CPU | 406.48 | 400.92
openvino: Road Segmentation ADAS FP16-INT8 - CPU | 39.32 | 39.86
openvino: Machine Translation EN To DE FP16 - CPU | 94.09 | 93.13
openvino: Machine Translation EN To DE FP16 - CPU | 169.89 | 171.57
openvino: Weld Porosity Detection FP16-INT8 - CPU | 1134.72 | 1133.46
openvino: Weld Porosity Detection FP16-INT8 - CPU | 28.18 | 28.21
openvino: Person Vehicle Bike Detection FP16 - CPU | 1139.38 | 1147.76
openvino: Person Vehicle Bike Detection FP16 - CPU | 14.02 | 13.92
openvino: Handwritten English Recognition FP16 - CPU | 333.33 | 330.63
openvino: Handwritten English Recognition FP16 - CPU | 95.92 | 96.7
openvino: Age Gender Recognition Retail 0013 FP16 - CPU | 27213.1 | 27049.12
openvino: Age Gender Recognition Retail 0013 FP16 - CPU | 1.16 | 1.17
openvino: Handwritten English Recognition FP16-INT8 - CPU | 358.24 | 364.05
openvino: Handwritten English Recognition FP16-INT8 - CPU | 89.24 | 87.83
openvino: Age Gender Recognition Retail 0013 FP16-INT8 - CPU | 39083.44 | 38988.78
openvino: Age Gender Recognition Retail 0013 FP16-INT8 - CPU | 0.81 | 0.81
C-Blosc 2.11 (MB/s, More Is Better):
Test: blosclz shuffle - Buffer Size: 16MB - b: 15007.0, a: 14681.1
Test: blosclz shuffle - Buffer Size: 32MB - b: 13927.4, a: 13648.4
Test: blosclz shuffle - Buffer Size: 64MB - b: 11499.4, a: 11647.2
Test: blosclz noshuffle - Buffer Size: 8MB - b: 10921.7, a: 11333.4
Test: blosclz shuffle - Buffer Size: 128MB - b: 8600.5, a: 9128.9
Test: blosclz shuffle - Buffer Size: 256MB - b: 6173.9, a: 6794.6
Test: blosclz bitshuffle - Buffer Size: 8MB - b: 12733.1, a: 12902.5
Test: blosclz noshuffle - Buffer Size: 16MB - b: 10930.2, a: 11117.3
Test: blosclz noshuffle - Buffer Size: 32MB - b: 9830.1, a: 10114.3
Test: blosclz noshuffle - Buffer Size: 64MB - b: 7935.6, a: 8272.0
Test: blosclz bitshuffle - Buffer Size: 16MB - b: 13204.9, a: 13502.4
Test: blosclz bitshuffle - Buffer Size: 32MB - b: 12401.9, a: 12653.7
Test: blosclz bitshuffle - Buffer Size: 64MB - b: 10607.5, a: 10766.7
Test: blosclz noshuffle - Buffer Size: 128MB - b: 6452.2, a: 6839.2
Test: blosclz noshuffle - Buffer Size: 256MB - b: 4975.7, a: 5388.7
Test: blosclz bitshuffle - Buffer Size: 128MB - b: 8241.7, a: 8726.0
Test: blosclz bitshuffle - Buffer Size: 256MB - b: 5971.1, a: 6511.1
1. (CC) gcc options: -std=gnu99 -O3 -ldl -lrt -lm
QuantLib
QuantLib is an open-source library/framework for quantitative finance, covering modeling, trading, and risk management scenarios. QuantLib is written in C++ with Boost, and its built-in benchmark reports a QuantLib Benchmark Index score. Learn more via the OpenBenchmarking.org test page.
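For orientation, below is a minimal sketch of the kind of pricing task QuantLib handles, assuming the standard QuantLib C++ API rather than the benchmark's own code; every market data value is an illustrative placeholder.

    #include <ql/quantlib.hpp>
    #include <iostream>

    using namespace QuantLib;

    int main() {
        // Illustrative flat market data; all numbers here are placeholders.
        Date today(9, December, 2023);
        Settings::instance().evaluationDate() = today;
        DayCounter dc = Actual365Fixed();

        Handle<Quote> spot(ext::make_shared<SimpleQuote>(100.0));
        Handle<YieldTermStructure> riskFree(ext::make_shared<FlatForward>(today, 0.05, dc));
        Handle<YieldTermStructure> dividend(ext::make_shared<FlatForward>(today, 0.00, dc));
        Handle<BlackVolTermStructure> vol(ext::make_shared<BlackConstantVol>(today, TARGET(), 0.20, dc));

        auto process = ext::make_shared<BlackScholesMertonProcess>(spot, dividend, riskFree, vol);

        // One-year at-the-money European call priced with the closed-form engine.
        VanillaOption option(ext::make_shared<PlainVanillaPayoff>(Option::Call, 100.0),
                             ext::make_shared<EuropeanExercise>(today + Period(1, Years)));
        option.setPricingEngine(ext::make_shared<AnalyticEuropeanEngine>(process));

        std::cout << "NPV: " << option.NPV() << std::endl;
        return 0;
    }

A file like this would be compiled against the installed library, roughly g++ -O3 option.cpp -lQuantLib (library name assumed), in the spirit of the -O3 -march=native flags recorded for the benchmark binary below.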
QuantLib 1.32 (MFLOPS, More Is Better):
Configuration: Multi-Threaded - b: 111297.1, a: 110940.4
Configuration: Single-Threaded - b: 2962.1, a: 2935.3
1. (CXX) g++ options: -O3 -march=native -fPIE -pie
CloverLeaf 1.3 (Seconds, Fewer Is Better):
Input: clover_bm16 - b: 1307.50, a: 1297.63
Input: clover_bm64_short - b: 151.11, a: 150.81
1. (F9X) gfortran options: -O3 -march=native -funroll-loops -fopenmp
QMCPACK
QMCPACK is a modern, high-performance, open-source Quantum Monte Carlo (QMC) simulation code, making use of MPI for this benchmark of the H2O example code. QMCPACK is a production-level, many-body, ab initio Quantum Monte Carlo code for computing the electronic structure of atoms, molecules, and solids, and it is supported by the U.S. Department of Energy. Learn more via the OpenBenchmarking.org test page.
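QMCPACK's own solver is far more involved, but as a generic, hedged sketch of the MPI pattern mentioned above (independent work per rank followed by a reduction), using only the standard MPI C API:

    #include <mpi.h>
    #include <cstdio>

    // Generic MPI sketch, not QMCPACK code: each rank accumulates a partial sum
    // over its own batch of samples, then the partial sums are reduced to rank 0.
    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank = 0, size = 1;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const int samples_per_rank = 1000;
        double local_sum = 0.0;
        for (int i = 0; i < samples_per_rank; ++i)
            local_sum += 1.0 / (1.0 + rank + i);   // stand-in for a per-sample estimate

        double global_sum = 0.0;
        MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            std::printf("mean over %d samples: %f\n", size * samples_per_rank,
                        global_sum / (size * samples_per_rank));

        MPI_Finalize();
        return 0;
    }

This only illustrates how a Monte Carlo style workload can distribute samples across MPI ranks; it is not representative of QMCPACK's algorithms or performance.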
QMCPACK 3.17.1 (Total Execution Time - Seconds, Fewer Is Better):
Input: H4_ae - b: 14.49, a: 14.70
Input: Li2_STO_ae - b: 132.12, a: 132.36
Input: LiH_ae_MSD - b: 110.56, a: 113.41
Input: simple-H2O - b: 26.90, a: 27.06
Input: O_ae_pyscf_UHF - b: 195.12, a: 195.27
Input: FeCO6_b3lyp_gms - b: 172.23, a: 172.30
1. (CXX) g++ options: -fopenmp -foffload=disable -finline-limit=1000 -fstrict-aliasing -funroll-all-loops -ffast-math -march=native -O3 -lm -ldl
easyWave
The easyWave software simulates tsunami generation and propagation in the context of early-warning systems. easyWave can make use of OpenMP for CPU multi-threading; GPU ports also exist but are not currently incorporated as part of this test profile. The easyWave tsunami generation software is run with one of the example/reference input files for measuring the CPU execution time. Learn more via the OpenBenchmarking.org test page.
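easyWave's solver itself is not reproduced here; the following is only a generic sketch of the OpenMP loop structure such a grid-propagation code can use, with a placeholder stencil update rather than easyWave's actual shallow-water scheme:

    #include <omp.h>
    #include <vector>
    #include <cstdio>

    // Generic OpenMP sketch, not easyWave code: update every interior cell of a
    // 2D grid from its four neighbours, splitting the outer loop across threads.
    int main() {
        const int nx = 2000, ny = 2000;
        std::vector<double> h(nx * ny, 0.0), h_new(nx * ny, 0.0);
        h[(ny / 2) * nx + nx / 2] = 1.0;   // a single initial disturbance

        for (int step = 0; step < 100; ++step) {
            #pragma omp parallel for
            for (int j = 1; j < ny - 1; ++j)
                for (int i = 1; i < nx - 1; ++i)
                    h_new[j * nx + i] = 0.25 * (h[j * nx + i - 1] + h[j * nx + i + 1] +
                                                h[(j - 1) * nx + i] + h[(j + 1) * nx + i]);
            h.swap(h_new);
        }
        std::printf("threads available: %d\n", omp_get_max_threads());
        return 0;
    }

Built with OpenMP enabled (e.g. g++ -O3 -fopenmp), matching the -fopenmp flag shown in the results below.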
easyWave r34 (Seconds, Fewer Is Better):
Input: e2Asean Grid + BengkuluSept2007 Source - Time: 240 - b: 3.395, a: 3.321
Input: e2Asean Grid + BengkuluSept2007 Source - Time: 1200 - b: 153.82, a: 148.24
Input: e2Asean Grid + BengkuluSept2007 Source - Time: 2400 - b: 428.29, a: 412.54
1. (CXX) g++ options: -O3 -fopenmp
Embree 4.3 (Frames Per Second, More Is Better):
Binary: Pathtracer - Model: Crown - b: 38.04 (min 37.67 / max 38.81), a: 38.09 (min 37.69 / max 38.88)
Binary: Pathtracer ISPC - Model: Crown - b: 33.77 (min 33.39 / max 34.66), a: 33.83 (min 33.46 / max 34.54)
Binary: Pathtracer - Model: Asian Dragon - b: 37.51 (min 37.27 / max 38.29), a: 38.06 (min 37.83 / max 38.72)
Binary: Pathtracer - Model: Asian Dragon Obj - b: 33.18 (min 32.96 / max 33.9), a: 33.62 (min 33.42 / max 34.56)
Binary: Pathtracer ISPC - Model: Asian Dragon - b: 36.40 (min 36.2 / max 36.94), a: 36.70 (min 36.46 / max 37.3)
Binary: Pathtracer ISPC - Model: Asian Dragon Obj - b: 31.28 (min 31.07 / max 31.86), a: 31.65 (min 31.45 / max 32.14)
oneDNN
This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.
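For context on what a single oneDNN primitive looks like, here is a minimal sketch assuming the oneDNN 3.x C++ API; it runs one ReLU eltwise primitive on the CPU engine and is not the benchdnn harness measured below.

    #include <oneapi/dnnl/dnnl.hpp>
    #include <vector>
    #include <cstdio>

    // Minimal oneDNN sketch (3.x C++ API assumed): run one ReLU eltwise primitive
    // in place on a small f32 buffer using the CPU engine.
    int main() {
        using namespace dnnl;
        engine eng(engine::kind::cpu, 0);
        stream strm(eng);

        std::vector<float> data(256);
        for (size_t i = 0; i < data.size(); ++i)
            data[i] = static_cast<float>(i) - 128.0f;   // half negative, half positive

        memory::desc md({1, 256}, memory::data_type::f32, memory::format_tag::nc);
        memory buf(md, eng, data.data());               // wrap the existing host buffer

        auto relu_pd = eltwise_forward::primitive_desc(
            eng, prop_kind::forward_inference, algorithm::eltwise_relu, md, md, 0.0f);
        eltwise_forward relu(relu_pd);

        relu.execute(strm, {{DNNL_ARG_SRC, buf}, {DNNL_ARG_DST, buf}});
        strm.wait();

        std::printf("data[0]=%.1f data[255]=%.1f\n", data[0], data[255]);   // expect 0.0 and 127.0
        return 0;
    }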
oneDNN 3.3 (ms, Fewer Is Better):
Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU - b: 1.40003 (min 1.25), a: 1.51732 (min 1.33)
Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU - b: 5.30471 (min 5.18), a: 5.30723 (min 5.17)
Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU - b: 1.39743 (min 1.19), a: 1.35399 (min 1.16)
Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU - b: 0.970336 (min 0.85), a: 0.972836 (min 0.87)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU
a: The test run did not produce a result.
b: The test run did not produce a result.
Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU
a: The test run did not produce a result.
b: The test run did not produce a result.
oneDNN 3.3 (ms, Fewer Is Better):
Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU - b: 6.79235 (min 6.69), a: 6.75167 (min 6.65)
Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU - b: 5.55480 (min 4.55), a: 5.59343 (min 4.86)
Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU - b: 2.74744 (min 2.68), a: 2.74341 (min 2.68)
Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU - b: 7.3635 (min 7.29), a: 7.3594 (min 7.27)
Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU - b: 1.55034 (min 1.46), a: 1.54036 (min 1.46)
Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU - b: 1.56101 (min 1.5), a: 1.54437 (min 1.48)
Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU - b: 3914.95 (min 3907.56), a: 3894.96 (min 3887.05)
Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU - b: 1094.58 (min 1086.58), a: 1090.13 (min 1080.17)
Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU - b: 3925.64 (min 3918.28), a: 3938.81 (min 3930.43)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU
a: The test run did not produce a result.
b: The test run did not produce a result.
Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU
a: The test run did not produce a result.
b: The test run did not produce a result.
Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU
a: The test run did not produce a result.
b: The test run did not produce a result.
oneDNN 3.3 (ms, Fewer Is Better):
Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU - b: 1101.56 (min 1093.32), a: 1097.99 (min 1089.79)
Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU - b: 3935.10 (min 3925.28), a: 3924.43 (min 3917.49)
Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU - b: 1094.99 (min 1085.01), a: 1101.21 (min 1092.35)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OSPRay Studio
Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.
OSPRay Studio 0.13 (ms, Fewer Is Better):
Camera: 1 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU - b: 4196, a: 4191
Cpuminer-Opt
Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the processor across a variety of cryptocurrencies. The benchmark reports the hash speed of the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.
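Cpuminer-Opt's optimized mining kernels are not shown here; as a loose, hedged illustration of what a hash-rate figure measures, the sketch below times repeated double SHA-256 hashing of an 80-byte buffer with OpenSSL and reports kH/s. It is a single-threaded, unvectorized stand-in, not the miner's code, so its throughput is nowhere near the results below.

    #include <openssl/sha.h>
    #include <chrono>
    #include <cstdio>
    #include <cstring>

    // Loose illustration of a hash-rate measurement: repeatedly double-SHA-256 an
    // 80-byte buffer (the size of a typical block header) and report kH/s.
    int main() {
        unsigned char header[80];
        std::memset(header, 0xAB, sizeof(header));
        unsigned char d1[SHA256_DIGEST_LENGTH], d2[SHA256_DIGEST_LENGTH];

        const long iterations = 2000000;
        auto start = std::chrono::steady_clock::now();
        for (long i = 0; i < iterations; ++i) {
            std::memcpy(header, &i, sizeof(i));        // vary the "nonce"
            SHA256(header, sizeof(header), d1);        // first pass
            SHA256(d1, sizeof(d1), d2);                // second pass (double SHA-256)
        }
        double secs = std::chrono::duration<double>(std::chrono::steady_clock::now() - start).count();
        std::printf("%.2f kH/s\n", iterations / secs / 1000.0);
        return 0;
    }

Link against OpenSSL's libcrypto (e.g. -lcrypto); the one-shot SHA256() call is the legacy API, still available in OpenSSL 3 though marked deprecated.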
Cpuminer-Opt 23.5 (kH/s, More Is Better):
Algorithm: Magi - b: 969.25, a: 973.65
Algorithm: Triple SHA-256, Onecoin - b: 106780, a: 106820
1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenVINO
This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.
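The numbers below come from OpenVINO's own benchmarking support; purely for reference, here is a minimal sketch of a single synchronous inference using the OpenVINO 2.0 C++ API, where "model.xml" is a placeholder path rather than one of the models tested.

    #include <openvino/openvino.hpp>
    #include <algorithm>
    #include <iostream>
    #include <memory>

    // Minimal OpenVINO 2.0 C++ API sketch: read an IR model, compile it for the
    // CPU plugin, run one synchronous inference with zero-filled input, and print
    // the number of output elements. Assumes a single f32 input; "model.xml" is a
    // placeholder path.
    int main() {
        ov::Core core;
        std::shared_ptr<ov::Model> model = core.read_model("model.xml");
        ov::CompiledModel compiled = core.compile_model(model, "CPU");

        ov::InferRequest request = compiled.create_infer_request();
        ov::Tensor input = request.get_input_tensor();
        std::fill_n(input.data<float>(), input.get_size(), 0.0f);   // dummy input data

        request.infer();   // synchronous inference

        ov::Tensor output = request.get_output_tensor();
        std::cout << "output elements: " << output.get_size() << std::endl;
        return 0;
    }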
OpenVINO 2023.2.dev (Device: CPU):
Model: Face Detection FP16 - FPS, More Is Better - b: 8.98, a: 8.98
Model: Face Detection FP16 - ms, Fewer Is Better - b: 1768.09 (min 1629.02 / max 1954.46), a: 1769.17 (min 1644.38 / max 1945.18)
Model: Person Detection FP16 - FPS, More Is Better - b: 62.81, a: 63.37
Model: Person Detection FP16 - ms, Fewer Is Better - b: 254.29 (min 96.61 / max 284.68), a: 252.40 (min 168.02 / max 287.37)
Model: Person Detection FP32 - FPS, More Is Better - b: 63.29, a: 63.23
Model: Person Detection FP32 - ms, Fewer Is Better - b: 252.37 (min 157.65 / max 280.9), a: 252.67 (min 153.81 / max 286.5)
Model: Vehicle Detection FP16 - FPS, More Is Better - b: 466.04, a: 465.19
Model: Vehicle Detection FP16 - ms, Fewer Is Better - b: 34.27 (min 22.45 / max 58.01), a: 34.33 (min 13.07 / max 56.75)
Model: Face Detection FP16-INT8 - FPS, More Is Better - b: 11.42, a: 11.43
Model: Face Detection FP16-INT8 - ms, Fewer Is Better - b: 1397.89 (min 1372.07 / max 1453.69), a: 1394.76 (min 1366.67 / max 1441.01)
Model: Face Detection Retail FP16 - FPS, More Is Better - b: 2559.91, a: 2556.07
Model: Face Detection Retail FP16 - ms, Fewer Is Better - b: 6.23 (min 3.31 / max 21.44), a: 6.24 (min 3.12 / max 21.07)
Model: Road Segmentation ADAS FP16 - FPS, More Is Better - b: 81.71, a: 81.55
Model: Road Segmentation ADAS FP16 - ms, Fewer Is Better - b: 195.62 (min 90.67 / max 231.58), a: 196.00 (min 144.14 / max 229.37)
Model: Vehicle Detection FP16-INT8 - FPS, More Is Better - b: 902.81, a: 900.08
Model: Vehicle Detection FP16-INT8 - ms, Fewer Is Better - b: 17.71 (min 13.46 / max 45.06), a: 17.76 (min 15.16 / max 40.98)
Model: Weld Porosity Detection FP16 - FPS, More Is Better - b: 847.73, a: 851.68
Model: Weld Porosity Detection FP16 - ms, Fewer Is Better - b: 18.85 (min 17.69 / max 42.24), a: 18.76 (min 10.52 / max 37.95)
Model: Face Detection Retail FP16-INT8 - FPS, More Is Better - b: 2907.71, a: 2914.91
Model: Face Detection Retail FP16-INT8 - ms, Fewer Is Better - b: 5.49 (min 3.75 / max 18.22), a: 5.48 (min 3.7 / max 17.66)
Model: Road Segmentation ADAS FP16-INT8 - FPS, More Is Better - b: 400.92, a: 406.48
Model: Road Segmentation ADAS FP16-INT8 - ms, Fewer Is Better - b: 39.86 (min 20.88 / max 77.55), a: 39.32 (min 27.44 / max 82.34)
Model: Machine Translation EN To DE FP16 - FPS, More Is Better - b: 93.13, a: 94.09
Model: Machine Translation EN To DE FP16 - ms, Fewer Is Better - b: 171.57 (min 137.68 / max 231.69), a: 169.89 (min 143.2 / max 226.88)
Model: Weld Porosity Detection FP16-INT8 - FPS, More Is Better - b: 1133.46, a: 1134.72
Model: Weld Porosity Detection FP16-INT8 - ms, Fewer Is Better - b: 28.21 (min 14.36 / max 58.35), a: 28.18 (min 14.37 / max 63.74)
Model: Person Vehicle Bike Detection FP16 - FPS, More Is Better - b: 1147.76, a: 1139.38
Model: Person Vehicle Bike Detection FP16 - ms, Fewer Is Better - b: 13.92 (min 7.89 / max 28.49), a: 14.02 (min 10.56 / max 30.78)
Model: Handwritten English Recognition FP16 - FPS, More Is Better - b: 330.63, a: 333.33
Model: Handwritten English Recognition FP16 - ms, Fewer Is Better - b: 96.70 (min 64.71 / max 148.58), a: 95.92 (min 59.6 / max 152.42)
Model: Age Gender Recognition Retail 0013 FP16 - FPS, More Is Better - b: 27049.12, a: 27213.10
Model: Age Gender Recognition Retail 0013 FP16 - ms, Fewer Is Better - b: 1.17 (min 0.65 / max 12.7), a: 1.16 (min 0.61 / max 12.3)
Model: Handwritten English Recognition FP16-INT8 - FPS, More Is Better - b: 364.05, a: 358.24
Model: Handwritten English Recognition FP16-INT8 - ms, Fewer Is Better - b: 87.83 (min 55.52 / max 132.16), a: 89.24 (min 56.23 / max 129.65)
Model: Age Gender Recognition Retail 0013 FP16-INT8 - FPS, More Is Better - b: 38988.78, a: 39083.44
Model: Age Gender Recognition Retail 0013 FP16-INT8 - ms, Fewer Is Better - b: 0.81 (min 0.49 / max 10.29), a: 0.81 (min 0.52 / max 7.57)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
a
Testing initiated at 9 December 2023 01:31 by user phoronix.
b
Testing initiated at 9 December 2023 06:44 by user phoronix.