Tests for a future article.
Ryzen 9 5950X
Processor: AMD Ryzen 9 5950X 16-Core @ 3.40GHz (16 Cores / 32 Threads), Motherboard: ASUS ROG CROSSHAIR VIII HERO (WI-FI) (4006 BIOS), Chipset: AMD Starship/Matisse, Memory: 32GB, Disk: 1000GB Sabrent Rocket 4.0 1TB, Graphics: AMD Radeon RX 6800 16GB (2475/1000MHz), Audio: AMD Navi 21 HDMI Audio, Monitor: ASUS MG28U, Network: Realtek RTL8125 2.5GbE + Intel I211 + Intel Wi-Fi 6 AX200
OS: Ubuntu 22.04, Kernel: 5.17.4-051704-generic (x86_64), Desktop: GNOME Shell 42.0, Display Server: X Server + Wayland, OpenGL: 4.6 Mesa 22.2.0-devel (git-092ac67 2022-04-21 jammy-oibaf-ppa) (LLVM 14.0.0 DRM 3.44), Vulkan: 1.3.211, Compiler: GCC 11.2.0, File-System: ext4, Screen Resolution: 3840x2160
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0xa201016
Python Notes: Python 3.10.4
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Ryzen 7 5800X3D
Processor: AMD Ryzen 7 5800X3D 8-Core @ 3.40GHz (8 Cores / 16 Threads), Motherboard: ASRock X570 Pro4 (P4.30 BIOS), Chipset: AMD Starship/Matisse, Memory: 16GB, Disk: 1000GB Sabrent Rocket 4.0 1TB, Graphics: AMD Radeon RX 6800 XT 16GB (2575/1000MHz), Audio: AMD Navi 21 HDMI Audio, Monitor: ASUS VP28U, Network: Intel I211
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0xa201205
Python Notes: Python 3.10.4
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Ryzen 7 5800X
Changed Processor to AMD Ryzen 7 5800X 8-Core @ 3.80GHz (8 Cores / 16 Threads).
Processor Change: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0xa201016
Ryzen 9 5900X
Processor: AMD Ryzen 9 5900X 12-Core @ 3.70GHz (12 Cores / 24 Threads), Motherboard: ASUS ROG CROSSHAIR VIII HERO (3904 BIOS), Chipset: AMD Starship/Matisse, Memory: 16GB, Disk: 1000GB Sabrent Rocket 4.0 1TB, Graphics: NVIDIA NV134 8GB, Audio: NVIDIA GP104 HD Audio, Monitor: ASUS MG28U, Network: Realtek RTL8125 2.5GbE + Intel I211
OS: Ubuntu 22.04, Kernel: 5.17.4-051704-generic (x86_64), Desktop: GNOME Shell 42.0, Display Server: X Server + Wayland, Display Driver: nouveau, OpenGL: 4.3 Mesa 22.2.0-devel (git-092ac67 2022-04-21 jammy-oibaf-ppa), Vulkan: 1.3.211, Compiler: GCC 11.2.0, File-System: ext4, Screen Resolution: 3840x2160
Core i9 12900K
Processor: Intel Core i9-12900K @ 5.20GHz (16 Cores / 24 Threads), Motherboard: ASUS ROG STRIX Z690-E GAMING WIFI (1003 BIOS), Chipset: Intel Device 7aa7, Memory: 32GB, Disk: 1000GB Sabrent Rocket 4.0 1TB, Graphics: AMD Radeon RX 6800 XT 16GB (2575/1000MHz), Audio: Intel Device 7ad0, Monitor: ASUS VP28U, Network: Intel I225-V + Intel Wi-Fi 6 AX210/AX211/AX411
OS: Ubuntu 22.04, Kernel: 5.17.4-051704-generic (x86_64), Desktop: GNOME Shell 42.0, Display Server: X Server + Wayland, OpenGL: 4.6 Mesa 22.2.0-devel (git-092ac67 2022-04-21 jammy-oibaf-ppa) (LLVM 14.0.0 DRM 3.44), Vulkan: 1.3.211, Compiler: GCC 11.2.0, File-System: ext4, Screen Resolution: 3840x2160
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0x18 - Thermald 2.4.9
Python Notes: Python 3.10.4
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
ONNX Runtime
ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.
ONNX Runtime 1.11 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel (Inferences Per Minute, more is better; OpenBenchmarking.org)
  Core i9 12900K:    362 (SE +/- 0.17, N = 3)
  Ryzen 7 5800X:    1107 (SE +/- 4.25, N = 3)
  Ryzen 7 5800X3D:  1223 (SE +/- 3.51, N = 3)
  Ryzen 9 5900X:    1421 (SE +/- 1.17, N = 3)
  Ryzen 9 5950X:    1540 (SE +/- 8.09, N = 3)
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt
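Each result in these tables reports a mean plus a standard error over N runs (e.g. "SE +/- 0.17, N = 3"). For reference, the standard error of the mean is the sample standard deviation divided by sqrt(N); a minimal sketch using made-up run values (not the actual benchmark samples):

```python
import math
import statistics

def standard_error(samples):
    """Standard error of the mean: sample standard deviation / sqrt(N)."""
    return statistics.stdev(samples) / math.sqrt(len(samples))

# Three hypothetical runs of a benchmark (illustrative values only)
runs = [1532.0, 1540.0, 1548.0]
mean = statistics.mean(runs)
se = standard_error(runs)
print(f"{mean:.0f} (SE +/- {se:.2f}, N = {len(runs)})")
```

A small SE relative to the mean (as in most results here) indicates the runs were consistent and the reported average is stable.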
ASKAP
ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve), along with some earlier ASKAP benchmarks for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.
ASKAP 1.0 - Test: tConvolve OpenMP - Gridding (Million Grid Points Per Second, more is better; OpenBenchmarking.org)
  Ryzen 7 5800X:    1709.25 (SE +/- 14.53, N = 15)
  Ryzen 9 5950X:    2755.72 (SE +/- 27.21, N = 6)
  Ryzen 9 5900X:    2936.85 (SE +/- 10.74, N = 6)
  Core i9 12900K:   4631.82 (SE +/- 34.51, N = 6)
  Ryzen 7 5800X3D:  6946.11 (SE +/- 78.12, N = 15)
  1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp
ECP-CANDLE
The CANDLE benchmark codes implement deep learning architectures relevant to problems in cancer. These architectures address problems at different biological scales, specifically the molecular, cellular, and population scales. Learn more via the OpenBenchmarking.org test page.
ECP-CANDLE 0.4 - Benchmark: P3B2 (Seconds, fewer is better; OpenBenchmarking.org)
  Core i9 12900K:   1503.26
  Ryzen 9 5900X:     667.28
  Ryzen 9 5950X:     654.35
  Ryzen 7 5800X3D:   553.75
  Ryzen 7 5800X:     390.27
ASKAP
ASKAP 1.0 - Test: tConvolve MT - Gridding (Million Grid Points Per Second, more is better; OpenBenchmarking.org)
  Ryzen 9 5950X:     784.51 (SE +/- 1.65, N = 3)
  Ryzen 9 5900X:     837.94 (SE +/- 0.90, N = 3)
  Ryzen 7 5800X:     854.71 (SE +/- 2.02, N = 3)
  Ryzen 7 5800X3D:   930.57 (SE +/- 2.64, N = 3)
  Core i9 12900K:   2721.15 (SE +/- 2.70, N = 3)
  1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp
LeelaChessZero
LeelaChessZero (lc0 / lczero) is a chess engine powered by neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.
LeelaChessZero 0.28 - Backend: BLAS (Nodes Per Second, more is better; OpenBenchmarking.org)
  Ryzen 9 5950X:     668 (SE +/- 2.73, N = 3)
  Ryzen 7 5800X:     867 (SE +/- 6.36, N = 3)
  Ryzen 9 5900X:     954 (SE +/- 11.02, N = 3)
  Ryzen 7 5800X3D:  1254 (SE +/- 6.08, N = 3)
  Core i9 12900K:   2201 (SE +/- 22.18, N = 3)
  1. (CXX) g++ options: -flto -pthread
LeelaChessZero 0.28 - Backend: Eigen (Nodes Per Second, more is better; OpenBenchmarking.org)
  Ryzen 9 5950X:     684 (SE +/- 7.32, N = 9)
  Ryzen 7 5800X:     854 (SE +/- 8.85, N = 9)
  Ryzen 9 5900X:     886 (SE +/- 7.53, N = 9)
  Ryzen 7 5800X3D:  1160 (SE +/- 8.89, N = 3)
  Core i9 12900K:   2161 (SE +/- 19.84, N = 7)
  1. (CXX) g++ options: -flto -pthread
oneDNN
This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.
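The oneDNN results below report an average time in ms together with a MIN value per configuration. As an illustration of that reporting convention only (this is a plain Python timing loop, not the actual benchdnn harness, and the workload is a stand-in):

```python
import time

def time_op(op, repeats=5):
    """Time an operation several times; return (average, minimum) in ms,
    the same pair of numbers the oneDNN results report."""
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()
        op()
        samples.append((time.perf_counter() - start) * 1000.0)
    return sum(samples) / len(samples), min(samples)

# Stand-in workload: a small reduction in pure Python
avg_ms, min_ms = time_op(lambda: sum(i * i for i in range(100_000)))
print(f"avg: {avg_ms:.3f} ms, MIN: {min_ms:.3f} ms")
```

The MIN is the best-case run; a large gap between MIN and the average usually points to run-to-run noise (frequency scaling, scheduling) rather than the kernel itself.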
oneDNN 2.6 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better; OpenBenchmarking.org)
  Ryzen 7 5800X:    1.457520 (SE +/- 0.002722, N = 5; MIN: 1.35; -lpthread)
  Core i9 12900K:   0.812047 (SE +/- 0.002976, N = 5; MIN: 0.79)
  Ryzen 7 5800X3D:  0.604780 (SE +/- 0.002108, N = 5; MIN: 0.58; -lpthread)
  Ryzen 9 5900X:    0.519733 (SE +/- 0.003942, N = 5; MIN: 0.47; -lpthread)
  Ryzen 9 5950X:    0.476097 (SE +/- 0.002527, N = 5; MIN: 0.42; -lpthread)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl
ECP-CANDLE
ECP-CANDLE 0.4 - Benchmark: P3B1 (Seconds, fewer is better; OpenBenchmarking.org)
  Ryzen 9 5950X:    1309.59
  Ryzen 9 5900X:    1174.47
  Ryzen 7 5800X:    1158.67
  Ryzen 7 5800X3D:  1023.32
  Core i9 12900K:    429.61
oneDNN
oneDNN 2.6 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better; OpenBenchmarking.org)
  Ryzen 9 5950X:    18.03210 (SE +/- 0.02368, N = 7; MIN: 17.6; -lpthread)
  Ryzen 9 5900X:    16.73820 (SE +/- 0.04249, N = 7; MIN: 16.08; -lpthread)
  Ryzen 7 5800X:    16.35340 (SE +/- 0.22593, N = 15; MIN: 14.78; -lpthread)
  Ryzen 7 5800X3D:  10.38000 (SE +/- 0.04088, N = 7; MIN: 9.78; -lpthread)
  Core i9 12900K:    6.00214 (SE +/- 0.00249, N = 7; MIN: 5.9)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl
ASKAP
ASKAP 1.0 - Test: tConvolve MT - Degridding (Million Grid Points Per Second, more is better; OpenBenchmarking.org)
  Ryzen 9 5950X:    1346.57 (SE +/- 0.92, N = 3)
  Ryzen 7 5800X:    1471.58 (SE +/- 5.57, N = 3)
  Ryzen 9 5900X:    1541.67 (SE +/- 3.45, N = 3)
  Ryzen 7 5800X3D:  1594.95 (SE +/- 1.83, N = 3)
  Core i9 12900K:   3872.03 (SE +/- 2.07, N = 3)
  1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp
OpenFOAM
OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). Learn more via the OpenBenchmarking.org test page.
OpenFOAM 8 - Input: Motorbike 60M (Seconds, fewer is better; OpenBenchmarking.org)
  Ryzen 9 5950X:    1382.54 (SE +/- 0.26, N = 3)
  Ryzen 9 5900X:    1277.55 (SE +/- 0.77, N = 3)
  Ryzen 7 5800X:    1270.72 (SE +/- 0.23, N = 3)
  Ryzen 7 5800X3D:  1090.21 (SE +/- 1.12, N = 3)
  Core i9 12900K:    487.80 (SE +/- 0.14, N = 3)
  All runs: -lfoamToVTK -llagrangian -lfileFormats
  1. (CXX) g++ options: -std=c++11 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -ldynamicMesh -lgenericPatchFields -lOpenFOAM -ldl -lm
Xcompact3d Incompact3d
Xcompact3d Incompact3d is a Fortran-MPI based, finite-difference, high-performance code for solving the incompressible Navier-Stokes equations and as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.
Xcompact3d Incompact3d 2021-03-11 - Input: input.i3d 193 Cells Per Direction (Seconds, fewer is better; OpenBenchmarking.org)
  Ryzen 9 5950X:    156.16 (SE +/- 0.16, N = 3)
  Ryzen 9 5900X:    144.47 (SE +/- 0.05, N = 3)
  Ryzen 7 5800X:    141.21 (SE +/- 1.17, N = 9)
  Ryzen 7 5800X3D:  126.86 (SE +/- 0.11, N = 3)
  Core i9 12900K:    55.61 (SE +/- 0.58, N = 3)
  1. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
ASKAP
ASKAP 1.0 - Test: tConvolve OpenMP - Degridding (Million Grid Points Per Second, more is better; OpenBenchmarking.org)
  Ryzen 9 5950X:    3214.58 (SE +/- 11.91, N = 6)
  Ryzen 7 5800X:    3367.67 (SE +/- 6.53, N = 15)
  Ryzen 9 5900X:    3732.72 (SE +/- 10.98, N = 6)
  Core i9 12900K:   7793.77 (SE +/- 37.29, N = 6)
  Ryzen 7 5800X3D:  8741.60 (SE +/- 38.17, N = 15)
  1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp
oneDNN
oneDNN 2.6 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, fewer is better; OpenBenchmarking.org)
  Ryzen 7 5800X:    8.93552 (SE +/- 0.01310, N = 5; MIN: 8.33; -lpthread)
  Ryzen 9 5950X:    7.79652 (SE +/- 0.05069, N = 5; MIN: 7.29; -lpthread)
  Ryzen 9 5900X:    7.54843 (SE +/- 0.02525, N = 5; MIN: 7.29; -lpthread)
  Ryzen 7 5800X3D:  6.55765 (SE +/- 0.01065, N = 5; MIN: 6.32; -lpthread)
  Core i9 12900K:   3.44643 (SE +/- 0.00441, N = 5; MIN: 3.4)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl
Xcompact3d Incompact3d
Xcompact3d Incompact3d 2021-03-11 - Input: input.i3d 129 Cells Per Direction (Seconds, fewer is better; OpenBenchmarking.org)
  Ryzen 7 5800X:    37.21 (SE +/- 0.02, N = 3)
  Ryzen 9 5950X:    33.95 (SE +/- 0.05, N = 3)
  Ryzen 9 5900X:    32.15 (SE +/- 0.32, N = 3)
  Ryzen 7 5800X3D:  27.18 (SE +/- 0.01, N = 3)
  Core i9 12900K:   14.55 (SE +/- 0.01, N = 4)
  1. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
ASKAP
ASKAP 1.0 - Test: Hogbom Clean OpenMP (Iterations Per Second, more is better; OpenBenchmarking.org)
  Ryzen 9 5950X:    220.04 (SE +/- 1.00, N = 4)
  Ryzen 7 5800X:    221.73 (SE +/- 0.57, N = 3)
  Ryzen 9 5900X:    238.67 (SE +/- 0.52, N = 4)
  Ryzen 7 5800X3D:  527.72 (SE +/- 1.80, N = 4)
  Core i9 12900K:   540.54 (SE +/- 0.00, N = 5)
  1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp
Mobile Neural Network
MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.
Mobile Neural Network 1.2 - Model: mobilenet-v1-1.0 (ms, fewer is better; OpenBenchmarking.org)
  Ryzen 9 5900X:    4.072 (SE +/- 0.037, N = 3; MIN: 3.95 / MAX: 4.31)
  Core i9 12900K:   2.891 (SE +/- 0.008, N = 3; MIN: 2.85 / MAX: 8.54)
  Ryzen 9 5950X:    2.639 (SE +/- 0.067, N = 3; MIN: 2.52 / MAX: 11.27)
  Ryzen 7 5800X:    1.816 (SE +/- 0.011, N = 3; MIN: 1.79 / MAX: 3.08)
  Ryzen 7 5800X3D:  1.676 (SE +/- 0.010, N = 3; MIN: 1.63 / MAX: 2.97)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
ONNX Runtime
ONNX Runtime 1.11 - Model: yolov4 - Device: CPU - Executor: Parallel (Inferences Per Minute, more is better; OpenBenchmarking.org)
  Ryzen 7 5800X:    273 (SE +/- 0.44, N = 3)
  Ryzen 9 5950X:    296 (SE +/- 0.44, N = 3)
  Ryzen 9 5900X:    299 (SE +/- 0.33, N = 3)
  Ryzen 7 5800X3D:  308 (SE +/- 0.60, N = 3)
  Core i9 12900K:   629 (SE +/- 1.09, N = 3)
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt
OpenFOAM
OpenFOAM 8 - Input: Motorbike 30M (Seconds, fewer is better; OpenBenchmarking.org)
  Ryzen 7 5800X:    177.60 (SE +/- 0.25, N = 3)
  Ryzen 9 5950X:     98.44 (SE +/- 0.24, N = 3)
  Ryzen 9 5900X:     96.16 (SE +/- 0.15, N = 3)
  Core i9 12900K:    84.47 (SE +/- 0.18, N = 3)
  Ryzen 7 5800X3D:   80.29 (SE +/- 0.32, N = 3)
  All runs: -lfoamToVTK -llagrangian -lfileFormats
  1. (CXX) g++ options: -std=c++11 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -ldynamicMesh -lgenericPatchFields -lOpenFOAM -ldl -lm
Mobile Neural Network
Mobile Neural Network 1.2 - Model: mobilenetV3 (ms, fewer is better; OpenBenchmarking.org)
  Ryzen 9 5950X:    2.391 (SE +/- 0.003, N = 3; MIN: 1.87 / MAX: 3.85)
  Ryzen 9 5900X:    1.838 (SE +/- 0.018, N = 3; MIN: 1.79 / MAX: 2.07)
  Core i9 12900K:   1.174 (SE +/- 0.007, N = 3; MIN: 1.15 / MAX: 2.06)
  Ryzen 7 5800X:    1.156 (SE +/- 0.005, N = 3; MIN: 1.14 / MAX: 1.73)
  Ryzen 7 5800X3D:  1.082 (SE +/- 0.006, N = 3; MIN: 1.06 / MAX: 2.25)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
ASKAP
ASKAP 1.0 - Test: tConvolve MPI - Gridding (Mpix/sec, more is better; OpenBenchmarking.org)
  Ryzen 7 5800X:    4256.05
  Core i9 12900K:   4472.73
  Ryzen 9 5950X:    6728.09
  Ryzen 9 5900X:    8258.02
  Ryzen 7 5800X3D:  8746.52
  SE values as reported (N = 3): +/- 45.52, 12.67, 58.16, 0.00 (only four given for the five results)
  1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp
WebP2 Image Encode
This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as ultimately the successor to WebP. Compared to WebP, WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading. Learn more via the OpenBenchmarking.org test page.
WebP2 Image Encode 20220422 - Encode Settings: Quality 100, Lossless Compression (Seconds, fewer is better; OpenBenchmarking.org)
  Ryzen 7 5800X:    974.59 (SE +/- 1.21, N = 3)
  Ryzen 7 5800X3D:  864.94 (SE +/- 2.17, N = 3)
  Ryzen 9 5900X:    614.81 (SE +/- 4.10, N = 3)
  Ryzen 9 5950X:    536.77 (SE +/- 2.59, N = 3)
  Core i9 12900K:   476.45 (SE +/- 0.96, N = 3)
  1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl
WebP2 Image Encode
WebP2 Image Encode 20220422 - Encode Settings: Quality 100, Compression Effort 5 (Seconds, fewer is better; OpenBenchmarking.org)
  Ryzen 7 5800X:    5.875 (SE +/- 0.004, N = 7)
  Ryzen 7 5800X3D:  5.169 (SE +/- 0.003, N = 7)
  Ryzen 9 5900X:    3.709 (SE +/- 0.003, N = 8)
  Ryzen 9 5950X:    3.249 (SE +/- 0.006, N = 9)
  Core i9 12900K:   2.936 (SE +/- 0.003, N = 9)
  1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl
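The "Compression Effort" settings trade encode time for output size, which is why the effort-7 runs below take minutes while effort 5 takes seconds. WebP2 itself is not readily importable here, so as a rough stand-in, zlib's compression levels illustrate the same kind of knob (an illustration of the trade-off, not the WebP2 encoder):

```python
import time
import zlib

# ~1 MB of mildly repetitive data; illustrative input only
data = bytes(range(256)) * 4000

# zlib's level plays the role of WebP2's compression-effort setting:
# higher level = more CPU time, typically smaller output.
for level in (1, 6, 9):
    start = time.perf_counter()
    out = zlib.compress(data, level)
    elapsed = (time.perf_counter() - start) * 1000.0
    print(f"level {level}: {len(out)} bytes in {elapsed:.1f} ms")
```

The absolute numbers depend on the input, but the direction of the trade-off matches what the effort 5 vs. effort 7 results show.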
TNN
TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.
TNN 0.3 - Target: CPU - Model: SqueezeNet v1.1 (ms, fewer is better; OpenBenchmarking.org)
  Ryzen 7 5800X:    266.19 (SE +/- 0.18, N = 3; MIN: 265.87 / MAX: 266.68)
  Ryzen 7 5800X3D:  222.33 (SE +/- 0.03, N = 4; MIN: 222.15 / MAX: 222.66)
  Ryzen 9 5900X:    213.78 (SE +/- 1.66, N = 4; MIN: 210.63 / MAX: 219.97)
  Ryzen 9 5950X:    213.32 (SE +/- 1.16, N = 4; MIN: 209.34 / MAX: 215.21)
  Core i9 12900K:   133.97 (SE +/- 0.06, N = 5; MIN: 133.46 / MAX: 134.81)
  1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl
WebP2 Image Encode
WebP2 Image Encode 20220422 - Encode Settings: Quality 75, Compression Effort 7 (Seconds, fewer is better; OpenBenchmarking.org)
  Ryzen 7 5800X:    199.65 (SE +/- 0.74, N = 3)
  Ryzen 7 5800X3D:  178.01 (SE +/- 0.62, N = 3)
  Ryzen 9 5900X:    129.73 (SE +/- 0.51, N = 3)
  Ryzen 9 5950X:    110.28 (SE +/- 1.48, N = 3)
  Core i9 12900K:   101.57 (SE +/- 0.23, N = 3)
  1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl
WebP2 Image Encode 20220422 - Encode Settings: Quality 95, Compression Effort 7 (Seconds, fewer is better; OpenBenchmarking.org)
  Ryzen 7 5800X:    420.03 (SE +/- 1.68, N = 3)
  Ryzen 7 5800X3D:  376.96 (SE +/- 0.77, N = 3)
  Ryzen 9 5900X:    268.61 (SE +/- 0.76, N = 3)
  Ryzen 9 5950X:    234.93 (SE +/- 1.17, N = 3)
  Core i9 12900K:   215.58 (SE +/- 0.51, N = 3)
  1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl
ASKAP
ASKAP 1.0 - Test: tConvolve MPI - Degridding (Mpix/sec, more is better; OpenBenchmarking.org)
  Ryzen 7 5800X:    3976.30 (SE +/- 34.79, N = 3)
  Core i9 12900K:   4198.51 (SE +/- 19.39, N = 3)
  Ryzen 7 5800X3D:  6453.22 (SE +/- 53.33, N = 3)
  Ryzen 9 5950X:    6643.64 (SE +/- 48.56, N = 3)
  Ryzen 9 5900X:    7668.05 (SE +/- 49.47, N = 3)
  1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp
ONNX Runtime
ONNX Runtime 1.11 - Model: bertsquad-12 - Device: CPU - Executor: Parallel (Inferences Per Minute, more is better; OpenBenchmarking.org)
  Ryzen 7 5800X:    480 (SE +/- 0.29, N = 3)
  Ryzen 7 5800X3D:  518 (SE +/- 1.17, N = 3)
  Ryzen 9 5900X:    540 (SE +/- 0.60, N = 3)
  Ryzen 9 5950X:    554 (SE +/- 0.29, N = 3)
  Core i9 12900K:   919 (SE +/- 11.29, N = 3)
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt
oneDNN
oneDNN 2.6 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better; OpenBenchmarking.org)
  Ryzen 7 5800X:    2.03600 (SE +/- 0.00040, N = 3; MIN: 2.01; -lpthread)
  Ryzen 7 5800X3D:  1.77641 (SE +/- 0.00274, N = 3; MIN: 1.74; -lpthread)
  Core i9 12900K:   1.35212 (SE +/- 0.00773, N = 3; MIN: 1.28)
  Ryzen 9 5900X:    1.30049 (SE +/- 0.00489, N = 3; MIN: 1.2; -lpthread)
  Ryzen 9 5950X:    1.06926 (SE +/- 0.00299, N = 3; MIN: 0.96; -lpthread)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl
Mobile Neural Network
Mobile Neural Network 1.2 - Model: MobileNetV2_224 (ms, fewer is better; OpenBenchmarking.org)
  Ryzen 9 5950X:    3.275 (SE +/- 0.014, N = 3; MIN: 3.21 / MAX: 10.86)
  Ryzen 9 5900X:    2.975 (SE +/- 0.045, N = 3; MIN: 2.88 / MAX: 3.44)
  Core i9 12900K:   2.410 (SE +/- 0.020, N = 3; MIN: 2.36 / MAX: 3.86)
  Ryzen 7 5800X:    1.942 (SE +/- 0.029, N = 3; MIN: 1.9 / MAX: 3.6)
  Ryzen 7 5800X3D:  1.831 (SE +/- 0.011, N = 3; MIN: 1.79 / MAX: 2.92)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
oneDNN
oneDNN 2.6 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, fewer is better; OpenBenchmarking.org)
  Ryzen 7 5800X:    6.37023 (SE +/- 0.00140, N = 9; MIN: 6.32; -lpthread)
  Ryzen 7 5800X3D:  5.57109 (SE +/- 0.00483, N = 9; MIN: 5.45; -lpthread)
  Core i9 12900K:   5.25394 (SE +/- 0.00117, N = 9; MIN: 5.16)
  Ryzen 9 5900X:    4.38773 (SE +/- 0.00484, N = 9; MIN: 4.17; -lpthread)
  Ryzen 9 5950X:    3.62609 (SE +/- 0.00224, N = 9; MIN: 3.43; -lpthread)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl
WebP2 Image Encode
WebP2 Image Encode 20220422 - Encode Settings: Default (Seconds, fewer is better; OpenBenchmarking.org)
  Ryzen 7 5800X:    3.619 (SE +/- 0.006, N = 8)
  Ryzen 7 5800X3D:  3.170 (SE +/- 0.007, N = 9)
  Ryzen 9 5900X:    2.439 (SE +/- 0.008, N = 10)
  Ryzen 9 5950X:    2.165 (SE +/- 0.013, N = 10)
  Core i9 12900K:   2.062 (SE +/- 0.010, N = 11)
  1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl
oneDNN
oneDNN 2.6 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better; OpenBenchmarking.org)
  Ryzen 7 5800X:    2.83913 (SE +/- 0.00474, N = 9; MIN: 2.8; -lpthread)
  Ryzen 7 5800X3D:  2.48393 (SE +/- 0.00503, N = 9; MIN: 2.41; -lpthread)
  Core i9 12900K:   2.22336 (SE +/- 0.00121, N = 9; MIN: 2.2)
  Ryzen 9 5900X:    1.99742 (SE +/- 0.00298, N = 9; MIN: 1.82; -lpthread)
  Ryzen 9 5950X:    1.61899 (SE +/- 0.00702, N = 9; MIN: 1.45; -lpthread)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.6 Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU Ryzen 7 5800X Ryzen 7 5800X3D Core i9 12900K Ryzen 9 5900X Ryzen 9 5950X 0.3166 0.6332 0.9498 1.2664 1.583 SE +/- 0.000645, N = 4 SE +/- 0.001247, N = 4 SE +/- 0.008609, N = 4 SE +/- 0.007658, N = 4 SE +/- 0.006926, N = 15 1.407280 1.237850 1.052000 1.006941 0.823009 -lpthread - MIN: 1.39 -lpthread - MIN: 1.21 MIN: 1.01 -lpthread - MIN: 0.94 -lpthread - MIN: 0.7 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl
TNN TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.
TNN 0.3 - Target: CPU - Model: DenseNet (ms, fewer is better)
  Ryzen 7 5800X:    3016.48  (SE +/- 0.87, N = 3, MIN: 2943.1 / MAX: 3087.97)
  Ryzen 7 5800X3D:  2609.84  (SE +/- 1.54, N = 3, MIN: 2559.5 / MAX: 2657.73)
  Ryzen 9 5900X:    2563.76  (SE +/- 3.11, N = 3, MIN: 2481.34 / MAX: 2640.2)
  Ryzen 9 5950X:    2425.59  (SE +/- 10.08, N = 3, MIN: 2339.59 / MAX: 2518.62)
  Core i9 12900K:   1792.97  (SE +/- 1.88, N = 3, MIN: 1751.55 / MAX: 1868.31)
  1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl
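To put a spread like the DenseNet result in perspective, a lower time converts directly into a speedup ratio. A small sketch of the arithmetic, using the fastest (Core i9 12900K, 1792.97 ms) and slowest (Ryzen 7 5800X, 3016.48 ms) figures above:

```python
def speedup(slower_ms, faster_ms):
    """How many times faster the lower time is than the higher one."""
    return slower_ms / faster_ms

# Values taken from the TNN CPU/DenseNet results above.
r7_5800x, i9_12900k = 3016.48, 1792.97
print(f"{speedup(r7_5800x, i9_12900k):.2f}x")  # roughly 1.68x
```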
ECP-CANDLE The CANDLE benchmark codes implement deep learning architectures relevant to problems in cancer. These architectures address problems at different biological scales, specifically problems at the molecular, cellular and population scales. Learn more via the OpenBenchmarking.org test page.
ECP-CANDLE 0.4 - Benchmark: P1B2 (seconds, fewer is better)
  Ryzen 7 5800X:    34.84
  Ryzen 9 5950X:    31.39
  Ryzen 9 5900X:    30.24
  Ryzen 7 5800X3D:  29.99
  Core i9 12900K:   20.94
Mobile Neural Network MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.
Mobile Neural Network 1.2 - Model: squeezenetv1.1 (ms, fewer is better)
  Ryzen 9 5950X:    3.988  (SE +/- 0.098, N = 3, MIN: 3.72 / MAX: 4.79)
  Ryzen 9 5900X:    3.394  (SE +/- 0.098, N = 3, MIN: 3.15 / MAX: 4.21)
  Ryzen 7 5800X:    2.805  (SE +/- 0.017, N = 3, MIN: 2.76 / MAX: 10.42)
  Ryzen 7 5800X3D:  2.590  (SE +/- 0.006, N = 3, MIN: 2.55 / MAX: 4.48)
  Core i9 12900K:   2.405  (SE +/- 0.049, N = 3, MIN: 2.33 / MAX: 3.56)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
ONNX Runtime ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inference and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.
ONNX Runtime 1.11 - Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel (inferences per minute, more is better)
  Ryzen 7 5800X:    67   (SE +/- 0.29, N = 3)
  Ryzen 7 5800X3D:  72   (SE +/- 0.17, N = 3)
  Ryzen 9 5900X:    85   (SE +/- 0.00, N = 3)
  Ryzen 9 5950X:    88   (SE +/- 0.33, N = 3)
  Core i9 12900K:   111  (SE +/- 0.44, N = 3)
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt
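The ONNX Runtime results are given in inferences per minute, while most of the other tests here report milliseconds; converting between the two is straightforward. A minimal sketch of the conversion, using the fcn-resnet101-11 parallel-executor figures above:

```python
def ipm_to_ms(inferences_per_minute):
    """Average milliseconds per inference from an inferences-per-minute rate."""
    return 60_000.0 / inferences_per_minute

# Values taken from the fcn-resnet101-11 parallel-executor results above.
for cpu, ipm in [("Core i9 12900K", 111), ("Ryzen 7 5800X", 67)]:
    print(f"{cpu}: ~{ipm_to_ms(ipm):.0f} ms per inference")
```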
TNN 0.3 - Target: CPU - Model: SqueezeNet v2 (ms, fewer is better)
  Ryzen 7 5800X:    63.61  (SE +/- 0.12, N = 8, MIN: 62.79 / MAX: 64.28)
  Ryzen 7 5800X3D:  53.22  (SE +/- 0.13, N = 9, MIN: 52.43 / MAX: 54.16)
  Ryzen 9 5950X:    50.94  (SE +/- 0.08, N = 9, MIN: 50.34 / MAX: 52.46)
  Ryzen 9 5900X:    50.85  (SE +/- 0.22, N = 9, MIN: 49.98 / MAX: 52.59)
  Core i9 12900K:   38.92  (SE +/- 0.08, N = 10, MIN: 38.37 / MAX: 39.92)
  1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl
ONNX Runtime 1.11 - Model: GPT-2 - Device: CPU - Executor: Standard (inferences per minute, more is better)
  Ryzen 7 5800X:    6832   (SE +/- 104.18, N = 12)
  Ryzen 9 5950X:    7062   (SE +/- 58.87, N = 8)
  Ryzen 9 5900X:    7862   (SE +/- 10.85, N = 3)
  Ryzen 7 5800X3D:  8826   (SE +/- 14.95, N = 3)
  Core i9 12900K:   11082  (SE +/- 20.71, N = 3)
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt
TNN 0.3 - Target: CPU - Model: MobileNet v2 (ms, fewer is better)
  Ryzen 7 5800X:    272.04  (SE +/- 0.18, N = 3, MIN: 270.94 / MAX: 276.3)
  Ryzen 7 5800X3D:  233.18  (SE +/- 0.14, N = 4, MIN: 232.19 / MAX: 237.22)
  Ryzen 9 5950X:    224.26  (SE +/- 0.71, N = 4, MIN: 219.36 / MAX: 242.57)
  Ryzen 9 5900X:    224.20  (SE +/- 0.47, N = 4, MIN: 218.68 / MAX: 249.25)
  Core i9 12900K:   170.58  (SE +/- 0.31, N = 4, MIN: 157.87 / MAX: 209.34)
  1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl
ONNX Runtime 1.11 - Model: super-resolution-10 - Device: CPU - Executor: Parallel (inferences per minute, more is better)
  Ryzen 7 5800X:    4103  (SE +/- 10.11, N = 3)
  Core i9 12900K:   4516  (SE +/- 46.82, N = 4)
  Ryzen 7 5800X3D:  4606  (SE +/- 15.67, N = 3)
  Ryzen 9 5900X:    5591  (SE +/- 5.61, N = 3)
  Ryzen 9 5950X:    6436  (SE +/- 22.98, N = 3)
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt
NCNN NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.
NCNN 20210720 - Target: CPU - Model: alexnet (ms, fewer is better)
  Ryzen 7 5800X:    11.83  (SE +/- 0.01, N = 15, MIN: 11.63 / MAX: 18.46)
  Ryzen 9 5950X:    11.08  (SE +/- 0.13, N = 4, MIN: 10.7 / MAX: 12.6)
  Ryzen 9 5900X:    10.01  (SE +/- 0.01, N = 3, MIN: 9.92 / MAX: 12.23)
  Ryzen 7 5800X3D:  9.62   (SE +/- 0.03, N = 15, MIN: 8.9 / MAX: 11.14)
  Core i9 12900K:   7.58   (SE +/- 0.05, N = 15, MIN: 7.18 / MAX: 9.34)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
oneDNN 2.6 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms, fewer is better)
  Ryzen 9 5950X:    3.89647  (SE +/- 0.01755, N = 4, MIN: 3.66)
  Ryzen 9 5900X:    3.44548  (SE +/- 0.02871, N = 4, MIN: 3.05)
  Ryzen 7 5800X:    3.29136  (SE +/- 0.00733, N = 4, MIN: 3.12)
  Ryzen 7 5800X3D:  2.89717  (SE +/- 0.00793, N = 4, MIN: 2.81)
  Core i9 12900K:   2.63611  (SE +/- 0.00329, N = 4, MIN: 2.5)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl (Ryzen results additionally linked with -lpthread)
ONNX Runtime 1.11 - Model: GPT-2 - Device: CPU - Executor: Parallel (inferences per minute, more is better)
  Ryzen 9 5950X:    5534  (SE +/- 11.18, N = 3)
  Ryzen 7 5800X:    5621  (SE +/- 8.85, N = 3)
  Ryzen 9 5900X:    5687  (SE +/- 11.00, N = 3)
  Ryzen 7 5800X3D:  6919  (SE +/- 5.78, N = 3)
  Core i9 12900K:   8088  (SE +/- 20.34, N = 3)
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt
NCNN 20210720 - Target: CPU - Model: resnet50 (ms, fewer is better)
  Ryzen 9 5950X:    24.40  (SE +/- 0.34, N = 4, MIN: 23.71 / MAX: 27.09)
  Ryzen 9 5900X:    21.21  (SE +/- 0.04, N = 3, MIN: 20.94 / MAX: 23.21)
  Ryzen 7 5800X:    20.22  (SE +/- 0.07, N = 15, MIN: 19.74 / MAX: 28.22)
  Ryzen 7 5800X3D:  18.13  (SE +/- 0.09, N = 15, MIN: 17.64 / MAX: 24.72)
  Core i9 12900K:   16.84  (SE +/- 0.07, N = 15, MIN: 16.32 / MAX: 21.73)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
Caffe This is a benchmark of the Caffe deep learning framework; it currently supports the AlexNet and GoogleNet models with execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.
Caffe 2020-02-13 - Model: GoogleNet - Acceleration: CPU - Iterations: 200 (milliseconds, fewer is better)
  Ryzen 9 5950X:    193710  (SE +/- 215.28, N = 3)
  Ryzen 7 5800X:    179146  (SE +/- 39.00, N = 3)
  Ryzen 9 5900X:    178725  (SE +/- 46.44, N = 3)
  Ryzen 7 5800X3D:  162022  (SE +/- 248.50, N = 3)
  Core i9 12900K:   134045  (SE +/- 116.45, N = 3)
  1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lcrypto -lcurl -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas
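Assuming the reported Caffe figure is the total wall time for the configured iteration count (an assumption about how this profile reports its result), dividing by the iteration count gives an approximate per-iteration cost. A minimal sketch using the GoogleNet 200-iteration numbers above:

```python
def per_iteration_ms(total_ms, iterations):
    """Approximate cost of one iteration, assuming total_ms covers all iterations."""
    return total_ms / iterations

# GoogleNet, 200 iterations, values from the chart above.
print(f"Core i9 12900K: {per_iteration_ms(134045, 200):.1f} ms/iter")
print(f"Ryzen 9 5950X:  {per_iteration_ms(193710, 200):.1f} ms/iter")
```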
Open Porous Media Git This is a test of Open Porous Media, a set of open-source tools for simulating flow and transport of fluids in porous media. This test profile builds OPM and its dependencies from upstream Git. Learn more via the OpenBenchmarking.org test page.
Open Porous Media Git - OPM Benchmark: Flow MPI Norne - Threads: 8 (seconds, fewer is better)
  Core i9 12900K:   515.71  (SE +/- 0.26, N = 3)
  Ryzen 7 5800X:    447.63  (SE +/- 0.04, N = 3)
  Ryzen 7 5800X3D:  361.91  (SE +/- 0.25, N = 3)
  Ryzen 9 5900X:    361.84  (SE +/- 0.23, N = 3)
  Ryzen 9 5950X:    357.86  (SE +/- 0.16, N = 3)

Open Porous Media Git - OPM Benchmark: Flow MPI Norne-4C MSW - Threads: 8 (seconds, fewer is better)
  Core i9 12900K:   825.56  (SE +/- 0.65, N = 3)
  Ryzen 7 5800X:    733.91  (SE +/- 0.32, N = 3)
  Ryzen 7 5800X3D:  581.46  (SE +/- 0.28, N = 3)
  Ryzen 9 5900X:    581.35  (SE +/- 0.16, N = 3)
  Ryzen 9 5950X:    575.13  (SE +/- 0.32, N = 3)

1. (CXX) g++ options: -pipe -pthread -fopenmp -O3 -mtune=native -UNDEBUG -lm -ldl -lrt. Build times: Core i9 12900K built Thu Apr 28 06:45:36 PM EDT 2022; all Ryzen systems built Mon Apr 25 06:10:54 PM EDT 2022.
Caffe 2020-02-13 - Model: AlexNet - Acceleration: CPU - Iterations: 100 (milliseconds, fewer is better)
  Ryzen 9 5950X:    36658  (SE +/- 30.55, N = 3)
  Ryzen 9 5900X:    34329  (SE +/- 38.11, N = 3)
  Ryzen 7 5800X:    33905  (SE +/- 7.06, N = 3)
  Ryzen 7 5800X3D:  30222  (SE +/- 24.84, N = 3)
  Core i9 12900K:   25590  (SE +/- 17.32, N = 3)

Caffe 2020-02-13 - Model: GoogleNet - Acceleration: CPU - Iterations: 100 (milliseconds, fewer is better)
  Ryzen 9 5950X:    96624  (SE +/- 34.42, N = 3)
  Ryzen 7 5800X:    89825  (SE +/- 34.64, N = 3)
  Ryzen 9 5900X:    89492  (SE +/- 260.55, N = 3)
  Ryzen 7 5800X3D:  80862  (SE +/- 158.22, N = 3)
  Core i9 12900K:   67833  (SE +/- 741.46, N = 4)

Caffe 2020-02-13 - Model: AlexNet - Acceleration: CPU - Iterations: 200 (milliseconds, fewer is better)
  Ryzen 9 5950X:    73352  (SE +/- 178.17, N = 3)
  Ryzen 9 5900X:    68676  (SE +/- 57.49, N = 3)
  Ryzen 7 5800X:    67661  (SE +/- 49.12, N = 3)
  Ryzen 7 5800X3D:  60494  (SE +/- 94.00, N = 3)
  Core i9 12900K:   51713  (SE +/- 323.04, N = 3)

All three runs: 1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lcrypto -lcurl -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas
NCNN 20210720 - Target: CPU - Model: squeezenet_ssd (ms, fewer is better)
  Ryzen 7 5800X:    16.76  (SE +/- 0.06, N = 15, MIN: 16.11 / MAX: 23.01)
  Ryzen 9 5950X:    14.55  (SE +/- 0.09, N = 4, MIN: 13.65 / MAX: 21.26)
  Ryzen 9 5900X:    13.51  (SE +/- 0.02, N = 3, MIN: 13.16 / MAX: 20.71)
  Core i9 12900K:   13.27  (SE +/- 0.17, N = 15, MIN: 12.19 / MAX: 43.4)
  Ryzen 7 5800X3D:  12.44  (SE +/- 0.05, N = 15, MIN: 12.03 / MAX: 14.14)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
oneDNN 2.6 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, fewer is better)
  Ryzen 7 5800X:    1859.14  (SE +/- 5.60, N = 3, MIN: 1846.97)
  Ryzen 9 5950X:    1814.13  (SE +/- 8.84, N = 3, MIN: 1783.13)
  Ryzen 9 5900X:    1785.05  (SE +/- 8.68, N = 3, MIN: 1762.05)
  Core i9 12900K:   1613.81  (SE +/- 0.28, N = 3, MIN: 1608.23)
  Ryzen 7 5800X3D:  1382.16  (SE +/- 2.31, N = 3, MIN: 1372.35)

oneDNN 2.6 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  Ryzen 7 5800X:    1864.54  (SE +/- 4.89, N = 3, MIN: 1851.66)
  Ryzen 9 5950X:    1783.85  (SE +/- 22.20, N = 3, MIN: 1730.18)
  Ryzen 9 5900X:    1742.06  (SE +/- 13.72, N = 3, MIN: 1715.13)
  Core i9 12900K:   1616.06  (SE +/- 1.95, N = 3, MIN: 1608.68)
  Ryzen 7 5800X3D:  1387.86  (SE +/- 0.61, N = 3, MIN: 1380.9)

oneDNN 2.6 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
  Ryzen 7 5800X:    1847.70  (SE +/- 9.79, N = 3, MIN: 1820.32)
  Ryzen 9 5950X:    1820.40  (SE +/- 19.76, N = 5, MIN: 1761.07)
  Ryzen 9 5900X:    1776.19  (SE +/- 5.06, N = 3, MIN: 1756.45)
  Core i9 12900K:   1617.10  (SE +/- 2.95, N = 3, MIN: 1608.6)
  Ryzen 7 5800X3D:  1385.79  (SE +/- 1.81, N = 3, MIN: 1375.42)

All three harnesses: 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl (Ryzen results additionally linked with -lpthread)
Open Porous Media Git - OPM Benchmark: Flow MPI Extra - Threads: 8 (seconds, fewer is better)
  Ryzen 7 5800X:    1027.17  (SE +/- 0.32, N = 3)
  Core i9 12900K:   891.78   (SE +/- 0.86, N = 3)
  Ryzen 9 5900X:    810.33   (SE +/- 0.19, N = 3)
  Ryzen 9 5950X:    808.05   (SE +/- 0.30, N = 3)
  Ryzen 7 5800X3D:  781.98   (SE +/- 1.20, N = 3)

Open Porous Media Git - OPM Benchmark: Flow MPI Extra - Threads: 4 (seconds, fewer is better)
  Ryzen 7 5800X:    829.49  (SE +/- 0.80, N = 3)
  Ryzen 9 5900X:    691.38  (SE +/- 0.23, N = 3)
  Ryzen 9 5950X:    690.62  (SE +/- 0.65, N = 3)
  Core i9 12900K:   671.56  (SE +/- 0.81, N = 3)
  Ryzen 7 5800X3D:  643.96  (SE +/- 0.34, N = 3)

Open Porous Media Git - OPM Benchmark: Flow MPI Norne-4C MSW - Threads: 4 (seconds, fewer is better)
  Ryzen 7 5800X:    454.34  (SE +/- 0.49, N = 3)
  Core i9 12900K:   451.00  (SE +/- 0.53, N = 3)
  Ryzen 9 5900X:    378.33  (SE +/- 0.78, N = 3)
  Ryzen 9 5950X:    377.03  (SE +/- 0.39, N = 3)
  Ryzen 7 5800X3D:  357.33  (SE +/- 0.21, N = 3)

1. (CXX) g++ options: -pipe -pthread -fopenmp -O3 -mtune=native -UNDEBUG -lm -ldl -lrt. Build times: Core i9 12900K built Thu Apr 28 06:45:36 PM EDT 2022; all Ryzen systems built Mon Apr 25 06:10:54 PM EDT 2022.
Open Porous Media Git - OPM Benchmark: Flow MPI Norne-4C MSW - Threads: 2 (seconds, fewer is better)
  Ryzen 7 5800X:    511.31  (SE +/- 0.29, N = 3)
  Core i9 12900K:   475.95  (SE +/- 0.98, N = 3)
  Ryzen 9 5900X:    441.14  (SE +/- 0.35, N = 3)
  Ryzen 9 5950X:    441.07  (SE +/- 0.66, N = 3)
  Ryzen 7 5800X3D:  403.35  (SE +/- 0.45, N = 3)

Open Porous Media Git - OPM Benchmark: Flow MPI Norne-4C MSW - Threads: 1 (seconds, fewer is better)
  Ryzen 7 5800X:    601.31  (SE +/- 1.44, N = 3)
  Ryzen 9 5900X:    572.60  (SE +/- 0.96, N = 3)
  Ryzen 9 5950X:    569.14  (SE +/- 2.73, N = 3)
  Core i9 12900K:   537.38  (SE +/- 1.75, N = 3)
  Ryzen 7 5800X3D:  476.91  (SE +/- 0.27, N = 3)

Open Porous Media Git - OPM Benchmark: Flow MPI Norne - Threads: 1 (seconds, fewer is better)
  Ryzen 7 5800X:    281.47  (SE +/- 0.17, N = 3)
  Ryzen 9 5900X:    273.89  (SE +/- 0.23, N = 3)
  Ryzen 9 5950X:    270.81  (SE +/- 1.36, N = 3)
  Core i9 12900K:   247.89  (SE +/- 0.07, N = 3)
  Ryzen 7 5800X3D:  224.10  (SE +/- 0.10, N = 3)

Open Porous Media Git - OPM Benchmark: Flow MPI Norne - Threads: 4 (seconds, fewer is better)
  Core i9 12900K:   280.85  (SE +/- 0.28, N = 3)
  Ryzen 7 5800X:    275.40  (SE +/- 0.16, N = 3)
  Ryzen 9 5900X:    233.63  (SE +/- 0.18, N = 3)
  Ryzen 9 5950X:    232.61  (SE +/- 0.31, N = 3)
  Ryzen 7 5800X3D:  223.97  (SE +/- 0.14, N = 3)

Open Porous Media Git - OPM Benchmark: Flow MPI Norne - Threads: 2 (seconds, fewer is better)
  Ryzen 7 5800X:    234.63  (SE +/- 0.53, N = 3)
  Core i9 12900K:   223.20  (SE +/- 0.42, N = 3)
  Ryzen 9 5900X:    203.98  (SE +/- 0.16, N = 3)
  Ryzen 9 5950X:    203.89  (SE +/- 0.42, N = 3)
  Ryzen 7 5800X3D:  187.49  (SE +/- 0.14, N = 3)

Open Porous Media Git - OPM Benchmark: Flow MPI Extra - Threads: 2 (seconds, fewer is better)
  Ryzen 7 5800X:    799.40  (SE +/- 2.38, N = 3)
  Core i9 12900K:   717.04  (SE +/- 0.67, N = 3)
  Ryzen 9 5900X:    713.34  (SE +/- 1.56, N = 3)
  Ryzen 9 5950X:    712.62  (SE +/- 1.57, N = 3)
  Ryzen 7 5800X3D:  672.48  (SE +/- 0.69, N = 3)

Open Porous Media Git - OPM Benchmark: Flow MPI Extra - Threads: 1 (seconds, fewer is better)
  Ryzen 7 5800X:    1154.09  (SE +/- 5.46, N = 3)
  Ryzen 9 5950X:    1074.33  (SE +/- 3.75, N = 3)
  Ryzen 9 5900X:    1073.71  (SE +/- 4.75, N = 3)
  Core i9 12900K:   1053.38  (SE +/- 2.12, N = 3)
  Ryzen 7 5800X3D:  997.34   (SE +/- 2.03, N = 3)

1. (CXX) g++ options: -pipe -pthread -fopenmp -O3 -mtune=native -UNDEBUG -lm -ldl -lrt. Build times: Core i9 12900K built Thu Apr 28 06:45:36 PM EDT 2022; all Ryzen systems built Mon Apr 25 06:10:54 PM EDT 2022.
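With OPM Flow run at 1, 2, 4, and 8 threads, these results make it easy to gauge parallel scaling. A small sketch of speedup and efficiency for the Ryzen 9 5950X on Flow MPI Norne, using the times reported above (270.81 s at 1 thread, 203.89 at 2, 232.61 at 4, 357.86 at 8 — note the run actually gets slower beyond 2 threads):

```python
def scaling(times_by_threads):
    """Speedup and parallel efficiency relative to the 1-thread time."""
    base = times_by_threads[1]
    return {t: (base / s, base / s / t) for t, s in times_by_threads.items()}

# Ryzen 9 5950X, OPM Flow MPI Norne times (seconds) from the charts above.
norne_5950x = {1: 270.81, 2: 203.89, 4: 232.61, 8: 357.86}
for threads, (speedup, eff) in scaling(norne_5950x).items():
    print(f"{threads} thread(s): {speedup:.2f}x speedup, {eff:.0%} efficiency")
```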
oneDNN 2.6 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
  Ryzen 7 5800X:    3065.26  (SE +/- 1.56, N = 3, MIN: 3058.73)
  Ryzen 9 5900X:    2904.33  (SE +/- 32.41, N = 3, MIN: 2837.45)
  Core i9 12900K:   2881.32  (SE +/- 0.76, N = 3, MIN: 2872.06)
  Ryzen 9 5950X:    2750.18  (SE +/- 26.41, N = 3, MIN: 2685.91)
  Ryzen 7 5800X3D:  2691.24  (SE +/- 2.34, N = 3, MIN: 2678.39)

oneDNN 2.6 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, fewer is better)
  Ryzen 7 5800X:    3053.30  (SE +/- 2.27, N = 3, MIN: 3045.76)
  Core i9 12900K:   2881.72  (SE +/- 0.27, N = 3, MIN: 2874.65)
  Ryzen 9 5900X:    2873.45  (SE +/- 23.67, N = 3, MIN: 2830.55)
  Ryzen 9 5950X:    2703.33  (SE +/- 10.92, N = 3, MIN: 2665.5)
  Ryzen 7 5800X3D:  2683.16  (SE +/- 5.12, N = 3, MIN: 2665.66)

oneDNN 2.6 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  Ryzen 7 5800X:    3055.84  (SE +/- 0.89, N = 3, MIN: 3050.91)
  Ryzen 9 5900X:    2908.28  (SE +/- 15.10, N = 3, MIN: 2862.45)
  Core i9 12900K:   2881.10  (SE +/- 1.61, N = 3, MIN: 2869.73)
  Ryzen 9 5950X:    2745.44  (SE +/- 23.91, N = 3, MIN: 2684.63)
  Ryzen 7 5800X3D:  2691.67  (SE +/- 1.77, N = 3, MIN: 2680.17)

All three harnesses: 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl (Ryzen results additionally linked with -lpthread)
Mobile Neural Network 1.2 - Model: inception-v3 (ms, fewer is better)
  Ryzen 9 5950X:    25.73  (SE +/- 0.13, N = 3, MIN: 25.09 / MAX: 33.94)
  Ryzen 7 5800X:    25.69  (SE +/- 0.09, N = 3, MIN: 25.45 / MAX: 31.44)
  Core i9 12900K:   24.30  (SE +/- 0.66, N = 3, MIN: 22.9 / MAX: 36.09)
  Ryzen 9 5900X:    23.69  (SE +/- 0.37, N = 3, MIN: 22.99 / MAX: 31.36)
  Ryzen 7 5800X3D:  23.19  (SE +/- 0.09, N = 3, MIN: 22.93 / MAX: 31)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
ONNX Runtime 1.11 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard (inferences per minute, more is better)
  Ryzen 7 5800X:    1042  (SE +/- 1.33, N = 3)
  Ryzen 9 5900X:    1519  (SE +/- 45.67, N = 12)
  Ryzen 9 5950X:    1668  (SE +/- 46.80, N = 9)
  Ryzen 7 5800X3D:  1939  (SE +/- 3.42, N = 3)
  Core i9 12900K:   1975  (SE +/- 0.67, N = 3)

ONNX Runtime 1.11 - Model: bertsquad-12 - Device: CPU - Executor: Standard (inferences per minute, more is better)
  Ryzen 7 5800X:    557  (SE +/- 0.17, N = 3)
  Ryzen 9 5950X:    800  (SE +/- 1.20, N = 3)
  Ryzen 7 5800X3D:  822  (SE +/- 57.02, N = 12)
  Ryzen 9 5900X:    949  (SE +/- 4.80, N = 3)
  Core i9 12900K:   988  (SE +/- 0.76, N = 3)

ONNX Runtime 1.11 - Model: super-resolution-10 - Device: CPU - Executor: Standard (inferences per minute, more is better)
  Ryzen 7 5800X:    3628  (SE +/- 10.27, N = 3)
  Ryzen 7 5800X3D:  4107  (SE +/- 11.77, N = 3)
  Core i9 12900K:   4747  (SE +/- 10.33, N = 3)
  Ryzen 9 5950X:    6169  (SE +/- 21.53, N = 3)
  Ryzen 9 5900X:    7545  (SE +/- 318.33, N = 12)

ONNX Runtime 1.11 - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard (inferences per minute, more is better)
  Ryzen 7 5800X:    49   (SE +/- 0.00, N = 3)
  Ryzen 9 5950X:    98   (SE +/- 4.73, N = 12)
  Core i9 12900K:   101  (SE +/- 0.00, N = 3)
  Ryzen 7 5800X3D:  107  (SE +/- 0.17, N = 3)
  Ryzen 9 5900X:    115  (SE +/- 0.44, N = 3)

ONNX Runtime 1.11 - Model: yolov4 - Device: CPU - Executor: Standard (inferences per minute, more is better)
  Ryzen 7 5800X:    431  (SE +/- 28.87, N = 12)
  Ryzen 9 5950X:    487  (SE +/- 17.40, N = 9)
  Ryzen 9 5900X:    539  (SE +/- 17.59, N = 12)
  Ryzen 7 5800X3D:  572  (SE +/- 42.62, N = 12)
  Core i9 12900K:   693  (SE +/- 1.42, N = 3)

All five models: 1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt
Mobile Neural Network 1.2 - Model: SqueezeNetV1.0 (ms, fewer is better)
  Ryzen 9 5950X:    5.231  (SE +/- 0.113, N = 3, MIN: 4.95 / MAX: 6.51)
  Ryzen 9 5900X:    4.758  (SE +/- 0.058, N = 3, MIN: 4.59 / MAX: 12.51)
  Ryzen 7 5800X:    4.536  (SE +/- 0.032, N = 3, MIN: 4.47 / MAX: 5.71)
  Ryzen 7 5800X3D:  4.213  (SE +/- 0.010, N = 3, MIN: 4.16 / MAX: 5.46)
  Core i9 12900K:   4.151  (SE +/- 0.177, N = 3, MIN: 3.91 / MAX: 6.32)

Mobile Neural Network 1.2 - Model: resnet-v2-50 (ms, fewer is better)
  Ryzen 9 5900X:    24.50  (SE +/- 0.14, N = 3, MIN: 23.86 / MAX: 52.44)
  Core i9 12900K:   23.09  (SE +/- 1.14, N = 3, MIN: 21.7 / MAX: 30.17)
  Ryzen 9 5950X:    20.52  (SE +/- 0.14, N = 3, MIN: 19.84 / MAX: 24.05)
  Ryzen 7 5800X:    18.44  (SE +/- 0.03, N = 3, MIN: 18.25 / MAX: 25.86)
  Ryzen 7 5800X3D:  16.36  (SE +/- 0.10, N = 3, MIN: 16 / MAX: 24.17)

Both models: 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.org ms, Fewer Is Better NCNN 20210720 Target: CPU - Model: regnety_400m Ryzen 9 5950X Ryzen 9 5900X Core i9 12900K Ryzen 7 5800X Ryzen 7 5800X3D 3 6 9 12 15 SE +/- 0.08, N = 4 SE +/- 0.01, N = 3 SE +/- 0.22, N = 15 SE +/- 0.01, N = 15 SE +/- 0.01, N = 15 9.61 8.43 7.39 5.93 5.18 MIN: 9.37 / MAX: 11.03 MIN: 8.36 / MAX: 8.75 MIN: 6.17 / MAX: 27.59 MIN: 5.83 / MAX: 7.68 MIN: 5.07 / MAX: 12.17 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20210720 Target: CPU - Model: yolov4-tiny Ryzen 9 5900X Ryzen 9 5950X Ryzen 7 5800X Core i9 12900K Ryzen 7 5800X3D 5 10 15 20 25 SE +/- 0.40, N = 3 SE +/- 0.32, N = 4 SE +/- 0.21, N = 15 SE +/- 0.33, N = 15 SE +/- 0.19, N = 15 20.63 20.49 19.78 15.86 14.80 MIN: 19.48 / MAX: 21.6 MIN: 19.6 / MAX: 21.74 MIN: 18.64 / MAX: 21.1 MIN: 14.24 / MAX: 21 MIN: 14 / MAX: 16.98 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20210720 Target: CPU - Model: resnet18 Ryzen 9 5950X Ryzen 7 5800X Ryzen 9 5900X Ryzen 7 5800X3D Core i9 12900K 4 8 12 16 20 SE +/- 0.17, N = 4 SE +/- 0.03, N = 15 SE +/- 0.05, N = 3 SE +/- 0.05, N = 15 SE +/- 0.15, N = 15 14.28 13.02 12.50 10.27 9.66 MIN: 13.94 / MAX: 16.83 MIN: 12.77 / MAX: 32.91 MIN: 12.28 / MAX: 12.82 MIN: 9.71 / MAX: 12.09 MIN: 7.55 / MAX: 14.7 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
NCNN 20210720 - Target: CPU - Model: vgg16 (OpenBenchmarking.org; ms, fewer is better)
  Ryzen 9 5950X    56.55  (SE +/- 0.08, N = 4; MIN: 55.53 / MAX: 62.98)
  Ryzen 7 5800X    55.97  (SE +/- 0.07, N = 15; MIN: 54.91 / MAX: 64.62)
  Ryzen 9 5900X    50.75  (SE +/- 0.09, N = 3; MIN: 49.97 / MAX: 60.2)
  Ryzen 7 5800X3D  42.60  (SE +/- 0.13, N = 15; MIN: 41.52 / MAX: 50.92)
  Core i9 12900K   28.24  (SE +/- 0.48, N = 15; MIN: 25.72 / MAX: 45.6)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
NCNN 20210720 - Target: CPU - Model: googlenet (OpenBenchmarking.org; ms, fewer is better)
  Ryzen 9 5950X    12.92  (SE +/- 0.28, N = 4; MIN: 12.09 / MAX: 15.32)
  Ryzen 9 5900X    11.44  (SE +/- 0.02, N = 3; MIN: 11.28 / MAX: 11.82)
  Ryzen 7 5800X    10.22  (SE +/- 0.02, N = 15; MIN: 9.79 / MAX: 18.21)
  Core i9 12900K   9.94   (SE +/- 0.21, N = 15; MIN: 7.91 / MAX: 14.3)
  Ryzen 7 5800X3D  7.31   (SE +/- 0.05, N = 15; MIN: 7.05 / MAX: 14.4)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
NCNN 20210720 - Target: CPU - Model: blazeface (OpenBenchmarking.org; ms, fewer is better)
  Ryzen 9 5950X    1.80  (SE +/- 0.02, N = 4; MIN: 1.75 / MAX: 2.15)
  Ryzen 9 5900X    1.63  (SE +/- 0.00, N = 3; MIN: 1.61 / MAX: 1.81)
  Core i9 12900K   1.46  (SE +/- 0.05, N = 15; MIN: 1.15 / MAX: 2.96)
  Ryzen 7 5800X    1.22  (SE +/- 0.00, N = 15; MIN: 1.19 / MAX: 2.1)
  Ryzen 7 5800X3D  1.06  (SE +/- 0.00, N = 15; MIN: 1.04 / MAX: 4.31)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
NCNN 20210720 - Target: CPU - Model: efficientnet-b0 (OpenBenchmarking.org; ms, fewer is better)
  Core i9 12900K   5.34  (SE +/- 0.11, N = 15; MIN: 4.35 / MAX: 9.28)
  Ryzen 9 5950X    5.23  (SE +/- 0.06, N = 4; MIN: 5.09 / MAX: 6.68)
  Ryzen 9 5900X    4.77  (SE +/- 0.01, N = 3; MIN: 4.7 / MAX: 5)
  Ryzen 7 5800X    3.60  (SE +/- 0.01, N = 15; MIN: 3.53 / MAX: 5.22)
  Ryzen 7 5800X3D  3.01  (SE +/- 0.01, N = 15; MIN: 2.93 / MAX: 13.07)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
NCNN 20210720 - Target: CPU - Model: mnasnet (OpenBenchmarking.org; ms, fewer is better)
  Ryzen 9 5950X    3.87  (SE +/- 0.05, N = 4; MIN: 3.76 / MAX: 10.7)
  Ryzen 9 5900X    3.44  (SE +/- 0.01, N = 3; MIN: 3.39 / MAX: 3.72)
  Core i9 12900K   3.10  (SE +/- 0.07, N = 15; MIN: 2.66 / MAX: 4.79)
  Ryzen 7 5800X    2.24  (SE +/- 0.00, N = 15; MIN: 2.2 / MAX: 3.68)
  Ryzen 7 5800X3D  2.01  (SE +/- 0.00, N = 15; MIN: 1.97 / MAX: 2.76)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
NCNN 20210720 - Target: CPU - Model: shufflenet-v2 (OpenBenchmarking.org; ms, fewer is better)
  Ryzen 9 5950X    4.16  (SE +/- 0.01, N = 4; MIN: 4.05 / MAX: 4.9)
  Ryzen 9 5900X    3.88  (SE +/- 0.01, N = 3; MIN: 3.82 / MAX: 4.06)
  Core i9 12900K   3.10  (SE +/- 0.08, N = 14; MIN: 2.68 / MAX: 4.51)
  Ryzen 7 5800X    2.35  (SE +/- 0.00, N = 15; MIN: 2.31 / MAX: 3.74)
  Ryzen 7 5800X3D  2.11  (SE +/- 0.01, N = 15; MIN: 2.07 / MAX: 3)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
NCNN 20210720 - Target: CPU-v3-v3 - Model: mobilenet-v3 (OpenBenchmarking.org; ms, fewer is better)
  Ryzen 9 5950X    3.77  (SE +/- 0.00, N = 4; MIN: 3.7 / MAX: 4.73)
  Ryzen 9 5900X    3.46  (SE +/- 0.01, N = 3; MIN: 3.39 / MAX: 3.66)
  Core i9 12900K   2.90  (SE +/- 0.06, N = 15; MIN: 2.53 / MAX: 4.55)
  Ryzen 7 5800X    2.28  (SE +/- 0.00, N = 15; MIN: 2.22 / MAX: 3.92)
  Ryzen 7 5800X3D  1.90  (SE +/- 0.00, N = 15; MIN: 1.84 / MAX: 2.38)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
NCNN 20210720 - Target: CPU-v2-v2 - Model: mobilenet-v2 (OpenBenchmarking.org; ms, fewer is better)
  Ryzen 9 5950X    4.29  (SE +/- 0.01, N = 4; MIN: 4.15 / MAX: 7.55)
  Ryzen 9 5900X    3.91  (SE +/- 0.01, N = 3; MIN: 3.82 / MAX: 4.11)
  Core i9 12900K   3.41  (SE +/- 0.12, N = 15; MIN: 2.72 / MAX: 5.86)
  Ryzen 7 5800X    2.61  (SE +/- 0.01, N = 15; MIN: 2.54 / MAX: 3.77)
  Ryzen 7 5800X3D  2.13  (SE +/- 0.00, N = 15; MIN: 2.03 / MAX: 2.88)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
NCNN 20210720 - Target: CPU - Model: mobilenet (OpenBenchmarking.org; ms, fewer is better)
  Ryzen 9 5950X    12.27  (SE +/- 0.15, N = 4; MIN: 11.71 / MAX: 18.81)
  Ryzen 7 5800X    11.63  (SE +/- 0.13, N = 15; MIN: 11.06 / MAX: 12.93)
  Ryzen 9 5900X    11.42  (SE +/- 0.01, N = 3; MIN: 11.16 / MAX: 18.48)
  Core i9 12900K   11.11  (SE +/- 0.26, N = 15; MIN: 8.87 / MAX: 13.96)
  Ryzen 7 5800X3D  7.62   (SE +/- 0.08, N = 15; MIN: 7.25 / MAX: 9.92)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
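A single headline figure for the NCNN results is conventionally obtained as the geometric mean of per-model ratios, the standard way to average speedups across benchmarks. The article itself does not compute this, so the sketch below is illustrative; it uses the mean latencies from the NCNN tables above to estimate the Ryzen 7 5800X3D's overall advantage over the Ryzen 9 5950X:

```python
import math

# Mean latencies (ms, fewer is better) from the NCNN tables above:
# (model, Ryzen 9 5950X, Ryzen 7 5800X3D)
results = [
    ("regnety_400m", 9.61, 5.18), ("yolov4-tiny", 20.49, 14.80),
    ("resnet18", 14.28, 10.27), ("vgg16", 56.55, 42.60),
    ("googlenet", 12.92, 7.31), ("blazeface", 1.80, 1.06),
    ("efficientnet-b0", 5.23, 3.01), ("mnasnet", 3.87, 2.01),
    ("shufflenet-v2", 4.16, 2.11), ("mobilenet-v3", 3.77, 1.90),
    ("mobilenet-v2", 4.29, 2.13), ("mobilenet", 12.27, 7.62),
]

# Per-model speedup of the 5800X3D over the 5950X, aggregated with the
# geometric mean so that no single model dominates the average.
speedups = [r5950x / r5800x3d for _, r5950x, r5800x3d in results]
geomean = math.exp(sum(map(math.log, speedups)) / len(speedups))
print(f"geometric-mean speedup: {geomean:.2f}x")
```

On these twelve models the 5800X3D comes out ahead in every case, with the per-model advantage ranging from roughly 1.3x (vgg16) to about 2x (mobilenet-v2).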
oneDNN
This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.
oneDNN 2.6 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (OpenBenchmarking.org; ms, fewer is better)
  Core i9 12900K   8.73750  (SE +/- 0.16860, N = 12; MIN: 4.15)
  Ryzen 7 5800X    8.34203  (SE +/- 0.11645, N = 15; MIN: 5.88; -lpthread)
  Ryzen 7 5800X3D  7.27623  (SE +/- 0.06264, N = 3; MIN: 5.11; -lpthread)
  Ryzen 9 5900X    5.38689  (SE +/- 0.07639, N = 3; MIN: 4; -lpthread)
  Ryzen 9 5950X    4.75533  (SE +/- 0.22726, N = 15; MIN: 3.38; -lpthread)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl
oneDNN 2.6 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (OpenBenchmarking.org; ms, fewer is better)
  Ryzen 7 5800X    18.75350  (SE +/- 0.12899, N = 7; MIN: 18.34; -lpthread)
  Ryzen 9 5950X    16.70510  (SE +/- 0.00896, N = 7; MIN: 16.31; -lpthread)
  Ryzen 9 5900X    16.13980  (SE +/- 0.29208, N = 15; MIN: 15.35; -lpthread)
  Ryzen 7 5800X3D  12.62270  (SE +/- 0.01917, N = 7; MIN: 12.28; -lpthread)
  Core i9 12900K   5.87536   (SE +/- 0.00220, N = 7; MIN: 5.78)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl
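As a quick check of the convolution result above, the Core i9 12900K's margin over the next-fastest chip works out as follows; the values are taken from the chart and the arithmetic is illustrative:

```python
# Mean times (ms, fewer is better) from the oneDNN convolution chart above
i9_12900k = 5.87536
r7_5800x3d = 12.62270  # next-fastest CPU in that chart

# For a time-based metric, the speedup is the ratio of the slower
# time to the faster one.
ratio = r7_5800x3d / i9_12900k
print(f"Core i9 12900K advantage: {ratio:.2f}x "
      f"({(ratio - 1) * 100:.0f}% faster)")
```

This roughly 2.1x gap is far larger than in the NCNN tests, suggesting oneDNN's code paths are particularly well tuned for Alder Lake.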
Ryzen 9 5950X Processor: AMD Ryzen 9 5950X 16-Core @ 3.40GHz (16 Cores / 32 Threads), Motherboard: ASUS ROG CROSSHAIR VIII HERO (WI-FI) (4006 BIOS), Chipset: AMD Starship/Matisse, Memory: 32GB, Disk: 1000GB Sabrent Rocket 4.0 1TB, Graphics: AMD Radeon RX 6800 16GB (2475/1000MHz), Audio: AMD Navi 21 HDMI Audio, Monitor: ASUS MG28U, Network: Realtek RTL8125 2.5GbE + Intel I211 + Intel Wi-Fi 6 AX200
OS: Ubuntu 22.04, Kernel: 5.17.4-051704-generic (x86_64), Desktop: GNOME Shell 42.0, Display Server: X Server + Wayland, OpenGL: 4.6 Mesa 22.2.0-devel (git-092ac67 2022-04-21 jammy-oibaf-ppa) (LLVM 14.0.0 DRM 3.44), Vulkan: 1.3.211, Compiler: GCC 11.2.0, File-System: ext4, Screen Resolution: 3840x2160
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0xa201016
Python Notes: Python 3.10.4
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 26 April 2022 04:52 by user phoronix.
Ryzen 7 5800X3D Processor: AMD Ryzen 7 5800X3D 8-Core @ 3.40GHz (8 Cores / 16 Threads), Motherboard: ASRock X570 Pro4 (P4.30 BIOS), Chipset: AMD Starship/Matisse, Memory: 16GB, Disk: 1000GB Sabrent Rocket 4.0 1TB, Graphics: AMD Radeon RX 6800 XT 16GB (2575/1000MHz), Audio: AMD Navi 21 HDMI Audio, Monitor: ASUS VP28U, Network: Intel I211
OS: Ubuntu 22.04, Kernel: 5.17.4-051704-generic (x86_64), Desktop: GNOME Shell 42.0, Display Server: X Server + Wayland, OpenGL: 4.6 Mesa 22.2.0-devel (git-092ac67 2022-04-21 jammy-oibaf-ppa) (LLVM 14.0.0 DRM 3.44), Vulkan: 1.3.211, Compiler: GCC 11.2.0, File-System: ext4, Screen Resolution: 3840x2160
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0xa201205
Python Notes: Python 3.10.4
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 26 April 2022 19:07 by user phoronix.
Ryzen 7 5800X Processor: AMD Ryzen 7 5800X 8-Core @ 3.80GHz (8 Cores / 16 Threads), Motherboard: ASRock X570 Pro4 (P4.30 BIOS), Chipset: AMD Starship/Matisse, Memory: 16GB, Disk: 1000GB Sabrent Rocket 4.0 1TB, Graphics: AMD Radeon RX 6800 XT 16GB (2575/1000MHz), Audio: AMD Navi 21 HDMI Audio, Monitor: ASUS VP28U, Network: Intel I211
OS: Ubuntu 22.04, Kernel: 5.17.4-051704-generic (x86_64), Desktop: GNOME Shell 42.0, Display Server: X Server + Wayland, OpenGL: 4.6 Mesa 22.2.0-devel (git-092ac67 2022-04-21 jammy-oibaf-ppa) (LLVM 14.0.0 DRM 3.44), Vulkan: 1.3.211, Compiler: GCC 11.2.0, File-System: ext4, Screen Resolution: 3840x2160
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0xa201016
Python Notes: Python 3.10.4
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 27 April 2022 07:43 by user phoronix.
Ryzen 9 5900X Processor: AMD Ryzen 9 5900X 12-Core @ 3.70GHz (12 Cores / 24 Threads), Motherboard: ASUS ROG CROSSHAIR VIII HERO (3904 BIOS), Chipset: AMD Starship/Matisse, Memory: 16GB, Disk: 1000GB Sabrent Rocket 4.0 1TB, Graphics: NVIDIA NV134 8GB, Audio: NVIDIA GP104 HD Audio, Monitor: ASUS MG28U, Network: Realtek RTL8125 2.5GbE + Intel I211
OS: Ubuntu 22.04, Kernel: 5.17.4-051704-generic (x86_64), Desktop: GNOME Shell 42.0, Display Server: X Server + Wayland, Display Driver: nouveau, OpenGL: 4.3 Mesa 22.2.0-devel (git-092ac67 2022-04-21 jammy-oibaf-ppa), Vulkan: 1.3.211, Compiler: GCC 11.2.0, File-System: ext4, Screen Resolution: 3840x2160
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0xa201016
Python Notes: Python 3.10.4
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 28 April 2022 04:39 by user phoronix.
Core i9 12900K Processor: Intel Core i9-12900K @ 5.20GHz (16 Cores / 24 Threads), Motherboard: ASUS ROG STRIX Z690-E GAMING WIFI (1003 BIOS), Chipset: Intel Device 7aa7, Memory: 32GB, Disk: 1000GB Sabrent Rocket 4.0 1TB, Graphics: AMD Radeon RX 6800 XT 16GB (2575/1000MHz), Audio: Intel Device 7ad0, Monitor: ASUS VP28U, Network: Intel I225-V + Intel Wi-Fi 6 AX210/AX211/AX411
OS: Ubuntu 22.04, Kernel: 5.17.4-051704-generic (x86_64), Desktop: GNOME Shell 42.0, Display Server: X Server + Wayland, OpenGL: 4.6 Mesa 22.2.0-devel (git-092ac67 2022-04-21 jammy-oibaf-ppa) (LLVM 14.0.0 DRM 3.44), Vulkan: 1.3.211, Compiler: GCC 11.2.0, File-System: ext4, Screen Resolution: 3840x2160
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0x18 - Thermald 2.4.9
Python Notes: Python 3.10.4
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 28 April 2022 19:02 by user phoronix.