Tests for a future article.
Ryzen 9 5950X Processor: AMD Ryzen 9 5950X 16-Core @ 3.40GHz (16 Cores / 32 Threads), Motherboard: ASUS ROG CROSSHAIR VIII HERO (WI-FI) (4006 BIOS), Chipset: AMD Starship/Matisse, Memory: 32GB, Disk: 1000GB Sabrent Rocket 4.0 1TB, Graphics: AMD Radeon RX 6800 16GB (2475/1000MHz), Audio: AMD Navi 21 HDMI Audio, Monitor: ASUS MG28U, Network: Realtek RTL8125 2.5GbE + Intel I211 + Intel Wi-Fi 6 AX200
OS: Ubuntu 22.04, Kernel: 5.17.4-051704-generic (x86_64), Desktop: GNOME Shell 42.0, Display Server: X Server + Wayland, OpenGL: 4.6 Mesa 22.2.0-devel (git-092ac67 2022-04-21 jammy-oibaf-ppa) (LLVM 14.0.0 DRM 3.44), Vulkan: 1.3.211, Compiler: GCC 11.2.0, File-System: ext4, Screen Resolution: 3840x2160
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0xa201016
Python Notes: Python 3.10.4
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Ryzen 7 5800X3D Processor: AMD Ryzen 7 5800X3D 8-Core @ 3.40GHz (8 Cores / 16 Threads), Motherboard: ASRock X570 Pro4 (P4.30 BIOS), Chipset: AMD Starship/Matisse, Memory: 16GB, Disk: 1000GB Sabrent Rocket 4.0 1TB, Graphics: AMD Radeon RX 6800 XT 16GB (2575/1000MHz), Audio: AMD Navi 21 HDMI Audio, Monitor: ASUS VP28U, Network: Intel I211
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0xa201205
Python Notes: Python 3.10.4
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Ryzen 7 5800X: Changed Processor to AMD Ryzen 7 5800X 8-Core @ 3.80GHz (8 Cores / 16 Threads).
Processor Change: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0xa201016
Ryzen 9 5900X Processor: AMD Ryzen 9 5900X 12-Core @ 3.70GHz (12 Cores / 24 Threads), Motherboard: ASUS ROG CROSSHAIR VIII HERO (3904 BIOS), Chipset: AMD Starship/Matisse, Memory: 16GB, Disk: 1000GB Sabrent Rocket 4.0 1TB, Graphics: NVIDIA NV134 8GB, Audio: NVIDIA GP104 HD Audio, Monitor: ASUS MG28U, Network: Realtek RTL8125 2.5GbE + Intel I211
OS: Ubuntu 22.04, Kernel: 5.17.4-051704-generic (x86_64), Desktop: GNOME Shell 42.0, Display Server: X Server + Wayland, Display Driver: nouveau, OpenGL: 4.3 Mesa 22.2.0-devel (git-092ac67 2022-04-21 jammy-oibaf-ppa), Vulkan: 1.3.211, Compiler: GCC 11.2.0, File-System: ext4, Screen Resolution: 3840x2160
Core i9 12900K Processor: Intel Core i9-12900K @ 5.20GHz (16 Cores / 24 Threads), Motherboard: ASUS ROG STRIX Z690-E GAMING WIFI (1003 BIOS), Chipset: Intel Device 7aa7, Memory: 32GB, Disk: 1000GB Sabrent Rocket 4.0 1TB, Graphics: AMD Radeon RX 6800 XT 16GB (2575/1000MHz), Audio: Intel Device 7ad0, Monitor: ASUS VP28U, Network: Intel I225-V + Intel Wi-Fi 6 AX210/AX211/AX411
OS: Ubuntu 22.04, Kernel: 5.17.4-051704-generic (x86_64), Desktop: GNOME Shell 42.0, Display Server: X Server + Wayland, OpenGL: 4.6 Mesa 22.2.0-devel (git-092ac67 2022-04-21 jammy-oibaf-ppa) (LLVM 14.0.0 DRM 3.44), Vulkan: 1.3.211, Compiler: GCC 11.2.0, File-System: ext4, Screen Resolution: 3840x2160
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0x18 - Thermald 2.4.9
Python Notes: Python 3.10.4
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
ONNX Runtime ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.
ONNX Runtime 1.11 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel
Inferences Per Minute (More Is Better):
  Ryzen 9 5950X:    1540  (SE +/- 8.09, N = 3)
  Ryzen 9 5900X:    1421  (SE +/- 1.17, N = 3)
  Ryzen 7 5800X3D:  1223  (SE +/- 3.51, N = 3)
  Ryzen 7 5800X:    1107  (SE +/- 4.25, N = 3)
  Core i9 12900K:    362  (SE +/- 0.17, N = 3)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt
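Each result above is reported as a mean with "SE +/- x, N = y", the standard error of the mean across N runs. A minimal sketch of how such a value is derived (the per-run numbers below are hypothetical illustrations, not data from these results):

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical per-run scores (Inferences Per Minute) for one CPU.
runs = [1532.0, 1540.0, 1548.0]

n = len(runs)
se = stdev(runs) / sqrt(n)  # standard error of the mean

print(f"{mean(runs):.0f} (SE +/- {se:.2f}, N = {n})")  # → 1540 (SE +/- 4.62, N = 3)
```

A small SE relative to the mean (as in most charts here) indicates the run-to-run variation is negligible next to the gaps between CPUs.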
ASKAP ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve), along with some earlier ASKAP benchmarks for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.
ASKAP 1.0 - Test: tConvolve OpenMP - Gridding
Million Grid Points Per Second (More Is Better):
  Ryzen 7 5800X3D:  6946.11  (SE +/- 78.12, N = 15)
  Core i9 12900K:   4631.82  (SE +/- 34.51, N = 6)
  Ryzen 9 5900X:    2936.85  (SE +/- 10.74, N = 6)
  Ryzen 9 5950X:    2755.72  (SE +/- 27.21, N = 6)
  Ryzen 7 5800X:    1709.25  (SE +/- 14.53, N = 15)
1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp
ECP-CANDLE The CANDLE benchmark codes implement deep learning architectures relevant to problems in cancer. These architectures address problems at different biological scales, specifically problems at the molecular, cellular and population scales. Learn more via the OpenBenchmarking.org test page.
ECP-CANDLE 0.4 - Benchmark: P3B2
Seconds (Fewer Is Better):
  Ryzen 7 5800X:     390.27
  Ryzen 7 5800X3D:   553.75
  Ryzen 9 5950X:     654.35
  Ryzen 9 5900X:     667.28
  Core i9 12900K:   1503.26
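Since P3B2 reports seconds (fewer is better), the relative speedup between two systems is simply the ratio of their times. Using the numbers above:

```python
# ECP-CANDLE P3B2 run times in seconds, from the results above.
times = {
    "Ryzen 7 5800X": 390.27,
    "Core i9 12900K": 1503.26,
}

# For a "fewer is better" metric, speedup = slower time / faster time.
speedup = times["Core i9 12900K"] / times["Ryzen 7 5800X"]
print(f"The 5800X finishes P3B2 {speedup:.2f}x faster than the 12900K")  # → 3.85x
```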
ASKAP
ASKAP 1.0 - Test: tConvolve MT - Gridding
Million Grid Points Per Second (More Is Better):
  Core i9 12900K:   2721.15  (SE +/- 2.70, N = 3)
  Ryzen 7 5800X3D:   930.57  (SE +/- 2.64, N = 3)
  Ryzen 7 5800X:     854.71  (SE +/- 2.02, N = 3)
  Ryzen 9 5900X:     837.94  (SE +/- 0.90, N = 3)
  Ryzen 9 5950X:     784.51  (SE +/- 1.65, N = 3)
1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp
LeelaChessZero LeelaChessZero (lc0 / lczero) is a chess engine driven by neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.
LeelaChessZero 0.28 - Backend: BLAS
Nodes Per Second (More Is Better):
  Core i9 12900K:   2201  (SE +/- 22.18, N = 3)
  Ryzen 7 5800X3D:  1254  (SE +/- 6.08, N = 3)
  Ryzen 9 5900X:     954  (SE +/- 11.02, N = 3)
  Ryzen 7 5800X:     867  (SE +/- 6.36, N = 3)
  Ryzen 9 5950X:     668  (SE +/- 2.73, N = 3)
1. (CXX) g++ options: -flto -pthread
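A common way to read a "More Is Better" chart like this is to normalize every result against the fastest configuration. A small sketch using the BLAS numbers above:

```python
# LeelaChessZero 0.28 BLAS backend, nodes per second (from the results above).
results = {
    "Core i9 12900K": 2201,
    "Ryzen 7 5800X3D": 1254,
    "Ryzen 9 5900X": 954,
    "Ryzen 7 5800X": 867,
    "Ryzen 9 5950X": 668,
}

best = max(results.values())
for cpu, nps in sorted(results.items(), key=lambda kv: -kv[1]):
    # Percentage of the fastest result (the 12900K = 100%).
    print(f"{cpu:16s} {100 * nps / best:5.1f}%")
```

On this view the 5800X3D lands at roughly 57% of the 12900K, and the 5950X at about 30%.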
LeelaChessZero 0.28 - Backend: Eigen
Nodes Per Second (More Is Better):
  Core i9 12900K:   2161  (SE +/- 19.84, N = 7)
  Ryzen 7 5800X3D:  1160  (SE +/- 8.89, N = 3)
  Ryzen 9 5900X:     886  (SE +/- 7.53, N = 9)
  Ryzen 7 5800X:     854  (SE +/- 8.85, N = 9)
  Ryzen 9 5950X:     684  (SE +/- 7.32, N = 9)
1. (CXX) g++ options: -flto -pthread
oneDNN This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total benchdnn time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.
oneDNN 2.6 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU
ms (Fewer Is Better):
  Ryzen 9 5950X:    0.476097  (SE +/- 0.002527, N = 5; MIN: 0.42) [-lpthread]
  Ryzen 9 5900X:    0.519733  (SE +/- 0.003942, N = 5; MIN: 0.47) [-lpthread]
  Ryzen 7 5800X3D:  0.604780  (SE +/- 0.002108, N = 5; MIN: 0.58) [-lpthread]
  Core i9 12900K:   0.812047  (SE +/- 0.002976, N = 5; MIN: 0.79)
  Ryzen 7 5800X:    1.457520  (SE +/- 0.002722, N = 5; MIN: 1.35) [-lpthread]
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl
ECP-CANDLE
ECP-CANDLE 0.4 - Benchmark: P3B1
Seconds (Fewer Is Better):
  Core i9 12900K:    429.61
  Ryzen 7 5800X3D:  1023.32
  Ryzen 7 5800X:    1158.67
  Ryzen 9 5900X:    1174.47
  Ryzen 9 5950X:    1309.59
oneDNN
oneDNN 2.6 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU
ms (Fewer Is Better):
  Core i9 12900K:    6.00214  (SE +/- 0.00249, N = 7; MIN: 5.9)
  Ryzen 7 5800X3D:  10.38000  (SE +/- 0.04088, N = 7; MIN: 9.78) [-lpthread]
  Ryzen 7 5800X:    16.35340  (SE +/- 0.22593, N = 15; MIN: 14.78) [-lpthread]
  Ryzen 9 5900X:    16.73820  (SE +/- 0.04249, N = 7; MIN: 16.08) [-lpthread]
  Ryzen 9 5950X:    18.03210  (SE +/- 0.02368, N = 7; MIN: 17.6) [-lpthread]
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl
ASKAP
ASKAP 1.0 - Test: tConvolve MT - Degridding
Million Grid Points Per Second (More Is Better):
  Core i9 12900K:   3872.03  (SE +/- 2.07, N = 3)
  Ryzen 7 5800X3D:  1594.95  (SE +/- 1.83, N = 3)
  Ryzen 9 5900X:    1541.67  (SE +/- 3.45, N = 3)
  Ryzen 7 5800X:    1471.58  (SE +/- 5.57, N = 3)
  Ryzen 9 5950X:    1346.57  (SE +/- 0.92, N = 3)
1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp
OpenFOAM OpenFOAM is the leading free, open source software for computational fluid dynamics (CFD). Learn more via the OpenBenchmarking.org test page.
OpenFOAM 8 - Input: Motorbike 60M
Seconds (Fewer Is Better):
  Core i9 12900K:    487.80  (SE +/- 0.14, N = 3)
  Ryzen 7 5800X3D:  1090.21  (SE +/- 1.12, N = 3)
  Ryzen 7 5800X:    1270.72  (SE +/- 0.23, N = 3)
  Ryzen 9 5900X:    1277.55  (SE +/- 0.77, N = 3)
  Ryzen 9 5950X:    1382.54  (SE +/- 0.26, N = 3)
Additional linked libraries: -lfoamToVTK -llagrangian -lfileFormats
1. (CXX) g++ options: -std=c++11 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -ldynamicMesh -lgenericPatchFields -lOpenFOAM -ldl -lm
Xcompact3d Incompact3d Xcompact3d Incompact3d is a Fortran-MPI based, finite-difference, high-performance code for solving the incompressible Navier-Stokes equations together with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.
Xcompact3d Incompact3d 2021-03-11 - Input: input.i3d 193 Cells Per Direction
Seconds (Fewer Is Better):
  Core i9 12900K:    55.61  (SE +/- 0.58, N = 3)
  Ryzen 7 5800X3D:  126.86  (SE +/- 0.11, N = 3)
  Ryzen 7 5800X:    141.21  (SE +/- 1.17, N = 9)
  Ryzen 9 5900X:    144.47  (SE +/- 0.05, N = 3)
  Ryzen 9 5950X:    156.16  (SE +/- 0.16, N = 3)
1. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
ASKAP
ASKAP 1.0 - Test: tConvolve OpenMP - Degridding
Million Grid Points Per Second (More Is Better):
  Ryzen 7 5800X3D:  8741.60  (SE +/- 38.17, N = 15)
  Core i9 12900K:   7793.77  (SE +/- 37.29, N = 6)
  Ryzen 9 5900X:    3732.72  (SE +/- 10.98, N = 6)
  Ryzen 7 5800X:    3367.67  (SE +/- 6.53, N = 15)
  Ryzen 9 5950X:    3214.58  (SE +/- 11.91, N = 6)
1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp
oneDNN
oneDNN 2.6 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU
ms (Fewer Is Better):
  Core i9 12900K:   3.44643  (SE +/- 0.00441, N = 5; MIN: 3.4)
  Ryzen 7 5800X3D:  6.55765  (SE +/- 0.01065, N = 5; MIN: 6.32) [-lpthread]
  Ryzen 9 5900X:    7.54843  (SE +/- 0.02525, N = 5; MIN: 7.29) [-lpthread]
  Ryzen 9 5950X:    7.79652  (SE +/- 0.05069, N = 5; MIN: 7.29) [-lpthread]
  Ryzen 7 5800X:    8.93552  (SE +/- 0.01310, N = 5; MIN: 8.33) [-lpthread]
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl
Xcompact3d Incompact3d
Xcompact3d Incompact3d 2021-03-11 - Input: input.i3d 129 Cells Per Direction
Seconds (Fewer Is Better):
  Core i9 12900K:   14.55  (SE +/- 0.01, N = 4)
  Ryzen 7 5800X3D:  27.18  (SE +/- 0.01, N = 3)
  Ryzen 9 5900X:    32.15  (SE +/- 0.32, N = 3)
  Ryzen 9 5950X:    33.95  (SE +/- 0.05, N = 3)
  Ryzen 7 5800X:    37.21  (SE +/- 0.02, N = 3)
1. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
ASKAP
ASKAP 1.0 - Test: Hogbom Clean OpenMP
Iterations Per Second (More Is Better):
  Core i9 12900K:   540.54  (SE +/- 0.00, N = 5)
  Ryzen 7 5800X3D:  527.72  (SE +/- 1.80, N = 4)
  Ryzen 9 5900X:    238.67  (SE +/- 0.52, N = 4)
  Ryzen 7 5800X:    221.73  (SE +/- 0.57, N = 3)
  Ryzen 9 5950X:    220.04  (SE +/- 1.00, N = 4)
1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp
Mobile Neural Network MNN (Mobile Neural Network) is a highly efficient and lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.
Mobile Neural Network 1.2 - Model: mobilenet-v1-1.0
ms (Fewer Is Better):
  Ryzen 7 5800X3D:  1.676  (SE +/- 0.010, N = 3; MIN: 1.63 / MAX: 2.97)
  Ryzen 7 5800X:    1.816  (SE +/- 0.011, N = 3; MIN: 1.79 / MAX: 3.08)
  Ryzen 9 5950X:    2.639  (SE +/- 0.067, N = 3; MIN: 2.52 / MAX: 11.27)
  Core i9 12900K:   2.891  (SE +/- 0.008, N = 3; MIN: 2.85 / MAX: 8.54)
  Ryzen 9 5900X:    4.072  (SE +/- 0.037, N = 3; MIN: 3.95 / MAX: 4.31)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
ONNX Runtime
ONNX Runtime 1.11 - Model: yolov4 - Device: CPU - Executor: Parallel
Inferences Per Minute (More Is Better):
  Core i9 12900K:   629  (SE +/- 1.09, N = 3)
  Ryzen 7 5800X3D:  308  (SE +/- 0.60, N = 3)
  Ryzen 9 5900X:    299  (SE +/- 0.33, N = 3)
  Ryzen 9 5950X:    296  (SE +/- 0.44, N = 3)
  Ryzen 7 5800X:    273  (SE +/- 0.44, N = 3)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt
OpenFOAM
OpenFOAM 8 - Input: Motorbike 30M
Seconds (Fewer Is Better):
  Ryzen 7 5800X3D:   80.29  (SE +/- 0.32, N = 3)
  Core i9 12900K:    84.47  (SE +/- 0.18, N = 3)
  Ryzen 9 5900X:     96.16  (SE +/- 0.15, N = 3)
  Ryzen 9 5950X:     98.44  (SE +/- 0.24, N = 3)
  Ryzen 7 5800X:    177.60  (SE +/- 0.25, N = 3)
Additional linked libraries: -lfoamToVTK -llagrangian -lfileFormats
1. (CXX) g++ options: -std=c++11 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -ldynamicMesh -lgenericPatchFields -lOpenFOAM -ldl -lm
Mobile Neural Network
Mobile Neural Network 1.2 - Model: mobilenetV3
ms (Fewer Is Better):
  Ryzen 7 5800X3D:  1.082  (SE +/- 0.006, N = 3; MIN: 1.06 / MAX: 2.25)
  Ryzen 7 5800X:    1.156  (SE +/- 0.005, N = 3; MIN: 1.14 / MAX: 1.73)
  Core i9 12900K:   1.174  (SE +/- 0.007, N = 3; MIN: 1.15 / MAX: 2.06)
  Ryzen 9 5900X:    1.838  (SE +/- 0.018, N = 3; MIN: 1.79 / MAX: 2.07)
  Ryzen 9 5950X:    2.391  (SE +/- 0.003, N = 3; MIN: 1.87 / MAX: 3.85)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
ASKAP
ASKAP 1.0 - Test: tConvolve MPI - Gridding
Mpix/sec (More Is Better):
  Ryzen 7 5800X3D:  8746.52
  Ryzen 9 5900X:    8258.02
  Ryzen 9 5950X:    6728.09
  Core i9 12900K:   4472.73
  Ryzen 7 5800X:    4256.05
Standard errors as reported (only four values appear in the source, so the mapping to results is ambiguous): SE +/- 0.00, N = 3; SE +/- 58.16, N = 3; SE +/- 12.67, N = 3; SE +/- 45.52, N = 3
1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp
WebP2 Image Encode This is a test of Google's libwebp2 library with the WebP2 image encode utility and using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as ultimately the successor to WebP. WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading support compared to WebP. Learn more via the OpenBenchmarking.org test page.
WebP2 Image Encode 20220422 - Encode Settings: Quality 100, Lossless Compression
Seconds (Fewer Is Better):
  Core i9 12900K:   476.45  (SE +/- 0.96, N = 3)
  Ryzen 9 5950X:    536.77  (SE +/- 2.59, N = 3)
  Ryzen 9 5900X:    614.81  (SE +/- 4.10, N = 3)
  Ryzen 7 5800X3D:  864.94  (SE +/- 2.17, N = 3)
  Ryzen 7 5800X:    974.59  (SE +/- 1.21, N = 3)
1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl
WebP2 Image Encode
WebP2 Image Encode 20220422 - Encode Settings: Quality 100, Compression Effort 5
Seconds (Fewer Is Better):
  Core i9 12900K:   2.936  (SE +/- 0.003, N = 9)
  Ryzen 9 5950X:    3.249  (SE +/- 0.006, N = 9)
  Ryzen 9 5900X:    3.709  (SE +/- 0.003, N = 8)
  Ryzen 7 5800X3D:  5.169  (SE +/- 0.003, N = 7)
  Ryzen 7 5800X:    5.875  (SE +/- 0.004, N = 7)
1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl
TNN TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.
TNN 0.3 - Target: CPU - Model: SqueezeNet v1.1
ms (Fewer Is Better):
  Core i9 12900K:   133.97  (SE +/- 0.06, N = 5; MIN: 133.46 / MAX: 134.81)
  Ryzen 9 5950X:    213.32  (SE +/- 1.16, N = 4; MIN: 209.34 / MAX: 215.21)
  Ryzen 9 5900X:    213.78  (SE +/- 1.66, N = 4; MIN: 210.63 / MAX: 219.97)
  Ryzen 7 5800X3D:  222.33  (SE +/- 0.03, N = 4; MIN: 222.15 / MAX: 222.66)
  Ryzen 7 5800X:    266.19  (SE +/- 0.18, N = 3; MIN: 265.87 / MAX: 266.68)
1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl
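TNN reports its result in milliseconds; converting a time like that into a throughput figure is a simple reciprocal, assuming the reported time covers a single inference pass. With the SqueezeNet v1.1 means above:

```python
# Mean ms per run from the chart above (fewer is better).
ms = {
    "Core i9 12900K": 133.97,
    "Ryzen 7 5800X": 266.19,
}

# 1000 ms per second divided by ms per run gives runs per second.
for cpu, t in ms.items():
    print(f"{cpu}: {1000.0 / t:.2f} runs/sec")  # 12900K → 7.46, 5800X → 3.76
```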
WebP2 Image Encode
WebP2 Image Encode 20220422 - Encode Settings: Quality 75, Compression Effort 7
Seconds (Fewer Is Better):
  Core i9 12900K:   101.57  (SE +/- 0.23, N = 3)
  Ryzen 9 5950X:    110.28  (SE +/- 1.48, N = 3)
  Ryzen 9 5900X:    129.73  (SE +/- 0.51, N = 3)
  Ryzen 7 5800X3D:  178.01  (SE +/- 0.62, N = 3)
  Ryzen 7 5800X:    199.65  (SE +/- 0.74, N = 3)
1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl
WebP2 Image Encode 20220422 - Encode Settings: Quality 95, Compression Effort 7
Seconds (Fewer Is Better):
  Core i9 12900K:   215.58  (SE +/- 0.51, N = 3)
  Ryzen 9 5950X:    234.93  (SE +/- 1.17, N = 3)
  Ryzen 9 5900X:    268.61  (SE +/- 0.76, N = 3)
  Ryzen 7 5800X3D:  376.96  (SE +/- 0.77, N = 3)
  Ryzen 7 5800X:    420.03  (SE +/- 1.68, N = 3)
1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl
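When many results like these are rolled up into one index, benchmark roundups typically take the geometric mean of per-test ratios against a baseline, so that no single test dominates. A sketch using the two effort-7 WebP2 results above (an illustrative two-test aggregation, not a figure from this article):

```python
from statistics import geometric_mean

# Encode times in seconds from the two charts above (fewer is better).
q75 = {"Core i9 12900K": 101.57, "Ryzen 7 5800X": 199.65}
q95 = {"Core i9 12900K": 215.58, "Ryzen 7 5800X": 420.03}

# Per-test speedup of the 12900K over the 5800X.
ratios = [t["Ryzen 7 5800X"] / t["Core i9 12900K"] for t in (q75, q95)]

print(f"geomean speedup: {geometric_mean(ratios):.2f}x")  # → 1.96x
```

The geometric mean is preferred over the arithmetic mean here because it is the correct average for ratios: swapping which CPU is the baseline simply inverts the result.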
ASKAP
ASKAP 1.0 - Test: tConvolve MPI - Degridding
Mpix/sec (More Is Better):
  Ryzen 9 5900X:    7668.05  (SE +/- 49.47, N = 3)
  Ryzen 9 5950X:    6643.64  (SE +/- 48.56, N = 3)
  Ryzen 7 5800X3D:  6453.22  (SE +/- 53.33, N = 3)
  Core i9 12900K:   4198.51  (SE +/- 19.39, N = 3)
  Ryzen 7 5800X:    3976.30  (SE +/- 34.79, N = 3)
1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp
ONNX Runtime
ONNX Runtime 1.11 - Model: bertsquad-12 - Device: CPU - Executor: Parallel
Inferences Per Minute (More Is Better):
  Core i9 12900K:   919  (SE +/- 11.29, N = 3)
  Ryzen 9 5950X:    554  (SE +/- 0.29, N = 3)
  Ryzen 9 5900X:    540  (SE +/- 0.60, N = 3)
  Ryzen 7 5800X3D:  518  (SE +/- 1.17, N = 3)
  Ryzen 7 5800X:    480  (SE +/- 0.29, N = 3)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt
oneDNN
oneDNN 2.6 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU
ms (Fewer Is Better):
  Ryzen 9 5950X:    1.06926  (SE +/- 0.00299, N = 3; MIN: 0.96) [-lpthread]
  Ryzen 9 5900X:    1.30049  (SE +/- 0.00489, N = 3; MIN: 1.2) [-lpthread]
  Core i9 12900K:   1.35212  (SE +/- 0.00773, N = 3; MIN: 1.28)
  Ryzen 7 5800X3D:  1.77641  (SE +/- 0.00274, N = 3; MIN: 1.74) [-lpthread]
  Ryzen 7 5800X:    2.03600  (SE +/- 0.00040, N = 3; MIN: 2.01) [-lpthread]
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl
Mobile Neural Network
Mobile Neural Network 1.2 - Model: MobileNetV2_224
ms (Fewer Is Better):
  Ryzen 7 5800X3D:  1.831  (SE +/- 0.011, N = 3; MIN: 1.79 / MAX: 2.92)
  Ryzen 7 5800X:    1.942  (SE +/- 0.029, N = 3; MIN: 1.9 / MAX: 3.6)
  Core i9 12900K:   2.410  (SE +/- 0.020, N = 3; MIN: 2.36 / MAX: 3.86)
  Ryzen 9 5900X:    2.975  (SE +/- 0.045, N = 3; MIN: 2.88 / MAX: 3.44)
  Ryzen 9 5950X:    3.275  (SE +/- 0.014, N = 3; MIN: 3.21 / MAX: 10.86)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
oneDNN
oneDNN 2.6 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU
ms (Fewer Is Better):
  Ryzen 9 5950X:    3.62609  (SE +/- 0.00224, N = 9; MIN: 3.43) [-lpthread]
  Ryzen 9 5900X:    4.38773  (SE +/- 0.00484, N = 9; MIN: 4.17) [-lpthread]
  Core i9 12900K:   5.25394  (SE +/- 0.00117, N = 9; MIN: 5.16)
  Ryzen 7 5800X3D:  5.57109  (SE +/- 0.00483, N = 9; MIN: 5.45) [-lpthread]
  Ryzen 7 5800X:    6.37023  (SE +/- 0.00140, N = 9; MIN: 6.32) [-lpthread]
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl
WebP2 Image Encode
WebP2 Image Encode 20220422 - Encode Settings: Default
Seconds (Fewer Is Better):
  Core i9 12900K:   2.062  (SE +/- 0.010, N = 11)
  Ryzen 9 5950X:    2.165  (SE +/- 0.013, N = 10)
  Ryzen 9 5900X:    2.439  (SE +/- 0.008, N = 10)
  Ryzen 7 5800X3D:  3.170  (SE +/- 0.007, N = 9)
  Ryzen 7 5800X:    3.619  (SE +/- 0.006, N = 8)
1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl
oneDNN
oneDNN 2.6 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU
ms (Fewer Is Better):
  Ryzen 9 5950X:    1.61899  (SE +/- 0.00702, N = 9; MIN: 1.45) [-lpthread]
  Ryzen 9 5900X:    1.99742  (SE +/- 0.00298, N = 9; MIN: 1.82) [-lpthread]
  Core i9 12900K:   2.22336  (SE +/- 0.00121, N = 9; MIN: 2.2)
  Ryzen 7 5800X3D:  2.48393  (SE +/- 0.00503, N = 9; MIN: 2.41) [-lpthread]
  Ryzen 7 5800X:    2.83913  (SE +/- 0.00474, N = 9; MIN: 2.8) [-lpthread]
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl
oneDNN 2.6 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
  Ryzen 9 5950X:   0.823009 (SE +/- 0.006926, N = 15, MIN: 0.7)
  Ryzen 9 5900X:   1.006941 (SE +/- 0.007658, N = 4, MIN: 0.94)
  Core i9 12900K:  1.052000 (SE +/- 0.008609, N = 4, MIN: 1.01)
  Ryzen 7 5800X3D: 1.237850 (SE +/- 0.001247, N = 4, MIN: 1.21)
  Ryzen 7 5800X:   1.407280 (SE +/- 0.000645, N = 4, MIN: 1.39)
  (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl (-lpthread on the Ryzen systems)
TNN TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.
TNN 0.3 - Target: CPU - Model: DenseNet (ms, fewer is better):
  Core i9 12900K:  1792.97 (SE +/- 1.88, N = 3, MIN: 1751.55 / MAX: 1868.31)
  Ryzen 9 5950X:   2425.59 (SE +/- 10.08, N = 3, MIN: 2339.59 / MAX: 2518.62)
  Ryzen 9 5900X:   2563.76 (SE +/- 3.11, N = 3, MIN: 2481.34 / MAX: 2640.2)
  Ryzen 7 5800X3D: 2609.84 (SE +/- 1.54, N = 3, MIN: 2559.5 / MAX: 2657.73)
  Ryzen 7 5800X:   3016.48 (SE +/- 0.87, N = 3, MIN: 2943.1 / MAX: 3087.97)
  (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl
ECP-CANDLE The CANDLE benchmark codes implement deep learning architectures relevant to problems in cancer. These architectures address problems at different biological scales, specifically the molecular, cellular, and population scales. Learn more via the OpenBenchmarking.org test page.
ECP-CANDLE 0.4 - Benchmark: P1B2 (seconds, fewer is better):
  Core i9 12900K:  20.94
  Ryzen 7 5800X3D: 29.99
  Ryzen 9 5900X:   30.24
  Ryzen 9 5950X:   31.39
  Ryzen 7 5800X:   34.84
Mobile Neural Network MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.
Mobile Neural Network 1.2 - Model: squeezenetv1.1 (ms, fewer is better):
  Core i9 12900K:  2.405 (SE +/- 0.049, N = 3, MIN: 2.33 / MAX: 3.56)
  Ryzen 7 5800X3D: 2.590 (SE +/- 0.006, N = 3, MIN: 2.55 / MAX: 4.48)
  Ryzen 7 5800X:   2.805 (SE +/- 0.017, N = 3, MIN: 2.76 / MAX: 10.42)
  Ryzen 9 5900X:   3.394 (SE +/- 0.098, N = 3, MIN: 3.15 / MAX: 4.21)
  Ryzen 9 5950X:   3.988 (SE +/- 0.098, N = 3, MIN: 3.72 / MAX: 4.79)
  (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
ONNX Runtime ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.
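The ONNX Runtime results are reported in inferences per minute while most other tests in this article report milliseconds, so a quick conversion helps when comparing. A small helper sketch (the 11082 figure in the example is one of the GPT-2 results in this article; the conversion treats the rate as one inference at a time and ignores any batching or pipelining):

```python
def ipm_to_ms(inferences_per_minute):
    # Average time per inference in ms implied by an inferences-per-minute rate
    return 60_000.0 / inferences_per_minute

def ms_to_ipm(ms_per_inference):
    # Inverse conversion: per-inference latency in ms to inferences per minute
    return 60_000.0 / ms_per_inference

# 11082 inferences/minute works out to roughly 5.4 ms per inference
print(round(ipm_to_ms(11082), 2))
```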
ONNX Runtime 1.11 - Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel (inferences per minute, more is better):
  Core i9 12900K:  111 (SE +/- 0.44, N = 3)
  Ryzen 9 5950X:   88 (SE +/- 0.33, N = 3)
  Ryzen 9 5900X:   85 (SE +/- 0.00, N = 3)
  Ryzen 7 5800X3D: 72 (SE +/- 0.17, N = 3)
  Ryzen 7 5800X:   67 (SE +/- 0.29, N = 3)
  (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt
TNN
TNN 0.3 - Target: CPU - Model: SqueezeNet v2 (ms, fewer is better):
  Core i9 12900K:  38.92 (SE +/- 0.08, N = 10, MIN: 38.37 / MAX: 39.92)
  Ryzen 9 5900X:   50.85 (SE +/- 0.22, N = 9, MIN: 49.98 / MAX: 52.59)
  Ryzen 9 5950X:   50.94 (SE +/- 0.08, N = 9, MIN: 50.34 / MAX: 52.46)
  Ryzen 7 5800X3D: 53.22 (SE +/- 0.13, N = 9, MIN: 52.43 / MAX: 54.16)
  Ryzen 7 5800X:   63.61 (SE +/- 0.12, N = 8, MIN: 62.79 / MAX: 64.28)
  (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl
ONNX Runtime
ONNX Runtime 1.11 - Model: GPT-2 - Device: CPU - Executor: Standard (inferences per minute, more is better):
  Core i9 12900K:  11082 (SE +/- 20.71, N = 3)
  Ryzen 7 5800X3D: 8826 (SE +/- 14.95, N = 3)
  Ryzen 9 5900X:   7862 (SE +/- 10.85, N = 3)
  Ryzen 9 5950X:   7062 (SE +/- 58.87, N = 8)
  Ryzen 7 5800X:   6832 (SE +/- 104.18, N = 12)
  (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt
TNN
TNN 0.3 - Target: CPU - Model: MobileNet v2 (ms, fewer is better):
  Core i9 12900K:  170.58 (SE +/- 0.31, N = 4, MIN: 157.87 / MAX: 209.34)
  Ryzen 9 5900X:   224.20 (SE +/- 0.47, N = 4, MIN: 218.68 / MAX: 249.25)
  Ryzen 9 5950X:   224.26 (SE +/- 0.71, N = 4, MIN: 219.36 / MAX: 242.57)
  Ryzen 7 5800X3D: 233.18 (SE +/- 0.14, N = 4, MIN: 232.19 / MAX: 237.22)
  Ryzen 7 5800X:   272.04 (SE +/- 0.18, N = 3, MIN: 270.94 / MAX: 276.3)
  (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl
ONNX Runtime
ONNX Runtime 1.11 - Model: super-resolution-10 - Device: CPU - Executor: Parallel (inferences per minute, more is better):
  Ryzen 9 5950X:   6436 (SE +/- 22.98, N = 3)
  Ryzen 9 5900X:   5591 (SE +/- 5.61, N = 3)
  Ryzen 7 5800X3D: 4606 (SE +/- 15.67, N = 3)
  Core i9 12900K:  4516 (SE +/- 46.82, N = 4)
  Ryzen 7 5800X:   4103 (SE +/- 10.11, N = 3)
  (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt
NCNN NCNN is a high-performance neural network inference framework optimized for mobile and other platforms, developed by Tencent. Learn more via the OpenBenchmarking.org test page.
NCNN 20210720 - Target: CPU - Model: alexnet (ms, fewer is better):
  Core i9 12900K:  7.58 (SE +/- 0.05, N = 15, MIN: 7.18 / MAX: 9.34)
  Ryzen 7 5800X3D: 9.62 (SE +/- 0.03, N = 15, MIN: 8.9 / MAX: 11.14)
  Ryzen 9 5900X:   10.01 (SE +/- 0.01, N = 3, MIN: 9.92 / MAX: 12.23)
  Ryzen 9 5950X:   11.08 (SE +/- 0.13, N = 4, MIN: 10.7 / MAX: 12.6)
  Ryzen 7 5800X:   11.83 (SE +/- 0.01, N = 15, MIN: 11.63 / MAX: 18.46)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
oneDNN
oneDNN 2.6 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms, fewer is better):
  Core i9 12900K:  2.63611 (SE +/- 0.00329, N = 4, MIN: 2.5)
  Ryzen 7 5800X3D: 2.89717 (SE +/- 0.00793, N = 4, MIN: 2.81)
  Ryzen 7 5800X:   3.29136 (SE +/- 0.00733, N = 4, MIN: 3.12)
  Ryzen 9 5900X:   3.44548 (SE +/- 0.02871, N = 4, MIN: 3.05)
  Ryzen 9 5950X:   3.89647 (SE +/- 0.01755, N = 4, MIN: 3.66)
  (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl (-lpthread on the Ryzen systems)
ONNX Runtime
ONNX Runtime 1.11 - Model: GPT-2 - Device: CPU - Executor: Parallel (inferences per minute, more is better):
  Core i9 12900K:  8088 (SE +/- 20.34, N = 3)
  Ryzen 7 5800X3D: 6919 (SE +/- 5.78, N = 3)
  Ryzen 9 5900X:   5687 (SE +/- 11.00, N = 3)
  Ryzen 7 5800X:   5621 (SE +/- 8.85, N = 3)
  Ryzen 9 5950X:   5534 (SE +/- 11.18, N = 3)
  (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt
NCNN
NCNN 20210720 - Target: CPU - Model: resnet50 (ms, fewer is better):
  Core i9 12900K:  16.84 (SE +/- 0.07, N = 15, MIN: 16.32 / MAX: 21.73)
  Ryzen 7 5800X3D: 18.13 (SE +/- 0.09, N = 15, MIN: 17.64 / MAX: 24.72)
  Ryzen 7 5800X:   20.22 (SE +/- 0.07, N = 15, MIN: 19.74 / MAX: 28.22)
  Ryzen 9 5900X:   21.21 (SE +/- 0.04, N = 3, MIN: 20.94 / MAX: 23.21)
  Ryzen 9 5950X:   24.40 (SE +/- 0.34, N = 4, MIN: 23.71 / MAX: 27.09)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
Caffe This is a benchmark of the Caffe deep learning framework and currently supports the AlexNet and GoogleNet models with execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.
Caffe 2020-02-13 - Model: GoogleNet - Acceleration: CPU - Iterations: 200 (milli-seconds, fewer is better):
  Core i9 12900K:  134045 (SE +/- 116.45, N = 3)
  Ryzen 7 5800X3D: 162022 (SE +/- 248.50, N = 3)
  Ryzen 9 5900X:   178725 (SE +/- 46.44, N = 3)
  Ryzen 7 5800X:   179146 (SE +/- 39.00, N = 3)
  Ryzen 9 5950X:   193710 (SE +/- 215.28, N = 3)
  (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lcrypto -lcurl -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas
Open Porous Media Git This is a test of Open Porous Media, a set of open-source tools for simulating the flow and transport of fluids in porous media. This test profile builds OPM and its dependencies from upstream Git. Learn more via the OpenBenchmarking.org test page.
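Since the OPM results are broken out by thread count, parallel speedup and efficiency are easy to derive from any pair of runs. A minimal sketch, using the Ryzen 7 5800X3D Flow MPI Norne times from this article (224.10 s at one thread, 187.49 s at two):

```python
def speedup(t_serial, t_parallel):
    # How many times faster the parallel run is than the single-thread run
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, threads):
    # Fraction of ideal linear scaling achieved (1.0 = perfect)
    return speedup(t_serial, t_parallel) / threads

# Ryzen 7 5800X3D, Flow MPI Norne: ~1.2x speedup at 2 threads, ~60% efficiency
print(round(speedup(224.10, 187.49), 2), round(efficiency(224.10, 187.49, 2), 2))
```

The low efficiency at higher thread counts in these workloads suggests they are bound by memory bandwidth rather than core count, though that is an inference, not something the raw data states.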
Open Porous Media Git - OPM Benchmark: Flow MPI Norne - Threads: 8 (seconds, fewer is better):
  Ryzen 9 5950X:   357.86 (SE +/- 0.16, N = 3)
  Ryzen 9 5900X:   361.84 (SE +/- 0.23, N = 3)
  Ryzen 7 5800X3D: 361.91 (SE +/- 0.25, N = 3)
  Ryzen 7 5800X:   447.63 (SE +/- 0.04, N = 3)
  Core i9 12900K:  515.71 (SE +/- 0.26, N = 3)
  (CXX) g++ options: -pipe -pthread -fopenmp -O3 -mtune=native -UNDEBUG -lm -ldl -lrt; Build Time: Mon Apr 25 06:10:54 PM EDT 2022 (Ryzen systems), Thu Apr 28 06:45:36 PM EDT 2022 (Core i9 12900K)
Open Porous Media Git - OPM Benchmark: Flow MPI Norne-4C MSW - Threads: 8 (seconds, fewer is better):
  Ryzen 9 5950X:   575.13 (SE +/- 0.32, N = 3)
  Ryzen 9 5900X:   581.35 (SE +/- 0.16, N = 3)
  Ryzen 7 5800X3D: 581.46 (SE +/- 0.28, N = 3)
  Ryzen 7 5800X:   733.91 (SE +/- 0.32, N = 3)
  Core i9 12900K:  825.56 (SE +/- 0.65, N = 3)
  (CXX) g++ options: -pipe -pthread -fopenmp -O3 -mtune=native -UNDEBUG -lm -ldl -lrt; Build Time: Mon Apr 25 06:10:54 PM EDT 2022 (Ryzen systems), Thu Apr 28 06:45:36 PM EDT 2022 (Core i9 12900K)
Caffe
Caffe 2020-02-13 - Model: AlexNet - Acceleration: CPU - Iterations: 100 (milli-seconds, fewer is better):
  Core i9 12900K:  25590 (SE +/- 17.32, N = 3)
  Ryzen 7 5800X3D: 30222 (SE +/- 24.84, N = 3)
  Ryzen 7 5800X:   33905 (SE +/- 7.06, N = 3)
  Ryzen 9 5900X:   34329 (SE +/- 38.11, N = 3)
  Ryzen 9 5950X:   36658 (SE +/- 30.55, N = 3)
  (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lcrypto -lcurl -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas
Caffe 2020-02-13 - Model: GoogleNet - Acceleration: CPU - Iterations: 100 (milli-seconds, fewer is better):
  Core i9 12900K:  67833 (SE +/- 741.46, N = 4)
  Ryzen 7 5800X3D: 80862 (SE +/- 158.22, N = 3)
  Ryzen 9 5900X:   89492 (SE +/- 260.55, N = 3)
  Ryzen 7 5800X:   89825 (SE +/- 34.64, N = 3)
  Ryzen 9 5950X:   96624 (SE +/- 34.42, N = 3)
  (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lcrypto -lcurl -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas
Caffe 2020-02-13 - Model: AlexNet - Acceleration: CPU - Iterations: 200 (milli-seconds, fewer is better):
  Core i9 12900K:  51713 (SE +/- 323.04, N = 3)
  Ryzen 7 5800X3D: 60494 (SE +/- 94.00, N = 3)
  Ryzen 7 5800X:   67661 (SE +/- 49.12, N = 3)
  Ryzen 9 5900X:   68676 (SE +/- 57.49, N = 3)
  Ryzen 9 5950X:   73352 (SE +/- 178.17, N = 3)
  (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lcrypto -lcurl -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas
NCNN
NCNN 20210720 - Target: CPU - Model: squeezenet_ssd (ms, fewer is better):
  Ryzen 7 5800X3D: 12.44 (SE +/- 0.05, N = 15, MIN: 12.03 / MAX: 14.14)
  Core i9 12900K:  13.27 (SE +/- 0.17, N = 15, MIN: 12.19 / MAX: 43.4)
  Ryzen 9 5900X:   13.51 (SE +/- 0.02, N = 3, MIN: 13.16 / MAX: 20.71)
  Ryzen 9 5950X:   14.55 (SE +/- 0.09, N = 4, MIN: 13.65 / MAX: 21.26)
  Ryzen 7 5800X:   16.76 (SE +/- 0.06, N = 15, MIN: 16.11 / MAX: 23.01)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
oneDNN
oneDNN 2.6 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, fewer is better):
  Ryzen 7 5800X3D: 1382.16 (SE +/- 2.31, N = 3, MIN: 1372.35)
  Core i9 12900K:  1613.81 (SE +/- 0.28, N = 3, MIN: 1608.23)
  Ryzen 9 5900X:   1785.05 (SE +/- 8.68, N = 3, MIN: 1762.05)
  Ryzen 9 5950X:   1814.13 (SE +/- 8.84, N = 3, MIN: 1783.13)
  Ryzen 7 5800X:   1859.14 (SE +/- 5.60, N = 3, MIN: 1846.97)
  (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl (-lpthread on the Ryzen systems)
oneDNN 2.6 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
  Ryzen 7 5800X3D: 1387.86 (SE +/- 0.61, N = 3, MIN: 1380.9)
  Core i9 12900K:  1616.06 (SE +/- 1.95, N = 3, MIN: 1608.68)
  Ryzen 9 5900X:   1742.06 (SE +/- 13.72, N = 3, MIN: 1715.13)
  Ryzen 9 5950X:   1783.85 (SE +/- 22.20, N = 3, MIN: 1730.18)
  Ryzen 7 5800X:   1864.54 (SE +/- 4.89, N = 3, MIN: 1851.66)
  (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl (-lpthread on the Ryzen systems)
oneDNN 2.6 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better):
  Ryzen 7 5800X3D: 1385.79 (SE +/- 1.81, N = 3, MIN: 1375.42)
  Core i9 12900K:  1617.10 (SE +/- 2.95, N = 3, MIN: 1608.6)
  Ryzen 9 5900X:   1776.19 (SE +/- 5.06, N = 3, MIN: 1756.45)
  Ryzen 9 5950X:   1820.40 (SE +/- 19.76, N = 5, MIN: 1761.07)
  Ryzen 7 5800X:   1847.70 (SE +/- 9.79, N = 3, MIN: 1820.32)
  (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl (-lpthread on the Ryzen systems)
Open Porous Media Git
Open Porous Media Git - OPM Benchmark: Flow MPI Extra - Threads: 8 (seconds, fewer is better):
  Ryzen 7 5800X3D: 781.98 (SE +/- 1.20, N = 3)
  Ryzen 9 5950X:   808.05 (SE +/- 0.30, N = 3)
  Ryzen 9 5900X:   810.33 (SE +/- 0.19, N = 3)
  Core i9 12900K:  891.78 (SE +/- 0.86, N = 3)
  Ryzen 7 5800X:   1027.17 (SE +/- 0.32, N = 3)
  (CXX) g++ options: -pipe -pthread -fopenmp -O3 -mtune=native -UNDEBUG -lm -ldl -lrt; Build Time: Mon Apr 25 06:10:54 PM EDT 2022 (Ryzen systems), Thu Apr 28 06:45:36 PM EDT 2022 (Core i9 12900K)
Open Porous Media Git - OPM Benchmark: Flow MPI Extra - Threads: 4 (seconds, fewer is better):
  Ryzen 7 5800X3D: 643.96 (SE +/- 0.34, N = 3)
  Core i9 12900K:  671.56 (SE +/- 0.81, N = 3)
  Ryzen 9 5950X:   690.62 (SE +/- 0.65, N = 3)
  Ryzen 9 5900X:   691.38 (SE +/- 0.23, N = 3)
  Ryzen 7 5800X:   829.49 (SE +/- 0.80, N = 3)
  (CXX) g++ options: -pipe -pthread -fopenmp -O3 -mtune=native -UNDEBUG -lm -ldl -lrt; Build Time: Mon Apr 25 06:10:54 PM EDT 2022 (Ryzen systems), Thu Apr 28 06:45:36 PM EDT 2022 (Core i9 12900K)
Open Porous Media Git - OPM Benchmark: Flow MPI Norne-4C MSW - Threads: 4 (seconds, fewer is better):
  Ryzen 7 5800X3D: 357.33 (SE +/- 0.21, N = 3)
  Ryzen 9 5950X:   377.03 (SE +/- 0.39, N = 3)
  Ryzen 9 5900X:   378.33 (SE +/- 0.78, N = 3)
  Core i9 12900K:  451.00 (SE +/- 0.53, N = 3)
  Ryzen 7 5800X:   454.34 (SE +/- 0.49, N = 3)
  (CXX) g++ options: -pipe -pthread -fopenmp -O3 -mtune=native -UNDEBUG -lm -ldl -lrt; Build Time: Mon Apr 25 06:10:54 PM EDT 2022 (Ryzen systems), Thu Apr 28 06:45:36 PM EDT 2022 (Core i9 12900K)
Open Porous Media Git
Open Porous Media Git - OPM Benchmark: Flow MPI Norne-4C MSW - Threads: 2 (seconds, fewer is better):
  Ryzen 7 5800X3D: 403.35 (SE +/- 0.45, N = 3)
  Ryzen 9 5950X:   441.07 (SE +/- 0.66, N = 3)
  Ryzen 9 5900X:   441.14 (SE +/- 0.35, N = 3)
  Core i9 12900K:  475.95 (SE +/- 0.98, N = 3)
  Ryzen 7 5800X:   511.31 (SE +/- 0.29, N = 3)
  (CXX) g++ options: -pipe -pthread -fopenmp -O3 -mtune=native -UNDEBUG -lm -ldl -lrt; Build Time: Mon Apr 25 06:10:54 PM EDT 2022 (Ryzen systems), Thu Apr 28 06:45:36 PM EDT 2022 (Core i9 12900K)
Open Porous Media Git - OPM Benchmark: Flow MPI Norne-4C MSW - Threads: 1 (seconds, fewer is better):
  Ryzen 7 5800X3D: 476.91 (SE +/- 0.27, N = 3)
  Core i9 12900K:  537.38 (SE +/- 1.75, N = 3)
  Ryzen 9 5950X:   569.14 (SE +/- 2.73, N = 3)
  Ryzen 9 5900X:   572.60 (SE +/- 0.96, N = 3)
  Ryzen 7 5800X:   601.31 (SE +/- 1.44, N = 3)
  (CXX) g++ options: -pipe -pthread -fopenmp -O3 -mtune=native -UNDEBUG -lm -ldl -lrt; Build Time: Mon Apr 25 06:10:54 PM EDT 2022 (Ryzen systems), Thu Apr 28 06:45:36 PM EDT 2022 (Core i9 12900K)
Open Porous Media Git - OPM Benchmark: Flow MPI Norne - Threads: 1 (seconds, fewer is better):
  Ryzen 7 5800X3D: 224.10 (SE +/- 0.10, N = 3)
  Core i9 12900K:  247.89 (SE +/- 0.07, N = 3)
  Ryzen 9 5950X:   270.81 (SE +/- 1.36, N = 3)
  Ryzen 9 5900X:   273.89 (SE +/- 0.23, N = 3)
  Ryzen 7 5800X:   281.47 (SE +/- 0.17, N = 3)
  (CXX) g++ options: -pipe -pthread -fopenmp -O3 -mtune=native -UNDEBUG -lm -ldl -lrt; Build Time: Mon Apr 25 06:10:54 PM EDT 2022 (Ryzen systems), Thu Apr 28 06:45:36 PM EDT 2022 (Core i9 12900K)
Open Porous Media Git - OPM Benchmark: Flow MPI Norne - Threads: 4 (seconds, fewer is better):
  Ryzen 7 5800X3D: 223.97 (SE +/- 0.14, N = 3)
  Ryzen 9 5950X:   232.61 (SE +/- 0.31, N = 3)
  Ryzen 9 5900X:   233.63 (SE +/- 0.18, N = 3)
  Ryzen 7 5800X:   275.40 (SE +/- 0.16, N = 3)
  Core i9 12900K:  280.85 (SE +/- 0.28, N = 3)
  (CXX) g++ options: -pipe -pthread -fopenmp -O3 -mtune=native -UNDEBUG -lm -ldl -lrt; Build Time: Mon Apr 25 06:10:54 PM EDT 2022 (Ryzen systems), Thu Apr 28 06:45:36 PM EDT 2022 (Core i9 12900K)
Open Porous Media Git - OPM Benchmark: Flow MPI Norne - Threads: 2 (seconds, fewer is better):
  Ryzen 7 5800X3D: 187.49 (SE +/- 0.14, N = 3)
  Ryzen 9 5950X:   203.89 (SE +/- 0.42, N = 3)
  Ryzen 9 5900X:   203.98 (SE +/- 0.16, N = 3)
  Core i9 12900K:  223.20 (SE +/- 0.42, N = 3)
  Ryzen 7 5800X:   234.63 (SE +/- 0.53, N = 3)
  (CXX) g++ options: -pipe -pthread -fopenmp -O3 -mtune=native -UNDEBUG -lm -ldl -lrt; Build Time: Mon Apr 25 06:10:54 PM EDT 2022 (Ryzen systems), Thu Apr 28 06:45:36 PM EDT 2022 (Core i9 12900K)
Open Porous Media Git - OPM Benchmark: Flow MPI Extra - Threads: 2 (seconds, fewer is better):
  Ryzen 7 5800X3D: 672.48 (SE +/- 0.69, N = 3)
  Ryzen 9 5950X:   712.62 (SE +/- 1.57, N = 3)
  Ryzen 9 5900X:   713.34 (SE +/- 1.56, N = 3)
  Core i9 12900K:  717.04 (SE +/- 0.67, N = 3)
  Ryzen 7 5800X:   799.40 (SE +/- 2.38, N = 3)
  (CXX) g++ options: -pipe -pthread -fopenmp -O3 -mtune=native -UNDEBUG -lm -ldl -lrt; Build Time: Mon Apr 25 06:10:54 PM EDT 2022 (Ryzen systems), Thu Apr 28 06:45:36 PM EDT 2022 (Core i9 12900K)
Open Porous Media Git - OPM Benchmark: Flow MPI Extra - Threads: 1 (seconds, fewer is better):
  Ryzen 7 5800X3D: 997.34 (SE +/- 2.03, N = 3)
  Core i9 12900K:  1053.38 (SE +/- 2.12, N = 3)
  Ryzen 9 5900X:   1073.71 (SE +/- 4.75, N = 3)
  Ryzen 9 5950X:   1074.33 (SE +/- 3.75, N = 3)
  Ryzen 7 5800X:   1154.09 (SE +/- 5.46, N = 3)
  (CXX) g++ options: -pipe -pthread -fopenmp -O3 -mtune=native -UNDEBUG -lm -ldl -lrt; Build Time: Mon Apr 25 06:10:54 PM EDT 2022 (Ryzen systems), Thu Apr 28 06:45:36 PM EDT 2022 (Core i9 12900K)
oneDNN
oneDNN 2.6 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better):
  Ryzen 7 5800X3D: 2691.24 (SE +/- 2.34, N = 3, MIN: 2678.39)
  Ryzen 9 5950X:   2750.18 (SE +/- 26.41, N = 3, MIN: 2685.91)
  Core i9 12900K:  2881.32 (SE +/- 0.76, N = 3, MIN: 2872.06)
  Ryzen 9 5900X:   2904.33 (SE +/- 32.41, N = 3, MIN: 2837.45)
  Ryzen 7 5800X:   3065.26 (SE +/- 1.56, N = 3, MIN: 3058.73)
  (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl (-lpthread on the Ryzen systems)
oneDNN 2.6 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, fewer is better):
  Ryzen 7 5800X3D: 2683.16 (SE +/- 5.12, N = 3, MIN: 2665.66)
  Ryzen 9 5950X:   2703.33 (SE +/- 10.92, N = 3, MIN: 2665.5)
  Ryzen 9 5900X:   2873.45 (SE +/- 23.67, N = 3, MIN: 2830.55)
  Core i9 12900K:  2881.72 (SE +/- 0.27, N = 3, MIN: 2874.65)
  Ryzen 7 5800X:   3053.30 (SE +/- 2.27, N = 3, MIN: 3045.76)
  (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl (-lpthread on the Ryzen systems)
oneDNN 2.6 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
  Ryzen 7 5800X3D: 2691.67 (SE +/- 1.77, N = 3, MIN: 2680.17)
  Ryzen 9 5950X:   2745.44 (SE +/- 23.91, N = 3, MIN: 2684.63)
  Core i9 12900K:  2881.10 (SE +/- 1.61, N = 3, MIN: 2869.73)
  Ryzen 9 5900X:   2908.28 (SE +/- 15.10, N = 3, MIN: 2862.45)
  Ryzen 7 5800X:   3055.84 (SE +/- 0.89, N = 3, MIN: 3050.91)
  (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl (-lpthread on the Ryzen systems)
Mobile Neural Network
Mobile Neural Network 1.2 - Model: inception-v3 (ms, fewer is better):
  Ryzen 7 5800X3D: 23.19 (SE +/- 0.09, N = 3, MIN: 22.93 / MAX: 31)
  Ryzen 9 5900X:   23.69 (SE +/- 0.37, N = 3, MIN: 22.99 / MAX: 31.36)
  Core i9 12900K:  24.30 (SE +/- 0.66, N = 3, MIN: 22.9 / MAX: 36.09)
  Ryzen 7 5800X:   25.69 (SE +/- 0.09, N = 3, MIN: 25.45 / MAX: 31.44)
  Ryzen 9 5950X:   25.73 (SE +/- 0.13, N = 3, MIN: 25.09 / MAX: 33.94)
  (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
ONNX Runtime
ONNX Runtime 1.11 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard (inferences per minute, more is better):
  Core i9 12900K:  1975 (SE +/- 0.67, N = 3)
  Ryzen 7 5800X3D: 1939 (SE +/- 3.42, N = 3)
  Ryzen 9 5950X:   1668 (SE +/- 46.80, N = 9)
  Ryzen 9 5900X:   1519 (SE +/- 45.67, N = 12)
  Ryzen 7 5800X:   1042 (SE +/- 1.33, N = 3)
  (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt
ONNX Runtime 1.11 - Model: bertsquad-12 - Device: CPU - Executor: Standard (inferences per minute, more is better):
  Core i9 12900K:  988 (SE +/- 0.76, N = 3)
  Ryzen 9 5900X:   949 (SE +/- 4.80, N = 3)
  Ryzen 7 5800X3D: 822 (SE +/- 57.02, N = 12)
  Ryzen 9 5950X:   800 (SE +/- 1.20, N = 3)
  Ryzen 7 5800X:   557 (SE +/- 0.17, N = 3)
  (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt
ONNX Runtime 1.11 - Model: super-resolution-10 - Device: CPU - Executor: Standard (inferences per minute, more is better):
  Ryzen 9 5900X:   7545 (SE +/- 318.33, N = 12)
  Ryzen 9 5950X:   6169 (SE +/- 21.53, N = 3)
  Core i9 12900K:  4747 (SE +/- 10.33, N = 3)
  Ryzen 7 5800X3D: 4107 (SE +/- 11.77, N = 3)
  Ryzen 7 5800X:   3628 (SE +/- 10.27, N = 3)
  (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt
ONNX Runtime 1.11 - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard (inferences per minute, more is better):
  Ryzen 9 5900X:   115 (SE +/- 0.44, N = 3)
  Ryzen 7 5800X3D: 107 (SE +/- 0.17, N = 3)
  Core i9 12900K:  101 (SE +/- 0.00, N = 3)
  Ryzen 9 5950X:   98 (SE +/- 4.73, N = 12)
  Ryzen 7 5800X:   49 (SE +/- 0.00, N = 3)
  (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt
ONNX Runtime 1.11 - Model: yolov4 - Device: CPU - Executor: Standard (inferences per minute, more is better):
  Core i9 12900K:  693 (SE +/- 1.42, N = 3)
  Ryzen 7 5800X3D: 572 (SE +/- 42.62, N = 12)
  Ryzen 9 5900X:   539 (SE +/- 17.59, N = 12)
  Ryzen 9 5950X:   487 (SE +/- 17.40, N = 9)
  Ryzen 7 5800X:   431 (SE +/- 28.87, N = 12)
  (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt
Mobile Neural Network
Mobile Neural Network 1.2 - Model: SqueezeNetV1.0 (ms, fewer is better):
  Core i9 12900K:  4.151 (SE +/- 0.177, N = 3, MIN: 3.91 / MAX: 6.32)
  Ryzen 7 5800X3D: 4.213 (SE +/- 0.010, N = 3, MIN: 4.16 / MAX: 5.46)
  Ryzen 7 5800X:   4.536 (SE +/- 0.032, N = 3, MIN: 4.47 / MAX: 5.71)
  Ryzen 9 5900X:   4.758 (SE +/- 0.058, N = 3, MIN: 4.59 / MAX: 12.51)
  Ryzen 9 5950X:   5.231 (SE +/- 0.113, N = 3, MIN: 4.95 / MAX: 6.51)
  (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
Mobile Neural Network 1.2 - Model: resnet-v2-50 (ms, fewer is better):
  Ryzen 7 5800X3D: 16.36 (SE +/- 0.10, N = 3, MIN: 16 / MAX: 24.17)
  Ryzen 7 5800X:   18.44 (SE +/- 0.03, N = 3, MIN: 18.25 / MAX: 25.86)
  Ryzen 9 5950X:   20.52 (SE +/- 0.14, N = 3, MIN: 19.84 / MAX: 24.05)
  Core i9 12900K:  23.09 (SE +/- 1.14, N = 3, MIN: 21.7 / MAX: 30.17)
  Ryzen 9 5900X:   24.50 (SE +/- 0.14, N = 3, MIN: 23.86 / MAX: 52.44)
  (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
NCNN
NCNN 20210720 - Target: CPU - Model: regnety_400m (ms, fewer is better):
  Ryzen 7 5800X3D: 5.18 (SE +/- 0.01, N = 15, MIN: 5.07 / MAX: 12.17)
  Ryzen 7 5800X:   5.93 (SE +/- 0.01, N = 15, MIN: 5.83 / MAX: 7.68)
  Core i9 12900K:  7.39 (SE +/- 0.22, N = 15, MIN: 6.17 / MAX: 27.59)
  Ryzen 9 5900X:   8.43 (SE +/- 0.01, N = 3, MIN: 8.36 / MAX: 8.75)
  Ryzen 9 5950X:   9.61 (SE +/- 0.08, N = 4, MIN: 9.37 / MAX: 11.03)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
NCNN 20210720 - Target: CPU - Model: yolov4-tiny (ms, fewer is better):
  Ryzen 7 5800X3D: 14.80 (SE +/- 0.19, N = 15, MIN: 14 / MAX: 16.98)
  Core i9 12900K:  15.86 (SE +/- 0.33, N = 15, MIN: 14.24 / MAX: 21)
  Ryzen 7 5800X:   19.78 (SE +/- 0.21, N = 15, MIN: 18.64 / MAX: 21.1)
  Ryzen 9 5950X:   20.49 (SE +/- 0.32, N = 4, MIN: 19.6 / MAX: 21.74)
  Ryzen 9 5900X:   20.63 (SE +/- 0.40, N = 3, MIN: 19.48 / MAX: 21.6)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
NCNN 20210720 - Target: CPU - Model: resnet18 (ms, fewer is better):
  Core i9 12900K:  9.66 (SE +/- 0.15, N = 15, MIN: 7.55 / MAX: 14.7)
  Ryzen 7 5800X3D: 10.27 (SE +/- 0.05, N = 15, MIN: 9.71 / MAX: 12.09)
  Ryzen 9 5900X:   12.50 (SE +/- 0.05, N = 3, MIN: 12.28 / MAX: 12.82)
  Ryzen 7 5800X:   13.02 (SE +/- 0.03, N = 15, MIN: 12.77 / MAX: 32.91)
  Ryzen 9 5950X:   14.28 (SE +/- 0.17, N = 4, MIN: 13.94 / MAX: 16.83)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
NCNN 20210720 - Target: CPU - Model: vgg16 (OpenBenchmarking.org; ms, fewer is better)
  Core i9 12900K    28.24  (SE +/- 0.48, N = 15; MIN: 25.72 / MAX: 45.6)
  Ryzen 7 5800X3D   42.60  (SE +/- 0.13, N = 15; MIN: 41.52 / MAX: 50.92)
  Ryzen 9 5900X     50.75  (SE +/- 0.09, N = 3; MIN: 49.97 / MAX: 60.2)
  Ryzen 7 5800X     55.97  (SE +/- 0.07, N = 15; MIN: 54.91 / MAX: 64.62)
  Ryzen 9 5950X     56.55  (SE +/- 0.08, N = 4; MIN: 55.53 / MAX: 62.98)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
NCNN 20210720 - Target: CPU - Model: googlenet (OpenBenchmarking.org; ms, fewer is better)
  Ryzen 7 5800X3D   7.31   (SE +/- 0.05, N = 15; MIN: 7.05 / MAX: 14.4)
  Core i9 12900K    9.94   (SE +/- 0.21, N = 15; MIN: 7.91 / MAX: 14.3)
  Ryzen 7 5800X     10.22  (SE +/- 0.02, N = 15; MIN: 9.79 / MAX: 18.21)
  Ryzen 9 5900X     11.44  (SE +/- 0.02, N = 3; MIN: 11.28 / MAX: 11.82)
  Ryzen 9 5950X     12.92  (SE +/- 0.28, N = 4; MIN: 12.09 / MAX: 15.32)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
NCNN 20210720 - Target: CPU - Model: blazeface (OpenBenchmarking.org; ms, fewer is better)
  Ryzen 7 5800X3D   1.06  (SE +/- 0.00, N = 15; MIN: 1.04 / MAX: 4.31)
  Ryzen 7 5800X     1.22  (SE +/- 0.00, N = 15; MIN: 1.19 / MAX: 2.1)
  Core i9 12900K    1.46  (SE +/- 0.05, N = 15; MIN: 1.15 / MAX: 2.96)
  Ryzen 9 5900X     1.63  (SE +/- 0.00, N = 3; MIN: 1.61 / MAX: 1.81)
  Ryzen 9 5950X     1.80  (SE +/- 0.02, N = 4; MIN: 1.75 / MAX: 2.15)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
NCNN 20210720 - Target: CPU - Model: efficientnet-b0 (OpenBenchmarking.org; ms, fewer is better)
  Ryzen 7 5800X3D   3.01  (SE +/- 0.01, N = 15; MIN: 2.93 / MAX: 13.07)
  Ryzen 7 5800X     3.60  (SE +/- 0.01, N = 15; MIN: 3.53 / MAX: 5.22)
  Ryzen 9 5900X     4.77  (SE +/- 0.01, N = 3; MIN: 4.7 / MAX: 5)
  Ryzen 9 5950X     5.23  (SE +/- 0.06, N = 4; MIN: 5.09 / MAX: 6.68)
  Core i9 12900K    5.34  (SE +/- 0.11, N = 15; MIN: 4.35 / MAX: 9.28)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
NCNN 20210720 - Target: CPU - Model: mnasnet (OpenBenchmarking.org; ms, fewer is better)
  Ryzen 7 5800X3D   2.01  (SE +/- 0.00, N = 15; MIN: 1.97 / MAX: 2.76)
  Ryzen 7 5800X     2.24  (SE +/- 0.00, N = 15; MIN: 2.2 / MAX: 3.68)
  Core i9 12900K    3.10  (SE +/- 0.07, N = 15; MIN: 2.66 / MAX: 4.79)
  Ryzen 9 5900X     3.44  (SE +/- 0.01, N = 3; MIN: 3.39 / MAX: 3.72)
  Ryzen 9 5950X     3.87  (SE +/- 0.05, N = 4; MIN: 3.76 / MAX: 10.7)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
NCNN 20210720 - Target: CPU - Model: shufflenet-v2 (OpenBenchmarking.org; ms, fewer is better)
  Ryzen 7 5800X3D   2.11  (SE +/- 0.01, N = 15; MIN: 2.07 / MAX: 3)
  Ryzen 7 5800X     2.35  (SE +/- 0.00, N = 15; MIN: 2.31 / MAX: 3.74)
  Core i9 12900K    3.10  (SE +/- 0.08, N = 14; MIN: 2.68 / MAX: 4.51)
  Ryzen 9 5900X     3.88  (SE +/- 0.01, N = 3; MIN: 3.82 / MAX: 4.06)
  Ryzen 9 5950X     4.16  (SE +/- 0.01, N = 4; MIN: 4.05 / MAX: 4.9)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
NCNN 20210720 - Target: CPU-v3-v3 - Model: mobilenet-v3 (OpenBenchmarking.org; ms, fewer is better)
  Ryzen 7 5800X3D   1.90  (SE +/- 0.00, N = 15; MIN: 1.84 / MAX: 2.38)
  Ryzen 7 5800X     2.28  (SE +/- 0.00, N = 15; MIN: 2.22 / MAX: 3.92)
  Core i9 12900K    2.90  (SE +/- 0.06, N = 15; MIN: 2.53 / MAX: 4.55)
  Ryzen 9 5900X     3.46  (SE +/- 0.01, N = 3; MIN: 3.39 / MAX: 3.66)
  Ryzen 9 5950X     3.77  (SE +/- 0.00, N = 4; MIN: 3.7 / MAX: 4.73)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
NCNN 20210720 - Target: CPU-v2-v2 - Model: mobilenet-v2 (OpenBenchmarking.org; ms, fewer is better)
  Ryzen 7 5800X3D   2.13  (SE +/- 0.00, N = 15; MIN: 2.03 / MAX: 2.88)
  Ryzen 7 5800X     2.61  (SE +/- 0.01, N = 15; MIN: 2.54 / MAX: 3.77)
  Core i9 12900K    3.41  (SE +/- 0.12, N = 15; MIN: 2.72 / MAX: 5.86)
  Ryzen 9 5900X     3.91  (SE +/- 0.01, N = 3; MIN: 3.82 / MAX: 4.11)
  Ryzen 9 5950X     4.29  (SE +/- 0.01, N = 4; MIN: 4.15 / MAX: 7.55)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
NCNN 20210720 - Target: CPU - Model: mobilenet (OpenBenchmarking.org; ms, fewer is better)
  Ryzen 7 5800X3D   7.62   (SE +/- 0.08, N = 15; MIN: 7.25 / MAX: 9.92)
  Core i9 12900K    11.11  (SE +/- 0.26, N = 15; MIN: 8.87 / MAX: 13.96)
  Ryzen 9 5900X     11.42  (SE +/- 0.01, N = 3; MIN: 11.16 / MAX: 18.48)
  Ryzen 7 5800X     11.63  (SE +/- 0.13, N = 15; MIN: 11.06 / MAX: 12.93)
  Ryzen 9 5950X     12.27  (SE +/- 0.15, N = 4; MIN: 11.71 / MAX: 18.81)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
oneDNN
This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn benchmarking functionality. The result is the total perf time reported by benchdnn. oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.
oneDNN 2.6 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (OpenBenchmarking.org; ms, fewer is better)
  Ryzen 9 5950X     4.75533  (SE +/- 0.22726, N = 15; MIN: 3.38; -lpthread)
  Ryzen 9 5900X     5.38689  (SE +/- 0.07639, N = 3; MIN: 4; -lpthread)
  Ryzen 7 5800X3D   7.27623  (SE +/- 0.06264, N = 3; MIN: 5.11; -lpthread)
  Ryzen 7 5800X     8.34203  (SE +/- 0.11645, N = 15; MIN: 5.88; -lpthread)
  Core i9 12900K    8.73750  (SE +/- 0.16860, N = 12; MIN: 4.15)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl
oneDNN 2.6 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (OpenBenchmarking.org; ms, fewer is better)
  Core i9 12900K    5.87536   (SE +/- 0.00220, N = 7; MIN: 5.78)
  Ryzen 7 5800X3D   12.62270  (SE +/- 0.01917, N = 7; MIN: 12.28; -lpthread)
  Ryzen 9 5900X     16.13980  (SE +/- 0.29208, N = 15; MIN: 15.35; -lpthread)
  Ryzen 9 5950X     16.70510  (SE +/- 0.00896, N = 7; MIN: 16.31; -lpthread)
  Ryzen 7 5800X     18.75350  (SE +/- 0.12899, N = 7; MIN: 18.34; -lpthread)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl
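Because these are "fewer is better" timings, the clearest way to compare CPUs is as a ratio against the fastest entry. A small sketch using the Convolution Batch Shapes Auto results above (the ratio computation is illustrative, not part of the benchmark output):

```python
# oneDNN 2.6, Convolution Batch Shapes Auto, f32, CPU - timings in ms
results = {
    "Core i9 12900K":  5.87536,
    "Ryzen 7 5800X3D": 12.62270,
    "Ryzen 9 5900X":   16.13980,
    "Ryzen 9 5950X":   16.70510,
    "Ryzen 7 5800X":   18.75350,
}

# Normalize against the fastest (lowest) time: 1.00x marks the leader,
# larger values mean proportionally slower.
fastest = min(results.values())
ratios = {cpu: t / fastest for cpu, t in results.items()}

for cpu, r in sorted(ratios.items(), key=lambda kv: kv[1]):
    print(f"{cpu:<16} {r:.2f}x")
```

On these numbers the i9 12900K leads, with the 5800X3D about 2.15x slower and the 5800X about 3.19x slower in this particular harness.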
Ryzen 9 5950X Processor: AMD Ryzen 9 5950X 16-Core @ 3.40GHz (16 Cores / 32 Threads), Motherboard: ASUS ROG CROSSHAIR VIII HERO (WI-FI) (4006 BIOS), Chipset: AMD Starship/Matisse, Memory: 32GB, Disk: 1000GB Sabrent Rocket 4.0 1TB, Graphics: AMD Radeon RX 6800 16GB (2475/1000MHz), Audio: AMD Navi 21 HDMI Audio, Monitor: ASUS MG28U, Network: Realtek RTL8125 2.5GbE + Intel I211 + Intel Wi-Fi 6 AX200
OS: Ubuntu 22.04, Kernel: 5.17.4-051704-generic (x86_64), Desktop: GNOME Shell 42.0, Display Server: X Server + Wayland, OpenGL: 4.6 Mesa 22.2.0-devel (git-092ac67 2022-04-21 jammy-oibaf-ppa) (LLVM 14.0.0 DRM 3.44), Vulkan: 1.3.211, Compiler: GCC 11.2.0, File-System: ext4, Screen Resolution: 3840x2160
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0xa201016
Python Notes: Python 3.10.4
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 26 April 2022 04:52 by user phoronix.
Ryzen 7 5800X3D Processor: AMD Ryzen 7 5800X3D 8-Core @ 3.40GHz (8 Cores / 16 Threads), Motherboard: ASRock X570 Pro4 (P4.30 BIOS), Chipset: AMD Starship/Matisse, Memory: 16GB, Disk: 1000GB Sabrent Rocket 4.0 1TB, Graphics: AMD Radeon RX 6800 XT 16GB (2575/1000MHz), Audio: AMD Navi 21 HDMI Audio, Monitor: ASUS VP28U, Network: Intel I211
OS: Ubuntu 22.04, Kernel: 5.17.4-051704-generic (x86_64), Desktop: GNOME Shell 42.0, Display Server: X Server + Wayland, OpenGL: 4.6 Mesa 22.2.0-devel (git-092ac67 2022-04-21 jammy-oibaf-ppa) (LLVM 14.0.0 DRM 3.44), Vulkan: 1.3.211, Compiler: GCC 11.2.0, File-System: ext4, Screen Resolution: 3840x2160
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0xa201205
Python Notes: Python 3.10.4
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 26 April 2022 19:07 by user phoronix.
Ryzen 7 5800X Processor: AMD Ryzen 7 5800X 8-Core @ 3.80GHz (8 Cores / 16 Threads), Motherboard: ASRock X570 Pro4 (P4.30 BIOS), Chipset: AMD Starship/Matisse, Memory: 16GB, Disk: 1000GB Sabrent Rocket 4.0 1TB, Graphics: AMD Radeon RX 6800 XT 16GB (2575/1000MHz), Audio: AMD Navi 21 HDMI Audio, Monitor: ASUS VP28U, Network: Intel I211
OS: Ubuntu 22.04, Kernel: 5.17.4-051704-generic (x86_64), Desktop: GNOME Shell 42.0, Display Server: X Server + Wayland, OpenGL: 4.6 Mesa 22.2.0-devel (git-092ac67 2022-04-21 jammy-oibaf-ppa) (LLVM 14.0.0 DRM 3.44), Vulkan: 1.3.211, Compiler: GCC 11.2.0, File-System: ext4, Screen Resolution: 3840x2160
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0xa201016
Python Notes: Python 3.10.4
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 27 April 2022 07:43 by user phoronix.
Ryzen 9 5900X Processor: AMD Ryzen 9 5900X 12-Core @ 3.70GHz (12 Cores / 24 Threads), Motherboard: ASUS ROG CROSSHAIR VIII HERO (3904 BIOS), Chipset: AMD Starship/Matisse, Memory: 16GB, Disk: 1000GB Sabrent Rocket 4.0 1TB, Graphics: NVIDIA NV134 8GB, Audio: NVIDIA GP104 HD Audio, Monitor: ASUS MG28U, Network: Realtek RTL8125 2.5GbE + Intel I211
OS: Ubuntu 22.04, Kernel: 5.17.4-051704-generic (x86_64), Desktop: GNOME Shell 42.0, Display Server: X Server + Wayland, Display Driver: nouveau, OpenGL: 4.3 Mesa 22.2.0-devel (git-092ac67 2022-04-21 jammy-oibaf-ppa), Vulkan: 1.3.211, Compiler: GCC 11.2.0, File-System: ext4, Screen Resolution: 3840x2160
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0xa201016
Python Notes: Python 3.10.4
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 28 April 2022 04:39 by user phoronix.
Core i9 12900K Processor: Intel Core i9-12900K @ 5.20GHz (16 Cores / 24 Threads), Motherboard: ASUS ROG STRIX Z690-E GAMING WIFI (1003 BIOS), Chipset: Intel Device 7aa7, Memory: 32GB, Disk: 1000GB Sabrent Rocket 4.0 1TB, Graphics: AMD Radeon RX 6800 XT 16GB (2575/1000MHz), Audio: Intel Device 7ad0, Monitor: ASUS VP28U, Network: Intel I225-V + Intel Wi-Fi 6 AX210/AX211/AX411
OS: Ubuntu 22.04, Kernel: 5.17.4-051704-generic (x86_64), Desktop: GNOME Shell 42.0, Display Server: X Server + Wayland, OpenGL: 4.6 Mesa 22.2.0-devel (git-092ac67 2022-04-21 jammy-oibaf-ppa) (LLVM 14.0.0 DRM 3.44), Vulkan: 1.3.211, Compiler: GCC 11.2.0, File-System: ext4, Screen Resolution: 3840x2160
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0x18 - Thermald 2.4.9
Python Notes: Python 3.10.4
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 28 April 2022 19:02 by user phoronix.