Benchmarks for a future article.
Ryzen 9 7900X3D Processor: AMD Ryzen 9 7900X3D 12-Core @ 4.40GHz (12 Cores / 24 Threads), Motherboard: ASUS ROG CROSSHAIR X670E HERO (9922 BIOS), Chipset: AMD Device 14d8, Memory: 32GB, Disk: Western Digital WD_BLACK SN850X 1000GB + 2000GB, Graphics: AMD Radeon RX 7900 XTX 24GB (2304/1249MHz), Audio: AMD Device ab30, Monitor: ASUS MG28U, Network: Intel I225-V + Intel Wi-Fi 6 AX210/AX211/AX411
OS: Ubuntu 23.04, Kernel: 6.2.2-060202-generic (x86_64), Desktop: GNOME Shell 43.2, Display Server: X Server 1.21.1.6 + Wayland, OpenGL: 4.6 Mesa 23.1.0-devel (git-efcb639 2023-02-13 lunar-oibaf-ppa) (LLVM 15.0.7 DRM 3.49), Compiler: GCC 12.2.0, File-System: ext4, Screen Resolution: 3840x2160
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-AKimc9/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-AKimc9/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa601203
Python Notes: Python 3.11.1
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
Ryzen 9 7900X: Changed Processor to AMD Ryzen 9 7900X 12-Core @ 4.70GHz (12 Cores / 24 Threads).
Ryzen 9 7950X3D: Changed Processor to AMD Ryzen 9 7950X3D 16-Core @ 4.20GHz (16 Cores / 32 Threads).
Ryzen 9 7950X: Changed Processor to AMD Ryzen 9 7950X 16-Core @ 4.50GHz (16 Cores / 32 Threads).
ASKAP
ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve); some earlier ASKAP benchmarks are also included for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.
ClickHouse
ClickHouse is an open-source, high-performance OLAP data management system. This test profile uses ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ / https://github.com/ClickHouse/ClickBench/tree/main/clickhouse with the 100 million rows web analytics dataset. The reported value is the query processing rate, aggregated as the geometric mean of all of the separate queries. Learn more via the OpenBenchmarking.org test page.
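Since the reported ClickHouse figure is a geometric mean across the individual queries, the aggregation can be sketched as follows (the per-query timings here are made up for illustration, not values from this run):

```python
from statistics import geometric_mean

# Hypothetical per-query processing times in seconds (illustrative only,
# not taken from the actual ClickBench run).
query_times_s = [0.120, 0.450, 0.085, 1.300]

# Express each query as a queries-per-minute rate, then take the geometric
# mean so that no single very fast or very slow query dominates the aggregate.
qpm_per_query = [60.0 / t for t in query_times_s]
aggregate_qpm = geometric_mean(qpm_per_query)
print(round(aggregate_qpm, 2))
```

The geometric mean always falls between the fastest and slowest per-query rates, which is why the charts also report per-query MIN/MAX alongside the aggregate.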
Result
ClickHouse 22.12.3.5 - 100M Rows Hits Dataset, First Run / Cold Cache (OpenBenchmarking.org)
Queries Per Minute, Geo Mean (more is better):
  Ryzen 9 7900X:   258.64  (SE +/- 1.44, N = 3)  MIN: 15.82 / MAX: 10000
  Ryzen 9 7900X3D: 269.00  (SE +/- 1.41, N = 3)  MIN: 14.44 / MAX: 8571.43
  Ryzen 9 7950X3D: 279.87  (SE +/- 1.59, N = 3)  MIN: 13.15 / MAX: 7500
  Ryzen 9 7950X:   250.50  (SE +/- 0.81, N = 3)  MIN: 12.61 / MAX: 10000
Result Confidence
Queries Per Minute, Min / Avg / Max:
  Ryzen 9 7900X:   255.91 / 258.64 / 260.79
  Ryzen 9 7900X3D: 266.23 / 269.00 / 270.84
  Ryzen 9 7950X3D: 276.70 / 279.87 / 281.74
  Ryzen 9 7950X:   249.39 / 250.50 / 252.07
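The SE values in these charts are standard errors of the mean over N runs. With N = 3, the Min/Avg/Max triple fully determines the three samples (the middle run is 3*avg - min - max), so as a sketch the Ryzen 9 7900X cold-cache figure can be reproduced:

```python
from math import sqrt
from statistics import mean, stdev

# Reconstruct the three Ryzen 9 7900X runs from the Min/Avg/Max above:
# middle sample = 3*avg - min - max = 3*258.64 - 255.91 - 260.79 = 259.22
runs = [255.91, 259.22, 260.79]

avg = mean(runs)                    # 258.64, as charted
se = stdev(runs) / sqrt(len(runs))  # standard error of the mean, ~1.44
```

This matches the "SE +/- 1.44, N = 3" annotation on the corresponding result chart.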
Result
ClickHouse 22.12.3.5 - 100M Rows Hits Dataset, Second Run (OpenBenchmarking.org)
Queries Per Minute, Geo Mean (more is better):
  Ryzen 9 7900X:   285.41  (SE +/- 0.26, N = 3)  MIN: 16.08 / MAX: 12000
  Ryzen 9 7900X3D: 303.48  (SE +/- 0.93, N = 3)  MIN: 14.85 / MAX: 10000
  Ryzen 9 7950X3D: 316.84  (SE +/- 2.16, N = 3)  MIN: 16.12 / MAX: 10000
  Ryzen 9 7950X:   275.45  (SE +/- 3.10, N = 3)  MIN: 16.28 / MAX: 12000
Result Confidence
Queries Per Minute, Min / Avg / Max:
  Ryzen 9 7900X:   284.88 / 285.41 / 285.67
  Ryzen 9 7900X3D: 301.63 / 303.48 / 304.52
  Ryzen 9 7950X3D: 313.04 / 316.84 / 320.52
  Ryzen 9 7950X:   272.27 / 275.45 / 281.64
Result
ClickHouse 22.12.3.5 - 100M Rows Hits Dataset, Third Run (OpenBenchmarking.org)
Queries Per Minute, Geo Mean (more is better):
  Ryzen 9 7900X:   290.14  (SE +/- 3.74, N = 3)  MIN: 16.06 / MAX: 12000
  Ryzen 9 7900X3D: 308.81  (SE +/- 1.13, N = 3)  MIN: 14.79 / MAX: 12000
  Ryzen 9 7950X3D: 321.76  (SE +/- 3.76, N = 3)  MIN: 15.82 / MAX: 10000
  Ryzen 9 7950X:   276.94  (SE +/- 0.25, N = 3)  MIN: 15.40 / MAX: 10000
Result Confidence
Queries Per Minute, Min / Avg / Max:
  Ryzen 9 7900X:   285.88 / 290.14 / 297.60
  Ryzen 9 7900X3D: 307.16 / 308.81 / 310.96
  Ryzen 9 7950X3D: 316.52 / 321.76 / 329.04
  Ryzen 9 7950X:   276.45 / 276.94 / 277.27
CloverLeaf
CloverLeaf is a Lagrangian-Eulerian hydrodynamics benchmark. This test profile currently makes use of CloverLeaf's OpenMP version and is benchmarked with the clover_bm.in input file (Problem 5). Learn more via the OpenBenchmarking.org test page.
Embree
Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs (and GPUs via SYCL), supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.
GROMACS
This is a test of the GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package with the water_GMX50 data. This test profile allows selecting between CPU- and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.
KTX-Software toktx
This is a benchmark of The Khronos Group's KTX-Software library and tools. KTX-Software provides "toktx" for converting/creating image textures in the KTX container format. This benchmark times how long it takes to convert a reference PNG sample input to the KTX 2.0 format with various settings. Learn more via the OpenBenchmarking.org test page.
Settings: Zstd Compression 9
Ryzen 9 7900X3D: The test run did not produce a result.
Ryzen 9 7900X: The test run did not produce a result.
Ryzen 9 7950X3D: The test run did not produce a result.
Ryzen 9 7950X: The test run did not produce a result.
oneDNN
This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.
ONNX Runtime
ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.
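Each model below is reported both as Inferences Per Second and as Inference Time Cost in ms; the two views are roughly reciprocal (they do not invert exactly in the tables, presumably because each metric is averaged separately across the runs). A trivial conversion helper, as a sketch:

```python
def ips_to_ms(inferences_per_second: float) -> float:
    """Throughput (inferences/second) -> approximate per-inference latency in ms."""
    return 1000.0 / inferences_per_second

def ms_to_ips(latency_ms: float) -> float:
    """Per-inference latency in ms -> approximate throughput (inferences/second)."""
    return 1000.0 / latency_ms

# Roughly 10 inferences/second corresponds to about 100 ms per inference,
# which lines up with the yolov4 CPU numbers below.
print(ips_to_ms(10.0))
```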
Result
ONNX Runtime 1.14 - Model: yolov4 - Device: CPU - Executor: Standard (OpenBenchmarking.org)
Inferences Per Second (more is better):
  Ryzen 9 7900X:   10.11033  (SE +/- 0.40412, N = 15)
  Ryzen 9 7900X3D: 10.07832  (SE +/- 0.45158, N = 15)
  Ryzen 9 7950X3D: 9.99604   (SE +/- 0.38311, N = 15)
  Ryzen 9 7950X:   10.08625  (SE +/- 0.36261, N = 15)
Inference Time Cost (ms)
Fewer is better:
  Ryzen 9 7900X:   101.17  (SE +/- 4.06, N = 15)
  Ryzen 9 7900X3D: 102.27  (SE +/- 4.87, N = 15)
  Ryzen 9 7950X3D: 101.97  (SE +/- 3.62, N = 15)
  Ryzen 9 7950X:   100.75  (SE +/- 3.19, N = 15)
Result Confidence
Inferences Per Second, Min / Avg / Max:
  Ryzen 9 7900X:   8.28 / 10.11 / 11.96
  Ryzen 9 7900X3D: 7.89 / 10.08 / 11.74
  Ryzen 9 7950X3D: 8.75 / 10.00 / 12.71
  Ryzen 9 7950X:   9.19 / 10.09 / 12.39
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt (applies to all ONNX Runtime results)
Result
ONNX Runtime 1.14 - Model: yolov4 - Device: CPU - Executor: Parallel (OpenBenchmarking.org)
Inferences Per Second (more is better):
  Ryzen 9 7900X:   8.60115  (SE +/- 0.03144, N = 3)
  Ryzen 9 7900X3D: 7.97025  (SE +/- 0.02410, N = 3)
  Ryzen 9 7950X3D: 9.44018  (SE +/- 0.01779, N = 3)
  Ryzen 9 7950X:   9.95491  (SE +/- 0.10261, N = 5)
Inference Time Cost (ms)
Fewer is better:
  Ryzen 9 7900X:   116.27  (SE +/- 0.42, N = 3)
  Ryzen 9 7900X3D: 125.47  (SE +/- 0.38, N = 3)
  Ryzen 9 7950X3D: 105.93  (SE +/- 0.20, N = 3)
  Ryzen 9 7950X:   100.49  (SE +/- 1.07, N = 5)
Result Confidence
Inferences Per Second, Min / Avg / Max:
  Ryzen 9 7900X:   8.55 / 8.60 / 8.66
  Ryzen 9 7900X3D: 7.93 / 7.97 / 8.01
  Ryzen 9 7950X3D: 9.42 / 9.44 / 9.47
  Ryzen 9 7950X:   9.55 / 9.95 / 10.09
Result
ONNX Runtime 1.14 - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard (OpenBenchmarking.org)
Inferences Per Second (more is better):
  Ryzen 9 7900X:   1.75053  (SE +/- 0.00921, N = 3)
  Ryzen 9 7900X3D: 2.87377  (SE +/- 0.00156, N = 3)
  Ryzen 9 7950X3D: 3.53941  (SE +/- 0.04984, N = 3)
  Ryzen 9 7950X:   2.22836  (SE +/- 0.00558, N = 3)
Inference Time Cost (ms)
Fewer is better:
  Ryzen 9 7900X:   571.29  (SE +/- 3.02, N = 3)
  Ryzen 9 7900X3D: 347.97  (SE +/- 0.19, N = 3)
  Ryzen 9 7950X3D: 282.64  (SE +/- 4.02, N = 3)
  Ryzen 9 7950X:   448.76  (SE +/- 1.13, N = 3)
Result Confidence
Inferences Per Second, Min / Avg / Max:
  Ryzen 9 7900X:   1.73 / 1.75 / 1.76
  Ryzen 9 7900X3D: 2.87 / 2.87 / 2.88
  Ryzen 9 7950X3D: 3.44 / 3.54 / 3.61
  Ryzen 9 7950X:   2.22 / 2.23 / 2.24
Result
ONNX Runtime 1.14 - Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel (OpenBenchmarking.org)
Inferences Per Second (more is better):
  Ryzen 9 7900X:   1.57963  (SE +/- 0.00841, N = 3)
  Ryzen 9 7900X3D: 1.50536  (SE +/- 0.01582, N = 5)
  Ryzen 9 7950X3D: 1.91347  (SE +/- 0.01691, N = 3)
  Ryzen 9 7950X:   1.96381  (SE +/- 0.02061, N = 3)
Inference Time Cost (ms)
Fewer is better:
  Ryzen 9 7900X:   633.10  (SE +/- 3.35, N = 3)
  Ryzen 9 7900X3D: 664.59  (SE +/- 7.18, N = 5)
  Ryzen 9 7950X3D: 522.69  (SE +/- 4.59, N = 3)
  Ryzen 9 7950X:   509.33  (SE +/- 5.36, N = 3)
Result Confidence
Inferences Per Second, Min / Avg / Max:
  Ryzen 9 7900X:   1.57 / 1.58 / 1.60
  Ryzen 9 7900X3D: 1.44 / 1.51 / 1.53
  Ryzen 9 7950X3D: 1.89 / 1.91 / 1.95
  Ryzen 9 7950X:   1.93 / 1.96 / 2.00
Result
ONNX Runtime 1.14 - Model: super-resolution-10 - Device: CPU - Executor: Standard (OpenBenchmarking.org)
Inferences Per Second (more is better):
  Ryzen 9 7900X:   173.78  (SE +/- 5.34, N = 15)
  Ryzen 9 7900X3D: 138.33  (SE +/- 7.10, N = 15)
  Ryzen 9 7950X3D: 161.23  (SE +/- 5.64, N = 15)
  Ryzen 9 7950X:   165.01  (SE +/- 6.30, N = 13)
Inference Time Cost (ms)
Fewer is better:
  Ryzen 9 7900X:   5.85153  (SE +/- 0.22770, N = 15)
  Ryzen 9 7900X3D: 7.51702  (SE +/- 0.40239, N = 15)
  Ryzen 9 7950X3D: 6.29145  (SE +/- 0.18428, N = 15)
  Ryzen 9 7950X:   6.14984  (SE +/- 0.19969, N = 13)
Result Confidence
Inferences Per Second, Min / Avg / Max:
  Ryzen 9 7900X:   123.51 / 173.78 / 191.14
  Ryzen 9 7900X3D: 101.27 / 138.33 / 166.47
  Ryzen 9 7950X3D: 135.19 / 161.23 / 212.66
  Ryzen 9 7950X:   139.89 / 165.01 / 230.59
Result
ONNX Runtime 1.14 - Model: super-resolution-10 - Device: CPU - Executor: Parallel (OpenBenchmarking.org)
Inferences Per Second (more is better):
  Ryzen 9 7900X:   112.42  (SE +/- 0.24, N = 3)
  Ryzen 9 7900X3D: 99.05   (SE +/- 0.10, N = 3)
  Ryzen 9 7950X3D: 124.53  (SE +/- 0.24, N = 3)
  Ryzen 9 7950X:   139.43  (SE +/- 0.49, N = 3)
Inference Time Cost (ms)
Fewer is better:
  Ryzen 9 7900X:   8.89505   (SE +/- 0.01908, N = 3)
  Ryzen 9 7900X3D: 10.09470  (SE +/- 0.00988, N = 3)
  Ryzen 9 7950X3D: 8.02960   (SE +/- 0.01572, N = 3)
  Ryzen 9 7950X:   7.17184   (SE +/- 0.02538, N = 3)
Result Confidence
Inferences Per Second, Min / Avg / Max:
  Ryzen 9 7900X:   111.96 / 112.42 / 112.78
  Ryzen 9 7900X3D: 98.86 / 99.05 / 99.15
  Ryzen 9 7950X3D: 124.23 / 124.53 / 125.01
  Ryzen 9 7950X:   138.90 / 139.43 / 140.41
Result
ONNX Runtime 1.14 - Model: bertsquad-12 - Device: CPU - Executor: Standard (OpenBenchmarking.org)
Inferences Per Second (more is better):
  Ryzen 9 7900X:   19.73  (SE +/- 0.03, N = 3)
  Ryzen 9 7900X3D: 18.82  (SE +/- 0.12, N = 3)
  Ryzen 9 7950X3D: 20.19  (SE +/- 0.19, N = 3)
  Ryzen 9 7950X:   21.28  (SE +/- 0.07, N = 3)
Inference Time Cost (ms)
Fewer is better:
  Ryzen 9 7900X:   50.69  (SE +/- 0.07, N = 3)
  Ryzen 9 7900X3D: 53.14  (SE +/- 0.34, N = 3)
  Ryzen 9 7950X3D: 49.54  (SE +/- 0.47, N = 3)
  Ryzen 9 7950X:   47.00  (SE +/- 0.16, N = 3)
Result Confidence
Inferences Per Second, Min / Avg / Max:
  Ryzen 9 7900X:   19.70 / 19.73 / 19.78
  Ryzen 9 7900X3D: 18.69 / 18.82 / 19.06
  Ryzen 9 7950X3D: 19.82 / 20.19 / 20.45
  Ryzen 9 7950X:   21.15 / 21.28 / 21.40
Result
ONNX Runtime 1.14 - Model: bertsquad-12 - Device: CPU - Executor: Parallel (OpenBenchmarking.org)
Inferences Per Second (more is better):
  Ryzen 9 7900X:   13.87  (SE +/- 0.15, N = 3)
  Ryzen 9 7900X3D: 12.89  (SE +/- 0.08, N = 3)
  Ryzen 9 7950X3D: 14.81  (SE +/- 0.15, N = 4)
  Ryzen 9 7950X:   16.23  (SE +/- 0.18, N = 3)
Inference Time Cost (ms)
Fewer is better:
  Ryzen 9 7900X:   72.14  (SE +/- 0.81, N = 3)
  Ryzen 9 7900X3D: 77.57  (SE +/- 0.50, N = 3)
  Ryzen 9 7950X3D: 67.55  (SE +/- 0.70, N = 4)
  Ryzen 9 7950X:   61.61  (SE +/- 0.67, N = 3)
Result Confidence
Inferences Per Second, Min / Avg / Max:
  Ryzen 9 7900X:   13.56 / 13.87 / 14.04
  Ryzen 9 7900X3D: 12.75 / 12.89 / 13.03
  Ryzen 9 7950X3D: 14.41 / 14.81 / 15.13
  Ryzen 9 7950X:   15.99 / 16.23 / 16.58
Result
ONNX Runtime 1.14 - Model: GPT-2 - Device: CPU - Executor: Standard (OpenBenchmarking.org)
Inferences Per Second (more is better):
  Ryzen 9 7900X:   172.81  (SE +/- 1.80, N = 15)
  Ryzen 9 7900X3D: 195.76  (SE +/- 1.34, N = 3)
  Ryzen 9 7950X3D: 185.38  (SE +/- 2.46, N = 3)
  Ryzen 9 7950X:   169.50  (SE +/- 0.39, N = 3)
Inference Time Cost (ms)
Fewer is better:
  Ryzen 9 7900X:   5.79435  (SE +/- 0.06489, N = 15)
  Ryzen 9 7900X3D: 5.10690  (SE +/- 0.03513, N = 3)
  Ryzen 9 7950X3D: 5.39416  (SE +/- 0.07238, N = 3)
  Ryzen 9 7950X:   5.89782  (SE +/- 0.01353, N = 3)
Result Confidence
Inferences Per Second, Min / Avg / Max:
  Ryzen 9 7900X:   153.18 / 172.81 / 178.21
  Ryzen 9 7900X3D: 193.12 / 195.76 / 197.48
  Ryzen 9 7950X3D: 180.58 / 185.38 / 188.74
  Ryzen 9 7950X:   169.11 / 169.50 / 170.28
Result
ONNX Runtime 1.14 - Model: GPT-2 - Device: CPU - Executor: Parallel (OpenBenchmarking.org)
Inferences Per Second (more is better):
  Ryzen 9 7900X:   146.34  (SE +/- 1.56, N = 3)
  Ryzen 9 7900X3D: 157.29  (SE +/- 0.56, N = 3)
  Ryzen 9 7950X3D: 164.53  (SE +/- 0.66, N = 3)
  Ryzen 9 7950X:   146.50  (SE +/- 0.20, N = 3)
Inference Time Cost (ms)
Fewer is better:
  Ryzen 9 7900X:   6.83177  (SE +/- 0.07342, N = 3)
  Ryzen 9 7900X3D: 6.35362  (SE +/- 0.02269, N = 3)
  Ryzen 9 7950X3D: 6.07438  (SE +/- 0.02467, N = 3)
  Ryzen 9 7950X:   6.82261  (SE +/- 0.00930, N = 3)
Result Confidence
Inferences Per Second, Min / Avg / Max:
  Ryzen 9 7900X:   143.22 / 146.34 / 147.95
  Ryzen 9 7900X3D: 156.26 / 157.29 / 158.19
  Ryzen 9 7950X3D: 163.20 / 164.53 / 165.24
  Ryzen 9 7950X:   146.20 / 146.50 / 146.88
Result
ONNX Runtime 1.14 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard (OpenBenchmarking.org)
Inferences Per Second (more is better):
  Ryzen 9 7900X:   42.38  (SE +/- 1.23, N = 15)
  Ryzen 9 7900X3D: 39.74  (SE +/- 1.38, N = 12)
  Ryzen 9 7950X3D: 37.39  (SE +/- 1.50, N = 15)
  Ryzen 9 7950X:   44.79  (SE +/- 1.19, N = 15)
Inference Time Cost (ms)
Fewer is better:
  Ryzen 9 7900X:   23.94  (SE +/- 0.86, N = 15)
  Ryzen 9 7900X3D: 25.56  (SE +/- 1.05, N = 12)
  Ryzen 9 7950X3D: 27.31  (SE +/- 1.02, N = 15)
  Ryzen 9 7950X:   22.57  (SE +/- 0.66, N = 15)
Result Confidence
Inferences Per Second, Min / Avg / Max:
  Ryzen 9 7900X:   31.23 / 42.38 / 45.55
  Ryzen 9 7900X3D: 30.09 / 39.74 / 43.00
  Ryzen 9 7950X3D: 32.56 / 37.39 / 46.04
  Ryzen 9 7950X:   35.90 / 44.79 / 48.12
Result
ONNX Runtime 1.14 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel (OpenBenchmarking.org)
Inferences Per Second (more is better):
  Ryzen 9 7900X:   31.26  (SE +/- 0.21, N = 3)
  Ryzen 9 7900X3D: 28.46  (SE +/- 0.11, N = 3)
  Ryzen 9 7950X3D: 33.59  (SE +/- 0.21, N = 3)
  Ryzen 9 7950X:   36.63  (SE +/- 0.11, N = 3)
Inference Time Cost (ms)
Fewer is better:
  Ryzen 9 7900X:   31.99  (SE +/- 0.21, N = 3)
  Ryzen 9 7900X3D: 35.14  (SE +/- 0.13, N = 3)
  Ryzen 9 7950X3D: 29.77  (SE +/- 0.19, N = 3)
  Ryzen 9 7950X:   27.30  (SE +/- 0.08, N = 3)
Result Confidence
Inferences Per Second, Min / Avg / Max:
  Ryzen 9 7900X:   30.91 / 31.26 / 31.63
  Ryzen 9 7900X3D: 28.24 / 28.46 / 28.57
  Ryzen 9 7950X3D: 33.30 / 33.59 / 34.01
  Ryzen 9 7950X:   36.47 / 36.63 / 36.84
Result
ONNX Runtime 1.14 - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard (OpenBenchmarking.org)
Inferences Per Second (more is better):
  Ryzen 9 7900X:   430.96  (SE +/- 10.29, N = 12)
  Ryzen 9 7900X3D: 424.17  (SE +/- 6.30, N = 15)
  Ryzen 9 7950X3D: 486.61  (SE +/- 9.53, N = 15)
  Ryzen 9 7950X:   485.24  (SE +/- 4.65, N = 15)
Inference Time Cost (ms)
Fewer is better:
  Ryzen 9 7900X:   2.33461  (SE +/- 0.05616, N = 12)
  Ryzen 9 7900X3D: 2.36463  (SE +/- 0.03587, N = 15)
  Ryzen 9 7950X3D: 2.06799  (SE +/- 0.04809, N = 15)
  Ryzen 9 7950X:   2.06309  (SE +/- 0.02087, N = 15)
Result Confidence
Inferences Per Second, Min / Avg / Max:
  Ryzen 9 7900X:   384.24 / 430.96 / 471.23
  Ryzen 9 7900X3D: 389.22 / 424.17 / 448.75
  Ryzen 9 7950X3D: 394.27 / 486.61 / 503.45
  Ryzen 9 7950X:   443.79 / 485.24 / 502.95
Result
ONNX Runtime 1.14 - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel (OpenBenchmarking.org)
Inferences Per Second (more is better):
  Ryzen 9 7900X:   374.01  (SE +/- 1.41, N = 3)
  Ryzen 9 7900X3D: 335.71  (SE +/- 1.54, N = 3)
  Ryzen 9 7950X3D: 384.72  (SE +/- 2.07, N = 3)
  Ryzen 9 7950X:   411.01  (SE +/- 0.99, N = 3)
Inference Time Cost (ms)
Fewer is better:
  Ryzen 9 7900X:   2.67309  (SE +/- 0.01009, N = 3)
  Ryzen 9 7900X3D: 2.97822  (SE +/- 0.01374, N = 3)
  Ryzen 9 7950X3D: 2.59883  (SE +/- 0.01408, N = 3)
  Ryzen 9 7950X:   2.43239  (SE +/- 0.00583, N = 3)
Result Confidence
Inferences Per Second, Min / Avg / Max:
  Ryzen 9 7900X:   371.36 / 374.01 / 376.17
  Ryzen 9 7900X3D: 332.74 / 335.71 / 337.93
  Ryzen 9 7950X3D: 380.64 / 384.72 / 387.36
  Ryzen 9 7950X:   409.27 / 411.01 / 412.69
Result
ONNX Runtime 1.14 - Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard (OpenBenchmarking.org)
Inferences Per Second (more is better):
  Ryzen 9 7900X:   998.86   (SE +/- 31.87, N = 15)
  Ryzen 9 7900X3D: 1155.14  (SE +/- 7.21, N = 3)
  Ryzen 9 7950X3D: 1084.91  (SE +/- 39.20, N = 15)
  Ryzen 9 7950X:   1224.00  (SE +/- 11.41, N = 6)
Inference Time Cost (ms)
Fewer is better:
  Ryzen 9 7900X:   1.014088  (SE +/- 0.029891, N = 15)
  Ryzen 9 7900X3D: 0.865505  (SE +/- 0.005430, N = 3)
  Ryzen 9 7950X3D: 0.938149  (SE +/- 0.033238, N = 15)
  Ryzen 9 7950X:   0.816994  (SE +/- 0.007823, N = 6)
Result Confidence
Inferences Per Second, Min / Avg / Max:
  Ryzen 9 7900X:   893.95 / 998.86 / 1203.66
  Ryzen 9 7900X3D: 1140.81 / 1155.14 / 1163.71
  Ryzen 9 7950X3D: 930.42 / 1084.91 / 1277.41
  Ryzen 9 7950X:   1170.64 / 1224.00 / 1243.96
Result
ONNX Runtime 1.14 - Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel (OpenBenchmarking.org)
Inferences Per Second (more is better):
  Ryzen 9 7900X:   830.07  (SE +/- 9.08, N = 5)
  Ryzen 9 7900X3D: 802.98  (SE +/- 11.49, N = 3)
  Ryzen 9 7950X3D: 947.64  (SE +/- 7.64, N = 3)
  Ryzen 9 7950X:   922.34  (SE +/- 6.96, N = 3)
Inference Time Cost (ms)
Fewer is better:
  Ryzen 9 7900X:   1.20427  (SE +/- 0.01294, N = 5)
  Ryzen 9 7900X3D: 1.24481  (SE +/- 0.01782, N = 3)
  Ryzen 9 7950X3D: 1.05436  (SE +/- 0.00846, N = 3)
  Ryzen 9 7950X:   1.08333  (SE +/- 0.00816, N = 3)
Result Confidence
Inferences Per Second, Min / Avg / Max:
  Ryzen 9 7900X:   811.80 / 830.07 / 863.22
  Ryzen 9 7900X3D: 783.57 / 802.98 / 823.33
  Ryzen 9 7950X3D: 939.92 / 947.64 / 962.92
  Ryzen 9 7950X:   911.66 / 922.34 / 935.42
OpenBenchmarking.org - ONNX Runtime 1.14 - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard

Inferences Per Second (More Is Better):
  Ryzen 9 7900X:   60.11 (SE +/- 1.94, N = 15; Min: 51.96 / Max: 69.07)
  Ryzen 9 7900X3D: 68.92 (SE +/- 0.38, N = 3; Min: 68.5 / Max: 69.67)
  Ryzen 9 7950X3D: 70.16 (SE +/- 2.04, N = 15; Min: 55.81 / Max: 75.58)
  Ryzen 9 7950X:   70.09 (SE +/- 0.09, N = 3; Min: 69.96 / Max: 70.27)

Inference Time Cost (ms, Fewer Is Better):
  Ryzen 9 7900X:   16.88 (SE +/- 0.55, N = 15)
  Ryzen 9 7900X3D: 14.51 (SE +/- 0.08, N = 3)
  Ryzen 9 7950X3D: 14.45 (SE +/- 0.48, N = 15)
  Ryzen 9 7950X:   14.27 (SE +/- 0.02, N = 3)
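The SE figures are standard errors of the mean over N runs; the wide Min/Max spread on the N = 15 entries is exactly what a large SE reflects. A minimal stdlib sketch of the computation, using made-up per-run samples (not the actual raw run data):

```python
import math
import statistics

# Hypothetical per-run throughput samples (inferences/s); illustrative only.
samples = [51.96, 57.20, 60.50, 62.30, 69.07]

mean = statistics.mean(samples)
# Standard error of the mean: sample standard deviation divided by sqrt(N).
se = statistics.stdev(samples) / math.sqrt(len(samples))
print(f"Avg: {mean:.2f}, SE +/- {se:.2f}, N = {len(samples)}")
```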
OpenBenchmarking.org - ONNX Runtime 1.14 - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel

Inferences Per Second (More Is Better):
  Ryzen 9 7900X:   49.50 (SE +/- 0.05, N = 3; Min: 49.39 / Max: 49.57)
  Ryzen 9 7900X3D: 48.26 (SE +/- 0.10, N = 3; Min: 48.1 / Max: 48.45)
  Ryzen 9 7950X3D: 49.66 (SE +/- 0.55, N = 3; Min: 48.56 / Max: 50.33)
  Ryzen 9 7950X:   51.74 (SE +/- 0.42, N = 3; Min: 50.92 / Max: 52.28)

Inference Time Cost (ms, Fewer Is Better):
  Ryzen 9 7900X:   20.20 (SE +/- 0.02, N = 3)
  Ryzen 9 7900X3D: 20.72 (SE +/- 0.04, N = 3)
  Ryzen 9 7950X3D: 20.14 (SE +/- 0.23, N = 3)
  Ryzen 9 7950X:   19.33 (SE +/- 0.16, N = 3)
Open Porous Media Git - This is a test of Open Porous Media (OPM), a set of open-source tools for simulating flow and transport of fluids in porous media. This test profile builds OPM and its dependencies from upstream Git. Learn more via the OpenBenchmarking.org test page.
OPM Benchmark - Threads: 16: none of the test runs produced a result, across all nine cases (Upscale-Relperm, Flow MPI Norne, Flow MPI Norne-4C MSW, Flow MPI Extra, Drogon, SPE10 Model 1, SPE10 Model 2, Smeaheia, PUNQ-S3). The errors were the same in every case:
  Ryzen 9 7900X3D / Ryzen 9 7900X: E: There are not enough slots available in the system to satisfy the 16
  Ryzen 9 7950X3D / Ryzen 9 7950X: E: mpirun was unable to launch the specified application as it could not access
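The "not enough slots" failures are consistent with Open MPI refusing to start 16 ranks when it detects only 12 physical cores (the 7900X-class parts); the 16-core systems failed for a different reason, cut off in the truncated "could not access" message. For reference, a hedged command-line sketch of how oversubscription is normally permitted in Open MPI (the `flow CASE.DATA` invocation is illustrative, not the exact command the test profile runs):

```shell
# Let Open MPI start more ranks than detected cores (illustrative invocation):
mpirun --oversubscribe -np 16 flow CASE.DATA

# Or declare the slot count explicitly in a hostfile:
#   localhost slots=16
```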
OpenFOAM - OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics, or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org - OpenFOAM 10 (Seconds, Fewer Is Better)

Input: drivaerFastback, Small Mesh Size - Mesh Time:
  Ryzen 9 7900X:   25.02
  Ryzen 9 7900X3D: 26.21
  Ryzen 9 7950X3D: 23.91
  Ryzen 9 7950X:   22.79

Input: drivaerFastback, Small Mesh Size - Execution Time:
  Ryzen 9 7900X:   171.62
  Ryzen 9 7900X3D: 138.74
  Ryzen 9 7950X3D: 125.99
  Ryzen 9 7950X:   161.67

Input: drivaerFastback, Medium Mesh Size - Mesh Time:
  Ryzen 9 7900X:   199.34
  Ryzen 9 7900X3D: 193.36
  Ryzen 9 7950X3D: 178.11
  Ryzen 9 7950X:   190.94

Input: drivaerFastback, Medium Mesh Size - Execution Time:
  Ryzen 9 7900X:   2161.56
  Ryzen 9 7900X3D: 1948.06
  Ryzen 9 7950X3D: 1863.95
  Ryzen 9 7950X:   2126.10

1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm
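From the medium-mesh execution times, relative standing between the four CPUs follows directly (a sketch; values copied from the results above):

```python
# Medium-mesh execution times in seconds (Fewer Is Better).
times = {
    "Ryzen 9 7900X": 2161.56,
    "Ryzen 9 7900X3D": 1948.06,
    "Ryzen 9 7950X3D": 1863.95,
    "Ryzen 9 7950X": 2126.10,
}
base = times["Ryzen 9 7900X"]
for cpu, t in sorted(times.items(), key=lambda kv: kv[1]):
    # Speedup relative to the Ryzen 9 7900X baseline (higher is faster).
    print(f"{cpu}: {base / t:.3f}x vs Ryzen 9 7900X")
```

The 7950X3D comes out roughly 16% faster than the 7900X on this workload.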
PyHPC Benchmarks - PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.
Device: CPU - Project Size: 4194304 - Backends: Aesara, JAX, Numba, PyTorch, TensorFlow - Benchmarks: Equation of State, Isoneutral Mixing
  None of the test runs produced a result on any backend, on any of the four processors (Ryzen 9 7900X3D, Ryzen 9 7900X, Ryzen 9 7950X3D, Ryzen 9 7950X).
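For context on what these benchmarks measure: the "Equation of State" case evaluates a pointwise density polynomial over a large array. A toy NumPy stand-in for that shape of workload (an assumed polynomial, not the actual kernel PyHPC uses):

```python
import time

import numpy as np

# Toy stand-in for an equation-of-state kernel: a pointwise polynomial over a
# 4194304-element array (the "Project Size" above). Not the real PyHPC kernel.
n = 4_194_304
rng = np.random.default_rng(0)
temp = rng.uniform(0.0, 30.0, n)   # temperature, arbitrary units
salt = rng.uniform(30.0, 40.0, n)  # salinity, arbitrary units

t0 = time.perf_counter()
rho = 999.8 + 0.8 * salt - 0.2 * temp - 0.005 * temp * temp
elapsed = time.perf_counter() - t0
print(f"{n} points in {elapsed * 1000:.1f} ms")
```

The suite runs this style of kernel through each backend (NumPy-like arrays for Numba/JAX, tensors for PyTorch/TensorFlow) and times the evaluation.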
srsRAN srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.
TensorFlow - This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries. Learn more via the OpenBenchmarking.org test page.
Xcompact3d Incompact3d - Xcompact3d Incompact3d is a Fortran-MPI based, finite-difference high-performance code for solving the incompressible Navier-Stokes equations and as many scalar transport equations as you need. Learn more via the OpenBenchmarking.org test page.
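No results for this test appear in the export. As background, the finite-difference core such codes build on can be illustrated with a second-order central difference (a generic sketch; Xcompact3d itself uses higher-order compact schemes):

```python
import numpy as np

# Second-order central difference for du/dx on a periodic grid -- a generic
# illustration of finite differences, not Xcompact3d's actual scheme.
n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
u = np.sin(x)
dx = x[1] - x[0]

# (u[i+1] - u[i-1]) / (2*dx), with periodic wraparound via roll.
dudx = (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)

# Compare against the exact derivative cos(x); error scales as dx**2.
max_err = float(np.max(np.abs(dudx - np.cos(x))))
print(f"max error vs exact derivative: {max_err:.2e}")
```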
Ryzen 9 7900X3D Processor: AMD Ryzen 9 7900X3D 12-Core @ 4.40GHz (12 Cores / 24 Threads), Motherboard: ASUS ROG CROSSHAIR X670E HERO (9922 BIOS), Chipset: AMD Device 14d8, Memory: 32GB, Disk: Western Digital WD_BLACK SN850X 1000GB + 2000GB, Graphics: AMD Radeon RX 7900 XTX 24GB (2304/1249MHz), Audio: AMD Device ab30, Monitor: ASUS MG28U, Network: Intel I225-V + Intel Wi-Fi 6 AX210/AX211/AX411
OS: Ubuntu 23.04, Kernel: 6.2.2-060202-generic (x86_64), Desktop: GNOME Shell 43.2, Display Server: X Server 1.21.1.6 + Wayland, OpenGL: 4.6 Mesa 23.1.0-devel (git-efcb639 2023-02-13 lunar-oibaf-ppa) (LLVM 15.0.7 DRM 3.49), Compiler: GCC 12.2.0, File-System: ext4, Screen Resolution: 3840x2160
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-AKimc9/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-AKimc9/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa601203
Python Notes: Python 3.11.1
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 3 March 2023 20:08 by user phoronix.
Ryzen 9 7900X Processor: AMD Ryzen 9 7900X 12-Core @ 4.70GHz (12 Cores / 24 Threads), Motherboard: ASUS ROG CROSSHAIR X670E HERO (9922 BIOS), Chipset: AMD Device 14d8, Memory: 32GB, Disk: Western Digital WD_BLACK SN850X 1000GB + 2000GB, Graphics: AMD Radeon RX 7900 XTX 24GB (2304/1249MHz), Audio: AMD Device ab30, Monitor: ASUS MG28U, Network: Intel I225-V + Intel Wi-Fi 6 AX210/AX211/AX411
OS: Ubuntu 23.04, Kernel: 6.2.2-060202-generic (x86_64), Desktop: GNOME Shell 43.2, Display Server: X Server 1.21.1.6 + Wayland, OpenGL: 4.6 Mesa 23.1.0-devel (git-efcb639 2023-02-13 lunar-oibaf-ppa) (LLVM 15.0.7 DRM 3.49), Compiler: GCC 12.2.0, File-System: ext4, Screen Resolution: 3840x2160
Kernel, Compiler, Processor, Python, and Security Notes: identical to the Ryzen 9 7900X3D system above.
Testing initiated at 4 March 2023 06:58 by user phoronix.
Ryzen 9 7950X3D Processor: AMD Ryzen 9 7950X3D 16-Core @ 4.20GHz (16 Cores / 32 Threads), Motherboard: ASUS ROG CROSSHAIR X670E HERO (9922 BIOS), Chipset: AMD Device 14d8, Memory: 32GB, Disk: Western Digital WD_BLACK SN850X 1000GB + 2000GB, Graphics: AMD Radeon RX 7900 XTX 24GB (2304/1249MHz), Audio: AMD Device ab30, Monitor: ASUS MG28U, Network: Intel I225-V + Intel Wi-Fi 6 AX210/AX211/AX411
OS: Ubuntu 23.04, Kernel: 6.2.2-060202-generic (x86_64), Desktop: GNOME Shell 43.2, Display Server: X Server 1.21.1.6 + Wayland, OpenGL: 4.6 Mesa 23.1.0-devel (git-efcb639 2023-02-13 lunar-oibaf-ppa) (LLVM 15.0.7 DRM 3.49), Compiler: GCC 12.2.0, File-System: ext4, Screen Resolution: 3840x2160
Kernel, Compiler, Processor, Python, and Security Notes: identical to the Ryzen 9 7900X3D system above.
Testing initiated at 4 March 2023 21:15 by user phoronix.
Ryzen 9 7950X Processor: AMD Ryzen 9 7950X 16-Core @ 4.50GHz (16 Cores / 32 Threads), Motherboard: ASUS ROG CROSSHAIR X670E HERO (9922 BIOS), Chipset: AMD Device 14d8, Memory: 32GB, Disk: Western Digital WD_BLACK SN850X 1000GB + 2000GB, Graphics: AMD Radeon RX 7900 XTX 24GB (2304/1249MHz), Audio: AMD Device ab30, Monitor: ASUS MG28U, Network: Intel I225-V + Intel Wi-Fi 6 AX210/AX211/AX411
OS: Ubuntu 23.04, Kernel: 6.2.2-060202-generic (x86_64), Desktop: GNOME Shell 43.2, Display Server: X Server 1.21.1.6 + Wayland, OpenGL: 4.6 Mesa 23.1.0-devel (git-efcb639 2023-02-13 lunar-oibaf-ppa) (LLVM 15.0.7 DRM 3.49), Compiler: GCC 12.2.0, File-System: ext4, Screen Resolution: 3840x2160
Kernel, Compiler, Processor, Python, and Security Notes: identical to the Ryzen 9 7900X3D system above.
Testing initiated at 5 March 2023 06:57 by user phoronix.