AMD 3D V-Cache Comparison

Tests for a future article.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2204299-NE-CC771232156
Test categories represented in this result file:

  BLAS (Basic Linear Algebra Sub-Routine) Tests: 3
  C++ Boost Tests: 3
  CPU Massive: 3
  Creator Workloads: 2
  Fortran Tests: 2
  HPC - High Performance Computing: 13
  Machine Learning: 10
  Molecular Dynamics: 2
  MPI Benchmarks: 2
  Multi-Core: 2
  NVIDIA GPU Compute: 3
  OpenMPI Tests: 4
  Python: 2
  Scientific Computing: 2
  Server CPU Tests: 2


Test Runs

  Ryzen 9 5950X     April 26 2022    14 Hours, 53 Minutes
  Ryzen 7 5800X3D   April 26 2022    14 Hours, 56 Minutes
  Ryzen 7 5800X     April 27 2022    18 Hours, 29 Minutes
  Ryzen 9 5900X     April 28 2022    14 Hours, 16 Minutes
  Core i9 12900K    April 28 2022    14 Hours, 51 Minutes


AMD 3D V-Cache Comparison - System Details

Ryzen 9 5950X:
  Processor: AMD Ryzen 9 5950X 16-Core @ 3.40GHz (16 Cores / 32 Threads)
  Motherboard: ASUS ROG CROSSHAIR VIII HERO (WI-FI) (4006 BIOS)
  Chipset: AMD Starship/Matisse
  Memory: 32GB
  Disk: 1000GB Sabrent Rocket 4.0 1TB
  Graphics: AMD Radeon RX 6800 16GB (2475/1000MHz)
  Audio: AMD Navi 21 HDMI Audio
  Monitor: ASUS MG28U
  Network: Realtek RTL8125 2.5GbE + Intel I211 + Intel Wi-Fi 6 AX200
  OS: Ubuntu 22.04
  Kernel: 5.17.4-051704-generic (x86_64)
  Desktop: GNOME Shell 42.0
  Display Server: X Server + Wayland
  OpenGL: 4.6 Mesa 22.2.0-devel (git-092ac67 2022-04-21 jammy-oibaf-ppa) (LLVM 14.0.0 DRM 3.44)
  Vulkan: 1.3.211
  Compiler: GCC 11.2.0
  File-System: ext4
  Screen Resolution: 3840x2160

The remaining systems report only the components that differ:

Ryzen 7 5800X3D:
  Processor: AMD Ryzen 7 5800X3D 8-Core @ 3.40GHz (8 Cores / 16 Threads)
  Motherboard: ASRock X570 Pro4 (P4.30 BIOS)
  Memory: 16GB
  Graphics: AMD Radeon RX 6800 XT 16GB (2575/1000MHz)
  Monitor: ASUS VP28U
  Network: Intel I211

Ryzen 7 5800X:
  Processor: AMD Ryzen 7 5800X 8-Core @ 3.80GHz (8 Cores / 16 Threads)

Ryzen 9 5900X:
  Processor: AMD Ryzen 9 5900X 12-Core @ 3.70GHz (12 Cores / 24 Threads)
  Motherboard: ASUS ROG CROSSHAIR VIII HERO (3904 BIOS)
  Graphics: NVIDIA NV134 8GB
  Audio: NVIDIA GP104 HD Audio
  Monitor: ASUS MG28U
  Network: Realtek RTL8125 2.5GbE + Intel I211
  Display Driver: nouveau
  OpenGL: 4.3 Mesa 22.2.0-devel (git-092ac67 2022-04-21 jammy-oibaf-ppa)

Core i9 12900K:
  Processor: Intel Core i9-12900K @ 5.20GHz (16 Cores / 24 Threads)
  Motherboard: ASUS ROG STRIX Z690-E GAMING WIFI (1003 BIOS)
  Chipset: Intel Device 7aa7
  Memory: 32GB
  Graphics: AMD Radeon RX 6800 XT 16GB (2575/1000MHz)
  Audio: Intel Device 7ad0
  Monitor: ASUS VP28U
  Network: Intel I225-V + Intel Wi-Fi 6 AX210/AX211/AX411
  OpenGL: 4.6 Mesa 22.2.0-devel (git-092ac67 2022-04-21 jammy-oibaf-ppa) (LLVM 14.0.0 DRM 3.44)

Kernel Details: Transparent Huge Pages: madvise

Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Processor Details:
  Ryzen 9 5950X: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled); CPU Microcode: 0xa201016
  Ryzen 7 5800X3D: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled); CPU Microcode: 0xa201205
  Ryzen 7 5800X: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled); CPU Microcode: 0xa201016
  Ryzen 9 5900X: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled); CPU Microcode: 0xa201016
  Core i9 12900K: Scaling Governor: intel_pstate powersave (EPP: balance_performance); CPU Microcode: 0x18; Thermald 2.4.9

Python Details: Python 3.10.4

Security Details:
  All four Ryzen systems: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected
  Core i9 12900K: identical except spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling

Result Overview (Phoronix Test Suite, relative performance across the five systems): LeelaChessZero, Xcompact3d Incompact3d, OpenFOAM, ASKAP, Caffe, WebP2 Image Encode, TNN, NCNN, Mlpack Benchmark, Mobile Neural Network, ONNX Runtime, oneDNN, Numpy Benchmark, ECP-CANDLE, Open Porous Media Git
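The overview compares each system's aggregate performance relative to the others. Phoronix Test Suite-style overall scores are typically geometric means of per-test results normalized to a common baseline, so that no single test dominates the aggregate. A minimal sketch under that assumption; the system names and scores below are hypothetical "more is better" values, not data from this result file:

```python
import math

def geo_mean(xs):
    """Geometric mean, computed via logs to stay stable over many tests."""
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

def overall_scores(results):
    """Normalize each test so the slowest system scores 100%,
    then combine each system's normalized results with a geometric mean."""
    n_tests = len(next(iter(results.values())))
    baseline = [min(r[i] for r in results.values()) for i in range(n_tests)]
    return {
        name: geo_mean([100.0 * r[i] / baseline[i] for i in range(n_tests)])
        for name, r in results.items()
    }

# Hypothetical 'more is better' results for two systems on two tests
print(overall_scores({"System A": [100.0, 200.0], "System B": [50.0, 100.0]}))
```

With these toy inputs System A wins both tests by 2x, so its overall score is 200% of System B's baseline 100%.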

AMD 3D V-Cache Comparison: detailed per-test results table for the five systems (ONNX Runtime, ASKAP, ECP-CANDLE, LeelaChessZero, oneDNN, OpenFOAM, Xcompact3d Incompact3d, Caffe, WebP2, TNN, NCNN, Mlpack, Mobile Neural Network, Numpy, Open Porous Media Git); the individual results are broken out in the sections that follow. (OpenBenchmarking.org)

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org: ONNX Runtime 1.11 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel (Inferences Per Minute, More Is Better)

  Ryzen 9 5950X:   1540  (SE +/- 8.09, N = 3; Min: 1526.5 / Avg: 1540.17 / Max: 1554.5)
  Ryzen 7 5800X3D: 1223  (SE +/- 3.51, N = 3; Min: 1218.5 / Avg: 1222.5 / Max: 1229.5)
  Ryzen 7 5800X:   1107  (SE +/- 4.25, N = 3; Min: 1098.5 / Avg: 1107 / Max: 1111.5)
  Ryzen 9 5900X:   1421  (SE +/- 1.17, N = 3; Min: 1420 / Avg: 1421.17 / Max: 1423.5)
  Core i9 12900K:   362  (SE +/- 0.17, N = 3; Min: 361.5 / Avg: 361.67 / Max: 362)

1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt
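The "SE +/- x, N = y" figures throughout these results are standard errors of the mean over the N recorded runs, and the Min/Avg/Max triples summarize the same samples. A minimal sketch of how those figures are derived; the middle sample value below is a hypothetical stand-in, since only min/avg/max are published, not the individual runs:

```python
import math

def summarize(samples):
    """Return (avg, standard error, min, max) for a list of run results,
    mirroring the 'SE +/- x, N = y' and Min/Avg/Max figures PTS reports."""
    n = len(samples)
    avg = sum(samples) / n
    # sample standard deviation (ddof=1), then standard error of the mean
    sd = math.sqrt(sum((x - avg) ** 2 for x in samples) / (n - 1))
    se = sd / math.sqrt(n)
    return avg, se, min(samples), max(samples)

# Min and max from the ArcFace run above; the middle run is hypothetical
avg, se, lo, hi = summarize([1526.5, 1539.5, 1554.5])
print(f"SE +/- {se:.2f}, N = 3; Min: {lo} / Avg: {avg:.2f} / Max: {hi}")
```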

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve); some earlier ASKAP benchmarks are also included for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org: ASKAP 1.0 - Test: tConvolve OpenMP - Gridding (Million Grid Points Per Second, More Is Better)

  Ryzen 9 5950X:   2755.72  (SE +/- 27.21, N = 6; Min: 2662.56 / Avg: 2755.72 / Max: 2832.51)
  Ryzen 7 5800X3D: 6946.11  (SE +/- 78.12, N = 15; Min: 6494.05 / Avg: 6946.11 / Max: 7396)
  Ryzen 7 5800X:   1709.25  (SE +/- 14.53, N = 15; Min: 1603.95 / Avg: 1709.25 / Max: 1786.95)
  Ryzen 9 5900X:   2936.85  (SE +/- 10.74, N = 6; Min: 2894.09 / Avg: 2936.85 / Max: 2958.4)
  Core i9 12900K:  4631.82  (SE +/- 34.51, N = 6; Min: 4512.81 / Avg: 4631.82 / Max: 4754.57)

1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

ECP-CANDLE

The CANDLE benchmark codes implement deep learning architectures relevant to problems in cancer. These architectures address problems at different biological scales, specifically problems at the molecular, cellular and population scales. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org: ECP-CANDLE 0.4 - Benchmark: P3B2 (Seconds, Fewer Is Better)

  Ryzen 9 5950X:    654.35
  Ryzen 7 5800X3D:  553.75
  Ryzen 7 5800X:    390.27
  Ryzen 9 5900X:    667.28
  Core i9 12900K:  1503.26

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve); some earlier ASKAP benchmarks are also included for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org: ASKAP 1.0 - Test: tConvolve MT - Gridding (Million Grid Points Per Second, More Is Better)

  Ryzen 9 5950X:    784.51  (SE +/- 1.65, N = 3; Min: 781.31 / Avg: 784.51 / Max: 786.79)
  Ryzen 7 5800X3D:  930.57  (SE +/- 2.64, N = 3; Min: 925.91 / Avg: 930.57 / Max: 935.05)
  Ryzen 7 5800X:    854.71  (SE +/- 2.02, N = 3; Min: 850.66 / Avg: 854.71 / Max: 856.82)
  Ryzen 9 5900X:    837.94  (SE +/- 0.90, N = 3; Min: 836.19 / Avg: 837.94 / Max: 839.15)
  Core i9 12900K:  2721.15  (SE +/- 2.70, N = 3; Min: 2716.9 / Avg: 2721.15 / Max: 2726.17)

1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org: LeelaChessZero 0.28 - Backend: BLAS (Nodes Per Second, More Is Better)

  Ryzen 9 5950X:    668  (SE +/- 2.73, N = 3; Min: 663 / Avg: 668.33 / Max: 672)
  Ryzen 7 5800X3D: 1254  (SE +/- 6.08, N = 3; Min: 1243 / Avg: 1254 / Max: 1264)
  Ryzen 7 5800X:    867  (SE +/- 6.36, N = 3; Min: 860 / Avg: 867.33 / Max: 880)
  Ryzen 9 5900X:    954  (SE +/- 11.02, N = 3; Min: 940 / Avg: 954.33 / Max: 976)
  Core i9 12900K:  2201  (SE +/- 22.18, N = 3; Min: 2157 / Avg: 2201.33 / Max: 2225)

1. (CXX) g++ options: -flto -pthread

OpenBenchmarking.org: LeelaChessZero 0.28 - Backend: Eigen (Nodes Per Second, More Is Better)

  Ryzen 9 5950X:    684  (SE +/- 7.32, N = 9; Min: 657 / Avg: 684 / Max: 730)
  Ryzen 7 5800X3D: 1160  (SE +/- 8.89, N = 3; Min: 1147 / Avg: 1160 / Max: 1177)
  Ryzen 7 5800X:    854  (SE +/- 8.85, N = 9; Min: 811 / Avg: 853.78 / Max: 889)
  Ryzen 9 5900X:    886  (SE +/- 7.53, N = 9; Min: 853 / Avg: 885.56 / Max: 927)
  Core i9 12900K:  2161  (SE +/- 19.84, N = 7; Min: 2081 / Avg: 2160.71 / Max: 2254)

1. (CXX) g++ options: -flto -pthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org: oneDNN 2.6 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)

  Ryzen 9 5950X:   0.476097  (SE +/- 0.002527, N = 5; Min: 0.47 / Avg: 0.48 / Max: 0.49) -lpthread - MIN: 0.42
  Ryzen 7 5800X3D: 0.604780  (SE +/- 0.002108, N = 5; Min: 0.6 / Avg: 0.6 / Max: 0.61) -lpthread - MIN: 0.58
  Ryzen 7 5800X:   1.457520  (SE +/- 0.002722, N = 5; Min: 1.45 / Avg: 1.46 / Max: 1.46) -lpthread - MIN: 1.35
  Ryzen 9 5900X:   0.519733  (SE +/- 0.003942, N = 5; Min: 0.51 / Avg: 0.52 / Max: 0.53) -lpthread - MIN: 0.47
  Core i9 12900K:  0.812047  (SE +/- 0.002976, N = 5; Min: 0.81 / Avg: 0.81 / Max: 0.82) MIN: 0.79

1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl

ECP-CANDLE

The CANDLE benchmark codes implement deep learning architectures relevant to problems in cancer. These architectures address problems at different biological scales, specifically problems at the molecular, cellular and population scales. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org: ECP-CANDLE 0.4 - Benchmark: P3B1 (Seconds, Fewer Is Better)

  Ryzen 9 5950X:   1309.59
  Ryzen 7 5800X3D: 1023.32
  Ryzen 7 5800X:   1158.67
  Ryzen 9 5900X:   1174.47
  Core i9 12900K:   429.61

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org: oneDNN 2.6 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)

  Ryzen 9 5950X:   18.03210  (SE +/- 0.02368, N = 7; Min: 17.92 / Avg: 18.03 / Max: 18.12) -lpthread - MIN: 17.6
  Ryzen 7 5800X3D: 10.38000  (SE +/- 0.04088, N = 7; Min: 10.18 / Avg: 10.38 / Max: 10.5) -lpthread - MIN: 9.78
  Ryzen 7 5800X:   16.35340  (SE +/- 0.22593, N = 15; Min: 15.43 / Avg: 16.35 / Max: 17.7) -lpthread - MIN: 14.78
  Ryzen 9 5900X:   16.73820  (SE +/- 0.04249, N = 7; Min: 16.58 / Avg: 16.74 / Max: 16.85) -lpthread - MIN: 16.08
  Core i9 12900K:   6.00214  (SE +/- 0.00249, N = 7; Min: 5.99 / Avg: 6 / Max: 6.01) MIN: 5.9

1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve); some earlier ASKAP benchmarks are also included for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org: ASKAP 1.0 - Test: tConvolve MT - Degridding (Million Grid Points Per Second, More Is Better)

  Ryzen 9 5950X:   1346.57  (SE +/- 0.92, N = 3; Min: 1344.94 / Avg: 1346.57 / Max: 1348.13)
  Ryzen 7 5800X3D: 1594.95  (SE +/- 1.83, N = 3; Min: 1592.56 / Avg: 1594.95 / Max: 1598.54)
  Ryzen 7 5800X:   1471.58  (SE +/- 5.57, N = 3; Min: 1460.44 / Avg: 1471.58 / Max: 1477.15)
  Ryzen 9 5900X:   1541.67  (SE +/- 3.45, N = 3; Min: 1534.99 / Avg: 1541.67 / Max: 1546.5)
  Core i9 12900K:  3872.03  (SE +/- 2.07, N = 3; Min: 3868.13 / Avg: 3872.03 / Max: 3875.16)

1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

OpenFOAM

OpenFOAM is the leading free, open source software for computational fluid dynamics (CFD). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org: OpenFOAM 8 - Input: Motorbike 60M (Seconds, Fewer Is Better)

  Ryzen 9 5950X:   1382.54  (SE +/- 0.26, N = 3; Min: 1382.26 / Avg: 1382.54 / Max: 1383.06)
  Ryzen 7 5800X3D: 1090.21  (SE +/- 1.12, N = 3; Min: 1088.32 / Avg: 1090.21 / Max: 1092.19)
  Ryzen 7 5800X:   1270.72  (SE +/- 0.23, N = 3; Min: 1270.31 / Avg: 1270.72 / Max: 1271.1)
  Ryzen 9 5900X:   1277.55  (SE +/- 0.77, N = 3; Min: 1276.03 / Avg: 1277.55 / Max: 1278.57)
  Core i9 12900K:   487.80  (SE +/- 0.14, N = 3; Min: 487.55 / Avg: 487.8 / Max: 488.05)

-lfoamToVTK -llagrangian -lfileFormats
1. (CXX) g++ options: -std=c++11 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -ldynamicMesh -lgenericPatchFields -lOpenFOAM -ldl -lm

Xcompact3d Incompact3d

Xcompact3d Incompact3d is a Fortran-MPI based, finite-difference high-performance code for solving the incompressible Navier-Stokes equations along with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org: Xcompact3d Incompact3d 2021-03-11 - Input: input.i3d 193 Cells Per Direction (Seconds, Fewer Is Better)

  Ryzen 9 5950X:   156.16  (SE +/- 0.16, N = 3; Min: 155.86 / Avg: 156.16 / Max: 156.4)
  Ryzen 7 5800X3D: 126.86  (SE +/- 0.11, N = 3; Min: 126.74 / Avg: 126.86 / Max: 127.08)
  Ryzen 7 5800X:   141.21  (SE +/- 1.17, N = 9; Min: 138.79 / Avg: 141.21 / Max: 146.07)
  Ryzen 9 5900X:   144.47  (SE +/- 0.05, N = 3; Min: 144.4 / Avg: 144.47 / Max: 144.58)
  Core i9 12900K:   55.61  (SE +/- 0.58, N = 3; Min: 54.91 / Avg: 55.61 / Max: 56.77)

1. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve); some earlier ASKAP benchmarks are also included for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org: ASKAP 1.0 - Test: tConvolve OpenMP - Degridding (Million Grid Points Per Second, More Is Better)

  Ryzen 9 5950X:   3214.58  (SE +/- 11.91, N = 6; Min: 3169.71 / Avg: 3214.58 / Max: 3247.02)
  Ryzen 7 5800X3D: 8741.60  (SE +/- 38.17, N = 15; Min: 8588.9 / Avg: 8741.59 / Max: 8875.2)
  Ryzen 7 5800X:   3367.67  (SE +/- 6.53, N = 15; Min: 3328.2 / Avg: 3367.67 / Max: 3413.54)
  Ryzen 9 5900X:   3732.72  (SE +/- 10.98, N = 6; Min: 3698 / Avg: 3732.72 / Max: 3750.08)
  Core i9 12900K:  7793.77  (SE +/- 37.29, N = 6; Min: 7607.31 / Avg: 7793.77 / Max: 7831.06)

1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org: oneDNN 2.6 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)

  Ryzen 9 5950X:   7.79652  (SE +/- 0.05069, N = 5; Min: 7.73 / Avg: 7.8 / Max: 8) -lpthread - MIN: 7.29
  Ryzen 7 5800X3D: 6.55765  (SE +/- 0.01065, N = 5; Min: 6.53 / Avg: 6.56 / Max: 6.59) -lpthread - MIN: 6.32
  Ryzen 7 5800X:   8.93552  (SE +/- 0.01310, N = 5; Min: 8.91 / Avg: 8.94 / Max: 8.97) -lpthread - MIN: 8.33
  Ryzen 9 5900X:   7.54843  (SE +/- 0.02525, N = 5; Min: 7.48 / Avg: 7.55 / Max: 7.63) -lpthread - MIN: 7.29
  Core i9 12900K:  3.44643  (SE +/- 0.00441, N = 5; Min: 3.44 / Avg: 3.45 / Max: 3.46) MIN: 3.4

1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl

Xcompact3d Incompact3d

Xcompact3d Incompact3d is a Fortran-MPI based, finite-difference high-performance code for solving the incompressible Navier-Stokes equations along with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org: Xcompact3d Incompact3d 2021-03-11 - Input: input.i3d 129 Cells Per Direction (Seconds, Fewer Is Better)

  Ryzen 9 5950X:   33.95  (SE +/- 0.05, N = 3; Min: 33.89 / Avg: 33.95 / Max: 34.04)
  Ryzen 7 5800X3D: 27.18  (SE +/- 0.01, N = 3; Min: 27.16 / Avg: 27.18 / Max: 27.2)
  Ryzen 7 5800X:   37.21  (SE +/- 0.02, N = 3; Min: 37.17 / Avg: 37.21 / Max: 37.25)
  Ryzen 9 5900X:   32.15  (SE +/- 0.32, N = 3; Min: 31.59 / Avg: 32.15 / Max: 32.69)
  Core i9 12900K:  14.55  (SE +/- 0.01, N = 4; Min: 14.54 / Avg: 14.55 / Max: 14.58)

1. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve); some earlier ASKAP benchmarks are also included for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org: ASKAP 1.0 - Test: Hogbom Clean OpenMP (Iterations Per Second, More Is Better)

  Ryzen 9 5950X:   220.04  (SE +/- 1.00, N = 4; Min: 218.34 / Avg: 220.04 / Max: 222.22)
  Ryzen 7 5800X3D: 527.72  (SE +/- 1.80, N = 4; Min: 523.56 / Avg: 527.72 / Max: 531.92)
  Ryzen 7 5800X:   221.73  (SE +/- 0.57, N = 3; Min: 220.75 / Avg: 221.73 / Max: 222.72)
  Ryzen 9 5900X:   238.67  (SE +/- 0.52, N = 4; Min: 237.53 / Avg: 238.67 / Max: 239.81)
  Core i9 12900K:  540.54  (SE +/- 0.00, N = 5; Min: 540.54 / Avg: 540.54 / Max: 540.54)

1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

Mobile Neural Network

MNN is the Mobile Neural Network, a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org: Mobile Neural Network 1.2 - Model: mobilenet-v1-1.0 (ms, Fewer Is Better)

  Ryzen 9 5950X:   2.639  (SE +/- 0.067, N = 3; Min: 2.57 / Avg: 2.64 / Max: 2.77) MIN: 2.52 / MAX: 11.27
  Ryzen 7 5800X3D: 1.676  (SE +/- 0.010, N = 3; Min: 1.66 / Avg: 1.68 / Max: 1.69) MIN: 1.63 / MAX: 2.97
  Ryzen 7 5800X:   1.816  (SE +/- 0.011, N = 3; Min: 1.8 / Avg: 1.82 / Max: 1.84) MIN: 1.79 / MAX: 3.08
  Ryzen 9 5900X:   4.072  (SE +/- 0.037, N = 3; Min: 4 / Avg: 4.07 / Max: 4.12) MIN: 3.95 / MAX: 4.31
  Core i9 12900K:  2.891  (SE +/- 0.008, N = 3; Min: 2.88 / Avg: 2.89 / Max: 2.91) MIN: 2.85 / MAX: 8.54

1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.11Model: yolov4 - Device: CPU - Executor: ParallelRyzen 9 5950XRyzen 7 5800X3DRyzen 7 5800XRyzen 9 5900XCore i9 12900K140280420560700SE +/- 0.44, N = 3SE +/- 0.60, N = 3SE +/- 0.44, N = 3SE +/- 0.33, N = 3SE +/- 1.09, N = 32963082732996291. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt
OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.11Model: yolov4 - Device: CPU - Executor: ParallelRyzen 9 5950XRyzen 7 5800X3DRyzen 7 5800XRyzen 9 5900XCore i9 12900K110220330440550Min: 295 / Avg: 295.67 / Max: 296.5Min: 307 / Avg: 308.17 / Max: 309Min: 272 / Avg: 272.67 / Max: 273.5Min: 298.5 / Avg: 299.17 / Max: 299.5Min: 627.5 / Avg: 628.83 / Max: 6311. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt
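Since this chart reports Inferences Per Minute, per-inference latency is simply 60,000 ms divided by that rate. A small sketch using the Core i9 12900K's reported average from the yolov4 parallel-executor run above:

```python
# Convert reported throughput (inferences/minute) to per-inference latency.
inferences_per_minute = 628.83  # Core i9 12900K average reported above
latency_ms = 60_000 / inferences_per_minute
print(f"{latency_ms:.1f} ms per inference")
```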

OpenFOAM

OpenFOAM is the leading free, open source software for computational fluid dynamics (CFD). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterOpenFOAM 8Input: Motorbike 30MRyzen 9 5950XRyzen 7 5800X3DRyzen 7 5800XRyzen 9 5900XCore i9 12900K4080120160200SE +/- 0.24, N = 3SE +/- 0.32, N = 3SE +/- 0.25, N = 3SE +/- 0.15, N = 3SE +/- 0.18, N = 398.4480.29177.6096.1684.47-lfoamToVTK -llagrangian -lfileFormats1. (CXX) g++ options: -std=c++11 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -ldynamicMesh -lgenericPatchFields -lOpenFOAM -ldl -lm
OpenBenchmarking.orgSeconds, Fewer Is BetterOpenFOAM 8Input: Motorbike 30MRyzen 9 5950XRyzen 7 5800X3DRyzen 7 5800XRyzen 9 5900XCore i9 12900K306090120150Min: 98.16 / Avg: 98.44 / Max: 98.93Min: 79.65 / Avg: 80.29 / Max: 80.69Min: 177.16 / Avg: 177.6 / Max: 178.02Min: 95.99 / Avg: 96.16 / Max: 96.47Min: 84.11 / Avg: 84.47 / Max: 84.71. (CXX) g++ options: -std=c++11 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -ldynamicMesh -lgenericPatchFields -lOpenFOAM -ldl -lm
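For "Seconds, Fewer Is Better" runtimes like the OpenFOAM result above, a common normalization is relative performance against the slowest configuration (slowest time divided by each time). A sketch using the averages listed above:

```python
# Normalize "seconds, fewer is better" results to relative performance
# versus the slowest run (the Ryzen 7 5800X here).
times = {
    "Ryzen 9 5950X": 98.44,
    "Ryzen 7 5800X3D": 80.29,
    "Ryzen 7 5800X": 177.60,
    "Ryzen 9 5900X": 96.16,
    "Core i9 12900K": 84.47,
}
slowest = max(times.values())
relative = {cpu: slowest / t for cpu, t in times.items()}
for cpu, r in sorted(relative.items(), key=lambda kv: -kv[1]):
    print(f"{cpu}: {r:.2f}x")
```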

Mobile Neural Network

MNN is the Mobile Neural Network, a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 1.2Model: mobilenetV3Ryzen 9 5950XRyzen 7 5800X3DRyzen 7 5800XRyzen 9 5900XCore i9 12900K0.5381.0761.6142.1522.69SE +/- 0.003, N = 3SE +/- 0.006, N = 3SE +/- 0.005, N = 3SE +/- 0.018, N = 3SE +/- 0.007, N = 32.3911.0821.1561.8381.174MIN: 1.87 / MAX: 3.85MIN: 1.06 / MAX: 2.25MIN: 1.14 / MAX: 1.73MIN: 1.79 / MAX: 2.07MIN: 1.15 / MAX: 2.061. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 1.2Model: mobilenetV3Ryzen 9 5950XRyzen 7 5800X3DRyzen 7 5800XRyzen 9 5900XCore i9 12900K246810Min: 2.39 / Avg: 2.39 / Max: 2.4Min: 1.08 / Avg: 1.08 / Max: 1.09Min: 1.15 / Avg: 1.16 / Max: 1.17Min: 1.81 / Avg: 1.84 / Max: 1.87Min: 1.16 / Avg: 1.17 / Max: 1.191. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve), along with some previous ASKAP benchmarks for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMpix/sec, More Is BetterASKAP 1.0Test: tConvolve MPI - GriddingRyzen 9 5950XRyzen 7 5800X3DRyzen 7 5800XRyzen 9 5900XCore i9 12900K2K4K6K8K10KSE +/- 0.00, N = 3SE +/- 45.52, N = 3SE +/- 58.16, N = 3SE +/- 12.67, N = 36728.098746.524256.058258.024472.731. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp
OpenBenchmarking.orgMpix/sec, More Is BetterASKAP 1.0Test: tConvolve MPI - GriddingRyzen 9 5950XRyzen 7 5800X3DRyzen 7 5800XRyzen 9 5900XCore i9 12900K15003000450060007500Min: 8746.52 / Avg: 8746.52 / Max: 8746.52Min: 4165.01 / Avg: 4256.05 / Max: 4301.57Min: 8199.86 / Avg: 8258.02 / Max: 8374.33Min: 4447.38 / Avg: 4472.73 / Max: 4485.41. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as ultimately the successor to WebP. Compared to WebP, WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterWebP2 Image Encode 20220422Encode Settings: Quality 100, Lossless CompressionRyzen 9 5950XRyzen 7 5800X3DRyzen 7 5800XRyzen 9 5900XCore i9 12900K2004006008001000SE +/- 2.59, N = 3SE +/- 2.17, N = 3SE +/- 1.21, N = 3SE +/- 4.10, N = 3SE +/- 0.96, N = 3536.77864.94974.59614.81476.451. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl
OpenBenchmarking.orgSeconds, Fewer Is BetterWebP2 Image Encode 20220422Encode Settings: Quality 100, Lossless CompressionRyzen 9 5950XRyzen 7 5800X3DRyzen 7 5800XRyzen 9 5900XCore i9 12900K2004006008001000Min: 531.66 / Avg: 536.77 / Max: 540.06Min: 860.82 / Avg: 864.93 / Max: 868.19Min: 973.03 / Avg: 974.59 / Max: 976.97Min: 606.73 / Avg: 614.81 / Max: 620.06Min: 474.56 / Avg: 476.45 / Max: 477.651. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl
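Given the stated 6000x4000-pixel input, these lossless-encode times can be turned into a rough throughput figure — assuming (and this is an assumption, not something the result page confirms) that each reported run encodes the image once:

```python
# Hypothetical throughput estimate: 6000x4000 px = 24 Mpix per encode.
# Assumes one encode per reported run, which is not confirmed here.
megapixels = 6000 * 4000 / 1e6
seconds = 476.45  # Core i9 12900K, Quality 100 lossless average above
print(f"~{megapixels / seconds:.3f} Mpix/sec")
```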

Mlpack Benchmark

Mlpack provides benchmark scripts for machine learning libraries. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterMlpack BenchmarkBenchmark: scikit_qdaRyzen 9 5950XRyzen 7 5800X3DRyzen 7 5800XRyzen 9 5900XCore i9 12900K1224364860SE +/- 0.20, N = 3SE +/- 0.08, N = 3SE +/- 0.10, N = 3SE +/- 0.34, N = 3SE +/- 0.13, N = 355.0638.2752.5751.3927.31
OpenBenchmarking.orgSeconds, Fewer Is BetterMlpack BenchmarkBenchmark: scikit_qdaRyzen 9 5950XRyzen 7 5800X3DRyzen 7 5800XRyzen 9 5900XCore i9 12900K1122334455Min: 54.68 / Avg: 55.06 / Max: 55.33Min: 38.13 / Avg: 38.27 / Max: 38.39Min: 52.45 / Avg: 52.57 / Max: 52.78Min: 50.71 / Avg: 51.39 / Max: 51.78Min: 27.06 / Avg: 27.31 / Max: 27.49

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as ultimately the successor to WebP. Compared to WebP, WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterWebP2 Image Encode 20220422Encode Settings: Quality 100, Compression Effort 5Ryzen 9 5950XRyzen 7 5800X3DRyzen 7 5800XRyzen 9 5900XCore i9 12900K1.32192.64383.96575.28766.6095SE +/- 0.006, N = 9SE +/- 0.003, N = 7SE +/- 0.004, N = 7SE +/- 0.003, N = 8SE +/- 0.003, N = 93.2495.1695.8753.7092.9361. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl
OpenBenchmarking.orgSeconds, Fewer Is BetterWebP2 Image Encode 20220422Encode Settings: Quality 100, Compression Effort 5Ryzen 9 5950XRyzen 7 5800X3DRyzen 7 5800XRyzen 9 5900XCore i9 12900K246810Min: 3.22 / Avg: 3.25 / Max: 3.27Min: 5.16 / Avg: 5.17 / Max: 5.18Min: 5.86 / Avg: 5.88 / Max: 5.89Min: 3.69 / Avg: 3.71 / Max: 3.72Min: 2.92 / Avg: 2.94 / Max: 2.951. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterTNN 0.3Target: CPU - Model: SqueezeNet v1.1Ryzen 9 5950XRyzen 7 5800X3DRyzen 7 5800XRyzen 9 5900XCore i9 12900K60120180240300SE +/- 1.16, N = 4SE +/- 0.03, N = 4SE +/- 0.18, N = 3SE +/- 1.66, N = 4SE +/- 0.06, N = 5213.32222.33266.19213.78133.97MIN: 209.34 / MAX: 215.21MIN: 222.15 / MAX: 222.66MIN: 265.87 / MAX: 266.68MIN: 210.63 / MAX: 219.97MIN: 133.46 / MAX: 134.811. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl
OpenBenchmarking.orgms, Fewer Is BetterTNN 0.3Target: CPU - Model: SqueezeNet v1.1Ryzen 9 5950XRyzen 7 5800X3DRyzen 7 5800XRyzen 9 5900XCore i9 12900K50100150200250Min: 209.87 / Avg: 213.32 / Max: 214.84Min: 222.26 / Avg: 222.33 / Max: 222.38Min: 266 / Avg: 266.19 / Max: 266.55Min: 210.84 / Avg: 213.78 / Max: 217.77Min: 133.84 / Avg: 133.97 / Max: 134.121. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl
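The "SE +/-" figures attached to each bar are standard errors of the mean — the sample standard deviation divided by the square root of N. A minimal sketch with made-up samples (the actual per-run values are not exported here):

```python
import math
import statistics

# Hypothetical samples, for illustration only.
samples = [10.0, 10.2, 9.9, 10.3]
se = statistics.stdev(samples) / math.sqrt(len(samples))
print(f"SE +/- {se:.2f}, N = {len(samples)}")
```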

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as ultimately the successor to WebP. Compared to WebP, WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterWebP2 Image Encode 20220422Encode Settings: Quality 75, Compression Effort 7Ryzen 9 5950XRyzen 7 5800X3DRyzen 7 5800XRyzen 9 5900XCore i9 12900K4080120160200SE +/- 1.48, N = 3SE +/- 0.62, N = 3SE +/- 0.74, N = 3SE +/- 0.51, N = 3SE +/- 0.23, N = 3110.28178.01199.65129.73101.571. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl
OpenBenchmarking.orgSeconds, Fewer Is BetterWebP2 Image Encode 20220422Encode Settings: Quality 75, Compression Effort 7Ryzen 9 5950XRyzen 7 5800X3DRyzen 7 5800XRyzen 9 5900XCore i9 12900K4080120160200Min: 108.74 / Avg: 110.28 / Max: 113.24Min: 177.13 / Avg: 178.01 / Max: 179.21Min: 198.51 / Avg: 199.64 / Max: 201.05Min: 129.11 / Avg: 129.73 / Max: 130.76Min: 101.29 / Avg: 101.57 / Max: 102.031. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl

OpenBenchmarking.orgSeconds, Fewer Is BetterWebP2 Image Encode 20220422Encode Settings: Quality 95, Compression Effort 7Ryzen 9 5950XRyzen 7 5800X3DRyzen 7 5800XRyzen 9 5900XCore i9 12900K90180270360450SE +/- 1.17, N = 3SE +/- 0.77, N = 3SE +/- 1.68, N = 3SE +/- 0.76, N = 3SE +/- 0.51, N = 3234.93376.96420.03268.61215.581. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl
OpenBenchmarking.orgSeconds, Fewer Is BetterWebP2 Image Encode 20220422Encode Settings: Quality 95, Compression Effort 7Ryzen 9 5950XRyzen 7 5800X3DRyzen 7 5800XRyzen 9 5900XCore i9 12900K70140210280350Min: 232.74 / Avg: 234.93 / Max: 236.73Min: 375.67 / Avg: 376.96 / Max: 378.33Min: 417.37 / Avg: 420.03 / Max: 423.13Min: 267.76 / Avg: 268.61 / Max: 270.13Min: 214.96 / Avg: 215.58 / Max: 216.591. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve), along with some previous ASKAP benchmarks for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMpix/sec, More Is BetterASKAP 1.0Test: tConvolve MPI - DegriddingRyzen 9 5950XRyzen 7 5800X3DRyzen 7 5800XRyzen 9 5900XCore i9 12900K16003200480064008000SE +/- 48.56, N = 3SE +/- 53.33, N = 3SE +/- 34.79, N = 3SE +/- 49.47, N = 3SE +/- 19.39, N = 36643.646453.223976.307668.054198.511. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp
OpenBenchmarking.orgMpix/sec, More Is BetterASKAP 1.0Test: tConvolve MPI - DegriddingRyzen 9 5950XRyzen 7 5800X3DRyzen 7 5800XRyzen 9 5900XCore i9 12900K13002600390052006500Min: 6559.89 / Avg: 6643.64 / Max: 6728.09Min: 6399.89 / Avg: 6453.22 / Max: 6559.89Min: 3916.35 / Avg: 3976.3 / Max: 4036.86Min: 7569.1 / Avg: 7668.05 / Max: 7717.52Min: 4165.01 / Avg: 4198.51 / Max: 4232.191. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

Mlpack Benchmark

Mlpack provides benchmark scripts for machine learning libraries. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterMlpack BenchmarkBenchmark: scikit_svmRyzen 9 5950XRyzen 7 5800X3DRyzen 7 5800XRyzen 9 5900XCore i9 12900K510152025SE +/- 0.03, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.08, N = 3SE +/- 0.01, N = 416.5616.1720.2916.1510.54
OpenBenchmarking.orgSeconds, Fewer Is BetterMlpack BenchmarkBenchmark: scikit_svmRyzen 9 5950XRyzen 7 5800X3DRyzen 7 5800XRyzen 9 5900XCore i9 12900K510152025Min: 16.5 / Avg: 16.56 / Max: 16.6Min: 16.16 / Avg: 16.17 / Max: 16.19Min: 20.28 / Avg: 20.29 / Max: 20.31Min: 15.98 / Avg: 16.15 / Max: 16.23Min: 10.53 / Avg: 10.54 / Max: 10.55

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.11Model: bertsquad-12 - Device: CPU - Executor: ParallelRyzen 9 5950XRyzen 7 5800X3DRyzen 7 5800XRyzen 9 5900XCore i9 12900K2004006008001000SE +/- 0.29, N = 3SE +/- 1.17, N = 3SE +/- 0.29, N = 3SE +/- 0.60, N = 3SE +/- 11.29, N = 35545184805409191. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt
OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.11Model: bertsquad-12 - Device: CPU - Executor: ParallelRyzen 9 5950XRyzen 7 5800X3DRyzen 7 5800XRyzen 9 5900XCore i9 12900K160320480640800Min: 553 / Avg: 553.5 / Max: 554Min: 516 / Avg: 518.17 / Max: 520Min: 479.5 / Avg: 480 / Max: 480.5Min: 539 / Avg: 540.17 / Max: 541Min: 897.5 / Avg: 919.17 / Max: 935.51. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.6Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPURyzen 9 5950XRyzen 7 5800X3DRyzen 7 5800XRyzen 9 5900XCore i9 12900K0.45810.91621.37431.83242.2905SE +/- 0.00299, N = 3SE +/- 0.00274, N = 3SE +/- 0.00040, N = 3SE +/- 0.00489, N = 3SE +/- 0.00773, N = 31.069261.776412.036001.300491.35212-lpthread - MIN: 0.96-lpthread - MIN: 1.74-lpthread - MIN: 2.01-lpthread - MIN: 1.2MIN: 1.281. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.6Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPURyzen 9 5950XRyzen 7 5800X3DRyzen 7 5800XRyzen 9 5900XCore i9 12900K246810Min: 1.06 / Avg: 1.07 / Max: 1.07Min: 1.77 / Avg: 1.78 / Max: 1.78Min: 2.04 / Avg: 2.04 / Max: 2.04Min: 1.29 / Avg: 1.3 / Max: 1.31Min: 1.34 / Avg: 1.35 / Max: 1.361. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl
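When aggregating many results like these (as the result viewer's overall geometric mean option does), the geometric mean of per-test speedup ratios is the appropriate average, since it does not depend on which system is chosen as the baseline. A sketch with made-up ratios:

```python
import statistics

# Hypothetical per-test speedup ratios versus some baseline system.
ratios = [1.0, 2.0, 4.0]
print(round(statistics.geometric_mean(ratios), 6))
```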

Mobile Neural Network

MNN is the Mobile Neural Network, a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 1.2Model: MobileNetV2_224Ryzen 9 5950XRyzen 7 5800X3DRyzen 7 5800XRyzen 9 5900XCore i9 12900K0.73691.47382.21072.94763.6845SE +/- 0.014, N = 3SE +/- 0.011, N = 3SE +/- 0.029, N = 3SE +/- 0.045, N = 3SE +/- 0.020, N = 33.2751.8311.9422.9752.410MIN: 3.21 / MAX: 10.86MIN: 1.79 / MAX: 2.92MIN: 1.9 / MAX: 3.6MIN: 2.88 / MAX: 3.44MIN: 2.36 / MAX: 3.861. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 1.2Model: MobileNetV2_224Ryzen 9 5950XRyzen 7 5800X3DRyzen 7 5800XRyzen 9 5900XCore i9 12900K246810Min: 3.26 / Avg: 3.27 / Max: 3.3Min: 1.81 / Avg: 1.83 / Max: 1.84Min: 1.91 / Avg: 1.94 / Max: 2Min: 2.92 / Avg: 2.98 / Max: 3.07Min: 2.38 / Avg: 2.41 / Max: 2.451. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.6Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPURyzen 9 5950XRyzen 7 5800X3DRyzen 7 5800XRyzen 9 5900XCore i9 12900K246810SE +/- 0.00224, N = 9SE +/- 0.00483, N = 9SE +/- 0.00140, N = 9SE +/- 0.00484, N = 9SE +/- 0.00117, N = 93.626095.571096.370234.387735.25394-lpthread - MIN: 3.43-lpthread - MIN: 5.45-lpthread - MIN: 6.32-lpthread - MIN: 4.17MIN: 5.161. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.6Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPURyzen 9 5950XRyzen 7 5800X3DRyzen 7 5800XRyzen 9 5900XCore i9 12900K3691215Min: 3.62 / Avg: 3.63 / Max: 3.64Min: 5.56 / Avg: 5.57 / Max: 5.61Min: 6.36 / Avg: 6.37 / Max: 6.38Min: 4.37 / Avg: 4.39 / Max: 4.41Min: 5.25 / Avg: 5.25 / Max: 5.261. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as ultimately the successor to WebP. Compared to WebP, WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterWebP2 Image Encode 20220422Encode Settings: DefaultRyzen 9 5950XRyzen 7 5800X3DRyzen 7 5800XRyzen 9 5900XCore i9 12900K0.81431.62862.44293.25724.0715SE +/- 0.013, N = 10SE +/- 0.007, N = 9SE +/- 0.006, N = 8SE +/- 0.008, N = 10SE +/- 0.010, N = 112.1653.1703.6192.4392.0621. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl
OpenBenchmarking.orgSeconds, Fewer Is BetterWebP2 Image Encode 20220422Encode Settings: DefaultRyzen 9 5950XRyzen 7 5800X3DRyzen 7 5800XRyzen 9 5900XCore i9 12900K246810Min: 2.11 / Avg: 2.17 / Max: 2.24Min: 3.14 / Avg: 3.17 / Max: 3.2Min: 3.59 / Avg: 3.62 / Max: 3.64Min: 2.41 / Avg: 2.44 / Max: 2.47Min: 1.99 / Avg: 2.06 / Max: 2.11. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.6Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPURyzen 9 5950XRyzen 7 5800X3DRyzen 7 5800XRyzen 9 5900XCore i9 12900K0.63881.27761.91642.55523.194SE +/- 0.00702, N = 9SE +/- 0.00503, N = 9SE +/- 0.00474, N = 9SE +/- 0.00298, N = 9SE +/- 0.00121, N = 91.618992.483932.839131.997422.22336-lpthread - MIN: 1.45-lpthread - MIN: 2.41-lpthread - MIN: 2.8-lpthread - MIN: 1.82MIN: 2.21. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.6Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPURyzen 9 5950XRyzen 7 5800X3DRyzen 7 5800XRyzen 9 5900XCore i9 12900K246810Min: 1.59 / Avg: 1.62 / Max: 1.66Min: 2.46 / Avg: 2.48 / Max: 2.5Min: 2.82 / Avg: 2.84 / Max: 2.86Min: 1.97 / Avg: 2 / Max: 2Min: 2.22 / Avg: 2.22 / Max: 2.231. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.6Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPURyzen 9 5950XRyzen 7 5800X3DRyzen 7 5800XRyzen 9 5900XCore i9 12900K0.31660.63320.94981.26641.583SE +/- 0.006926, N = 15SE +/- 0.001247, N = 4SE +/- 0.000645, N = 4SE +/- 0.007658, N = 4SE +/- 0.008609, N = 40.8230091.2378501.4072801.0069411.052000-lpthread - MIN: 0.7-lpthread - MIN: 1.21-lpthread - MIN: 1.39-lpthread - MIN: 0.94MIN: 1.011. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.6Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPURyzen 9 5950XRyzen 7 5800X3DRyzen 7 5800XRyzen 9 5900XCore i9 12900K246810Min: 0.78 / Avg: 0.82 / Max: 0.86Min: 1.23 / Avg: 1.24 / Max: 1.24Min: 1.41 / Avg: 1.41 / Max: 1.41Min: 0.99 / Avg: 1.01 / Max: 1.03Min: 1.04 / Avg: 1.05 / Max: 1.071. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterTNN 0.3Target: CPU - Model: DenseNetRyzen 9 5950XRyzen 7 5800X3DRyzen 7 5800XRyzen 9 5900XCore i9 12900K6001200180024003000SE +/- 10.08, N = 3SE +/- 1.54, N = 3SE +/- 0.87, N = 3SE +/- 3.11, N = 3SE +/- 1.88, N = 32425.592609.843016.482563.761792.97MIN: 2339.59 / MAX: 2518.62MIN: 2559.5 / MAX: 2657.73MIN: 2943.1 / MAX: 3087.97MIN: 2481.34 / MAX: 2640.2MIN: 1751.55 / MAX: 1868.311. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl
OpenBenchmarking.orgms, Fewer Is BetterTNN 0.3Target: CPU - Model: DenseNetRyzen 9 5950XRyzen 7 5800X3DRyzen 7 5800XRyzen 9 5900XCore i9 12900K5001000150020002500Min: 2407.33 / Avg: 2425.59 / Max: 2442.12Min: 2608.05 / Avg: 2609.84 / Max: 2612.92Min: 3015.46 / Avg: 3016.48 / Max: 3018.22Min: 2559.07 / Avg: 2563.76 / Max: 2569.64Min: 1789.29 / Avg: 1792.97 / Max: 1795.481. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

ECP-CANDLE

The CANDLE benchmark codes implement deep learning architectures relevant to problems in cancer. These architectures address problems at different biological scales, specifically the molecular, cellular, and population scales. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterECP-CANDLE 0.4Benchmark: P1B2Ryzen 9 5950XRyzen 7 5800X3DRyzen 7 5800XRyzen 9 5900XCore i9 12900K81624324031.3929.9934.8430.2420.94

Mobile Neural Network

MNN is the Mobile Neural Network, a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 1.2Model: squeezenetv1.1Ryzen 9 5950XRyzen 7 5800X3DRyzen 7 5800XRyzen 9 5900XCore i9 12900K0.89731.79462.69193.58924.4865SE +/- 0.098, N = 3SE +/- 0.006, N = 3SE +/- 0.017, N = 3SE +/- 0.098, N = 3SE +/- 0.049, N = 33.9882.5902.8053.3942.405MIN: 3.72 / MAX: 4.79MIN: 2.55 / MAX: 4.48MIN: 2.76 / MAX: 10.42MIN: 3.15 / MAX: 4.21MIN: 2.33 / MAX: 3.561. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 1.2Model: squeezenetv1.1Ryzen 9 5950XRyzen 7 5800X3DRyzen 7 5800XRyzen 9 5900XCore i9 12900K246810Min: 3.79 / Avg: 3.99 / Max: 4.09Min: 2.58 / Avg: 2.59 / Max: 2.6Min: 2.79 / Avg: 2.8 / Max: 2.84Min: 3.2 / Avg: 3.39 / Max: 3.51Min: 2.35 / Avg: 2.41 / Max: 2.51. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.11Model: fcn-resnet101-11 - Device: CPU - Executor: ParallelRyzen 9 5950XRyzen 7 5800X3DRyzen 7 5800XRyzen 9 5900XCore i9 12900K20406080100SE +/- 0.33, N = 3SE +/- 0.17, N = 3SE +/- 0.29, N = 3SE +/- 0.00, N = 3SE +/- 0.44, N = 3887267851111. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt
OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.11Model: fcn-resnet101-11 - Device: CPU - Executor: ParallelRyzen 9 5950XRyzen 7 5800X3DRyzen 7 5800XRyzen 9 5900XCore i9 12900K20406080100Min: 87.5 / Avg: 88.17 / Max: 88.5Min: 72 / Avg: 72.17 / Max: 72.5Min: 66 / Avg: 66.5 / Max: 67Min: 85 / Avg: 85 / Max: 85Min: 110 / Avg: 110.83 / Max: 111.51. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterTNN 0.3Target: CPU - Model: SqueezeNet v2Ryzen 9 5950XRyzen 7 5800X3DRyzen 7 5800XRyzen 9 5900XCore i9 12900K1428425670SE +/- 0.08, N = 9SE +/- 0.13, N = 9SE +/- 0.12, N = 8SE +/- 0.22, N = 9SE +/- 0.08, N = 1050.9453.2263.6150.8538.92MIN: 50.34 / MAX: 52.46MIN: 52.43 / MAX: 54.16MIN: 62.79 / MAX: 64.28MIN: 49.98 / MAX: 52.59MIN: 38.37 / MAX: 39.921. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl
OpenBenchmarking.orgms, Fewer Is BetterTNN 0.3Target: CPU - Model: SqueezeNet v2Ryzen 9 5950XRyzen 7 5800X3DRyzen 7 5800XRyzen 9 5900XCore i9 12900K1224364860Min: 50.44 / Avg: 50.94 / Max: 51.2Min: 52.52 / Avg: 53.22 / Max: 53.96Min: 62.97 / Avg: 63.61 / Max: 64.13Min: 50.17 / Avg: 50.85 / Max: 52.34Min: 38.72 / Avg: 38.91 / Max: 39.591. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.11Model: GPT-2 - Device: CPU - Executor: StandardRyzen 9 5950XRyzen 7 5800X3DRyzen 7 5800XRyzen 9 5900XCore i9 12900K2K4K6K8K10KSE +/- 58.87, N = 8SE +/- 14.95, N = 3SE +/- 104.18, N = 12SE +/- 10.85, N = 3SE +/- 20.71, N = 37062882668327862110821. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt
OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.11Model: GPT-2 - Device: CPU - Executor: StandardRyzen 9 5950XRyzen 7 5800X3DRyzen 7 5800XRyzen 9 5900XCore i9 12900K2K4K6K8K10KMin: 6960.5 / Avg: 7062.31 / Max: 7471.5Min: 8798.5 / Avg: 8825.83 / Max: 8850Min: 6188.5 / Avg: 6832.42 / Max: 7318.5Min: 7841 / Avg: 7861.83 / Max: 7877.5Min: 11041.5 / Avg: 11081.67 / Max: 11110.51. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterTNN 0.3Target: CPU - Model: MobileNet v2Ryzen 9 5950XRyzen 7 5800X3DRyzen 7 5800XRyzen 9 5900XCore i9 12900K60120180240300SE +/- 0.71, N = 4SE +/- 0.14, N = 4SE +/- 0.18, N = 3SE +/- 0.47, N = 4SE +/- 0.31, N = 4224.26233.18272.04224.20170.58MIN: 219.36 / MAX: 242.57MIN: 232.19 / MAX: 237.22MIN: 270.94 / MAX: 276.3MIN: 218.68 / MAX: 249.25MIN: 157.87 / MAX: 209.341. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl
OpenBenchmarking.orgms, Fewer Is BetterTNN 0.3Target: CPU - Model: MobileNet v2Ryzen 9 5950XRyzen 7 5800X3DRyzen 7 5800XRyzen 9 5900XCore i9 12900K50100150200250Min: 222.21 / Avg: 224.26 / Max: 225.41Min: 232.78 / Avg: 233.18 / Max: 233.43Min: 271.83 / Avg: 272.04 / Max: 272.4Min: 222.79 / Avg: 224.2 / Max: 224.82Min: 169.95 / Avg: 170.58 / Max: 171.321. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.11Model: super-resolution-10 - Device: CPU - Executor: ParallelRyzen 9 5950XRyzen 7 5800X3DRyzen 7 5800XRyzen 9 5900XCore i9 12900K14002800420056007000SE +/- 22.98, N = 3SE +/- 15.67, N = 3SE +/- 10.11, N = 3SE +/- 5.61, N = 3SE +/- 46.82, N = 4643646064103559145161. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt
OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.11Model: super-resolution-10 - Device: CPU - Executor: ParallelRyzen 9 5950XRyzen 7 5800X3DRyzen 7 5800XRyzen 9 5900XCore i9 12900K11002200330044005500Min: 6395.5 / Avg: 6436.33 / Max: 6475Min: 4575 / Avg: 4606 / Max: 4625.5Min: 4087.5 / Avg: 4103 / Max: 4122Min: 5583 / Avg: 5591.33 / Max: 5602Min: 4409.5 / Avg: 4515.5 / Max: 46371. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU - Model: alexnetRyzen 9 5950XRyzen 7 5800X3DRyzen 7 5800XRyzen 9 5900XCore i9 12900K3691215SE +/- 0.13, N = 4SE +/- 0.03, N = 15SE +/- 0.01, N = 15SE +/- 0.01, N = 3SE +/- 0.05, N = 1511.089.6211.8310.017.58MIN: 10.7 / MAX: 12.6MIN: 8.9 / MAX: 11.14MIN: 11.63 / MAX: 18.46MIN: 9.92 / MAX: 12.23MIN: 7.18 / MAX: 9.341. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU - Model: alexnetRyzen 9 5950XRyzen 7 5800X3DRyzen 7 5800XRyzen 9 5900XCore i9 12900K3691215Min: 10.92 / Avg: 11.08 / Max: 11.46Min: 9.38 / Avg: 9.62 / Max: 9.8Min: 11.75 / Avg: 11.83 / Max: 11.93Min: 10 / Avg: 10.01 / Max: 10.03Min: 7.23 / Avg: 7.58 / Max: 7.771. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU
ms, Fewer Is Better

  Processor          Avg        SE +/-    N    Min     Max     Sample MIN
  Ryzen 9 5950X      3.89647    0.01755   4    3.87    3.95    3.66 [2]
  Ryzen 7 5800X3D    2.89717    0.00793   4    2.88    2.91    2.81 [2]
  Ryzen 7 5800X      3.29136    0.00733   4    3.27    3.31    3.12 [2]
  Ryzen 9 5900X      3.44548    0.02871   4    3.37    3.50    3.05 [2]
  Core i9 12900K     2.63611    0.00329   4    2.63    2.64    2.50

  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl
  2. Additionally linked with -lpthread.

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: GPT-2 - Device: CPU - Executor: Parallel
Inferences Per Minute, More Is Better

  Processor          Avg        SE +/-   N    Min       Max
  Ryzen 9 5950X      5533.50    11.18    3    5515.5    5554.0
  Ryzen 7 5800X3D    6919.33     5.78    3    6909.5    6929.5
  Ryzen 7 5800X      5620.83     8.85    3    5606.5    5637.0
  Ryzen 9 5900X      5686.83    11.00    3    5667.0    5705.0
  Core i9 12900K     8087.67    20.34    3    8047.0    8108.5

  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20210720 - Target: CPU - Model: resnet50
ms, Fewer Is Better

  Processor          Avg      SE +/-   N    Min      Max      Sample MIN / MAX
  Ryzen 9 5950X      24.40    0.34      4   24.03    25.43    23.71 / 27.09
  Ryzen 7 5800X3D    18.13    0.09     15   17.80    18.76    17.64 / 24.72
  Ryzen 7 5800X      20.22    0.07     15   19.91    20.55    19.74 / 28.22
  Ryzen 9 5900X      21.21    0.04      3   21.14    21.25    20.94 / 23.21
  Core i9 12900K     16.84    0.07     15   16.50    17.57    16.32 / 21.73

  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Caffe

This is a benchmark of the Caffe deep learning framework; it currently supports the AlexNet and GoogleNet models with execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.

Caffe 2020-02-13 - Model: GoogleNet - Acceleration: CPU - Iterations: 200
Milli-Seconds, Fewer Is Better

  Processor          Avg          SE +/-    N    Min       Max
  Ryzen 9 5950X      193710.00    215.28    3    193357    194100
  Ryzen 7 5800X3D    162021.67    248.50    3    161549    162391
  Ryzen 7 5800X      179146.00     39.00    3    179068    179186
  Ryzen 9 5900X      178725.00     46.44    3    178633    178782
  Core i9 12900K     134045.00    116.45    3    133827    134225

  1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lcrypto -lcurl -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

Open Porous Media Git

This is a test of Open Porous Media, a set of open-source tools for simulating the flow and transport of fluids in porous media. This test profile builds OPM and its dependencies from upstream Git. Learn more via the OpenBenchmarking.org test page.

Open Porous Media Git - OPM Benchmark: Flow MPI Norne - Threads: 8
Seconds, Fewer Is Better

  Processor          Avg       SE +/-   N    Min       Max
  Ryzen 9 5950X      357.86    0.16     3    357.65    358.17
  Ryzen 7 5800X3D    361.91    0.25     3    361.58    362.40
  Ryzen 7 5800X      447.63    0.04     3    447.56    447.68
  Ryzen 9 5900X      361.84    0.23     3    361.44    362.22
  Core i9 12900K     515.71    0.26     3    515.31    516.19

  1. (CXX) g++ options: -pipe -pthread -fopenmp -O3 -mtune=native -UNDEBUG -lm -ldl -lrt
  2. Build time (Ryzen systems): Mon Apr 25 06:10:54 PM EDT 2022
  3. Build time (Core i9 12900K): Thu Apr 28 06:45:36 PM EDT 2022

Open Porous Media Git - OPM Benchmark: Flow MPI Norne-4C MSW - Threads: 8
Seconds, Fewer Is Better

  Processor          Avg       SE +/-   N    Min       Max
  Ryzen 9 5950X      575.13    0.32     3    574.60    575.69
  Ryzen 7 5800X3D    581.46    0.28     3    581.09    582.02
  Ryzen 7 5800X      733.91    0.32     3    733.51    734.54
  Ryzen 9 5900X      581.35    0.16     3    581.02    581.52
  Core i9 12900K     825.56    0.65     3    824.42    826.66

  1. (CXX) g++ options: -pipe -pthread -fopenmp -O3 -mtune=native -UNDEBUG -lm -ldl -lrt

Caffe

This is a benchmark of the Caffe deep learning framework; it currently supports the AlexNet and GoogleNet models with execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.

Caffe 2020-02-13 - Model: AlexNet - Acceleration: CPU - Iterations: 100
Milli-Seconds, Fewer Is Better

  Processor          Avg         SE +/-   N    Min      Max
  Ryzen 9 5950X      36658.33    30.55    3    36602    36707
  Ryzen 7 5800X3D    30221.67    24.84    3    30172    30247
  Ryzen 7 5800X      33905.33     7.06    3    33892    33916
  Ryzen 9 5900X      34329.33    38.11    3    34256    34384
  Core i9 12900K     25590.33    17.32    3    25560    25620

  1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lcrypto -lcurl -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

Caffe 2020-02-13 - Model: GoogleNet - Acceleration: CPU - Iterations: 100
Milli-Seconds, Fewer Is Better

  Processor          Avg         SE +/-    N    Min      Max
  Ryzen 9 5950X      96624.33     34.42    3    96567    96686
  Ryzen 7 5800X3D    80861.67    158.22    3    80609    81153
  Ryzen 7 5800X      89825.33     34.64    3    89783    89894
  Ryzen 9 5900X      89492.00    260.55    3    89192    90011
  Core i9 12900K     67832.75    741.46    4    66887    70045

  1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lcrypto -lcurl -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

Caffe 2020-02-13 - Model: AlexNet - Acceleration: CPU - Iterations: 200
Milli-Seconds, Fewer Is Better

  Processor          Avg         SE +/-    N    Min      Max
  Ryzen 9 5950X      73351.67    178.17    3    73013    73617
  Ryzen 7 5800X3D    60494.00     94.00    3    60347    60669
  Ryzen 7 5800X      67661.00     49.12    3    67599    67758
  Ryzen 9 5900X      68676.33     57.49    3    68579    68778
  Core i9 12900K     51712.67    323.04    3    51115    52224

  1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lcrypto -lcurl -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas
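Comparing two CPUs across the four Caffe configurations is essentially what the "Show Overall Geometric Mean" view option does: take the per-test ratio and average it in log space so no single test dominates. A sketch using the 5800X3D and 5950X averages from the Caffe results above (stdlib only):

```python
import math

# Average run times in ms (fewer is better), from the Caffe results above.
r5950x = {"googlenet-200": 193710, "alexnet-100": 36658,
          "googlenet-100": 96624, "alexnet-200": 73352}
r5800x3d = {"googlenet-200": 162022, "alexnet-100": 30222,
            "googlenet-100": 80862, "alexnet-200": 60494}

# Geometric mean of per-test ratios: exp of the mean of the log-ratios.
ratios = [r5800x3d[t] / r5950x[t] for t in r5950x]
geo = math.exp(sum(map(math.log, ratios)) / len(ratios))

print(round(geo, 3))  # ~0.83: the 5800X3D needs about 17% less time on average
```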

Numpy Benchmark

This is a test to gauge general NumPy performance. Learn more via the OpenBenchmarking.org test page.

Numpy Benchmark
Score, More Is Better

  Processor          Avg       SE +/-   N    Min       Max
  Ryzen 9 5950X      591.54    0.86     3    589.93    592.89
  Ryzen 7 5800X3D    603.09    1.21     3    601.11    605.28
  Ryzen 7 5800X      491.56    0.43     3    491.06    492.42
  Ryzen 9 5900X      605.17    8.26     3    588.65    613.76
  Core i9 12900K     665.34    0.68     3    664.14    666.50
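The Numpy Benchmark score aggregates a set of array micro-kernels. A rough, hypothetical sketch of the kind of operation such suites time (assumes NumPy is installed; this is not the benchmark's actual kernel set):

```python
import time

import numpy as np

# Time a dense matrix multiply, a classic kernel in NumPy benchmark suites.
rng = np.random.default_rng(0)
a = rng.standard_normal((512, 512))
b = rng.standard_normal((512, 512))

start = time.perf_counter()
c = a @ b
elapsed = time.perf_counter() - start

print(c.shape, f"{elapsed * 1000:.2f} ms")
```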

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20210720 - Target: CPU - Model: squeezenet_ssd
ms, Fewer Is Better

  Processor          Avg      SE +/-   N    Min      Max      Sample MIN / MAX
  Ryzen 9 5950X      14.55    0.09      4   14.39    14.77    13.65 / 21.26
  Ryzen 7 5800X3D    12.44    0.05     15   12.28    13.02    12.03 / 14.14
  Ryzen 7 5800X      16.76    0.06     15   16.49    17.25    16.11 / 23.01
  Ryzen 9 5900X      13.51    0.02      3   13.49    13.54    13.16 / 20.71
  Core i9 12900K     13.27    0.17     15   12.29    14.28    12.19 / 43.40

  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU
ms, Fewer Is Better

  Processor          Avg        SE +/-   N    Min        Max        Sample MIN
  Ryzen 9 5950X      1814.13    8.84     3    1796.48    1823.75    1783.13 [2]
  Ryzen 7 5800X3D    1382.16    2.31     3    1378.10    1386.11    1372.35 [2]
  Ryzen 7 5800X      1859.14    5.60     3    1853.31    1870.33    1846.97 [2]
  Ryzen 9 5900X      1785.05    8.68     3    1772.21    1801.59    1762.05 [2]
  Core i9 12900K     1613.81    0.28     3    1613.31    1614.28    1608.23

  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl
  2. Additionally linked with -lpthread.

oneDNN 2.6 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU
ms, Fewer Is Better

  Processor          Avg        SE +/-   N    Min        Max        Sample MIN
  Ryzen 9 5950X      1783.85    22.20    3    1744.44    1821.27    1730.18 [2]
  Ryzen 7 5800X3D    1387.86     0.61    3    1386.65    1388.50    1380.90 [2]
  Ryzen 7 5800X      1864.54     4.89    3    1857.83    1874.06    1851.66 [2]
  Ryzen 9 5900X      1742.06    13.72    3    1723.06    1768.70    1715.13 [2]
  Core i9 12900K     1616.06     1.95    3    1613.49    1619.89    1608.68

  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl
  2. Additionally linked with -lpthread.

oneDNN 2.6 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU
ms, Fewer Is Better

  Processor          Avg        SE +/-   N    Min        Max        Sample MIN
  Ryzen 9 5950X      1820.40    19.76    5    1774.65    1879.60    1761.07 [2]
  Ryzen 7 5800X3D    1385.79     1.81    3    1382.27    1388.26    1375.42 [2]
  Ryzen 7 5800X      1847.70     9.79    3    1828.78    1861.54    1820.32 [2]
  Ryzen 9 5900X      1776.19     5.06    3    1766.56    1783.70    1756.45 [2]
  Core i9 12900K     1617.10     2.95    3    1613.64    1622.96    1608.60

  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl
  2. Additionally linked with -lpthread.

Open Porous Media Git

This is a test of Open Porous Media, a set of open-source tools for simulating the flow and transport of fluids in porous media. This test profile builds OPM and its dependencies from upstream Git. Learn more via the OpenBenchmarking.org test page.

Open Porous Media Git - OPM Benchmark: Flow MPI Extra - Threads: 8
Seconds, Fewer Is Better

  Processor          Avg        SE +/-   N    Min        Max
  Ryzen 9 5950X       808.05    0.30     3     807.56     808.58
  Ryzen 7 5800X3D     781.98    1.20     3     780.60     784.37
  Ryzen 7 5800X      1027.17    0.32     3    1026.65    1027.74
  Ryzen 9 5900X       810.33    0.19     3     810.04     810.68
  Core i9 12900K      891.78    0.86     3     890.30     893.29

  1. (CXX) g++ options: -pipe -pthread -fopenmp -O3 -mtune=native -UNDEBUG -lm -ldl -lrt

Open Porous Media Git - OPM Benchmark: Flow MPI Extra - Threads: 4
Seconds, Fewer Is Better

  Processor          Avg       SE +/-   N    Min       Max
  Ryzen 9 5950X      690.62    0.65     3    689.53    691.79
  Ryzen 7 5800X3D    643.96    0.34     3    643.34    644.52
  Ryzen 7 5800X      829.49    0.80     3    828.46    831.07
  Ryzen 9 5900X      691.38    0.23     3    691.00    691.78
  Core i9 12900K     671.56    0.81     3    670.19    672.99

  1. (CXX) g++ options: -pipe -pthread -fopenmp -O3 -mtune=native -UNDEBUG -lm -ldl -lrt

Open Porous Media Git - OPM Benchmark: Flow MPI Norne-4C MSW - Threads: 4
Seconds, Fewer Is Better

  Processor          Avg       SE +/-   N    Min       Max
  Ryzen 9 5950X      377.03    0.39     3    376.43    377.75
  Ryzen 7 5800X3D    357.33    0.21     3    356.93    357.66
  Ryzen 7 5800X      454.34    0.49     3    453.44    455.13
  Ryzen 9 5900X      378.33    0.78     3    376.86    379.53
  Core i9 12900K     451.00    0.53     3    450.31    452.04

  1. (CXX) g++ options: -pipe -pthread -fopenmp -O3 -mtune=native -UNDEBUG -lm -ldl -lrt

Mlpack Benchmark

Mlpack benchmark scripts for machine learning libraries. Learn more via the OpenBenchmarking.org test page.

Mlpack Benchmark - Benchmark: scikit_ica
Seconds, Fewer Is Better

  Processor          Avg      SE +/-   N    Min      Max
  Ryzen 9 5950X      41.77    0.12     3    41.59    41.99
  Ryzen 7 5800X3D    34.88    0.03     3    34.81    34.92
  Ryzen 7 5800X      39.16    0.05     3    39.09    39.25
  Ryzen 9 5900X      39.37    0.24     3    38.98    39.82
  Core i9 12900K     32.93    0.02     3    32.89    32.97

Open Porous Media Git

This is a test of Open Porous Media, a set of open-source tools for simulating the flow and transport of fluids in porous media. This test profile builds OPM and its dependencies from upstream Git. Learn more via the OpenBenchmarking.org test page.

Open Porous Media Git - OPM Benchmark: Flow MPI Norne-4C MSW - Threads: 2
Seconds, Fewer Is Better

  Processor          Avg       SE +/-   N    Min       Max
  Ryzen 9 5950X      441.07    0.66     3    439.82    442.04
  Ryzen 7 5800X3D    403.35    0.45     3    402.51    404.05
  Ryzen 7 5800X      511.31    0.29     3    510.95    511.88
  Ryzen 9 5900X      441.14    0.35     3    440.44    441.53
  Core i9 12900K     475.95    0.98     3    474.71    477.89

  1. (CXX) g++ options: -pipe -pthread -fopenmp -O3 -mtune=native -UNDEBUG -lm -ldl -lrt

Open Porous Media Git - OPM Benchmark: Flow MPI Norne-4C MSW - Threads: 1
Seconds, Fewer Is Better

  Processor          Avg       SE +/-   N    Min       Max
  Ryzen 9 5950X      569.14    2.73     3    566.17    574.59
  Ryzen 7 5800X3D    476.91    0.27     3    476.52    477.42
  Ryzen 7 5800X      601.31    1.44     3    598.53    603.37
  Ryzen 9 5900X      572.60    0.96     3    570.70    573.81
  Core i9 12900K     537.38    1.75     3    533.96    539.71

  1. (CXX) g++ options: -pipe -pthread -fopenmp -O3 -mtune=native -UNDEBUG -lm -ldl -lrt

Open Porous Media Git - OPM Benchmark: Flow MPI Norne - Threads: 1
Seconds, Fewer Is Better

  Processor          Avg       SE +/-   N    Min       Max
  Ryzen 9 5950X      270.81    1.36     3    268.98    273.47
  Ryzen 7 5800X3D    224.10    0.10     3    223.91    224.21
  Ryzen 7 5800X      281.47    0.17     3    281.28    281.80
  Ryzen 9 5900X      273.89    0.23     3    273.65    274.34
  Core i9 12900K     247.89    0.07     3    247.75    247.97

  1. (CXX) g++ options: -pipe -pthread -fopenmp -O3 -mtune=native -UNDEBUG -lm -ldl -lrt

Open Porous Media Git - OPM Benchmark: Flow MPI Norne - Threads: 4
Seconds, Fewer Is Better

  Processor          Avg       SE +/-   N    Min       Max
  Ryzen 9 5950X      232.61    0.31     3    232.17    233.21
  Ryzen 7 5800X3D    223.97    0.14     3    223.70    224.16
  Ryzen 7 5800X      275.40    0.16     3    275.08    275.59
  Ryzen 9 5900X      233.63    0.18     3    233.33    233.94
  Core i9 12900K     280.85    0.28     3    280.33    281.28

  1. (CXX) g++ options: -pipe -pthread -fopenmp -O3 -mtune=native -UNDEBUG -lm -ldl -lrt

Open Porous Media Git - OPM Benchmark: Flow MPI Norne - Threads: 2
Seconds, Fewer Is Better

  Processor          Avg       SE +/-   N    Min       Max
  Ryzen 9 5950X      203.89    0.42     3    203.25    204.68
  Ryzen 7 5800X3D    187.49    0.14     3    187.24    187.73
  Ryzen 7 5800X      234.63    0.53     3    233.87    235.64
  Ryzen 9 5900X      203.98    0.16     3    203.71    204.27
  Core i9 12900K     223.20    0.42     3    222.45    223.89

  1. (CXX) g++ options: -pipe -pthread -fopenmp -O3 -mtune=native -UNDEBUG -lm -ldl -lrt
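The thread scaling across the Flow MPI Norne results is worth noting: on the 5800X3D the 2-thread run is the fastest, with 4 and especially 8 threads regressing. Parallel speedup and efficiency computed from the 5800X3D averages in the results above (stdlib only):

```python
# Average Flow MPI Norne run times in seconds for the Ryzen 7 5800X3D,
# taken from the Threads: 1/2/4/8 results above.
times = {1: 224.10, 2: 187.49, 4: 223.97, 8: 361.91}

for n, t in times.items():
    speedup = times[1] / t          # relative to the single-thread run
    efficiency = speedup / n        # speedup per thread used
    print(f"{n} threads: speedup {speedup:.2f}x, efficiency {efficiency:.0%}")

# Speedup peaks at 2 threads (~1.20x) and drops well below 1x at 8 threads.
```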

Open Porous Media Git - OPM Benchmark: Flow MPI Extra - Threads: 2
Seconds, Fewer Is Better

  Processor          Avg       SE +/-   N    Min       Max
  Ryzen 9 5950X      712.62    1.57     3    709.52    714.59
  Ryzen 7 5800X3D    672.48    0.69     3    671.35    673.72
  Ryzen 7 5800X      799.40    2.38     3    796.29    804.07
  Ryzen 9 5900X      713.34    1.56     3    710.95    716.26
  Core i9 12900K     717.04    0.67     3    715.71    717.86

  1. (CXX) g++ options: -pipe -pthread -fopenmp -O3 -mtune=native -UNDEBUG -lm -ldl -lrt

Open Porous Media Git - OPM Benchmark: Flow MPI Extra - Threads: 1
Seconds, Fewer Is Better

  Processor          Avg        SE +/-   N    Min        Max
  Ryzen 9 5950X      1074.33    3.75     3    1067.24    1079.97
  Ryzen 7 5800X3D     997.34    2.03     3     993.48    1000.37
  Ryzen 7 5800X      1154.09    5.46     3    1144.54    1163.46
  Ryzen 9 5900X      1073.71    4.75     3    1065.79    1082.21
  Core i9 12900K     1053.38    2.12     3    1049.43    1056.70

  1. (CXX) g++ options: -pipe -pthread -fopenmp -O3 -mtune=native -UNDEBUG -lm -ldl -lrt

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total benchmark time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and before that MKL-DNN, prior to being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better)
  Ryzen 9 5950X:   Avg 2750.18, SE +/- 26.41, N = 3 (Min 2702.42 / Max 2793.58; MIN: 2685.91)
  Ryzen 7 5800X3D: Avg 2691.24, SE +/- 2.34, N = 3 (Min 2686.58 / Max 2693.99; MIN: 2678.39)
  Ryzen 7 5800X:   Avg 3065.26, SE +/- 1.56, N = 3 (Min 3063.55 / Max 3068.38; MIN: 3058.73)
  Ryzen 9 5900X:   Avg 2904.33, SE +/- 32.41, N = 3 (Min 2851.31 / Max 2963.12; MIN: 2837.45)
  Core i9 12900K:  Avg 2881.32, SE +/- 0.76, N = 3 (Min 2879.97 / Max 2882.61; MIN: 2872.06)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl
  2. The Ryzen builds were additionally linked with -lpthread.

oneDNN 2.6 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  Ryzen 9 5950X:   Avg 2703.33, SE +/- 10.92, N = 3 (Min 2687.33 / Max 2724.21; MIN: 2665.5)
  Ryzen 7 5800X3D: Avg 2683.16, SE +/- 5.12, N = 3 (Min 2673.66 / Max 2691.21; MIN: 2665.66)
  Ryzen 7 5800X:   Avg 3053.30, SE +/- 2.27, N = 3 (Min 3048.75 / Max 3055.58; MIN: 3045.76)
  Ryzen 9 5900X:   Avg 2873.45, SE +/- 23.67, N = 3 (Min 2840.46 / Max 2919.36; MIN: 2830.55)
  Core i9 12900K:  Avg 2881.72, SE +/- 0.27, N = 3 (Min 2881.25 / Max 2882.2; MIN: 2874.65)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl
  2. The Ryzen builds were additionally linked with -lpthread.

oneDNN 2.6 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
  Ryzen 9 5950X:   Avg 2745.44, SE +/- 23.91, N = 3 (Min 2698.66 / Max 2777.44; MIN: 2684.63)
  Ryzen 7 5800X3D: Avg 2691.67, SE +/- 1.77, N = 3 (Min 2688.56 / Max 2694.68; MIN: 2680.17)
  Ryzen 7 5800X:   Avg 3055.84, SE +/- 0.89, N = 3 (Min 3054.12 / Max 3057.07; MIN: 3050.91)
  Ryzen 9 5900X:   Avg 2908.28, SE +/- 15.10, N = 3 (Min 2878.53 / Max 2927.68; MIN: 2862.45)
  Core i9 12900K:  Avg 2881.10, SE +/- 1.61, N = 3 (Min 2878.15 / Max 2883.69; MIN: 2869.73)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl
  2. The Ryzen builds were additionally linked with -lpthread.

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.2 - Model: inception-v3 (ms, Fewer Is Better)
  Ryzen 9 5950X:   Avg 25.73, SE +/- 0.13, N = 3 (Min 25.47 / Max 25.86; MIN: 25.09 / MAX: 33.94)
  Ryzen 7 5800X3D: Avg 23.19, SE +/- 0.09, N = 3 (Min 23.08 / Max 23.37; MIN: 22.93 / MAX: 31)
  Ryzen 7 5800X:   Avg 25.69, SE +/- 0.09, N = 3 (Min 25.53 / Max 25.84; MIN: 25.45 / MAX: 31.44)
  Ryzen 9 5900X:   Avg 23.69, SE +/- 0.37, N = 3 (Min 23.26 / Max 24.42; MIN: 22.99 / MAX: 31.36)
  Core i9 12900K:  Avg 24.30, SE +/- 0.66, N = 3 (Min 23.05 / Max 25.3; MIN: 22.9 / MAX: 36.09)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

CPU Peak Freq (Highest CPU Core Frequency) Monitor

CPU Peak Freq (Highest CPU Core Frequency) Monitor - Phoronix Test Suite System Monitoring (Megahertz)
  Ryzen 9 5950X:   Min 2199 / Avg 4402.82 / Max 5050
  Ryzen 7 5800X3D: Min 2193 / Avg 4303.7 / Max 4723
  Ryzen 7 5800X:   Min 3789 / Avg 3795.04 / Max 4564
  Ryzen 9 5900X:   Min 2199 / Avg 4402.92 / Max 5179
  Core i9 12900K:  Min 800 / Avg 4663.3 / Max 5219

ONNX Runtime

CPU Peak Freq (Highest CPU Core Frequency) Monitor during ONNX Runtime 1.11 (Megahertz)
  Ryzen 9 5950X:   Min 3597 / Avg 4280 / Max 4942
  Ryzen 7 5800X3D: Min 2195 / Avg 4256 / Max 4534
  Ryzen 7 5800X:   Min 3791 / Avg 3795 / Max 3816
  Ryzen 9 5900X:   Min 2199 / Avg 4450 / Max 4906
  Core i9 12900K:  Min 801 / Avg 4799 / Max 5200

ONNX Runtime 1.11 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard (Inferences Per Minute, More Is Better)
  Ryzen 9 5950X:   Avg 1667.72, SE +/- 46.80, N = 9 (Min 1511.5 / Max 2018)
  Ryzen 7 5800X3D: Avg 1939.17, SE +/- 3.42, N = 3 (Min 1935.5 / Max 1946)
  Ryzen 7 5800X:   Avg 1041.83, SE +/- 1.33, N = 3 (Min 1040.5 / Max 1044.5)
  Ryzen 9 5900X:   Avg 1518.92, SE +/- 45.67, N = 12 (Min 1459.5 / Max 2020.5)
  Core i9 12900K:  Avg 1974.67, SE +/- 0.67, N = 3 (Min 1974 / Max 1976)
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

CPU Peak Freq (Highest CPU Core Frequency) Monitor during ONNX Runtime 1.11 (Megahertz)
  Ryzen 9 5950X:   Min 2199 / Avg 4089 / Max 4769
  Ryzen 7 5800X3D: Min 2195 / Avg 4239 / Max 4541
  Ryzen 7 5800X:   Min 3791 / Avg 3795 / Max 3824
  Ryzen 9 5900X:   Min 2199 / Avg 4304 / Max 4771
  Core i9 12900K:  Min 801 / Avg 4788 / Max 5100

ONNX Runtime 1.11 - Model: bertsquad-12 - Device: CPU - Executor: Standard (Inferences Per Minute, More Is Better)
  Ryzen 9 5950X:   Avg 799.83, SE +/- 1.20, N = 3 (Min 797.5 / Max 801.5)
  Ryzen 7 5800X3D: Avg 822, SE +/- 57.02, N = 12 (Min 595.5 / Max 1005.5)
  Ryzen 7 5800X:   Avg 556.67, SE +/- 0.17, N = 3 (Min 556.5 / Max 557)
  Ryzen 9 5900X:   Avg 948.67, SE +/- 4.80, N = 3 (Min 941 / Max 957.5)
  Core i9 12900K:  Avg 988, SE +/- 0.76, N = 3 (Min 986.5 / Max 989)
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

CPU Peak Freq (Highest CPU Core Frequency) Monitor during ONNX Runtime 1.11 (Megahertz)
  Ryzen 9 5950X:   Min 2199 / Avg 4053 / Max 4444
  Ryzen 7 5800X3D: Min 2196 / Avg 4290 / Max 4384
  Ryzen 7 5800X:   Min 3792 / Avg 3796 / Max 3818
  Ryzen 9 5900X:   Min 2199 / Avg 4376 / Max 4658
  Core i9 12900K:  Min 801 / Avg 4771 / Max 4919

ONNX Runtime 1.11 - Model: super-resolution-10 - Device: CPU - Executor: Standard (Inferences Per Minute, More Is Better)
  Ryzen 9 5950X:   Avg 6168.83, SE +/- 21.53, N = 3 (Min 6142.5 / Max 6211.5)
  Ryzen 7 5800X3D: Avg 4107.33, SE +/- 11.77, N = 3 (Min 4090.5 / Max 4130)
  Ryzen 7 5800X:   Avg 3628.17, SE +/- 10.27, N = 3 (Min 3615.5 / Max 3648.5)
  Ryzen 9 5900X:   Avg 7545.46, SE +/- 318.33, N = 12 (Min 5411.5 / Max 8560)
  Core i9 12900K:  Avg 4747.17, SE +/- 10.33, N = 3 (Min 4726.5 / Max 4757.5)
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

CPU Peak Freq (Highest CPU Core Frequency) Monitor during ONNX Runtime 1.11 (Megahertz)
  Ryzen 9 5950X:   Min 2199 / Avg 4457 / Max 4746
  Ryzen 7 5800X3D: Min 2194 / Avg 4240 / Max 4540
  Ryzen 7 5800X:   Min 3791 / Avg 3796 / Max 3881
  Ryzen 9 5900X:   Min 2199 / Avg 4399 / Max 4698
  Core i9 12900K:  Min 800 / Avg 4773 / Max 5109

ONNX Runtime 1.11 - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard (Inferences Per Minute, More Is Better)
  Ryzen 9 5950X:   Avg 98.04, SE +/- 4.73, N = 12 (Min 87 / Max 127)
  Ryzen 7 5800X3D: Avg 106.67, SE +/- 0.17, N = 3 (Min 106.5 / Max 107)
  Ryzen 7 5800X:   Avg 49, SE +/- 0.00, N = 3 (Min 49 / Max 49)
  Ryzen 9 5900X:   Avg 115.33, SE +/- 0.44, N = 3 (Min 114.5 / Max 116)
  Core i9 12900K:  Avg 100.5, SE +/- 0.00, N = 3 (Min 100.5 / Max 100.5)
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

CPU Peak Freq (Highest CPU Core Frequency) Monitor during ONNX Runtime 1.11 (Megahertz)
  Ryzen 9 5950X:   Min 2201 / Avg 4396 / Max 4872
  Ryzen 7 5800X3D: Min 2195 / Avg 4264 / Max 4541
  Ryzen 7 5800X:   Min 3791 / Avg 3795 / Max 3877
  Ryzen 9 5900X:   Min 2199 / Avg 4456 / Max 4868
  Core i9 12900K:  Min 801 / Avg 4781 / Max 5194

ONNX Runtime 1.11 - Model: yolov4 - Device: CPU - Executor: Standard (Inferences Per Minute, More Is Better)
  Ryzen 9 5950X:   Avg 486.94, SE +/- 17.40, N = 9 (Min 457 / Max 622)
  Ryzen 7 5800X3D: Avg 572.13, SE +/- 42.62, N = 12 (Min 429 / Max 718.5)
  Ryzen 7 5800X:   Avg 430.92, SE +/- 28.87, N = 12 (Min 352.5 / Max 563)
  Ryzen 9 5900X:   Avg 539.08, SE +/- 17.59, N = 12 (Min 438 / Max 601)
  Core i9 12900K:  Avg 692.83, SE +/- 1.42, N = 3 (Min 690 / Max 694.5)
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt
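Because a per-test CPU peak-frequency monitor accompanies each ONNX Runtime result, rough perf-per-clock figures (as offered in the graph view options) can be derived by hand. A sketch using the yolov4 averages and the average peak frequencies recorded during that run:

```python
# Average yolov4 inferences per minute and average peak MHz during that run,
# taken from the two tables above (headline values).
yolov4_ipm = {
    "Ryzen 9 5950X": 487, "Ryzen 7 5800X3D": 572, "Ryzen 7 5800X": 431,
    "Ryzen 9 5900X": 539, "Core i9 12900K": 693,
}
avg_peak_mhz = {
    "Ryzen 9 5950X": 4396, "Ryzen 7 5800X3D": 4264, "Ryzen 7 5800X": 3795,
    "Ryzen 9 5900X": 4456, "Core i9 12900K": 4781,
}

# Inferences per minute per GHz of peak clock. This is only a rough
# normalization: peak frequency is not what every core sustained.
perf_per_ghz = {cpu: round(ipm / (avg_peak_mhz[cpu] / 1000), 1)
                for cpu, ipm in yolov4_ipm.items()}
print(perf_per_ghz)
```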

Mlpack Benchmark

CPU Peak Freq (Highest CPU Core Frequency) Monitor during Mlpack Benchmark (Megahertz)
  Ryzen 9 5950X:   Min 3598 / Avg 4556 / Max 4807
  Ryzen 7 5800X3D: Min 2195 / Avg 4380 / Max 4545
  Ryzen 7 5800X:   Min 3791 / Avg 3795 / Max 3860
  Ryzen 9 5900X:   Min 2199 / Avg 4602 / Max 4907
  Core i9 12900K:  Min 800 / Avg 4806 / Max 5200

Mlpack Benchmark - Benchmark: scikit_linearridgeregression (Seconds, Fewer Is Better)
  Ryzen 9 5950X:   Avg 1.69, SE +/- 0.01, N = 3 (Min 1.67 / Max 1.7)
  Ryzen 7 5800X3D: Avg 1.63, SE +/- 0.00, N = 3 (Min 1.63 / Max 1.64)
  Ryzen 7 5800X:   Avg 1.82, SE +/- 0.02, N = 7 (Min 1.73 / Max 1.85)
  Ryzen 9 5900X:   Avg 1.58, SE +/- 0.00, N = 3 (Min 1.58 / Max 1.58)
  Core i9 12900K:  Avg 1.60, SE +/- 0.05, N = 12 (Min 1.42 / Max 2.08)

Mobile Neural Network


Mobile Neural Network 1.2 - Model: SqueezeNetV1.0 (ms, Fewer Is Better)
  Ryzen 9 5950X:   Avg 5.231, SE +/- 0.113, N = 3 (Min 5.01 / Max 5.36; MIN: 4.95 / MAX: 6.51)
  Ryzen 7 5800X3D: Avg 4.213, SE +/- 0.010, N = 3 (Min 4.19 / Max 4.23; MIN: 4.16 / MAX: 5.46)
  Ryzen 7 5800X:   Avg 4.536, SE +/- 0.032, N = 3 (Min 4.5 / Max 4.6; MIN: 4.47 / MAX: 5.71)
  Ryzen 9 5900X:   Avg 4.758, SE +/- 0.058, N = 3 (Min 4.65 / Max 4.84; MIN: 4.59 / MAX: 12.51)
  Core i9 12900K:  Avg 4.151, SE +/- 0.177, N = 3 (Min 3.93 / Max 4.5; MIN: 3.91 / MAX: 6.32)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 1.2 - Model: resnet-v2-50 (ms, Fewer Is Better)
  Ryzen 9 5950X:   Avg 20.52, SE +/- 0.14, N = 3 (Min 20.23 / Max 20.71; MIN: 19.84 / MAX: 24.05)
  Ryzen 7 5800X3D: Avg 16.36, SE +/- 0.10, N = 3 (Min 16.21 / Max 16.53; MIN: 16 / MAX: 24.17)
  Ryzen 7 5800X:   Avg 18.44, SE +/- 0.03, N = 3 (Min 18.39 / Max 18.5; MIN: 18.25 / MAX: 25.86)
  Ryzen 9 5900X:   Avg 24.50, SE +/- 0.14, N = 3 (Min 24.34 / Max 24.77; MIN: 23.86 / MAX: 52.44)
  Core i9 12900K:  Avg 23.09, SE +/- 1.14, N = 3 (Min 21.79 / Max 25.35; MIN: 21.7 / MAX: 30.17)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

NCNN

CPU Peak Freq (Highest CPU Core Frequency) Monitor during NCNN 20210720 (Megahertz)
  Ryzen 9 5950X:   Min 3599 / Avg 4466 / Max 4715
  Ryzen 7 5800X3D: Min 2195 / Avg 4310 / Max 4448
  Ryzen 7 5800X:   Min 3790 / Avg 3793 / Max 3824
  Ryzen 9 5900X:   Min 2199 / Avg 4393 / Max 4604
  Core i9 12900K:  Min 800 / Avg 4623 / Max 5197

NCNN 20210720 - Target: CPU - Model: regnety_400m (ms, Fewer Is Better)
  Ryzen 9 5950X:   Avg 9.61, SE +/- 0.08, N = 4 (Min 9.51 / Max 9.84; MIN: 9.37 / MAX: 11.03)
  Ryzen 7 5800X3D: Avg 5.18, SE +/- 0.01, N = 15 (Min 5.1 / Max 5.29; MIN: 5.07 / MAX: 12.17)
  Ryzen 7 5800X:   Avg 5.93, SE +/- 0.01, N = 15 (Min 5.89 / Max 5.97; MIN: 5.83 / MAX: 7.68)
  Ryzen 9 5900X:   Avg 8.43, SE +/- 0.01, N = 3 (Min 8.42 / Max 8.44; MIN: 8.36 / MAX: 8.75)
  Core i9 12900K:  Avg 7.39, SE +/- 0.22, N = 15 (Min 6.25 / Max 8.73; MIN: 6.17 / MAX: 27.59)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20210720 - Target: CPU - Model: yolov4-tiny (ms, Fewer Is Better)
  Ryzen 9 5950X:   Avg 20.49, SE +/- 0.32, N = 4 (Min 19.91 / Max 21.06; MIN: 19.6 / MAX: 21.74)
  Ryzen 7 5800X3D: Avg 14.80, SE +/- 0.19, N = 15 (Min 14.12 / Max 16.01; MIN: 14 / MAX: 16.98)
  Ryzen 7 5800X:   Avg 19.78, SE +/- 0.21, N = 15 (Min 18.73 / Max 20.75; MIN: 18.64 / MAX: 21.1)
  Ryzen 9 5900X:   Avg 20.63, SE +/- 0.40, N = 3 (Min 19.82 / Max 21.06; MIN: 19.48 / MAX: 21.6)
  Core i9 12900K:  Avg 15.86, SE +/- 0.33, N = 15 (Min 14.35 / Max 17.98; MIN: 14.24 / MAX: 21)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20210720 - Target: CPU - Model: resnet18 (ms, Fewer Is Better)
  Ryzen 9 5950X:   Avg 14.28, SE +/- 0.17, N = 4 (Min 14.08 / Max 14.78; MIN: 13.94 / MAX: 16.83)
  Ryzen 7 5800X3D: Avg 10.27, SE +/- 0.05, N = 15 (Min 9.93 / Max 10.52; MIN: 9.71 / MAX: 12.09)
  Ryzen 7 5800X:   Avg 13.02, SE +/- 0.03, N = 15 (Min 12.87 / Max 13.3; MIN: 12.77 / MAX: 32.91)
  Ryzen 9 5900X:   Avg 12.50, SE +/- 0.05, N = 3 (Min 12.39 / Max 12.57; MIN: 12.28 / MAX: 12.82)
  Core i9 12900K:  Avg 9.66, SE +/- 0.15, N = 15 (Min 7.62 / Max 10.08; MIN: 7.55 / MAX: 14.7)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20210720 - Target: CPU - Model: vgg16 (ms, Fewer Is Better)
  Ryzen 9 5950X:   Avg 56.55, SE +/- 0.08, N = 4 (Min 56.33 / Max 56.7; MIN: 55.53 / MAX: 62.98)
  Ryzen 7 5800X3D: Avg 42.60, SE +/- 0.13, N = 15 (Min 42.11 / Max 43.64; MIN: 41.52 / MAX: 50.92)
  Ryzen 7 5800X:   Avg 55.97, SE +/- 0.07, N = 15 (Min 55.62 / Max 56.66; MIN: 54.91 / MAX: 64.62)
  Ryzen 9 5900X:   Avg 50.75, SE +/- 0.09, N = 3 (Min 50.6 / Max 50.91; MIN: 49.97 / MAX: 60.2)
  Core i9 12900K:  Avg 28.24, SE +/- 0.48, N = 15 (Min 25.89 / Max 30.33; MIN: 25.72 / MAX: 45.6)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20210720 - Target: CPU - Model: googlenet (ms, Fewer Is Better)
  Ryzen 9 5950X:   Avg 12.92, SE +/- 0.28, N = 4 (Min 12.42 / Max 13.54; MIN: 12.09 / MAX: 15.32)
  Ryzen 7 5800X3D: Avg 7.31, SE +/- 0.05, N = 15 (Min 7.18 / Max 7.78; MIN: 7.05 / MAX: 14.4)
  Ryzen 7 5800X:   Avg 10.22, SE +/- 0.02, N = 15 (Min 10.05 / Max 10.35; MIN: 9.79 / MAX: 18.21)
  Ryzen 9 5900X:   Avg 11.44, SE +/- 0.02, N = 3 (Min 11.41 / Max 11.47; MIN: 11.28 / MAX: 11.82)
  Core i9 12900K:  Avg 9.94, SE +/- 0.21, N = 15 (Min 7.98 / Max 10.47; MIN: 7.91 / MAX: 14.3)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20210720 - Target: CPU - Model: blazeface (ms, Fewer Is Better)
  Ryzen 9 5950X:   Avg 1.80, SE +/- 0.02, N = 4 (Min 1.77 / Max 1.85; MIN: 1.75 / MAX: 2.15)
  Ryzen 7 5800X3D: Avg 1.06, SE +/- 0.00, N = 15 (Min 1.05 / Max 1.09; MIN: 1.04 / MAX: 4.31)
  Ryzen 7 5800X:   Avg 1.22, SE +/- 0.00, N = 15 (Min 1.21 / Max 1.22; MIN: 1.19 / MAX: 2.1)
  Ryzen 9 5900X:   Avg 1.63, SE +/- 0.00, N = 3 (Min 1.63 / Max 1.63; MIN: 1.61 / MAX: 1.81)
  Core i9 12900K:  Avg 1.46, SE +/- 0.05, N = 15 (Min 1.16 / Max 1.67; MIN: 1.15 / MAX: 2.96)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20210720 - Target: CPU - Model: efficientnet-b0 (ms, Fewer Is Better)
  Ryzen 9 5950X:   Avg 5.23, SE +/- 0.06, N = 4 (Min 5.16 / Max 5.39; MIN: 5.09 / MAX: 6.68)
  Ryzen 7 5800X3D: Avg 3.01, SE +/- 0.01, N = 15 (Min 2.96 / Max 3.07; MIN: 2.93 / MAX: 13.07)
  Ryzen 7 5800X:   Avg 3.60, SE +/- 0.01, N = 15 (Min 3.57 / Max 3.69; MIN: 3.53 / MAX: 5.22)
  Ryzen 9 5900X:   Avg 4.77, SE +/- 0.01, N = 3 (Min 4.76 / Max 4.79; MIN: 4.7 / MAX: 5)
  Core i9 12900K:  Avg 5.34, SE +/- 0.11, N = 15 (Min 4.39 / Max 5.74; MIN: 4.35 / MAX: 9.28)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20210720 - Target: CPU - Model: mnasnet (ms, Fewer Is Better)
  Ryzen 9 5950X:   Avg 3.87, SE +/- 0.05, N = 4 (Min 3.81 / Max 4.02; MIN: 3.76 / MAX: 10.7)
  Ryzen 7 5800X3D: Avg 2.01, SE +/- 0.00, N = 15 (Min 2 / Max 2.02; MIN: 1.97 / MAX: 2.76)
  Ryzen 7 5800X:   Avg 2.24, SE +/- 0.00, N = 15 (Min 2.23 / Max 2.27; MIN: 2.2 / MAX: 3.68)
  Ryzen 9 5900X:   Avg 3.44, SE +/- 0.01, N = 3 (Min 3.43 / Max 3.45; MIN: 3.39 / MAX: 3.72)
  Core i9 12900K:  Avg 3.10, SE +/- 0.07, N = 15 (Min 2.69 / Max 3.45; MIN: 2.66 / MAX: 4.79)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20210720 - Target: CPU - Model: shufflenet-v2 (ms, Fewer Is Better)
  Ryzen 9 5950X:   Avg 4.16, SE +/- 0.01, N = 4 (Min 4.14 / Max 4.17; MIN: 4.05 / MAX: 4.9)
  Ryzen 7 5800X3D: Avg 2.11, SE +/- 0.01, N = 15 (Min 2.09 / Max 2.2; MIN: 2.07 / MAX: 3)
  Ryzen 7 5800X:   Avg 2.35, SE +/- 0.00, N = 15 (Min 2.34 / Max 2.37; MIN: 2.31 / MAX: 3.74)
  Ryzen 9 5900X:   Avg 3.88, SE +/- 0.01, N = 3 (Min 3.87 / Max 3.89; MIN: 3.82 / MAX: 4.06)
  Core i9 12900K:  Avg 3.10, SE +/- 0.08, N = 14 (Min 2.72 / Max 3.56; MIN: 2.68 / MAX: 4.51)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20210720 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, Fewer Is Better)
  Ryzen 9 5950X:   Avg 3.77, SE +/- 0.00, N = 4 (Min 3.77 / Max 3.78; MIN: 3.7 / MAX: 4.73)
  Ryzen 7 5800X3D: Avg 1.90, SE +/- 0.00, N = 15 (Min 1.88 / Max 1.92; MIN: 1.84 / MAX: 2.38)
  Ryzen 7 5800X:   Avg 2.28, SE +/- 0.00, N = 15 (Min 2.26 / Max 2.32; MIN: 2.22 / MAX: 3.92)
  Ryzen 9 5900X:   Avg 3.46, SE +/- 0.01, N = 3 (Min 3.45 / Max 3.47; MIN: 3.39 / MAX: 3.66)
  Core i9 12900K:  Avg 2.90, SE +/- 0.06, N = 15 (Min 2.56 / Max 3.26; MIN: 2.53 / MAX: 4.55)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20210720 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, Fewer Is Better)
  Ryzen 9 5950X:   Avg 4.29, SE +/- 0.01, N = 4 (Min 4.27 / Max 4.31; MIN: 4.15 / MAX: 7.55)
  Ryzen 7 5800X3D: Avg 2.13, SE +/- 0.00, N = 15 (Min 2.11 / Max 2.15; MIN: 2.03 / MAX: 2.88)
  Ryzen 7 5800X:   Avg 2.61, SE +/- 0.01, N = 15 (Min 2.59 / Max 2.68; MIN: 2.54 / MAX: 3.77)
  Ryzen 9 5900X:   Avg 3.91, SE +/- 0.01, N = 3 (Min 3.9 / Max 3.92; MIN: 3.82 / MAX: 4.11)
  Core i9 12900K:  Avg 3.41, SE +/- 0.12, N = 15 (Min 2.77 / Max 4.41; MIN: 2.72 / MAX: 5.86)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20210720 - Target: CPU - Model: mobilenet (ms, Fewer Is Better)
  Ryzen 9 5950X:   Avg 12.27, SE +/- 0.15, N = 4 (Min 12.08 / Max 12.72; MIN: 11.71 / MAX: 18.81)
  Ryzen 7 5800X3D: Avg 7.62, SE +/- 0.08, N = 15 (Min 7.42 / Max 8.58; MIN: 7.25 / MAX: 9.92)
  Ryzen 7 5800X:   Avg 11.63, SE +/- 0.13, N = 15 (Min 11.18 / Max 12.52; MIN: 11.06 / MAX: 12.93)
  Ryzen 9 5900X:   Avg 11.42, SE +/- 0.01, N = 3 (Min 11.41 / Max 11.43; MIN: 11.16 / MAX: 18.48)
  Core i9 12900K:  Avg 11.11, SE +/- 0.26, N = 15 (Min 8.95 / Max 12.51; MIN: 8.87 / MAX: 13.96)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

oneDNN

CPU Peak Freq (Highest CPU Core Frequency) Monitor during oneDNN 2.6 (Megahertz)
  Ryzen 9 5950X:   Min 2200 / Avg 4052 / Max 4747
  Ryzen 7 5800X3D: Min 2195 / Avg 4169 / Max 4441
  Ryzen 7 5800X:   Min 3790 / Avg 3793 / Max 3799
  Ryzen 9 5900X:   Min 2200 / Avg 4081 / Max 4722
  Core i9 12900K:  Min 800 / Avg 4310 / Max 4900

oneDNN 2.6 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  Ryzen 9 5950X:   Avg 4.75533, SE +/- 0.22726, N = 15 (Min 3.84 / Max 6.95; MIN: 3.38)
  Ryzen 7 5800X3D: Avg 7.27623, SE +/- 0.06264, N = 3 (Min 7.21 / Max 7.4; MIN: 5.11)
  Ryzen 7 5800X:   Avg 8.34203, SE +/- 0.11645, N = 15 (Min 7.59 / Max 9.16; MIN: 5.88)
  Ryzen 9 5900X:   Avg 5.38689, SE +/- 0.07639, N = 3 (Min 5.3 / Max 5.54; MIN: 4)
  Core i9 12900K:  Avg 8.73750, SE +/- 0.16860, N = 12 (Min 7.94 / Max 10.3; MIN: 4.15)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl
  2. The Ryzen builds were additionally linked with -lpthread.

CPU Peak Freq (Highest CPU Core Frequency) Monitor during oneDNN 2.6 (Megahertz)
  Ryzen 9 5950X:   Min 2200 / Avg 3861 / Max 4849
  Ryzen 7 5800X3D: Min 2195 / Avg 3807 / Max 4441
  Ryzen 7 5800X:   Min 3791 / Avg 3794 / Max 3798
  Ryzen 9 5900X:   Min 2199 / Avg 3901 / Max 4803
  Core i9 12900K:  Min 801 / Avg 3465 / Max 5105

oneDNN 2.6 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  Ryzen 9 5950X:   Avg 16.70510, SE +/- 0.00896, N = 7 (Min 16.67 / Max 16.74; MIN: 16.31)
  Ryzen 7 5800X3D: Avg 12.62270, SE +/- 0.01917, N = 7 (Min 12.56 / Max 12.68; MIN: 12.28)
  Ryzen 7 5800X:   Avg 18.75350, SE +/- 0.12899, N = 7 (Min 18.57 / Max 19.52; MIN: 18.34)
  Ryzen 9 5900X:   Avg 16.13980, SE +/- 0.29208, N = 15 (Min 15.78 / Max 20.23; MIN: 15.35)
  Core i9 12900K:  Avg 5.87536, SE +/- 0.00220, N = 7 (Min 5.87 / Max 5.88; MIN: 5.78)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl
  2. The Ryzen builds were additionally linked with -lpthread.
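For an overall reading across this many heterogeneous tests, the view options offer harmonic and geometric means; the geometric mean is the usual choice, since it behaves sensibly across results with different units and scales. A sketch with hypothetical normalized scores (each test scaled so the baseline system = 1.0):

```python
import statistics

# Hypothetical per-test scores for one CPU, normalized to a baseline of 1.0
# (after normalization, higher is better for every test).
normalized = [1.12, 0.95, 1.30, 1.04, 0.88]

geo = statistics.geometric_mean(normalized)   # overall summary across tests
har = statistics.harmonic_mean(normalized)    # weights the slow tests more
print(round(geo, 3), round(har, 3))
```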

106 Results Shown

ONNX Runtime
ASKAP
ECP-CANDLE
ASKAP
LeelaChessZero:
  BLAS
  Eigen
oneDNN
ECP-CANDLE
oneDNN
ASKAP
OpenFOAM
Xcompact3d Incompact3d
ASKAP
oneDNN
Xcompact3d Incompact3d
ASKAP
Mobile Neural Network
ONNX Runtime
OpenFOAM
Mobile Neural Network
ASKAP
WebP2 Image Encode
Mlpack Benchmark
WebP2 Image Encode
TNN
WebP2 Image Encode:
  Quality 75, Compression Effort 7
  Quality 95, Compression Effort 7
ASKAP
Mlpack Benchmark
ONNX Runtime
oneDNN
Mobile Neural Network
oneDNN
WebP2 Image Encode
oneDNN:
  Deconvolution Batch shapes_3d - u8s8f32 - CPU
  IP Shapes 1D - u8s8f32 - CPU
TNN
ECP-CANDLE
Mobile Neural Network
ONNX Runtime
TNN
ONNX Runtime
TNN
ONNX Runtime
NCNN
oneDNN
ONNX Runtime
NCNN
Caffe
Open Porous Media Git:
  Flow MPI Norne - 8
  Flow MPI Norne-4C MSW - 8
Caffe:
  AlexNet - CPU - 100
  GoogleNet - CPU - 100
  AlexNet - CPU - 200
Numpy Benchmark
NCNN
oneDNN:
  Recurrent Neural Network Inference - f32 - CPU
  Recurrent Neural Network Inference - u8s8f32 - CPU
  Recurrent Neural Network Inference - bf16bf16bf16 - CPU
Open Porous Media Git:
  Flow MPI Extra - 8
  Flow MPI Extra - 4
  Flow MPI Norne-4C MSW - 4
Mlpack Benchmark
Open Porous Media Git:
  Flow MPI Norne-4C MSW - 2
  Flow MPI Norne-4C MSW - 1
  Flow MPI Norne - 1
  Flow MPI Norne - 4
  Flow MPI Norne - 2
  Flow MPI Extra - 2
  Flow MPI Extra - 1
oneDNN:
  Recurrent Neural Network Training - bf16bf16bf16 - CPU
  Recurrent Neural Network Training - f32 - CPU
  Recurrent Neural Network Training - u8s8f32 - CPU
Mobile Neural Network
CPU Peak Freq (Highest CPU Core Frequency) Monitor:
  Phoronix Test Suite System Monitoring
  CPU Peak Freq (Highest CPU Core Frequency) Monitor
ONNX Runtime
ONNX Runtime
ONNX Runtime
ONNX Runtime
ONNX Runtime
ONNX Runtime
ONNX Runtime
ONNX Runtime
ONNX Runtime
Mlpack Benchmark
Mlpack Benchmark
Mobile Neural Network:
  SqueezeNetV1.0
  resnet-v2-50
NCNN
NCNN:
  CPU - regnety_400m
  CPU - yolov4-tiny
  CPU - resnet18
  CPU - vgg16
  CPU - googlenet
  CPU - blazeface
  CPU - efficientnet-b0
  CPU - mnasnet
  CPU - shufflenet-v2
  CPU-v3-v3 - mobilenet-v3
  CPU-v2-v2 - mobilenet-v2
  CPU - mobilenet
oneDNN
oneDNN
oneDNN
oneDNN