AMD EPYC 7763 Cooling Performance

AMD EPYC 7763 64-core CPU benchmarks by Michael Larabel evaluating three heatsink/fan coolers (Noctua NH-U9 TR4-SP3, Dynatron A26, Dynatron A38) in a 4U server.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2104096-IB-HEATSINK430
Benchmark Runs:

  Run                    Date           Test Duration
  Noctua NH-U9 TR4-SP3   April 08 2021  8 Hours, 12 Minutes
  Dynatron A26           April 09 2021  8 Hours, 45 Minutes
  Dynatron A38           April 09 2021  11 Hours, 17 Minutes


AMD EPYC 7763 Cooling Performance Benchmarks - System Configuration (OpenBenchmarking.org / Phoronix Test Suite)

  Processor:          AMD EPYC 7763 64-Core @ 2.45GHz (64 Cores / 128 Threads)
  Motherboard:        Supermicro H12SSL-i v1.01 (2.0 BIOS)
  Chipset:            AMD Starship/Matisse
  Memory:             126GB
  Disk:               3841GB Micron_9300_MTFDHAL3T8TDP
  Graphics:           llvmpipe
  Network:            2 x Broadcom NetXtreme BCM5720 2-port PCIe
  OS:                 Ubuntu 20.04
  Kernel:             5.12.0-051200rc6daily20210408-generic (x86_64) 20210407
  Desktop:            GNOME Shell 3.36.4
  Display Server:     X Server 1.20.8
  OpenGL:             3.3 Mesa 20.0.8 (LLVM 10.0.0 128 bits)
  Compiler:           GCC 9.3.0
  File-System:        ext4
  Screen Resolution:  1024x768

System Logs:
- Transparent Huge Pages: madvise
- Compiler configured with: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-HskZEa/gcc-9-9.3.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: acpi-cpufreq ondemand (Boost: Enabled)
- CPU Microcode: 0xa001119
- Python 3.8.2
- Security mitigations: itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional IBRS_FW STIBP: always-on RSB filling; srbds: Not affected; tsx_async_abort: Not affected

Result Overview (Phoronix Test Suite, normalized; overview chart axis spans 100% to 102%): across the full suite of tests (Stockfish, Xcompact3d Incompact3d, Timed Erlang/OTP Compilation, Chaos Group V-RAY, ViennaCL, OpenSCAD, Mobile Neural Network, Timed Node.js Compilation, ASTC Encoder, Timed GDB GNU Debugger Compilation, LuaRadio, GROMACS, IndigoBench, AOM AV1, Timed Apache Compilation, SVT-AV1, Timed Linux Kernel Compilation, NAMD, simdjson, srsLTE, GNU Radio, SVT-HEVC, Liquid-DSP, GNU GMP GMPbench, Timed Mesa Compilation, SVT-VP9, Blender, oneDNN), the Noctua NH-U9 TR4-SP3, Dynatron A26, and Dynatron A38 results fell within roughly 2% of one another.

[Condensed results table omitted: raw per-test values for all benchmarks and all three coolers; individual result graphs with the same data follow below.]

Xcompact3d Incompact3d

Xcompact3d Incompact3d is a Fortran-MPI based, finite-difference high-performance code for solving the incompressible Navier-Stokes equations along with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.
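For reference, the incompressible Navier-Stokes system that Incompact3d discretizes with finite differences can be written (for velocity u, pressure p, density ρ, kinematic viscosity ν, and a passive scalar φ with diffusivity κ) as:

```latex
\begin{aligned}
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}
  &= -\frac{1}{\rho}\,\nabla p + \nu\,\nabla^{2}\mathbf{u}, \\
\nabla\cdot\mathbf{u} &= 0, \\
\frac{\partial \phi}{\partial t} + \mathbf{u}\cdot\nabla\phi
  &= \kappa\,\nabla^{2}\phi .
\end{aligned}
```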

Xcompact3d Incompact3d 2021-03-11 - Input: X3D-benchmarking input.i3d (OpenBenchmarking.org; Seconds, Fewer Is Better)
  Noctua NH-U9 TR4-SP3: 625.81 (SE +/- 0.48, N = 3)
  Dynatron A26:         667.27 (SE +/- 11.65, N = 9)
  Dynatron A38:         627.67 (SE +/- 0.23, N = 3)
  1. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total performance time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.
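The "IP Shapes" harnesses below time oneDNN's inner-product (fully connected) primitive at various tensor shapes. As an illustration of the underlying operation only (not oneDNN's actual API, and with made-up shapes), a plain-Python inner-product layer looks like:

```python
# Illustration only: the inner-product (fully connected) operation that
# oneDNN's "IP Shapes" benchdnn harnesses time with optimized kernels.
# The shapes and values here are invented for the example.

def inner_product(x, weights, bias):
    """y[i][o] = sum_k x[i][k] * weights[o][k] + bias[o], over a batch of inputs."""
    return [
        [sum(xk * wk for xk, wk in zip(row, w)) + b
         for w, b in zip(weights, bias)]
        for row in x
    ]

x = [[1.0, 2.0, 3.0]]           # batch of 1, 3 input features
w = [[0.5, 0.5, 0.5],           # 2 output features, 3 weights each
     [1.0, 0.0, -1.0]]
b = [0.0, 1.0]
print(inner_product(x, w, b))   # [[3.0, -1.0]]
```

oneDNN's benchmark measures this same computation (at much larger shapes) using vectorized, cache-blocked kernels, which is why small clock-speed differences between cooling setups show up in its timings.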

oneDNN 2.1.2 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (OpenBenchmarking.org; ms, Fewer Is Better)
  Noctua NH-U9 TR4-SP3: 0.663059 (SE +/- 0.006455, N = 5; MIN: 0.61)
  Dynatron A26:         0.650624 (SE +/- 0.004042, N = 5; MIN: 0.59)
  Dynatron A38:         0.640331 (SE +/- 0.005445, N = 5; MIN: 0.58)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.0 - Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p (OpenBenchmarking.org; Frames Per Second, More Is Better)
  Noctua NH-U9 TR4-SP3: 104.57 (SE +/- 1.05, N = 7)
  Dynatron A26:         101.70 (SE +/- 1.09, N = 6)
  Dynatron A38:         101.27 (SE +/- 0.67, N = 6)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

AOM AV1 3.0 - Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K (OpenBenchmarking.org; Frames Per Second, More Is Better)
  Noctua NH-U9 TR4-SP3: 4.72 (SE +/- 0.03, N = 3)
  Dynatron A26:         4.83 (SE +/- 0.02, N = 3)
  Dynatron A38:         4.77 (SE +/- 0.04, N = 3)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

Stockfish

This is a test of Stockfish, an advanced open-source C++11 chess engine benchmark that can scale up to 512 CPU threads. Learn more via the OpenBenchmarking.org test page.

Stockfish 13 - Total Time (OpenBenchmarking.org; Nodes Per Second, More Is Better)
  Noctua NH-U9 TR4-SP3: 156656685 (SE +/- 2161918.89, N = 4)
  Dynatron A26:         160107651 (SE +/- 2061799.88, N = 15)
  Dynatron A38:         158512004 (SE +/- 1246901.76, N = 3)
  1. (CXX) g++ options: -lgcov -m64 -lpthread -fno-exceptions -std=c++17 -fprofile-use -fno-peel-loops -fno-tracer -pedantic -O3 -msse -msse3 -mpopcnt -mavx2 -msse4.1 -mssse3 -msse2 -flto -flto=jobserver

AOM AV1


AOM AV1 3.0 - Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K (OpenBenchmarking.org; Frames Per Second, More Is Better)
  Noctua NH-U9 TR4-SP3: 9.15 (SE +/- 0.07, N = 3)
  Dynatron A26:         9.16 (SE +/- 0.10, N = 3)
  Dynatron A38:         9.35 (SE +/- 0.11, N = 6)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

ViennaCL

ViennaCL is an open-source linear algebra library written in C++ with support for OpenCL and OpenMP. This test profile makes use of ViennaCL's built-in benchmarks. Learn more via the OpenBenchmarking.org test page.
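The BLAS level-1 kernels benchmarked here are simple, memory-bound operations; sAXPY, for example, computes y <- a*x + y over single-precision vectors. A minimal sketch of the operation itself (not ViennaCL's actual C++ API) in plain Python:

```python
# Illustration only: the AXPY operation (y <- a*x + y) that ViennaCL's
# "CPU BLAS - sAXPY" benchmark measures in GB/s. AXPY touches three
# vectors' worth of memory per pass (read x, read y, write y), so its
# throughput is limited by memory bandwidth rather than arithmetic.

def axpy(a, x, y):
    return [a * xi + yi for xi, yi in zip(x, y)]

x = [1.0, 2.0, 3.0]
y = [10.0, 20.0, 30.0]
print(axpy(2.0, x, y))  # [12.0, 24.0, 36.0]
```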

ViennaCL 1.7.1 - Test: CPU BLAS - sAXPY (OpenBenchmarking.org; GB/s, More Is Better)
  Noctua NH-U9 TR4-SP3: 627 (SE +/- 2.32, N = 15)
  Dynatron A26:         640 (SE +/- 2.67, N = 14)
  Dynatron A38:         640 (SE +/- 1.86, N = 15)
  1. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL

srsLTE

srsLTE is an open-source LTE software radio suite created by Software Radio Systems (SRS). srsLTE can be used for building your own software-defined radio (SDR) LTE mobile network. Learn more via the OpenBenchmarking.org test page.

srsLTE 20.10.1 - Test: OFDM_Test (OpenBenchmarking.org; Samples / Second, More Is Better)
  Noctua NH-U9 TR4-SP3: 115633333 (SE +/- 1770436.23, N = 3)
  Dynatron A26:         116233333 (SE +/- 1902921.73, N = 3)
  Dynatron A38:         117866667 (SE +/- 1178039.80, N = 3)
  1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -rdynamic -lpthread -lmbedcrypto -lconfig++ -lsctp -lbladeRF -lm -lfftw3f

oneDNN


oneDNN 2.1.2 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (OpenBenchmarking.org; ms, Fewer Is Better)
  Noctua NH-U9 TR4-SP3: 1373.91 (SE +/- 3.07, N = 3; MIN: 1332.45)
  Dynatron A26:         1378.86 (SE +/- 9.11, N = 3; MIN: 1326.59)
  Dynatron A38:         1400.19 (SE +/- 14.96, N = 3; MIN: 1335.97)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

AOM AV1


AOM AV1 3.0 - Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K (OpenBenchmarking.org; Frames Per Second, More Is Better)
  Noctua NH-U9 TR4-SP3: 34.10 (SE +/- 0.07, N = 3)
  Dynatron A26:         34.29 (SE +/- 0.22, N = 3)
  Dynatron A38:         34.73 (SE +/- 0.28, N = 3)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

ViennaCL


ViennaCL 1.7.1 - Test: CPU BLAS - dGEMV-T (OpenBenchmarking.org; GB/s, More Is Better)
  Noctua NH-U9 TR4-SP3: 793 (SE +/- 1.64, N = 15)
  Dynatron A26:         781 (SE +/- 1.78, N = 14)
  Dynatron A38:         779 (SE +/- 3.44, N = 15)
  1. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL

AOM AV1


AOM AV1 3.0 - Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p (OpenBenchmarking.org; Frames Per Second, More Is Better)
  Noctua NH-U9 TR4-SP3: 86.13 (SE +/- 0.36, N = 6)
  Dynatron A26:         86.56 (SE +/- 0.59, N = 6)
  Dynatron A38:         87.63 (SE +/- 0.58, N = 6)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

Xcompact3d Incompact3d


Xcompact3d Incompact3d 2021-03-11 - Input: input.i3d 193 Cells Per Direction (OpenBenchmarking.org; Seconds, Fewer Is Better)
  Noctua NH-U9 TR4-SP3: 22.51 (SE +/- 0.04, N = 3)
  Dynatron A26:         22.33 (SE +/- 0.30, N = 3)
  Dynatron A38:         22.71 (SE +/- 0.08, N = 3)
  1. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

AOM AV1


AOM AV1 3.0 - Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K (OpenBenchmarking.org; Frames Per Second, More Is Better)
  Noctua NH-U9 TR4-SP3: 37.45 (SE +/- 0.43, N = 3)
  Dynatron A26:         38.06 (SE +/- 0.46, N = 6)
  Dynatron A38:         37.46 (SE +/- 0.48, N = 5)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

Mobile Neural Network

MNN is the Mobile Neural Network, a highly efficient and lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.1.3 - Model: SqueezeNetV1.0 (OpenBenchmarking.org; ms, Fewer Is Better)
  Noctua NH-U9 TR4-SP3: 5.785 (SE +/- 0.039, N = 3; MIN: 5.58 / MAX: 6.64)
  Dynatron A26:         5.797 (SE +/- 0.027, N = 3; MIN: 5.54 / MAX: 7.52)
  Dynatron A38:         5.878 (SE +/- 0.015, N = 3; MIN: 5.64 / MAX: 6.84)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

ViennaCL


ViennaCL 1.7.1 - Test: CPU BLAS - dAXPY (OpenBenchmarking.org; GB/s, More Is Better)
  Noctua NH-U9 TR4-SP3: 1222 (SE +/- 2.23, N = 15)
  Dynatron A26:         1205 (SE +/- 2.28, N = 14)
  Dynatron A38:         1204 (SE +/- 1.31, N = 15)
  1. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL

oneDNN


oneDNN 2.1.2 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (OpenBenchmarking.org; ms, Fewer Is Better)
  Noctua NH-U9 TR4-SP3: 1370.82 (SE +/- 8.02, N = 3; MIN: 1322.76)
  Dynatron A26:         1391.14 (SE +/- 4.55, N = 3; MIN: 1343.15)
  Dynatron A38:         1381.77 (SE +/- 8.30, N = 3; MIN: 1330.53)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Timed Erlang/OTP Compilation

This test times how long it takes to compile Erlang/OTP. Erlang is a programming language and run-time for massively scalable soft real-time systems with high availability requirements. Learn more via the OpenBenchmarking.org test page.

Timed Erlang/OTP Compilation 23.2 - Time To Compile (OpenBenchmarking.org; Seconds, Fewer Is Better)
  Noctua NH-U9 TR4-SP3: 134.33 (SE +/- 0.26, N = 3)
  Dynatron A26:         132.50 (SE +/- 0.39, N = 3)
  Dynatron A38:         133.12 (SE +/- 0.29, N = 3)

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.

GNU Radio - Test: Five Back to Back FIR Filters (OpenBenchmarking.org; MiB/s, More Is Better)
  Noctua NH-U9 TR4-SP3: 560.9 (SE +/- 8.06, N = 4)
  Dynatron A26:         567.7 (SE +/- 5.98, N = 3)
  Dynatron A38:         560.1 (SE +/- 5.99, N = 9)
  1. 3.8.1.0

OpenSCAD

OpenSCAD is a programmer-focused solid 3D CAD modeller. OpenSCAD is free software and allows creating 3D CAD objects in a script-based modelling environment. This test profile uses the system-provided OpenSCAD program and times how long it takes to render different SCAD assets to PNG output. Learn more via the OpenBenchmarking.org test page.

OpenSCAD - Render: Mini-ITX Case (OpenBenchmarking.org; Seconds, Fewer Is Better)
  Noctua NH-U9 TR4-SP3: 45.84 (SE +/- 0.06, N = 3)
  Dynatron A26:         45.23 (SE +/- 0.17, N = 3)
  Dynatron A38:         45.48 (SE +/- 0.17, N = 3)
  1. OpenSCAD version 2019.05

SVT-AV1

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 CPU-based multi-threaded video encoder for the AV1 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 0.8 - Encoder Mode: Enc Mode 4 - Input: 1080p (OpenBenchmarking.org; Frames Per Second, More Is Better)
  Noctua NH-U9 TR4-SP3: 9.473 (SE +/- 0.108, N = 4)
  Dynatron A26:         9.452 (SE +/- 0.083, N = 4)
  Dynatron A38:         9.355 (SE +/- 0.114, N = 6)
  1. (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie

oneDNN


oneDNN 2.1.2 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (OpenBenchmarking.org; ms, Fewer Is Better)
  Noctua NH-U9 TR4-SP3: 3.66307 (SE +/- 0.04102, N = 5; MIN: 3.37)
  Dynatron A26:         3.64941 (SE +/- 0.02260, N = 5; MIN: 3.4)
  Dynatron A38:         3.61762 (SE +/- 0.03350, N = 5; MIN: 3.36)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (OpenBenchmarking.org; ms, Fewer Is Better)
  Noctua NH-U9 TR4-SP3: 1.65220 (SE +/- 0.00301, N = 7; MIN: 1.57)
  Dynatron A26:         1.65671 (SE +/- 0.00304, N = 7; MIN: 1.57)
  Dynatron A38:         1.67256 (SE +/- 0.01439, N = 7; MIN: 1.57)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Mobile Neural Network


Mobile Neural Network 1.1.3 - Model: resnet-v2-50 (OpenBenchmarking.org; ms, Fewer Is Better)
  Noctua NH-U9 TR4-SP3: 22.13 (SE +/- 0.07, N = 3; MIN: 21.58 / MAX: 32.33)
  Dynatron A26:         22.40 (SE +/- 0.09, N = 3; MIN: 21.64 / MAX: 41.29)
  Dynatron A38:         22.27 (SE +/- 0.10, N = 3; MIN: 21.57 / MAX: 30.86)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

LuaRadio

LuaRadio is a lightweight software-defined radio (SDR) framework built atop LuaJIT. LuaRadio provides a suite of source, sink, and processing blocks, with a simple API for defining flow graphs, running flow graphs, creating blocks, and creating data types. Learn more via the OpenBenchmarking.org test page.

LuaRadio 0.9.1 - Test: Five Back to Back FIR Filters (OpenBenchmarking.org; MiB/s, More Is Better)
  Noctua NH-U9 TR4-SP3: 1101.6 (SE +/- 3.75, N = 3)
  Dynatron A26:         1110.6 (SE +/- 1.05, N = 3)
  Dynatron A38:         1097.4 (SE +/- 2.63, N = 3)

OpenSCAD


OpenSCAD - Render: Leonardo Phone Case Slim (OpenBenchmarking.org; Seconds, Fewer Is Better)
  Noctua NH-U9 TR4-SP3: 18.70 (SE +/- 0.10, N = 3)
  Dynatron A26:         18.53 (SE +/- 0.04, N = 3)
  Dynatron A38:         18.73 (SE +/- 0.10, N = 3)
  1. OpenSCAD version 2019.05

oneDNN


oneDNN 2.1.2 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (OpenBenchmarking.org; ms, Fewer Is Better)
  Noctua NH-U9 TR4-SP3: 1385.32 (SE +/- 3.58, N = 3; MIN: 1350.61)
  Dynatron A26:         1369.98 (SE +/- 3.15, N = 3; MIN: 1325.96)
  Dynatron A38:         1377.49 (SE +/- 4.57, N = 3; MIN: 1335.99)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

OpenSCAD


OpenSCAD - Render: Pistol (OpenBenchmarking.org; Seconds, Fewer Is Better)
  Noctua NH-U9 TR4-SP3: 109.83 (SE +/- 0.20, N = 3)
  Dynatron A26:         108.62 (SE +/- 0.10, N = 3)
  Dynatron A38:         108.79 (SE +/- 0.31, N = 3)
  1. OpenSCAD version 2019.05

Chaos Group V-RAY

This is a test of Chaos Group's V-RAY benchmark. V-RAY is a commercial renderer that can integrate with various creator software products like SketchUp and 3ds Max. The V-RAY benchmark is standalone and supports CPU and NVIDIA CUDA/RTX based rendering. Learn more via the OpenBenchmarking.org test page.

Chaos Group V-RAY 5 - Mode: CPU (OpenBenchmarking.org; vsamples, More Is Better)
  Noctua NH-U9 TR4-SP3: 57912 (SE +/- 265.36, N = 3)
  Dynatron A26:         58270 (SE +/- 814.01, N = 3)
  Dynatron A38:         58504 (SE +/- 477.68, N = 3)

AOM AV1


AOM AV1 3.0 - Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K (OpenBenchmarking.org; Frames Per Second, More Is Better)
  Noctua NH-U9 TR4-SP3: 16.09 (SE +/- 0.03, N = 3)
  Dynatron A26:         16.25 (SE +/- 0.06, N = 3)
  Dynatron A38:         16.14 (SE +/- 0.05, N = 3)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

Mobile Neural Network


Mobile Neural Network 1.1.3 - Model: inception-v3 (OpenBenchmarking.org; ms, Fewer Is Better)
  Noctua NH-U9 TR4-SP3: 28.17 (SE +/- 0.03, N = 3; MIN: 27.06 / MAX: 43.23)
  Dynatron A26:         28.45 (SE +/- 0.13, N = 3; MIN: 27.16 / MAX: 42.86)
  Dynatron A38:         28.33 (SE +/- 0.10, N = 3; MIN: 27.14 / MAX: 44.36)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

oneDNN


oneDNN 2.1.2 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (OpenBenchmarking.org; ms, Fewer Is Better)
  Noctua NH-U9 TR4-SP3: 0.384929 (SE +/- 0.005034, N = 4; MIN: 0.36)
  Dynatron A26:         0.381495 (SE +/- 0.001495, N = 4; MIN: 0.36)
  Dynatron A38:         0.385207 (SE +/- 0.004169, N = 4; MIN: 0.36)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

OpenSCAD


OpenSCAD - Render: Retro Car (OpenBenchmarking.org; Seconds, Fewer Is Better)
  Noctua NH-U9 TR4-SP3: 19.06 (SE +/- 0.02, N = 3)
  Dynatron A26:         18.89 (SE +/- 0.05, N = 3)
  Dynatron A38:         18.93 (SE +/- 0.03, N = 3)
  1. OpenSCAD version 2019.05

AOM AV1


AOM AV1 3.0 - Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p (OpenBenchmarking.org; Frames Per Second, More Is Better)
  Noctua NH-U9 TR4-SP3: 6.74 (SE +/- 0.02, N = 3)
  Dynatron A26:         6.71 (SE +/- 0.01, N = 3)
  Dynatron A38:         6.68 (SE +/- 0.00, N = 3)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.4 - Preset: Exhaustive (OpenBenchmarking.org; Seconds, Fewer Is Better)
  Noctua NH-U9 TR4-SP3: 20.44 (SE +/- 0.05, N = 3)
  Dynatron A26:         20.62 (SE +/- 0.02, N = 3)
  Dynatron A38:         20.56 (SE +/- 0.02, N = 3)
  1. (CXX) g++ options: -O3 -flto -pthread

AOM AV1


AOM AV1 3.0 - Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p (OpenBenchmarking.org; Frames Per Second, More Is Better)
  Noctua NH-U9 TR4-SP3: 24.61 (SE +/- 0.05, N = 3)
  Dynatron A26:         24.71 (SE +/- 0.18, N = 3)
  Dynatron A38:         24.82 (SE +/- 0.10, N = 3)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

Mobile Neural Network


Mobile Neural Network 1.1.3 - Model: MobileNetV2_224 (OpenBenchmarking.org; ms, Fewer Is Better)
  Noctua NH-U9 TR4-SP3: 3.757 (SE +/- 0.017, N = 3; MIN: 3.63 / MAX: 6.09)
  Dynatron A26:         3.789 (SE +/- 0.021, N = 3; MIN: 3.66 / MAX: 4.64)
  Dynatron A38:         3.780 (SE +/- 0.016, N = 3; MIN: 3.66 / MAX: 6.67)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

oneDNN


oneDNN 2.1.2 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (OpenBenchmarking.org; ms, Fewer Is Better)
  Noctua NH-U9 TR4-SP3: 0.720928 (SE +/- 0.002487, N = 4; MIN: 0.67)
  Dynatron A26:         0.726789 (SE +/- 0.001153, N = 4; MIN: 0.67)
  Dynatron A38:         0.721586 (SE +/- 0.001725, N = 4; MIN: 0.67)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (OpenBenchmarking.org; ms, Fewer Is Better)
  Noctua NH-U9 TR4-SP3: 1.17758 (SE +/- 0.00777, N = 4; MIN: 1)
  Dynatron A26:         1.18685 (SE +/- 0.00760, N = 4; MIN: 0.98)
  Dynatron A38:         1.17906 (SE +/- 0.01187, N = 4; MIN: 0.99)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

SVT-AV1


SVT-AV1 0.8 - Encoder Mode: Enc Mode 8 - Input: 1080p (Frames Per Second, more is better)
  Noctua NH-U9 TR4-SP3: 92.86 (SE +/- 0.39, N = 6)
  Dynatron A26: 92.35 (SE +/- 0.38, N = 6)
  Dynatron A38: 93.05 (SE +/- 0.62, N = 6)
  1. (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie

simdjson

This is a benchmark of simdjson, a high-performance JSON parser. simdjson aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 0.8.2 - Throughput Test: DistinctUserID (GB/s, more is better)
  Noctua NH-U9 TR4-SP3: 4.01 (SE +/- 0.01, N = 3)
  Dynatron A26: 4.00 (SE +/- 0.01, N = 3)
  Dynatron A38: 3.98 (SE +/- 0.01, N = 3)
  1. (CXX) g++ options: -O3 -pthread

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.0 - Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, more is better)
  Noctua NH-U9 TR4-SP3: 21.31 (SE +/- 0.04, N = 3)
  Dynatron A26: 21.47 (SE +/- 0.06, N = 3)
  Dynatron A38: 21.46 (SE +/- 0.03, N = 3)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 10 - Input: Bosphorus 1080p (Frames Per Second, more is better)
  Noctua NH-U9 TR4-SP3: 607.02 (SE +/- 1.56, N = 12)
  Dynatron A26: 602.55 (SE +/- 1.45, N = 12)
  Dynatron A38: 605.97 (SE +/- 0.85, N = 12)
  1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

ViennaCL

ViennaCL is an open-source linear algebra library written in C++ with support for OpenCL and OpenMP. This test profile makes use of ViennaCL's built-in benchmarks. Learn more via the OpenBenchmarking.org test page.

ViennaCL 1.7.1 - Test: CPU BLAS - dCOPY (GB/s, more is better)
  Noctua NH-U9 TR4-SP3: 1401 (SE +/- 5.47, N = 15)
  Dynatron A26: 1409 (SE +/- 3.55, N = 14)
  Dynatron A38: 1399 (SE +/- 3.16, N = 15)
  1. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

IndigoBench 4.4 - Acceleration: CPU - Scene: Supercar (M samples/s, more is better)
  Noctua NH-U9 TR4-SP3: 24.19 (SE +/- 0.03, N = 3)
  Dynatron A26: 24.26 (SE +/- 0.03, N = 3)
  Dynatron A38: 24.36 (SE +/- 0.06, N = 3)

oneDNN

oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  Noctua NH-U9 TR4-SP3: 0.605058 (SE +/- 0.001828, N = 3, MIN: 0.56)
  Dynatron A26: 0.607655 (SE +/- 0.001607, N = 3, MIN: 0.56)
  Dynatron A38: 0.603621 (SE +/- 0.000705, N = 3, MIN: 0.56)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.

GNU Radio - Test: FM Deemphasis Filter (MiB/s, more is better)
  Noctua NH-U9 TR4-SP3: 765.3 (SE +/- 1.86, N = 4)
  Dynatron A26: 760.3 (SE +/- 1.05, N = 3)
  Dynatron A38: 764.1 (SE +/- 1.58, N = 9)
  1. 3.8.1.0

GNU Radio - Test: IIR Filter (MiB/s, more is better)
  Noctua NH-U9 TR4-SP3: 608.6 (SE +/- 1.08, N = 4)
  Dynatron A26: 604.7 (SE +/- 1.25, N = 3)
  Dynatron A38: 607.0 (SE +/- 1.20, N = 9)
  1. 3.8.1.0

srsLTE

srsLTE is an open-source LTE software radio suite created by Software Radio Systems (SRS). srsLTE can be used for building your own software-defined radio (SDR) LTE mobile network. Learn more via the OpenBenchmarking.org test page.

srsLTE 20.10.1 - Test: PHY_DL_Test (UE Mb/s, more is better)
  Noctua NH-U9 TR4-SP3: 94.3 (SE +/- 0.46, N = 3)
  Dynatron A26: 94.3 (SE +/- 0.23, N = 3)
  Dynatron A38: 93.7 (SE +/- 0.52, N = 3)
  1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -rdynamic -lpthread -lmbedcrypto -lconfig++ -lsctp -lbladeRF -lm -lfftw3f

GNU Radio

GNU Radio - Test: Signal Source (Cosine) (MiB/s, more is better)
  Noctua NH-U9 TR4-SP3: 3322.2 (SE +/- 6.28, N = 4)
  Dynatron A26: 3324.7 (SE +/- 26.65, N = 3)
  Dynatron A38: 3303.6 (SE +/- 17.69, N = 9)
  1. 3.8.1.0

ViennaCL

ViennaCL 1.7.1 - Test: CPU BLAS - sDOT (GB/s, more is better)
  Noctua NH-U9 TR4-SP3: 637 (SE +/- 0.77, N = 14)
  Dynatron A26: 640 (SE +/- 1.19, N = 14)
  Dynatron A38: 636 (SE +/- 1.17, N = 13)
  1. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL

oneDNN

oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, fewer is better)
  Noctua NH-U9 TR4-SP3: 3.03820 (SE +/- 0.00904, N = 9, MIN: 2.21)
  Dynatron A26: 3.02459 (SE +/- 0.00911, N = 9, MIN: 2.24)
  Dynatron A38: 3.04352 (SE +/- 0.00693, N = 9, MIN: 2.34)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

SVT-HEVC

SVT-HEVC 1.5.0 - Tuning: 7 - Input: Bosphorus 1080p (Frames Per Second, more is better)
  Noctua NH-U9 TR4-SP3: 324.42 (SE +/- 1.48, N = 10)
  Dynatron A26: 323.60 (SE +/- 0.66, N = 10)
  Dynatron A38: 322.43 (SE +/- 0.82, N = 10)
  1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

oneDNN

oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, fewer is better)
  Noctua NH-U9 TR4-SP3: 7.13608 (SE +/- 0.03799, N = 3, MIN: 6.04)
  Dynatron A26: 7.17839 (SE +/- 0.02988, N = 3, MIN: 6.17)
  Dynatron A38: 7.18002 (SE +/- 0.01979, N = 3, MIN: 6.18)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Mobile Neural Network

Mobile Neural Network 1.1.3 - Model: mobilenet-v1-1.0 (ms, fewer is better)
  Noctua NH-U9 TR4-SP3: 2.328 (SE +/- 0.010, N = 3, MIN: 2.28 / MAX: 2.55)
  Dynatron A26: 2.342 (SE +/- 0.012, N = 3, MIN: 2.29 / MAX: 2.64)
  Dynatron A38: 2.332 (SE +/- 0.014, N = 3, MIN: 2.28 / MAX: 2.65)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

srsLTE

srsLTE 20.10.1 - Test: PHY_DL_Test (eNb Mb/s, more is better)
  Noctua NH-U9 TR4-SP3: 257.4 (SE +/- 0.22, N = 3)
  Dynatron A26: 257.1 (SE +/- 0.18, N = 3)
  Dynatron A38: 255.9 (SE +/- 0.84, N = 3)
  1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -rdynamic -lpthread -lmbedcrypto -lconfig++ -lsctp -lbladeRF -lm -lfftw3f

ViennaCL

ViennaCL 1.7.1 - Test: CPU BLAS - dGEMM-NT (GFLOPs/s, more is better)
  Noctua NH-U9 TR4-SP3: 86.1 (SE +/- 0.28, N = 15)
  Dynatron A26: 86.6 (SE +/- 0.06, N = 14)
  Dynatron A38: 86.3 (SE +/- 0.30, N = 15)
  1. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL

ViennaCL 1.7.1 - Test: CPU BLAS - dGEMM-NN (GFLOPs/s, more is better)
  Noctua NH-U9 TR4-SP3: 88.1 (SE +/- 0.52, N = 15)
  Dynatron A26: 88.6 (SE +/- 0.08, N = 14)
  Dynatron A38: 88.2 (SE +/- 0.34, N = 15)
  1. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL

Timed Node.js Compilation

This test profile times how long it takes to build/compile Node.js itself from source. Node.js is a JavaScript run-time built on the Chrome V8 JavaScript engine and is itself written in C/C++. Learn more via the OpenBenchmarking.org test page.

Timed Node.js Compilation 15.11 - Time To Compile (Seconds, fewer is better)
  Noctua NH-U9 TR4-SP3: 110.71 (SE +/- 0.28, N = 3)
  Dynatron A26: 111.32 (SE +/- 0.11, N = 3)
  Dynatron A38: 110.94 (SE +/- 0.30, N = 3)

GNU Radio

GNU Radio - Test: Hilbert Transform (MiB/s, more is better)
  Noctua NH-U9 TR4-SP3: 378.2 (SE +/- 1.21, N = 4)
  Dynatron A26: 377.3 (SE +/- 1.82, N = 3)
  Dynatron A38: 376.2 (SE +/- 0.76, N = 9)
  1. 3.8.1.0

Timed GDB GNU Debugger Compilation

This test times how long it takes to build the GNU Debugger (GDB) in a default configuration. Learn more via the OpenBenchmarking.org test page.

Timed GDB GNU Debugger Compilation 9.1 - Time To Compile (Seconds, fewer is better)
  Noctua NH-U9 TR4-SP3: 98.81 (SE +/- 0.12, N = 3)
  Dynatron A26: 99.33 (SE +/- 0.12, N = 3)
  Dynatron A38: 99.09 (SE +/- 0.07, N = 3)

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.4 - Preset: Medium (Seconds, fewer is better)
  Noctua NH-U9 TR4-SP3: 4.9176 (SE +/- 0.0039, N = 7)
  Dynatron A26: 4.9412 (SE +/- 0.0032, N = 7)
  Dynatron A38: 4.9424 (SE +/- 0.0054, N = 7)
  1. (CXX) g++ options: -O3 -flto -pthread

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

Liquid-DSP 2021.01.31 - Threads: 16 - Buffer Length: 256 - Filter Length: 57 (samples/s, more is better)
  Noctua NH-U9 TR4-SP3: 803310000 (SE +/- 6476302.96, N = 3)
  Dynatron A26: 800190000 (SE +/- 5535858.86, N = 3)
  Dynatron A38: 804146667 (SE +/- 2740148.01, N = 3)
  1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

GNU Radio

GNU Radio - Test: FIR Filter (MiB/s, more is better)
  Noctua NH-U9 TR4-SP3: 639.0 (SE +/- 0.72, N = 4)
  Dynatron A26: 641.6 (SE +/- 1.06, N = 3)
  Dynatron A38: 642.0 (SE +/- 0.99, N = 9)
  1. 3.8.1.0

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL, NVIDIA OptiX, and NVIDIA CUDA is supported. Learn more via the OpenBenchmarking.org test page.

Blender 2.92 - Blend File: Fishy Cat - Compute: CPU-Only (Seconds, fewer is better)
  Noctua NH-U9 TR4-SP3: 45.98 (SE +/- 0.08, N = 3)
  Dynatron A26: 45.80 (SE +/- 0.09, N = 3)
  Dynatron A38: 46.01 (SE +/- 0.06, N = 3)

SVT-HEVC

SVT-HEVC 1.5.0 - Tuning: 1 - Input: Bosphorus 1080p (Frames Per Second, more is better)
  Noctua NH-U9 TR4-SP3: 37.76 (SE +/- 0.05, N = 4)
  Dynatron A26: 37.89 (SE +/- 0.12, N = 4)
  Dynatron A38: 37.92 (SE +/- 0.10, N = 4)
  1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

GROMACS

This is a test of the GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package on the CPU with the water_GMX50 data. Learn more via the OpenBenchmarking.org test page.

GROMACS 2021 - Input: water_GMX50_bare (Ns Per Day, more is better)
  Noctua NH-U9 TR4-SP3: 5.577 (SE +/- 0.003, N = 3)
  Dynatron A26: 5.582 (SE +/- 0.018, N = 3)
  Dynatron A38: 5.599 (SE +/- 0.010, N = 3)
  1. (CXX) g++ options: -O3 -pthread

Liquid-DSP

Liquid-DSP 2021.01.31 - Threads: 64 - Buffer Length: 256 - Filter Length: 57 (samples/s, more is better)
  Noctua NH-U9 TR4-SP3: 2792400000 (SE +/- 5372460.64, N = 3)
  Dynatron A26: 2782866667 (SE +/- 3268196.92, N = 3)
  Dynatron A38: 2793733333 (SE +/- 3347304.06, N = 3)
  1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

LuaRadio

LuaRadio is a lightweight software-defined radio (SDR) framework built atop LuaJIT. LuaRadio provides a suite of source, sink, and processing blocks, with a simple API for defining flow graphs, running flow graphs, creating blocks, and creating data types. Learn more via the OpenBenchmarking.org test page.

LuaRadio 0.9.1 - Test: FM Deemphasis Filter (MiB/s, more is better)
  Noctua NH-U9 TR4-SP3: 344.7 (SE +/- 0.20, N = 3)
  Dynatron A26: 344.2 (SE +/- 0.22, N = 3)
  Dynatron A38: 343.4 (SE +/- 0.41, N = 3)

Timed Apache Compilation

This test times how long it takes to build the Apache HTTPD web server. Learn more via the OpenBenchmarking.org test page.

Timed Apache Compilation 2.4.41 - Time To Compile (Seconds, fewer is better)
  Noctua NH-U9 TR4-SP3: 23.59 (SE +/- 0.01, N = 3)
  Dynatron A26: 23.67 (SE +/- 0.02, N = 3)
  Dynatron A38: 23.59 (SE +/- 0.01, N = 3)

ViennaCL

ViennaCL 1.7.1 - Test: CPU BLAS - dDOT (GB/s, more is better)
  Noctua NH-U9 TR4-SP3: 1115 (SE +/- 2.15, N = 15)
  Dynatron A26: 1116 (SE +/- 1.73, N = 14)
  Dynatron A38: 1112 (SE +/- 2.00, N = 15)
  1. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL

Liquid-DSP

Liquid-DSP 2021.01.31 - Threads: 128 - Buffer Length: 256 - Filter Length: 57 (samples/s, more is better)
  Noctua NH-U9 TR4-SP3: 3017933333 (SE +/- 2577035.33, N = 3)
  Dynatron A26: 3028066667 (SE +/- 448454.13, N = 3)
  Dynatron A38: 3028133333 (SE +/- 266666.67, N = 3)
  1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 5.10.20 - Time To Compile (Seconds, fewer is better)
  Noctua NH-U9 TR4-SP3: 26.93 (SE +/- 0.28, N = 8)
  Dynatron A26: 26.88 (SE +/- 0.26, N = 9)
  Dynatron A38: 26.84 (SE +/- 0.26, N = 9)

OpenSCAD

OpenSCAD is a programmer-focused solid 3D CAD modeller. OpenSCAD is free software and allows creating 3D CAD objects in a script-based modelling environment. This test profile uses the system-provided OpenSCAD program and times how long it takes to render different SCAD assets to PNG output. Learn more via the OpenBenchmarking.org test page.

OpenSCAD - Render: Projector Mount Swivel (Seconds, fewer is better)
  Noctua NH-U9 TR4-SP3: 100.90 (SE +/- 0.29, N = 3)
  Dynatron A26: 100.72 (SE +/- 0.42, N = 3)
  Dynatron A38: 101.05 (SE +/- 0.14, N = 3)
  1. OpenSCAD version 2019.05

oneDNN

oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  Noctua NH-U9 TR4-SP3: 0.781851 (SE +/- 0.002596, N = 9, MIN: 0.72)
  Dynatron A26: 0.783464 (SE +/- 0.002597, N = 9, MIN: 0.72)
  Dynatron A38: 0.781060 (SE +/- 0.002274, N = 9, MIN: 0.72)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.

NAMD 2.14 - ATPase Simulation - 327,506 Atoms (days/ns, fewer is better)
  Noctua NH-U9 TR4-SP3: 0.38110 (SE +/- 0.00051, N = 3)
  Dynatron A26: 0.38164 (SE +/- 0.00041, N = 3)
  Dynatron A38: 0.38215 (SE +/- 0.00076, N = 3)
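NAMD's days/ns metric is the wall-clock days needed per nanosecond of simulated time, so smaller is faster; taking the reciprocal gives the more familiar ns/day figure. A small sketch applying that to the results above:

```python
def ns_per_day(days_per_ns):
    # Invert NAMD's days/ns metric into nanoseconds simulated per wall-clock day
    return 1.0 / days_per_ns

# days/ns results from the NAMD block above
results = {
    "Noctua NH-U9 TR4-SP3": 0.38110,
    "Dynatron A26": 0.38164,
    "Dynatron A38": 0.38215,
}
for cooler, days in results.items():
    print(f"{cooler}: {ns_per_day(days):.3f} ns/day")
```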

simdjson

simdjson 0.8.2 - Throughput Test: PartialTweets (GB/s, more is better)
  Noctua NH-U9 TR4-SP3: 3.64 (SE +/- 0.01, N = 3)
  Dynatron A26: 3.63 (SE +/- 0.00, N = 3)
  Dynatron A38: 3.63 (SE +/- 0.00, N = 3)
  1. (CXX) g++ options: -O3 -pthread

oneDNN

oneDNN 2.1.2 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
  Noctua NH-U9 TR4-SP3: 664.68 (SE +/- 1.52, N = 3, MIN: 637.44)
  Dynatron A26: 666.48 (SE +/- 1.24, N = 3, MIN: 640.16)
  Dynatron A38: 666.13 (SE +/- 1.16, N = 3, MIN: 639.05)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms, fewer is better)
  Noctua NH-U9 TR4-SP3: 1.17796 (SE +/- 0.00196, N = 4, MIN: 1.1)
  Dynatron A26: 1.17863 (SE +/- 0.00176, N = 4, MIN: 1.12)
  Dynatron A38: 1.18087 (SE +/- 0.00303, N = 4, MIN: 1.1)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

ViennaCL

ViennaCL 1.7.1 - Test: CPU BLAS - dGEMM-TT (GFLOPs/s, more is better)
  Noctua NH-U9 TR4-SP3: 89.7 (SE +/- 0.13, N = 15)
  Dynatron A26: 89.9 (SE +/- 0.02, N = 14)
  Dynatron A38: 89.9 (SE +/- 0.05, N = 15)
  1. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL

Blender

Blender 2.92 - Blend File: BMW27 - Compute: CPU-Only (Seconds, fewer is better)
  Noctua NH-U9 TR4-SP3: 31.92 (SE +/- 0.10, N = 3)
  Dynatron A26: 31.85 (SE +/- 0.04, N = 3)
  Dynatron A38: 31.92 (SE +/- 0.07, N = 3)

ViennaCL

ViennaCL 1.7.1 - Test: CPU BLAS - dGEMM-TN (GFLOPs/s, more is better)
  Noctua NH-U9 TR4-SP3: 92.1 (SE +/- 0.04, N = 15)
  Dynatron A26: 92.1 (SE +/- 0.03, N = 14)
  Dynatron A38: 91.9 (SE +/- 0.13, N = 15)
  1. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL

ASTC Encoder

ASTC Encoder 2.4 - Preset: Thorough (Seconds, fewer is better)
  Noctua NH-U9 TR4-SP3: 7.9925 (SE +/- 0.0055, N = 6)
  Dynatron A26: 8.0062 (SE +/- 0.0081, N = 6)
  Dynatron A38: 7.9898 (SE +/- 0.0064, N = 6)
  1. (CXX) g++ options: -O3 -flto -pthread

GNU GMP GMPbench

GMPbench is a test of the GNU Multiple Precision Arithmetic (GMP) Library. GMPbench is a single-threaded integer benchmark that leverages the GMP library to stress the CPU with widening integer multiplication. Learn more via the OpenBenchmarking.org test page.

GNU GMP GMPbench 6.2.1 - Total Time (GMPbench Score, more is better)
  Noctua NH-U9 TR4-SP3: 5098.8
  Dynatron A26: 5099.2
  Dynatron A38: 5089.1
  1. (CC) gcc options: -O3 -fomit-frame-pointer -lm
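The widening multiplication that GMPbench stresses produces results roughly as wide as the sum of the operands' bit widths. GMP itself is a C library; Python's built-in arbitrary-precision integers can illustrate the behavior being measured:

```python
# Two 128-bit operands; their product widens to roughly 256 bits.
# Values are illustrative only, not part of the GMPbench workload.
a = (1 << 128) - 1
b = (1 << 128) - 3
product = a * b
print(a.bit_length(), b.bit_length(), product.bit_length())  # 128 128 256
```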

Timed Mesa Compilation

This test profile times how long it takes to compile Mesa with Meson/Ninja. To minimize build dependencies and avoid versioning conflicts, this test is just the core Mesa build without LLVM or the extra Gallium3D/Mesa drivers enabled. Learn more via the OpenBenchmarking.org test page.

Timed Mesa Compilation 21.0 - Time To Compile (Seconds, fewer is better)
  Noctua NH-U9 TR4-SP3: 19.84 (SE +/- 0.09, N = 3)
  Dynatron A26: 19.80 (SE +/- 0.06, N = 3)
  Dynatron A38: 19.80 (SE +/- 0.04, N = 3)

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080p (Frames Per Second, more is better)
  Noctua NH-U9 TR4-SP3: 471.60 (SE +/- 1.14, N = 10)
  Dynatron A26: 471.09 (SE +/- 1.26, N = 10)
  Dynatron A38: 470.69 (SE +/- 1.32, N = 10)
  1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

SVT-VP9 0.3 - Tuning: VMAF Optimized - Input: Bosphorus 1080p (Frames Per Second, more is better)
  Noctua NH-U9 TR4-SP3: 468.76 (SE +/- 1.02, N = 10)
  Dynatron A26: 468.59 (SE +/- 1.01, N = 10)
  Dynatron A38: 467.96 (SE +/- 0.90, N = 10)
  1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

Blender

Blender 2.92 - Blend File: Pabellon Barcelona - Compute: CPU-Only (Seconds, fewer is better)
  Noctua NH-U9 TR4-SP3: 93.59 (SE +/- 0.06, N = 3)
  Dynatron A26: 93.72 (SE +/- 0.08, N = 3)
  Dynatron A38: 93.72 (SE +/- 0.01, N = 3)

LuaRadio

LuaRadio 0.9.1 - Test: Complex Phase (MiB/s, more is better)
  Noctua NH-U9 TR4-SP3: 591.8 (SE +/- 0.62, N = 3)
  Dynatron A26: 591.6 (SE +/- 0.72, N = 3)
  Dynatron A38: 591.0 (SE +/- 0.49, N = 3)

Blender

Blender 2.92 - Blend File: Classroom - Compute: CPU-Only (Seconds, fewer is better)
  Noctua NH-U9 TR4-SP3: 80.96 (SE +/- 0.08, N = 3)
  Dynatron A26: 80.86 (SE +/- 0.07, N = 3)
  Dynatron A38: 80.92 (SE +/- 0.08, N = 3)

oneDNN

oneDNN 2.1.2 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, fewer is better)
  Noctua NH-U9 TR4-SP3: 665.23 (SE +/- 0.56, N = 3, MIN: 638.88)
  Dynatron A26: 666.05 (SE +/- 1.29, N = 3, MIN: 638.64)
  Dynatron A38: 665.53 (SE +/- 1.18, N = 3, MIN: 639.97)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Xcompact3d Incompact3d

Xcompact3d Incompact3d is a Fortran-MPI based, finite difference high-performance code for solving the incompressible Navier-Stokes equations and as many scalar transport equations as you need. Learn more via the OpenBenchmarking.org test page.

Xcompact3d Incompact3d 2021-03-11 - Input: input.i3d 129 Cells Per Direction (Seconds, fewer is better)
  Noctua NH-U9 TR4-SP3: 5.15653072 (SE +/- 0.01747253, N = 7)
  Dynatron A26: 5.15065159 (SE +/- 0.02227690, N = 7)
  Dynatron A38: 5.15051767 (SE +/- 0.02393955, N = 7)
  1. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

Blender

Blender 2.92 - Blend File: Barbershop - Compute: CPU-Only (Seconds, fewer is better)
  Noctua NH-U9 TR4-SP3: 111.61 (SE +/- 0.05, N = 3)
  Dynatron A26: 111.48 (SE +/- 0.08, N = 3)
  Dynatron A38: 111.53 (SE +/- 0.10, N = 3)

LuaRadio

LuaRadio 0.9.1 - Test: Hilbert Transform (MiB/s, more is better)
  Noctua NH-U9 TR4-SP3: 93.5 (SE +/- 0.06, N = 3)
  Dynatron A26: 93.6 (SE +/- 0.03, N = 3)
  Dynatron A38: 93.5 (SE +/- 0.03, N = 3)

IndigoBench

IndigoBench 4.4 - Acceleration: CPU - Scene: Bedroom (M samples/s, more is better)
  Noctua NH-U9 TR4-SP3: 11.39 (SE +/- 0.03, N = 3)
  Dynatron A26: 11.40 (SE +/- 0.03, N = 3)
  Dynatron A38: 11.40 (SE +/- 0.01, N = 3)

Liquid-DSP

Liquid-DSP 2021.01.31 - Threads: 32 - Buffer Length: 256 - Filter Length: 57 (samples/s, more is better)
  Noctua NH-U9 TR4-SP3: 1613766667 (SE +/- 3887729.99, N = 3)
  Dynatron A26: 1614933333 (SE +/- 3637917.60, N = 3)
  Dynatron A38: 1614566667 (SE +/- 2380709.51, N = 3)
  1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
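Across the Liquid-DSP runs in this file, the Noctua throughput grows from roughly 803M samples/s at 16 threads to about 3.02G samples/s at 128 threads, i.e. well short of linear scaling at high thread counts. A quick sketch of scaling efficiency relative to the 16-thread per-thread rate, using the reported values:

```python
# Noctua NH-U9 TR4-SP3 Liquid-DSP throughput (samples/s) by thread count,
# taken from the Liquid-DSP result blocks in this file
throughput = {
    16: 803_310_000,
    32: 1_613_766_667,
    64: 2_792_400_000,
    128: 3_017_933_333,
}
per_thread_base = throughput[16] / 16  # per-thread rate at 16 threads
for threads in sorted(throughput):
    efficiency = throughput[threads] / (threads * per_thread_base)
    print(f"{threads:3d} threads: {efficiency:.1%} of linear scaling")
```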

oneDNN

oneDNN 2.1.2 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  Noctua NH-U9 TR4-SP3: 664.99 (SE +/- 1.05, N = 3, MIN: 636.72)
  Dynatron A26: 664.81 (SE +/- 1.25, N = 3, MIN: 637.03)
  Dynatron A38: 665.27 (SE +/- 0.96, N = 3, MIN: 637.88)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, fewer is better)
  Noctua NH-U9 TR4-SP3: 0.880047 (SE +/- 0.000667, N = 7, MIN: 0.84)
  Dynatron A26: 0.879752 (SE +/- 0.000564, N = 7, MIN: 0.84)
  Dynatron A38: 0.879480 (SE +/- 0.000819, N = 7, MIN: 0.84)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

simdjson

simdjson 0.8.2 - Throughput Test: Kostya (GB/s, more is better)
  Noctua NH-U9 TR4-SP3: 2.83 (SE +/- 0.00, N = 3)
  Dynatron A26: 2.83 (SE +/- 0.00, N = 3)
  Dynatron A38: 2.83 (SE +/- 0.01, N = 3)
  1. (CXX) g++ options: -O3 -pthread

simdjson 0.8.2 - Throughput Test: LargeRandom (GB/s, more is better)
  Noctua NH-U9 TR4-SP3: 0.96 (SE +/- 0.00, N = 3)
  Dynatron A26: 0.96 (SE +/- 0.00, N = 3)
  Dynatron A38: 0.96 (SE +/- 0.00, N = 3)
  1. (CXX) g++ options: -O3 -pthread

SVT-AV1

SVT-AV1 0.8 - Encoder Mode: Enc Mode 0 - Input: 1080p (OpenBenchmarking.org, Frames Per Second, more is better)
    Noctua NH-U9 TR4-SP3:  0.130  (SE +/- 0.001, N = 3)
    Dynatron A26:          0.130  (SE +/- 0.000, N = 3)
    Dynatron A38:          0.130  (SE +/- 0.001, N = 3)
1. (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.0 - Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4K (OpenBenchmarking.org, Frames Per Second, more is better)
    Noctua NH-U9 TR4-SP3:  0.2  (SE +/- 0.00, N = 3)
    Dynatron A26:          0.2  (SE +/- 0.00, N = 3)
    Dynatron A38:          0.2  (SE +/- 0.00, N = 3)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

AOM AV1 3.0 - Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p (OpenBenchmarking.org, Frames Per Second, more is better)
    Noctua NH-U9 TR4-SP3:  0.50  (SE +/- 0.00, N = 3)
    Dynatron A26:          0.50  (SE +/- 0.00, N = 3)
    Dynatron A38:          0.50  (SE +/- 0.00, N = 3)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

CPU Temperature Monitor

CPU Temperature Monitor - Phoronix Test Suite System Monitoring (OpenBenchmarking.org, Celsius)
    Noctua NH-U9 TR4-SP3:  Min: 41.5  / Avg: 56.86 / Max: 79.5
    Dynatron A26:          Min: 41    / Avg: 59.01 / Max: 79.25
    Dynatron A38:          Min: 40.25 / Avg: 51.96 / Max: 70.25
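The temperature monitor reduces the per-interval sensor readings collected across the whole run to the Min/Avg/Max triple shown above. A minimal sketch of that reduction (the readings below are illustrative, not actual sensor data from this run):

```python
def summarize(samples):
    """Reduce a series of Celsius readings to the reported Min/Avg/Max triple."""
    return min(samples), sum(samples) / len(samples), max(samples)

readings = [41.5, 55.0, 62.25, 79.5, 46.0]  # illustrative sensor samples
lo, avg, hi = summarize(readings)
print(f"Min: {lo} / Avg: {avg:.2f} / Max: {hi}")
```

The averages are the most comparable figure between coolers, since the minimum mostly reflects idle periods between tests and the maximum a single worst-case spike.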

ViennaCL

ViennaCL is an open-source linear algebra library written in C++ with support for OpenCL and OpenMP. This test profile makes use of ViennaCL's built-in benchmarks. Learn more via the OpenBenchmarking.org test page.
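The BLAS results below are reported as effective memory bandwidth in GB/s; for dGEMV-N that is a double-precision matrix-vector product (y = A·x, non-transposed), whose traffic is dominated by streaming the matrix once. A rough pure-Python sketch of how such a GB/s figure can be derived (the exact byte accounting and the function below are assumptions for illustration, not ViennaCL's code):

```python
import time

def dgemv_gbps(n: int = 400) -> float:
    """Time y = A @ x for an n-by-n double matrix and report GB/s,
    counting only the dominant traffic: reading the n*n matrix at
    8 bytes per double element."""
    a = [[float(i + j) for j in range(n)] for i in range(n)]
    x = [1.0] * n
    start = time.perf_counter()
    y = [sum(row[j] * x[j] for j in range(n)) for row in a]  # y = A @ x
    elapsed = time.perf_counter() - start
    return n * n * 8 / 1e9 / elapsed

print(f"{dgemv_gbps():.3f} GB/s")
```

Being bandwidth-bound rather than compute-bound, these tests run at lower power than the encoder workloads, which is consistent with the small spread between coolers below.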

ViennaCL 1.7.1 - Test: CPU BLAS - dGEMV-N (OpenBenchmarking.org, GB/s, more is better)
    Noctua NH-U9 TR4-SP3:  88.6  (SE +/- 7.63, N = 15)
    Dynatron A26:          82.6  (SE +/- 9.76, N = 14)
    Dynatron A38:          78.9  (SE +/- 4.88, N = 15)
1. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL

ViennaCL 1.7.1 - Test: CPU BLAS - sCOPY (OpenBenchmarking.org, GB/s, more is better)
    Noctua NH-U9 TR4-SP3:  1035  (SE +/- 30.05, N = 15)
    Dynatron A26:          1052  (SE +/- 28.51, N = 14)
    Dynatron A38:          1044  (SE +/- 27.57, N = 15)
1. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL

SVT-VP9

SVT-VP9 0.3 - CPU Temperature Monitor (OpenBenchmarking.org, Celsius, fewer is better)
    Noctua NH-U9 TR4-SP3:  Min: 47.0 / Avg: 49.7 / Max: 58.5
    Dynatron A26:          Min: 47.5 / Avg: 50.6 / Max: 58.0
    Dynatron A38:          Min: 44.0 / Avg: 46.5 / Max: 54.5

SVT-VP9 0.3 - Tuning: Visual Quality Optimized - Input: Bosphorus 1080p (OpenBenchmarking.org, Frames Per Second, more is better)
    Noctua NH-U9 TR4-SP3:  345.24  (SE +/- 5.87, N = 15)
    Dynatron A26:          347.78  (SE +/- 6.37, N = 15)
    Dynatron A38:          346.82  (SE +/- 6.16, N = 15)
1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

108 Results Shown

Xcompact3d Incompact3d
oneDNN
AOM AV1:
  Speed 9 Realtime - Bosphorus 1080p
  Speed 4 Two-Pass - Bosphorus 4K
Stockfish
AOM AV1
ViennaCL
srsLTE
oneDNN
AOM AV1
ViennaCL
AOM AV1
Xcompact3d Incompact3d
AOM AV1
Mobile Neural Network
ViennaCL
oneDNN
Timed Erlang/OTP Compilation
GNU Radio
OpenSCAD
SVT-AV1
oneDNN:
  IP Shapes 3D - f32 - CPU
  Convolution Batch Shapes Auto - u8s8f32 - CPU
Mobile Neural Network
LuaRadio
OpenSCAD
oneDNN
OpenSCAD
Chaos Group V-RAY
AOM AV1
Mobile Neural Network
oneDNN
OpenSCAD
AOM AV1
ASTC Encoder
AOM AV1
Mobile Neural Network
oneDNN:
  Matrix Multiply Batch Shapes Transformer - u8s8f32 - CPU
  IP Shapes 1D - u8s8f32 - CPU
SVT-AV1
simdjson
AOM AV1
SVT-HEVC
ViennaCL
IndigoBench
oneDNN
GNU Radio:
  FM Deemphasis Filter
  IIR Filter
srsLTE
GNU Radio
ViennaCL
oneDNN
SVT-HEVC
oneDNN
Mobile Neural Network
srsLTE
ViennaCL:
  CPU BLAS - dGEMM-NT
  CPU BLAS - dGEMM-NN
Timed Node.js Compilation
GNU Radio
Timed GDB GNU Debugger Compilation
ASTC Encoder
Liquid-DSP
GNU Radio
Blender
SVT-HEVC
GROMACS
Liquid-DSP
LuaRadio
Timed Apache Compilation
ViennaCL
Liquid-DSP
Timed Linux Kernel Compilation
OpenSCAD
oneDNN
NAMD
simdjson
oneDNN:
  Recurrent Neural Network Inference - bf16bf16bf16 - CPU
  IP Shapes 1D - f32 - CPU
ViennaCL
Blender
ViennaCL
ASTC Encoder
GNU GMP GMPbench
Timed Mesa Compilation
SVT-VP9:
  PSNR/SSIM Optimized - Bosphorus 1080p
  VMAF Optimized - Bosphorus 1080p
Blender
LuaRadio
Blender
oneDNN
Xcompact3d Incompact3d
Blender
LuaRadio
IndigoBench
Liquid-DSP
oneDNN:
  Recurrent Neural Network Inference - u8s8f32 - CPU
  Convolution Batch Shapes Auto - f32 - CPU
simdjson:
  Kostya
  LargeRand
SVT-AV1
AOM AV1:
  Speed 0 Two-Pass - Bosphorus 4K
  Speed 0 Two-Pass - Bosphorus 1080p
CPU Temperature Monitor
ViennaCL:
  CPU BLAS - dGEMV-N
  CPU BLAS - sCOPY
SVT-VP9
SVT-VP9