Noctua SP6 Heatsinks AMD EPYC Siena Benchmarks

AMD EPYC 8534P 64-Core testing of various heatsinks/coolers.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2311164-NE-HSF59996100
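In practice that is a two-step process; a minimal sketch is below, assuming the Phoronix Test Suite is installed from the distribution's package archive (only the result-file ID is taken from this page, everything else is illustrative):

  # Install the Phoronix Test Suite (Ubuntu package name; other distributions differ)
  sudo apt install phoronix-test-suite
  # Re-run this result file's test selection and merge your own numbers into it
  phoronix-test-suite benchmark 2311164-NE-HSF59996100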


Test Runs

Dynatron A54: tested November 11 2023, test duration 14 Hours, 27 Minutes
NH-D9 TR5-SP6 4U: tested November 13 2023, test duration 15 Hours, 3 Minutes
Noctua NH-U14S TR5-SP6: tested November 14 2023, test duration 14 Hours, 8 Minutes


System Details

Processor: AMD EPYC 8534P 64-Core @ 2.30GHz (64 Cores / 128 Threads)
Motherboard: AMD Cinnabar (RCB1009C BIOS)
Chipset: AMD Device 14a4
Memory: 192GB
Disk: 800GB Micron_7450_MTFDKBA800TFS
Graphics: ASPEED
Network: 2 x Broadcom NetXtreme BCM5720 PCIe
OS: Ubuntu 23.10
Kernel: 6.5.0-5-generic (x86_64)
Desktop: GNOME Shell 45.0
Display Server: X Server 1.21.1.7
Compiler: GCC 13.2.0
File-System: ext4
Screen Resolution: 640x480

System Notes: Transparent Huge Pages: madvise. Scaling Governor: acpi-cpufreq performance (Boost: Enabled). CPU Microcode: 0xaa00212. OpenJDK Runtime Environment (build 11.0.20+8-post-Ubuntu-1ubuntu1). Python 3.11.5.

Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Security Notes: gather_data_sampling: Not affected; itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; retbleed: Not affected; spec_rstack_overflow: Mitigation of safe RET; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Enhanced / Automatic IBRS, IBPB: conditional, STIBP: always-on, RSB filling, PBRSB-eIBRS: Not affected; srbds: Not affected; tsx_async_abort: Not affected

Result Overview (Phoronix Test Suite, relative performance of the three coolers spanning roughly 100% to 107%). Test suites covered: C-Blosc, PostgreSQL, OpenFOAM, Apache IoTDB, Timed Gem5 Compilation, TiDB Community Server, OpenVINO, Blender, Timed Linux Kernel Compilation, Coremark, QMCPACK, CloverLeaf, Timed Godot Game Engine Compilation, oneDNN, Timed LLVM Compilation, easyWave, Timed Node.js Compilation, QuantLib, 7-Zip Compression, DuckDB, OpenVKL, Embree, Cpuminer-Opt, RocksDB, DaCapo Benchmark, OSPRay Studio, Intel Open Image Denoise.

Noctua SP6 Heatsinks AMD EPYC Siena Benchmarks: combined raw result table for the three coolers (individual test results are presented below).

OpenFOAM

OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.

OpenFOAM 10 - Input: drivaerFastback, Small Mesh Size - Mesh Time (Seconds, fewer is better)
Noctua NH-U14S TR5-SP6: 25.51 | NH-D9 TR5-SP6 4U: 28.74 | Dynatron A54: 29.83

OpenFOAM 10 - Input: drivaerFastback, Medium Mesh Size - Mesh Time (Seconds, fewer is better)
Noctua NH-U14S TR5-SP6: 157.70 | NH-D9 TR5-SP6 4U: 142.22 | Dynatron A54: 157.33

PostgreSQL

This is a benchmark of PostgreSQL using the integrated pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.
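As a point of reference, the configuration reported below (scaling factor 100, 1000 clients, read-write mode) maps roughly onto the following pgbench invocations; the database name, worker-thread count and run duration are illustrative assumptions rather than values recorded by the test profile:

  # Initialize a scale-factor-100 dataset (database name "pgbench" is an assumption)
  pgbench -i -s 100 pgbench
  # 1000-client read-write run; the -j and -T values are assumptions
  pgbench -c 1000 -j 64 -T 60 pgbench
  # Read-only variant using the built-in select-only script
  pgbench -c 1000 -j 64 -T 60 -S pgbench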

PostgreSQL 16 - Scaling Factor: 100 - Clients: 1000 - Mode: Read Write - Average Latency (ms, fewer is better)
Noctua NH-U14S TR5-SP6: 18.48 | NH-D9 TR5-SP6 4U: 16.86 | Dynatron A54: 17.24

PostgreSQL 16 - Scaling Factor: 100 - Clients: 1000 - Mode: Read Write (TPS, more is better)
Noctua NH-U14S TR5-SP6: 54139 | NH-D9 TR5-SP6 4U: 59315 | Dynatron A54: 58014

C-Blosc

C-Blosc (c-blosc2) is a simple, compressed, fast and persistent data store library for C that focuses on compression of binary data. Learn more via the OpenBenchmarking.org test page.

C-Blosc 2.11 (MB/s, more is better; values listed as Noctua NH-U14S TR5-SP6 / NH-D9 TR5-SP6 4U / Dynatron A54)

Test: blosclz shuffle - Buffer Size: 8MB: 16080.9 / 16126.0 / 14908.1
Test: blosclz shuffle - Buffer Size: 16MB: 15989.6 / 15912.8 / 14860.5
Test: blosclz shuffle - Buffer Size: 32MB: 15555.1 / 15492.0 / 14333.3
Test: blosclz shuffle - Buffer Size: 64MB: 13202.1 / 13190.0 / 12050.1
Test: blosclz shuffle - Buffer Size: 128MB: 9755.0 / 9737.5 / 8925.6
Test: blosclz shuffle - Buffer Size: 256MB: 6490.8 / 6481.6 / 6001.5
Test: blosclz bitshuffle - Buffer Size: 8MB: 16458.3 / 16452.1 / 15572.0
Test: blosclz bitshuffle - Buffer Size: 16MB: 15881.8 / 15886.2 / 15140.7
Test: blosclz bitshuffle - Buffer Size: 32MB: 15080.9 / 15024.5 / 14307.0
Test: blosclz bitshuffle - Buffer Size: 64MB: 12663.6 / 12587.3 / 11878.5
Test: blosclz bitshuffle - Buffer Size: 128MB: 9638.0 / 9624.4 / 9041.5
Test: blosclz bitshuffle - Buffer Size: 256MB: 6294.2 / 6352.9 / 5919.9
Test: blosclz noshuffle - Buffer Size: 8MB: 14075.7 / 13984.9 / 13413.6
Test: blosclz noshuffle - Buffer Size: 16MB: 14003.8 / 13911.1 / 13207.2
Test: blosclz noshuffle - Buffer Size: 32MB: 13397.1 / 13354.5 / 12666.5
Test: blosclz noshuffle - Buffer Size: 64MB: 11496.2 / 11486.9 / 10874.0
Test: blosclz noshuffle - Buffer Size: 128MB: 8907.9 / 8865.1 / 8347.2
Test: blosclz noshuffle - Buffer Size: 256MB: 6055.7 / 6041.5 / 5691.2

TiDB Community Server

This is a PingCAP TiDB Community Server benchmark facilitated using the sysbench OLTP database benchmarks. Learn more via the OpenBenchmarking.org test page.
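The oltp_* figures below correspond to sysbench's standard OLTP Lua scripts driven against TiDB's MySQL-compatible endpoint; a hedged sketch of such a 128-thread run follows, where the host, port, table count, table size and duration are illustrative assumptions rather than this profile's exact parameters:

  # Populate the test tables (TiDB commonly speaks the MySQL protocol on port 4000)
  sysbench oltp_update_index --mysql-host=127.0.0.1 --mysql-port=4000 --mysql-user=root \
      --tables=16 --table-size=100000 prepare
  # 128-thread oltp_update_index run; swap in oltp_update_non_index, oltp_point_select
  # or oltp_read_write for the other results in this file
  sysbench oltp_update_index --mysql-host=127.0.0.1 --mysql-port=4000 --mysql-user=root \
      --tables=16 --table-size=100000 --threads=128 --time=60 run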

TiDB Community Server 7.3 - Test: oltp_update_non_index - Threads: 128 (Queries Per Second, more is better)
Noctua NH-U14S TR5-SP6: 55940 | NH-D9 TR5-SP6 4U: 58585 | Dynatron A54: 57560

Apache IoTDB

Apache IoTDB is a time series database and this benchmark is facilitated using the IoT Benchmark [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.

Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 400 - Average Latency (fewer is better)
Noctua NH-U14S TR5-SP6: 265.57 | NH-D9 TR5-SP6 4U: 254.49 | Dynatron A54: 256.80

TiDB Community Server


TiDB Community Server 7.3 - Test: oltp_update_index - Threads: 128 (Queries Per Second, more is better)
Noctua NH-U14S TR5-SP6: 30449 | NH-D9 TR5-SP6 4U: 31756 | Dynatron A54: 31769

Apache IoTDB


Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 400 (point/sec, more is better)
Noctua NH-U14S TR5-SP6: 111831020 | NH-D9 TR5-SP6 4U: 116156551 | Dynatron A54: 115899869

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.
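The figures below come from OpenVINO's bundled benchmark_app; a minimal sketch of an equivalent manual CPU run is shown here, where the model path and run duration are illustrative assumptions rather than part of this test profile:

  # Throughput-oriented CPU benchmark of an OpenVINO IR model (path is an assumption)
  benchmark_app -m ./model.xml -d CPU -hint throughput -t 60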

OpenVINO 2023.2.dev - Model: Handwritten English Recognition FP16-INT8 - Device: CPU (ms, fewer is better)
Noctua NH-U14S TR5-SP6: 42.81 | NH-D9 TR5-SP6 4U: 42.51 | Dynatron A54: 44.00

OpenVINO 2023.2.dev - Model: Handwritten English Recognition FP16-INT8 - Device: CPU (FPS, more is better)
Noctua NH-U14S TR5-SP6: 1493.83 | NH-D9 TR5-SP6 4U: 1504.57 | Dynatron A54: 1456.81

DaCapo Benchmark

This test runs the DaCapo Benchmarks, which are written in Java and intended to test system/CPU performance across various popular real-world Java workloads. Learn more via the OpenBenchmarking.org test page.
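Each DaCapo result is the execution time reported by the harness for a workload iteration; a hedged sketch of invoking one workload directly, assuming the 23.11 release jar is named dacapo-23.11-chopin.jar:

  # Single run of the H2 workload, reported in msec (jar name is an assumption)
  java -jar dacapo-23.11-chopin.jar h2
  # -n runs several iterations so the final, measured one executes warm
  java -jar dacapo-23.11-chopin.jar -n 3 h2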

DaCapo Benchmark 23.11 - Java Test: Tradebeans (msec, fewer is better)
Noctua NH-U14S TR5-SP6: 8864 | NH-D9 TR5-SP6 4U: 8762 | Dynatron A54: 9034

DaCapo Benchmark 23.11 - Java Test: Apache Lucene Search Engine (msec, fewer is better)
Noctua NH-U14S TR5-SP6: 1623 | NH-D9 TR5-SP6 4U: 1623 | Dynatron A54: 1575

DaCapo Benchmark 23.11 - Java Test: H2 Database Engine (msec, fewer is better)
Noctua NH-U14S TR5-SP6: 2550 | NH-D9 TR5-SP6 4U: 2596 | Dynatron A54: 2619

QMCPACK

QMCPACK is a modern high-performance open-source Quantum Monte Carlo (QMC) simulation code making use of MPI for this benchmark of the H2O example code. QMCPACK is an open-source production-level many-body ab initio Quantum Monte Carlo code for computing the electronic structure of atoms, molecules, and solids. QMCPACK is supported by the U.S. Department of Energy. Learn more via the OpenBenchmarking.org test page.

QMCPACK 3.17.1 - Input: LiH_ae_MSD (Total Execution Time in Seconds, fewer is better)
Noctua NH-U14S TR5-SP6: 81.99 | NH-D9 TR5-SP6 4U: 79.94 | Dynatron A54: 81.51

Blender

Blender 3.6 - Blend File: Fishy Cat - Compute: CPU-Only (Seconds, fewer is better)
Noctua NH-U14S TR5-SP6: 27.74 | NH-D9 TR5-SP6 4U: 27.82 | Dynatron A54: 28.44

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.3 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
Noctua NH-U14S TR5-SP6: 0.368487 | NH-D9 TR5-SP6 4U: 0.373124 | Dynatron A54: 0.364265

OpenVINO


OpenVINO 2023.2.dev - Model: Handwritten English Recognition FP16 - Device: CPU (ms, fewer is better)
Noctua NH-U14S TR5-SP6: 36.92 | NH-D9 TR5-SP6 4U: 37.02 | Dynatron A54: 37.75

OpenVINO 2023.2.dev - Model: Handwritten English Recognition FP16 - Device: CPU (FPS, more is better)
Noctua NH-U14S TR5-SP6: 1731.90 | NH-D9 TR5-SP6 4U: 1727.36 | Dynatron A54: 1693.90

oneDNN


oneDNN 3.3 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
Noctua NH-U14S TR5-SP6: 0.523878 | NH-D9 TR5-SP6 4U: 0.524059 | Dynatron A54: 0.535626

OpenVINO


OpenVINO 2023.2.dev - Model: Vehicle Detection FP16-INT8 - Device: CPU (ms, fewer is better)
Noctua NH-U14S TR5-SP6: 9.59 | NH-D9 TR5-SP6 4U: 9.53 | Dynatron A54: 9.73

OpenVINO 2023.2.dev - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (ms, fewer is better)
Noctua NH-U14S TR5-SP6: 0.48 | NH-D9 TR5-SP6 4U: 0.48 | Dynatron A54: 0.49

OpenVINO 2023.2.dev - Model: Vehicle Detection FP16-INT8 - Device: CPU (FPS, more is better)
Noctua NH-U14S TR5-SP6: 3326.95 | NH-D9 TR5-SP6 4U: 3347.28 | Dynatron A54: 3279.51

DaCapo Benchmark


DaCapo Benchmark 23.11 - Java Test: GraphChi (msec, fewer is better)
Noctua NH-U14S TR5-SP6: 3379 | NH-D9 TR5-SP6 4U: 3443 | Dynatron A54: 3433

RocksDB

This is a benchmark of Meta/Facebook's RocksDB, an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.
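The workload names here correspond to RocksDB's bundled db_bench utility; a hedged sketch of a comparable run, where the key count and thread count are illustrative assumptions:

  # Fill a database, then run the mixed read-random-write-random workload
  ./db_bench --benchmarks=fillrandom,readrandomwriterandom --num=1000000 --threads=32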

RocksDB 8.0 - Test: Read Random Write Random (Op/s, more is better)
Noctua NH-U14S TR5-SP6: 2681867 | NH-D9 TR5-SP6 4U: 2714781 | Dynatron A54: 2732194

oneDNN


oneDNN 3.3 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
Noctua NH-U14S TR5-SP6: 547.62 | NH-D9 TR5-SP6 4U: 548.02 | Dynatron A54: 557.87

OpenVINO


OpenVINO 2023.2.dev - Model: Person Detection FP32 - Device: CPU (ms, fewer is better)
Noctua NH-U14S TR5-SP6: 151.70 | NH-D9 TR5-SP6 4U: 150.81 | Dynatron A54: 153.60

DaCapo Benchmark


DaCapo Benchmark 23.11 - Java Test: Apache Kafka (msec, fewer is better)
Noctua NH-U14S TR5-SP6: 5108 | NH-D9 TR5-SP6 4U: 5202 | Dynatron A54: 5200

OpenVINO


OpenVINO 2023.2.dev - Model: Person Detection FP32 - Device: CPU (FPS, more is better)
Noctua NH-U14S TR5-SP6: 210.72 | NH-D9 TR5-SP6 4U: 211.98 | Dynatron A54: 208.15

DaCapo Benchmark


DaCapo Benchmark 23.11 - Java Test: BioJava Biological Data Framework (msec, fewer is better)
Noctua NH-U14S TR5-SP6: 7819 | NH-D9 TR5-SP6 4U: 7781 | Dynatron A54: 7679

Timed Gem5 Compilation

This test times how long it takes to compile Gem5. Gem5 is a simulator for computer system architecture research. Gem5 is widely used for computer architecture research within the industry, academia, and more. Learn more via the OpenBenchmarking.org test page.
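The timed compilation corresponds to a scons build of gem5; a minimal sketch, assuming the common optimized X86 target:

  # Build the optimized X86 gem5 binary using every available core
  scons build/X86/gem5.opt -j$(nproc)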

Timed Gem5 Compilation 23.0.1 - Time To Compile (Seconds, fewer is better)
Noctua NH-U14S TR5-SP6: 212.96 | NH-D9 TR5-SP6 4U: 211.14 | Dynatron A54: 214.83

CloverLeaf

CloverLeaf is a Lagrangian-Eulerian hydrodynamics benchmark. This test profile currently makes use of CloverLeaf's OpenMP version. Learn more via the OpenBenchmarking.org test page.
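A hedged sketch of running the OpenMP build by hand is below; the binary name and the use of every hardware thread are assumptions, and the clover_bm problem is normally supplied as the clover.in input file in the working directory:

  # Run CloverLeaf's OpenMP build across all hardware threads (binary name is an assumption)
  OMP_NUM_THREADS=$(nproc) ./clover_leaf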

CloverLeaf 1.3 - Input: clover_bm (Seconds, fewer is better)
Noctua NH-U14S TR5-SP6: 12.65 | NH-D9 TR5-SP6 4U: 12.77 | Dynatron A54: 12.87

Blender

Blender 3.6 - Blend File: Pabellon Barcelona - Compute: CPU-Only (Seconds, fewer is better)
Noctua NH-U14S TR5-SP6: 66.72 | NH-D9 TR5-SP6 4U: 67.00 | Dynatron A54: 67.87

oneDNN


oneDNN 3.3 - Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
Noctua NH-U14S TR5-SP6: 1.32645 | NH-D9 TR5-SP6 4U: 1.34910 | Dynatron A54: 1.34801

OpenVINO


OpenVINO 2023.2.dev - Model: Face Detection Retail FP16-INT8 - Device: CPU (ms, fewer is better)
Noctua NH-U14S TR5-SP6: 5.39 | NH-D9 TR5-SP6 4U: 5.38 | Dynatron A54: 5.47

OpenVINO 2023.2.dev - Model: Face Detection Retail FP16-INT8 - Device: CPU (FPS, more is better)
Noctua NH-U14S TR5-SP6: 11830.60 | NH-D9 TR5-SP6 4U: 11851.02 | Dynatron A54: 11656.71

OpenVINO 2023.2.dev - Model: Face Detection Retail FP16 - Device: CPU (FPS, more is better)
Noctua NH-U14S TR5-SP6: 8174.44 | NH-D9 TR5-SP6 4U: 8169.00 | Dynatron A54: 8043.88

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.13Camera: 2 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A547K14K21K28K35KSE +/- 289.97, N = 3SE +/- 287.98, N = 3SE +/- 146.71, N = 3331243289133410
OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.13Camera: 2 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A546K12K18K24K30KMin: 32578 / Avg: 33124.33 / Max: 33566Min: 32521 / Avg: 32890.67 / Max: 33458Min: 33118 / Avg: 33410 / Max: 33581

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Face Detection Retail FP16 - Device: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A540.8911.7822.6733.5644.455SE +/- 0.00, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 33.903.903.96MIN: 2.2 / MAX: 22.15MIN: 2.2 / MAX: 29.97MIN: 2.22 / MAX: 28.641. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie
OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Face Detection Retail FP16 - Device: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54246810Min: 3.89 / Avg: 3.9 / Max: 3.9Min: 3.89 / Avg: 3.9 / Max: 3.91Min: 3.95 / Avg: 3.96 / Max: 3.971. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Age Gender Recognition Retail 0013 FP16 - Device: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A540.14850.2970.44550.5940.7425SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 30.650.650.66MIN: 0.34 / MAX: 20.93MIN: 0.34 / MAX: 20.26MIN: 0.34 / MAX: 23.671. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie
OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Age Gender Recognition Retail 0013 FP16 - Device: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54246810Min: 0.65 / Avg: 0.65 / Max: 0.65Min: 0.64 / Avg: 0.65 / Max: 0.65Min: 0.65 / Avg: 0.66 / Max: 0.661. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

QuantLib

QuantLib is an open-source library/framework for quantitative finance, covering modeling, trading and risk management scenarios. QuantLib is written in C++ with Boost, and its built-in benchmark reports the QuantLib Benchmark Index score. Learn more via the OpenBenchmarking.org test page.
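
The result here comes from QuantLib's own C++ benchmark binary. Purely as a flavor of what the library does, here is a minimal sketch using the separate QuantLib Python bindings to price a European call under Black-Scholes; all dates, rates and volatilities below are made-up example values, not anything used by the benchmark.

    import QuantLib as ql

    today = ql.Date(15, ql.November, 2023)
    ql.Settings.instance().evaluationDate = today

    # Flat market assumptions chosen purely for illustration.
    spot = ql.QuoteHandle(ql.SimpleQuote(100.0))
    rates = ql.YieldTermStructureHandle(ql.FlatForward(today, 0.03, ql.Actual365Fixed()))
    divs = ql.YieldTermStructureHandle(ql.FlatForward(today, 0.00, ql.Actual365Fixed()))
    vol = ql.BlackVolTermStructureHandle(
        ql.BlackConstantVol(today, ql.TARGET(), 0.20, ql.Actual365Fixed()))
    process = ql.BlackScholesMertonProcess(spot, divs, rates, vol)

    # One-year European call struck at 105, priced analytically.
    option = ql.VanillaOption(
        ql.PlainVanillaPayoff(ql.Option.Call, 105.0),
        ql.EuropeanExercise(ql.Date(15, ql.November, 2024)))
    option.setPricingEngine(ql.AnalyticEuropeanEngine(process))
    print(f"NPV: {option.NPV():.4f}")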

OpenBenchmarking.orgMFLOPS, More Is BetterQuantLib 1.32Configuration: Multi-ThreadedNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5450K100K150K200K250KSE +/- 188.26, N = 3SE +/- 116.53, N = 3SE +/- 472.20, N = 3219225.4218763.0215906.71. (CXX) g++ options: -O3 -march=native -fPIE -pie
OpenBenchmarking.orgMFLOPS, More Is BetterQuantLib 1.32Configuration: Multi-ThreadedNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5440K80K120K160K200KMin: 218868.1 / Avg: 219225.4 / Max: 219506.9Min: 218556.4 / Avg: 218763 / Max: 218959.7Min: 215411.9 / Avg: 215906.67 / Max: 216850.71. (CXX) g++ options: -O3 -march=native -fPIE -pie

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A540.20580.41160.61740.82321.029SE +/- 0.001480, N = 7SE +/- 0.001721, N = 7SE +/- 0.002277, N = 60.9145530.9132180.900905MIN: 0.84MIN: 0.84MIN: 0.841. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54246810Min: 0.91 / Avg: 0.91 / Max: 0.92Min: 0.91 / Avg: 0.91 / Max: 0.92Min: 0.9 / Avg: 0.9 / Max: 0.911. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.2.devModel: Weld Porosity Detection FP16 - Device: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A548001600240032004000SE +/- 1.00, N = 3SE +/- 1.77, N = 3SE +/- 1.78, N = 33545.103539.283492.711. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie
OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.2.devModel: Weld Porosity Detection FP16 - Device: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A546001200180024003000Min: 3543.21 / Avg: 3545.1 / Max: 3546.61Min: 3536.05 / Avg: 3539.28 / Max: 3542.13Min: 3490.74 / Avg: 3492.71 / Max: 3496.261. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Weld Porosity Detection FP16 - Device: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54510152025SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 318.0318.0518.30MIN: 9.7 / MAX: 29.11MIN: 9.42 / MAX: 37.14MIN: 9.51 / MAX: 69.81. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie
OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Weld Porosity Detection FP16 - Device: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54510152025Min: 18.02 / Avg: 18.03 / Max: 18.04Min: 18.04 / Avg: 18.05 / Max: 18.07Min: 18.28 / Avg: 18.3 / Max: 18.311. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance of various popular real-world Java workloads. Learn more via the OpenBenchmarking.org test page.
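
A typical invocation looks like the sketch below: the harness is a single jar that takes a workload name and prints its own msec timing, which is what this test profile records. The jar filename is a placeholder for a locally downloaded DaCapo 23.11 release, and the outer wall-clock timing is only illustrative.

    import subprocess, time

    # Placeholder path to a locally downloaded DaCapo release jar.
    jar = "dacapo-23.11-chopin.jar"
    workload = "fop"  # any suite workload name, e.g. "jython", "tomcat"

    start = time.perf_counter()
    # DaCapo itself prints a line like "===== DaCapo ... PASSED in NNN msec ====="
    subprocess.run(["java", "-jar", jar, workload], check=True)
    print(f"wall-clock time: {time.perf_counter() - start:.1f} s")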

OpenBenchmarking.orgmsec, Fewer Is BetterDaCapo Benchmark 23.11Java Test: Zxing 1D/2D Barcode Image ProcessingNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5490180270360450SE +/- 5.23, N = 15SE +/- 5.03, N = 15SE +/- 4.30, N = 15405411405
OpenBenchmarking.orgmsec, Fewer Is BetterDaCapo Benchmark 23.11Java Test: Zxing 1D/2D Barcode Image ProcessingNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5470140210280350Min: 383 / Avg: 405.4 / Max: 447Min: 385 / Avg: 411.47 / Max: 453Min: 378 / Avg: 405.13 / Max: 428

OpenBenchmarking.orgmsec, Fewer Is BetterDaCapo Benchmark 23.11Java Test: JythonNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5414002800420056007000SE +/- 17.03, N = 3SE +/- 14.26, N = 3SE +/- 82.91, N = 3659665056501
OpenBenchmarking.orgmsec, Fewer Is BetterDaCapo Benchmark 23.11Java Test: JythonNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5411002200330044005500Min: 6566 / Avg: 6595.67 / Max: 6625Min: 6479 / Avg: 6505.33 / Max: 6528Min: 6345 / Avg: 6500.67 / Max: 6628

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.2.devModel: Road Segmentation ADAS FP16-INT8 - Device: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5430060090012001500SE +/- 3.64, N = 3SE +/- 6.52, N = 3SE +/- 3.32, N = 31182.291179.511165.381. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie
OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.2.devModel: Road Segmentation ADAS FP16-INT8 - Device: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A542004006008001000Min: 1175.62 / Avg: 1182.29 / Max: 1188.14Min: 1166.93 / Avg: 1179.51 / Max: 1188.75Min: 1159.76 / Avg: 1165.38 / Max: 1171.261. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Road Segmentation ADAS FP16-INT8 - Device: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54612182430SE +/- 0.08, N = 3SE +/- 0.15, N = 3SE +/- 0.08, N = 327.0327.0927.41MIN: 13.91 / MAX: 47.64MIN: 18.57 / MAX: 45.49MIN: 13.94 / MAX: 51.141. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie
OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Road Segmentation ADAS FP16-INT8 - Device: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54612182430Min: 26.89 / Avg: 27.03 / Max: 27.18Min: 26.87 / Avg: 27.09 / Max: 27.38Min: 27.28 / Avg: 27.41 / Max: 27.541. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.2.devModel: Face Detection FP16 - Device: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54816243240SE +/- 0.04, N = 3SE +/- 0.02, N = 3SE +/- 0.02, N = 334.7534.6834.271. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie
OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.2.devModel: Face Detection FP16 - Device: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54714212835Min: 34.67 / Avg: 34.75 / Max: 34.8Min: 34.65 / Avg: 34.68 / Max: 34.7Min: 34.23 / Avg: 34.27 / Max: 34.311. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

Apache IoTDB

Apache IoTDB is a time series database and this benchmark is facilitated using the IoT Benchmark (https://github.com/thulab/iot-benchmark/). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgpoint/sec, More Is BetterApache IoTDB 1.2Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 400Noctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5420M40M60M80M100MSE +/- 600215.19, N = 3SE +/- 936061.82, N = 3SE +/- 673354.88, N = 3101753914103136072103175766
OpenBenchmarking.orgpoint/sec, More Is BetterApache IoTDB 1.2Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 400Noctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5420M40M60M80M100MMin: 100708710.26 / Avg: 101753913.68 / Max: 102787824.66Min: 101843331.5 / Avg: 103136072.31 / Max: 104955153.72Min: 102335227.2 / Avg: 103175766.47 / Max: 104507266.42

RocksDB

This is a benchmark of Meta/Facebook's RocksDB, an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgOp/s, More Is BetterRocksDB 8.0Test: Read While WritingNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A542M4M6M8M10MSE +/- 49439.53, N = 3SE +/- 23377.20, N = 3SE +/- 49692.28, N = 37961993801704879067371. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
OpenBenchmarking.orgOp/s, More Is BetterRocksDB 8.0Test: Read While WritingNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A541.4M2.8M4.2M5.6M7MMin: 7863373 / Avg: 7961993.33 / Max: 8017494Min: 7987089 / Avg: 8017047.67 / Max: 8063113Min: 7815023 / Avg: 7906736.67 / Max: 79857511. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Embree

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.3Binary: Pathtracer - Model: CrownNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A541632486480SE +/- 0.13, N = 5SE +/- 0.10, N = 5SE +/- 0.11, N = 572.4572.2871.45MIN: 71.25 / MAX: 74.25MIN: 71.09 / MAX: 74.2MIN: 70.33 / MAX: 73.11
OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.3Binary: Pathtracer - Model: CrownNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A541428425670Min: 72.16 / Avg: 72.45 / Max: 72.94Min: 72.08 / Avg: 72.28 / Max: 72.67Min: 71.16 / Avg: 71.45 / Max: 71.82

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Face Detection FP16 - Device: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A542004006008001000SE +/- 1.04, N = 3SE +/- 0.28, N = 3SE +/- 0.57, N = 3916.14916.80928.84MIN: 880.57 / MAX: 968.04MIN: 865.44 / MAX: 971.95MIN: 885.23 / MAX: 978.931. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie
OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Face Detection FP16 - Device: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54160320480640800Min: 914.2 / Avg: 916.14 / Max: 917.74Min: 916.29 / Avg: 916.8 / Max: 917.24Min: 928.18 / Avg: 928.84 / Max: 929.971. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance of various popular real-world Java workloads. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgmsec, Fewer Is BetterDaCapo Benchmark 23.11Java Test: Apache Xalan XSLTNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A542004006008001000SE +/- 6.26, N = 8SE +/- 5.03, N = 8SE +/- 6.67, N = 8968955956
OpenBenchmarking.orgmsec, Fewer Is BetterDaCapo Benchmark 23.11Java Test: Apache Xalan XSLTNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A542004006008001000Min: 943 / Avg: 968 / Max: 993Min: 927 / Avg: 955 / Max: 972Min: 933 / Avg: 956.13 / Max: 978

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Road Segmentation ADAS FP16 - Device: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A541326395265SE +/- 0.33, N = 3SE +/- 0.53, N = 3SE +/- 0.61, N = 356.8856.9757.65MIN: 17.04 / MAX: 77.08MIN: 18.25 / MAX: 87.32MIN: 17.43 / MAX: 102.461. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie
OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Road Segmentation ADAS FP16 - Device: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A541122334455Min: 56.22 / Avg: 56.88 / Max: 57.25Min: 56.28 / Avg: 56.97 / Max: 58.02Min: 56.62 / Avg: 57.65 / Max: 58.731. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.2.devModel: Road Segmentation ADAS FP16 - Device: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54120240360480600SE +/- 3.31, N = 3SE +/- 5.22, N = 3SE +/- 5.87, N = 3561.88561.08554.441. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie
OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.2.devModel: Road Segmentation ADAS FP16 - Device: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54100200300400500Min: 558.16 / Avg: 561.88 / Max: 568.48Min: 550.81 / Avg: 561.08 / Max: 567.85Min: 544.14 / Avg: 554.44 / Max: 564.471. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU-specific performance optimizations for measuring the potential cryptocurrency mining performance of the processor across a variety of cryptocurrencies. The benchmark reports the hash speed achieved by the CPU for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.
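
For a sense of what "kH/s" means for the scrypt result below, this toy Python sketch measures single-threaded scrypt hashes per second with the standard library. The N=1024, r=1, p=1 parameters are assumed Litecoin-style settings, and a real miner such as cpuminer-opt runs hand-optimized multi-threaded code, so the absolute numbers are in no way comparable.

    import hashlib, os, time

    header = os.urandom(80)  # stand-in for an 80-byte block header

    runs = 2000
    start = time.perf_counter()
    for nonce in range(runs):
        blob = header + nonce.to_bytes(4, "little")
        # scrypt-based coins typically hash the header with itself as the salt.
        hashlib.scrypt(blob, salt=blob, n=1024, r=1, p=1, dklen=32)
    elapsed = time.perf_counter() - start
    print(f"~{runs / elapsed / 1000:.2f} kH/s (single thread, pure Python driver)")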

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 23.5Algorithm: scryptNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A542004006008001000SE +/- 3.96, N = 3SE +/- 0.27, N = 3SE +/- 0.83, N = 3808.84813.35802.591. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 23.5Algorithm: scryptNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54140280420560700Min: 800.92 / Avg: 808.84 / Max: 812.83Min: 813.05 / Avg: 813.35 / Max: 813.88Min: 801.04 / Avg: 802.59 / Max: 803.871. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested, or alternatively an allmodconfig that builds all possible kernel modules. Learn more via the OpenBenchmarking.org test page.
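
Conceptually the measurement is just a timed parallel make over a fixed configuration, along the lines of the sketch below; the source path is a placeholder, and the actual test profile handles configuration, job count, and result parsing itself.

    import os, subprocess, time

    src = os.path.expanduser("~/linux-6.1")  # placeholder kernel source tree

    subprocess.run(["make", "defconfig"], cwd=src, check=True)
    subprocess.run(["make", "clean"], cwd=src, check=True)

    start = time.perf_counter()
    subprocess.run(["make", f"-j{os.cpu_count()}"], cwd=src, check=True)
    print(f"defconfig build: {time.perf_counter() - start:.1f} s")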

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Linux Kernel Compilation 6.1Build: allmodconfigNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5460120180240300SE +/- 0.58, N = 3SE +/- 0.65, N = 3SE +/- 0.53, N = 3285.96282.21285.69
OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Linux Kernel Compilation 6.1Build: allmodconfigNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5450100150200250Min: 285.3 / Avg: 285.96 / Max: 287.11Min: 281.4 / Avg: 282.21 / Max: 283.49Min: 285.13 / Avg: 285.69 / Max: 286.76

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance of various popular real-world Java workloads. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgmsec, Fewer Is BetterDaCapo Benchmark 23.11Java Test: TradesoapNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5412002400360048006000SE +/- 41.30, N = 11SE +/- 43.72, N = 15SE +/- 53.41, N = 3569556255668
OpenBenchmarking.orgmsec, Fewer Is BetterDaCapo Benchmark 23.11Java Test: TradesoapNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5410002000300040005000Min: 5447 / Avg: 5695.36 / Max: 5876Min: 5412 / Avg: 5625.13 / Max: 6031Min: 5576 / Avg: 5667.67 / Max: 5761

RocksDB

This is a benchmark of Meta/Facebook's RocksDB, an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgOp/s, More Is BetterRocksDB 8.0Test: Update RandomNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54100K200K300K400K500KSE +/- 375.32, N = 3SE +/- 102.40, N = 3SE +/- 462.43, N = 34858374800214841311. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
OpenBenchmarking.orgOp/s, More Is BetterRocksDB 8.0Test: Update RandomNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5480K160K240K320K400KMin: 485271 / Avg: 485837 / Max: 486547Min: 479831 / Avg: 480021.33 / Max: 480182Min: 483346 / Avg: 484131 / Max: 4849471. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU-specific performance optimizations for measuring the potential cryptocurrency mining performance of the processor across a variety of cryptocurrencies. The benchmark reports the hash speed achieved by the CPU for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 23.5Algorithm: Quad SHA-256, PyriteNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5430K60K90K120K150KSE +/- 1883.77, N = 3SE +/- 1498.36, N = 3SE +/- 55.48, N = 31554531547431535971. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 23.5Algorithm: Quad SHA-256, PyriteNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5430K60K90K120K150KMin: 153500 / Avg: 155453.33 / Max: 159220Min: 153230 / Avg: 154743.33 / Max: 157740Min: 153510 / Avg: 153596.67 / Max: 1537001. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.2.devModel: Machine Translation EN To DE FP16 - Device: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5480160240320400SE +/- 0.78, N = 3SE +/- 0.49, N = 3SE +/- 0.50, N = 3372.22372.30368.011. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie
OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.2.devModel: Machine Translation EN To DE FP16 - Device: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5470140210280350Min: 370.81 / Avg: 372.22 / Max: 373.5Min: 371.6 / Avg: 372.3 / Max: 373.25Min: 367.39 / Avg: 368.01 / Max: 368.991. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Machine Translation EN To DE FP16 - Device: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5420406080100SE +/- 0.18, N = 3SE +/- 0.11, N = 3SE +/- 0.11, N = 385.8585.8486.83MIN: 41.81 / MAX: 127.31MIN: 42.72 / MAX: 128.74MIN: 46.22 / MAX: 127.411. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie
OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Machine Translation EN To DE FP16 - Device: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A541632486480Min: 85.56 / Avg: 85.85 / Max: 86.19Min: 85.63 / Avg: 85.84 / Max: 86Min: 86.61 / Avg: 86.83 / Max: 86.991. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance of various popular real-world Java workloads. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgmsec, Fewer Is BetterDaCapo Benchmark 23.11Java Test: Batik SVG ToolkitNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54400800120016002000SE +/- 13.40, N = 6SE +/- 3.87, N = 6SE +/- 12.32, N = 6169816821701
OpenBenchmarking.orgmsec, Fewer Is BetterDaCapo Benchmark 23.11Java Test: Batik SVG ToolkitNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5430060090012001500Min: 1674 / Avg: 1698.33 / Max: 1764Min: 1663 / Avg: 1681.83 / Max: 1688Min: 1666 / Avg: 1700.67 / Max: 1753

easyWave

The easyWave software allows simulating tsunami generation and propagation in the context of early warning systems. EasyWave supports OpenMP for CPU multi-threading; GPU ports also exist but are not currently incorporated as part of this test profile. The easyWave tsunami generation software is run with one of the example/reference input files and the CPU execution time is measured. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BettereasyWave r34Input: e2Asean Grid + BengkuluSept2007 Source - Time: 1200Noctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54918273645SE +/- 0.06, N = 3SE +/- 0.09, N = 3SE +/- 0.05, N = 339.0439.0439.461. (CXX) g++ options: -O3 -fopenmp
OpenBenchmarking.orgSeconds, Fewer Is BettereasyWave r34Input: e2Asean Grid + BengkuluSept2007 Source - Time: 1200Noctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54816243240Min: 38.92 / Avg: 39.04 / Max: 39.13Min: 38.9 / Avg: 39.04 / Max: 39.2Min: 39.38 / Avg: 39.46 / Max: 39.551. (CXX) g++ options: -O3 -fopenmp

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.2.devModel: Age Gender Recognition Retail 0013 FP16 - Device: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5416K32K48K64K80KSE +/- 97.06, N = 3SE +/- 95.58, N = 3SE +/- 106.60, N = 374918.3874980.3774181.751. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie
OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.2.devModel: Age Gender Recognition Retail 0013 FP16 - Device: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5413K26K39K52K65KMin: 74724.26 / Avg: 74918.38 / Max: 75015.63Min: 74793.54 / Avg: 74980.37 / Max: 75108.79Min: 74060.56 / Avg: 74181.75 / Max: 74394.261. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance of various popular real-world Java workloads. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgmsec, Fewer Is BetterDaCapo Benchmark 23.11Java Test: Spring BootNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A545001000150020002500SE +/- 15.68, N = 15SE +/- 11.80, N = 4SE +/- 13.69, N = 4239924122387
OpenBenchmarking.orgmsec, Fewer Is BetterDaCapo Benchmark 23.11Java Test: Spring BootNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54400800120016002000Min: 2255 / Avg: 2398.87 / Max: 2503Min: 2389 / Avg: 2412.25 / Max: 2445Min: 2354 / Avg: 2387 / Max: 2420

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Person Detection FP16 - Device: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54306090120150SE +/- 0.20, N = 3SE +/- 0.57, N = 3SE +/- 0.23, N = 3151.87152.49153.43MIN: 71.37 / MAX: 219.37MIN: 56.06 / MAX: 213.04MIN: 54.07 / MAX: 216.131. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie
OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Person Detection FP16 - Device: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54306090120150Min: 151.52 / Avg: 151.87 / Max: 152.22Min: 151.36 / Avg: 152.49 / Max: 153.17Min: 153.03 / Avg: 153.43 / Max: 153.821. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.2.devModel: Person Detection FP16 - Device: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5450100150200250SE +/- 0.27, N = 3SE +/- 0.78, N = 3SE +/- 0.33, N = 3210.50209.64208.361. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie
OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.2.devModel: Person Detection FP16 - Device: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A544080120160200Min: 210.05 / Avg: 210.5 / Max: 210.99Min: 208.71 / Avg: 209.64 / Max: 211.18Min: 207.79 / Avg: 208.36 / Max: 208.931. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenFOAM

OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterOpenFOAM 10Input: drivaerFastback, Small Mesh Size - Execution TimeNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54122436486054.2653.8353.721. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm

Timed LLVM Compilation

This test times how long it takes to compile/build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed LLVM Compilation 16.0Build System: NinjaNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A544080120160200SE +/- 0.05, N = 3SE +/- 0.02, N = 3SE +/- 0.09, N = 3167.46166.19167.82
OpenBenchmarking.orgSeconds, Fewer Is BetterTimed LLVM Compilation 16.0Build System: NinjaNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54306090120150Min: 167.37 / Avg: 167.46 / Max: 167.54Min: 166.15 / Avg: 166.19 / Max: 166.23Min: 167.72 / Avg: 167.82 / Max: 168

OpenFOAM

OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterOpenFOAM 10Input: drivaerFastback, Medium Mesh Size - Execution TimeNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54150300450600750677.59673.28679.551. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Person Vehicle Bike Detection FP16 - Device: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A543691215SE +/- 0.05, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 39.799.839.88MIN: 5.6 / MAX: 23.25MIN: 5.83 / MAX: 27.83MIN: 5.9 / MAX: 36.141. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie
OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Person Vehicle Bike Detection FP16 - Device: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A543691215Min: 9.69 / Avg: 9.79 / Max: 9.85Min: 9.81 / Avg: 9.83 / Max: 9.85Min: 9.87 / Avg: 9.88 / Max: 9.91. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

Blender

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.6Blend File: Classroom - Compute: CPU-OnlyNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A541224364860SE +/- 0.05, N = 3SE +/- 0.01, N = 3SE +/- 0.00, N = 353.7553.6754.16
OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.6Blend File: Classroom - Compute: CPU-OnlyNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A541122334455Min: 53.65 / Avg: 53.75 / Max: 53.8Min: 53.65 / Avg: 53.67 / Max: 53.69Min: 54.15 / Avg: 54.16 / Max: 54.16

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.13Camera: 1 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54400800120016002000SE +/- 1.73, N = 3SE +/- 2.19, N = 3SE +/- 2.73, N = 3164316501658
OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.13Camera: 1 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5430060090012001500Min: 1640 / Avg: 1643 / Max: 1646Min: 1647 / Avg: 1649.67 / Max: 1654Min: 1654 / Avg: 1657.67 / Max: 1663

OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.13Camera: 1 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A547K14K21K28K35KSE +/- 50.53, N = 3SE +/- 249.34, N = 3SE +/- 85.70, N = 3330943281733111
OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.13Camera: 1 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A546K12K18K24K30KMin: 33023 / Avg: 33094.33 / Max: 33192Min: 32359 / Avg: 32816.67 / Max: 33217Min: 32941 / Avg: 33111.33 / Max: 33213

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.2.devModel: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5420K40K60K80K100KSE +/- 82.69, N = 3SE +/- 253.03, N = 3SE +/- 123.77, N = 394926.1994607.9894089.011. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie
OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.2.devModel: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5416K32K48K64K80KMin: 94783.45 / Avg: 94926.19 / Max: 95069.88Min: 94131.02 / Avg: 94607.98 / Max: 94992.93Min: 93935.04 / Avg: 94089.01 / Max: 94333.851. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

Blender

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.6Blend File: BMW27 - Compute: CPU-OnlyNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54510152025SE +/- 0.02, N = 3SE +/- 0.02, N = 3SE +/- 0.04, N = 321.6221.5921.78
OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.6Blend File: BMW27 - Compute: CPU-OnlyNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54510152025Min: 21.59 / Avg: 21.62 / Max: 21.65Min: 21.54 / Avg: 21.59 / Max: 21.61Min: 21.73 / Avg: 21.78 / Max: 21.86

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: IP Shapes 3D - Data Type: f32 - Engine: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A540.39120.78241.17361.56481.956SE +/- 0.00654, N = 5SE +/- 0.00436, N = 5SE +/- 0.00533, N = 51.723501.738421.73846MIN: 1.57MIN: 1.59MIN: 1.581. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: IP Shapes 3D - Data Type: f32 - Engine: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54246810Min: 1.7 / Avg: 1.72 / Max: 1.74Min: 1.73 / Avg: 1.74 / Max: 1.75Min: 1.73 / Avg: 1.74 / Max: 1.761. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Embree

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.3Binary: Pathtracer ISPC - Model: Asian Dragon ObjNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A541632486480SE +/- 0.05, N = 3SE +/- 0.09, N = 3SE +/- 0.08, N = 373.4273.9074.05MIN: 72.69 / MAX: 74.45MIN: 73.22 / MAX: 74.9MIN: 73.28 / MAX: 75.58
OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.3Binary: Pathtracer ISPC - Model: Asian Dragon ObjNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A541428425670Min: 73.33 / Avg: 73.42 / Max: 73.48Min: 73.76 / Avg: 73.9 / Max: 74.08Min: 73.9 / Avg: 74.05 / Max: 74.19

Timed Godot Game Engine Compilation

This test times how long it takes to compile the Godot Game Engine. Godot is a popular, open-source, cross-platform 2D/3D game engine that is built using the SCons build system and targets the X11 platform. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Godot Game Engine Compilation 4.0Time To CompileNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54306090120150SE +/- 0.26, N = 3SE +/- 0.51, N = 3SE +/- 0.14, N = 3130.97129.86130.80
OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Godot Game Engine Compilation 4.0Time To CompileNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5420406080100Min: 130.46 / Avg: 130.97 / Max: 131.29Min: 128.86 / Avg: 129.86 / Max: 130.53Min: 130.52 / Avg: 130.8 / Max: 130.96

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.2.devModel: Person Vehicle Bike Detection FP16 - Device: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A547001400210028003500SE +/- 16.81, N = 3SE +/- 3.44, N = 3SE +/- 3.50, N = 33258.823247.593231.741. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie
OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.2.devModel: Person Vehicle Bike Detection FP16 - Device: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A546001200180024003000Min: 3241.1 / Avg: 3258.82 / Max: 3292.42Min: 3240.88 / Avg: 3247.59 / Max: 3252.28Min: 3224.75 / Avg: 3231.74 / Max: 3235.721. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.13Camera: 1 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A543K6K9K12K15KSE +/- 15.06, N = 3SE +/- 16.33, N = 3SE +/- 10.69, N = 3133061332213417
OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.13Camera: 1 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A542K4K6K8K10KMin: 13287 / Avg: 13306.33 / Max: 13336Min: 13289 / Avg: 13321.67 / Max: 13338Min: 13403 / Avg: 13417 / Max: 13438

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A540.17370.34740.52110.69480.8685SE +/- 0.001168, N = 9SE +/- 0.000942, N = 9SE +/- 0.001650, N = 90.7676550.7657330.772038MIN: 0.76MIN: 0.76MIN: 0.761. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54246810Min: 0.76 / Avg: 0.77 / Max: 0.77Min: 0.76 / Avg: 0.77 / Max: 0.77Min: 0.77 / Avg: 0.77 / Max: 0.781. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance of various popular real-world Java workloads. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgmsec, Fewer Is BetterDaCapo Benchmark 23.11Java Test: FOP Print FormatterNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54160320480640800SE +/- 2.62, N = 8SE +/- 1.77, N = 8SE +/- 3.59, N = 8737733739
OpenBenchmarking.orgmsec, Fewer Is BetterDaCapo Benchmark 23.11Java Test: FOP Print FormatterNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54130260390520650Min: 724 / Avg: 737.38 / Max: 749Min: 725 / Avg: 733 / Max: 739Min: 717 / Avg: 738.63 / Max: 749

Coremark

This is a test of the EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgIterations/Sec, More Is BetterCoremark 1.0CoreMark Size 666 - Iterations Per SecondNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54600K1200K1800K2400K3000KSE +/- 26379.27, N = 5SE +/- 1080.25, N = 3SE +/- 7834.55, N = 32594063.012615136.672609983.941. (CC) gcc options: -O2 -lrt" -lrt
OpenBenchmarking.orgIterations/Sec, More Is BetterCoremark 1.0CoreMark Size 666 - Iterations Per SecondNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54500K1000K1500K2000K2500KMin: 2488698.78 / Avg: 2594063.01 / Max: 2625102.54Min: 2613044.81 / Avg: 2615136.67 / Max: 2616650.48Min: 2594638.42 / Avg: 2609983.94 / Max: 2620400.231. (CC) gcc options: -O2 -lrt" -lrt

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.13Camera: 1 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5414002800420056007000SE +/- 10.90, N = 3SE +/- 0.58, N = 3SE +/- 6.66, N = 3664866556699
OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.13Camera: 1 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5412002400360048006000Min: 6627 / Avg: 6647.67 / Max: 6664Min: 6654 / Avg: 6655 / Max: 6656Min: 6688 / Avg: 6699 / Max: 6711

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance of various popular real-world Java workloads. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgmsec, Fewer Is BetterDaCapo Benchmark 23.11Java Test: Apache TomcatNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5430060090012001500SE +/- 12.23, N = 5SE +/- 5.49, N = 5SE +/- 10.85, N = 5158315951595
OpenBenchmarking.orgmsec, Fewer Is BetterDaCapo Benchmark 23.11Java Test: Apache TomcatNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5430060090012001500Min: 1543 / Avg: 1583.2 / Max: 1619Min: 1582 / Avg: 1595.4 / Max: 1612Min: 1563 / Avg: 1595.4 / Max: 1625

QuantLib

QuantLib is an open-source library/framework for quantitative finance, covering modeling, trading and risk management scenarios. QuantLib is written in C++ with Boost, and its built-in benchmark reports the QuantLib Benchmark Index score. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMFLOPS, More Is BetterQuantLib 1.32Configuration: Single-ThreadedNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A546001200180024003000SE +/- 13.41, N = 3SE +/- 15.21, N = 3SE +/- 1.50, N = 32610.02606.82626.41. (CXX) g++ options: -O3 -march=native -fPIE -pie
OpenBenchmarking.orgMFLOPS, More Is BetterQuantLib 1.32Configuration: Single-ThreadedNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A545001000150020002500Min: 2583.6 / Avg: 2610 / Max: 2627.3Min: 2576.4 / Avg: 2606.77 / Max: 2623.6Min: 2624.6 / Avg: 2626.43 / Max: 2629.41. (CXX) g++ options: -O3 -march=native -fPIE -pie

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance of various popular real-world Java workloads. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgmsec, Fewer Is BetterDaCapo Benchmark 23.11Java Test: Apache Lucene Search IndexNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5410002000300040005000SE +/- 9.96, N = 3SE +/- 9.82, N = 3SE +/- 3.51, N = 3449744644486
OpenBenchmarking.orgmsec, Fewer Is BetterDaCapo Benchmark 23.11Java Test: Apache Lucene Search IndexNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A548001600240032004000Min: 4478 / Avg: 4496.67 / Max: 4512Min: 4447 / Avg: 4463.67 / Max: 4481Min: 4482 / Avg: 4486 / Max: 4493

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.13Camera: 1 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5490180270360450SE +/- 0.00, N = 3SE +/- 0.33, N = 3SE +/- 0.00, N = 3416416419
OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.13Camera: 1 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5470140210280350Min: 416 / Avg: 416 / Max: 416Min: 416 / Avg: 416.33 / Max: 417Min: 419 / Avg: 419 / Max: 419

OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.13Camera: 3 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5415K30K45K60K75KSE +/- 9.24, N = 3SE +/- 84.69, N = 3SE +/- 74.81, N = 3694996932569819
OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.13Camera: 3 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5412K24K36K48K60KMin: 69483 / Avg: 69498.67 / Max: 69515Min: 69158 / Avg: 69325 / Max: 69433Min: 69692 / Avg: 69819 / Max: 69951

DuckDB

DuckDB is an in-progress SQL OLAP database management system optimized for analytics and features a vectorized and parallel engine. Learn more via the OpenBenchmarking.org test page.
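
The TPC-H Parquet result is an end-to-end timing of DuckDB's parallel query engine over Parquet data. As a minimal illustration of the same mechanism, the sketch below points DuckDB's Python API at a Parquet file; the filename and query are placeholders, not the actual TPC-H workload driver used by this test profile.

    import time
    import duckdb

    con = duckdb.connect()            # in-memory database
    con.execute("SET threads TO 64")  # DuckDB parallelizes scans/aggregations across threads

    start = time.perf_counter()
    result = con.execute("""
        SELECT l_returnflag, count(*), sum(l_extendedprice)
        FROM read_parquet('lineitem.parquet')   -- placeholder Parquet file
        GROUP BY l_returnflag
    """).fetchall()
    print(result, f"({time.perf_counter() - start:.2f} s)")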

OpenBenchmarking.orgSeconds, Fewer Is BetterDuckDB 0.9.1Benchmark: TPC-H ParquetNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A544080120160200SE +/- 0.23, N = 3SE +/- 0.38, N = 3SE +/- 0.24, N = 3176.65176.49175.411. (CXX) g++ options: -O3 -rdynamic -lssl -lcrypto -ldl
OpenBenchmarking.orgSeconds, Fewer Is BetterDuckDB 0.9.1Benchmark: TPC-H ParquetNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54306090120150Min: 176.23 / Avg: 176.65 / Max: 177.02Min: 175.99 / Avg: 176.49 / Max: 177.25Min: 174.94 / Avg: 175.4 / Max: 175.751. (CXX) g++ options: -O3 -rdynamic -lssl -lcrypto -ldl

QMCPACK

QMCPACK is a modern high-performance open-source Quantum Monte Carlo (QMC) simulation code making use of MPI for this benchmark of the H2O example code. QMCPACK is an open-source production level many-body ab initio Quantum Monte Carlo code for computing the electronic structure of atoms, molecules, and solids. QMCPACK is supported by the U.S. Department of Energy. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTotal Execution Time - Seconds, Fewer Is BetterQMCPACK 3.17.1Input: FeCO6_b3lyp_gmsNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54306090120150SE +/- 0.27, N = 3SE +/- 0.32, N = 3SE +/- 0.49, N = 3116.96116.57117.391. (CXX) g++ options: -fopenmp -foffload=disable -finline-limit=1000 -fstrict-aliasing -funroll-all-loops -ffast-math -march=native -O3 -lm -ldl
OpenBenchmarking.orgTotal Execution Time - Seconds, Fewer Is BetterQMCPACK 3.17.1Input: FeCO6_b3lyp_gmsNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5420406080100Min: 116.46 / Avg: 116.96 / Max: 117.4Min: 116.22 / Avg: 116.57 / Max: 117.21Min: 116.78 / Avg: 117.39 / Max: 118.361. (CXX) g++ options: -fopenmp -foffload=disable -finline-limit=1000 -fstrict-aliasing -funroll-all-loops -ffast-math -march=native -O3 -lm -ldl

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.13Camera: 3 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A542K4K6K8K10KSE +/- 12.99, N = 3SE +/- 8.39, N = 3SE +/- 10.20, N = 3792479417979
OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.13Camera: 3 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5414002800420056007000Min: 7901 / Avg: 7923.67 / Max: 7946Min: 7926 / Avg: 7941 / Max: 7955Min: 7967 / Avg: 7978.67 / Max: 7999

Timed LLVM Compilation

This test times how long it takes to compile/build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed LLVM Compilation 16.0Build System: Unix MakefilesNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5460120180240300SE +/- 1.07, N = 3SE +/- 0.42, N = 3SE +/- 0.10, N = 3258.54259.80260.33
OpenBenchmarking.orgSeconds, Fewer Is BetterTimed LLVM Compilation 16.0Build System: Unix MakefilesNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5450100150200250Min: 256.93 / Avg: 258.54 / Max: 260.58Min: 258.97 / Avg: 259.8 / Max: 260.3Min: 260.15 / Avg: 260.33 / Max: 260.5

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance of various popular real-world Java workloads. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgmsec, Fewer Is BetterDaCapo Benchmark 23.11Java Test: EclipseNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A543K6K9K12K15KSE +/- 123.96, N = 3SE +/- 75.24, N = 3SE +/- 134.12, N = 3123021227112221
OpenBenchmarking.orgmsec, Fewer Is BetterDaCapo Benchmark 23.11Java Test: EclipseNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A542K4K6K8K10KMin: 12063 / Avg: 12302.33 / Max: 12478Min: 12131 / Avg: 12270.67 / Max: 12389Min: 12049 / Avg: 12220.67 / Max: 12485

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A540.25790.51580.77371.03161.2895SE +/- 0.00083, N = 7SE +/- 0.00376, N = 7SE +/- 0.00377, N = 61.139051.146321.13979MIN: 1.06MIN: 1.06MIN: 1.061. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54246810Min: 1.14 / Avg: 1.14 / Max: 1.14Min: 1.13 / Avg: 1.15 / Max: 1.16Min: 1.13 / Avg: 1.14 / Max: 1.161. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A542004006008001000SE +/- 2.64, N = 3SE +/- 0.59, N = 3SE +/- 2.26, N = 3807.19812.24807.50MIN: 797.49MIN: 804.37MIN: 797.781. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54140280420560700Min: 801.94 / Avg: 807.19 / Max: 810.26Min: 811.22 / Avg: 812.24 / Max: 813.26Min: 803.06 / Avg: 807.5 / Max: 810.421. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.13Camera: 3 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54400800120016002000SE +/- 0.58, N = 3SE +/- 1.76, N = 3SE +/- 2.08, N = 3196019601972
OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.13Camera: 3 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5430060090012001500Min: 1959 / Avg: 1960 / Max: 1961Min: 1957 / Avg: 1960.33 / Max: 1963Min: 1968 / Avg: 1972 / Max: 1975

OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.13Camera: 2 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A543K6K9K12K15KSE +/- 7.84, N = 3SE +/- 10.59, N = 3SE +/- 1.76, N = 3134981354313580
OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.13Camera: 2 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A542K4K6K8K10KMin: 13485 / Avg: 13497.67 / Max: 13512Min: 13522 / Avg: 13542.67 / Max: 13557Min: 13577 / Avg: 13580.33 / Max: 13583

OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.13Camera: 3 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54110220330440550SE +/- 0.33, N = 3SE +/- 0.33, N = 3SE +/- 0.67, N = 3495496498
OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.13Camera: 3 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5490180270360450Min: 495 / Avg: 495.33 / Max: 496Min: 495 / Avg: 495.67 / Max: 496Min: 497 / Avg: 497.67 / Max: 499

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMIPS, More Is Better7-Zip Compression 22.01Test: Compression RatingNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5480K160K240K320K400KSE +/- 157.10, N = 3SE +/- 120.89, N = 3SE +/- 224.34, N = 33633593636993615301. (CXX) g++ options: -lpthread -ldl -O2 -fPIC
OpenBenchmarking.orgMIPS, More Is Better7-Zip Compression 22.01Test: Compression RatingNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5460K120K180K240K300KMin: 363083 / Avg: 363359.33 / Max: 363627Min: 363458 / Avg: 363698.67 / Max: 363839Min: 361207 / Avg: 361529.67 / Max: 3619611. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

Embree

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.3Binary: Pathtracer - Model: Asian DragonNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5420406080100SE +/- 0.04, N = 5SE +/- 0.03, N = 5SE +/- 0.10, N = 580.0179.7079.55MIN: 79.34 / MAX: 80.87MIN: 79.03 / MAX: 80.61MIN: 78.61 / MAX: 80.51
OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.3Binary: Pathtracer - Model: Asian DragonNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A541530456075Min: 79.9 / Avg: 80.01 / Max: 80.11Min: 79.61 / Avg: 79.7 / Max: 79.8Min: 79.16 / Avg: 79.55 / Max: 79.68

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU-specific performance optimizations for measuring the potential cryptocurrency mining performance of the processor across a variety of cryptocurrencies. The benchmark reports the hash speed achieved by the CPU for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 23.5Algorithm: SkeincoinNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5420K40K60K80K100KSE +/- 151.00, N = 3SE +/- 370.72, N = 3SE +/- 20.82, N = 38891089180886801. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 23.5Algorithm: SkeincoinNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5415K30K45K60K75KMin: 88730 / Avg: 88910 / Max: 89210Min: 88770 / Avg: 89180 / Max: 89920Min: 88640 / Avg: 88680 / Max: 887101. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

DaCapo Benchmark

This test runs the DaCapo Benchmarks, which are written in Java and intended to test system/CPU performance with various popular real-world Java workloads. Learn more via the OpenBenchmarking.org test page.
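For reference, the DaCapo harness is launched as a plain Java jar; a minimal sketch, where the jar name (assumed to be the 23.11 "Chopin" release jar) and the iteration flag are assumptions rather than the exact invocation used by the test profile.

    import subprocess

    # Run the Avrora AVR simulation workload for two iterations; the harness
    # prints a "PASSED in N msec" line per iteration.
    subprocess.run(["java", "-jar", "dacapo-23.11-chopin.jar", "-n", "2", "avrora"], check=True)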

OpenBenchmarking.orgmsec, Fewer Is BetterDaCapo Benchmark 23.11Java Test: Apache CassandraNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5412002400360048006000SE +/- 32.51, N = 3SE +/- 34.53, N = 3SE +/- 3.48, N = 3582057905789
OpenBenchmarking.orgmsec, Fewer Is BetterDaCapo Benchmark 23.11Java Test: Apache CassandraNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5410002000300040005000Min: 5755 / Avg: 5820 / Max: 5854Min: 5727 / Avg: 5790 / Max: 5846Min: 5783 / Avg: 5788.67 / Max: 5795

QMCPACK

QMCPACK is a modern, high-performance, open-source Quantum Monte Carlo (QMC) simulation code, making use of MPI for this benchmark of the H2O example code. QMCPACK is a production-level, many-body ab initio Quantum Monte Carlo code for computing the electronic structure of atoms, molecules, and solids, and is supported by the U.S. Department of Energy. Learn more via the OpenBenchmarking.org test page.
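Since QMCPACK is MPI-parallel with OpenMP threading inside each rank, a typical hybrid launch looks like the sketch below; the rank/thread split and the input file name are illustrative assumptions, not the exact configuration used by this test profile.

    import os
    import subprocess

    env = dict(os.environ, OMP_NUM_THREADS="4")  # 16 MPI ranks x 4 threads = 64 cores
    # The input XML name is a placeholder for whichever example deck is being run
    # (e.g. the simple-H2O case reported below).
    subprocess.run(["mpirun", "-np", "16", "qmcpack", "simple-H2O.xml"], env=env, check=True)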

OpenBenchmarking.orgTotal Execution Time - Seconds, Fewer Is BetterQMCPACK 3.17.1Input: Li2_STO_aeNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54306090120150SE +/- 0.27, N = 3SE +/- 0.18, N = 3SE +/- 0.61, N = 3113.96113.38113.981. (CXX) g++ options: -fopenmp -foffload=disable -finline-limit=1000 -fstrict-aliasing -funroll-all-loops -ffast-math -march=native -O3 -lm -ldl
OpenBenchmarking.orgTotal Execution Time - Seconds, Fewer Is BetterQMCPACK 3.17.1Input: Li2_STO_aeNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5420406080100Min: 113.64 / Avg: 113.96 / Max: 114.49Min: 113.03 / Avg: 113.38 / Max: 113.63Min: 112.97 / Avg: 113.98 / Max: 115.091. (CXX) g++ options: -fopenmp -foffload=disable -finline-limit=1000 -fstrict-aliasing -funroll-all-loops -ffast-math -march=native -O3 -lm -ldl

Timed Node.js Compilation

This test profile times how long it takes to build/compile Node.js itself from source. Node.js is a JavaScript run-time built on the Chrome V8 JavaScript engine, while Node.js itself is written in C/C++. Learn more via the OpenBenchmarking.org test page.
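The measurement itself is simply a clean configure-and-make of the source tree; a minimal sketch, assuming it is run from inside an unpacked Node.js 19.8.1 source directory.

    import multiprocessing
    import subprocess
    import time

    jobs = multiprocessing.cpu_count()  # 128 hardware threads on this EPYC 8534P system
    subprocess.run(["./configure"], check=True)
    start = time.time()
    subprocess.run(["make", f"-j{jobs}"], check=True)
    print(f"elapsed build time: {time.time() - start:.2f} seconds")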

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Node.js Compilation 19.8.1Time To CompileNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54306090120150SE +/- 0.12, N = 3SE +/- 0.15, N = 3SE +/- 0.04, N = 3157.00156.99157.78
OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Node.js Compilation 19.8.1Time To CompileNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54306090120150Min: 156.75 / Avg: 156.99 / Max: 157.15Min: 156.7 / Avg: 156.99 / Max: 157.16Min: 157.7 / Avg: 157.78 / Max: 157.82

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.13Camera: 2 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5415003000450060007500SE +/- 9.07, N = 3SE +/- 8.37, N = 3SE +/- 3.06, N = 3676067646793
OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.13Camera: 2 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5412002400360048006000Min: 6746 / Avg: 6760 / Max: 6777Min: 6750 / Avg: 6764.33 / Max: 6779Min: 6787 / Avg: 6793 / Max: 6797

easyWave

The easyWave software allows simulating tsunami generation and propagation in the context of early warning systems. easyWave supports OpenMP for CPU multi-threading; GPU ports are also available but are not currently incorporated as part of this test profile. The easyWave tsunami generation software is run with one of the example/reference input files to measure the CPU execution time. Learn more via the OpenBenchmarking.org test page.
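easyWave is a single OpenMP binary driven by a bathymetry grid and a fault/source file; the sketch below is only illustrative, and the binary name, flag names, and file names are all assumptions rather than the exact invocation used by the test profile.

    import os
    import subprocess
    import time

    env = dict(os.environ, OMP_NUM_THREADS="64")  # one thread per physical core
    start = time.time()
    # Flag and file names are placeholders standing in for the e2Asean grid and
    # BengkuluSept2007 source referenced in the results below.
    subprocess.run(["./easyWave", "-grid", "e2Asean.grd",
                    "-source", "BengkuluSept2007.flt", "-time", "240"],
                   env=env, check=True)
    print(f"elapsed: {time.time() - start:.2f} s")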

OpenBenchmarking.orgSeconds, Fewer Is BettereasyWave r34Input: e2Asean Grid + BengkuluSept2007 Source - Time: 240Noctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A540.42260.84521.26781.69042.113SE +/- 0.005, N = 9SE +/- 0.003, N = 9SE +/- 0.011, N = 91.8691.8691.8781. (CXX) g++ options: -O3 -fopenmp
OpenBenchmarking.orgSeconds, Fewer Is BettereasyWave r34Input: e2Asean Grid + BengkuluSept2007 Source - Time: 240Noctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54246810Min: 1.84 / Avg: 1.87 / Max: 1.89Min: 1.86 / Avg: 1.87 / Max: 1.88Min: 1.84 / Avg: 1.88 / Max: 1.961. (CXX) g++ options: -O3 -fopenmp

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.13Camera: 2 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5490180270360450SE +/- 0.33, N = 3SE +/- 0.00, N = 3SE +/- 0.33, N = 3423423425
OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.13Camera: 2 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5480160240320400Min: 422 / Avg: 422.67 / Max: 423Min: 423 / Avg: 423 / Max: 423Min: 424 / Avg: 424.67 / Max: 425

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.
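benchdnn is organized as per-primitive drivers (convolution, inner product, RNN, and so on) run in correctness or performance mode; the sketch below is a hedged example of a performance-mode RNN run, where the problem-descriptor batch file path is an assumption rather than the one shipped with this test profile.

    import subprocess

    # Performance mode ("--mode=p") of benchdnn's RNN driver; the batch file
    # named here is an assumption, not the descriptor used by this test profile.
    subprocess.run(["./benchdnn", "--rnn", "--mode=p",
                    "--batch=inputs/rnn/perf_rnn_training"],
                   check=True)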

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A542004006008001000SE +/- 2.76, N = 3SE +/- 2.70, N = 3SE +/- 0.30, N = 3807.34811.01808.78MIN: 795.77MIN: 800.64MIN: 800.971. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54140280420560700Min: 801.86 / Avg: 807.34 / Max: 810.63Min: 806.01 / Avg: 811.01 / Max: 815.29Min: 808.46 / Avg: 808.78 / Max: 809.371. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested, or alternatively an allmodconfig for building all possible kernel modules. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Linux Kernel Compilation 6.1Build: defconfigNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54918273645SE +/- 0.49, N = 4SE +/- 0.41, N = 5SE +/- 0.42, N = 540.5640.3840.39
OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Linux Kernel Compilation 6.1Build: defconfigNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54816243240Min: 40.04 / Avg: 40.56 / Max: 42.03Min: 39.91 / Avg: 40.38 / Max: 42.01Min: 39.9 / Avg: 40.39 / Max: 42.05

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A540.0780.1560.2340.3120.39SE +/- 0.000642, N = 9SE +/- 0.000430, N = 9SE +/- 0.000899, N = 90.3452760.3465410.346782MIN: 0.34MIN: 0.34MIN: 0.341. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5412345Min: 0.34 / Avg: 0.35 / Max: 0.35Min: 0.34 / Avg: 0.35 / Max: 0.35Min: 0.34 / Avg: 0.35 / Max: 0.351. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Embree

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.3Binary: Pathtracer ISPC - Model: CrownNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A541632486480SE +/- 0.16, N = 5SE +/- 0.14, N = 5SE +/- 0.07, N = 574.1174.1573.84MIN: 72.48 / MAX: 76.13MIN: 72.69 / MAX: 75.98MIN: 72.65 / MAX: 75.71
OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.3Binary: Pathtracer ISPC - Model: CrownNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A541428425670Min: 73.54 / Avg: 74.11 / Max: 74.51Min: 73.71 / Avg: 74.15 / Max: 74.49Min: 73.73 / Avg: 73.84 / Max: 74.13

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.13Camera: 3 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A543K6K9K12K15KSE +/- 29.82, N = 3SE +/- 29.28, N = 3SE +/- 18.50, N = 3159101589615960
OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.13Camera: 3 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A543K6K9K12K15KMin: 15852 / Avg: 15910 / Max: 15951Min: 15862 / Avg: 15895.67 / Max: 15954Min: 15934 / Avg: 15960.33 / Max: 15996

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that adds a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor across a wide variety of cryptocurrencies. The benchmark reports the CPU mining hash speed for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 23.5Algorithm: Myriad-GroestlNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A546K12K18K24K30KSE +/- 14.53, N = 3SE +/- 76.67, N = 3SE +/- 58.97, N = 32989329987300131. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 23.5Algorithm: Myriad-GroestlNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A545K10K15K20K25KMin: 29870 / Avg: 29893.33 / Max: 29920Min: 29910 / Avg: 29986.67 / Max: 30140Min: 29940 / Avg: 30013.33 / Max: 301301. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

QMCPACK

QMCPACK is a modern, high-performance, open-source Quantum Monte Carlo (QMC) simulation code, making use of MPI for this benchmark of the H2O example code. QMCPACK is a production-level, many-body ab initio Quantum Monte Carlo code for computing the electronic structure of atoms, molecules, and solids, and is supported by the U.S. Department of Energy. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTotal Execution Time - Seconds, Fewer Is BetterQMCPACK 3.17.1Input: simple-H2ONoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54714212835SE +/- 0.12, N = 3SE +/- 0.14, N = 3SE +/- 0.18, N = 330.5430.4330.481. (CXX) g++ options: -fopenmp -foffload=disable -finline-limit=1000 -fstrict-aliasing -funroll-all-loops -ffast-math -march=native -O3 -lm -ldl
OpenBenchmarking.orgTotal Execution Time - Seconds, Fewer Is BetterQMCPACK 3.17.1Input: simple-H2ONoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54714212835Min: 30.3 / Avg: 30.54 / Max: 30.67Min: 30.21 / Avg: 30.43 / Max: 30.68Min: 30.19 / Avg: 30.48 / Max: 30.81. (CXX) g++ options: -fopenmp -foffload=disable -finline-limit=1000 -fstrict-aliasing -funroll-all-loops -ffast-math -march=native -O3 -lm -ldl

DaCapo Benchmark

This test runs the DaCapo Benchmarks, which are written in Java and intended to test system/CPU performance with various popular real-world Java workloads. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgmsec, Fewer Is BetterDaCapo Benchmark 23.11Java Test: Avrora AVR Simulation FrameworkNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5415003000450060007500SE +/- 10.41, N = 3SE +/- 10.15, N = 3SE +/- 9.85, N = 3688169036877
OpenBenchmarking.orgmsec, Fewer Is BetterDaCapo Benchmark 23.11Java Test: Avrora AVR Simulation FrameworkNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5412002400360048006000Min: 6862 / Avg: 6880.67 / Max: 6898Min: 6890 / Avg: 6903 / Max: 6923Min: 6863 / Avg: 6877 / Max: 6896

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.13Camera: 2 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54400800120016002000SE +/- 0.88, N = 3SE +/- 1.86, N = 3SE +/- 3.06, N = 3167316731679
OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.13Camera: 2 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5430060090012001500Min: 1672 / Avg: 1673.33 / Max: 1675Min: 1669 / Avg: 1672.67 / Max: 1675Min: 1675 / Avg: 1679 / Max: 1685

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMIPS, More Is Better7-Zip Compression 22.01Test: Decompression RatingNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5490K180K270K360K450KSE +/- 126.01, N = 3SE +/- 352.41, N = 3SE +/- 497.61, N = 34309364299214294051. (CXX) g++ options: -lpthread -ldl -O2 -fPIC
OpenBenchmarking.orgMIPS, More Is Better7-Zip Compression 22.01Test: Decompression RatingNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5470K140K210K280K350KMin: 430807 / Avg: 430936 / Max: 431188Min: 429257 / Avg: 429920.67 / Max: 430458Min: 428442 / Avg: 429405 / Max: 4301041. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library that offers high-performance volume computation kernels and is part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgItems / Sec, More Is BetterOpenVKL 2.0.0Benchmark: vklBenchmarkCPU ISPCNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5430060090012001500SE +/- 0.67, N = 3SE +/- 0.00, N = 3SE +/- 0.33, N = 3142214201417MIN: 114 / MAX: 17494MIN: 113 / MAX: 17461MIN: 114 / MAX: 17373
OpenBenchmarking.orgItems / Sec, More Is BetterOpenVKL 2.0.0Benchmark: vklBenchmarkCPU ISPCNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A542004006008001000Min: 1421 / Avg: 1422.33 / Max: 1423Min: 1420 / Avg: 1420 / Max: 1420Min: 1417 / Avg: 1417.33 / Max: 1418

CloverLeaf

CloverLeaf is a Lagrangian-Eulerian hydrodynamics benchmark. This test profile currently makes use of CloverLeaf's OpenMP version. Learn more via the OpenBenchmarking.org test page.
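CloverLeaf's OpenMP build reads its input deck from clover.in in the working directory, so a run amounts to copying the desired deck into place and setting the thread count; the deck path and binary name below are assumptions.

    import os
    import shutil
    import subprocess

    # Copy the bm16 input deck into place (path is an assumption) and run with
    # one OpenMP thread per physical core.
    shutil.copy("InputDecks/clover_bm16.in", "clover.in")
    env = dict(os.environ, OMP_NUM_THREADS="64")
    subprocess.run(["./clover_leaf"], env=env, check=True)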

OpenBenchmarking.orgSeconds, Fewer Is BetterCloverLeaf 1.3Input: clover_bm64_shortNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A541326395265SE +/- 0.04, N = 3SE +/- 0.02, N = 3SE +/- 0.02, N = 357.1657.1857.361. (F9X) gfortran options: -O3 -march=native -funroll-loops -fopenmp
OpenBenchmarking.orgSeconds, Fewer Is BetterCloverLeaf 1.3Input: clover_bm64_shortNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A541122334455Min: 57.09 / Avg: 57.16 / Max: 57.21Min: 57.15 / Avg: 57.18 / Max: 57.22Min: 57.33 / Avg: 57.36 / Max: 57.41. (F9X) gfortran options: -O3 -march=native -funroll-loops -fopenmp

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that adds a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor across a wide variety of cryptocurrencies. The benchmark reports the CPU mining hash speed for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 23.5Algorithm: DeepcoinNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A545K10K15K20K25KSE +/- 48.42, N = 3SE +/- 5.77, N = 3SE +/- 35.28, N = 32144321370214131. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 23.5Algorithm: DeepcoinNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A544K8K12K16K20KMin: 21390 / Avg: 21443.33 / Max: 21540Min: 21360 / Avg: 21370 / Max: 21380Min: 21360 / Avg: 21413.33 / Max: 214801. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

PostgreSQL

This is a benchmark of PostgreSQL using its integrated pgbench tool for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.
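The read-only numbers below map onto standard standalone pgbench usage: initialize at the stated scaling factor, then run the SELECT-only script with the stated client count. A minimal sketch, assuming a local database named pgbench_db already exists.

    import subprocess

    DB = "pgbench_db"  # assumed database name
    # Initialize pgbench's tables at scaling factor 100.
    subprocess.run(["pgbench", "-i", "-s", "100", DB], check=True)
    # Read-only (SELECT-only) run: 1000 clients, 100 worker threads, 60 seconds.
    subprocess.run(["pgbench", "-S", "-c", "1000", "-j", "100", "-T", "60", DB], check=True)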

OpenBenchmarking.orgTPS, More Is BetterPostgreSQL 16Scaling Factor: 100 - Clients: 1000 - Mode: Read OnlyNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54700K1400K2100K2800K3500KSE +/- 22492.45, N = 3SE +/- 20993.48, N = 3SE +/- 21879.14, N = 33103585311401531037531. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
OpenBenchmarking.orgTPS, More Is BetterPostgreSQL 16Scaling Factor: 100 - Clients: 1000 - Mode: Read OnlyNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54500K1000K1500K2000K2500KMin: 3059038.87 / Avg: 3103585.48 / Max: 3131283.85Min: 3072336.87 / Avg: 3114015.19 / Max: 3139255.15Min: 3072873.28 / Avg: 3103753.22 / Max: 3146043.011. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.
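The throughput (FPS) and latency figures come from OpenVINO's bundled benchmark_app; a hedged sketch follows, where the model file name is a placeholder since the test profile fetches its own IR models.

    import subprocess

    # benchmark_app reports both throughput (FPS) and average latency for the model.
    subprocess.run(["benchmark_app",
                    "-m", "vehicle-detection.xml",  # placeholder model path
                    "-d", "CPU",
                    "-t", "60"],                    # run for 60 seconds
                   check=True)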

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.2.devModel: Vehicle Detection FP16 - Device: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5430060090012001500SE +/- 3.73, N = 3SE +/- 10.16, N = 3SE +/- 15.06, N = 31305.341309.721307.771. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie
OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.2.devModel: Vehicle Detection FP16 - Device: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A542004006008001000Min: 1297.95 / Avg: 1305.34 / Max: 1309.9Min: 1289.72 / Avg: 1309.72 / Max: 1322.85Min: 1290.01 / Avg: 1307.77 / Max: 1337.711. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Vehicle Detection FP16 - Device: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54612182430SE +/- 0.07, N = 3SE +/- 0.19, N = 3SE +/- 0.28, N = 324.4524.3724.41MIN: 8.96 / MAX: 51.37MIN: 9.52 / MAX: 57.66MIN: 9.88 / MAX: 54.941. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie
OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Vehicle Detection FP16 - Device: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54612182430Min: 24.36 / Avg: 24.45 / Max: 24.59Min: 24.13 / Avg: 24.37 / Max: 24.74Min: 23.86 / Avg: 24.41 / Max: 24.741. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A542004006008001000SE +/- 1.70, N = 3SE +/- 2.78, N = 3SE +/- 1.62, N = 3807.40805.40807.92MIN: 799.36MIN: 796.45MIN: 798.541. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54140280420560700Min: 804.03 / Avg: 807.4 / Max: 809.47Min: 801.73 / Avg: 805.4 / Max: 810.85Min: 806.17 / Avg: 807.92 / Max: 811.161. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

PostgreSQL

This is a benchmark of PostgreSQL using its integrated pgbench tool for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL 16Scaling Factor: 100 - Clients: 1000 - Mode: Read Only - Average LatencyNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A540.07250.1450.21750.290.3625SE +/- 0.003, N = 3SE +/- 0.002, N = 3SE +/- 0.002, N = 30.3220.3210.3221. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL 16Scaling Factor: 100 - Clients: 1000 - Mode: Read Only - Average LatencyNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5412345Min: 0.32 / Avg: 0.32 / Max: 0.33Min: 0.32 / Avg: 0.32 / Max: 0.33Min: 0.32 / Avg: 0.32 / Max: 0.331. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

easyWave

The easyWave software allows simulating tsunami generation and propagation in the context of early warning systems. easyWave supports OpenMP for CPU multi-threading; GPU ports are also available but are not currently incorporated as part of this test profile. The easyWave tsunami generation software is run with one of the example/reference input files to measure the CPU execution time. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BettereasyWave r34Input: e2Asean Grid + BengkuluSept2007 Source - Time: 2400Noctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5420406080100SE +/- 0.07, N = 3SE +/- 0.22, N = 3SE +/- 0.24, N = 3111.28110.93111.281. (CXX) g++ options: -O3 -fopenmp
OpenBenchmarking.orgSeconds, Fewer Is BettereasyWave r34Input: e2Asean Grid + BengkuluSept2007 Source - Time: 2400Noctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5420406080100Min: 111.13 / Avg: 111.28 / Max: 111.37Min: 110.58 / Avg: 110.93 / Max: 111.34Min: 111 / Avg: 111.28 / Max: 111.751. (CXX) g++ options: -O3 -fopenmp

TiDB Community Server

This is a PingCAP TiDB Community Server benchmark facilitated using the sysbench OLTP database benchmarks. Learn more via the OpenBenchmarking.org test page.
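sysbench drives TiDB through its MySQL-compatible protocol (port 4000 by default); a minimal sketch of the oltp_point_select case at 128 threads, with the host, user, and table sizing details as assumptions.

    import subprocess

    common = ["--mysql-host=127.0.0.1", "--mysql-port=4000", "--mysql-user=root",
              "--mysql-db=sbtest", "--tables=16", "--table-size=100000",
              "--threads=128"]
    # Create and load the test tables, then run the point-select workload for 60 seconds.
    subprocess.run(["sysbench", "oltp_point_select", *common, "prepare"], check=True)
    subprocess.run(["sysbench", "oltp_point_select", *common, "--time=60", "run"], check=True)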

OpenBenchmarking.orgQueries Per Second, More Is BetterTiDB Community Server 7.3Test: oltp_point_select - Threads: 128Noctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5430K60K90K120K150KSE +/- 388.36, N = 3SE +/- 46.36, N = 2SE +/- 187.31, N = 3132132131735131834
OpenBenchmarking.orgQueries Per Second, More Is BetterTiDB Community Server 7.3Test: oltp_point_select - Threads: 128Noctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5420K40K60K80K100KMin: 131387.04 / Avg: 132132.46 / Max: 132694.21Min: 131688.99 / Avg: 131735.35 / Max: 131781.7Min: 131555.37 / Avg: 131834.02 / Max: 132190.19

CloverLeaf

CloverLeaf is a Lagrangian-Eulerian hydrodynamics benchmark. This test profile currently makes use of CloverLeaf's OpenMP version. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterCloverLeaf 1.3Input: clover_bm16Noctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54110220330440550SE +/- 0.87, N = 3SE +/- 0.38, N = 3SE +/- 0.63, N = 3495.43494.92496.401. (F9X) gfortran options: -O3 -march=native -funroll-loops -fopenmp
OpenBenchmarking.orgSeconds, Fewer Is BetterCloverLeaf 1.3Input: clover_bm16Noctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5490180270360450Min: 493.83 / Avg: 495.43 / Max: 496.8Min: 494.45 / Avg: 494.92 / Max: 495.67Min: 495.73 / Avg: 496.4 / Max: 497.651. (F9X) gfortran options: -O3 -march=native -funroll-loops -fopenmp

QMCPACK

QMCPACK is a modern high-performance open-source Quantum Monte Carlo (QMC) simulation code making use of MPI for this benchmark of the H20 example code. QMCPACK is an open-source production level many-body ab initio Quantum Monte Carlo code for computing the electronic structure of atoms, molecules, and solids. QMCPACK is supported by the U.S. Department of Energy. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTotal Execution Time - Seconds, Fewer Is BetterQMCPACK 3.17.1Input: O_ae_pyscf_UHFNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A544080120160200SE +/- 0.74, N = 3SE +/- 1.04, N = 3SE +/- 0.22, N = 3202.36202.11202.701. (CXX) g++ options: -fopenmp -foffload=disable -finline-limit=1000 -fstrict-aliasing -funroll-all-loops -ffast-math -march=native -O3 -lm -ldl
OpenBenchmarking.orgTotal Execution Time - Seconds, Fewer Is BetterQMCPACK 3.17.1Input: O_ae_pyscf_UHFNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A544080120160200Min: 201.58 / Avg: 202.36 / Max: 203.84Min: 200.36 / Avg: 202.11 / Max: 203.97Min: 202.27 / Avg: 202.7 / Max: 202.961. (CXX) g++ options: -fopenmp -foffload=disable -finline-limit=1000 -fstrict-aliasing -funroll-all-loops -ffast-math -march=native -O3 -lm -ldl

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that adds a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor across a wide variety of cryptocurrencies. The benchmark reports the CPU mining hash speed for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 23.5Algorithm: RingcoinNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A542K4K6K8K10KSE +/- 58.30, N = 11SE +/- 1.86, N = 3SE +/- 1.70, N = 37933.767949.677956.521. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 23.5Algorithm: RingcoinNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5414002800420056007000Min: 7709.98 / Avg: 7933.76 / Max: 8432.11Min: 7946.89 / Avg: 7949.67 / Max: 7953.21Min: 7954.43 / Avg: 7956.52 / Max: 7959.891. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Apache IoTDB

Apache IoTDB is a time series database and this benchmark is facilitated using the IoT Benchmark [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.
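iot-benchmark is configured through a config.properties file rather than command-line flags; the property keys below are assumptions mapping the parameters named in the results (device count, sensors, clients, batch size per write) onto that file.

    from pathlib import Path

    # Assumed iot-benchmark property names corresponding to the result parameters above.
    config = {
        "DB_SWITCH": "IoTDB-1.2",        # assumed value naming the IoTDB 1.2 target
        "DEVICE_NUMBER": 800,
        "SENSOR_NUMBER": 500,
        "CLIENT_NUMBER": 400,
        "BATCH_SIZE_PER_WRITE": 100,
    }
    Path("config.properties").write_text(
        "\n".join(f"{key}={value}" for key, value in config.items()) + "\n")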

OpenBenchmarking.orgAverage Latency, Fewer Is BetterApache IoTDB 1.2Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 400Noctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A544080120160200SE +/- 1.25, N = 3SE +/- 3.27, N = 3SE +/- 1.60, N = 3179.67179.17179.25MAX: 26880.71MAX: 27068.91MAX: 27281.2
OpenBenchmarking.orgAverage Latency, Fewer Is BetterApache IoTDB 1.2Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 400Noctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54306090120150Min: 178.08 / Avg: 179.67 / Max: 182.14Min: 172.8 / Avg: 179.17 / Max: 183.65Min: 176.13 / Avg: 179.25 / Max: 181.46

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that adds a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor across a wide variety of cryptocurrencies. The benchmark reports the CPU mining hash speed for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 23.5Algorithm: LBC, LBRY CreditsNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A549K18K27K36K45KSE +/- 5.77, N = 3SE +/- 17.64, N = 3SE +/- 6.67, N = 34135041383414631. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 23.5Algorithm: LBC, LBRY CreditsNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A547K14K21K28K35KMin: 41340 / Avg: 41350 / Max: 41360Min: 41350 / Avg: 41383.33 / Max: 41410Min: 41450 / Avg: 41463.33 / Max: 414701. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54246810SE +/- 0.00704, N = 3SE +/- 0.03103, N = 3SE +/- 0.01034, N = 38.621028.627758.60454MIN: 8.01MIN: 8MIN: 7.951. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A543691215Min: 8.61 / Avg: 8.62 / Max: 8.63Min: 8.57 / Avg: 8.63 / Max: 8.66Min: 8.58 / Avg: 8.6 / Max: 8.621. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54120240360480600SE +/- 1.16, N = 3SE +/- 3.84, N = 3SE +/- 0.86, N = 3550.81552.19551.39MIN: 543.57MIN: 541.87MIN: 543.641. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54100200300400500Min: 548.81 / Avg: 550.81 / Max: 552.84Min: 547.73 / Avg: 552.19 / Max: 559.84Min: 549.67 / Avg: 551.39 / Max: 552.261. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.13Camera: 2 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5413K26K39K52K65KSE +/- 105.01, N = 3SE +/- 63.01, N = 3SE +/- 196.68, N = 3603146018360330
OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.13Camera: 2 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5410K20K30K40K50KMin: 60172 / Avg: 60314 / Max: 60519Min: 60068 / Avg: 60183.33 / Max: 60285Min: 59938 / Avg: 60329.67 / Max: 60557

OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.13Camera: 3 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A548K16K24K32K40KSE +/- 84.89, N = 3SE +/- 23.25, N = 3SE +/- 68.64, N = 3380083808138094
OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.13Camera: 3 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A547K14K21K28K35KMin: 37918 / Avg: 38008.33 / Max: 38178Min: 38038 / Avg: 38080.67 / Max: 38118Min: 37957 / Avg: 38094 / Max: 38170

RocksDB

This is a benchmark of Meta/Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.
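The Random Read figure corresponds to db_bench's readrandom workload; a minimal sketch, assuming a db_bench binary built alongside RocksDB (the key count and thread count here are illustrative only).

    import subprocess

    # Populate a database sequentially, then measure random point lookups across 64 threads.
    subprocess.run(["./db_bench", "--benchmarks=fillseq", "--num=10000000"], check=True)
    subprocess.run(["./db_bench", "--benchmarks=readrandom", "--use_existing_db=1",
                    "--num=10000000", "--threads=64"], check=True)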

OpenBenchmarking.orgOp/s, More Is BetterRocksDB 8.0Test: Random ReadNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5480M160M240M320M400MSE +/- 78230.55, N = 3SE +/- 286715.49, N = 3SE +/- 805455.00, N = 33654989443653831713646939161. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
OpenBenchmarking.orgOp/s, More Is BetterRocksDB 8.0Test: Random ReadNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5460M120M180M240M300MMin: 365342578 / Avg: 365498944.33 / Max: 365581843Min: 365031669 / Avg: 365383170.67 / Max: 365951289Min: 363084607 / Avg: 364693916 / Max: 3655607531. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54120240360480600SE +/- 3.04, N = 3SE +/- 3.03, N = 3SE +/- 2.21, N = 3550.94550.53551.68MIN: 540.62MIN: 539.43MIN: 542.541. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54100200300400500Min: 546.64 / Avg: 550.94 / Max: 556.8Min: 545.24 / Avg: 550.53 / Max: 555.72Min: 548.32 / Avg: 551.68 / Max: 555.861. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Blender

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.6Blend File: Barbershop - Compute: CPU-OnlyNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A544080120160200SE +/- 0.24, N = 3SE +/- 0.16, N = 3SE +/- 0.15, N = 3202.93202.64203.02
OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.6Blend File: Barbershop - Compute: CPU-OnlyNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A544080120160200Min: 202.55 / Avg: 202.93 / Max: 203.38Min: 202.45 / Avg: 202.64 / Max: 202.96Min: 202.77 / Avg: 203.02 / Max: 203.29

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A540.10410.20820.31230.41640.5205SE +/- 0.000671, N = 7SE +/- 0.001085, N = 7SE +/- 0.001306, N = 70.4625040.4616720.461978MIN: 0.44MIN: 0.44MIN: 0.441. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5412345Min: 0.46 / Avg: 0.46 / Max: 0.46Min: 0.46 / Avg: 0.46 / Max: 0.47Min: 0.46 / Avg: 0.46 / Max: 0.471. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Embree

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.3Binary: Pathtracer - Model: Asian Dragon ObjNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A541632486480SE +/- 0.41, N = 3SE +/- 0.20, N = 3SE +/- 0.07, N = 370.9270.8270.94MIN: 69.53 / MAX: 72.34MIN: 70.01 / MAX: 71.92MIN: 70.26 / MAX: 71.81
OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.3Binary: Pathtracer - Model: Asian Dragon ObjNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A541428425670Min: 70.13 / Avg: 70.92 / Max: 71.51Min: 70.57 / Avg: 70.82 / Max: 71.21Min: 70.86 / Avg: 70.94 / Max: 71.07

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library that offers high-performance volume computation kernels and is part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgItems / Sec, More Is BetterOpenVKL 2.0.0Benchmark: vklBenchmarkCPU ScalarNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54130260390520650SE +/- 0.00, N = 3SE +/- 0.33, N = 3SE +/- 0.33, N = 3591591590MIN: 42 / MAX: 10115MIN: 42 / MAX: 10116MIN: 42 / MAX: 10106
OpenBenchmarking.orgItems / Sec, More Is BetterOpenVKL 2.0.0Benchmark: vklBenchmarkCPU ScalarNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54100200300400500Min: 591 / Avg: 591 / Max: 591Min: 590 / Avg: 590.67 / Max: 591Min: 589 / Avg: 589.67 / Max: 590

Embree

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.3Binary: Pathtracer ISPC - Model: Asian DragonNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5420406080100SE +/- 0.13, N = 5SE +/- 0.05, N = 5SE +/- 0.11, N = 586.6986.8386.82MIN: 85.69 / MAX: 88.22MIN: 86.04 / MAX: 88.04MIN: 85.87 / MAX: 88.44
OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.3Binary: Pathtracer ISPC - Model: Asian DragonNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A541632486480Min: 86.29 / Avg: 86.69 / Max: 87.01Min: 86.67 / Avg: 86.83 / Max: 86.92Min: 86.5 / Avg: 86.82 / Max: 87.16

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.13Camera: 1 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5413K26K39K52K65KSE +/- 75.76, N = 3SE +/- 118.55, N = 3SE +/- 63.86, N = 3595305962059596
OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.13Camera: 1 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5410K20K30K40K50KMin: 59395 / Avg: 59530.33 / Max: 59657Min: 59480 / Avg: 59620.33 / Max: 59856Min: 59488 / Avg: 59595.67 / Max: 59709

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that adds a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor across a wide variety of cryptocurrencies. The benchmark reports the CPU mining hash speed for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 23.5Algorithm: GarlicoinNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A543K6K9K12K15KSE +/- 27.28, N = 3SE +/- 31.80, N = 3SE +/- 20.82, N = 31251312513125301. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 23.5Algorithm: GarlicoinNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A542K4K6K8K10KMin: 12460 / Avg: 12513.33 / Max: 12550Min: 12450 / Avg: 12513.33 / Max: 12550Min: 12490 / Avg: 12530 / Max: 125601. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.2.devModel: Weld Porosity Detection FP16-INT8 - Device: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5415003000450060007500SE +/- 12.85, N = 3SE +/- 12.82, N = 3SE +/- 2.08, N = 36874.046879.036870.141. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie
OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.2.devModel: Weld Porosity Detection FP16-INT8 - Device: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5412002400360048006000Min: 6861.19 / Avg: 6874.04 / Max: 6899.73Min: 6857.52 / Avg: 6879.03 / Max: 6901.88Min: 6867.82 / Avg: 6870.14 / Max: 6874.291. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that adds a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor across a wide variety of cryptocurrencies. The benchmark reports the CPU mining hash speed for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 23.5Algorithm: Triple SHA-256, OnecoinNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5460K120K180K240K300KSE +/- 126.62, N = 3SE +/- 90.74, N = 3SE +/- 239.75, N = 32692802689702692731. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 23.5Algorithm: Triple SHA-256, OnecoinNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5450K100K150K200K250KMin: 269090 / Avg: 269280 / Max: 269520Min: 268790 / Avg: 268970 / Max: 269080Min: 268990 / Avg: 269273.33 / Max: 2697501. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

DuckDB

DuckDB is an in-progress SQL OLAP database management system optimized for analytics and features a vectorized and parallel engine. Learn more via the OpenBenchmarking.org test page.
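DuckDB's parallel engine is easy to exercise from Python; a small self-contained sketch (the generated table and thread count are illustrative and not the IMDB dataset used by this test profile).

    import duckdb

    con = duckdb.connect()                      # in-memory database
    con.execute("PRAGMA threads=64")            # let the vectorized engine use 64 threads
    con.execute("CREATE TABLE t AS SELECT range AS id, random() AS x FROM range(10000000)")
    print(con.execute("SELECT count(*), avg(x) FROM t").fetchone())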

OpenBenchmarking.orgSeconds, Fewer Is BetterDuckDB 0.9.1Benchmark: IMDBNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54306090120150SE +/- 0.16, N = 3SE +/- 0.07, N = 3SE +/- 0.07, N = 3123.32123.24123.371. (CXX) g++ options: -O3 -rdynamic -lssl -lcrypto -ldl
OpenBenchmarking.orgSeconds, Fewer Is BetterDuckDB 0.9.1Benchmark: IMDBNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5420406080100Min: 123.12 / Avg: 123.32 / Max: 123.64Min: 123.11 / Avg: 123.24 / Max: 123.36Min: 123.28 / Avg: 123.37 / Max: 123.51. (CXX) g++ options: -O3 -rdynamic -lssl -lcrypto -ldl

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Weld Porosity Detection FP16-INT8 - Device: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A543691215SE +/- 0.02, N = 3SE +/- 0.02, N = 3SE +/- 0.00, N = 39.299.299.30MIN: 5.01 / MAX: 25.36MIN: 4.97 / MAX: 20.13MIN: 4.94 / MAX: 41.531. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie
OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Weld Porosity Detection FP16-INT8 - Device: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A543691215Min: 9.26 / Avg: 9.29 / Max: 9.31Min: 9.26 / Avg: 9.29 / Max: 9.32Min: 9.29 / Avg: 9.3 / Max: 9.31. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that adds a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor across a wide variety of cryptocurrencies. The benchmark reports the CPU mining hash speed for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 23.5Algorithm: MagiNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5430060090012001500SE +/- 1.67, N = 3SE +/- 3.52, N = 3SE +/- 0.71, N = 31607.941609.091607.541. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 23.5Algorithm: MagiNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5430060090012001500Min: 1605.44 / Avg: 1607.94 / Max: 1611.11Min: 1605.11 / Avg: 1609.09 / Max: 1616.11Min: 1606.53 / Avg: 1607.54 / Max: 1608.911. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.2.devModel: Face Detection FP16-INT8 - Device: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A541530456075SE +/- 0.02, N = 3SE +/- 0.04, N = 3SE +/- 0.11, N = 367.0667.0067.051. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie
OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.2.devModel: Face Detection FP16-INT8 - Device: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A541326395265Min: 67.02 / Avg: 67.06 / Max: 67.09Min: 66.92 / Avg: 67 / Max: 67.06Min: 66.9 / Avg: 67.05 / Max: 67.261. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

DaCapo Benchmark

This test runs the DaCapo Benchmarks, which are written in Java and intended to test system/CPU performance with various popular real-world Java workloads. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgmsec, Fewer Is BetterDaCapo Benchmark 23.11Java Test: jMonkeyEngineNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5415003000450060007500SE +/- 1.76, N = 3SE +/- 3.00, N = 3SE +/- 1.67, N = 3690569066910
OpenBenchmarking.orgmsec, Fewer Is BetterDaCapo Benchmark 23.11Java Test: jMonkeyEngineNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5412002400360048006000Min: 6902 / Avg: 6904.67 / Max: 6908Min: 6903 / Avg: 6906 / Max: 6912Min: 6907 / Avg: 6910.33 / Max: 6912

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A540.29780.59560.89341.19121.489SE +/- 0.00098, N = 9SE +/- 0.00099, N = 9SE +/- 0.00125, N = 91.323201.323151.32373MIN: 1.31MIN: 1.31MIN: 1.311. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54246810Min: 1.32 / Avg: 1.32 / Max: 1.33Min: 1.32 / Avg: 1.32 / Max: 1.33Min: 1.32 / Avg: 1.32 / Max: 1.331. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that adds a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor across a wide variety of cryptocurrencies. The benchmark reports the CPU mining hash speed for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 23.5Algorithm: Blake-2 SNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5480K160K240K320K400KSE +/- 140.00, N = 3SE +/- 3.33, N = 3SE +/- 23.33, N = 33641903640773640531. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 23.5Algorithm: Blake-2 SNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5460K120K180K240K300KMin: 364050 / Avg: 364190 / Max: 364470Min: 364070 / Avg: 364076.67 / Max: 364080Min: 364010 / Avg: 364053.33 / Max: 3640901. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Face Detection FP16-INT8 - Device: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54100200300400500SE +/- 0.10, N = 3SE +/- 0.08, N = 3SE +/- 0.70, N = 3475.35475.51475.52MIN: 436.16 / MAX: 492.52MIN: 246.55 / MAX: 497.4MIN: 425.59 / MAX: 494.411. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie
OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Face Detection FP16-INT8 - Device: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5480160240320400Min: 475.2 / Avg: 475.35 / Max: 475.53Min: 475.37 / Avg: 475.51 / Max: 475.64Min: 474.18 / Avg: 475.52 / Max: 476.511. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

Intel Open Image Denoise

OpenBenchmarking.orgImages / Sec, More Is BetterIntel Open Image Denoise 2.1Run: RTLightmap.hdr.4096x4096 - Device: CPU-OnlyNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A540.22730.45460.68190.90921.1365SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 31.011.011.01
OpenBenchmarking.orgImages / Sec, More Is BetterIntel Open Image Denoise 2.1Run: RTLightmap.hdr.4096x4096 - Device: CPU-OnlyNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54246810Min: 1.01 / Avg: 1.01 / Max: 1.01Min: 1.01 / Avg: 1.01 / Max: 1.01Min: 1.01 / Avg: 1.01 / Max: 1.01

OpenBenchmarking.orgImages / Sec, More Is BetterIntel Open Image Denoise 2.1Run: RT.ldr_alb_nrm.3840x2160 - Device: CPU-OnlyNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A540.47930.95861.43791.91722.3965SE +/- 0.00, N = 4SE +/- 0.00, N = 4SE +/- 0.00, N = 42.132.132.13
OpenBenchmarking.orgImages / Sec, More Is BetterIntel Open Image Denoise 2.1Run: RT.ldr_alb_nrm.3840x2160 - Device: CPU-OnlyNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54246810Min: 2.13 / Avg: 2.13 / Max: 2.13Min: 2.13 / Avg: 2.13 / Max: 2.13Min: 2.13 / Avg: 2.13 / Max: 2.13

OpenBenchmarking.orgImages / Sec, More Is BetterIntel Open Image Denoise 2.1Run: RT.hdr_alb_nrm.3840x2160 - Device: CPU-OnlyNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A540.4770.9541.4311.9082.385SE +/- 0.00, N = 4SE +/- 0.00, N = 4SE +/- 0.00, N = 42.122.122.12
OpenBenchmarking.orgImages / Sec, More Is BetterIntel Open Image Denoise 2.1Run: RT.hdr_alb_nrm.3840x2160 - Device: CPU-OnlyNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54246810Min: 2.12 / Avg: 2.12 / Max: 2.12Min: 2.12 / Avg: 2.12 / Max: 2.13Min: 2.12 / Avg: 2.12 / Max: 2.12

CPU Temperature Monitor

OpenBenchmarking.org Celsius - CPU Temperature Monitor - Phoronix Test Suite System Monitoring
Noctua NH-U14S TR5-SP6: Min: 25.5 / Avg: 41.58 / Max: 63.13
NH-D9 TR5-SP6 4U: Min: 25 / Avg: 42.69 / Max: 64.75
Dynatron A54: Min: 30 / Avg: 63.81 / Max: 87.88

CPU Power Consumption Monitor

OpenBenchmarking.org Watts - CPU Power Consumption Monitor - Phoronix Test Suite System Monitoring
Noctua NH-U14S TR5-SP6: Min: 6.05 / Avg: 121.3 / Max: 230.54
NH-D9 TR5-SP6 4U: Min: 6.16 / Avg: 119.77 / Max: 230.54
Dynatron A54: Min: 6.93 / Avg: 126.41 / Max: 230.52

TiDB Community Server

OpenBenchmarking.org Celsius, Fewer Is Better - TiDB Community Server 7.3 - CPU Temperature Monitor
Noctua NH-U14S TR5-SP6: Min: 30.38 / Avg: 35.79 / Max: 45.13
NH-D9 TR5-SP6 4U: Min: 31.88 / Avg: 37.68 / Max: 48.75
Dynatron A54: Min: 43.88 / Avg: 58.81 / Max: 79.63

OpenBenchmarking.org Watts, Fewer Is Better - TiDB Community Server 7.3 - CPU Power Consumption Monitor
Noctua NH-U14S TR5-SP6: Min: 12.49 / Avg: 99.85 / Max: 163.87
NH-D9 TR5-SP6 4U: Min: 12.54 / Avg: 97.56 / Max: 165.55
Dynatron A54: Min: 14.01 / Avg: 104.59 / Max: 173.1

OpenBenchmarking.orgQueries Per Second, More Is BetterTiDB Community Server 7.3Test: oltp_read_write - Threads: 128Noctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5420K40K60K80K100KSE +/- 80.07, N = 3SE +/- 4551.87, N = 9SE +/- 88.74, N = 3925988810692651
OpenBenchmarking.orgQueries Per Second, More Is BetterTiDB Community Server 7.3Test: oltp_read_write - Threads: 128Noctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5416K32K48K64K80KMin: 92510.1 / Avg: 92598.06 / Max: 92757.93Min: 51698.39 / Avg: 88105.71 / Max: 93069.26Min: 92474.41 / Avg: 92651.27 / Max: 92752.63

QMCPACK

OpenBenchmarking.org Celsius, Fewer Is Better - QMCPACK 3.17.1 - CPU Temperature Monitor
Noctua NH-U14S TR5-SP6: Min: 37.1 / Avg: 47.2 / Max: 50.4
NH-D9 TR5-SP6 4U: Min: 39.9 / Avg: 50.4 / Max: 53.3
Dynatron A54: Min: 61.1 / Avg: 74.3 / Max: 80.9

OpenBenchmarking.org Watts, Fewer Is Better - QMCPACK 3.17.1 - CPU Power Consumption Monitor
Noctua NH-U14S TR5-SP6: Min: 11.6 / Avg: 156.0 / Max: 209.6
NH-D9 TR5-SP6 4U: Min: 12.6 / Avg: 156.0 / Max: 210.7
Dynatron A54: Min: 13.9 / Avg: 162.2 / Max: 222.8

OpenBenchmarking.orgTotal Execution Time - Seconds, Fewer Is BetterQMCPACK 3.17.1Input: H4_aeNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A543691215SE +/- 0.15, N = 4SE +/- 0.25, N = 12SE +/- 0.14, N = 413.5513.3812.941. (CXX) g++ options: -fopenmp -foffload=disable -finline-limit=1000 -fstrict-aliasing -funroll-all-loops -ffast-math -march=native -O3 -lm -ldl
OpenBenchmarking.orgTotal Execution Time - Seconds, Fewer Is BetterQMCPACK 3.17.1Input: H4_aeNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A5448121620Min: 13.11 / Avg: 13.55 / Max: 13.77Min: 12.04 / Avg: 13.38 / Max: 15.14Min: 12.58 / Avg: 12.94 / Max: 13.191. (CXX) g++ options: -fopenmp -foffload=disable -finline-limit=1000 -fstrict-aliasing -funroll-all-loops -ffast-math -march=native -O3 -lm -ldl

oneDNN

OpenBenchmarking.org Celsius, Fewer Is Better - oneDNN 3.3 - CPU Temperature Monitor
Noctua NH-U14S TR5-SP6: Min: 28.6 / Avg: 33.5 / Max: 36.1
NH-D9 TR5-SP6 4U: Min: 29.4 / Avg: 34.4 / Max: 36.9
Dynatron A54: Min: 41.9 / Avg: 47.3 / Max: 51.1

OpenBenchmarking.org Watts, Fewer Is Better - oneDNN 3.3 - CPU Power Consumption Monitor
Noctua NH-U14S TR5-SP6: Min: 11.6 / Avg: 80.4 / Max: 171.4
NH-D9 TR5-SP6 4U: Min: 11.3 / Avg: 80.9 / Max: 171.5
Dynatron A54: Min: 12.4 / Avg: 82.5 / Max: 175.0

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A540.26020.52040.78061.04081.301SE +/- 0.022930, N = 15SE +/- 0.026240, N = 15SE +/- 0.020821, N = 121.1324691.1201351.156302MIN: 0.93MIN: 0.92MIN: 0.921. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPUNoctua NH-U14S TR5-SP6NH-D9 TR5-SP6 4UDynatron A54246810Min: 0.98 / Avg: 1.13 / Max: 1.2Min: 0.98 / Avg: 1.12 / Max: 1.21Min: 0.98 / Avg: 1.16 / Max: 1.21. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenBenchmarking.org Celsius, Fewer Is Better - oneDNN 3.3 - CPU Temperature Monitor
Noctua NH-U14S TR5-SP6: Min: 28.4 / Avg: 31.6 / Max: 33.0
NH-D9 TR5-SP6 4U: Min: 29.0 / Avg: 32.3 / Max: 33.8
Dynatron A54: Min: 43.1 / Avg: 45.9 / Max: 47.9

OpenBenchmarking.org Watts, Fewer Is Better - oneDNN 3.3 - CPU Power Consumption Monitor
Noctua NH-U14S TR5-SP6: Min: 12.1 / Avg: 75.8 / Max: 120.8
NH-D9 TR5-SP6 4U: Min: 12.0 / Avg: 75.9 / Max: 120.7
Dynatron A54: Min: 12.1 / Avg: 77.8 / Max: 121.7

oneDNN 3.3 - Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU
OpenBenchmarking.org - ms, Fewer Is Better
  Noctua NH-U14S TR5-SP6: 6.98466   (SE +/- 0.50620, N = 15, MIN: 2.06)   Min: 3.61 / Avg: 6.98 / Max: 9
  NH-D9 TR5-SP6 4U:       6.70884   (SE +/- 0.56116, N = 15, MIN: 2.2)    Min: 3.48 / Avg: 6.71 / Max: 9.08
  Dynatron A54:           7.66713   (SE +/- 0.51948, N = 15, MIN: 1.99)   Min: 3.27 / Avg: 7.67 / Max: 9.29
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN 3.3 - CPU Temperature Monitor
OpenBenchmarking.org - Celsius, Fewer Is Better
                            Min     Avg     Max
  Noctua NH-U14S TR5-SP6    29.5    31.9    33.9
  NH-D9 TR5-SP6 4U          30.6    32.8    34.6
  Dynatron A54              45.9    47.5    51.3

oneDNN 3.3 - CPU Power Consumption Monitor
OpenBenchmarking.org - Watts, Fewer Is Better
                            Min     Avg     Max
  Noctua NH-U14S TR5-SP6    11.1    74.9    126.1
  NH-D9 TR5-SP6 4U          10.9    75.6    121.7
  Dynatron A54              11.9    79.3    151.6

oneDNN 3.3 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU
OpenBenchmarking.org - ms, Fewer Is Better
  Noctua NH-U14S TR5-SP6: 4.55277   (SE +/- 0.21282, N = 15, MIN: 1.47)   Min: 2.52 / Avg: 4.55 / Max: 5.71
  NH-D9 TR5-SP6 4U:       4.15279   (SE +/- 0.29938, N = 15, MIN: 1.31)   Min: 1.83 / Avg: 4.15 / Max: 5.41
  Dynatron A54:           3.89814   (SE +/- 0.31635, N = 12, MIN: 1.18)   Min: 2 / Avg: 3.9 / Max: 5.58
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN 3.3 - CPU Temperature Monitor
OpenBenchmarking.org - Celsius, Fewer Is Better
                            Min     Avg     Max
  Noctua NH-U14S TR5-SP6    30.6    36.2    40.8
  NH-D9 TR5-SP6 4U          31.3    37.5    42.3
  Dynatron A54              45.6    53.3    59.5

oneDNN 3.3 - CPU Power Consumption Monitor
OpenBenchmarking.org - Watts, Fewer Is Better
                            Min     Avg     Max
  Noctua NH-U14S TR5-SP6    11.9    92.1    173.0
  NH-D9 TR5-SP6 4U          11.9    93.2    174.8
  Dynatron A54              11.8    95.8    174.9

oneDNN 3.3 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU
OpenBenchmarking.org - ms, Fewer Is Better
  Noctua NH-U14S TR5-SP6: 2.48457   (SE +/- 0.06453, N = 12, MIN: 1.58)   Min: 1.84 / Avg: 2.48 / Max: 2.63
  NH-D9 TR5-SP6 4U:       2.38005   (SE +/- 0.07713, N = 15, MIN: 1.56)   Min: 1.78 / Avg: 2.38 / Max: 2.63
  Dynatron A54:           2.50929   (SE +/- 0.05467, N = 15, MIN: 1.61)   Min: 1.9 / Avg: 2.51 / Max: 2.68
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

DaCapo Benchmark

DaCapo Benchmark 23.11 - CPU Temperature Monitor
OpenBenchmarking.org - Celsius, Fewer Is Better
                            Min     Avg     Max
  Noctua NH-U14S TR5-SP6    27.0    30.6    31.8
  NH-D9 TR5-SP6 4U          27.5    30.6    32.0
  Dynatron A54              35.8    39.1    41.3

DaCapo Benchmark 23.11 - CPU Power Consumption Monitor
OpenBenchmarking.org - Watts, Fewer Is Better
                            Min     Avg     Max
  Noctua NH-U14S TR5-SP6    11.7    59.0    82.9
  NH-D9 TR5-SP6 4U          12.3    58.4    80.0
  Dynatron A54              11.9    59.8    82.5

DaCapo Benchmark 23.11 - Java Test: H2O In-Memory Platform For Machine Learning
OpenBenchmarking.org - msec, Fewer Is Better
  Noctua NH-U14S TR5-SP6: 4448   (SE +/- 266.33, N = 15)   Min: 3886 / Avg: 4448.07 / Max: 7092
  NH-D9 TR5-SP6 4U:       3998   (SE +/- 37.26, N = 3)     Min: 3956 / Avg: 3997.67 / Max: 4072
  Dynatron A54:           4001   (SE +/- 28.54, N = 3)     Min: 3970 / Avg: 4001 / Max: 4058

DaCapo Benchmark 23.11 - CPU Temperature Monitor
OpenBenchmarking.org - Celsius, Fewer Is Better
                            Min     Avg     Max
  Noctua NH-U14S TR5-SP6    26.5    31.9    33.6
  NH-D9 TR5-SP6 4U          27.9    33.0    34.8
  Dynatron A54              34.4    41.9    45.8

DaCapo Benchmark 23.11 - CPU Power Consumption Monitor
OpenBenchmarking.org - Watts, Fewer Is Better
                            Min     Avg     Max
  Noctua NH-U14S TR5-SP6    10.6    62.2    118.8
  NH-D9 TR5-SP6 4U          12.0    61.7    120.0
  Dynatron A54              10.2    63.1    121.9

DaCapo Benchmark 23.11 - Java Test: PMD Source Code Analyzer
OpenBenchmarking.org - msec, Fewer Is Better
  Noctua NH-U14S TR5-SP6: 1824   (SE +/- 28.43, N = 15)   Min: 1678 / Avg: 1824 / Max: 2029
  NH-D9 TR5-SP6 4U:       1817   (SE +/- 25.71, N = 12)   Min: 1691 / Avg: 1817.33 / Max: 1964
  Dynatron A54:           1825   (SE +/- 29.96, N = 15)   Min: 1677 / Avg: 1824.67 / Max: 2025

205 Results Shown

OpenFOAM:
  drivaerFastback, Small Mesh Size - Mesh Time
  drivaerFastback, Medium Mesh Size - Mesh Time
PostgreSQL:
  100 - 1000 - Read Write - Average Latency
  100 - 1000 - Read Write
C-Blosc:
  blosclz shuffle - 64MB
  blosclz shuffle - 128MB
  blosclz shuffle - 32MB
  blosclz shuffle - 8MB
  blosclz shuffle - 256MB
  blosclz shuffle - 16MB
  blosclz bitshuffle - 256MB
  blosclz noshuffle - 128MB
  blosclz bitshuffle - 64MB
  blosclz bitshuffle - 128MB
  blosclz noshuffle - 256MB
  blosclz noshuffle - 16MB
  blosclz noshuffle - 32MB
  blosclz noshuffle - 64MB
  blosclz bitshuffle - 8MB
  blosclz bitshuffle - 32MB
  blosclz noshuffle - 8MB
  blosclz bitshuffle - 16MB
TiDB Community Server
Apache IoTDB
TiDB Community Server
Apache IoTDB
OpenVINO:
  Handwritten English Recognition FP16-INT8 - CPU:
    ms
    FPS
DaCapo Benchmark:
  Tradebeans
  Apache Lucene Search Engine
  H2 Database Engine
QMCPACK
Blender
oneDNN
OpenVINO:
  Handwritten English Recognition FP16 - CPU:
    ms
    FPS
oneDNN
OpenVINO:
  Vehicle Detection FP16-INT8 - CPU
  Age Gender Recognition Retail 0013 FP16-INT8 - CPU
  Vehicle Detection FP16-INT8 - CPU
DaCapo Benchmark
RocksDB
oneDNN
OpenVINO
DaCapo Benchmark
OpenVINO
DaCapo Benchmark
Timed Gem5 Compilation
CloverLeaf
Blender
oneDNN
OpenVINO:
  Face Detection Retail FP16-INT8 - CPU:
    ms
    FPS
  Face Detection Retail FP16 - CPU:
    FPS
OSPRay Studio
OpenVINO:
  Face Detection Retail FP16 - CPU
  Age Gender Recognition Retail 0013 FP16 - CPU
QuantLib
oneDNN
OpenVINO:
  Weld Porosity Detection FP16 - CPU:
    FPS
    ms
DaCapo Benchmark:
  Zxing 1D/2D Barcode Image Processing
  Jython
OpenVINO:
  Road Segmentation ADAS FP16-INT8 - CPU:
    FPS
    ms
  Face Detection FP16 - CPU:
    FPS
Apache IoTDB
RocksDB
Embree
OpenVINO
DaCapo Benchmark
OpenVINO:
  Road Segmentation ADAS FP16 - CPU:
    ms
    FPS
Cpuminer-Opt
Timed Linux Kernel Compilation
DaCapo Benchmark
RocksDB
Cpuminer-Opt
OpenVINO:
  Machine Translation EN To DE FP16 - CPU:
    FPS
    ms
DaCapo Benchmark
easyWave
OpenVINO
DaCapo Benchmark
OpenVINO:
  Person Detection FP16 - CPU:
    ms
    FPS
OpenFOAM
Timed LLVM Compilation
OpenFOAM
OpenVINO
Blender
OSPRay Studio:
  1 - 4K - 1 - Path Tracer - CPU
  1 - 4K - 16 - Path Tracer - CPU
OpenVINO
Blender
oneDNN
Embree
Timed Godot Game Engine Compilation
OpenVINO
OSPRay Studio
oneDNN
DaCapo Benchmark
Coremark
OSPRay Studio
DaCapo Benchmark
QuantLib
DaCapo Benchmark
OSPRay Studio:
  1 - 1080p - 1 - Path Tracer - CPU
  3 - 4K - 32 - Path Tracer - CPU
DuckDB
QMCPACK
OSPRay Studio
Timed LLVM Compilation
DaCapo Benchmark
oneDNN:
  Convolution Batch Shapes Auto - f32 - CPU
  Recurrent Neural Network Training - u8s8f32 - CPU
OSPRay Studio:
  3 - 4K - 1 - Path Tracer - CPU
  2 - 1080p - 32 - Path Tracer - CPU
  3 - 1080p - 1 - Path Tracer - CPU
7-Zip Compression
Embree
Cpuminer-Opt
DaCapo Benchmark
QMCPACK
Timed Node.js Compilation
OSPRay Studio
easyWave
OSPRay Studio
oneDNN
Timed Linux Kernel Compilation
oneDNN
Embree
OSPRay Studio
Cpuminer-Opt
QMCPACK
DaCapo Benchmark
OSPRay Studio
7-Zip Compression
OpenVKL
CloverLeaf
Cpuminer-Opt
PostgreSQL
OpenVINO:
  Vehicle Detection FP16 - CPU:
    FPS
    ms
oneDNN
PostgreSQL
easyWave
TiDB Community Server
CloverLeaf
QMCPACK
Cpuminer-Opt
Apache IoTDB
Cpuminer-Opt
oneDNN:
  Deconvolution Batch shapes_1d - f32 - CPU
  Recurrent Neural Network Inference - u8s8f32 - CPU
OSPRay Studio:
  2 - 4K - 32 - Path Tracer - CPU
  3 - 4K - 16 - Path Tracer - CPU
RocksDB
oneDNN
Blender
oneDNN
Embree
OpenVKL
Embree
OSPRay Studio
Cpuminer-Opt
OpenVINO
Cpuminer-Opt
DuckDB
OpenVINO
Cpuminer-Opt
OpenVINO
DaCapo Benchmark
oneDNN
Cpuminer-Opt
OpenVINO
Intel Open Image Denoise:
  RTLightmap.hdr.4096x4096 - CPU-Only
  RT.ldr_alb_nrm.3840x2160 - CPU-Only
  RT.hdr_alb_nrm.3840x2160 - CPU-Only
CPU Temperature Monitor:
  Phoronix Test Suite System Monitoring:
    Celsius
    Watts
  CPU Temp Monitor:
    Celsius
  CPU Power Consumption Monitor:
    Watts
TiDB Community Server
QMCPACK:
  CPU Temp Monitor
  CPU Power Consumption Monitor
QMCPACK
oneDNN:
  CPU Temp Monitor
  CPU Power Consumption Monitor
oneDNN
oneDNN:
  CPU Temp Monitor
  CPU Power Consumption Monitor
oneDNN
oneDNN:
  CPU Temp Monitor
  CPU Power Consumption Monitor
oneDNN
oneDNN:
  CPU Temp Monitor
  CPU Power Consumption Monitor
oneDNN
DaCapo Benchmark:
  CPU Temp Monitor
  CPU Power Consumption Monitor
DaCapo Benchmark
DaCapo Benchmark:
  CPU Temp Monitor
  CPU Power Consumption Monitor
DaCapo Benchmark