AM5 Noctua Cooling Benchmarks for a future article by Michael Larabel.
Compare your own system(s) to this result file with the
Phoronix Test Suite by running the command:
phoronix-test-suite benchmark 2301314-NE-AM5NOCTUA33

Wraith

Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-U8K4Qv/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-U8K4Qv/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0xa601203
Python Notes: Python 3.10.7
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
Noctua NH-L9a-AM5

Processor: AMD Ryzen 9 7900 12-Core @ 3.70GHz (12 Cores / 24 Threads), Motherboard: Gigabyte B650M DS3H (F4h BIOS), Chipset: AMD Device 14d8, Memory: 32GB, Disk: 1000GB Sabrent Rocket 4.0 Plus, Graphics: Gigabyte AMD Raphael 512MB (2200/2400MHz), Audio: AMD Rembrandt Radeon HD Audio, Monitor: ASUS VP28U, Network: Realtek RTL8125 2.5GbE
OS: Ubuntu 22.10, Kernel: 6.2.0-060200rc5daily20230129-generic (x86_64), Desktop: GNOME Shell 43.0, Display Server: X Server 1.21.1.4 + Wayland, OpenGL: 4.6 Mesa 23.0.0-devel (git-e20564c 2022-12-12 kinetic-oibaf-ppa) (LLVM 15.0.5 DRM 3.49), Vulkan: 1.3.235, Compiler: GCC 12.2.0, File-System: ext4, Screen Resolution: 3840x2160
Wraith vs. Noctua NH-L9a-AM5 Comparison (Phoronix Test Suite)

[Comparison chart of relative differences versus baseline. The largest deltas between the two coolers were on the order of 2% to 3.1%: Neural Magic DeepSparse NLP Text Classification, DistilBERT mnli (3.1% Synchronous Single-Stream, 2% Asynchronous Multi-Stream), PostgreSQL pgbench read-only throughput and average latency at 100/250/500/800 clients (2.1% to 2.9%), OpenVINO Age Gender Recognition Retail 0013 FP16 - CPU (2.4%), and ClickHouse 100M Rows Hits Dataset, First Run / Cold Cache (2.3%).]
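The percentage deltas in the comparison chart correspond to the ratio between the two coolers' results for each test. As a quick illustrative check (not part of the result file), the 2.6% entry for the 100 - 100 - Read Only pgbench result follows from the two TPS values reported later in this file:

    # Relative delta behind the "100 - 100 - Read Only 2.6%" bar in the comparison above.
    # TPS values are taken from the PostgreSQL results further down in this file.
    wraith_tps = 1304928
    noctua_tps = 1271918

    delta_pct = (max(wraith_tps, noctua_tps) / min(wraith_tps, noctua_tps) - 1) * 100
    print(f"{delta_pct:.1f}%")  # -> 2.6%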
AM5 Noctua Cooling

Result overview, listed as Wraith | Noctua NH-L9a-AM5:

Neural Magic DeepSparse 1.3.2 (items/sec, with latency in ms in parentheses):
  NLP Text Classification, BERT base uncased SST2 - Synchronous Single-Stream: 43.9769 (22.7372) | 44.1370 (22.6547)
  NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream: 64.9493 (92.3699) | 65.3169 (91.8463)
  NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream: 86.0013 (11.6240) | 88.6954 (11.2704)
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream: 128.0667 (46.8367) | 130.6213 (45.9209)
  CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream: 127.1538 (7.8579) | 126.5379 (7.8962)
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream: 202.3210 (29.6412) | 204.1332 (29.3785)
  NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream: 8.0975 (123.4910) | 8.1273 (123.0374)
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream: 8.7362 (684.2819) | 8.7275 (685.0746)
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Synchronous Single-Stream: 51.7417 (19.3204) | 51.9510 (19.2420)
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream: 77.9564 (76.9390) | 78.1921 (76.7125)
  CV Detection, YOLOv5s COCO - Synchronous Single-Stream: 64.2223 (15.5501) | 64.0981 (15.5800)
  CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream: 81.4259 (73.6222) | 80.7746 (74.2171)
  NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream: 8.0881 (123.6341) | 8.1135 (123.2474)
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream: 8.7343 (684.9321) | 8.7367 (684.5231)
  NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Synchronous Single-Stream: 138.8455 (7.1968) | 138.7057 (7.2039)
  NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Asynchronous Multi-Stream: 298.6334 (20.0754) | 298.9896 (20.0515)
  CV Segmentation, 90% Pruned YOLACT Pruned - Synchronous Single-Stream: 24.6066 (40.6243) | 24.6534 (40.5473)
  CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream: 27.8248 (215.6065) | 27.9707 (214.4849)

RocksDB:
  Seq Fill: 1226077 | 1214111
  Rand Fill: 1166613 | 1162966
  Rand Fill Sync: 15722 | 15953
  Rand Read: 105090210 | 105081082
  Read While Writing: 3144715 | 3114168
  Read Rand Write Rand: 2519954 | 2503567
  Update Rand: 768655 | 773461

Selenium (Google Chrome):
  Maze Solver: 6.3 | 6.2
  WASM collisionDetection: 230.81 | 230.82
  WASM imageConvolute: 18.24 | 18.03
  Kraken: 376.7 | 377.6

OpenEMS:
  pyEMS Coupler: 43.83 | 44.29
  openEMS MSL_NotchFilter: 63.76 | 63.29

ClickHouse 22.12.3.5 (Queries Per Minute, Geo Mean):
  100M Rows Hits Dataset, First Run / Cold Cache: 205.19 | 200.54
  100M Rows Hits Dataset, Second Run: 228.14 | 232.45
  100M Rows Hits Dataset, Third Run: 235.24 | 235.09

PostgreSQL 15 pgbench (TPS, with average latency in ms in parentheses):
  100 - 100 - Read Write: 15519 (6.460) | 15681 (6.378)
  100 - 100 - Read Only: 1304928 (0.077) | 1271918 (0.079)
  100 - 250 - Read Write: 16748 (14.928) | 16831 (14.854)
  100 - 250 - Read Only: 1295403 (0.193) | 1265126 (0.197)
  100 - 500 - Read Write: 16437 (30.423) | 16478 (30.372)
  100 - 500 - Read Only: 1268375 (0.394) | 1240365 (0.403)
  100 - 800 - Read Write: 15520 (51.552) | 15605 (51.272)
  100 - 800 - Read Only: 1235664 (0.648) | 1269351 (0.630)
  100 - 1000 - Read Write: 15485 (64.734) | 15435 (64.793)
  100 - 1000 - Read Only: 1252946 (0.798) | 1245750 (0.803)

uvg266 (FPS):
  Bosphorus 1080p - Slow: 45.49 | 45.38
  Bosphorus 1080p - Medium: 50.68 | 50.58
  Bosphorus 1080p - Very Fast: 120.57 | 120.31
  Bosphorus 1080p - Super Fast: 133.17 | 133.13
  Bosphorus 1080p - Ultra Fast: 157.72 | 157.91
  Bosphorus 4K - Slow: 9.54 | 9.53
  Bosphorus 4K - Medium: 10.67 | 10.69
  Bosphorus 4K - Very Fast: 30.46 | 30.44
  Bosphorus 4K - Super Fast: 32.58 | 32.62
  Bosphorus 4K - Ultra Fast: 38.51 | 38.49

Kvazaar (FPS):
  Bosphorus 1080p - Slow: 60.87 | 61.31
  Bosphorus 1080p - Medium: 63.59 | 63.92
  Bosphorus 1080p - Very Fast: 117.17 | 115.14
  Bosphorus 1080p - Super Fast: 177.25 | 177.09
  Bosphorus 1080p - Ultra Fast: 239.06 | 238.53
  Bosphorus 4K - Slow: 14.77 | 14.78
  Bosphorus 4K - Medium: 15.08 | 15.11
  Bosphorus 4K - Very Fast: 36.28 | 36.44
  Bosphorus 4K - Super Fast: 45.75 | 45.78
  Bosphorus 4K - Ultra Fast: 59.63 | 59.49

BRL-CAD:
  VGR Performance Metric: 285194 | 281795

OpenVINO 2022.3 (FPS, with latency in ms in parentheses):
  Face Detection FP16 - CPU: 9.45 (631.51) | 9.48 (629.79)
  Face Detection FP16-INT8 - CPU: 18.61 (321.98) | 18.64 (321.36)
  Age Gender Recognition Retail 0013 FP16 - CPU: 28590.25 (0.42) | 28734.23 (0.41)
  Age Gender Recognition Retail 0013 FP16-INT8 - CPU: 16963.59 (0.7) | 17032.32 (0.7)
  Person Detection FP16 - CPU: 5.10 (1172.39) | 5.12 (1164.82)
  Person Detection FP32 - CPU: 5.06 (1177.31) | 5.00 (1193.03)
  Weld Porosity Detection FP16-INT8 - CPU: 1886.01 (6.36) | 1892.32 (6.34)
  Weld Porosity Detection FP16 - CPU: 944.86 (6.35) | 948.99 (6.32)
  Vehicle Detection FP16-INT8 - CPU: 1195.04 (5.02) | 1201.74 (4.99)
  Vehicle Detection FP16 - CPU: 605.52 (9.9) | 609.33 (9.84)
  Person Vehicle Bike Detection FP16 - CPU: 1101.33 (5.44) | 1110.01 (5.40)
  Machine Translation EN To DE FP16 - CPU: 93.68 (64.02) | 93.82 (63.91)

oneDNN (ms):
  Convolution Batch Shapes Auto - bf16bf16bf16 - CPU: 2.23570 | 2.21247
  Deconvolution Batch shapes_1d - bf16bf16bf16 - CPU: 5.43017 | 5.39296
  Deconvolution Batch shapes_3d - bf16bf16bf16 - CPU: 2.06395 | 2.05485
  IP Shapes 1D - bf16bf16bf16 - CPU: 1.10886 | 1.10190
  IP Shapes 3D - bf16bf16bf16 - CPU: 1.76178 | 1.76405
  Matrix Multiply Batch Shapes Transformer - bf16bf16bf16 - CPU: 0.301826 | 0.301346
  Recurrent Neural Network Training - bf16bf16bf16 - CPU: 1632.36 | 1626.60
  Recurrent Neural Network Inference - bf16bf16bf16 - CPU: 834.030 | 829.470
Neural Magic DeepSparse 1.3.2 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (OpenBenchmarking.org items/sec, More Is Better)
  Wraith: 64.95 (SE +/- 0.39, N = 3)
  Noctua NH-L9a-AM5: 65.32 (SE +/- 0.06, N = 3)

Neural Magic DeepSparse 1.3.2 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (OpenBenchmarking.org items/sec, More Is Better)
  Wraith: 8.0975 (SE +/- 0.0151, N = 3)
  Noctua NH-L9a-AM5: 8.1273 (SE +/- 0.0053, N = 3)

Neural Magic DeepSparse 1.3.2 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (OpenBenchmarking.org items/sec, More Is Better)
  Noctua NH-L9a-AM5: 8.7275 (SE +/- 0.0051, N = 3)
  Wraith: 8.7362 (SE +/- 0.0094, N = 3)

Neural Magic DeepSparse 1.3.2 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream (OpenBenchmarking.org items/sec, More Is Better)
  Wraith: 51.74 (SE +/- 0.13, N = 3)
  Noctua NH-L9a-AM5: 51.95 (SE +/- 0.06, N = 3)

Neural Magic DeepSparse 1.3.2 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (OpenBenchmarking.org items/sec, More Is Better)
  Wraith: 77.96 (SE +/- 0.06, N = 3)
  Noctua NH-L9a-AM5: 78.19 (SE +/- 0.49, N = 3)

Neural Magic DeepSparse 1.3.2 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (OpenBenchmarking.org items/sec, More Is Better)
  Wraith: 8.0881 (SE +/- 0.0049, N = 3)
  Noctua NH-L9a-AM5: 8.1135 (SE +/- 0.0095, N = 3)

Neural Magic DeepSparse 1.3.2 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (OpenBenchmarking.org items/sec, More Is Better)
  Wraith: 8.7343 (SE +/- 0.0075, N = 3)
  Noctua NH-L9a-AM5: 8.7367 (SE +/- 0.0069, N = 3)

Neural Magic DeepSparse 1.3.2 - Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream (OpenBenchmarking.org items/sec, More Is Better)
  Noctua NH-L9a-AM5: 138.71 (SE +/- 0.06, N = 3)
  Wraith: 138.85 (SE +/- 0.40, N = 3)

Neural Magic DeepSparse 1.3.2 - Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream (OpenBenchmarking.org items/sec, More Is Better)
  Wraith: 298.63 (SE +/- 0.37, N = 3)
  Noctua NH-L9a-AM5: 298.99 (SE +/- 0.36, N = 3)
Benchmark: PSPDFKit WASM - Browser: Google Chrome
Wraith: The test quit with a non-zero exit status. E: NameError: name 'StaleElementReferenceException' is not defined
Noctua NH-L9a-AM5: The test quit with a non-zero exit status. E: NameError: name 'StaleElementReferenceException' is not defined
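The NameError above simply means the Selenium test script used StaleElementReferenceException without importing it; it is not a cooling-related result. A minimal sketch of the kind of import/retry pattern that avoids this error, assuming a standard Selenium WebDriver harness (the actual PTS test script is not shown in this file):

    # Hypothetical retry helper; the key point is the import that the failing script lacked.
    from selenium.common.exceptions import StaleElementReferenceException
    from selenium.webdriver.common.by import By

    def click_when_stable(driver, css_selector, retries=3):
        """Re-locate and click an element, retrying if the DOM re-renders underneath us."""
        for _ in range(retries):
            try:
                driver.find_element(By.CSS_SELECTOR, css_selector).click()
                return True
            except StaleElementReferenceException:
                continue  # element went stale; look it up again on the next pass
        return False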
ClickHouse

ClickHouse is an open-source, high-performance OLAP data management system. This test profile uses ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ / https://github.com/ClickHouse/ClickBench/tree/main/clickhouse with the 100 million rows web analytics dataset. The reported value aggregates the individual query run-times as a geometric mean, expressed in queries per minute. Learn more via the OpenBenchmarking.org test page.
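As a minimal illustrative sketch of how a geometric-mean aggregate like this can be formed (the actual aggregation is handled by the test profile; the per-query times below are made up):

    import math

    # Hypothetical per-query run times in seconds for one pass over the query set.
    query_times_s = [0.007, 0.42, 1.3, 0.095, 2.8]

    # Express each query as queries-per-minute, then take the geometric mean.
    qpm = [60.0 / t for t in query_times_s]
    geo_mean_qpm = math.exp(sum(math.log(x) for x in qpm) / len(qpm))
    print(f"{geo_mean_qpm:.2f} queries per minute (geometric mean)")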
ClickHouse 22.12.3.5 - 100M Rows Hits Dataset, First Run / Cold Cache (OpenBenchmarking.org Queries Per Minute, Geo Mean, More Is Better)
  Noctua NH-L9a-AM5: 200.54 (SE +/- 1.23, N = 3, MIN: 14 / MAX: 8571.43)
  Wraith: 205.19 (SE +/- 2.04, N = 5, MIN: 13.88 / MAX: 8571.43)

ClickHouse 22.12.3.5 - 100M Rows Hits Dataset, Second Run (OpenBenchmarking.org Queries Per Minute, Geo Mean, More Is Better)
  Wraith: 228.14 (SE +/- 1.81, N = 5, MIN: 14.25 / MAX: 10000)
  Noctua NH-L9a-AM5: 232.45 (SE +/- 1.43, N = 3, MIN: 14.21 / MAX: 10000)
Result
ClickHouse 22.12.3.5 - 100M Rows Hits Dataset, Third Run (OpenBenchmarking.org Queries Per Minute, Geo Mean, More Is Better)
  Noctua NH-L9a-AM5: 235.09 (SE +/- 3.31, N = 3, MIN: 14.22 / MAX: 10000)
  Wraith: 235.24 (SE +/- 1.63, N = 5, MIN: 14.23 / MAX: 8571.43)
Queries Per Minute, Geo Mean Per Watt

ClickHouse 22.12.3.5 - 100M Rows Hits Dataset, Third Run (OpenBenchmarking.org Queries Per Minute, Geo Mean Per Watt, More Is Better)
  Noctua NH-L9a-AM5: 3.127
  Wraith: 3.241
CPU Peak Freq (Highest CPU Core Frequency)

ClickHouse 22.12.3.5 - CPU Peak Freq (Highest CPU Core Frequency) Monitor (OpenBenchmarking.org Megahertz, More Is Better)
  Noctua NH-L9a-AM5: Min 3000 / Avg 5016 / Max 5397
  Wraith: Min 3000 / Avg 5031 / Max 5440
CPU Power Consumption

ClickHouse 22.12.3.5 - CPU Power Consumption Monitor (OpenBenchmarking.org Watts, Fewer Is Better)
  Noctua NH-L9a-AM5: Min 8.8 / Avg 75.2 / Max 90.2
  Wraith: Min 11.4 / Avg 72.6 / Max 90.2
  1. Noctua NH-L9a-AM5: Approximate power consumption of 18068 Joules per run.
  2. Wraith: Approximate power consumption of 18202 Joules per run.
CPU Temperature

ClickHouse 22.12.3.5 - CPU Temperature Monitor (OpenBenchmarking.org Celsius, Fewer Is Better)
  Wraith: Min 51.13 / Avg 84.97 / Max 92.5
  Noctua NH-L9a-AM5: Min 50.13 / Avg 83.27 / Max 90.13
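The per-watt and Joule figures above are consistent with simple combinations of the monitored data: queries per minute divided by the average CPU power gives the per-watt result, and average power multiplied by run time gives the energy per run. A small sketch reproducing the reported numbers from the values shown above (run durations are inferred, not reported directly):

    # ClickHouse third run, Wraith: 235.24 QPM at an average CPU power of 72.6 W.
    qpm_wraith, avg_w_wraith = 235.24, 72.6
    print(round(qpm_wraith / avg_w_wraith, 3))    # ~3.24  (reported: 3.241)

    # Noctua NH-L9a-AM5: 235.09 QPM at an average of 75.2 W.
    qpm_noctua, avg_w_noctua = 235.09, 75.2
    print(round(qpm_noctua / avg_w_noctua, 3))    # ~3.126 (reported: 3.127)

    # Energy per run = average power x duration, so Wraith's reported ~18202 Joules
    # implies roughly 18202 / 72.6 ~= 251 seconds per run (duration inferred).
    print(round(18202 / avg_w_wraith))            # -> 251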
PostgreSQL

This is a benchmark of PostgreSQL using its integrated pgbench tool for database benchmarking. Learn more via the OpenBenchmarking.org test page.
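For reference, the scaling factor / client count / mode labels below map onto ordinary pgbench options. A hedged sketch of a roughly equivalent standalone invocation (the exact flags and thread counts used by the test profile are not shown in this file, and the database name is made up):

    import subprocess

    DB = "pts_bench"  # hypothetical database name

    # Initialize the dataset at scaling factor 100.
    subprocess.run(["pgbench", "-i", "-s", "100", DB], check=True)

    # Roughly "Scaling Factor: 100 - Clients: 100 - Mode: Read Only":
    # -c sets the client count, -S selects the built-in select-only (read-only) script.
    subprocess.run(["pgbench", "-c", "100", "-j", "12", "-S", "-T", "60", DB], check=True)

    # Dropping -S runs the default TPC-B-like script, i.e. the "Read Write" mode.
    subprocess.run(["pgbench", "-c", "100", "-j", "12", "-T", "60", DB], check=True)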
PostgreSQL 15 - Scaling Factor: 100 - Clients: 100 - Mode: Read Write (OpenBenchmarking.org TPS, More Is Better)
  Wraith: 15519 (SE +/- 234.08, N = 12)
  Noctua NH-L9a-AM5: 15681 (SE +/- 116.23, N = 3)

PostgreSQL 15 - Scaling Factor: 100 - Clients: 100 - Mode: Read Only (OpenBenchmarking.org TPS, More Is Better)
  Noctua NH-L9a-AM5: 1271918 (SE +/- 1401.49, N = 3)
  Wraith: 1304928 (SE +/- 10312.47, N = 3)

PostgreSQL 15 - Scaling Factor: 100 - Clients: 250 - Mode: Read Write (OpenBenchmarking.org TPS, More Is Better)
  Wraith: 16748 (SE +/- 91.73, N = 3)
  Noctua NH-L9a-AM5: 16831 (SE +/- 96.50, N = 3)

PostgreSQL 15 - Scaling Factor: 100 - Clients: 250 - Mode: Read Only (OpenBenchmarking.org TPS, More Is Better)
  Noctua NH-L9a-AM5: 1265126 (SE +/- 14728.11, N = 3)
  Wraith: 1295403 (SE +/- 2942.90, N = 3)

PostgreSQL 15 - Scaling Factor: 100 - Clients: 500 - Mode: Read Write (OpenBenchmarking.org TPS, More Is Better)
  Wraith: 16437 (SE +/- 120.21, N = 3)
  Noctua NH-L9a-AM5: 16478 (SE +/- 149.35, N = 12)

PostgreSQL 15 - Scaling Factor: 100 - Clients: 500 - Mode: Read Only (OpenBenchmarking.org TPS, More Is Better)
  Noctua NH-L9a-AM5: 1240365 (SE +/- 12175.03, N = 3)
  Wraith: 1268375 (SE +/- 11732.69, N = 3)

PostgreSQL 15 - Scaling Factor: 100 - Clients: 800 - Mode: Read Write (OpenBenchmarking.org TPS, More Is Better)
  Wraith: 15520 (SE +/- 118.84, N = 3)
  Noctua NH-L9a-AM5: 15605 (SE +/- 111.95, N = 3)

PostgreSQL 15 - Scaling Factor: 100 - Clients: 800 - Mode: Read Only (OpenBenchmarking.org TPS, More Is Better)
  Wraith: 1235664 (SE +/- 2024.35, N = 3)
  Noctua NH-L9a-AM5: 1269351 (SE +/- 5877.09, N = 3)

PostgreSQL 15 - Scaling Factor: 100 - Clients: 1000 - Mode: Read Write (OpenBenchmarking.org TPS, More Is Better)
  Noctua NH-L9a-AM5: 15435 (SE +/- 92.72, N = 3)
  Wraith: 15485 (SE +/- 217.62, N = 12)

PostgreSQL 15 - Scaling Factor: 100 - Clients: 1000 - Mode: Read Only (OpenBenchmarking.org TPS, More Is Better)
  Noctua NH-L9a-AM5: 1245750 (SE +/- 8627.67, N = 3)
  Wraith: 1252946 (SE +/- 8347.19, N = 3)

1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm (applies to all PostgreSQL results above)
Scaling Factor: 100 - Clients: 5000 - Mode: Read Write
Wraith: The test run did not produce a result. E: pgbench: error: need at least 5003 open files, but system limit is 1024
Noctua NH-L9a-AM5: The test run did not produce a result. E: pgbench: error: need at least 5003 open files, but system limit is 1024
Scaling Factor: 100 - Clients: 5000 - Mode: Read Only
Wraith: The test run did not produce a result. E: pgbench: error: need at least 5003 open files, but system limit is 1024
Noctua NH-L9a-AM5: The test run did not produce a result. E: pgbench: error: need at least 5003 open files, but system limit is 1024
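The two failed 5000-client runs are a file-descriptor limit issue rather than a performance result: pgbench needs one descriptor per client connection plus a few extra, so 5000 clients exceed the default limit of 1024 open files. A small sketch for inspecting (and, within the hard limit, raising) that limit from Python; for a separately launched pgbench process the usual fix is ulimit -n or limits.conf before re-running the test:

    import resource

    # RLIMIT_NOFILE is the per-process cap on open file descriptors that pgbench hit above.
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    print(f"open-files limit: soft={soft}, hard={hard}")

    # The soft limit can be raised up to the hard limit for the current process only;
    # a separate pgbench process still needs ulimit/limits.conf adjusted in its own shell.
    if soft < 5003 <= hard:
        resource.setrlimit(resource.RLIMIT_NOFILE, (5003, hard))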
Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.
OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural network inference, using its built-in benchmarking support to analyze throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.
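The test profile drives OpenVINO's bundled benchmarking support; purely as an illustrative sketch (not the harness used for the results below), a similar throughput/latency measurement can be made with the OpenVINO Python API, assuming a local IR model at model.xml with a 1x3x224x224 input:

    import time
    import numpy as np
    from openvino.runtime import Core  # OpenVINO 2022.x Python API

    core = Core()
    model = core.read_model("model.xml")           # hypothetical IR model path
    compiled = core.compile_model(model, "CPU")    # CPU device, as in the results below
    request = compiled.create_infer_request()

    data = np.random.rand(1, 3, 224, 224).astype(np.float32)  # assumed input shape

    # Time a batch of synchronous inferences for rough latency / FPS numbers.
    n = 100
    start = time.perf_counter()
    for _ in range(n):
        request.infer({0: data})
    elapsed = time.perf_counter() - start
    print(f"avg latency: {1000 * elapsed / n:.2f} ms, throughput: {n / elapsed:.1f} FPS")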
OpenVINO 2022.3 - Model: Face Detection FP16 - Device: CPU (OpenBenchmarking.org FPS, More Is Better)
  Wraith: 9.45 (SE +/- 0.01, N = 3)
  Noctua NH-L9a-AM5: 9.48 (SE +/- 0.00, N = 3)

OpenVINO 2022.3 - Model: Face Detection FP16-INT8 - Device: CPU (OpenBenchmarking.org FPS, More Is Better)
  Wraith: 18.61 (SE +/- 0.06, N = 3)
  Noctua NH-L9a-AM5: 18.64 (SE +/- 0.03, N = 3)

OpenVINO 2022.3 - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (OpenBenchmarking.org FPS, More Is Better)
  Wraith: 28590.25 (SE +/- 57.74, N = 3)
  Noctua NH-L9a-AM5: 28734.23 (SE +/- 41.32, N = 3)

OpenVINO 2022.3 - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (OpenBenchmarking.org FPS, More Is Better)
  Wraith: 16963.59 (SE +/- 22.38, N = 3)
  Noctua NH-L9a-AM5: 17032.32 (SE +/- 28.68, N = 3)

OpenVINO 2022.3 - Model: Person Detection FP16 - Device: CPU (OpenBenchmarking.org FPS, More Is Better)
  Wraith: 5.10 (SE +/- 0.00, N = 3)
  Noctua NH-L9a-AM5: 5.12 (SE +/- 0.02, N = 3)

OpenVINO 2022.3 - Model: Person Detection FP32 - Device: CPU (OpenBenchmarking.org FPS, More Is Better)
  Noctua NH-L9a-AM5: 5.00 (SE +/- 0.04, N = 3)
  Wraith: 5.06 (SE +/- 0.02, N = 3)

OpenVINO 2022.3 - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (OpenBenchmarking.org FPS, More Is Better)
  Wraith: 1886.01 (SE +/- 3.15, N = 3)
  Noctua NH-L9a-AM5: 1892.32 (SE +/- 2.47, N = 3)

OpenVINO 2022.3 - Model: Weld Porosity Detection FP16 - Device: CPU (OpenBenchmarking.org FPS, More Is Better)
  Wraith: 944.86 (SE +/- 1.62, N = 3)
  Noctua NH-L9a-AM5: 948.99 (SE +/- 1.43, N = 3)

OpenVINO 2022.3 - Model: Vehicle Detection FP16-INT8 - Device: CPU (OpenBenchmarking.org FPS, More Is Better)
  Wraith: 1195.04 (SE +/- 2.68, N = 3)
  Noctua NH-L9a-AM5: 1201.74 (SE +/- 2.53, N = 3)

OpenVINO 2022.3 - Model: Vehicle Detection FP16 - Device: CPU (OpenBenchmarking.org FPS, More Is Better)
  Wraith: 605.52 (SE +/- 3.68, N = 3)
  Noctua NH-L9a-AM5: 609.33 (SE +/- 4.07, N = 3)

OpenVINO 2022.3 - Model: Person Vehicle Bike Detection FP16 - Device: CPU (OpenBenchmarking.org FPS, More Is Better)
  Wraith: 1101.33 (SE +/- 1.48, N = 3)
  Noctua NH-L9a-AM5: 1110.01 (SE +/- 1.08, N = 3)

OpenVINO 2022.3 - Model: Machine Translation EN To DE FP16 - Device: CPU (OpenBenchmarking.org FPS, More Is Better)
  Wraith: 93.68 (SE +/- 0.23, N = 3)
  Noctua NH-L9a-AM5: 93.82 (SE +/- 0.09, N = 3)

1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared (applies to all OpenVINO results above)
oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.
CPU Peak Freq (Highest CPU Core Frequency) Monitor (Phoronix Test Suite System Monitoring, OpenBenchmarking.org Megahertz)
  Noctua NH-L9a-AM5: Min 2990 / Avg 4541.6 / Max 5440
  Wraith: Min 2971 / Avg 4519.91 / Max 5443

CPU Power Consumption Monitor (Phoronix Test Suite System Monitoring, OpenBenchmarking.org Watts)
  Noctua NH-L9a-AM5: Min 5.72 / Avg 68.56 / Max 118.2
  Wraith: Min 5.96 / Avg 65.84 / Max 116.13

CPU Temperature Monitor (Phoronix Test Suite System Monitoring, OpenBenchmarking.org Celsius)
  Wraith: Min 36.63 / Avg 73.44 / Max 92.5
  Noctua NH-L9a-AM5: Min 37.13 / Avg 73.09 / Max 90.13
Wraith

Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-U8K4Qv/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-U8K4Qv/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0xa601203
Python Notes: Python 3.10.7
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 29 January 2023 13:04 by user phoronix.
Noctua NH-L9a-AM5

Processor: AMD Ryzen 9 7900 12-Core @ 3.70GHz (12 Cores / 24 Threads), Motherboard: Gigabyte B650M DS3H (F4h BIOS), Chipset: AMD Device 14d8, Memory: 32GB, Disk: 1000GB Sabrent Rocket 4.0 Plus, Graphics: Gigabyte AMD Raphael 512MB (2200/2400MHz), Audio: AMD Rembrandt Radeon HD Audio, Monitor: ASUS VP28U, Network: Realtek RTL8125 2.5GbE
OS: Ubuntu 22.10, Kernel: 6.2.0-060200rc5daily20230129-generic (x86_64), Desktop: GNOME Shell 43.0, Display Server: X Server 1.21.1.4 + Wayland, OpenGL: 4.6 Mesa 23.0.0-devel (git-e20564c 2022-12-12 kinetic-oibaf-ppa) (LLVM 15.0.5 DRM 3.49), Vulkan: 1.3.235, Compiler: GCC 12.2.0, File-System: ext4, Screen Resolution: 3840x2160
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-U8K4Qv/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-U8K4Qv/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0xa601203
Python Notes: Python 3.10.7
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 30 January 2023 15:01 by user phoronix.