2 x INTEL XEON PLATINUM 8592 testing by Michael Larabel for a future article.
Default Kernel Notes: Transparent Huge Pages: madviseCompiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -vProcessor Notes: Scaling Governor: intel_pstate performance (EPP: performance) - CPU Microcode: 0x21000161Java Notes: OpenJDK Runtime Environment (build 11.0.21+9-post-Ubuntu-0ubuntu123.10)Python Notes: Python 3.11.6Security Notes: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected
Optimized Power Mode Processor: 2 x INTEL XEON PLATINUM 8592+ @ 3.90GHz (128 Cores / 256 Threads), Motherboard: Quanta Cloud S6Q-MB-MPS (3B05.TEL4P1 BIOS), Chipset: Intel Device 1bce, Memory: 1008GB, Disk: 3201GB Micron_7450_MTFDKCB3T2TFS, Graphics: ASPEED, Network: 2 x Intel X710 for 10GBASE-T
OS: Ubuntu 23.10, Kernel: 6.5.0-13-generic (x86_64), Compiler: GCC 13.2.0, File-System: ext4, Screen Resolution: 1920x1080
Intel Optimized Power Mode Xeon Platinum Benchmarks OpenBenchmarking.org Phoronix Test Suite 2 x INTEL XEON PLATINUM 8592+ @ 3.90GHz (128 Cores / 256 Threads) Quanta Cloud S6Q-MB-MPS (3B05.TEL4P1 BIOS) Intel Device 1bce 1008GB 3201GB Micron_7450_MTFDKCB3T2TFS ASPEED 2 x Intel X710 for 10GBASE-T Ubuntu 23.10 6.5.0-13-generic (x86_64) GCC 13.2.0 ext4 1920x1080 Processor Motherboard Chipset Memory Disk Graphics Network OS Kernel Compiler File-System Screen Resolution Intel Optimized Power Mode Xeon Platinum Benchmarks Performance System Logs - Transparent Huge Pages: madvise - --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v - Scaling Governor: intel_pstate performance (EPP: performance) - CPU Microcode: 0x21000161 - OpenJDK Runtime Environment (build 11.0.21+9-post-Ubuntu-0ubuntu123.10) - Python 3.11.6 - gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected
Default vs. Optimized Power Mode Comparison Phoronix Test Suite Baseline +13.5% +13.5% +27% +27% +40.5% +40.5% +54% +54% 15% 10.7% 8.4% 4.6% 4.2% 3.7% 2.7% 2.5% 1:10 54.1% CPU - 256 - ResNet-50 41.2% CPU - 256 - ResNet-152 39.3% CPU - 64 - ResNet-152 38.5% 500 35.1% CPU - 64 - ResNet-50 33.7% motorBike - Execution Time 33.3% 1000 31.7% GET - 50 24.5% 12 - Compression Speed 22.6% CPU - 64 - Efficientnet_v2_l 20.2% GhostRider - 1M 19.4% Bosphorus 4K - Faster 19.1% Bosphorus 4K - Ultra Fast 18.3% CPU - 256 - Efficientnet_v2_l 18.3% 19 - Compression Speed 17.9% 10 - G.M.O.A.Q 16.9% e.G.B.S - 1200 16.5% 19, Long Mode - Compression Speed 15.7% Barbershop - CPU-Only Preset 12 - Bosphorus 1080p 14.4% e.G.B.S - 2400 14.4% IMDB 14.2% Bosphorus 4K - Super Fast 14.2% Bosphorus 4K - Very Fast 14.2% Create - 100 - 100000 14% 500 12.4% 19 - D.S 12.3% Preset 8 - Bosphorus 1080p 12.3% Preset 8 - Bosphorus 4K 12.2% libx265 - Live 11.5% Bosphorus 1080p - Faster 10.9% 12 - D.S 10.9% 1B 32 - 256 - 57 10.5% Bosphorus 1080p - Fast 10.3% VMAF Optimized - Bosphorus 1080p 10.1% 64 - 256 - 57 9.9% Preset 12 - Bosphorus 4K 9.9% Preset 4 - Bosphorus 4K 9.8% Preset 13 - Bosphorus 4K 9.6% Preset 13 - Bosphorus 1080p 9.4% R.O.R.S.I 9.4% Bosphorus 4K - Ultra Fast 9.4% Bosphorus 1080p - Very Fast 9.2% Bosphorus 4K - Fast 9.2% Bosphorus 1080p - Ultra Fast 8.7% Bosphorus 1080p - Super Fast 8.6% d.S.M.S - Mesh Time 19, Long Mode - D.S 8.2% P.S.O - Bosphorus 1080p 8% Time To Compile 7.5% 100 - 1000 - Read Write - Average Latency 7.2% 100 - 1000 - Read Write 7.1% V.Q.O - Bosphorus 1080p 6.9% Bosphorus 1080p - Ultra Fast 6.9% Bosphorus 4K - Very Fast 6.8% Preset 4 - Bosphorus 1080p 6.5% 1000 5.9% Unix Makefiles 5.9% TPC-H Parquet 5.8% Bosphorus 4K - Super Fast 5.7% Compression Rating 5.4% libx265 - Platform 5.2% 1:100 5.1% Bosphorus 1080p - Super Fast 5% 128 - 256 - 57 5% Bosphorus 1080p - Very Fast 4.8% SET - 500 4.8% R.5.S.I - A.M.S 4.7% R.5.S.I - A.M.S 4.7% Classroom - CPU-Only libx265 - Video On Demand 4.2% RT.ldr_alb_nrm.3840x2160 - CPU-Only libx265 - Upload 3.8% 128 - 256 - 32 A.G.R.R.0.F.I - CPU 3.6% 32 - 256 - 512 3.4% 256 - 256 - 57 3.2% defconfig 3% GET - 500 2.8% 128 - 256 - 512 A.G.R.R.0.F.I - CPU 2.5% Bumper Beam Ninja 2.5% 10 - Q01 19% 10 - Q02 40.2% 10 - Q03 19.3% 10 - Q04 13.7% 10 - Q05 39.4% 10 - Q06 26.8% 10 - Q07 22.5% 10 - Q08 30.5% 10 - Q09 23.3% 10 - Q10 18.4% 10 - Q11 33.9% 10 - Q12 26.8% 10 - Q13 15.8% 10 - Q14 13.3% 10 - Q15 8.7% 10 - Q16 13.8% 10 - Q17 3.4% 10 - Q18 10.1% 10 - Q19 12.6% 10 - Q20 19.6% 10 - Q21 17.7% 10 - Q22 10.5% Memcached PyTorch PyTorch PyTorch nginx PyTorch OpenFOAM nginx Redis Zstd Compression PyTorch Xmrig VVenC uvg266 PyTorch Zstd Compression Apache Spark TPC-H easyWave Zstd Compression Blender SVT-AV1 easyWave DuckDB uvg266 uvg266 Apache Hadoop Apache HTTP Server Zstd Compression SVT-AV1 SVT-AV1 FFmpeg VVenC Zstd Compression Y-Cruncher Liquid-DSP VVenC SVT-VP9 Liquid-DSP SVT-AV1 SVT-AV1 SVT-AV1 SVT-AV1 OpenRadioss Kvazaar uvg266 VVenC uvg266 uvg266 OpenFOAM Zstd Compression SVT-VP9 Timed GCC Compilation PostgreSQL PostgreSQL SVT-VP9 Kvazaar Kvazaar SVT-AV1 Apache HTTP Server Timed LLVM Compilation DuckDB Kvazaar 7-Zip Compression FFmpeg Memcached Kvazaar Liquid-DSP Kvazaar Redis Neural Magic DeepSparse Neural Magic DeepSparse Blender FFmpeg Intel Open Image Denoise FFmpeg Liquid-DSP OpenVINO Liquid-DSP Liquid-DSP Timed Linux Kernel Compilation Redis Liquid-DSP OpenVINO OpenRadioss Timed LLVM Compilation Apache Spark TPC-H Apache Spark TPC-H Apache Spark TPC-H Apache Spark TPC-H Apache Spark TPC-H Apache Spark TPC-H Apache Spark TPC-H Apache Spark TPC-H Apache Spark TPC-H Apache Spark TPC-H Apache Spark TPC-H Apache Spark TPC-H Apache Spark TPC-H Apache Spark TPC-H Apache Spark TPC-H Apache Spark TPC-H Apache Spark TPC-H Apache Spark TPC-H Apache Spark TPC-H Apache Spark TPC-H Apache Spark TPC-H Apache Spark TPC-H Default Optimized Power Mode
Intel Optimized Power Mode Xeon Platinum Benchmarks xmrig: KawPow - 1M xmrig: Monero - 1M xmrig: Wownero - 1M xmrig: GhostRider - 1M xmrig: CryptoNight-Heavy - 1M xmrig: CryptoNight-Femto UPX2 - 1M quantlib: Multi-Threaded quantlib: Single-Threaded openradioss: Bumper Beam openradioss: Chrysler Neon 1M openradioss: Cell Phone Drop Test openradioss: Bird Strike on Windshield openradioss: Rubber O-Ring Seal Installation openradioss: INIVOL and Fluid Structure Interaction Drop Container deepsparse: NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream deepsparse: NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream deepsparse: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Stream deepsparse: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Stream deepsparse: ResNet-50, Baseline - Asynchronous Multi-Stream deepsparse: ResNet-50, Baseline - Asynchronous Multi-Stream deepsparse: ResNet-50, Sparse INT8 - Asynchronous Multi-Stream deepsparse: ResNet-50, Sparse INT8 - Asynchronous Multi-Stream deepsparse: CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream deepsparse: CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream deepsparse: BERT-Large, NLP Question Answering - Asynchronous Multi-Stream deepsparse: BERT-Large, NLP Question Answering - Asynchronous Multi-Stream deepsparse: CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream deepsparse: CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream deepsparse: CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Stream deepsparse: CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Stream deepsparse: NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream deepsparse: NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream deepsparse: CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream deepsparse: CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream deepsparse: BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Stream deepsparse: BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Stream deepsparse: NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream deepsparse: NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream pytorch: CPU - 64 - ResNet-50 pytorch: CPU - 256 - ResNet-50 pytorch: CPU - 64 - ResNet-152 pytorch: CPU - 256 - ResNet-152 pytorch: CPU - 64 - Efficientnet_v2_l pytorch: CPU - 256 - Efficientnet_v2_l openvino: Person Detection FP16 - CPU openvino: Person Detection FP16 - CPU openvino: Face Detection FP16-INT8 - CPU openvino: Face Detection FP16-INT8 - CPU openvino: Vehicle Detection FP16-INT8 - CPU openvino: Vehicle Detection FP16-INT8 - CPU openvino: Face Detection Retail FP16-INT8 - CPU y-cruncher: 1B openvino: Face Detection Retail FP16-INT8 - CPU openvino: Road Segmentation ADAS FP16-INT8 - CPU openvino: Road Segmentation ADAS FP16-INT8 - CPU openvino: Machine Translation EN To DE FP16 - CPU openvino: Machine Translation EN To DE FP16 - CPU openvino: Weld Porosity Detection FP16-INT8 - CPU openvino: Weld Porosity Detection FP16-INT8 - CPU openvino: Person Vehicle Bike Detection FP16 - CPU openvino: Person Vehicle Bike Detection FP16 - CPU openvino: Handwritten English Recognition FP16-INT8 - CPU openvino: Handwritten English Recognition FP16-INT8 - CPU openvino: Age Gender Recognition Retail 0013 FP16-INT8 - CPU openvino: Age Gender Recognition Retail 0013 FP16-INT8 - CPU openfoam: motorBike - Execution Time openfoam: drivaerFastback, Small Mesh Size - Mesh Time openfoam: drivaerFastback, Small Mesh Size - Execution Time qmcpack: Li2_STO_ae compress-7zip: Compression Rating compress-7zip: Decompression Rating build-llvm: Ninja build-llvm: Unix Makefiles compress-zstd: 12 - Compression Speed compress-zstd: 12 - Decompression Speed compress-zstd: 19 - Compression Speed compress-zstd: 19 - Decompression Speed compress-zstd: 19, Long Mode - Compression Speed compress-zstd: 19, Long Mode - Decompression Speed build-gcc: Time To Compile build-linux-kernel: defconfig build-linux-kernel: allmodconfig kvazaar: Bosphorus 4K - Slow kvazaar: Bosphorus 4K - Medium kvazaar: Bosphorus 1080p - Slow kvazaar: Bosphorus 1080p - Medium kvazaar: Bosphorus 4K - Very Fast kvazaar: Bosphorus 4K - Super Fast kvazaar: Bosphorus 4K - Ultra Fast kvazaar: Bosphorus 1080p - Very Fast kvazaar: Bosphorus 1080p - Super Fast kvazaar: Bosphorus 1080p - Ultra Fast svt-vp9: VMAF Optimized - Bosphorus 1080p svt-vp9: PSNR/SSIM Optimized - Bosphorus 1080p svt-vp9: Visual Quality Optimized - Bosphorus 1080p svt-av1: Preset 4 - Bosphorus 4K svt-av1: Preset 8 - Bosphorus 4K svt-av1: Preset 12 - Bosphorus 4K svt-av1: Preset 13 - Bosphorus 4K svt-av1: Preset 4 - Bosphorus 1080p svt-av1: Preset 8 - Bosphorus 1080p svt-av1: Preset 12 - Bosphorus 1080p svt-av1: Preset 13 - Bosphorus 1080p blender: BMW27 - CPU-Only blender: Classroom - CPU-Only blender: Barbershop - CPU-Only ffmpeg: libx265 - Live ffmpeg: libx265 - Upload ffmpeg: libx265 - Platform ffmpeg: libx265 - Video On Demand uvg266: Bosphorus 4K - Slow uvg266: Bosphorus 4K - Medium uvg266: Bosphorus 1080p - Slow uvg266: Bosphorus 1080p - Medium uvg266: Bosphorus 4K - Very Fast uvg266: Bosphorus 4K - Super Fast uvg266: Bosphorus 4K - Ultra Fast uvg266: Bosphorus 1080p - Very Fast uvg266: Bosphorus 1080p - Super Fast uvg266: Bosphorus 1080p - Ultra Fast vvenc: Bosphorus 4K - Fast vvenc: Bosphorus 4K - Faster vvenc: Bosphorus 1080p - Fast vvenc: Bosphorus 1080p - Faster oidn: RT.ldr_alb_nrm.3840x2160 - CPU-Only easywave: e2Asean Grid + BengkuluSept2007 Source - 2400 easywave: e2Asean Grid + BengkuluSept2007 Source - 1200 liquid-dsp: 32 - 256 - 32 liquid-dsp: 32 - 256 - 57 liquid-dsp: 64 - 256 - 32 liquid-dsp: 64 - 256 - 57 liquid-dsp: 128 - 256 - 32 liquid-dsp: 128 - 256 - 57 liquid-dsp: 256 - 256 - 32 liquid-dsp: 256 - 256 - 57 liquid-dsp: 32 - 256 - 512 liquid-dsp: 64 - 256 - 512 liquid-dsp: 128 - 256 - 512 liquid-dsp: 256 - 256 - 512 spark-tpch: 10 - Geometric Mean Of All Queries duckdb: IMDB duckdb: TPC-H Parquet nginx: 500 nginx: 1000 apache: 500 apache: 1000 hadoop: Create - 100 - 100000 memcached: 1:10 memcached: 1:100 redis: GET - 50 redis: SET - 50 redis: GET - 500 redis: SET - 500 pgbench: 100 - 1000 - Read Only pgbench: 100 - 1000 - Read Only - Average Latency pgbench: 100 - 1000 - Read Write pgbench: 100 - 1000 - Read Write - Average Latency spark-tpch: 10 - Q01 spark-tpch: 10 - Q02 spark-tpch: 10 - Q03 spark-tpch: 10 - Q04 spark-tpch: 10 - Q05 spark-tpch: 10 - Q06 spark-tpch: 10 - Q07 spark-tpch: 10 - Q08 spark-tpch: 10 - Q09 spark-tpch: 10 - Q10 spark-tpch: 10 - Q11 spark-tpch: 10 - Q12 spark-tpch: 10 - Q13 spark-tpch: 10 - Q14 spark-tpch: 10 - Q15 spark-tpch: 10 - Q16 spark-tpch: 10 - Q17 spark-tpch: 10 - Q18 spark-tpch: 10 - Q19 spark-tpch: 10 - Q20 spark-tpch: 10 - Q21 spark-tpch: 10 - Q22 Default Optimized Power Mode 70383.2 70469.8 75002.3 16522.5 70573.0 69677.8 388447.2 3650.5 84.78 85.82 25.74 111.23 81.69 97.87 137.3439 463.0854 4580.6318 13.9451 1778.5945 35.9314 11244.7941 5.6732 834.1767 76.5445 160.4064 397.7731 1791.2737 35.6695 862.1382 74.0722 1235.4063 51.7033 182.1591 350.9008 1899.2223 33.6385 137.3540 462.9932 43.91 44.58 17.67 17.85 2.62 2.59 748.10 42.72 539.69 236.70 8930.00 14.32 24795.17 5.199 5.15 2390.25 53.51 1106.32 28.85 48795.97 2.45 10205.61 12.51 3330.19 38.41 121577.55 0.40 4.13288 30.04223 23.469822 94.384 689516 637311 96.518 177.431 333.6 1422.7 19.1 1173.5 9.90 1183.8 706.259 23.756 149.792 41.45 41.99 127.25 131.14 69.36 71.05 77.53 268.49 263.88 282.05 542.98 542.17 463.27 7.297 69.466 218.452 218.763 20.952 142.265 502.707 628.755 12.89 34.94 149.49 131.81 27.00 53.98 53.63 27.20 29.79 80.30 88.46 56.06 57.87 59.59 160.63 158.23 166.47 6.921 10.836 19.915 32.091 4.53 89.984 36.546 1211766667 1429100000 2476033333 2810166667 3611866667 4329866667 6185600000 5835500000 508245000 1012336667 1473900000 2144566667 8.35919683 121.967 148.55 273684.71 243384.19 90823.06 80826.71 5490 3360511.12 3441439.71 4744838.9 3060186.42 3240001.58 2503177.67 893089 1.124 63962 15.634 8.84185569 7.69811408 12.59076977 8.17697223 12.94428349 2.56617117 11.47461828 12.05303923 17.56128820 11.87912210 5.99766175 7.93393326 5.64402056 5.77844492 4.24561580 4.97244104 12.60073821 12.89416568 5.09458462 8.56171989 26.48187955 4.49565951 69196.7 70298.8 75701.7 13832.4 70338.6 69847.9 391468.4 3641.4 82.73 86.30 25.37 111.73 89.37 98.49 138.0114 461.1469 4590.3375 13.9095 1768.9108 36.1286 10738.1352 5.9385 834.2706 76.5042 162.5895 392.4515 1776.5252 35.9561 863.3474 73.9896 1237.0257 51.6176 183.7104 346.7411 1883.6573 33.9032 138.1497 460.6851 32.83 31.58 12.76 12.81 2.18 2.19 750.43 42.59 538.20 237.36 8943.88 14.30 24922.40 4.697 5.13 2392.38 53.45 1100.76 28.99 49494.06 2.42 10168.06 12.55 3297.81 38.79 117369.55 0.41 5.50897 27.70873 23.348833 92.672 654459 638136 98.905 187.837 272.2 1282.7 16.2 1044.6 8.56 1094.2 759.216 24.476 151.276 41.13 41.69 125.18 130.64 64.97 67.21 70.87 256.19 251.33 263.96 493.26 502.05 433.53 6.645 61.914 198.742 199.549 19.669 126.697 439.477 574.599 12.67 33.40 129.94 118.24 26.02 51.31 51.45 26.71 29.62 78.93 86.78 49.10 50.68 50.36 147.08 145.75 153.17 6.340 9.096 18.053 28.926 4.72 102.928 42.559 1204500000 1293333333 2460866667 2556573333 3745233333 4125000000 6301483333 5656600000 491426667 999663333 1513466667 2141000000 9.77304127 139.326 157.101 202507.42 184861.99 80817.73 76315.65 4814 2180648.21 3274425.32 3811819.50 3050498.00 3152359.75 2389636.67 878468 1.141 59695 16.752 10.52531964 10.79421711 15.01887308 9.29476929 18.04154778 3.25317642 14.05770493 15.72322082 21.65995761 14.06280640 8.03043699 10.05769416 6.53481947 6.54833514 4.61490004 5.65648528 13.02843149 14.19987583 5.73738200 10.24267319 31.16119112 4.96957016 OpenBenchmarking.org
QuantLib QuantLib is an open-source library/framework around quantitative finance for modeling, trading and risk management scenarios. QuantLib is written in C++ with Boost and its built-in benchmark used reports the QuantLib Benchmark Index benchmark score. Learn more via the OpenBenchmarking.org test page.
OpenRadioss OpenRadioss is an open-source AGPL-licensed finite element solver for dynamic event analysis OpenRadioss is based on Altair Radioss and open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/ and https://github.com/OpenRadioss/ModelExchange/tree/main/Examples. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.
Neural Magic DeepSparse This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.6 Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream Optimized Power Mode Default 30 60 90 120 150 SE +/- 0.12, N = 3 SE +/- 0.21, N = 3 138.01 137.34
OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.6 Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream Optimized Power Mode Default 1000 2000 3000 4000 5000 SE +/- 2.64, N = 3 SE +/- 6.99, N = 3 4590.34 4580.63
OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.6 Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream Optimized Power Mode Default 40 80 120 160 200 SE +/- 0.33, N = 3 SE +/- 1.36, N = 3 162.59 160.41
OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.6 Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream Optimized Power Mode Default 400 800 1200 1600 2000 SE +/- 3.31, N = 3 SE +/- 7.80, N = 3 1776.53 1791.27
OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.6 Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream Optimized Power Mode Default 8 16 24 32 40 SE +/- 0.07, N = 3 SE +/- 0.16, N = 3 35.96 35.67
OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.6 Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream Optimized Power Mode Default 200 400 600 800 1000 SE +/- 0.10, N = 3 SE +/- 0.30, N = 3 863.35 862.14
OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.6 Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream Optimized Power Mode Default 300 600 900 1200 1500 SE +/- 2.51, N = 3 SE +/- 1.85, N = 3 1237.03 1235.41
OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.6 Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream Optimized Power Mode Default 40 80 120 160 200 SE +/- 0.02, N = 3 SE +/- 0.43, N = 3 183.71 182.16
OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.6 Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream Optimized Power Mode Default 400 800 1200 1600 2000 SE +/- 0.88, N = 3 SE +/- 1.67, N = 3 1883.66 1899.22
OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.6 Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream Optimized Power Mode Default 30 60 90 120 150 SE +/- 0.30, N = 3 SE +/- 0.09, N = 3 138.15 137.35
OpenVINO This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Person Detection FP16 - Device: CPU Optimized Power Mode Default 160 320 480 640 800 SE +/- 0.90, N = 3 SE +/- 0.71, N = 3 750.43 748.10 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Face Detection FP16-INT8 - Device: CPU Optimized Power Mode Default 120 240 360 480 600 SE +/- 1.35, N = 3 SE +/- 0.65, N = 3 538.20 539.69 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Vehicle Detection FP16-INT8 - Device: CPU Optimized Power Mode Default 2K 4K 6K 8K 10K SE +/- 4.32, N = 3 SE +/- 5.30, N = 3 8943.88 8930.00 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Face Detection Retail FP16-INT8 - Device: CPU Optimized Power Mode Default 5K 10K 15K 20K 25K SE +/- 6.86, N = 3 SE +/- 42.09, N = 3 24922.40 24795.17 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Road Segmentation ADAS FP16-INT8 - Device: CPU Optimized Power Mode Default 500 1000 1500 2000 2500 SE +/- 7.16, N = 3 SE +/- 3.42, N = 3 2392.38 2390.25 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Machine Translation EN To DE FP16 - Device: CPU Optimized Power Mode Default 200 400 600 800 1000 SE +/- 3.05, N = 3 SE +/- 9.12, N = 3 1100.76 1106.32 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Weld Porosity Detection FP16-INT8 - Device: CPU Optimized Power Mode Default 11K 22K 33K 44K 55K SE +/- 229.19, N = 3 SE +/- 490.25, N = 3 49494.06 48795.97 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Person Vehicle Bike Detection FP16 - Device: CPU Optimized Power Mode Default 2K 4K 6K 8K 10K SE +/- 6.61, N = 3 SE +/- 7.22, N = 3 10168.06 10205.61 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Handwritten English Recognition FP16-INT8 - Device: CPU Optimized Power Mode Default 700 1400 2100 2800 3500 SE +/- 6.73, N = 3 SE +/- 7.06, N = 3 3297.81 3330.19 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU Optimized Power Mode Default 30K 60K 90K 120K 150K SE +/- 1361.46, N = 3 SE +/- 775.18, N = 3 117369.55 121577.55 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie
OpenFOAM OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better OpenFOAM 10 Input: drivaerFastback, Small Mesh Size - Mesh Time Optimized Power Mode Default 7 14 21 28 35 27.71 30.04 1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfiniteVolume -lmeshTools -lparallel -llagrangian -lregionModels -lgenericPatchFields -lOpenFOAM -ldl -lm
QMCPACK QMCPACK is a modern high-performance open-source Quantum Monte Carlo (QMC) simulation code making use of MPI for this benchmark of the H20 example code. QMCPACK is an open-source production level many-body ab initio Quantum Monte Carlo code for computing the electronic structure of atoms, molecules, and solids. QMCPACK is supported by the U.S. Department of Energy. Learn more via the OpenBenchmarking.org test page.
Zstd Compression This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org MB/s, More Is Better Zstd Compression 1.5.4 Compression Level: 12 - Compression Speed Optimized Power Mode Default 70 140 210 280 350 SE +/- 2.98, N = 5 SE +/- 3.46, N = 3 272.2 333.6 1. (CC) gcc options: -O3 -pthread -lz -llzma
OpenBenchmarking.org MB/s, More Is Better Zstd Compression 1.5.4 Compression Level: 19 - Compression Speed Optimized Power Mode Default 5 10 15 20 25 SE +/- 0.19, N = 3 SE +/- 0.06, N = 3 16.2 19.1 1. (CC) gcc options: -O3 -pthread -lz -llzma
OpenBenchmarking.org MB/s, More Is Better Zstd Compression 1.5.4 Compression Level: 19, Long Mode - Compression Speed Optimized Power Mode Default 3 6 9 12 15 SE +/- 0.01, N = 3 SE +/- 0.02, N = 3 8.56 9.90 1. (CC) gcc options: -O3 -pthread -lz -llzma
Timed Linux Kernel Compilation This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested or alternatively an allmodconfig for building all possible kernel modules for the build. Learn more via the OpenBenchmarking.org test page.
Kvazaar This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.
SVT-VP9 This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.
Tuning: VMAF Optimized - Input: Bosphorus 4K
Default: The test quit with a non-zero exit status.
Optimized Power Mode: The test quit with a non-zero exit status.
Tuning: PSNR/SSIM Optimized - Input: Bosphorus 4K
Default: The test quit with a non-zero exit status.
Optimized Power Mode: The test quit with a non-zero exit status.
Tuning: Visual Quality Optimized - Input: Bosphorus 4K
Default: The test quit with a non-zero exit status.
Optimized Power Mode: The test quit with a non-zero exit status.
Blender Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.
FFmpeg This is a benchmark of the FFmpeg multimedia framework. The FFmpeg test profile is making use of a modified version of vbench from Columbia University's Architecture and Design Lab (ARCADE) [http://arcade.cs.columbia.edu/vbench/] that is a benchmark for video-as-a-service workloads. The test profile offers the options of a range of vbench scenarios based on freely distributable video content and offers the options of using the x264 or x265 video encoders for transcoding. Learn more via the OpenBenchmarking.org test page.
VVenC VVenC is the Fraunhofer Versatile Video Encoder as a fast/efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe). The vvenc software is published under the Clear BSD License. Learn more via the OpenBenchmarking.org test page.
easyWave The easyWave software allows simulating tsunami generation and propagation in the context of early warning systems. EasyWave supports making use of OpenMP for CPU multi-threading and there are also GPU ports available but not currently incorporated as part of this test profile. The easyWave tsunami generation software is run with one of the example/reference input files for measuring the CPU execution time. Learn more via the OpenBenchmarking.org test page.
Liquid-DSP LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.
Apache Spark TPC-H This is a benchmark of Apache Spark using TPC-H data-set. Apache Spark is an open-source unified analytics engine for large-scale data processing and dealing with big data. This test profile benchmarks the Apache Spark in a single-system configuration using spark-submit. The test makes use of https://github.com/ssavvides/tpch-spark/ for facilitating the TPC-H benchmark. Learn more via the OpenBenchmarking.org test page.
Result
OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark TPC-H 3.5 Scale Factor: 10 - Geometric Mean Of All Queries Optimized Power Mode Default 3 6 9 12 15 SE +/- 0.09145601, N = 7 SE +/- 0.07178690, N = 3 9.77304127 8.35919683 MIN: 4.46 / MAX: 32.81 MIN: 4.15 / MAX: 27.27
10Q01
10Q02
10Q03
10Q04
10Q05
10Q06
10Q07
10Q08
10Q09
10Q10
10Q11
10Q12
10Q13
10Q14
10Q15
OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark TPC-H 3.5 Scale Factor: 10 - Q15 Optimized Power Mode Default 1.0384 2.0768 3.1152 4.1536 5.192 SE +/- 0.07270001, N = 7 SE +/- 0.05108329, N = 3 4.61490004 4.24561580
10Q16
OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark TPC-H 3.5 Scale Factor: 10 - Q16 Optimized Power Mode Default 1.2727 2.5454 3.8181 5.0908 6.3635 SE +/- 0.10743013, N = 7 SE +/- 0.28343850, N = 3 5.65648528 4.97244104
10Q17
10Q18
10Q19
OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark TPC-H 3.5 Scale Factor: 10 - Q19 Optimized Power Mode Default 1.2909 2.5818 3.8727 5.1636 6.4545 SE +/- 0.14465657, N = 7 SE +/- 0.17083033, N = 3 5.73738200 5.09458462
10Q20
10Q21
10Q22
OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark TPC-H 3.5 Scale Factor: 10 - Q22 Optimized Power Mode Default 1.1182 2.2364 3.3546 4.4728 5.591 SE +/- 0.13159559, N = 7 SE +/- 0.16829529, N = 3 4.96957016 4.49565951
CPU Peak Freq (Highest CPU Core Frequency
OpenBenchmarking.org Megahertz, More Is Better Apache Spark TPC-H 3.5 CPU Peak Freq (Highest CPU Core Frequency) Monitor Optimized Power Mode Default 1000 2000 3000 4000 5000 Min: 800 / Avg: 3833.92 / Max: 4663 Min: 800 / Avg: 3836.91 / Max: 5474
CPU Power Consumption
OpenBenchmarking.org Watts, Fewer Is Better Apache Spark TPC-H 3.5 CPU Power Consumption Monitor Optimized Power Mode Default 110 220 330 440 550 Min: 92.82 / Avg: 308.18 / Max: 608.02 Min: 145.89 / Avg: 378.9 / Max: 600.02 1. Optimized Power Mode: Approximate power consumption of 87523 Joules per run. 2. Default: Approximate power consumption of 99525 Joules per run.
nginx This is a benchmark of the lightweight Nginx HTTP(S) web-server. This Nginx web server benchmark test profile makes use of the wrk program for facilitating the HTTP requests over a fixed period time with a configurable number of concurrent clients/connections. HTTPS with a self-signed OpenSSL certificate is used by this test for local benchmarking. Learn more via the OpenBenchmarking.org test page.
Apache HTTP Server This is a test of the Apache HTTPD web server. This Apache HTTPD web server benchmark test profile makes use of the wrk program for facilitating the HTTP requests over a fixed period time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.
PostgreSQL This is a benchmark of PostgreSQL using the integrated pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org TPS, More Is Better PostgreSQL 16 Scaling Factor: 100 - Clients: 1000 - Mode: Read Only Optimized Power Mode Default 200K 400K 600K 800K 1000K SE +/- 11877.48, N = 12 SE +/- 17460.42, N = 10 878468 893089 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
OpenBenchmarking.org TPS, More Is Better PostgreSQL 16 Scaling Factor: 100 - Clients: 1000 - Mode: Read Write Optimized Power Mode Default 14K 28K 42K 56K 70K SE +/- 214.64, N = 3 SE +/- 70.87, N = 3 59695 63962 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
CPU Peak Freq (Highest CPU Core Frequency) Monitor OpenBenchmarking.org Megahertz CPU Peak Freq (Highest CPU Core Frequency) Monitor Phoronix Test Suite System Monitoring Optimized Power Mode Default 1000 2000 3000 4000 5000 Min: 800 / Avg: 3437 / Max: 5154 Min: 500 / Avg: 3374.53 / Max: 5474
CPU Power Consumption Monitor OpenBenchmarking.org Watts CPU Power Consumption Monitor Phoronix Test Suite System Monitoring Optimized Power Mode Default 140 280 420 560 700 Min: 88.73 / Avg: 366.23 / Max: 802.53 Min: 101.15 / Avg: 445.93 / Max: 802.3
Default Kernel Notes: Transparent Huge Pages: madviseCompiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -vProcessor Notes: Scaling Governor: intel_pstate performance (EPP: performance) - CPU Microcode: 0x21000161Java Notes: OpenJDK Runtime Environment (build 11.0.21+9-post-Ubuntu-0ubuntu123.10)Python Notes: Python 3.11.6Security Notes: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 14 December 2023 16:53 by user phoronix.
Optimized Power Mode Processor: 2 x INTEL XEON PLATINUM 8592+ @ 3.90GHz (128 Cores / 256 Threads), Motherboard: Quanta Cloud S6Q-MB-MPS (3B05.TEL4P1 BIOS), Chipset: Intel Device 1bce, Memory: 1008GB, Disk: 3201GB Micron_7450_MTFDKCB3T2TFS, Graphics: ASPEED, Network: 2 x Intel X710 for 10GBASE-T
OS: Ubuntu 23.10, Kernel: 6.5.0-13-generic (x86_64), Compiler: GCC 13.2.0, File-System: ext4, Screen Resolution: 1920x1080
Kernel Notes: Transparent Huge Pages: madviseCompiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -vProcessor Notes: Scaling Governor: intel_pstate performance (EPP: performance) - CPU Microcode: 0x21000161Java Notes: OpenJDK Runtime Environment (build 11.0.21+9-post-Ubuntu-0ubuntu123.10)Python Notes: Python 3.11.6Security Notes: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 15 December 2023 10:44 by user phoronix.