AMD EPYC 9754 Bergamo AVX-512: AMD EPYC 9754 1P benchmarks with AVX-512 enabled and then with AVX-512 disabled. Tests by Michael Larabel for a future article.
Compare your own system(s) to this result file with the
Phoronix Test Suite by running the command:
phoronix-test-suite benchmark 2307197-NE-AMDBERGAM43

AVX512 On
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xaa0010b
Python Notes: Python 3.10.6
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
AVX512 Off
Processor: AMD EPYC 9754 128-Core @ 2.25GHz (128 Cores / 256 Threads), Motherboard: AMD Titanite_4G (RTI1007B BIOS), Chipset: AMD Device 14a4, Memory: 768GB, Disk: 2 x 1920GB SAMSUNG MZWLJ1T9HBJR-00007, Graphics: ASPEED, Network: Broadcom NetXtreme BCM5720 PCIe
OS: Ubuntu 22.04, Kernel: 5.19.0-41-generic (x86_64), Desktop: GNOME Shell 42.5, Display Server: X Server 1.21.1.4, Vulkan: 1.3.224, Compiler: GCC 11.3.0, File-System: ext4, Screen Resolution: 1024x768
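To reproduce this comparison locally, one approach is to first confirm the CPU exposes AVX-512 and then launch the result-file comparison command quoted above. A minimal Python sketch along those lines, assuming a Linux host with the Phoronix Test Suite installed; the result identifier is the one from this file, while the AVX-512 flag names are standard kernel feature strings rather than anything specific to this result:

```python
# Minimal sketch: check the kernel-reported AVX-512 feature flags, then kick off the
# comparison run against this result file via the Phoronix Test Suite CLI.
# Assumes Linux (/proc/cpuinfo) and that phoronix-test-suite is on PATH.
import re
import subprocess

with open("/proc/cpuinfo") as f:
    flags_line = next((line for line in f if line.startswith("flags")), "")

avx512_flags = sorted(set(re.findall(r"avx512\w*", flags_line)))
print("AVX-512 flags reported:", " ".join(avx512_flags) or "none")

if avx512_flags:
    # Same command as quoted above; the run is interactive (test selection, result naming).
    subprocess.run(["phoronix-test-suite", "benchmark", "2307197-NE-AMDBERGAM43"], check=False)
```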
AVX512 On vs. AVX512 Off Comparison (Phoronix Test Suite): per-test advantage with AVX-512 enabled, grouped by test suite.
TensorFlow: CPU - 512 - AlexNet 1385.6%, CPU - 256 - AlexNet 1238.3%, CPU - 256 - GoogLeNet 986.6%, CPU - 512 - GoogLeNet 789.3%, CPU - 64 - AlexNet 778.7%, CPU - 32 - AlexNet 562.1%, CPU - 256 - ResNet-50 499.9%, CPU - 64 - GoogLeNet 499%, CPU - 512 - ResNet-50 491%, CPU - 64 - ResNet-50 423.7%, CPU - 16 - AlexNet 389.1%, CPU - 32 - ResNet-50 309.7%, CPU - 32 - GoogLeNet 306.6%, CPU - 16 - ResNet-50 171%, CPU - 16 - GoogLeNet 164.3%
OpenVINO: W.P.D.F - CPU 138.2% / 138%, F.D.F - CPU 132% / 131.2%, M.T.E.T.D.F - CPU 130.3% / 130.2%, W.P.D.F.I - CPU 107.7% / 107.6%, F.D.F.I - CPU 103.6% / 102.6%, P.V.B.D.F - CPU 100.2% / 100.1%, A.G.R.R.0.F - CPU 92.9% / 76.2%, P.D.F - CPU 80.2% / 79.8% / 78.2% / 77.9%, V.D.F.I - CPU 43.9% / 43.6%, V.D.F - CPU 20.1% / 20%, A.G.R.R.0.F.I - CPU 11.4% / 10.6%
Cpuminer-Opt: LBC, LBRY Credits 98.6%, Q.S.2.P 71.1%, x25x 65.4%, Blake-2 S 58.9%, scrypt 47.2%, Garlicoin 34.5%, Skeincoin 25.3%, Myriad-Groestl 20.7%
Neural Magic DeepSparse: N.S.A.8.P.Q.B.B.U - A.M.S 92.3% / 92.2%, C.S.9.P.Y.P - A.M.S 83.3% / 81.9%, N.T.C.B.b.u.S - A.M.S 21.2% / 21.1%, N.D.C.o.b.u.o.I - A.M.S 19.8% / 19.2%, N.T.C.D.m - A.M.S 19.7% / 19.6%, N.T.C.B.b.u.c - A.M.S 19.6% / 19.3%, N.Q.A.B.b.u.S.1.P - A.M.S 16.1% / 14.8%, C.C.R.5.I - A.M.S 11.4% / 11.3%
OSPRay: gravity_spheres_volume/dim_512/scivis/real_time 75.8%, gravity_spheres_volume/dim_512/ao/real_time 70.8%, gravity_spheres_volume/dim_512/pathtracer/real_time 43.1%
miniBUDE: OpenMP - BM2 31.9% / 31.9%, OpenMP - BM1 26.7% / 26.7%
libxsmm: 256 38.4%, 128 4.6%
OpenVKL: vklBenchmark ISPC 18.6%
Embree: Pathtracer ISPC - Asian Dragon 18.6%, Pathtracer ISPC - Asian Dragon Obj 17%, Pathtracer ISPC - Crown 11.9%
oneDNN: R.N.N.T - bf16bf16bf16 - CPU 12.2%
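As a worked example of how these percentages relate to the raw numbers in the results table below (for higher-is-better results), a minimal sketch using the TensorFlow CPU - 512 - AlexNet pair from this file:

```python
# Minimal sketch: each higher-is-better entry in the comparison above is the AVX512 On
# result expressed as a percentage gain over the AVX512 Off result.
# Values taken from the results table in this file (TensorFlow, CPU - 512 - AlexNet).
on_result = 1632.40
off_result = 109.88

gain_pct = (on_result / off_result - 1) * 100
print(f"AVX512 On advantage: {gain_pct:.1f}%")  # ~1385.6%, matching the chart entry
```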
AMD EPYC 9754 Bergamo AVX-512: full results. Rows marked (latency) report the corresponding latency result, where lower is better.
Test | AVX512 On | AVX512 Off
minibude: OpenMP - BM1 | 237.027 | 187.107
minibude: OpenMP - BM2 | 238.887 | 181.132
openvino: Face Detection FP16 - CPU | 60.73 | 26.18
openvino: Person Detection FP16 - CPU | 27.08 | 15.06
openvino: Person Detection FP32 - CPU | 27.01 | 14.99
openvino: Vehicle Detection FP16 - CPU | 1430.45 | 1190.91
openvino: Face Detection FP16-INT8 - CPU | 118.00 | 57.95
openvino: Vehicle Detection FP16-INT8 - CPU | 5690.34 | 3954.90
openvino: Weld Porosity Detection FP16 - CPU | 6073.22 | 2551.70
openvino: Machine Translation EN To DE FP16 - CPU | 580.40 | 251.97
openvino: Weld Porosity Detection FP16-INT8 - CPU | 11818.33 | 5692.99
openvino: Person Vehicle Bike Detection FP16 - CPU | 6638.71 | 3317.34
openvino: Age Gender Recognition Retail 0013 FP16 - CPU | 110240.89 | 62564.16
openvino: Age Gender Recognition Retail 0013 FP16-INT8 - CPU | 73970.12 | 66895.49
embree: Pathtracer ISPC - Crown | 125.5414 | 112.2274
embree: Pathtracer ISPC - Asian Dragon | 157.6450 | 132.9504
embree: Pathtracer ISPC - Asian Dragon Obj | 134.8396 | 115.2682
minibude: OpenMP - BM1 (GFInst/s) | 5925.670 | 4677.682
minibude: OpenMP - BM2 (GFInst/s) | 5972.187 | 4528.305
libxsmm: 256 | 3342.5 | 2415.3
libxsmm: 128 | 2690.7 | 2573.3
tensorflow: CPU - 64 - ResNet-50 | 96.63 | 18.45
tensorflow: CPU - 16 - AlexNet | 342.88 | 70.10
tensorflow: CPU - 32 - AlexNet | 562.48 | 84.96
tensorflow: CPU - 64 - AlexNet | 857.55 | 97.59
tensorflow: CPU - 256 - AlexNet | 1422.36 | 106.28
tensorflow: CPU - 512 - AlexNet | 1632.40 | 109.88
tensorflow: CPU - 16 - GoogLeNet | 104.77 | 39.64
tensorflow: CPU - 16 - ResNet-50 | 43.11 | 15.91
tensorflow: CPU - 32 - GoogLeNet | 180.77 | 44.46
tensorflow: CPU - 32 - ResNet-50 | 71.62 | 17.48
tensorflow: CPU - 64 - GoogLeNet | 277.24 | 46.28
tensorflow: CPU - 256 - GoogLeNet | 501.67 | 46.17
tensorflow: CPU - 256 - ResNet-50 | 119.32 | 19.89
tensorflow: CPU - 512 - GoogLeNet | 417.25 | 46.92
tensorflow: CPU - 512 - ResNet-50 | 122.81 | 20.78
openvkl: vklBenchmark ISPC | 1398 | 1179
ospray: gravity_spheres_volume/dim_512/ao/real_time | 32.7557 | 19.1731
ospray: gravity_spheres_volume/dim_512/scivis/real_time | 31.6753 | 18.0203
ospray: gravity_spheres_volume/dim_512/pathtracer/real_time | 27.9715 | 19.5463
deepsparse: NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream | 73.5393 | 61.3689
deepsparse: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Asynchronous Multi-Stream | 1381.5669 | 718.3032
deepsparse: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream | 247.9653 | 213.6393
deepsparse: CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream | 970.0568 | 870.9541
deepsparse: NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream | 624.7370 | 522.1104
deepsparse: CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream | 127.0611 | 69.3341
deepsparse: NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream | 316.1679 | 260.7671
deepsparse: NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream | 73.1459 | 61.1755
cpuminer-opt: scrypt | 2993.21 | 2033.15
cpuminer-opt: Skeincoin | 1174953 | 937977
cpuminer-opt: Myriad-Groestl | 8628.76 | 7149.95
cpuminer-opt: x25x | 4977.89 | 3010.37
cpuminer-opt: Blake-2 S | 7238650 | 4555580
cpuminer-opt: Garlicoin | 53090 | 39473
cpuminer-opt: LBC, LBRY Credits | 660660 | 332667
cpuminer-opt: Quad SHA-256, Pyrite | 1498937 | 876137
onednn: Recurrent Neural Network Training - bf16bf16bf16 - CPU | 1174.75 | 1317.57
openvino: Face Detection FP16 - CPU (latency) | 1048.37 | 2423.39
openvino: Person Detection FP16 - CPU (latency) | 2334.29 | 4153.59
openvino: Person Detection FP32 - CPU (latency) | 2339.81 | 4170.49
openvino: Vehicle Detection FP16 - CPU (latency) | 44.84 | 53.79
openvino: Face Detection FP16-INT8 - CPU (latency) | 540.35 | 1094.52
openvino: Vehicle Detection FP16-INT8 - CPU (latency) | 11.26 | 16.17
openvino: Weld Porosity Detection FP16 - CPU (latency) | 10.52 | 25.06
openvino: Machine Translation EN To DE FP16 - CPU (latency) | 110.35 | 254.01
openvino: Weld Porosity Detection FP16-INT8 - CPU (latency) | 10.82 | 22.47
openvino: Person Vehicle Bike Detection FP16 - CPU (latency) | 9.63 | 19.28
openvino: Age Gender Recognition Retail 0013 FP16 - CPU (latency) | 0.99 | 1.91
openvino: Age Gender Recognition Retail 0013 FP16-INT8 - CPU (latency) | 1.58 | 1.76
deepsparse: NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream (latency) | 858.4029 | 1023.1690
deepsparse: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Asynchronous Multi-Stream (latency) | 46.2608 | 88.9093
deepsparse: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream (latency) | 259.6468 | 298.0628
deepsparse: CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream (latency) | 65.8909 | 73.3502
deepsparse: NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream (latency) | 102.2238 | 122.2295
deepsparse: CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream (latency) | 498.4779 | 906.9414
deepsparse: NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream (latency) | 201.5964 | 244.1513
deepsparse: NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream (latency) | 859.7119 | 1025.4806
CPU Temperature Monitor (Celsius): AVX512 On Min 23.25 / Avg 51.4 / Max 74.25; AVX512 Off Min 20.75 / Avg 44.22 / Max 76.13
CPU Peak Freq (Highest CPU Core Frequency) Monitor (Megahertz): AVX512 Off Min 2203 / Avg 2979.69 / Max 3559; AVX512 On Min 2250 / Avg 2918.06 / Max 3532
CPU Power Consumption Monitor (Watts): AVX512 On Min 10.25 / Avg 231.36 / Max 398.39; AVX512 Off Min 10.15 / Avg 179.15 / Max 378.14
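The per-Watt results later in this file presumably divide each raw result by the average CPU power recorded while that particular test ran, rather than by the overall run average shown here. A minimal sketch under that assumption, working backwards from two numbers that do appear in this file:

```python
# Minimal sketch, assuming a "Per Watt" figure is the raw test result divided by the
# average CPU power drawn during that specific test run. Both inputs come from this file.
raw_fps = 125.5414      # Embree Pathtracer ISPC - Crown, AVX512 On (results table)
fps_per_watt = 0.672    # Embree Pathtracer ISPC - Crown, AVX512 On (per-Watt graph below)

implied_avg_power = raw_fps / fps_per_watt
print(f"Implied average CPU power during this test: {implied_avg_power:.0f} W")  # ~187 W
```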
miniBUDE MiniBUDE is a mini application for the core computation of the Bristol University Docking Engine (BUDE). This test profile currently makes use of the OpenMP implementation of miniBUDE for CPU benchmarking. Learn more via the OpenBenchmarking.org test page.
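miniBUDE appears twice per input deck in the results table above because the profile appears to report the same run in two units; the pair confirmed as GFInst/s by the graphs further down is a constant factor of roughly 25x the other pair. A small check using the AVX512 On BM1 values from this file:

```python
# Small check, using values from the results table above: the two miniBUDE BM1 rows for
# AVX512 On differ by a constant factor of ~25, consistent with one run reported in two
# units (the larger figure is the GFInst/s result shown in the graphs below).
bm1_first_row = 237.027   # first miniBUDE BM1 row, AVX512 On
bm1_gfinst = 5925.670     # miniBUDE BM1 GFInst/s row, AVX512 On

print(f"Ratio: {bm1_gfinst / bm1_first_row:.2f}")  # ~25.00
```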
OpenVINO This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.
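The OpenVINO numbers below come from its bundled benchmark harness; for reference, a minimal sketch of loading a model on the CPU plugin through the OpenVINO Python API. The IR path "model.xml" is a hypothetical placeholder, and the module layout is assumed to match the 2022.x releases:

```python
# Minimal sketch of loading and running a model on the CPU device with the OpenVINO
# Python API (the results in this file come from OpenVINO's own benchmark harness,
# not this snippet). "model.xml" is a hypothetical IR model path.
import numpy as np
from openvino.runtime import Core

core = Core()
compiled = core.compile_model("model.xml", device_name="CPU")
infer_request = compiled.create_infer_request()

# Feed a dummy input matching the model's first input (assumes a static input shape).
input_shape = list(compiled.inputs[0].shape)
dummy = np.zeros(input_shape, dtype=np.float32)
results = infer_request.infer({0: dummy})
```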
OpenVINO 2022.3 (FPS, More Is Better):
Model: Face Detection FP16 - Device: CPU: AVX512 Off 26.18 (SE +/- 0.36, N = 3); AVX512 On 60.73 (SE +/- 0.06, N = 3)
Model: Person Detection FP16 - Device: CPU: AVX512 Off 15.06 (SE +/- 0.21, N = 3); AVX512 On 27.08 (SE +/- 0.30, N = 12)
Model: Person Detection FP32 - Device: CPU: AVX512 Off 14.99 (SE +/- 0.16, N = 5); AVX512 On 27.01 (SE +/- 0.18, N = 12)
Model: Vehicle Detection FP16 - Device: CPU: AVX512 Off 1190.91 (SE +/- 15.59, N = 14); AVX512 On 1430.45 (SE +/- 22.93, N = 15)
Model: Face Detection FP16-INT8 - Device: CPU: AVX512 Off 57.95 (SE +/- 0.03, N = 3); AVX512 On 118.00 (SE +/- 0.02, N = 3)
Model: Vehicle Detection FP16-INT8 - Device: CPU: AVX512 Off 3954.90 (SE +/- 2.12, N = 3); AVX512 On 5690.34 (SE +/- 89.26, N = 15)
Model: Weld Porosity Detection FP16 - Device: CPU: AVX512 Off 2551.70 (SE +/- 0.56, N = 3); AVX512 On 6073.22 (SE +/- 1.32, N = 3)
Model: Machine Translation EN To DE FP16 - Device: CPU: AVX512 Off 251.97 (SE +/- 3.15, N = 15); AVX512 On 580.40 (SE +/- 7.10, N = 15)
Model: Weld Porosity Detection FP16-INT8 - Device: CPU: AVX512 Off 5692.99 (SE +/- 1.65, N = 3); AVX512 On 11818.33 (SE +/- 1.17, N = 3)
Model: Person Vehicle Bike Detection FP16 - Device: CPU: AVX512 Off 3317.34 (SE +/- 26.07, N = 15); AVX512 On 6638.71 (SE +/- 13.51, N = 3)
Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU: AVX512 Off 62564.16 (SE +/- 278.13, N = 3); AVX512 On 110240.89 (SE +/- 314.35, N = 3)
Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU: AVX512 Off 66895.49 (SE +/- 20.10, N = 3); AVX512 On 73970.12 (SE +/- 95.74, N = 3)
1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF
Embree 4.1 (Frames Per Second Per Watt, More Is Better):
Binary: Pathtracer ISPC - Model: Crown: AVX512 Off 0.560; AVX512 On 0.672
Binary: Pathtracer ISPC - Model: Asian Dragon: AVX512 Off 0.715; AVX512 On 0.920
Binary: Pathtracer ISPC - Model: Asian Dragon Obj: AVX512 Off 0.842; AVX512 On 1.070
miniBUDE MiniBUDE is a mini application for the core computation of the Bristol University Docking Engine (BUDE). This test profile currently makes use of the OpenMP implementation of miniBUDE for CPU benchmarking. Learn more via the OpenBenchmarking.org test page.
miniBUDE 20210901 (GFInst/s, More Is Better):
Implementation: OpenMP - Input Deck: BM1: AVX512 Off 4677.68 (SE +/- 18.03, N = 8); AVX512 On 5925.67 (SE +/- 2.27, N = 9)
Implementation: OpenMP - Input Deck: BM2: AVX512 Off 4528.31 (SE +/- 15.49, N = 3); AVX512 On 5972.19 (SE +/- 0.44, N = 3)
1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm
libxsmm 2-1.17-3645 (GFLOPS/s Per Watt, More Is Better): M N K: 128: AVX512 Off 12.53; AVX512 On 13.07
TensorFlow 2.12 (images/sec Per Watt, More Is Better):
Device: CPU - Batch Size: 16 - Model: AlexNet: AVX512 Off 0.576; AVX512 On 2.966
Device: CPU - Batch Size: 32 - Model: AlexNet: AVX512 Off 0.691; AVX512 On 4.376
Device: CPU - Batch Size: 64 - Model: AlexNet: AVX512 Off 0.778; AVX512 On 5.938
Device: CPU - Batch Size: 256 - Model: AlexNet: AVX512 Off 0.839; AVX512 On 6.984
Device: CPU - Batch Size: 512 - Model: AlexNet: AVX512 Off 0.865; AVX512 On 7.097
Device: CPU - Batch Size: 16 - Model: GoogLeNet: AVX512 Off 0.335; AVX512 On 0.774
Device: CPU - Batch Size: 16 - Model: ResNet-50: AVX512 Off 0.113; AVX512 On 0.294
Device: CPU - Batch Size: 32 - Model: GoogLeNet: AVX512 Off 0.360; AVX512 On 1.138
Device: CPU - Batch Size: 32 - Model: ResNet-50: AVX512 Off 0.124; AVX512 On 0.407
Device: CPU - Batch Size: 64 - Model: GoogLeNet: AVX512 Off 0.369; AVX512 On 1.476
Device: CPU - Batch Size: 256 - Model: GoogLeNet: AVX512 Off 0.362; AVX512 On 2.078
Device: CPU - Batch Size: 256 - Model: ResNet-50: AVX512 Off 0.143; AVX512 On 0.537
Device: CPU - Batch Size: 512 - Model: GoogLeNet: AVX512 Off 0.369; AVX512 On 1.766
Device: CPU - Batch Size: 512 - Model: ResNet-50: AVX512 Off 0.148; AVX512 On 0.542
OSPRay Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.
Neural Magic DeepSparse This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.
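A minimal sketch of driving the deepsparse.benchmark utility mentioned above from a script; the SparseZoo stub below is a hypothetical placeholder rather than one of the exact models used in this result file, and additional tuning flags are omitted:

```python
# Minimal sketch: run the deepsparse.benchmark CLI against a SparseZoo model stub.
# The stub below is a placeholder; browse https://sparsezoo.neuralmagic.com/ for real ones.
import subprocess

model_stub = "zoo:some/sparsezoo/model/stub"  # hypothetical placeholder
subprocess.run(["deepsparse.benchmark", model_stub], check=True)
```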
Neural Magic DeepSparse 1.5 (items/sec, More Is Better):
Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream: AVX512 Off 61.37 (SE +/- 0.02, N = 3); AVX512 On 73.54 (SE +/- 0.14, N = 3)
Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream: AVX512 Off 718.30 (SE +/- 5.33, N = 3); AVX512 On 1381.57 (SE +/- 1.58, N = 3)
Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream: AVX512 Off 213.64 (SE +/- 0.14, N = 3); AVX512 On 247.97 (SE +/- 7.49, N = 15)
Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream: AVX512 Off 870.95 (SE +/- 0.45, N = 3); AVX512 On 970.06 (SE +/- 0.42, N = 3)
Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream: AVX512 Off 522.11 (SE +/- 0.58, N = 3); AVX512 On 624.74 (SE +/- 0.52, N = 3)
Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream: AVX512 Off 69.33 (SE +/- 0.03, N = 3); AVX512 On 127.06 (SE +/- 0.01, N = 3)
Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream: AVX512 Off 260.77 (SE +/- 0.63, N = 3); AVX512 On 316.17 (SE +/- 0.78, N = 3)
Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream: AVX512 Off 61.18 (SE +/- 0.13, N = 3); AVX512 On 73.15 (SE +/- 0.19, N = 3)
Cpuminer-Opt 3.20.3 (kH/s Per Watt, More Is Better):
Algorithm: x25x: AVX512 Off 10.81; AVX512 On 16.85
Algorithm: scrypt: AVX512 Off 6.476; AVX512 On 8.913
Algorithm: Blake-2 S: AVX512 Off 14551.24; AVX512 On 22160.82
Algorithm: Garlicoin: AVX512 Off 202.32; AVX512 On 272.43
Algorithm: Skeincoin: AVX512 Off 2892.87; AVX512 On 3691.56
Algorithm: Myriad-Groestl: AVX512 Off 44.67; AVX512 On 51.65
Algorithm: LBC, LBRY Credits: AVX512 Off 1052.19; AVX512 On 2003.50
Algorithm: Quad SHA-256, Pyrite: AVX512 Off 2825.31; AVX512 On 4347.58
CPU Peak Freq (Highest CPU Core Frequency) Monitor, per test run (Megahertz; Min / Avg / Max):
oneDNN 3.1: AVX512 Off 2250 / 2824 / 3100; AVX512 On 2250 / 2998 / 3097
Neural Magic DeepSparse 1.5: AVX512 Off 2250 / 2933 / 3136; AVX512 On 2250 / 3011 / 3136
Neural Magic DeepSparse 1.5: AVX512 Off 2250 / 3015 / 3127; AVX512 On 2250 / 3026 / 3124
Neural Magic DeepSparse 1.5: AVX512 On 2250 / 3002 / 3416; AVX512 Off 2250 / 3023 / 3152
Neural Magic DeepSparse 1.5: AVX512 Off 2250 / 2962 / 3101; AVX512 On 2250 / 3012 / 3134
Neural Magic DeepSparse 1.5: AVX512 Off 2250 / 2931 / 3101; AVX512 On 2250 / 3034 / 3099
Neural Magic DeepSparse 1.5: AVX512 On 2250 / 3006 / 3115; AVX512 Off 2250 / 3048 / 3116
Neural Magic DeepSparse 1.5: AVX512 Off 2250 / 2940 / 3122; AVX512 On 2250 / 3013 / 3127
Neural Magic DeepSparse 1.5: AVX512 Off 2250 / 2952 / 3146; AVX512 On 2250 / 3006 / 3135
OpenVINO 2022.3: AVX512 Off 2250 / 2491 / 3096; AVX512 On 2250 / 2708 / 3101
OpenVINO 2022.3: AVX512 Off 2250 / 2649 / 3132; AVX512 On 2250 / 2931 / 3185
OpenVINO 2022.3: AVX512 Off 2250 / 2645 / 3103; AVX512 On 2250 / 2936 / 3426
OpenVINO 2022.3: AVX512 Off 2241 / 2918 / 3105; AVX512 On 2250 / 3029 / 3257
OpenVINO 2022.3: AVX512 Off 2250 / 2688 / 3124; AVX512 On 2250 / 2696 / 3116
OpenVINO 2022.3: AVX512 Off 2250 / 2659 / 3098; AVX512 On 2250 / 2705 / 3121
OpenVINO 2022.3: AVX512 Off 2250 / 2373 / 3112; AVX512 On 2250 / 2718 / 3095
OpenVINO 2022.3: AVX512 Off 2214 / 2537 / 3101; AVX512 On 2250 / 2661 / 3111
OpenVINO 2022.3: AVX512 On 2250 / 2619 / 3137; AVX512 Off 2250 / 2671 / 3102
OpenVINO 2022.3: AVX512 Off 2250 / 2483 / 3131; AVX512 On 2250 / 2679 / 3096
OpenVINO 2022.3: AVX512 Off 2250 / 2590 / 3101; AVX512 On 2250 / 2892 / 3097
OpenVINO 2022.3: AVX512 Off 2250 / 2626 / 3099; AVX512 On 2250 / 2761 / 3119
CPU Temperature Monitor, per test run (Celsius; Min / Avg / Max):
oneDNN 3.1: AVX512 Off 24.5 / 39.8 / 45.1; AVX512 On 30.4 / 39.2 / 45.3
Neural Magic DeepSparse 1.5: AVX512 On 34.6 / 57.5 / 71.4; AVX512 Off 30.6 / 54.5 / 70.5
Neural Magic DeepSparse 1.5: AVX512 On 39.4 / 58.1 / 67.6; AVX512 Off 38.1 / 53.8 / 64.3
Neural Magic DeepSparse 1.5: AVX512 On 38.6 / 53.1 / 66.5; AVX512 Off 38.1 / 52.9 / 65.8
Neural Magic DeepSparse 1.5: AVX512 Off 40.0 / 65.2 / 72.9; AVX512 On 39.8 / 62.8 / 70.4
Neural Magic DeepSparse 1.5: AVX512 Off 41.0 / 60.7 / 68.5; AVX512 On 39.1 / 60.4 / 67.1
Neural Magic DeepSparse 1.5: AVX512 Off 40.0 / 54.0 / 69.1; AVX512 On 38.1 / 53.1 / 66.1
Neural Magic DeepSparse 1.5: AVX512 Off 38.4 / 58.3 / 70.5; AVX512 On 36.0 / 57.7 / 70.9
Neural Magic DeepSparse 1.5: AVX512 Off 38.9 / 58.3 / 70.9; AVX512 On 37.3 / 57.6 / 70.9
OpenVINO 2022.3: AVX512 On 38.6 / 63.1 / 70.0; AVX512 Off 39.8 / 59.3 / 68.1
OpenVINO 2022.3: AVX512 On 41.3 / 61.2 / 73.4; AVX512 Off 37.6 / 57.4 / 67.6
OpenVINO 2022.3: AVX512 On 37.6 / 60.6 / 72.9; AVX512 Off 36.5 / 58.0 / 72.4
OpenVINO 2022.3: AVX512 Off 36.8 / 53.9 / 62.3; AVX512 On 36.8 / 51.7 / 57.5
OpenVINO 2022.3: AVX512 Off 35.8 / 65.5 / 76.1; AVX512 On 34.4 / 59.5 / 70.5
OpenVINO 2022.3: AVX512 Off 42.1 / 67.7 / 72.8; AVX512 On 40.0 / 59.2 / 66.6
OpenVINO 2022.3: AVX512 On 36.8 / 64.2 / 69.9; AVX512 Off 41.9 / 61.1 / 64.3
OpenVINO 2022.3: AVX512 On 40.5 / 57.3 / 65.8; AVX512 Off 41.0 / 56.7 / 64.6
OpenVINO 2022.3: AVX512 Off 36.8 / 67.4 / 73.3; AVX512 On 36.8 / 61.3 / 66.1
OpenVINO 2022.3: AVX512 On 40.5 / 60.6 / 66.6; AVX512 Off 41.0 / 57.1 / 64.8
OpenVINO 2022.3: AVX512 On 41.0 / 64.6 / 70.0; AVX512 Off 37.3 / 61.8 / 68.0
OpenVINO 2022.3: AVX512 Off 41.0 / 63.9 / 69.0; AVX512 On 41.4 / 63.4 / 68.1
oneDNN This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.
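Since the oneDNN result is a total time (lower is better), its 12.2% entry in the comparison chart above is the slowdown of AVX512 Off relative to AVX512 On. A quick check against the Recurrent Neural Network Training bf16bf16bf16 values in the results table:

```python
# Minimal sketch: for a lower-is-better time result, the comparison chart expresses
# how much slower AVX512 Off is than AVX512 On.
# Values are the oneDNN RNN Training bf16bf16bf16 rows from the results table above.
on_time = 1174.75
off_time = 1317.57

slowdown_pct = (off_time / on_time - 1) * 100
print(f"AVX512 Off is {slowdown_pct:.1f}% slower")  # ~12.2%, matching the chart entry
```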
AVX512 On
Testing initiated at 16 July 2023 06:38 by user phoronix.
AVX512 Off
Testing initiated at 16 July 2023 14:04 by user phoronix.