Benchmarks for a future article. Intel Core i9-11900K testing with a ASUS ROG MAXIMUS XIII HERO (1402 BIOS) and ASUS Intel RKL GT1 31GB on Ubuntu 22.10 via the Phoronix Test Suite.
i9-11900K: AVX-512 On Kernel Notes: Transparent Huge Pages: madviseEnvironment Notes: CXXFLAGS="-O3 -march=native" CFLAGS="-O3 -march=native"Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-U8K4Qv/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-U8K4Qv/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -vProcessor Notes: Scaling Governor: intel_pstate performance (EPP: performance) - CPU Microcode: 0x54 - Thermald 2.5.1Python Notes: Python 3.10.7Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + retbleed: Mitigation of Enhanced IBRS + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected
i9-11900K: AVX-512 Off Kernel Notes: Transparent Huge Pages: madviseEnvironment Notes: CXXFLAGS="-O3 -march=native -mno-avx512f" CFLAGS="-O3 -march=native -mno-avx512f"Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-U8K4Qv/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-U8K4Qv/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -vProcessor Notes: Scaling Governor: intel_pstate performance (EPP: performance) - CPU Microcode: 0x54 - Thermald 2.5.1Python Notes: Python 3.10.7Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + retbleed: Mitigation of Enhanced IBRS + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected
i9-11900K: AVX-512 On 512 Processor: Intel Core i9-11900K @ 5.10GHz (8 Cores / 16 Threads), Motherboard: ASUS ROG MAXIMUS XIII HERO (1402 BIOS), Chipset: Intel Tiger Lake-H, Memory: 32GB, Disk: 2000GB Corsair Force MP600 + 32GB Flash Drive, Graphics: ASUS Intel RKL GT1 31GB (1300MHz), Audio: Intel Tiger Lake-H HD Audio, Monitor: ASUS MG28U, Network: 2 x Intel I225-V + Intel Wi-Fi 6 AX210/AX211/AX411
OS: Ubuntu 22.10, Kernel: 5.19.0-21-generic (x86_64), Desktop: GNOME Shell 43.0, Display Server: X Server + Wayland, OpenGL: 4.6 Mesa 22.2.1, Vulkan: 1.3.224, Compiler: GCC 12.2.0, File-System: ext4, Screen Resolution: 3840x2160
Kernel Notes: Transparent Huge Pages: madviseEnvironment Notes: CXXFLAGS="-O3 -march=native -mprefer-vector-width=512" CFLAGS="-O3 -march=native -mprefer-vector-width=512"Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-U8K4Qv/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-U8K4Qv/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -vProcessor Notes: Scaling Governor: intel_pstate performance (EPP: performance) - CPU Microcode: 0x54 - Thermald 2.5.1Python Notes: Python 3.10.7Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + retbleed: Mitigation of Enhanced IBRS + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected
AVX-512 Core i9 Intel Rocket Lake OpenBenchmarking.org Phoronix Test Suite Intel Core i9-11900K @ 5.10GHz (8 Cores / 16 Threads) ASUS ROG MAXIMUS XIII HERO (1402 BIOS) Intel Tiger Lake-H 32GB 2000GB Corsair Force MP600 + 32GB Flash Drive ASUS Intel RKL GT1 31GB (1300MHz) Intel Tiger Lake-H HD Audio ASUS MG28U 2 x Intel I225-V + Intel Wi-Fi 6 AX210/AX211/AX411 Ubuntu 22.10 5.19.0-21-generic (x86_64) GNOME Shell 43.0 X Server + Wayland 4.6 Mesa 22.2.1 1.3.224 GCC 12.2.0 ext4 3840x2160 Processor Motherboard Chipset Memory Disk Graphics Audio Monitor Network OS Kernel Desktop Display Server OpenGL Vulkan Compiler File-System Screen Resolution AVX-512 Core I9 Intel Rocket Lake Benchmarks System Logs - Transparent Huge Pages: madvise - i9-11900K: AVX-512 On: CXXFLAGS="-O3 -march=native" CFLAGS="-O3 -march=native" - i9-11900K: AVX-512 Off: CXXFLAGS="-O3 -march=native -mno-avx512f" CFLAGS="-O3 -march=native -mno-avx512f" - i9-11900K: AVX-512 On 512: CXXFLAGS="-O3 -march=native -mprefer-vector-width=512" CFLAGS="-O3 -march=native -mprefer-vector-width=512" - --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-U8K4Qv/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-U8K4Qv/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v - Scaling Governor: intel_pstate performance (EPP: performance) - CPU Microcode: 0x54 - Thermald 2.5.1 - Python 3.10.7 - itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + retbleed: Mitigation of Enhanced IBRS + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected
dav1d: Summer Nature 1080p dav1d: Summer Nature 4K dav1d: Chimera 1080p dav1d: Chimera 1080p 10-bit openvino: Face Detection FP16 - CPU openvino: Face Detection FP16-INT8 - CPU openvino: Age Gender Recognition Retail 0013 FP16 - CPU openvino: Age Gender Recognition Retail 0013 FP16-INT8 - CPU openvino: Person Detection FP16 - CPU openvino: Person Detection FP32 - CPU openvino: Weld Porosity Detection FP16-INT8 - CPU openvino: Weld Porosity Detection FP16 - CPU openvino: Vehicle Detection FP16-INT8 - CPU openvino: Vehicle Detection FP16 - CPU openvino: Person Vehicle Bike Detection FP16 - CPU openvino: Machine Translation EN To DE FP16 - CPU embree: Pathtracer ISPC - Asian Dragon embree: Pathtracer ISPC - Asian Dragon Obj embree: Pathtracer ISPC - Crown simdjson: PartialTweets simdjson: LargeRand simdjson: Kostya simdjson: DistinctUserID simdjson: TopTweet xmrig: Monero - 1M xmrig: Wownero - 1M oidn: RT.hdr_alb_nrm.3840x2160 oidn: RT.ldr_alb_nrm.3840x2160 oidn: RTLightmap.hdr.4096x4096 tensorflow: CPU - 16 - VGG-16 tensorflow: CPU - 16 - ResNet-50 tensorflow: CPU - 64 - ResNet-50 tensorflow: CPU - 16 - AlexNet tensorflow: CPU - 16 - GoogLeNet tensorflow: CPU - 32 - VGG-16 tensorflow: CPU - 32 - ResNet-50 tensorflow: CPU - 32 - AlexNet tensorflow: CPU - 32 - GoogLeNet tensorflow: CPU - 64 - VGG-16 tensorflow: CPU - 64 - AlexNet tensorflow: CPU - 64 - GoogLeNet tensorflow: CPU - 256 - VGG-16 tensorflow: CPU - 256 - ResNet-50 tensorflow: CPU - 256 - AlexNet tensorflow: CPU - 256 - GoogLeNet tensorflow: CPU - 512 - AlexNet tensorflow: CPU - 512 - GoogLeNet onnx: yolov4 - CPU - Standard onnx: yolov4 - CPU - Parallel onnx: fcn-resnet101-11 - CPU - Standard onnx: fcn-resnet101-11 - CPU - Parallel onnx: super-resolution-10 - CPU - Standard onnx: super-resolution-10 - CPU - Parallel onnx: bertsquad-12 - CPU - Standard onnx: bertsquad-12 - CPU - Parallel onnx: GPT-2 - CPU - Standard onnx: GPT-2 - CPU - Parallel onnx: ArcFace ResNet-100 - CPU - Standard onnx: ArcFace ResNet-100 - CPU - Parallel openvkl: vklBenchmark ISPC ospray: gravity_spheres_volume/dim_512/ao/real_time ospray: gravity_spheres_volume/dim_512/scivis/real_time ospray: gravity_spheres_volume/dim_512/pathtracer/real_time ospray: particle_volume/ao/real_time ospray: particle_volume/scivis/real_time ospray: particle_volume/pathtracer/real_time deepsparse: NLP Text Classification, BERT base uncased SST2 - Synchronous Single-Stream deepsparse: NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream deepsparse: NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream deepsparse: NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream deepsparse: CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream deepsparse: CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream deepsparse: NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream deepsparse: NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream deepsparse: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Synchronous Single-Stream deepsparse: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream deepsparse: CV Detection,YOLOv5s COCO - Synchronous Single-Stream deepsparse: CV Detection,YOLOv5s COCO - Asynchronous Multi-Stream deepsparse: NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream deepsparse: NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream cpuminer-opt: Triple SHA-256, Onecoin cpuminer-opt: Magi cpuminer-opt: Ringcoin cpuminer-opt: Deepcoin cpuminer-opt: Quad SHA-256, Pyrite cpuminer-opt: Garlicoin cpuminer-opt: Skeincoin cpuminer-opt: LBC, LBRY Credits cpuminer-opt: Myriad-Groestl cpuminer-opt: Blake-2 S cpuminer-opt: x25x lczero: BLAS lczero: Eigen gromacs: MPI CPU - water_GMX50_bare ai-benchmark: Device Inference Score ai-benchmark: Device Training Score ai-benchmark: Device AI Score numpy: ospray-studio: 1 - 1080p - 1 - Path Tracer ospray-studio: 1 - 1080p - 16 - Path Tracer ospray-studio: 1 - 1080p - 32 - Path Tracer ospray-studio: 1 - 4K - 1 - Path Tracer ospray-studio: 1 - 4K - 16 - Path Tracer ospray-studio: 1 - 4K - 32 - Path Tracer ospray-studio: 3 - 1080p - 1 - Path Tracer ospray-studio: 3 - 1080p - 16 - Path Tracer ospray-studio: 3 - 1080p - 32 - Path Tracer ospray-studio: 3 - 4K - 1 - Path Tracer ospray-studio: 3 - 4K - 16 - Path Tracer ospray-studio: 3 - 4K - 32 - Path Tracer onednn: Convolution Batch Shapes Auto - f32 - CPU onednn: Convolution Batch Shapes Auto - u8s8f32 - CPU onednn: Convolution Batch Shapes Auto - bf16bf16bf16 - CPU onednn: Deconvolution Batch shapes_1d - f32 - CPU onednn: Deconvolution Batch shapes_1d - u8s8f32 - CPU onednn: Deconvolution Batch shapes_1d - bf16bf16bf16 - CPU onednn: Deconvolution Batch shapes_3d - f32 - CPU onednn: Deconvolution Batch shapes_3d - u8s8f32 - CPU onednn: Deconvolution Batch shapes_3d - bf16bf16bf16 - CPU onednn: IP Shapes 1D - f32 - CPU onednn: IP Shapes 1D - u8s8f32 - CPU onednn: IP Shapes 1D - bf16bf16bf16 - CPU onednn: IP Shapes 3D - f32 - CPU onednn: IP Shapes 3D - u8s8f32 - CPU onednn: IP Shapes 3D - bf16bf16bf16 - CPU onednn: Matrix Multiply Batch Shapes Transformer - f32 - CPU onednn: Matrix Multiply Batch Shapes Transformer - u8s8f32 - CPU onednn: Matrix Multiply Batch Shapes Transformer - bf16bf16bf16 - CPU onednn: Recurrent Neural Network Training - f32 - CPU onednn: Recurrent Neural Network Training - u8s8f32 - CPU onednn: Recurrent Neural Network Training - bf16bf16bf16 - CPU onednn: Recurrent Neural Network Inference - f32 - CPU onednn: Recurrent Neural Network Inference - u8s8f32 - CPU onednn: Recurrent Neural Network Inference - bf16bf16bf16 - CPU mnn: nasnet mnn: mobilenetV3 mnn: squeezenetv1.1 mnn: resnet-v2-50 mnn: SqueezeNetV1.0 mnn: MobileNetV2_224 mnn: mobilenet-v1-1.0 mnn: inception-v3 ncnn: CPU - mobilenet ncnn: CPU-v2-v2 - mobilenet-v2 ncnn: CPU-v3-v3 - mobilenet-v3 ncnn: CPU - shufflenet-v2 ncnn: CPU - mnasnet ncnn: CPU - efficientnet-b0 ncnn: CPU - blazeface ncnn: CPU - googlenet ncnn: CPU - vgg16 ncnn: CPU - resnet18 ncnn: CPU - alexnet ncnn: CPU - resnet50 ncnn: CPU - yolov4-tiny ncnn: CPU - squeezenet_ssd ncnn: CPU - regnety_400m ncnn: CPU - vision_transformer ncnn: CPU - FastestDet tnn: CPU - DenseNet tnn: CPU - MobileNet v2 tnn: CPU - SqueezeNet v1.1 tnn: CPU - SqueezeNet v2 openvino: Face Detection FP16 - CPU openvino: Face Detection FP16-INT8 - CPU openvino: Age Gender Recognition Retail 0013 FP16 - CPU openvino: Age Gender Recognition Retail 0013 FP16-INT8 - CPU openvino: Person Detection FP16 - CPU openvino: Person Detection FP32 - CPU openvino: Weld Porosity Detection FP16-INT8 - CPU openvino: Weld Porosity Detection FP16 - CPU openvino: Vehicle Detection FP16-INT8 - CPU openvino: Vehicle Detection FP16 - CPU openvino: Person Vehicle Bike Detection FP16 - CPU openvino: Machine Translation EN To DE FP16 - CPU deepsparse: NLP Text Classification, BERT base uncased SST2 - Synchronous Single-Stream deepsparse: NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream deepsparse: NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream deepsparse: NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream deepsparse: CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream deepsparse: CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream deepsparse: NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream deepsparse: NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream deepsparse: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Synchronous Single-Stream deepsparse: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream deepsparse: CV Detection,YOLOv5s COCO - Synchronous Single-Stream deepsparse: CV Detection,YOLOv5s COCO - Asynchronous Multi-Stream deepsparse: NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream deepsparse: NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream openradioss: Bird Strike on Windshield openradioss: Rubber O-Ring Seal Installation openradioss: Cell Phone Drop Test openradioss: Bumper Beam openradioss: INIVOL and Fluid Structure Interaction Drop Container openfoam: drivaerFastback, Small Mesh Size - Mesh Time openfoam: drivaerFastback, Small Mesh Size - Execution Time i9-11900K AVX-512 On AVX-512 Off AVX-512 On 512 894.73 207.14 734.61 510.88 2.97 12.23 9379.27 23074.82 1.78 1.76 1240.56 320.80 623.17 108.61 286.43 35.62 16.5129 14.4298 14.6252 8.52 1.47 4.54 8.78 8.82 2239.5 4015.0 0.43 0.43 0.21 7.76 20.46 21.86 80.75 63.34 8.32 21.24 112.10 64.22 8.73 136.92 65.02 9.42 22.20 163.87 66.63 173.67 67.68 529 319 102 73 7356 4950 1008 519 6954 5548 2249 1388 113 4.42072 4.37042 5.36461 4.95822 4.98037 173.134 35.3834 40.8668 68.7607 80.6747 132.9124 155.3852 8.8759 8.9516 43.6753 36.5979 49.6651 50.7670 8.9135 8.9588 231700 452.82 2152.94 11148.96 167500 4384.38 111730 77083 40573 862110 387.72 1268 1186 1.031 1376 2069 3445 631.24 1958 31054 65723 7743 127267 251261 2358 37662 78746 9373 152903 302582 14.3619 12.1099 16.1640 8.55193 0.840660 20.8058 4.33063 1.47049 17.3084 4.03554 0.720702 8.55274 10.6347 3.01377 4.81904 3.26773 1.32134 3.81793 3079.96 3088.89 3093.58 1814.65 1817.59 1814.03 7.468 0.938 1.670 10.295 3.073 2.131 1.972 19.580 12.28 3.73 2.68 2.33 2.62 4.86 0.80 10.16 44.74 8.69 6.98 15.99 20.99 17.19 7.05 87.29 2.98 2704.934 245.952 230.819 47.193 1346.99 326.76 0.85 0.34 2235.54 2250.02 6.43 24.92 6.41 36.80 13.95 112.27 28.2567 97.8481 14.5381 49.5682 7.5173 25.7285 112.6756 446.8814 22.8900 109.2808 20.1259 78.7761 112.1891 446.4889 309.21 250.33 120.62 177.40 759.23 45.488465 538.97618 879.37 192.71 691.43 487.09 2.60 4.88 7956.78 14973.88 1.54 1.54 533.79 252.69 310.38 141.30 325.95 31.70 15.8952 14.0227 13.8618 6.20 1.36 3.86 6.33 6.35 2256.1 4076.4 0.43 0.43 0.21 5.69 12.29 12.37 67.80 36.22 5.95 12.32 87.79 36.49 6.11 103.31 36.94 6.22 12.48 117.43 37.37 107.72 33.92 517 315 100 72 7279 4515 1042 515 6954 5557 1951 1278 87 2.82821 2.72378 4.05745 6.02446 5.91636 164.963 28.9677 35.5207 56.5928 69.1005 85.9777 101.6365 7.8089 8.1213 25.5407 26.5261 43.3933 43.2306 7.8337 8.1425 106137 453.39 2130.49 9966.10 64663 2381.45 70720 27270 16150 393213 359.01 1240 1062 1.020 1414 1282 2696 575.79 2134 34112 72145 8450 139519 275693 2561 40948 85854 10148 166508 329527 15.6093 14.8411 8.39772 1.77832 5.39327 2.31183 3.22680 1.24396 11.6771 2.18670 3.67360 1.40330 3271.06 3270.55 3271.27 2000.12 2003.54 2006.02 7.139 0.942 2.023 18.043 3.694 1.934 1.798 23.418 11.43 3.34 2.49 2.33 2.37 4.35 0.76 9.55 45.68 8.24 6.33 15.40 18.80 13.89 6.70 110.38 2.85 2703.558 241.314 228.683 46.360 1525.18 817.17 1.00 0.53 2563.45 2563.87 14.98 15.81 12.88 28.29 12.26 126.09 34.5145 112.5989 17.6640 57.8764 11.6244 39.3450 128.0523 492.3767 39.1448 150.7668 23.0354 92.5054 127.6469 491.1153 311.82 243.47 128.16 174.60 713.35 46.532622 538.29062 902.92 190.82 727.76 477.09 2.95 12.17 9460.45 22962.10 1.78 1.77 1242.45 321.51 630.58 109.09 285.75 35.55 16.4541 14.3502 14.6198 8.51 1.47 4.53 8.77 8.82 2252.6 4075.5 0.43 0.43 0.21 7.76 20.55 21.88 82.10 63.83 8.29 21.29 113.25 64.42 8.69 137.75 65.15 9.42 22.13 163.09 66.49 172.66 67.67 527 319 103 73 6824 4626 1012 524 6962 5545 2250 1368 112 4.46906 4.36772 5.35301 4.95280 4.95148 173.739 35.4238 40.8021 68.7715 81.0216 133.7435 156.8121 8.9905 9.0886 43.4408 36.8617 49.9913 51.3573 9.0118 9.0792 230687 449.42 2190.91 11270 164790 4392.29 112403 76393 40663 895513 379.94 1273 1194 1.035 1363 2076 3439 631.86 1951 31180 65447 7734 127540 250698 2364 37620 78809 9376 153101 303966 14.2835 12.0178 16.1605 8.50891 0.846376 20.2612 4.32564 1.46948 17.2687 4.03294 0.715925 8.55374 9.95113 2.75246 4.64814 3.21742 1.30906 3.66945 3062.36 3073.17 2986.72 1739.49 1738.00 1715.75 7.527 0.963 1.675 10.368 3.114 2.125 1.989 19.403 12.28 3.77 2.71 2.33 2.58 4.79 0.80 9.96 44.59 8.64 6.69 16.24 22.01 17.34 7.10 70.39 2.98 2695.646 240.410 228.424 45.679 1352.08 328.08 0.84 0.35 2227.69 2244.25 6.42 24.86 6.33 36.64 13.99 112.48 28.2249 98.0286 14.5355 49.3593 7.4707 25.4949 111.2249 440.0994 23.0134 108.4890 19.9939 77.8730 110.9617 440.5608 315.86 240.62 118.89 175.69 755.60 45.3321 535.10825 OpenBenchmarking.org
CPU Temperature Monitor OpenBenchmarking.org Celsius CPU Temperature Monitor Phoronix Test Suite System Monitoring AVX-512 Off AVX-512 On AVX-512 On 512 20 40 60 80 100 Min: 27 / Avg: 72.98 / Max: 95 Min: 29 / Avg: 78.17 / Max: 100 Min: 31 / Avg: 79.47 / Max: 100
CPU Peak Freq (Highest CPU Core Frequency) Monitor OpenBenchmarking.org Megahertz CPU Peak Freq (Highest CPU Core Frequency) Monitor Phoronix Test Suite System Monitoring AVX-512 Off AVX-512 On AVX-512 On 512 1000 2000 3000 4000 5000 Min: 3500 / Avg: 4781.84 / Max: 5621 Min: 3350 / Avg: 4733.96 / Max: 5541 Min: 2700 / Avg: 4722.22 / Max: 5323
CPU Power Consumption Monitor OpenBenchmarking.org Watts CPU Power Consumption Monitor Phoronix Test Suite System Monitoring AVX-512 Off AVX-512 On AVX-512 On 512 50 100 150 200 250 Min: 6.36 / Avg: 173.58 / Max: 251.93 Min: 6.37 / Avg: 188.45 / Max: 283.94 Min: 6.3 / Avg: 192.29 / Max: 280.23
OpenVINO This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.2.dev Model: Face Detection FP16 - Device: CPU AVX-512 Off AVX-512 On AVX-512 On 512 0.6683 1.3366 2.0049 2.6732 3.3415 SE +/- 0.01, N = 3 SE +/- 0.01, N = 3 SE +/- 0.00, N = 3 2.60 2.97 2.95 -mno-avx512f -ldl -ldl 1. (CXX) g++ options: -O3 -march=native -fsigned-char -ffunction-sections -fdata-sections -fno-strict-overflow -fwrapv -pie
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.2.dev Model: Face Detection FP16-INT8 - Device: CPU AVX-512 Off AVX-512 On AVX-512 On 512 3 6 9 12 15 SE +/- 0.02, N = 3 SE +/- 0.01, N = 3 SE +/- 0.01, N = 3 4.88 12.23 12.17 -mno-avx512f -ldl -ldl 1. (CXX) g++ options: -O3 -march=native -fsigned-char -ffunction-sections -fdata-sections -fno-strict-overflow -fwrapv -pie
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.2.dev Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU AVX-512 Off AVX-512 On AVX-512 On 512 2K 4K 6K 8K 10K SE +/- 34.03, N = 3 SE +/- 94.72, N = 3 SE +/- 54.72, N = 3 7956.78 9379.27 9460.45 -mno-avx512f -ldl -ldl 1. (CXX) g++ options: -O3 -march=native -fsigned-char -ffunction-sections -fdata-sections -fno-strict-overflow -fwrapv -pie
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.2.dev Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU AVX-512 Off AVX-512 On AVX-512 On 512 5K 10K 15K 20K 25K SE +/- 47.04, N = 3 SE +/- 411.52, N = 15 SE +/- 387.31, N = 15 14973.88 23074.82 22962.10 -mno-avx512f -ldl -ldl 1. (CXX) g++ options: -O3 -march=native -fsigned-char -ffunction-sections -fdata-sections -fno-strict-overflow -fwrapv -pie
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.2.dev Model: Person Detection FP16 - Device: CPU AVX-512 Off AVX-512 On AVX-512 On 512 0.4005 0.801 1.2015 1.602 2.0025 SE +/- 0.01, N = 3 SE +/- 0.00, N = 3 SE +/- 0.00, N = 3 1.54 1.78 1.78 -mno-avx512f -ldl -ldl 1. (CXX) g++ options: -O3 -march=native -fsigned-char -ffunction-sections -fdata-sections -fno-strict-overflow -fwrapv -pie
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.2.dev Model: Person Detection FP32 - Device: CPU AVX-512 Off AVX-512 On AVX-512 On 512 0.3983 0.7966 1.1949 1.5932 1.9915 SE +/- 0.01, N = 3 SE +/- 0.01, N = 3 SE +/- 0.00, N = 3 1.54 1.76 1.77 -mno-avx512f -ldl -ldl 1. (CXX) g++ options: -O3 -march=native -fsigned-char -ffunction-sections -fdata-sections -fno-strict-overflow -fwrapv -pie
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.2.dev Model: Weld Porosity Detection FP16-INT8 - Device: CPU AVX-512 Off AVX-512 On AVX-512 On 512 300 600 900 1200 1500 SE +/- 2.18, N = 3 SE +/- 5.38, N = 3 SE +/- 4.58, N = 3 533.79 1240.56 1242.45 -mno-avx512f -ldl -ldl 1. (CXX) g++ options: -O3 -march=native -fsigned-char -ffunction-sections -fdata-sections -fno-strict-overflow -fwrapv -pie
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.2.dev Model: Weld Porosity Detection FP16 - Device: CPU AVX-512 Off AVX-512 On AVX-512 On 512 70 140 210 280 350 SE +/- 0.05, N = 3 SE +/- 0.40, N = 3 SE +/- 1.00, N = 3 252.69 320.80 321.51 -mno-avx512f -ldl -ldl 1. (CXX) g++ options: -O3 -march=native -fsigned-char -ffunction-sections -fdata-sections -fno-strict-overflow -fwrapv -pie
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.2.dev Model: Vehicle Detection FP16-INT8 - Device: CPU AVX-512 Off AVX-512 On AVX-512 On 512 140 280 420 560 700 SE +/- 1.58, N = 3 SE +/- 3.77, N = 3 SE +/- 1.77, N = 3 310.38 623.17 630.58 -mno-avx512f -ldl -ldl 1. (CXX) g++ options: -O3 -march=native -fsigned-char -ffunction-sections -fdata-sections -fno-strict-overflow -fwrapv -pie
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.2.dev Model: Vehicle Detection FP16 - Device: CPU AVX-512 Off AVX-512 On AVX-512 On 512 30 60 90 120 150 SE +/- 1.44, N = 3 SE +/- 0.12, N = 3 SE +/- 0.25, N = 3 141.30 108.61 109.09 -mno-avx512f -ldl -ldl 1. (CXX) g++ options: -O3 -march=native -fsigned-char -ffunction-sections -fdata-sections -fno-strict-overflow -fwrapv -pie
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.2.dev Model: Person Vehicle Bike Detection FP16 - Device: CPU AVX-512 Off AVX-512 On AVX-512 On 512 70 140 210 280 350 SE +/- 3.32, N = 6 SE +/- 2.42, N = 3 SE +/- 2.59, N = 7 325.95 286.43 285.75 -mno-avx512f -ldl -ldl 1. (CXX) g++ options: -O3 -march=native -fsigned-char -ffunction-sections -fdata-sections -fno-strict-overflow -fwrapv -pie
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.2.dev Model: Machine Translation EN To DE FP16 - Device: CPU AVX-512 Off AVX-512 On AVX-512 On 512 8 16 24 32 40 SE +/- 0.04, N = 3 SE +/- 0.27, N = 3 SE +/- 0.18, N = 3 31.70 35.62 35.55 -mno-avx512f -ldl -ldl 1. (CXX) g++ options: -O3 -march=native -fsigned-char -ffunction-sections -fdata-sections -fno-strict-overflow -fwrapv -pie
Embree OpenBenchmarking.org Frames Per Second Per Watt, More Is Better Embree 3.13 Binary: Pathtracer ISPC - Model: Asian Dragon AVX-512 Off AVX-512 On AVX-512 On 512 0.0203 0.0406 0.0609 0.0812 0.1015 0.090 0.078 0.077
OpenBenchmarking.org Frames Per Second Per Watt, More Is Better Embree 3.13 Binary: Pathtracer ISPC - Model: Asian Dragon Obj AVX-512 Off AVX-512 On AVX-512 On 512 0.02 0.04 0.06 0.08 0.1 0.089 0.077 0.076
OpenBenchmarking.org Frames Per Second Per Watt, More Is Better Embree 3.13 Binary: Pathtracer ISPC - Model: Crown AVX-512 Off AVX-512 On AVX-512 On 512 0.0164 0.0328 0.0492 0.0656 0.082 0.073 0.063 0.063
simdjson OpenBenchmarking.org GB/s Per Watt, More Is Better simdjson 2.0 Throughput Test: PartialTweets AVX-512 Off AVX-512 On AVX-512 On 512 0.0304 0.0608 0.0912 0.1216 0.152 0.104 0.135 0.135
OpenBenchmarking.org GB/s Per Watt, More Is Better simdjson 2.0 Throughput Test: LargeRandom AVX-512 Off AVX-512 On AVX-512 On 512 0.0059 0.0118 0.0177 0.0236 0.0295 0.025 0.026 0.026
Xmrig OpenBenchmarking.org H/s Per Watt, More Is Better Xmrig 6.12.1 Variant: Monero - Hash Count: 1M AVX-512 Off AVX-512 On AVX-512 On 512 4 8 12 16 20 16.68 16.08 12.40
OpenBenchmarking.org H/s Per Watt, More Is Better Xmrig 6.12.1 Variant: Wownero - Hash Count: 1M AVX-512 Off AVX-512 On AVX-512 On 512 6 12 18 24 30 27.04 26.07 20.54
TensorFlow OpenBenchmarking.org images/sec Per Watt, More Is Better TensorFlow 2.10 Device: CPU - Batch Size: 16 - Model: VGG-16 AVX-512 Off AVX-512 On AVX-512 On 512 0.0088 0.0176 0.0264 0.0352 0.044 0.033 0.039 0.039
OpenBenchmarking.org images/sec Per Watt, More Is Better TensorFlow 2.10 Device: CPU - Batch Size: 16 - Model: ResNet-50 AVX-512 Off AVX-512 On AVX-512 On 512 0.0259 0.0518 0.0777 0.1036 0.1295 0.072 0.115 0.115
OpenBenchmarking.org images/sec Per Watt, More Is Better TensorFlow 2.10 Device: CPU - Batch Size: 16 - Model: AlexNet AVX-512 Off AVX-512 On AVX-512 On 512 0.1296 0.2592 0.3888 0.5184 0.648 0.460 0.569 0.576
OpenBenchmarking.org images/sec Per Watt, More Is Better TensorFlow 2.10 Device: CPU - Batch Size: 16 - Model: GoogLeNet AVX-512 Off AVX-512 On AVX-512 On 512 0.0853 0.1706 0.2559 0.3412 0.4265 0.219 0.379 0.379
OpenBenchmarking.org images/sec Per Watt, More Is Better TensorFlow 2.10 Device: CPU - Batch Size: 32 - Model: VGG-16 AVX-512 Off AVX-512 On AVX-512 On 512 0.0092 0.0184 0.0276 0.0368 0.046 0.034 0.041 0.041
OpenBenchmarking.org images/sec Per Watt, More Is Better TensorFlow 2.10 Device: CPU - Batch Size: 32 - Model: ResNet-50 AVX-512 Off AVX-512 On AVX-512 On 512 0.0261 0.0522 0.0783 0.1044 0.1305 0.071 0.115 0.116
OpenBenchmarking.org images/sec Per Watt, More Is Better TensorFlow 2.10 Device: CPU - Batch Size: 32 - Model: AlexNet AVX-512 Off AVX-512 On AVX-512 On 512 0.1663 0.3326 0.4989 0.6652 0.8315 0.552 0.731 0.739
OpenBenchmarking.org images/sec Per Watt, More Is Better TensorFlow 2.10 Device: CPU - Batch Size: 32 - Model: GoogLeNet AVX-512 Off AVX-512 On AVX-512 On 512 0.0803 0.1606 0.2409 0.3212 0.4015 0.212 0.357 0.356
OpenBenchmarking.org images/sec Per Watt, More Is Better TensorFlow 2.10 Device: CPU - Batch Size: 64 - Model: VGG-16 AVX-512 Off AVX-512 On AVX-512 On 512 0.0095 0.019 0.0285 0.038 0.0475 0.035 0.042 0.042
OpenBenchmarking.org images/sec Per Watt, More Is Better TensorFlow 2.10 Device: CPU - Batch Size: 64 - Model: ResNet-50 AVX-512 Off AVX-512 On AVX-512 On 512 0.0261 0.0522 0.0783 0.1044 0.1305 0.071 0.115 0.116
OpenBenchmarking.org images/sec Per Watt, More Is Better TensorFlow 2.10 Device: CPU - Batch Size: 64 - Model: AlexNet AVX-512 Off AVX-512 On AVX-512 On 512 0.1933 0.3866 0.5799 0.7732 0.9665 0.614 0.833 0.859
OpenBenchmarking.org images/sec Per Watt, More Is Better TensorFlow 2.10 Device: CPU - Batch Size: 64 - Model: GoogLeNet AVX-512 Off AVX-512 On AVX-512 On 512 0.0783 0.1566 0.2349 0.3132 0.3915 0.209 0.346 0.348
OpenBenchmarking.org images/sec Per Watt, More Is Better TensorFlow 2.10 Device: CPU - Batch Size: 256 - Model: VGG-16 AVX-512 Off AVX-512 On AVX-512 On 512 0.0097 0.0194 0.0291 0.0388 0.0485 0.032 0.043 0.043
OpenBenchmarking.org images/sec Per Watt, More Is Better TensorFlow 2.10 Device: CPU - Batch Size: 256 - Model: ResNet-50 AVX-512 Off AVX-512 On AVX-512 On 512 0.0259 0.0518 0.0777 0.1036 0.1295 0.071 0.115 0.115
OpenBenchmarking.org images/sec Per Watt, More Is Better TensorFlow 2.10 Device: CPU - Batch Size: 256 - Model: AlexNet AVX-512 Off AVX-512 On AVX-512 On 512 0.2048 0.4096 0.6144 0.8192 1.024 0.654 0.909 0.910
OpenBenchmarking.org images/sec Per Watt, More Is Better TensorFlow 2.10 Device: CPU - Batch Size: 256 - Model: GoogLeNet AVX-512 Off AVX-512 On AVX-512 On 512 0.0763 0.1526 0.2289 0.3052 0.3815 0.206 0.338 0.339
OpenBenchmarking.org images/sec Per Watt, More Is Better TensorFlow 2.10 Device: CPU - Batch Size: 512 - Model: AlexNet AVX-512 Off AVX-512 On AVX-512 On 512 0.207 0.414 0.621 0.828 1.035 0.594 0.920 0.920
OpenBenchmarking.org images/sec Per Watt, More Is Better TensorFlow 2.10 Device: CPU - Batch Size: 512 - Model: GoogLeNet AVX-512 Off AVX-512 On AVX-512 On 512 0.0767 0.1534 0.2301 0.3068 0.3835 0.188 0.341 0.339
Device: CPU - Batch Size: 512 - Model: VGG-16
i9-11900K: AVX-512 On: The test quit with a non-zero exit status.
i9-11900K: AVX-512 Off: The test quit with a non-zero exit status. E: Fatal Python error: Aborted
i9-11900K: AVX-512 On 512: The test quit with a non-zero exit status. E: Fatal Python error: Segmentation fault
Device: CPU - Batch Size: 512 - Model: ResNet-50
i9-11900K: AVX-512 On: The test quit with a non-zero exit status.
i9-11900K: AVX-512 Off: The test quit with a non-zero exit status.
i9-11900K: AVX-512 On 512: The test quit with a non-zero exit status.
ONNX Runtime OpenBenchmarking.org Inferences Per Minute Per Watt, More Is Better ONNX Runtime 1.11 Model: yolov4 - Device: CPU - Executor: Standard AVX-512 Off AVX-512 On AVX-512 On 512 0.6352 1.2704 1.9056 2.5408 3.176 2.823 2.616 2.598
OpenBenchmarking.org Inferences Per Minute Per Watt, More Is Better ONNX Runtime 1.11 Model: fcn-resnet101-11 - Device: CPU - Executor: Standard AVX-512 Off AVX-512 On AVX-512 On 512 0.1082 0.2164 0.3246 0.4328 0.541 0.481 0.470 0.472
OpenBenchmarking.org Inferences Per Minute Per Watt, More Is Better ONNX Runtime 1.11 Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel AVX-512 Off AVX-512 On AVX-512 On 512 0.0925 0.185 0.2775 0.37 0.4625 0.411 0.375 0.374
OpenBenchmarking.org Inferences Per Minute Per Watt, More Is Better ONNX Runtime 1.11 Model: super-resolution-10 - Device: CPU - Executor: Standard AVX-512 Off AVX-512 On AVX-512 On 512 8 16 24 32 40 33.21 31.68 29.78
OpenBenchmarking.org Inferences Per Minute Per Watt, More Is Better ONNX Runtime 1.11 Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel AVX-512 Off AVX-512 On AVX-512 On 512 2 4 6 8 10 7.698 7.013 6.988
OpenBenchmarking.org Inferences Per Minute Per Watt, More Is Better ONNX Runtime 1.11 Model: super-resolution-10 - Device: CPU - Executor: Parallel AVX-512 Off AVX-512 On AVX-512 On 512 6 12 18 24 30 26.34 23.55 22.48
OpenBenchmarking.org Inferences Per Minute Per Watt, More Is Better ONNX Runtime 1.11 Model: bertsquad-12 - Device: CPU - Executor: Standard AVX-512 Off AVX-512 On AVX-512 On 512 1.0991 2.1982 3.2973 4.3964 5.4955 4.885 4.542 4.528
OpenBenchmarking.org Inferences Per Minute Per Watt, More Is Better ONNX Runtime 1.11 Model: bertsquad-12 - Device: CPU - Executor: Parallel AVX-512 Off AVX-512 On AVX-512 On 512 0.6912 1.3824 2.0736 2.7648 3.456 3.072 2.600 2.626
OpenBenchmarking.org Inferences Per Minute Per Watt, More Is Better ONNX Runtime 1.11 Model: GPT-2 - Device: CPU - Executor: Standard AVX-512 Off AVX-512 On AVX-512 On 512 10 20 30 40 50 45.41 39.75 39.57
OpenBenchmarking.org Inferences Per Minute Per Watt, More Is Better ONNX Runtime 1.11 Model: GPT-2 - Device: CPU - Executor: Parallel AVX-512 Off AVX-512 On AVX-512 On 512 8 16 24 32 40 36.65 31.82 31.65
OpenBenchmarking.org Inferences Per Minute Per Watt, More Is Better ONNX Runtime 1.11 Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard AVX-512 Off AVX-512 On AVX-512 On 512 3 6 9 12 15 9.881 10.451 10.464
OpenVKL OpenBenchmarking.org Items / Sec Per Watt, More Is Better OpenVKL 1.0 Benchmark: vklBenchmark ISPC AVX-512 Off AVX-512 On AVX-512 On 512 0.1314 0.2628 0.3942 0.5256 0.657 0.525 0.584 0.578
OSPRay Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.
Neural Magic DeepSparse OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.1 Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream AVX-512 Off AVX-512 On AVX-512 On 512 8 16 24 32 40 SE +/- 0.02, N = 3 SE +/- 0.07, N = 3 SE +/- 0.11, N = 3 28.97 35.38 35.42
OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.1 Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream AVX-512 Off AVX-512 On AVX-512 On 512 9 18 27 36 45 SE +/- 0.02, N = 3 SE +/- 0.12, N = 3 SE +/- 0.22, N = 3 35.52 40.87 40.80
OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.1 Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream AVX-512 Off AVX-512 On AVX-512 On 512 15 30 45 60 75 SE +/- 0.03, N = 3 SE +/- 0.05, N = 3 SE +/- 0.03, N = 3 56.59 68.76 68.77
OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.1 Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream AVX-512 Off AVX-512 On AVX-512 On 512 20 40 60 80 100 SE +/- 0.03, N = 3 SE +/- 0.12, N = 3 SE +/- 0.06, N = 3 69.10 80.67 81.02
OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.1 Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream AVX-512 Off AVX-512 On AVX-512 On 512 30 60 90 120 150 SE +/- 0.04, N = 3 SE +/- 0.48, N = 3 SE +/- 0.12, N = 3 85.98 132.91 133.74
OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.1 Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream AVX-512 Off AVX-512 On AVX-512 On 512 30 60 90 120 150 SE +/- 0.14, N = 3 SE +/- 0.92, N = 3 SE +/- 0.68, N = 3 101.64 155.39 156.81
OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.1 Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream AVX-512 Off AVX-512 On AVX-512 On 512 3 6 9 12 15 SE +/- 0.0071, N = 3 SE +/- 0.0785, N = 3 SE +/- 0.0301, N = 3 7.8089 8.8759 8.9905
OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.1 Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream AVX-512 Off AVX-512 On AVX-512 On 512 3 6 9 12 15 SE +/- 0.0060, N = 3 SE +/- 0.0648, N = 3 SE +/- 0.0140, N = 3 8.1213 8.9516 9.0886
OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.1 Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream AVX-512 Off AVX-512 On AVX-512 On 512 10 20 30 40 50 SE +/- 0.05, N = 3 SE +/- 0.20, N = 3 SE +/- 0.19, N = 3 25.54 43.68 43.44
OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.1 Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream AVX-512 Off AVX-512 On AVX-512 On 512 8 16 24 32 40 SE +/- 0.15, N = 3 SE +/- 0.03, N = 3 SE +/- 0.02, N = 3 26.53 36.60 36.86
OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.1 Model: CV Detection,YOLOv5s COCO - Scenario: Synchronous Single-Stream AVX-512 Off AVX-512 On AVX-512 On 512 11 22 33 44 55 SE +/- 0.01, N = 3 SE +/- 0.15, N = 3 SE +/- 0.04, N = 3 43.39 49.67 49.99
OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.1 Model: CV Detection,YOLOv5s COCO - Scenario: Asynchronous Multi-Stream AVX-512 Off AVX-512 On AVX-512 On 512 12 24 36 48 60 SE +/- 0.08, N = 3 SE +/- 0.15, N = 3 SE +/- 0.37, N = 3 43.23 50.77 51.36
OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.1 Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream AVX-512 Off AVX-512 On AVX-512 On 512 3 6 9 12 15 SE +/- 0.0041, N = 3 SE +/- 0.0463, N = 3 SE +/- 0.0265, N = 3 7.8337 8.9135 9.0118
OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.1 Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream AVX-512 Off AVX-512 On AVX-512 On 512 3 6 9 12 15 SE +/- 0.0056, N = 3 SE +/- 0.0383, N = 3 SE +/- 0.0265, N = 3 8.1425 8.9588 9.0792
Cpuminer-Opt OpenBenchmarking.org kH/s Per Watt, More Is Better Cpuminer-Opt 3.18 Algorithm: Triple SHA-256, Onecoin AVX-512 Off AVX-512 On AVX-512 On 512 300 600 900 1200 1500 820.32 1366.28 1361.66
OpenBenchmarking.org kH/s Per Watt, More Is Better Cpuminer-Opt 3.18 Algorithm: Quad SHA-256, Pyrite AVX-512 Off AVX-512 On AVX-512 On 512 200 400 600 800 1000 460.38 953.89 930.90
OpenBenchmarking.org kH/s Per Watt, More Is Better Cpuminer-Opt 3.18 Algorithm: Myriad-Groestl AVX-512 Off AVX-512 On AVX-512 On 512 40 80 120 160 200 101.32 198.30 197.49
Result
OpenBenchmarking.org kH/s, More Is Better Cpuminer-Opt 3.18 Algorithm: Triple SHA-256, Onecoin AVX-512 Off AVX-512 On AVX-512 On 512 50K 100K 150K 200K 250K SE +/- 627.84, N = 3 SE +/- 1895.81, N = 3 SE +/- 1461.35, N = 3 106137 231700 230687 -mno-avx512f 1. (CXX) g++ options: -O3 -march=native -lcurl -lz -lpthread -lssl -lcrypto -lgmp
kH/s Per Watt
OpenBenchmarking.org kH/s Per Watt, More Is Better Cpuminer-Opt 3.18 Algorithm: Triple SHA-256, Onecoin AVX-512 Off AVX-512 On AVX-512 On 512 300 600 900 1200 1500 820.32 1366.28 1361.66
CPU Peak Freq (Highest CPU Core Frequency
OpenBenchmarking.org Megahertz, More Is Better Cpuminer-Opt 3.18 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4700 / Avg: 4834.78 / Max: 5300 Min: 4700 / Avg: 4759.14 / Max: 5300 Min: 4700 / Avg: 4753.44 / Max: 5300
CPU Power Consumption
OpenBenchmarking.org Watts, Fewer Is Better Cpuminer-Opt 3.18 CPU Power Consumption Monitor AVX-512 Off AVX-512 On AVX-512 On 512 50 100 150 200 250 Min: 12.4 / Avg: 129.38 / Max: 169.44 Min: 12.5 / Avg: 169.58 / Max: 237.02 Min: 12.41 / Avg: 169.42 / Max: 278.89
CPU Temp
OpenBenchmarking.org Celsius, Fewer Is Better Cpuminer-Opt 3.18 CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 20 40 60 80 100 Min: 39 / Avg: 64.76 / Max: 73 Min: 40 / Avg: 74.89 / Max: 92 Min: 42 / Avg: 75.38 / Max: 94
OpenBenchmarking.org kH/s Per Watt, More Is Better Cpuminer-Opt 3.18 Algorithm: Blake-2 S AVX-512 Off AVX-512 On AVX-512 On 512 1200 2400 3600 4800 6000 3255.01 5427.52 5541.06
OpenBenchmarking.org kH/s Per Watt, More Is Better Cpuminer-Opt 3.18 Algorithm: x25x AVX-512 Off AVX-512 On AVX-512 On 512 0.5148 1.0296 1.5444 2.0592 2.574 2.288 1.846 1.847
OpenBenchmarking.org kH/s Per Watt, More Is Better Cpuminer-Opt 3.18 Algorithm: Garlicoin AVX-512 Off AVX-512 On AVX-512 On 512 5 10 15 20 25 13.64 19.40 19.90
OpenBenchmarking.org kH/s Per Watt, More Is Better Cpuminer-Opt 3.18 Algorithm: Skeincoin AVX-512 Off AVX-512 On AVX-512 On 512 140 280 420 560 700 493.49 647.91 651.49
Result
OpenBenchmarking.org kH/s, More Is Better Cpuminer-Opt 3.18 Algorithm: Quad SHA-256, Pyrite AVX-512 Off AVX-512 On AVX-512 On 512 40K 80K 120K 160K 200K SE +/- 42.56, N = 3 SE +/- 280.42, N = 3 SE +/- 346.55, N = 3 64663 167500 164790 -mno-avx512f 1. (CXX) g++ options: -O3 -march=native -lcurl -lz -lpthread -lssl -lcrypto -lgmp
kH/s Per Watt
OpenBenchmarking.org kH/s Per Watt, More Is Better Cpuminer-Opt 3.18 Algorithm: Quad SHA-256, Pyrite AVX-512 Off AVX-512 On AVX-512 On 512 200 400 600 800 1000 460.38 953.89 930.90
CPU Peak Freq (Highest CPU Core Frequency
OpenBenchmarking.org Megahertz, More Is Better Cpuminer-Opt 3.18 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4700 / Avg: 4831.54 / Max: 5312 Min: 4700 / Avg: 4750.65 / Max: 5300 Min: 4700 / Avg: 4753.87 / Max: 5290
CPU Power Consumption
OpenBenchmarking.org Watts, Fewer Is Better Cpuminer-Opt 3.18 CPU Power Consumption Monitor AVX-512 Off AVX-512 On AVX-512 On 512 50 100 150 200 250 Min: 12.24 / Avg: 140.46 / Max: 186.51 Min: 12.43 / Avg: 175.6 / Max: 252.11 Min: 12.25 / Avg: 177.02 / Max: 245.52
CPU Temp
OpenBenchmarking.org Celsius, Fewer Is Better Cpuminer-Opt 3.18 CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 20 40 60 80 100 Min: 36 / Avg: 65.33 / Max: 73 Min: 38 / Avg: 73.74 / Max: 92 Min: 39 / Avg: 75.84 / Max: 93
Result
OpenBenchmarking.org kH/s, More Is Better Cpuminer-Opt 3.18 Algorithm: LBC, LBRY Credits AVX-512 Off AVX-512 On AVX-512 On 512 17K 34K 51K 68K 85K SE +/- 20.00, N = 3 SE +/- 84.13, N = 3 SE +/- 116.81, N = 3 27270 77083 76393 -mno-avx512f 1. (CXX) g++ options: -O3 -march=native -lcurl -lz -lpthread -lssl -lcrypto -lgmp
kH/s Per Watt
OpenBenchmarking.org kH/s Per Watt, More Is Better Cpuminer-Opt 3.18 Algorithm: LBC, LBRY Credits AVX-512 Off AVX-512 On AVX-512 On 512 90 180 270 360 450 165.49 397.33 393.49
CPU Peak Freq (Highest CPU Core Frequency
OpenBenchmarking.org Megahertz, More Is Better Cpuminer-Opt 3.18 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4700 / Avg: 4769.7 / Max: 5300 Min: 4700 / Avg: 4754.62 / Max: 5300 Min: 4700 / Avg: 4753.32 / Max: 5289
CPU Power Consumption
OpenBenchmarking.org Watts, Fewer Is Better Cpuminer-Opt 3.18 CPU Power Consumption Monitor AVX-512 Off AVX-512 On AVX-512 On 512 50 100 150 200 250 Min: 12.49 / Avg: 164.79 / Max: 203.1 Min: 12.52 / Avg: 194 / Max: 268.78 Min: 12.24 / Avg: 194.14 / Max: 277.39
CPU Temp
OpenBenchmarking.org Celsius, Fewer Is Better Cpuminer-Opt 3.18 CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 20 40 60 80 100 Min: 37 / Avg: 68.15 / Max: 76 Min: 39 / Avg: 77.28 / Max: 95 Min: 40 / Avg: 77.84 / Max: 95
AI Benchmark Alpha OpenBenchmarking.org Megahertz, More Is Better AI Benchmark Alpha 0.1.2 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4700 / Avg: 4852.05 / Max: 5308 Min: 3500 / Avg: 4834.52 / Max: 5302 Min: 3475 / Avg: 4839.25 / Max: 5311
Neural Magic DeepSparse OpenBenchmarking.org Megahertz, More Is Better Neural Magic DeepSparse 1.1 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4700 / Avg: 4850.7 / Max: 5301 Min: 4651 / Avg: 4793.56 / Max: 5300 Min: 4669 / Avg: 4825.21 / Max: 5300
OpenBenchmarking.org Megahertz, More Is Better Neural Magic DeepSparse 1.1 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4700 / Avg: 4864.38 / Max: 5300 Min: 4550 / Avg: 4788.85 / Max: 5300 Min: 3500 / Avg: 4771.09 / Max: 5310
OpenBenchmarking.org Megahertz, More Is Better Neural Magic DeepSparse 1.1 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4700 / Avg: 4829.28 / Max: 5301 Min: 4700 / Avg: 4769.5 / Max: 5309 Min: 4700 / Avg: 4800.61 / Max: 5310
OpenBenchmarking.org Megahertz, More Is Better Neural Magic DeepSparse 1.1 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4700 / Avg: 4802.88 / Max: 5301 Min: 4700 / Avg: 4777.02 / Max: 5300 Min: 4700 / Avg: 4777.6 / Max: 5301
OpenBenchmarking.org Megahertz, More Is Better Neural Magic DeepSparse 1.1 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4700 / Avg: 4780.8 / Max: 5312 Min: 3556 / Avg: 4766.8 / Max: 5302 Min: 4631 / Avg: 4780.28 / Max: 5300
OpenBenchmarking.org Megahertz, More Is Better Neural Magic DeepSparse 1.1 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4700 / Avg: 4768.47 / Max: 5302 Min: 3500 / Avg: 4648.5 / Max: 5302 Min: 3500 / Avg: 4682.19 / Max: 5300
OpenBenchmarking.org Megahertz, More Is Better Neural Magic DeepSparse 1.1 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4700 / Avg: 4847.56 / Max: 5310 Min: 3500 / Avg: 4747.88 / Max: 5310 Min: 4575 / Avg: 4802.05 / Max: 5300
OpenBenchmarking.org Megahertz, More Is Better Neural Magic DeepSparse 1.1 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4700 / Avg: 4862.17 / Max: 5301 Min: 3500 / Avg: 4711.32 / Max: 5304 Min: 4389 / Avg: 4809.78 / Max: 5300
OpenBenchmarking.org Megahertz, More Is Better Neural Magic DeepSparse 1.1 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4700 / Avg: 4884.74 / Max: 5310 Min: 4700 / Avg: 4810.87 / Max: 5300 Min: 4700 / Avg: 4865.03 / Max: 5312
OpenBenchmarking.org Megahertz, More Is Better Neural Magic DeepSparse 1.1 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4700 / Avg: 4899.77 / Max: 5314 Min: 4700 / Avg: 4803.42 / Max: 5300 Min: 4700 / Avg: 4813.34 / Max: 5300
OpenBenchmarking.org Megahertz, More Is Better Neural Magic DeepSparse 1.1 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4700 / Avg: 4792.9 / Max: 5300 Min: 4700 / Avg: 4785.87 / Max: 5309 Min: 4700 / Avg: 4796.86 / Max: 5302
OpenBenchmarking.org Megahertz, More Is Better Neural Magic DeepSparse 1.1 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4700 / Avg: 4785.41 / Max: 5300 Min: 3578 / Avg: 4752.98 / Max: 5300 Min: 4700 / Avg: 4783.64 / Max: 5301
OpenBenchmarking.org Megahertz, More Is Better Neural Magic DeepSparse 1.1 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4700 / Avg: 4854.95 / Max: 5363 Min: 3500 / Avg: 4768.23 / Max: 5302 Min: 4587 / Avg: 4805.51 / Max: 5300
OpenBenchmarking.org Megahertz, More Is Better Neural Magic DeepSparse 1.1 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4700 / Avg: 4864.76 / Max: 5300 Min: 3500 / Avg: 4712.83 / Max: 5302 Min: 3500 / Avg: 4792.63 / Max: 5301
OpenRadioss OpenBenchmarking.org Megahertz, More Is Better OpenRadioss 2022.10.13 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4700 / Avg: 4764.39 / Max: 5301 Min: 4700 / Avg: 4738.14 / Max: 5300 Min: 4700 / Avg: 4752.2 / Max: 5302
OpenBenchmarking.org Megahertz, More Is Better OpenRadioss 2022.10.13 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4700 / Avg: 4789.33 / Max: 5302 Min: 4700 / Avg: 4744.78 / Max: 5300 Min: 4700 / Avg: 4746.34 / Max: 5300
OpenBenchmarking.org Megahertz, More Is Better OpenRadioss 2022.10.13 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4700 / Avg: 4779.68 / Max: 5309 Min: 4700 / Avg: 4776.23 / Max: 5300 Min: 4700 / Avg: 4779.74 / Max: 5300
OpenBenchmarking.org Megahertz, More Is Better OpenRadioss 2022.10.13 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4700 / Avg: 4725.71 / Max: 5303 Min: 4700 / Avg: 4728.36 / Max: 5300 Min: 4700 / Avg: 4723.82 / Max: 5299
OpenBenchmarking.org Megahertz, More Is Better OpenRadioss 2022.10.13 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4700 / Avg: 4796.02 / Max: 5300 Min: 4700 / Avg: 4786.9 / Max: 5311 Min: 4700 / Avg: 4786.28 / Max: 5301
OpenFOAM OpenBenchmarking.org Megahertz, More Is Better OpenFOAM 10 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4731 / Avg: 4807.69 / Max: 5300 Min: 4700 / Avg: 4805.98 / Max: 5300 Min: 4700 / Avg: 4808.62 / Max: 5301
LeelaChessZero OpenBenchmarking.org Megahertz, More Is Better LeelaChessZero 0.28 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4700 / Avg: 4730.95 / Max: 5300 Min: 4700 / Avg: 4714.39 / Max: 5302 Min: 4700 / Avg: 4715.02 / Max: 5309
OpenBenchmarking.org Megahertz, More Is Better LeelaChessZero 0.28 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4700 / Avg: 4742.26 / Max: 5300 Min: 4700 / Avg: 4716.16 / Max: 5309 Min: 4700 / Avg: 4714.46 / Max: 5300
OSPRay Studio OpenBenchmarking.org Megahertz, More Is Better OSPRay Studio 0.11 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4700 / Avg: 4759.8 / Max: 5300 Min: 3500 / Avg: 4719.74 / Max: 5310 Min: 4572 / Avg: 4702.26 / Max: 5300
OpenBenchmarking.org Megahertz, More Is Better OSPRay Studio 0.11 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4700 / Avg: 4759.97 / Max: 5300 Min: 4567 / Avg: 4716.26 / Max: 5300 Min: 3453 / Avg: 4694.88 / Max: 5301
OpenBenchmarking.org Megahertz, More Is Better OSPRay Studio 0.11 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4700 / Avg: 4791.69 / Max: 5308 Min: 3426 / Avg: 4746.5 / Max: 5300 Min: 4600 / Avg: 4754.09 / Max: 5300
OpenBenchmarking.org Megahertz, More Is Better OSPRay Studio 0.11 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4700 / Avg: 4745.23 / Max: 5313 Min: 3461 / Avg: 4676.99 / Max: 5300 Min: 4570 / Avg: 4680.41 / Max: 5300
OpenBenchmarking.org Megahertz, More Is Better OSPRay Studio 0.11 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4700 / Avg: 4747.54 / Max: 5300 Min: 3477 / Avg: 4691.4 / Max: 5300 Min: 3446 / Avg: 4670.53 / Max: 5310
OpenBenchmarking.org Megahertz, More Is Better OSPRay Studio 0.11 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4700 / Avg: 4726.06 / Max: 5300 Min: 3450 / Avg: 4627.77 / Max: 5304 Min: 4546 / Avg: 4639.07 / Max: 5300
OpenBenchmarking.org Megahertz, More Is Better OSPRay Studio 0.11 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4700 / Avg: 4753.65 / Max: 5300 Min: 4576 / Avg: 4701.44 / Max: 5300 Min: 4571 / Avg: 4680.45 / Max: 5300
OpenBenchmarking.org Megahertz, More Is Better OSPRay Studio 0.11 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4700 / Avg: 4753.09 / Max: 5326 Min: 4561 / Avg: 4700.89 / Max: 5300 Min: 4570 / Avg: 4692.8 / Max: 5300
OpenBenchmarking.org Megahertz, More Is Better OSPRay Studio 0.11 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4700 / Avg: 4778.13 / Max: 5310 Min: 3465 / Avg: 4730.1 / Max: 5300 Min: 4570 / Avg: 4723.71 / Max: 5300
OpenBenchmarking.org Megahertz, More Is Better OSPRay Studio 0.11 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4700 / Avg: 4785.08 / Max: 5300 Min: 4564 / Avg: 4690.88 / Max: 5300 Min: 3500 / Avg: 4665.37 / Max: 5300
OpenBenchmarking.org Megahertz, More Is Better OSPRay Studio 0.11 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4700 / Avg: 4740.31 / Max: 5310 Min: 4564 / Avg: 4689.08 / Max: 5307 Min: 3500 / Avg: 4671.61 / Max: 5300
OpenBenchmarking.org Megahertz, More Is Better OSPRay Studio 0.11 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4700 / Avg: 4722.23 / Max: 5309 Min: 4557 / Avg: 4641.27 / Max: 5300 Min: 4550 / Avg: 4626.21 / Max: 5300
oneDNN OpenBenchmarking.org Megahertz, More Is Better oneDNN 2.7 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4800 / Avg: 4945.85 / Max: 5300 Min: 4700 / Avg: 4896.54 / Max: 5300 Min: 4700 / Avg: 4879.13 / Max: 5300
OpenBenchmarking.org Megahertz, More Is Better oneDNN 2.7 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4800 / Avg: 4940.32 / Max: 5300 Min: 4700 / Avg: 4892.18 / Max: 5300 Min: 4700 / Avg: 4884.79 / Max: 5300
OpenBenchmarking.org Megahertz, More Is Better oneDNN 2.7 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4700 / Avg: 4894.72 / Max: 5300 Min: 4700 / Avg: 4872.99 / Max: 5300
OpenBenchmarking.org Megahertz, More Is Better oneDNN 2.7 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4700 / Avg: 4788.28 / Max: 5100 Min: 4700 / Avg: 4787.19 / Max: 5300 Min: 4700 / Avg: 4787.63 / Max: 5228
OpenBenchmarking.org Megahertz, More Is Better oneDNN 2.7 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4700 / Avg: 4797.64 / Max: 5309 Min: 3426 / Avg: 4750.48 / Max: 5310 Min: 3430 / Avg: 4699.55 / Max: 5139
OpenBenchmarking.org Megahertz, More Is Better oneDNN 2.7 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4700 / Avg: 4772.18 / Max: 5298 Min: 4700 / Avg: 4768.42 / Max: 5101
OpenBenchmarking.org Megahertz, More Is Better oneDNN 2.7 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4721 / Avg: 4966.06 / Max: 5300 Min: 4700 / Avg: 4987.92 / Max: 5312 Min: 4700 / Avg: 4931.38 / Max: 5267
OpenBenchmarking.org Megahertz, More Is Better oneDNN 2.7 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4700 / Avg: 4953.98 / Max: 5300 Min: 4700 / Avg: 4984.13 / Max: 5309 Min: 4700 / Avg: 4983.09 / Max: 5312
OpenBenchmarking.org Megahertz, More Is Better oneDNN 2.7 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4700 / Avg: 4987.32 / Max: 5308 Min: 4700 / Avg: 4940.08 / Max: 5300
OpenBenchmarking.org Megahertz, More Is Better oneDNN 2.7 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4700 / Avg: 4801.83 / Max: 5299 Min: 4700 / Avg: 4798.47 / Max: 5302 Min: 4700 / Avg: 4796.3 / Max: 5300
OpenBenchmarking.org Megahertz, More Is Better oneDNN 2.7 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4700 / Avg: 4797.83 / Max: 5306 Min: 4700 / Avg: 4792.74 / Max: 5301 Min: 4700 / Avg: 4794.07 / Max: 5308
OpenBenchmarking.org Megahertz, More Is Better oneDNN 2.7 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4700 / Avg: 4799.03 / Max: 5300 Min: 4700 / Avg: 4796.12 / Max: 5300
OpenBenchmarking.org Megahertz, More Is Better oneDNN 2.7 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4789 / Avg: 4907.45 / Max: 5306 Min: 4700 / Avg: 4834.06 / Max: 5301 Min: 4700 / Avg: 4835.06 / Max: 5300
OpenBenchmarking.org Megahertz, More Is Better oneDNN 2.7 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4753 / Avg: 4904.44 / Max: 5300 Min: 4700 / Avg: 4847.06 / Max: 5310 Min: 4700 / Avg: 4828.02 / Max: 5149
OpenBenchmarking.org Megahertz, More Is Better oneDNN 2.7 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4700 / Avg: 4840.23 / Max: 5303 Min: 4700 / Avg: 4825.21 / Max: 5297
OpenBenchmarking.org Megahertz, More Is Better oneDNN 2.7 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4800 / Avg: 4883.85 / Max: 5307 Min: 4700 / Avg: 4817.98 / Max: 5300 Min: 4700 / Avg: 4805.17 / Max: 5300
OpenBenchmarking.org Megahertz, More Is Better oneDNN 2.7 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4700 / Avg: 4857.73 / Max: 5270 Min: 4700 / Avg: 4831.27 / Max: 5300 Min: 4700 / Avg: 4824.57 / Max: 5300
OpenBenchmarking.org Megahertz, More Is Better oneDNN 2.7 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4700 / Avg: 4826.46 / Max: 5302 Min: 4700 / Avg: 4807.8 / Max: 5221
OpenBenchmarking.org Megahertz, More Is Better oneDNN 2.7 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4700 / Avg: 4733.47 / Max: 5312 Min: 4649 / Avg: 4725.14 / Max: 5300 Min: 3500 / Avg: 4710.26 / Max: 5301
OpenBenchmarking.org Megahertz, More Is Better oneDNN 2.7 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4700 / Avg: 4726.9 / Max: 5300 Min: 3500 / Avg: 4678.34 / Max: 5300 Min: 3500 / Avg: 4667.49 / Max: 5300
OpenBenchmarking.org Megahertz, More Is Better oneDNN 2.7 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4700 / Avg: 4729.18 / Max: 5303 Min: 3500 / Avg: 4672.78 / Max: 5309 Min: 3500 / Avg: 4674.91 / Max: 5181
OpenBenchmarking.org Megahertz, More Is Better oneDNN 2.7 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4700 / Avg: 4732.36 / Max: 5300 Min: 3500 / Avg: 4683.06 / Max: 5300 Min: 4573 / Avg: 4707.8 / Max: 5300
OpenBenchmarking.org Megahertz, More Is Better oneDNN 2.7 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4700 / Avg: 4725.04 / Max: 5303 Min: 3524 / Avg: 4668.41 / Max: 5291 Min: 3500 / Avg: 4673.77 / Max: 5300
OpenBenchmarking.org Megahertz, More Is Better oneDNN 2.7 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4700 / Avg: 4729.07 / Max: 5300 Min: 3545 / Avg: 4698.05 / Max: 5302 Min: 3500 / Avg: 4678.88 / Max: 5300
Mobile Neural Network OpenBenchmarking.org Megahertz, More Is Better Mobile Neural Network 2.1 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4700 / Avg: 4718.12 / Max: 5300 Min: 3483 / Avg: 4653.2 / Max: 5308 Min: 3500 / Avg: 4644.38 / Max: 5300
NCNN OpenBenchmarking.org Megahertz, More Is Better NCNN 20220729 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4724 / Avg: 4816.34 / Max: 5306 Min: 4700 / Avg: 4729.61 / Max: 5300 Min: 4700 / Avg: 4737.32 / Max: 5298
TNN OpenBenchmarking.org Megahertz, More Is Better TNN 0.3 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4800 / Avg: 5120.83 / Max: 5304 Min: 4800 / Avg: 5103.01 / Max: 5306 Min: 4800 / Avg: 5095.33 / Max: 5300
OpenBenchmarking.org Megahertz, More Is Better TNN 0.3 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4800 / Avg: 5114.03 / Max: 5304 Min: 4800 / Avg: 5109.16 / Max: 5300 Min: 4800 / Avg: 5072.03 / Max: 5313
OpenBenchmarking.org Megahertz, More Is Better TNN 0.3 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 5100 / Avg: 5253.76 / Max: 5300 Min: 5100 / Avg: 5282.12 / Max: 5304 Min: 5099 / Avg: 5261.41 / Max: 5300
OpenBenchmarking.org Megahertz, More Is Better TNN 0.3 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 5100 / Avg: 5198.88 / Max: 5304 Min: 5099 / Avg: 5219.48 / Max: 5310 Min: 5100 / Avg: 5205.02 / Max: 5305
OpenVINO OpenBenchmarking.org Megahertz, More Is Better OpenVINO 2022.2.dev CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 3689 / Avg: 4731.52 / Max: 5300 Min: 3429 / Avg: 4599.8 / Max: 5310 Min: 3424 / Avg: 4605.82 / Max: 5143
OpenBenchmarking.org Megahertz, More Is Better OpenVINO 2022.2.dev CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 3600 / Avg: 4630.48 / Max: 5300 Min: 3486 / Avg: 4562.25 / Max: 5300 Min: 3417 / Avg: 4571.49 / Max: 5308
OpenBenchmarking.org Megahertz, More Is Better OpenVINO 2022.2.dev CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4700 / Avg: 4741.63 / Max: 5304 Min: 3467 / Avg: 4665.94 / Max: 5302 Min: 3400 / Avg: 4621.94 / Max: 5300
OpenBenchmarking.org Megahertz, More Is Better OpenVINO 2022.2.dev CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4700 / Avg: 4738.5 / Max: 5300 Min: 3400 / Avg: 4624.71 / Max: 5302 Min: 3408 / Avg: 4610.19 / Max: 5310
OpenBenchmarking.org Megahertz, More Is Better OpenVINO 2022.2.dev CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4700 / Avg: 4762.25 / Max: 5300 Min: 3434 / Avg: 4623.02 / Max: 5300 Min: 3439 / Avg: 4662.38 / Max: 5300
OpenBenchmarking.org Megahertz, More Is Better OpenVINO 2022.2.dev CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4700 / Avg: 4766.42 / Max: 5300 Min: 3400 / Avg: 4531.88 / Max: 5300 Min: 3400 / Avg: 4584.47 / Max: 5300
OpenBenchmarking.org Megahertz, More Is Better OpenVINO 2022.2.dev CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 3600 / Avg: 4614.52 / Max: 5300 Min: 3500 / Avg: 4589.65 / Max: 5300 Min: 3500 / Avg: 4603.65 / Max: 5199
OpenBenchmarking.org Megahertz, More Is Better OpenVINO 2022.2.dev CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4007 / Avg: 4734.73 / Max: 5300 Min: 3500 / Avg: 4571.82 / Max: 5303 Min: 3000 / Avg: 4570.85 / Max: 5186
OpenBenchmarking.org Megahertz, More Is Better OpenVINO 2022.2.dev CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 3648 / Avg: 4621.73 / Max: 5300 Min: 3451 / Avg: 4568.22 / Max: 5300 Min: 2700 / Avg: 4559.24 / Max: 5300
OpenBenchmarking.org Megahertz, More Is Better OpenVINO 2022.2.dev CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 1000 2000 3000 4000 5000 Min: 4700 / Avg: 4746.45 / Max: 5621 Min: 4700 / Avg: 4730.05 / Max: 5300 Min: 4700 / Avg: 4729.11 / Max: 5314
OpenBenchmarking.org Megahertz, More Is Better OpenVINO 2022.2.dev CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4700 / Avg: 4735.41 / Max: 5311 Min: 4700 / Avg: 4733.54 / Max: 5300 Min: 4700 / Avg: 4730.67 / Max: 5300
OpenBenchmarking.org Megahertz, More Is Better OpenVINO 2022.2.dev CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4700 / Avg: 4731.38 / Max: 5300 Min: 3422 / Avg: 4604.88 / Max: 5300 Min: 3403 / Avg: 4632.52 / Max: 5130
Numpy Benchmark OpenBenchmarking.org Megahertz, More Is Better Numpy Benchmark CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 5086 / Avg: 5292.12 / Max: 5306 Min: 4905 / Avg: 5065.67 / Max: 5305 Min: 4910 / Avg: 5058.87 / Max: 5300
GROMACS OpenBenchmarking.org Megahertz, More Is Better GROMACS 2022.1 CPU Peak Freq (Highest CPU Core Frequency) Monitor AVX-512 Off AVX-512 On AVX-512 On 512 900 1800 2700 3600 4500 Min: 4700 / Avg: 4781.57 / Max: 5267 Min: 4700 / Avg: 4784 / Max: 5300 Min: 4700 / Avg: 4778.77 / Max: 5300
LeelaChessZero OpenBenchmarking.org Nodes Per Second Per Watt, More Is Better LeelaChessZero 0.28 Backend: BLAS AVX-512 Off AVX-512 On AVX-512 On 512 2 4 6 8 10 7.295 6.080 6.061
GROMACS The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing with the water_GMX50 data. This test profile allows selecting between CPU and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.
Meta Performance Per Watts OpenBenchmarking.org Performance Per Watts, More Is Better Meta Performance Per Watts Performance Per Watts AVX-512 Off AVX-512 On AVX-512 On 512 40 80 120 160 200 133.30 170.75 170.21
AI Benchmark Alpha OpenBenchmarking.org Score Per Watt, More Is Better AI Benchmark Alpha 0.1.2 Device AI Score AVX-512 Off AVX-512 On AVX-512 On 512 5 10 15 20 25 16.76 21.17 21.26
Numpy Benchmark OpenBenchmarking.org Score Per Watt, More Is Better Numpy Benchmark AVX-512 Off AVX-512 On AVX-512 On 512 3 6 9 12 15 9.735 10.211 10.293
AI Benchmark Alpha AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.
Neural Magic DeepSparse OpenBenchmarking.org Celsius, Fewer Is Better Neural Magic DeepSparse 1.1 CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 20 40 60 80 100 Min: 35 / Avg: 69.17 / Max: 81 Min: 39 / Avg: 79.78 / Max: 91 Min: 38 / Avg: 77.04 / Max: 92
OpenBenchmarking.org Celsius, Fewer Is Better Neural Magic DeepSparse 1.1 CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 20 40 60 80 100 Min: 38 / Avg: 72.32 / Max: 85 Min: 40 / Avg: 78.33 / Max: 92 Min: 39 / Avg: 78.25 / Max: 94
OpenBenchmarking.org Celsius, Fewer Is Better Neural Magic DeepSparse 1.1 CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 15 30 45 60 75 Min: 38 / Avg: 66.44 / Max: 73 Min: 40 / Avg: 72.9 / Max: 79 Min: 38 / Avg: 69.89 / Max: 79
OpenBenchmarking.org Celsius, Fewer Is Better Neural Magic DeepSparse 1.1 CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 16 32 48 64 80 Min: 39 / Avg: 69.87 / Max: 78 Min: 40 / Avg: 73.39 / Max: 81 Min: 39 / Avg: 73.25 / Max: 82
OpenBenchmarking.org Celsius, Fewer Is Better Neural Magic DeepSparse 1.1 CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 20 40 60 80 100 Min: 39 / Avg: 74.18 / Max: 83 Min: 38 / Avg: 79.49 / Max: 92 Min: 39 / Avg: 81.82 / Max: 92
OpenBenchmarking.org Celsius, Fewer Is Better Neural Magic DeepSparse 1.1 CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 20 40 60 80 100 Min: 40 / Avg: 78.22 / Max: 86 Min: 40 / Avg: 82.42 / Max: 94 Min: 41 / Avg: 82.64 / Max: 96
OpenBenchmarking.org Celsius, Fewer Is Better Neural Magic DeepSparse 1.1 CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 20 40 60 80 100 Min: 38 / Avg: 71.71 / Max: 85 Min: 39 / Avg: 78.7 / Max: 94 Min: 39 / Avg: 79.14 / Max: 92
OpenBenchmarking.org Celsius, Fewer Is Better Neural Magic DeepSparse 1.1 CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 20 40 60 80 100 Min: 39 / Avg: 72.83 / Max: 87 Min: 41 / Avg: 76.64 / Max: 95 Min: 40 / Avg: 78.69 / Max: 92
OpenBenchmarking.org Celsius, Fewer Is Better Neural Magic DeepSparse 1.1 CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 20 40 60 80 100 Min: 38 / Avg: 64.6 / Max: 73 Min: 39 / Avg: 75.07 / Max: 90 Min: 38 / Avg: 73.65 / Max: 89
OpenBenchmarking.org Celsius, Fewer Is Better Neural Magic DeepSparse 1.1 CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 20 40 60 80 100 Min: 39 / Avg: 65.01 / Max: 75 Min: 40 / Avg: 74.99 / Max: 88 Min: 37 / Avg: 73.66 / Max: 88
OpenBenchmarking.org Celsius, Fewer Is Better Neural Magic DeepSparse 1.1 CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 20 40 60 80 100 Min: 38 / Avg: 73.22 / Max: 82 Min: 39 / Avg: 80.23 / Max: 93 Min: 39 / Avg: 80.25 / Max: 92
OpenBenchmarking.org Celsius, Fewer Is Better Neural Magic DeepSparse 1.1 CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 20 40 60 80 100 Min: 39 / Avg: 74.07 / Max: 85 Min: 40 / Avg: 80.82 / Max: 93 Min: 39 / Avg: 80.84 / Max: 95
OpenBenchmarking.org Celsius, Fewer Is Better Neural Magic DeepSparse 1.1 CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 20 40 60 80 100 Min: 38 / Avg: 71.94 / Max: 85 Min: 38 / Avg: 79.14 / Max: 93 Min: 39 / Avg: 79.22 / Max: 92
OpenBenchmarking.org Celsius, Fewer Is Better Neural Magic DeepSparse 1.1 CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 20 40 60 80 100 Min: 39 / Avg: 72.53 / Max: 86 Min: 39 / Avg: 77.07 / Max: 96 Min: 40 / Avg: 78.84 / Max: 95
OpenRadioss OpenBenchmarking.org Celsius, Fewer Is Better OpenRadioss 2022.10.13 CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 14 28 42 56 70 Min: 38 / Avg: 70.2 / Max: 74 Min: 39 / Avg: 70.71 / Max: 75 Min: 40 / Avg: 70.47 / Max: 75
OpenBenchmarking.org Celsius, Fewer Is Better OpenRadioss 2022.10.13 CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 14 28 42 56 70 Min: 38 / Avg: 69.59 / Max: 73 Min: 38 / Avg: 70.36 / Max: 75 Min: 38 / Avg: 70.17 / Max: 75
OpenBenchmarking.org Celsius, Fewer Is Better OpenRadioss 2022.10.13 CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 15 30 45 60 75 Min: 39 / Avg: 69.57 / Max: 75 Min: 39 / Avg: 69.91 / Max: 76 Min: 39 / Avg: 69.64 / Max: 75
OpenBenchmarking.org Celsius, Fewer Is Better OpenRadioss 2022.10.13 CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 15 30 45 60 75 Min: 39 / Avg: 72.15 / Max: 77 Min: 39 / Avg: 72.25 / Max: 77 Min: 39 / Avg: 72.17 / Max: 77
OpenBenchmarking.org Celsius, Fewer Is Better OpenRadioss 2022.10.13 CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 15 30 45 60 75 Min: 38 / Avg: 69.5 / Max: 75 Min: 39 / Avg: 70.02 / Max: 75 Min: 38 / Avg: 69.98 / Max: 76
OpenFOAM OpenBenchmarking.org Celsius, Fewer Is Better OpenFOAM 10 CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 14 28 42 56 70 Min: 38 / Avg: 60.32 / Max: 71 Min: 38 / Avg: 61.46 / Max: 73 Min: 38 / Avg: 61.09 / Max: 71
OSPRay Studio OpenBenchmarking.org Celsius, Fewer Is Better OSPRay Studio 0.11 CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 20 40 60 80 100 Min: 38 / Avg: 78.1 / Max: 84 Min: 38 / Avg: 85.39 / Max: 96 Min: 38 / Avg: 86.37 / Max: 93
OpenBenchmarking.org Celsius, Fewer Is Better OSPRay Studio 0.11 CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 20 40 60 80 100 Min: 40 / Avg: 78.59 / Max: 84 Min: 41 / Avg: 86.33 / Max: 94 Min: 41 / Avg: 86.12 / Max: 95
OpenBenchmarking.org Celsius, Fewer Is Better OSPRay Studio 0.11 CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 20 40 60 80 100 Min: 40 / Avg: 76.62 / Max: 83 Min: 42 / Avg: 84.38 / Max: 94 Min: 41 / Avg: 84.26 / Max: 91
OpenBenchmarking.org Celsius, Fewer Is Better OSPRay Studio 0.11 CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 20 40 60 80 100 Min: 41 / Avg: 79.36 / Max: 84 Min: 40 / Avg: 87.2 / Max: 96 Min: 41 / Avg: 87.44 / Max: 93
OpenBenchmarking.org Celsius, Fewer Is Better OSPRay Studio 0.11 CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 20 40 60 80 100 Min: 40 / Avg: 79.03 / Max: 84 Min: 41 / Avg: 86.52 / Max: 93 Min: 42 / Avg: 87.16 / Max: 95
OpenBenchmarking.org Celsius, Fewer Is Better OSPRay Studio 0.11 CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 20 40 60 80 100 Min: 40 / Avg: 80.21 / Max: 84 Min: 42 / Avg: 88.05 / Max: 96 Min: 41 / Avg: 88.34 / Max: 94
OpenBenchmarking.org Celsius, Fewer Is Better OSPRay Studio 0.11 CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 20 40 60 80 100 Min: 39 / Avg: 78.14 / Max: 88 Min: 41 / Avg: 86.94 / Max: 91 Min: 42 / Avg: 87.08 / Max: 93
OpenBenchmarking.org Celsius, Fewer Is Better OSPRay Studio 0.11 CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 20 40 60 80 100 Min: 40 / Avg: 78.9 / Max: 84 Min: 41 / Avg: 86.58 / Max: 95 Min: 42 / Avg: 86.55 / Max: 92
OpenBenchmarking.org Celsius, Fewer Is Better OSPRay Studio 0.11 CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 20 40 60 80 100 Min: 40 / Avg: 76.99 / Max: 83 Min: 41 / Avg: 84.31 / Max: 94 Min: 42 / Avg: 85.88 / Max: 91
OpenBenchmarking.org Celsius, Fewer Is Better OSPRay Studio 0.11 CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 20 40 60 80 100 Min: 40 / Avg: 76.25 / Max: 85 Min: 41 / Avg: 87.29 / Max: 92 Min: 41 / Avg: 87.49 / Max: 93
OpenBenchmarking.org Celsius, Fewer Is Better OSPRay Studio 0.11 CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 20 40 60 80 100 Min: 39 / Avg: 78.83 / Max: 83 Min: 41 / Avg: 87.07 / Max: 91 Min: 41 / Avg: 87.32 / Max: 93
OpenBenchmarking.org Celsius, Fewer Is Better OSPRay Studio 0.11 CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 20 40 60 80 100 Min: 40 / Avg: 80.67 / Max: 84 Min: 42 / Avg: 88.64 / Max: 93 Min: 43 / Avg: 88.73 / Max: 96
oneDNN OpenBenchmarking.org Celsius, Fewer Is Better oneDNN 2.7 CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 13 26 39 52 65 Min: 39 / Avg: 54 / Max: 63 Min: 40 / Avg: 56.59 / Max: 68 Min: 41 / Avg: 57.77 / Max: 69
OpenBenchmarking.org Celsius, Fewer Is Better oneDNN 2.7 CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 13 26 39 52 65 Min: 36 / Avg: 50.61 / Max: 60 Min: 38 / Avg: 53.03 / Max: 64 Min: 37 / Avg: 53.8 / Max: 65
OpenBenchmarking.org Celsius, Fewer Is Better oneDNN 2.7 CPU Temperature Monitor AVX-512 On AVX-512 On 512 15 30 45 60 75 Min: 36 / Avg: 59.56 / Max: 76 Min: 36 / Avg: 59.45 / Max: 76
OpenBenchmarking.org Celsius, Fewer Is Better oneDNN 2.7 CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 20 40 60 80 100 Min: 32 / Avg: 63.64 / Max: 79 Min: 36 / Avg: 73.12 / Max: 90 Min: 38 / Avg: 73.45 / Max: 90
OpenBenchmarking.org Celsius, Fewer Is Better oneDNN 2.7 CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 20 40 60 80 100 Min: 37 / Avg: 68.37 / Max: 81 Min: 39 / Avg: 76.57 / Max: 97 Min: 40 / Avg: 76.51 / Max: 95
OpenBenchmarking.org Celsius, Fewer Is Better oneDNN 2.7 CPU Temperature Monitor AVX-512 On AVX-512 On 512 20 40 60 80 100 Min: 39 / Avg: 74.22 / Max: 90 Min: 39 / Avg: 74.99 / Max: 90
OpenBenchmarking.org Celsius, Fewer Is Better oneDNN 2.7 CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 16 32 48 64 80 Min: 34 / Avg: 51.76 / Max: 71 Min: 39 / Avg: 56.03 / Max: 81 Min: 39 / Avg: 58.3 / Max: 82
OpenBenchmarking.org Celsius, Fewer Is Better oneDNN 2.7 CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 15 30 45 60 75 Min: 35 / Avg: 52.66 / Max: 74 Min: 35 / Avg: 53.89 / Max: 78 Min: 36 / Avg: 55.78 / Max: 79
OpenBenchmarking.org Celsius, Fewer Is Better oneDNN 2.7 CPU Temperature Monitor AVX-512 On AVX-512 On 512 14 28 42 56 70 Min: 34 / Avg: 49.81 / Max: 71 Min: 36 / Avg: 53.02 / Max: 71
OpenBenchmarking.org Celsius, Fewer Is Better oneDNN 2.7 CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 20 40 60 80 100 Min: 31 / Avg: 66.53 / Max: 80 Min: 34 / Avg: 71.48 / Max: 87 Min: 35 / Avg: 71.51 / Max: 88
OpenBenchmarking.org Celsius, Fewer Is Better oneDNN 2.7 CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 20 40 60 80 100 Min: 38 / Avg: 67.95 / Max: 80 Min: 39 / Avg: 75.8 / Max: 90 Min: 40 / Avg: 76.38 / Max: 90
OpenBenchmarking.org Celsius, Fewer Is Better oneDNN 2.7 CPU Temperature Monitor AVX-512 On AVX-512 On 512 15 30 45 60 75 Min: 39 / Avg: 68.31 / Max: 79 Min: 41 / Avg: 68.97 / Max: 80
OpenBenchmarking.org Celsius, Fewer Is Better oneDNN 2.7 CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 14 28 42 56 70 Min: 34 / Avg: 54.35 / Max: 70 Min: 39 / Avg: 59.67 / Max: 73 Min: 40 / Avg: 59.91 / Max: 74
OpenBenchmarking.org Celsius, Fewer Is Better oneDNN 2.7 CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 15 30 45 60 75 Min: 35 / Avg: 56.98 / Max: 70 Min: 38 / Avg: 61.41 / Max: 78 Min: 38 / Avg: 61.88 / Max: 79
OpenBenchmarking.org Celsius, Fewer Is Better oneDNN 2.7 CPU Temperature Monitor AVX-512 On AVX-512 On 512 15 30 45 60 75 Min: 37 / Avg: 62.76 / Max: 75 Min: 38 / Avg: 63.8 / Max: 78
OpenBenchmarking.org Celsius, Fewer Is Better oneDNN 2.7 CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 15 30 45 60 75 Min: 32 / Avg: 54.62 / Max: 65 Min: 38 / Avg: 62.72 / Max: 76 Min: 39 / Avg: 63.95 / Max: 76
OpenBenchmarking.org Celsius, Fewer Is Better oneDNN 2.7 CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 14 28 42 56 70 Min: 36 / Avg: 60.25 / Max: 74 Min: 38 / Avg: 64.16 / Max: 75 Min: 38 / Avg: 63.12 / Max: 74
OpenBenchmarking.org Celsius, Fewer Is Better oneDNN 2.7 CPU Temperature Monitor AVX-512 On AVX-512 On 512 15 30 45 60 75 Min: 38 / Avg: 63.03 / Max: 77 Min: 39 / Avg: 64.2 / Max: 78
OpenBenchmarking.org Celsius, Fewer Is Better oneDNN 2.7 CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 20 40 60 80 100 Min: 33 / Avg: 78.87 / Max: 87 Min: 37 / Avg: 84.81 / Max: 94 Min: 38 / Avg: 84.54 / Max: 94
OpenBenchmarking.org Celsius, Fewer Is Better oneDNN 2.7 CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 20 40 60 80 100 Min: 39 / Avg: 80.93 / Max: 87 Min: 40 / Avg: 84.18 / Max: 95 Min: 41 / Avg: 85.38 / Max: 95
OpenBenchmarking.org Celsius, Fewer Is Better oneDNN 2.7 CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 20 40 60 80 100 Min: 40 / Avg: 80.45 / Max: 88 Min: 41 / Avg: 83.9 / Max: 95 Min: 41 / Avg: 84.88 / Max: 94
OpenBenchmarking.org Celsius, Fewer Is Better oneDNN 2.7 CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 20 40 60 80 100 Min: 40 / Avg: 79.82 / Max: 87 Min: 41 / Avg: 83.26 / Max: 94 Min: 42 / Avg: 85.26 / Max: 95
OpenBenchmarking.org Celsius, Fewer Is Better oneDNN 2.7 CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 20 40 60 80 100 Min: 40 / Avg: 79.86 / Max: 87 Min: 40 / Avg: 84.09 / Max: 94 Min: 42 / Avg: 84.75 / Max: 94
OpenBenchmarking.org Celsius, Fewer Is Better oneDNN 2.7 CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 20 40 60 80 100 Min: 39 / Avg: 79.8 / Max: 88 Min: 41 / Avg: 83.69 / Max: 93 Min: 41 / Avg: 85.51 / Max: 95
Mobile Neural Network OpenBenchmarking.org Celsius, Fewer Is Better Mobile Neural Network 2.1 CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 20 40 60 80 100 Min: 38 / Avg: 78.24 / Max: 84 Min: 39 / Avg: 85.37 / Max: 95 Min: 41 / Avg: 86.3 / Max: 95
NCNN OpenBenchmarking.org Celsius, Fewer Is Better NCNN 20220729 CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 16 32 48 64 80 Min: 40 / Avg: 65.4 / Max: 71 Min: 40 / Avg: 70.36 / Max: 80 Min: 42 / Avg: 71.88 / Max: 87
TNN OpenBenchmarking.org Celsius, Fewer Is Better TNN 0.3 CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 13 26 39 52 65 Min: 36 / Avg: 57.3 / Max: 67 Min: 36 / Avg: 57.79 / Max: 68 Min: 36 / Avg: 58.88 / Max: 69
OpenBenchmarking.org Celsius, Fewer Is Better TNN 0.3 CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 13 26 39 52 65 Min: 33 / Avg: 51.92 / Max: 64 Min: 33 / Avg: 53.27 / Max: 64 Min: 35 / Avg: 53.49 / Max: 67
OpenBenchmarking.org Celsius, Fewer Is Better TNN 0.3 CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 12 24 36 48 60 Min: 31 / Avg: 48.23 / Max: 58 Min: 31 / Avg: 48.98 / Max: 58 Min: 33 / Avg: 50.06 / Max: 61
OpenBenchmarking.org Celsius, Fewer Is Better TNN 0.3 CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 13 26 39 52 65 Min: 30 / Avg: 44.53 / Max: 64 Min: 30 / Avg: 45.12 / Max: 64 Min: 32 / Avg: 46.23 / Max: 65
OpenVINO OpenBenchmarking.org Celsius, Fewer Is Better OpenVINO 2022.2.dev CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 20 40 60 80 100 Min: 61 / Avg: 84.02 / Max: 93 Min: 35 / Avg: 84.78 / Max: 98 Min: 39 / Avg: 86.41 / Max: 99
OpenBenchmarking.org Celsius, Fewer Is Better OpenVINO 2022.2.dev CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 20 40 60 80 100 Min: 60 / Avg: 84.66 / Max: 94 Min: 40 / Avg: 85.93 / Max: 98 Min: 40 / Avg: 82.79 / Max: 99
OpenBenchmarking.org Celsius, Fewer Is Better OpenVINO 2022.2.dev CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 20 40 60 80 100 Min: 60 / Avg: 83.29 / Max: 89 Min: 39 / Avg: 87.24 / Max: 97 Min: 38 / Avg: 85.65 / Max: 96
OpenBenchmarking.org Celsius, Fewer Is Better OpenVINO 2022.2.dev CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 20 40 60 80 100 Min: 61 / Avg: 85.95 / Max: 90 Min: 40 / Avg: 87.04 / Max: 95 Min: 41 / Avg: 86.73 / Max: 95
OpenBenchmarking.org Celsius, Fewer Is Better OpenVINO 2022.2.dev CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 20 40 60 80 100 Min: 61 / Avg: 84.05 / Max: 95 Min: 40 / Avg: 86.76 / Max: 97 Min: 41 / Avg: 87.35 / Max: 100
OpenBenchmarking.org Celsius, Fewer Is Better OpenVINO 2022.2.dev CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 20 40 60 80 100 Min: 59 / Avg: 84.96 / Max: 95 Min: 40 / Avg: 86.02 / Max: 100 Min: 41 / Avg: 86.88 / Max: 100
OpenBenchmarking.org Celsius, Fewer Is Better OpenVINO 2022.2.dev CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 20 40 60 80 100 Min: 60 / Avg: 87.45 / Max: 94 Min: 41 / Avg: 86.74 / Max: 97 Min: 41 / Avg: 87 / Max: 97
OpenBenchmarking.org Celsius, Fewer Is Better OpenVINO 2022.2.dev CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 20 40 60 80 100 Min: 61 / Avg: 86.8 / Max: 92 Min: 42 / Avg: 86.73 / Max: 97 Min: 40 / Avg: 87.33 / Max: 96
OpenBenchmarking.org Celsius, Fewer Is Better OpenVINO 2022.2.dev CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 20 40 60 80 100 Min: 63 / Avg: 85.86 / Max: 93 Min: 41 / Avg: 87.16 / Max: 98 Min: 41 / Avg: 87.07 / Max: 99
OpenBenchmarking.org Celsius, Fewer Is Better OpenVINO 2022.2.dev CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 16 32 48 64 80 Min: 60 / Avg: 73.94 / Max: 80 Min: 41 / Avg: 76.53 / Max: 85 Min: 40 / Avg: 76.21 / Max: 84
OpenBenchmarking.org Celsius, Fewer Is Better OpenVINO 2022.2.dev CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 20 40 60 80 100 Min: 44 / Avg: 76.67 / Max: 83 Min: 40 / Avg: 80.87 / Max: 89 Min: 40 / Avg: 81.03 / Max: 89
OpenBenchmarking.org Celsius, Fewer Is Better OpenVINO 2022.2.dev CPU Temperature Monitor AVX-512 Off AVX-512 On AVX-512 On 512 20 40 60 80 100 Min: 40 / Avg: 80.98 / Max: 89 Min: 40 / Avg: 84.75 / Max: 97 Min: 40 / Avg: 85.67 / Max: 95
OSPRay Studio Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.
Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU
i9-11900K: AVX-512 Off: The test run did not produce a result.
Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU
i9-11900K: AVX-512 Off: The test run did not produce a result.
Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU
i9-11900K: AVX-512 Off: The test run did not produce a result.
Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU
i9-11900K: AVX-512 Off: The test run did not produce a result.
Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU
i9-11900K: AVX-512 Off: The test run did not produce a result.
Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU
i9-11900K: AVX-512 Off: The test run did not produce a result.
Mobile Neural Network MNN is the Mobile Neural Network as a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile is building the OpenMP / CPU threaded version for processor benchmarking and not any GPU-accelerated test. MNN does allow making use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org ms, Fewer Is Better Mobile Neural Network 2.1 Model: nasnet AVX-512 Off AVX-512 On AVX-512 On 512 2 4 6 8 10 SE +/- 0.056, N = 3 SE +/- 0.077, N = 15 SE +/- 0.087, N = 15 7.139 7.468 7.527 -mno-avx512f - MIN: 6.92 / MAX: 7.99 MIN: 6.73 / MAX: 13.9 MIN: 6.72 / MAX: 17.92 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.org ms, Fewer Is Better Mobile Neural Network 2.1 Model: mobilenetV3 AVX-512 Off AVX-512 On AVX-512 On 512 0.2167 0.4334 0.6501 0.8668 1.0835 SE +/- 0.003, N = 3 SE +/- 0.005, N = 15 SE +/- 0.012, N = 15 0.942 0.938 0.963 -mno-avx512f - MIN: 0.92 / MAX: 1.64 MIN: 0.89 / MAX: 3.62 MIN: 0.89 / MAX: 10.54 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.org ms, Fewer Is Better Mobile Neural Network 2.1 Model: squeezenetv1.1 AVX-512 Off AVX-512 On AVX-512 On 512 0.4552 0.9104 1.3656 1.8208 2.276 SE +/- 0.014, N = 3 SE +/- 0.013, N = 15 SE +/- 0.014, N = 15 2.023 1.670 1.675 -mno-avx512f - MIN: 1.97 / MAX: 2.72 MIN: 1.54 / MAX: 11.33 MIN: 1.55 / MAX: 7.76 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.org ms, Fewer Is Better Mobile Neural Network 2.1 Model: resnet-v2-50 AVX-512 Off AVX-512 On AVX-512 On 512 4 8 12 16 20 SE +/- 0.08, N = 3 SE +/- 0.05, N = 15 SE +/- 0.04, N = 15 18.04 10.30 10.37 -mno-avx512f - MIN: 17.67 / MAX: 24.46 MIN: 9.74 / MAX: 33.88 MIN: 10.05 / MAX: 22.64 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.org ms, Fewer Is Better Mobile Neural Network 2.1 Model: SqueezeNetV1.0 AVX-512 Off AVX-512 On AVX-512 On 512 0.8312 1.6624 2.4936 3.3248 4.156 SE +/- 0.032, N = 3 SE +/- 0.020, N = 15 SE +/- 0.022, N = 15 3.694 3.073 3.114 -mno-avx512f - MIN: 3.57 / MAX: 4.37 MIN: 2.89 / MAX: 8.65 MIN: 2.9 / MAX: 9.42 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.org ms, Fewer Is Better Mobile Neural Network 2.1 Model: MobileNetV2_224 AVX-512 Off AVX-512 On AVX-512 On 512 0.4795 0.959 1.4385 1.918 2.3975 SE +/- 0.019, N = 3 SE +/- 0.014, N = 15 SE +/- 0.014, N = 15 1.934 2.131 2.125 -mno-avx512f - MIN: 1.84 / MAX: 11.59 MIN: 2.03 / MAX: 8.46 MIN: 2.03 / MAX: 8.62 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.org ms, Fewer Is Better Mobile Neural Network 2.1 Model: mobilenet-v1-1.0 AVX-512 Off AVX-512 On AVX-512 On 512 0.4475 0.895 1.3425 1.79 2.2375 SE +/- 0.007, N = 3 SE +/- 0.005, N = 15 SE +/- 0.007, N = 15 1.798 1.972 1.989 -mno-avx512f - MIN: 1.74 / MAX: 2.69 MIN: 1.8 / MAX: 8.03 MIN: 1.9 / MAX: 8.1 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
NCNN NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: mobilenet AVX-512 Off AVX-512 On AVX-512 On 512 3 6 9 12 15 SE +/- 0.02, N = 3 SE +/- 0.15, N = 3 SE +/- 0.13, N = 3 11.43 12.28 12.28 -mno-avx512f - MIN: 11.21 / MAX: 16.87 MIN: 11.79 / MAX: 13.67 MIN: 11.87 / MAX: 18.86 1. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU-v2-v2 - Model: mobilenet-v2 AVX-512 Off AVX-512 On AVX-512 On 512 0.8483 1.6966 2.5449 3.3932 4.2415 SE +/- 0.01, N = 3 SE +/- 0.01, N = 3 SE +/- 0.05, N = 3 3.34 3.73 3.77 -mno-avx512f - MIN: 3.16 / MAX: 5.05 MIN: 3.56 / MAX: 5.05 MIN: 3.53 / MAX: 4.4 1. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU-v3-v3 - Model: mobilenet-v3 AVX-512 Off AVX-512 On AVX-512 On 512 0.6098 1.2196 1.8294 2.4392 3.049 SE +/- 0.01, N = 3 SE +/- 0.03, N = 3 SE +/- 0.05, N = 3 2.49 2.68 2.71 -mno-avx512f - MIN: 2.4 / MAX: 3.36 MIN: 2.56 / MAX: 3.63 MIN: 2.55 / MAX: 3.5 1. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: shufflenet-v2 AVX-512 Off AVX-512 On AVX-512 On 512 0.5243 1.0486 1.5729 2.0972 2.6215 SE +/- 0.01, N = 3 SE +/- 0.01, N = 3 SE +/- 0.01, N = 3 2.33 2.33 2.33 -mno-avx512f - MIN: 2.27 / MAX: 3.18 MIN: 2.26 / MAX: 3.21 MIN: 2.27 / MAX: 2.81 1. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: mnasnet AVX-512 Off AVX-512 On AVX-512 On 512 0.5895 1.179 1.7685 2.358 2.9475 SE +/- 0.03, N = 3 SE +/- 0.03, N = 3 SE +/- 0.03, N = 3 2.37 2.62 2.58 -mno-avx512f - MIN: 2.28 / MAX: 3.27 MIN: 2.48 / MAX: 3.81 MIN: 2.48 / MAX: 3.14 1. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: efficientnet-b0 AVX-512 Off AVX-512 On AVX-512 On 512 1.0935 2.187 3.2805 4.374 5.4675 SE +/- 0.05, N = 3 SE +/- 0.06, N = 3 SE +/- 0.05, N = 3 4.35 4.86 4.79 -mno-avx512f - MIN: 4.2 / MAX: 5.22 MIN: 4.6 / MAX: 6.13 MIN: 4.61 / MAX: 9.9 1. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: blazeface AVX-512 Off AVX-512 On AVX-512 On 512 0.18 0.36 0.54 0.72 0.9 SE +/- 0.00, N = 3 SE +/- 0.00, N = 3 SE +/- 0.01, N = 3 0.76 0.80 0.80 -mno-avx512f - MIN: 0.73 / MAX: 1.52 MIN: 0.77 / MAX: 1.52 MIN: 0.77 / MAX: 1.28 1. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: googlenet AVX-512 Off AVX-512 On AVX-512 On 512 3 6 9 12 15 SE +/- 0.03, N = 3 SE +/- 0.00, N = 3 SE +/- 0.13, N = 3 9.55 10.16 9.96 -mno-avx512f - MIN: 9.33 / MAX: 10.86 MIN: 9.92 / MAX: 11.36 MIN: 9.52 / MAX: 10.98 1. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: vgg16 AVX-512 Off AVX-512 On AVX-512 On 512 10 20 30 40 50 SE +/- 0.13, N = 3 SE +/- 0.13, N = 3 SE +/- 0.23, N = 3 45.68 44.74 44.59 -mno-avx512f - MIN: 44.98 / MAX: 50.56 MIN: 44.1 / MAX: 49.06 MIN: 43.87 / MAX: 50.43 1. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: resnet18 AVX-512 Off AVX-512 On AVX-512 On 512 2 4 6 8 10 SE +/- 0.00, N = 3 SE +/- 0.02, N = 3 SE +/- 0.02, N = 3 8.24 8.69 8.64 -mno-avx512f - MIN: 8.06 / MAX: 9.19 MIN: 8.51 / MAX: 14.15 MIN: 8.45 / MAX: 9.39 1. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: alexnet AVX-512 Off AVX-512 On AVX-512 On 512 2 4 6 8 10 SE +/- 0.01, N = 3 SE +/- 0.02, N = 3 SE +/- 0.19, N = 3 6.33 6.98 6.69 -mno-avx512f - MIN: 6.2 / MAX: 7.75 MIN: 6.82 / MAX: 7.88 MIN: 6.2 / MAX: 7.35 1. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: resnet50 AVX-512 Off AVX-512 On AVX-512 On 512 4 8 12 16 20 SE +/- 0.02, N = 3 SE +/- 0.42, N = 3 SE +/- 0.42, N = 3 15.40 15.99 16.24 -mno-avx512f - MIN: 15.07 / MAX: 24.51 MIN: 15.25 / MAX: 18.48 MIN: 15.19 / MAX: 17.46 1. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: yolov4-tiny AVX-512 Off AVX-512 On AVX-512 On 512 5 10 15 20 25 SE +/- 0.06, N = 3 SE +/- 0.41, N = 3 SE +/- 0.44, N = 3 18.80 20.99 22.01 -mno-avx512f - MIN: 18.51 / MAX: 19.88 MIN: 20.36 / MAX: 37.15 MIN: 21.29 / MAX: 26.71 1. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: squeezenet_ssd AVX-512 Off AVX-512 On AVX-512 On 512 4 8 12 16 20 SE +/- 0.04, N = 3 SE +/- 0.18, N = 3 SE +/- 0.05, N = 3 13.89 17.19 17.34 -mno-avx512f - MIN: 13.58 / MAX: 15.32 MIN: 16.54 / MAX: 25.09 MIN: 17.03 / MAX: 18.98 1. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: regnety_400m AVX-512 Off AVX-512 On AVX-512 On 512 2 4 6 8 10 SE +/- 0.02, N = 3 SE +/- 0.02, N = 3 SE +/- 0.03, N = 3 6.70 7.05 7.10 -mno-avx512f - MIN: 6.54 / MAX: 7.93 MIN: 6.87 / MAX: 8.31 MIN: 6.91 / MAX: 11.12 1. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: vision_transformer AVX-512 Off AVX-512 On AVX-512 On 512 20 40 60 80 100 SE +/- 0.39, N = 3 SE +/- 0.11, N = 3 SE +/- 0.12, N = 3 110.38 87.29 70.39 -mno-avx512f - MIN: 109.43 / MAX: 116.1 MIN: 86.63 / MAX: 92.92 MIN: 69.62 / MAX: 71.72 1. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread
OpenRadioss OpenRadioss is an open-source AGPL-licensed finite element solver for dynamic event analysis OpenRadioss is based on Altair Radioss and open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.
OpenFOAM OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better OpenFOAM 10 Input: drivaerFastback, Small Mesh Size - Mesh Time AVX-512 Off AVX-512 On AVX-512 On 512 11 22 33 44 55 46.53 45.49 45.33 1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm
i9-11900K: AVX-512 On Kernel Notes: Transparent Huge Pages: madviseEnvironment Notes: CXXFLAGS="-O3 -march=native" CFLAGS="-O3 -march=native"Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-U8K4Qv/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-U8K4Qv/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -vProcessor Notes: Scaling Governor: intel_pstate performance (EPP: performance) - CPU Microcode: 0x54 - Thermald 2.5.1Python Notes: Python 3.10.7Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + retbleed: Mitigation of Enhanced IBRS + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 18 October 2022 10:35 by user phoronix.
i9-11900K: AVX-512 Off Kernel Notes: Transparent Huge Pages: madviseEnvironment Notes: CXXFLAGS="-O3 -march=native -mno-avx512f" CFLAGS="-O3 -march=native -mno-avx512f"Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-U8K4Qv/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-U8K4Qv/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -vProcessor Notes: Scaling Governor: intel_pstate performance (EPP: performance) - CPU Microcode: 0x54 - Thermald 2.5.1Python Notes: Python 3.10.7Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + retbleed: Mitigation of Enhanced IBRS + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 19 October 2022 09:56 by user phoronix.
i9-11900K: AVX-512 On 512 Processor: Intel Core i9-11900K @ 5.10GHz (8 Cores / 16 Threads), Motherboard: ASUS ROG MAXIMUS XIII HERO (1402 BIOS), Chipset: Intel Tiger Lake-H, Memory: 32GB, Disk: 2000GB Corsair Force MP600 + 32GB Flash Drive, Graphics: ASUS Intel RKL GT1 31GB (1300MHz), Audio: Intel Tiger Lake-H HD Audio, Monitor: ASUS MG28U, Network: 2 x Intel I225-V + Intel Wi-Fi 6 AX210/AX211/AX411
OS: Ubuntu 22.10, Kernel: 5.19.0-21-generic (x86_64), Desktop: GNOME Shell 43.0, Display Server: X Server + Wayland, OpenGL: 4.6 Mesa 22.2.1, Vulkan: 1.3.224, Compiler: GCC 12.2.0, File-System: ext4, Screen Resolution: 3840x2160
Kernel Notes: Transparent Huge Pages: madviseEnvironment Notes: CXXFLAGS="-O3 -march=native -mprefer-vector-width=512" CFLAGS="-O3 -march=native -mprefer-vector-width=512"Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-U8K4Qv/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-U8K4Qv/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -vProcessor Notes: Scaling Governor: intel_pstate performance (EPP: performance) - CPU Microcode: 0x54 - Thermald 2.5.1Python Notes: Python 3.10.7Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + retbleed: Mitigation of Enhanced IBRS + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 20 October 2022 10:52 by user phoronix.