AMD EPYC 9754 Bergamo AVX-512

AMD EPYC 9754 1P benchmarks, run first with AVX-512 enabled and then with AVX-512 disabled. Tests by Michael Larabel for a future article.

HTML result view exported from: https://openbenchmarking.org/result/2307197-NE-AMDBERGAM43&gru&sor.
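The exported HTML view is static; for local analysis the same result file can be pulled down with the Phoronix Test Suite. The snippet below is a minimal sketch, assuming phoronix-test-suite is installed and that its clone sub-command (part of the standard PTS CLI) is available; the AVX-512 check simply inspects the flags the kernel reports in /proc/cpuinfo.

import re
import subprocess

RESULT_ID = "2307197-NE-AMDBERGAM43"  # result ID from the URL above (view flags &gru&sor dropped)

def avx512_flags() -> set[str]:
    """Return the AVX-512 feature flags the kernel reports for this CPU."""
    with open("/proc/cpuinfo") as f:
        return set(re.findall(r"avx512\w*", f.read()))

if __name__ == "__main__":
    flags = avx512_flags()
    print("AVX-512 exposed:" if flags else "AVX-512 not exposed:", ", ".join(sorted(flags)))
    # Pull the OpenBenchmarking.org result file locally for side-by-side comparisons.
    subprocess.run(["phoronix-test-suite", "clone", RESULT_ID], check=True)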

AMD EPYC 9754 Bergamo AVX-512 - system configuration (identical for the AVX512 On and AVX512 Off runs):

Processor: AMD EPYC 9754 128-Core @ 2.25GHz (128 Cores / 256 Threads)
Motherboard: AMD Titanite_4G (RTI1007B BIOS)
Chipset: AMD Device 14a4
Memory: 768GB
Disk: 2 x 1920GB SAMSUNG MZWLJ1T9HBJR-00007
Graphics: ASPEED
Network: Broadcom NetXtreme BCM5720 PCIe
OS: Ubuntu 22.04
Kernel: 5.19.0-41-generic (x86_64)
Desktop: GNOME Shell 42.5
Display Server: X Server 1.21.1.4
Vulkan: 1.3.224
Compiler: GCC 11.3.0
File-System: ext4
Screen Resolution: 1024x768

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xaa0010b
Python Details: Python 3.10.6
Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected

AMD EPYC 9754 Bergamo AVX-512 - results overview (AVX512 On / AVX512 Off):

minibude: OpenMP - BM1 (Billion Interactions/s): 237.027 / 187.107
minibude: OpenMP - BM2 (Billion Interactions/s): 238.887 / 181.132
openvino: Face Detection FP16 - CPU (FPS): 60.73 / 26.18
openvino: Person Detection FP16 - CPU (FPS): 27.08 / 15.06
openvino: Person Detection FP32 - CPU (FPS): 27.01 / 14.99
openvino: Vehicle Detection FP16 - CPU (FPS): 1430.45 / 1190.91
openvino: Face Detection FP16-INT8 - CPU (FPS): 118.00 / 57.95
openvino: Vehicle Detection FP16-INT8 - CPU (FPS): 5690.34 / 3954.90
openvino: Weld Porosity Detection FP16 - CPU (FPS): 6073.22 / 2551.70
openvino: Machine Translation EN To DE FP16 - CPU (FPS): 580.40 / 251.97
openvino: Weld Porosity Detection FP16-INT8 - CPU (FPS): 11818.33 / 5692.99
openvino: Person Vehicle Bike Detection FP16 - CPU (FPS): 6638.71 / 3317.34
openvino: Age Gender Recognition Retail 0013 FP16 - CPU (FPS): 110240.89 / 62564.16
openvino: Age Gender Recognition Retail 0013 FP16-INT8 - CPU (FPS): 73970.12 / 66895.49
embree: Pathtracer ISPC - Crown (Frames Per Second): 125.5414 / 112.2274
embree: Pathtracer ISPC - Asian Dragon (Frames Per Second): 157.6450 / 132.9504
embree: Pathtracer ISPC - Asian Dragon Obj (Frames Per Second): 134.8396 / 115.2682
minibude: OpenMP - BM1 (GFInst/s): 5925.670 / 4677.682
minibude: OpenMP - BM2 (GFInst/s): 5972.187 / 4528.305
libxsmm: 256 (GFLOPS/s): 3342.5 / 2415.3
libxsmm: 128 (GFLOPS/s): 2690.7 / 2573.3
tensorflow: CPU - 64 - ResNet-50 (images/sec): 96.63 / 18.45
tensorflow: CPU - 16 - AlexNet (images/sec): 342.88 / 70.10
tensorflow: CPU - 32 - AlexNet (images/sec): 562.48 / 84.96
tensorflow: CPU - 64 - AlexNet (images/sec): 857.55 / 97.59
tensorflow: CPU - 256 - AlexNet (images/sec): 1422.36 / 106.28
tensorflow: CPU - 512 - AlexNet (images/sec): 1632.40 / 109.88
tensorflow: CPU - 16 - GoogLeNet (images/sec): 104.77 / 39.64
tensorflow: CPU - 16 - ResNet-50 (images/sec): 43.11 / 15.91
tensorflow: CPU - 32 - GoogLeNet (images/sec): 180.77 / 44.46
tensorflow: CPU - 32 - ResNet-50 (images/sec): 71.62 / 17.48
tensorflow: CPU - 64 - GoogLeNet (images/sec): 277.24 / 46.28
tensorflow: CPU - 256 - GoogLeNet (images/sec): 501.67 / 46.17
tensorflow: CPU - 256 - ResNet-50 (images/sec): 119.32 / 19.89
tensorflow: CPU - 512 - GoogLeNet (images/sec): 417.25 / 46.92
tensorflow: CPU - 512 - ResNet-50 (images/sec): 122.81 / 20.78
openvkl: vklBenchmark ISPC (Items / Sec): 1398 / 1179
ospray: gravity_spheres_volume/dim_512/ao/real_time (Items Per Second): 32.7557 / 19.1731
ospray: gravity_spheres_volume/dim_512/scivis/real_time (Items Per Second): 31.6753 / 18.0203
ospray: gravity_spheres_volume/dim_512/pathtracer/real_time (Items Per Second): 27.9715 / 19.5463
deepsparse: NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream (items/sec): 73.5393 / 61.3689
deepsparse: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Asynchronous Multi-Stream (items/sec): 1381.5669 / 718.3032
deepsparse: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream (items/sec): 247.9653 / 213.6393
deepsparse: CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream (items/sec): 970.0568 / 870.9541
deepsparse: NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream (items/sec): 624.7370 / 522.1104
deepsparse: CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream (items/sec): 127.0611 / 69.3341
deepsparse: NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream (items/sec): 316.1679 / 260.7671
deepsparse: NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream (items/sec): 73.1459 / 61.1755
cpuminer-opt: scrypt (kH/s): 2993.21 / 2033.15
cpuminer-opt: Skeincoin (kH/s): 1174953 / 937977
cpuminer-opt: Myriad-Groestl (kH/s): 8628.76 / 7149.95
cpuminer-opt: x25x (kH/s): 4977.89 / 3010.37
cpuminer-opt: Blake-2 S (kH/s): 7238650 / 4555580
cpuminer-opt: Garlicoin (kH/s): 53090 / 39473
cpuminer-opt: LBC, LBRY Credits (kH/s): 660660 / 332667
cpuminer-opt: Quad SHA-256, Pyrite (kH/s): 1498937 / 876137
onednn: Recurrent Neural Network Training - bf16bf16bf16 - CPU (ms, lower is better): 1174.75 / 1317.57
openvino: Face Detection FP16 - CPU (ms, lower is better): 1048.37 / 2423.39
openvino: Person Detection FP16 - CPU (ms, lower is better): 2334.29 / 4153.59
openvino: Person Detection FP32 - CPU (ms, lower is better): 2339.81 / 4170.49
openvino: Vehicle Detection FP16 - CPU (ms, lower is better): 44.84 / 53.79
openvino: Face Detection FP16-INT8 - CPU (ms, lower is better): 540.35 / 1094.52
openvino: Vehicle Detection FP16-INT8 - CPU (ms, lower is better): 11.26 / 16.17
openvino: Weld Porosity Detection FP16 - CPU (ms, lower is better): 10.52 / 25.06
openvino: Machine Translation EN To DE FP16 - CPU (ms, lower is better): 110.35 / 254.01
openvino: Weld Porosity Detection FP16-INT8 - CPU (ms, lower is better): 10.82 / 22.47
openvino: Person Vehicle Bike Detection FP16 - CPU (ms, lower is better): 9.63 / 19.28
openvino: Age Gender Recognition Retail 0013 FP16 - CPU (ms, lower is better): 0.99 / 1.91
openvino: Age Gender Recognition Retail 0013 FP16-INT8 - CPU (ms, lower is better): 1.58 / 1.76
deepsparse: NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream (ms, lower is better): 858.4029 / 1023.1690
deepsparse: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Asynchronous Multi-Stream (ms, lower is better): 46.2608 / 88.9093
deepsparse: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream (ms, lower is better): 259.6468 / 298.0628
deepsparse: CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream (ms, lower is better): 65.8909 / 73.3502
deepsparse: NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream (ms, lower is better): 102.2238 / 122.2295
deepsparse: CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream (ms, lower is better): 498.4779 / 906.9414
deepsparse: NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream (ms, lower is better): 201.5964 / 244.1513
deepsparse: NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream (ms, lower is better): 859.7119 / 1025.4806
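For readers who want a single summary number, one way to condense the overview above is a per-test speedup ratio and its geometric mean. The sketch below is a minimal, hypothetical example (not part of the original export) using a few of the "more is better" rows from the table; the row selection is arbitrary and only illustrates the calculation.

from math import prod

# (benchmark, AVX512 On, AVX512 Off) -- a few "more is better" rows copied from the overview above
ROWS = [
    ("miniBUDE OpenMP BM1", 237.027, 187.107),
    ("OpenVINO Weld Porosity Detection FP16", 6073.22, 2551.70),
    ("TensorFlow CPU 256 AlexNet", 1422.36, 106.28),
    ("OSPRay gravity_spheres ao", 32.7557, 19.1731),
]

# Per-test On/Off ratio, then the geometric mean across the selected rows.
ratios = [on / off for _, on, off in ROWS]
geomean = prod(ratios) ** (1 / len(ratios))
for (name, _on, _off), r in zip(ROWS, ratios):
    print(f"{name}: {r:.2f}x")
print(f"Geometric mean speedup over these rows: {geomean:.2f}x")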

CPU Temperature Monitor

Phoronix Test Suite System Monitoring

CPU Temperature Monitor (Celsius), Phoronix Test Suite System Monitoring. AVX512 Off: Min: 20.75 / Avg: 44.22 / Max: 76.13; AVX512 On: Min: 23.25 / Avg: 51.4 / Max: 74.25

CPU Peak Freq (Highest CPU Core Frequency) Monitor

Phoronix Test Suite System Monitoring

CPU Peak Freq (Highest CPU Core Frequency) Monitor (Megahertz), Phoronix Test Suite System Monitoring. AVX512 On: Min: 2250 / Avg: 2918.06 / Max: 3532; AVX512 Off: Min: 2203 / Avg: 2979.69 / Max: 3559

CPU Power Consumption Monitor

Phoronix Test Suite System Monitoring

CPU Power Consumption Monitor (Watts), Phoronix Test Suite System Monitoring. AVX512 Off: Min: 10.15 / Avg: 179.15 / Max: 378.14; AVX512 On: Min: 10.25 / Avg: 231.36 / Max: 398.39

miniBUDE

Implementation: OpenMP - Input Deck: BM1

miniBUDE 20210901 - Billion Interactions/s, More Is Better. AVX512 On: 237.03 (SE +/- 0.09, N = 9); AVX512 Off: 187.11 (SE +/- 0.72, N = 8). 1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm

miniBUDE

Implementation: OpenMP - Input Deck: BM2

miniBUDE 20210901 - Billion Interactions/s, More Is Better. AVX512 On: 238.89 (SE +/- 0.02, N = 3); AVX512 Off: 181.13 (SE +/- 0.62, N = 3). 1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm

miniBUDE

Implementation: OpenMP - Input Deck: BM1

miniBUDE 20210901 - Billion Interactions/s Per Watt, More Is Better. AVX512 On: 1.550; AVX512 Off: 1.026

miniBUDE

Implementation: OpenMP - Input Deck: BM2

miniBUDE 20210901 - Billion Interactions/s Per Watt, More Is Better. AVX512 On: 0.89; AVX512 Off: 0.62
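The per-watt charts above can also be read back into an implied average CPU power draw, under the assumption (not stated in the export) that each per-watt figure is simply the raw score divided by the mean CPU power recorded while that particular test ran. For the BM1 input deck:

\[
P_{\text{avg}} \approx \frac{\text{score}}{\text{score per Watt}}, \qquad
P_{\text{On}} \approx \frac{237.03}{1.550} \approx 153\ \text{W}, \qquad
P_{\text{Off}} \approx \frac{187.11}{1.026} \approx 182\ \text{W}
\]

Under that assumption, the AVX-512 run of BM1 finished roughly 27% faster while drawing less average power on this workload.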

OpenVINO

Model: Face Detection FP16 - Device: CPU

OpenVINO 2022.3 - FPS, More Is Better. AVX512 On: 60.73 (SE +/- 0.06, N = 3); AVX512 Off: 26.18 (SE +/- 0.36, N = 3). 1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

OpenVINO

Model: Person Detection FP16 - Device: CPU

OpenVINO 2022.3 - FPS, More Is Better. AVX512 On: 27.08 (SE +/- 0.30, N = 12); AVX512 Off: 15.06 (SE +/- 0.21, N = 3). 1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

OpenVINO

Model: Person Detection FP32 - Device: CPU

OpenVINO 2022.3 - FPS, More Is Better. AVX512 On: 27.01 (SE +/- 0.18, N = 12); AVX512 Off: 14.99 (SE +/- 0.16, N = 5). 1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

OpenVINO

Model: Vehicle Detection FP16 - Device: CPU

OpenVINO 2022.3 - FPS, More Is Better. AVX512 On: 1430.45 (SE +/- 22.93, N = 15); AVX512 Off: 1190.91 (SE +/- 15.59, N = 14). 1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

OpenVINO

Model: Face Detection FP16-INT8 - Device: CPU

OpenVINO 2022.3 - FPS, More Is Better. AVX512 On: 118.00 (SE +/- 0.02, N = 3); AVX512 Off: 57.95 (SE +/- 0.03, N = 3). 1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

OpenVINO

Model: Vehicle Detection FP16-INT8 - Device: CPU

OpenVINO 2022.3 - FPS, More Is Better. AVX512 On: 5690.34 (SE +/- 89.26, N = 15); AVX512 Off: 3954.90 (SE +/- 2.12, N = 3). 1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

OpenVINO

Model: Weld Porosity Detection FP16 - Device: CPU

OpenVINO 2022.3 - FPS, More Is Better. AVX512 On: 6073.22 (SE +/- 1.32, N = 3); AVX512 Off: 2551.70 (SE +/- 0.56, N = 3). 1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

OpenVINO

Model: Machine Translation EN To DE FP16 - Device: CPU

OpenVINO 2022.3 - FPS, More Is Better. AVX512 On: 580.40 (SE +/- 7.10, N = 15); AVX512 Off: 251.97 (SE +/- 3.15, N = 15). 1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

OpenVINO

Model: Weld Porosity Detection FP16-INT8 - Device: CPU

OpenVINO 2022.3 - FPS, More Is Better. AVX512 On: 11818.33 (SE +/- 1.17, N = 3); AVX512 Off: 5692.99 (SE +/- 1.65, N = 3). 1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

OpenVINO

Model: Person Vehicle Bike Detection FP16 - Device: CPU

OpenVINO 2022.3 - FPS, More Is Better. AVX512 On: 6638.71 (SE +/- 13.51, N = 3); AVX512 Off: 3317.34 (SE +/- 26.07, N = 15). 1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU

OpenVINO 2022.3 - FPS, More Is Better. AVX512 On: 110240.89 (SE +/- 314.35, N = 3); AVX512 Off: 62564.16 (SE +/- 278.13, N = 3). 1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU

OpenVINO 2022.3 - FPS, More Is Better. AVX512 On: 73970.12 (SE +/- 95.74, N = 3); AVX512 Off: 66895.49 (SE +/- 20.10, N = 3). 1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

Embree

Binary: Pathtracer ISPC - Model: Crown

Embree 4.1 - Frames Per Second Per Watt, More Is Better. AVX512 On: 0.672; AVX512 Off: 0.560

Embree

Binary: Pathtracer ISPC - Model: Asian Dragon

Embree 4.1 - Frames Per Second Per Watt, More Is Better. AVX512 On: 0.920; AVX512 Off: 0.715

Embree

Binary: Pathtracer ISPC - Model: Crown

Embree 4.1 - Frames Per Second, More Is Better. AVX512 On: 125.54 (SE +/- 0.09, N = 7; MIN: 122.35 / MAX: 131.95); AVX512 Off: 112.23 (SE +/- 0.11, N = 7; MIN: 109.49 / MAX: 117.02)

Embree

Binary: Pathtracer ISPC - Model: Asian Dragon Obj

Embree 4.1 - Frames Per Second Per Watt, More Is Better. AVX512 On: 1.070; AVX512 Off: 0.842

Embree

Binary: Pathtracer ISPC - Model: Asian Dragon

Embree 4.1 - Frames Per Second, More Is Better. AVX512 On: 157.65 (SE +/- 0.09, N = 8; MIN: 155.33 / MAX: 162.92); AVX512 Off: 132.95 (SE +/- 0.11, N = 7; MIN: 130.4 / MAX: 138.16)

Embree

Binary: Pathtracer ISPC - Model: Asian Dragon Obj

Embree 4.1 - Frames Per Second, More Is Better. AVX512 On: 134.84 (SE +/- 0.16, N = 4; MIN: 132.49 / MAX: 139); AVX512 Off: 115.27 (SE +/- 0.08, N = 4; MIN: 113.48 / MAX: 119.15)

miniBUDE

Implementation: OpenMP - Input Deck: BM1

miniBUDE 20210901 - GFInst/s, More Is Better. AVX512 On: 5925.67 (SE +/- 2.27, N = 9); AVX512 Off: 4677.68 (SE +/- 18.03, N = 8). 1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm

miniBUDE

Implementation: OpenMP - Input Deck: BM2

miniBUDE 20210901 - GFInst/s, More Is Better. AVX512 On: 5972.19 (SE +/- 0.44, N = 3); AVX512 Off: 4528.31 (SE +/- 15.49, N = 3). 1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm

libxsmm

M N K: 128

libxsmm 2-1.17-3645 - GFLOPS/s Per Watt, More Is Better. AVX512 On: 13.07; AVX512 Off: 12.53

libxsmm

M N K: 256

libxsmm 2-1.17-3645 - GFLOPS/s, More Is Better. AVX512 On: 3342.5 (SE +/- 5.78, N = 3); AVX512 Off: 2415.3 (SE +/- 6.53, N = 3). 1. (CXX) g++ options: -dynamic -Bstatic -static-libgcc -lgomp -lm -lrt -ldl -lquadmath -lstdc++ -pthread -fPIC -std=c++14 -O2 -fopenmp-simd -funroll-loops -ftree-vectorize -fdata-sections -ffunction-sections -fvisibility=hidden -msse4.2

libxsmm

M N K: 256

libxsmm 2-1.17-3645 - GFLOPS/s Per Watt, More Is Better. AVX512 On: 15.90; AVX512 Off: 11.40

libxsmm

M N K: 128

libxsmm 2-1.17-3645 - GFLOPS/s, More Is Better. AVX512 On: 2690.7 (SE +/- 12.20, N = 3); AVX512 Off: 2573.3 (SE +/- 2.70, N = 3). 1. (CXX) g++ options: -dynamic -Bstatic -static-libgcc -lgomp -lm -lrt -ldl -lquadmath -lstdc++ -pthread -fPIC -std=c++14 -O2 -fopenmp-simd -funroll-loops -ftree-vectorize -fdata-sections -ffunction-sections -fvisibility=hidden -msse4.2

TensorFlow

Device: CPU - Batch Size: 16 - Model: AlexNet

TensorFlow 2.12 - images/sec Per Watt, More Is Better. AVX512 On: 2.966; AVX512 Off: 0.576

TensorFlow

Device: CPU - Batch Size: 32 - Model: AlexNet

TensorFlow 2.12 - images/sec Per Watt, More Is Better. AVX512 On: 4.376; AVX512 Off: 0.691

TensorFlow

Device: CPU - Batch Size: 64 - Model: AlexNet

TensorFlow 2.12 - images/sec Per Watt, More Is Better. AVX512 On: 5.938; AVX512 Off: 0.778

TensorFlow

Device: CPU - Batch Size: 256 - Model: AlexNet

TensorFlow 2.12 - images/sec Per Watt, More Is Better. AVX512 On: 6.984; AVX512 Off: 0.839

TensorFlow

Device: CPU - Batch Size: 512 - Model: AlexNet

TensorFlow 2.12 - images/sec Per Watt, More Is Better. AVX512 On: 7.097; AVX512 Off: 0.865

TensorFlow

Device: CPU - Batch Size: 16 - Model: GoogLeNet

TensorFlow 2.12 - images/sec Per Watt, More Is Better. AVX512 On: 0.774; AVX512 Off: 0.335

TensorFlow

Device: CPU - Batch Size: 16 - Model: ResNet-50

TensorFlow 2.12 - images/sec Per Watt, More Is Better. AVX512 On: 0.294; AVX512 Off: 0.113

TensorFlow

Device: CPU - Batch Size: 32 - Model: GoogLeNet

TensorFlow 2.12 - images/sec Per Watt, More Is Better. AVX512 On: 1.138; AVX512 Off: 0.360

TensorFlow

Device: CPU - Batch Size: 32 - Model: ResNet-50

TensorFlow 2.12 - images/sec Per Watt, More Is Better. AVX512 On: 0.407; AVX512 Off: 0.124

TensorFlow

Device: CPU - Batch Size: 64 - Model: GoogLeNet

TensorFlow 2.12 - images/sec Per Watt, More Is Better. AVX512 On: 1.476; AVX512 Off: 0.369

TensorFlow

Device: CPU - Batch Size: 64 - Model: ResNet-50

TensorFlow 2.12 - images/sec, More Is Better. AVX512 On: 96.63 (SE +/- 0.06, N = 3); AVX512 Off: 18.45 (SE +/- 0.03, N = 3)

TensorFlow

Device: CPU - Batch Size: 64 - Model: ResNet-50

TensorFlow 2.12 - images/sec Per Watt, More Is Better. AVX512 On: 0.483; AVX512 Off: 0.131

TensorFlow

Device: CPU - Batch Size: 256 - Model: GoogLeNet

TensorFlow 2.12 - images/sec Per Watt, More Is Better. AVX512 On: 2.078; AVX512 Off: 0.362

TensorFlow

Device: CPU - Batch Size: 16 - Model: AlexNet

TensorFlow 2.12 - images/sec, More Is Better. AVX512 On: 342.88 (SE +/- 0.70, N = 6); AVX512 Off: 70.10 (SE +/- 0.29, N = 3)

TensorFlow

Device: CPU - Batch Size: 32 - Model: AlexNet

TensorFlow 2.12 - images/sec, More Is Better. AVX512 On: 562.48 (SE +/- 2.06, N = 6); AVX512 Off: 84.96 (SE +/- 0.16, N = 3)

TensorFlow

Device: CPU - Batch Size: 64 - Model: AlexNet

TensorFlow 2.12 - images/sec, More Is Better. AVX512 On: 857.55 (SE +/- 1.99, N = 5); AVX512 Off: 97.59 (SE +/- 0.08, N = 3)

TensorFlow

Device: CPU - Batch Size: 256 - Model: ResNet-50

TensorFlow 2.12 - images/sec Per Watt, More Is Better. AVX512 On: 0.537; AVX512 Off: 0.143

TensorFlow

Device: CPU - Batch Size: 256 - Model: AlexNet

TensorFlow 2.12 - images/sec, More Is Better. AVX512 On: 1422.36 (SE +/- 6.55, N = 3); AVX512 Off: 106.28 (SE +/- 0.24, N = 3)

TensorFlow

Device: CPU - Batch Size: 512 - Model: AlexNet

TensorFlow 2.12 - images/sec, More Is Better. AVX512 On: 1632.40 (SE +/- 1.53, N = 3); AVX512 Off: 109.88 (SE +/- 0.06, N = 3)

TensorFlow

Device: CPU - Batch Size: 16 - Model: GoogLeNet

TensorFlow 2.12 - images/sec, More Is Better. AVX512 On: 104.77 (SE +/- 1.96, N = 15); AVX512 Off: 39.64 (SE +/- 0.17, N = 3)

TensorFlow

Device: CPU - Batch Size: 16 - Model: ResNet-50

TensorFlow 2.12 - images/sec, More Is Better. AVX512 On: 43.11 (SE +/- 0.03, N = 3); AVX512 Off: 15.91 (SE +/- 0.05, N = 3)

TensorFlow

Device: CPU - Batch Size: 32 - Model: GoogLeNet

TensorFlow 2.12 - images/sec, More Is Better. AVX512 On: 180.77 (SE +/- 3.09, N = 15); AVX512 Off: 44.46 (SE +/- 0.10, N = 3)

TensorFlow

Device: CPU - Batch Size: 32 - Model: ResNet-50

TensorFlow 2.12 - images/sec, More Is Better. AVX512 On: 71.62 (SE +/- 0.23, N = 3); AVX512 Off: 17.48 (SE +/- 0.04, N = 3)

TensorFlow

Device: CPU - Batch Size: 512 - Model: GoogLeNet

TensorFlow 2.12 - images/sec Per Watt, More Is Better. AVX512 On: 1.766; AVX512 Off: 0.369

TensorFlow

Device: CPU - Batch Size: 64 - Model: GoogLeNet

TensorFlow 2.12 - images/sec, More Is Better. AVX512 On: 277.24 (SE +/- 2.62, N = 15); AVX512 Off: 46.28 (SE +/- 0.18, N = 3)

TensorFlow

Device: CPU - Batch Size: 256 - Model: GoogLeNet

TensorFlow 2.12 - images/sec, More Is Better. AVX512 On: 501.67 (SE +/- 4.92, N = 3); AVX512 Off: 46.17 (SE +/- 0.06, N = 3)

TensorFlow

Device: CPU - Batch Size: 512 - Model: ResNet-50

TensorFlow 2.12 - images/sec Per Watt, More Is Better. AVX512 On: 0.542; AVX512 Off: 0.148

TensorFlow

Device: CPU - Batch Size: 256 - Model: ResNet-50

TensorFlow 2.12 - images/sec, More Is Better. AVX512 On: 119.32 (SE +/- 1.01, N = 12); AVX512 Off: 19.89 (SE +/- 0.02, N = 3)

TensorFlow

Device: CPU - Batch Size: 512 - Model: GoogLeNet

TensorFlow 2.12 - images/sec, More Is Better. AVX512 On: 417.25 (SE +/- 5.30, N = 12); AVX512 Off: 46.92 (SE +/- 0.05, N = 3)

TensorFlow

Device: CPU - Batch Size: 512 - Model: ResNet-50

TensorFlow 2.12 - images/sec, More Is Better. AVX512 On: 122.81 (SE +/- 0.99, N = 3); AVX512 Off: 20.78 (SE +/- 0.04, N = 3)

OpenVKL

Benchmark: vklBenchmark ISPC

OpenVKL 1.3.1 - Items / Sec, More Is Better. AVX512 On: 1398 (SE +/- 2.65, N = 3; MIN: 229 / MAX: 11779); AVX512 Off: 1179 (SE +/- 0.33, N = 3; MIN: 178 / MAX: 10473)

OpenVKL

Benchmark: vklBenchmark ISPC

OpenVKL 1.3.1 - Items / Sec Per Watt, More Is Better. AVX512 On: 5.858; AVX512 Off: 4.569

OSPRay

Benchmark: gravity_spheres_volume/dim_512/ao/real_time

OSPRay 2.12 - Items Per Second, More Is Better. AVX512 On: 32.76 (SE +/- 0.01, N = 3); AVX512 Off: 19.17 (SE +/- 0.02, N = 3)

OSPRay

Benchmark: gravity_spheres_volume/dim_512/scivis/real_time

OSPRay 2.12 - Items Per Second, More Is Better. AVX512 On: 31.68 (SE +/- 0.02, N = 3); AVX512 Off: 18.02 (SE +/- 0.00, N = 3)

OSPRay

Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time

OSPRay 2.12 - Items Per Second, More Is Better. AVX512 On: 27.97 (SE +/- 0.01, N = 3); AVX512 Off: 19.55 (SE +/- 0.01, N = 3)

OSPRay

Benchmark: gravity_spheres_volume/dim_512/ao/real_time

OSPRay 2.12 - Items Per Second Per Watt, More Is Better. AVX512 On: 0.118; AVX512 Off: 0.066

OSPRay

Benchmark: gravity_spheres_volume/dim_512/scivis/real_time

OSPRay 2.12 - Items Per Second Per Watt, More Is Better. AVX512 On: 0.114; AVX512 Off: 0.062

OSPRay

Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time

OSPRay 2.12 - Items Per Second Per Watt, More Is Better. AVX512 On: 0.106; AVX512 Off: 0.072

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 - items/sec, More Is Better. AVX512 On: 73.54 (SE +/- 0.14, N = 3); AVX512 Off: 61.37 (SE +/- 0.02, N = 3)

Neural Magic DeepSparse

Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 - items/sec, More Is Better. AVX512 On: 1381.57 (SE +/- 1.58, N = 3); AVX512 Off: 718.30 (SE +/- 5.33, N = 3)

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 - items/sec, More Is Better. AVX512 On: 247.97 (SE +/- 7.49, N = 15); AVX512 Off: 213.64 (SE +/- 0.14, N = 3)

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 - items/sec, More Is Better. AVX512 On: 970.06 (SE +/- 0.42, N = 3); AVX512 Off: 870.95 (SE +/- 0.45, N = 3)

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 - items/sec, More Is Better. AVX512 On: 624.74 (SE +/- 0.52, N = 3); AVX512 Off: 522.11 (SE +/- 0.58, N = 3)

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 - items/sec, More Is Better. AVX512 On: 127.06 (SE +/- 0.01, N = 3); AVX512 Off: 69.33 (SE +/- 0.03, N = 3)

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 - items/sec, More Is Better. AVX512 On: 316.17 (SE +/- 0.78, N = 3); AVX512 Off: 260.77 (SE +/- 0.63, N = 3)

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 - items/sec, More Is Better. AVX512 On: 73.15 (SE +/- 0.19, N = 3); AVX512 Off: 61.18 (SE +/- 0.13, N = 3)

Cpuminer-Opt

Algorithm: x25x

Cpuminer-Opt 3.20.3 - kH/s Per Watt, More Is Better. AVX512 On: 16.85; AVX512 Off: 10.81

Cpuminer-Opt

Algorithm: scrypt

Cpuminer-Opt 3.20.3 - kH/s Per Watt, More Is Better. AVX512 On: 8.913; AVX512 Off: 6.476

Cpuminer-Opt

Algorithm: Blake-2 S

Cpuminer-Opt 3.20.3 - kH/s Per Watt, More Is Better. AVX512 On: 22160.82; AVX512 Off: 14551.24

Cpuminer-Opt

Algorithm: Garlicoin

Cpuminer-Opt 3.20.3 - kH/s Per Watt, More Is Better. AVX512 On: 272.43; AVX512 Off: 202.32

Cpuminer-Opt

Algorithm: Skeincoin

Cpuminer-Opt 3.20.3 - kH/s Per Watt, More Is Better. AVX512 On: 3691.56; AVX512 Off: 2892.87

Cpuminer-Opt

Algorithm: Myriad-Groestl

Cpuminer-Opt 3.20.3 - kH/s Per Watt, More Is Better. AVX512 On: 51.65; AVX512 Off: 44.67

Cpuminer-Opt

Algorithm: LBC, LBRY Credits

Cpuminer-Opt 3.20.3 - kH/s Per Watt, More Is Better. AVX512 On: 2003.50; AVX512 Off: 1052.19

Cpuminer-Opt

Algorithm: scrypt

Cpuminer-Opt 3.20.3 - kH/s, More Is Better. AVX512 On: 2993.21 (SE +/- 1.66, N = 3); AVX512 Off: 2033.15 (SE +/- 0.24, N = 3). 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Cpuminer-Opt

Algorithm: Skeincoin

Cpuminer-Opt 3.20.3 - kH/s, More Is Better. AVX512 On: 1174953 (SE +/- 4577.90, N = 3); AVX512 Off: 937977 (SE +/- 1811.04, N = 3). 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Cpuminer-Opt

Algorithm: Myriad-Groestl

Cpuminer-Opt 3.20.3 - kH/s, More Is Better. AVX512 On: 8628.76 (SE +/- 340.56, N = 15); AVX512 Off: 7149.95 (SE +/- 21.55, N = 3). 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Cpuminer-Opt

Algorithm: Quad SHA-256, Pyrite

Cpuminer-Opt 3.20.3 - kH/s Per Watt, More Is Better. AVX512 On: 4347.58; AVX512 Off: 2825.31

Cpuminer-Opt

Algorithm: x25x

Cpuminer-Opt 3.20.3 - kH/s, More Is Better. AVX512 On: 4977.89 (SE +/- 15.71, N = 3); AVX512 Off: 3010.37 (SE +/- 35.48, N = 4). 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Cpuminer-Opt

Algorithm: Blake-2 S

Cpuminer-Opt 3.20.3 - kH/s, More Is Better. AVX512 On: 7238650 (SE +/- 3854.05, N = 3); AVX512 Off: 4555580 (SE +/- 37514.23, N = 15). 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Cpuminer-Opt

Algorithm: Garlicoin

Cpuminer-Opt 3.20.3 - kH/s, More Is Better. AVX512 On: 53090 (SE +/- 110.15, N = 3); AVX512 Off: 39473 (SE +/- 102.69, N = 3). 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Cpuminer-Opt

Algorithm: LBC, LBRY Credits

Cpuminer-Opt 3.20.3 - kH/s, More Is Better. AVX512 On: 660660 (SE +/- 76.38, N = 3); AVX512 Off: 332667 (SE +/- 141.93, N = 3). 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Cpuminer-Opt

Algorithm: Quad SHA-256, Pyrite

Cpuminer-Opt 3.20.3 - kH/s, More Is Better. AVX512 On: 1498937 (SE +/- 3455.26, N = 3); AVX512 Off: 876137 (SE +/- 2198.34, N = 3). 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

miniBUDE

CPU Peak Freq (Highest CPU Core Frequency) Monitor

MinAvgMaxAVX512 On225025973098AVX512 Off225025293096OpenBenchmarking.orgMegahertz, More Is BetterminiBUDE 20210901CPU Peak Freq (Highest CPU Core Frequency) Monitor8001600240032004000

miniBUDE

CPU Peak Freq (Highest CPU Core Frequency) Monitor

MinAvgMaxAVX512 On225027963014AVX512 Off225026963098OpenBenchmarking.orgMegahertz, More Is BetterminiBUDE 20210901CPU Peak Freq (Highest CPU Core Frequency) Monitor8001600240032004000

libxsmm

CPU Peak Freq (Highest CPU Core Frequency) Monitor

OpenBenchmarking.orgMegahertz, More Is Betterlibxsmm 2-1.17-3645CPU Peak Freq (Highest CPU Core Frequency) MonitorAVX512 OnAVX512 Off5001000150020002500Min: 2250 / Avg: 3087.64 / Max: 3133Min: 2250 / Avg: 3086 / Max: 3117

libxsmm

CPU Peak Freq (Highest CPU Core Frequency) Monitor

MinAvgMaxAVX512 On225030543135AVX512 Off223930383143OpenBenchmarking.orgMegahertz, More Is Betterlibxsmm 2-1.17-3645CPU Peak Freq (Highest CPU Core Frequency) Monitor8001600240032004000

Embree

CPU Peak Freq (Highest CPU Core Frequency) Monitor

MinAvgMaxAVX512 Off225026273097AVX512 On225025113101OpenBenchmarking.orgMegahertz, More Is BetterEmbree 4.1CPU Peak Freq (Highest CPU Core Frequency) Monitor8001600240032004000

Embree

CPU Peak Freq (Highest CPU Core Frequency) Monitor

MinAvgMaxAVX512 Off225025843096AVX512 On225025653098OpenBenchmarking.orgMegahertz, More Is BetterEmbree 4.1CPU Peak Freq (Highest CPU Core Frequency) Monitor8001600240032004000

Embree

CPU Peak Freq (Highest CPU Core Frequency) Monitor

MinAvgMaxAVX512 Off225028553098AVX512 On225028413101OpenBenchmarking.orgMegahertz, More Is BetterEmbree 4.1CPU Peak Freq (Highest CPU Core Frequency) Monitor8001600240032004000

OpenVKL

CPU Peak Freq (Highest CPU Core Frequency) Monitor

OpenBenchmarking.orgMegahertz, More Is BetterOpenVKL 1.3.1CPU Peak Freq (Highest CPU Core Frequency) MonitorAVX512 OnAVX512 Off5001000150020002500Min: 2250 / Avg: 2983.53 / Max: 3118Min: 2250 / Avg: 2958.42 / Max: 3128

OSPRay

CPU Peak Freq (Highest CPU Core Frequency) Monitor

MinAvgMaxAVX512 Off225029263096AVX512 On225028853099OpenBenchmarking.orgMegahertz, More Is BetterOSPRay 2.12CPU Peak Freq (Highest CPU Core Frequency) Monitor8001600240032004000

OSPRay

CPU Peak Freq (Highest CPU Core Frequency) Monitor

MinAvgMaxAVX512 Off225029253101AVX512 On225028753096OpenBenchmarking.orgMegahertz, More Is BetterOSPRay 2.12CPU Peak Freq (Highest CPU Core Frequency) Monitor8001600240032004000

OSPRay

CPU Peak Freq (Highest CPU Core Frequency) Monitor

MinAvgMaxAVX512 Off225030523098AVX512 On225030243099OpenBenchmarking.orgMegahertz, More Is BetterOSPRay 2.12CPU Peak Freq (Highest CPU Core Frequency) Monitor8001600240032004000

oneDNN

CPU Peak Freq (Highest CPU Core Frequency) Monitor

MinAvgMaxAVX512 On225029983097AVX512 Off225028243100OpenBenchmarking.orgMegahertz, More Is BetteroneDNN 3.1CPU Peak Freq (Highest CPU Core Frequency) Monitor8001600240032004000

Cpuminer-Opt

CPU Peak Freq (Highest CPU Core Frequency) Monitor

MinAvgMaxAVX512 Off225030193099AVX512 On225029093099OpenBenchmarking.orgMegahertz, More Is BetterCpuminer-Opt 3.20.3CPU Peak Freq (Highest CPU Core Frequency) Monitor8001600240032004000

Cpuminer-Opt

CPU Peak Freq (Highest CPU Core Frequency) Monitor

MinAvgMaxAVX512 On225029593095AVX512 Off225029523091OpenBenchmarking.orgMegahertz, More Is BetterCpuminer-Opt 3.20.3CPU Peak Freq (Highest CPU Core Frequency) Monitor8001600240032004000

Cpuminer-Opt

CPU Peak Freq (Highest CPU Core Frequency) Monitor

MinAvgMaxAVX512 On225029253002AVX512 Off225027793096OpenBenchmarking.orgMegahertz, More Is BetterCpuminer-Opt 3.20.3CPU Peak Freq (Highest CPU Core Frequency) Monitor8001600240032004000

Cpuminer-Opt

CPU Peak Freq (Highest CPU Core Frequency) Monitor

MinAvgMaxAVX512 Off225030653102AVX512 On225030583102OpenBenchmarking.orgMegahertz, More Is BetterCpuminer-Opt 3.20.3CPU Peak Freq (Highest CPU Core Frequency) Monitor8001600240032004000

Cpuminer-Opt

CPU Peak Freq (Highest CPU Core Frequency) Monitor

MinAvgMaxAVX512 Off225028993096AVX512 On225028043143OpenBenchmarking.orgMegahertz, More Is BetterCpuminer-Opt 3.20.3CPU Peak Freq (Highest CPU Core Frequency) Monitor8001600240032004000

Cpuminer-Opt

CPU Peak Freq (Highest CPU Core Frequency) Monitor

MinAvgMaxAVX512 Off225030463099AVX512 On225030213097OpenBenchmarking.orgMegahertz, More Is BetterCpuminer-Opt 3.20.3CPU Peak Freq (Highest CPU Core Frequency) Monitor8001600240032004000

Cpuminer-Opt

CPU Peak Freq (Highest CPU Core Frequency) Monitor

MinAvgMaxAVX512 On225028863096AVX512 Off225027463096OpenBenchmarking.orgMegahertz, More Is BetterCpuminer-Opt 3.20.3CPU Peak Freq (Highest CPU Core Frequency) Monitor8001600240032004000

Cpuminer-Opt

CPU Peak Freq (Highest CPU Core Frequency) Monitor

MinAvgMaxAVX512 On225029663080AVX512 Off225029193027OpenBenchmarking.orgMegahertz, More Is BetterCpuminer-Opt 3.20.3CPU Peak Freq (Highest CPU Core Frequency) Monitor8001600240032004000

TensorFlow

CPU Peak Freq (Highest CPU Core Frequency) Monitor

MinAvgMaxAVX512 Off225029923136AVX512 On225028013254OpenBenchmarking.orgMegahertz, More Is BetterTensorFlow 2.12CPU Peak Freq (Highest CPU Core Frequency) Monitor8001600240032004000

TensorFlow

CPU Peak Freq (Highest CPU Core Frequency) Monitor

MinAvgMaxAVX512 Off225030393146AVX512 On225028363101OpenBenchmarking.orgMegahertz, More Is BetterTensorFlow 2.12CPU Peak Freq (Highest CPU Core Frequency) Monitor8001600240032004000

TensorFlow

CPU Peak Freq (Highest CPU Core Frequency) Monitor

MinAvgMaxAVX512 Off225030573377AVX512 On225027513111OpenBenchmarking.orgMegahertz, More Is BetterTensorFlow 2.12CPU Peak Freq (Highest CPU Core Frequency) Monitor8001600240032004000

TensorFlow

CPU Peak Freq (Highest CPU Core Frequency) Monitor

MinAvgMaxAVX512 Off225030733166AVX512 On225027423102OpenBenchmarking.orgMegahertz, More Is BetterTensorFlow 2.12CPU Peak Freq (Highest CPU Core Frequency) Monitor8001600240032004000

TensorFlow

CPU Peak Freq (Highest CPU Core Frequency) Monitor

OpenBenchmarking.orgMegahertz, More Is BetterTensorFlow 2.12CPU Peak Freq (Highest CPU Core Frequency) MonitorAVX512 OffAVX512 On6001200180024003000Min: 2203 / Avg: 3078.92 / Max: 3359Min: 2250 / Avg: 2753.56 / Max: 3095

TensorFlow

CPU Peak Freq (Highest CPU Core Frequency) Monitor

MinAvgMaxAVX512 Off225030323208AVX512 On225029773320OpenBenchmarking.orgMegahertz, More Is BetterTensorFlow 2.12CPU Peak Freq (Highest CPU Core Frequency) Monitor8001600240032004000

TensorFlow

CPU Peak Freq (Highest CPU Core Frequency) Monitor

MinAvgMaxAVX512 Off225030753217AVX512 On225030413348OpenBenchmarking.orgMegahertz, More Is BetterTensorFlow 2.12CPU Peak Freq (Highest CPU Core Frequency) Monitor8001600240032004000

TensorFlow

CPU Peak Freq (Highest CPU Core Frequency) Monitor

MinAvgMaxAVX512 Off225030643197AVX512 On225029923472OpenBenchmarking.orgMegahertz, More Is BetterTensorFlow 2.12CPU Peak Freq (Highest CPU Core Frequency) Monitor8001600240032004000

TensorFlow

CPU Peak Freq (Highest CPU Core Frequency) Monitor

MinAvgMaxAVX512 Off225030843221AVX512 On225030663438OpenBenchmarking.orgMegahertz, More Is BetterTensorFlow 2.12CPU Peak Freq (Highest CPU Core Frequency) Monitor8001600240032004000

TensorFlow

CPU Peak Freq (Highest CPU Core Frequency) Monitor

MinAvgMaxAVX512 Off225030753309AVX512 On225029953532OpenBenchmarking.orgMegahertz, More Is BetterTensorFlow 2.12CPU Peak Freq (Highest CPU Core Frequency) Monitor10002000300040005000

TensorFlow

CPU Peak Freq (Highest CPU Core Frequency) Monitor

MinAvgMaxAVX512 Off225030903249AVX512 On225030213283OpenBenchmarking.orgMegahertz, More Is BetterTensorFlow 2.12CPU Peak Freq (Highest CPU Core Frequency) Monitor8001600240032004000

TensorFlow

CPU Peak Freq (Highest CPU Core Frequency) Monitor

OpenBenchmarking.orgMegahertz, More Is BetterTensorFlow 2.12CPU Peak Freq (Highest CPU Core Frequency) MonitorAVX512 OffAVX512 On6001200180024003000Min: 2250 / Avg: 3085.78 / Max: 3417Min: 2250 / Avg: 2916.06 / Max: 3157

TensorFlow

CPU Peak Freq (Highest CPU Core Frequency) Monitor

OpenBenchmarking.orgMegahertz, More Is BetterTensorFlow 2.12CPU Peak Freq (Highest CPU Core Frequency) MonitorAVX512 OffAVX512 On6001200180024003000Min: 2250 / Avg: 3085.27 / Max: 3559Min: 2250 / Avg: 2956.33 / Max: 3308

TensorFlow

CPU Peak Freq (Highest CPU Core Frequency) Monitor

OpenBenchmarking.orgMegahertz, More Is BetterTensorFlow 2.12CPU Peak Freq (Highest CPU Core Frequency) MonitorAVX512 OffAVX512 On6001200180024003000Min: 2250 / Avg: 3083.42 / Max: 3336Min: 2250 / Avg: 2973.8 / Max: 3207

TensorFlow

CPU Peak Freq (Highest CPU Core Frequency) Monitor

OpenBenchmarking.orgMegahertz, More Is BetterTensorFlow 2.12CPU Peak Freq (Highest CPU Core Frequency) MonitorAVX512 OffAVX512 On6001200180024003000Min: 2243 / Avg: 3085.93 / Max: 3295Min: 2250 / Avg: 2958.12 / Max: 3370

Neural Magic DeepSparse

CPU Peak Freq (Highest CPU Core Frequency) Monitor

MinAvgMaxAVX512 On225030113136AVX512 Off225029333136OpenBenchmarking.orgMegahertz, More Is BetterNeural Magic DeepSparse 1.5CPU Peak Freq (Highest CPU Core Frequency) Monitor8001600240032004000

Neural Magic DeepSparse

CPU Peak Freq (Highest CPU Core Frequency) Monitor

MinAvgMaxAVX512 On225030263124AVX512 Off225030153127OpenBenchmarking.orgMegahertz, More Is BetterNeural Magic DeepSparse 1.5CPU Peak Freq (Highest CPU Core Frequency) Monitor8001600240032004000

Neural Magic DeepSparse

CPU Peak Freq (Highest CPU Core Frequency) Monitor

MinAvgMaxAVX512 Off225030233152AVX512 On225030023416OpenBenchmarking.orgMegahertz, More Is BetterNeural Magic DeepSparse 1.5CPU Peak Freq (Highest CPU Core Frequency) Monitor8001600240032004000

Neural Magic DeepSparse

CPU Peak Freq (Highest CPU Core Frequency) Monitor

MinAvgMaxAVX512 On225030123134AVX512 Off225029623101OpenBenchmarking.orgMegahertz, More Is BetterNeural Magic DeepSparse 1.5CPU Peak Freq (Highest CPU Core Frequency) Monitor8001600240032004000

Neural Magic DeepSparse

CPU Peak Freq (Highest CPU Core Frequency) Monitor

MinAvgMaxAVX512 On225030343099AVX512 Off225029313101OpenBenchmarking.orgMegahertz, More Is BetterNeural Magic DeepSparse 1.5CPU Peak Freq (Highest CPU Core Frequency) Monitor8001600240032004000

Neural Magic DeepSparse

CPU Peak Freq (Highest CPU Core Frequency) Monitor

MinAvgMaxAVX512 Off225030483116AVX512 On225030063115OpenBenchmarking.orgMegahertz, More Is BetterNeural Magic DeepSparse 1.5CPU Peak Freq (Highest CPU Core Frequency) Monitor8001600240032004000

Neural Magic DeepSparse

CPU Peak Freq (Highest CPU Core Frequency) Monitor

MinAvgMaxAVX512 On225030133127AVX512 Off225029403122OpenBenchmarking.orgMegahertz, More Is BetterNeural Magic DeepSparse 1.5CPU Peak Freq (Highest CPU Core Frequency) Monitor8001600240032004000

Neural Magic DeepSparse

CPU Peak Freq (Highest CPU Core Frequency) Monitor

MinAvgMaxAVX512 On225030063135AVX512 Off225029523146OpenBenchmarking.orgMegahertz, More Is BetterNeural Magic DeepSparse 1.5CPU Peak Freq (Highest CPU Core Frequency) Monitor8001600240032004000

OpenVINO

CPU Peak Freq (Highest CPU Core Frequency) Monitor

MinAvgMaxAVX512 On225027083101AVX512 Off225024913096OpenBenchmarking.orgMegahertz, More Is BetterOpenVINO 2022.3CPU Peak Freq (Highest CPU Core Frequency) Monitor8001600240032004000

OpenVINO

CPU Peak Freq (Highest CPU Core Frequency) Monitor

MinAvgMaxAVX512 On225029313185AVX512 Off225026493132OpenBenchmarking.orgMegahertz, More Is BetterOpenVINO 2022.3CPU Peak Freq (Highest CPU Core Frequency) Monitor8001600240032004000

OpenVINO

CPU Peak Freq (Highest CPU Core Frequency) Monitor

MinAvgMaxAVX512 On225029363426AVX512 Off225026453103OpenBenchmarking.orgMegahertz, More Is BetterOpenVINO 2022.3CPU Peak Freq (Highest CPU Core Frequency) Monitor8001600240032004000

OpenVINO

CPU Peak Freq (Highest CPU Core Frequency) Monitor

MinAvgMaxAVX512 On225030293257AVX512 Off224129183105OpenBenchmarking.orgMegahertz, More Is BetterOpenVINO 2022.3CPU Peak Freq (Highest CPU Core Frequency) Monitor8001600240032004000

OpenVINO

CPU Peak Freq (Highest CPU Core Frequency) Monitor

MinAvgMaxAVX512 On225026963116AVX512 Off225026883124OpenBenchmarking.orgMegahertz, More Is BetterOpenVINO 2022.3CPU Peak Freq (Highest CPU Core Frequency) Monitor8001600240032004000

OpenVINO

CPU Peak Freq (Highest CPU Core Frequency) Monitor

MinAvgMaxAVX512 On225027053121AVX512 Off225026593098OpenBenchmarking.orgMegahertz, More Is BetterOpenVINO 2022.3CPU Peak Freq (Highest CPU Core Frequency) Monitor8001600240032004000

OpenVINO

CPU Peak Freq (Highest CPU Core Frequency) Monitor

MinAvgMaxAVX512 On225027183095AVX512 Off225023733112OpenBenchmarking.orgMegahertz, More Is BetterOpenVINO 2022.3CPU Peak Freq (Highest CPU Core Frequency) Monitor8001600240032004000

OpenVINO

CPU Peak Freq (Highest CPU Core Frequency) Monitor

MinAvgMaxAVX512 On225026613111AVX512 Off221425373101OpenBenchmarking.orgMegahertz, More Is BetterOpenVINO 2022.3CPU Peak Freq (Highest CPU Core Frequency) Monitor8001600240032004000

OpenVINO

CPU Peak Freq (Highest CPU Core Frequency) Monitor

MinAvgMaxAVX512 Off225026713102AVX512 On225026193137OpenBenchmarking.orgMegahertz, More Is BetterOpenVINO 2022.3CPU Peak Freq (Highest CPU Core Frequency) Monitor8001600240032004000

OpenVINO

CPU Peak Freq (Highest CPU Core Frequency) Monitor

MinAvgMaxAVX512 On225026793096AVX512 Off225024833131OpenBenchmarking.orgMegahertz, More Is BetterOpenVINO 2022.3CPU Peak Freq (Highest CPU Core Frequency) Monitor8001600240032004000

OpenVINO

CPU Peak Freq (Highest CPU Core Frequency) Monitor

MinAvgMaxAVX512 On225028923097AVX512 Off225025903101OpenBenchmarking.orgMegahertz, More Is BetterOpenVINO 2022.3CPU Peak Freq (Highest CPU Core Frequency) Monitor8001600240032004000

OpenVINO

CPU Peak Freq (Highest CPU Core Frequency) Monitor

MinAvgMaxAVX512 On225027613119AVX512 Off225026263099OpenBenchmarking.orgMegahertz, More Is BetterOpenVINO 2022.3CPU Peak Freq (Highest CPU Core Frequency) Monitor8001600240032004000

miniBUDE

CPU Temperature Monitor

MinAvgMaxAVX512 On23.336.350.8AVX512 Off20.837.052.8OpenBenchmarking.orgCelsius, Fewer Is BetterminiBUDE 20210901CPU Temperature Monitor1530456075

miniBUDE

CPU Temperature Monitor

MinAvgMaxAVX512 On29.353.462.3AVX512 Off27.955.164.1OpenBenchmarking.orgCelsius, Fewer Is BetterminiBUDE 20210901CPU Temperature Monitor20406080100

libxsmm

CPU Temperature Monitor

OpenBenchmarking.orgCelsius, Fewer Is Betterlibxsmm 2-1.17-3645CPU Temperature MonitorAVX512 OffAVX512 On1122334455Min: 35.5 / Avg: 49.14 / Max: 50.88Min: 35.75 / Avg: 50.34 / Max: 53.25

libxsmm

CPU Temperature Monitor

MinAvgMaxAVX512 Off33.549.453.3AVX512 On34.449.754.3OpenBenchmarking.orgCelsius, Fewer Is Betterlibxsmm 2-1.17-3645CPU Temperature Monitor1530456075

Embree

CPU Temperature Monitor

MinAvgMaxAVX512 On33.543.251.3AVX512 Off33.344.252.8OpenBenchmarking.orgCelsius, Fewer Is BetterEmbree 4.1CPU Temperature Monitor1530456075

Embree

CPU Temperature Monitor

MinAvgMaxAVX512 On33.541.448.9AVX512 Off33.542.250.0OpenBenchmarking.orgCelsius, Fewer Is BetterEmbree 4.1CPU Temperature Monitor1428425670

Embree

CPU Temperature Monitor

MinAvgMaxAVX512 Off32.536.747.0AVX512 On32.539.347.0OpenBenchmarking.orgCelsius, Fewer Is BetterEmbree 4.1CPU Temperature Monitor1428425670

OpenVKL

CPU Temperature Monitor

OpenBenchmarking.orgCelsius, Fewer Is BetterOpenVKL 1.3.1CPU Temperature MonitorAVX512 OnAVX512 Off1326395265Min: 30.63 / Avg: 53.11 / Max: 62.25Min: 30.38 / Avg: 54.49 / Max: 64.75

OSPRay

CPU Temperature Monitor

MinAvgMaxAVX512 On33.555.163.3AVX512 Off32.057.565.3OpenBenchmarking.orgCelsius, Fewer Is BetterOSPRay 2.12CPU Temperature Monitor20406080100

OSPRay

CPU Temperature Monitor

MinAvgMaxAVX512 On39.457.264.3AVX512 Off39.459.766.1OpenBenchmarking.orgCelsius, Fewer Is BetterOSPRay 2.12CPU Temperature Monitor20406080100

OSPRay

CPU Temperature Monitor

MinAvgMaxAVX512 On40.055.661.3AVX512 Off40.057.062.8OpenBenchmarking.orgCelsius, Fewer Is BetterOSPRay 2.12CPU Temperature Monitor20406080100

oneDNN

CPU Temperature Monitor

MinAvgMaxAVX512 On30.439.245.3AVX512 Off24.539.845.1OpenBenchmarking.orgCelsius, Fewer Is BetteroneDNN 3.1CPU Temperature Monitor1224364860

Cpuminer-Opt

CPU Temperature Monitor

MinAvgMaxAVX512 Off37.655.259.5AVX512 On39.458.764.3OpenBenchmarking.orgCelsius, Fewer Is BetterCpuminer-Opt 3.20.3CPU Temperature Monitor20406080100

Cpuminer-Opt

CPU Temperature Monitor

MinAvgMaxAVX512 Off38.163.670.0AVX512 On39.566.473.3OpenBenchmarking.orgCelsius, Fewer Is BetterCpuminer-Opt 3.20.3CPU Temperature Monitor20406080100

Cpuminer-Opt

CPU Temperature Monitor

MinAvgMaxAVX512 Off31.164.070.0AVX512 On33.964.072.8OpenBenchmarking.orgCelsius, Fewer Is BetterCpuminer-Opt 3.20.3CPU Temperature Monitor20406080100

Cpuminer-Opt

CPU Temperature Monitor

MinAvgMaxAVX512 Off36.843.746.6AVX512 On38.145.447.9OpenBenchmarking.orgCelsius, Fewer Is BetterCpuminer-Opt 3.20.3CPU Temperature Monitor1428425670

Cpuminer-Opt

CPU Temperature Monitor

MinAvgMaxAVX512 Off33.062.971.9AVX512 On34.463.471.9OpenBenchmarking.orgCelsius, Fewer Is BetterCpuminer-Opt 3.20.3CPU Temperature Monitor20406080100

Cpuminer-Opt

CPU Temperature Monitor

MinAvgMaxAVX512 Off33.940.944.6AVX512 On34.942.247.0OpenBenchmarking.orgCelsius, Fewer Is BetterCpuminer-Opt 3.20.3CPU Temperature Monitor1428425670

Cpuminer-Opt

CPU Temperature Monitor

MinAvgMaxAVX512 Off30.659.571.9AVX512 On32.063.272.8OpenBenchmarking.orgCelsius, Fewer Is BetterCpuminer-Opt 3.20.3CPU Temperature Monitor20406080100

Cpuminer-Opt

CPU Temperature Monitor

MinAvgMaxAVX512 Off38.662.068.1AVX512 On40.568.774.3OpenBenchmarking.orgCelsius, Fewer Is BetterCpuminer-Opt 3.20.3CPU Temperature Monitor20406080100

TensorFlow

CPU Temperature Monitor

MinAvgMaxAVX512 Off33.940.349.4AVX512 On34.941.149.4OpenBenchmarking.orgCelsius, Fewer Is BetterTensorFlow 2.12CPU Temperature Monitor1428425670

TensorFlow

CPU Temperature Monitor

MinAvgMaxAVX512 Off30.537.145.6AVX512 On32.037.845.3OpenBenchmarking.orgCelsius, Fewer Is BetterTensorFlow 2.12CPU Temperature Monitor1224364860

TensorFlow

CPU Temperature Monitor

MinAvgMaxAVX512 Off30.436.546.4AVX512 On30.638.345.5OpenBenchmarking.orgCelsius, Fewer Is BetterTensorFlow 2.12CPU Temperature Monitor1428425670

TensorFlow

CPU Temperature Monitor

MinAvgMaxAVX512 Off28.836.047.9AVX512 On30.944.852.3OpenBenchmarking.orgCelsius, Fewer Is BetterTensorFlow 2.12CPU Temperature Monitor1530456075

TensorFlow

CPU Temperature Monitor

OpenBenchmarking.orgCelsius, Fewer Is BetterTensorFlow 2.12CPU Temperature MonitorAVX512 OffAVX512 On1122334455Min: 28.75 / Avg: 36.18 / Max: 47.38Min: 33.88 / Avg: 49.84 / Max: 55.75

TensorFlow

CPU Temperature Monitor

MinAvgMaxAVX512 Off29.135.747.9AVX512 On33.038.545.8OpenBenchmarking.orgCelsius, Fewer Is BetterTensorFlow 2.12CPU Temperature Monitor1428425670

TensorFlow

CPU Temperature Monitor

MinAvgMaxAVX512 Off28.837.248.8AVX512 On30.438.144.6OpenBenchmarking.orgCelsius, Fewer Is BetterTensorFlow 2.12CPU Temperature Monitor1428425670

TensorFlow

CPU Temperature Monitor

MinAvgMaxAVX512 Off29.335.347.1AVX512 On30.840.349.9OpenBenchmarking.orgCelsius, Fewer Is BetterTensorFlow 2.12CPU Temperature Monitor1428425670

TensorFlow

CPU Temperature Monitor

MinAvgMaxAVX512 Off28.337.551.6AVX512 On31.341.848.6OpenBenchmarking.orgCelsius, Fewer Is BetterTensorFlow 2.12CPU Temperature Monitor1530456075

TensorFlow

CPU Temperature Monitor

MinAvgMaxAVX512 Off29.935.949.0AVX512 On32.544.352.8OpenBenchmarking.orgCelsius, Fewer Is BetterTensorFlow 2.12CPU Temperature Monitor1530456075

TensorFlow

CPU Temperature Monitor

MinAvgMaxAVX512 Off28.838.151.6AVX512 On33.545.353.3OpenBenchmarking.orgCelsius, Fewer Is BetterTensorFlow 2.12CPU Temperature Monitor1530456075

TensorFlow

CPU Temperature Monitor

OpenBenchmarking.orgCelsius, Fewer Is BetterTensorFlow 2.12CPU Temperature MonitorAVX512 OffAVX512 On1224364860Min: 29.75 / Avg: 36.4 / Max: 52.13Min: 34.38 / Avg: 50.91 / Max: 58.88

TensorFlow

CPU Temperature Monitor

OpenBenchmarking.orgCelsius, Fewer Is BetterTensorFlow 2.12CPU Temperature MonitorAVX512 OffAVX512 On1122334455Min: 29 / Avg: 38.33 / Max: 53.13Min: 37.63 / Avg: 48.39 / Max: 57.63

TensorFlow

CPU Temperature Monitor

OpenBenchmarking.orgCelsius, Fewer Is BetterTensorFlow 2.12CPU Temperature MonitorAVX512 OffAVX512 On1224364860Min: 30.13 / Avg: 36.83 / Max: 51.13Min: 34.63 / Avg: 50.93 / Max: 58.38

TensorFlow

CPU Temperature Monitor

OpenBenchmarking.orgCelsius, Fewer Is BetterTensorFlow 2.12CPU Temperature MonitorAVX512 OffAVX512 On1122334455Min: 29.38 / Avg: 39.11 / Max: 54.13Min: 35.38 / Avg: 48.79 / Max: 57.5

Neural Magic DeepSparse

CPU Temperature Monitor

MinAvgMaxAVX512 Off30.654.570.5AVX512 On34.657.571.4OpenBenchmarking.orgCelsius, Fewer Is BetterNeural Magic DeepSparse 1.5CPU Temperature Monitor20406080100

Neural Magic DeepSparse

CPU Temperature Monitor

MinAvgMaxAVX512 Off38.153.864.3AVX512 On39.458.167.6OpenBenchmarking.orgCelsius, Fewer Is BetterNeural Magic DeepSparse 1.5CPU Temperature Monitor20406080100

Neural Magic DeepSparse

CPU Temperature Monitor

MinAvgMaxAVX512 Off38.152.965.8AVX512 On38.653.166.5OpenBenchmarking.orgCelsius, Fewer Is BetterNeural Magic DeepSparse 1.5CPU Temperature Monitor20406080100

Neural Magic DeepSparse

CPU Temperature Monitor

MinAvgMaxAVX512 On39.862.870.4AVX512 Off40.065.272.9OpenBenchmarking.orgCelsius, Fewer Is BetterNeural Magic DeepSparse 1.5CPU Temperature Monitor20406080100

Neural Magic DeepSparse

CPU Temperature Monitor

MinAvgMaxAVX512 On39.160.467.1AVX512 Off41.060.768.5OpenBenchmarking.orgCelsius, Fewer Is BetterNeural Magic DeepSparse 1.5CPU Temperature Monitor20406080100

Neural Magic DeepSparse

CPU Temperature Monitor

MinAvgMaxAVX512 On38.153.166.1AVX512 Off40.054.069.1OpenBenchmarking.orgCelsius, Fewer Is BetterNeural Magic DeepSparse 1.5CPU Temperature Monitor20406080100

Neural Magic DeepSparse

CPU Temperature Monitor

MinAvgMaxAVX512 On36.057.770.9AVX512 Off38.458.370.5OpenBenchmarking.orgCelsius, Fewer Is BetterNeural Magic DeepSparse 1.5CPU Temperature Monitor20406080100

Neural Magic DeepSparse

CPU Temperature Monitor

MinAvgMaxAVX512 On37.357.670.9AVX512 Off38.958.370.9OpenBenchmarking.orgCelsius, Fewer Is BetterNeural Magic DeepSparse 1.5CPU Temperature Monitor20406080100

OpenVINO

CPU Temperature Monitor

MinAvgMaxAVX512 Off39.859.368.1AVX512 On38.663.170.0OpenBenchmarking.orgCelsius, Fewer Is BetterOpenVINO 2022.3CPU Temperature Monitor20406080100

OpenVINO

CPU Temperature Monitor

MinAvgMaxAVX512 Off37.657.467.6AVX512 On41.361.273.4OpenBenchmarking.orgCelsius, Fewer Is BetterOpenVINO 2022.3CPU Temperature Monitor20406080100

OpenVINO

CPU Temperature Monitor

MinAvgMaxAVX512 Off36.558.072.4AVX512 On37.660.672.9OpenBenchmarking.orgCelsius, Fewer Is BetterOpenVINO 2022.3CPU Temperature Monitor20406080100

OpenVINO

CPU Temperature Monitor

MinAvgMaxAVX512 On36.851.757.5AVX512 Off36.853.962.3OpenBenchmarking.orgCelsius, Fewer Is BetterOpenVINO 2022.3CPU Temperature Monitor20406080100

OpenVINO

CPU Temperature Monitor

MinAvgMaxAVX512 On34.459.570.5AVX512 Off35.865.576.1OpenBenchmarking.orgCelsius, Fewer Is BetterOpenVINO 2022.3CPU Temperature Monitor20406080100

OpenVINO

CPU Temperature Monitor

MinAvgMaxAVX512 On40.059.266.6AVX512 Off42.167.772.8OpenBenchmarking.orgCelsius, Fewer Is BetterOpenVINO 2022.3CPU Temperature Monitor20406080100

OpenVINO

CPU Temperature Monitor

OpenVINO 2022.3 - Celsius, Fewer Is Better:
AVX512 Off: Min: 41.9 / Avg: 61.1 / Max: 64.3
AVX512 On: Min: 36.8 / Avg: 64.2 / Max: 69.9

OpenVINO

CPU Temperature Monitor

OpenVINO 2022.3 - Celsius, Fewer Is Better:
AVX512 Off: Min: 41.0 / Avg: 56.7 / Max: 64.6
AVX512 On: Min: 40.5 / Avg: 57.3 / Max: 65.8

OpenVINO

CPU Temperature Monitor

OpenVINO 2022.3 - Celsius, Fewer Is Better:
AVX512 On: Min: 36.8 / Avg: 61.3 / Max: 66.1
AVX512 Off: Min: 36.8 / Avg: 67.4 / Max: 73.3

OpenVINO

CPU Temperature Monitor

OpenVINO 2022.3 - Celsius, Fewer Is Better:
AVX512 Off: Min: 41.0 / Avg: 57.1 / Max: 64.8
AVX512 On: Min: 40.5 / Avg: 60.6 / Max: 66.6

OpenVINO

CPU Temperature Monitor

OpenVINO 2022.3 - Celsius, Fewer Is Better:
AVX512 Off: Min: 37.3 / Avg: 61.8 / Max: 68.0
AVX512 On: Min: 41.0 / Avg: 64.6 / Max: 70.0

OpenVINO

CPU Temperature Monitor

OpenVINO 2022.3 - Celsius, Fewer Is Better:
AVX512 On: Min: 41.4 / Avg: 63.4 / Max: 68.1
AVX512 Off: Min: 41.0 / Avg: 63.9 / Max: 69.0

oneDNN

Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.1 - ms, Fewer Is Better:
AVX512 On: 1174.75 (SE +/- 12.60, N = 4, MIN: 1143.88)
AVX512 Off: 1317.57 (SE +/- 1.80, N = 3, MIN: 1299.38)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
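
oneDNN selects its compute kernels at run time based on the CPU's instruction set, which is why the bf16 RNN training pass above slows once AVX-512 is disabled. As a purely software-side alternative to the platform-level toggle used for these results, oneDNN's documented ONEDNN_MAX_CPU_ISA environment variable can cap the ISA it is allowed to dispatch to; the Python sketch below is only an illustration of that approach, not how these numbers were produced.

# Illustrative sketch: cap oneDNN's ISA dispatch before any oneDNN-backed
# framework is imported. ONEDNN_MAX_CPU_ISA and ONEDNN_VERBOSE are documented
# oneDNN controls; the results on this page instead toggled AVX-512 at the
# platform level, so treat this only as an approximation.
import os

os.environ["ONEDNN_MAX_CPU_ISA"] = "AVX2"  # behave as if AVX-512 were unavailable
os.environ["ONEDNN_VERBOSE"] = "1"         # log which kernels/ISA actually run

# Import the oneDNN-backed framework only after setting these variables,
# e.g. a PyTorch or TensorFlow build that links against oneDNN.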

OpenVINO

Model: Face Detection FP16 - Device: CPU

OpenVINO 2022.3 - ms, Fewer Is Better:
AVX512 On: 1048.37 (SE +/- 0.42, N = 3, MIN: 508.1 / MAX: 1159.4)
AVX512 Off: 2423.39 (SE +/- 26.84, N = 3, MIN: 1130.42 / MAX: 2937.22)
1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

OpenVINO

Model: Person Detection FP16 - Device: CPU

OpenVINO 2022.3 - ms, Fewer Is Better:
AVX512 On: 2334.29 (SE +/- 22.79, N = 12, MIN: 1017.83 / MAX: 3101.05)
AVX512 Off: 4153.59 (SE +/- 59.43, N = 3, MIN: 1975.28 / MAX: 5152.66)
1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

OpenVINO

Model: Person Detection FP32 - Device: CPU

OpenVINO 2022.3 - ms, Fewer Is Better:
AVX512 On: 2339.81 (SE +/- 14.83, N = 12, MIN: 1045.35 / MAX: 3232.69)
AVX512 Off: 4170.49 (SE +/- 41.79, N = 5, MIN: 1759.65 / MAX: 4969.5)
1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

OpenVINO

Model: Vehicle Detection FP16 - Device: CPU

OpenVINO 2022.3 - ms, Fewer Is Better:
AVX512 On: 44.84 (SE +/- 0.60, N = 15, MIN: 8.1 / MAX: 137.01)
AVX512 Off: 53.79 (SE +/- 0.62, N = 14, MIN: 13.85 / MAX: 136.83)
1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

OpenVINO

Model: Face Detection FP16-INT8 - Device: CPU

OpenVINO 2022.3 - ms, Fewer Is Better:
AVX512 On: 540.35 (SE +/- 0.07, N = 3, MIN: 257.81 / MAX: 586.56)
AVX512 Off: 1094.52 (SE +/- 0.34, N = 3, MIN: 509.48 / MAX: 1179.63)
1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

OpenVINO

Model: Vehicle Detection FP16-INT8 - Device: CPU

OpenVINO 2022.3 - ms, Fewer Is Better:
AVX512 On: 11.26 (SE +/- 0.15, N = 15, MIN: 4.37 / MAX: 45.06)
AVX512 Off: 16.17 (SE +/- 0.01, N = 3, MIN: 8.44 / MAX: 56.88)
1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

OpenVINO

Model: Weld Porosity Detection FP16 - Device: CPU

OpenVINO 2022.3 - ms, Fewer Is Better:
AVX512 On: 10.52 (SE +/- 0.00, N = 3, MIN: 5.11 / MAX: 34.37)
AVX512 Off: 25.06 (SE +/- 0.01, N = 3, MIN: 13.08 / MAX: 57.02)
1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

OpenVINO

Model: Machine Translation EN To DE FP16 - Device: CPU

OpenVINO 2022.3 - ms, Fewer Is Better:
AVX512 On: 110.35 (SE +/- 1.22, N = 15, MIN: 49.7 / MAX: 183.59)
AVX512 Off: 254.01 (SE +/- 2.83, N = 15, MIN: 116.71 / MAX: 398.88)
1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

OpenVINO

Model: Weld Porosity Detection FP16-INT8 - Device: CPU

OpenVINO 2022.3 - ms, Fewer Is Better:
AVX512 On: 10.82 (SE +/- 0.00, N = 3, MIN: 5 / MAX: 31.44)
AVX512 Off: 22.47 (SE +/- 0.01, N = 3, MIN: 10.87 / MAX: 43.5)
1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

OpenVINO

Model: Person Vehicle Bike Detection FP16 - Device: CPU

OpenVINO 2022.3 - ms, Fewer Is Better:
AVX512 On: 9.63 (SE +/- 0.02, N = 3, MIN: 6.4 / MAX: 33.26)
AVX512 Off: 19.28 (SE +/- 0.14, N = 15, MIN: 10.31 / MAX: 50.61)
1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU

OpenVINO 2022.3 - ms, Fewer Is Better:
AVX512 On: 0.99 (SE +/- 0.00, N = 3, MIN: 0.35 / MAX: 19.47)
AVX512 Off: 1.91 (SE +/- 0.01, N = 3, MIN: 0.67 / MAX: 19.48)
1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU

OpenVINO 2022.3 - ms, Fewer Is Better:
AVX512 On: 1.58 (SE +/- 0.00, N = 3, MIN: 0.55 / MAX: 17.78)
AVX512 Off: 1.76 (SE +/- 0.00, N = 3, MIN: 0.64 / MAX: 19.6)
1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF
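
The OpenVINO latencies above come from the Phoronix Test Suite's OpenVINO test profile. As a rough, hypothetical illustration only (placeholder model path, dummy input, single static-shape input assumed), a single CPU inference latency can be sketched with the OpenVINO Python API along these lines:

# Hypothetical sketch, not the harness used for these results.
import time
import numpy as np
from openvino.runtime import Core

core = Core()
compiled = core.compile_model("model.xml", "CPU")   # placeholder IR model path
request = compiled.create_infer_request()

port = compiled.inputs[0]                           # assumes one static-shape input
data = np.random.rand(*port.shape).astype(np.float32)

start = time.perf_counter()
request.infer({port.any_name: data})
print("latency: %.2f ms" % ((time.perf_counter() - start) * 1000.0))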

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 - ms/batch, Fewer Is Better:
AVX512 On: 858.40 (SE +/- 0.33, N = 3)
AVX512 Off: 1023.17 (SE +/- 1.40, N = 3)

Neural Magic DeepSparse

Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 - ms/batch, Fewer Is Better:
AVX512 On: 46.26 (SE +/- 0.05, N = 3)
AVX512 Off: 88.91 (SE +/- 0.67, N = 3)

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 - ms/batch, Fewer Is Better:
AVX512 On: 259.65 (SE +/- 6.02, N = 15)
AVX512 Off: 298.06 (SE +/- 0.24, N = 3)

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 - ms/batch, Fewer Is Better:
AVX512 On: 65.89 (SE +/- 0.03, N = 3)
AVX512 Off: 73.35 (SE +/- 0.04, N = 3)

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 - ms/batch, Fewer Is Better:
AVX512 On: 102.22 (SE +/- 0.07, N = 3)
AVX512 Off: 122.23 (SE +/- 0.15, N = 3)

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 - ms/batch, Fewer Is Better:
AVX512 On: 498.48 (SE +/- 0.16, N = 3)
AVX512 Off: 906.94 (SE +/- 0.94, N = 3)

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 - ms/batch, Fewer Is Better:
AVX512 On: 201.60 (SE +/- 0.47, N = 3)
AVX512 Off: 244.15 (SE +/- 0.64, N = 3)

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 - ms/batch, Fewer Is Better:
AVX512 On: 859.71 (SE +/- 0.45, N = 3)
AVX512 Off: 1025.48 (SE +/- 1.31, N = 3)
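
To put the latency charts above on a common footing, the relative slowdown with AVX-512 disabled can be computed directly from the reported averages; the figures in this sketch are copied by hand from the results on this page and only a few representative workloads are included.

# Average latencies in ms (ms/batch for DeepSparse), copied from the charts above.
results = {
    "oneDNN RNN Training bf16":              (1174.75, 1317.57),
    "OpenVINO Face Detection FP16":          (1048.37, 2423.39),
    "OpenVINO Weld Porosity Detection FP16": (10.52, 25.06),
    "DeepSparse NLP Sentiment Analysis":     (46.26, 88.91),
    "DeepSparse CV Segmentation YOLACT":     (498.48, 906.94),
}

for name, (avx512_on, avx512_off) in results.items():
    print(f"{name}: {avx512_off / avx512_on:.2f}x slower with AVX-512 off")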

miniBUDE

CPU Power Consumption Monitor

miniBUDE 20210901 - Watts, Fewer Is Better:
AVX512 On: Min: 19.2 / Avg: 152.9 / Max: 317.5
AVX512 Off: Min: 19.5 / Avg: 182.4 / Max: 337.5

miniBUDE

CPU Power Consumption Monitor

miniBUDE 20210901 - Watts, Fewer Is Better:
AVX512 On: Min: 19.7 / Avg: 268.3 / Max: 323.2
AVX512 Off: Min: 19.7 / Avg: 292.3 / Max: 341.7

libxsmm

CPU Power Consumption Monitor

libxsmm 2-1.17-3645 - Watts, Fewer Is Better:
AVX512 Off: Min: 20.58 / Avg: 205.43 / Max: 327.08
AVX512 On: Min: 20.28 / Avg: 205.93 / Max: 329.89

libxsmm

CPU Power Consumption Monitor

libxsmm 2-1.17-3645 - Watts, Fewer Is Better:
AVX512 On: Min: 20.5 / Avg: 210.2 / Max: 340.2
AVX512 Off: Min: 20.7 / Avg: 211.9 / Max: 335.7

Embree

CPU Power Consumption Monitor

Embree 4.1 - Watts, Fewer Is Better:
AVX512 On: Min: 20.3 / Avg: 186.8 / Max: 333.7
AVX512 Off: Min: 20.6 / Avg: 200.3 / Max: 339.8

Embree

CPU Power Consumption Monitor

Embree 4.1 - Watts, Fewer Is Better:
AVX512 On: Min: 20.1 / Avg: 171.3 / Max: 327.2
AVX512 Off: Min: 20.4 / Avg: 186.1 / Max: 333.2

Embree

CPU Power Consumption Monitor

Embree 4.1 - Watts, Fewer Is Better:
AVX512 On: Min: 20.3 / Avg: 126.0 / Max: 325.6
AVX512 Off: Min: 20.2 / Avg: 136.9 / Max: 330.7

OpenVKL

CPU Power Consumption Monitor

OpenVKL 1.3.1 - Watts, Fewer Is Better:
AVX512 On: Min: 19.9 / Avg: 238.7 / Max: 322.2
AVX512 Off: Min: 10.2 / Avg: 258.1 / Max: 348.2

OSPRay

CPU Power Consumption Monitor

OSPRay 2.12 - Watts, Fewer Is Better:
AVX512 On: Min: 20.1 / Avg: 276.9 / Max: 329.8
AVX512 Off: Min: 20.8 / Avg: 290.6 / Max: 346.2

OSPRay

CPU Power Consumption Monitor

OSPRay 2.12 - Watts, Fewer Is Better:
AVX512 On: Min: 20.9 / Avg: 278.8 / Max: 332.5
AVX512 Off: Min: 20.9 / Avg: 292.2 / Max: 347.0

OSPRay

CPU Power Consumption Monitor

OSPRay 2.12 - Watts, Fewer Is Better:
AVX512 On: Min: 21.1 / Avg: 263.0 / Max: 313.6
AVX512 Off: Min: 21.2 / Avg: 271.7 / Max: 321.8

oneDNN

CPU Power Consumption Monitor

oneDNN 3.1 - Watts, Fewer Is Better:
AVX512 On: Min: 20.0 / Avg: 176.7 / Max: 277.9
AVX512 Off: Min: 19.8 / Avg: 202.0 / Max: 272.5

Cpuminer-Opt

CPU Power Consumption Monitor

Cpuminer-Opt 3.20.3 - Watts, Fewer Is Better:
AVX512 Off: Min: 21.0 / Avg: 278.5 / Max: 321.6
AVX512 On: Min: 20.4 / Avg: 295.4 / Max: 341.0

Cpuminer-Opt

CPU Power Consumption Monitor

Cpuminer-Opt 3.20.3 - Watts, Fewer Is Better:
AVX512 Off: Min: 20.9 / Avg: 314.0 / Max: 361.0
AVX512 On: Min: 20.9 / Avg: 335.8 / Max: 385.3

Cpuminer-Opt

CPU Power Consumption Monitor

Cpuminer-Opt 3.20.3 - Watts, Fewer Is Better:
AVX512 Off: Min: 21.0 / Avg: 313.1 / Max: 358.1
AVX512 On: Min: 20.4 / Avg: 326.6 / Max: 376.0

Cpuminer-Opt

CPU Power Consumption Monitor

Cpuminer-Opt 3.20.3 - Watts, Fewer Is Better:
AVX512 On: Min: 20.7 / Avg: 194.9 / Max: 210.1
AVX512 Off: Min: 20.8 / Avg: 195.1 / Max: 204.9

Cpuminer-Opt

CPU Power Consumption Monitor

Cpuminer-Opt 3.20.3 - Watts, Fewer Is Better:
AVX512 On: Min: 20.2 / Avg: 318.3 / Max: 370.8
AVX512 Off: Min: 20.2 / Avg: 324.2 / Max: 376.0

Cpuminer-Opt

CPU Power Consumption Monitor

Cpuminer-Opt 3.20.3 - Watts, Fewer Is Better:
AVX512 Off: Min: 20.4 / Avg: 160.1 / Max: 173.0
AVX512 On: Min: 19.9 / Avg: 167.1 / Max: 199.7

Cpuminer-Opt

CPU Power Consumption Monitor

Cpuminer-Opt 3.20.3 - Watts, Fewer Is Better:
AVX512 Off: Min: 20.7 / Avg: 316.2 / Max: 362.0
AVX512 On: Min: 20.3 / Avg: 329.8 / Max: 380.7

Cpuminer-Opt

CPU Power Consumption Monitor

Cpuminer-Opt 3.20.3 - Watts, Fewer Is Better:
AVX512 Off: Min: 11.0 / Avg: 310.1 / Max: 352.7
AVX512 On: Min: 21.1 / Avg: 344.8 / Max: 398.4

TensorFlow

CPU Power Consumption Monitor

TensorFlow 2.12 - Watts, Fewer Is Better:
AVX512 On: Min: 10.4 / Avg: 115.6 / Max: 199.6
AVX512 Off: Min: 20.6 / Avg: 121.7 / Max: 145.6

TensorFlow

CPU Power Consumption Monitor

TensorFlow 2.12 - Watts, Fewer Is Better:
AVX512 Off: Min: 20.5 / Avg: 123.0 / Max: 139.4
AVX512 On: Min: 20.3 / Avg: 128.6 / Max: 210.9

TensorFlow

CPU Power Consumption Monitor

TensorFlow 2.12 - Watts, Fewer Is Better:
AVX512 Off: Min: 19.9 / Avg: 125.4 / Max: 141.0
AVX512 On: Min: 20.2 / Avg: 144.4 / Max: 224.5

TensorFlow

CPU Power Consumption Monitor

TensorFlow 2.12 - Watts, Fewer Is Better:
AVX512 Off: Min: 20.0 / Avg: 126.7 / Max: 155.1
AVX512 On: Min: 20.1 / Avg: 203.7 / Max: 257.0

TensorFlow

CPU Power Consumption Monitor

TensorFlow 2.12 - Watts, Fewer Is Better:
AVX512 Off: Min: 20.3 / Avg: 127.04 / Max: 189.92
AVX512 On: Min: 20.26 / Avg: 230.01 / Max: 283.14

TensorFlow

CPU Power Consumption Monitor

TensorFlow 2.12 - Watts, Fewer Is Better:
AVX512 Off: Min: 20.1 / Avg: 118.4 / Max: 140.5
AVX512 On: Min: 19.8 / Avg: 135.4 / Max: 179.1

TensorFlow

CPU Power Consumption Monitor

TensorFlow 2.12 - Watts, Fewer Is Better:
AVX512 Off: Min: 20.2 / Avg: 140.3 / Max: 151.8
AVX512 On: Min: 20.1 / Avg: 146.6 / Max: 171.4

TensorFlow

CPU Power Consumption Monitor

TensorFlow 2.12 - Watts, Fewer Is Better:
AVX512 Off: Min: 20.2 / Avg: 123.6 / Max: 149.7
AVX512 On: Min: 20.3 / Avg: 158.9 / Max: 208.4

TensorFlow

CPU Power Consumption Monitor

TensorFlow 2.12 - Watts, Fewer Is Better:
AVX512 Off: Min: 20.1 / Avg: 141.1 / Max: 172.9
AVX512 On: Min: 20.0 / Avg: 176.0 / Max: 204.8

TensorFlow

CPU Power Consumption Monitor

TensorFlow 2.12 - Watts, Fewer Is Better:
AVX512 Off: Min: 19.9 / Avg: 125.3 / Max: 152.5
AVX512 On: Min: 20.1 / Avg: 187.9 / Max: 236.9

TensorFlow

CPU Power Consumption Monitor

TensorFlow 2.12 - Watts, Fewer Is Better:
AVX512 Off: Min: 20.1 / Avg: 140.4 / Max: 179.7
AVX512 On: Min: 20.6 / Avg: 200.2 / Max: 230.6

TensorFlow

CPU Power Consumption Monitor

TensorFlow 2.12 - Watts, Fewer Is Better:
AVX512 Off: Min: 20.1 / Avg: 127.43 / Max: 214.25
AVX512 On: Min: 19.85 / Avg: 241.46 / Max: 277.92

TensorFlow

CPU Power Consumption Monitor

TensorFlow 2.12 - Watts, Fewer Is Better:
AVX512 Off: Min: 20.06 / Avg: 139.18 / Max: 198.98
AVX512 On: Min: 20.59 / Avg: 222.08 / Max: 272.77

TensorFlow

CPU Power Consumption Monitor

TensorFlow 2.12 - Watts, Fewer Is Better:
AVX512 Off: Min: 19.97 / Avg: 127.28 / Max: 218.04
AVX512 On: Min: 10.84 / Avg: 236.33 / Max: 283.4

TensorFlow

CPU Power Consumption Monitor

TensorFlow 2.12 - Watts, Fewer Is Better:
AVX512 Off: Min: 20.22 / Avg: 140.06 / Max: 209.42
AVX512 On: Min: 20.5 / Avg: 226.78 / Max: 289.01

Neural Magic DeepSparse

CPU Power Consumption Monitor

Neural Magic DeepSparse 1.5 - Watts, Fewer Is Better:
AVX512 Off: Min: 20.7 / Avg: 244.1 / Max: 344.4
AVX512 On: Min: 20.4 / Avg: 253.4 / Max: 358.2

Neural Magic DeepSparse

CPU Power Consumption Monitor

Neural Magic DeepSparse 1.5 - Watts, Fewer Is Better:
AVX512 Off: Min: 21.3 / Avg: 234.7 / Max: 326.7
AVX512 On: Min: 21.1 / Avg: 250.2 / Max: 321.3

Neural Magic DeepSparse

CPU Power Consumption Monitor

Neural Magic DeepSparse 1.5 - Watts, Fewer Is Better:
AVX512 Off: Min: 21.0 / Avg: 228.6 / Max: 329.0
AVX512 On: Min: 10.5 / Avg: 234.1 / Max: 342.4

Neural Magic DeepSparse

CPU Power Consumption Monitor

Neural Magic DeepSparse 1.5 - Watts, Fewer Is Better:
AVX512 On: Min: 21.1 / Avg: 279.2 / Max: 332.9
AVX512 Off: Min: 21.0 / Avg: 284.5 / Max: 342.1

Neural Magic DeepSparse

CPU Power Consumption Monitor

Neural Magic DeepSparse 1.5 - Watts, Fewer Is Better:
AVX512 On: Min: 20.8 / Avg: 255.7 / Max: 320.6
AVX512 Off: Min: 21.0 / Avg: 256.2 / Max: 320.4

Neural Magic DeepSparse

CPU Power Consumption Monitor

Neural Magic DeepSparse 1.5 - Watts, Fewer Is Better:
AVX512 On: Min: 20.6 / Avg: 228.3 / Max: 324.5
AVX512 Off: Min: 21.5 / Avg: 235.2 / Max: 340.9

Neural Magic DeepSparse

CPU Power Consumption Monitor

Neural Magic DeepSparse 1.5 - Watts, Fewer Is Better:
AVX512 Off: Min: 20.8 / Avg: 247.7 / Max: 347.3
AVX512 On: Min: 20.6 / Avg: 247.7 / Max: 358.6

Neural Magic DeepSparse

CPU Power Consumption Monitor

Neural Magic DeepSparse 1.5 - Watts, Fewer Is Better:
AVX512 On: Min: 20.7 / Avg: 244.3 / Max: 357.9
AVX512 Off: Min: 21.2 / Avg: 245.5 / Max: 344.8

OpenVINO

CPU Power Consumption Monitor

OpenVINO 2022.3 - Watts, Fewer Is Better:
AVX512 Off: Min: 21.1 / Avg: 286.8 / Max: 325.8
AVX512 On: Min: 21.1 / Avg: 315.2 / Max: 358.6

OpenVINO

CPU Power Consumption Monitor

OpenVINO 2022.3 - Watts, Fewer Is Better:
AVX512 Off: Min: 21.3 / Avg: 283.3 / Max: 329.0
AVX512 On: Min: 19.8 / Avg: 306.5 / Max: 359.5

OpenVINO

CPU Power Consumption Monitor

OpenVINO 2022.3 - Watts, Fewer Is Better:
AVX512 Off: Min: 21.0 / Avg: 292.7 / Max: 340.3
AVX512 On: Min: 20.8 / Avg: 306.2 / Max: 358.4

OpenVINO

CPU Power Consumption Monitor

OpenVINO 2022.3 - Watts, Fewer Is Better:
AVX512 On: Min: 20.7 / Avg: 256.0 / Max: 333.9
AVX512 Off: Min: 21.0 / Avg: 266.8 / Max: 326.8

OpenVINO

CPU Power Consumption Monitor

OpenVINO 2022.3 - Watts, Fewer Is Better:
AVX512 On: Min: 20.5 / Avg: 295.4 / Max: 336.3
AVX512 Off: Min: 21.2 / Avg: 323.5 / Max: 369.0

OpenVINO

CPU Power Consumption Monitor

OpenVINO 2022.3 - Watts, Fewer Is Better:
AVX512 On: Min: 20.8 / Avg: 306.3 / Max: 336.2
AVX512 Off: Min: 21.2 / Avg: 330.4 / Max: 358.0

OpenVINO

CPU Power Consumption Monitor

OpenVINO 2022.3 - Watts, Fewer Is Better:
AVX512 Off: Min: 21.3 / Avg: 288.2 / Max: 311.1
AVX512 On: Min: 21.2 / Avg: 327.7 / Max: 354.4

OpenVINO

CPU Power Consumption Monitor

OpenVINO 2022.3 - Watts, Fewer Is Better:
AVX512 Off: Min: 21.2 / Avg: 292.9 / Max: 331.3
AVX512 On: Min: 10.9 / Avg: 295.1 / Max: 345.9

OpenVINO

CPU Power Consumption Monitor

OpenVINO 2022.3 - Watts, Fewer Is Better:
AVX512 On: Min: 21.1 / Avg: 306.8 / Max: 333.5
AVX512 Off: Min: 21.1 / Avg: 336.6 / Max: 365.0

OpenVINO

CPU Power Consumption Monitor

OpenVINO 2022.3 - Watts, Fewer Is Better:
AVX512 Off: Min: 21.1 / Avg: 293.4 / Max: 328.1
AVX512 On: Min: 21.6 / Avg: 308.5 / Max: 346.7

OpenVINO

CPU Power Consumption Monitor

OpenVINO 2022.3 - Watts, Fewer Is Better:
AVX512 Off: Min: 21.0 / Avg: 298.9 / Max: 332.9
AVX512 On: Min: 21.2 / Avg: 316.4 / Max: 347.6

OpenVINO

CPU Power Consumption Monitor

OpenVINO 2022.3 - Watts, Fewer Is Better:
AVX512 Off: Min: 21.2 / Avg: 308.2 / Max: 334.1
AVX512 On: Min: 21.4 / Avg: 311.4 / Max: 336.9


Phoronix Test Suite v10.8.5