Tests for a future article. AMD EPYC 8324P 32-Core testing with an AMD Cinnabar (RCB1009C BIOS) and ASPEED on Ubuntu 23.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command:

phoronix-test-suite benchmark 2401110-NE-NEWTESTS900

HTML result view exported from: https://openbenchmarking.org/result/2401110-NE-NEWTESTS900&gru&rdt
new-tests - System Details

Configurations compared: Zen 1 - EPYC 7601, b, c, 32, 32 z, 32 c, 32 d. Fields: Processor, Motherboard, Chipset, Memory, Disk, Graphics, Monitor, Network, OS, Kernel, Desktop, Display Server, OpenGL, Compiler, File-System, Screen Resolution.

Zen 1 - EPYC 7601: AMD EPYC 7601 32-Core @ 2.20GHz (32 Cores / 64 Threads); TYAN B8026T70AE24HR (V1.02.B10 BIOS); AMD 17h; 128GB; 280GB INTEL SSDPE21D280GA + 1000GB INTEL SSDPE2KX010T8; llvmpipe; VE228; 2 x Broadcom NetXtreme BCM5720 PCIe; Ubuntu 23.10; 6.6.9-060609-generic (x86_64); GNOME Shell 45.0; X Server 1.21.1.7; 4.5 Mesa 23.2.1-1ubuntu3.1 (LLVM 15.0.7 256 bits); GCC 13.2.0; ext4; 1920x1080

b (where different): AMD EPYC 8534PN 64-Core @ 2.00GHz (64 Cores / 128 Threads); AMD Cinnabar (RCB1009C BIOS); AMD Device 14a4; 6 x 32 GB DRAM-4800MT/s Samsung M321R4GA0BB0-CQKMG; 1000GB INTEL SSDPE2KX010T8; 1920x1200

c (where different): AMD EPYC 8534PN 32-Core @ 2.05GHz (32 Cores / 64 Threads); ASPEED

32, 32 z, 32 c, 32 d (where different): AMD EPYC 8324P 32-Core @ 2.65GHz (32 Cores / 64 Threads)

Kernel Details - Transparent Huge Pages: madvise

Compiler Details - --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Processor Details
- Zen 1 - EPYC 7601: Scaling Governor: acpi-cpufreq performance (Boost: Enabled); CPU Microcode: 0x800126e
- b, c, 32, 32 z, 32 c, 32 d: Scaling Governor: acpi-cpufreq performance (Boost: Enabled); CPU Microcode: 0xaa00212

Security Details
- Zen 1 - EPYC 7601: gather_data_sampling: Not affected; itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; retbleed: Mitigation of untrained return thunk, SMT vulnerable; spec_rstack_overflow: Mitigation of Safe RET; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Retpolines, IBPB: conditional, STIBP: disabled, RSB filling, PBRSB-eIBRS: Not affected; srbds: Not affected; tsx_async_abort: Not affected
- b, c, 32, 32 z, 32 c, 32 d (identical to the above except as noted): retbleed: Not affected; spectre_v2: Mitigation of Enhanced / Automatic IBRS, IBPB: conditional, STIBP: always-on, RSB filling, PBRSB-eIBRS: Not affected

Java Details
- 32, 32 z, 32 c, 32 d: OpenJDK Runtime Environment (build 11.0.21+9-post-Ubuntu-0ubuntu123.10)

Python Details
- 32, 32 z, 32 c, 32 d: Python 3.11.6
new-tests - benchmark list:

pytorch: CPU - 1 - ResNet-50; CPU - 1 - ResNet-152; CPU - 16 - ResNet-50; CPU - 16 - ResNet-152; CPU - 1 - Efficientnet_v2_l; CPU - 16 - Efficientnet_v2_l
quicksilver: CORAL2 P1; CORAL2 P2; CTS2
ffmpeg: libx265 - Live; libx265 - Upload; libx265 - Platform; libx265 - Video On Demand
openvino: Face Detection FP16 - CPU; Person Detection FP16 - CPU; Person Detection FP32 - CPU; Vehicle Detection FP16 - CPU; Face Detection FP16-INT8 - CPU; Face Detection Retail FP16 - CPU; Road Segmentation ADAS FP16 - CPU; Vehicle Detection FP16-INT8 - CPU; Weld Porosity Detection FP16 - CPU; Face Detection Retail FP16-INT8 - CPU; Road Segmentation ADAS FP16-INT8 - CPU; Machine Translation EN To DE FP16 - CPU; Weld Porosity Detection FP16-INT8 - CPU; Person Vehicle Bike Detection FP16 - CPU; Handwritten English Recognition FP16 - CPU; Age Gender Recognition Retail 0013 FP16 - CPU; Handwritten English Recognition FP16-INT8 - CPU; Age Gender Recognition Retail 0013 FP16-INT8 - CPU
embree: Pathtracer - Crown; Pathtracer ISPC - Crown; Pathtracer - Asian Dragon; Pathtracer - Asian Dragon Obj; Pathtracer ISPC - Asian Dragon; Pathtracer ISPC - Asian Dragon Obj
svt-av1: Preset 4 - Bosphorus 4K; Preset 8 - Bosphorus 4K; Preset 12 - Bosphorus 4K; Preset 13 - Bosphorus 4K
xmrig: KawPow - 1M; Monero - 1M; Wownero - 1M; GhostRider - 1M; CryptoNight-Heavy - 1M; CryptoNight-Femto UPX2 - 1M
tensorflow: CPU - 1 - VGG-16; CPU - 1 - AlexNet; CPU - 16 - VGG-16; CPU - 16 - AlexNet; CPU - 1 - GoogLeNet; CPU - 1 - ResNet-50; CPU - 16 - GoogLeNet; CPU - 16 - ResNet-50
deepsparse (all Asynchronous Multi-Stream): NLP Document Classification, oBERT base uncased on IMDB; NLP Text Classification, BERT base uncased SST2, Sparse INT8; ResNet-50, Baseline; ResNet-50, Sparse INT8; CV Detection, YOLOv5s COCO; BERT-Large, NLP Question Answering; CV Classification, ResNet-50 ImageNet; CV Detection, YOLOv5s COCO, Sparse INT8; NLP Text Classification, DistilBERT mnli; CV Segmentation, 90% Pruned YOLACT Pruned; BERT-Large, NLP Question Answering, Sparse INT8; NLP Token Classification, BERT base uncased conll2003
cachebench: Read; Write; Read / Modify / Write
quantlib: Multi-Threaded
compress-7zip: Compression Rating; Decompression Rating
rocksdb: Rand Read; Update Rand; Read While Writing; Read Rand Write Rand
speedb: Rand Read; Update Rand; Read While Writing; Read Rand Write Rand
llama-cpp: llama-2-7b.Q4_0.gguf; llama-2-13b.Q4_0.gguf; llama-2-70b-chat.Q5_0.gguf
ospray-studio: 1 - 4K - 1 - Path Tracer - CPU; 2 - 4K - 1 - Path Tracer - CPU; 3 - 4K - 1 - Path Tracer - CPU; 1 - 4K - 16 - Path Tracer - CPU; 1 - 4K - 32 - Path Tracer - CPU; 2 - 4K - 16 - Path Tracer - CPU; 2 - 4K - 32 - Path Tracer - CPU; 3 - 4K - 16 - Path Tracer - CPU; 3 - 4K - 32 - Path Tracer - CPU
openvino (second result set): Face Detection FP16 - CPU; Person Detection FP16 - CPU; Person Detection FP32 - CPU; Vehicle Detection FP16 - CPU; Face Detection FP16-INT8 - CPU; Face Detection Retail FP16 - CPU; Road Segmentation ADAS FP16 - CPU; Vehicle Detection FP16-INT8 - CPU; Weld Porosity Detection FP16 - CPU; Face Detection Retail FP16-INT8 - CPU; Road Segmentation ADAS FP16-INT8 - CPU; Machine Translation EN To DE FP16 - CPU; Weld Porosity Detection FP16-INT8 - CPU; Person Vehicle Bike Detection FP16 - CPU; Handwritten English Recognition FP16 - CPU; Age Gender Recognition Retail 0013 FP16 - CPU; Handwritten English Recognition FP16-INT8 - CPU; Age Gender Recognition Retail 0013 FP16-INT8 - CPU
deepsparse (second result set): same twelve Asynchronous Multi-Stream workloads as above
dacapobench: Jython; Eclipse; GraphChi; Tradesoap; Tradebeans; Spring Boot; Apache Kafka; Apache Tomcat; jMonkeyEngine; Apache Cassandra; Apache Xalan XSLT; Batik SVG Toolkit; H2 Database Engine; FOP Print Formatter; PMD Source Code Analyzer; Apache Lucene Search Index; Apache Lucene Search Engine; Avrora AVR Simulation Framework; BioJava Biological Data Framework; Zxing 1D/2D Barcode Image Processing; H2O In-Memory Platform For Machine Learning
y-cruncher: 500M; 1B
openfoam: drivaerFastback, Small Mesh Size - Mesh Time; drivaerFastback, Small Mesh Size - Execution Time
build-ffmpeg: Time To Compile
build-gem5: Time To Compile
build-linux-kernel: defconfig; allmodconfig
blender: BMW27 - CPU-Only; Classroom - CPU-Only; Fishy Cat - CPU-Only; Barbershop - CPU-Only; Pabellon Barcelona - CPU-Only

Raw result values by configuration (in benchmark-list order; Zen 1 - EPYC 7601, b, and c have results for only a few tests):
Zen 1 - EPYC 7601: 12996667 15013333 11426667 15.693 33.923
b: 21180000 16140000 16270000 5.202 10.416
c: 21250000 16150000 16260000 5.213 10.476
32: 52.44 19.04 40.19 15.61 9.85 7.17 18790000 15350000 14320000 109.84 22.28 45.13 45.18 17.17 151.45 150.8 1190.42 32.82 3921.5 576.18 1960.18 1704.26 5747.65 666.22 199.9 3300.99 1741.57 898.6 40123.62 745 52441.94 36.9584 37.2967 41.5958 37.284 45.9374 38.9378 5.801 48.451 186.625 185.665 18777.2 18845.5 25814.4 4067.4 19004.5 18860.1 9.73 32.12 25.15 272.93 28.99 8.74 158.47 51.34 21.2933 836.4214 266.8574 2208.1537 123.0157 26.0566 266.8799 123.8817 182.5085 40.1688 383.9746 21.2278 7616.087334 45646.091353 87227.587713 107079.2 241545 212209 176770468 630575 4284691 2373654 179685954 314123 7457600 2231403 29.75 17.94 3.42 3404 3451 4049 60673 116377 61987 116566 71361 136464 929.23 105.48 105.97 13.36 486.65 3.91 27.69 8.07 18.69 5.41 23.95 79.82 9.56 9.12 35.51 0.65 42.87 0.48 747.0674 19.1107 59.8638 7.2332 129.8749 607.935 59.8833 128.8158 87.4908 396.2914 41.6314 747.314 6703 12656 3536 5403 8561 2444 5110 2107 6914 5946 871 1733 2675 751 1784 4613 1402 5613 7874 609 3974 5.656 11.676 28.372583 72.807288 23.557 254.01 52.133 433.789 44.73 112.03 55.65 410.61
139.09
32 z: 52.78 18.92 39.96 15.51 9.82 7.11 18760000 15230000 14290000 110.37 22.20 45.05 45.08 17.18 150.06 150.37 1197.46 32.81 3924.86 579.41 1964.99 1704.02 5751.58 666.3 201.15 3299.93 1735.64 896.69 40101.8 730.82 52475.39 37.2545 37.6791 41.8198 36.8586 46.3088 39.107 5.899 58.715 185.562 184.981 18961.3 18763.8 25943.7 4038.6 18936.5 18909 9.75 31.92 25.2 274.97 28.71 8.77 155.77 51.57 21.2711 835.262 266.9761 2199.4941 122.955 26.0768 267.8417 123.785 182.7643 39.9467 385.6481 21.289 7616.334142 45646.816107 87218.210974 107381.6 242399 211584 177167636 636242 4364996 2361270 179434924 314114 7210235 2259344 29.9 17.87 3.41 3406 3446 4048 61430 115669 62113 116972 71495 136312 927.57 106.44 106.24 13.29 486.03 3.9 27.53 8.05 18.69 5.42 23.95 79.39 9.56 9.16 35.59 0.66 43.71 0.47 745.1806 19.1338 59.8673 7.261 129.8035 608.1326 59.6674 128.845 87.325 397.9593 41.4377 746.128 6773 12735 3630 5168 8600 2460 5121 2082 6917 5938 859 1723 2655 696 1820 4589 1425 5441 7858 599 3868 5.685 11.595 30.75472 71.201285 23.759 272.61 52.012 434.187 44.48 112.09 55.54 410.43 138.6
32 c: 53.00 18.86 40.32 15.32 10.04 7.18 1040000 15180000 14430000 110.02 22.21 45.13 44.95 16.51 150.07 150.84 1166.56 31.22 3877.91 554.68 1860.99 1627.93 5416.31 632.92 194.21 3099.2 1694.01 853.38 39562.87 692.02 52382.31 35.9147 36.9967 41.5696 37.4405 45.4648 39.0046 5.829 47.253 180.955 183.899 18947.3 18897.5 25385.9 4136.3 18783.9 18887.5 9.77 33.14 24.47 274.97 27.73 8.61 157.6 51.56 20.8729 816.2785 266.8428 2195.9198 122.3307 25.8175 266.034 123.1469 181.1043 38.7708 384.3164 21.0419 7615.948086 45645.091133 87238.013197 98916.2 240287 211815 160665305 633688 4419497 2327800 163202721 317758 7746346 2229494 29.74 17.87 3.42 3493 3515 4157 62802 118221 63402 118980 73024 139685 964.2 106.43 105.91 13.65 510.79 4.03 28.77 8.52 19.58 5.78 25.22 82.18 10.22 9.39 37.4 0.67 46.17 0.48 753.1229 19.5831 59.9016 7.2738 130.4755 611.6026 60.0613 129.5421 88.1952 411.3435 41.5889 751.9259 6865 12826 3538 5366 8520 2533 5111 2094 6917 5955 852 1718 2773 764 1966 4580 1379 5561 7904 569 3979 5.783 11.902 30.537591 72.384007 24.446 258.307 53.615 453.693 47.52 119.72 59.58 426.3 148.74
32 d: 53.30 18.86 40.31 15.35 10.21 7.15 18840000 15100000 14280000 110.29 22.22 44.97 45.10 16.54 151.25 150.25 1166.83 31.2 3869.7 553.65 1862.24 1628.91 5423.13 634.5 195.05 3100.95 1696.5 848.62 39843.05 690.24 52344.6 36.2812 36.9369 41.557 37.4056 45.6482 39.1421 5.977 58.642 186.368 184.099 18901.1 18866.1 25396.8 4095.7 18924 18818.6 9.75 33.02 24.51 276.19 28.79 8.59 158.08 51.49 21.0932 815.9768 266.5343 2189.0655 121.8001 25.7874 266.2776 122.9312 181.1155 38.8343 381.7839 21.0667 7615.833145 45643.038713 87854.117672 98618.7 241191 211383 160707812 630478 4244478 2351568 163512432 313683 7105602 2215896 29.85 18.08 3.42 3499 3522 4132 63336 118802 62787 119783 73329 139445 965.35 105.64 106.32 13.65 510.9 4.03 28.82 8.52 19.56 5.78 25.16 81.87 10.21 9.37 37.61 0.67 46.28 0.48 751.2117 19.5858 59.9698 7.2896 130.7937 611.4439 60.0284 129.8101 88.2278 410.3267 41.8438 750.3997 6769 12768 3656 5149 8380 2452 5114 2112 6916 5927 861 1738 2634 758 1833 4602 1433 5572 7907 599 3755 5.751 11.975 30.724194 72.305836 24.3 258.934 53.632 452.606 47.41 119.57 59.79 426.37 148.56
CPU Power Consumption Monitor (Phoronix Test Suite System Monitoring, Watts): Zen 1 - EPYC 7601: Min: 242.58 / Avg: 585.92 / Max: 718
PyTorch 2.1, Device: CPU - Batch Size: 1 - Model: ResNet-50 (batches/sec, more is better): 32: 52.44 (min 15.02 / max 53.14); 32 z: 52.78 (min 17.43 / max 53.32); 32 c: 53.00 (min 50.62 / max 53.51); 32 d: 53.30 (min 50.97 / max 53.84)
PyTorch 2.1, Device: CPU - Batch Size: 1 - Model: ResNet-152 (batches/sec, more is better): 32: 19.04 (min 6.89 / max 19.18); 32 z: 18.92 (min 7.59 / max 19.04); 32 c: 18.86 (min 10.78 / max 19.02); 32 d: 18.86 (min 7.91 / max 19.03)
PyTorch 2.1, Device: CPU - Batch Size: 16 - Model: ResNet-50 (batches/sec, more is better): 32: 40.19 (min 15.55 / max 40.67); 32 z: 39.96 (min 15.13 / max 40.53); 32 c: 40.32 (min 15.51 / max 40.87); 32 d: 40.31 (min 15.27 / max 40.73)
PyTorch 2.1, Device: CPU - Batch Size: 16 - Model: ResNet-152 (batches/sec, more is better): 32: 15.61 (min 6.89 / max 15.74); 32 z: 15.51 (min 7.3 / max 15.63); 32 c: 15.32 (min 6.91 / max 15.45); 32 d: 15.35 (min 8.86 / max 15.52)
PyTorch 2.1, Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l (batches/sec, more is better): 32: 9.85 (min 5.1 / max 9.99); 32 z: 9.82 (min 5.63 / max 10.05); 32 c: 10.04 (min 5.86 / max 10.23); 32 d: 10.21 (min 5.69 / max 10.32)
PyTorch 2.1, Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_l (batches/sec, more is better): 32: 7.17 (min 4.45 / max 7.33); 32 z: 7.11 (min 4.25 / max 7.26); 32 c: 7.18 (min 4.37 / max 7.37); 32 d: 7.15 (min 4.34 / max 7.3)
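The four EPYC 8324P configurations track each other closely in the PyTorch results above. As an illustration, the max-versus-min spread of the batch-1 ResNet-50 numbers works out to under 2%; a minimal sketch (the dictionary keys are just the configuration labels from the charts):

```python
# Max-vs-min spread of the PyTorch CPU, batch size 1, ResNet-50 results
# across the four EPYC 8324P configurations (values from the chart above).
results = {"32": 52.44, "32 z": 52.78, "32 c": 53.00, "32 d": 53.30}

lo = min(results.values())
hi = max(results.values())
spread_pct = (hi - lo) / lo * 100  # percent difference, fastest vs. slowest

print(f"spread: {spread_pct:.2f}%")  # spread: 1.64%
```

The same calculation applied to any of the other PyTorch rows gives a similarly small spread, which is expected since all four configurations use the same CPU.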
Quicksilver 20230818, Input: CORAL2 P1 (Figure Of Merit Per Watt, more is better): Zen 1 - EPYC 7601: 22248.55
Quicksilver 20230818, Input: CORAL2 P2 (Figure Of Merit Per Watt, more is better): Zen 1 - EPYC 7601: 27116.87
Quicksilver 20230818, Input: CTS2 (Figure Of Merit Per Watt, more is better): Zen 1 - EPYC 7601: 18307.66
Quicksilver 20230818, Input: CORAL2 P1 (Figure Of Merit, more is better; SE +/- 66916.20, N = 3): Zen 1 - EPYC 7601: 12996667; b: 21180000; c: 21250000; 32: 18790000; 32 z: 18760000; 32 c: 1040000; 32 d: 18840000
Quicksilver 20230818, Input: CORAL2 P2 (Figure Of Merit, more is better; SE +/- 37118.43, N = 3): Zen 1 - EPYC 7601: 15013333; b: 16140000; c: 16150000; 32: 15350000; 32 z: 15230000; 32 c: 15180000; 32 d: 15100000
Quicksilver 20230818, Input: CTS2 (Figure Of Merit, more is better; SE +/- 16666.67, N = 3): Zen 1 - EPYC 7601: 11426667; b: 16270000; c: 16260000; 32: 14320000; 32 z: 14290000; 32 c: 14430000; 32 d: 14280000
1. (CXX) g++ options: -fopenmp -O3 -march=native
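Quicksilver is reported both as a raw figure of merit and as figure of merit per Watt for the Zen 1 system, so the implied average CPU power during each run can be backed out by dividing the two; a minimal sketch, assuming the per-Watt figure is the Phoronix Test Suite's usual result-divided-by-average-power-draw definition:

```python
# Back out the implied average CPU power for the Zen 1 - EPYC 7601 runs:
# raw figure of merit / (figure of merit per Watt) = average Watts.
fom = {"CORAL2 P1": 12996667, "CORAL2 P2": 15013333, "CTS2": 11426667}
fom_per_watt = {"CORAL2 P1": 22248.55, "CORAL2 P2": 27116.87, "CTS2": 18307.66}

for workload, merit in fom.items():
    watts = merit / fom_per_watt[workload]
    print(f"{workload}: ~{watts:.0f} W")
# CORAL2 P1: ~584 W
# CORAL2 P2: ~554 W
# CTS2: ~624 W
```

The roughly 584 W figure for CORAL2 P1 is consistent with the ~586 W average reported by the CPU power consumption monitor above.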
FFmpeg 6.1, Encoder: libx265 - Scenario: Live (FPS, more is better): 32: 109.84; 32 z: 110.37; 32 c: 110.02; 32 d: 110.29
FFmpeg 6.1, Encoder: libx265 - Scenario: Upload (FPS, more is better): 32: 22.28; 32 z: 22.20; 32 c: 22.21; 32 d: 22.22
FFmpeg 6.1, Encoder: libx265 - Scenario: Platform (FPS, more is better): 32: 45.13; 32 z: 45.05; 32 c: 45.13; 32 d: 44.97
FFmpeg 6.1, Encoder: libx265 - Scenario: Video On Demand (FPS, more is better): 32: 45.18; 32 z: 45.08; 32 c: 44.95; 32 d: 45.10
1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenVINO 2023.2.dev, Model: Face Detection FP16 - Device: CPU (FPS, more is better): 32: 17.17; 32 z: 17.18; 32 c: 16.51; 32 d: 16.54
OpenVINO 2023.2.dev, Model: Person Detection FP16 - Device: CPU (FPS, more is better): 32: 151.45; 32 z: 150.06; 32 c: 150.07; 32 d: 151.25
OpenVINO 2023.2.dev, Model: Person Detection FP32 - Device: CPU (FPS, more is better): 32: 150.80; 32 z: 150.37; 32 c: 150.84; 32 d: 150.25
OpenVINO 2023.2.dev, Model: Vehicle Detection FP16 - Device: CPU (FPS, more is better): 32: 1190.42; 32 z: 1197.46; 32 c: 1166.56; 32 d: 1166.83
OpenVINO 2023.2.dev, Model: Face Detection FP16-INT8 - Device: CPU (FPS, more is better): 32: 32.82; 32 z: 32.81; 32 c: 31.22; 32 d: 31.20
OpenVINO 2023.2.dev, Model: Face Detection Retail FP16 - Device: CPU (FPS, more is better): 32: 3921.50; 32 z: 3924.86; 32 c: 3877.91; 32 d: 3869.70
OpenVINO 2023.2.dev, Model: Road Segmentation ADAS FP16 - Device: CPU (FPS, more is better): 32: 576.18; 32 z: 579.41; 32 c: 554.68; 32 d: 553.65
OpenVINO 2023.2.dev, Model: Vehicle Detection FP16-INT8 - Device: CPU (FPS, more is better): 32: 1960.18; 32 z: 1964.99; 32 c: 1860.99; 32 d: 1862.24
OpenVINO 2023.2.dev, Model: Weld Porosity Detection FP16 - Device: CPU (FPS, more is better): 32: 1704.26; 32 z: 1704.02; 32 c: 1627.93; 32 d: 1628.91
OpenVINO 2023.2.dev, Model: Face Detection Retail FP16-INT8 - Device: CPU (FPS, more is better): 32: 5747.65; 32 z: 5751.58; 32 c: 5416.31; 32 d: 5423.13
OpenVINO 2023.2.dev, Model: Road Segmentation ADAS FP16-INT8 - Device: CPU (FPS, more is better): 32: 666.22; 32 z: 666.30; 32 c: 632.92; 32 d: 634.50
OpenVINO 2023.2.dev, Model: Machine Translation EN To DE FP16 - Device: CPU (FPS, more is better): 32: 199.90; 32 z: 201.15; 32 c: 194.21; 32 d: 195.05
OpenVINO 2023.2.dev, Model: Weld Porosity Detection FP16-INT8 - Device: CPU (FPS, more is better): 32: 3300.99; 32 z: 3299.93; 32 c: 3099.20; 32 d: 3100.95
OpenVINO 2023.2.dev, Model: Person Vehicle Bike Detection FP16 - Device: CPU (FPS, more is better): 32: 1741.57; 32 z: 1735.64; 32 c: 1694.01; 32 d: 1696.50
OpenVINO 2023.2.dev, Model: Handwritten English Recognition FP16 - Device: CPU (FPS, more is better): 32: 898.60; 32 z: 896.69; 32 c: 853.38; 32 d: 848.62
OpenVINO 2023.2.dev, Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (FPS, more is better): 32: 40123.62; 32 z: 40101.80; 32 c: 39562.87; 32 d: 39843.05
OpenVINO 2023.2.dev, Model: Handwritten English Recognition FP16-INT8 - Device: CPU (FPS, more is better): 32: 745.00; 32 z: 730.82; 32 c: 692.02; 32 d: 690.24
OpenVINO 2023.2.dev, Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (FPS, more is better): 32: 52441.94; 32 z: 52475.39; 32 c: 52382.31; 32 d: 52344.60
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie
Embree 4.3, Binary: Pathtracer - Model: Crown (Frames Per Second, more is better): 32: 36.96 (min 36.61 / max 37.43); 32 z: 37.25 (min 36.89 / max 37.75); 32 c: 35.91 (min 35.53 / max 37.08); 32 d: 36.28 (min 35.88 / max 37.13)
Embree 4.3, Binary: Pathtracer ISPC - Model: Crown (Frames Per Second, more is better): 32: 37.30 (min 36.86 / max 38.04); 32 z: 37.68 (min 37.25 / max 38.37); 32 c: 37.00 (min 36.53 / max 38.11); 32 d: 36.94 (min 36.46 / max 37.76)
Embree 4.3, Binary: Pathtracer - Model: Asian Dragon (Frames Per Second, more is better): 32: 41.60 (min 41.36 / max 41.86); 32 z: 41.82 (min 41.6 / max 42.16); 32 c: 41.57 (min 41.37 / max 41.9); 32 d: 41.56 (min 41.33 / max 41.84)
Embree 4.3, Binary: Pathtracer - Model: Asian Dragon Obj (Frames Per Second, more is better): 32: 37.28 (min 37.09 / max 37.7); 32 z: 36.86 (min 36.67 / max 37.11); 32 c: 37.44 (min 37.24 / max 37.71); 32 d: 37.41 (min 37.22 / max 37.69)
Embree 4.3, Binary: Pathtracer ISPC - Model: Asian Dragon (Frames Per Second, more is better): 32: 45.94 (min 45.66 / max 46.38); 32 z: 46.31 (min 46.05 / max 46.74); 32 c: 45.46 (min 45.22 / max 46.6); 32 d: 45.65 (min 45.37 / max 46.89)
Embree 4.3, Binary: Pathtracer ISPC - Model: Asian Dragon Obj (Frames Per Second, more is better): 32: 38.94 (min 38.69 / max 39.29); 32 z: 39.11 (min 38.88 / max 39.43); 32 c: 39.00 (min 38.78 / max 39.64); 32 d: 39.14 (min 38.92 / max 39.84)
SVT-AV1 1.8 (Frames Per Second, more is better)
  Encoder Mode: Preset 4 - Input: Bosphorus 4K | 32: 5.801 | 32 z: 5.899 | 32 c: 5.829 | 32 d: 5.977
  Encoder Mode: Preset 8 - Input: Bosphorus 4K | 32: 48.45 | 32 z: 58.72 | 32 c: 47.25 | 32 d: 58.64
  Encoder Mode: Preset 12 - Input: Bosphorus 4K | 32: 186.63 | 32 z: 185.56 | 32 c: 180.96 | 32 d: 186.37
  Encoder Mode: Preset 13 - Input: Bosphorus 4K | 32: 185.67 | 32 z: 184.98 | 32 c: 183.90 | 32 d: 184.10
  Compiler notes: (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
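The Preset 8 row above is the one result in this suite with a large gap between configs (roughly 47 vs 59 FPS). A quick way to flag such rows when post-processing a result export is to compute the best-to-worst spread per test; the helper below is a minimal sketch, not part of the Phoronix Test Suite itself.

```python
# Hypothetical helper for scanning "More Is Better" rows: report the gap
# between the best and worst config as a percentage of the worst score.
def spread_pct(scores):
    lo, hi = min(scores), max(scores)
    return (hi - lo) / lo * 100

# SVT-AV1 1.8, Preset 8 / Bosphorus 4K values from the row above
preset8 = {"32": 48.45, "32 z": 58.72, "32 c": 47.25, "32 d": 58.64}
print(f"Preset 8 spread across configs: {spread_pct(preset8.values()):.1f}%")
# a gap of roughly 24% stands out against the ~1% noise of the other presets
```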
Xmrig 6.21 (H/s, more is better; Hash Count: 1M)
  Variant: KawPow | 32: 18777.2 | 32 z: 18961.3 | 32 c: 18947.3 | 32 d: 18901.1
  Variant: Monero | 32: 18845.5 | 32 z: 18763.8 | 32 c: 18897.5 | 32 d: 18866.1
  Variant: Wownero | 32: 25814.4 | 32 z: 25943.7 | 32 c: 25385.9 | 32 d: 25396.8
  Variant: GhostRider | 32: 4067.4 | 32 z: 4038.6 | 32 c: 4136.3 | 32 d: 4095.7
  Variant: CryptoNight-Heavy | 32: 19004.5 | 32 z: 18936.5 | 32 c: 18783.9 | 32 d: 18924.0
  Variant: CryptoNight-Femto UPX2 | 32: 18860.1 | 32 z: 18909.0 | 32 c: 18887.5 | 32 d: 18818.6
  Compiler notes: (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc
TensorFlow 2.12 (images/sec, more is better; Device: CPU)
  Batch Size: 1 - Model: VGG-16 | 32: 9.73 | 32 z: 9.75 | 32 c: 9.77 | 32 d: 9.75
  Batch Size: 1 - Model: AlexNet | 32: 32.12 | 32 z: 31.92 | 32 c: 33.14 | 32 d: 33.02
  Batch Size: 16 - Model: VGG-16 | 32: 25.15 | 32 z: 25.20 | 32 c: 24.47 | 32 d: 24.51
  Batch Size: 16 - Model: AlexNet | 32: 272.93 | 32 z: 274.97 | 32 c: 274.97 | 32 d: 276.19
  Batch Size: 1 - Model: GoogLeNet | 32: 28.99 | 32 z: 28.71 | 32 c: 27.73 | 32 d: 28.79
  Batch Size: 1 - Model: ResNet-50 | 32: 8.74 | 32 z: 8.77 | 32 c: 8.61 | 32 d: 8.59
  Batch Size: 16 - Model: GoogLeNet | 32: 158.47 | 32 z: 155.77 | 32 c: 157.60 | 32 d: 158.08
  Batch Size: 16 - Model: ResNet-50 | 32: 51.34 | 32 z: 51.57 | 32 c: 51.56 | 32 d: 51.49
Neural Magic DeepSparse 1.6 (items/sec, more is better; Scenario: Asynchronous Multi-Stream)
  NLP Document Classification, oBERT base uncased on IMDB | 32: 21.29 | 32 z: 21.27 | 32 c: 20.87 | 32 d: 21.09
  NLP Text Classification, BERT base uncased SST2, Sparse INT8 | 32: 836.42 | 32 z: 835.26 | 32 c: 816.28 | 32 d: 815.98
  ResNet-50, Baseline | 32: 266.86 | 32 z: 266.98 | 32 c: 266.84 | 32 d: 266.53
  ResNet-50, Sparse INT8 | 32: 2208.15 | 32 z: 2199.49 | 32 c: 2195.92 | 32 d: 2189.07
  CV Detection, YOLOv5s COCO | 32: 123.02 | 32 z: 122.96 | 32 c: 122.33 | 32 d: 121.80
  BERT-Large, NLP Question Answering | 32: 26.06 | 32 z: 26.08 | 32 c: 25.82 | 32 d: 25.79
  CV Classification, ResNet-50 ImageNet | 32: 266.88 | 32 z: 267.84 | 32 c: 266.03 | 32 d: 266.28
  CV Detection, YOLOv5s COCO, Sparse INT8 | 32: 123.88 | 32 z: 123.79 | 32 c: 123.15 | 32 d: 122.93
  NLP Text Classification, DistilBERT mnli | 32: 182.51 | 32 z: 182.76 | 32 c: 181.10 | 32 d: 181.12
  CV Segmentation, 90% Pruned YOLACT Pruned | 32: 40.17 | 32 z: 39.95 | 32 c: 38.77 | 32 d: 38.83
  BERT-Large, NLP Question Answering, Sparse INT8 | 32: 383.97 | 32 z: 385.65 | 32 c: 384.32 | 32 d: 381.78
  NLP Token Classification, BERT base uncased conll2003 | 32: 21.23 | 32 z: 21.29 | 32 c: 21.04 | 32 d: 21.07
CacheBench (MB/s, more is better; per-run min–max in brackets)
  Test: Read | 32: 7616.09 [7615.65–7616.54] | 32 z: 7616.33 [7615.95–7616.74] | 32 c: 7615.95 [7615.46–7616.35] | 32 d: 7615.83 [7615.4–7616.44]
  Test: Write | 32: 45646.09 [45484.29–45698.11] | 32 z: 45646.82 [45482.27–45698.03] | 32 c: 45645.09 [45483.02–45696.19] | 32 d: 45643.04 [45482.26–45696.12]
  Test: Read / Modify / Write | 32: 87227.59 [65739.52–90694.35] | 32 z: 87218.21 [65721.62–90703.93] | 32 c: 87238.01 [65732.92–90706.91] | 32 d: 87854.12 [72077.93–90708.03]
  Compiler notes: (CC) gcc options: -O3 -lrt
QuantLib 1.32 (MFLOPS, more is better)
  Configuration: Multi-Threaded | 32: 107079.2 | 32 z: 107381.6 | 32 c: 98916.2 | 32 d: 98618.7
  Compiler notes: (CXX) g++ options: -O3 -march=native -fPIE -pie
7-Zip Compression 22.01 (MIPS, more is better)
  Test: Compression Rating | 32: 241545 | 32 z: 242399 | 32 c: 240287 | 32 d: 241191
  Test: Decompression Rating | 32: 212209 | 32 z: 211584 | 32 c: 211815 | 32 d: 211383
  Compiler notes: (CXX) g++ options: -lpthread -ldl -O2 -fPIC
RocksDB 8.0 (Op/s, more is better)
  Test: Random Read | 32: 176770468 | 32 z: 177167636 | 32 c: 160665305 | 32 d: 160707812
  Test: Update Random | 32: 630575 | 32 z: 636242 | 32 c: 633688 | 32 d: 630478
  Test: Read While Writing | 32: 4284691 | 32 z: 4364996 | 32 c: 4419497 | 32 d: 4244478
  Test: Read Random Write Random | 32: 2373654 | 32 z: 2361270 | 32 c: 2327800 | 32 d: 2351568
  Compiler notes: (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
Speedb 2.7 (Op/s, more is better)
  Test: Random Read | 32: 179685954 | 32 z: 179434924 | 32 c: 163202721 | 32 d: 163512432
  Test: Update Random | 32: 314123 | 32 z: 314114 | 32 c: 317758 | 32 d: 313683
  Test: Read While Writing | 32: 7457600 | 32 z: 7210235 | 32 c: 7746346 | 32 d: 7105602
  Test: Read Random Write Random | 32: 2231403 | 32 z: 2259344 | 32 c: 2229494 | 32 d: 2215896
  Compiler notes: (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
Meta Performance Per Watts (Performance Per Watts, more is better)
  Zen 1 - EPYC 7601: 13064001.66
Llama.cpp b1808 (Tokens Per Second, more is better)
  Model: llama-2-7b.Q4_0.gguf | 32: 29.75 | 32 z: 29.90 | 32 c: 29.74 | 32 d: 29.85
  Model: llama-2-13b.Q4_0.gguf | 32: 17.94 | 32 z: 17.87 | 32 c: 17.87 | 32 d: 18.08
  Model: llama-2-70b-chat.Q5_0.gguf | 32: 3.42 | 32 z: 3.41 | 32 c: 3.42 | 32 d: 3.42
  Compiler notes: (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -march=native -mtune=native -lopenblas
OSPRay Studio 0.13 (ms, fewer is better; Resolution: 4K - Renderer: Path Tracer - Acceleration: CPU)
  Camera: 1 - Samples Per Pixel: 1 | 32: 3404 | 32 z: 3406 | 32 c: 3493 | 32 d: 3499
  Camera: 2 - Samples Per Pixel: 1 | 32: 3451 | 32 z: 3446 | 32 c: 3515 | 32 d: 3522
  Camera: 3 - Samples Per Pixel: 1 | 32: 4049 | 32 z: 4048 | 32 c: 4157 | 32 d: 4132
  Camera: 1 - Samples Per Pixel: 16 | 32: 60673 | 32 z: 61430 | 32 c: 62802 | 32 d: 63336
  Camera: 1 - Samples Per Pixel: 32 | 32: 116377 | 32 z: 115669 | 32 c: 118221 | 32 d: 118802
  Camera: 2 - Samples Per Pixel: 16 | 32: 61987 | 32 z: 62113 | 32 c: 63402 | 32 d: 62787
  Camera: 2 - Samples Per Pixel: 32 | 32: 116566 | 32 z: 116972 | 32 c: 118980 | 32 d: 119783
  Camera: 3 - Samples Per Pixel: 16 | 32: 71361 | 32 z: 71495 | 32 c: 73024 | 32 d: 73329
  Camera: 3 - Samples Per Pixel: 32 | 32: 136464 | 32 z: 136312 | 32 c: 139685 | 32 d: 139445
OpenVINO 2023.2.dev (ms, fewer is better; Device: CPU; per-run min–max in brackets)
  Model: Face Detection FP16 | 32: 929.23 [907.01–1013.02] | 32 z: 927.57 [895.6–1019.94] | 32 c: 964.20 [905.78–1053.38] | 32 d: 965.35 [922.7–1047.5]
  Model: Person Detection FP16 | 32: 105.48 [82.05–167.92] | 32 z: 106.44 [81.71–196.1] | 32 c: 106.43 [80.87–199.77] | 32 d: 105.64 [54.2–154.42]
  Model: Person Detection FP32 | 32: 105.97 [81.88–218.45] | 32 z: 106.24 [81.06–185.99] | 32 c: 105.91 [82.12–188.16] | 32 d: 106.32 [81.37–177.41]
  Model: Vehicle Detection FP16 | 32: 13.36 [7.26–78.85] | 32 z: 13.29 [8.3–73.59] | 32 c: 13.65 [9.08–67.03] | 32 d: 13.65 [6.73–75.18]
  Model: Face Detection FP16-INT8 | 32: 486.65 [465.68–570.73] | 32 z: 486.03 [454.31–580.9] | 32 c: 510.79 [473.86–584.54] | 32 d: 510.90 [470.7–595.97]
  Model: Face Detection Retail FP16 | 32: 3.91 [2.2–72.73] | 32 z: 3.90 [2.18–64.81] | 32 c: 4.03 [2.23–54.09] | 32 d: 4.03 [2.23–62.26]
  Model: Road Segmentation ADAS FP16 | 32: 27.69 [18.56–147.54] | 32 z: 27.53 [18.86–82.58] | 32 c: 28.77 [17.12–135.79] | 32 d: 28.82 [19.39–99.16]
  Model: Vehicle Detection FP16-INT8 | 32: 8.07 [4.55–69.24] | 32 z: 8.05 [4.56–76.48] | 32 c: 8.52 [4.97–67.6] | 32 d: 8.52 [4.8–75.53]
  Model: Weld Porosity Detection FP16 | 32: 18.69 [9.97–81.33] | 32 z: 18.69 [9.78–86.93] | 32 c: 19.58 [10.24–83.63] | 32 d: 19.56 [13.73–73.6]
  Model: Face Detection Retail FP16-INT8 | 32: 5.41 [3.17–57.08] | 32 z: 5.42 [3.15–67.23] | 32 c: 5.78 [3.21–58.78] | 32 d: 5.78 [3.37–65.27]
  Model: Road Segmentation ADAS FP16-INT8 | 32: 23.95 [13.94–114.01] | 32 z: 23.95 [15.19–90.71] | 32 c: 25.22 [21.61–89.16] | 32 d: 25.16 [19.24–86.7]
  Model: Machine Translation EN To DE FP16 | 32: 79.82 [42.02–179.47] | 32 z: 79.39 [43.97–186.13] | 32 c: 82.18 [58.39–175.7] | 32 d: 81.87 [52.13–175.84]
  Model: Weld Porosity Detection FP16-INT8 | 32: 9.56 [5.1–77.12] | 32 z: 9.56 [5.09–75.37] | 32 c: 10.22 [5.48–68.07] | 32 d: 10.21 [5.17–61.15]
  Model: Person Vehicle Bike Detection FP16 | 32: 9.12 [6.22–56.95] | 32 z: 9.16 [5.99–67.91] | 32 c: 9.39 [5.95–68.66] | 32 d: 9.37 [6.07–71.06]
  Model: Handwritten English Recognition FP16 | 32: 35.51 [22.8–100.53] | 32 z: 35.59 [24.72–147.24] | 32 c: 37.40 [27.33–92.33] | 32 d: 37.61 [24.11–127.49]
  Model: Age Gender Recognition Retail 0013 FP16 | 32: 0.65 [0.36–51.48] | 32 z: 0.66 [0.36–65.79] | 32 c: 0.67 [0.36–62.87] | 32 d: 0.67 [0.36–50.74]
  Model: Handwritten English Recognition FP16-INT8 | 32: 42.87 [35.14–107.5] | 32 z: 43.71 [35.06–153.84] | 32 c: 46.17 [39.81–161.92] | 32 d: 46.28 [30.15–108.49]
  Model: Age Gender Recognition Retail 0013 FP16-INT8 | 32: 0.48 [0.27–50.11] | 32 z: 0.47 [0.27–64.47] | 32 c: 0.48 [0.27–50.17] | 32 d: 0.48 [0.27–65.55]
  Compiler notes: (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie
Neural Magic DeepSparse 1.6 (ms/batch, fewer is better; Scenario: Asynchronous Multi-Stream)
  NLP Document Classification, oBERT base uncased on IMDB | 32: 747.07 | 32 z: 745.18 | 32 c: 753.12 | 32 d: 751.21
  NLP Text Classification, BERT base uncased SST2, Sparse INT8 | 32: 19.11 | 32 z: 19.13 | 32 c: 19.58 | 32 d: 19.59
  ResNet-50, Baseline | 32: 59.86 | 32 z: 59.87 | 32 c: 59.90 | 32 d: 59.97
  ResNet-50, Sparse INT8 | 32: 7.2332 | 32 z: 7.2610 | 32 c: 7.2738 | 32 d: 7.2896
  CV Detection, YOLOv5s COCO | 32: 129.87 | 32 z: 129.80 | 32 c: 130.48 | 32 d: 130.79
  BERT-Large, NLP Question Answering | 32: 607.94 | 32 z: 608.13 | 32 c: 611.60 | 32 d: 611.44
  CV Classification, ResNet-50 ImageNet | 32: 59.88 | 32 z: 59.67 | 32 c: 60.06 | 32 d: 60.03
  CV Detection, YOLOv5s COCO, Sparse INT8 | 32: 128.82 | 32 z: 128.85 | 32 c: 129.54 | 32 d: 129.81
  NLP Text Classification, DistilBERT mnli | 32: 87.49 | 32 z: 87.33 | 32 c: 88.20 | 32 d: 88.23
  CV Segmentation, 90% Pruned YOLACT Pruned | 32: 396.29 | 32 z: 397.96 | 32 c: 411.34 | 32 d: 410.33
  BERT-Large, NLP Question Answering, Sparse INT8 | 32: 41.63 | 32 z: 41.44 | 32 c: 41.59 | 32 d: 41.84
  NLP Token Classification, BERT base uncased conll2003 | 32: 747.31 | 32 z: 746.13 | 32 c: 751.93 | 32 d: 750.40
DaCapo Benchmark 23.11 (msec, fewer is better)
  Java Test: Jython | 32: 6703 | 32 z: 6773 | 32 c: 6865 | 32 d: 6769
  Java Test: Eclipse | 32: 12656 | 32 z: 12735 | 32 c: 12826 | 32 d: 12768
  Java Test: GraphChi | 32: 3536 | 32 z: 3630 | 32 c: 3538 | 32 d: 3656
  Java Test: Tradesoap | 32: 5403 | 32 z: 5168 | 32 c: 5366 | 32 d: 5149
  Java Test: Tradebeans | 32: 8561 | 32 z: 8600 | 32 c: 8520 | 32 d: 8380
  Java Test: Spring Boot | 32: 2444 | 32 z: 2460 | 32 c: 2533 | 32 d: 2452
  Java Test: Apache Kafka | 32: 5110 | 32 z: 5121 | 32 c: 5111 | 32 d: 5114
  Java Test: Apache Tomcat | 32: 2107 | 32 z: 2082 | 32 c: 2094 | 32 d: 2112
  Java Test: jMonkeyEngine | 32: 6914 | 32 z: 6917 | 32 c: 6917 | 32 d: 6916
  Java Test: Apache Cassandra | 32: 5946 | 32 z: 5938 | 32 c: 5955 | 32 d: 5927
  Java Test: Apache Xalan XSLT | 32: 871 | 32 z: 859 | 32 c: 852 | 32 d: 861
  Java Test: Batik SVG Toolkit | 32: 1733 | 32 z: 1723 | 32 c: 1718 | 32 d: 1738
  Java Test: H2 Database Engine | 32: 2675 | 32 z: 2655 | 32 c: 2773 | 32 d: 2634
  Java Test: FOP Print Formatter | 32: 751 | 32 z: 696 | 32 c: 764 | 32 d: 758
  Java Test: PMD Source Code Analyzer | 32: 1784 | 32 z: 1820 | 32 c: 1966 | 32 d: 1833
  Java Test: Apache Lucene Search Index | 32: 4613 | 32 z: 4589 | 32 c: 4580 | 32 d: 4602
  Java Test: Apache Lucene Search Engine | 32: 1402 | 32 z: 1425 | 32 c: 1379 | 32 d: 1433
  Java Test: Avrora AVR Simulation Framework | 32: 5613 | 32 z: 5441 | 32 c: 5561 | 32 d: 5572
  Java Test: BioJava Biological Data Framework | 32: 7874 | 32 z: 7858 | 32 c: 7904 | 32 d: 7907
  Java Test: Zxing 1D/2D Barcode Image Processing | 32: 609 | 32 z: 599 | 32 c: 569 | 32 d: 599
  Java Test: H2O In-Memory Platform For Machine Learning | 32: 3974 | 32 z: 3868 | 32 c: 3979 | 32 d: 3755
Y-Cruncher 0.8.3 - Pi Digits To Calculate (Seconds, Fewer Is Better)
Digits   Zen 1 - EPYC 7601             b        c        32       32 z     32 c     32 d
500M     15.693 (SE +/- 0.118, N = 3)  5.202    5.213    5.656    5.685    5.783    5.751
1B       33.92  (SE +/- 0.09, N = 3)   10.42    10.48    11.68    11.60    11.90    11.98
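The "SE +/- x, N = 3" annotations on the Zen 1 - EPYC 7601 results are the standard error of the mean over three benchmark runs. A minimal sketch of how that figure is derived (the sample run times below are hypothetical, chosen only to illustrate the calculation):

```python
import math
import statistics

def standard_error(samples):
    """Standard error of the mean: sample standard deviation / sqrt(N)."""
    return statistics.stdev(samples) / math.sqrt(len(samples))

# Hypothetical run times (seconds) for three benchmark iterations
runs = [15.5, 15.7, 15.9]
print(f"Avg: {statistics.mean(runs):.3f}, SE +/- {standard_error(runs):.3f}, N = {len(runs)}")
# prints "Avg: 15.700, SE +/- 0.115, N = 3"
```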
OpenFOAM 10 - Input: drivaerFastback, Small Mesh Size (Seconds, Fewer Is Better)
Metric           32       32 z     32 c     32 d
Mesh Time        28.37    30.75    30.54    30.72
Execution Time   72.81    71.20    72.38    72.31
1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm
Timed Code Compilation (Seconds, Fewer Is Better)
Test                                                 32       32 z     32 c     32 d
Timed FFmpeg Compilation 6.1                         23.56    23.76    24.45    24.30
Timed Gem5 Compilation 23.0.1                       254.01   272.61   258.31   258.93
Timed Linux Kernel Compilation 6.1 - defconfig       52.13    52.01    53.62    53.63
Timed Linux Kernel Compilation 6.1 - allmodconfig   433.79   434.19   453.69   452.61
Blender 4.0 - Compute: CPU-Only (Seconds, Fewer Is Better)
Blend File            32       32 z     32 c     32 d
BMW27                 44.73    44.48    47.52    47.41
Classroom            112.03   112.09   119.72   119.57
Fishy Cat             55.65    55.54    59.58    59.79
Barbershop           410.61   410.43   426.30   426.37
Pabellon Barcelona   139.09   138.60   148.74   148.56
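Spreads between configurations in tables like the one above are easiest to read as percentage deltas. A minimal sketch using the BMW27 row (render times, so lower is better and the delta is computed relative to the faster result):

```python
def percent_slower(slower, faster):
    """Percentage by which the slower time exceeds the faster one."""
    return (slower - faster) / faster * 100.0

# BMW27 - Compute: CPU-Only, fastest vs. slowest configuration (seconds)
fastest, slowest = 44.48, 47.52
print(f"{percent_slower(slowest, fastest):.1f}% slower")  # prints "6.8% slower"
```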
Quicksilver 20230818 - CPU Power Consumption Monitor (Watts) - Zen 1 - EPYC 7601
Run   Min      Avg      Max
1     243      584      648
2     255.2    553.7    594.9
3     258.88   624.15   662.04
Y-Cruncher 0.8.3 - CPU Power Consumption Monitor (Watts) - Zen 1 - EPYC 7601
Run   Min   Avg   Max
1     262   543   712
2     263   602   718
Phoronix Test Suite v10.8.4