Tests for a future article. AMD EPYC 8324P 32-Core testing with an AMD Cinnabar (RCB1009C BIOS) and ASPEED on Ubuntu 23.10 via the Phoronix Test Suite.
Compare your own system(s) to this result file with the
Phoronix Test Suite by running the command:
phoronix-test-suite benchmark 2401110-NE-NEWTESTS900
HTML result view exported from: https://openbenchmarking.org/result/2401110-NE-NEWTESTS900&export=pdf&grt&sor
new-tests

System Details

Zen 1 - EPYC 7601:
  Processor: AMD EPYC 7601 32-Core @ 2.20GHz (32 Cores / 64 Threads)
  Motherboard: TYAN B8026T70AE24HR (V1.02.B10 BIOS)
  Chipset: AMD 17h
  Memory: 128GB
  Disk: 280GB INTEL SSDPE21D280GA + 1000GB INTEL SSDPE2KX010T8
  Graphics: llvmpipe
  Monitor: VE228
  Network: 2 x Broadcom NetXtreme BCM5720 PCIe
  OS: Ubuntu 23.10
  Kernel: 6.6.9-060609-generic (x86_64)
  Desktop: GNOME Shell 45.0
  Display Server: X Server 1.21.1.7
  OpenGL: 4.5 Mesa 23.2.1-1ubuntu3.1 (LLVM 15.0.7 256 bits)
  Compiler: GCC 13.2.0
  File-System: ext4
  Screen Resolution: 1920x1080

b (differences from the above):
  Processor: AMD EPYC 8534PN 64-Core @ 2.00GHz (64 Cores / 128 Threads)
  Motherboard: AMD Cinnabar (RCB1009C BIOS)
  Chipset: AMD Device 14a4
  Memory: 6 x 32 GB DRAM-4800MT/s Samsung M321R4GA0BB0-CQKMG
  Disk: 1000GB INTEL SSDPE2KX010T8
  Screen Resolution: 1920x1200

c (differences):
  Processor: AMD EPYC 8534PN 32-Core @ 2.05GHz (32 Cores / 64 Threads)
  Graphics: ASPEED

32 / 32 z / 32 c / 32 d (differences):
  Processor: AMD EPYC 8324P 32-Core @ 2.65GHz (32 Cores / 64 Threads)

Kernel Details: Transparent Huge Pages: madvise

Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Processor Details
- Zen 1 - EPYC 7601: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0x800126e
- b, c, 32, 32 z, 32 c, 32 d: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xaa00212

Security Details

- Zen 1 - EPYC 7601: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Mitigation of untrained return thunk; SMT vulnerable + spec_rstack_overflow: Mitigation of Safe RET + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional STIBP: disabled RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
- b, c, 32, 32 z, 32 c, 32 d (identical): gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Mitigation of Safe RET + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected

Java Details
- 32, 32 z, 32 c, 32 d: OpenJDK Runtime Environment (build 11.0.21+9-post-Ubuntu-0ubuntu123.10)

Python Details
- 32, 32 z, 32 c, 32 d: Python 3.11.6
Benchmarks included in this comparison:

- compress-7zip: Compression Rating; Decompression Rating
- blender: BMW27, Classroom, Fishy Cat, Barbershop, Pabellon Barcelona (all CPU-Only)
- cachebench: Read; Write; Read / Modify / Write
- dacapobench: Jython; Eclipse; GraphChi; Tradesoap; Tradebeans; Spring Boot; Apache Kafka; Apache Tomcat; jMonkeyEngine; Apache Cassandra; Apache Xalan XSLT; Batik SVG Toolkit; H2 Database Engine; FOP Print Formatter; PMD Source Code Analyzer; Apache Lucene Search Index; Apache Lucene Search Engine; Avrora AVR Simulation Framework; BioJava Biological Data Framework; Zxing 1D/2D Barcode Image Processing; H2O In-Memory Platform For Machine Learning
- embree: Pathtracer and Pathtracer ISPC with Crown, Asian Dragon, Asian Dragon Obj
- ffmpeg: libx265 in Live, Upload, Platform, Video On Demand scenarios
- llama-cpp: llama-2-7b.Q4_0.gguf; llama-2-13b.Q4_0.gguf; llama-2-70b-chat.Q5_0.gguf
- deepsparse (all Asynchronous Multi-Stream): NLP Document Classification, oBERT base uncased on IMDB; NLP Text Classification, BERT base uncased SST2, Sparse INT8; ResNet-50, Baseline; ResNet-50, Sparse INT8; CV Detection, YOLOv5s COCO; BERT-Large, NLP Question Answering; CV Classification, ResNet-50 ImageNet; CV Detection, YOLOv5s COCO, Sparse INT8; NLP Text Classification, DistilBERT mnli; CV Segmentation, 90% Pruned YOLACT Pruned; BERT-Large, NLP Question Answering, Sparse INT8; NLP Token Classification, BERT base uncased conll2003
- openfoam: drivaerFastback, Small Mesh Size (Mesh Time; Execution Time)
- openvino (all CPU): Face Detection FP16 / FP16-INT8; Person Detection FP16 / FP32; Vehicle Detection FP16 / FP16-INT8; Face Detection Retail FP16 / FP16-INT8; Road Segmentation ADAS FP16 / FP16-INT8; Weld Porosity Detection FP16 / FP16-INT8; Machine Translation EN To DE FP16; Person Vehicle Bike Detection FP16; Handwritten English Recognition FP16 / FP16-INT8; Age Gender Recognition Retail 0013 FP16 / FP16-INT8
- ospray-studio: scenes 1, 2, 3 at 4K with 1, 16, and 32 samples, Path Tracer, CPU
- pytorch: CPU with ResNet-50, ResNet-152, Efficientnet_v2_l at batch sizes 1 and 16
- quantlib: Multi-Threaded
- quicksilver: CORAL2 P1; CORAL2 P2; CTS2
- rocksdb and speedb: Rand Read; Update Rand; Read While Writing; Read Rand Write Rand
- svt-av1: Presets 4, 8, 12, 13 with Bosphorus 4K
- tensorflow: CPU with VGG-16, AlexNet, GoogLeNet, ResNet-50 at batch sizes 1 and 16
- build-ffmpeg, build-gem5, build-linux-kernel (defconfig and allmodconfig): Time To Compile
- xmrig: KawPow; Monero; Wownero; GhostRider; CryptoNight-Heavy; CryptoNight-Femto UPX2 (all 1M)
- y-cruncher: 500M; 1B

An unlabeled raw table of per-configuration result values (columns: Zen 1 - EPYC 7601, b, c, 32, 32 z, 32 c, 32 d) was exported here; the same results are presented per test below.
7-Zip Compression 22.01 (MIPS, more is better):
  Compression Rating: 32 z: 242399 | 32: 241545 | 32 d: 241191 | 32 c: 240287
  Decompression Rating: 32: 212209 | 32 c: 211815 | 32 z: 211584 | 32 d: 211383
  1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC
Blender 4.0, Compute: CPU-Only (Seconds, fewer is better):
  BMW27: 32 z: 44.48 | 32: 44.73 | 32 d: 47.41 | 32 c: 47.52
  Classroom: 32: 112.03 | 32 z: 112.09 | 32 d: 119.57 | 32 c: 119.72
  Fishy Cat: 32 z: 55.54 | 32: 55.65 | 32 c: 59.58 | 32 d: 59.79
  Barbershop: 32 z: 410.43 | 32: 410.61 | 32 c: 426.30 | 32 d: 426.37
  Pabellon Barcelona: 32 z: 138.60 | 32: 139.09 | 32 d: 148.56 | 32 c: 148.74
CacheBench (MB/s, more is better):
  Read: 32 z: 7616.33 (min 7615.95 / max 7616.74) | 32: 7616.09 (7615.65 / 7616.54) | 32 c: 7615.95 (7615.46 / 7616.35) | 32 d: 7615.83 (7615.4 / 7616.44)
  Write: 32 z: 45646.82 (min 45482.27 / max 45698.03) | 32: 45646.09 (45484.29 / 45698.11) | 32 c: 45645.09 (45483.02 / 45696.19) | 32 d: 45643.04 (45482.26 / 45696.12)
  Read / Modify / Write: 32 d: 87854.12 (min 72077.93 / max 90708.03) | 32 c: 87238.01 (65732.92 / 90706.91) | 32: 87227.59 (65739.52 / 90694.35) | 32 z: 87218.21 (65721.62 / 90703.93)
  1. (CC) gcc options: -O3 -lrt
CPU Power Consumption Monitor (Watts): Zen 1 - EPYC 7601: Min 242.58 / Avg 585.92 / Max 718
DaCapo Benchmark 23.11 (msec, fewer is better):
  Jython: 32: 6703 | 32 d: 6769 | 32 z: 6773 | 32 c: 6865
  Eclipse: 32: 12656 | 32 z: 12735 | 32 d: 12768 | 32 c: 12826
  GraphChi: 32: 3536 | 32 c: 3538 | 32 z: 3630 | 32 d: 3656
  Tradesoap: 32 d: 5149 | 32 z: 5168 | 32 c: 5366 | 32: 5403
  Tradebeans: 32 d: 8380 | 32 c: 8520 | 32: 8561 | 32 z: 8600
  Spring Boot: 32: 2444 | 32 d: 2452 | 32 z: 2460 | 32 c: 2533
  Apache Kafka: 32: 5110 | 32 c: 5111 | 32 d: 5114 | 32 z: 5121
  Apache Tomcat: 32 z: 2082 | 32 c: 2094 | 32: 2107 | 32 d: 2112
  jMonkeyEngine: 32: 6914 | 32 d: 6916 | 32 z: 6917 | 32 c: 6917
  Apache Cassandra: 32 d: 5927 | 32 z: 5938 | 32: 5946 | 32 c: 5955
  Apache Xalan XSLT: 32 c: 852 | 32 z: 859 | 32 d: 861 | 32: 871
  Batik SVG Toolkit: 32 c: 1718 | 32 z: 1723 | 32: 1733 | 32 d: 1738
  H2 Database Engine: 32 d: 2634 | 32 z: 2655 | 32: 2675 | 32 c: 2773
  FOP Print Formatter: 32 z: 696 | 32: 751 | 32 d: 758 | 32 c: 764
  PMD Source Code Analyzer: 32: 1784 | 32 z: 1820 | 32 d: 1833 | 32 c: 1966
  Apache Lucene Search Index: 32 c: 4580 | 32 z: 4589 | 32 d: 4602 | 32: 4613
  Apache Lucene Search Engine: 32 c: 1379 | 32: 1402 | 32 z: 1425 | 32 d: 1433
  Avrora AVR Simulation Framework: 32 z: 5441 | 32 c: 5561 | 32 d: 5572 | 32: 5613
  BioJava Biological Data Framework: 32 z: 7858 | 32: 7874 | 32 c: 7904 | 32 d: 7907
  Zxing 1D/2D Barcode Image Processing: 32 c: 569 | 32 z: 599 | 32 d: 599 | 32: 609
  H2O In-Memory Platform For Machine Learning: 32 d: 3755 | 32 z: 3868 | 32: 3974 | 32 c: 3979
Embree 4.3 (Frames Per Second, more is better):
  Pathtracer - Crown: 32 z: 37.25 (min 36.89 / max 37.75) | 32: 36.96 (36.61 / 37.43) | 32 d: 36.28 (35.88 / 37.13) | 32 c: 35.91 (35.53 / 37.08)
  Pathtracer ISPC - Crown: 32 z: 37.68 (min 37.25 / max 38.37) | 32: 37.30 (36.86 / 38.04) | 32 c: 37.00 (36.53 / 38.11) | 32 d: 36.94 (36.46 / 37.76)
  Pathtracer - Asian Dragon: 32 z: 41.82 (min 41.6 / max 42.16) | 32: 41.60 (41.36 / 41.86) | 32 c: 41.57 (41.37 / 41.9) | 32 d: 41.56 (41.33 / 41.84)
  Pathtracer - Asian Dragon Obj: 32 c: 37.44 (min 37.24 / max 37.71) | 32 d: 37.41 (37.22 / 37.69) | 32: 37.28 (37.09 / 37.7) | 32 z: 36.86 (36.67 / 37.11)
  Pathtracer ISPC - Asian Dragon: 32 z: 46.31 (min 46.05 / max 46.74) | 32: 45.94 (45.66 / 46.38) | 32 d: 45.65 (45.37 / 46.89) | 32 c: 45.46 (45.22 / 46.6)
  Pathtracer ISPC - Asian Dragon Obj: 32 d: 39.14 (min 38.92 / max 39.84) | 32 z: 39.11 (38.88 / 39.43) | 32 c: 39.00 (38.78 / 39.64) | 32: 38.94 (38.69 / 39.29)
FFmpeg 6.1, Encoder: libx265 (FPS, more is better):
  Live: 32 z: 110.37 | 32 d: 110.29 | 32 c: 110.02 | 32: 109.84
  Upload: 32: 22.28 | 32 d: 22.22 | 32 c: 22.21 | 32 z: 22.20
  Platform: 32 c: 45.13 | 32: 45.13 | 32 z: 45.05 | 32 d: 44.97
  Video On Demand: 32: 45.18 | 32 d: 45.10 | 32 z: 45.08 | 32 c: 44.95
  1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
Llama.cpp b1808 (Tokens Per Second, more is better):
  llama-2-7b.Q4_0.gguf: 32 z: 29.90 | 32 d: 29.85 | 32: 29.75 | 32 c: 29.74
  llama-2-13b.Q4_0.gguf: 32 d: 18.08 | 32: 17.94 | 32 c: 17.87 | 32 z: 17.87
  llama-2-70b-chat.Q5_0.gguf: 32 d: 3.42 | 32 c: 3.42 | 32: 3.42 | 32 z: 3.41
  1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -march=native -mtune=native -lopenblas
Meta Performance Per Watts (more is better): Zen 1 - EPYC 7601: 13064001.66
Neural Magic DeepSparse 1.6, Scenario: Asynchronous Multi-Stream (throughput in items/sec, more is better; latency in ms/batch, fewer is better):
  NLP Document Classification, oBERT base uncased on IMDB:
    items/sec: 32: 21.29 | 32 z: 21.27 | 32 d: 21.09 | 32 c: 20.87
    ms/batch: 32 z: 745.18 | 32: 747.07 | 32 d: 751.21 | 32 c: 753.12
  NLP Text Classification, BERT base uncased SST2, Sparse INT8:
    items/sec: 32: 836.42 | 32 z: 835.26 | 32 c: 816.28 | 32 d: 815.98
    ms/batch: 32: 19.11 | 32 z: 19.13 | 32 c: 19.58 | 32 d: 19.59
  ResNet-50, Baseline:
    items/sec: 32 z: 266.98 | 32: 266.86 | 32 c: 266.84 | 32 d: 266.53
    ms/batch: 32: 59.86 | 32 z: 59.87 | 32 c: 59.90 | 32 d: 59.97
  ResNet-50, Sparse INT8:
    items/sec: 32: 2208.15 | 32 z: 2199.49 | 32 c: 2195.92 | 32 d: 2189.07
    ms/batch: 32: 7.2332 | 32 z: 7.2610 | 32 c: 7.2738 | 32 d: 7.2896
  CV Detection, YOLOv5s COCO:
    items/sec: 32: 123.02 | 32 z: 122.96 | 32 c: 122.33 | 32 d: 121.80
    ms/batch: 32 z: 129.80 | 32: 129.87 | 32 c: 130.48 | 32 d: 130.79
  BERT-Large, NLP Question Answering:
    items/sec: 32 z: 26.08 | 32: 26.06 | 32 c: 25.82 | 32 d: 25.79
    ms/batch: 32: 607.94 | 32 z: 608.13 | 32 d: 611.44 | 32 c: 611.60
  CV Classification, ResNet-50 ImageNet:
    items/sec: 32 z: 267.84 | 32: 266.88 | 32 d: 266.28 | 32 c: 266.03
    ms/batch: 32 z: 59.67 | 32: 59.88 | 32 d: 60.03 | 32 c: 60.06
Neural Magic DeepSparse Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.6 Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream 32 32 z 32 c 32 d 30 60 90 120 150 123.88 123.79 123.15 122.93
Neural Magic DeepSparse Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.6 Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream 32 32 z 32 c 32 d 30 60 90 120 150 128.82 128.85 129.54 129.81
Neural Magic DeepSparse Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.6 Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream 32 z 32 32 d 32 c 40 80 120 160 200 182.76 182.51 181.12 181.10
Neural Magic DeepSparse Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.6 Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream 32 z 32 32 c 32 d 20 40 60 80 100 87.33 87.49 88.20 88.23
Neural Magic DeepSparse Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.6 Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream 32 32 z 32 d 32 c 9 18 27 36 45 40.17 39.95 38.83 38.77
Neural Magic DeepSparse Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.6 Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream 32 32 z 32 d 32 c 90 180 270 360 450 396.29 397.96 410.33 411.34
Neural Magic DeepSparse Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.6 Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream 32 z 32 c 32 32 d 80 160 240 320 400 385.65 384.32 383.97 381.78
Neural Magic DeepSparse Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.6 Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream 32 z 32 c 32 32 d 10 20 30 40 50 41.44 41.59 41.63 41.84
Neural Magic DeepSparse Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.6 Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream 32 z 32 32 d 32 c 5 10 15 20 25 21.29 21.23 21.07 21.04
Neural Magic DeepSparse Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.6 Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream 32 z 32 32 d 32 c 160 320 480 640 800 746.13 747.31 750.40 751.93
OpenFOAM 10

Input: drivaerFastback, Small Mesh Size - Mesh Time (Seconds, fewer is better): 32: 28.37 | 32 c: 30.54 | 32 d: 30.72 | 32 z: 30.75
Input: drivaerFastback, Small Mesh Size - Execution Time (Seconds, fewer is better): 32 z: 71.20 | 32 d: 72.31 | 32 c: 72.38 | 32: 72.81
1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm
OpenVINO 2023.2.dev (Device: CPU)

Model: Face Detection FP16 (FPS, more is better): 32 z: 17.18 | 32: 17.17 | 32 d: 16.54 | 32 c: 16.51
Model: Face Detection FP16 (ms, fewer is better): 32 z: 927.57 (min 895.6 / max 1019.94) | 32: 929.23 (min 907.01 / max 1013.02) | 32 c: 964.20 (min 905.78 / max 1053.38) | 32 d: 965.35 (min 922.7 / max 1047.5)
Model: Person Detection FP16 (FPS, more is better): 32: 151.45 | 32 d: 151.25 | 32 c: 150.07 | 32 z: 150.06
Model: Person Detection FP16 (ms, fewer is better): 32: 105.48 (min 82.05 / max 167.92) | 32 d: 105.64 (min 54.2 / max 154.42) | 32 c: 106.43 (min 80.87 / max 199.77) | 32 z: 106.44 (min 81.71 / max 196.1)
Model: Person Detection FP32 (FPS, more is better): 32 c: 150.84 | 32: 150.80 | 32 z: 150.37 | 32 d: 150.25
Model: Person Detection FP32 (ms, fewer is better): 32 c: 105.91 (min 82.12 / max 188.16) | 32: 105.97 (min 81.88 / max 218.45) | 32 z: 106.24 (min 81.06 / max 185.99) | 32 d: 106.32 (min 81.37 / max 177.41)
Model: Vehicle Detection FP16 (FPS, more is better): 32 z: 1197.46 | 32: 1190.42 | 32 d: 1166.83 | 32 c: 1166.56
Model: Vehicle Detection FP16 (ms, fewer is better): 32 z: 13.29 (min 8.3 / max 73.59) | 32: 13.36 (min 7.26 / max 78.85) | 32 c: 13.65 (min 9.08 / max 67.03) | 32 d: 13.65 (min 6.73 / max 75.18)
Model: Face Detection FP16-INT8 (FPS, more is better): 32: 32.82 | 32 z: 32.81 | 32 c: 31.22 | 32 d: 31.20
Model: Face Detection FP16-INT8 (ms, fewer is better): 32 z: 486.03 (min 454.31 / max 580.9) | 32: 486.65 (min 465.68 / max 570.73) | 32 c: 510.79 (min 473.86 / max 584.54) | 32 d: 510.90 (min 470.7 / max 595.97)
Model: Face Detection Retail FP16 (FPS, more is better): 32 z: 3924.86 | 32: 3921.50 | 32 c: 3877.91 | 32 d: 3869.70
Model: Face Detection Retail FP16 (ms, fewer is better): 32 z: 3.90 (min 2.18 / max 64.81) | 32: 3.91 (min 2.2 / max 72.73) | 32 c: 4.03 (min 2.23 / max 54.09) | 32 d: 4.03 (min 2.23 / max 62.26)
Model: Road Segmentation ADAS FP16 (FPS, more is better): 32 z: 579.41 | 32: 576.18 | 32 c: 554.68 | 32 d: 553.65
Model: Road Segmentation ADAS FP16 (ms, fewer is better): 32 z: 27.53 (min 18.86 / max 82.58) | 32: 27.69 (min 18.56 / max 147.54) | 32 c: 28.77 (min 17.12 / max 135.79) | 32 d: 28.82 (min 19.39 / max 99.16)
Model: Vehicle Detection FP16-INT8 (FPS, more is better): 32 z: 1964.99 | 32: 1960.18 | 32 d: 1862.24 | 32 c: 1860.99
Model: Vehicle Detection FP16-INT8 (ms, fewer is better): 32 z: 8.05 (min 4.56 / max 76.48) | 32: 8.07 (min 4.55 / max 69.24) | 32 c: 8.52 (min 4.97 / max 67.6) | 32 d: 8.52 (min 4.8 / max 75.53)
Model: Weld Porosity Detection FP16 (FPS, more is better): 32: 1704.26 | 32 z: 1704.02 | 32 d: 1628.91 | 32 c: 1627.93
Model: Weld Porosity Detection FP16 (ms, fewer is better): 32: 18.69 (min 9.97 / max 81.33) | 32 z: 18.69 (min 9.78 / max 86.93) | 32 d: 19.56 (min 13.73 / max 73.6) | 32 c: 19.58 (min 10.24 / max 83.63)
Model: Face Detection Retail FP16-INT8 (FPS, more is better): 32 z: 5751.58 | 32: 5747.65 | 32 d: 5423.13 | 32 c: 5416.31
Model: Face Detection Retail FP16-INT8 (ms, fewer is better): 32: 5.41 (min 3.17 / max 57.08) | 32 z: 5.42 (min 3.15 / max 67.23) | 32 c: 5.78 (min 3.21 / max 58.78) | 32 d: 5.78 (min 3.37 / max 65.27)
Model: Road Segmentation ADAS FP16-INT8 (FPS, more is better): 32 z: 666.30 | 32: 666.22 | 32 d: 634.50 | 32 c: 632.92
Model: Road Segmentation ADAS FP16-INT8 (ms, fewer is better): 32: 23.95 (min 13.94 / max 114.01) | 32 z: 23.95 (min 15.19 / max 90.71) | 32 d: 25.16 (min 19.24 / max 86.7) | 32 c: 25.22 (min 21.61 / max 89.16)
Model: Machine Translation EN To DE FP16 (FPS, more is better): 32 z: 201.15 | 32: 199.90 | 32 d: 195.05 | 32 c: 194.21
Model: Machine Translation EN To DE FP16 (ms, fewer is better): 32 z: 79.39 (min 43.97 / max 186.13) | 32: 79.82 (min 42.02 / max 179.47) | 32 d: 81.87 (min 52.13 / max 175.84) | 32 c: 82.18 (min 58.39 / max 175.7)
Model: Weld Porosity Detection FP16-INT8 (FPS, more is better): 32: 3300.99 | 32 z: 3299.93 | 32 d: 3100.95 | 32 c: 3099.20
Model: Weld Porosity Detection FP16-INT8 (ms, fewer is better): 32: 9.56 (min 5.1 / max 77.12) | 32 z: 9.56 (min 5.09 / max 75.37) | 32 d: 10.21 (min 5.17 / max 61.15) | 32 c: 10.22 (min 5.48 / max 68.07)
Model: Person Vehicle Bike Detection FP16 (FPS, more is better): 32: 1741.57 | 32 z: 1735.64 | 32 d: 1696.50 | 32 c: 1694.01
Model: Person Vehicle Bike Detection FP16 (ms, fewer is better): 32: 9.12 (min 6.22 / max 56.95) | 32 z: 9.16 (min 5.99 / max 67.91) | 32 d: 9.37 (min 6.07 / max 71.06) | 32 c: 9.39 (min 5.95 / max 68.66)
Model: Handwritten English Recognition FP16 (FPS, more is better): 32: 898.60 | 32 z: 896.69 | 32 c: 853.38 | 32 d: 848.62
Model: Handwritten English Recognition FP16 (ms, fewer is better): 32: 35.51 (min 22.8 / max 100.53) | 32 z: 35.59 (min 24.72 / max 147.24) | 32 c: 37.40 (min 27.33 / max 92.33) | 32 d: 37.61 (min 24.11 / max 127.49)
Model: Age Gender Recognition Retail 0013 FP16 (FPS, more is better): 32: 40123.62 | 32 z: 40101.80 | 32 d: 39843.05 | 32 c: 39562.87
Model: Age Gender Recognition Retail 0013 FP16 (ms, fewer is better): 32: 0.65 (min 0.36 / max 51.48) | 32 z: 0.66 (min 0.36 / max 65.79) | 32 c: 0.67 (min 0.36 / max 62.87) | 32 d: 0.67 (min 0.36 / max 50.74)
Model: Handwritten English Recognition FP16-INT8 (FPS, more is better): 32: 745.00 | 32 z: 730.82 | 32 c: 692.02 | 32 d: 690.24
Model: Handwritten English Recognition FP16-INT8 (ms, fewer is better): 32: 42.87 (min 35.14 / max 107.5) | 32 z: 43.71 (min 35.06 / max 153.84) | 32 c: 46.17 (min 39.81 / max 161.92) | 32 d: 46.28 (min 30.15 / max 108.49)
Model: Age Gender Recognition Retail 0013 FP16-INT8 (FPS, more is better): 32 z: 52475.39 | 32: 52441.94 | 32 c: 52382.31 | 32 d: 52344.60
Model: Age Gender Recognition Retail 0013 FP16-INT8 (ms, fewer is better): 32 z: 0.47 (min 0.27 / max 64.47) | 32: 0.48 (min 0.27 / max 50.11) | 32 c: 0.48 (min 0.27 / max 50.17) | 32 d: 0.48 (min 0.27 / max 65.55)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie
OSPRay Studio 0.13 (Resolution: 4K - Renderer: Path Tracer - Acceleration: CPU; ms, fewer is better)

Camera: 1 - Samples Per Pixel: 1: 32: 3404 | 32 z: 3406 | 32 c: 3493 | 32 d: 3499
Camera: 2 - Samples Per Pixel: 1: 32 z: 3446 | 32: 3451 | 32 c: 3515 | 32 d: 3522
Camera: 3 - Samples Per Pixel: 1: 32 z: 4048 | 32: 4049 | 32 d: 4132 | 32 c: 4157
Camera: 1 - Samples Per Pixel: 16: 32: 60673 | 32 z: 61430 | 32 c: 62802 | 32 d: 63336
Camera: 1 - Samples Per Pixel: 32: 32 z: 115669 | 32: 116377 | 32 c: 118221 | 32 d: 118802
Camera: 2 - Samples Per Pixel: 16: 32: 61987 | 32 z: 62113 | 32 d: 62787 | 32 c: 63402
Camera: 2 - Samples Per Pixel: 32: 32: 116566 | 32 z: 116972 | 32 c: 118980 | 32 d: 119783
Camera: 3 - Samples Per Pixel: 16: 32: 71361 | 32 z: 71495 | 32 c: 73024 | 32 d: 73329
Camera: 3 - Samples Per Pixel: 32: 32 z: 136312 | 32: 136464 | 32 d: 139445 | 32 c: 139685
PyTorch 2.1 (Device: CPU; batches/sec, more is better)

Batch Size: 1 - Model: ResNet-50: 32 d: 53.30 (min 50.97 / max 53.84) | 32 c: 53.00 (min 50.62 / max 53.51) | 32 z: 52.78 (min 17.43 / max 53.32) | 32: 52.44 (min 15.02 / max 53.14)
Batch Size: 1 - Model: ResNet-152: 32: 19.04 (min 6.89 / max 19.18) | 32 z: 18.92 (min 7.59 / max 19.04) | 32 d: 18.86 (min 7.91 / max 19.03) | 32 c: 18.86 (min 10.78 / max 19.02)
Batch Size: 16 - Model: ResNet-50: 32 c: 40.32 (min 15.51 / max 40.87) | 32 d: 40.31 (min 15.27 / max 40.73) | 32: 40.19 (min 15.55 / max 40.67) | 32 z: 39.96 (min 15.13 / max 40.53)
Batch Size: 16 - Model: ResNet-152: 32: 15.61 (min 6.89 / max 15.74) | 32 z: 15.51 (min 7.3 / max 15.63) | 32 d: 15.35 (min 8.86 / max 15.52) | 32 c: 15.32 (min 6.91 / max 15.45)
Batch Size: 1 - Model: Efficientnet_v2_l: 32 d: 10.21 (min 5.69 / max 10.32) | 32 c: 10.04 (min 5.86 / max 10.23) | 32: 9.85 (min 5.1 / max 9.99) | 32 z: 9.82 (min 5.63 / max 10.05)
Batch Size: 16 - Model: Efficientnet_v2_l: 32 c: 7.18 (min 4.37 / max 7.37) | 32: 7.17 (min 4.45 / max 7.33) | 32 d: 7.15 (min 4.34 / max 7.3) | 32 z: 7.11 (min 4.25 / max 7.26)
QuantLib 1.32 - Configuration: Multi-Threaded (MFLOPS, more is better): 32 z: 107381.6 | 32: 107079.2 | 32 c: 98916.2 | 32 d: 98618.7
1. (CXX) g++ options: -O3 -march=native -fPIE -pie
Quicksilver 20230818

Input: CORAL2 P1 (Figure Of Merit, more is better): c: 21250000 | b: 21180000 | 32 d: 18840000 | 32: 18790000 | 32 z: 18760000 | Zen 1 - EPYC 7601: 12996667 (SE +/- 66916.20, N = 3) | 32 c: 1040000
Input: CORAL2 P1 (Figure Of Merit Per Watt, more is better): Zen 1 - EPYC 7601: 22248.55
CPU Power Consumption Monitor, CORAL2 P1 run (Watts): Zen 1 - EPYC 7601: Min: 243 / Avg: 584 / Max: 648
Input: CORAL2 P2 (Figure Of Merit, more is better): c: 16150000 | b: 16140000 | 32: 15350000 | 32 z: 15230000 | 32 c: 15180000 | 32 d: 15100000 | Zen 1 - EPYC 7601: 15013333 (SE +/- 37118.43, N = 3)
Input: CORAL2 P2 (Figure Of Merit Per Watt, more is better): Zen 1 - EPYC 7601: 27116.87
CPU Power Consumption Monitor, CORAL2 P2 run (Watts): Zen 1 - EPYC 7601: Min: 255.2 / Avg: 553.7 / Max: 594.9
Input: CTS2 (Figure Of Merit, more is better): b: 16270000 | c: 16260000 | 32 c: 14430000 | 32: 14320000 | 32 z: 14290000 | 32 d: 14280000 | Zen 1 - EPYC 7601: 11426667 (SE +/- 16666.67, N = 3)
Input: CTS2 (Figure Of Merit Per Watt, more is better): Zen 1 - EPYC 7601: 18307.66
CPU Power Consumption Monitor, CTS2 run (Watts): Zen 1 - EPYC 7601: Min: 258.88 / Avg: 624.15 / Max: 662.04
1. (CXX) g++ options: -fopenmp -O3 -march=native
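The "Figure Of Merit Per Watt" rows above are derived directly from the raw figure of merit and the power monitor's average draw. A minimal sketch, assuming per-watt = FOM / average watts, using the Zen 1 - EPYC 7601 numbers from this file (the reported per-watt values differ only by rounding of the displayed average wattage):

```python
# Zen 1 - EPYC 7601 Quicksilver results from this result file:
# input name -> (figure of merit, average CPU power in watts)
results = {
    "CORAL2 P1": (12996667, 584.0),
    "CORAL2 P2": (15013333, 553.7),
    "CTS2":      (11426667, 624.15),
}

for name, (fom, avg_watts) in results.items():
    # Efficiency metric: figure of merit per watt of average CPU draw
    per_watt = fom / avg_watts
    print(f"{name}: {per_watt:.2f} FOM/W")
```

Running this reproduces the reported 22248.55, 27116.87, and 18307.66 FOM/W figures to within the precision of the displayed average power readings.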
RocksDB 8.0 (Op/s, more is better)

Test: Random Read: 32 z: 177167636 | 32: 176770468 | 32 d: 160707812 | 32 c: 160665305
Test: Update Random: 32 z: 636242 | 32 c: 633688 | 32: 630575 | 32 d: 630478
Test: Read While Writing: 32 c: 4419497 | 32 z: 4364996 | 32: 4284691 | 32 d: 4244478
Test: Read Random Write Random: 32: 2373654 | 32 z: 2361270 | 32 d: 2351568 | 32 c: 2327800
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
Speedb 2.7 (Op/s, more is better)

Test: Random Read: 32: 179685954 | 32 z: 179434924 | 32 d: 163512432 | 32 c: 163202721
Test: Update Random: 32 c: 317758 | 32: 314123 | 32 z: 314114 | 32 d: 313683
Test: Read While Writing: 32 c: 7746346 | 32: 7457600 | 32 z: 7210235 | 32 d: 7105602
Test: Read Random Write Random: 32 z: 2259344 | 32: 2231403 | 32 c: 2229494 | 32 d: 2215896
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
SVT-AV1 1.8 (Input: Bosphorus 4K; Frames Per Second, more is better)

Encoder Mode: Preset 4: 32 d: 5.977 | 32 z: 5.899 | 32 c: 5.829 | 32: 5.801
Encoder Mode: Preset 8: 32 z: 58.72 | 32 d: 58.64 | 32: 48.45 | 32 c: 47.25
Encoder Mode: Preset 12: 32: 186.63 | 32 d: 186.37 | 32 z: 185.56 | 32 c: 180.96
Encoder Mode: Preset 13: 32: 185.67 | 32 z: 184.98 | 32 d: 184.10 | 32 c: 183.90
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
TensorFlow Device: CPU - Batch Size: 1 - Model: VGG-16 OpenBenchmarking.org images/sec, More Is Better TensorFlow 2.12 Device: CPU - Batch Size: 1 - Model: VGG-16 32 c 32 d 32 z 32 3 6 9 12 15 9.77 9.75 9.75 9.73
TensorFlow Device: CPU - Batch Size: 1 - Model: AlexNet OpenBenchmarking.org images/sec, More Is Better TensorFlow 2.12 Device: CPU - Batch Size: 1 - Model: AlexNet 32 c 32 d 32 32 z 8 16 24 32 40 33.14 33.02 32.12 31.92
TensorFlow Device: CPU - Batch Size: 16 - Model: VGG-16 OpenBenchmarking.org images/sec, More Is Better TensorFlow 2.12 Device: CPU - Batch Size: 16 - Model: VGG-16 32 z 32 32 d 32 c 6 12 18 24 30 25.20 25.15 24.51 24.47
TensorFlow Device: CPU - Batch Size: 16 - Model: AlexNet OpenBenchmarking.org images/sec, More Is Better TensorFlow 2.12 Device: CPU - Batch Size: 16 - Model: AlexNet 32 d 32 c 32 z 32 60 120 180 240 300 276.19 274.97 274.97 272.93
TensorFlow 2.12 - Device: CPU - Batch Size: 1 - Model: GoogLeNet (images/sec, More Is Better)
    32:   28.99
    32 d: 28.79
    32 z: 28.71
    32 c: 27.73
TensorFlow 2.12 - Device: CPU - Batch Size: 1 - Model: ResNet-50 (images/sec, More Is Better)
    32 z: 8.77
    32:   8.74
    32 c: 8.61
    32 d: 8.59
TensorFlow 2.12 - Device: CPU - Batch Size: 16 - Model: GoogLeNet (images/sec, More Is Better)
    32:   158.47
    32 d: 158.08
    32 c: 157.60
    32 z: 155.77
TensorFlow 2.12 - Device: CPU - Batch Size: 16 - Model: ResNet-50 (images/sec, More Is Better)
    32 z: 51.57
    32 c: 51.56
    32 d: 51.49
    32:   51.34
Timed FFmpeg Compilation 6.1 - Time To Compile (Seconds, Fewer Is Better)
    32:   23.56
    32 z: 23.76
    32 d: 24.30
    32 c: 24.45
Timed Gem5 Compilation 23.0.1 - Time To Compile (Seconds, Fewer Is Better)
    32:   254.01
    32 c: 258.31
    32 d: 258.93
    32 z: 272.61
Timed Linux Kernel Compilation 6.1 - Build: defconfig (Seconds, Fewer Is Better)
    32 z: 52.01
    32:   52.13
    32 c: 53.62
    32 d: 53.63
Timed Linux Kernel Compilation 6.1 - Build: allmodconfig (Seconds, Fewer Is Better)
    32:   433.79
    32 z: 434.19
    32 d: 452.61
    32 c: 453.69
Xmrig 6.21 - Variant: KawPow - Hash Count: 1M (H/s, More Is Better)
    32 z: 18961.3
    32 c: 18947.3
    32 d: 18901.1
    32:   18777.2
    1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc
Xmrig 6.21 - Variant: Monero - Hash Count: 1M (H/s, More Is Better)
    32 c: 18897.5
    32 d: 18866.1
    32:   18845.5
    32 z: 18763.8
Xmrig 6.21 - Variant: Wownero - Hash Count: 1M (H/s, More Is Better)
    32 z: 25943.7
    32:   25814.4
    32 d: 25396.8
    32 c: 25385.9
Xmrig 6.21 - Variant: GhostRider - Hash Count: 1M (H/s, More Is Better)
    32 c: 4136.3
    32 d: 4095.7
    32:   4067.4
    32 z: 4038.6
Xmrig 6.21 - Variant: CryptoNight-Heavy - Hash Count: 1M (H/s, More Is Better)
    32:   19004.5
    32 z: 18936.5
    32 d: 18924.0
    32 c: 18783.9
Xmrig 6.21 - Variant: CryptoNight-Femto UPX2 - Hash Count: 1M (H/s, More Is Better)
    32 z: 18909.0
    32 c: 18887.5
    32:   18860.1
    32 d: 18818.6
Y-Cruncher 0.8.3 - Pi Digits To Calculate: 500M (Seconds, Fewer Is Better; SE +/- 0.118, N = 3)
    b:    5.202
    c:    5.213
    32:   5.656
    32 z: 5.685
    32 d: 5.751
    32 c: 5.783
    Zen 1 - EPYC 7601: 15.693
Y-Cruncher 0.8.3 - CPU Power Consumption Monitor (Watts, Fewer Is Better)
    Zen 1 - EPYC 7601: Min 262 / Avg 543 / Max 712
Y-Cruncher 0.8.3 - Pi Digits To Calculate: 1B (Seconds, Fewer Is Better; SE +/- 0.09, N = 3)
    b:    10.42
    c:    10.48
    32 z: 11.60
    32:   11.68
    32 c: 11.90
    32 d: 11.98
    Zen 1 - EPYC 7601: 33.92
Y-Cruncher 0.8.3 - CPU Power Consumption Monitor (Watts, Fewer Is Better)
    Zen 1 - EPYC 7601: Min 263 / Avg 602 / Max 718
Phoronix Test Suite v10.8.4