epyc 9654 AMD March - Tests for a future article. 2 x AMD EPYC 9654 96-Core testing with an AMD Titanite_4G (RTI1004D BIOS) motherboard and ASPEED graphics on Ubuntu 23.04 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2303299-NE-EPYC9654A14&rdt&grr.
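This result file can be re-run locally for a side-by-side comparison. A minimal sketch, assuming the Phoronix Test Suite is installed and on PATH; the `benchmark <result-id>` subcommand is PTS's workflow for reproducing a public OpenBenchmarking.org result:

```python
# Re-run this comparison locally via the Phoronix Test Suite.
# Assumes phoronix-test-suite is installed and on PATH.
import subprocess

RESULT_ID = "2303299-NE-EPYC9654A14"  # the public result ID from the URL above

# 'benchmark <id>' downloads the result file, runs the same tests, and
# appends the local system as a new entry alongside runs a-e.
subprocess.run(["phoronix-test-suite", "benchmark", RESULT_ID], check=True)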
epyc 9654 AMD March - System Details

Runs a, b, c (single socket):
Processor: AMD EPYC 9654 96-Core @ 3.71GHz (96 Cores / 192 Threads)
Memory: 768GB

Runs d, e (dual socket):
Processor: 2 x AMD EPYC 9654 96-Core @ 3.71GHz (192 Cores / 384 Threads)
Memory: 1520GB

Common to all runs:
Motherboard: AMD Titanite_4G (RTI1004D BIOS)
Chipset: AMD Device 14a4
Disk: 800GB INTEL SSDPF21Q800GB
Graphics: ASPEED
Monitor: VGA HDMI
Network: Broadcom NetXtreme BCM5720 PCIe
OS: Ubuntu 23.04
Kernel: 5.19.0-21-generic (x86_64)
Desktop: GNOME Shell 43.1
Display Server: X Server 1.21.1.4
Vulkan: 1.3.224
Compiler: GCC 12.2.0
File-System: ext4
Screen Resolution: 1920x1080

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-l0Aoyl/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-l0Aoyl/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: amd-pstate performance (Boost: Enabled); CPU Microcode: 0xa101111
Python Details: Python 3.10.9
Security Details: itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; retbleed: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected; srbds: Not affected; tsx_async_abort: Not affected
[Overview table omitted: the exported summary listing every test identifier alongside its raw values for runs a-e was flattened beyond recovery during HTML extraction. It covered RocksDB, TensorFlow, OpenCV, MariaDB (mysqlslap), FFmpeg, OpenSSL, ClickHouse, PostgreSQL (pgbench), timed code compilation (LLVM, Node.js, Godot, FFmpeg, Build2), ONNX Runtime, nginx, Apache, Zstd, Memcached, DAPHNE, Neural Magic DeepSparse, John The Ripper, GROMACS, SPECFEM3D, Embree, dav1d, and Draco. The per-test results below present the same data in readable form.]
RocksDB 8.0 - Test: Sequential Fill (Op/s, More Is Better)
a: 662613, b: 662046, c: 660783, d: 438833, e: 438667
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
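Since runs a-c come from the single-socket configuration and d-e from the dual-socket one, each entry doubles as a 2P scaling data point. A small sketch of that arithmetic, using the Sequential Fill numbers above:

```python
# 2P-vs-1P scaling from the Sequential Fill entry above:
# runs a-c are single-socket, d-e dual-socket (Op/s).
from statistics import mean

one_socket = [662613, 662046, 660783]  # runs a, b, c
two_socket = [438833, 438667]          # runs d, e

print(f"2P/1P ratio: {mean(two_socket) / mean(one_socket):.2f}x")  # ~0.66x
```

Write-heavy RocksDB fills actually regress on the dual-socket system, a NUMA-sensitivity pattern that recurs in several entries below.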
TensorFlow 2.12 - Device: CPU - Batch Size: 512 - Model: ResNet-50 (images/sec, More Is Better)
a: 163.85, b: 163.76, c: 163.66, d: 166.89, e: 171.18

OpenCV 4.7 - Test: Graph API (ms, Fewer Is Better)
a: 230494, b: 204945, c: 207090, d: 390167, e: 382454
1. (CXX) g++ options: -fPIC -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -shared

MariaDB 11.0.1 - Clients: 8192 (Queries Per Second, More Is Better)
a: 446, b: 438, c: 439, d: 384, e: 293
1. (CXX) g++ options: -fPIC -pie -fstack-protector -O3 -shared -lrt -lpthread -lz -ldl -lm -lstdc++

FFmpeg 6.0 - Encoder: libx264 - Scenario: Upload (FPS, More Is Better)
a: 12.45, b: 12.42, c: 12.45, d: 12.70, e: 12.66
1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg 6.0 - Encoder: libx264 - Scenario: Upload (Seconds, Fewer Is Better)
a: 202.83, b: 203.29, c: 202.88, d: 198.82, e: 199.47
1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

OpenCV 4.7 - Test: Stitching (ms, Fewer Is Better)
a: 190987, b: 190687, c: 191634, d: 268869, e: 241229
1. (CXX) g++ options: -fPIC -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -shared

Timed LLVM Compilation 16.0 - Build System: Unix Makefiles (Seconds, Fewer Is Better)
a: 217.19, b: 217.45, c: 214.02, d: 199.26, e: 200.02

TensorFlow 2.12 - Device: CPU - Batch Size: 256 - Model: ResNet-50 (images/sec, More Is Better)
a: 146.79, b: 146.90, c: 146.99, d: 131.65, e: 132.39
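For orientation, a hedged sketch of how an images/sec figure at a fixed batch size can be measured with stock Keras; this is illustrative only, not the PTS TensorFlow harness:

```python
# Illustrative CPU inference throughput (images/sec) for ResNet-50 at
# batch size 256. Not the actual PTS TensorFlow test harness.
import time
import numpy as np
import tensorflow as tf

BATCH = 256
model = tf.keras.applications.ResNet50(weights=None)  # random weights suffice for timing
batch = np.random.rand(BATCH, 224, 224, 3).astype("float32")

model.predict(batch, verbose=0)  # warm-up
steps = 5
start = time.perf_counter()
for _ in range(steps):
    model.predict(batch, verbose=0)
print(f"{steps * BATCH / (time.perf_counter() - start):.1f} images/sec")
```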
OpenCV 4.7 - Test: Image Processing (ms, Fewer Is Better)
a: 119907, b: 119436, c: 122137, d: 333961, e: 312082
1. (CXX) g++ options: -fPIC -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -shared

MariaDB 11.0.1 - Clients: 4096 (Queries Per Second, More Is Better)
a: 654, b: 693, c: 678, d: 578, e: 351
1. (CXX) g++ options: -fPIC -pie -fstack-protector -O3 -shared -lrt -lpthread -lz -ldl -lm -lstdc++

OpenSSL 3.1 - Algorithm: SHA512 (byte/s, More Is Better)
a: 40028926290, b: 40018641540, c: 40002428390, d: 79615585770, e: 79537665830
1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

OpenSSL 3.1 - Algorithm: SHA256 (byte/s, More Is Better)
a: 129947980460, b: 129484883100, c: 130061831940, d: 258641794620, e: 258600679990
1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

OpenSSL 3.1 - Algorithm: AES-256-GCM (byte/s, More Is Better)
a: 780271471000, b: 776495266470, c: 779191711330, d: 1552708662150, e: 1551466680320
1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

OpenSSL 3.1 - Algorithm: AES-128-GCM (byte/s, More Is Better)
a: 908982494280, b: 910814515750, c: 909186377320, d: 1810537338580, e: 1804869906900
1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

OpenSSL 3.1 - Algorithm: ChaCha20-Poly1305 (byte/s, More Is Better)
a: 356999237630, b: 356991460690, c: 356961832960, d: 710753283550, e: 710739693380
1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

OpenSSL 3.1 - Algorithm: ChaCha20 (byte/s, More Is Better)
a: 510745602460, b: 506631603000, c: 510959296510, d: 1017352168790, e: 1017084218150
1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl
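These OpenSSL figures scale almost exactly 2x from runs a-c to d-e because bulk crypto is embarrassingly parallel across cores. A hedged single-threaded sketch of what a byte/s figure measures, using the `cryptography` package rather than OpenSSL's own speed harness:

```python
# Bulk AES-256-GCM encryption throughput, single-threaded, via the
# 'cryptography' package (assumed installed). Illustrative only.
import os
import time
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

aead = AESGCM(AESGCM.generate_key(bit_length=256))
nonce = os.urandom(12)
payload = os.urandom(1 << 20)  # 1 MiB

iters = 100
start = time.perf_counter()
for _ in range(iters):
    aead.encrypt(nonce, payload, None)
elapsed = time.perf_counter() - start
print(f"{iters * len(payload) / elapsed / 1e9:.2f} GB/s on one thread")
```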
FFmpeg 6.0 - Encoder: libx264 - Scenario: Video On Demand (FPS, More Is Better)
a: 48.08, b: 48.19, c: 48.27, d: 48.76, e: 48.60
1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg 6.0 - Encoder: libx264 - Scenario: Video On Demand (Seconds, Fewer Is Better)
a: 157.53, b: 157.20, c: 156.93, d: 155.36, e: 155.85
1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg 6.0 - Encoder: libx264 - Scenario: Platform (FPS, More Is Better)
a: 48.25, b: 48.26, c: 48.25, d: 48.78, e: 48.66
1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg 6.0 - Encoder: libx264 - Scenario: Platform (Seconds, Fewer Is Better)
a: 156.98, b: 156.98, c: 157.00, d: 155.28, e: 155.68
1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

ClickHouse 22.12.3.5 - 100M Rows Hits Dataset, Third Run (Queries Per Minute, Geo Mean, More Is Better)
a: 612.78 (MIN: 59.52 / MAX: 5454.55), b: 606.58 (MIN: 58.71 / MAX: 5454.55), c: 623.39 (MIN: 57.97 / MAX: 7500), d: 551.92 (MIN: 87.98 / MAX: 6000), e: 568.25 (MIN: 90.09 / MAX: 6666.67)

ClickHouse 22.12.3.5 - 100M Rows Hits Dataset, Second Run (Queries Per Minute, Geo Mean, More Is Better)
a: 602.50 (MIN: 58.14 / MAX: 6666.67), b: 603.93 (MIN: 59 / MAX: 7500), c: 592.33 (MIN: 58.2 / MAX: 5000), d: 536.53 (MIN: 74.17 / MAX: 6666.67), e: 536.90 (MIN: 75.09 / MAX: 6000)

ClickHouse 22.12.3.5 - 100M Rows Hits Dataset, First Run / Cold Cache (Queries Per Minute, Geo Mean, More Is Better)
a: 584.98 (MIN: 58.37 / MAX: 6000), b: 582.44 (MIN: 56.98 / MAX: 6000), c: 578.76 (MIN: 57.75 / MAX: 6000), d: 525.86 (MIN: 60.3 / MAX: 5454.55), e: 527.95 (MIN: 61.35 / MAX: 5454.55)
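The ClickHouse unit is a geometric mean of per-query rates, which is why the headline number sits far from the MIN/MAX extremes: the geo mean damps outliers that an arithmetic mean would amplify. A minimal illustration (the per-query values below are hypothetical):

```python
# Geometric mean of per-query rates, the aggregation used by the
# ClickHouse results above.
from math import prod

def geo_mean(values):
    return prod(values) ** (1 / len(values))

per_query_qpm = [59.5, 480.0, 1200.0, 5454.5]  # hypothetical per-query rates
print(f"{geo_mean(per_query_qpm):.2f} QPM")     # far below the arithmetic mean
```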
MariaDB 11.0.1 - Clients: 2048 (Queries Per Second, More Is Better)
a: 860, b: 839, c: 852, d: 650, e: 327
1. (CXX) g++ options: -fPIC -pie -fstack-protector -O3 -shared -lrt -lpthread -lz -ldl -lm -lstdc++

MariaDB 11.0.1 - Clients: 1024 (Queries Per Second, More Is Better)
a: 912, b: 874, c: 873, d: 561, e: 336
1. (CXX) g++ options: -fPIC -pie -fstack-protector -O3 -shared -lrt -lpthread -lz -ldl -lm -lstdc++

MariaDB 11.0.1 - Clients: 512 (Queries Per Second, More Is Better)
a: 915, b: 898, c: 894, d: 624, e: 334
1. (CXX) g++ options: -fPIC -pie -fstack-protector -O3 -shared -lrt -lpthread -lz -ldl -lm -lstdc++

FFmpeg 6.0 - Encoder: libx265 - Scenario: Upload (FPS, More Is Better)
a: 28.18, b: 28.14, c: 28.13, d: 28.15, e: 28.18
1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg 6.0 - Encoder: libx265 - Scenario: Upload (Seconds, Fewer Is Better)
a: 89.61, b: 89.73, c: 89.78, d: 89.69, e: 89.59
1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg 6.0 - Encoder: libx265 - Scenario: Platform (FPS, More Is Better)
a: 57.13, b: 56.98, c: 57.01, d: 56.37, e: 55.28
1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg 6.0 - Encoder: libx265 - Scenario: Platform (Seconds, Fewer Is Better)
a: 132.60, b: 132.95, c: 132.87, d: 134.38, e: 137.04
1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg 6.0 - Encoder: libx265 - Scenario: Video On Demand (FPS, More Is Better)
a: 57.02, b: 57.18, c: 57.12, d: 57.42, e: 56.39
1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg 6.0 - Encoder: libx265 - Scenario: Video On Demand (Seconds, Fewer Is Better)
a: 132.84, b: 132.48, c: 132.63, d: 131.93, e: 134.34
1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

PostgreSQL 15 - Scaling Factor: 100 - Clients: 800 - Mode: Read Only - Average Latency (ms, Fewer Is Better)
a: 0.209, b: 0.210, c: 0.211, d: 0.220, e: 0.244
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

PostgreSQL 15 - Scaling Factor: 100 - Clients: 800 - Mode: Read Only (TPS, More Is Better)
a: 3833822, b: 3816666, c: 3785876, d: 3643496, e: 3274920
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
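The latency and TPS charts for each pgbench configuration are two views of the same measurement: with a closed-loop benchmark, average latency is approximately clients / TPS. Checking that against the runs above:

```python
# Little's-law sanity check for pgbench (closed loop):
# average latency (ms) ~= clients / TPS * 1000.
clients = 800
tps = {"a": 3833822, "b": 3816666, "c": 3785876, "d": 3643496, "e": 3274920}
for run, t in tps.items():
    print(f"{run}: {clients / t * 1000:.3f} ms")
# prints 0.209, 0.210, 0.211, 0.220, 0.244 - matching the latency chart above
```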
PostgreSQL 15 - Scaling Factor: 100 - Clients: 1000 - Mode: Read Write - Average Latency (ms, Fewer Is Better)
a: 18.46, b: 18.32, c: 18.48, d: 22.31, e: 71.16
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

PostgreSQL 15 - Scaling Factor: 100 - Clients: 1000 - Mode: Read Write (TPS, More Is Better)
a: 54169, b: 54579, c: 54120, d: 44818, e: 14053
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

PostgreSQL 15 - Scaling Factor: 100 - Clients: 800 - Mode: Read Write - Average Latency (ms, Fewer Is Better)
a: 13.73, b: 12.98, c: 12.17, d: 17.07, e: 47.00
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

PostgreSQL 15 - Scaling Factor: 100 - Clients: 800 - Mode: Read Write (TPS, More Is Better)
a: 58262, b: 61635, c: 65740, d: 46860, e: 17020
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

PostgreSQL 15 - Scaling Factor: 100 - Clients: 1000 - Mode: Read Only - Average Latency (ms, Fewer Is Better)
a: 0.267, b: 0.268, c: 0.265, d: 0.296, e: 0.289
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

PostgreSQL 15 - Scaling Factor: 100 - Clients: 1000 - Mode: Read Only (TPS, More Is Better)
a: 3741941, b: 3730123, c: 3776352, d: 3381479, e: 3461133
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

PostgreSQL 15 - Scaling Factor: 1 - Clients: 1000 - Mode: Read Write - Average Latency (ms, Fewer Is Better)
a: 2619.51, b: 2099.01, c: 2357.91, d: 2198.90, e: 1871.56
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

PostgreSQL 15 - Scaling Factor: 1 - Clients: 1000 - Mode: Read Write (TPS, More Is Better)
a: 382, b: 476, c: 424, d: 455, e: 534
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

PostgreSQL 15 - Scaling Factor: 1 - Clients: 800 - Mode: Read Write - Average Latency (ms, Fewer Is Better)
a: 1184.08, b: 1449.39, c: 1125.10, d: 1415.16, e: 1529.11
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

PostgreSQL 15 - Scaling Factor: 1 - Clients: 800 - Mode: Read Write (TPS, More Is Better)
a: 676, b: 552, c: 711, d: 565, e: 523
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

PostgreSQL 15 - Scaling Factor: 1 - Clients: 1000 - Mode: Read Only - Average Latency (ms, Fewer Is Better)
a: 0.267, b: 0.270, c: 0.272, d: 0.270, e: 0.269
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

PostgreSQL 15 - Scaling Factor: 1 - Clients: 1000 - Mode: Read Only (TPS, More Is Better)
a: 3738972, b: 3707315, c: 3672471, d: 3700926, e: 3716270
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

PostgreSQL 15 - Scaling Factor: 1 - Clients: 800 - Mode: Read Only - Average Latency (ms, Fewer Is Better)
a: 0.215, b: 0.210, c: 0.210, d: 0.220, e: 0.218
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

PostgreSQL 15 - Scaling Factor: 1 - Clients: 800 - Mode: Read Only (TPS, More Is Better)
a: 3718995, b: 3803554, c: 3804637, d: 3638802, e: 3669550
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

OpenCV 4.7 - Test: Core (ms, Fewer Is Better)
a: 65772, b: 65256, c: 68743, d: 267066, e: 182548
1. (CXX) g++ options: -fPIC -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -shared

Timed Node.js Compilation 19.8.1 - Time To Compile (Seconds, Fewer Is Better)
a: 133.70, b: 132.80, c: 133.11, d: 106.05, e: 104.36

Timed LLVM Compilation 16.0 - Build System: Ninja (Seconds, Fewer Is Better)
a: 125.88, b: 126.29, c: 126.23, d: 97.85, e: 97.52

TensorFlow 2.12 - Device: CPU - Batch Size: 512 - Model: GoogLeNet (images/sec, More Is Better)
a: 516.87, b: 515.11, c: 518.52, d: 530.51, e: 521.87

Timed Godot Game Engine Compilation 4.0 - Time To Compile (Seconds, Fewer Is Better)
a: 107.66, b: 107.35, c: 107.08, d: 97.31, e: 98.01

ONNX Runtime 1.14 - Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel (Inference Time Cost in ms, Fewer Is Better)
a: 797.88, b: 794.91, c: 853.82, d: 1083.89, e: 1135.38
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel (Inferences Per Second, More Is Better)
a: 1.253310, b: 1.258000, c: 1.171210, d: 0.922595, e: 0.880761
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt
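The Parallel/Standard split in every ONNX Runtime entry refers to the session's executor. A hedged sketch of the knob these labels most plausibly map to (the model path is hypothetical); note that in the entries below the sequential Standard executor often wins on this many-core system:

```python
# The "Parallel" vs "Standard" executors in these ONNX Runtime entries
# correspond to the session execution mode. Model path is hypothetical.
import onnxruntime as ort

opts = ort.SessionOptions()
opts.execution_mode = ort.ExecutionMode.ORT_PARALLEL      # "Parallel"
# opts.execution_mode = ort.ExecutionMode.ORT_SEQUENTIAL  # "Standard"

session = ort.InferenceSession("fcn-resnet101-11.onnx",   # hypothetical local path
                               sess_options=opts,
                               providers=["CPUExecutionProvider"])
```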
ONNX Runtime 1.14 - Model: GPT-2 - Device: CPU - Executor: Parallel (Inference Time Cost in ms, Fewer Is Better)
a: 6.27808, b: 6.24337, c: 6.25916, d: 8.92113, e: 8.87820
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: GPT-2 - Device: CPU - Executor: Parallel (Inferences Per Second, More Is Better)
a: 159.11, b: 159.99, c: 159.60, d: 111.96, e: 112.51
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: GPT-2 - Device: CPU - Executor: Standard (Inference Time Cost in ms, Fewer Is Better)
a: 7.78072, b: 7.75078, c: 7.88188, d: 7.88812, e: 9.71824
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: GPT-2 - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better)
a: 128.49, b: 128.98, c: 126.84, d: 126.74, e: 102.87
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard (Inference Time Cost in ms, Fewer Is Better)
a: 195.52, b: 204.39, c: 195.98, d: 209.02, e: 223.87
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better)
a: 5.11448, b: 4.89263, c: 5.10242, d: 4.78422, e: 4.46687
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: bertsquad-12 - Device: CPU - Executor: Parallel (Inference Time Cost in ms, Fewer Is Better)
a: 82.14, b: 81.50, c: 82.14, d: 112.94, e: 111.30
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: bertsquad-12 - Device: CPU - Executor: Parallel (Inferences Per Second, More Is Better)
a: 12.17420, b: 12.26920, c: 12.17350, d: 8.85423, e: 8.98441
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: bertsquad-12 - Device: CPU - Executor: Standard (Inference Time Cost in ms, Fewer Is Better)
a: 106.78, b: 83.24, c: 106.92, d: 94.82, e: 112.88
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: bertsquad-12 - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better)
a: 9.36520, b: 12.01250, c: 9.35231, d: 10.54650, e: 8.85844
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: yolov4 - Device: CPU - Executor: Parallel (Inference Time Cost in ms, Fewer Is Better)
a: 156.18, b: 157.15, c: 156.58, d: 244.02, e: 245.99
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: yolov4 - Device: CPU - Executor: Parallel (Inferences Per Second, More Is Better)
a: 6.40254, b: 6.36316, c: 6.38620, d: 4.09797, e: 4.06505
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: yolov4 - Device: CPU - Executor: Standard (Inference Time Cost in ms, Fewer Is Better)
a: 151.27, b: 154.16, c: 143.64, d: 206.36, e: 212.45
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: yolov4 - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better)
a: 6.61035, b: 6.48661, c: 6.96155, d: 4.84593, e: 4.70693
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel (Inference Time Cost in ms, Fewer Is Better)
a: 41.40, b: 40.65, c: 40.57, d: 75.36, e: 75.14
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel (Inferences Per Second, More Is Better)
a: 24.15, b: 24.60, c: 24.65, d: 13.27, e: 13.31
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard (Inference Time Cost in ms, Fewer Is Better)
a: 33.98, b: 30.81, c: 30.75, d: 50.59, e: 51.89
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better)
a: 29.43, b: 32.46, c: 32.52, d: 19.77, e: 19.27
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel (Inference Time Cost in ms, Fewer Is Better)
a: 32.49, b: 32.59, c: 34.45, d: 40.41, e: 37.56
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel (Inferences Per Second, More Is Better)
a: 30.78, b: 30.68, c: 29.03, d: 24.74, e: 26.62
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard (Inference Time Cost in ms, Fewer Is Better)
a: 27.00, b: 26.74, c: 26.98, d: 27.59, e: 28.14
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better)
a: 37.04, b: 37.39, c: 37.07, d: 36.24, e: 35.53
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

nginx 1.23.2 - Connections: 500 (Requests Per Second, More Is Better)
a: 240111.29, b: 237868.20, c: 241662.73, d: 196034.90, e: 194753.92
1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

Apache HTTP Server 2.4.56 - Concurrent Requests: 500 (Requests Per Second, More Is Better)
a: 173757.38, b: 208703.78, c: 185857.48, d: 141164.84, e: 142512.83
1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2
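The nginx and Apache figures are requests-per-second under N concurrent connections (the -lluajit-5.1 in the compile notes suggests the wrk load generator). A toy thread-pool version of the idea, purely illustrative and with a hypothetical local URL:

```python
# Toy concurrent HTTP load loop: N workers hammering one URL for a
# fixed duration, reporting requests/sec. Illustrative only; the real
# harness is a dedicated load generator.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/"  # hypothetical test server
WORKERS = 64                    # far below the 500 connections used above
DURATION = 5.0                  # seconds

def worker(deadline):
    done = 0
    while time.perf_counter() < deadline:
        with urllib.request.urlopen(URL) as resp:
            resp.read()
        done += 1
    return done

deadline = time.perf_counter() + DURATION
with ThreadPoolExecutor(max_workers=WORKERS) as pool:
    total = sum(pool.map(worker, [deadline] * WORKERS))
print(f"{total / DURATION:.0f} requests/sec")
```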
ONNX Runtime 1.14 - Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel (Inference Time Cost in ms, Fewer Is Better)
a: 1.66637, b: 1.77352, c: 1.66912, d: 3.05530, e: 3.08913
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel (Inferences Per Second, More Is Better)
a: 599.50, b: 563.30, c: 598.43, d: 326.98, e: 323.43
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

OpenCV 4.7 - Test: Features 2D (ms, Fewer Is Better)
a: 71850, b: 73789, c: 75180, d: 110697, e: 119368
1. (CXX) g++ options: -fPIC -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -shared

ONNX Runtime 1.14 - Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard (Inference Time Cost in ms, Fewer Is Better)
a: 1.90737, b: 1.80855, c: 1.86383, d: 2.39687, e: 2.07112
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better)
a: 524.04, b: 552.79, c: 536.34, d: 417.12, e: 482.69
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel (Inference Time Cost in ms, Fewer Is Better)
a: 5.13811, b: 5.15191, c: 5.45677, d: 9.86863, e: 9.78363
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel (Inferences Per Second, More Is Better)
a: 194.56, b: 194.06, c: 183.22, d: 101.31, e: 102.18
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard (Inference Time Cost in ms, Fewer Is Better)
a: 4.82662, b: 4.27622, c: 4.82211, d: 7.25262, e: 7.16208
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better)
a: 207.16, b: 233.81, c: 207.35, d: 137.87, e: 139.61
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: super-resolution-10 - Device: CPU - Executor: Parallel (Inference Time Cost in ms, Fewer Is Better)
a: 8.95095, b: 8.97883, c: 8.92427, d: 11.23890, e: 11.12940
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: super-resolution-10 - Device: CPU - Executor: Parallel (Inferences Per Second, More Is Better)
a: 111.70, b: 111.35, c: 112.03, d: 88.95, e: 89.83
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: super-resolution-10 - Device: CPU - Executor: Standard (Inference Time Cost in ms, Fewer Is Better)
a: 8.07185, b: 8.84768, c: 8.12097, d: 8.06837, e: 8.11091
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: super-resolution-10 - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better)
a: 123.88, b: 113.02, c: 123.13, d: 123.93, e: 123.28
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

TensorFlow 2.12 - Device: CPU - Batch Size: 64 - Model: ResNet-50 (images/sec, More Is Better)
a: 104.12, b: 105.04, c: 105.29, d: 68.59, e: 68.53

Zstd Compression 1.5.4 - Compression Level: 19, Long Mode - Decompression Speed (MB/s, More Is Better)
a: 1329.8, b: 1336.1, c: 1338.0, d: 1334.7, e: 1330.7
1. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression 1.5.4 - Compression Level: 19, Long Mode - Compression Speed (MB/s, More Is Better)
a: 8.50, b: 8.56, c: 8.38, d: 8.34, e: 8.46
1. (CC) gcc options: -O3 -pthread -lz -llzma

OpenCV 4.7 - Test: Video (ms, Fewer Is Better)
a: 41999, b: 38143, c: 37173, d: 126947, e: 122021
1. (CXX) g++ options: -fPIC -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -shared

TensorFlow 2.12 - Device: CPU - Batch Size: 256 - Model: GoogLeNet (images/sec, More Is Better)
a: 459.97, b: 454.67, c: 455.45, d: 382.29, e: 377.56

Darmstadt Automotive Parallel Heterogeneous Suite 2021.11.02 - Backend: OpenMP - Kernel: Points2Image (Test Cases Per Minute, More Is Better)
a: 18078.77, b: 17717.93, c: 17373.13, d: 13064.15, e: 11981.05
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp

Zstd Compression 1.5.4 - Compression Level: 19 - Decompression Speed (MB/s, More Is Better)
a: 1395.1, b: 1399.2, c: 1393.8, d: 1406.1, e: 1385.0
1. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression 1.5.4 - Compression Level: 19 - Compression Speed (MB/s, More Is Better)
a: 17.4, b: 17.4, c: 17.4, d: 17.3, e: 17.3
1. (CC) gcc options: -O3 -pthread -lz -llzma

Memcached 1.6.19 - Set To Get Ratio: 1:100 (Ops/sec, More Is Better)
a: 2851726.85, b: 2813977.52, c: 2821181.21, d: 2571874.26, e: 2595069.86
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Memcached 1.6.19 - Set To Get Ratio: 1:5 (Ops/sec, More Is Better)
a: 3870015.60, b: 3833862.64, c: 3858723.95, d: 2575013.52, e: 2550554.27
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Memcached 1.6.19 - Set To Get Ratio: 1:10 (Ops/sec, More Is Better)
a: 3203112.60, b: 3163183.27, c: 3154822.34, d: 3068513.78, e: 3063463.55
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
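The Memcached scenarios vary only the set-to-get ratio. A hedged sketch of what a 1:10 mix looks like as client code, using pymemcache (assumed installed) against a hypothetical local server; the actual PTS harness is a dedicated load generator:

```python
# 1:10 set-to-get mix against memcached, sketched with pymemcache
# (assumed installed). Illustrative only, not the PTS load generator.
import random
from pymemcache.client.base import Client

client = Client(("127.0.0.1", 11211))  # hypothetical local memcached

for i in range(10_000):
    key = f"key:{random.randrange(1000)}"
    if i % 11 == 0:                     # roughly 1 set per 10 gets
        client.set(key, b"x" * 100)
    else:
        client.get(key)                 # returns None on a miss
```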
Zstd Compression 1.5.4 - Compression Level: 12 - Decompression Speed (MB/s, More Is Better)
a: 1633.7, b: 1629.0, c: 1641.7, d: 1636.0, e: 1611.2
1. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression 1.5.4 - Compression Level: 12 - Compression Speed (MB/s, More Is Better)
a: 316.8, b: 317.4, c: 314.8, d: 317.8, e: 315.2
1. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression 1.5.4 - Compression Level: 8, Long Mode - Decompression Speed (MB/s, More Is Better)
a: 1619.8, b: 1625.7, c: 1613.8, d: 1615.1, e: 1606.4
1. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression 1.5.4 - Compression Level: 8, Long Mode - Compression Speed (MB/s, More Is Better)
a: 903.2, b: 900.7, c: 893.2, d: 865.5, e: 943.2
1. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression 1.5.4 - Compression Level: 8 - Decompression Speed (MB/s, More Is Better)
a: 1580.9, b: 1593.0, c: 1577.1, d: 1611.3, e: 1599.2
1. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression 1.5.4 - Compression Level: 8 - Compression Speed (MB/s, More Is Better)
a: 1225.5, b: 1217.1, c: 1220.5, d: 1217.7, e: 1213.4
1. (CC) gcc options: -O3 -pthread -lz -llzma
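The Zstd matrix varies two knobs: compression level (8, 12, 19) and long mode, i.e. long-distance matching over a larger search window. A hedged sketch with the `zstandard` Python bindings (assumed installed and assumed API; the benchmark itself builds the zstd CLI):

```python
# Compression level and long mode, the two knobs varied in the Zstd
# entries above, sketched with the 'zstandard' bindings (assumed API).
import os
import zstandard as zstd

data = b"A" * (1 << 20) + os.urandom(1 << 20)  # mixed-compressibility sample

# Plain level-8 compression
plain = zstd.ZstdCompressor(level=8).compress(data)

# "Long Mode" = long-distance matching over a larger window.
params = zstd.ZstdCompressionParameters.from_level(8, enable_ldm=True)
long_mode = zstd.ZstdCompressor(compression_params=params).compress(data)

print(len(plain), len(long_mode))
```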
Build2 Time To Compile OpenBenchmarking.org Seconds, Fewer Is Better Build2 0.15 Time To Compile a b c d e 14 28 42 56 70 63.17 63.01 63.22 60.47 60.35
Neural Magic DeepSparse Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.3.2 Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream a b c d e 70 140 210 280 350 321.47 320.99 321.15 339.58 340.47
Neural Magic DeepSparse Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.3.2 Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream a b c d e 60 120 180 240 300 149.05 149.22 149.20 281.23 279.96
TensorFlow Device: CPU - Batch Size: 32 - Model: ResNet-50 OpenBenchmarking.org images/sec, More Is Better TensorFlow 2.12 Device: CPU - Batch Size: 32 - Model: ResNet-50 a b c d e 20 40 60 80 100 80.72 81.80 80.41 45.40 45.90
Neural Magic DeepSparse 1.3.2 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better)
a: 1109.11 | b: 1110.67 | c: 1108.42 | d: 1133.26 | e: 1136.30

Neural Magic DeepSparse 1.3.2 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (items/sec, more is better)
a: 42.33 | b: 42.10 | c: 42.82 | d: 84.55 | e: 84.28

RocksDB 8.0 - Test: Random Fill Sync (Op/s, more is better)
a: 445786 | b: 446883 | c: 451734 | d: 357298 | e: 174168
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

RocksDB 8.0 - Test: Update Random (Op/s, more is better)
a: 645530 | b: 644787 | c: 647442 | d: 436861 | e: 435840
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

RocksDB 8.0 - Test: Random Fill (Op/s, more is better)
a: 644356 | b: 640977 | c: 641338 | d: 438023 | e: 431217
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

John The Ripper 2023.03.14 - Test: MD5 (Real C/S, more is better)
a: 15556000 | b: 15608000 | c: 15608000 | d: 27276000 | e: 27169000
1. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lgmp -lm -lrt -lz -ldl -lcrypt

RocksDB 8.0 - Test: Read Random Write Random (Op/s, more is better)
a: 2792738 | b: 2785663 | c: 2802641 | d: 1689260 | e: 1713878
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

RocksDB 8.0 - Test: Read While Writing (Op/s, more is better)
a: 9939924 | b: 9108882 | c: 10000158 | d: 16955493 | e: 14344054
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
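Note: the write-heavy RocksDB tests favor the a/b/c runs while read-while-writing favors d/e, which fits a second socket helping readers but hurting write-path coordination. These numbers come from RocksDB's own db_bench tool; something along the following lines reproduces a comparable read-while-writing run (key count and thread count are illustrative, not the suite's exact parameters):

    import subprocess

    # db_bench ships with RocksDB; --benchmarks, --num and --threads are
    # standard db_bench options. Values below are illustrative only.
    subprocess.run([
        "./db_bench",
        "--benchmarks=readwhilewriting",
        "--num=1000000",
        "--threads=64",
    ], check=True)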
John The Ripper 2023.03.14 - Test: HMAC-SHA512 (Real C/S, more is better)
a: 309175000 | b: 308492000 | c: 309621000 | d: 286156000 | e: 292569000
1. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lgmp -lm -lrt -lz -ldl -lcrypt

OpenSSL 3.1 - Algorithm: RSA4096 (verify/s, more is better)
a: 1462850.1 | b: 1462987.4 | c: 1462827.1 | d: 2936562.9 | e: 2937037.2
1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

OpenSSL 3.1 - Algorithm: RSA4096 (sign/s, more is better)
a: 35951.3 | b: 35946.3 | c: 35968.3 | d: 72050.2 | e: 72086.6
1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl
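Note: RSA4096 sign and verify throughput scale almost exactly 2x from a/b/c to d/e, which is what an embarrassingly parallel public-key benchmark should do when the core count doubles. A comparable stand-alone measurement uses OpenSSL's built-in speed tool; the worker count below is illustrative:

    import subprocess

    # `openssl speed` prints sign/s and verify/s; -multi forks parallel workers.
    subprocess.run(["openssl", "speed", "-multi", "192", "rsa4096"], check=True)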
RocksDB 8.0 - Test: Random Read (Op/s, more is better)
a: 432927777 | b: 435267657 | c: 434781404 | d: 863491650 | e: 859555542
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Neural Magic DeepSparse 1.3.2 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better)
a: 1108.70 | b: 1108.98 | c: 1112.33 | d: 1127.58 | e: 1135.45

Neural Magic DeepSparse 1.3.2 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (items/sec, more is better)
a: 43.02 | b: 42.28 | c: 42.11 | d: 85.02 | e: 84.40

Neural Magic DeepSparse 1.3.2 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better)
a: 151.37 | b: 149.81 | c: 150.76 | d: 154.28 | e: 155.03

Neural Magic DeepSparse 1.3.2 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (items/sec, more is better)
a: 316.61 | b: 320.18 | c: 317.49 | d: 620.41 | e: 617.58

FFmpeg 6.0 - Encoder: libx265 - Scenario: Live (FPS, more is better)
a: 136.22 | b: 136.89 | c: 136.56 | d: 128.58 | e: 135.83
1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg 6.0 - Encoder: libx265 - Scenario: Live (Seconds, fewer is better)
a: 37.07 | b: 36.89 | c: 36.98 | d: 39.28 | e: 37.18
1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

nginx 1.23.2 - Connections: 200 (Requests Per Second, more is better)
a: 257954.01 | b: 258099.68 | c: 255419.28
1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

Apache HTTP Server 2.4.56 - Concurrent Requests: 200 (Requests Per Second, more is better)
a: 143188.90 | b: 164665.51 | c: 165838.18
1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2
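Note: the -lluajit-5.1 in the compiler notes suggests the load generator here is wrk, which embeds LuaJIT; that inference, plus the thread count, duration, and URL below, are all assumptions. If correct, a comparable client-side run looks roughly like this:

    import subprocess

    subprocess.run([
        "wrk",
        "-t", "96",                # client threads (illustrative)
        "-c", "200",               # concurrent connections, matching the test
        "-d", "30s",               # run duration
        "http://127.0.0.1:8080/",  # placeholder URL for the server under test
    ], check=True)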
Neural Magic DeepSparse 1.3.2 - Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better)
a: 30.57 | b: 30.58 | c: 30.54 | d: 33.65 | e: 33.94

Neural Magic DeepSparse 1.3.2 - Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream (items/sec, more is better)
a: 1567.75 | b: 1567.61 | c: 1569.26 | d: 2845.82 | e: 2819.96

Neural Magic DeepSparse 1.3.2 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream (ms/batch, fewer is better)
a: 11.27 | b: 11.28 | c: 11.25 | d: 11.60 | e: 11.58

Neural Magic DeepSparse 1.3.2 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream (items/sec, more is better)
a: 88.70 | b: 88.60 | c: 88.83 | d: 86.18 | e: 86.29

TensorFlow 2.12 - Device: CPU - Batch Size: 16 - Model: ResNet-50 (images/sec, more is better)
a: 57.39 | b: 57.85 | c: 57.90 | d: 25.41 | e: 24.48

Neural Magic DeepSparse 1.3.2 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better)
a: 119.60 | b: 119.64 | c: 119.81 | d: 126.94 | e: 126.86

Neural Magic DeepSparse 1.3.2 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (items/sec, more is better)
a: 400.51 | b: 400.39 | c: 399.66 | d: 754.44 | e: 754.76

Neural Magic DeepSparse 1.3.2 - Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream (ms/batch, fewer is better)
a: 5.1385 | b: 5.0222 | c: 5.0721 | d: 5.0842 | e: 5.0918

Neural Magic DeepSparse 1.3.2 - Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream (items/sec, more is better)
a: 194.54 | b: 199.04 | c: 197.09 | d: 196.62 | e: 196.32

Neural Magic DeepSparse 1.3.2 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (ms/batch, fewer is better)
a: 28.02 | b: 28.20 | c: 28.09 | d: 29.45 | e: 29.09

Neural Magic DeepSparse 1.3.2 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (items/sec, more is better)
a: 35.68 | b: 35.45 | c: 35.59 | d: 33.95 | e: 34.37

Neural Magic DeepSparse 1.3.2 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (ms/batch, fewer is better)
a: 28.22 | b: 28.25 | c: 28.19 | d: 29.23 | e: 28.95

Neural Magic DeepSparse 1.3.2 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (items/sec, more is better)
a: 35.43 | b: 35.39 | c: 35.47 | d: 34.20 | e: 34.53

Neural Magic DeepSparse 1.3.2 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better)
a: 76.71 | b: 76.60 | c: 76.66 | d: 79.18 | e: 79.54

Neural Magic DeepSparse 1.3.2 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (items/sec, more is better)
a: 624.96 | b: 626.07 | c: 625.26 | d: 1209.92 | e: 1204.66

Neural Magic DeepSparse 1.3.2 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream (ms/batch, fewer is better)
a: 16.02 | b: 16.13 | c: 16.04 | d: 16.36 | e: 16.19

Neural Magic DeepSparse 1.3.2 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream (items/sec, more is better)
a: 62.35 | b: 61.93 | c: 62.29 | d: 61.04 | e: 61.70

Neural Magic DeepSparse 1.3.2 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream (ms/batch, fewer is better)
a: 9.8202 | b: 9.8577 | c: 9.9034 | d: 10.0304 | e: 9.9712

Neural Magic DeepSparse 1.3.2 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream (items/sec, more is better)
a: 101.79 | b: 101.40 | c: 100.93 | d: 99.65 | e: 100.24

FFmpeg 6.0 - Encoder: libx264 - Scenario: Live (FPS, more is better)
a: 217.98 | b: 218.14 | c: 217.64 | d: 218.73 | e: 218.71
1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg 6.0 - Encoder: libx264 - Scenario: Live (Seconds, fewer is better)
a: 23.17 | b: 23.15 | c: 23.20 | d: 23.09 | e: 23.09
1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
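Note: the FFmpeg "Live" scenario measures how quickly a source clip encodes, which is why each result is reported both as FPS and as total seconds. An illustrative stand-alone x264 encode that discards its output is sketched below; the input file name and preset are placeholders for the suite's actual settings:

    import subprocess

    subprocess.run([
        "ffmpeg", "-i", "input.y4m",             # placeholder source clip
        "-c:v", "libx264", "-preset", "medium",  # preset is illustrative
        "-f", "null", "-",                       # null muxer: time encode only
    ], check=True)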
Neural Magic DeepSparse 1.3.2 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better)
a: 108.80 | b: 108.80 | c: 109.23 | d: 113.91 | e: 114.31

Neural Magic DeepSparse 1.3.2 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (items/sec, more is better)
a: 440.46 | b: 439.92 | c: 439.02 | d: 840.73 | e: 838.03

Neural Magic DeepSparse 1.3.2 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better)
a: 47.77 | b: 47.87 | c: 47.69 | d: 49.03 | e: 48.81

Neural Magic DeepSparse 1.3.2 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (items/sec, more is better)
a: 1003.57 | b: 1001.98 | c: 1005.72 | d: 1953.93 | e: 1964.26

TensorFlow 2.12 - Device: CPU - Batch Size: 512 - Model: AlexNet (images/sec, more is better)
a: 1375.44 | b: 1375.77 | c: 1378.85 | d: 1843.74 | e: 1775.18

Neural Magic DeepSparse 1.3.2 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (ms/batch, fewer is better)
a: 5.1198 | b: 5.1045 | c: 5.1492 | d: 5.3096 | e: 5.2412

Neural Magic DeepSparse 1.3.2 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (items/sec, more is better)
a: 195.20 | b: 195.77 | c: 194.08 | d: 188.21 | e: 190.66

Neural Magic DeepSparse 1.3.2 - Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream (ms/batch, fewer is better)
a: 4.8287 | b: 4.8365 | c: 4.8353 | d: 4.9053 | e: 4.9353

Neural Magic DeepSparse 1.3.2 - Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream (items/sec, more is better)
a: 206.94 | b: 206.62 | c: 206.67 | d: 203.70 | e: 202.46

Neural Magic DeepSparse 1.3.2 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (ms/batch, fewer is better)
a: 5.1229 | b: 5.1686 | c: 5.0961 | d: 5.2159 | e: 5.2886

Neural Magic DeepSparse 1.3.2 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (items/sec, more is better)
a: 195.13 | b: 193.40 | c: 196.15 | d: 191.64 | e: 189.01

OpenCV 4.7 - Test: Object Detection (ms, fewer is better)
a: 24950 | b: 24394 | c: 23509 | d: 71477 | e: 33386
1. (CXX) g++ options: -fPIC -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -shared

TensorFlow 2.12 - Device: CPU - Batch Size: 64 - Model: GoogLeNet (images/sec, more is better)
a: 316.04 | b: 310.53 | c: 295.75 | d: 191.95 | e: 177.73

OpenCV 4.7 - Test: DNN - Deep Neural Network (ms, fewer is better)
a: 22944 | b: 23755 | c: 23144 | d: 34502 | e: 47834
1. (CXX) g++ options: -fPIC -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -shared

John The Ripper 2023.03.14 - Test: WPA PSK (Real C/S, more is better)
a: 653913 | b: 654104 | c: 653913 | d: 1263000 | e: 1255000
1. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lgmp -lm -lrt -lz -ldl -lcrypt

John The Ripper 2023.03.14 - Test: bcrypt (Real C/S, more is better)
a: 163238 | b: 163353 | c: 163353 | d: 315340 | e: 314928
1. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lgmp -lm -lrt -lz -ldl -lcrypt

John The Ripper 2023.03.14 - Test: Blowfish (Real C/S, more is better)
a: 163353 | b: 163241 | c: 163299 | d: 315110 | e: 314188
1. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lgmp -lm -lrt -lz -ldl -lcrypt
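Note: John the Ripper's figures come from its built-in self-benchmark, and the bcrypt/Blowfish and WPA PSK tests scale nearly 2x on d/e. Something like the sketch below exercises comparable formats; exact format names vary by build (jumbo vs. core), so treat them as assumptions:

    import subprocess

    for fmt in ("md5crypt", "bcrypt", "wpapsk", "HMAC-SHA512"):
        # --test runs JtR's internal benchmark for a single format.
        subprocess.run(["john", "--test", f"--format={fmt}"], check=True)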
GROMACS 2023 - Implementation: MPI CPU - Input: water_GMX50_bare (Ns Per Day, more is better)
a: 11.25 | b: 11.24 | c: 11.25 | d: 18.41 | e: 19.13
1. (CXX) g++ options: -O3
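Note: GROMACS scales well here, from 11.25 to roughly 18-19 ns/day between the two result clusters. A comparable mdrun invocation is sketched below; the rank/thread split, step count, and .tpr file name are illustrative, not the suite's exact command line:

    import subprocess

    subprocess.run([
        "gmx", "mdrun", "-s", "water_GMX50_bare.tpr",  # placeholder input
        "-ntmpi", "8",    # thread-MPI ranks (illustrative split)
        "-ntomp", "24",   # OpenMP threads per rank
        "-nsteps", "1000",
        "-resethway",     # reset timers mid-run for a steadier ns/day figure
    ], check=True)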
TensorFlow 2.12 - Device: CPU - Batch Size: 32 - Model: GoogLeNet (images/sec, more is better)
a: 241.43 | b: 239.45 | c: 239.48 | d: 120.06 | e: 106.10

TensorFlow 2.12 - Device: CPU - Batch Size: 256 - Model: AlexNet (images/sec, more is better)
a: 1276.22 | b: 1272.13 | c: 1276.62 | d: 1347.52 | e: 1386.72

Darmstadt Automotive Parallel Heterogeneous Suite 2021.11.02 - Backend: OpenMP - Kernel: NDT Mapping (Test Cases Per Minute, more is better)
a: 954.82 | b: 949.71 | c: 937.41 | d: 802.27 | e: 760.15
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp

SPECFEM3D 4.0 - Model: Water-layered Halfspace (Seconds, fewer is better)
a: 20.43 | b: 20.45 | c: 19.90 | d: 12.81 | e: 10.76
1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

TensorFlow 2.12 - Device: CPU - Batch Size: 16 - Model: GoogLeNet (images/sec, more is better)
a: 142.32 | b: 157.65 | c: 158.76 | d: 67.80 | e: 67.07

SPECFEM3D 4.0 - Model: Layered Halfspace (Seconds, fewer is better)
a: 19.84 | b: 19.46 | c: 19.78 | d: 11.92 | e: 12.58
1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

Embree 4.0.1 - Binary: Pathtracer - Model: Asian Dragon Obj (Frames Per Second, more is better)
a: 106.54 (min 104.86 / max 109.02) | b: 107.11 (min 105.52 / max 109.66) | c: 106.93 (min 105.38 / max 108.8) | d: 174.26 (min 170.44 / max 179.73) | e: 173.96 (min 169.75 / max 180.18)

Darmstadt Automotive Parallel Heterogeneous Suite 2021.11.02 - Backend: OpenMP - Kernel: Euclidean Cluster (Test Cases Per Minute, more is better)
a: 1637.36 | b: 1637.00 | c: 1636.09 | d: 1506.75 | e: 1494.17
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp

Embree 4.0.1 - Binary: Pathtracer ISPC - Model: Asian Dragon Obj (Frames Per Second, more is better)
a: 113.17 (min 111.68 / max 116.14) | b: 113.45 (min 111.85 / max 115.79) | c: 113.32 (min 111.66 / max 115.92) | d: 181.45 (min 177.49 / max 186.64) | e: 182.35 (min 178.38 / max 188.72)

Timed FFmpeg Compilation 6.0 - Time To Compile (Seconds, fewer is better)
a: 12.81 | b: 13.01 | c: 13.16 | d: 10.86 | e: 11.19

TensorFlow 2.12 - Device: CPU - Batch Size: 64 - Model: AlexNet (images/sec, more is better)
a: 856.98 | b: 853.06 | c: 857.87 | d: 588.27 | e: 597.46

SPECFEM3D 4.0 - Model: Homogeneous Halfspace (Seconds, fewer is better)
a: 10.661133476 | b: 10.386901707 | c: 10.665913818 | d: 6.276873254 | e: 6.233892837
1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

SPECFEM3D 4.0 - Model: Tomographic Model (Seconds, fewer is better)
a: 8.695806704 | b: 8.699354709 | c: 8.463161346 | d: 5.078447644 | e: 5.322492803
1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

TensorFlow 2.12 - Device: CPU - Batch Size: 32 - Model: AlexNet (images/sec, more is better)
a: 593.90 | b: 594.27 | c: 591.78 | d: 330.97 | e: 319.03

SPECFEM3D 4.0 - Model: Mount St. Helens (Seconds, fewer is better)
a: 8.549248083 | b: 8.433500494 | c: 8.266046040 | d: 4.709617691 | e: 4.677201807
1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

dav1d 1.1 - Video Input: Chimera 1080p 10-bit (FPS, more is better)
a: 602.64 | b: 603.05 | c: 603.51
1. (CC) gcc options: -pthread -lm

TensorFlow 2.12 - Device: CPU - Batch Size: 16 - Model: AlexNet (images/sec, more is better)
a: 355.72 | b: 353.36 | c: 354.96 | d: 184.36 | e: 184.99

Google Draco 1.5.6 - Model: Church Facade (ms, fewer is better)
a: 6872 | b: 6788 | c: 6888 | d: 6721 | e: 6784
1. (CXX) g++ options: -O3

dav1d 1.1 - Video Input: Chimera 1080p (FPS, more is better)
a: 657.50 | b: 656.31 | c: 657.22
1. (CC) gcc options: -pthread -lm

Google Draco 1.5.6 - Model: Lion (ms, fewer is better)
a: 5321 | b: 5296 | c: 5270 | d: 5218 | e: 5300
1. (CXX) g++ options: -O3
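Note: Google Draco's results time mesh compression, and the figures barely move between configurations, pointing to a largely serial workload. For a stand-alone comparison the draco_encoder CLI is the natural tool; the input file and compression level below are placeholders:

    import subprocess

    # draco_encoder: -i input mesh, -o compressed output, -cl level (0-10).
    subprocess.run(["draco_encoder", "-i", "lion.ply", "-o", "lion.drc",
                    "-cl", "10"], check=True)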
dav1d 1.1 - Video Input: Summer Nature 4K (FPS, more is better)
a: 379.84 | b: 381.16 | c: 383.95
1. (CC) gcc options: -pthread -lm

Embree 4.0.1 - Binary: Pathtracer - Model: Crown (Frames Per Second, more is better)
a: 104.24 (min 102.16 / max 107.49) | b: 105.18 (min 103.34 / max 107.96) | c: 105.12 (min 102.85 / max 108.38) | d: 172.28 (min 167.64 / max 181.13) | e: 173.58 (min 168.69 / max 180.7)

Embree 4.0.1 - Binary: Pathtracer ISPC - Model: Crown (Frames Per Second, more is better)
a: 110.74 (min 108.24 / max 114.35) | b: 111.28 (min 108.92 / max 115.18) | c: 111.30 (min 108.84 / max 114.92) | d: 180.48 (min 174.62 / max 189.72) | e: 180.84 (min 174.89 / max 190.36)

Embree 4.0.1 - Binary: Pathtracer - Model: Asian Dragon (Frames Per Second, more is better)
a: 121.11 (min 118.84 / max 124.01) | b: 121.27 (min 119.43 / max 123.26) | c: 120.91 (min 119.01 / max 122.81) | d: 194.90 (min 190.69 / max 207.25) | e: 195.27 (min 191.17 / max 206.18)

Embree 4.0.1 - Binary: Pathtracer ISPC - Model: Asian Dragon (Frames Per Second, more is better)
a: 132.71 (min 131.03 / max 135.12) | b: 132.71 (min 130.87 / max 135.14) | c: 132.53 (min 130.91 / max 135.37) | d: 211.98 (min 207.56 / max 231.22) | e: 213.30 (min 208.86 / max 228.78)

dav1d 1.1 - Video Input: Summer Nature 1080p (FPS, more is better)
a: 807.16 | b: 806.08 | c: 809.86
1. (CC) gcc options: -pthread -lm
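Note: the dav1d figures are pure software AV1 decode rates (these tests were only run on the a/b/c configurations). A comparable run decodes an AV1 elementary stream and discards the frames; the input name is a placeholder and the --muxer null spelling follows dav1d's CLI help, so treat this as a sketch rather than the suite's exact command:

    import subprocess

    subprocess.run([
        "dav1d",
        "-i", "chimera_1080p.ivf",  # placeholder AV1 bitstream
        "--muxer", "null",          # discard decoded frames; report decode FPS
    ], check=True)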
Phoronix Test Suite v10.8.5