extra tests2: Tests for a future article. AMD EPYC 9124 16-Core testing with a Supermicro H13SSW (1.1 BIOS) and astdrmfb on AlmaLinux 9.2 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2310228-NE-EXTRATEST37&grr&export=txt&sro&rro .
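The export link above follows a simple pattern: the public result ID plus an `export` query flag (the extra `&grr`, `&sro`, `&rro` flags only tweak chart rendering). A small helper to build such URLs; treating formats other than `txt` as valid is an assumption, since only the txt export appears here:

```python
def export_url(result_id: str, fmt: str = "txt") -> str:
    """Build an OpenBenchmarking.org export URL for a public result ID.

    Pattern taken from the export link in this file; optional rendering
    flags (&grr, &sro, &rro) are omitted.
    """
    return f"https://openbenchmarking.org/result/{result_id}&export={fmt}"

print(export_url("2310228-NE-EXTRATEST37"))
```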
extra tests2 - System Details

Configurations a, b, c:
  Processor: 2 x AMD EPYC 9254 24-Core @ 2.90GHz (48 Cores / 96 Threads)
  Motherboard: Supermicro H13DSH (1.5 BIOS)
  Memory: 24 x 32 GB DDR5-4800MT/s Samsung M321R4GA3BB6-CQKET
Configurations d, e, f, g:
  Processor: AMD EPYC 9124 16-Core @ 3.00GHz (16 Cores / 32 Threads)
  Motherboard: Supermicro H13SSW (1.1 BIOS)
  Memory: 12 x 64 GB DDR5-4800MT/s HMCG94MEBRA123N
Common to all runs:
  Disk: 2 x 1920GB SAMSUNG MZQL21T9HCJR-00A07
  Graphics: astdrmfb
  OS: AlmaLinux 9.2
  Kernel: 5.14.0-284.25.1.el9_2.x86_64 (x86_64)
  Compiler: GCC 11.3.1 20221121
  File-System: ext4
  Screen Resolution: 1024x768

Kernel Details: Transparent Huge Pages: always
Compiler Details: --build=x86_64-redhat-linux --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-host-bind-now --enable-host-pie --enable-initfini-array --enable-languages=c,c++,fortran,lto --enable-link-serialization=1 --enable-multilib --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=x86-64 --with-arch_64=x86-64-v2 --with-build-config=bootstrap-lto --with-gcc-major-version-only --with-linker-hash-style=gnu --with-tune=generic --without-cuda-driver --without-isl
Processor Details:
  a, b, c: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa10113e
  d, e, f, g: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa101111
Java Details: OpenJDK Runtime Environment (Red_Hat-11.0.20.0.8-1) (build 11.0.20+8-LTS)
Python Details: Python 3.9.16
Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
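The kernel and security details above come straight from standard sysfs files. A minimal sketch of how to collect the same fields on any Linux machine; the paths are the stock kernel interfaces, and fields are silently skipped where a kernel does not expose them:

```python
from pathlib import Path

def collect_details() -> dict:
    """Read the same kernel/security knobs this result file reports."""
    info = {}
    thp = Path("/sys/kernel/mm/transparent_hugepage/enabled")
    if thp.is_file():
        # Reported above as "Transparent Huge Pages: always"
        info["transparent_hugepage"] = thp.read_text().strip()
    vulns = Path("/sys/devices/system/cpu/vulnerabilities")
    if vulns.is_dir():
        # One file per CPU vulnerability, e.g. meltdown -> "Not affected"
        for entry in sorted(vulns.iterdir()):
            info[entry.name] = entry.read_text().strip()
    return info

if __name__ == "__main__":
    for key, value in collect_details().items():
        print(f"{key}: {value}")
```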
extra tests2 brl-cad: VGR Performance Metric openvkl: vklBenchmarkCPU Scalar tidb: oltp_read_write - 16 tidb: oltp_read_write - 32 tidb: oltp_update_non_index - 1 tidb: oltp_read_write - 64 openvkl: vklBenchmarkCPU ISPC tidb: oltp_update_non_index - 64 tidb: oltp_read_write - 1 tidb: oltp_update_index - 16 tidb: oltp_point_select - 1 tidb: oltp_point_select - 128 tidb: oltp_point_select - 32 tidb: oltp_point_select - 64 tidb: oltp_update_index - 32 tidb: oltp_update_index - 128 tidb: oltp_update_index - 1 tidb: oltp_read_write - 128 blender: Barbershop - CPU-Only tidb: oltp_update_non_index - 16 tidb: oltp_update_non_index - 32 tidb: oltp_update_index - 64 tidb: oltp_update_non_index - 128 tidb: oltp_point_select - 16 blender: Pabellon Barcelona - CPU-Only ospray: particle_volume/scivis/real_time ospray: particle_volume/pathtracer/real_time nekrs: TurboPipe Periodic blender: Classroom - CPU-Only cassandra: Writes ospray: particle_volume/ao/real_time deepsparse: BERT-Large, NLP Question Answering - Asynchronous Multi-Stream deepsparse: BERT-Large, NLP Question Answering - Asynchronous Multi-Stream easywave: e2Asean Grid + BengkuluSept2007 Source - 2400 nekrs: Kershaw oidn: RTLightmap.hdr.4096x4096 - CPU-Only onednn: Recurrent Neural Network Training - bf16bf16bf16 - CPU onednn: Recurrent Neural Network Training - u8s8f32 - CPU onednn: Recurrent Neural Network Training - f32 - CPU onednn: Recurrent Neural Network Inference - f32 - CPU onednn: Recurrent Neural Network Inference - bf16bf16bf16 - CPU onednn: Recurrent Neural Network Inference - u8s8f32 - CPU blender: Fishy Cat - CPU-Only openvino: Face Detection FP16 - CPU openvino: Face Detection FP16 - CPU openvino: Face Detection FP16-INT8 - CPU openvino: Face Detection FP16-INT8 - CPU ospray: gravity_spheres_volume/dim_512/scivis/real_time ospray: gravity_spheres_volume/dim_512/ao/real_time openvino: Person Detection FP16 - CPU openvino: Person Detection FP16 - CPU openvino: Person Detection FP32 - CPU openvino: 
Person Detection FP32 - CPU openvino: Machine Translation EN To DE FP16 - CPU openvino: Machine Translation EN To DE FP16 - CPU openvino: Road Segmentation ADAS FP16-INT8 - CPU openvino: Road Segmentation ADAS FP16-INT8 - CPU openvino: Person Vehicle Bike Detection FP16 - CPU openvino: Person Vehicle Bike Detection FP16 - CPU openvino: Road Segmentation ADAS FP16 - CPU openvino: Road Segmentation ADAS FP16 - CPU openvino: Face Detection Retail FP16-INT8 - CPU openvino: Face Detection Retail FP16-INT8 - CPU openvino: Handwritten English Recognition FP16-INT8 - CPU openvino: Handwritten English Recognition FP16-INT8 - CPU openvino: Age Gender Recognition Retail 0013 FP16-INT8 - CPU openvino: Age Gender Recognition Retail 0013 FP16-INT8 - CPU openvino: Handwritten English Recognition FP16 - CPU openvino: Handwritten English Recognition FP16 - CPU openvino: Vehicle Detection FP16-INT8 - CPU openvino: Vehicle Detection FP16-INT8 - CPU openvino: Age Gender Recognition Retail 0013 FP16 - CPU openvino: Age Gender Recognition Retail 0013 FP16 - CPU openvino: Weld Porosity Detection FP16 - CPU openvino: Weld Porosity Detection FP16 - CPU openvino: Vehicle Detection FP16 - CPU openvino: Vehicle Detection FP16 - CPU openvino: Weld Porosity Detection FP16-INT8 - CPU openvino: Weld Porosity Detection FP16-INT8 - CPU openvino: Face Detection Retail FP16 - CPU openvino: Face Detection Retail FP16 - CPU ospray: gravity_spheres_volume/dim_512/pathtracer/real_time specfem3d: Layered Halfspace specfem3d: Water-layered Halfspace deepsparse: BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Stream deepsparse: BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Stream blender: BMW27 - CPU-Only hadoop: Rename - 100 - 1000000 deepsparse: CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream deepsparse: CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream hadoop: Rename - 50 - 1000000 deepsparse: NLP Text Classification, 
BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Stream deepsparse: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Stream hadoop: Delete - 100 - 1000000 deepsparse: NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream deepsparse: NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream hadoop: Delete - 50 - 1000000 deepsparse: NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream deepsparse: NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream deepsparse: NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream deepsparse: NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream deepsparse: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream deepsparse: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream oidn: RTLightmap.hdr.4096x4096 - CPU-Only build-linux-kernel: defconfig deepsparse: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Asynchronous Multi-Stream deepsparse: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Asynchronous Multi-Stream hadoop: Open - 100 - 1000000 embree: Pathtracer - Asian Dragon Obj oidn: RT.hdr_alb_nrm.3840x2160 - CPU-Only oidn: RT.ldr_alb_nrm.3840x2160 - CPU-Only hadoop: File Status - 100 - 1000000 hadoop: Open - 50 - 1000000 embree: Pathtracer ISPC - Asian Dragon Obj hadoop: File Status - 50 - 1000000 easywave: e2Asean Grid + BengkuluSept2007 Source - 1200 deepsparse: NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream deepsparse: NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream deepsparse: ResNet-50, Baseline - Asynchronous Multi-Stream deepsparse: ResNet-50, Baseline - Asynchronous Multi-Stream deepsparse: CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream deepsparse: CV Detection, 
YOLOv5s COCO - Asynchronous Multi-Stream deepsparse: CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Stream deepsparse: CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Stream deepsparse: CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream deepsparse: CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream deepsparse: ResNet-50, Sparse INT8 - Asynchronous Multi-Stream deepsparse: ResNet-50, Sparse INT8 - Asynchronous Multi-Stream svt-av1: Preset 4 - Bosphorus 4K embree: Pathtracer - Crown embree: Pathtracer ISPC - Crown specfem3d: Homogeneous Halfspace liquid-dsp: 96 - 256 - 512 hadoop: Create - 100 - 1000000 embree: Pathtracer - Asian Dragon Obj liquid-dsp: 64 - 256 - 512 liquid-dsp: 96 - 256 - 32 liquid-dsp: 96 - 256 - 57 embree: Pathtracer - Asian Dragon liquid-dsp: 32 - 256 - 512 liquid-dsp: 64 - 256 - 32 liquid-dsp: 64 - 256 - 57 liquid-dsp: 16 - 256 - 512 liquid-dsp: 8 - 256 - 512 liquid-dsp: 4 - 256 - 512 liquid-dsp: 32 - 256 - 57 liquid-dsp: 32 - 256 - 32 liquid-dsp: 16 - 256 - 32 liquid-dsp: 1 - 256 - 512 liquid-dsp: 2 - 256 - 512 liquid-dsp: 16 - 256 - 57 liquid-dsp: 8 - 256 - 57 liquid-dsp: 8 - 256 - 32 liquid-dsp: 4 - 256 - 57 liquid-dsp: 4 - 256 - 32 liquid-dsp: 2 - 256 - 57 liquid-dsp: 2 - 256 - 32 liquid-dsp: 1 - 256 - 57 liquid-dsp: 1 - 256 - 32 hadoop: Create - 50 - 1000000 embree: Pathtracer ISPC - Asian Dragon Obj embree: Pathtracer ISPC - Asian Dragon specfem3d: Tomographic Model specfem3d: Mount St. 
Helens remhos: Sample Remap Example embree: Pathtracer - Crown kripke: embree: Pathtracer ISPC - Crown onednn: Deconvolution Batch shapes_1d - u8s8f32 - CPU onednn: Deconvolution Batch shapes_1d - bf16bf16bf16 - CPU onednn: Deconvolution Batch shapes_1d - f32 - CPU oidn: RT.hdr_alb_nrm.3840x2160 - CPU-Only oidn: RT.ldr_alb_nrm.3840x2160 - CPU-Only embree: Pathtracer - Asian Dragon embree: Pathtracer ISPC - Asian Dragon hadoop: Rename - 100 - 100000 hadoop: Rename - 50 - 100000 hadoop: Delete - 100 - 100000 hadoop: Delete - 50 - 100000 onednn: IP Shapes 1D - bf16bf16bf16 - CPU onednn: IP Shapes 1D - f32 - CPU onednn: IP Shapes 1D - u8s8f32 - CPU hadoop: Open - 50 - 100000 hadoop: Open - 100 - 100000 hadoop: File Status - 100 - 100000 hadoop: File Status - 50 - 100000 svt-av1: Preset 4 - Bosphorus 1080p hadoop: Create - 100 - 100000 hadoop: Create - 50 - 100000 onednn: IP Shapes 3D - bf16bf16bf16 - CPU onednn: IP Shapes 3D - f32 - CPU onednn: IP Shapes 3D - u8s8f32 - CPU svt-av1: Preset 8 - Bosphorus 4K onednn: Convolution Batch Shapes Auto - f32 - CPU onednn: Convolution Batch Shapes Auto - u8s8f32 - CPU onednn: Convolution Batch Shapes Auto - bf16bf16bf16 - CPU svt-av1: Preset 8 - Bosphorus 1080p svt-av1: Preset 13 - Bosphorus 4K svt-av1: Preset 12 - Bosphorus 4K onednn: Deconvolution Batch shapes_3d - bf16bf16bf16 - CPU onednn: Deconvolution Batch shapes_3d - u8s8f32 - CPU onednn: Deconvolution Batch shapes_3d - f32 - CPU easywave: e2Asean Grid + BengkuluSept2007 Source - 240 svt-av1: Preset 12 - Bosphorus 1080p svt-av1: Preset 13 - Bosphorus 1080p a b c d e f g 772162 38331 58974 1328 79090 41281 2540 12558 4331 159242 104627 127567 18361 27087 1212 85757 254.88 18095 28735 51105 80.54 15.9528 215.096 6767710000 66.42 248095 15.986 485.7175 49.3258 11106900000 33.22 393.6 30.41 213.94 56.01 13.8739 14.2369 42.44 282.55 42.24 283.97 37.8 317.22 14.23 842.91 4.88 2454.09 16.02 748.44 4.86 9837.58 38.5 1244.69 0.34 120606.38 30.72 1560.03 4.17 2873.24 0.54 86884.64 
16.26 2945.26 5.89 2033.17 8.28 5776.94 2.03 5882.91 16.3468 26.885983804 26.985020908 33.3412 718.9189 26.2 73078 347.6612 68.5988 73239 16.9074 1417.0706 90114 605.0388 39.4951 98932 150.5897 158.924 605.758 39.4391 118.7509 201.3925 0.86 27.354 35.628 672.4635 215332 1886792 1126126 2173913 74.3211 322.2505 49.3653 485.6725 111.0131 215.6383 109.8027 218.1464 49.0092 489.1203 4.6508 5137.0114 5.203 15.10511773 711640000 46145 53.5733 622560000 3005800000 2559800000 425810000 2207700000 1994400000 216080000 109870000 52911000 1192100000 1183500000 594230000 13909000 27901000 699740000 369430000 307540000 196220000 153850000 117490000 77181000 59401000 39499000 53665 56.4853 12.312946652 11.024765775 16.346 54.9017 56.0871 1.83 1.84 60.1449 67.3378 75529 70522 87566 91075 460829 420168 515464 529101 12.477 40733 43649 90.811 141.219 163.013 163.459 422.994 510.361 768517 36950 61520 1312 80183 39759 2510 4405 159728 106180 130802 17817 27464 89099 255.3 18068 28914 24371 67515 80.76 15.9888 214.074 6757360000 66.64 256661 15.9785 487.3599 49.1666 11240300000 33.17 393.23 30.44 213.62 56.06 13.7666 14.1783 42.2 284.22 42.09 284.99 37.79 317.28 14.03 854.51 4.89 2450.26 15.98 750.49 4.85 9849.07 38.66 1239.67 0.34 120728.22 31 1546.02 4.16 2880.58 0.54 87359.23 16.02 2986.46 5.91 2028.01 8.27 5780.44 2.05 5836.27 16.4365 28.65210863 29.460761197 33.3798 717.9693 26.24 72129 347.2189 68.6617 71679 17.0669 1403.065 86715 605.7307 39.4681 97314 150.6055 159.0596 606.6693 39.4539 118.9482 201.2528 0.86 27.241 35.6422 672.3734 173822 161970 1020408 1941748 74.5553 321.1829 49.1082 488.1264 110.9152 215.9254 109.2332 219.5263 48.9703 489.4464 4.6476 5138.8341 5.149 14.46058698 718140000 44437 53.8135 610950000 2995400000 2571100000 429620000 2212100000 2001900000 216150000 108080000 55588000 1214200000 1190300000 602470000 14021000 27736000 692760000 366930000 305110000 196590000 153690000 114010000 77019000 59296000 39486000 52119 56.6902 12.100595947 11.318495709 16.791 
55.3925 56.4551 1.83 1.84 59.9124 67.1951 69348 73046 90827 73801 469484 404858 458716 862069 12.591 37425 41288 91.322 138.338 166.692 166.378 427.686 542.611 762529 37368 59630 1381 78469 39106 2485 12681 4471 149962 17565 26546 1189 254.72 23324 52865 65406 80.41 15.9778 214.136 6754170000 66.72 270480 15.9872 507.4786 47.1535 10826700000 33.03 393.37 30.43 213.79 56.02 13.8317 14.1399 42.43 282.67 42.19 284.31 37.79 317.33 14.12 849.3 4.88 2455.51 15.83 757.38 4.86 9845.27 38.75 1237.29 0.34 123484.28 30.89 1551.63 4.16 2881.14 0.54 86789.8 16.02 2987.33 5.9 2029.79 8.24 5802.65 2.05 5840.53 16.535 27.490850157 27.060235079 33.4637 716.1404 26.12 66827 347.3728 68.6287 74638 16.8863 1418.9041 97031 605.9183 39.4464 90147 145.2562 164.6131 605.8765 39.4183 118.7801 201.5402 0.87 27.408 35.6825 671.2554 185874 1893939 683995 284252 74.5031 321.5082 49.0173 489.1106 111.025 215.647 109.5847 218.5162 49.2055 487.0522 4.6348 5153.6644 5.049 14.808273365 715030000 44001 53.6927 622630000 2999800000 2564900000 424400000 2206800000 2010300000 214910000 109140000 55165000 1254800000 1184800000 603650000 14225000 28227000 674930000 366990000 306760000 194510000 153670000 118550000 76924000 57519000 39453000 52260 56.9327 12.040917877 11.32735977 16.243 55.4037 56.8078 1.83 1.82 59.7929 67.5038 67159 77101 73475 90580 401606 403226 729927 657895 12.617 35075 43937 90.417 143.545 161.495 163.055 431.895 516.906 298064 191 36480 46977 1693 55334 487 34224 12622 5898 129492 98149 115675 17612 1479 59727 670.87 18563 26273 21108 224.15 5.57001 151.905 7934570000 182.99 197866 5.57469 493.596 16.1648 98.98 10318900000 0.34 1643.99 1642.51 1641.92 838.516 847.38 849.163 90.03 761.59 10.47 398.52 20.03 5.45329 5.60747 74.71 107.02 74.81 106.9 64.41 124.12 21.57 370.57 7.7 1036.99 23.2 344.67 4.51 3540.88 40.4 395.66 0.35 44958.07 30.02 532.59 6.79 1175.67 0.49 32002.62 15.36 1039.61 10.01 797.64 7.93 2013.77 3.11 2564.78 6.58745 71.614294327 62.441749585 33.2242 240.5529 72 
81208 325.8763 24.4833 83921 15.7246 508.087 112613 606.101 13.0694 111012 143.7643 55.6132 606.5773 13.1312 112.2499 71.137 0.34 55.173 31.0525 257.2728 1248439 22.3517 0.72 0.72 600601 278319 23.3531 1818182 38.105 73.3058 108.909 49.0863 162.8502 111.1081 71.9189 110.1114 72.455 48.8729 163.5559 4.996 1599.2079 4.107 21.8943 22.3922 35.571684908 286250000 71296 22.2614 282920000 1065200000 1120800000 24.845 273760000 1059500000 1093300000 193850000 99594000 50258000 1035000000 1047100000 545360000 12683000 24627000 689150000 363310000 278030000 188930000 138600000 105650000 67054000 52665000 35228000 72134 23.8733 27.7438 27.330985588 26.736414118 30.761 21.4811 240994500 22.585 0.628236 3.05991 3.81576 0.72 0.72 24.6872 28.3643 82102 82372 105708 101010 1.03749 2.49408 0.652259 578035 529101 591716 632911 10.91 57971 58617 1.02875 1.25758 0.60395 66.988 2.13332 1.55824 1.33789 118.946 161.854 163.189 1.91374 0.847805 3.37782 1.657 526.216 604.986 296125 190 36784 46737 1708 53893 487 33881 3209 12567 5976 129904 96907 118657 17117 24611 1490 60145 670.64 18557 26285 21271 42138 70250 224.1 5.56353 151.506 7931010000 182.56 195798 5.54107 494.2575 16.1392 99.415 10264000000 0.34 1643.97 1639.36 1641 849.712 841.078 851.659 90.31 761.16 10.47 398.91 20 5.46153 5.6204 74.5 107.27 74.54 107.24 64.68 123.61 21.4 373.64 7.77 1028.64 23.32 342.81 4.51 3544.18 36.98 432.32 0.35 44933.27 30.1 530.99 6.8 1174.6 0.49 32032.06 15.36 1039.82 10.06 793.75 7.96 2007.53 3.11 2562.54 6.5827 70.189028506 62.325146828 33.2625 240.2349 71.44 84360 325.7416 24.4725 84041 15.6245 511.4098 113225 607.913 12.9433 113327 144.1013 55.4634 606.755 13.1187 112.0574 71.2727 0.34 55.093 30.9867 257.894 1204819 22.2911 0.72 0.72 235627 251004 23.5283 320924 38.067 73.2609 109.0938 49.0094 163.1361 111.0888 71.9146 109.9654 72.6571 49.0598 162.9298 4.9859 1599.1543 4.114 21.9909 22.3422 35.030134889 285880000 70057 22.1577 281830000 1065100000 1117800000 24.8282 273480000 1057500000 
1095400000 196040000 97005000 50380000 1032000000 1046600000 545140000 12366000 25207000 692920000 357990000 277780000 191230000 138620000 105480000 68846000 52827000 35315000 70897 23.9393 27.8293 27.459821308 26.799143446 30.845 21.4357 236243900 22.5694 0.633975 3.0637 3.84421 0.72 0.72 24.7343 28.3141 83822 82237 98039 100604 1.14432 2.56522 0.65761 552486 294985 613497 389105 10.984 58824 58617 1.05425 1.28043 0.575794 67.721 2.1257 1.54911 1.33861 119.307 162.051 162.608 1.91781 0.844434 3.38436 1.654 525.173 597.011 295603 191 36125 47141 1697 54956 488 34470 3218 12692 5954 130389 97368 119092 24830 1483 60310 667.87 26695 21067 41424 70105 223.95 5.55581 151.78 7955790000 181.7 196287 5.5732 494.2211 16.1307 97.987 9976450000 0.34 1642.35 1636.44 1636.76 851.494 845.308 849.344 90.26 760.57 10.48 399.24 20.01 5.45227 5.61454 74.43 107.39 74.87 106.76 64.31 124.3 21.65 369.26 7.67 1041.87 23.28 343.49 4.5 3548.78 37.01 431.94 0.35 44968.43 29.95 533.74 6.76 1180.85 0.49 31951.64 15.38 1038.47 10.09 791.74 7.97 2004.76 3.14 2539.97 6.59563 70.542255905 61.281769124 33.2751 240.1642 71.96 85815 324.9568 24.5238 82501 15.721 508.2088 110803 607.8171 13.0853 111198 143.6922 55.5428 606.7852 13.0934 112.413 71.044 0.34 55.148 31.0292 257.5046 1303781 22.2676 0.72 0.72 1964637 1221001 23.5042 1795332 38.015 73.2163 109.09 49.0712 162.9294 110.8852 71.9401 110.0001 72.5736 49.0714 162.8976 4.9877 1600.5275 4.138 21.7746 22.4407 35.535073001 285920000 70537 22.149 283030000 1065300000 1120500000 24.887 273390000 1057100000 1094600000 194500000 99441000 49977000 1024600000 1041900000 545020000 12681000 25199000 693340000 350450000 276390000 189880000 138580000 105740000 68861000 52879000 35271000 69920 23.9354 27.826 26.973757395 26.873168455 30.725 21.5913 236591000 22.6566 0.630325 3.05674 3.81823 0.72 0.72 24.7047 28.3237 79491 81633 99404 96993 1.00136 2.49714 0.653182 578035 523560 478469 709220 10.736 59382 58343 1.06144 1.20653 0.600834 67.393 2.13062 1.57282 
1.34183 118.486 160.798 161.847 1.91422 0.850691 3.37956 1.657 521.518 585.368 295522 191 36088 46993 1705 55301 489 34107 3195 12627 96840 118549 17135 24574 1481 59944 669.09 18735 41695 69923 224.12 5.56539 151.681 7964910000 183.29 197092 5.57553 495.6033 16.0672 97.529 10500600000 0.34 1641.4 1631.99 1637.37 848.032 847.417 837.595 90.63 759.92 10.48 398.13 20.05 5.47725 5.62278 74.71 107.04 74.58 107.24 64.77 123.41 21.47 372.26 7.74 1031.6 23.42 341.36 4.52 3533.64 36.98 432.2 0.35 45097.99 29.72 538.01 6.79 1175.58 0.49 32008.03 15.37 1039.37 10.06 793.9 7.96 2006.09 3.12 2557.66 6.60085 69.955609165 62.810924376 33.3674 239.5176 72.01 85763 325.5075 24.4607 84810 15.6892 509.139 113895 607.1628 13.073 110828 144.1053 55.4264 608.7163 13.0596 112.4779 70.9275 0.34 55.172 31.0574 257.2808 1107420 22.2559 0.72 0.72 2049180 654022 23.7091 2036660 37.95 73.1867 109.2191 49.0259 162.9937 110.9759 71.9003 109.9037 72.6879 48.9837 163.2282 4.9787 1602.5221 4.143 21.8305 22.4215 35.378600021 286530000 70922 22.1901 281730000 1065700000 1118200000 24.9619 274070000 1056200000 1099300000 194670000 100170000 49556000 1033400000 1047100000 543050000 12256000 22727000 682070000 357810000 277410000 190750000 138460000 104800000 68678000 52854000 35236000 72706 23.8796 27.91 27.746475162 27.696631371 30.75 21.5847 237175700 22.7745 0.629108 3.05458 3.82381 0.72 0.72 24.8193 28.4793 80386 82237 102564 103950 1.12723 2.51441 0.6477 546448 460829 487805 561798 11.016 58928 60680 1.04567 1.27918 0.61232 67.811 2.11813 1.55118 1.33564 118.481 161.324 160.322 1.91274 0.843492 3.38156 1.648 528.533 586.748 OpenBenchmarking.org
BRL-CAD 7.36 - VGR Performance Metric (More Is Better)
g: 295522 | f: 295603 | e: 296125 | d: 298064 | c: 762529 | b: 768517 | a: 772162
1. (CXX) g++ options: -std=c++14 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -ltcl8.6 -lregex_brl -lz_brl -lnetpbm -ldl -lm -ltk8.6
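Per the system table, runs a-c are the dual EPYC 9254 box and d-g the single EPYC 9124, so the ratio of per-platform medians gives a quick scaling figure for this first metric. A sketch using the BRL-CAD numbers above:

```python
from statistics import median

# BRL-CAD VGR Performance Metric values from this result
dual_9254 = [772162, 768517, 762529]            # runs a, b, c
single_9124 = [298064, 296125, 295603, 295522]  # runs d, e, f, g

ratio = median(dual_9254) / median(single_9124)
print(f"dual EPYC 9254 scores {ratio:.2f}x the single EPYC 9124 here")
```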
OpenVKL 2.0.0 - Benchmark: vklBenchmarkCPU Scalar (Items / Sec, More Is Better)
g: 191 (MIN: 13 / MAX: 3483) | f: 191 (MIN: 13 / MAX: 3484) | e: 190 (MIN: 13 / MAX: 3484) | d: 191 (MIN: 13 / MAX: 3471)
TiDB Community Server 7.3 - Test: oltp_read_write - Threads: 16 (Queries Per Second, More Is Better)
g: 36088 | f: 36125 | e: 36784 | d: 36480 | c: 37368 | b: 36950 | a: 38331
TiDB Community Server 7.3 - Test: oltp_read_write - Threads: 32 (Queries Per Second, More Is Better)
g: 46993 | f: 47141 | e: 46737 | d: 46977 | c: 59630 | b: 61520 | a: 58974
TiDB Community Server 7.3 - Test: oltp_update_non_index - Threads: 1 (Queries Per Second, More Is Better)
g: 1705 | f: 1697 | e: 1708 | d: 1693 | c: 1381 | b: 1312 | a: 1328
TiDB Community Server 7.3 - Test: oltp_read_write - Threads: 64 (Queries Per Second, More Is Better)
g: 55301 | f: 54956 | e: 53893 | d: 55334 | c: 78469 | b: 80183 | a: 79090
OpenVKL 2.0.0 - Benchmark: vklBenchmarkCPU ISPC (Items / Sec, More Is Better)
g: 489 (MIN: 36 / MAX: 6969) | f: 488 (MIN: 36 / MAX: 6952) | e: 487 (MIN: 36 / MAX: 6956) | d: 487 (MIN: 36 / MAX: 6949)
TiDB Community Server 7.3 - Test: oltp_update_non_index - Threads: 64 (Queries Per Second, More Is Better)
g: 34107 | f: 34470 | e: 33881 | d: 34224 | c: 39106 | b: 39759 | a: 41281
TiDB Community Server 7.3 - Test: oltp_read_write - Threads: 1 (Queries Per Second, More Is Better)
g: 3195 | f: 3218 | e: 3209 | c: 2485 | b: 2510 | a: 2540
TiDB Community Server 7.3 - Test: oltp_update_index - Threads: 16 (Queries Per Second, More Is Better)
g: 12627 | f: 12692 | e: 12567 | d: 12622 | c: 12681 | a: 12558
TiDB Community Server 7.3 - Test: oltp_point_select - Threads: 1 (Queries Per Second, More Is Better)
f: 5954 | e: 5976 | d: 5898 | c: 4471 | b: 4405 | a: 4331
TiDB Community Server 7.3 - Test: oltp_point_select - Threads: 128 (Queries Per Second, More Is Better)
f: 130389 | e: 129904 | d: 129492 | c: 149962 | b: 159728 | a: 159242
TiDB Community Server 7.3 - Test: oltp_point_select - Threads: 32 (Queries Per Second, More Is Better)
g: 96840 | f: 97368 | e: 96907 | d: 98149 | b: 106180 | a: 104627
TiDB Community Server 7.3 - Test: oltp_point_select - Threads: 64 (Queries Per Second, More Is Better)
g: 118549 | f: 119092 | e: 118657 | d: 115675 | b: 130802 | a: 127567
TiDB Community Server 7.3 - Test: oltp_update_index - Threads: 32 (Queries Per Second, More Is Better)
g: 17135 | e: 17117 | d: 17612 | c: 17565 | b: 17817 | a: 18361
TiDB Community Server 7.3 - Test: oltp_update_index - Threads: 128 (Queries Per Second, More Is Better)
g: 24574 | f: 24830 | e: 24611 | c: 26546 | b: 27464 | a: 27087
TiDB Community Server 7.3 - Test: oltp_update_index - Threads: 1 (Queries Per Second, More Is Better)
g: 1481 | f: 1483 | e: 1490 | d: 1479 | c: 1189 | a: 1212
TiDB Community Server 7.3 - Test: oltp_read_write - Threads: 128 (Queries Per Second, More Is Better)
g: 59944 | f: 60310 | e: 60145 | d: 59727 | b: 89099 | a: 85757
Blender 3.6 - Blend File: Barbershop - Compute: CPU-Only (Seconds, Fewer Is Better)
g: 669.09 | f: 667.87 | e: 670.64 | d: 670.87 | c: 254.72 | b: 255.30 | a: 254.88
TiDB Community Server 7.3 - Test: oltp_update_non_index - Threads: 16 (Queries Per Second, More Is Better)
g: 18735 | e: 18557 | d: 18563 | b: 18068 | a: 18095
TiDB Community Server 7.3 - Test: oltp_update_non_index - Threads: 32 (Queries Per Second, More Is Better)
f: 26695 | e: 26285 | d: 26273 | b: 28914 | a: 28735
TiDB Community Server 7.3 - Test: oltp_update_index - Threads: 64 (Queries Per Second, More Is Better)
f: 21067 | e: 21271 | d: 21108 | c: 23324 | b: 24371
TiDB Community Server 7.3 - Test: oltp_update_non_index - Threads: 128 (Queries Per Second, More Is Better)
g: 41695 | f: 41424 | e: 42138 | c: 52865 | a: 51105
TiDB Community Server 7.3 - Test: oltp_point_select - Threads: 16 (Queries Per Second, More Is Better)
g: 69923 | f: 70105 | e: 70250 | c: 65406 | b: 67515
Blender 3.6 - Blend File: Pabellon Barcelona - Compute: CPU-Only (Seconds, Fewer Is Better)
g: 224.12 | f: 223.95 | e: 224.10 | d: 224.15 | c: 80.41 | b: 80.76 | a: 80.54
OSPRay 2.12 - Benchmark: particle_volume/scivis/real_time (Items Per Second, More Is Better)
g: 5.56539 | f: 5.55581 | e: 5.56353 | d: 5.57001 | c: 15.97780 | b: 15.98880 | a: 15.95280
OSPRay 2.12 - Benchmark: particle_volume/pathtracer/real_time (Items Per Second, More Is Better)
g: 151.68 | f: 151.78 | e: 151.51 | d: 151.91 | c: 214.14 | b: 214.07 | a: 215.10
nekRS 23.0 - Input: TurboPipe Periodic (flops/rank, More Is Better)
g: 7964910000 | f: 7955790000 | e: 7931010000 | d: 7934570000 | c: 6754170000 | b: 6757360000 | a: 6767710000
1. (CXX) g++ options: -fopenmp -O2 -march=native -mtune=native -ftree-vectorize -rdynamic -lmpi_cxx -lmpi
Blender 3.6 - Blend File: Classroom - Compute: CPU-Only (Seconds, Fewer Is Better)
g: 183.29 | f: 181.70 | e: 182.56 | d: 182.99 | c: 66.72 | b: 66.64 | a: 66.42
Apache Cassandra 4.1.3 - Test: Writes (Op/s, More Is Better)
g: 197092 | f: 196287 | e: 195798 | d: 197866 | c: 270480 | b: 256661 | a: 248095
OSPRay 2.12 - Benchmark: particle_volume/ao/real_time (Items Per Second, More Is Better)
g: 5.57553 | f: 5.57320 | e: 5.54107 | d: 5.57469 | c: 15.98720 | b: 15.97850 | a: 15.98600
Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
g: 495.60 | f: 494.22 | e: 494.26 | d: 493.60 | c: 507.48 | b: 487.36 | a: 485.72
Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
g: 16.07 | f: 16.13 | e: 16.14 | d: 16.16 | c: 47.15 | b: 49.17 | a: 49.33
easyWave r34 - Input: e2Asean Grid + BengkuluSept2007 Source - Time: 2400 (Seconds, Fewer Is Better)
g: 97.53 | f: 97.99 | e: 99.42 | d: 98.98
1. (CXX) g++ options: -O3 -fopenmp
nekRS 23.0 - Input: Kershaw (flops/rank, More Is Better)
g: 10500600000 | f: 9976450000 | e: 10264000000 | d: 10318900000 | c: 10826700000 | b: 11240300000 | a: 11106900000
1. (CXX) g++ options: -fopenmp -O2 -march=native -mtune=native -ftree-vectorize -rdynamic -lmpi_cxx -lmpi
Intel Open Image Denoise 2.1 - Run: RTLightmap.hdr.4096x4096 - Device: CPU-Only (Images / Sec, More Is Better)
g: 0.34 | f: 0.34 | e: 0.34 | d: 0.34
oneDNN 3.3 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better)
g: 1641.40 (MIN: 1589.91) | f: 1642.35 (MIN: 1586.17) | e: 1643.97 (MIN: 1590.89) | d: 1643.99 (MIN: 1588.03)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
oneDNN 3.3 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
g: 1631.99 (MIN: 1581.62) | f: 1636.44 (MIN: 1585.81) | e: 1639.36 (MIN: 1581.93) | d: 1642.51 (MIN: 1593.16)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
oneDNN Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU g f e d 400 800 1200 1600 2000 1637.37 1636.76 1641.00 1641.92 MIN: 1584.58 MIN: 1585.98 MIN: 1595.55 MIN: 1584.81 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
oneDNN Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU g f e d 200 400 600 800 1000 848.03 851.49 849.71 838.52 MIN: 807.34 MIN: 807.97 MIN: 805.98 MIN: 796.3 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
oneDNN Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU g f e d 200 400 600 800 1000 847.42 845.31 841.08 847.38 MIN: 806.72 MIN: 803.78 MIN: 798.46 MIN: 806.33 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
oneDNN Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU g f e d 200 400 600 800 1000 837.60 849.34 851.66 849.16 MIN: 796.61 MIN: 805.8 MIN: 809.45 MIN: 806.44 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
Blender 3.6 - Blend File: Fishy Cat - Compute: CPU-Only (Seconds, Fewer Is Better)
  g: 90.63 | f: 90.26 | e: 90.31 | d: 90.03 | c: 33.03 | b: 33.17 | a: 33.22
OpenVINO 2023.1 - Model: Face Detection FP16 - Device: CPU (ms, Fewer Is Better)
  g: 759.92 | f: 760.57 | e: 761.16 | d: 761.59 | c: 393.37 | b: 393.23 | a: 393.60
  min/max: g 737.63/771.07 | f 741.4/770.88 | e 741.99/776.56 | d 738.34/772.36 | c 362.57/433.51 | b 360.87/433.13 | a 363.29/431.61
OpenVINO 2023.1 - Model: Face Detection FP16 - Device: CPU (FPS, More Is Better)
  g: 10.48 | f: 10.48 | e: 10.47 | d: 10.47 | c: 30.43 | b: 30.44 | a: 30.41
OpenVINO 2023.1 - Model: Face Detection FP16-INT8 - Device: CPU (ms, Fewer Is Better)
  g: 398.13 | f: 399.24 | e: 398.91 | d: 398.52 | c: 213.79 | b: 213.62 | a: 213.94
  min/max: g 379.09/404.71 | f 387.9/408.93 | e 386.2/407.29 | d 382.1/404.98 | c 197.29/236.32 | b 197.2/235.23 | a 201.64/242.71
OpenVINO 2023.1 - Model: Face Detection FP16-INT8 - Device: CPU (FPS, More Is Better)
  g: 20.05 | f: 20.01 | e: 20.00 | d: 20.03 | c: 56.02 | b: 56.06 | a: 56.01
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OSPRay 2.12 - Benchmark: gravity_spheres_volume/dim_512/scivis/real_time (Items Per Second, More Is Better)
  g: 5.47725 | f: 5.45227 | e: 5.46153 | d: 5.45329 | c: 13.83170 | b: 13.76660 | a: 13.87390
OSPRay 2.12 - Benchmark: gravity_spheres_volume/dim_512/ao/real_time (Items Per Second, More Is Better)
  g: 5.62278 | f: 5.61454 | e: 5.62040 | d: 5.60747 | c: 14.13990 | b: 14.17830 | a: 14.23690
OpenVINO 2023.1 - Model: Person Detection FP16 - Device: CPU (ms, Fewer Is Better)
  g: 74.71 | f: 74.43 | e: 74.50 | d: 74.71 | c: 42.43 | b: 42.20 | a: 42.44
  min/max: g 66.29/79.68 | f 65.68/83.49 | e 66.5/80.32 | d 66.12/81.09 | c 36.31/62.36 | b 36.84/61.97 | a 36.14/61.98
OpenVINO 2023.1 - Model: Person Detection FP16 - Device: CPU (FPS, More Is Better)
  g: 107.04 | f: 107.39 | e: 107.27 | d: 107.02 | c: 282.67 | b: 284.22 | a: 282.55
OpenVINO 2023.1 - Model: Person Detection FP32 - Device: CPU (ms, Fewer Is Better)
  g: 74.58 | f: 74.87 | e: 74.54 | d: 74.81 | c: 42.19 | b: 42.09 | a: 42.24
  min/max: g 67.63/78.73 | f 66.72/80.96 | e 65.97/82.9 | d 66.88/80.7 | c 36.21/65.64 | b 37.13/58.71 | a 36.59/61.56
OpenVINO 2023.1 - Model: Person Detection FP32 - Device: CPU (FPS, More Is Better)
  g: 107.24 | f: 106.76 | e: 107.24 | d: 106.90 | c: 284.31 | b: 284.99 | a: 283.97
OpenVINO 2023.1 - Model: Machine Translation EN To DE FP16 - Device: CPU (ms, Fewer Is Better)
  g: 64.77 | f: 64.31 | e: 64.68 | d: 64.41 | c: 37.79 | b: 37.79 | a: 37.80
  min/max: g 55.8/69.46 | f 50.85/70.77 | e 38.02/72.52 | d 37.44/73.04 | c 33.29/54.88 | b 32.97/53.7 | a 33.35/56.45
OpenVINO 2023.1 - Model: Machine Translation EN To DE FP16 - Device: CPU (FPS, More Is Better)
  g: 123.41 | f: 124.30 | e: 123.61 | d: 124.12 | c: 317.33 | b: 317.28 | a: 317.22
OpenVINO 2023.1 - Model: Road Segmentation ADAS FP16-INT8 - Device: CPU (ms, Fewer Is Better)
  g: 21.47 | f: 21.65 | e: 21.40 | d: 21.57 | c: 14.12 | b: 14.03 | a: 14.23
  min/max: g 17.62/28.13 | f 19.48/24.27 | e 19.07/25.3 | d 19.5/24.76 | c 11.51/26.04 | b 11.59/26.04 | a 11.51/25.86
OpenVINO 2023.1 - Model: Road Segmentation ADAS FP16-INT8 - Device: CPU (FPS, More Is Better)
  g: 372.26 | f: 369.26 | e: 373.64 | d: 370.57 | c: 849.30 | b: 854.51 | a: 842.91
OpenVINO 2023.1 - Model: Person Vehicle Bike Detection FP16 - Device: CPU (ms, Fewer Is Better)
  g: 7.74 | f: 7.67 | e: 7.77 | d: 7.70 | c: 4.88 | b: 4.89 | a: 4.88
  min/max: g 6.06/12.66 | f 5.32/16.6 | e 5.42/16.35 | d 5.51/16.06 | c 3.9/14.94 | b 3.93/13.44 | a 3.95/16.05
OpenVINO 2023.1 - Model: Person Vehicle Bike Detection FP16 - Device: CPU (FPS, More Is Better)
  g: 1031.60 | f: 1041.87 | e: 1028.64 | d: 1036.99 | c: 2455.51 | b: 2450.26 | a: 2454.09
OpenVINO 2023.1 - Model: Road Segmentation ADAS FP16 - Device: CPU (ms, Fewer Is Better)
  g: 23.42 | f: 23.28 | e: 23.32 | d: 23.20 | c: 15.83 | b: 15.98 | a: 16.02
  min/max: g 20.46/32.43 | f 15.73/30.77 | e 19.49/30.99 | d 15.1/31.6 | c 12.38/32.97 | b 12.74/33.34 | a 12.5/33.94
OpenVINO 2023.1 - Model: Road Segmentation ADAS FP16 - Device: CPU (FPS, More Is Better)
  g: 341.36 | f: 343.49 | e: 342.81 | d: 344.67 | c: 757.38 | b: 750.49 | a: 748.44
OpenVINO 2023.1 - Model: Face Detection Retail FP16-INT8 - Device: CPU (ms, Fewer Is Better)
  g: 4.52 | f: 4.50 | e: 4.51 | d: 4.51 | c: 4.86 | b: 4.85 | a: 4.86
  min/max: g 2.77/13.57 | f 2.98/13.86 | e 2.96/16.06 | d 2.98/13.05 | c 4.34/12.27 | b 4.25/12.86 | a 4.23/12.81
OpenVINO 2023.1 - Model: Face Detection Retail FP16-INT8 - Device: CPU (FPS, More Is Better)
  g: 3533.64 | f: 3548.78 | e: 3544.18 | d: 3540.88 | c: 9845.27 | b: 9849.07 | a: 9837.58
OpenVINO 2023.1 - Model: Handwritten English Recognition FP16-INT8 - Device: CPU (ms, Fewer Is Better)
  g: 36.98 | f: 37.01 | e: 36.98 | d: 40.40 | c: 38.75 | b: 38.66 | a: 38.50
  min/max: g 32.61/41.91 | f 32.25/43.6 | e 32.02/44.78 | d 26.93/74.83 | c 37.46/43.52 | b 37.22/43.52 | a 36.77/44.23
OpenVINO 2023.1 - Model: Handwritten English Recognition FP16-INT8 - Device: CPU (FPS, More Is Better)
  g: 432.20 | f: 431.94 | e: 432.32 | d: 395.66 | c: 1237.29 | b: 1239.67 | a: 1244.69
OpenVINO 2023.1 - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (ms, Fewer Is Better)
  g: 0.35 | f: 0.35 | e: 0.35 | d: 0.35 | c: 0.34 | b: 0.34 | a: 0.34
  min/max: g 0.23/8.63 | f 0.23/9.15 | e 0.23/8.84 | d 0.23/9.09 | c 0.29/7.09 | b 0.29/10.87 | a 0.29/7.33
OpenVINO 2023.1 - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (FPS, More Is Better)
  g: 45097.99 | f: 44968.43 | e: 44933.27 | d: 44958.07 | c: 123484.28 | b: 120728.22 | a: 120606.38
OpenVINO 2023.1 - Model: Handwritten English Recognition FP16 - Device: CPU (ms, Fewer Is Better)
  g: 29.72 | f: 29.95 | e: 30.10 | d: 30.02 | c: 30.89 | b: 31.00 | a: 30.72
  min/max: g 19.46/38.99 | f 19.01/38.08 | e 22.61/39.15 | d 18.78/38.72 | c 29.48/36.29 | b 29.59/36.33 | a 29.51/35.07
OpenVINO 2023.1 - Model: Handwritten English Recognition FP16 - Device: CPU (FPS, More Is Better)
  g: 538.01 | f: 533.74 | e: 530.99 | d: 532.59 | c: 1551.63 | b: 1546.02 | a: 1560.03
OpenVINO 2023.1 - Model: Vehicle Detection FP16-INT8 - Device: CPU (ms, Fewer Is Better)
  g: 6.79 | f: 6.76 | e: 6.80 | d: 6.79 | c: 4.16 | b: 4.16 | a: 4.17
  min/max: g 3.79/15.41 | f 4.04/15.47 | e 4.04/15.37 | d 3.8/15.48 | c 3.43/10.26 | b 3.42/11.2 | a 3.39/10.07
OpenVINO 2023.1 - Model: Vehicle Detection FP16-INT8 - Device: CPU (FPS, More Is Better)
  g: 1175.58 | f: 1180.85 | e: 1174.60 | d: 1175.67 | c: 2881.14 | b: 2880.58 | a: 2873.24
OpenVINO 2023.1 - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (ms, Fewer Is Better)
  g: 0.49 | f: 0.49 | e: 0.49 | d: 0.49 | c: 0.54 | b: 0.54 | a: 0.54
  min/max: g 0.3/8.84 | f 0.3/8.2 | e 0.3/9.07 | d 0.3/9.28 | c 0.45/5.03 | b 0.45/7.81 | a 0.45/7.64
OpenVINO 2023.1 - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (FPS, More Is Better)
  g: 32008.03 | f: 31951.64 | e: 32032.06 | d: 32002.62 | c: 86789.80 | b: 87359.23 | a: 86884.64
OpenVINO 2023.1 - Model: Weld Porosity Detection FP16 - Device: CPU (ms, Fewer Is Better)
  g: 15.37 | f: 15.38 | e: 15.36 | d: 15.36 | c: 16.02 | b: 16.02 | a: 16.26
  min/max: g 7.99/23.98 | f 7.99/24 | e 8.02/23.81 | d 8.08/24.34 | c 14.63/33.79 | b 14.41/30.55 | a 14.71/28.14
OpenVINO 2023.1 - Model: Weld Porosity Detection FP16 - Device: CPU (FPS, More Is Better)
  g: 1039.37 | f: 1038.47 | e: 1039.82 | d: 1039.61 | c: 2987.33 | b: 2986.46 | a: 2945.26
OpenVINO 2023.1 - Model: Vehicle Detection FP16 - Device: CPU (ms, Fewer Is Better)
  g: 10.06 | f: 10.09 | e: 10.06 | d: 10.01 | c: 5.90 | b: 5.91 | a: 5.89
  min/max: g 5.2/19.38 | f 5.4/19.17 | e 5.29/19.07 | d 5.7/19.52 | c 4.83/13.4 | b 4.84/12.9 | a 4.67/18.4
OpenVINO 2023.1 - Model: Vehicle Detection FP16 - Device: CPU (FPS, More Is Better)
  g: 793.90 | f: 791.74 | e: 793.75 | d: 797.64 | c: 2029.79 | b: 2028.01 | a: 2033.17
OpenVINO 2023.1 - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (ms, Fewer Is Better)
  g: 7.96 | f: 7.97 | e: 7.96 | d: 7.93 | c: 8.24 | b: 8.27 | a: 8.28
  min/max: g 4.19/14.2 | f 4.37/16.86 | e 4.19/16.59 | d 4.2/16.92 | c 7.62/23.32 | b 7.37/25.18 | a 7.44/23.35
OpenVINO 2023.1 - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (FPS, More Is Better)
  g: 2006.09 | f: 2004.76 | e: 2007.53 | d: 2013.77 | c: 5802.65 | b: 5780.44 | a: 5776.94
OpenVINO 2023.1 - Model: Face Detection Retail FP16 - Device: CPU (ms, Fewer Is Better)
  g: 3.12 | f: 3.14 | e: 3.11 | d: 3.11 | c: 2.05 | b: 2.05 | a: 2.03
  min/max: g 1.88/11.92 | f 1.93/11.65 | e 1.93/9.72 | d 1.94/11.57 | c 1.62/6.96 | b 1.6/7 | a 1.66/7.51
OpenVINO 2023.1 - Model: Face Detection Retail FP16 - Device: CPU (FPS, More Is Better)
  g: 2557.66 | f: 2539.97 | e: 2562.54 | d: 2564.78 | c: 5840.53 | b: 5836.27 | a: 5882.91
OSPRay 2.12 - Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time (Items Per Second, More Is Better)
  g: 6.60085 | f: 6.59563 | e: 6.58270 | d: 6.58745 | c: 16.53500 | b: 16.43650 | a: 16.34680
SPECFEM3D 4.0 - Model: Layered Halfspace (Seconds, Fewer Is Better)
  g: 69.96 | f: 70.54 | e: 70.19 | d: 71.61 | c: 27.49 | b: 28.65 | a: 26.89
SPECFEM3D 4.0 - Model: Water-layered Halfspace (Seconds, Fewer Is Better)
  g: 62.81 | f: 61.28 | e: 62.33 | d: 62.44 | c: 27.06 | b: 29.46 | a: 26.99
  1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi
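As a quick cross-check of the SPECFEM3D results, the lower-is-better timings can be turned into speedup ratios. The sketch below is illustrative only (the `speedup` helper is not part of the Phoronix Test Suite); the timings are copied from the two SPECFEM3D entries above, comparing config g (EPYC 9124) against config a (2 x EPYC 9254).

```python
def speedup(single_socket_seconds: float, dual_socket_seconds: float) -> float:
    """For a fewer-is-better metric, a ratio above 1.0 means the
    dual-socket system finished that many times faster."""
    return single_socket_seconds / dual_socket_seconds

# Layered Halfspace: g = 69.96 s vs a = 26.89 s
layered = speedup(69.96, 26.89)
# Water-layered Halfspace: g = 62.81 s vs a = 26.99 s
water = speedup(62.81, 26.99)
print(round(layered, 2), round(water, 2))  # roughly 2.6x in both models
```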
Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  g: 33.37 | f: 33.28 | e: 33.26 | d: 33.22 | c: 33.46 | b: 33.38 | a: 33.34
Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  g: 239.52 | f: 240.16 | e: 240.23 | d: 240.55 | c: 716.14 | b: 717.97 | a: 718.92
Blender 3.6 - Blend File: BMW27 - Compute: CPU-Only (Seconds, Fewer Is Better)
  g: 72.01 | f: 71.96 | e: 71.44 | d: 72.00 | c: 26.12 | b: 26.24 | a: 26.20
Apache Hadoop 3.3.6 - Operation: Rename - Threads: 100 - Files: 1000000 (Ops per sec, More Is Better)
  g: 85763 | f: 85815 | e: 84360 | d: 81208 | c: 66827 | b: 72129 | a: 73078
Neural Magic DeepSparse 1.5 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  g: 325.51 | f: 324.96 | e: 325.74 | d: 325.88 | c: 347.37 | b: 347.22 | a: 347.66
Neural Magic DeepSparse 1.5 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  g: 24.46 | f: 24.52 | e: 24.47 | d: 24.48 | c: 68.63 | b: 68.66 | a: 68.60
Apache Hadoop 3.3.6 - Operation: Rename - Threads: 50 - Files: 1000000 (Ops per sec, More Is Better)
  g: 84810 | f: 82501 | e: 84041 | d: 83921 | c: 74638 | b: 71679 | a: 73239
Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  g: 15.69 | f: 15.72 | e: 15.62 | d: 15.72 | c: 16.89 | b: 17.07 | a: 16.91
Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  g: 509.14 | f: 508.21 | e: 511.41 | d: 508.09 | c: 1418.90 | b: 1403.07 | a: 1417.07
Apache Hadoop 3.3.6 - Operation: Delete - Threads: 100 - Files: 1000000 (Ops per sec, More Is Better)
  g: 113895 | f: 110803 | e: 113225 | d: 112613 | c: 97031 | b: 86715 | a: 90114
Neural Magic DeepSparse 1.5 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  g: 607.16 | f: 607.82 | e: 607.91 | d: 606.10 | c: 605.92 | b: 605.73 | a: 605.04
Neural Magic DeepSparse 1.5 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  g: 13.07 | f: 13.09 | e: 12.94 | d: 13.07 | c: 39.45 | b: 39.47 | a: 39.50
Apache Hadoop 3.3.6 - Operation: Delete - Threads: 50 - Files: 1000000 (Ops per sec, More Is Better)
  g: 110828 | f: 111198 | e: 113327 | d: 111012 | c: 90147 | b: 97314 | a: 98932
Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  g: 144.11 | f: 143.69 | e: 144.10 | d: 143.76 | c: 145.26 | b: 150.61 | a: 150.59
Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  g: 55.43 | f: 55.54 | e: 55.46 | d: 55.61 | c: 164.61 | b: 159.06 | a: 158.92
Neural Magic DeepSparse 1.5 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  g: 608.72 | f: 606.79 | e: 606.76 | d: 606.58 | c: 605.88 | b: 606.67 | a: 605.76
Neural Magic DeepSparse 1.5 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  g: 13.06 | f: 13.09 | e: 13.12 | d: 13.13 | c: 39.42 | b: 39.45 | a: 39.44
Neural Magic DeepSparse 1.5 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  g: 112.48 | f: 112.41 | e: 112.06 | d: 112.25 | c: 118.78 | b: 118.95 | a: 118.75
Neural Magic DeepSparse 1.5 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  g: 70.93 | f: 71.04 | e: 71.27 | d: 71.14 | c: 201.54 | b: 201.25 | a: 201.39
Intel Open Image Denoise 2.0 - Run: RTLightmap.hdr.4096x4096 - Device: CPU-Only (Images / Sec, More Is Better)
  g: 0.34 | f: 0.34 | e: 0.34 | d: 0.34 | c: 0.87 | b: 0.86 | a: 0.86
Timed Linux Kernel Compilation 6.1 - Build: defconfig (Seconds, Fewer Is Better)
  g: 55.17 | f: 55.15 | e: 55.09 | d: 55.17 | c: 27.41 | b: 27.24 | a: 27.35
Neural Magic DeepSparse 1.5 - Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  g: 31.06 | f: 31.03 | e: 30.99 | d: 31.05 | c: 35.68 | b: 35.64 | a: 35.63
Neural Magic DeepSparse 1.5 - Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  g: 257.28 | f: 257.50 | e: 257.89 | d: 257.27 | c: 671.26 | b: 672.37 | a: 672.46
Apache Hadoop 3.3.6 - Operation: Open - Threads: 100 - Files: 1000000 (Ops per sec, More Is Better)
  g: 1107420 | f: 1303781 | e: 1204819 | d: 1248439 | c: 185874 | b: 173822 | a: 215332
Embree 4.3 - Binary: Pathtracer - Model: Asian Dragon Obj (Frames Per Second, More Is Better)
  g: 22.26 | f: 22.27 | e: 22.29 | d: 22.35
  min/max: g 22.18/22.43 | f 22.2/22.44 | e 22.22/22.46 | d 22.28/22.5
Intel Open Image Denoise 2.1 - Run: RT.hdr_alb_nrm.3840x2160 - Device: CPU-Only (Images / Sec, More Is Better)
  g: 0.72 | f: 0.72 | e: 0.72 | d: 0.72
Intel Open Image Denoise 2.1 - Run: RT.ldr_alb_nrm.3840x2160 - Device: CPU-Only (Images / Sec, More Is Better)
  g: 0.72 | f: 0.72 | e: 0.72 | d: 0.72
Apache Hadoop 3.3.6 - Operation: File Status - Threads: 100 - Files: 1000000 (Ops per sec, More Is Better)
  g: 2049180 | f: 1964637 | e: 235627 | d: 600601 | c: 1893939 | b: 161970 | a: 1886792
Apache Hadoop 3.3.6 - Operation: Open - Threads: 50 - Files: 1000000 (Ops per sec, More Is Better)
  g: 654022 | f: 1221001 | e: 251004 | d: 278319 | c: 683995 | b: 1020408 | a: 1126126
Embree 4.3 - Binary: Pathtracer ISPC - Model: Asian Dragon Obj (Frames Per Second, More Is Better)
  g: 23.71 | f: 23.50 | e: 23.53 | d: 23.35
  min/max: g 23.61/23.93 | f 23.4/23.74 | e 23.43/23.73 | d 23.26/23.57
Apache Hadoop 3.3.6 - Operation: File Status - Threads: 50 - Files: 1000000 (Ops per sec, More Is Better)
  g: 2036660 | f: 1795332 | e: 320924 | d: 1818182 | c: 284252 | b: 1941748 | a: 2173913
easyWave r34 - Input: e2Asean Grid + BengkuluSept2007 Source - Time: 1200 (Seconds, Fewer Is Better)
  g: 37.95 | f: 38.02 | e: 38.07 | d: 38.11
Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  g: 73.19 | f: 73.22 | e: 73.26 | d: 73.31 | c: 74.50 | b: 74.56 | a: 74.32
Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  g: 109.22 | f: 109.09 | e: 109.09 | d: 108.91 | c: 321.51 | b: 321.18 | a: 322.25
Neural Magic DeepSparse 1.5 - Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  g: 49.03 | f: 49.07 | e: 49.01 | d: 49.09 | c: 49.02 | b: 49.11 | a: 49.37
Neural Magic DeepSparse 1.5 - Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  g: 162.99 | f: 162.93 | e: 163.14 | d: 162.85 | c: 489.11 | b: 488.13 | a: 485.67
Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  g: 110.98 | f: 110.89 | e: 111.09 | d: 111.11 | c: 111.03 | b: 110.92 | a: 111.01
Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  g: 71.90 | f: 71.94 | e: 71.91 | d: 71.92 | c: 215.65 | b: 215.93 | a: 215.64
Neural Magic DeepSparse Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.5 Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream g f e d c b a 20 40 60 80 100 109.90 110.00 109.97 110.11 109.58 109.23 109.80
Neural Magic DeepSparse Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.5 Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream g f e d c b a 50 100 150 200 250 72.69 72.57 72.66 72.46 218.52 219.53 218.15
Neural Magic DeepSparse Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.5 Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream g f e d c b a 11 22 33 44 55 48.98 49.07 49.06 48.87 49.21 48.97 49.01
Neural Magic DeepSparse Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.5 Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream g f e d c b a 110 220 330 440 550 163.23 162.90 162.93 163.56 487.05 489.45 489.12
Neural Magic DeepSparse Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.5 Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream g f e d c b a 1.1241 2.2482 3.3723 4.4964 5.6205 4.9787 4.9877 4.9859 4.9960 4.6348 4.6476 4.6508
Neural Magic DeepSparse Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.5 Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream g f e d c b a 1100 2200 3300 4400 5500 1602.52 1600.53 1599.15 1599.21 5153.66 5138.83 5137.01
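The gap between the two configurations can be quantified directly from the figures above. A minimal sketch (Python; the grouping of runs a-c as the dual EPYC 9254 and d-g as the single EPYC 9124 follows the system table in the header, and the script itself is illustrative, not part of the PTS export), using the ResNet-50 Sparse INT8 items/sec results:

```python
# DeepSparse ResNet-50 Sparse INT8 throughput (items/sec) from the chart above.
dual_socket = [5137.01, 5138.83, 5153.66]             # runs a, b, c (2 x EPYC 9254)
single_socket = [1599.21, 1599.15, 1600.53, 1602.52]  # runs d, e, f, g (1 x EPYC 9124)

def mean(xs):
    return sum(xs) / len(xs)

# Ratio of average throughputs between the two configurations.
speedup = mean(dual_socket) / mean(single_socket)
print(f"dual 9254 vs single 9124: {speedup:.2f}x")  # roughly 3.2x
```

With 3x the cores and twice the sockets, the dual-socket system lands at roughly 3.2x the sparse-INT8 throughput here.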
SVT-AV1 1.7 - Encoder Mode: Preset 4 - Input: Bosphorus 4K (Frames Per Second, more is better):
  g: 4.143 | f: 4.138 | e: 4.114 | d: 4.107 | c: 5.049 | b: 5.149 | a: 5.203
  1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
Embree 4.3 - Binary: Pathtracer - Model: Crown (Frames Per Second, more is better):
  g: 21.83 (min 21.69 / max 22.17) | f: 21.77 (min 21.63 / max 22.18) | e: 21.99 (min 21.84 / max 22.32) | d: 21.89 (min 21.74 / max 22.23)
Embree 4.3 - Binary: Pathtracer ISPC - Model: Crown (Frames Per Second, more is better):
  g: 22.42 (min 22.22 / max 22.85) | f: 22.44 (min 22.25 / max 22.78) | e: 22.34 (min 22.15 / max 22.75) | d: 22.39 (min 22.2 / max 22.85)
SPECFEM3D 4.0 - Model: Homogeneous Halfspace (Seconds, fewer is better):
  g: 35.38 | f: 35.54 | e: 35.03 | d: 35.57 | c: 14.81 | b: 14.46 | a: 15.11
  1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi
Liquid-DSP 1.6 - Threads: 96 - Buffer Length: 256 - Filter Length: 512 (samples/s, more is better):
  g: 286530000 | f: 285920000 | e: 285880000 | d: 286250000 | c: 715030000 | b: 718140000 | a: 711640000
  1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
Apache Hadoop 3.3.6 - Operation: Create - Threads: 100 - Files: 1000000 (Ops per sec, more is better):
  g: 70922 | f: 70537 | e: 70057 | d: 71296 | c: 44001 | b: 44437 | a: 46145
Embree 4.1 - Binary: Pathtracer - Model: Asian Dragon Obj (Frames Per Second, more is better):
  g: 22.19 (min 22.12 / max 22.33) | f: 22.15 (min 22.07 / max 22.32) | e: 22.16 (min 22.08 / max 22.35) | d: 22.26 (min 22.18 / max 22.42) | c: 53.69 (min 52.63 / max 55.24) | b: 53.81 (min 52.72 / max 55.86) | a: 53.57 (min 52.17 / max 55.38)
Liquid-DSP 1.6 - Threads: 64 - Buffer Length: 256 - Filter Length: 512 (samples/s, more is better):
  g: 281730000 | f: 283030000 | e: 281830000 | d: 282920000 | c: 622630000 | b: 610950000 | a: 622560000
Liquid-DSP 1.6 - Threads: 96 - Buffer Length: 256 - Filter Length: 32 (samples/s, more is better):
  g: 1065700000 | f: 1065300000 | e: 1065100000 | d: 1065200000 | c: 2999800000 | b: 2995400000 | a: 3005800000
Liquid-DSP 1.6 - Threads: 96 - Buffer Length: 256 - Filter Length: 57 (samples/s, more is better):
  g: 1118200000 | f: 1120500000 | e: 1117800000 | d: 1120800000 | c: 2564900000 | b: 2571100000 | a: 2559800000
Embree 4.3 - Binary: Pathtracer - Model: Asian Dragon (Frames Per Second, more is better):
  g: 24.96 (min 24.9 / max 25.13) | f: 24.89 (min 24.81 / max 25.06) | e: 24.83 (min 24.76 / max 24.96) | d: 24.85 (min 24.78 / max 25)
Liquid-DSP 1.6 - Threads: 32 - Buffer Length: 256 - Filter Length: 512 (samples/s, more is better):
  g: 274070000 | f: 273390000 | e: 273480000 | d: 273760000 | c: 424400000 | b: 429620000 | a: 425810000
Liquid-DSP 1.6 - Threads: 64 - Buffer Length: 256 - Filter Length: 32 (samples/s, more is better):
  g: 1056200000 | f: 1057100000 | e: 1057500000 | d: 1059500000 | c: 2206800000 | b: 2212100000 | a: 2207700000
Liquid-DSP 1.6 - Threads: 64 - Buffer Length: 256 - Filter Length: 57 (samples/s, more is better):
  g: 1099300000 | f: 1094600000 | e: 1095400000 | d: 1093300000 | c: 2010300000 | b: 2001900000 | a: 1994400000
Liquid-DSP 1.6 - Threads: 16 - Buffer Length: 256 - Filter Length: 512 (samples/s, more is better):
  g: 194670000 | f: 194500000 | e: 196040000 | d: 193850000 | c: 214910000 | b: 216150000 | a: 216080000
Liquid-DSP 1.6 - Threads: 8 - Buffer Length: 256 - Filter Length: 512 (samples/s, more is better):
  g: 100170000 | f: 99441000 | e: 97005000 | d: 99594000 | c: 109140000 | b: 108080000 | a: 109870000
Liquid-DSP 1.6 - Threads: 4 - Buffer Length: 256 - Filter Length: 512 (samples/s, more is better):
  g: 49556000 | f: 49977000 | e: 50380000 | d: 50258000 | c: 55165000 | b: 55588000 | a: 52911000
Liquid-DSP 1.6 - Threads: 32 - Buffer Length: 256 - Filter Length: 57 (samples/s, more is better):
  g: 1033400000 | f: 1024600000 | e: 1032000000 | d: 1035000000 | c: 1254800000 | b: 1214200000 | a: 1192100000
Liquid-DSP 1.6 - Threads: 32 - Buffer Length: 256 - Filter Length: 32 (samples/s, more is better):
  g: 1047100000 | f: 1041900000 | e: 1046600000 | d: 1047100000 | c: 1184800000 | b: 1190300000 | a: 1183500000
Liquid-DSP 1.6 - Threads: 16 - Buffer Length: 256 - Filter Length: 32 (samples/s, more is better):
  g: 543050000 | f: 545020000 | e: 545140000 | d: 545360000 | c: 603650000 | b: 602470000 | a: 594230000
Liquid-DSP 1.6 - Threads: 1 - Buffer Length: 256 - Filter Length: 512 (samples/s, more is better):
  g: 12256000 | f: 12681000 | e: 12366000 | d: 12683000 | c: 14225000 | b: 14021000 | a: 13909000
Liquid-DSP 1.6 - Threads: 2 - Buffer Length: 256 - Filter Length: 512 (samples/s, more is better):
  g: 22727000 | f: 25199000 | e: 25207000 | d: 24627000 | c: 28227000 | b: 27736000 | a: 27901000
Liquid-DSP 1.6 - Threads: 16 - Buffer Length: 256 - Filter Length: 57 (samples/s, more is better):
  g: 682070000 | f: 693340000 | e: 692920000 | d: 689150000 | c: 674930000 | b: 692760000 | a: 699740000
Liquid-DSP 1.6 - Threads: 8 - Buffer Length: 256 - Filter Length: 57 (samples/s, more is better):
  g: 357810000 | f: 350450000 | e: 357990000 | d: 363310000 | c: 366990000 | b: 366930000 | a: 369430000
Liquid-DSP 1.6 - Threads: 8 - Buffer Length: 256 - Filter Length: 32 (samples/s, more is better):
  g: 277410000 | f: 276390000 | e: 277780000 | d: 278030000 | c: 306760000 | b: 305110000 | a: 307540000
Liquid-DSP 1.6 - Threads: 4 - Buffer Length: 256 - Filter Length: 57 (samples/s, more is better):
  g: 190750000 | f: 189880000 | e: 191230000 | d: 188930000 | c: 194510000 | b: 196590000 | a: 196220000
Liquid-DSP 1.6 - Threads: 4 - Buffer Length: 256 - Filter Length: 32 (samples/s, more is better):
  g: 138460000 | f: 138580000 | e: 138620000 | d: 138600000 | c: 153670000 | b: 153690000 | a: 153850000
Liquid-DSP 1.6 - Threads: 2 - Buffer Length: 256 - Filter Length: 57 (samples/s, more is better):
  g: 104800000 | f: 105740000 | e: 105480000 | d: 105650000 | c: 118550000 | b: 114010000 | a: 117490000
Liquid-DSP 1.6 - Threads: 2 - Buffer Length: 256 - Filter Length: 32 (samples/s, more is better):
  g: 68678000 | f: 68861000 | e: 68846000 | d: 67054000 | c: 76924000 | b: 77019000 | a: 77181000
Liquid-DSP 1.6 - Threads: 1 - Buffer Length: 256 - Filter Length: 57 (samples/s, more is better):
  g: 52854000 | f: 52879000 | e: 52827000 | d: 52665000 | c: 57519000 | b: 59296000 | a: 59401000
Liquid-DSP 1.6 - Threads: 1 - Buffer Length: 256 - Filter Length: 32 (samples/s, more is better):
  g: 35236000 | f: 35271000 | e: 35315000 | d: 35228000 | c: 39453000 | b: 39486000 | a: 39499000
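The Liquid-DSP thread sweep above doubles as a scaling study. A minimal sketch (Python; illustrative script, not part of the PTS export) derives parallel-scaling efficiency for the filter-length-32 case on run "g" (the 16-core / 32-thread EPYC 9124), using the samples/s figures quoted above:

```python
# Liquid-DSP samples/s for run "g" (buffer 256, filter length 32), by thread count.
g_samples_per_sec = {
    1: 35236000, 2: 68678000, 4: 138460000,
    8: 277410000, 16: 543050000, 32: 1047100000,
}
base = g_samples_per_sec[1]

# Parallel efficiency = (speedup over one thread) / thread count.
efficiency = {t: (rate / base) / t for t, rate in g_samples_per_sec.items()}
for t in sorted(efficiency):
    print(f"{t:2d} threads: {efficiency[t]:.0%} efficient")
```

On these numbers the workload scales almost linearly through all 32 hardware threads (around 93% efficiency at 32 threads); the 64- and 96-thread results barely move because the 9124 has only 32 threads to offer.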
Apache Hadoop 3.3.6 - Operation: Create - Threads: 50 - Files: 1000000 (Ops per sec, more is better):
  g: 72706 | f: 69920 | e: 70897 | d: 72134 | c: 52260 | b: 52119 | a: 53665
Embree 4.1 - Binary: Pathtracer ISPC - Model: Asian Dragon Obj (Frames Per Second, more is better):
  g: 23.88 (min 23.79 / max 24.08) | f: 23.94 (min 23.84 / max 24.16) | e: 23.94 (min 23.84 / max 24.18) | d: 23.87 (min 23.78 / max 24.08) | c: 56.93 (min 55.56 / max 59.67) | b: 56.69 (min 55.42 / max 58.97) | a: 56.49 (min 55.29 / max 58.38)
Embree 4.3 - Binary: Pathtracer ISPC - Model: Asian Dragon (Frames Per Second, more is better):
  g: 27.91 (min 27.81 / max 28.17) | f: 27.83 (min 27.73 / max 28.13) | e: 27.83 (min 27.72 / max 28.1) | d: 27.74 (min 27.64 / max 27.98)
SPECFEM3D 4.0 - Model: Tomographic Model (Seconds, fewer is better):
  g: 27.75 | f: 26.97 | e: 27.46 | d: 27.33 | c: 12.04 | b: 12.10 | a: 12.31
SPECFEM3D 4.0 - Model: Mount St. Helens (Seconds, fewer is better):
  g: 27.70 | f: 26.87 | e: 26.80 | d: 26.74 | c: 11.33 | b: 11.32 | a: 11.02
Remhos 1.0 - Test: Sample Remap Example (Seconds, fewer is better):
  g: 30.75 | f: 30.73 | e: 30.85 | d: 30.76 | c: 16.24 | b: 16.79 | a: 16.35
  1. (CXX) g++ options: -O3 -std=c++11 -lmfem -lHYPRE -lmetis -lrt -lmpi_cxx -lmpi
Embree 4.1 - Binary: Pathtracer - Model: Crown (Frames Per Second, more is better):
  g: 21.58 (min 21.43 / max 21.89) | f: 21.59 (min 21.45 / max 21.84) | e: 21.44 (min 21.3 / max 21.78) | d: 21.48 (min 21.32 / max 21.8) | c: 55.40 (min 53.71 / max 58.99) | b: 55.39 (min 54.02 / max 57.64) | a: 54.90 (min 53.27 / max 57.28)
Kripke 1.2.6 (Throughput FoM, more is better):
  g: 237175700 | f: 236591000 | e: 236243900 | d: 240994500
  1. (CXX) g++ options: -O3 -fopenmp -ldl
Embree 4.1 - Binary: Pathtracer ISPC - Model: Crown (Frames Per Second, more is better):
  g: 22.77 (min 22.57 / max 23.16) | f: 22.66 (min 22.45 / max 22.99) | e: 22.57 (min 22.39 / max 22.93) | d: 22.59 (min 22.39 / max 22.98) | c: 56.81 (min 55.27 / max 59.91) | b: 56.46 (min 54.53 / max 59.89) | a: 56.09 (min 54.05 / max 59.82)
oneDNN 3.3 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
  g: 0.629108 (min 0.6) | f: 0.630325 (min 0.6) | e: 0.633975 (min 0.6) | d: 0.628236 (min 0.6)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
oneDNN 3.3 - Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better):
  g: 3.05458 (min 2.97) | f: 3.05674 (min 2.97) | e: 3.06370 (min 2.97) | d: 3.05991 (min 2.96)
oneDNN 3.3 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, fewer is better):
  g: 3.82381 (min 3.29) | f: 3.81823 (min 3.25) | e: 3.84421 (min 3.27) | d: 3.81576 (min 3.26)
Intel Open Image Denoise 2.0 - Run: RT.hdr_alb_nrm.3840x2160 - Device: CPU-Only (Images / Sec, more is better):
  g: 0.72 | f: 0.72 | e: 0.72 | d: 0.72 | c: 1.83 | b: 1.83 | a: 1.83
Intel Open Image Denoise 2.0 - Run: RT.ldr_alb_nrm.3840x2160 - Device: CPU-Only (Images / Sec, more is better):
  g: 0.72 | f: 0.72 | e: 0.72 | d: 0.72 | c: 1.82 | b: 1.84 | a: 1.84
Embree 4.1 - Binary: Pathtracer - Model: Asian Dragon (Frames Per Second, more is better):
  g: 24.82 (min 24.74 / max 25) | f: 24.70 (min 24.63 / max 24.84) | e: 24.73 (min 24.67 / max 24.86) | d: 24.69 (min 24.62 / max 24.84) | c: 59.79 (min 58.46 / max 62.03) | b: 59.91 (min 58.66 / max 61.96) | a: 60.14 (min 58.97 / max 62)
Embree 4.1 - Binary: Pathtracer ISPC - Model: Asian Dragon (Frames Per Second, more is better):
  g: 28.48 (min 28.37 / max 28.69) | f: 28.32 (min 28.23 / max 28.55) | e: 28.31 (min 28.21 / max 28.56) | d: 28.36 (min 28.26 / max 28.59) | c: 67.50 (min 65.64 / max 71.17) | b: 67.20 (min 65.48 / max 70.41) | a: 67.34 (min 65.61 / max 70.54)
Apache Hadoop 3.3.6 - Operation: Rename - Threads: 100 - Files: 100000 (Ops per sec, more is better):
  g: 80386 | f: 79491 | e: 83822 | d: 82102 | c: 67159 | b: 69348 | a: 75529
Apache Hadoop 3.3.6 - Operation: Rename - Threads: 50 - Files: 100000 (Ops per sec, more is better):
  g: 82237 | f: 81633 | e: 82237 | d: 82372 | c: 77101 | b: 73046 | a: 70522
Apache Hadoop 3.3.6 - Operation: Delete - Threads: 100 - Files: 100000 (Ops per sec, more is better):
  g: 102564 | f: 99404 | e: 98039 | d: 105708 | c: 73475 | b: 90827 | a: 87566
Apache Hadoop 3.3.6 - Operation: Delete - Threads: 50 - Files: 100000 (Ops per sec, more is better):
  g: 103950 | f: 96993 | e: 100604 | d: 101010 | c: 90580 | b: 73801 | a: 91075
oneDNN 3.3 - Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better):
  g: 1.12723 (min 0.93) | f: 1.00136 (min 0.92) | e: 1.14432 (min 1.07) | d: 1.03749 (min 0.92)
oneDNN 3.3 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms, fewer is better):
  g: 2.51441 (min 2.3) | f: 2.49714 (min 2.26) | e: 2.56522 (min 2.32) | d: 2.49408 (min 2.3)
oneDNN 3.3 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
  g: 0.647700 (min 0.57) | f: 0.653182 (min 0.57) | e: 0.657610 (min 0.57) | d: 0.652259 (min 0.57)
Apache Hadoop 3.3.6 - Operation: Open - Threads: 50 - Files: 100000 (Ops per sec, more is better):
  g: 546448 | f: 578035 | e: 552486 | d: 578035 | c: 401606 | b: 469484 | a: 460829
Apache Hadoop 3.3.6 - Operation: Open - Threads: 100 - Files: 100000 (Ops per sec, more is better):
  g: 460829 | f: 523560 | e: 294985 | d: 529101 | c: 403226 | b: 404858 | a: 420168
Apache Hadoop 3.3.6 - Operation: File Status - Threads: 100 - Files: 100000 (Ops per sec, more is better):
  g: 487805 | f: 478469 | e: 613497 | d: 591716 | c: 729927 | b: 458716 | a: 515464
Apache Hadoop 3.3.6 - Operation: File Status - Threads: 50 - Files: 100000 (Ops per sec, more is better):
  g: 561798 | f: 709220 | e: 389105 | d: 632911 | c: 657895 | b: 862069 | a: 529101
SVT-AV1 1.7 - Encoder Mode: Preset 4 - Input: Bosphorus 1080p (Frames Per Second, more is better):
  g: 11.02 | f: 10.74 | e: 10.98 | d: 10.91 | c: 12.62 | b: 12.59 | a: 12.48
Apache Hadoop 3.3.6 - Operation: Create - Threads: 100 - Files: 100000 (Ops per sec, more is better):
  g: 58928 | f: 59382 | e: 58824 | d: 57971 | c: 35075 | b: 37425 | a: 40733
Apache Hadoop 3.3.6 - Operation: Create - Threads: 50 - Files: 100000 (Ops per sec, more is better):
  g: 60680 | f: 58343 | e: 58617 | d: 58617 | c: 43937 | b: 41288 | a: 43649
oneDNN 3.3 - Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better):
  g: 1.04567 (min 0.98) | f: 1.06144 (min 0.98) | e: 1.05425 (min 0.97) | d: 1.02875 (min 0.96)
oneDNN 3.3 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, fewer is better):
  g: 1.27918 (min 1.24) | f: 1.20653 (min 1.18) | e: 1.28043 (min 1.24) | d: 1.25758 (min 1.21)
oneDNN 3.3 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
  g: 0.612320 (min 0.53) | f: 0.600834 (min 0.53) | e: 0.575794 (min 0.52) | d: 0.603950 (min 0.53)
SVT-AV1 1.7 - Encoder Mode: Preset 8 - Input: Bosphorus 4K (Frames Per Second, more is better):
  g: 67.81 | f: 67.39 | e: 67.72 | d: 66.99 | c: 90.42 | b: 91.32 | a: 90.81
oneDNN 3.3 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, fewer is better):
  g: 2.11813 (min 1.99) | f: 2.13062 (min 1.97) | e: 2.12570 (min 2.01) | d: 2.13332 (min 2)
oneDNN 3.3 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
  g: 1.55118 (min 1.52) | f: 1.57282 (min 1.53) | e: 1.54911 (min 1.51) | d: 1.55824 (min 1.51)
oneDNN 3.3 - Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better):
  g: 1.33564 (min 1.31) | f: 1.34183 (min 1.31) | e: 1.33861 (min 1.31) | d: 1.33789 (min 1.31)
SVT-AV1 1.7 - Encoder Mode: Preset 8 - Input: Bosphorus 1080p (Frames Per Second, more is better):
  g: 118.48 | f: 118.49 | e: 119.31 | d: 118.95 | c: 143.55 | b: 138.34 | a: 141.22
SVT-AV1 1.7 - Encoder Mode: Preset 13 - Input: Bosphorus 4K (Frames Per Second, more is better):
  g: 161.32 | f: 160.80 | e: 162.05 | d: 161.85 | c: 161.50 | b: 166.69 | a: 163.01
SVT-AV1 1.7 - Encoder Mode: Preset 12 - Input: Bosphorus 4K (Frames Per Second, more is better):
  g: 160.32 | f: 161.85 | e: 162.61 | d: 163.19 | c: 163.06 | b: 166.38 | a: 163.46
oneDNN 3.3 - Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better):
  g: 1.91274 (min 1.88) | f: 1.91422 (min 1.88) | e: 1.91781 (min 1.88) | d: 1.91374 (min 1.88)
oneDNN 3.3 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
  g: 0.843492 (min 0.83) | f: 0.850691 (min 0.83) | e: 0.844434 (min 0.83) | d: 0.847805 (min 0.83)
oneDNN 3.3 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, fewer is better):
  g: 3.38156 (min 3.33) | f: 3.37956 (min 3.33) | e: 3.38436 (min 3.33) | d: 3.37782 (min 3.33)
easyWave r34 - Input: e2Asean Grid + BengkuluSept2007 Source - Time: 240 (Seconds, fewer is better):
  g: 1.648 | f: 1.657 | e: 1.654 | d: 1.657
  1. (CXX) g++ options: -O3 -fopenmp
SVT-AV1 1.7 - Encoder Mode: Preset 12 - Input: Bosphorus 1080p (Frames Per Second, more is better):
  g: 528.53 | f: 521.52 | e: 525.17 | d: 526.22 | c: 431.90 | b: 427.69 | a: 422.99
SVT-AV1 1.7 - Encoder Mode: Preset 13 - Input: Bosphorus 1080p (Frames Per Second, more is better):
  g: 586.75 | f: 585.37 | e: 597.01 | d: 604.99 | c: 516.91 | b: 542.61 | a: 510.36
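One notable inversion in the SVT-AV1 data: at the fast 1080p presets the single-socket EPYC 9124 (runs d-g) actually outruns the dual-socket EPYC 9254 (runs a-c), presumably because the encode is too short and too lightly threaded to feed 96 threads across two sockets. A minimal sketch (Python; illustrative script, not part of the PTS export) quantifies this from the preset 12 / Bosphorus 1080p figures above:

```python
# SVT-AV1 preset 12, Bosphorus 1080p FPS from the chart above.
single_socket = [526.22, 525.17, 521.52, 528.53]  # runs d, e, f, g (1 x EPYC 9124)
dual_socket = [422.99, 427.69, 431.90]            # runs a, b, c (2 x EPYC 9254)

# Ratio of average frame rates: how much faster the single-socket box is here.
advantage = (sum(single_socket) / len(single_socket)) / (
    sum(dual_socket) / len(dual_socket))
print(f"single-socket advantage at preset 12, 1080p: {advantage:.2f}x")
```

On these averages the single-socket system comes out roughly 1.2x ahead, the opposite of the heavily threaded workloads (DeepSparse, Liquid-DSP, SPECFEM3D) where the dual 9254 dominates.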
Phoronix Test Suite v10.8.5