extra tests2: Tests for a future article. AMD EPYC 9124 16-Core (Supermicro H13SSW, 1.1 BIOS) and 2 x AMD EPYC 9254 24-Core (Supermicro H13DSH, 1.5 BIOS) testing with astdrmfb on AlmaLinux 9.2 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2310228-NE-EXTRATEST37&grs&sor.
extra tests2 - System Details

Runs a-c:
  Processor: 2 x AMD EPYC 9254 24-Core @ 2.90GHz (48 Cores / 96 Threads)
  Motherboard: Supermicro H13DSH (1.5 BIOS)
  Memory: 24 x 32 GB DDR5-4800MT/s Samsung M321R4GA3BB6-CQKET
  Disk: 2 x 1920GB SAMSUNG MZQL21T9HCJR-00A07

Runs d-g:
  Processor: AMD EPYC 9124 16-Core @ 3.00GHz (16 Cores / 32 Threads)
  Motherboard: Supermicro H13SSW (1.1 BIOS)
  Memory: 12 x 64 GB DDR5-4800MT/s HMCG94MEBRA123N

Common to all runs:
  Graphics: astdrmfb
  OS: AlmaLinux 9.2
  Kernel: 5.14.0-284.25.1.el9_2.x86_64 (x86_64)
  Compiler: GCC 11.3.1 20221121
  File-System: ext4
  Screen Resolution: 1024x768

Kernel Details: Transparent Huge Pages: always
Compiler Details: --build=x86_64-redhat-linux --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-host-bind-now --enable-host-pie --enable-initfini-array --enable-languages=c,c++,fortran,lto --enable-link-serialization=1 --enable-multilib --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=x86-64 --with-arch_64=x86-64-v2 --with-build-config=bootstrap-lto --with-gcc-major-version-only --with-linker-hash-style=gnu --with-tune=generic --without-cuda-driver --without-isl
Processor Details: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) on all runs; CPU Microcode: 0xa10113e (runs a-c), 0xa101111 (runs d-g)
Java Details: OpenJDK Runtime Environment (Red_Hat-11.0.20.0.8-1) (build 11.0.20+8-LTS)
Python Details: Python 3.9.16
Security Details: itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; retbleed: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected; srbds: Not affected; tsx_async_abort: Not affected
[Overview table omitted: the side-by-side results matrix for runs a-g, covering Apache Hadoop, Neural Magic DeepSparse, OpenVINO, OSPRay, Liquid-DSP, Blender, SPECFEM3D, Embree, oneDNN, TiDB, SVT-AV1, nekRS, Apache Cassandra, BRL-CAD, OpenVKL, Intel Open Image Denoise, easyWave, Kripke, Remhos, OpenRadioss, and Linux kernel build tests, did not survive the HTML-to-text export in readable form. Per-test results follow below.]
Apache Hadoop 3.3.6 - Operation: File Status - Threads: 50 - Files: 1000000 (Ops per sec, More Is Better)
a: 2173913, g: 2036660, b: 1941748, d: 1818182, f: 1795332, e: 320924, c: 284252
Apache Hadoop 3.3.6 - Operation: Open - Threads: 100 - Files: 1000000 (Ops per sec, More Is Better)
f: 1303781, d: 1248439, e: 1204819, g: 1107420, a: 215332, c: 185874, b: 173822
Apache Hadoop 3.3.6 - Operation: Open - Threads: 50 - Files: 1000000 (Ops per sec, More Is Better)
f: 1221001, a: 1126126, b: 1020408, c: 683995, g: 654022, d: 278319, e: 251004
Neural Magic DeepSparse 1.5 - Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
c: 5153.66, b: 5138.83, a: 5137.01, g: 1602.52, f: 1600.53, d: 1599.21, e: 1599.15
OpenVINO 2023.1 - Model: Handwritten English Recognition FP16-INT8 - Device: CPU (FPS, More Is Better)
a: 1244.69, b: 1239.67, c: 1237.29, e: 432.32, g: 432.20, f: 431.94, d: 395.66
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
a: 49.33, b: 49.17, c: 47.15, d: 16.16, e: 16.14, f: 16.13, g: 16.07
Neural Magic DeepSparse 1.5 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
a: 39.50, b: 39.47, c: 39.45, f: 13.09, g: 13.07, d: 13.07, e: 12.94
Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
b: 219.53, c: 218.52, a: 218.15, g: 72.69, e: 72.66, f: 72.57, d: 72.46
Neural Magic DeepSparse 1.5 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
b: 39.45, a: 39.44, c: 39.42, d: 13.13, e: 13.12, f: 13.09, g: 13.06
Neural Magic DeepSparse 1.5 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
b: 489.45, a: 489.12, c: 487.05, d: 163.56, g: 163.23, e: 162.93, f: 162.90
Neural Magic DeepSparse 1.5 - Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
c: 489.11, b: 488.13, a: 485.67, e: 163.14, g: 162.99, f: 162.93, d: 162.85
Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
b: 215.93, c: 215.65, a: 215.64, f: 71.94, d: 71.92, e: 71.91, g: 71.90
Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
a: 718.92, b: 717.97, c: 716.14, d: 240.55, e: 240.23, f: 240.16, g: 239.52
Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
c: 164.61, b: 159.06, a: 158.92, d: 55.61, f: 55.54, e: 55.46, g: 55.43
Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
a: 322.25, c: 321.51, b: 321.18, g: 109.22, e: 109.09, f: 109.09, d: 108.91
OpenVINO 2023.1 - Model: Handwritten English Recognition FP16 - Device: CPU (FPS, More Is Better)
a: 1560.03, c: 1551.63, b: 1546.02, g: 538.01, f: 533.74, d: 532.59, e: 530.99
OpenVINO 2023.1 - Model: Face Detection FP16 - Device: CPU (FPS, More Is Better)
b: 30.44, c: 30.43, a: 30.41, g: 10.48, f: 10.48, e: 10.47, d: 10.47
OpenVINO 2023.1 - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (FPS, More Is Better)
c: 5802.65, b: 5780.44, a: 5776.94, d: 2013.77, e: 2007.53, g: 2006.09, f: 2004.76
OSPRay 2.12 - Benchmark: particle_volume/ao/real_time (Items Per Second, More Is Better)
c: 15.98720, a: 15.98600, b: 15.97850, g: 5.57553, d: 5.57469, f: 5.57320, e: 5.54107
OSPRay 2.12 - Benchmark: particle_volume/scivis/real_time (Items Per Second, More Is Better)
b: 15.98880, c: 15.97780, a: 15.95280, d: 5.57001, g: 5.56539, e: 5.56353, f: 5.55581
OpenVINO 2023.1 - Model: Weld Porosity Detection FP16 - Device: CPU (FPS, More Is Better)
c: 2987.33, b: 2986.46, a: 2945.26, e: 1039.82, d: 1039.61, g: 1039.37, f: 1038.47
Neural Magic DeepSparse 1.5 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
c: 201.54, a: 201.39, b: 201.25, e: 71.27, d: 71.14, f: 71.04, g: 70.93
Liquid-DSP 1.6 - Threads: 96 - Buffer Length: 256 - Filter Length: 32 (samples/s, More Is Better)
a: 3005800000, c: 2999800000, b: 2995400000, g: 1065700000, f: 1065300000, d: 1065200000, e: 1065100000
1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
Neural Magic DeepSparse 1.5 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
b: 68.66, c: 68.63, a: 68.60, f: 24.52, d: 24.48, e: 24.47, g: 24.46
OpenVINO 2023.1 - Model: Face Detection FP16-INT8 - Device: CPU (FPS, More Is Better)
b: 56.06, c: 56.02, a: 56.01, g: 20.05, d: 20.03, f: 20.01, e: 20.00
Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
c: 1418.90, a: 1417.07, b: 1403.07, e: 511.41, g: 509.14, f: 508.21, d: 508.09
Blender 3.6 - Blend File: Pabellon Barcelona - Compute: CPU-Only (Seconds, Fewer Is Better)
c: 80.41, a: 80.54, b: 80.76, f: 223.95, e: 224.10, g: 224.12, d: 224.15
OpenVINO 2023.1 - Model: Face Detection Retail FP16-INT8 - Device: CPU (FPS, More Is Better)
b: 9849.07, c: 9845.27, a: 9837.58, f: 3548.78, e: 3544.18, d: 3540.88, g: 3533.64
Blender 3.6 - Blend File: Classroom - Compute: CPU-Only (Seconds, Fewer Is Better)
a: 66.42, b: 66.64, c: 66.72, f: 181.70, e: 182.56, d: 182.99, g: 183.29
Blender 3.6 - Blend File: BMW27 - Compute: CPU-Only (Seconds, Fewer Is Better)
c: 26.12, a: 26.20, b: 26.24, e: 71.44, f: 71.96, d: 72.00, g: 72.01
OpenVINO 2023.1 - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (FPS, More Is Better)
c: 123484.28, b: 120728.22, a: 120606.38, g: 45097.99, f: 44968.43, d: 44958.07, e: 44933.27
Blender Blend File: Fishy Cat - Compute: CPU-Only OpenBenchmarking.org Seconds, Fewer Is Better Blender 3.6 Blend File: Fishy Cat - Compute: CPU-Only c b a d f e g 20 40 60 80 100 33.03 33.17 33.22 90.03 90.26 90.31 90.63
OpenVINO Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.1 Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU b a c e g d f 20K 40K 60K 80K 100K 87359.23 86884.64 86789.80 32032.06 32008.03 32002.62 31951.64 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenVINO Model: Person Detection FP32 - Device: CPU OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.1 Model: Person Detection FP32 - Device: CPU b c a g e d f 60 120 180 240 300 284.99 284.31 283.97 107.24 107.24 106.90 106.76 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
SPECFEM3D Model: Layered Halfspace OpenBenchmarking.org Seconds, Fewer Is Better SPECFEM3D 4.0 Model: Layered Halfspace a c b g e f d 16 32 48 64 80 26.89 27.49 28.65 69.96 70.19 70.54 71.61 1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi
OpenVINO Model: Person Detection FP16 - Device: CPU OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.1 Model: Person Detection FP16 - Device: CPU b c a f e g d 60 120 180 240 300 284.22 282.67 282.55 107.39 107.27 107.04 107.02 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
Blender Blend File: Barbershop - Compute: CPU-Only OpenBenchmarking.org Seconds, Fewer Is Better Blender 3.6 Blend File: Barbershop - Compute: CPU-Only c a b f g e d 140 280 420 560 700 254.72 254.88 255.30 667.87 669.09 670.64 670.87
Neural Magic DeepSparse Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.5 Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream a b c e f g d 150 300 450 600 750 672.46 672.37 671.26 257.89 257.50 257.28 257.27
BRL-CAD VGR Performance Metric OpenBenchmarking.org VGR Performance Metric, More Is Better BRL-CAD 7.36 VGR Performance Metric a b c d e f g 170K 340K 510K 680K 850K 772162 768517 762529 298064 296125 295603 295522 1. (CXX) g++ options: -std=c++14 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -ltcl8.6 -lregex_brl -lz_brl -lnetpbm -ldl -lm -ltk8.6
Embree Binary: Pathtracer - Model: Crown OpenBenchmarking.org Frames Per Second, More Is Better Embree 4.1 Binary: Pathtracer - Model: Crown c b a f g d e 12 24 36 48 60 55.40 55.39 54.90 21.59 21.58 21.48 21.44 MIN: 53.71 / MAX: 58.99 MIN: 54.02 / MAX: 57.64 MIN: 53.27 / MAX: 57.28 MIN: 21.45 / MAX: 21.84 MIN: 21.43 / MAX: 21.89 MIN: 21.32 / MAX: 21.8 MIN: 21.3 / MAX: 21.78
OpenVINO Model: Machine Translation EN To DE FP16 - Device: CPU OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.1 Model: Machine Translation EN To DE FP16 - Device: CPU c b a f d e g 70 140 210 280 350 317.33 317.28 317.22 124.30 124.12 123.61 123.41 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenVINO Model: Vehicle Detection FP16 - Device: CPU OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.1 Model: Vehicle Detection FP16 - Device: CPU a c b d g e f 400 800 1200 1600 2000 2033.17 2029.79 2028.01 797.64 793.90 793.75 791.74 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
Intel Open Image Denoise Run: RTLightmap.hdr.4096x4096 - Device: CPU-Only OpenBenchmarking.org Images / Sec, More Is Better Intel Open Image Denoise 2.0 Run: RTLightmap.hdr.4096x4096 - Device: CPU-Only c b a g f e d 0.1958 0.3916 0.5874 0.7832 0.979 0.87 0.86 0.86 0.34 0.34 0.34 0.34
Intel Open Image Denoise Run: RT.ldr_alb_nrm.3840x2160 - Device: CPU-Only OpenBenchmarking.org Images / Sec, More Is Better Intel Open Image Denoise 2.0 Run: RT.ldr_alb_nrm.3840x2160 - Device: CPU-Only b a c g f e d 0.414 0.828 1.242 1.656 2.07 1.84 1.84 1.82 0.72 0.72 0.72 0.72
OSPRay Benchmark: gravity_spheres_volume/dim_512/scivis/real_time OpenBenchmarking.org Items Per Second, More Is Better OSPRay 2.12 Benchmark: gravity_spheres_volume/dim_512/scivis/real_time a c b g e d f 4 8 12 16 20 13.87390 13.83170 13.76660 5.47725 5.46153 5.45329 5.45227
Intel Open Image Denoise Run: RT.hdr_alb_nrm.3840x2160 - Device: CPU-Only OpenBenchmarking.org Images / Sec, More Is Better Intel Open Image Denoise 2.0 Run: RT.hdr_alb_nrm.3840x2160 - Device: CPU-Only c b a g f e d 0.4118 0.8236 1.2354 1.6472 2.059 1.83 1.83 1.83 0.72 0.72 0.72 0.72
OSPRay Benchmark: gravity_spheres_volume/dim_512/ao/real_time OpenBenchmarking.org Items Per Second, More Is Better OSPRay 2.12 Benchmark: gravity_spheres_volume/dim_512/ao/real_time a b c g e f d 4 8 12 16 20 14.23690 14.17830 14.13990 5.62278 5.62040 5.61454 5.60747
Embree Binary: Pathtracer ISPC - Model: Crown OpenBenchmarking.org Frames Per Second, More Is Better Embree 4.1 Binary: Pathtracer ISPC - Model: Crown c b a g f d e 13 26 39 52 65 56.81 56.46 56.09 22.77 22.66 22.59 22.57 MIN: 55.27 / MAX: 59.91 MIN: 54.53 / MAX: 59.89 MIN: 54.05 / MAX: 59.82 MIN: 22.57 / MAX: 23.16 MIN: 22.45 / MAX: 22.99 MIN: 22.39 / MAX: 22.98 MIN: 22.39 / MAX: 22.93
SPECFEM3D Model: Mount St. Helens OpenBenchmarking.org Seconds, Fewer Is Better SPECFEM3D 4.0 Model: Mount St. Helens a b c d e f g 7 14 21 28 35 11.02 11.32 11.33 26.74 26.80 26.87 27.70 1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi
Liquid-DSP Threads: 96 - Buffer Length: 256 - Filter Length: 512 OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 96 - Buffer Length: 256 - Filter Length: 512 b c a g d f e 150M 300M 450M 600M 750M 718140000 715030000 711640000 286530000 286250000 285920000 285880000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OSPRay Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time OpenBenchmarking.org Items Per Second, More Is Better OSPRay 2.12 Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time c b a g f d e 4 8 12 16 20 16.53500 16.43650 16.34680 6.60085 6.59563 6.58745 6.58270
SPECFEM3D Model: Homogeneous Halfspace OpenBenchmarking.org Seconds, Fewer Is Better SPECFEM3D 4.0 Model: Homogeneous Halfspace b c a e g f d 8 16 24 32 40 14.46 14.81 15.11 35.03 35.38 35.54 35.57 1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi
OpenVINO Model: Vehicle Detection FP16-INT8 - Device: CPU OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.1 Model: Vehicle Detection FP16-INT8 - Device: CPU c b a f d g e 600 1200 1800 2400 3000 2881.14 2880.58 2873.24 1180.85 1175.67 1175.58 1174.60 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
Embree Binary: Pathtracer - Model: Asian Dragon OpenBenchmarking.org Frames Per Second, More Is Better Embree 4.1 Binary: Pathtracer - Model: Asian Dragon a b c g e f d 13 26 39 52 65 60.14 59.91 59.79 24.82 24.73 24.70 24.69 MIN: 58.97 / MAX: 62 MIN: 58.66 / MAX: 61.96 MIN: 58.46 / MAX: 62.03 MIN: 24.74 / MAX: 25 MIN: 24.67 / MAX: 24.86 MIN: 24.63 / MAX: 24.84 MIN: 24.62 / MAX: 24.84
Embree Binary: Pathtracer - Model: Asian Dragon Obj OpenBenchmarking.org Frames Per Second, More Is Better Embree 4.1 Binary: Pathtracer - Model: Asian Dragon Obj b c a d g e f 12 24 36 48 60 53.81 53.69 53.57 22.26 22.19 22.16 22.15 MIN: 52.72 / MAX: 55.86 MIN: 52.63 / MAX: 55.24 MIN: 52.17 / MAX: 55.38 MIN: 22.18 / MAX: 22.42 MIN: 22.12 / MAX: 22.33 MIN: 22.08 / MAX: 22.35 MIN: 22.07 / MAX: 22.32
OpenVINO Model: Person Vehicle Bike Detection FP16 - Device: CPU OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.1 Model: Person Vehicle Bike Detection FP16 - Device: CPU c a b f d g e 500 1000 1500 2000 2500 2455.51 2454.09 2450.26 1041.87 1036.99 1031.60 1028.64 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
Embree Binary: Pathtracer ISPC - Model: Asian Dragon Obj OpenBenchmarking.org Frames Per Second, More Is Better Embree 4.1 Binary: Pathtracer ISPC - Model: Asian Dragon Obj c b a e f g d 13 26 39 52 65 56.93 56.69 56.49 23.94 23.94 23.88 23.87 MIN: 55.56 / MAX: 59.67 MIN: 55.42 / MAX: 58.97 MIN: 55.29 / MAX: 58.38 MIN: 23.84 / MAX: 24.18 MIN: 23.84 / MAX: 24.16 MIN: 23.79 / MAX: 24.08 MIN: 23.78 / MAX: 24.08
Embree Binary: Pathtracer ISPC - Model: Asian Dragon OpenBenchmarking.org Frames Per Second, More Is Better Embree 4.1 Binary: Pathtracer ISPC - Model: Asian Dragon c a b g d f e 15 30 45 60 75 67.50 67.34 67.20 28.48 28.36 28.32 28.31 MIN: 65.64 / MAX: 71.17 MIN: 65.61 / MAX: 70.54 MIN: 65.48 / MAX: 70.41 MIN: 28.37 / MAX: 28.69 MIN: 28.26 / MAX: 28.59 MIN: 28.23 / MAX: 28.55 MIN: 28.21 / MAX: 28.56
SPECFEM3D Model: Water-layered Halfspace OpenBenchmarking.org Seconds, Fewer Is Better SPECFEM3D 4.0 Model: Water-layered Halfspace a c b f e d g 14 28 42 56 70 26.99 27.06 29.46 61.28 62.33 62.44 62.81 1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi
OpenVINO Model: Face Detection Retail FP16 - Device: CPU OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.1 Model: Face Detection Retail FP16 - Device: CPU a c b d e g f 1300 2600 3900 5200 6500 5882.91 5840.53 5836.27 2564.78 2562.54 2557.66 2539.97 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenVINO Model: Road Segmentation ADAS FP16-INT8 - Device: CPU OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.1 Model: Road Segmentation ADAS FP16-INT8 - Device: CPU b c a e g d f 200 400 600 800 1000 854.51 849.30 842.91 373.64 372.26 370.57 369.26 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
SPECFEM3D Model: Tomographic Model OpenBenchmarking.org Seconds, Fewer Is Better SPECFEM3D 4.0 Model: Tomographic Model c b a f d e g 7 14 21 28 35 12.04 12.10 12.31 26.97 27.33 27.46 27.75 1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi
Liquid-DSP Threads: 96 - Buffer Length: 256 - Filter Length: 57 OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 96 - Buffer Length: 256 - Filter Length: 57 b c a d f g e 600M 1200M 1800M 2400M 3000M 2571100000 2564900000 2559800000 1120800000 1120500000 1118200000 1117800000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenVINO Model: Road Segmentation ADAS FP16 - Device: CPU OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.1 Model: Road Segmentation ADAS FP16 - Device: CPU c b a d f e g 160 320 480 640 800 757.38 750.49 748.44 344.67 343.49 342.81 341.36 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
Apache Hadoop Operation: File Status - Threads: 50 - Files: 100000 OpenBenchmarking.org Ops per sec, More Is Better Apache Hadoop 3.3.6 Operation: File Status - Threads: 50 - Files: 100000 b f c d g a e 200K 400K 600K 800K 1000K 862069 709220 657895 632911 561798 529101 389105
Liquid-DSP Threads: 64 - Buffer Length: 256 - Filter Length: 512 OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 64 - Buffer Length: 256 - Filter Length: 512 c a b f d e g 130M 260M 390M 520M 650M 622630000 622560000 610950000 283030000 282920000 281830000 281730000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
Liquid-DSP Threads: 64 - Buffer Length: 256 - Filter Length: 32 OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 64 - Buffer Length: 256 - Filter Length: 32 b a c d e f g 500M 1000M 1500M 2000M 2500M 2212100000 2207700000 2206800000 1059500000 1057500000 1057100000 1056200000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
Timed Linux Kernel Compilation Build: defconfig OpenBenchmarking.org Seconds, Fewer Is Better Timed Linux Kernel Compilation 6.1 Build: defconfig b a c e f g d 12 24 36 48 60 27.24 27.35 27.41 55.09 55.15 55.17 55.17
Apache Hadoop Operation: File Status - Threads: 100 - Files: 1000000 OpenBenchmarking.org Ops per sec, More Is Better Apache Hadoop 3.3.6 Operation: File Status - Threads: 100 - Files: 1000000 g f c a d e b 400K 800K 1200K 1600K 2000K 2049180 1964637 1893939 1886792 600601 235627 161970
OpenVINO Model: Face Detection FP16 - Device: CPU OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.1 Model: Face Detection FP16 - Device: CPU b c a g f e d 160 320 480 640 800 393.23 393.37 393.60 759.92 760.57 761.16 761.59 MIN: 360.87 / MAX: 433.13 MIN: 362.57 / MAX: 433.51 MIN: 363.29 / MAX: 431.61 MIN: 737.63 / MAX: 771.07 MIN: 741.4 / MAX: 770.88 MIN: 741.99 / MAX: 776.56 MIN: 738.34 / MAX: 772.36 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
Remhos Test: Sample Remap Example OpenBenchmarking.org Seconds, Fewer Is Better Remhos 1.0 Test: Sample Remap Example c a b f g d e 7 14 21 28 35 16.24 16.35 16.79 30.73 30.75 30.76 30.85 1. (CXX) g++ options: -O3 -std=c++11 -lmfem -lHYPRE -lmetis -lrt -lmpi_cxx -lmpi
OpenVINO Model: Face Detection FP16-INT8 - Device: CPU OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.1 Model: Face Detection FP16-INT8 - Device: CPU b c a g d e f 90 180 270 360 450 213.62 213.79 213.94 398.13 398.52 398.91 399.24 MIN: 197.2 / MAX: 235.23 MIN: 197.29 / MAX: 236.32 MIN: 201.64 / MAX: 242.71 MIN: 379.09 / MAX: 404.71 MIN: 382.1 / MAX: 404.98 MIN: 386.2 / MAX: 407.29 MIN: 387.9 / MAX: 408.93 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
Liquid-DSP Threads: 64 - Buffer Length: 256 - Filter Length: 57 OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 64 - Buffer Length: 256 - Filter Length: 57 c b a g e f d 400M 800M 1200M 1600M 2000M 2010300000 2001900000 1994400000 1099300000 1095400000 1094600000 1093300000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
Apache Hadoop Operation: Open - Threads: 100 - Files: 100000 OpenBenchmarking.org Ops per sec, More Is Better Apache Hadoop 3.3.6 Operation: Open - Threads: 100 - Files: 100000 d f g a b c e 110K 220K 330K 440K 550K 529101 523560 460829 420168 404858 403226 294985
OpenVINO Model: Person Detection FP32 - Device: CPU OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.1 Model: Person Detection FP32 - Device: CPU b c a e g d f 20 40 60 80 100 42.09 42.19 42.24 74.54 74.58 74.81 74.87 MIN: 37.13 / MAX: 58.71 MIN: 36.21 / MAX: 65.64 MIN: 36.59 / MAX: 61.56 MIN: 65.97 / MAX: 82.9 MIN: 67.63 / MAX: 78.73 MIN: 66.88 / MAX: 80.7 MIN: 66.72 / MAX: 80.96 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenVINO Model: Person Detection FP16 - Device: CPU OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.1 Model: Person Detection FP16 - Device: CPU b c a f e d g 20 40 60 80 100 42.20 42.43 42.44 74.43 74.50 74.71 74.71 MIN: 36.84 / MAX: 61.97 MIN: 36.31 / MAX: 62.36 MIN: 36.14 / MAX: 61.98 MIN: 65.68 / MAX: 83.49 MIN: 66.5 / MAX: 80.32 MIN: 66.12 / MAX: 81.09 MIN: 66.29 / MAX: 79.68 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenVINO Model: Machine Translation EN To DE FP16 - Device: CPU OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.1 Model: Machine Translation EN To DE FP16 - Device: CPU b c a f d e g 14 28 42 56 70 37.79 37.79 37.80 64.31 64.41 64.68 64.77 MIN: 32.97 / MAX: 53.7 MIN: 33.29 / MAX: 54.88 MIN: 33.35 / MAX: 56.45 MIN: 50.85 / MAX: 70.77 MIN: 37.44 / MAX: 73.04 MIN: 38.02 / MAX: 72.52 MIN: 55.8 / MAX: 69.46 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenVINO Model: Vehicle Detection FP16 - Device: CPU OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.1 Model: Vehicle Detection FP16 - Device: CPU a c b d e g f 3 6 9 12 15 5.89 5.90 5.91 10.01 10.06 10.06 10.09 MIN: 4.67 / MAX: 18.4 MIN: 4.83 / MAX: 13.4 MIN: 4.84 / MAX: 12.9 MIN: 5.7 / MAX: 19.52 MIN: 5.29 / MAX: 19.07 MIN: 5.2 / MAX: 19.38 MIN: 5.4 / MAX: 19.17 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
Apache Hadoop Operation: Create - Threads: 100 - Files: 100000 OpenBenchmarking.org Ops per sec, More Is Better Apache Hadoop 3.3.6 Operation: Create - Threads: 100 - Files: 100000 f g e d a b c 13K 26K 39K 52K 65K 59382 58928 58824 57971 40733 37425 35075
OpenVINO Model: Vehicle Detection FP16-INT8 - Device: CPU OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.1 Model: Vehicle Detection FP16-INT8 - Device: CPU b c a f d g e 2 4 6 8 10 4.16 4.16 4.17 6.76 6.79 6.79 6.80 MIN: 3.42 / MAX: 11.2 MIN: 3.43 / MAX: 10.26 MIN: 3.39 / MAX: 10.07 MIN: 4.04 / MAX: 15.47 MIN: 3.8 / MAX: 15.48 MIN: 3.79 / MAX: 15.41 MIN: 4.04 / MAX: 15.37 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
Apache Hadoop Operation: Create - Threads: 100 - Files: 1000000 OpenBenchmarking.org Ops per sec, More Is Better Apache Hadoop 3.3.6 Operation: Create - Threads: 100 - Files: 1000000 d g f e a b c 15K 30K 45K 60K 75K 71296 70922 70537 70057 46145 44437 44001
OpenVINO Model: Person Vehicle Bike Detection FP16 - Device: CPU OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.1 Model: Person Vehicle Bike Detection FP16 - Device: CPU a c b f d g e 2 4 6 8 10 4.88 4.88 4.89 7.67 7.70 7.74 7.77 MIN: 3.95 / MAX: 16.05 MIN: 3.9 / MAX: 14.94 MIN: 3.93 / MAX: 13.44 MIN: 5.32 / MAX: 16.6 MIN: 5.51 / MAX: 16.06 MIN: 6.06 / MAX: 12.66 MIN: 5.42 / MAX: 16.35 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
Apache Hadoop Operation: File Status - Threads: 100 - Files: 100000 OpenBenchmarking.org Ops per sec, More Is Better Apache Hadoop 3.3.6 Operation: File Status - Threads: 100 - Files: 100000 c e d a g f b 160K 320K 480K 640K 800K 729927 613497 591716 515464 487805 478469 458716
Liquid-DSP Threads: 32 - Buffer Length: 256 - Filter Length: 512 OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 32 - Buffer Length: 256 - Filter Length: 512 b a c g d e f 90M 180M 270M 360M 450M 429620000 425810000 424400000 274070000 273760000 273480000 273390000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenVINO Model: Face Detection Retail FP16 - Device: CPU OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.1 Model: Face Detection Retail FP16 - Device: CPU a b c d e g f 0.7065 1.413 2.1195 2.826 3.5325 2.03 2.05 2.05 3.11 3.11 3.12 3.14 MIN: 1.66 / MAX: 7.51 MIN: 1.6 / MAX: 7 MIN: 1.62 / MAX: 6.96 MIN: 1.94 / MAX: 11.57 MIN: 1.93 / MAX: 9.72 MIN: 1.88 / MAX: 11.92 MIN: 1.93 / MAX: 11.65 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenVINO Model: Road Segmentation ADAS FP16-INT8 - Device: CPU OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.1 Model: Road Segmentation ADAS FP16-INT8 - Device: CPU b c a e g d f 5 10 15 20 25 14.03 14.12 14.23 21.40 21.47 21.57 21.65 MIN: 11.59 / MAX: 26.04 MIN: 11.51 / MAX: 26.04 MIN: 11.51 / MAX: 25.86 MIN: 19.07 / MAX: 25.3 MIN: 17.62 / MAX: 28.13 MIN: 19.5 / MAX: 24.76 MIN: 19.48 / MAX: 24.27 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
TiDB Community Server Test: oltp_read_write - Threads: 128 OpenBenchmarking.org Queries Per Second, More Is Better TiDB Community Server 7.3 Test: oltp_read_write - Threads: 128 b a f e g d 20K 40K 60K 80K 100K 89099 85757 60310 60145 59944 59727
TiDB Community Server Test: oltp_read_write - Threads: 64 OpenBenchmarking.org Queries Per Second, More Is Better TiDB Community Server 7.3 Test: oltp_read_write - Threads: 64 b a c d g f e 20K 40K 60K 80K 100K 80183 79090 78469 55334 55301 54956 53893
OpenVINO Model: Road Segmentation ADAS FP16 - Device: CPU OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.1 Model: Road Segmentation ADAS FP16 - Device: CPU c b a d f e g 6 12 18 24 30 15.83 15.98 16.02 23.20 23.28 23.32 23.42 MIN: 12.38 / MAX: 32.97 MIN: 12.74 / MAX: 33.34 MIN: 12.5 / MAX: 33.94 MIN: 15.1 / MAX: 31.6 MIN: 15.73 / MAX: 30.77 MIN: 19.49 / MAX: 30.99 MIN: 20.46 / MAX: 32.43 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
Apache Hadoop Operation: Create - Threads: 50 - Files: 100000 OpenBenchmarking.org Ops per sec, More Is Better Apache Hadoop 3.3.6 Operation: Create - Threads: 50 - Files: 100000 g e d f c a b 13K 26K 39K 52K 65K 60680 58617 58617 58343 43937 43649 41288
Apache Hadoop Operation: Open - Threads: 50 - Files: 100000 OpenBenchmarking.org Ops per sec, More Is Better Apache Hadoop 3.3.6 Operation: Open - Threads: 50 - Files: 100000 f d e g b a c 120K 240K 360K 480K 600K 578035 578035 552486 546448 469484 460829 401606
Apache Hadoop Operation: Delete - Threads: 100 - Files: 100000 OpenBenchmarking.org Ops per sec, More Is Better Apache Hadoop 3.3.6 Operation: Delete - Threads: 100 - Files: 100000 d g f e b a c 20K 40K 60K 80K 100K 105708 102564 99404 98039 90827 87566 73475
OSPRay Benchmark: particle_volume/pathtracer/real_time OpenBenchmarking.org Items Per Second, More Is Better OSPRay 2.12 Benchmark: particle_volume/pathtracer/real_time a c b d f g e 50 100 150 200 250 215.10 214.14 214.07 151.91 151.78 151.68 151.51
Apache Hadoop Operation: Delete - Threads: 50 - Files: 100000 OpenBenchmarking.org Ops per sec, More Is Better Apache Hadoop 3.3.6 Operation: Delete - Threads: 50 - Files: 100000 g d e f a c b 20K 40K 60K 80K 100K 103950 101010 100604 96993 91075 90580 73801
Apache Hadoop Operation: Create - Threads: 50 - Files: 1000000 OpenBenchmarking.org Ops per sec, More Is Better Apache Hadoop 3.3.6 Operation: Create - Threads: 50 - Files: 1000000 g d e f a c b 16K 32K 48K 64K 80K 72706 72134 70897 69920 53665 52260 52119
Apache Cassandra Test: Writes OpenBenchmarking.org Op/s, More Is Better Apache Cassandra 4.1.3 Test: Writes c b a d g f e 60K 120K 180K 240K 300K 270480 256661 248095 197866 197092 196287 195798
TiDB Community Server Test: oltp_point_select - Threads: 1 OpenBenchmarking.org Queries Per Second, More Is Better TiDB Community Server 7.3 Test: oltp_point_select - Threads: 1 e f d c b a 1300 2600 3900 5200 6500 5976 5954 5898 4471 4405 4331
SVT-AV1 Encoder Mode: Preset 8 - Input: Bosphorus 4K OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.7 Encoder Mode: Preset 8 - Input: Bosphorus 4K b a c g e f d 20 40 60 80 100 91.32 90.81 90.42 67.81 67.72 67.39 66.99 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
TiDB Community Server Test: oltp_read_write - Threads: 32 OpenBenchmarking.org Queries Per Second, More Is Better TiDB Community Server 7.3 Test: oltp_read_write - Threads: 32 b c a f g d e 13K 26K 39K 52K 65K 61520 59630 58974 47141 46993 46977 46737
Apache Hadoop Operation: Delete - Threads: 100 - Files: 1000000 OpenBenchmarking.org Ops per sec, More Is Better Apache Hadoop 3.3.6 Operation: Delete - Threads: 100 - Files: 1000000 g e d f c a b 20K 40K 60K 80K 100K 113895 113225 112613 110803 97031 90114 86715
TiDB Community Server Test: oltp_update_non_index - Threads: 1 OpenBenchmarking.org Queries Per Second, More Is Better TiDB Community Server 7.3 Test: oltp_update_non_index - Threads: 1 e g f d c a b 400 800 1200 1600 2000 1708 1705 1697 1693 1381 1328 1312
TiDB Community Server Test: oltp_read_write - Threads: 1 OpenBenchmarking.org Queries Per Second, More Is Better TiDB Community Server 7.3 Test: oltp_read_write - Threads: 1 f e g a b c 700 1400 2100 2800 3500 3218 3209 3195 2540 2510 2485
Apache Hadoop Operation: Rename - Threads: 100 - Files: 1000000 OpenBenchmarking.org Ops per sec, More Is Better Apache Hadoop 3.3.6 Operation: Rename - Threads: 100 - Files: 1000000 f g e d a b c 20K 40K 60K 80K 100K 85815 85763 84360 81208 73078 72129 66827
TiDB Community Server Test: oltp_update_non_index - Threads: 128 OpenBenchmarking.org Queries Per Second, More Is Better TiDB Community Server 7.3 Test: oltp_update_non_index - Threads: 128 c a e g f 11K 22K 33K 44K 55K 52865 51105 42138 41695 41424
SVT-AV1 Encoder Mode: Preset 4 - Input: Bosphorus 4K OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.7 Encoder Mode: Preset 4 - Input: Bosphorus 4K a b c g f e d 1.1707 2.3414 3.5121 4.6828 5.8535 5.203 5.149 5.049 4.143 4.138 4.114 4.107 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
Apache Hadoop Operation: Delete - Threads: 50 - Files: 1000000 OpenBenchmarking.org Ops per sec, More Is Better Apache Hadoop 3.3.6 Operation: Delete - Threads: 50 - Files: 1000000 e f d g a b c 20K 40K 60K 80K 100K 113327 111198 111012 110828 98932 97314 90147
TiDB Community Server Test: oltp_update_index - Threads: 1 OpenBenchmarking.org Queries Per Second, More Is Better TiDB Community Server 7.3 Test: oltp_update_index - Threads: 1 e f g d a c 300 600 900 1200 1500 1490 1483 1481 1479 1212 1189
SVT-AV1 Encoder Mode: Preset 12 - Input: Bosphorus 1080p OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.7 Encoder Mode: Preset 12 - Input: Bosphorus 1080p g d e f c b a 110 220 330 440 550 528.53 526.22 525.17 521.52 431.90 427.69 422.99 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
Apache Hadoop Operation: Rename - Threads: 100 - Files: 100000 OpenBenchmarking.org Ops per sec, More Is Better Apache Hadoop 3.3.6 Operation: Rename - Threads: 100 - Files: 100000 e d g f a b c 20K 40K 60K 80K 100K 83822 82102 80386 79491 75529 69348 67159
Liquid-DSP Threads: 2 - Buffer Length: 256 - Filter Length: 512 OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 2 - Buffer Length: 256 - Filter Length: 512 c a b e f d g 6M 12M 18M 24M 30M 28227000 27901000 27736000 25207000 25199000 24627000 22727000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
TiDB Community Server Test: oltp_point_select - Threads: 128 OpenBenchmarking.org Queries Per Second, More Is Better TiDB Community Server 7.3 Test: oltp_point_select - Threads: 128 b a c f e d 30K 60K 90K 120K 150K 159728 159242 149962 130389 129904 129492
Liquid-DSP Threads: 32 - Buffer Length: 256 - Filter Length: 57 OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 32 - Buffer Length: 256 - Filter Length: 57 c b a d g e f 300M 600M 900M 1200M 1500M 1254800000 1214200000 1192100000 1035000000 1033400000 1032000000 1024600000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
TiDB Community Server Test: oltp_update_non_index - Threads: 64 OpenBenchmarking.org Queries Per Second, More Is Better TiDB Community Server 7.3 Test: oltp_update_non_index - Threads: 64 a b c f d g e 9K 18K 27K 36K 45K 41281 39759 39106 34470 34224 34107 33881
SVT-AV1 Encoder Mode: Preset 8 - Input: Bosphorus 1080p OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.7 Encoder Mode: Preset 8 - Input: Bosphorus 1080p c a b e d f g 30 60 90 120 150 143.55 141.22 138.34 119.31 118.95 118.49 118.48 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
SVT-AV1 Encoder Mode: Preset 13 - Input: Bosphorus 1080p OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.7 Encoder Mode: Preset 13 - Input: Bosphorus 1080p d e g f b c a 130 260 390 520 650 604.99 597.01 586.75 585.37 542.61 516.91 510.36 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
Apache Hadoop Operation: Rename - Threads: 50 - Files: 1000000 OpenBenchmarking.org Ops per sec, More Is Better Apache Hadoop 3.3.6 Operation: Rename - Threads: 50 - Files: 1000000 g e d f c a b 20K 40K 60K 80K 100K 84810 84041 83921 82501 74638 73239 71679
nekRS Input: TurboPipe Periodic OpenBenchmarking.org flops/rank, More Is Better nekRS 23.0 Input: TurboPipe Periodic g f d e a b c 2000M 4000M 6000M 8000M 10000M 7964910000 7955790000 7934570000 7931010000 6767710000 6757360000 6754170000 1. (CXX) g++ options: -fopenmp -O2 -march=native -mtune=native -ftree-vectorize -rdynamic -lmpi_cxx -lmpi
SVT-AV1 Encoder Mode: Preset 4 - Input: Bosphorus 1080p OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.7 Encoder Mode: Preset 4 - Input: Bosphorus 1080p c b a g e d f 3 6 9 12 15 12.62 12.59 12.48 11.02 10.98 10.91 10.74 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
Apache Hadoop Operation: Rename - Threads: 50 - Files: 100000 OpenBenchmarking.org Ops per sec, More Is Better Apache Hadoop 3.3.6 Operation: Rename - Threads: 50 - Files: 100000 d g e f c b a 20K 40K 60K 80K 100K 82372 82237 82237 81633 77101 73046 70522
Liquid-DSP Threads: 1 - Buffer Length: 256 - Filter Length: 512 OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 1 - Buffer Length: 256 - Filter Length: 512 c b a d f e g 3M 6M 9M 12M 15M 14225000 14021000 13909000 12683000 12681000 12366000 12256000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
TiDB Community Server Test: oltp_update_index - Threads: 64 OpenBenchmarking.org Queries Per Second, More Is Better TiDB Community Server 7.3 Test: oltp_update_index - Threads: 64 b c e d f 5K 10K 15K 20K 25K 24371 23324 21271 21108 21067
Neural Magic DeepSparse 1.5 - Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): e: 30.99, f: 31.03, d: 31.05, g: 31.06, a: 35.63, b: 35.64, c: 35.68
Liquid-DSP 1.6 - Threads: 2 - Buffer Length: 256 - Filter Length: 32 (samples/s, more is better): a: 77181000, b: 77019000, c: 76924000, f: 68861000, e: 68846000, g: 68678000, d: 67054000 [(CC) gcc options: -O3 -pthread -lm -lc -lliquid]
oneDNN 3.3 - Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): f: 1.00136 (min 0.92), d: 1.03749 (min 0.92), g: 1.12723 (min 0.93), e: 1.14432 (min 1.07) [(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl]
Liquid-DSP 1.6 - Threads: 32 - Buffer Length: 256 - Filter Length: 32 (samples/s, more is better): b: 1190300000, c: 1184800000, a: 1183500000, g: 1047100000, d: 1047100000, e: 1046600000, f: 1041900000
Liquid-DSP 1.6 - Threads: 8 - Buffer Length: 256 - Filter Length: 512 (samples/s, more is better): a: 109870000, c: 109140000, b: 108080000, g: 100170000, d: 99594000, f: 99441000, e: 97005000
Liquid-DSP 1.6 - Threads: 2 - Buffer Length: 256 - Filter Length: 57 (samples/s, more is better): c: 118550000, a: 117490000, b: 114010000, f: 105740000, d: 105650000, e: 105480000, g: 104800000
TiDB Community Server 7.3 - Test: oltp_point_select - Threads: 64 (Queries Per Second, more is better): b: 130802, a: 127567, f: 119092, e: 118657, g: 118549, d: 115675
Liquid-DSP 1.6 - Threads: 1 - Buffer Length: 256 - Filter Length: 57 (samples/s, more is better): a: 59401000, b: 59296000, c: 57519000, f: 52879000, g: 52854000, e: 52827000, d: 52665000
nekRS 23.0 - Input: Kershaw (flops/rank, more is better): b: 11240300000, a: 11106900000, c: 10826700000, g: 10500600000, d: 10318900000, e: 10264000000, f: 9976450000 [(CXX) g++ options: -fopenmp -O2 -march=native -mtune=native -ftree-vectorize -rdynamic -lmpi_cxx -lmpi]
Liquid-DSP 1.6 - Threads: 4 - Buffer Length: 256 - Filter Length: 512 (samples/s, more is better): b: 55588000, c: 55165000, a: 52911000, e: 50380000, d: 50258000, f: 49977000, g: 49556000
Liquid-DSP 1.6 - Threads: 1 - Buffer Length: 256 - Filter Length: 32 (samples/s, more is better): a: 39499000, b: 39486000, c: 39453000, e: 35315000, f: 35271000, g: 35236000, d: 35228000
TiDB Community Server 7.3 - Test: oltp_update_index - Threads: 128 (Queries Per Second, more is better): b: 27464, a: 27087, c: 26546, f: 24830, e: 24611, g: 24574
Liquid-DSP 1.6 - Threads: 16 - Buffer Length: 256 - Filter Length: 512 (samples/s, more is better): b: 216150000, a: 216080000, c: 214910000, e: 196040000, g: 194670000, f: 194500000, d: 193850000
Liquid-DSP 1.6 - Threads: 8 - Buffer Length: 256 - Filter Length: 32 (samples/s, more is better): a: 307540000, c: 306760000, b: 305110000, d: 278030000, e: 277780000, g: 277410000, f: 276390000
Liquid-DSP 1.6 - Threads: 16 - Buffer Length: 256 - Filter Length: 32 (samples/s, more is better): c: 603650000, b: 602470000, a: 594230000, d: 545360000, e: 545140000, f: 545020000, g: 543050000
Liquid-DSP 1.6 - Threads: 4 - Buffer Length: 256 - Filter Length: 32 (samples/s, more is better): a: 153850000, b: 153690000, c: 153670000, e: 138620000, d: 138600000, f: 138580000, g: 138460000
OpenVINO 2023.1 - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (ms, fewer is better): d: 0.49 (min 0.3 / max 9.28), e: 0.49 (min 0.3 / max 9.07), f: 0.49 (min 0.3 / max 8.2), g: 0.49 (min 0.3 / max 8.84), a: 0.54 (min 0.45 / max 7.64), b: 0.54 (min 0.45 / max 7.81), c: 0.54 (min 0.45 / max 5.03) [(CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl]
TiDB Community Server 7.3 - Test: oltp_update_non_index - Threads: 32 (Queries Per Second, more is better): b: 28914, a: 28735, f: 26695, e: 26285, d: 26273
TiDB Community Server 7.3 - Test: oltp_point_select - Threads: 32 (Queries Per Second, more is better): b: 106180, a: 104627, d: 98149, f: 97368, e: 96907, g: 96840
OpenVINO 2023.1 - Model: Handwritten English Recognition FP16-INT8 - Device: CPU (ms, fewer is better): e: 36.98 (min 32.02 / max 44.78), g: 36.98 (min 32.61 / max 41.91), f: 37.01 (min 32.25 / max 43.6), a: 38.50 (min 36.77 / max 44.23), b: 38.66 (min 37.22 / max 43.52), c: 38.75 (min 37.46 / max 43.52), d: 40.40 (min 26.93 / max 74.83)
Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): e: 15.62, g: 15.69, f: 15.72, d: 15.72, c: 16.89, a: 16.91, b: 17.07
OpenVINO 2023.1 - Model: Face Detection Retail FP16-INT8 - Device: CPU (ms, fewer is better): f: 4.50 (min 2.98 / max 13.86), d: 4.51 (min 2.98 / max 13.05), e: 4.51 (min 2.96 / max 16.06), g: 4.52 (min 2.77 / max 13.57), b: 4.85 (min 4.25 / max 12.86), a: 4.86 (min 4.23 / max 12.81), c: 4.86 (min 4.34 / max 12.27)
Neural Magic DeepSparse 1.5 - Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): c: 4.6348, b: 4.6476, a: 4.6508, g: 4.9787, e: 4.9859, f: 4.9877, d: 4.9960
TiDB Community Server 7.3 - Test: oltp_point_select - Threads: 16 (Queries Per Second, more is better): e: 70250, f: 70105, g: 69923, b: 67515, c: 65406
TiDB Community Server 7.3 - Test: oltp_update_index - Threads: 32 (Queries Per Second, more is better): a: 18361, b: 17817, d: 17612, c: 17565, g: 17135, e: 17117
Neural Magic DeepSparse 1.5 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): f: 324.96, g: 325.51, e: 325.74, d: 325.88, b: 347.22, c: 347.37, a: 347.66
oneDNN 3.3 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): e: 0.575794 (min 0.52), f: 0.600834 (min 0.53), d: 0.603950 (min 0.53), g: 0.612320 (min 0.53)
TiDB Community Server 7.3 - Test: oltp_read_write - Threads: 16 (Queries Per Second, more is better): a: 38331, c: 37368, b: 36950, e: 36784, d: 36480, f: 36125, g: 36088
Neural Magic DeepSparse 1.5 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): e: 112.06, d: 112.25, f: 112.41, g: 112.48, a: 118.75, c: 118.78, b: 118.95
oneDNN 3.3 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, fewer is better): f: 1.20653 (min 1.18), d: 1.25758 (min 1.21), g: 1.27918 (min 1.24), e: 1.28043 (min 1.24)
OpenVINO 2023.1 - Model: Weld Porosity Detection FP16 - Device: CPU (ms, fewer is better): d: 15.36 (min 8.08 / max 24.34), e: 15.36 (min 8.02 / max 23.81), g: 15.37 (min 7.99 / max 23.98), f: 15.38 (min 7.99 / max 24), b: 16.02 (min 14.41 / max 30.55), c: 16.02 (min 14.63 / max 33.79), a: 16.26 (min 14.71 / max 28.14)
Liquid-DSP 1.6 - Threads: 8 - Buffer Length: 256 - Filter Length: 57 (samples/s, more is better): a: 369430000, c: 366990000, b: 366930000, d: 363310000, e: 357990000, g: 357810000, f: 350450000
Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): f: 143.69, d: 143.76, e: 144.10, g: 144.11, c: 145.26, a: 150.59, b: 150.61
Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): a: 485.72, b: 487.36, d: 493.60, f: 494.22, e: 494.26, g: 495.60, c: 507.48
OpenVINO 2023.1 - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (ms, fewer is better): d: 7.93 (min 4.2 / max 16.92), e: 7.96 (min 4.19 / max 16.59), g: 7.96 (min 4.19 / max 14.2), f: 7.97 (min 4.37 / max 16.86), c: 8.24 (min 7.62 / max 23.32), b: 8.27 (min 7.37 / max 25.18), a: 8.28 (min 7.44 / max 23.35)
OpenVINO 2023.1 - Model: Handwritten English Recognition FP16 - Device: CPU (ms, fewer is better): g: 29.72 (min 19.46 / max 38.99), f: 29.95 (min 19.01 / max 38.08), d: 30.02 (min 18.78 / max 38.72), e: 30.10 (min 22.61 / max 39.15), a: 30.72 (min 29.51 / max 35.07), c: 30.89 (min 29.48 / max 36.29), b: 31.00 (min 29.59 / max 36.33)
Liquid-DSP 1.6 - Threads: 4 - Buffer Length: 256 - Filter Length: 57 (samples/s, more is better): b: 196590000, a: 196220000, c: 194510000, e: 191230000, g: 190750000, f: 189880000, d: 188930000
SVT-AV1 1.7 - Encoder Mode: Preset 12 - Input: Bosphorus 4K (Frames Per Second, more is better): b: 166.38, a: 163.46, d: 163.19, c: 163.06, e: 162.61, f: 161.85, g: 160.32 [(CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq]
TiDB Community Server 7.3 - Test: oltp_update_non_index - Threads: 16 (Queries Per Second, more is better): g: 18735, d: 18563, e: 18557, a: 18095, b: 18068
Liquid-DSP 1.6 - Threads: 16 - Buffer Length: 256 - Filter Length: 57 (samples/s, more is better): a: 699740000, f: 693340000, e: 692920000, b: 692760000, d: 689150000, g: 682070000, c: 674930000
SVT-AV1 1.7 - Encoder Mode: Preset 13 - Input: Bosphorus 4K (Frames Per Second, more is better): b: 166.69, a: 163.01, e: 162.05, d: 161.85, c: 161.50, g: 161.32, f: 160.80
oneDNN 3.3 - Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): d: 1.02875 (min 0.96), g: 1.04567 (min 0.98), e: 1.05425 (min 0.97), f: 1.06144 (min 0.98)
OpenVINO 2023.1 - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (ms, fewer is better): a: 0.34 (min 0.29 / max 7.33), b: 0.34 (min 0.29 / max 10.87), c: 0.34 (min 0.29 / max 7.09), d: 0.35 (min 0.23 / max 9.09), e: 0.35 (min 0.23 / max 8.84), f: 0.35 (min 0.23 / max 9.15), g: 0.35 (min 0.23 / max 8.63)
oneDNN 3.3 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms, fewer is better): d: 2.49408 (min 2.3), f: 2.49714 (min 2.26), g: 2.51441 (min 2.3), e: 2.56522 (min 2.32)
Kripke 1.2.6 (Throughput FoM, more is better): d: 240994500, g: 237175700, f: 236591000, e: 236243900 [(CXX) g++ options: -O3 -fopenmp -ldl]
easyWave r34 - Input: e2Asean Grid + BengkuluSept2007 Source - Time: 2400 (Seconds, fewer is better): g: 97.53, f: 97.99, d: 98.98, e: 99.42 [(CXX) g++ options: -O3 -fopenmp]
Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): g: 73.19, f: 73.22, e: 73.26, d: 73.31, a: 74.32, c: 74.50, b: 74.56
oneDNN 3.3 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): g: 837.60 (min 796.61), d: 849.16 (min 806.44), f: 849.34 (min 805.8), e: 851.66 (min 809.45)
oneDNN 3.3 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, fewer is better): d: 838.52 (min 796.3), g: 848.03 (min 807.34), e: 849.71 (min 805.98), f: 851.49 (min 807.97)
oneDNN 3.3 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): e: 1.54911 (min 1.51), g: 1.55118 (min 1.52), d: 1.55824 (min 1.51), f: 1.57282 (min 1.53)
oneDNN 3.3 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): g: 0.647700 (min 0.57), d: 0.652259 (min 0.57), f: 0.653182 (min 0.57), e: 0.657610 (min 0.57)
Embree 4.3 - Binary: Pathtracer ISPC - Model: Asian Dragon Obj (Frames Per Second, more is better): g: 23.71 (min 23.61 / max 23.93), e: 23.53 (min 23.43 / max 23.73), f: 23.50 (min 23.4 / max 23.74), d: 23.35 (min 23.26 / max 23.57)
TiDB Community Server 7.3 - Test: oltp_update_index - Threads: 16 (Queries Per Second, more is better): f: 12692, c: 12681, g: 12627, d: 12622, e: 12567, a: 12558
Embree 4.3 - Binary: Pathtracer - Model: Crown (Frames Per Second, more is better): e: 21.99 (min 21.84 / max 22.32), d: 21.89 (min 21.74 / max 22.23), g: 21.83 (min 21.69 / max 22.17), f: 21.77 (min 21.63 / max 22.18)
oneDNN 3.3 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): d: 0.628236 (min 0.6), g: 0.629108 (min 0.6), f: 0.630325 (min 0.6), e: 0.633975 (min 0.6)
oneDNN 3.3 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): g: 0.843492 (min 0.83), e: 0.844434 (min 0.83), d: 0.847805 (min 0.83), f: 0.850691 (min 0.83)
Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): b: 109.23, c: 109.58, a: 109.80, g: 109.90, e: 109.97, f: 110.00, d: 110.11
oneDNN 3.3 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): e: 841.08 (min 798.46), f: 845.31 (min 803.78), d: 847.38 (min 806.33), g: 847.42 (min 806.72)
oneDNN 3.3 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, fewer is better): d: 3.81576 (min 3.26), f: 3.81823 (min 3.25), g: 3.82381 (min 3.29), e: 3.84421 (min 3.27)
Neural Magic DeepSparse 1.5 - Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): e: 49.01, c: 49.02, g: 49.03, f: 49.07, d: 49.09, b: 49.11, a: 49.37
Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): d: 33.22, e: 33.26, f: 33.28, a: 33.34, g: 33.37, b: 33.38, c: 33.46
oneDNN 3.3 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, fewer is better): g: 2.11813 (min 1.99), e: 2.12570 (min 2.01), f: 2.13062 (min 1.97), d: 2.13332 (min 2)
Neural Magic DeepSparse 1.5 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): d: 48.87, b: 48.97, g: 48.98, a: 49.01, e: 49.06, f: 49.07, c: 49.21
oneDNN 3.3 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): g: 1631.99 (min 1581.62), f: 1636.44 (min 1585.81), e: 1639.36 (min 1581.93), d: 1642.51 (min 1593.16)
Embree 4.3 - Binary: Pathtracer ISPC - Model: Asian Dragon (Frames Per Second, more is better): g: 27.91 (min 27.81 / max 28.17), e: 27.83 (min 27.72 / max 28.1), f: 27.83 (min 27.73 / max 28.13), d: 27.74 (min 27.64 / max 27.98)
easyWave r34 - Input: e2Asean Grid + BengkuluSept2007 Source - Time: 240 (Seconds, fewer is better): g: 1.648, e: 1.654, d: 1.657, f: 1.657
Embree 4.3 - Binary: Pathtracer - Model: Asian Dragon (Frames Per Second, more is better): g: 24.96 (min 24.9 / max 25.13), f: 24.89 (min 24.81 / max 25.06), d: 24.85 (min 24.78 / max 25), e: 24.83 (min 24.76 / max 24.96)
OpenVKL 2.0.0 - Benchmark: vklBenchmarkCPU Scalar (Items / Sec, more is better): g: 191 (min 13 / max 3483), f: 191 (min 13 / max 3484), d: 191 (min 13 / max 3471), e: 190 (min 13 / max 3484)
Neural Magic DeepSparse 1.5 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): a: 605.76, c: 605.88, d: 606.58, b: 606.67, e: 606.76, f: 606.79, g: 608.72
Neural Magic DeepSparse 1.5 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): a: 605.04, b: 605.73, c: 605.92, d: 606.10, g: 607.16, f: 607.82, e: 607.91
oneDNN 3.3 - Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): g: 1.33564 (min 1.31), d: 1.33789 (min 1.31), e: 1.33861 (min 1.31), f: 1.34183 (min 1.31)
Embree 4.3 - Binary: Pathtracer ISPC - Model: Crown (Frames Per Second, more is better): f: 22.44 (min 22.25 / max 22.78), g: 22.42 (min 22.22 / max 22.85), d: 22.39 (min 22.2 / max 22.85), e: 22.34 (min 22.15 / max 22.75)
Embree 4.3 - Binary: Pathtracer - Model: Asian Dragon Obj (Frames Per Second, more is better): d: 22.35 (min 22.28 / max 22.5), e: 22.29 (min 22.22 / max 22.46), f: 22.27 (min 22.2 / max 22.44), g: 22.26 (min 22.18 / max 22.43)
OpenVKL 2.0.0 - Benchmark: vklBenchmarkCPU ISPC (Items / Sec, more is better): g: 489 (min 36 / max 6969), f: 488 (min 36 / max 6952), e: 487 (min 36 / max 6956), d: 487 (min 36 / max 6949)
easyWave r34 - Input: e2Asean Grid + BengkuluSept2007 Source - Time: 1200 (Seconds, fewer is better): g: 37.95, f: 38.02, e: 38.07, d: 38.11
oneDNN 3.3 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, fewer is better): f: 1636.76 (min 1585.98), g: 1637.37 (min 1584.58), e: 1641.00 (min 1595.55), d: 1641.92 (min 1584.81)
oneDNN 3.3 - Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): g: 3.05458 (min 2.97), f: 3.05674 (min 2.97), d: 3.05991 (min 2.96), e: 3.06370 (min 2.97)
oneDNN 3.3 - Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): g: 1.91274 (min 1.88), d: 1.91374 (min 1.88), f: 1.91422 (min 1.88), e: 1.91781 (min 1.88)
Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): f: 110.89, b: 110.92, g: 110.98, a: 111.01, c: 111.03, e: 111.09, d: 111.11
oneDNN 3.3 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, fewer is better): d: 3.37782 (min 3.33), f: 3.37956 (min 3.33), g: 3.38156 (min 3.33), e: 3.38436 (min 3.33)
oneDNN 3.3 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): g: 1641.40 (min 1589.91), f: 1642.35 (min 1586.17), e: 1643.97 (min 1590.89), d: 1643.99 (min 1588.03)
Intel Open Image Denoise 2.1 - Run: RTLightmap.hdr.4096x4096 - Device: CPU-Only (Images / Sec, more is better): g: 0.34, f: 0.34, e: 0.34, d: 0.34
Intel Open Image Denoise 2.1 - Run: RT.ldr_alb_nrm.3840x2160 - Device: CPU-Only (Images / Sec, more is better): g: 0.72, f: 0.72, e: 0.72, d: 0.72
Intel Open Image Denoise 2.1 - Run: RT.hdr_alb_nrm.3840x2160 - Device: CPU-Only (Images / Sec, more is better): g: 0.72, f: 0.72, e: 0.72, d: 0.72
Phoronix Test Suite v10.8.5