extra tests2: Tests for a future article. Dual AMD EPYC 9254 (Supermicro H13DSH) and single AMD EPYC 9124 (Supermicro H13SSW) testing with astdrmfb on AlmaLinux 9.2 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2310228-NE-EXTRATEST37
System Configurations
  a, b, c: 2 x AMD EPYC 9254 24-Core @ 2.90GHz (48 Cores / 96 Threads), Supermicro H13DSH (1.5 BIOS), 24 x 32 GB DDR5-4800MT/s Samsung M321R4GA3BB6-CQKET
  d, e, f, g: AMD EPYC 9124 16-Core @ 3.00GHz (16 Cores / 32 Threads), Supermicro H13SSW (1.1 BIOS), 12 x 64 GB DDR5-4800MT/s HMCG94MEBRA123N
  Common to all configs: Disk: 2 x 1920GB SAMSUNG MZQL21T9HCJR-00A07, Graphics: astdrmfb, OS: AlmaLinux 9.2, Kernel: 5.14.0-284.25.1.el9_2.x86_64 (x86_64), Compiler: GCC 11.3.1 20221121, File-System: ext4, Screen Resolution: 1024x768

Kernel Details: Transparent Huge Pages: always
Compiler Details: --build=x86_64-redhat-linux --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-host-bind-now --enable-host-pie --enable-initfini-array --enable-languages=c,c++,fortran,lto --enable-link-serialization=1 --enable-multilib --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=x86-64 --with-arch_64=x86-64-v2 --with-build-config=bootstrap-lto --with-gcc-major-version-only --with-linker-hash-style=gnu --with-tune=generic --without-cuda-driver --without-isl
Processor Details:
  a, b, c: Scaling Governor: acpi-cpufreq performance (Boost: Enabled), CPU Microcode: 0xa10113e
  d, e, f, g: Scaling Governor: acpi-cpufreq performance (Boost: Enabled), CPU Microcode: 0xa101111
Java Details: OpenJDK Runtime Environment (Red_Hat-11.0.20.0.8-1) (build 11.0.20+8-LTS)
Python Details: Python 3.9.16
Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
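The Security Details above are exported as a single "+"-separated string. As a convenience, here is a minimal Python sketch (a hypothetical helper, not part of the Phoronix Test Suite) that parses that flattened format into a dictionary:

```python
# Parse a flattened "Security Details" string ("name: status + name: status + ...")
# from a Phoronix Test Suite export into a {vulnerability: status} dict.
def parse_security_details(raw: str) -> dict:
    entries = {}
    for item in raw.split(" + "):          # entries are joined by " + "
        name, _, status = item.partition(": ")  # split on the first ": " only
        entries[name.strip()] = status.strip()
    return entries

raw = ("itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected "
       "+ meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl")
details = parse_security_details(raw)
print(details["meltdown"])           # Not affected
print(details["spec_store_bypass"])  # Mitigation of SSB disabled via prctl
```

Splitting on `" + "` rather than `"+"` matters here, since mitigation descriptions themselves can contain punctuation.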
Result Overview
This run covered the following test profiles: Remhos, SPECFEM3D, nekRS, Embree, SVT-AV1, Intel Open Image Denoise, OSPRay, Timed Linux Kernel Compilation, Liquid-DSP, TiDB Community Server (oltp_read_write, oltp_point_select, oltp_update_index, oltp_update_non_index), Neural Magic DeepSparse, Blender, OpenVINO, Apache Cassandra, Apache Hadoop, Kripke, BRL-CAD, easyWave, OpenVKL, and oneDNN.
[Flattened result-overview matrix: raw values for configurations a through g across all tests. Per-test breakdowns for a subset follow below; the complete interactive tables are available at the exported result link above.]
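To reduce a matrix like this to a single headline number, a common approach is per-test runtime ratios combined with a geometric mean. A minimal Python sketch using three lower-is-better results from this run (config a = dual EPYC 9254, config d = EPYC 9124; values taken from the per-test sections in this document):

```python
from math import prod

# test: (config a seconds, config d seconds) -- lower is better
seconds = {
    "remhos_sample_remap":      (16.35, 30.76),
    "specfem3d_mt_st_helens":   (11.02, 26.74),
    "linux_kernel_defconfig":   (27.35, 55.17),
}
# Per-test slowdown of the 16-core box relative to the dual 24-core box.
ratios = {t: d / a for t, (a, d) in seconds.items()}
geomean = prod(ratios.values()) ** (1 / len(ratios))
print(f"geometric mean slowdown: {geomean:.2f}x")  # roughly 2.1x on these three tests
```

The geometric mean is the conventional choice for averaging ratios, since it is symmetric under inversion (a 2x speedup and a 2x slowdown cancel exactly).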
Remhos 1.0 - Test: Sample Remap Example (Seconds, Fewer Is Better)
  a: 16.35  b: 16.79  c: 16.24  d: 30.76  e: 30.85  f: 30.73  g: 30.75
  Compiler: (CXX) g++ options: -O3 -std=c++11 -lmfem -lHYPRE -lmetis -lrt -lmpi_cxx -lmpi
SPECFEM3D 4.0 (Seconds, Fewer Is Better)
  Model: Mount St. Helens:        a: 11.02  b: 11.32  c: 11.33  d: 26.74  e: 26.80  f: 26.87  g: 27.70
  Model: Layered Halfspace:       a: 26.89  b: 28.65  c: 27.49  d: 71.61  e: 70.19  f: 70.54  g: 69.96
  Model: Tomographic Model:       a: 12.31  b: 12.10  c: 12.04  d: 27.33  e: 27.46  f: 26.97  g: 27.75
  Model: Homogeneous Halfspace:   a: 15.11  b: 14.46  c: 14.81  d: 35.57  e: 35.03  f: 35.54  g: 35.38
  Model: Water-layered Halfspace: a: 26.99  b: 29.46  c: 27.06  d: 62.44  e: 62.33  f: 61.28  g: 62.81
  Compiler (all models): (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi
nekRS 23.0 (flops/rank, More Is Better)
  Input: Kershaw:            a: 11106900000  b: 11240300000  c: 10826700000  d: 10318900000  e: 10264000000  f: 9976450000  g: 10500600000
  Input: TurboPipe Periodic: a: 6767710000  b: 6757360000  c: 6754170000  d: 7934570000  e: 7931010000  f: 7955790000  g: 7964910000
  Compiler (both inputs): (CXX) g++ options: -fopenmp -O2 -march=native -mtune=native -ftree-vectorize -rdynamic -lmpi_cxx -lmpi
Embree 4.1 (Frames Per Second, More Is Better; per-config min-max in brackets)
  Pathtracer - Crown:                a: 54.90 [53.27-57.28]  b: 55.39 [54.02-57.64]  c: 55.40 [53.71-58.99]  d: 21.48 [21.32-21.8]  e: 21.44 [21.3-21.78]  f: 21.59 [21.45-21.84]  g: 21.58 [21.43-21.89]
  Pathtracer ISPC - Crown:           a: 56.09 [54.05-59.82]  b: 56.46 [54.53-59.89]  c: 56.81 [55.27-59.91]  d: 22.59 [22.39-22.98]  e: 22.57 [22.39-22.93]  f: 22.66 [22.45-22.99]  g: 22.77 [22.57-23.16]
  Pathtracer - Asian Dragon:         a: 60.14 [58.97-62]  b: 59.91 [58.66-61.96]  c: 59.79 [58.46-62.03]  d: 24.69 [24.62-24.84]  e: 24.73 [24.67-24.86]  f: 24.70 [24.63-24.84]  g: 24.82 [24.74-25]
  Pathtracer - Asian Dragon Obj:     a: 53.57 [52.17-55.38]  b: 53.81 [52.72-55.86]  c: 53.69 [52.63-55.24]  d: 22.26 [22.18-22.42]  e: 22.16 [22.08-22.35]  f: 22.15 [22.07-22.32]  g: 22.19 [22.12-22.33]
  Pathtracer ISPC - Asian Dragon:    a: 67.34 [65.61-70.54]  b: 67.20 [65.48-70.41]  c: 67.50 [65.64-71.17]  d: 28.36 [28.26-28.59]  e: 28.31 [28.21-28.56]  f: 28.32 [28.23-28.55]  g: 28.48 [28.37-28.69]
  Pathtracer ISPC - Asian Dragon Obj: a: 56.49 [55.29-58.38]  b: 56.69 [55.42-58.97]  c: 56.93 [55.56-59.67]  d: 23.87 [23.78-24.08]  e: 23.94 [23.84-24.18]  f: 23.94 [23.84-24.16]  g: 23.88 [23.79-24.08]
SVT-AV1 1.7 (Frames Per Second, More Is Better)
  Preset 4 - Bosphorus 4K:     a: 5.203  b: 5.149  c: 5.049  d: 4.107  e: 4.114  f: 4.138  g: 4.143
  Preset 8 - Bosphorus 4K:     a: 90.81  b: 91.32  c: 90.42  d: 66.99  e: 67.72  f: 67.39  g: 67.81
  Preset 12 - Bosphorus 4K:    a: 163.46  b: 166.38  c: 163.06  d: 163.19  e: 162.61  f: 161.85  g: 160.32
  Preset 13 - Bosphorus 4K:    a: 163.01  b: 166.69  c: 161.50  d: 161.85  e: 162.05  f: 160.80  g: 161.32
  Preset 4 - Bosphorus 1080p:  a: 12.48  b: 12.59  c: 12.62  d: 10.91  e: 10.98  f: 10.74  g: 11.02
  Preset 8 - Bosphorus 1080p:  a: 141.22  b: 138.34  c: 143.55  d: 118.95  e: 119.31  f: 118.49  g: 118.48
  Preset 12 - Bosphorus 1080p: a: 422.99  b: 427.69  c: 431.90  d: 526.22  e: 525.17  f: 521.52  g: 528.53
  Preset 13 - Bosphorus 1080p: a: 510.36  b: 542.61  c: 516.91  d: 604.99  e: 597.01  f: 585.37  g: 586.75
  Compiler (all runs): (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
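The preset sweep above is easier to read as relative speedups. A small Python sketch using the 4K numbers for config a from this run:

```python
# SVT-AV1 Bosphorus 4K frames/sec for config a, keyed by encoder preset.
fps_4k_a = {4: 5.203, 8: 90.811, 12: 163.459, 13: 163.013}
baseline = fps_4k_a[4]
for preset, fps in sorted(fps_4k_a.items()):
    # Speedup of each faster preset relative to the slow, high-quality preset 4.
    print(f"preset {preset:2d}: {fps:7.2f} fps ({fps / baseline:5.1f}x vs preset 4)")
```

On this hardware, preset 8 is roughly 17x faster than preset 4, while presets 12 and 13 land within noise of each other, suggesting the encoder is no longer compute-bound at the fastest settings.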
Intel Open Image Denoise 2.0 - Device: CPU-Only (Images / Sec, More Is Better)
  Run: RT.hdr_alb_nrm.3840x2160:  a: 1.83  b: 1.83  c: 1.83  d: 0.72  e: 0.72  f: 0.72  g: 0.72
  Run: RT.ldr_alb_nrm.3840x2160:  a: 1.84  b: 1.84  c: 1.82  d: 0.72  e: 0.72  f: 0.72  g: 0.72
  Run: RTLightmap.hdr.4096x4096:  a: 0.86  b: 0.86  c: 0.87  d: 0.34  e: 0.34  f: 0.34  g: 0.34
OSPRay 2.12 (Items Per Second, More Is Better)
  particle_volume/ao/real_time:                    a: 15.98600  b: 15.97850  c: 15.98720  d: 5.57469  e: 5.54107  f: 5.57320  g: 5.57553
  particle_volume/scivis/real_time:                a: 15.95280  b: 15.98880  c: 15.97780  d: 5.57001  e: 5.56353  f: 5.55581  g: 5.56539
  particle_volume/pathtracer/real_time:            a: 215.10  b: 214.07  c: 214.14  d: 151.91  e: 151.51  f: 151.78  g: 151.68
  gravity_spheres_volume/dim_512/ao/real_time:     a: 14.23690  b: 14.17830  c: 14.13990  d: 5.60747  e: 5.62040  f: 5.61454  g: 5.62278
  gravity_spheres_volume/dim_512/scivis/real_time: a: 13.87390  b: 13.76660  c: 13.83170  d: 5.45329  e: 5.46153  f: 5.45227  g: 5.47725
  gravity_spheres_volume/dim_512/pathtracer/real_time: a: 16.34680  b: 16.43650  c: 16.53500  d: 6.58745  e: 6.58270  f: 6.59563  g: 6.60085
Timed Linux Kernel Compilation 6.1 - Build: defconfig (Seconds, fewer is better)
  a 27.35  b 27.24  c 27.41  d 55.17  e 55.09  f 55.15  g 55.17
Liquid-DSP 1.6 (samples/s, more is better; Buffer Length: 256 throughout)
  Threads  Filter Length  a           b           c           d           e           f           g
  1        32             39499000    39486000    39453000    35228000    35315000    35271000    35236000
  1        57             59401000    59296000    57519000    52665000    52827000    52879000    52854000
  1        512            13909000    14021000    14225000    12683000    12366000    12681000    12256000
  2        32             77181000    77019000    76924000    67054000    68846000    68861000    68678000
  2        57             117490000   114010000   118550000   105650000   105480000   105740000   104800000
  2        512            27901000    27736000    28227000    24627000    25207000    25199000    22727000
  4        32             153850000   153690000   153670000   138600000   138620000   138580000   138460000
  4        57             196220000   196590000   194510000   188930000   191230000   189880000   190750000
  4        512            52911000    55588000    55165000    50258000    50380000    49977000    49556000
  8        32             307540000   305110000   306760000   278030000   277780000   276390000   277410000
  8        57             369430000   366930000   366990000   363310000   357990000   350450000   357810000
  8        512            109870000   108080000   109140000   99594000    97005000    99441000    100170000
  16       32             594230000   602470000   603650000   545360000   545140000   545020000   543050000
  16       57             699740000   692760000   674930000   689150000   692920000   693340000   682070000
  16       512            216080000   216150000   214910000   193850000   196040000   194500000   194670000
  32       32             1183500000  1190300000  1184800000  1047100000  1046600000  1041900000  1047100000
  32       57             1192100000  1214200000  1254800000  1035000000  1032000000  1024600000  1033400000
  32       512            425810000   429620000   424400000   273760000   273480000   273390000   274070000
  64       32             2207700000  2212100000  2206800000  1059500000  1057500000  1057100000  1056200000
  64       57             1994400000  2001900000  2010300000  1093300000  1095400000  1094600000  1099300000
  64       512            622560000   610950000   622630000   282920000   281830000   283030000   281730000
  96       32             3005800000  2995400000  2999800000  1065200000  1065100000  1065300000  1065700000
  96       57             2559800000  2571100000  2564900000  1120800000  1117800000  1120500000  1118200000
  96       512            711640000   718140000   715030000   286250000   285880000   285920000   286530000
  1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
TiDB Community Server 7.3 (Queries Per Second, more is better; "-" = no result recorded for that configuration)
  Test                   Threads  a       b       c       d       e       f       g
  oltp_read_write        1        2540    2510    2485    -       3209    3218    3195
  oltp_read_write        16       38331   36950   37368   36480   36784   36125   36088
  oltp_read_write        32       58974   61520   59630   46977   46737   47141   46993
  oltp_read_write        64       79090   80183   78469   55334   53893   54956   55301
  oltp_read_write        128      85757   89099   -       59727   60145   60310   59944
  oltp_point_select      1        4331    4405    4471    5898    5976    5954    -
  oltp_point_select      16       -       67515   65406   -       70250   70105   69923
  oltp_point_select      32       104627  106180  -       98149   96907   97368   96840
  oltp_point_select      64       127567  130802  -       115675  118657  119092  118549
  oltp_point_select      128      159242  159728  149962  129492  129904  130389  -
  oltp_update_index      1        1212    -       1189    1479    1490    1483    1481
  oltp_update_index      16       12558   -       12681   12622   12567   12692   12627
  oltp_update_index      32       18361   17817   17565   17612   17117   -       17135
  oltp_update_index      64       -       24371   23324   21108   21271   21067   -
  oltp_update_index      128      27087   27464   26546   -       24611   24830   24574
  oltp_update_non_index  1        1328    1312    1381    1693    1708    1697    1705
  oltp_update_non_index  16       18095   18068   -       18563   18557   -       18735
  oltp_update_non_index  32       28735   28914   -       26273   26285   26695   -
  oltp_update_non_index  64       41281   39759   39106   34224   33881   34470   34107
  oltp_update_non_index  128      51105   -       52865   -       42138   41424   41695
Neural Magic DeepSparse 1.5 - Scenario: Asynchronous Multi-Stream
(items/sec: more is better; ms/batch: fewer is better)

Model: NLP Document Classification, oBERT base uncased on IMDB
  items/sec:  a 39.50  b 39.47  c 39.45  d 13.07  e 12.94  f 13.09  g 13.07
  ms/batch:   a 605.04  b 605.73  c 605.92  d 606.10  e 607.91  f 607.82  g 607.16
Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8
  items/sec:  a 1417.07  b 1403.07  c 1418.90  d 508.09  e 511.41  f 508.21  g 509.14
  ms/batch:   a 16.91  b 17.07  c 16.89  d 15.72  e 15.62  f 15.72  g 15.69
Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased
  items/sec:  a 672.46  b 672.37  c 671.26  d 257.27  e 257.89  f 257.50  g 257.28
  ms/batch:   a 35.63  b 35.64  c 35.68  d 31.05  e 30.99  f 31.03  g 31.06
Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90
  items/sec:  a 201.39  b 201.25  c 201.54  d 71.14  e 71.27  f 71.04  g 70.93
  ms/batch:   a 118.75  b 118.95  c 118.78  d 112.25  e 112.06  f 112.41  g 112.48
Model: ResNet-50, Baseline
  items/sec:  a 485.67  b 488.13  c 489.11  d 162.85  e 163.14  f 162.93  g 162.99
  ms/batch:   a 49.37  b 49.11  c 49.02  d 49.09  e 49.01  f 49.07  g 49.03
Model: ResNet-50, Sparse INT8
  items/sec:  a 5137.01  b 5138.83  c 5153.66  d 1599.21  e 1599.15  f 1600.53  g 1602.52
  ms/batch:   a 4.6508  b 4.6476  c 4.6348  d 4.9960  e 4.9859  f 4.9877  g 4.9787
Model: CV Detection, YOLOv5s COCO
  items/sec:  a 215.64  b 215.93  c 215.65  d 71.92  e 71.91  f 71.94  g 71.90
  ms/batch:   a 111.01  b 110.92  c 111.03  d 111.11  e 111.09  f 110.89  g 110.98
Model: BERT-Large, NLP Question Answering
  items/sec:  a 49.33  b 49.17  c 47.15  d 16.16  e 16.14  f 16.13  g 16.07
  ms/batch:   a 485.72  b 487.36  c 507.48  d 493.60  e 494.26  f 494.22  g 495.60
Model: CV Classification, ResNet-50 ImageNet
  items/sec:  a 489.12  b 489.45  c 487.05  d 163.56  e 162.93  f 162.90  g 163.23
  ms/batch:   a 49.01  b 48.97  c 49.21  d 48.87  e 49.06  f 49.07  g 48.98
Model: CV Detection, YOLOv5s COCO, Sparse INT8
  items/sec:  a 218.15  b 219.53  c 218.52  d 72.46  e 72.66  f 72.57  g 72.69
  ms/batch:   a 109.80  b 109.23  c 109.58  d 110.11  e 109.97  f 110.00  g 109.90
Model: NLP Text Classification, DistilBERT mnli
  items/sec:  a 322.25  b 321.18  c 321.51  d 108.91  e 109.09  f 109.09  g 109.22
  ms/batch:   a 74.32  b 74.56  c 74.50  d 73.31  e 73.26  f 73.22  g 73.19
Model: CV Segmentation, 90% Pruned YOLACT Pruned
  items/sec:  a 68.60  b 68.66  c 68.63  d 24.48  e 24.47  f 24.52  g 24.46
  ms/batch:   a 347.66  b 347.22  c 347.37  d 325.88  e 325.74  f 324.96  g 325.51
Model: BERT-Large, NLP Question Answering, Sparse INT8
  items/sec:  a 718.92  b 717.97  c 716.14  d 240.55  e 240.23  f 240.16  g 239.52
  ms/batch:   a 33.34  b 33.38  c 33.46  d 33.22  e 33.26  f 33.28  g 33.37
Model: NLP Text Classification, BERT base uncased SST2
  items/sec:  a 158.92  b 159.06  c 164.61  d 55.61  e 55.46  f 55.54  g 55.43
  ms/batch:   a 150.59  b 150.61  c 145.26  d 143.76  e 144.10  f 143.69  g 144.11
Model: NLP Token Classification, BERT base uncased conll2003
  items/sec:  a 39.44  b 39.45  c 39.42  d 13.13  e 13.12  f 13.09  g 13.06
  ms/batch:   a 605.76  b 606.67  c 605.88  d 606.58  e 606.76  f 606.79  g 608.72
Blender 3.6 - Compute: CPU-Only (Seconds, fewer is better)
  Blend File          a       b       c       d       e       f       g
  BMW27               26.20   26.24   26.12   72.00   71.44   71.96   72.01
  Classroom           66.42   66.64   66.72   182.99  182.56  181.70  183.29
  Fishy Cat           33.22   33.17   33.03   90.03   90.31   90.26   90.63
  Barbershop          254.88  255.30  254.72  670.87  670.64  667.87  669.09
  Pabellon Barcelona  80.54   80.76   80.41   224.15  224.10  223.95  224.12
OpenVINO 2023.1 - Device: CPU
(FPS: more is better; ms: fewer is better; per-run latency min-max in parentheses)

Model: Face Detection FP16
  FPS:  a 30.41  b 30.44  c 30.43  d 10.47  e 10.47  f 10.48  g 10.48
  ms:   a 393.60 (363.29-431.61)  b 393.23 (360.87-433.13)  c 393.37 (362.57-433.51)  d 761.59 (738.34-772.36)  e 761.16 (741.99-776.56)  f 760.57 (741.4-770.88)  g 759.92 (737.63-771.07)
Model: Person Detection FP16
  FPS:  a 282.55  b 284.22  c 282.67  d 107.02  e 107.27  f 107.39  g 107.04
  ms:   a 42.44 (36.14-61.98)  b 42.20 (36.84-61.97)  c 42.43 (36.31-62.36)  d 74.71 (66.12-81.09)  e 74.50 (66.5-80.32)  f 74.43 (65.68-83.49)  g 74.71 (66.29-79.68)
Model: Person Detection FP32
  FPS:  a 283.97  b 284.99  c 284.31  d 106.90  e 107.24  f 106.76  g 107.24
  ms:   a 42.24 (36.59-61.56)  b 42.09 (37.13-58.71)  c 42.19 (36.21-65.64)  d 74.81 (66.88-80.7)  e 74.54 (65.97-82.9)  f 74.87 (66.72-80.96)  g 74.58 (67.63-78.73)
Model: Vehicle Detection FP16
  FPS:  a 2033.17  b 2028.01  c 2029.79  d 797.64  e 793.75  f 791.74  g 793.90
  ms:   a 5.89 (4.67-18.4)  b 5.91 (4.84-12.9)  c 5.90 (4.83-13.4)  d 10.01 (5.7-19.52)  e 10.06 (5.29-19.07)  f 10.09 (5.4-19.17)  g 10.06 (5.2-19.38)
Model: Face Detection FP16-INT8
  FPS:  a 56.01  b 56.06  c 56.02  d 20.03  e 20.00  f 20.01  g 20.05
  ms:   a 213.94 (201.64-242.71)  b 213.62 (197.2-235.23)  c 213.79 (197.29-236.32)  d 398.52 (382.1-404.98)  e 398.91 (386.2-407.29)  f 399.24 (387.9-408.93)  g 398.13 (379.09-404.71)
Model: Face Detection Retail FP16
  FPS:  a 5882.91  b 5836.27  c 5840.53  d 2564.78  e 2562.54  f 2539.97  g 2557.66
  ms:   a 2.03 (1.66-7.51)  b 2.05 (1.6-7)  c 2.05 (1.62-6.96)  d 3.11 (1.94-11.57)  e 3.11 (1.93-9.72)  f 3.14 (1.93-11.65)  g 3.12 (1.88-11.92)
Model: Road Segmentation ADAS FP16
  FPS:  a 748.44  b 750.49  c 757.38  d 344.67  e 342.81  f 343.49  g 341.36
  ms:   a 16.02 (12.5-33.94)  b 15.98 (12.74-33.34)  c 15.83 (12.38-32.97)  d 23.20 (15.1-31.6)  e 23.32 (19.49-30.99)  f 23.28 (15.73-30.77)  g 23.42 (20.46-32.43)
Model: Vehicle Detection FP16-INT8
  FPS:  a 2873.24  b 2880.58  c 2881.14  d 1175.67  e 1174.60  f 1180.85  g 1175.58
  ms:   a 4.17 (3.39-10.07)  b 4.16 (3.42-11.2)  c 4.16 (3.43-10.26)  d 6.79 (3.8-15.48)  e 6.80 (4.04-15.37)  f 6.76 (4.04-15.47)  g 6.79 (3.79-15.41)
Model: Weld Porosity Detection FP16
  FPS:  a 2945.26  b 2986.46  c 2987.33  d 1039.61  e 1039.82  f 1038.47  g 1039.37
  ms:   a 16.26 (14.71-28.14)  b 16.02 (14.41-30.55)  c 16.02 (14.63-33.79)  d 15.36 (8.08-24.34)  e 15.36 (8.02-23.81)  f 15.38 (7.99-24)  g 15.37 (7.99-23.98)
Model: Face Detection Retail FP16-INT8
  FPS:  a 9837.58  b 9849.07  c 9845.27  d 3540.88  e 3544.18  f 3548.78  g 3533.64
  ms:   a 4.86 (4.23-12.81)  b 4.85 (4.25-12.86)  c 4.86 (4.34-12.27)  d 4.51 (2.98-13.05)  e 4.51 (2.96-16.06)  f 4.50 (2.98-13.86)  g 4.52 (2.77-13.57)
Model: Road Segmentation ADAS FP16-INT8
  FPS:  a 842.91  b 854.51  c 849.30  d 370.57  e 373.64  f 369.26  g 372.26
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenVINO Model: Road Segmentation ADAS FP16-INT8 - Device: CPU OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.1 Model: Road Segmentation ADAS FP16-INT8 - Device: CPU a b c d e f g 5 10 15 20 25 14.23 14.03 14.12 21.57 21.40 21.65 21.47 MIN: 11.51 / MAX: 25.86 MIN: 11.59 / MAX: 26.04 MIN: 11.51 / MAX: 26.04 MIN: 19.5 / MAX: 24.76 MIN: 19.07 / MAX: 25.3 MIN: 19.48 / MAX: 24.27 MIN: 17.62 / MAX: 28.13 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenVINO Model: Machine Translation EN To DE FP16 - Device: CPU OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.1 Model: Machine Translation EN To DE FP16 - Device: CPU a b c d e f g 70 140 210 280 350 317.22 317.28 317.33 124.12 123.61 124.30 123.41 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenVINO Model: Machine Translation EN To DE FP16 - Device: CPU OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.1 Model: Machine Translation EN To DE FP16 - Device: CPU a b c d e f g 14 28 42 56 70 37.80 37.79 37.79 64.41 64.68 64.31 64.77 MIN: 33.35 / MAX: 56.45 MIN: 32.97 / MAX: 53.7 MIN: 33.29 / MAX: 54.88 MIN: 37.44 / MAX: 73.04 MIN: 38.02 / MAX: 72.52 MIN: 50.85 / MAX: 70.77 MIN: 55.8 / MAX: 69.46 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenVINO Model: Weld Porosity Detection FP16-INT8 - Device: CPU OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.1 Model: Weld Porosity Detection FP16-INT8 - Device: CPU a b c d e f g 1200 2400 3600 4800 6000 5776.94 5780.44 5802.65 2013.77 2007.53 2004.76 2006.09 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenVINO Model: Weld Porosity Detection FP16-INT8 - Device: CPU OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.1 Model: Weld Porosity Detection FP16-INT8 - Device: CPU a b c d e f g 2 4 6 8 10 8.28 8.27 8.24 7.93 7.96 7.97 7.96 MIN: 7.44 / MAX: 23.35 MIN: 7.37 / MAX: 25.18 MIN: 7.62 / MAX: 23.32 MIN: 4.2 / MAX: 16.92 MIN: 4.19 / MAX: 16.59 MIN: 4.37 / MAX: 16.86 MIN: 4.19 / MAX: 14.2 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenVINO Model: Person Vehicle Bike Detection FP16 - Device: CPU OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.1 Model: Person Vehicle Bike Detection FP16 - Device: CPU a b c d e f g 500 1000 1500 2000 2500 2454.09 2450.26 2455.51 1036.99 1028.64 1041.87 1031.60 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenVINO Model: Person Vehicle Bike Detection FP16 - Device: CPU OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.1 Model: Person Vehicle Bike Detection FP16 - Device: CPU a b c d e f g 2 4 6 8 10 4.88 4.89 4.88 7.70 7.77 7.67 7.74 MIN: 3.95 / MAX: 16.05 MIN: 3.93 / MAX: 13.44 MIN: 3.9 / MAX: 14.94 MIN: 5.51 / MAX: 16.06 MIN: 5.42 / MAX: 16.35 MIN: 5.32 / MAX: 16.6 MIN: 6.06 / MAX: 12.66 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenVINO Model: Handwritten English Recognition FP16 - Device: CPU OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.1 Model: Handwritten English Recognition FP16 - Device: CPU a b c d e f g 300 600 900 1200 1500 1560.03 1546.02 1551.63 532.59 530.99 533.74 538.01 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenVINO Model: Handwritten English Recognition FP16 - Device: CPU OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.1 Model: Handwritten English Recognition FP16 - Device: CPU a b c d e f g 7 14 21 28 35 30.72 31.00 30.89 30.02 30.10 29.95 29.72 MIN: 29.51 / MAX: 35.07 MIN: 29.59 / MAX: 36.33 MIN: 29.48 / MAX: 36.29 MIN: 18.78 / MAX: 38.72 MIN: 22.61 / MAX: 39.15 MIN: 19.01 / MAX: 38.08 MIN: 19.46 / MAX: 38.99 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenVINO Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.1 Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU a b c d e f g 20K 40K 60K 80K 100K 86884.64 87359.23 86789.80 32002.62 32032.06 31951.64 32008.03 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenVINO Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.1 Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU a b c d e f g 0.1215 0.243 0.3645 0.486 0.6075 0.54 0.54 0.54 0.49 0.49 0.49 0.49 MIN: 0.45 / MAX: 7.64 MIN: 0.45 / MAX: 7.81 MIN: 0.45 / MAX: 5.03 MIN: 0.3 / MAX: 9.28 MIN: 0.3 / MAX: 9.07 MIN: 0.3 / MAX: 8.2 MIN: 0.3 / MAX: 8.84 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenVINO Model: Handwritten English Recognition FP16-INT8 - Device: CPU OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.1 Model: Handwritten English Recognition FP16-INT8 - Device: CPU a b c d e f g 300 600 900 1200 1500 1244.69 1239.67 1237.29 395.66 432.32 431.94 432.20 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenVINO Model: Handwritten English Recognition FP16-INT8 - Device: CPU OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.1 Model: Handwritten English Recognition FP16-INT8 - Device: CPU a b c d e f g 9 18 27 36 45 38.50 38.66 38.75 40.40 36.98 37.01 36.98 MIN: 36.77 / MAX: 44.23 MIN: 37.22 / MAX: 43.52 MIN: 37.46 / MAX: 43.52 MIN: 26.93 / MAX: 74.83 MIN: 32.02 / MAX: 44.78 MIN: 32.25 / MAX: 43.6 MIN: 32.61 / MAX: 41.91 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenVINO Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.1 Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU a b c d e f g 30K 60K 90K 120K 150K 120606.38 120728.22 123484.28 44958.07 44933.27 44968.43 45097.99 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenVINO Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.1 Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU a b c d e f g 0.0788 0.1576 0.2364 0.3152 0.394 0.34 0.34 0.34 0.35 0.35 0.35 0.35 MIN: 0.29 / MAX: 7.33 MIN: 0.29 / MAX: 10.87 MIN: 0.29 / MAX: 7.09 MIN: 0.23 / MAX: 9.09 MIN: 0.23 / MAX: 8.84 MIN: 0.23 / MAX: 9.15 MIN: 0.23 / MAX: 8.63 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
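The OpenVINO throughput numbers above can be summarized as a single figure. The sketch below aggregates three representative FPS results into a geometric-mean speedup of the dual-EPYC-9254 system (configs a-c) over the EPYC 9124 system (configs d-g); the aggregation is a derived metric for illustration, not something OpenBenchmarking.org itself reports.

```python
from statistics import geometric_mean

# FPS values copied from the OpenVINO results above:
# first list = configs a-c (2 x EPYC 9254), second = configs d-g (EPYC 9124).
results = {
    "Road Segmentation ADAS FP16": (
        [748.44, 750.49, 757.38], [344.67, 342.81, 343.49, 341.36]),
    "Vehicle Detection FP16-INT8": (
        [2873.24, 2880.58, 2881.14], [1175.67, 1174.60, 1180.85, 1175.58]),
    "Machine Translation EN To DE FP16": (
        [317.22, 317.28, 317.33], [124.12, 123.61, 124.30, 123.41]),
}

def mean(xs):
    return sum(xs) / len(xs)

# Per-model throughput ratio, then the geometric mean across models.
ratios = {m: mean(abc) / mean(defg) for m, (abc, defg) in results.items()}
overall = geometric_mean(ratios.values())
print(f"geomean FPS speedup, 2 x 9254 vs 9124: {overall:.2f}x")
```

With these three models the dual-socket system lands somewhere between 2x and 2.6x faster per model, well short of its 3x core-count advantage.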
Apache Cassandra 4.1.3 - Test: Writes (Op/s, more is better)
  a: 248095   b: 256661   c: 270480   d: 197866   e: 195798   f: 196287   g: 197092
Apache Hadoop 3.3.6 (Ops per sec, more is better)

Operation    Threads  Files          a        b        c        d        e        f        g
Open         50       100000      460829   469484   401606   578035   552486   578035   546448
Open         100      100000      420168   404858   403226   529101   294985   523560   460829
Open         50       1000000    1126126  1020408   683995   278319   251004  1221001   654022
Create       50       100000       43649    41288    43937    58617    58617    58343    60680
Delete       50       100000       91075    73801    90580   101010   100604    96993   103950
Open         100      1000000     215332   173822   185874  1248439  1204819  1303781  1107420
Rename       50       100000       70522    73046    77101    82372    82237    81633    82237
Create       100      100000       40733    37425    35075    57971    58824    59382    58928
Create       50       1000000      53665    52119    52260    72134    70897    69920    72706
Delete       100      100000       87566    90827    73475   105708    98039    99404   102564
Delete       50       1000000      98932    97314    90147   111012   113327   111198   110828
Rename       100      100000       75529    69348    67159    82102    83822    79491    80386
Rename       50       1000000      73239    71679    74638    83921    84041    82501    84810
Create       100      1000000      46145    44437    44001    71296    70057    70537    70922
Delete       100      1000000      90114    86715    97031   112613   113225   110803   113895
Rename       100      1000000      73078    72129    66827    81208    84360    85815    85763
File Status  50       100000      529101   862069   657895   632911   389105   709220   561798
File Status  100      100000      515464   458716   729927   591716   613497   478469   487805
File Status  50       1000000    2173913  1941748   284252  1818182   320924  1795332  2036660
File Status  100      1000000    1886792   161970  1893939   600601   235627  1964637  2049180
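The Hadoop Open and File Status results swing widely between runs of the same configuration (config b's File Status at 100 threads / 1,000,000 files is an order of magnitude below configs c and g), so the run-to-run spread is worth quantifying before reading much into any single number. A small sketch using values copied from the table above; the coefficient-of-variation summary is a derived metric, not part of the benchmark output:

```python
from statistics import mean, stdev

# Ops/sec copied from the Hadoop results above.
# Open - Threads: 100 - Files: 1000000, configs d-g (tight cluster):
open_defg = [1248439, 1204819, 1303781, 1107420]
# File Status - Threads: 100 - Files: 1000000, configs a-g (very noisy):
file_status = [1886792, 161970, 1893939, 600601, 235627, 1964637, 2049180]

def cv(xs):
    """Coefficient of variation: sample stdev as a fraction of the mean."""
    return stdev(xs) / mean(xs)

print(f"Open d-g CV:        {cv(open_defg):.2f}")
print(f"File Status a-g CV: {cv(file_status):.2f}")
```

A CV below roughly 0.1 suggests a stable measurement; the File Status runs here are far above that, so those per-config differences should not be attributed to the hardware.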
Kripke 1.2.6 (Throughput FoM, more is better)
  d: 240994500   e: 236243900   f: 236591000   g: 237175700
1. (CXX) g++ options: -O3 -fopenmp -ldl
BRL-CAD 7.36 - VGR Performance Metric (more is better)
  a: 772162   b: 768517   c: 762529   d: 298064   e: 296125   f: 295603   g: 295522
1. (CXX) g++ options: -std=c++14 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -ltcl8.6 -lregex_brl -lz_brl -lnetpbm -ldl -lm -ltk8.6
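BRL-CAD scales with core count, so its VGR metric can also be read per core: configs a-c have 48 cores (2 x 24-core EPYC 9254) and configs d-g have 16 (EPYC 9124). A quick calculation from the numbers above; per-core VGR is a derived figure for illustration, not one the benchmark reports:

```python
# VGR Performance Metric values copied from the BRL-CAD result above.
dual_9254 = [772162, 768517, 762529]            # a-c: 2 x EPYC 9254, 48 cores total
single_9124 = [298064, 296125, 295603, 295522]  # d-g: 1 x EPYC 9124, 16 cores

def mean(xs):
    return sum(xs) / len(xs)

per_core_9254 = mean(dual_9254) / 48
per_core_9124 = mean(single_9124) / 16
print(f"VGR per core: 9254 ~{per_core_9254:.0f}, 9124 ~{per_core_9124:.0f}")
```

Per core, the 16-core 9124 comes out ahead, consistent with its slightly higher base clock and the scaling overhead of the dual-socket system.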
easyWave r34 - Input: e2Asean Grid + BengkuluSept2007 Source (Seconds, fewer is better)
1. (CXX) g++ options: -O3 -fopenmp

Time       d        e        f        g
240      1.657    1.654    1.657    1.648
1200     38.11    38.07    38.02    37.95
2400     98.98    99.42    97.99    97.53
Embree 4.3 (Frames Per Second, more is better)

Binary / Model                          d        e        f        g
Pathtracer - Asian Dragon             24.85    24.83    24.89    24.96
Pathtracer - Asian Dragon Obj         22.35    22.29    22.27    22.26
Pathtracer - Crown                    21.89    21.99    21.77    21.83
Pathtracer ISPC - Asian Dragon        27.74    27.83    27.83    27.91
Pathtracer ISPC - Asian Dragon Obj    23.35    23.53    23.50    23.71
Pathtracer ISPC - Crown               22.39    22.34    22.44    22.42
OpenVKL 2.0.0 (Items / Sec, more is better)

Benchmark                 d      e      f      g
vklBenchmarkCPU Scalar    191    190    191    191
vklBenchmarkCPU ISPC      487    487    488    489
oneDNN 3.3 - Engine: CPU (ms, fewer is better)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Harness                               Data Type           d           e           f           g
Convolution Batch Shapes Auto         f32           2.13332     2.12570     2.13062     2.11813
Convolution Batch Shapes Auto         u8s8f32       1.55824     1.54911     1.57282     1.55118
Convolution Batch Shapes Auto         bf16bf16bf16  1.33789     1.33861     1.34183     1.33564
Deconvolution Batch shapes_1d         f32           3.81576     3.84421     3.81823     3.82381
Deconvolution Batch shapes_1d         u8s8f32       0.628236    0.633975    0.630325    0.629108
Deconvolution Batch shapes_1d         bf16bf16bf16  3.05991     3.06370     3.05674     3.05458
Deconvolution Batch shapes_3d         f32           3.37782     3.38436     3.37956     3.38156
Deconvolution Batch shapes_3d         u8s8f32       0.847805    0.844434    0.850691    0.843492
Deconvolution Batch shapes_3d         bf16bf16bf16  1.91374     1.91781     1.91422     1.91274
IP Shapes 1D                          f32           2.49408     2.56522     2.49714     2.51441
IP Shapes 1D                          u8s8f32       0.652259    0.657610    0.653182    0.647700
IP Shapes 1D                          bf16bf16bf16  1.03749     1.14432     1.00136     1.12723
IP Shapes 3D                          f32           1.25758     1.28043     1.20653     1.27918
IP Shapes 3D                          u8s8f32       0.603950    0.575794    0.600834    0.612320
IP Shapes 3D                          bf16bf16bf16  1.02875     1.05425     1.06144     1.04567
Recurrent Neural Network Training     f32           1641.92     1641.00     1636.76     1637.37
Recurrent Neural Network Training     u8s8f32       1642.51     1639.36     1636.44     1631.99
Recurrent Neural Network Training     bf16bf16bf16  1643.99     1643.97     1642.35     1641.40
Recurrent Neural Network Inference    f32           838.52      849.71      851.49      848.03
Recurrent Neural Network Inference    u8s8f32       849.16      851.66      849.34      837.60
Recurrent Neural Network Inference    bf16bf16bf16  847.38      841.08      845.31      847.42
Intel Open Image Denoise 2.1 - Device: CPU-Only (Images / Sec, more is better)

Run                          d       e       f       g
RT.hdr_alb_nrm.3840x2160     0.72    0.72    0.72    0.72
RT.ldr_alb_nrm.3840x2160     0.72    0.72    0.72    0.72
RTLightmap.hdr.4096x4096     0.34    0.34    0.34    0.34
Phoronix Test Suite v10.8.5