xeon 8380 2P: 2 x Intel Xeon Platinum 8380 testing with an Intel M50CYP2SB2U (SE5C6200.86B.0022.D08.2103221623 BIOS) and ASPEED graphics on Ubuntu 22.10 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2209058-NE-XEON8380230&grw&sro&rro
xeon 8380 2P - System Configuration (identical for runs A, B, C, and D)

Processor: 2 x Intel Xeon Platinum 8380 @ 3.40GHz (80 Cores / 160 Threads)
Motherboard: Intel M50CYP2SB2U (SE5C6200.86B.0022.D08.2103221623 BIOS)
Chipset: Intel Ice Lake IEH
Memory: 512GB
Disk: 7682GB INTEL SSDPF2KX076TZ
Graphics: ASPEED
Monitor: VE228
Network: 2 x Intel X710 for 10GBASE-T + 2 x Intel E810-C for QSFP
OS: Ubuntu 22.10
Kernel: 5.19.0-15-generic (x86_64)
Desktop: GNOME Shell
Display Server: X Server 1.21.1.3
Vulkan: 1.3.211
Compiler: GCC 12.2.0
File-System: ext4
Screen Resolution: 1920x1080

Kernel Details - Transparent Huge Pages: madvise
Compiler Details - --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-Wbc0TK/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-Wbc0TK/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details - Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0xd000375
Python Details - Python 3.10.6
Security Details - itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
xeon 8380 2P - Result Summary (values listed A / B / C / D)

Mobile Neural Network (ms, fewer is better):
  nasnet: 11.677 / 12.373 / 13.403 / 13.023
  mobilenetV3: 1.685 / 1.705 / 1.889 / 1.823
  squeezenetv1.1: 2.114 / 2.451 / 2.628 / 2.534
  resnet-v2-50: 8.119 / 8.805 / 8.654 / 8.606
  SqueezeNetV1.0: 3.641 / 3.782 / 4.417 / 4.237
  MobileNetV2_224: 2.615 / 2.735 / 3.197 / 3.044
  mobilenet-v1-1.0: 1.878 / 2.269 / 2.268 / 2.204
  inception-v3: 20.247 / 20.571 / 21.146 / 20.945

OpenVINO (Device: CPU; FPS more is better, ms fewer is better):
  Face Detection FP16: 24.36 / 24.45 / 24.35 / 24.36 FPS; 816.29 / 814.05 / 816.86 / 816.87 ms
  Person Detection FP16: 13.12 / 13.13 / 13.09 / 13.07 FPS; 1507.12 / 1510.95 / 1511.26 / 1515.02 ms
  Person Detection FP32: 12.9 / 12.91 / 12.81 / 12.91 FPS; 1533.51 / 1534.43 / 1543.62 / 1531.79 ms
  Vehicle Detection FP16: 1122.33 / 1123.63 / 1121.31 / 1124.79 FPS; 17.78 / 17.76 / 17.8 / 17.74 ms
  Face Detection FP16-INT8: 82.84 / 83.18 / 82.93 / 83.05 FPS; 241.03 / 240.12 / 240.81 / 240.51 ms
  Vehicle Detection FP16-INT8: 4399.55 / 4407.61 / 4408.17 / 4401.74 FPS; 4.53 / 4.53 / 4.52 / 4.53 ms
  Weld Porosity Detection FP16: 2448.4 / 2443.95 / 2442.14 / 2441.15 FPS; 32.41 / 32.49 / 32.52 / 32.51 ms
  Machine Translation EN To DE FP16: 230.96 / 232.03 / 229.5 / 227.42 FPS; 86.36 / 85.92 / 86.92 / 87.69 ms
  Weld Porosity Detection FP16-INT8: 9683.64 / 9694.57 / 9685.17 / 9677.29 FPS; 8.25 / 8.24 / 8.25 / 8.25 ms
  Person Vehicle Bike Detection FP16: 2019.08 / 2023.27 / 2017.86 / 2020.4 FPS; 9.88 / 9.85 / 9.88 / 9.87 ms
  Age Gender Recognition Retail 0013 FP16: 48360.89 / 46527.42 / 48351.01 / 47659.47 FPS; 1.32 / 1.4 / 1.35 / 1.37 ms
  Age Gender Recognition Retail 0013 FP16-INT8: 44836.53 / 34045.71 / 33880.47 / 38908.73 FPS; 1.42 / 1.74 / 1.81 / 1.6 ms

7-Zip Compression (MIPS, more is better):
  Compression Rating: 341336 / 340384 / 340658 / 337572
  Decompression Rating: 357733 / 347845 / 350969 / 338639

Timed PHP Compilation (seconds, fewer is better): 54.265 / 54.714 / 54.473 / 54.211
Natron - Spaceship (FPS, more is better): 1.5 / 1.5 / 1.5 / 1.4
Timed CPython Compilation - Default (seconds, fewer is better): 23.647 / 23.67 / 23.758 / 22.961
Timed CPython Compilation - Released Build, PGO + LTO Optimized (seconds, fewer is better): 327.427 / 329.055 / 329.899 / 330.258
Timed Erlang/OTP Compilation (seconds, fewer is better): 138.3 / 137.842 / 138.828 / 138.865
Timed Node.js Compilation (seconds, fewer is better): 183.569 / 185.516 / 184.913 / 189.027
Timed Wasmer Compilation (seconds, fewer is better): 57.445 / 58.093 / 57.173 / 58.793
Mobile Neural Network 2.1 (ms, fewer is better)
Model: nasnet - D: 13.02 (MIN 11.03 / MAX 24.94), C: 13.40 (MIN 10.89 / MAX 23.76), B: 12.37 (MIN 11.35 / MAX 27.13), A: 11.68 (MIN 11.03 / MAX 18.75)
Model: mobilenetV3 - D: 1.823 (MIN 1.79 / MAX 2.04), C: 1.889 (MIN 1.86 / MAX 2.32), B: 1.705 (MIN 1.67 / MAX 1.94), A: 1.685 (MIN 1.66 / MAX 1.78)
Model: squeezenetv1.1 - D: 2.534 (MIN 2.5 / MAX 2.78), C: 2.628 (MIN 2.59 / MAX 5.14), B: 2.451 (MIN 2.2 / MAX 5.51), A: 2.114 (MIN 2.08 / MAX 2.24)
Model: resnet-v2-50 - D: 8.606 (MIN 8.36 / MAX 9.61), C: 8.654 (MIN 8.05 / MAX 26.64), B: 8.805 (MIN 8.32 / MAX 33.06), A: 8.119 (MIN 7.93 / MAX 26.5)
Model: SqueezeNetV1.0 - D: 4.237 (MIN 4.11 / MAX 10.84), C: 4.417 (MIN 4.36 / MAX 5.09), B: 3.782 (MIN 3.73 / MAX 4.33), A: 3.641 (MIN 3.57 / MAX 10.94)
Model: MobileNetV2_224 - D: 3.044 (MIN 2.97 / MAX 3.39), C: 3.197 (MIN 3.15 / MAX 3.57), B: 2.735 (MIN 2.66 / MAX 3.26), A: 2.615 (MIN 2.56 / MAX 2.93)
Model: mobilenet-v1-1.0 - D: 2.204 (MIN 2.17 / MAX 2.42), C: 2.268 (MIN 2.22 / MAX 2.72), B: 2.269 (MIN 2.24 / MAX 2.34), A: 1.878 (MIN 1.83 / MAX 2.33)
Model: inception-v3 - D: 20.95 (MIN 20.5 / MAX 34.43), C: 21.15 (MIN 20.36 / MAX 38.81), B: 20.57 (MIN 20.02 / MAX 33.93), A: 20.25 (MIN 19.94 / MAX 33.14)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenVINO 2022.2.dev - Device: CPU (FPS more is better; ms fewer is better)
Model: Face Detection FP16
  FPS: D: 24.36, C: 24.35, B: 24.45, A: 24.36
  ms: D: 816.87 (MIN 598.11 / MAX 895.97), C: 816.86 (MIN 530.61 / MAX 979.49), B: 814.05 (MIN 578.83 / MAX 898.2), A: 816.29 (MIN 551.08 / MAX 937.77)
Model: Person Detection FP16
  FPS: D: 13.07, C: 13.09, B: 13.13, A: 13.12
  ms: D: 1515.02 (MIN 1296.34 / MAX 2174.84), C: 1511.26 (MIN 1270.98 / MAX 2162.13), B: 1510.95 (MIN 1308.26 / MAX 2249.34), A: 1507.12 (MIN 1301.34 / MAX 2203.83)
Model: Person Detection FP32
  FPS: D: 12.91, C: 12.81, B: 12.91, A: 12.90
  ms: D: 1531.79 (MIN 1295.79 / MAX 2063.8), C: 1543.62 (MIN 1391.97 / MAX 2201.98), B: 1534.43 (MIN 1304.79 / MAX 2047.48), A: 1533.51 (MIN 1281.86 / MAX 2052.91)
Model: Vehicle Detection FP16
  FPS: D: 1124.79, C: 1121.31, B: 1123.63, A: 1122.33
  ms: D: 17.74 (MIN 13.24 / MAX 128.1), C: 17.80 (MIN 12.08 / MAX 129.12), B: 17.76 (MIN 11.23 / MAX 112.73), A: 17.78 (MIN 12.46 / MAX 116.59)
Model: Face Detection FP16-INT8
  FPS: D: 83.05, C: 82.93, B: 83.18, A: 82.84
  ms: D: 240.51 (MIN 193.29 / MAX 354.02), C: 240.81 (MIN 184.67 / MAX 352.84), B: 240.12 (MIN 184.22 / MAX 359.75), A: 241.03 (MIN 182.52 / MAX 349.89)
Model: Vehicle Detection FP16-INT8
  FPS: D: 4401.74, C: 4408.17, B: 4407.61, A: 4399.55
  ms: D: 4.53 (MIN 4.09 / MAX 57.12), C: 4.52 (MIN 4.09 / MAX 46.43), B: 4.53 (MIN 4.09 / MAX 62.16), A: 4.53 (MIN 4.08 / MAX 61.24)
Model: Weld Porosity Detection FP16
  FPS: D: 2441.15, C: 2442.14, B: 2443.95, A: 2448.40
  ms: D: 32.51 (MIN 28.45 / MAX 103.54), C: 32.52 (MIN 27.88 / MAX 95.98), B: 32.49 (MIN 28.03 / MAX 111.13), A: 32.41 (MIN 28.01 / MAX 107.86)
Model: Machine Translation EN To DE FP16
  FPS: D: 227.42, C: 229.50, B: 232.03, A: 230.96
  ms: D: 87.69 (MIN 71.33 / MAX 307.03), C: 86.92 (MIN 72.62 / MAX 308.23), B: 85.92 (MIN 72.43 / MAX 301.8), A: 86.36 (MIN 72.38 / MAX 303.3)
Model: Weld Porosity Detection FP16-INT8
  FPS: D: 9677.29, C: 9685.17, B: 9694.57, A: 9683.64
  ms: D: 8.25 (MIN 7.19 / MAX 25.48), C: 8.25 (MIN 7.2 / MAX 25.16), B: 8.24 (MIN 7.21 / MAX 25.73), A: 8.25 (MIN 7.22 / MAX 24.74)
Model: Person Vehicle Bike Detection FP16
  FPS: D: 2020.40, C: 2017.86, B: 2023.27, A: 2019.08
  ms: D: 9.87 (MIN 8.55 / MAX 78.35), C: 9.88 (MIN 8.6 / MAX 87.09), B: 9.85 (MIN 8.59 / MAX 76.51), A: 9.88 (MIN 8.55 / MAX 84.71)
Model: Age Gender Recognition Retail 0013 FP16
  FPS: D: 47659.47, C: 48351.01, B: 46527.42, A: 48360.89
  ms: D: 1.37 (MIN 0.87 / MAX 16.55), C: 1.35 (MIN 0.86 / MAX 20.45), B: 1.40 (MIN 0.93 / MAX 17.52), A: 1.32 (MIN 0.99 / MAX 16.57)
Model: Age Gender Recognition Retail 0013 FP16-INT8
  FPS: D: 38908.73, C: 33880.47, B: 34045.71, A: 44836.53
  ms: D: 1.60 (MIN 0.43 / MAX 32.92), C: 1.81 (MIN 0.48 / MAX 18.78), B: 1.74 (MIN 0.51 / MAX 21.83), A: 1.42 (MIN 0.36 / MAX 35.73)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
7-Zip Compression 22.01 (MIPS, more is better)
Test: Compression Rating - D: 337572, C: 340658, B: 340384, A: 341336
Test: Decompression Rating - D: 338639, C: 350969, B: 347845, A: 357733
1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC
Timed PHP Compilation 8.1.9 - Time To Compile (seconds, fewer is better): D: 54.21, C: 54.47, B: 54.71, A: 54.27
Natron 2.4.3 - Input: Spaceship (FPS, more is better): D: 1.4, C: 1.5, B: 1.5, A: 1.5
Timed CPython Compilation 3.10.6 - Build Configuration: Default (seconds, fewer is better): D: 22.96, C: 23.76, B: 23.67, A: 23.65
Timed CPython Compilation 3.10.6 - Build Configuration: Released Build, PGO + LTO Optimized (seconds, fewer is better): D: 330.26, C: 329.90, B: 329.06, A: 327.43
Timed Erlang/OTP Compilation 25.0 - Time To Compile (seconds, fewer is better): D: 138.87, C: 138.83, B: 137.84, A: 138.30
Timed Node.js Compilation 18.8 - Time To Compile (seconds, fewer is better): D: 189.03, C: 184.91, B: 185.52, A: 183.57
Timed Wasmer Compilation 2.3 - Time To Compile (seconds, fewer is better): D: 58.79, C: 57.17, B: 58.09, A: 57.45
1. (CC) gcc options: -m64 -ldl -lgcc_s -lutil -lrt -lpthread -lm -lc -pie -nodefaultlibs
Phoronix Test Suite v10.8.5