xeon 8380 2P: 2 x Intel Xeon Platinum 8380 testing with an Intel M50CYP2SB2U (SE5C6200.86B.0022.D08.2103221623 BIOS) motherboard and ASPEED graphics on Ubuntu 22.10 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2209058-NE-XEON8380230&sor&grr.
xeon 8380 2P - System Configuration (identical for runs A, B, C, and D):

  Processor:         2 x Intel Xeon Platinum 8380 @ 3.40GHz (80 Cores / 160 Threads)
  Motherboard:       Intel M50CYP2SB2U (SE5C6200.86B.0022.D08.2103221623 BIOS)
  Chipset:           Intel Ice Lake IEH
  Memory:            512GB
  Disk:              7682GB INTEL SSDPF2KX076TZ
  Graphics:          ASPEED
  Monitor:           VE228
  Network:           2 x Intel X710 for 10GBASE-T + 2 x Intel E810-C for QSFP
  OS:                Ubuntu 22.10
  Kernel:            5.19.0-15-generic (x86_64)
  Desktop:           GNOME Shell
  Display Server:    X Server 1.21.1.3
  Vulkan:            1.3.211
  Compiler:          GCC 12.2.0
  File-System:       ext4
  Screen Resolution: 1920x1080

  Kernel Details:    Transparent Huge Pages: madvise
  Compiler Details:  --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-Wbc0TK/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-Wbc0TK/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
  Processor Details: Scaling Governor: intel_pstate powersave (EPP: balance_performance); CPU Microcode: 0xd000375
  Python Details:    Python 3.10.6
  Security Details:  itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
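The mitigation statuses summarized under Security Details are exposed by the Linux kernel through sysfs. As a minimal illustrative sketch (assuming a kernel recent enough to provide /sys/devices/system/cpu/vulnerabilities), the per-vulnerability status strings can be read directly:

    # Sketch: list the CPU vulnerability/mitigation statuses that the
    # Security Details entry above summarizes (Linux sysfs interface).
    from pathlib import Path

    vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")
    for entry in sorted(vuln_dir.iterdir()):   # e.g. meltdown, retbleed, srbds, ...
        print(f"{entry.name}: {entry.read_text().strip()}")  # e.g. "Not affected"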
xeon 8380 2P - Result Overview (values listed as A / B / C / D):

  build-python: Released Build, PGO + LTO Optimized (Seconds) = 327.427 / 329.055 / 329.899 / 330.258
  build-nodejs: Time To Compile (Seconds) = 183.569 / 185.516 / 184.913 / 189.027
  build-erlang: Time To Compile (Seconds) = 138.3 / 137.842 / 138.828 / 138.865
  mnn: inception-v3 (ms) = 20.247 / 20.571 / 21.146 / 20.945
  mnn: mobilenet-v1-1.0 (ms) = 1.878 / 2.269 / 2.268 / 2.204
  mnn: MobileNetV2_224 (ms) = 2.615 / 2.735 / 3.197 / 3.044
  mnn: SqueezeNetV1.0 (ms) = 3.641 / 3.782 / 4.417 / 4.237
  mnn: resnet-v2-50 (ms) = 8.119 / 8.805 / 8.654 / 8.606
  mnn: squeezenetv1.1 (ms) = 2.114 / 2.451 / 2.628 / 2.534
  mnn: mobilenetV3 (ms) = 1.685 / 1.705 / 1.889 / 1.823
  mnn: nasnet (ms) = 11.677 / 12.373 / 13.403 / 13.023
  openvino: Person Detection FP16 - CPU (ms) = 1507.12 / 1510.95 / 1511.26 / 1515.02
  openvino: Person Detection FP16 - CPU (FPS) = 13.12 / 13.13 / 13.09 / 13.07
  openvino: Person Detection FP32 - CPU (ms) = 1533.51 / 1534.43 / 1543.62 / 1531.79
  openvino: Person Detection FP32 - CPU (FPS) = 12.9 / 12.91 / 12.81 / 12.91
  natron: Spaceship (FPS) = 1.5 / 1.5 / 1.5 / 1.4
  openvino: Face Detection FP16 - CPU (ms) = 816.29 / 814.05 / 816.86 / 816.87
  openvino: Face Detection FP16 - CPU (FPS) = 24.36 / 24.45 / 24.35 / 24.36
  openvino: Face Detection FP16-INT8 - CPU (ms) = 241.03 / 240.12 / 240.81 / 240.51
  openvino: Face Detection FP16-INT8 - CPU (FPS) = 82.84 / 83.18 / 82.93 / 83.05
  openvino: Person Vehicle Bike Detection FP16 - CPU (ms) = 9.88 / 9.85 / 9.88 / 9.87
  openvino: Person Vehicle Bike Detection FP16 - CPU (FPS) = 2019.08 / 2023.27 / 2017.86 / 2020.4
  openvino: Machine Translation EN To DE FP16 - CPU (ms) = 86.36 / 85.92 / 86.92 / 87.69
  openvino: Machine Translation EN To DE FP16 - CPU (FPS) = 230.96 / 232.03 / 229.5 / 227.42
  openvino: Weld Porosity Detection FP16 - CPU (ms) = 32.41 / 32.49 / 32.52 / 32.51
  openvino: Weld Porosity Detection FP16 - CPU (FPS) = 2448.4 / 2443.95 / 2442.14 / 2441.15
  openvino: Vehicle Detection FP16 - CPU (ms) = 17.78 / 17.76 / 17.8 / 17.74
  openvino: Vehicle Detection FP16 - CPU (FPS) = 1122.33 / 1123.63 / 1121.31 / 1124.79
  openvino: Age Gender Recognition Retail 0013 FP16 - CPU (ms) = 1.32 / 1.4 / 1.35 / 1.37
  openvino: Age Gender Recognition Retail 0013 FP16 - CPU (FPS) = 48360.89 / 46527.42 / 48351.01 / 47659.47
  openvino: Weld Porosity Detection FP16-INT8 - CPU (ms) = 8.25 / 8.24 / 8.25 / 8.25
  openvino: Weld Porosity Detection FP16-INT8 - CPU (FPS) = 9683.64 / 9694.57 / 9685.17 / 9677.29
  openvino: Age Gender Recognition Retail 0013 FP16-INT8 - CPU (ms) = 1.42 / 1.74 / 1.81 / 1.6
  openvino: Age Gender Recognition Retail 0013 FP16-INT8 - CPU (FPS) = 44836.53 / 34045.71 / 33880.47 / 38908.73
  openvino: Vehicle Detection FP16-INT8 - CPU (ms) = 4.53 / 4.53 / 4.52 / 4.53
  openvino: Vehicle Detection FP16-INT8 - CPU (FPS) = 4399.55 / 4407.61 / 4408.17 / 4401.74
  build-wasmer: Time To Compile (Seconds) = 57.445 / 58.093 / 57.173 / 58.793
  build-php: Time To Compile (Seconds) = 54.265 / 54.714 / 54.473 / 54.211
  compress-7zip: Decompression Rating (MIPS) = 357733 / 347845 / 350969 / 338639
  compress-7zip: Compression Rating (MIPS) = 341336 / 340384 / 340658 / 337572
  build-python: Default (Seconds) = 23.647 / 23.67 / 23.758 / 22.961
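Since runs A through D use identical hardware and software, the overview above mostly reflects run-to-run variation. A purely illustrative Python sketch for quantifying that spread (the values are copied from the table above; the selection of tests is arbitrary):

    # Sketch: relative spread (max minus min, as % of min) across runs A-D
    # for a few metrics taken from the overview table above.
    results = {
        "Timed CPython Compilation, PGO+LTO (sec)": [327.427, 329.055, 329.899, 330.258],
        "MNN nasnet (ms)": [11.677, 12.373, 13.403, 13.023],
        "OpenVINO Age Gender FP16-INT8 (FPS)": [44836.53, 34045.71, 33880.47, 38908.73],
        "7-Zip Decompression Rating (MIPS)": [357733, 347845, 350969, 338639],
    }
    for name, values in results.items():
        spread = (max(values) - min(values)) / min(values) * 100
        print(f"{name}: {spread:.1f}% spread across runs")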
Timed CPython Compilation 3.10.6 - Build Configuration: Released Build, PGO + LTO Optimized (Seconds, fewer is better): A: 327.43, B: 329.06, C: 329.90, D: 330.26
Timed Node.js Compilation 18.8 - Time To Compile (Seconds, fewer is better): A: 183.57, C: 184.91, B: 185.52, D: 189.03
Timed Erlang/OTP Compilation 25.0 - Time To Compile (Seconds, fewer is better): B: 137.84, A: 138.30, C: 138.83, D: 138.87
Mobile Neural Network 2.1 - Model: inception-v3 (ms, fewer is better): A: 20.25 (min 19.94 / max 33.14), B: 20.57 (min 20.02 / max 33.93), D: 20.95 (min 20.5 / max 34.43), C: 21.15 (min 20.36 / max 38.81)
Compiler notes (all Mobile Neural Network results): (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
Mobile Neural Network 2.1 - Model: mobilenet-v1-1.0 (ms, fewer is better): A: 1.878 (min 1.83 / max 2.33), D: 2.204 (min 2.17 / max 2.42), C: 2.268 (min 2.22 / max 2.72), B: 2.269 (min 2.24 / max 2.34)
Mobile Neural Network 2.1 - Model: MobileNetV2_224 (ms, fewer is better): A: 2.615 (min 2.56 / max 2.93), B: 2.735 (min 2.66 / max 3.26), D: 3.044 (min 2.97 / max 3.39), C: 3.197 (min 3.15 / max 3.57)
Mobile Neural Network 2.1 - Model: SqueezeNetV1.0 (ms, fewer is better): A: 3.641 (min 3.57 / max 10.94), B: 3.782 (min 3.73 / max 4.33), D: 4.237 (min 4.11 / max 10.84), C: 4.417 (min 4.36 / max 5.09)
Mobile Neural Network 2.1 - Model: resnet-v2-50 (ms, fewer is better): A: 8.119 (min 7.93 / max 26.5), D: 8.606 (min 8.36 / max 9.61), C: 8.654 (min 8.05 / max 26.64), B: 8.805 (min 8.32 / max 33.06)
Mobile Neural Network 2.1 - Model: squeezenetv1.1 (ms, fewer is better): A: 2.114 (min 2.08 / max 2.24), B: 2.451 (min 2.2 / max 5.51), D: 2.534 (min 2.5 / max 2.78), C: 2.628 (min 2.59 / max 5.14)
Mobile Neural Network 2.1 - Model: mobilenetV3 (ms, fewer is better): A: 1.685 (min 1.66 / max 1.78), B: 1.705 (min 1.67 / max 1.94), D: 1.823 (min 1.79 / max 2.04), C: 1.889 (min 1.86 / max 2.32)
Mobile Neural Network 2.1 - Model: nasnet (ms, fewer is better): A: 11.68 (min 11.03 / max 18.75), B: 12.37 (min 11.35 / max 27.13), D: 13.02 (min 11.03 / max 24.94), C: 13.40 (min 10.89 / max 23.76)
OpenVINO 2022.2.dev - Model: Person Detection FP16 - Device: CPU (ms, fewer is better): A: 1507.12 (min 1301.34 / max 2203.83), B: 1510.95 (min 1308.26 / max 2249.34), C: 1511.26 (min 1270.98 / max 2162.13), D: 1515.02 (min 1296.34 / max 2174.84)
Compiler notes (all OpenVINO results): (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenVINO 2022.2.dev - Model: Person Detection FP16 - Device: CPU (FPS, more is better): B: 13.13, A: 13.12, C: 13.09, D: 13.07
OpenVINO 2022.2.dev - Model: Person Detection FP32 - Device: CPU (ms, fewer is better): D: 1531.79 (min 1295.79 / max 2063.8), A: 1533.51 (min 1281.86 / max 2052.91), B: 1534.43 (min 1304.79 / max 2047.48), C: 1543.62 (min 1391.97 / max 2201.98)
OpenVINO 2022.2.dev - Model: Person Detection FP32 - Device: CPU (FPS, more is better): D: 12.91, B: 12.91, A: 12.90, C: 12.81
Natron 2.4.3 - Input: Spaceship (FPS, more is better): C: 1.5, B: 1.5, A: 1.5, D: 1.4
OpenVINO 2022.2.dev - Model: Face Detection FP16 - Device: CPU (ms, fewer is better): B: 814.05 (min 578.83 / max 898.2), A: 816.29 (min 551.08 / max 937.77), C: 816.86 (min 530.61 / max 979.49), D: 816.87 (min 598.11 / max 895.97)
OpenVINO 2022.2.dev - Model: Face Detection FP16 - Device: CPU (FPS, more is better): B: 24.45, D: 24.36, A: 24.36, C: 24.35
OpenVINO 2022.2.dev - Model: Face Detection FP16-INT8 - Device: CPU (ms, fewer is better): B: 240.12 (min 184.22 / max 359.75), D: 240.51 (min 193.29 / max 354.02), C: 240.81 (min 184.67 / max 352.84), A: 241.03 (min 182.52 / max 349.89)
OpenVINO 2022.2.dev - Model: Face Detection FP16-INT8 - Device: CPU (FPS, more is better): B: 83.18, D: 83.05, C: 82.93, A: 82.84
OpenVINO 2022.2.dev - Model: Person Vehicle Bike Detection FP16 - Device: CPU (ms, fewer is better): B: 9.85 (min 8.59 / max 76.51), D: 9.87 (min 8.55 / max 78.35), A: 9.88 (min 8.55 / max 84.71), C: 9.88 (min 8.6 / max 87.09)
OpenVINO 2022.2.dev - Model: Person Vehicle Bike Detection FP16 - Device: CPU (FPS, more is better): B: 2023.27, D: 2020.40, A: 2019.08, C: 2017.86
OpenVINO 2022.2.dev - Model: Machine Translation EN To DE FP16 - Device: CPU (ms, fewer is better): B: 85.92 (min 72.43 / max 301.8), A: 86.36 (min 72.38 / max 303.3), C: 86.92 (min 72.62 / max 308.23), D: 87.69 (min 71.33 / max 307.03)
OpenVINO 2022.2.dev - Model: Machine Translation EN To DE FP16 - Device: CPU (FPS, more is better): B: 232.03, A: 230.96, C: 229.50, D: 227.42
OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16 - Device: CPU (ms, fewer is better): A: 32.41 (min 28.01 / max 107.86), B: 32.49 (min 28.03 / max 111.13), D: 32.51 (min 28.45 / max 103.54), C: 32.52 (min 27.88 / max 95.98)
OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16 - Device: CPU (FPS, more is better): A: 2448.40, B: 2443.95, C: 2442.14, D: 2441.15
OpenVINO 2022.2.dev - Model: Vehicle Detection FP16 - Device: CPU (ms, fewer is better): D: 17.74 (min 13.24 / max 128.1), B: 17.76 (min 11.23 / max 112.73), A: 17.78 (min 12.46 / max 116.59), C: 17.80 (min 12.08 / max 129.12)
OpenVINO 2022.2.dev - Model: Vehicle Detection FP16 - Device: CPU (FPS, more is better): D: 1124.79, B: 1123.63, A: 1122.33, C: 1121.31
OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (ms, fewer is better): A: 1.32 (min 0.99 / max 16.57), C: 1.35 (min 0.86 / max 20.45), D: 1.37 (min 0.87 / max 16.55), B: 1.40 (min 0.93 / max 17.52)
OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (FPS, more is better): A: 48360.89, C: 48351.01, D: 47659.47, B: 46527.42
OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (ms, fewer is better): B: 8.24 (min 7.21 / max 25.73), A: 8.25 (min 7.22 / max 24.74), C: 8.25 (min 7.2 / max 25.16), D: 8.25 (min 7.19 / max 25.48)
OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (FPS, more is better): B: 9694.57, C: 9685.17, A: 9683.64, D: 9677.29
OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (ms, fewer is better): A: 1.42 (min 0.36 / max 35.73), D: 1.60 (min 0.43 / max 32.92), B: 1.74 (min 0.51 / max 21.83), C: 1.81 (min 0.48 / max 18.78)
OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (FPS, more is better): A: 44836.53, D: 38908.73, B: 34045.71, C: 33880.47
OpenVINO 2022.2.dev - Model: Vehicle Detection FP16-INT8 - Device: CPU (ms, fewer is better): C: 4.52 (min 4.09 / max 46.43), A: 4.53 (min 4.08 / max 61.24), B: 4.53 (min 4.09 / max 62.16), D: 4.53 (min 4.09 / max 57.12)
OpenVINO 2022.2.dev - Model: Vehicle Detection FP16-INT8 - Device: CPU (FPS, more is better): C: 4408.17, B: 4407.61, D: 4401.74, A: 4399.55
Timed Wasmer Compilation 2.3 - Time To Compile (Seconds, fewer is better): C: 57.17, A: 57.45, B: 58.09, D: 58.79
Compiler notes: (CC) gcc options: -m64 -ldl -lgcc_s -lutil -lrt -lpthread -lm -lc -pie -nodefaultlibs
Timed PHP Compilation 8.1.9 - Time To Compile (Seconds, fewer is better): D: 54.21, A: 54.27, C: 54.47, B: 54.71
7-Zip Compression 22.01 - Test: Decompression Rating (MIPS, more is better): A: 357733, C: 350969, B: 347845, D: 338639
Compiler notes (both 7-Zip results): (CXX) g++ options: -lpthread -ldl -O2 -fPIC
7-Zip Compression 22.01 - Test: Compression Rating (MIPS, more is better): A: 341336, C: 340658, B: 340384, D: 337572
Timed CPython Compilation 3.10.6 - Build Configuration: Default (Seconds, fewer is better): D: 22.96, A: 23.65, B: 23.67, C: 23.76
Phoronix Test Suite v10.8.5