xeon 8380 2P: 2 x Intel Xeon Platinum 8380 testing with an Intel M50CYP2SB2U (SE5C6200.86B.0022.D08.2103221623 BIOS) and ASPEED graphics on Ubuntu 22.10 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2209058-NE-XEON8380230 .
xeon 8380 2P - System Details (identical configuration for runs A, B, C, D)

  Processor: 2 x Intel Xeon Platinum 8380 @ 3.40GHz (80 Cores / 160 Threads)
  Motherboard: Intel M50CYP2SB2U (SE5C6200.86B.0022.D08.2103221623 BIOS)
  Chipset: Intel Ice Lake IEH
  Memory: 512GB
  Disk: 7682GB INTEL SSDPF2KX076TZ
  Graphics: ASPEED
  Monitor: VE228
  Network: 2 x Intel X710 for 10GBASE-T + 2 x Intel E810-C for QSFP
  OS: Ubuntu 22.10
  Kernel: 5.19.0-15-generic (x86_64)
  Desktop: GNOME Shell
  Display Server: X Server 1.21.1.3
  Vulkan: 1.3.211
  Compiler: GCC 12.2.0
  File-System: ext4
  Screen Resolution: 1920x1080

  Kernel Details: Transparent Huge Pages: madvise
  Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-Wbc0TK/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-Wbc0TK/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
  Processor Details: Scaling Governor: intel_pstate powersave (EPP: balance_performance); CPU Microcode: 0xd000375
  Python Details: Python 3.10.6
  Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS; IBPB: conditional; RSB filling + srbds: Not affected + tsx_async_abort: Not affected
xeon 8380 2P - Result Overview (per-run values for A, B, C, D)

  compress-7zip: Compression Rating - A: 341336, B: 340384, C: 340658, D: 337572
  compress-7zip: Decompression Rating - A: 357733, B: 347845, C: 350969, D: 338639
  build-nodejs: Time To Compile - A: 183.569, B: 185.516, C: 184.913, D: 189.027
  build-php: Time To Compile - A: 54.265, B: 54.714, C: 54.473, D: 54.211
  build-python: Default - A: 23.647, B: 23.67, C: 23.758, D: 22.961
  build-python: Released Build, PGO + LTO Optimized - A: 327.427, B: 329.055, C: 329.899, D: 330.258
  build-erlang: Time To Compile - A: 138.3, B: 137.842, C: 138.828, D: 138.865
  build-wasmer: Time To Compile - A: 57.445, B: 58.093, C: 57.173, D: 58.793
  mnn: nasnet - A: 11.677, B: 12.373, C: 13.403, D: 13.023
  mnn: mobilenetV3 - A: 1.685, B: 1.705, C: 1.889, D: 1.823
  mnn: squeezenetv1.1 - A: 2.114, B: 2.451, C: 2.628, D: 2.534
  mnn: resnet-v2-50 - A: 8.119, B: 8.805, C: 8.654, D: 8.606
  mnn: SqueezeNetV1.0 - A: 3.641, B: 3.782, C: 4.417, D: 4.237
  mnn: MobileNetV2_224 - A: 2.615, B: 2.735, C: 3.197, D: 3.044
  mnn: mobilenet-v1-1.0 - A: 1.878, B: 2.269, C: 2.268, D: 2.204
  mnn: inception-v3 - A: 20.247, B: 20.571, C: 21.146, D: 20.945
  openvino: Face Detection FP16 - CPU (FPS) - A: 24.36, B: 24.45, C: 24.35, D: 24.36
  openvino: Face Detection FP16 - CPU (ms) - A: 816.29, B: 814.05, C: 816.86, D: 816.87
  openvino: Person Detection FP16 - CPU (FPS) - A: 13.12, B: 13.13, C: 13.09, D: 13.07
  openvino: Person Detection FP16 - CPU (ms) - A: 1507.12, B: 1510.95, C: 1511.26, D: 1515.02
  openvino: Person Detection FP32 - CPU (FPS) - A: 12.9, B: 12.91, C: 12.81, D: 12.91
  openvino: Person Detection FP32 - CPU (ms) - A: 1533.51, B: 1534.43, C: 1543.62, D: 1531.79
  openvino: Vehicle Detection FP16 - CPU (FPS) - A: 1122.33, B: 1123.63, C: 1121.31, D: 1124.79
  openvino: Vehicle Detection FP16 - CPU (ms) - A: 17.78, B: 17.76, C: 17.8, D: 17.74
  openvino: Face Detection FP16-INT8 - CPU (FPS) - A: 82.84, B: 83.18, C: 82.93, D: 83.05
  openvino: Face Detection FP16-INT8 - CPU (ms) - A: 241.03, B: 240.12, C: 240.81, D: 240.51
  openvino: Vehicle Detection FP16-INT8 - CPU (FPS) - A: 4399.55, B: 4407.61, C: 4408.17, D: 4401.74
  openvino: Vehicle Detection FP16-INT8 - CPU (ms) - A: 4.53, B: 4.53, C: 4.52, D: 4.53
  openvino: Weld Porosity Detection FP16 - CPU (FPS) - A: 2448.4, B: 2443.95, C: 2442.14, D: 2441.15
  openvino: Weld Porosity Detection FP16 - CPU (ms) - A: 32.41, B: 32.49, C: 32.52, D: 32.51
  openvino: Machine Translation EN To DE FP16 - CPU (FPS) - A: 230.96, B: 232.03, C: 229.5, D: 227.42
  openvino: Machine Translation EN To DE FP16 - CPU (ms) - A: 86.36, B: 85.92, C: 86.92, D: 87.69
  openvino: Weld Porosity Detection FP16-INT8 - CPU (FPS) - A: 9683.64, B: 9694.57, C: 9685.17, D: 9677.29
  openvino: Weld Porosity Detection FP16-INT8 - CPU (ms) - A: 8.25, B: 8.24, C: 8.25, D: 8.25
  openvino: Person Vehicle Bike Detection FP16 - CPU (FPS) - A: 2019.08, B: 2023.27, C: 2017.86, D: 2020.4
  openvino: Person Vehicle Bike Detection FP16 - CPU (ms) - A: 9.88, B: 9.85, C: 9.88, D: 9.87
  openvino: Age Gender Recognition Retail 0013 FP16 - CPU (FPS) - A: 48360.89, B: 46527.42, C: 48351.01, D: 47659.47
  openvino: Age Gender Recognition Retail 0013 FP16 - CPU (ms) - A: 1.32, B: 1.4, C: 1.35, D: 1.37
  openvino: Age Gender Recognition Retail 0013 FP16-INT8 - CPU (FPS) - A: 44836.53, B: 34045.71, C: 33880.47, D: 38908.73
  openvino: Age Gender Recognition Retail 0013 FP16-INT8 - CPU (ms) - A: 1.42, B: 1.74, C: 1.81, D: 1.6
  natron: Spaceship - A: 1.5, B: 1.5, C: 1.5, D: 1.4
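Since all four runs (A, B, C, D) use the identical configuration described above, the spread between them is a quick check on run-to-run variance; the OpenVINO Age Gender Recognition Retail 0013 FP16-INT8 result, for example, ranges from about 33880 to 44837 FPS across the four runs. Below is a minimal Python sketch of that check; the spread_percent helper and the hand-copied values dictionary are illustrative only and are not part of the Phoronix Test Suite.

from statistics import mean

def spread_percent(values):
    """Return (max - min) as a percentage of the mean across the runs."""
    avg = mean(values)
    return (max(values) - min(values)) / avg * 100.0

# Values hand-copied from the result overview above (runs A, B, C, D).
runs = {
    "compress-7zip: Compression Rating (MIPS)": [341336, 340384, 340658, 337572],
    "openvino: Age Gender Recognition Retail 0013 FP16-INT8 - CPU (FPS)": [44836.53, 34045.71, 33880.47, 38908.73],
}

for name, values in runs.items():
    print(f"{name}: {spread_percent(values):.1f}% spread across runs A-D")

With the numbers above, the 7-Zip compression spread works out to roughly 1% of the mean while the FP16-INT8 age/gender result is close to 29%, which is worth keeping in mind when reading the per-test results that follow.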
7-Zip Compression 22.01 - Test: Compression Rating (MIPS, More Is Better)
  A: 341336    B: 340384    C: 340658    D: 337572

7-Zip Compression 22.01 - Test: Decompression Rating (MIPS, More Is Better)
  A: 357733    B: 347845    C: 350969    D: 338639

  1. (CXX) g++ options (both 7-Zip results above): -lpthread -ldl -O2 -fPIC
Timed Node.js Compilation 18.8 - Time To Compile (Seconds, Fewer Is Better)
  A: 183.57    B: 185.52    C: 184.91    D: 189.03

Timed PHP Compilation 8.1.9 - Time To Compile (Seconds, Fewer Is Better)
  A: 54.27    B: 54.71    C: 54.47    D: 54.21

Timed CPython Compilation 3.10.6 - Build Configuration: Default (Seconds, Fewer Is Better)
  A: 23.65    B: 23.67    C: 23.76    D: 22.96

Timed CPython Compilation 3.10.6 - Build Configuration: Released Build, PGO + LTO Optimized (Seconds, Fewer Is Better)
  A: 327.43    B: 329.06    C: 329.90    D: 330.26

Timed Erlang/OTP Compilation 25.0 - Time To Compile (Seconds, Fewer Is Better)
  A: 138.30    B: 137.84    C: 138.83    D: 138.87

Timed Wasmer Compilation 2.3 - Time To Compile (Seconds, Fewer Is Better)
  A: 57.45    B: 58.09    C: 57.17    D: 58.79
  1. (CC) gcc options: -m64 -ldl -lgcc_s -lutil -lrt -lpthread -lm -lc -pie -nodefaultlibs
Mobile Neural Network 2.1 - Model: nasnet (ms, Fewer Is Better)
  A: 11.68 (MIN: 11.03 / MAX: 18.75)    B: 12.37 (MIN: 11.35 / MAX: 27.13)    C: 13.40 (MIN: 10.89 / MAX: 23.76)    D: 13.02 (MIN: 11.03 / MAX: 24.94)

Mobile Neural Network 2.1 - Model: mobilenetV3 (ms, Fewer Is Better)
  A: 1.685 (MIN: 1.66 / MAX: 1.78)    B: 1.705 (MIN: 1.67 / MAX: 1.94)    C: 1.889 (MIN: 1.86 / MAX: 2.32)    D: 1.823 (MIN: 1.79 / MAX: 2.04)

Mobile Neural Network 2.1 - Model: squeezenetv1.1 (ms, Fewer Is Better)
  A: 2.114 (MIN: 2.08 / MAX: 2.24)    B: 2.451 (MIN: 2.2 / MAX: 5.51)    C: 2.628 (MIN: 2.59 / MAX: 5.14)    D: 2.534 (MIN: 2.5 / MAX: 2.78)

Mobile Neural Network 2.1 - Model: resnet-v2-50 (ms, Fewer Is Better)
  A: 8.119 (MIN: 7.93 / MAX: 26.5)    B: 8.805 (MIN: 8.32 / MAX: 33.06)    C: 8.654 (MIN: 8.05 / MAX: 26.64)    D: 8.606 (MIN: 8.36 / MAX: 9.61)

Mobile Neural Network 2.1 - Model: SqueezeNetV1.0 (ms, Fewer Is Better)
  A: 3.641 (MIN: 3.57 / MAX: 10.94)    B: 3.782 (MIN: 3.73 / MAX: 4.33)    C: 4.417 (MIN: 4.36 / MAX: 5.09)    D: 4.237 (MIN: 4.11 / MAX: 10.84)

Mobile Neural Network 2.1 - Model: MobileNetV2_224 (ms, Fewer Is Better)
  A: 2.615 (MIN: 2.56 / MAX: 2.93)    B: 2.735 (MIN: 2.66 / MAX: 3.26)    C: 3.197 (MIN: 3.15 / MAX: 3.57)    D: 3.044 (MIN: 2.97 / MAX: 3.39)

Mobile Neural Network 2.1 - Model: mobilenet-v1-1.0 (ms, Fewer Is Better)
  A: 1.878 (MIN: 1.83 / MAX: 2.33)    B: 2.269 (MIN: 2.24 / MAX: 2.34)    C: 2.268 (MIN: 2.22 / MAX: 2.72)    D: 2.204 (MIN: 2.17 / MAX: 2.42)

Mobile Neural Network 2.1 - Model: inception-v3 (ms, Fewer Is Better)
  A: 20.25 (MIN: 19.94 / MAX: 33.14)    B: 20.57 (MIN: 20.02 / MAX: 33.93)    C: 21.15 (MIN: 20.36 / MAX: 38.81)    D: 20.95 (MIN: 20.5 / MAX: 34.43)

  1. (CXX) g++ options (all Mobile Neural Network results above): -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenVINO 2022.2.dev - Model: Face Detection FP16 - Device: CPU (FPS, More Is Better)
  A: 24.36    B: 24.45    C: 24.35    D: 24.36

OpenVINO 2022.2.dev - Model: Face Detection FP16 - Device: CPU (ms, Fewer Is Better)
  A: 816.29 (MIN: 551.08 / MAX: 937.77)    B: 814.05 (MIN: 578.83 / MAX: 898.2)    C: 816.86 (MIN: 530.61 / MAX: 979.49)    D: 816.87 (MIN: 598.11 / MAX: 895.97)

OpenVINO 2022.2.dev - Model: Person Detection FP16 - Device: CPU (FPS, More Is Better)
  A: 13.12    B: 13.13    C: 13.09    D: 13.07

OpenVINO 2022.2.dev - Model: Person Detection FP16 - Device: CPU (ms, Fewer Is Better)
  A: 1507.12 (MIN: 1301.34 / MAX: 2203.83)    B: 1510.95 (MIN: 1308.26 / MAX: 2249.34)    C: 1511.26 (MIN: 1270.98 / MAX: 2162.13)    D: 1515.02 (MIN: 1296.34 / MAX: 2174.84)

OpenVINO 2022.2.dev - Model: Person Detection FP32 - Device: CPU (FPS, More Is Better)
  A: 12.90    B: 12.91    C: 12.81    D: 12.91

OpenVINO 2022.2.dev - Model: Person Detection FP32 - Device: CPU (ms, Fewer Is Better)
  A: 1533.51 (MIN: 1281.86 / MAX: 2052.91)    B: 1534.43 (MIN: 1304.79 / MAX: 2047.48)    C: 1543.62 (MIN: 1391.97 / MAX: 2201.98)    D: 1531.79 (MIN: 1295.79 / MAX: 2063.8)

OpenVINO 2022.2.dev - Model: Vehicle Detection FP16 - Device: CPU (FPS, More Is Better)
  A: 1122.33    B: 1123.63    C: 1121.31    D: 1124.79

OpenVINO 2022.2.dev - Model: Vehicle Detection FP16 - Device: CPU (ms, Fewer Is Better)
  A: 17.78 (MIN: 12.46 / MAX: 116.59)    B: 17.76 (MIN: 11.23 / MAX: 112.73)    C: 17.80 (MIN: 12.08 / MAX: 129.12)    D: 17.74 (MIN: 13.24 / MAX: 128.1)

OpenVINO 2022.2.dev - Model: Face Detection FP16-INT8 - Device: CPU (FPS, More Is Better)
  A: 82.84    B: 83.18    C: 82.93    D: 83.05

OpenVINO 2022.2.dev - Model: Face Detection FP16-INT8 - Device: CPU (ms, Fewer Is Better)
  A: 241.03 (MIN: 182.52 / MAX: 349.89)    B: 240.12 (MIN: 184.22 / MAX: 359.75)    C: 240.81 (MIN: 184.67 / MAX: 352.84)    D: 240.51 (MIN: 193.29 / MAX: 354.02)

OpenVINO 2022.2.dev - Model: Vehicle Detection FP16-INT8 - Device: CPU (FPS, More Is Better)
  A: 4399.55    B: 4407.61    C: 4408.17    D: 4401.74

OpenVINO 2022.2.dev - Model: Vehicle Detection FP16-INT8 - Device: CPU (ms, Fewer Is Better)
  A: 4.53 (MIN: 4.08 / MAX: 61.24)    B: 4.53 (MIN: 4.09 / MAX: 62.16)    C: 4.52 (MIN: 4.09 / MAX: 46.43)    D: 4.53 (MIN: 4.09 / MAX: 57.12)

OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16 - Device: CPU (FPS, More Is Better)
  A: 2448.40    B: 2443.95    C: 2442.14    D: 2441.15

OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16 - Device: CPU (ms, Fewer Is Better)
  A: 32.41 (MIN: 28.01 / MAX: 107.86)    B: 32.49 (MIN: 28.03 / MAX: 111.13)    C: 32.52 (MIN: 27.88 / MAX: 95.98)    D: 32.51 (MIN: 28.45 / MAX: 103.54)

OpenVINO 2022.2.dev - Model: Machine Translation EN To DE FP16 - Device: CPU (FPS, More Is Better)
  A: 230.96    B: 232.03    C: 229.50    D: 227.42

OpenVINO 2022.2.dev - Model: Machine Translation EN To DE FP16 - Device: CPU (ms, Fewer Is Better)
  A: 86.36 (MIN: 72.38 / MAX: 303.3)    B: 85.92 (MIN: 72.43 / MAX: 301.8)    C: 86.92 (MIN: 72.62 / MAX: 308.23)    D: 87.69 (MIN: 71.33 / MAX: 307.03)

OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (FPS, More Is Better)
  A: 9683.64    B: 9694.57    C: 9685.17    D: 9677.29

OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (ms, Fewer Is Better)
  A: 8.25 (MIN: 7.22 / MAX: 24.74)    B: 8.24 (MIN: 7.21 / MAX: 25.73)    C: 8.25 (MIN: 7.2 / MAX: 25.16)    D: 8.25 (MIN: 7.19 / MAX: 25.48)

OpenVINO 2022.2.dev - Model: Person Vehicle Bike Detection FP16 - Device: CPU (FPS, More Is Better)
  A: 2019.08    B: 2023.27    C: 2017.86    D: 2020.40

OpenVINO 2022.2.dev - Model: Person Vehicle Bike Detection FP16 - Device: CPU (ms, Fewer Is Better)
  A: 9.88 (MIN: 8.55 / MAX: 84.71)    B: 9.85 (MIN: 8.59 / MAX: 76.51)    C: 9.88 (MIN: 8.6 / MAX: 87.09)    D: 9.87 (MIN: 8.55 / MAX: 78.35)

OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (FPS, More Is Better)
  A: 48360.89    B: 46527.42    C: 48351.01    D: 47659.47

OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (ms, Fewer Is Better)
  A: 1.32 (MIN: 0.99 / MAX: 16.57)    B: 1.40 (MIN: 0.93 / MAX: 17.52)    C: 1.35 (MIN: 0.86 / MAX: 20.45)    D: 1.37 (MIN: 0.87 / MAX: 16.55)

OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (FPS, More Is Better)
  A: 44836.53    B: 34045.71    C: 33880.47    D: 38908.73

OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (ms, Fewer Is Better)
  A: 1.42 (MIN: 0.36 / MAX: 35.73)    B: 1.74 (MIN: 0.51 / MAX: 21.83)    C: 1.81 (MIN: 0.48 / MAX: 18.78)    D: 1.60 (MIN: 0.43 / MAX: 32.92)

  1. (CXX) g++ options (all OpenVINO results above): -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
Natron 2.4.3 - Input: Spaceship (FPS, More Is Better)
  A: 1.5    B: 1.5    C: 1.5    D: 1.4
Phoronix Test Suite v10.8.5