tiger christrmas

Intel Core i7-1165G7 testing with a Dell 0GG9PT (3.11.0 BIOS) and Intel Xe TGL GT2 3GB on Ubuntu 21.10 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2212221-NE-TIGERCHRI19&sor&grr.

tiger christrmas system configuration (identical for runs a, b, and c):

Processor: Intel Core i7-1165G7 @ 4.70GHz (4 Cores / 8 Threads)
Motherboard: Dell 0GG9PT (3.11.0 BIOS)
Chipset: Intel Tiger Lake-LP
Memory: 16GB
Disk: Kioxia KBG40ZNS256G NVMe 256GB
Graphics: Intel Xe TGL GT2 3GB (1300MHz)
Audio: Realtek ALC289
Network: Intel Wi-Fi 6 AX201
OS: Ubuntu 21.10
Kernel: 5.13.0-52-generic (x86_64)
Desktop: GNOME Shell 40.5
Display Server: X Server + Wayland
OpenGL: 4.6 Mesa 21.2.2
OpenCL: OpenCL 3.0
Vulkan: 1.2.182
Compiler: GCC 11.2.0
File-System: ext4
Screen Resolution: 1920x1200

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-ZPT0kp/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-ZPT0kp/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0xa4 - Thermald 2.4.6
Python Details: Python 3.9.7
Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
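The report itself does not record the commands used, but since the results were produced via the Phoronix Test Suite, a comparable run can in principle be queued from the PTS command line. The sketch below is a hypothetical reproduction script, not part of the original result: the `phoronix-test-suite benchmark` subcommand is the standard entry point, while the exact pts/ test-profile names listed are assumptions and should be verified with `phoronix-test-suite list-available-tests`.

```python
# Hypothetical reproduction sketch (not part of the original report).
# Assumes the Phoronix Test Suite is installed and that the pts/ profile
# names below correspond to the tests shown in this result file.
import subprocess

ASSUMED_PROFILES = [
    "pts/fluidx3d",   # FluidX3D 1.4
    "pts/openvkl",    # OpenVKL 1.3.1
    "pts/onednn",     # oneDNN 3.0
    "pts/cockroach",  # CockroachDB 22.2
    "pts/openvino",   # OpenVINO 2022.3
]

# "benchmark" installs each profile if needed and then runs it, prompting
# interactively for the per-test options (workload, concurrency, model, etc.).
subprocess.run(["phoronix-test-suite", "benchmark", *ASSUMED_PROFILES], check=True)
```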

(Overview table: per-test results for runs a, b, and c. The same data is broken out test by test in the sections below.)
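As an aside on reading such an overview, a common way to condense a many-test comparison into one figure per run is to normalize each result against the best run for that test and take the geometric mean. The short sketch below only illustrates that arithmetic with a few higher-is-better values copied from the sections that follow; it is not part of the Phoronix Test Suite output, and lower-is-better results (e.g. the oneDNN and OpenVINO latencies) would need to be inverted first.

```python
# Illustration only: relative-performance summary over a few of the
# higher-is-better results reported below for runs a, b, and c.
from math import prod

results = {
    "FluidX3D FP32-FP32 (MLUPs/s)":           {"a": 354.0,   "b": 353.0,   "c": 352.0},
    "CockroachDB KV 95% Reads, 1024 (ops/s)": {"a": 19697.7, "b": 19853.0, "c": 19959.0},
    "CockroachDB KV 10% Reads, 128 (ops/s)":  {"a": 16035.1, "b": 10347.1, "c": 10276.6},
}

def geomean(xs):
    return prod(xs) ** (1.0 / len(xs))

for run in ("a", "b", "c"):
    # Normalize each test to the fastest run for that test, then average.
    ratios = [vals[run] / max(vals.values()) for vals in results.values()]
    print(f"run {run}: relative performance {geomean(ratios):.3f}")
```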

FluidX3D

Test: FP32-FP32

FluidX3D 1.4 - MLUPs/s (more is better): a: 354, b: 353, c: 352

OpenVKL

Benchmark: vklBenchmark ISPC

OpenVKL 1.3.1 - Items/Sec (more is better): a: 85 (MIN: 8 / MAX: 1540), c: 84 (MIN: 8 / MAX: 1544), b: 83 (MIN: 8 / MAX: 1341)

OpenVKL

Benchmark: vklBenchmark Scalar

OpenVKL 1.3.1 - Items/Sec (more is better): a: 32 (MIN: 3 / MAX: 554), c: 31 (MIN: 4 / MAX: 577), b: 31 (MIN: 4 / MAX: 557)

FluidX3D

Test: FP32-FP16C

FluidX3D 1.4 - MLUPs/s (more is better): b: 619, a: 617, c: 613

FluidX3D

Test: FP32-FP16S

FluidX3D 1.4 - MLUPs/s (more is better): b: 674, a: 674, c: 673

oneDNN

Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.0 - ms (fewer is better): a: 7483.36 (MIN: 7402.63), b: 7492.16 (MIN: 7422.76), c: 7502.88 (MIN: 7403.57). (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

CockroachDB

Workload: KV, 95% Reads - Concurrency: 1024

CockroachDB 22.2 - ops/s (more is better): c: 19959.0, b: 19853.0, a: 19697.7

CockroachDB

Workload: KV, 50% Reads - Concurrency: 1024

CockroachDB 22.2 - ops/s (more is better): b: 16834.3, c: 16320.2, a: 16024.2

CockroachDB

Workload: KV, 10% Reads - Concurrency: 1024

CockroachDB 22.2 - ops/s (more is better): a: 14276.6, c: 14007.8, b: 13523.0

CockroachDB

Workload: KV, 60% Reads - Concurrency: 1024

CockroachDB 22.2 - ops/s (more is better): c: 17463.1, a: 16513.5, b: 16445.4

CockroachDB

Workload: KV, 10% Reads - Concurrency: 512

CockroachDB 22.2 - ops/s (more is better): a: 15619.7, c: 15444.8, b: 14915.5

CockroachDB

Workload: KV, 95% Reads - Concurrency: 512

CockroachDB 22.2 - ops/s (more is better): c: 20768.3, a: 20479.8, b: 20291.7

CockroachDB

Workload: KV, 60% Reads - Concurrency: 512

CockroachDB 22.2 - ops/s (more is better): b: 18445.8, c: 17455.4, a: 17337.3

CockroachDB

Workload: KV, 50% Reads - Concurrency: 512

CockroachDB 22.2 - ops/s (more is better): c: 17401.2, b: 15557.4, a: 15393.2

oneDNN

Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.0 - ms (fewer is better): b: 7304.32 (MIN: 7174.15), c: 7403.27 (MIN: 7254.09), a: 7481.76 (MIN: 7406.38). (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

CockroachDB

Workload: KV, 50% Reads - Concurrency: 256

CockroachDB 22.2 - ops/s (more is better): a: 17250.2, c: 17022.7, b: 16457.3

CockroachDB

Workload: KV, 95% Reads - Concurrency: 256

CockroachDB 22.2 - ops/s (more is better): c: 22070.5, a: 21902.9, b: 21093.0

CockroachDB

Workload: KV, 60% Reads - Concurrency: 256

CockroachDB 22.2 - ops/s (more is better): c: 18639.0, a: 17805.3, b: 16540.3

CockroachDB

Workload: KV, 10% Reads - Concurrency: 256

CockroachDB 22.2 - ops/s (more is better): c: 15748.0, b: 15092.2, a: 14329.3

CockroachDB

Workload: KV, 50% Reads - Concurrency: 128

CockroachDB 22.2 - ops/s (more is better): b: 16230.1, a: 15097.5, c: 14302.4

CockroachDB

Workload: KV, 60% Reads - Concurrency: 128

CockroachDB 22.2 - ops/s (more is better): b: 18334.7, a: 17788.9, c: 16174.6

CockroachDB

Workload: KV, 10% Reads - Concurrency: 128

CockroachDB 22.2 - ops/s (more is better): a: 16035.1, b: 10347.1, c: 10276.6

CockroachDB

Workload: KV, 95% Reads - Concurrency: 128

CockroachDB 22.2 - ops/s (more is better): a: 22544.5, c: 21954.4, b: 21846.2

oneDNN

Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU

oneDNN 3.0 - ms (fewer is better): b: 7050.25 (MIN: 6902.32), a: 7247.05 (MIN: 7149.36), c: 7444.00 (MIN: 7371.82). (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.0 - ms (fewer is better): b: 3729.76 (MIN: 3656.28), a: 3809.20 (MIN: 3757.47), c: 3832.92 (MIN: 3776.63). (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.0 - ms (fewer is better): b: 3759.52 (MIN: 3679.01), a: 3829.54 (MIN: 3738.95), c: 3840.61 (MIN: 3764.93). (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU

oneDNN 3.0 - ms (fewer is better): b: 3690.15 (MIN: 3590.21), c: 3792.72 (MIN: 3658.46), a: 3841.61 (MIN: 3764.56). (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenVINO

Model: Person Detection FP32 - Device: CPU

OpenVINO 2022.3 - ms (fewer is better): a: 5826.84 (MIN: 5187.09 / MAX: 6444.8), b: 5887.78 (MIN: 5507.97 / MAX: 6406.76), c: 5983.48 (MIN: 5490.41 / MAX: 6446.46). (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenVINO

Model: Person Detection FP32 - Device: CPU

OpenVINO 2022.3 - FPS (more is better): b: 0.68, a: 0.68, c: 0.66. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenVINO

Model: Person Detection FP16 - Device: CPU

OpenVINO 2022.3 - ms (fewer is better): b: 6248.19 (MIN: 5347.29 / MAX: 6437.09), a: 6273.54 (MIN: 5329.2 / MAX: 6431.98), c: 6305.72 (MIN: 5589.83 / MAX: 6446.78). (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenVINO

Model: Person Detection FP16 - Device: CPU

OpenVINO 2022.3 - FPS (more is better): b: 0.64, c: 0.63, a: 0.63. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenVINO

Model: Face Detection FP16 - Device: CPU

OpenVINO 2022.3 - ms (fewer is better): a: 3638.45 (MIN: 3157.2 / MAX: 4036.09), b: 3643.49 (MIN: 3224.49 / MAX: 4037.04), c: 3709.36 (MIN: 3202.52 / MAX: 4037.74). (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenVINO

Model: Face Detection FP16 - Device: CPU

OpenVINO 2022.3 - FPS (more is better): b: 1.10, a: 1.10, c: 1.08. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenVINO

Model: Face Detection FP16-INT8 - Device: CPU

OpenVINO 2022.3 - ms (fewer is better): a: 946.91 (MIN: 622.24 / MAX: 1062.65), b: 948.20 (MIN: 599.96 / MAX: 1070.83), c: 991.24 (MIN: 663.98 / MAX: 1074.92). (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenVINO

Model: Face Detection FP16-INT8 - Device: CPU

OpenVINO 2022.3 - FPS (more is better): b: 4.21, a: 4.21, c: 4.02. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenVINO

Model: Machine Translation EN To DE FP16 - Device: CPU

OpenVINO 2022.3 - ms (fewer is better): c: 292.07 (MIN: 168.66 / MAX: 324.44), a: 294.52 (MIN: 236.38 / MAX: 320.02), b: 295.25 (MIN: 216.28 / MAX: 319.83). (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenVINO

Model: Machine Translation EN To DE FP16 - Device: CPU

OpenVINO 2022.3 - FPS (more is better): c: 13.67, a: 13.56, b: 13.54. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenVINO

Model: Person Vehicle Bike Detection FP16 - Device: CPU

OpenVINO 2022.3 - ms (fewer is better): a: 25.91 (MIN: 19.52 / MAX: 44.17), c: 26.25 (MIN: 19.73 / MAX: 43.4), b: 26.49 (MIN: 18.18 / MAX: 42.58). (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenVINO

Model: Person Vehicle Bike Detection FP16 - Device: CPU

OpenVINO 2022.3 - FPS (more is better): a: 154.26, c: 152.23, b: 150.89. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenVINO

Model: Vehicle Detection FP16-INT8 - Device: CPU

OpenVINO 2022.3 - ms (fewer is better): a: 17.61 (MIN: 11.79 / MAX: 32.93), c: 17.75 (MIN: 11.32 / MAX: 35.38), b: 17.79 (MIN: 10.62 / MAX: 34.09). (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenVINO

Model: Vehicle Detection FP16-INT8 - Device: CPU

OpenVINO 2022.3 - FPS (more is better): a: 226.85, c: 225.20, b: 224.65. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenVINO

Model: Vehicle Detection FP16 - Device: CPU

OpenVINO 2022.3 - ms (fewer is better): a: 48.61 (MIN: 32.3 / MAX: 66.49), b: 48.72 (MIN: 37.67 / MAX: 76.65), c: 48.76 (MIN: 37.06 / MAX: 66.17). (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenVINO

Model: Vehicle Detection FP16 - Device: CPU

OpenVINO 2022.3 - FPS (more is better): a: 82.25, b: 82.06, c: 82.00. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenVINO

Model: Weld Porosity Detection FP16 - Device: CPU

OpenVINO 2022.3 - ms (fewer is better): a: 36.12 (MIN: 24.98 / MAX: 54.46), b: 36.28 (MIN: 19.63 / MAX: 54.5), c: 36.97 (MIN: 27.47 / MAX: 55.43). (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenVINO

Model: Weld Porosity Detection FP16 - Device: CPU

OpenVINO 2022.3 - FPS (more is better): a: 110.69, b: 110.18, c: 108.14. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenVINO

Model: Weld Porosity Detection FP16-INT8 - Device: CPU

OpenVINO 2022.3 - ms (fewer is better): b: 9.68 (MIN: 5.26 / MAX: 24.1), a: 9.70 (MIN: 5.1 / MAX: 25.24), c: 10.00 (MIN: 5 / MAX: 25.48). (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenVINO

Model: Weld Porosity Detection FP16-INT8 - Device: CPU

OpenVINO 2022.3 - FPS (more is better): b: 412.26, a: 411.78, c: 399.31. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU

OpenVINO 2022.3 - ms (fewer is better): a: 1.33 (MIN: 0.68 / MAX: 16.83), b: 1.41 (MIN: 0.77 / MAX: 16.27), c: 1.41 (MIN: 0.77 / MAX: 18.51). (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU

OpenVINO 2022.3 - FPS (more is better): a: 2967.18, b: 2809.48, c: 2802.51. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU

OpenVINO 2022.3 - ms (fewer is better): b: 1.13 (MIN: 0.51 / MAX: 31.67), c: 1.14 (MIN: 0.56 / MAX: 16.91), a: 1.24 (MIN: 0.57 / MAX: 11.94). (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU

OpenVINO 2022.3 - FPS (more is better): b: 3515.84, c: 3485.19, a: 3202.10. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.0 - ms (fewer is better): c: 45.26 (MIN: 43.5), a: 45.85 (MIN: 44.29), b: 47.11 (MIN: 45.25). (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU

oneDNN 3.0 - ms (fewer is better): c: 11.39 (MIN: 11.01), a: 11.81 (MIN: 11.57), b: 11.87 (MIN: 11.58). (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.0 - ms (fewer is better): c: 2.15107 (MIN: 2.01), a: 2.16245 (MIN: 2.1), b: 2.17719 (MIN: 2.11). (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU

oneDNN 3.0 - ms (fewer is better): b: 7.83017 (MIN: 7.18), a: 7.90117 (MIN: 7.05), c: 8.29020 (MIN: 6.98). (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.0 - ms (fewer is better): c: 18.78 (MIN: 18.3), a: 19.56 (MIN: 19.17), b: 19.68 (MIN: 19.36). (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.0 - ms (fewer is better): c: 1.59253 (MIN: 1.46), a: 1.61858 (MIN: 1.46), b: 1.66244 (MIN: 1.56). (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU

oneDNN 3.0 - ms (fewer is better): b: 3.53667 (MIN: 3.22), c: 3.54678 (MIN: 3.41), a: 3.55613 (MIN: 3.44). (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.0 - ms (fewer is better): c: 8.56415 (MIN: 7.87), a: 8.66355 (MIN: 8.02), b: 9.07093 (MIN: 8.48). (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.0 - ms (fewer is better): b: 1.42052 (MIN: 1.34), a: 1.42268 (MIN: 1.36), c: 1.43733 (MIN: 1.37). (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU

oneDNN 3.0 - ms (fewer is better): c: 5.78771 (MIN: 5.58), a: 5.82337 (MIN: 5.62), b: 5.82751 (MIN: 5.64). (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.0 - ms (fewer is better): b: 6.16745 (MIN: 5.24), c: 6.23449 (MIN: 5.17), a: 6.27663 (MIN: 5.24). (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.0 - ms (fewer is better): b: 2.24367 (MIN: 2.16), c: 2.26345 (MIN: 2.17), a: 2.26772 (MIN: 2.17). (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.0 - ms (fewer is better): a: 41.39 (MIN: 38.87), b: 41.86 (MIN: 39.87), c: 45.70 (MIN: 37.87). (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.0 - ms (fewer is better): c: 7.87596 (MIN: 7.75), b: 8.01075 (MIN: 7.89), a: 8.19741 (MIN: 8.05). (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU

oneDNN 3.0 - ms (fewer is better): c: 8.77594 (MIN: 8.65), a: 8.94631 (MIN: 8.81), b: 9.07376 (MIN: 8.95). (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.0 - ms (fewer is better): c: 38.28 (MIN: 38.09), a: 40.05 (MIN: 39.12), b: 40.50 (MIN: 40.09). (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU

oneDNN 3.0 - ms (fewer is better): c: 10.41 (MIN: 10.19), b: 10.83 (MIN: 10.71), a: 10.85 (MIN: 10.73). (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.0 - ms (fewer is better): c: 2.53415 (MIN: 2.38), a: 2.53818 (MIN: 2.51), b: 2.61272 (MIN: 2.51). (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl


Phoronix Test Suite v10.8.4