10980xe xmas

Intel Core i9-10980XE testing with an ASRock X299 Steel Legend (P1.30 BIOS) and llvmpipe on Ubuntu 22.04 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2212249-PTS-10980XEX38&sor&grr.
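To run the same comparison locally, the Phoronix Test Suite can benchmark against a public OpenBenchmarking.org result by its ID (a sketch, assuming a working phoronix-test-suite installation; the result ID is taken from the URL above):

  phoronix-test-suite benchmark 2212249-PTS-10980XEX38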

System details (identical for runs a, b, and c):

Processor: Intel Core i9-10980XE @ 4.80GHz (18 Cores / 36 Threads)
Motherboard: ASRock X299 Steel Legend (P1.30 BIOS)
Chipset: Intel Sky Lake-E DMI3 Registers
Memory: 32GB
Disk: Samsung SSD 970 PRO 512GB
Graphics: llvmpipe
Audio: Realtek ALC1220
Network: Intel I219-V + Intel I211
OS: Ubuntu 22.04
Kernel: 5.19.0-051900rc7-generic (x86_64)
Desktop: GNOME Shell 42.2
Display Server: X Server 1.21.1.3
OpenGL: 4.5 Mesa 22.0.1 (LLVM 13.0.1 256 bits)
Vulkan: 1.2.204
Compiler: GCC 11.3.0
File-System: ext4
Screen Resolution: 1024x768

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: intel_cpufreq schedutil - CPU Microcode: 0x5003302
Python Details: Python 3.10.6
Security Details: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + retbleed: Mitigation of Enhanced IBRS + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Mitigation of TSX disabled

Tests run (results for runs a, b, and c are detailed per test below): OpenVKL 1.3.1, CockroachDB 22.2, oneDNN 3.0, and OpenVINO 2022.3.

OpenVKL

Benchmark: vklBenchmark ISPC

OpenVKL 1.3.1 (Items / Sec, More Is Better)
c: 280 (MIN: 35 / MAX: 3614)
b: 280 (MIN: 35 / MAX: 3659)
a: 279 (MIN: 35 / MAX: 3612)

OpenVKL

Benchmark: vklBenchmark Scalar

OpenVKL 1.3.1 (Items / Sec, More Is Better)
b: 128 (MIN: 14 / MAX: 2045)
c: 127 (MIN: 14 / MAX: 2048)
a: 127 (MIN: 14 / MAX: 2044)

CockroachDB

Workload: KV, 60% Reads - Concurrency: 1024

CockroachDB 22.2 (ops/s, More Is Better)
c: 68368.2
b: 68226.6
a: 66379.6

CockroachDB

Workload: KV, 10% Reads - Concurrency: 1024

CockroachDB 22.2 (ops/s, More Is Better)
a: 53410.6
c: 53062.7
b: 52938.0

CockroachDB

Workload: KV, 50% Reads - Concurrency: 1024

CockroachDB 22.2 (ops/s, More Is Better)
c: 64789.4
a: 64589.8
b: 63971.9

CockroachDB

Workload: KV, 60% Reads - Concurrency: 512

CockroachDB 22.2 (ops/s, More Is Better)
b: 71292.0
a: 71155.7
c: 70621.3

CockroachDB

Workload: KV, 95% Reads - Concurrency: 1024

CockroachDB 22.2 (ops/s, More Is Better)
c: 87854.9
b: 87025.7
a: 86060.5

CockroachDB

Workload: KV, 10% Reads - Concurrency: 512

CockroachDB 22.2 (ops/s, More Is Better)
a: 50634.1
b: 50153.3
c: 49311.6

CockroachDB

Workload: KV, 50% Reads - Concurrency: 512

CockroachDB 22.2 (ops/s, More Is Better)
a: 65882.6
c: 65449.7
b: 65233.1

CockroachDB

Workload: KV, 95% Reads - Concurrency: 512

CockroachDB 22.2 (ops/s, More Is Better)
b: 90820.0
c: 90383.0
a: 89958.6

CockroachDB

Workload: KV, 50% Reads - Concurrency: 256

CockroachDB 22.2 (ops/s, More Is Better)
a: 50643.0
b: 49044.8
c: 47974.8

CockroachDB

Workload: KV, 60% Reads - Concurrency: 256

CockroachDB 22.2 (ops/s, More Is Better)
a: 58240.9
c: 56222.4
b: 55913.4

CockroachDB

Workload: KV, 10% Reads - Concurrency: 256

CockroachDB 22.2 (ops/s, More Is Better)
a: 31390.2
b: 30507.8
c: 28527.7

CockroachDB

Workload: KV, 95% Reads - Concurrency: 256

CockroachDB 22.2 (ops/s, More Is Better)
a: 95255.0
b: 94695.0
c: 94401.9

CockroachDB

Workload: KV, 10% Reads - Concurrency: 128

CockroachDB 22.2 (ops/s, More Is Better)
a: 11104.1
b: 10995.9
c: 10327.0

CockroachDB

Workload: KV, 60% Reads - Concurrency: 128

CockroachDB 22.2 (ops/s, More Is Better)
a: 25908.3
c: 25698.4
b: 25672.7

CockroachDB

Workload: KV, 50% Reads - Concurrency: 128

CockroachDB 22.2 (ops/s, More Is Better)
a: 20661.5
b: 20255.4
c: 19727.1

CockroachDB

Workload: KV, 95% Reads - Concurrency: 128

CockroachDB 22.2 (ops/s, More Is Better)
b: 96721.9
a: 96440.7
c: 96412.6

CockroachDB

Workload: MoVR - Concurrency: 128

CockroachDB 22.2 (ops/s, More Is Better)
a: 159.3
b: 159.0
c: 152.9

CockroachDB

Workload: MoVR - Concurrency: 256

CockroachDB 22.2 (ops/s, More Is Better)
b: 159.0
a: 158.7
c: 154.2

CockroachDB

Workload: MoVR - Concurrency: 512

CockroachDB 22.2 (ops/s, More Is Better)
a: 159.1
c: 158.3
b: 158.3

CockroachDB

Workload: MoVR - Concurrency: 1024

CockroachDB 22.2 (ops/s, More Is Better)
a: 159.0
b: 157.4
c: 155.4

oneDNN

Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.0 (ms, Fewer Is Better)
a: 1626.22 (MIN: 1620.15)
c: 1695.14 (MIN: 1665.73)
b: 1703.85 (MIN: 1673.92)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN

Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU

oneDNN 3.0 (ms, Fewer Is Better)
b: 1657.65 (MIN: 1630.09)
a: 1672.90 (MIN: 1640.76)
c: 1683.66 (MIN: 1638.76)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN

Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.0 (ms, Fewer Is Better)
a: 1647.24 (MIN: 1632.23)
c: 1664.09 (MIN: 1637.19)
b: 1686.52 (MIN: 1664.06)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU

oneDNN 3.0 (ms, Fewer Is Better)
c: 900.70 (MIN: 890.6)
b: 909.07 (MIN: 898.25)
a: 912.57 (MIN: 899.39)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.0 (ms, Fewer Is Better)
c: 893.36 (MIN: 888.73)
a: 911.54 (MIN: 899.44)
b: 939.02 (MIN: 911.94)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.0 (ms, Fewer Is Better)
a: 928.73 (MIN: 903.28)
c: 928.76 (MIN: 907.31)
b: 930.30 (MIN: 905.97)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

OpenVINO

Model: Person Detection FP16 - Device: CPU

OpenVINO 2022.3 (ms, Fewer Is Better)
a: 2542.41 (MIN: 2094.11 / MAX: 2750.93)
c: 2544.83 (MIN: 1871.46 / MAX: 2790.25)
b: 2553.02 (MIN: 2266.79 / MAX: 2802.13)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenVINO

Model: Person Detection FP16 - Device: CPU

OpenVINO 2022.3 (FPS, More Is Better)
a: 3.52
b: 3.51
c: 3.50
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenVINO

Model: Person Detection FP32 - Device: CPU

OpenVINO 2022.3 (ms, Fewer Is Better)
c: 2536.13 (MIN: 1794.28 / MAX: 2795.41)
a: 2537.62 (MIN: 1888.54 / MAX: 2772.16)
b: 2542.89 (MIN: 2160.38 / MAX: 2782.88)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenVINO

Model: Person Detection FP32 - Device: CPU

OpenVINO 2022.3 (FPS, More Is Better)
b: 3.52
c: 3.50
a: 3.50
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenVINO

Model: Face Detection FP16 - Device: CPU

OpenVINO 2022.3 (ms, Fewer Is Better)
b: 1395.75 (MIN: 1261.08 / MAX: 1475.37)
c: 1398.66 (MIN: 1331.74 / MAX: 1501.29)
a: 1400.20 (MIN: 1223.62 / MAX: 1477.34)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenVINO

Model: Face Detection FP16 - Device: CPU

OpenVINO 2022.3 (FPS, More Is Better)
c: 6.43
b: 6.43
a: 6.41
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenVINO

Model: Face Detection FP16-INT8 - Device: CPU

OpenVINO 2022.3 (ms, Fewer Is Better)
c: 335.78 (MIN: 194.03 / MAX: 366.59)
b: 335.87 (MIN: 290.59 / MAX: 351.69)
a: 336.16 (MIN: 183.86 / MAX: 354.17)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenVINO

Model: Face Detection FP16-INT8 - Device: CPU

OpenVINO 2022.3 (FPS, More Is Better)
b: 26.77
c: 26.73
a: 26.71
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenVINO

Model: Machine Translation EN To DE FP16 - Device: CPU

OpenVINO 2022.3 (ms, Fewer Is Better)
c: 123.16 (MIN: 72.34 / MAX: 133.42)
b: 123.42 (MIN: 107.4 / MAX: 156.15)
a: 130.55 (MIN: 68.73 / MAX: 416.98)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenVINO

Model: Machine Translation EN To DE FP16 - Device: CPU

OpenVINO 2022.3 (FPS, More Is Better)
c: 73.03
b: 72.88
a: 68.87
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenVINO

Model: Person Vehicle Bike Detection FP16 - Device: CPU

OpenVINO 2022.3 (ms, Fewer Is Better)
b: 17.41 (MIN: 11.5 / MAX: 31.54)
c: 17.47 (MIN: 11.53 / MAX: 29.86)
a: 19.90 (MIN: 12.44 / MAX: 136.72)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenVINO

Model: Person Vehicle Bike Detection FP16 - Device: CPU

OpenVINO 2022.3 (FPS, More Is Better)
b: 516.00
c: 514.22
a: 451.55
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenVINO

Model: Vehicle Detection FP16-INT8 - Device: CPU

OpenVINO 2022.3 (ms, Fewer Is Better)
b: 10.08 (MIN: 8.06 / MAX: 25.56)
c: 10.08 (MIN: 4.04 / MAX: 32.85)
a: 10.13 (MIN: 8.33 / MAX: 15.68)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenVINO

Model: Vehicle Detection FP16-INT8 - Device: CPU

OpenVINO 2022.3 (FPS, More Is Better)
b: 891.42
c: 891.07
a: 886.61
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenVINO

Model: Vehicle Detection FP16 - Device: CPU

OpenVINO 2022.3 (ms, Fewer Is Better)
b: 57.34 (MIN: 30.9 / MAX: 78.45)
a: 57.95 (MIN: 31.24 / MAX: 77.83)
c: 58.07 (MIN: 30.13 / MAX: 73.09)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenVINO

Model: Vehicle Detection FP16 - Device: CPU

OpenVINO 2022.3 (FPS, More Is Better)
b: 156.79
a: 155.15
c: 154.84
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenVINO

Model: Weld Porosity Detection FP16 - Device: CPU

OpenVINO 2022.3 (ms, Fewer Is Better)
b: 24.12 (MIN: 17.46 / MAX: 46.27)
a: 24.18 (MIN: 20.31 / MAX: 45.97)
c: 24.51 (MIN: 16 / MAX: 42.14)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenVINO

Model: Weld Porosity Detection FP16 - Device: CPU

OpenVINO 2022.3 (FPS, More Is Better)
b: 744.89
a: 743.06
c: 732.92
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenVINO

Model: Weld Porosity Detection FP16-INT8 - Device: CPU

OpenVINO 2022.3 (ms, Fewer Is Better)
b: 6.84 (MIN: 3.85 / MAX: 16.37)
c: 6.93 (MIN: 3.65 / MAX: 12.42)
a: 7.06 (MIN: 3.78 / MAX: 180.79)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenVINO

Model: Weld Porosity Detection FP16-INT8 - Device: CPU

OpenVINO 2022.3 (FPS, More Is Better)
b: 2608.45
c: 2573.45
a: 2533.80
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU

OpenVINO 2022.3 (ms, Fewer Is Better)
b: 0.91 (MIN: 0.54 / MAX: 19.27)
c: 0.91 (MIN: 0.55 / MAX: 13.44)
a: 0.92 (MIN: 0.53 / MAX: 15.55)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU

OpenVINO 2022.3 (FPS, More Is Better)
c: 19371.06
b: 19307.70
a: 19270.90
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU

OpenVINO 2022.3 (ms, Fewer Is Better)
a: 1.01 (MIN: 0.59 / MAX: 133.85)
b: 1.01 (MIN: 0.58 / MAX: 14.43)
c: 1.01 (MIN: 0.59 / MAX: 19.73)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU

OpenVINO 2022.3 (FPS, More Is Better)
b: 17569.49
c: 17563.09
a: 17554.28
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.0 (ms, Fewer Is Better)
b: 11.48 (MIN: 11.25)
a: 11.49 (MIN: 11.27)
c: 11.64 (MIN: 11.29)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU

oneDNN 3.0 (ms, Fewer Is Better)
c: 4.62630 (MIN: 3.42)
a: 4.73074 (MIN: 3.18)
b: 4.88631 (MIN: 3.42)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.0 (ms, Fewer Is Better)
c: 0.465604 (MIN: 0.45)
a: 0.466539 (MIN: 0.45)
b: 0.466545 (MIN: 0.45)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN

Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU

oneDNN 3.0 (ms, Fewer Is Better)
c: 2.28525 (MIN: 2.15)
a: 2.28880 (MIN: 2.15)
b: 2.33760 (MIN: 2.14)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN

Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.0 (ms, Fewer Is Better)
c: 5.56162 (MIN: 5.4)
b: 5.87122 (MIN: 5.5)
a: 6.14066 (MIN: 5.58)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN

Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.0 (ms, Fewer Is Better)
b: 0.587320 (MIN: 0.53)
c: 0.648339 (MIN: 0.55)
a: 0.690972 (MIN: 0.56)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN

Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.0 (ms, Fewer Is Better)
c: 2.16974 (MIN: 1.96)
a: 2.46082 (MIN: 2.17)
b: 3.98913 (MIN: 3.49)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN

Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU

oneDNN 3.0 (ms, Fewer Is Better)
a: 1.78918 (MIN: 1.75)
b: 1.81143 (MIN: 1.78)
c: 1.81689 (MIN: 1.78)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN

Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.0 (ms, Fewer Is Better)
c: 0.245642 (MIN: 0.24)
b: 0.249335 (MIN: 0.24)
a: 0.251099 (MIN: 0.24)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN

Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU

oneDNN 3.0 (ms, Fewer Is Better)
b: 5.28717 (MIN: 5.21)
c: 5.31107 (MIN: 5.23)
a: 5.32618 (MIN: 5.14)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN

Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.0 (ms, Fewer Is Better)
c: 2.78239 (MIN: 2.59)
a: 2.81732 (MIN: 2.59)
b: 3.08155 (MIN: 2.66)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN

Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.0 (ms, Fewer Is Better)
a: 1.24084 (MIN: 1.14)
c: 1.26588 (MIN: 1.16)
b: 1.31353 (MIN: 1.18)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.0 (ms, Fewer Is Better)
c: 9.06842 (MIN: 9)
a: 9.07215 (MIN: 9.02)
b: 9.09880 (MIN: 8.99)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU

oneDNN 3.0 (ms, Fewer Is Better)
c: 9.87986 (MIN: 9.8)
b: 9.88019 (MIN: 9.8)
a: 9.88049 (MIN: 9.81)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.0 (ms, Fewer Is Better)
b: 7.86608 (MIN: 7.66)
c: 7.87580 (MIN: 7.66)
a: 7.89309 (MIN: 7.64)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.0 (ms, Fewer Is Better)
b: 11.00 (MIN: 10.84)
a: 11.00 (MIN: 10.84)
c: 11.01 (MIN: 10.84)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU

oneDNN 3.0 (ms, Fewer Is Better)
b: 2.69141 (MIN: 2.66)
a: 2.69612 (MIN: 2.66)
c: 2.69918 (MIN: 2.66)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.0 (ms, Fewer Is Better)
b: 0.660227 (MIN: 0.65)
c: 0.673370 (MIN: 0.66)
a: 0.674638 (MIN: 0.66)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread


Phoronix Test Suite v10.8.5