christmas comet: Intel Core i7-10700T testing with a Logic Supply RXM-181 (Z01-0002A026 BIOS) and Intel UHD 630 CML GT2 30GB on Ubuntu 22.04 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2212231-NE-CHRISTMAS95&sor&grs.
System configuration (shared by runs a, b, and c):
Processor: Intel Core i7-10700T @ 4.50GHz (8 Cores / 16 Threads)
Motherboard: Logic Supply RXM-181 (Z01-0002A026 BIOS)
Chipset: Intel Comet Lake PCH
Memory: 32GB
Disk: 256GB TS256GMTS800
Graphics: Intel UHD 630 CML GT2 30GB (1200MHz)
Audio: Realtek ALC233
Monitor: DELL P2415Q
Network: Intel I219-LM + Intel I210
OS: Ubuntu 22.04
Kernel: 5.15.0-52-generic (x86_64)
Desktop: GNOME Shell 42.2
Display Server: X Server + Wayland
OpenGL: 4.6 Mesa 22.0.1
OpenCL: OpenCL 3.0
Vulkan: 1.3.204
Compiler: GCC 11.3.0
File-System: ext4
Screen Resolution: 1920x1080

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0xf0 - Thermald 2.4.9
Python Details: Python 3.10.6
Security Details:
  itlb_multihit: KVM: Mitigation of VMX disabled
  l1tf: Not affected
  mds: Not affected
  meltdown: Not affected
  mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable
  retbleed: Mitigation of Enhanced IBRS
  spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp
  spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization
  spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence
  srbds: Mitigation of Microcode
  tsx_async_abort: Not affected
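The Processor Details and Security Details notes above are read from the kernel's sysfs interface on the host. A minimal sketch of collecting the same scaling-governor and vulnerability-mitigation strings on a Linux machine (standard sysfs paths; the script itself is illustrative and not part of the Phoronix Test Suite):

# Illustrative: print the scaling governor and CPU vulnerability
# mitigations that appear in the "Processor Details" / "Security Details"
# notes above, straight from sysfs on a Linux host.
from pathlib import Path

def read_sysfs(path: str) -> str:
    p = Path(path)
    return p.read_text().strip() if p.exists() else "unknown"

# Scaling governor, as shown under "Processor Details"
print("Scaling Governor:", read_sysfs("/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor"))

# Mitigation status per vulnerability, as shown under "Security Details"
vulns = Path("/sys/devices/system/cpu/vulnerabilities")
if vulns.is_dir():
    for entry in sorted(vulns.iterdir()):
        print(f"{entry.name}: {entry.read_text().strip()}")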
Results overview, configurations a / b / c (same test order as the detailed results below; [ms] and [FPS] distinguish the latency and throughput views of each OpenVINO model):
onednn: IP Shapes 3D - f32 - CPU = 10.8363 / 10.8813 / 47.6781
onednn: IP Shapes 3D - u8s8f32 - CPU = 2.48501 / 2.47157 / 51.5073
onednn: IP Shapes 1D - u8s8f32 - CPU = 2.37749 / 2.26968 / 40.0963
onednn: IP Shapes 1D - f32 - CPU = 5.11104 / 5.03425 / 67.4107
onednn: Deconvolution Batch shapes_1d - f32 - CPU = 11.6929 / 12.5886 / 11.7109
numenta-nab: Bayesian Changepoint = 56.706 / 60.654 / 60.22
fluidx3d: FP32-FP16S = 357 / 377 / 381
numenta-nab: Relative Entropy = 32.183 / 32.448 / 34.335
cockroach: MoVR - 256 = 174.8 / 175.2 / 167
onednn: Recurrent Neural Network Inference - u8s8f32 - CPU = 3622.8 / 3783.41 / 3612.19
onednn: Matrix Multiply Batch Shapes Transformer - u8s8f32 - CPU = 2.23303 / 2.2057 / 2.14039
numenta-nab: Earthgecko Skyline = 200.783 / 193.302 / 201.311
rav1e: 10 = 8.33 / 8.621 / 8.654
openvino: Person Vehicle Bike Detection FP16 - CPU [ms] = 22.07 / 22.52 / 22.85
openvino: Person Vehicle Bike Detection FP16 - CPU [FPS] = 181.05 / 177.44 / 174.87
svt-av1: Preset 8 - Bosphorus 1080p = 51.631 / 53.451 / 53.145
cockroach: MoVR - 1024 = 173.6 / 171.6 / 168.9
svt-av1: Preset 12 - Bosphorus 4K = 73.506 / 75.452 / 74.668
onednn: Deconvolution Batch shapes_3d - u8s8f32 - CPU = 5.02536 / 5.05109 / 5.15364
svt-av1: Preset 13 - Bosphorus 4K = 79.105 / 80.31 / 80.973
svt-av1: Preset 13 - Bosphorus 1080p = 356.085 / 356.327 / 364.038
numenta-nab: Windowed Gaussian = 18.272 / 18.344 / 18.648
rav1e: 5 = 2.238 / 2.269 / 2.284
onednn: Deconvolution Batch shapes_3d - f32 - CPU = 8.30947 / 8.38301 / 8.47927
build-linux-kernel: allmodconfig = 2642.392 / 2642.66 / 2688.942
openvino: Person Detection FP32 - CPU [ms] = 4627.4 / 4705.67 / 4632.94
cockroach: KV, 95% Reads - 128 = 31139.2 / 30808.9 / 30644.9
scikit-learn: TSNE MNIST Dataset = 109.345 / 108.375 / 110.002
openvino: Age Gender Recognition Retail 0013 FP16-INT8 - CPU [ms] = 2.03 / 2.03 / 2
svt-av1: Preset 12 - Bosphorus 1080p = 323.225 / 324.322 / 328.002
svt-av1: Preset 4 - Bosphorus 4K = 1.229 / 1.247 / 1.247
openvino: Age Gender Recognition Retail 0013 FP16-INT8 - CPU [FPS] = 3914.1 / 3925.52 / 3970.02
cockroach: KV, 60% Reads - 1024 = 24536.7 / 24424.9 / 24231
svt-av1: Preset 4 - Bosphorus 1080p = 4.392 / 4.433 / 4.446
rav1e: 6 = 3.125 / 3.163 / 3.155
cockroach: MoVR - 128 = 174.4 / 175.2 / 173.1
numenta-nab: KNN CAD = 364.397 / 364.876 / 360.585
svt-av1: Preset 8 - Bosphorus 4K = 16.499 / 16.693 / 16.673
fluidx3d: FP32-FP16C = 173 / 175 / 175
openvino: Vehicle Detection FP16 - CPU [ms] = 37.9 / 38.33 / 38.12
openvino: Vehicle Detection FP16 - CPU [FPS] = 105.44 / 104.28 / 104.85
stargate: 44100 - 512 = 1.69939 / 1.717714 / 1.704004
openvkl: vklBenchmark ISPC = 93 / 94 / 94
cockroach: MoVR - 512 = 173.9 / 172.1 / 173.1
onednn: Convolution Batch Shapes Auto - f32 - CPU = 17.1397 / 17.1202 / 17.2961
nekrs: TurboPipe Periodic = 25130200000 / 25286500000 / 25386800000
openvino: Weld Porosity Detection FP16-INT8 - CPU [FPS] = 260.1 / 258.24 / 257.58
cockroach: KV, 10% Reads - 256 = 21256.3 / 21051.4 / 21050.4
openvino: Weld Porosity Detection FP16-INT8 - CPU [ms] = 30.73 / 30.96 / 31.03
rav1e: 1 = 0.413 / 0.417 / 0.416
cockroach: KV, 95% Reads - 256 = 30543 / 30259.7 / 30253.2
numenta-nab: Contextual Anomaly Detector OSE = 69.885 / 70.238 / 69.625
cockroach: KV, 10% Reads - 128 = 12318.5 / 12364.9 / 12422.9
onednn: Deconvolution Batch shapes_1d - u8s8f32 - CPU = 3.80985 / 3.77816 / 3.78238
onednn: Recurrent Neural Network Training - bf16bf16bf16 - CPU = 6869.52 / 6897.51 / 6842.96
onednn: Recurrent Neural Network Training - u8s8f32 - CPU = 6878.39 / 6904.88 / 6850.47
blender: Barbershop - CPU-Only = 3260.14 / 3257.91 / 3234.94
stargate: 480000 - 512 = 1.638868 / 1.651464 / 1.649194
openvino: Face Detection FP16 - CPU [FPS] = 1.37 / 1.37 / 1.38
cockroach: KV, 50% Reads - 512 = 25227.2 / 25164.2 / 25045.3
openvino: Weld Porosity Detection FP16 - CPU [ms] = 32.08 / 31.98 / 31.85
openvino: Age Gender Recognition Retail 0013 FP16 - CPU [FPS] = 3609.48 / 3595.44 / 3620.42
openvino: Weld Porosity Detection FP16 - CPU [FPS] = 124.59 / 124.97 / 125.45
cockroach: KV, 50% Reads - 1024 = 23641.5 / 23487.2 / 23596
blender: BMW27 - CPU-Only = 306.09 / 305.77 / 307.76
openvino: Face Detection FP16 - CPU [ms] = 2907.95 / 2900.79 / 2889.56
onednn: Convolution Batch Shapes Auto - u8s8f32 - CPU = 15.5046 / 15.5676 / 15.4711
cockroach: KV, 10% Reads - 1024 = 21256.4 / 21125.9 / 21160.8
cockroach: KV, 95% Reads - 1024 = 28002.8 / 27948 / 27834.5
onednn: Recurrent Neural Network Inference - bf16bf16bf16 - CPU = 3640.28 / 3657.32 / 3636.49
cockroach: KV, 50% Reads - 256 = 25273.3 / 25267.5 / 25133.1
onednn: Matrix Multiply Batch Shapes Transformer - f32 - CPU = 3.99763 / 4.01087 / 3.99005
cockroach: KV, 60% Reads - 512 = 25923.7 / 25936.5 / 26058.7
cockroach: KV, 10% Reads - 512 = 22167.5 / 22053.8 / 22080.3
fluidx3d: FP32-FP32 = 201 / 201 / 200
cockroach: KV, 50% Reads - 128 = 21728.4 / 21711.8 / 21625.8
openvino: Age Gender Recognition Retail 0013 FP16 - CPU [ms] = 2.2 / 2.21 / 2.2
onednn: Recurrent Neural Network Training - f32 - CPU = 6850.49 / 6861.12 / 6832.22
cockroach: KV, 60% Reads - 128 = 24369.6 / 24360.3 / 24267.8
build-linux-kernel: defconfig = 190.421 / 189.627 / 189.634
stargate: 192000 - 512 = 0.791489 / 0.794756 / 0.792159
openvino: Vehicle Detection FP16-INT8 - CPU [ms] = 24.25 / 24.35 / 24.35
openvino: Face Detection FP16-INT8 - CPU [ms] = 1552.62 / 1555.24 / 1548.9
openvino: Vehicle Detection FP16-INT8 - CPU [FPS] = 164.79 / 164.14 / 164.12
openvino: Face Detection FP16-INT8 - CPU [FPS] = 2.57 / 2.57 / 2.58
stargate: 480000 - 1024 = 1.731054 / 1.733011 / 1.737779
stargate: 192000 - 1024 = 0.867297 / 0.870159 / 0.870579
scikit-learn: MNIST Dataset = 251.412 / 252.299 / 251.815
blender: Classroom - CPU-Only = 902.53 / 904.16 / 905.71
cockroach: KV, 60% Reads - 256 = 26268.4 / 26246.9 / 26188
blender: Pabellon Barcelona - CPU-Only = 1083.6 / 1086.78 / 1083.92
openvino: Person Detection FP16 - CPU [ms] = 4564.3 / 4567.98 / 4555.58
openvino: Machine Translation EN To DE FP16 - CPU [ms] = 268.31 / 267.76 / 267.65
scikit-learn: Sparse Rand Projections, 100 Iterations = 3294.519 / 3294.87 / 3302.545
stargate: 44100 - 1024 = 1.798704 / 1.802469 / 1.802314
cockroach: KV, 95% Reads - 512 = 29621.9 / 29633.5 / 29572.8
onednn: Recurrent Neural Network Inference - f32 - CPU = 3625.06 / 3632.36 / 3629.54
openvino: Machine Translation EN To DE FP16 - CPU [FPS] = 14.9 / 14.92 / 14.93
stargate: 96000 - 1024 = 1.293792 / 1.29592 / 1.295966
stargate: 96000 - 512 = 1.204979 / 1.206529 / 1.206909
blender: Fishy Cat - CPU-Only = 417.83 / 417.45 / 417.82
openvino: Person Detection FP32 - CPU [FPS] = 0.85 / 0.85 / 0.85
openvino: Person Detection FP16 - CPU [FPS] = 0.86 / 0.86 / 0.86
openvkl: vklBenchmark Scalar = 46 / 46 / 46
onednn: IP Shapes 1D - bf16bf16bf16 - CPU = (no result recorded)
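OpenBenchmarking comparisons like this one are commonly summarized by the geometric mean of per-test scores normalized to the best configuration. A minimal sketch of that calculation over a hand-picked subset of the values above (the subset, helper names, and scoring convention are illustrative assumptions, not how this particular page was generated):

# Illustrative only: rank configurations a/b/c by the geometric mean of
# normalized scores, using a small subset of the results above.
# Direction matters: for "fewer is better" metrics the ratio is inverted.
from math import prod

# (test, direction, {config: value}) -- values copied from the table above
RESULTS = [
    ("svt-av1: Preset 13 - Bosphorus 1080p", "more", {"a": 356.085, "b": 356.327, "c": 364.038}),
    ("blender: BMW27 - CPU-Only",            "less", {"a": 306.09,  "b": 305.77,  "c": 307.76}),
    ("cockroach: KV, 95% Reads - 128",       "more", {"a": 31139.2, "b": 30808.9, "c": 30644.9}),
]

def normalized(value: float, best: float, direction: str) -> float:
    """Score relative to the best configuration (1.0 = best)."""
    return value / best if direction == "more" else best / value

def geometric_mean(xs):
    return prod(xs) ** (1 / len(xs))

scores = {}
for cfg in ("a", "b", "c"):
    per_test = []
    for _, direction, vals in RESULTS:
        best = max(vals.values()) if direction == "more" else min(vals.values())
        per_test.append(normalized(vals[cfg], best, direction))
    scores[cfg] = geometric_mean(per_test)

for cfg, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{cfg}: {score:.4f}")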
Detailed results follow, one entry per test, with values for configurations a, b, and c and the per-run minimum/maximum values where reported. Build options are common to every entry of a given suite:
oneDNN: (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenVINO: (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
SVT-AV1: (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
Stargate Digital Audio Workstation: (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions
nekRS: (CXX) g++ options: -fopenmp -O2 -march=native -mtune=native -ftree-vectorize -lmpi_cxx -lmpi

oneDNN 3.0 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, fewer is better): a: 10.84 (min 10.2), b: 10.88 (min 10.22), c: 47.68 (min 10.77)
oneDNN 3.0 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): a: 2.48501, b: 2.47157, c: 51.50730 (reported min: 2.51)
oneDNN 3.0 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): a: 2.37749, b: 2.26968, c: 40.09630 (reported min: 1.93)
oneDNN 3.0 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms, fewer is better): a: 5.11104, b: 5.03425, c: 67.41070 (reported min: 4.43)
oneDNN 3.0 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, fewer is better): a: 11.69 (min 7.4), b: 12.59 (min 7.47), c: 11.71 (min 7.33)
Numenta Anomaly Benchmark 1.1 - Detector: Bayesian Changepoint (Seconds, fewer is better): a: 56.71, b: 60.65, c: 60.22
FluidX3D 1.4 - Test: FP32-FP16S (MLUPs/s, more is better): a: 357, b: 377, c: 381
Numenta Anomaly Benchmark 1.1 - Detector: Relative Entropy (Seconds, fewer is better): a: 32.18, b: 32.45, c: 34.34
CockroachDB 22.2 - Workload: MoVR - Concurrency: 256 (ops/s, more is better): a: 174.8, b: 175.2, c: 167.0
oneDNN 3.0 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): a: 3622.80 (min 3501.52), b: 3783.41 (min 3514.1), c: 3612.19 (min 3493.86)
oneDNN 3.0 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): a: 2.23303 (min 1.66), b: 2.20570 (min 1.65), c: 2.14039 (min 1.66)
Numenta Anomaly Benchmark 1.1 - Detector: Earthgecko Skyline (Seconds, fewer is better): a: 200.78, b: 193.30, c: 201.31
rav1e 0.6.1 - Speed: 10 (Frames Per Second, more is better): a: 8.330, b: 8.621, c: 8.654
OpenVINO 2022.3 - Model: Person Vehicle Bike Detection FP16 - Device: CPU (ms, fewer is better): a: 22.07 (min 12.92 / max 53.87), b: 22.52 (min 12.96 / max 56.58), c: 22.85 (min 13.11 / max 56.73)
OpenVINO 2022.3 - Model: Person Vehicle Bike Detection FP16 - Device: CPU (FPS, more is better): a: 181.05, b: 177.44, c: 174.87
SVT-AV1 1.4 - Encoder Mode: Preset 8 - Input: Bosphorus 1080p (Frames Per Second, more is better): a: 51.63, b: 53.45, c: 53.15
CockroachDB 22.2 - Workload: MoVR - Concurrency: 1024 (ops/s, more is better): a: 173.6, b: 171.6, c: 168.9
SVT-AV1 1.4 - Encoder Mode: Preset 12 - Input: Bosphorus 4K (Frames Per Second, more is better): a: 73.51, b: 75.45, c: 74.67
oneDNN 3.0 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): a: 5.02536 (min 4.65), b: 5.05109 (min 4.66), c: 5.15364 (min 4.63)
SVT-AV1 1.4 - Encoder Mode: Preset 13 - Input: Bosphorus 4K (Frames Per Second, more is better): a: 79.11, b: 80.31, c: 80.97
SVT-AV1 1.4 - Encoder Mode: Preset 13 - Input: Bosphorus 1080p (Frames Per Second, more is better): a: 356.09, b: 356.33, c: 364.04
Numenta Anomaly Benchmark 1.1 - Detector: Windowed Gaussian (Seconds, fewer is better): a: 18.27, b: 18.34, c: 18.65
rav1e 0.6.1 - Speed: 5 (Frames Per Second, more is better): a: 2.238, b: 2.269, c: 2.284
oneDNN 3.0 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, fewer is better): a: 8.30947 (min 7.97), b: 8.38301 (min 8.01), c: 8.47927 (min 8.09)
Timed Linux Kernel Compilation 6.1 - Build: allmodconfig (Seconds, fewer is better): a: 2642.39, b: 2642.66, c: 2688.94
OpenVINO 2022.3 - Model: Person Detection FP32 - Device: CPU (ms, fewer is better): a: 4627.40 (min 3461.83 / max 5004.23), b: 4705.67 (min 3414.61 / max 4954.48), c: 4632.94 (min 3246.77 / max 4955.57)
CockroachDB 22.2 - Workload: KV, 95% Reads - Concurrency: 128 (ops/s, more is better): a: 31139.2, b: 30808.9, c: 30644.9
Scikit-Learn 1.1.3 - Benchmark: TSNE MNIST Dataset (Seconds, fewer is better): a: 109.35, b: 108.38, c: 110.00
OpenVINO 2022.3 - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (ms, fewer is better): a: 2.03 (min 0.7 / max 26.99), b: 2.03 (min 0.7 / max 5.69), c: 2.00 (min 0.66 / max 19.69)
SVT-AV1 1.4 - Encoder Mode: Preset 12 - Input: Bosphorus 1080p (Frames Per Second, more is better): a: 323.23, b: 324.32, c: 328.00
SVT-AV1 1.4 - Encoder Mode: Preset 4 - Input: Bosphorus 4K (Frames Per Second, more is better): a: 1.229, b: 1.247, c: 1.247
OpenVINO 2022.3 - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (FPS, more is better): a: 3914.10, b: 3925.52, c: 3970.02
CockroachDB 22.2 - Workload: KV, 60% Reads - Concurrency: 1024 (ops/s, more is better): a: 24536.7, b: 24424.9, c: 24231.0
SVT-AV1 1.4 - Encoder Mode: Preset 4 - Input: Bosphorus 1080p (Frames Per Second, more is better): a: 4.392, b: 4.433, c: 4.446
rav1e 0.6.1 - Speed: 6 (Frames Per Second, more is better): a: 3.125, b: 3.163, c: 3.155
CockroachDB 22.2 - Workload: MoVR - Concurrency: 128 (ops/s, more is better): a: 174.4, b: 175.2, c: 173.1
Numenta Anomaly Benchmark 1.1 - Detector: KNN CAD (Seconds, fewer is better): a: 364.40, b: 364.88, c: 360.59
SVT-AV1 1.4 - Encoder Mode: Preset 8 - Input: Bosphorus 4K (Frames Per Second, more is better): a: 16.50, b: 16.69, c: 16.67
FluidX3D 1.4 - Test: FP32-FP16C (MLUPs/s, more is better): a: 173, b: 175, c: 175
OpenVINO 2022.3 - Model: Vehicle Detection FP16 - Device: CPU (ms, fewer is better): a: 37.90 (min 25.27 / max 84.62), b: 38.33 (min 26.97 / max 86.94), c: 38.12 (min 26.05 / max 83.42)
OpenVINO 2022.3 - Model: Vehicle Detection FP16 - Device: CPU (FPS, more is better): a: 105.44, b: 104.28, c: 104.85
Stargate Digital Audio Workstation 22.11.5 - Sample Rate: 44100 - Buffer Size: 512 (Render Ratio, more is better): a: 1.699390, b: 1.717714, c: 1.704004
OpenVKL 1.3.1 - Benchmark: vklBenchmark ISPC (Items / Sec, more is better): a: 93 (min 10 / max 1436), b: 94 (min 10 / max 1446), c: 94 (min 10 / max 1453)
CockroachDB 22.2 - Workload: MoVR - Concurrency: 512 (ops/s, more is better): a: 173.9, b: 172.1, c: 173.1
oneDNN 3.0 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, fewer is better): a: 17.14 (min 17.01), b: 17.12 (min 16.99), c: 17.30 (min 17)
nekRS 22.0 - Input: TurboPipe Periodic (FLOP/s, more is better): a: 25130200000, b: 25286500000, c: 25386800000
OpenVINO 2022.3 - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (FPS, more is better): a: 260.10, b: 258.24, c: 257.58
CockroachDB 22.2 - Workload: KV, 10% Reads - Concurrency: 256 (ops/s, more is better): a: 21256.3, b: 21051.4, c: 21050.4
OpenVINO 2022.3 - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (ms, fewer is better): a: 30.73 (min 16.95 / max 84.01), b: 30.96 (min 18.67 / max 47.56), c: 31.03 (min 12.41 / max 84.78)
rav1e 0.6.1 - Speed: 1 (Frames Per Second, more is better): a: 0.413, b: 0.417, c: 0.416
CockroachDB 22.2 - Workload: KV, 95% Reads - Concurrency: 256 (ops/s, more is better): a: 30543.0, b: 30259.7, c: 30253.2
Numenta Anomaly Benchmark 1.1 - Detector: Contextual Anomaly Detector OSE (Seconds, fewer is better): a: 69.89, b: 70.24, c: 69.63
CockroachDB 22.2 - Workload: KV, 10% Reads - Concurrency: 128 (ops/s, more is better): a: 12318.5, b: 12364.9, c: 12422.9
oneDNN 3.0 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): a: 3.80985 (min 2.87), b: 3.77816 (min 2.87), c: 3.78238 (min 2.9)
oneDNN 3.0 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): a: 6869.52 (min 6707.44), b: 6897.51 (min 6737.21), c: 6842.96 (min 6686.11)
oneDNN 3.0 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): a: 6878.39 (min 6697.57), b: 6904.88 (min 6753.33), c: 6850.47 (min 6689.16)
Blender 3.4 - Blend File: Barbershop - Compute: CPU-Only (Seconds, fewer is better): a: 3260.14, b: 3257.91, c: 3234.94
Stargate Digital Audio Workstation 22.11.5 - Sample Rate: 480000 - Buffer Size: 512 (Render Ratio, more is better): a: 1.638868, b: 1.651464, c: 1.649194
OpenVINO 2022.3 - Model: Face Detection FP16 - Device: CPU (FPS, more is better): a: 1.37, b: 1.37, c: 1.38
CockroachDB 22.2 - Workload: KV, 50% Reads - Concurrency: 512 (ops/s, more is better): a: 25227.2, b: 25164.2, c: 25045.3
OpenVINO 2022.3 - Model: Weld Porosity Detection FP16 - Device: CPU (ms, fewer is better): a: 32.08 (min 19.86 / max 78.3), b: 31.98 (min 18.51 / max 81.22), c: 31.85 (min 20.23 / max 81.16)
OpenVINO 2022.3 - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (FPS, more is better): a: 3609.48, b: 3595.44, c: 3620.42
OpenVINO 2022.3 - Model: Weld Porosity Detection FP16 - Device: CPU (FPS, more is better): a: 124.59, b: 124.97, c: 125.45
CockroachDB 22.2 - Workload: KV, 50% Reads - Concurrency: 1024 (ops/s, more is better): a: 23641.5, b: 23487.2, c: 23596.0
Blender 3.4 - Blend File: BMW27 - Compute: CPU-Only (Seconds, fewer is better): a: 306.09, b: 305.77, c: 307.76
OpenVINO 2022.3 - Model: Face Detection FP16 - Device: CPU (ms, fewer is better): a: 2907.95 (min 2048.8 / max 3076.74), b: 2900.79 (min 1935.99 / max 3085.36), c: 2889.56 (min 2130.02 / max 3082.08)
oneDNN 3.0 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): a: 15.50 (min 14.98), b: 15.57 (min 15.25), c: 15.47 (min 15.23)
CockroachDB 22.2 - Workload: KV, 10% Reads - Concurrency: 1024 (ops/s, more is better): a: 21256.4, b: 21125.9, c: 21160.8
CockroachDB 22.2 - Workload: KV, 95% Reads - Concurrency: 1024 (ops/s, more is better): a: 28002.8, b: 27948.0, c: 27834.5
oneDNN 3.0 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): a: 3640.28 (min 3508.09), b: 3657.32 (min 3527.16), c: 3636.49 (min 3511.21)
CockroachDB 22.2 - Workload: KV, 50% Reads - Concurrency: 256 (ops/s, more is better): a: 25273.3, b: 25267.5, c: 25133.1
oneDNN 3.0 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms, fewer is better): a: 3.99763 (min 3.84), b: 4.01087 (min 3.89), c: 3.99005 (min 3.8)
CockroachDB 22.2 - Workload: KV, 60% Reads - Concurrency: 512 (ops/s, more is better): a: 25923.7, b: 25936.5, c: 26058.7
CockroachDB 22.2 - Workload: KV, 10% Reads - Concurrency: 512 (ops/s, more is better): a: 22167.5, b: 22053.8, c: 22080.3
FluidX3D 1.4 - Test: FP32-FP32 (MLUPs/s, more is better): a: 201, b: 201, c: 200
CockroachDB 22.2 - Workload: KV, 50% Reads - Concurrency: 128 (ops/s, more is better): a: 21728.4, b: 21711.8, c: 21625.8
OpenVINO 2022.3 - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (ms, fewer is better): a: 2.20 (min 0.69 / max 20), b: 2.21 (min 0.73 / max 19.99), c: 2.20 (min 0.75 / max 7.41)
oneDNN 3.0 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, fewer is better): a: 6850.49 (min 6698.29), b: 6861.12 (min 6709.14), c: 6832.22 (min 6683.82)
CockroachDB 22.2 - Workload: KV, 60% Reads - Concurrency: 128 (ops/s, more is better): a: 24369.6, b: 24360.3, c: 24267.8
Timed Linux Kernel Compilation 6.1 - Build: defconfig (Seconds, fewer is better): a: 190.42, b: 189.63, c: 189.63
Stargate Digital Audio Workstation 22.11.5 - Sample Rate: 192000 - Buffer Size: 512 (Render Ratio, more is better): a: 0.791489, b: 0.794756, c: 0.792159
OpenVINO 2022.3 - Model: Vehicle Detection FP16-INT8 - Device: CPU (ms, fewer is better): a: 24.25 (min 14.38 / max 63.86), b: 24.35 (min 15.05 / max 64.68), c: 24.35 (min 14.89 / max 63.77)
OpenVINO 2022.3 - Model: Face Detection FP16-INT8 - Device: CPU (ms, fewer is better): a: 1552.62 (min 994.18 / max 1658.73), b: 1555.24 (min 987.74 / max 1638.42), c: 1548.90 (min 1000.5 / max 1643.06)
OpenVINO 2022.3 - Model: Vehicle Detection FP16-INT8 - Device: CPU (FPS, more is better): a: 164.79, b: 164.14, c: 164.12
OpenVINO 2022.3 - Model: Face Detection FP16-INT8 - Device: CPU (FPS, more is better): a: 2.57, b: 2.57, c: 2.58
Stargate Digital Audio Workstation 22.11.5 - Sample Rate: 480000 - Buffer Size: 1024 (Render Ratio, more is better): a: 1.731054, b: 1.733011, c: 1.737779
Stargate Digital Audio Workstation 22.11.5 - Sample Rate: 192000 - Buffer Size: 1024 (Render Ratio, more is better): a: 0.867297, b: 0.870159, c: 0.870579
Scikit-Learn 1.1.3 - Benchmark: MNIST Dataset (Seconds, fewer is better): a: 251.41, b: 252.30, c: 251.82
Blender 3.4 - Blend File: Classroom - Compute: CPU-Only (Seconds, fewer is better): a: 902.53, b: 904.16, c: 905.71
CockroachDB 22.2 - Workload: KV, 60% Reads - Concurrency: 256 (ops/s, more is better): a: 26268.4, b: 26246.9, c: 26188.0
Blender 3.4 - Blend File: Pabellon Barcelona - Compute: CPU-Only (Seconds, fewer is better): a: 1083.60, b: 1086.78, c: 1083.92
OpenVINO 2022.3 - Model: Person Detection FP16 - Device: CPU (ms, fewer is better): a: 4564.30 (min 3395.41 / max 4924.67), b: 4567.98 (min 3311.82 / max 4914.52), c: 4555.58 (min 3391.76 / max 4917.08)
OpenVINO 2022.3 - Model: Machine Translation EN To DE FP16 - Device: CPU (ms, fewer is better): a: 268.31 (min 173.59 / max 305.62), b: 267.76 (min 157 / max 354.66), c: 267.65 (min 158.54 / max 333.85)
Scikit-Learn 1.1.3 - Benchmark: Sparse Random Projections, 100 Iterations (Seconds, fewer is better): a: 3294.52, b: 3294.87, c: 3302.55
Stargate Digital Audio Workstation 22.11.5 - Sample Rate: 44100 - Buffer Size: 1024 (Render Ratio, more is better): a: 1.798704, b: 1.802469, c: 1.802314
CockroachDB 22.2 - Workload: KV, 95% Reads - Concurrency: 512 (ops/s, more is better): a: 29621.9, b: 29633.5, c: 29572.8
oneDNN 3.0 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, fewer is better): a: 3625.06 (min 3495.85), b: 3632.36 (min 3504.4), c: 3629.54 (min 3500.37)
OpenVINO 2022.3 - Model: Machine Translation EN To DE FP16 - Device: CPU (FPS, more is better): a: 14.90, b: 14.92, c: 14.93
Stargate Digital Audio Workstation 22.11.5 - Sample Rate: 96000 - Buffer Size: 1024 (Render Ratio, more is better): a: 1.293792, b: 1.295920, c: 1.295966
Stargate Digital Audio Workstation 22.11.5 - Sample Rate: 96000 - Buffer Size: 512 (Render Ratio, more is better): a: 1.204979, b: 1.206529, c: 1.206909
Blender 3.4 - Blend File: Fishy Cat - Compute: CPU-Only (Seconds, fewer is better): a: 417.83, b: 417.45, c: 417.82
OpenVINO 2022.3 - Model: Person Detection FP32 - Device: CPU (FPS, more is better): a: 0.85, b: 0.85, c: 0.85
OpenVINO 2022.3 - Model: Person Detection FP16 - Device: CPU (FPS, more is better): a: 0.86, b: 0.86, c: 0.86
OpenVKL 1.3.1 - Benchmark: vklBenchmark Scalar (Items / Sec, more is better): a: 46 (min 5 / max 1052), b: 46 (min 5 / max 1072), c: 46 (min 5 / max 1066)
Phoronix Test Suite v10.8.5