Tests for a future article. Intel Core i7-1065G7 testing with a Dell 06CDVY (1.0.9 BIOS) and Intel Iris Plus ICL GT2 16GB on Ubuntu 23.10 via the Phoronix Test Suite.
a:
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0xc2 - Thermald 2.5.4
Python Notes: Python 3.11.6
Security Notes: gather_data_sampling: Mitigation of Microcode + itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + retbleed: Mitigation of Enhanced IBRS + spec_rstack_overflow: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Mitigation of Microcode + tsx_async_abort: Not affected
b Processor: Intel Core i7-1065G7 @ 3.90GHz (4 Cores / 8 Threads), Motherboard: Dell 06CDVY (1.0.9 BIOS), Chipset: Intel Ice Lake-LP DRAM, Memory: 16GB, Disk: Toshiba KBG40ZPZ512G NVMe 512GB + 2 x 0GB MassStorageClass, Graphics: Intel Iris Plus ICL GT2 16GB (1100MHz), Audio: Realtek ALC289, Network: Intel Ice Lake-LP PCH CNVi WiFi
OS: Ubuntu 23.10, Kernel: 6.7.0-060700rc5-generic (x86_64), Desktop: GNOME Shell 45.1, Display Server: X Server + Wayland, OpenGL: 4.6 Mesa 24.0~git2312230600.551924~oibaf~m (git-551924a 2023-12-23 mantic-oibaf-ppa), Compiler: GCC 13.2.0, File-System: ext4, Screen Resolution: 1920x1200
Icelake March Benchmarks
a vs. b Comparison (Phoronix Test Suite): per-test percentage differences between the two runs, ranging from roughly 2.1% up to 33.5%. The largest deltas are JPEG-XL libjxl PNG quality 80 encoding (33.5%), PyTorch CPU batch-1 ResNet-152 (30.4%), PyTorch CPU batch-1 ResNet-50 (25.5%), WavPack audio encoding (22.3%), RocksDB sequential fill (15.7%), and OpenVINO Road Segmentation ADAS FP16-INT8 (15.1%); most of the remaining tests differ by under 10%.
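For reference, the chart's per-test percentages appear to correspond to the magnitude of the difference between the two runs, i.e. the better result divided by the worse one. A small illustrative Python sketch (not part of the Phoronix Test Suite output) that reproduces the headline figures from the raw values in the result summary below:

# Illustrative only: reproduce the "a vs. b" chart percentages from raw results.
def chart_delta(a: float, b: float) -> float:
    """Magnitude of the difference between the runs, as max/min - 1, in percent."""
    hi, lo = max(a, b), min(a, b)
    return (hi / lo - 1.0) * 100.0

# Values copied from the result summary below (run a, run b).
samples = {
    "jpegxl: PNG - 80":               (8.278, 11.053),   # ~33.5%
    "pytorch: CPU - 1 - ResNet-152":  (8.45, 6.48),      # ~30.4%
    "encode-wavpack: WAV To WavPack": (17.657, 14.432),  # ~22.3%
}
for name, (a, b) in samples.items():
    print(f"{name}: {chart_delta(a, b):.1f}%")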
Result summary (run a / run b; the second block of openvino entries are the corresponding latency figures in ms, matching the detailed results further below):

pytorch: CPU - 1 - ResNet-50 (a: 20.99, b: 16.72)
pytorch: CPU - 1 - ResNet-152 (a: 8.45, b: 6.48)
pytorch: CPU - 16 - ResNet-50 (a: 8.92, b: 8.66)
pytorch: CPU - 32 - ResNet-50 (a: 8.69, b: 8.68)
pytorch: CPU - 64 - ResNet-50 (a: 8.54, b: 8.74)
pytorch: CPU - 16 - ResNet-152 (a: 3.45, b: 3.59)
pytorch: CPU - 32 - ResNet-152 (a: 3.37, b: 3.55)
pytorch: CPU - 64 - ResNet-152 (a: 3.39, b: 3.56)
pytorch: CPU - 1 - Efficientnet_v2_l (a: 4.43, b: 4.73)
pytorch: CPU - 16 - Efficientnet_v2_l (a: 2.46, b: 2.59)
pytorch: CPU - 32 - Efficientnet_v2_l (a: 2.47, b: 2.58)
pytorch: CPU - 64 - Efficientnet_v2_l (a: 2.48, b: 2.59)
openvino: Face Detection FP16 - CPU (a: 0.63, b: 0.67)
openvino: Person Detection FP16 - CPU (a: 7.26, b: 7.94)
openvino: Person Detection FP32 - CPU (a: 7.1, b: 7.47)
openvino: Vehicle Detection FP16 - CPU (a: 51.59, b: 54.06)
openvino: Face Detection FP16-INT8 - CPU (a: 2.37, b: 2.47)
openvino: Face Detection Retail FP16 - CPU (a: 167.03, b: 181)
openvino: Road Segmentation ADAS FP16 - CPU (a: 26.34, b: 27.27)
openvino: Vehicle Detection FP16-INT8 - CPU (a: 132.9, b: 138.11)
openvino: Weld Porosity Detection FP16 - CPU (a: 63.77, b: 67.76)
openvino: Face Detection Retail FP16-INT8 - CPU (a: 383.05, b: 414.42)
openvino: Road Segmentation ADAS FP16-INT8 - CPU (a: 47.36, b: 54.53)
openvino: Machine Translation EN To DE FP16 - CPU (a: 8.66, b: 9.36)
openvino: Weld Porosity Detection FP16-INT8 - CPU (a: 236.71, b: 253.97)
openvino: Person Vehicle Bike Detection FP16 - CPU (a: 90.92, b: 99.85)
openvino: Noise Suppression Poconet-Like FP16 - CPU (a: 116.52, b: 123.67)
openvino: Handwritten English Recognition FP16 - CPU (a: 32.47, b: 34.59)
openvino: Person Re-Identification Retail FP16 - CPU (a: 98.25, b: 105.03)
openvino: Age Gender Recognition Retail 0013 FP16 - CPU (a: 1779.08, b: 1894.37)
openvino: Handwritten English Recognition FP16-INT8 - CPU (a: 36.88, b: 40.23)
openvino: Age Gender Recognition Retail 0013 FP16-INT8 - CPU (a: 4463.62, b: 4765.04)
svt-av1: Preset 4 - Bosphorus 4K (a: 0.928, b: 0.952)
svt-av1: Preset 8 - Bosphorus 4K (a: 7.551, b: 7.569)
svt-av1: Preset 12 - Bosphorus 4K (a: 24.902, b: 24.624)
svt-av1: Preset 13 - Bosphorus 4K (a: 25.118, b: 26.134)
svt-av1: Preset 4 - Bosphorus 1080p (a: 3.643, b: 3.806)
svt-av1: Preset 8 - Bosphorus 1080p (a: 28.85, b: 31.373)
svt-av1: Preset 12 - Bosphorus 1080p (a: 184.734, b: 184.085)
svt-av1: Preset 13 - Bosphorus 1080p (a: 240.46, b: 238.626)
tensorflow: CPU - 1 - AlexNet (a: 11.23, b: 11.56)
tensorflow: CPU - 16 - AlexNet (a: 39.04, b: 41.38)
tensorflow: CPU - 32 - AlexNet (a: 42.93, b: 46.1)
tensorflow: CPU - 64 - AlexNet (a: 45.59, b: 48.86)
tensorflow: CPU - 1 - GoogLeNet (a: 22.17, b: 23.26)
tensorflow: CPU - 1 - ResNet-50 (a: 5.35, b: 6.03)
tensorflow: CPU - 16 - GoogLeNet (a: 19.69, b: 21.44)
tensorflow: CPU - 16 - ResNet-50 (a: 7.28, b: 7.79)
tensorflow: CPU - 32 - GoogLeNet (a: 20.09, b: 21.52)
tensorflow: CPU - 32 - ResNet-50 (a: 7.44, b: 7.87)
tensorflow: CPU - 64 - GoogLeNet (a: 20.37, b: 21.63)
tensorflow: CPU - 64 - ResNet-50 (a: 7.5, b: 7.96)
jpegxl: PNG - 80 (a: 8.278, b: 11.053)
jpegxl: PNG - 90 (a: 7.543, b: 8.222)
jpegxl: JPEG - 80 (a: 8.048, b: 8.662)
jpegxl: JPEG - 90 (a: 7.662, b: 8.256)
jpegxl: PNG - 100 (a: 2.997, b: 3.169)
jpegxl: JPEG - 100 (a: 2.981, b: 3.13)
jpegxl-decode: 1 (a: 57.584, b: 58.041)
jpegxl-decode: All (a: 123.512, b: 130.443)
stockfish: Chess Benchmark (a: 2762544, b: 2945032)
rocksdb: Overwrite (a: 487116, b: 541161)
rocksdb: Rand Fill (a: 505433, b: 532197)
rocksdb: Rand Read (a: 9192128, b: 9961527)
rocksdb: Update Rand (a: 186334, b: 199263)
rocksdb: Seq Fill (a: 813335, b: 940711)
rocksdb: Rand Fill Sync (a: 1719, b: 1683)
rocksdb: Read While Writing (a: 500210, b: 542153)
rocksdb: Read Rand Write Rand (a: 504091, b: 522145)
v-ray: CPU (a: 3535, b: 3899)
onednn: IP Shapes 1D - CPU (a: 7.08037, b: 6.82639)
onednn: IP Shapes 3D - CPU (a: 5.80745, b: 5.83135)
onednn: Convolution Batch Shapes Auto - CPU (a: 13.7106, b: 12.755)
onednn: Deconvolution Batch shapes_1d - CPU (a: 18.832, b: 16.9829)
onednn: Deconvolution Batch shapes_3d - CPU (a: 13.4544, b: 13.4885)
onednn: Recurrent Neural Network Training - CPU (a: 12553.3, b: 11783.6)
onednn: Recurrent Neural Network Inference - CPU (a: 6399.73, b: 6341.9)
draco: Lion (a: 5559, b: 5491)
draco: Church Facade (a: 8421, b: 8361)
openvino (latency): Face Detection FP16 - CPU (a: 6365.09, b: 5987.53)
openvino (latency): Person Detection FP16 - CPU (a: 548.85, b: 502.55)
openvino (latency): Person Detection FP32 - CPU (a: 562.29, b: 535.1)
openvino (latency): Vehicle Detection FP16 - CPU (a: 77.47, b: 73.91)
openvino (latency): Face Detection FP16-INT8 - CPU (a: 1675.08, b: 1614.49)
openvino (latency): Face Detection Retail FP16 - CPU (a: 23.86, b: 22.01)
openvino (latency): Road Segmentation ADAS FP16 - CPU (a: 151.61, b: 146.48)
openvino (latency): Vehicle Detection FP16-INT8 - CPU (a: 30.02, b: 28.89)
openvino (latency): Weld Porosity Detection FP16 - CPU (a: 62.62, b: 58.98)
openvino (latency): Face Detection Retail FP16-INT8 - CPU (a: 10.38, b: 9.59)
openvino (latency): Road Segmentation ADAS FP16-INT8 - CPU (a: 84.34, b: 73.26)
openvino (latency): Machine Translation EN To DE FP16 - CPU (a: 460.93, b: 426.3)
openvino (latency): Weld Porosity Detection FP16-INT8 - CPU (a: 16.82, b: 15.67)
openvino (latency): Person Vehicle Bike Detection FP16 - CPU (a: 43.9, b: 39.97)
openvino (latency): Noise Suppression Poconet-Like FP16 - CPU (a: 34.26, b: 32.26)
openvino (latency): Handwritten English Recognition FP16 - CPU (a: 123.07, b: 115.5)
openvino (latency): Person Re-Identification Retail FP16 - CPU (a: 40.64, b: 38.02)
openvino (latency): Age Gender Recognition Retail 0013 FP16 - CPU (a: 2.2, b: 2.07)
openvino (latency): Handwritten English Recognition FP16-INT8 - CPU (a: 108.37, b: 99.37)
openvino (latency): Age Gender Recognition Retail 0013 FP16-INT8 - CPU (a: 0.86, b: 0.81)
build-linux-kernel: defconfig (a: 496.16, b: 497.181)
build-mesa: Time To Compile (a: 143.674, b: 143.249)
compress-pbzip2: FreeBSD-13.0-RELEASE-amd64-memstick.img Compression (a: 28.581235, b: 28.096278)
primesieve: 1e12 (a: 88.592, b: 84.583)
blender: BMW27 - CPU-Only (a: 671.9, b: 633.72)
blender: Junkshop - CPU-Only (a: 921.09, b: 892.47)
blender: Fishy Cat - CPU-Only (a: 848.45, b: 806.43)
blender: Pabellon Barcelona - CPU-Only (a: 2207.35, b: 2092.75)
encode-wavpack: WAV To WavPack (a: 17.657, b: 14.432)
PyTorch 2.2.1 (batches/sec, more is better):
Device: CPU - Batch Size: 1 - Model: ResNet-152: a: 8.45 (min 7.21 / max 8.75), b: 6.48 (min 5.29 / max 8.11)
Device: CPU - Batch Size: 16 - Model: ResNet-50: a: 8.92 (min 6.82 / max 10.81), b: 8.66 (min 7.84 / max 10.76)
Device: CPU - Batch Size: 32 - Model: ResNet-50: a: 8.69 (min 7.73 / max 9.58), b: 8.68 (min 7.82 / max 10.13)
Device: CPU - Batch Size: 64 - Model: ResNet-50: a: 8.54 (min 7.7 / max 10.85), b: 8.74 (min 7.76 / max 10.2)
Device: CPU - Batch Size: 16 - Model: ResNet-152: a: 3.45 (min 3.28 / max 4.26), b: 3.59 (min 3.18 / max 4.56)
Device: CPU - Batch Size: 32 - Model: ResNet-152: a: 3.37 (min 3.09 / max 4.17), b: 3.55 (min 3.29 / max 4.15)
Device: CPU - Batch Size: 64 - Model: ResNet-152: a: 3.39 (min 3.22 / max 4.18), b: 3.56 (min 3.3 / max 4.19)
Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l: a: 4.43 (min 3.92 / max 6.05), b: 4.73 (min 4.29 / max 6.39)
Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_l: a: 2.46 (min 2.18 / max 3.06), b: 2.59 (min 2.34 / max 3.21)
Device: CPU - Batch Size: 32 - Model: Efficientnet_v2_l: a: 2.47 (min 2.3 / max 3.1), b: 2.58 (min 2.18 / max 3.27)
Device: CPU - Batch Size: 64 - Model: Efficientnet_v2_l: a: 2.48 (min 2.21 / max 3.02), b: 2.59 (min 2.29 / max 3.06)
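The PyTorch results above are reported in batches per second for each batch size and model. As a rough, hedged illustration of how such a CPU inference throughput figure can be produced (this is not the pts/pytorch test profile itself; the torchvision model, input size, and iteration counts are assumptions):

import time
import torch
import torchvision.models as models

def batches_per_second(model, batch_size, iters=20):
    """Time CPU-only forward passes and return batches per second."""
    model.eval()
    x = torch.randn(batch_size, 3, 224, 224)
    with torch.no_grad():
        for _ in range(3):                      # warm-up passes
            model(x)
        start = time.perf_counter()
        for _ in range(iters):
            model(x)
        elapsed = time.perf_counter() - start
    return iters / elapsed

resnet50 = models.resnet50(weights=None)        # random weights; timing only
for bs in (1, 16):
    print(f"batch {bs}: {batches_per_second(resnet50, bs):.2f} batches/sec")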
OpenVINO This is a test of Intel's OpenVINO toolkit for neural network inference, using its built-in benchmarking support to measure throughput and latency across various models. Learn more via the OpenBenchmarking.org test page.
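The throughput (FPS) and latency (ms) figures come from OpenVINO's bundled benchmarking support. As a rough sketch of the same kind of measurement with the OpenVINO Python API, assuming a static-shape IR model at a placeholder path (this is not the actual benchmark harness used by the test profile):

import time
import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")            # placeholder IR model path
compiled = core.compile_model(model, "CPU")

dims = [int(d) for d in compiled.input(0).shape]
data = np.random.rand(*dims).astype(np.float32)

latencies = []
for _ in range(50):
    start = time.perf_counter()
    compiled([data])                            # synchronous inference
    latencies.append(time.perf_counter() - start)

print(f"avg latency: {1000 * sum(latencies) / len(latencies):.2f} ms")
print(f"throughput:  {len(latencies) / sum(latencies):.2f} FPS")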
OpenVINO 2024.0 (FPS, more is better):
Model: Face Detection FP16 - Device: CPU: a: 0.63, b: 0.67
Model: Person Detection FP16 - Device: CPU: a: 7.26, b: 7.94
Model: Person Detection FP32 - Device: CPU: a: 7.10, b: 7.47
Model: Vehicle Detection FP16 - Device: CPU: a: 51.59, b: 54.06
Model: Face Detection FP16-INT8 - Device: CPU: a: 2.37, b: 2.47
Model: Face Detection Retail FP16 - Device: CPU: a: 167.03, b: 181.00
Model: Road Segmentation ADAS FP16 - Device: CPU: a: 26.34, b: 27.27
Model: Vehicle Detection FP16-INT8 - Device: CPU: a: 132.90, b: 138.11
Model: Weld Porosity Detection FP16 - Device: CPU: a: 63.77, b: 67.76
Model: Face Detection Retail FP16-INT8 - Device: CPU: a: 383.05, b: 414.42
Model: Road Segmentation ADAS FP16-INT8 - Device: CPU: a: 47.36, b: 54.53
Model: Machine Translation EN To DE FP16 - Device: CPU: a: 8.66, b: 9.36
Model: Weld Porosity Detection FP16-INT8 - Device: CPU: a: 236.71, b: 253.97
Model: Person Vehicle Bike Detection FP16 - Device: CPU: a: 90.92, b: 99.85
Model: Noise Suppression Poconet-Like FP16 - Device: CPU: a: 116.52, b: 123.67
Model: Handwritten English Recognition FP16 - Device: CPU: a: 32.47, b: 34.59
Model: Person Re-Identification Retail FP16 - Device: CPU: a: 98.25, b: 105.03
Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU: a: 1779.08, b: 1894.37
Model: Handwritten English Recognition FP16-INT8 - Device: CPU: a: 36.88, b: 40.23
Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU: a: 4463.62, b: 4765.04
Compiler flags (all OpenVINO tests): (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
SVT-AV1 This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of its Open Visual Cloud / Scalable Video Technology (SVT) effort; development has since moved to the Alliance for Open Media as part of upstream AV1 work. The test encodes a sample YUV video file with this CPU-based, multi-threaded AV1 encoder. Learn more via the OpenBenchmarking.org test page.
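The encoder is exercised through its SvtAv1EncApp command-line front end with a raw YUV input. A hedged sketch of a comparable manual invocation (file names and frame rate are placeholders, and exact flag spellings may differ between SVT-AV1 releases):

import subprocess

cmd = [
    "SvtAv1EncApp",
    "-i", "Bosphorus_1920x1080.yuv",   # placeholder raw YUV 4:2:0 input
    "-w", "1920", "-h", "1080",
    "--fps", "120",
    "--preset", "8",                   # corresponds to the "Preset 8" runs below
    "-b", "output.ivf",
]
subprocess.run(cmd, check=True)        # the encoder prints an average FPS summary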
SVT-AV1 2.0 (frames per second, more is better):
Encoder Mode: Preset 4 - Input: Bosphorus 4K: a: 0.928, b: 0.952
Encoder Mode: Preset 8 - Input: Bosphorus 4K: a: 7.551, b: 7.569
Encoder Mode: Preset 12 - Input: Bosphorus 4K: a: 24.90, b: 24.62
Encoder Mode: Preset 13 - Input: Bosphorus 4K: a: 25.12, b: 26.13
Encoder Mode: Preset 4 - Input: Bosphorus 1080p: a: 3.643, b: 3.806
Encoder Mode: Preset 8 - Input: Bosphorus 1080p: a: 28.85, b: 31.37
Encoder Mode: Preset 12 - Input: Bosphorus 1080p: a: 184.73, b: 184.09
Encoder Mode: Preset 13 - Input: Bosphorus 1080p: a: 240.46, b: 238.63
Compiler flags (all SVT-AV1 tests): (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
TensorFlow This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that the Phoronix Test Suite also provides pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if complementary metrics are desired. Learn more via the OpenBenchmarking.org test page.
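The figures below are images per second from the tf_cnn_benchmarks reference scripts. As a loose illustration of the same throughput idea using the Keras applications models (not the reference harness; the model, batch size, and iteration counts are assumptions):

import time
import numpy as np
import tensorflow as tf

model = tf.keras.applications.ResNet50(weights=None)   # random weights; timing only
batch_size = 16
images = np.random.rand(batch_size, 224, 224, 3).astype(np.float32)

model.predict(images, verbose=0)                        # warm-up
iters = 10
start = time.perf_counter()
for _ in range(iters):
    model.predict(images, verbose=0)
elapsed = time.perf_counter() - start
print(f"{iters * batch_size / elapsed:.2f} images/sec")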
TensorFlow 2.16.1 (images/sec, more is better):
Device: CPU - Batch Size: 1 - Model: AlexNet: a: 11.23, b: 11.56
JPEG-XL libjxl The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities, offering better image quality and compression than legacy JPEG. This test profile currently focuses on multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.
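The encode tests run the reference cjxl encoder at the listed quality settings. A hedged sketch of an equivalent manual run (the input file and its dimensions are placeholders, and the MP/s estimate here is derived from the source image size rather than libjxl's own reporting):

import subprocess
import time

src = "sample.png"                 # placeholder input image
width, height = 3840, 2160         # assumed input dimensions, for the MP/s estimate

start = time.perf_counter()
subprocess.run(["cjxl", src, "out.jxl", "-q", "80", "--num_threads", "8"], check=True)
elapsed = time.perf_counter() - start
print(f"~{width * height / elapsed / 1e6:.2f} MP/s at quality 80")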
JPEG-XL libjxl 0.10.1 (MP/s, more is better):
Input: PNG - Quality: 80: a: 8.278, b: 11.053
Compiler flags: (CXX) g++ options: -fno-rtti -O3 -fPIE -pie -lm
JPEG-XL Decoding libjxl The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities, offering better image quality and compression than legacy JPEG. This test profile measures JPEG XL decode performance to a PNG output file; the pts/jpegxl test covers encode performance. Both encoding and decoding use the libjxl codebase. Learn more via the OpenBenchmarking.org test page.
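Decoding is exercised through the reference djxl decoder writing PNG output. A comparable hedged one-liner (file names are placeholders and flag spellings may vary by libjxl version):

import subprocess
import time

start = time.perf_counter()
# djxl decodes a .jxl file back to PNG; single-threaded here to mirror the "CPU Threads: 1" result.
subprocess.run(["djxl", "encoded.jxl", "decoded.png", "--num_threads", "1"], check=True)
print(f"decode took {time.perf_counter() - start:.3f} s")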
JPEG-XL Decoding libjxl 0.10.1 (MP/s, more is better):
CPU Threads: 1: a: 57.58, b: 58.04
Stockfish This is a test of Stockfish, an advanced open-source C++ chess engine whose built-in benchmark can scale up to 1024 CPU threads. Learn more via the OpenBenchmarking.org test page.
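The result is the nodes-per-second figure from Stockfish's built-in bench command. A small sketch of invoking it and extracting that number (the exact output format may vary between Stockfish releases):

import re
import subprocess

# "stockfish bench" searches a fixed set of positions and prints a summary
# that includes a "Nodes/second" line (on stderr in most builds).
proc = subprocess.run(["stockfish", "bench"], capture_output=True, text=True)
match = re.search(r"Nodes/second\s*:\s*(\d+)", proc.stdout + proc.stderr)
if match:
    print(f"{int(match.group(1))} nodes per second")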
Stockfish 16.1 (nodes per second, more is better):
Chess Benchmark: a: 2762544, b: 2945032
Compiler flags: (CXX) g++ options: -lgcov -m64 -lpthread -fno-exceptions -std=c++17 -fno-peel-loops -fno-tracer -pedantic -O3 -funroll-loops -msse -msse3 -mpopcnt -mavx2 -mbmi -mavx512f -mavx512bw -mavx512vnni -mavx512dq -mavx512vl -msse4.1 -mssse3 -msse2 -mbmi2 -flto -flto-partition=one -flto=jobserver
RocksDB 9.0 (Op/s, more is better):
Test: Random Fill: a: 505433, b: 532197
Test: Random Read: a: 9192128, b: 9961527
Test: Update Random: a: 186334, b: 199263
Test: Sequential Fill: a: 813335, b: 940711
Test: Random Fill Sync: a: 1719, b: 1683
Test: Read While Writing: a: 500210, b: 542153
Test: Read Random Write Random: a: 504091, b: 522145
Compiler flags (all RocksDB tests): (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
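These RocksDB numbers come from the db_bench utility bundled with RocksDB. A hedged sketch of a manual run of one of the same workloads (the key count and database path are arbitrary placeholders):

import subprocess

cmd = [
    "db_bench",
    "--benchmarks=fillseq",        # corresponds to the Sequential Fill test above
    "--num=1000000",               # number of keys; arbitrary for illustration
    "--db=/tmp/rocksdb_bench",     # scratch database directory
]
subprocess.run(cmd, check=True)    # prints ops/sec for each benchmark on completion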
oneDNN This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.
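As a rough, heavily hedged sketch of invoking benchdnn by hand for one of the inner-product cases shown below (the problem-batch path and exact flags are assumptions and may differ between oneDNN versions):

import subprocess

# benchdnn is built alongside oneDNN's tests.  --mode=P requests performance
# measurement and --ip selects the inner-product driver; the batch file path
# is an assumption based on the oneDNN source tree layout.
cmd = ["benchdnn", "--mode=P", "--ip", "--batch=inputs/ip/shapes_1d"]
subprocess.run(cmd, check=True)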
oneDNN 3.4 (ms, fewer is better):
Harness: IP Shapes 1D - Engine: CPU: a: 7.08037 (min 6.62), b: 6.82639 (min 6.61)
Harness: IP Shapes 3D - Engine: CPU: a: 5.80745 (min 5.67), b: 5.83135 (min 5.68)
Harness: Convolution Batch Shapes Auto - Engine: CPU: a: 13.71 (min 11.94), b: 12.76 (min 12.13)
Harness: Deconvolution Batch shapes_1d - Engine: CPU: a: 18.83 (min 16.9), b: 16.98 (min 15.81)
Harness: Deconvolution Batch shapes_3d - Engine: CPU: a: 13.45 (min 13.22), b: 13.49 (min 13.24)
Harness: Recurrent Neural Network Training - Engine: CPU: a: 12553.3 (min 12419.5), b: 11783.6 (min 11523.5)
Harness: Recurrent Neural Network Inference - Engine: CPU: a: 6399.73 (min 6336.72), b: 6341.90 (min 6215.67)
Compiler flags (all oneDNN tests): (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenVINO This is a test of Intel's OpenVINO toolkit for neural network inference, using its built-in benchmarking support to measure throughput and latency across various models. The results below are the per-inference latencies corresponding to the throughput figures above. Learn more via the OpenBenchmarking.org test page.
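For completeness, a hedged example of invoking OpenVINO's benchmark_app directly, which reports both throughput and latency in one run (the model path is a placeholder):

import subprocess

cmd = [
    "benchmark_app",
    "-m", "face-detection-model.xml",   # placeholder OpenVINO IR model
    "-d", "CPU",
    "-t", "20",                         # run for roughly 20 seconds
]
subprocess.run(cmd, check=True)         # summary reports both throughput and latency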
OpenVINO 2024.0 (ms, fewer is better):
Model: Face Detection FP16 - Device: CPU: a: 6365.09 (min 4117.28 / max 6821.88), b: 5987.53 (min 3948.27 / max 6469.32)
Model: Person Detection FP16 - Device: CPU: a: 548.85 (min 316.87 / max 622.99), b: 502.55 (min 251.66 / max 603.5)
Model: Person Detection FP32 - Device: CPU: a: 562.29 (min 328.73 / max 643.55), b: 535.10 (min 331.64 / max 616.9)
Model: Vehicle Detection FP16 - Device: CPU: a: 77.47 (min 41.16 / max 110.74), b: 73.91 (min 42.25 / max 102.91)
Model: Face Detection FP16-INT8 - Device: CPU: a: 1675.08 (min 979.3 / max 1858.76), b: 1614.49 (min 989.41 / max 1851.51)
Model: Face Detection Retail FP16 - Device: CPU: a: 23.86 (min 9.57 / max 41.51), b: 22.01 (min 8.86 / max 45.48)
Model: Road Segmentation ADAS FP16 - Device: CPU: a: 151.61 (min 96.87 / max 185.09), b: 146.48 (min 95.3 / max 181.77)
Model: Vehicle Detection FP16-INT8 - Device: CPU: a: 30.02 (min 12.02 / max 53.72), b: 28.89 (min 12.22 / max 56.17)
Model: Weld Porosity Detection FP16 - Device: CPU: a: 62.62 (min 28.99 / max 93.01), b: 58.98 (min 30.6 / max 85.41)
Model: Face Detection Retail FP16-INT8 - Device: CPU: a: 10.38 (min 4.12 / max 24.58), b: 9.59 (min 4.46 / max 25.46)
Model: Road Segmentation ADAS FP16-INT8 - Device: CPU: a: 84.34 (min 42.98 / max 112.29), b: 73.26 (min 42.21 / max 97.49)
Model: Machine Translation EN To DE FP16 - Device: CPU: a: 460.93 (min 257.76 / max 516.23), b: 426.30 (min 266.17 / max 497.45)
Model: Weld Porosity Detection FP16-INT8 - Device: CPU: a: 16.82 (min 6.35 / max 33.7), b: 15.67 (min 7.56 / max 33.57)
Model: Person Vehicle Bike Detection FP16 - Device: CPU: a: 43.90 (min 22.03 / max 69.03), b: 39.97 (min 22.42 / max 60.47)
Model: Noise Suppression Poconet-Like FP16 - Device: CPU: a: 34.26 (min 16.36 / max 51.61), b: 32.26 (min 17.41 / max 59.91)
Model: Handwritten English Recognition FP16 - Device: CPU: a: 123.07 (min 62.42 / max 153.74), b: 115.50 (min 62.34 / max 153.58)
Model: Person Re-Identification Retail FP16 - Device: CPU: a: 40.64 (min 16.78 / max 58.65), b: 38.02 (min 18.33 / max 61.62)
Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU: a: 2.20 (min 0.81 / max 10.39), b: 2.07 (min 0.81 / max 21.75)
Model: Handwritten English Recognition FP16-INT8 - Device: CPU: a: 108.37 (min 51.18 / max 147.77), b: 99.37 (min 52.55 / max 136.55)
Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU: a: 0.86 (min 0.32 / max 7.11), b: 0.81 (min 0.31 / max 7.17)
Compiler flags (all OpenVINO tests): (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
Blender Blender is an open-source 3D creation and modeling software project. This test measures Blender's Cycles rendering performance with various sample files. GPU compute via NVIDIA OptiX and NVIDIA CUDA is supported, as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel graphics. Learn more via the OpenBenchmarking.org test page.
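Each Blender result is the wall-clock time to render the given scene on the CPU. A hedged sketch of a comparable command-line render (the .blend path is a placeholder):

import subprocess
import time

start = time.perf_counter()
# -b runs Blender in background (no GUI); -f 1 renders frame 1 of the scene file.
subprocess.run(["blender", "-b", "bmw27_cpu.blend", "-f", "1"], check=True)
print(f"render time: {time.perf_counter() - start:.2f} s")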
Blender 4.1 (seconds, fewer is better):
Blend File: BMW27 - Compute: CPU-Only: a: 671.90, b: 633.72
a: Testing initiated at 26 March 2024 19:20 by user phoronix.
b: Testing initiated at 27 March 2024 01:29 by user phoronix.