Intel Core i7-1185G7 testing with a Dell 0DXP1F (3.4.0 BIOS) and Intel Xe TGL GT2 3GB on Pop 22.04 via the Phoronix Test Suite.
Intel Core i7-1185G7 Processor: Intel Core i7-1185G7 @ 4.80GHz (4 Cores / 8 Threads), Motherboard: Dell 0DXP1F (3.4.0 BIOS), Chipset: Intel Tiger Lake-LP, Memory: 16GB, Disk: Micron 2300 NVMe 512GB, Graphics: Intel Xe TGL GT2 3GB (1350MHz), Audio: Realtek ALC289, Network: Intel Wi-Fi 6 AX201
OS: Pop 22.04, Kernel: 5.17.5-76051705-generic (x86_64), Desktop: GNOME Shell 42.1, Display Server: X Server 1.21.1.3, OpenGL: 4.6 Mesa 22.0.1, Vulkan: 1.3.204, Compiler: GCC 11.2.0, File-System: ext4, Screen Resolution: 1920x1200
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0x9a - Thermald 2.4.9
Java Notes: OpenJDK Runtime Environment (build 11.0.15+10-Ubuntu-0ubuntu0.22.04.1)
Python Notes: Python 3.10.4
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
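The scaling governor and EPP values recorded in the processor notes come from the intel_pstate driver's cpufreq sysfs interface; below is a minimal sketch for reading them back on a comparable system, assuming the intel_pstate driver is active as it is in this run.

# Read back the cpufreq settings reported in the processor notes above
# (assumes the intel_pstate driver, which exposes these sysfs files).
from pathlib import Path

CPUFREQ = Path("/sys/devices/system/cpu/cpu0/cpufreq")

def read(name: str) -> str:
    p = CPUFREQ / name
    return p.read_text().strip() if p.exists() else "n/a"

print("Scaling governor:", read("scaling_governor"))        # e.g. powersave
print("EPP:", read("energy_performance_preference"))        # e.g. balance_performance
print("Driver:", read("scaling_driver"))                    # e.g. intel_pstate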
icelake pop os - Result Overview (Intel Core i7-1185G7):
Nettle: aes256: 10765.86; chacha: 1293.68; sha512: 585.27; poly1305-aes: 3862.57
Renaissance: Scala Dotty: 1078.2; Rand Forest: 874.0; ALS Movie Lens: 8433.7; Apache Spark ALS: 3799.8; Apache Spark Bayes: 3076.2; Savina Reactors.IO: 13635.2; Apache Spark PageRank: 4219.3; Finagle HTTP Requests: 3753.3; In-Memory Database Shootout: 4185.0; Akka Unbalanced Cobwebbed Tree: 15468.8; Genetic Algorithm Using Jenetics + Futures: 1846.3
Etcpak: Multi-Threaded - ETC2: 1229.28; Single-Threaded - ETC2: 261.922
WebP2: Default: 9.527; Quality 75, Compression Effort 7: 562.755; Quality 95, Compression Effort 7: 1147.792; Quality 100, Compression Effort 5: 14.92; Quality 100, Lossless Compression: 2514.159
ONNX: GPT-2 - CPU - Standard: 4788; yolov4 - CPU - Standard: 262; bertsquad-12 - CPU - Standard: 228; fcn-resnet101-11 - CPU - Standard: 41; ArcFace ResNet-100 - CPU - Standard: 491; super-resolution-10 - CPU - Standard: 1584
TensorFlow Lite: SqueezeNet: 5610.79; Inception V4: 90510.9; NASNet Mobile: 14947; Mobilenet Float: 3938.04; Mobilenet Quant: 6723.94; Inception ResNet V2: 85976.6
GROMACS: MPI CPU - water_GMX50_bare: 0.553
oneDNN: IP Shapes 1D - f32 - CPU: 9.22394; IP Shapes 3D - f32 - CPU: 6.22891; IP Shapes 1D - u8s8f32 - CPU: 1.93185; IP Shapes 3D - u8s8f32 - CPU: 2.48803; IP Shapes 1D - bf16bf16bf16 - CPU: 17.6791; IP Shapes 3D - bf16bf16bf16 - CPU: 6.09108; Convolution Batch Shapes Auto - f32 - CPU: 8.95038; Deconvolution Batch shapes_1d - f32 - CPU: 17.2633; Deconvolution Batch shapes_3d - f32 - CPU: 11.3989; Convolution Batch Shapes Auto - u8s8f32 - CPU: 7.90264; Deconvolution Batch shapes_1d - u8s8f32 - CPU: 2.06084; Deconvolution Batch shapes_3d - u8s8f32 - CPU: 2.42393; Recurrent Neural Network Training - f32 - CPU: 7183.98; Recurrent Neural Network Inference - f32 - CPU: 3703.46; Recurrent Neural Network Training - u8s8f32 - CPU: 7173.67; Convolution Batch Shapes Auto - bf16bf16bf16 - CPU: 50.8822; Deconvolution Batch shapes_1d - bf16bf16bf16 - CPU: 58.1311; Deconvolution Batch shapes_3d - bf16bf16bf16 - CPU: 41.0586; Recurrent Neural Network Inference - u8s8f32 - CPU: 3689.57; Matrix Multiply Batch Shapes Transformer - f32 - CPU: 3.20258; Recurrent Neural Network Training - bf16bf16bf16 - CPU: 7178.67; Recurrent Neural Network Inference - bf16bf16bf16 - CPU: 3703.42; Matrix Multiply Batch Shapes Transformer - u8s8f32 - CPU: 1.43716; Matrix Multiply Batch Shapes Transformer - bf16bf16bf16 - CPU: 10.7555
Java JMH: Throughput: 8651492675.776
Timed MPlayer Build: Time To Compile: 90.911
SVT-VP9: VMAF Optimized - Bosphorus 4K: 27.55; VMAF Optimized - Bosphorus 1080p: 91.62; PSNR/SSIM Optimized - Bosphorus 4K: 28.57; PSNR/SSIM Optimized - Bosphorus 1080p: 91.28; Visual Quality Optimized - Bosphorus 4K: 21.01; Visual Quality Optimized - Bosphorus 1080p: 73.88
x264: Bosphorus 4K: 10.82; Bosphorus 1080p: 45.34
SVT-AV1: Preset 8 - Bosphorus 4K: 12.347; Preset 10 - Bosphorus 4K: 28.998; Preset 12 - Bosphorus 4K: 44.747; Preset 8 - Bosphorus 1080p: 40.53; Preset 10 - Bosphorus 1080p: 85.399; Preset 12 - Bosphorus 1080p: 236.949
SVT-HEVC: 7 - Bosphorus 4K: 16.43; 10 - Bosphorus 4K: 31.8; 7 - Bosphorus 1080p: 48.14; 10 - Bosphorus 1080p: 114.53
avifenc: 6: 28.199; 6, Lossless: 33.861; 10, Lossless: 11.804
OSPray: particle_volume/ao/real_time: 12.2229; particle_volume/scivis/real_time: 11.9418; particle_volume/pathtracer/real_time: 114.26; gravity_spheres_volume/dim_512/ao/real_time: 1.32816; gravity_spheres_volume/dim_512/scivis/real_time: 1.33247; gravity_spheres_volume/dim_512/pathtracer/real_time: 1.60065
OSPray Studio: 1 - 1080p - 1 - Path Tracer: 6544; 2 - 1080p - 1 - Path Tracer: 6760; 3 - 1080p - 1 - Path Tracer: 8051; 1 - 1080p - 32 - Path Tracer: 212586; 2 - 1080p - 32 - Path Tracer: 220648; 3 - 1080p - 32 - Path Tracer: 262451
InfluxDB: 4 - 10000 - 2,5000,1 - 10000: 883259.6; 64 - 10000 - 2,5000,1 - 10000: 889845.6; 1024 - 10000 - 2,5000,1 - 10000: 891520.2
RocksDB: Rand Read: 17900785; Update Rand: 312370; Read While Writing: 728355; Read Rand Write Rand: 745777
simdjson: Kostya: 3.6; TopTweet: 7.83; LargeRand: 1.25; PartialTweets: 6.66; DistinctUserID: 7.92
Nettle 3.8 - Test: chacha: 1293.68 Mbyte/s (More Is Better; MIN: 595.88 / MAX: 3860.82)
Nettle 3.8 - Test: sha512: 585.27 Mbyte/s (More Is Better)
Nettle 3.8 - Test: poly1305-aes: 3862.57 Mbyte/s (More Is Better)
1. (CC) gcc options: -O2 -ggdb3 -lnettle -lm -lcrypto
Etcpak Etcpak is the self-proclaimed "fastest ETC compressor on the planet", focused on providing open-source, very fast ETC and S3 texture compression support. The test profile uses an 8K x 8K game texture as a sample input. Learn more via the OpenBenchmarking.org test page.
Etcpak 1.0 - Benchmark: Multi-Threaded - Configuration: ETC2: 1229.28 Mpx/s (More Is Better)
Etcpak 1.0 - Benchmark: Single-Threaded - Configuration: ETC2: 261.92 Mpx/s (More Is Better)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread
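Since the test input is an 8K x 8K texture, the Mpx/s figures above map directly to per-image compression times; a quick back-of-the-envelope check, assuming exactly 8192 x 8192 pixels per pass:

# Convert the reported Etcpak throughput (Mpx/s) into time per 8K x 8K texture.
# Assumes the input is exactly 8192 x 8192 pixels, as described in the test profile.
pixels = 8192 * 8192                 # ~67.1 million pixels
megapixels = pixels / 1e6

for label, mpx_per_s in [("Multi-Threaded ETC2", 1229.28), ("Single-Threaded ETC2", 261.92)]:
    seconds = megapixels / mpx_per_s
    print(f"{label}: {seconds * 1000:.1f} ms per texture")
# Multi-Threaded ETC2: ~54.6 ms per texture
# Single-Threaded ETC2: ~256.2 ms per texture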
WebP2 Image Encode This is a test of Google's libwebp2 library using the WebP2 image encode utility and a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as ultimately the successor to WebP. WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading support compared to WebP. Learn more via the OpenBenchmarking.org test page.
WebP2 Image Encode 20220422 - Encode Settings: Default: 9.527 Seconds (Fewer Is Better)
1. (CXX) g++ options: -msse4.2 -fno-rtti -O3
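With the 6000x4000 (24-megapixel) sample input, the encode time can also be expressed as throughput; a small sketch using the Default-settings result above (the other quality/effort results appear only in the overview table):

# Express the WebP2 encode time as megapixels per second for the 24 MP sample image.
width, height = 6000, 4000
megapixels = width * height / 1e6        # 24.0 MP

default_seconds = 9.527                  # "Encode Settings: Default" result above
print(f"Default settings: {megapixels / default_seconds:.2f} MP/s")
# Default settings: ~2.52 MP/s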
ONNX Runtime ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.
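For reference, the "Inferences Per Minute" metric corresponds to repeatedly invoking a session on the CPU execution provider; here is a minimal sketch with the ONNX Runtime Python API, where the model path and input shape are placeholders rather than the exact harness used by the test profile:

# Rough sketch of measuring CPU inferences per minute with ONNX Runtime.
# "model.onnx" and the input shape are placeholders; the test profile's own
# harness and models (from the ONNX Model Zoo) may differ in detail.
import time
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
inp = sess.get_inputs()[0]
x = np.random.rand(1, 3, 224, 224).astype(np.float32)  # shape assumed for illustration

runs, start = 0, time.time()
while time.time() - start < 10.0:        # sample for ten seconds
    sess.run(None, {inp.name: x})
    runs += 1
elapsed = time.time() - start
print(f"{runs / elapsed * 60:.0f} inferences per minute")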
ONNX Runtime 1.11 - Model: GPT-2 - Device: CPU - Executor: Standard: 4788 Inferences Per Minute (More Is Better)
ONNX Runtime 1.11 - Model: yolov4 - Device: CPU - Executor: Standard: 262 Inferences Per Minute (More Is Better)
ONNX Runtime 1.11 - Model: bertsquad-12 - Device: CPU - Executor: Standard: 228 Inferences Per Minute (More Is Better)
ONNX Runtime 1.11 - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard: 41 Inferences Per Minute (More Is Better)
ONNX Runtime 1.11 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard: 491 Inferences Per Minute (More Is Better)
ONNX Runtime 1.11 - Model: super-resolution-10 - Device: CPU - Executor: Standard: 1584 Inferences Per Minute (More Is Better)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt
GROMACS This is a test of the GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package with the water_GMX50 data. This test profile allows selecting between CPU and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.
GROMACS 2022.1 - Implementation: MPI CPU - Input: water_GMX50_bare: 0.553 Ns Per Day (More Is Better)
1. (CXX) g++ options: -O3
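The "Ns Per Day" metric is simply simulated time divided by wall-clock time; a small worked conversion using the result above (the 2 fs timestep is an assumption for illustration, not taken from this run's input files):

# Relate GROMACS ns/day to wall-clock time per simulated nanosecond.
ns_per_day = 0.553                      # MPI CPU, water_GMX50_bare result above

hours_per_ns = 24.0 / ns_per_day
print(f"{hours_per_ns:.1f} wall-clock hours per simulated ns")   # ~43.4 h

# With an assumed 2 fs timestep, one simulated ns is 500,000 MD steps:
dt_fs = 2.0
steps_per_ns = 1e6 / dt_fs
print(f"{steps_per_ns:.0f} steps per ns at a {dt_fs} fs timestep")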
oneDNN This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.
oneDNN 2.6 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU: 9.22394 ms (Fewer Is Better; MIN: 8.18)
oneDNN 2.6 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU: 6.22891 ms (Fewer Is Better; MIN: 5.94)
oneDNN 2.6 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU: 1.93185 ms (Fewer Is Better; MIN: 1.81)
oneDNN 2.6 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU: 2.48803 ms (Fewer Is Better; MIN: 2.4)
oneDNN 2.6 - Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU: 17.68 ms (Fewer Is Better; MIN: 17.21)
oneDNN 2.6 - Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU: 6.09108 ms (Fewer Is Better; MIN: 5.09)
oneDNN 2.6 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU: 8.95038 ms (Fewer Is Better; MIN: 8.25)
oneDNN 2.6 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU: 17.26 ms (Fewer Is Better; MIN: 13.49)
oneDNN 2.6 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU: 11.40 ms (Fewer Is Better; MIN: 9.55)
oneDNN 2.6 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU: 7.90264 ms (Fewer Is Better; MIN: 7.68)
oneDNN 2.6 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU: 2.06084 ms (Fewer Is Better; MIN: 1.84)
oneDNN 2.6 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU: 2.42393 ms (Fewer Is Better; MIN: 2.2)
oneDNN 2.6 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU: 7183.98 ms (Fewer Is Better; MIN: 7146.47)
oneDNN 2.6 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU: 3703.46 ms (Fewer Is Better; MIN: 3670.44)
oneDNN 2.6 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU: 7173.67 ms (Fewer Is Better; MIN: 7129.85)
oneDNN 2.6 - Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU: 50.88 ms (Fewer Is Better; MIN: 50.57)
oneDNN 2.6 - Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU: 58.13 ms (Fewer Is Better; MIN: 55.88)
oneDNN 2.6 - Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU: 41.06 ms (Fewer Is Better; MIN: 35.5)
oneDNN 2.6 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU: 3689.57 ms (Fewer Is Better; MIN: 3647.55)
oneDNN 2.6 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU: 3.20258 ms (Fewer Is Better; MIN: 3.11)
oneDNN 2.6 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU: 7178.67 ms (Fewer Is Better; MIN: 7137.16)
oneDNN 2.6 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU: 3703.42 ms (Fewer Is Better; MIN: 3672.08)
oneDNN 2.6 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU: 1.43716 ms (Fewer Is Better; MIN: 1.38)
oneDNN 2.6 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU: 10.76 ms (Fewer Is Better; MIN: 10.34)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl
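One way to read the oneDNN numbers above is as per-data-type speedups for the same harness; a short sketch computing a few of those ratios from the reported times (f32 baseline against int8 and bf16, values copied from the results above):

# Compare oneDNN data types for a few harnesses using the times reported above (ms, lower is better).
results = {
    "Deconvolution shapes_1d": {"f32": 17.26, "u8s8f32": 2.06, "bf16": 58.13},
    "Deconvolution shapes_3d": {"f32": 11.40, "u8s8f32": 2.42, "bf16": 41.06},
    "IP Shapes 1D":            {"f32": 9.22,  "u8s8f32": 1.93, "bf16": 17.68},
}

for harness, times in results.items():
    f32 = times["f32"]
    ratios = ", ".join(f"{dt}: {f32 / t:.1f}x" for dt, t in times.items() if dt != "f32")
    print(f"{harness}: speedup vs f32 -> {ratios}")
# e.g. Deconvolution shapes_1d: u8s8f32 ~8.4x faster, bf16 ~0.3x
# (bf16 is slower here; this CPU lacks native bf16 acceleration)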
SVT-VP9 This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.
SVT-VP9 0.3 - Tuning: VMAF Optimized - Input: Bosphorus 4K: 27.55 Frames Per Second (More Is Better)
SVT-VP9 0.3 - Tuning: VMAF Optimized - Input: Bosphorus 1080p: 91.62 Frames Per Second (More Is Better)
SVT-VP9 0.3 - Tuning: PSNR/SSIM Optimized - Input: Bosphorus 4K: 28.57 Frames Per Second (More Is Better)
SVT-VP9 0.3 - Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080p: 91.28 Frames Per Second (More Is Better)
SVT-VP9 0.3 - Tuning: Visual Quality Optimized - Input: Bosphorus 4K: 21.01 Frames Per Second (More Is Better)
SVT-VP9 0.3 - Tuning: Visual Quality Optimized - Input: Bosphorus 1080p: 73.88 Frames Per Second (More Is Better)
1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm
x264 2022-02-22 - Video Input: Bosphorus 1080p: 45.34 Frames Per Second (More Is Better)
1. (CC) gcc options: -ldl -m64 -lm -lpthread -O3 -flto
SVT-AV1 This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.
SVT-AV1 1.0 - Encoder Mode: Preset 8 - Input: Bosphorus 4K: 12.35 Frames Per Second (More Is Better)
SVT-AV1 1.0 - Encoder Mode: Preset 10 - Input: Bosphorus 4K: 29.00 Frames Per Second (More Is Better)
SVT-AV1 1.0 - Encoder Mode: Preset 12 - Input: Bosphorus 4K: 44.75 Frames Per Second (More Is Better)
SVT-AV1 1.0 - Encoder Mode: Preset 8 - Input: Bosphorus 1080p: 40.53 Frames Per Second (More Is Better)
SVT-AV1 1.0 - Encoder Mode: Preset 10 - Input: Bosphorus 1080p: 85.40 Frames Per Second (More Is Better)
SVT-AV1 1.0 - Encoder Mode: Preset 12 - Input: Bosphorus 1080p: 236.95 Frames Per Second (More Is Better)
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq -pie
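A quick way to sanity-check the preset scaling above is to convert FPS into per-frame encode time and compare 4K against 1080p, which has one quarter of the pixels; a small sketch using the reported figures:

# Convert the SVT-AV1 results above (frames per second) into per-frame encode time.
fps = {
    ("Preset 8",  "4K"): 12.35,  ("Preset 8",  "1080p"): 40.53,
    ("Preset 10", "4K"): 29.00,  ("Preset 10", "1080p"): 85.40,
    ("Preset 12", "4K"): 44.75,  ("Preset 12", "1080p"): 236.95,
}

for (preset, res), value in fps.items():
    print(f"{preset} {res}: {1000.0 / value:.1f} ms/frame")

# 1080p has 1/4 the pixels of 4K; the measured speedups here are roughly 3-5x:
for preset in ("Preset 8", "Preset 10", "Preset 12"):
    ratio = fps[(preset, "1080p")] / fps[(preset, "4K")]
    print(f"{preset}: 1080p is {ratio:.1f}x faster than 4K")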
SVT-HEVC This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.
SVT-HEVC 1.5.0 - Tuning: 7 - Input: Bosphorus 4K: 16.43 Frames Per Second (More Is Better)
SVT-HEVC 1.5.0 - Tuning: 10 - Input: Bosphorus 4K: 31.8 Frames Per Second (More Is Better)
SVT-HEVC 1.5.0 - Tuning: 7 - Input: Bosphorus 1080p: 48.14 Frames Per Second (More Is Better)
SVT-HEVC 1.5.0 - Tuning: 10 - Input: Bosphorus 1080p: 114.53 Frames Per Second (More Is Better)
1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt
OSPray 2.9 - Benchmark: gravity_spheres_volume/dim_512/ao/real_time: 1.32816 Items Per Second (More Is Better)
OSPray 2.9 - Benchmark: gravity_spheres_volume/dim_512/scivis/real_time: 1.33247 Items Per Second (More Is Better)
OSPray 2.9 - Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time: 1.60065 Items Per Second (More Is Better)
OSPray Studio Intel OSPray Studio is an open-source, interactive visualization and ray-tracing software package. OSPray Studio makes use of Intel OSPray, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.
OSPray Studio 0.10 - Camera: 1 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer: 6544 ms (Fewer Is Better)
OSPray Studio 0.10 - Camera: 2 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer: 6760 ms (Fewer Is Better)
OSPray Studio 0.10 - Camera: 3 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer: 8051 ms (Fewer Is Better)
OSPray Studio 0.10 - Camera: 1 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer: 212586 ms (Fewer Is Better)
OSPray Studio 0.10 - Camera: 2 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer: 220648 ms (Fewer Is Better)
OSPray Studio 0.10 - Camera: 3 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer: 262451 ms (Fewer Is Better)
1. (CXX) g++ options: -O3 -lm -ldl
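Since the only variable between the two groups of OSPray Studio results above is samples per pixel (1 vs. 32), the render times should scale close to linearly with spp; a short check using the reported milliseconds:

# Check how OSPray Studio render time scales with samples per pixel (values from the results above, in ms).
spp1  = {"Camera 1": 6544,   "Camera 2": 6760,   "Camera 3": 8051}
spp32 = {"Camera 1": 212586, "Camera 2": 220648, "Camera 3": 262451}

for cam in spp1:
    ratio = spp32[cam] / spp1[cam]
    print(f"{cam}: 32 spp / 1 spp = {ratio:.1f}x (ideal linear scaling would be 32x)")
# Camera 1: ~32.5x, Camera 2: ~32.6x, Camera 3: ~32.6x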
InfluxDB This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.
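The test profile drives writes through InfluxDB Inch, but the same "values per second" idea can be sketched with the InfluxDB 1.x Python client; the host, database name, and toy batching below are placeholder assumptions, not the Inch workload itself:

# Rough sketch of measuring write throughput (values/sec) against InfluxDB 1.x.
# Host, database, and point layout are placeholders; the benchmark itself uses InfluxDB Inch.
import time
from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", port=8086, database="benchdb")
client.create_database("benchdb")

batch = [
    {
        "measurement": "bench",
        "tags": {"stream": "0", "tag0": str(i % 2), "tag1": str(i % 5000)},
        "fields": {"value": float(i)},
    }
    for i in range(10000)                # one 10,000-point batch, echoing the batch size above
]

start = time.time()
client.write_points(batch, batch_size=10000)
elapsed = time.time() - start
print(f"{len(batch) / elapsed:,.0f} values/sec for a single batch")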
InfluxDB 1.8.2 - Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000: 883259.6 val/sec (More Is Better)
InfluxDB 1.8.2 - Concurrent Streams: 64 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000: 889845.6 val/sec (More Is Better)
InfluxDB 1.8.2 - Concurrent Streams: 1024 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000: 891520.2 val/sec (More Is Better)
Facebook RocksDB 7.0.1 - Test: Update Random: 312370 Op/s (More Is Better)
Facebook RocksDB 7.0.1 - Test: Read While Writing: 728355 Op/s (More Is Better)
Facebook RocksDB 7.0.1 - Test: Read Random Write Random: 745777 Op/s (More Is Better)
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
Testing initiated at 6 June 2022 20:05 by user phoronix.