Iris Plus G7 Ice Lake

Intel Core i7-1065G7 testing with a Dell 06CDVY (1.0.9 BIOS) and Intel Iris Plus G7 3GB on Ubuntu 20.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2010229-FI-IRISPLUSG64
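
For reference, a minimal sketch of reproducing this comparison on an Ubuntu system might look like the following (the package name and the pts/ncnn profile shown for the single-test example are assumptions; the result identifier is the one given above):

    # Install the Phoronix Test Suite from the distribution repositories
    sudo apt-get install phoronix-test-suite

    # Run the full comparison against this public result file
    phoronix-test-suite benchmark 2010229-FI-IRISPLUSG64

    # Or run just one of the test profiles used in this result, e.g. NCNN
    phoronix-test-suite benchmark pts/ncnn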

Run Management

Result Identifier: Ice Lake
Date: October 22 2020
Test Run Duration: 10 Hours, 11 Minutes


Iris Plus G7 Ice Lake Benchmarks - OpenBenchmarking.org - Phoronix Test Suite

Processor: Intel Core i7-1065G7 @ 3.90GHz (4 Cores / 8 Threads)
Motherboard: Dell 06CDVY (1.0.9 BIOS)
Chipset: Intel Device 34ef
Memory: 16GB
Disk: Toshiba KBG40ZPZ512G NVMe 512GB
Graphics: Intel Iris Plus G7 3GB (1100MHz)
Audio: Realtek ALC289
Network: Intel Killer Wi-Fi 6 AX1650i 160MHz
OS: Ubuntu 20.04
Kernel: 5.9.1-050901-generic (x86_64)
Desktop: GNOME Shell 3.36.4
Display Server: X Server 1.20.8
Display Driver: modesetting 1.20.8
OpenGL: 4.6 Mesa 20.3.0-devel (git-81797fc 2020-10-22 focal-oibaf-ppa)
OpenCL: OpenCL 3.0
Vulkan: 1.2.145
Compiler: GCC 9.3.0
File-System: ext4
Screen Resolution: 1920x1200

System Logs
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-HskZEa/gcc-9-9.3.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: intel_pstate powersave - CPU Microcode: 0x78 - Thermald 1.9.1
- Python 3.8.5
- Security mitigations: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Iris Plus G7 Ice Lake - Results Overview
(Condensed overview graph of all results for this run; the 73 individual benchmark results are presented in full below.)

PlaidML

This test profile uses the PlaidML deep learning framework developed by Intel to offer up various benchmarks. Learn more via the OpenBenchmarking.org test page.
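
As an aside, PlaidML workloads like these can also be exercised outside the test profile with the plaidbench utility; a minimal sketch, assuming plaidbench and plaidml-keras have been installed via pip, might be:

    # Select the OpenCL/PlaidML device interactively
    plaidml-setup

    # Run an inference benchmark of one of the Keras networks, e.g. MobileNet
    plaidbench keras mobilenet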

PlaidML - FP16: No - Mode: Inference - Network: NASNer Large - Device: OpenCL (FPS, More Is Better)
Ice Lake: 3.65 (SE +/- 0.01, N = 3)

RealSR-NCNN

RealSR-NCNN is an NCNN neural network implementation of the RealSR project, accelerated using the Vulkan API. RealSR is Real-World Super-Resolution via Kernel Estimation and Noise Injection. NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. This test profile times how long it takes to increase the resolution of a sample image by a scale of 4x with Vulkan. Learn more via the OpenBenchmarking.org test page.
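
For context, the underlying realsr-ncnn-vulkan command-line tool can be run directly; a minimal sketch (hypothetical file names, and assuming the profile's TAA toggle corresponds to the tool's TTA mode) might be:

    # 4x upscale of a sample image using Vulkan, TTA disabled
    ./realsr-ncnn-vulkan -i input.jpg -o output.png -s 4

    # The same upscale with TTA (test-time augmentation) enabled
    ./realsr-ncnn-vulkan -i input.jpg -o output.png -s 4 -x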

RealSR-NCNN 20200818 - Scale: 4x - TAA: Yes (Seconds, Fewer Is Better)
Ice Lake: 1080.04 (SE +/- 0.39, N = 3)

SHOC Scalable HeterOgeneous Computing

The CUDA and OpenCL version of Vetter's Scalable HeterOgeneous Computing benchmark suite. Learn more via the OpenBenchmarking.org test page.

SHOC Scalable HeterOgeneous Computing 2015-11-10 - Target: OpenCL - Benchmark: Texture Read Bandwidth (GB/s, More Is Better)
Ice Lake: 179.40 (SE +/- 2.21, N = 12)
1. (CXX) g++ options: -O2 -lSHOCCommonMPI -lSHOCCommonOpenCL -lSHOCCommon -lOpenCL -lrt -pthread -lmpi_cxx -lmpi

oneAPI Level Zero Tests

This test profile benchmarks the collection of Intel oneAPI Level Zero Tests. Learn more via the OpenBenchmarking.org test page.

oneAPI Level Zero Tests - Test: Peak Integer Compute (GFLOPS, More Is Better)
Ice Lake: 159.27 (SE +/- 0.03, N = 3)
1. (CXX) g++ options: -ldl -pthread

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.
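
As a rough standalone equivalent of this test, lc0 provides a built-in benchmark mode; a minimal sketch, assuming an lc0 build with OpenCL support and a downloaded network weights file (file name hypothetical), might be:

    # Benchmark the OpenCL backend with a given network
    lc0 benchmark --backend=opencl --weights=network.pb.gz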

LeelaChessZero 0.26 - Backend: OpenCL (Nodes Per Second, More Is Better)
Ice Lake: 654 (SE +/- 7.22, N = 3)
1. (CXX) g++ options: -flto -pthread

PlaidML

This test profile uses the PlaidML deep learning framework developed by Intel to offer up various benchmarks. Learn more via the OpenBenchmarking.org test page.

PlaidML - FP16: No - Mode: Inference - Network: DenseNet 201 - Device: OpenCL (FPS, More Is Better)
Ice Lake: 12.59 (SE +/- 0.15, N = 3)

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.
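
The ncnn project ships a benchncnn tool that exercises these same model archetypes; a minimal sketch of running it against the Vulkan GPU device (argument order per upstream benchncnn: loop count, threads, powersave mode, GPU device) might be:

    # 8 loops, 4 threads, powersave off, GPU device 0 (Vulkan); use -1 for CPU
    ./benchncnn 8 4 0 0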

NCNN 20200916 - Target: Vulkan GPU - Model: yolov4-tiny (ms, Fewer Is Better)
Ice Lake: 39.89 (SE +/- 0.12, N = 3) - MIN: 34.42 / MAX: 47.09
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: Vulkan GPU - Model: resnet50 (ms, Fewer Is Better)
Ice Lake: 35.90 (SE +/- 0.03, N = 3) - MIN: 35.32 / MAX: 36.43
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: Vulkan GPU - Model: alexnet (ms, Fewer Is Better)
Ice Lake: 16.66 (SE +/- 0.01, N = 3) - MIN: 15.65 / MAX: 17.07
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: Vulkan GPU - Model: resnet18 (ms, Fewer Is Better)
Ice Lake: 17.10 (SE +/- 0.01, N = 3) - MIN: 16.44 / MAX: 17.52
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: Vulkan GPU - Model: vgg16 (ms, Fewer Is Better)
Ice Lake: 97.22 (SE +/- 0.06, N = 3) - MIN: 94.12 / MAX: 98.26
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: Vulkan GPU - Model: googlenet (ms, Fewer Is Better)
Ice Lake: 18.65 (SE +/- 0.02, N = 3) - MIN: 18.33 / MAX: 19.12
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: Vulkan GPU - Model: blazeface (ms, Fewer Is Better)
Ice Lake: 1.83 (SE +/- 0.02, N = 3) - MIN: 1.64 / MAX: 5.11
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: Vulkan GPU - Model: efficientnet-b0 (ms, Fewer Is Better)
Ice Lake: 16.80 (SE +/- 0.03, N = 3) - MIN: 16.5 / MAX: 17.29
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: Vulkan GPU - Model: mnasnet (ms, Fewer Is Better)
Ice Lake: 7.58 (SE +/- 0.02, N = 3) - MIN: 7.3 / MAX: 8.41
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: Vulkan GPU - Model: shufflenet-v2 (ms, Fewer Is Better)
Ice Lake: 5.64 (SE +/- 0.03, N = 3) - MIN: 5.4 / MAX: 6.06
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3 (ms, Fewer Is Better)
Ice Lake: 8.67 (SE +/- 0.03, N = 3) - MIN: 8.4 / MAX: 9.17
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2 (ms, Fewer Is Better)
Ice Lake: 7.54 (SE +/- 0.06, N = 3) - MIN: 6.97 / MAX: 8.41
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: Vulkan GPU - Model: mobilenet (ms, Fewer Is Better)
Ice Lake: 18.99 (SE +/- 0.01, N = 3) - MIN: 18.53 / MAX: 22.91
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: Vulkan GPU - Model: squeezenet (ms, Fewer Is Better)
Ice Lake: 21.93 (SE +/- 0.03, N = 3) - MIN: 21.24 / MAX: 26.3
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

PlaidML

This test profile uses the PlaidML deep learning framework developed by Intel to offer up various benchmarks. Learn more via the OpenBenchmarking.org test page.

PlaidML - FP16: No - Mode: Inference - Network: IMDB LSTM - Device: OpenCL (FPS, More Is Better)
Ice Lake: 20.64 (SE +/- 0.02, N = 3)

Unigine Heaven

This test calculates the average frame-rate within the Heaven demo for the Unigine engine. This engine is extremely demanding on the system's graphics card. Learn more via the OpenBenchmarking.org test page.

Unigine Heaven 4.0 - Resolution: 1920 x 1200 - Mode: Fullscreen - Renderer: OpenGL (Frames Per Second, More Is Better)
Ice Lake: 17.08 (SE +/- 0.01, N = 3)

PlaidML

This test profile uses the PlaidML deep learning framework developed by Intel to offer up various benchmarks. Learn more via the OpenBenchmarking.org test page.

PlaidML - FP16: No - Mode: Inference - Network: VGG19 - Device: OpenCL (FPS, More Is Better)
Ice Lake: 16.04 (SE +/- 0.02, N = 3)

Unigine Superposition

This test calculates the average frame-rate within the Superposition demo for the Unigine engine, released in 2017. This engine is extremely demanding on the system's graphics card. Learn more via the OpenBenchmarking.org test page.

Unigine Superposition 1.0 - Resolution: 1920 x 1200 - Mode: Fullscreen - Quality: Medium - Renderer: OpenGL (Frames Per Second, More Is Better)
Ice Lake: 9.7 (SE +/- 0.00, N = 3) - MAX: 11.7

PlaidML

This test profile uses the PlaidML deep learning framework developed by Intel to offer up various benchmarks. Learn more via the OpenBenchmarking.org test page.

PlaidML - FP16: No - Mode: Inference - Network: VGG16 - Device: OpenCL (FPS, More Is Better)
Ice Lake: 20.21 (SE +/- 0.02, N = 3)

Unigine Valley

This test calculates the average frame-rate within the Valley demo for the Unigine engine, released in February 2013. This engine is extremely demanding on the system's graphics card. Unigine Valley relies upon an OpenGL 3 core profile context. Learn more via the OpenBenchmarking.org test page.

Unigine Valley 1.0 - Resolution: 1920 x 1200 - Mode: Fullscreen - Renderer: OpenGL (Frames Per Second, More Is Better)
Ice Lake: 19.02 (SE +/- 0.03, N = 3)

Unigine Superposition

This test calculates the average frame-rate within the Superposition demo for the Unigine engine, released in 2017. This engine is extremely demanding on the system's graphics card. Learn more via the OpenBenchmarking.org test page.

Unigine Superposition 1.0 - Resolution: 1920 x 1200 - Mode: Fullscreen - Quality: Low - Renderer: OpenGL (Frames Per Second, More Is Better)
Ice Lake: 18.5 (SE +/- 0.03, N = 3) - MAX: 23.7

PlaidML

This test profile uses the PlaidML deep learning framework developed by Intel to offer up various benchmarks. Learn more via the OpenBenchmarking.org test page.

PlaidML - FP16: No - Mode: Inference - Network: Inception V3 - Device: OpenCL (FPS, More Is Better)
Ice Lake: 37.85 (SE +/- 0.05, N = 3)

oneAPI Level Zero Tests

This test profile benchmarks the collection of Intel oneAPI Level Zero Tests. Learn more via the OpenBenchmarking.org test page.

oneAPI Level Zero Tests - Test: Peak Single-Precision Compute (GFLOPS, More Is Better)
Ice Lake: 999.87 (SE +/- 0.81, N = 3)
1. (CXX) g++ options: -ldl -pthread

RealSR-NCNN

RealSR-NCNN is an NCNN neural network implementation of the RealSR project, accelerated using the Vulkan API. RealSR is Real-World Super-Resolution via Kernel Estimation and Noise Injection. NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. This test profile times how long it takes to increase the resolution of a sample image by a scale of 4x with Vulkan. Learn more via the OpenBenchmarking.org test page.

RealSR-NCNN 20200818 - Scale: 4x - TAA: No (Seconds, Fewer Is Better)
Ice Lake: 136.63 (SE +/- 0.36, N = 3)

Xonotic

This is a benchmark of Xonotic, which is a fork of the DarkPlaces-based Nexuiz game. Development of Xonotic began in March 2010. Learn more via the OpenBenchmarking.org test page.

Xonotic 0.8.2 - Resolution: 1920 x 1200 - Effects Quality: Ultimate (Frames Per Second, More Is Better)
Ice Lake: 98.45 (SE +/- 0.42, N = 3) - MIN: 33 / MAX: 188

ET: Legacy

ET: Legacy is an open-source engine evolution of Wolfenstein: Enemy Territory, a World War II-era first-person shooter that was released for free by Splash Damage using the id Tech 3 engine. Learn more via the OpenBenchmarking.org test page.

ET: Legacy 2.75 - Renderer: Renderer2 - Resolution: 1920 x 1200 (Frames Per Second, More Is Better)
Ice Lake: 91.8 (SE +/- 1.16, N = 5)

Xonotic

This is a benchmark of Xonotic, which is a fork of the DarkPlaces-based Nexuiz game. Development of Xonotic began in March 2010. Learn more via the OpenBenchmarking.org test page.

Xonotic 0.8.2 - Resolution: 1920 x 1200 - Effects Quality: Ultra (Frames Per Second, More Is Better)
Ice Lake: 121.42 (SE +/- 0.67, N = 3) - MIN: 72 / MAX: 234

PlaidML

This test profile uses the PlaidML deep learning framework developed by Intel to offer up various benchmarks. Learn more via the OpenBenchmarking.org test page.

PlaidML - FP16: No - Mode: Inference - Network: ResNet 50 - Device: OpenCL (FPS, More Is Better)
Ice Lake: 79.28 (SE +/- 0.06, N = 3)

Xonotic

This is a benchmark of Xonotic, which is a fork of the DarkPlaces-based Nexuiz game. Development of Xonotic began in March 2010. Learn more via the OpenBenchmarking.org test page.

Xonotic 0.8.2 - Resolution: 1920 x 1200 - Effects Quality: High (Frames Per Second, More Is Better)
Ice Lake: 141.87 (SE +/- 1.19, N = 3) - MIN: 94 / MAX: 268

GpuTest

GpuTest is a cross-platform OpenGL benchmark developed at Geeks3D.com that offers tech demos such as FurMark, TessMark, and other workloads to stress various areas of GPUs and drivers. Learn more via the OpenBenchmarking.org test page.

GpuTest 0.7.0 - Test: GiMark - Resolution: 1920 x 1200 - Mode: Fullscreen (Points, More Is Better)
Ice Lake: 1771 (SE +/- 5.57, N = 3)

Xonotic

This is a benchmark of Xonotic, which is a fork of the DarkPlaces-based Nexuiz game. Development of Xonotic began in March 2010. Learn more via the OpenBenchmarking.org test page.

Xonotic 0.8.2 - Resolution: 1920 x 1200 - Effects Quality: Low (Frames Per Second, More Is Better)
Ice Lake: 209.75 (SE +/- 2.89, N = 4) - MIN: 130 / MAX: 456

GpuTest

GpuTest is a cross-platform OpenGL benchmark developed at Geeks3D.com that offers tech demos such as FurMark, TessMark, and other workloads to stress various areas of GPUs and drivers. Learn more via the OpenBenchmarking.org test page.

GpuTest 0.7.0 - Test: Pixmark Piano - Resolution: 1920 x 1200 - Mode: Fullscreen (Points, More Is Better)
Ice Lake: 292 (SE +/- 2.60, N = 3)

GpuTest 0.7.0 - Test: TessMark - Resolution: 1920 x 1200 - Mode: Fullscreen (Points, More Is Better)
Ice Lake: 3675 (SE +/- 16.34, N = 3)

GpuTest 0.7.0 - Test: Furmark - Resolution: 1920 x 1200 - Mode: Fullscreen (Points, More Is Better)
Ice Lake: 1096 (SE +/- 6.57, N = 3)

GpuTest 0.7.0 - Test: Pixmark Volplosion - Resolution: 1920 x 1200 - Mode: Fullscreen (Points, More Is Better)
Ice Lake: 768 (SE +/- 5.33, N = 3)

oneAPI Level Zero Tests

This test profile benchmarks the collection of Intel oneAPI Level Zero Tests. Learn more via the OpenBenchmarking.org test page.

oneAPI Level Zero Tests - Test: Peak System Memory Copy to Shared Memory (GB/s, More Is Better)
Ice Lake: 15.43 (SE +/- 0.13, N = 3)
1. (CXX) g++ options: -ldl -pthread

oneAPI Level Zero Tests - Test: Peak Half-Precision Compute (GFLOPS, More Is Better)
Ice Lake: 1369.74 (SE +/- 12.09, N = 3)
1. (CXX) g++ options: -ldl -pthread

Waifu2x-NCNN Vulkan

Waifu2x-NCNN is an NCNN neural network implementation of the Waifu2x converter project, accelerated using the Vulkan API. NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. This test profile times how long it takes to increase the resolution of a sample image with Vulkan. Learn more via the OpenBenchmarking.org test page.
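
The underlying waifu2x-ncnn-vulkan tool can likewise be invoked directly; a minimal sketch (hypothetical file names) matching this 2x scale, denoise level 3 configuration might be:

    # 2x upscale at noise level 3 using Vulkan, TTA disabled
    ./waifu2x-ncnn-vulkan -i input.jpg -o output.png -s 2 -n 3

    # The same run with TTA enabled
    ./waifu2x-ncnn-vulkan -i input.jpg -o output.png -s 2 -n 3 -x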

Waifu2x-NCNN Vulkan 20200818 - Scale: 2x - Denoise: 3 - TAA: Yes (Seconds, Fewer Is Better)
Ice Lake: 51.93 (SE +/- 0.56, N = 3)

JuliaGPU

JuliaGPU is an OpenCL benchmark, with this version containing various PTS-specific enhancements. Learn more via the OpenBenchmarking.org test page.

JuliaGPU 1.2pts1 - OpenCL Device: GPU (Samples/sec, More Is Better)
Ice Lake: 90037990.3 (SE +/- 951136.81, N = 8)
1. (CC) gcc options: -O3 -march=native -ftree-vectorize -funroll-loops -lglut -lOpenCL -lGL -lm

cl-mem

A basic OpenCL memory benchmark. Learn more via the OpenBenchmarking.org test page.

cl-mem 2017-01-13 - Benchmark: Read (GB/s, More Is Better)
Ice Lake: 42.0 (SE +/- 0.09, N = 3)
1. (CC) gcc options: -O2 -flto -lOpenCL

cl-mem 2017-01-13 - Benchmark: Write (GB/s, More Is Better)
Ice Lake: 32.3 (SE +/- 0.09, N = 3)
1. (CC) gcc options: -O2 -flto -lOpenCL

cl-mem 2017-01-13 - Benchmark: Copy (GB/s, More Is Better)
Ice Lake: 35.7 (SE +/- 0.09, N = 3)
1. (CC) gcc options: -O2 -flto -lOpenCL

MandelGPU

MandelGPU is an OpenCL benchmark; this test runs the OpenCL float4 rendering kernel with a maximum of 4096 iterations. Learn more via the OpenBenchmarking.org test page.

MandelGPU 1.3pts1 - OpenCL Device: GPU (Samples/sec, More Is Better)
Ice Lake: 25219078.0 (SE +/- 180639.93, N = 3)
1. (CC) gcc options: -O3 -lm -ftree-vectorize -funroll-loops -lglut -lOpenCL -lGL

PlaidML

This test profile uses the PlaidML deep learning framework developed by Intel to offer up various benchmarks. Learn more via the OpenBenchmarking.org test page.

PlaidML - FP16: No - Mode: Inference - Network: Mobilenet - Device: OpenCL (FPS, More Is Better)
Ice Lake: 251.95 (SE +/- 0.69, N = 3)

SHOC Scalable HeterOgeneous Computing

The CUDA and OpenCL version of Vetter's Scalable HeterOgeneous Computing benchmark suite. Learn more via the OpenBenchmarking.org test page.

SHOC Scalable HeterOgeneous Computing 2015-11-10 - Target: OpenCL - Benchmark: MD5 Hash (GHash/s, More Is Better)
Ice Lake: 0.6797 (SE +/- 0.0007, N = 3)
1. (CXX) g++ options: -O2 -lSHOCCommonMPI -lSHOCCommonOpenCL -lSHOCCommon -lOpenCL -lrt -pthread -lmpi_cxx -lmpi

oneAPI Level Zero Tests

This test profile benchmarks the collection of Intel oneAPI Level Zero Tests. Learn more via the OpenBenchmarking.org test page.

oneAPI Level Zero Tests - Test: Peak Float16 Global Memory Bandwidth (GB/s, More Is Better)
Ice Lake: 37.18 (SE +/- 0.04, N = 3)
1. (CXX) g++ options: -ldl -pthread

Tesseract

Tesseract is a fork of Cube 2 Sauerbraten with numerous graphics and game-play improvements. Tesseract has been in development since 2012, with its first release in May 2014. Learn more via the OpenBenchmarking.org test page.

Tesseract 2014-05-12 - Resolution: 1920 x 1200 (Frames Per Second, More Is Better)
Ice Lake: 92.44 (SE +/- 1.04, N = 3)

clpeak

Clpeak is designed to test the peak capabilities of OpenCL devices. Learn more via the OpenBenchmarking.org test page.
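
clpeak is a simple command-line utility; a minimal sketch of running the individual measurements reported below directly (flag names assumed from upstream clpeak releases) might be:

    # Run every clpeak sub-test on the default OpenCL platform/device
    clpeak

    # Or limit the run to specific measurements
    clpeak --compute-sp
    clpeak --global-bandwidth
    clpeak --transfer-bandwidth
    clpeak --kernel-latency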

clpeak - OpenCL Test: Single-Precision Float (GFLOPS, More Is Better)
Ice Lake: 1047.00 (SE +/- 0.51, N = 3)
1. (CXX) g++ options: -O3 -rdynamic -lOpenCL

clpeak - OpenCL Test: Global Memory Bandwidth (GBPS, More Is Better)
Ice Lake: 45.71 (SE +/- 0.04, N = 4)
1. (CXX) g++ options: -O3 -rdynamic -lOpenCL

SHOC Scalable HeterOgeneous Computing

The CUDA and OpenCL version of Vetter's Scalable HeterOgeneous Computing benchmark suite. Learn more via the OpenBenchmarking.org test page.

SHOC Scalable HeterOgeneous Computing 2015-11-10 - Target: OpenCL - Benchmark: Max SP Flops (GFLOPS, More Is Better)
Ice Lake: 5330.13 (SE +/- 0.03, N = 3)
1. (CXX) g++ options: -O2 -lSHOCCommonMPI -lSHOCCommonOpenCL -lSHOCCommon -lOpenCL -lrt -pthread -lmpi_cxx -lmpi

oneAPI Level Zero Tests

This test profile benchmarks the collection of Intel oneAPI Level Zero Tests. Learn more via the OpenBenchmarking.org test page.

oneAPI Level Zero Tests - Test: Host-To-Device-To-Host Image Copy (GB/s, More Is Better)
Ice Lake: 10.63 (SE +/- 0.02, N = 3)
1. (CXX) g++ options: -ldl -pthread

oneAPI Level Zero Tests - Test: Host-To-Device Bandwidth (usec, Fewer Is Better)
Ice Lake: 12504.44 (SE +/- 9.96, N = 4)
1. (CXX) g++ options: -ldl -pthread

oneAPI Level Zero Tests - Test: Host-To-Device Bandwidth (GB/s, More Is Better)
Ice Lake: 21.47 (SE +/- 0.02, N = 4)
1. (CXX) g++ options: -ldl -pthread

oneAPI Level Zero Tests - Test: Device-To-Host Bandwidth (usec, Fewer Is Better)
Ice Lake: 12492.39 (SE +/- 26.95, N = 4)
1. (CXX) g++ options: -ldl -pthread

oneAPI Level Zero Tests - Test: Device-To-Host Bandwidth (GB/s, More Is Better)
Ice Lake: 21.49 (SE +/- 0.05, N = 4)
1. (CXX) g++ options: -ldl -pthread

clpeak

Clpeak is designed to test the peak capabilities of OpenCL devices. Learn more via the OpenBenchmarking.org test page.

clpeak - OpenCL Test: Transfer Bandwidth enqueueWriteBuffer (GBPS, More Is Better)
Ice Lake: 43.85 (SE +/- 0.22, N = 6)
1. (CXX) g++ options: -O3 -rdynamic -lOpenCL

Waifu2x-NCNN Vulkan

Waifu2x-NCNN is an NCNN neural network implementation of the Waifu2x converter project, accelerated using the Vulkan API. NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. This test profile times how long it takes to increase the resolution of a sample image with Vulkan. Learn more via the OpenBenchmarking.org test page.

Waifu2x-NCNN Vulkan 20200818 - Scale: 2x - Denoise: 3 - TAA: No (Seconds, Fewer Is Better)
Ice Lake: 6.926 (SE +/- 0.020, N = 6)

FinanceBench

FinanceBench is a collection of financial program benchmarks with support for benchmarking on the GPU. Learn more via the OpenBenchmarking.org test page.

FinanceBench 2016-06-06 - Benchmark: Monte-Carlo OpenCL (ms, Fewer Is Better)
Ice Lake: 598.38 (SE +/- 1.45, N = 7)
1. (CXX) g++ options: -O3 -lOpenCL

SHOC Scalable HeterOgeneous Computing

The CUDA and OpenCL version of Vetter's Scalable HeterOgeneous Computing benchmark suite. Learn more via the OpenBenchmarking.org test page.

SHOC Scalable HeterOgeneous Computing 2015-11-10 - Target: OpenCL - Benchmark: Triad (GB/s, More Is Better)
Ice Lake: 13.35 (SE +/- 0.18, N = 15)
1. (CXX) g++ options: -O2 -lSHOCCommonMPI -lSHOCCommonOpenCL -lSHOCCommon -lOpenCL -lrt -pthread -lmpi_cxx -lmpi

clpeak

Clpeak is designed to test the peak capabilities of OpenCL devices. Learn more via the OpenBenchmarking.org test page.

clpeak - OpenCL Test: Kernel Latency (us, Fewer Is Better)
Ice Lake: 36.52 (SE +/- 0.11, N = 9)
1. (CXX) g++ options: -O3 -rdynamic -lOpenCL

ViennaCL

ViennaCL is an open-source linear algebra library written in C++ and with support for OpenCL and OpenMP. This test profile uses ViennaCL OpenCL support and runs the included computational benchmark. Learn more via the OpenBenchmarking.org test page.

ViennaCL 1.4.2 - OpenCL LU Factorization (GFLOPS, More Is Better)
Ice Lake: 48.18 (SE +/- 0.17, N = 10)
1. (CXX) g++ options: -rdynamic -lOpenCL

SHOC Scalable HeterOgeneous Computing

The CUDA and OpenCL version of Vetter's Scalable HeterOgeneous Computing benchmark suite. Learn more via the OpenBenchmarking.org test page.

SHOC Scalable HeterOgeneous Computing 2015-11-10 - Target: OpenCL - Benchmark: FFT SP (GFLOPS, More Is Better)
Ice Lake: 141.04 (SE +/- 0.28, N = 11)
1. (CXX) g++ options: -O2 -lSHOCCommonMPI -lSHOCCommonOpenCL -lSHOCCommon -lOpenCL -lrt -pthread -lmpi_cxx -lmpi

oneAPI Level Zero Tests

This test profile benchmarks the collection of Intel oneAPI Level Zero Tests. Learn more via the OpenBenchmarking.org test page.

oneAPI Level Zero Tests - Test: Peak Kernel Launch Latency (us, Fewer Is Better)
Ice Lake: 26.97 (SE +/- 0.06, N = 12)
1. (CXX) g++ options: -ldl -pthread

SHOC Scalable HeterOgeneous Computing

The CUDA and OpenCL version of Vetter's Scalable HeterOgeneous Computing benchmark suite. Learn more via the OpenBenchmarking.org test page.

SHOC Scalable HeterOgeneous Computing 2015-11-10 - Target: OpenCL - Benchmark: Bus Speed Download (GB/s, More Is Better)
Ice Lake: 39.54 (SE +/- 0.09, N = 13)
1. (CXX) g++ options: -O2 -lSHOCCommonMPI -lSHOCCommonOpenCL -lSHOCCommon -lOpenCL -lrt -pthread -lmpi_cxx -lmpi

SHOC Scalable HeterOgeneous Computing 2015-11-10 - Target: OpenCL - Benchmark: Bus Speed Readback (GB/s, More Is Better)
Ice Lake: 41.21 (SE +/- 0.14, N = 12)
1. (CXX) g++ options: -O2 -lSHOCCommonMPI -lSHOCCommonOpenCL -lSHOCCommon -lOpenCL -lrt -pthread -lmpi_cxx -lmpi

FinanceBench

FinanceBench is a collection of financial program benchmarks with support for benchmarking on the GPU. Learn more via the OpenBenchmarking.org test page.

FinanceBench 2016-06-06 - Benchmark: Black-Scholes OpenCL (ms, Fewer Is Better)
Ice Lake: 6.434 (SE +/- 0.111, N = 15)
1. (CXX) g++ options: -O3 -lOpenCL

CPU Power Consumption Monitor

CPU Power Consumption Monitor - Phoronix Test Suite System Monitoring (Watts)
Ice Lake: Min: 0.13 / Avg: 13.97 / Max: 41.53

73 Results Shown

PlaidML
RealSR-NCNN
SHOC Scalable HeterOgeneous Computing
oneAPI Level Zero Tests
LeelaChessZero
PlaidML
NCNN:
  Vulkan GPU - yolov4-tiny
  Vulkan GPU - resnet50
  Vulkan GPU - alexnet
  Vulkan GPU - resnet18
  Vulkan GPU - vgg16
  Vulkan GPU - googlenet
  Vulkan GPU - blazeface
  Vulkan GPU - efficientnet-b0
  Vulkan GPU - mnasnet
  Vulkan GPU - shufflenet-v2
  Vulkan GPU-v3-v3 - mobilenet-v3
  Vulkan GPU-v2-v2 - mobilenet-v2
  Vulkan GPU - mobilenet
  Vulkan GPU - squeezenet
PlaidML
Unigine Heaven
PlaidML
Unigine Superposition
PlaidML
Unigine Valley
Unigine Superposition
PlaidML
oneAPI Level Zero Tests
RealSR-NCNN
Xonotic
ET: Legacy
Xonotic
PlaidML
Xonotic
GpuTest
Xonotic
GpuTest:
  Pixmark Piano - 1920 x 1200 - Fullscreen
  TessMark - 1920 x 1200 - Fullscreen
  Furmark - 1920 x 1200 - Fullscreen
  Pixmark Volplosion - 1920 x 1200 - Fullscreen
oneAPI Level Zero Tests:
  Peak System Memory Copy to Shared Memory
  Peak Half-Precision Compute
Waifu2x-NCNN Vulkan
JuliaGPU
cl-mem:
  Read
  Write
  Copy
MandelGPU
PlaidML
SHOC Scalable HeterOgeneous Computing
oneAPI Level Zero Tests
Tesseract
clpeak:
  Single-Precision Float
  Global Memory Bandwidth
SHOC Scalable HeterOgeneous Computing
oneAPI Level Zero Tests:
  Host-To-Device-To-Host Image Copy
  Host-To-Device Bandwidth
  Host-To-Device Bandwidth
  Device-To-Host Bandwidth
  Device-To-Host Bandwidth
clpeak
Waifu2x-NCNN Vulkan
FinanceBench
SHOC Scalable HeterOgeneous Computing
clpeak
ViennaCL
SHOC Scalable HeterOgeneous Computing
oneAPI Level Zero Tests
SHOC Scalable HeterOgeneous Computing:
  OpenCL - Bus Speed Download
  OpenCL - Bus Speed Readback
FinanceBench
CPU Power Consumption Monitor