Tiger Lake CPU Security Mitigations

Tests for a future article.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2010267-FI-MIT49760730
Test categories represented in this comparison:

Web Browsers: 1 test
Timed Code Compilation: 3 tests
C/C++ Compiler Tests: 5 tests
CPU Massive: 11 tests
Creator Workloads: 12 tests
Database Test Suite: 5 tests
Disk Test Suite: 2 tests
Go Language Tests: 2 tests
HPC - High Performance Computing: 7 tests
Imaging: 7 tests
Java: 2 tests
Common Kernel Benchmarks: 8 tests
Machine Learning: 6 tests
Multi-Core: 4 tests
Networking Test Suite: 2 tests
NVIDIA GPU Compute: 2 tests
Productivity: 5 tests
Programmer / Developer System Benchmarks: 8 tests
Python: 2 tests
Server: 5 tests
Server CPU Tests: 9 tests
Single-Threaded: 5 tests
Speech: 3 tests
Telephony: 3 tests


Run Management

Highlight
Result
Hide
Result
Result
Identifier
View Logs
Performance Per
Dollar
Date
Run
  Test
  Duration
Default
October 22 2020
  11 Hours, 52 Minutes
mitigations=off
October 23 2020
  12 Hours, 28 Minutes
Ice Lake: Default
October 24 2020
  15 Hours, 39 Minutes
Ice Lake: mitigations=off
October 25 2020
  18 Hours, 8 Minutes
Invert Hiding All Results Option
  14 Hours, 32 Minutes

Only show results where is faster than
Only show results matching title/arguments (delimit multiple options with a comma):
Do not show results matching title/arguments (delimit multiple options with a comma):


System Details

Tiger Lake system (runs: Default, mitigations=off):
  Processor: Intel Core i7-1165G7 @ 4.70GHz (4 Cores / 8 Threads)
  Motherboard: Dell 0GG9PT (1.0.3 BIOS)
  Chipset: Intel Tiger Lake-LP
  Disk: Kioxia KBG40ZNS256G NVMe 256GB
  Graphics: Intel UHD 3GB (1300MHz)
  Network: Intel Wi-Fi 6 AX201

Ice Lake system (runs: Ice Lake: Default, Ice Lake: mitigations=off):
  Processor: Intel Core i7-1065G7 @ 3.90GHz (4 Cores / 8 Threads)
  Motherboard: Dell 06CDVY (1.0.9 BIOS)
  Chipset: Intel Device 34ef
  Disk: Toshiba KBG40ZPZ512G NVMe 512GB
  Graphics: Intel Iris Plus G7 3GB (1100MHz)
  Network: Intel Killer Wi-Fi 6 AX1650i 160MHz

Common to all runs:
  Memory: 16GB
  Audio: Realtek ALC289
  OS: Ubuntu 20.10
  Kernel: 5.8.0-25-generic (x86_64)
  Desktop: GNOME Shell 3.38.1
  Display Server: X Server 1.20.9
  Display Driver: modesetting 1.20.9
  OpenGL: 4.6 Mesa 20.2.1
  Vulkan: 1.2.145
  Compiler: GCC 10.2.0
  File-System: ext4
  Screen Resolution: 1920x1200

Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Disk Details: NONE / errors=remount-ro,relatime,rw

Processor Details:
  Default: Scaling Governor: intel_pstate powersave - CPU Microcode: 0x60 - Thermald 2.3
  mitigations=off: Scaling Governor: intel_pstate powersave - CPU Microcode: 0x60 - Thermald 2.3
  Ice Lake: Default: Scaling Governor: intel_pstate powersave - CPU Microcode: 0x78 - Thermald 2.3
  Ice Lake: mitigations=off: Scaling Governor: intel_pstate powersave - CPU Microcode: 0x78 - Thermald 2.3

Java Details: OpenJDK Runtime Environment (build 11.0.9+10-post-Ubuntu-0ubuntu1)

Python Details: Python 3.8.6

Security Details:
  Default: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
  mitigations=off: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Vulnerable + spectre_v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers + spectre_v2: Vulnerable IBPB: disabled STIBP: disabled + srbds: Not affected + tsx_async_abort: Not affected
  Ice Lake: Default: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
  Ice Lake: mitigations=off: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Vulnerable + spectre_v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers + spectre_v2: Vulnerable IBPB: disabled STIBP: disabled + srbds: Not affected + tsx_async_abort: Not affected

[Result overview chart (Phoronix Test Suite): normalized relative performance of the four runs, spanning roughly 100% to 238%, across SQLite, LevelDB, FS-Mark, ctx_clock, eSpeak-NG Speech Engine, Sockperf, Stress-NG, Zstd Compression, Timed Apache Compilation, Timed Linux Kernel Compilation, DaCapo Benchmark, Timed GDB GNU Debugger Compilation, G'MIC, Renaissance, ASTC Encoder, LibreOffice, GEGL, RawTherapee, WireGuard + Linux Networking Stack Stress Test, RNNoise, TensorFlow Lite, Darktable, Mobile Neural Network, OSBench, SQLite Speedtest, PyBench, PyPerformance, Selenium, Facebook RocksDB, librsvg, GIMP, LibRaw, NCNN, Tesseract OCR, DeepSpeech, Ethr, Git, GNU Octave Benchmark, and Caffe.]

[Condensed results table: the full side-by-side numeric results for every test and sub-test across the four runs. The per-cell values did not survive this export intact; individual result charts follow below.]

SQLite

This is a simple benchmark of SQLite. At present this test profile just measures the time to perform a pre-defined number of insertions on an indexed database. Learn more via the OpenBenchmarking.org test page.
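The insertion-timing idea behind this profile can be sketched in a few lines of Python. This is a simplified stand-in, not the actual test profile: it uses an in-memory database and a hypothetical workload size rather than the profile's fixed, on-disk workload.

```python
import sqlite3
import time

def timed_insertions(n_rows: int) -> float:
    """Time n_rows insertions into an indexed table, mirroring the
    measurement idea of the SQLite test profile (simplified sketch)."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, payload TEXT)")
    # The index is what makes inserts progressively more expensive.
    con.execute("CREATE INDEX idx_payload ON t (payload)")
    start = time.perf_counter()
    con.executemany(
        "INSERT INTO t (payload) VALUES (?)",
        ((f"row-{i}",) for i in range(n_rows)),
    )
    con.commit()
    elapsed = time.perf_counter() - start
    con.close()
    return elapsed
```

The interesting variable in this article is not the query plan but how often the kernel is entered (fsync, page-cache writes), which is why SQLite is so sensitive to mitigation overhead.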

SQLite 3.30.1, Threads / Copies: 8 (Seconds, Fewer Is Better):
  Default                      99.14  (SE +/- 0.09, N = 3; Min 99.04 / Max 99.32)
  Ice Lake: Default           106.75  (SE +/- 1.42, N = 3; Min 103.9 / Max 108.2)
  Ice Lake: mitigations=off   107.50  (SE +/- 0.90, N = 3; Min 105.79 / Max 108.84)
  mitigations=off             256.20  (SE +/- 0.46, N = 3; Min 255.39 / Max 256.97)
  (CC) gcc options: -O2 -lz -lm -ldl -lpthread

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM, with workloads ranging from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.10.0, Test: Twitter HTTP Requests (ms, Fewer Is Better):
  Default                     2487.37  (SE +/- 10.30, N = 5; Min 2462.06 / Max 2511.52)
  Ice Lake: Default           3842.16  (SE +/- 45.02, N = 5; Min 3669.4 / Max 3918.64)
  Ice Lake: mitigations=off   3525.29  (SE +/- 33.64, N = 25; Min 3285.93 / Max 3843.65)
  mitigations=off             2533.81  (SE +/- 11.39, N = 5; Min 2511.41 / Max 2569.64)

ctx_clock

Ctx_clock is a simple test program to measure the context switch time in clock cycles. Learn more via the OpenBenchmarking.org test page.
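ctx_clock itself reads the CPU timestamp counter around a forced context switch and reports raw clock cycles. A portable approximation of the same effect is to ping-pong a byte between two processes over pipes; each round trip forces at least two switches. This is a hypothetical POSIX-only sketch (it uses os.fork, so it will not run on Windows), and it includes syscall overhead that ctx_clock excludes.

```python
import os
import time

def pingpong_roundtrip_ns(rounds: int = 2000) -> float:
    """Estimate context-switch cost by ping-ponging one byte between
    a parent and a forked child over two pipes. Returns the mean
    round-trip time in nanoseconds (roughly two switches per trip)."""
    p2c_r, p2c_w = os.pipe()   # parent -> child
    c2p_r, c2p_w = os.pipe()   # child -> parent
    pid = os.fork()
    if pid == 0:
        # Child: echo every byte straight back.
        for _ in range(rounds):
            os.read(p2c_r, 1)
            os.write(c2p_w, b"x")
        os._exit(0)
    start = time.perf_counter_ns()
    for _ in range(rounds):
        os.write(p2c_w, b"x")
        os.read(c2p_r, 1)
    elapsed = time.perf_counter_ns() - start
    os.waitpid(pid, 0)
    return elapsed / rounds
```

This metric is a near-direct probe of mitigation cost: KPTI-style page-table switching and IBRS/IBPB work happen on exactly this kernel entry/exit path.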

ctx_clock, Context Switch Time (Clocks, Fewer Is Better):
  Default                     128  (SE +/- 1.20, N = 3; Min 126 / Max 130)
  Ice Lake: Default            83  (SE +/- 1.33, N = 3; Min 80 / Max 84)
  Ice Lake: mitigations=off    83  (SE +/- 0.67, N = 3; Min 82 / Max 84)
  mitigations=off             127  (SE +/- 1.53, N = 3; Min 125 / Max 130)

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07, Test: Malloc (Bogo Ops/s, More Is Better):
  Default                     31930889.51  (SE +/- 309611.94, N = 3; Min 31595618.05 / Max 32549383.8)
  Ice Lake: Default           21289487.35  (SE +/- 206722.39, N = 3; Min 21067814.84 / Max 21702562.6)
  Ice Lake: mitigations=off   21342273.20  (SE +/- 157319.06, N = 15; Min 20999628.09 / Max 23237262.92)
  mitigations=off             31273086.72  (SE +/- 346720.57, N = 3; Min 30908682.14 / Max 31966221.91)
  (CC) gcc options: -O2 -std=gnu99 -lm -lcrypt -lrt -lz -ldl -lpthread -lc

Caffe

This is a benchmark of the Caffe deep learning framework; it currently supports the AlexNet and GoogleNet models with execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.

Caffe 2020-02-13, Model: AlexNet - Acceleration: CPU - Iterations: 100 (Milli-Seconds, Fewer Is Better):
  Default                      84931  (SE +/- 561.91, N = 3; Min 83813 / Max 85589)
  Ice Lake: Default            66912  (SE +/- 396.22, N = 3; Min 66145 / Max 67468)
  Ice Lake: mitigations=off    64140  (SE +/- 873.74, N = 3; Min 62393 / Max 65059)
  mitigations=off              89846  (SE +/- 434.17, N = 3; Min 88986 / Max 90380)
  (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

InfluxDB

This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.

InfluxDB 1.8.2, Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, More Is Better):
  Default                     869356.1  (SE +/- 2712.83, N = 3; Min 865387.4 / Max 874544.4)
  Ice Lake: mitigations=off   627537.6  (SE +/- 6927.38, N = 12; Min 597046.6 / Max 660271.8)
  mitigations=off             774218.3  (SE +/- 11766.77, N = 3; Min 752154.4 / Max 792339.6)
  (No result was recorded for Ice Lake: Default.)

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL, Operation: Scale (Seconds, Fewer Is Better):
  Default                     6.035  (SE +/- 0.054, N = 15; Min 5.53 / Max 6.35)
  Ice Lake: Default           8.172  (SE +/- 0.084, N = 12; Min 7.26 / Max 8.36)
  Ice Lake: mitigations=off   8.360  (SE +/- 0.082, N = 15; Min 7.38 / Max 8.97)
  mitigations=off             6.205  (SE +/- 0.051, N = 13; Min 5.68 / Max 6.33)

Sockperf

This is a network socket API performance benchmark. Learn more via the OpenBenchmarking.org test page.

Sockperf 3.4, Test: Throughput (Messages Per Second, More Is Better):
  Default                     767117  (SE +/- 9040.07, N = 25; Min 632628 / Max 801744)
  Ice Lake: Default           554537  (SE +/- 2421.79, N = 5; Min 547029 / Max 560837)
  Ice Lake: mitigations=off   582151  (SE +/- 6985.56, N = 5; Min 559547 / Max 596407)
  mitigations=off             738032  (SE +/- 8183.20, N = 5; Min 713039 / Max 752112)
  (CXX) g++ options: --param -O3 -rdynamic -ldl -lpthread

G'MIC

G'MIC is an open-source framework for image processing. Learn more via the OpenBenchmarking.org test page.

G'MIC, Test: 2D Function Plotting, 1000 Times (Seconds, Fewer Is Better):
  Default                      96.95  (SE +/- 0.35, N = 3; Min 96.43 / Max 97.61)
  Ice Lake: Default           131.59  (SE +/- 1.69, N = 4; Min 128.78 / Max 136.1)
  Ice Lake: mitigations=off   132.09  (SE +/- 1.58, N = 12; Min 126.6 / Max 143.73)
  mitigations=off              99.94  (SE +/- 0.53, N = 3; Min 99.02 / Max 100.85)
  Version 2.4.5, Copyright (c) 2008-2019, David Tschumperle.

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers. Learn more via the OpenBenchmarking.org test page.

Selenium, Benchmark: Kraken - Browser: Firefox (ms, Fewer Is Better):
  Default                     671.9  (SE +/- 1.35, N = 3; Min 670.2 / Max 674.6)
  Ice Lake: Default           830.7  (SE +/- 1.20, N = 3; Min 828.3 / Max 832.2)
  Ice Lake: mitigations=off   752.7  (SE +/- 1.72, N = 3; Min 749.5 / Max 755.4)
  mitigations=off             610.5  (SE +/- 0.73, N = 3; Min 609.2 / Max 611.7)
  Browser versions: firefox 81.0.2 (Default, mitigations=off); firefox 82.0 (Ice Lake runs)

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL, Operation: Crop (Seconds, Fewer Is Better):
  Default                     7.362  (SE +/- 0.018, N = 3; Min 7.33 / Max 7.39)
  Ice Lake: Default           9.591  (SE +/- 0.077, N = 15; Min 8.87 / Max 9.79)
  Ice Lake: mitigations=off   9.921  (SE +/- 0.104, N = 12; Min 9.06 / Max 10.65)
  mitigations=off             7.648  (SE +/- 0.062, N = 3; Min 7.57 / Max 7.77)

Sockperf

This is a network socket API performance benchmark. Learn more via the OpenBenchmarking.org test page.
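The latency ping-pong measurement can be approximated with plain sockets. This loopback UDP sketch uses a hypothetical helper name and skips Sockperf's warm-up phase and percentile reporting; it simply reports the mean round-trip time in microseconds.

```python
import socket
import threading
import time

def udp_pingpong_usec(rounds: int = 1000) -> float:
    """Mean round-trip latency of a loopback UDP ping-pong, in usec.
    A simplified stand-in for Sockperf's 'Latency Ping Pong' test."""
    server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    server.bind(("127.0.0.1", 0))          # ephemeral port
    addr = server.getsockname()

    def echo():
        # Echo each datagram straight back to its sender.
        for _ in range(rounds):
            data, peer = server.recvfrom(64)
            server.sendto(data, peer)

    t = threading.Thread(target=echo, daemon=True)
    t.start()

    client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    client.connect(addr)
    start = time.perf_counter()
    for _ in range(rounds):
        client.send(b"ping")
        client.recv(64)
    elapsed = time.perf_counter() - start
    t.join()
    client.close()
    server.close()
    return elapsed / rounds * 1e6
```

Each round trip here is four syscalls plus scheduler wakeups, which is exactly the path Spectre/Meltdown mitigations tax, so socket latency tests tend to show some of the largest deltas in this article.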

Sockperf 3.4, Test: Latency Ping Pong (usec, Fewer Is Better):
  Default                     2.861  (SE +/- 0.009, N = 5; Min 2.84 / Max 2.89)
  Ice Lake: Default           3.721  (SE +/- 0.020, N = 5; Min 3.68 / Max 3.79)
  Ice Lake: mitigations=off   3.557  (SE +/- 0.034, N = 25; Min 3.44 / Max 4.09)
  mitigations=off             2.804  (SE +/- 0.028, N = 8; Min 2.67 / Max 2.88)
  (CXX) g++ options: --param -O3 -rdynamic -ldl -lpthread

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers. Learn more via the OpenBenchmarking.org test page.

Selenium, Benchmark: Octane - Browser: Firefox (Geometric Mean, More Is Better):
  Default                     39987  (SE +/- 455.76, N = 3; Min 39385 / Max 40881)
  Ice Lake: Default           34398  (SE +/- 70.32, N = 3; Min 34285 / Max 34527)
  Ice Lake: mitigations=off   39712  (SE +/- 94.24, N = 3; Min 39578 / Max 39894)
  mitigations=off             45545  (SE +/- 534.74, N = 3; Min 44892 / Max 46605)
  Browser versions: firefox 81.0.2 (Default, mitigations=off); firefox 82.0 (Ice Lake runs)

Selenium, Benchmark: Jetstream - Browser: Firefox (Score, More Is Better):
  Default                     237.75  (SE +/- 0.24, N = 3; Min 237.28 / Max 238)
  Ice Lake: Default           199.70  (SE +/- 0.17, N = 3; Min 199.47 / Max 200.02)
  Ice Lake: mitigations=off   224.67  (SE +/- 0.29, N = 3; Min 224.11 / Max 225.09)
  mitigations=off             263.99  (SE +/- 0.84, N = 3; Min 262.63 / Max 265.53)
  Browser versions: firefox 81.0.2 (Default, mitigations=off); firefox 82.0 (Ice Lake runs)

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.
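The scoring here is simple throughput: bytes in divided by wall time. A minimal sketch of that calculation follows; zlib stands in because Python's standard library has no zstd bindings, so only the methodology (not the codec or its absolute numbers) matches the test profile.

```python
import time
import zlib

def compression_mb_per_s(data: bytes, level: int = 3) -> float:
    """Compress a buffer once and report throughput in MB/s,
    mirroring how the Zstd test profile scores compression
    (zlib used as a stdlib stand-in for the zstd codec)."""
    start = time.perf_counter()
    zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    return len(data) / elapsed / 1e6
```

Compression is almost entirely userspace compute, which is why the Zstd results in this comparison move far less between mitigation settings than the syscall-heavy tests.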

Zstd Compression 1.4.5, Compression Level: 3 (MB/s, More Is Better):
  Default                     4153.2  (SE +/- 13.27, N = 3; Min 4127.9 / Max 4172.8)
  Ice Lake: Default           3179.3  (SE +/- 5.57, N = 3; Min 3169.5 / Max 3188.8)
  Ice Lake: mitigations=off   3165.2  (SE +/- 7.13, N = 3; Min 3152.8 / Max 3177.5)
  mitigations=off             4093.1  (SE +/- 4.52, N = 3; Min 4086.6 / Max 4101.8)
  (CC) gcc options: -O3 -pthread -lz -llzma

Timed Apache Compilation

This test times how long it takes to build the Apache HTTPD web server. Learn more via the OpenBenchmarking.org test page.

Timed Apache Compilation 2.4.41, Time To Compile (Seconds, Fewer Is Better):
  Default                     33.65  (SE +/- 0.30, N = 3; Min 33.05 / Max 33.95)
  Ice Lake: Default           44.12  (SE +/- 0.64, N = 4; Min 42.64 / Max 45.73)
  Ice Lake: mitigations=off   43.62  (SE +/- 0.48, N = 3; Min 42.68 / Max 44.26)
  mitigations=off             35.08  (SE +/- 0.32, N = 3; Min 34.46 / Max 35.47)

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 5.4, Time To Compile (Seconds, Fewer Is Better):
  Default                     252.14  (SE +/- 0.66, N = 3; Min 251.38 / Max 253.46)
  Ice Lake: Default           324.81  (SE +/- 0.97, N = 3; Min 323.13 / Max 326.5)
  Ice Lake: mitigations=off   329.50  (SE +/- 1.55, N = 3; Min 327.4 / Max 332.53)
  mitigations=off             255.55  (SE +/- 0.61, N = 3; Min 254.85 / Max 256.77)

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers. Learn more via the OpenBenchmarking.org test page.

Selenium, Benchmark: WASM collisionDetection - Browser: Firefox (ms, Fewer Is Better):
  Default                     328.1  (SE +/- 3.80, N = 3; Min 323.4 / Max 335.6)
  Ice Lake: Default           415.4  (SE +/- 2.08, N = 3; Min 411.8 / Max 419)
  Ice Lake: mitigations=off   417.5  (SE +/- 1.82, N = 3; Min 413.9 / Max 419.5)
  mitigations=off             320.9  (SE +/- 0.56, N = 3; Min 320.1 / Max 322)
  Browser versions: firefox 81.0.2 (Default, mitigations=off); firefox 82.0 (Ice Lake runs)

Timed GDB GNU Debugger Compilation

This test times how long it takes to build the GNU Debugger (GDB) in a default configuration. Learn more via the OpenBenchmarking.org test page.

Timed GDB GNU Debugger Compilation 9.1, Time To Compile (Seconds, Fewer Is Better):
  Default                     168.62  (SE +/- 0.25, N = 3; Min 168.36 / Max 169.12)
  Ice Lake: Default           212.57  (SE +/- 0.47, N = 3; Min 211.69 / Max 213.28)
  Ice Lake: mitigations=off   217.67  (SE +/- 2.35, N = 3; Min 214.12 / Max 222.1)
  mitigations=off             173.77  (SE +/- 0.29, N = 3; Min 173.21 / Max 174.18)

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.
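The Forking stressor's bogo-ops metric counts children created and reaped per second. A minimal POSIX-only sketch of that loop (hypothetical function name; stress-ng itself runs multiple stressor workers in C):

```python
import os
import time

def forks_per_second(duration: float = 0.5) -> float:
    """Fork-and-reap loop approximating the Stress-NG 'Forking'
    stressor: each bogo op is one child forked and waited on."""
    end = time.perf_counter() + duration
    ops = 0
    while time.perf_counter() < end:
        pid = os.fork()
        if pid == 0:
            os._exit(0)        # child exits immediately
        os.waitpid(pid, 0)     # reap before the next fork
        ops += 1
    return ops / duration
```

fork() is dominated by kernel work (page-table setup, scheduling), so like ctx_clock it isolates mitigation overhead from userspace compute.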

Stress-NG 0.11.07, Test: Forking (Bogo Ops/s, More Is Better):
  Default                     38181.93  (SE +/- 344.79, N = 3; Min 37612.54 / Max 38803.49)
  Ice Lake: Default           29707.30  (SE +/- 369.67, N = 3; Min 29039.99 / Max 30316.6)
  Ice Lake: mitigations=off   30270.52  (SE +/- 276.55, N = 3; Min 29905.28 / Max 30812.85)
  mitigations=off             35111.57  (SE +/- 220.10, N = 3; Min 34796.09 / Max 35535.18)
  (CC) gcc options: -O2 -std=gnu99 -lm -lcrypt -lrt -lz -ldl -lpthread -lc

Caffe

This is a benchmark of the Caffe deep learning framework; it currently supports the AlexNet and GoogleNet models with execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.

Caffe 2020-02-13, Model: GoogleNet - Acceleration: CPU - Iterations: 100 (Milli-Seconds, Fewer Is Better):
  Default                     216420  (SE +/- 393.32, N = 3; Min 215784 / Max 217139)
  Ice Lake: Default           179934  (SE +/- 269.90, N = 3; Min 179544 / Max 180452)
  Ice Lake: mitigations=off   176908  (SE +/- 739.79, N = 3; Min 175487 / Max 177975)
  mitigations=off             226788  (SE +/- 351.65, N = 3; Min 226174 / Max 227392)
  (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers. Learn more via the OpenBenchmarking.org test page.

Selenium, Benchmark: CanvasMark - Browser: Firefox (Score, More Is Better):
  Default                     14748  (SE +/- 69.48, N = 3; Min 14620 / Max 14859)
  Ice Lake: Default           11966  (SE +/- 98.75, N = 3; Min 11820 / Max 12154)
  Ice Lake: mitigations=off   12286  (SE +/- 38.97, N = 3; Min 12219 / Max 12354)
  mitigations=off             15233  (SE +/- 65.36, N = 3; Min 15147 / Max 15361)
  Browser versions: firefox 81.0.2 (Default, mitigations=off); firefox 82.0 (Ice Lake runs)

Selenium, Benchmark: ARES-6 - Browser: Firefox (ms, fewer is better)
  Default:                   35.95 (SE +/- 0.37, N = 3; min 35.37 / max 36.63)
  Ice Lake: Default:         43.56 (SE +/- 0.19, N = 3; min 43.30 / max 43.93)
  Ice Lake: mitigations=off: 40.09 (SE +/- 0.32, N = 3; min 39.57 / max 40.68)
  mitigations=off:           34.22 (SE +/- 0.31, N = 3; min 33.80 / max 34.82)
  Browsers: firefox 81.0.2 (Default, mitigations=off); firefox 82.0 (Ice Lake runs)

Darktable

Darktable is an open-source photography / workflow application. This test will use any system-installed Darktable program; on Windows it will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.2.1, Test: Masskrug - Acceleration: CPU-only (Seconds, fewer is better)
  Default:                   9.141 (SE +/- 0.128, N = 13; min 7.64 / max 9.51)
  Ice Lake: Default:         11.618 (SE +/- 0.109, N = 12; min 10.43 / max 11.76)
  Ice Lake: mitigations=off: 11.571 (SE +/- 0.087, N = 15; min 10.45 / max 11.77)
  mitigations=off:           9.332 (SE +/- 0.139, N = 12; min 7.81 / max 9.54)

ASTC Encoder

ASTC Encoder (astcenc) is an encoder for the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile exercises both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.0, Preset: Thorough (Seconds, fewer is better)
  Default:                   87.10 (SE +/- 0.46, N = 3; min 86.22 / max 87.75)
  Ice Lake: Default:         110.44 (SE +/- 0.42, N = 3; min 109.61 / max 110.87)
  Ice Lake: mitigations=off: 110.28 (SE +/- 0.53, N = 3; min 109.22 / max 110.91)
  mitigations=off:           88.49 (SE +/- 0.39, N = 3; min 87.71 / max 88.92)
  1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

ASTC Encoder 2.0, Preset: Medium (Seconds, fewer is better)
  Default:                   11.75 (SE +/- 0.08, N = 15; min 10.70 / max 12.18)
  Ice Lake: Default:         14.85 (SE +/- 0.15, N = 8; min 13.77 / max 15.03)
  Ice Lake: mitigations=off: 14.84 (SE +/- 0.15, N = 8; min 13.79 / max 15.01)
  mitigations=off:           11.88 (SE +/- 0.10, N = 12; min 10.76 / max 12.03)
  1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0, Benchmark: python_startup (Milliseconds, fewer is better)
  Default:                   6.36 (SE +/- 0.02, N = 3; min 6.33 / max 6.38)
  Ice Lake: Default:         8.00 (SE +/- 0.02, N = 3; min 7.97 / max 8.02)
  Ice Lake: mitigations=off: 7.87 (SE +/- 0.01, N = 3; min 7.86 / max 7.88)
  mitigations=off:           6.67 (SE +/- 0.02, N = 3; min 6.64 / max 6.69)
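
The python_startup number above is pure interpreter launch latency. As a rough illustration of what it measures, the sketch below times bare `python -c pass` subprocess launches; this approximates the idea, it is not PyPerformance's actual harness:

```python
# Sketch: approximate Python interpreter startup time by launching
# `python -c pass` subprocesses and timing them (assumption: subprocess
# launch overhead is dominated by interpreter startup).
import subprocess
import sys
import time

def startup_ms(runs=5):
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        subprocess.run([sys.executable, "-c", "pass"], check=True)
        samples.append((time.perf_counter() - t0) * 1000.0)
    return min(samples)  # best-of-N filters out scheduler noise

if __name__ == "__main__":
    print(f"python_startup: {startup_ms():.2f} ms")
```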

Selenium


Selenium, Benchmark: Jetstream 2 - Browser: Firefox (Score, more is better)
  Default:                   108.42 (SE +/- 0.86, N = 3; min 106.71 / max 109.40)
  Ice Lake: Default:         91.65 (SE +/- 0.36, N = 3; min 90.93 / max 92.03)
  Ice Lake: mitigations=off: 99.72 (SE +/- 0.55, N = 3; min 98.68 / max 100.58)
  mitigations=off:           115.08 (SE +/- 1.47, N = 5; min 112.24 / max 119.73)
  Browsers: firefox 81.0.2 (Default, mitigations=off); firefox 82.0 (Ice Lake runs)

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM, with workloads ranging from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.10.0, Test: Apache Spark ALS (ms, fewer is better)
  Default:                   3187.11 (SE +/- 17.71, N = 5; min 3132.72 / max 3231.56)
  Ice Lake: Default:         3982.65 (SE +/- 36.06, N = 20; min 3638.03 / max 4222.22)
  Ice Lake: mitigations=off: 3873.67 (SE +/- 43.24, N = 18; min 3539.65 / max 4153.31)
  mitigations=off:           3388.54 (SE +/- 34.32, N = 25; min 3003.66 / max 3702.41)

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07, Test: Context Switching (Bogo Ops/s, more is better)
  Default:                   1413871.94 (SE +/- 19012.48, N = 3; min 1386928.15 / max 1450580.61)
  Ice Lake: Default:         1131793.83 (SE +/- 17116.57, N = 3; min 1107449.86 / max 1164809.49)
  Ice Lake: mitigations=off: 1251244.80 (SE +/- 16138.47, N = 3; min 1226301.61 / max 1281456.98)
  mitigations=off:           1250676.61 (SE +/- 13179.93, N = 7; min 1206267.88 / max 1297748.64)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lcrypt -lrt -lz -ldl -lpthread -lc
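
Context switching is one of the most mitigation-sensitive workloads because every switch crosses the kernel boundary. A minimal illustration of the same idea (not Stress-NG's implementation) is two processes ping-ponging a byte over a pair of pipes:

```python
# Sketch: a context-switching micro-benchmark in the spirit of Stress-NG's
# "Context Switching" test. Two processes ping-pong one byte over pipes,
# forcing a kernel entry (and mitigation overhead, when enabled) on each hop.
# POSIX-only: relies on os.fork().
import os
import time

def switches_per_sec(iterations=20_000):
    r1, w1 = os.pipe()
    r2, w2 = os.pipe()
    pid = os.fork()
    if pid == 0:  # child: echo every byte back
        for _ in range(iterations):
            os.read(r1, 1)
            os.write(w2, b"x")
        os._exit(0)
    t0 = time.perf_counter()
    for _ in range(iterations):
        os.write(w1, b"x")
        os.read(r2, 1)
    elapsed = time.perf_counter() - t0
    os.waitpid(pid, 0)
    return (2 * iterations) / elapsed  # two switches per round trip

if __name__ == "__main__":
    print(f"{switches_per_sec():,.0f} context switches/sec")
```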

Selenium


Selenium, Benchmark: WebXPRT - Browser: Google Chrome (Score, more is better)
  Default:                   281 (SE +/- 0.58, N = 3; min 280 / max 282)
  Ice Lake: Default:         225 (SE +/- 2.08, N = 3; min 222 / max 229)
  Ice Lake: mitigations=off: 227
  mitigations=off:           269 (SE +/- 1.00, N = 3; min 267 / max 270)
  Browser: chrome 86.0.4240.111

GEGL

GEGL is the Generic Graphics Library, the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL, Operation: Cartoon (Seconds, fewer is better)
  Default:                   78.38 (SE +/- 0.33, N = 3; min 77.77 / max 78.91)
  Ice Lake: Default:         92.52 (SE +/- 0.77, N = 3; min 91.14 / max 93.80)
  Ice Lake: mitigations=off: 97.56 (SE +/- 1.15, N = 3; min 95.33 / max 99.11)
  mitigations=off:           81.68 (SE +/- 0.18, N = 3; min 81.34 / max 81.96)

GIMP

GIMP is an open-source image manipulation program. This test profile will use the system-provided GIMP program; otherwise, on Windows, it relies upon a pre-packaged binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.18, Test: resize (Seconds, fewer is better)
  Default:                   9.366 (SE +/- 0.066, N = 15; min 8.55 / max 9.72)
  Ice Lake: Default:         11.644 (SE +/- 0.108, N = 12; min 10.57 / max 12.01)
  Ice Lake: mitigations=off: 11.475 (SE +/- 0.113, N = 9; min 10.57 / max 11.62)
  mitigations=off:           9.734 (SE +/- 0.100, N = 12; min 8.66 / max 9.98)

ASTC Encoder


ASTC Encoder 2.0, Preset: Fast (Seconds, fewer is better)
  Default:                   6.88 (SE +/- 0.06, N = 15; min 6.13 / max 7.15)
  Ice Lake: Default:         8.55 (SE +/- 0.12, N = 12; min 7.22 / max 8.84)
  Ice Lake: mitigations=off: 8.52 (SE +/- 0.10, N = 14; min 7.33 / max 8.82)
  mitigations=off:           7.10 (SE +/- 0.06, N = 15; min 6.30 / max 7.32)
  1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

Selenium


Selenium, Benchmark: WebXPRT - Browser: Firefox (Score, more is better)
  Default:                   288 (SE +/- 1.00, N = 2; min 287 / max 289)
  Ice Lake: Default:         241 (SE +/- 0.88, N = 3; min 239 / max 242)
  Ice Lake: mitigations=off: 264 (SE +/- 1.53, N = 3; min 261 / max 266)
  mitigations=off:           299 (SE +/- 0.33, N = 3; min 299 / max 300)
  Browsers: firefox 81.0.2 (Default, mitigations=off); firefox 82.0 (Ice Lake runs)

RawTherapee

RawTherapee is a cross-platform, open-source multi-threaded RAW image processing program. Learn more via the OpenBenchmarking.org test page.

RawTherapee, Total Benchmark Time (Seconds, fewer is better)
  Default:                   99.06 (SE +/- 0.40, N = 3; min 98.26 / max 99.48)
  Ice Lake: Default:         122.24 (SE +/- 1.39, N = 6; min 115.28 / max 123.78)
  Ice Lake: mitigations=off: 118.84 (SE +/- 1.04, N = 11; min 109.07 / max 120.66)
  mitigations=off:           101.52 (SE +/- 0.67, N = 3; min 100.21 / max 102.38)
  1. RawTherapee, version 5.8, command line.

Renaissance


Renaissance 0.10.0, Test: Random Forest (ms, fewer is better)
  Default:                   2138.85 (SE +/- 23.75, N = 25; min 1908.57 / max 2386.27)
  Ice Lake: Default:         2560.72 (SE +/- 24.37, N = 5; min 2497.77 / max 2612.35)
  Ice Lake: mitigations=off: 2633.24 (SE +/- 36.40, N = 15; min 2350.31 / max 2852.48)
  mitigations=off:           2187.74 (SE +/- 21.76, N = 8; min 2112.39 / max 2273.88)

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23, Model: Mobilenet Quant (Microseconds, fewer is better)
  Default:                   372045 (SE +/- 2509.46, N = 3; min 367028 / max 374674)
  Ice Lake: Default:         455175 (SE +/- 3093.36, N = 3; min 449010 / max 458706)
  Ice Lake: mitigations=off: 456401 (SE +/- 3375.54, N = 3; min 449652 / max 459909)
  mitigations=off:           394514 (SE +/- 2832.59, N = 3; min 388866 / max 397723)
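
The inference times above follow the usual warmup-then-average timing pattern. Below is a generic sketch of that pattern with a placeholder function standing in for the real model invocation; `fake_invoke` is hypothetical and no actual TensorFlow Lite API is used:

```python
# Sketch: the average-inference-time measurement pattern used by inference
# benchmarks. `fake_invoke` is a stand-in for a real model call (e.g. a
# TF Lite interpreter invocation); it just burns a little CPU.
import time

def fake_invoke():
    sum(i * i for i in range(10_000))  # placeholder workload

def avg_inference_us(invoke, warmup=3, runs=20):
    for _ in range(warmup):  # warm caches before timing
        invoke()
    t0 = time.perf_counter()
    for _ in range(runs):
        invoke()
    return (time.perf_counter() - t0) / runs * 1e6  # microseconds per run

if __name__ == "__main__":
    print(f"avg inference: {avg_inference_us(fake_invoke):.0f} us")
```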

GEGL


GEGL, Operation: Tile Glass (Seconds, fewer is better)
  Default:                   26.74 (SE +/- 0.28, N = 3; min 26.20 / max 27.13)
  Ice Lake: Default:         31.43 (SE +/- 0.45, N = 4; min 30.49 / max 32.66)
  Ice Lake: mitigations=off: 32.77 (SE +/- 0.37, N = 15; min 30.89 / max 35.21)
  mitigations=off:           28.13 (SE +/- 0.26, N = 3; min 27.72 / max 28.60)

Selenium


Selenium, Benchmark: WASM collisionDetection - Browser: Google Chrome (ms, fewer is better)
  Default:                   281.39 (SE +/- 0.17, N = 3; min 281.15 / max 281.72)
  Ice Lake: Default:         344.56 (SE +/- 0.58, N = 3; min 343.72 / max 345.66)
  Ice Lake: mitigations=off: 341.81 (SE +/- 0.24, N = 3; min 341.41 / max 342.26)
  mitigations=off:           283.51 (SE +/- 0.25, N = 3; min 283.21 / max 284.01)
  Browser: chrome 86.0.4240.111

RNNoise

RNNoise is a recurrent neural network for audio noise reduction developed by Mozilla and Xiph.Org. This test profile is a single-threaded test measuring the time to denoise a sample 26 minute long 16-bit RAW audio file using this recurrent neural network noise suppression library. Learn more via the OpenBenchmarking.org test page.

RNNoise 2020-06-28 (Seconds, fewer is better)
  Default:                   21.01 (SE +/- 0.06, N = 3; min 20.90 / max 21.09)
  Ice Lake: Default:         25.53 (SE +/- 0.01, N = 3; min 25.51 / max 25.55)
  Ice Lake: mitigations=off: 25.71 (SE +/- 0.19, N = 3; min 25.33 / max 25.98)
  mitigations=off:           21.69 (SE +/- 0.32, N = 3; min 21.09 / max 22.20)
  1. (CC) gcc options: -O2 -pedantic -fvisibility=hidden

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

Facebook RocksDB 6.3.6, Test: Random Read (Op/s, more is better)
  Default:                   15791136 (SE +/- 224908.53, N = 3; min 15437905 / max 16208941)
  Ice Lake: Default:         12925515 (SE +/- 123171.35, N = 15; min 12559704 / max 14496776)
  Ice Lake: mitigations=off: 13732080 (SE +/- 208149.62, N = 15; min 13179330 / max 16527585)
  mitigations=off:           15304072 (SE +/- 150814.30, N = 3; min 15125634 / max 15603897)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fno-builtin-memcmp -fno-rtti -rdynamic -lpthread
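
For intuition on what an Op/s random-read figure counts, here is the pattern reduced to a plain in-memory dict. RocksDB itself is a persistent on-disk store; this only illustrates the ops/sec accounting, not RocksDB's performance:

```python
# Sketch: the random-read benchmark pattern (pre-populate a store, then
# time N reads at random keys and report reads/elapsed as ops/sec),
# approximated with an in-memory dict rather than a real key-value store.
import random
import time

def random_read_ops(entries=100_000, reads=200_000):
    store = {i: b"v" * 16 for i in range(entries)}
    keys = [random.randrange(entries) for _ in range(reads)]
    t0 = time.perf_counter()
    hits = sum(1 for k in keys if store[k])  # every lookup hits
    elapsed = time.perf_counter() - t0
    assert hits == reads
    return reads / elapsed  # ops/sec

if __name__ == "__main__":
    print(f"{random_read_ops():,.0f} random reads/sec")
```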

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17, Model: MobileNetV2_224 (ms, fewer is better)
  Default:                   6.163 (SE +/- 0.008, N = 3; run avgs 6.15 to 6.18; inference min 6.07 / max 22.29)
  Ice Lake: Default:         7.487 (SE +/- 0.025, N = 3; run avgs 7.46 to 7.54; inference min 7.00 / max 25.22)
  Ice Lake: mitigations=off: 7.529 (SE +/- 0.011, N = 3; run avgs 7.52 to 7.55; inference min 6.99 / max 23.15)
  mitigations=off:           6.896 (SE +/- 0.060, N = 3; run avgs 6.83 to 7.02; inference min 6.03 / max 21.38)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2020-09-17, Model: resnet-v2-50 (ms, fewer is better)
  Default:                   54.54 (SE +/- 0.05, N = 3; run avgs 54.48 to 54.64; inference min 52.47 / max 71.35)
  Ice Lake: Default:         66.59 (SE +/- 0.34, N = 3; run avgs 66.23 to 67.27; inference min 62.42 / max 109.96)
  Ice Lake: mitigations=off: 63.44 (SE +/- 0.20, N = 3; run avgs 63.08 to 63.77; inference min 56.81 / max 91.45)
  mitigations=off:           57.67 (SE +/- 0.10, N = 3; run avgs 57.47 to 57.77; inference min 52.17 / max 88.00)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Ethr

Ethr is a cross-platform network performance measurement tool written in Go and developed by Microsoft, capable of testing multiple protocols and different measurements. Learn more via the OpenBenchmarking.org test page.

Ethr 2019-01-02, Server Address: localhost - Protocol: HTTP - Test: Bandwidth - Threads: 1 (Mbits/sec, more is better)
  Default:                   1384.21 (SE +/- 0.30, N = 3; min 1383.68 / max 1384.74; sample min 1380 / max 1400)
  Ice Lake: Default:         1645.61 (SE +/- 3.86, N = 3; min 1637.89 / max 1649.47; sample min 1630 / max 1660)
  Ice Lake: mitigations=off: 1688.07 (SE +/- 1.15, N = 3; min 1685.79 / max 1689.47; sample min 1670 / max 1700)
  mitigations=off:           1397.02 (SE +/- 1.23, N = 3; min 1394.74 / max 1398.95; sample min 1390 / max 1410)
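
The bandwidth measurement Ethr automates can be sketched as a sender/receiver thread pair over TCP loopback. This is illustrative only; Ethr's HTTP bandwidth test is more involved, and the MB-vs-Mbit unit conversion here is approximate:

```python
# Sketch: loopback TCP throughput measurement in the spirit of Ethr's
# localhost bandwidth test -- one thread drains a socket while the main
# thread pushes a fixed payload and times it.
import socket
import threading
import time

def loopback_mbits(total_mb=64, chunk=64 * 1024):
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))  # let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def sink():
        conn, _ = srv.accept()
        while conn.recv(chunk):  # drain until sender closes
            pass
        conn.close()

    t = threading.Thread(target=sink)
    t.start()
    cli = socket.create_connection(("127.0.0.1", port))
    payload = b"\0" * chunk
    t0 = time.perf_counter()
    for _ in range(total_mb * 1024 * 1024 // chunk):
        cli.sendall(payload)
    cli.close()
    t.join()
    srv.close()
    elapsed = time.perf_counter() - t0
    return total_mb * 8 / elapsed  # approx Mbits/sec (binary MB assumed)

if __name__ == "__main__":
    print(f"{loopback_mbits():.0f} Mbits/sec over loopback")
```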

InfluxDB

This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.

InfluxDB 1.8.2, Concurrent Streams: 64 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, more is better)
  Default:                   881790.2 (SE +/- 1914.13, N = 3; min 879405.3 / max 885576.1)
  Ice Lake: mitigations=off: 723413.2 (SE +/- 1358.93, N = 3; min 720712.5 / max 725027.4)
  mitigations=off:           860922.0 (SE +/- 4238.82, N = 3; min 856190.0 / max 869379.7)

TensorFlow Lite


TensorFlow Lite 2020-08-23, Model: NASNet Mobile (Microseconds, fewer is better)
  Default:                   406430 (SE +/- 2818.20, N = 3; min 400797 / max 409416)
  Ice Lake: Default:         494187 (SE +/- 3688.05, N = 3; min 486812 / max 497967)
  Ice Lake: mitigations=off: 495179 (SE +/- 2851.71, N = 3; min 489486 / max 498324)
  mitigations=off:           430449 (SE +/- 3141.38, N = 3; min 424167 / max 433691)

GEGL


GEGL, Operation: Antialias (Seconds, fewer is better)
  Default:                   33.87 (SE +/- 0.23, N = 3; min 33.41 / max 34.12)
  Ice Lake: Default:         41.11 (SE +/- 0.44, N = 15; min 40.01 / max 44.91)
  Ice Lake: mitigations=off: 40.44 (SE +/- 0.31, N = 3; min 39.81 / max 40.77)
  mitigations=off:           36.80 (SE +/- 0.59, N = 3; min 35.61 / max 37.46)

PyPerformance


PyPerformance 1.0.0, Benchmark: pickle_pure_python (Milliseconds, fewer is better)
  Default:                   334 (SE +/- 3.18, N = 3; min 328 / max 338)
  Ice Lake: Default:         405 (SE +/- 3.33, N = 3; min 398 / max 408)
  Ice Lake: mitigations=off: 399 (SE +/- 1.67, N = 3; min 396 / max 401)
  mitigations=off:           341 (SE +/- 4.70, N = 3; min 332 / max 347)

PyPerformance 1.0.0, Benchmark: chaos (Milliseconds, fewer is better)
  Default:                   83.2 (SE +/- 0.19, N = 3; min 82.8 / max 83.4)
  Ice Lake: Default:         100.8 (SE +/- 0.76, N = 3; min 99.4 / max 102.0)
  Ice Lake: mitigations=off: 98.3 (SE +/- 0.19, N = 3; min 97.9 / max 98.5)
  mitigations=off:           85.4 (SE +/- 0.61, N = 3; min 84.3 / max 86.4)

PyPerformance 1.0.0, Benchmark: django_template (Milliseconds, fewer is better)
  Default:                   38.2 (SE +/- 0.20, N = 3; min 37.8 / max 38.4)
  Ice Lake: Default:         46.2 (SE +/- 0.23, N = 3; min 45.8 / max 46.6)
  Ice Lake: mitigations=off: 44.9 (SE +/- 0.18, N = 3; min 44.6 / max 45.2)
  mitigations=off:           39.3 (SE +/- 0.35, N = 3; min 38.6 / max 39.8)

PyPerformance 1.0.0, Benchmark: pathlib (Milliseconds, fewer is better)
  Default:                   13.9 (SE +/- 0.09, N = 3; min 13.7 / max 14.0)
  Ice Lake: Default:         16.8 (SE +/- 0.09, N = 3; min 16.6 / max 16.9)
  Ice Lake: mitigations=off: 16.1 (SE +/- 0.03, N = 3; min 16.1 / max 16.2)
  mitigations=off:           14.2 (SE +/- 0.15, N = 3; min 13.9 / max 14.4)

TensorFlow Lite


TensorFlow Lite 2020-08-23, Model: Inception ResNet V2 (Microseconds, fewer is better)
  Default:                   7382540 (SE +/- 5564.33, N = 3; min 7371560 / max 7389600)
  Ice Lake: Default:         8917553 (SE +/- 2473.50, N = 3; min 8912610 / max 8920190)
  Ice Lake: mitigations=off: 8905090 (SE +/- 14703.15, N = 3; min 8875720 / max 8921040)
  mitigations=off:           7765500 (SE +/- 1537.93, N = 3; min 7762510 / max 7767620)

Selenium


Selenium, Benchmark: Kraken - Browser: Google Chrome (ms, fewer is better)
  Default:                   668.2 (SE +/- 1.55, N = 3; min 665.8 / max 671.1)
  Ice Lake: Default:         799.3 (SE +/- 2.47, N = 3; min 796.3 / max 804.2)
  Ice Lake: mitigations=off: 806.0 (SE +/- 3.34, N = 3; min 799.3 / max 809.6)
  mitigations=off:           701.8 (SE +/- 9.46, N = 3; min 688.4 / max 720.1)
  Browser: chrome 86.0.4240.111

TensorFlow Lite


TensorFlow Lite 2020-08-23, Model: Inception V4 (Microseconds, fewer is better)
  Default:                   8166907 (SE +/- 4887.54, N = 3; min 8158390 / max 8175320)
  Ice Lake: Default:         9847253 (SE +/- 2111.67, N = 3; min 9843520 / max 9850830)
  Ice Lake: mitigations=off: 9727077 (SE +/- 15084.72, N = 3; min 9707650 / max 9756780)
  mitigations=off:           8582943 (SE +/- 5358.28, N = 3; min 8575380 / max 8593300)

GEGL


GEGL, Operation: Color Enhance (Seconds, fewer is better)
  Default:                   48.38 (SE +/- 0.18, N = 3; min 48.04 / max 48.65)
  Ice Lake: Default:         58.10 (SE +/- 0.65, N = 6; min 56.97 / max 61.13)
  Ice Lake: mitigations=off: 58.33 (SE +/- 0.83, N = 4; min 56.59 / max 60.57)
  mitigations=off:           51.64 (SE +/- 0.59, N = 3; min 50.76 / max 52.76)

PyPerformance


PyPerformance 1.0.0, Benchmark: raytrace (Milliseconds, fewer is better)
  Default:                   381 (SE +/- 0.88, N = 3; min 379 / max 382)
  Ice Lake: Default:         459 (SE +/- 1.15, N = 3; min 457 / max 461)
  Ice Lake: mitigations=off: 443 (SE +/- 0.58, N = 3; min 442 / max 444)
  mitigations=off:           391 (SE +/- 1.76, N = 3; min 388 / max 394)

PyPerformance 1.0.0, Benchmark: 2to3 (Milliseconds, fewer is better)
  Default:                   252 (SE +/- 1.20, N = 3; min 250 / max 254)
  Ice Lake: Default:         303 (SE +/- 1.20, N = 3; min 301 / max 305)
  Ice Lake: mitigations=off: 298 (SE +/- 1.33, N = 3; min 295 / max 299)
  mitigations=off:           262 (SE +/- 1.67, N = 3; min 259 / max 264)

Selenium


Selenium, Benchmark: WASM imageConvolute - Browser: Firefox (ms, fewer is better)
  Default:                   27.9 (SE +/- 0.31, N = 7; min 27.2 / max 29.5)
  Ice Lake: Default:         30.4 (SE +/- 0.41, N = 3; min 29.6 / max 31.0)
  Ice Lake: mitigations=off: 28.5 (SE +/- 0.49, N = 3; min 27.9 / max 29.5)
  mitigations=off:           25.3 (SE +/- 0.25, N = 3; min 25.0 / max 25.8)
  Browsers: firefox 81.0.2 (Default, mitigations=off); firefox 82.0 (Ice Lake runs)

TensorFlow Lite


TensorFlow Lite 2020-08-23, Model: Mobilenet Float (Microseconds, fewer is better)
  Default:                   377902 (SE +/- 2564.11, N = 3; min 372792 / max 380831)
  Ice Lake: Default:         452730 (SE +/- 7063.92, N = 3; min 438634 / max 460599)
  Ice Lake: mitigations=off: 453532 (SE +/- 7243.74, N = 3; min 439048 / max 461049)
  mitigations=off:           400069 (SE +/- 2425.81, N = 3; min 395221 / max 402662)

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: Jetstream 2 - Browser: Google Chrome (Score, More Is Better)

  Configuration               Result   SE     N   Min      Max
  Default                     165.91   0.51   3   164.92   166.63
  Ice Lake: Default           138.44   0.40   3   137.95   139.23
  Ice Lake: mitigations=off   140.15   0.64   3   139.07   141.28
  mitigations=off             158.47   0.72   3   157.29   159.78
  Note: chrome 86.0.4240.111 on all runs.

SQLite Speedtest

This is a benchmark of SQLite's speedtest1 benchmark program with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.
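speedtest1 itself is a C program shipped with SQLite, but the idea it implements - timing a batch of writes and a query against a database - can be loosely approximated with Python's standard-library sqlite3 module (the table schema and row count here are illustrative, not speedtest1's actual workload):

```python
import sqlite3
import time

def timed_sqlite_run(rows=10_000):
    """Insert `rows` records and run an aggregate query, returning elapsed seconds."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")
    start = time.perf_counter()
    conn.executemany(
        "INSERT INTO t (val) VALUES (?)",
        ((f"row-{i}",) for i in range(rows)),
    )
    conn.commit()
    total = conn.execute("SELECT COUNT(*) FROM t WHERE val LIKE 'row-%'").fetchone()[0]
    elapsed = time.perf_counter() - start
    conn.close()
    assert total == rows
    return elapsed

print(f"{timed_sqlite_run():.3f} s")
```

The benchmark below reports wall-clock seconds for the full speedtest1 run, so lower is better.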

SQLite Speedtest 3.30 - Timed Time - Size 1,000 (Seconds, Fewer Is Better)

  Configuration               Result   SE     N   Min     Max
  Default                     51.25    0.29   3   50.68   51.58
  Ice Lake: Default           61.22    0.96   3   60.13   63.13
  Ice Lake: mitigations=off   59.99    0.22   3   59.68   60.41
  mitigations=off             53.14    0.50   3   52.14   53.75
  Note: (CC) gcc options: -O2 -ldl -lz -lpthread

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: Jetstream - Browser: Google Chrome (Score, More Is Better)

  Configuration               Result   SE     N   Min      Max
  Default                     289.55   1.20   3   287.69   291.79
  Ice Lake: Default           242.78   0.87   3   241.09   243.96
  Ice Lake: mitigations=off   250.94   0.84   3   250.08   252.63
  mitigations=off             276.88   0.73   3   275.97   278.32
  Note: chrome 86.0.4240.111 on all runs.

Selenium - Benchmark: ARES-6 - Browser: Google Chrome (ms, Fewer Is Better)

  Configuration               Result   SE     N   Min     Max
  Default                     16.60    0.13   3   16.35   16.74
  Ice Lake: Default           19.76    0.03   3   19.71   19.80
  Ice Lake: mitigations=off   19.77    0.04   3   19.70   19.82
  mitigations=off             17.32    0.14   3   17.05   17.53
  Note: chrome 86.0.4240.111 on all runs.

GIMP

GIMP is an open-source image manipulation program. This test profile uses the system-provided GIMP program, or on Windows relies upon a pre-packaged Windows binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.18 - Test: rotate (Seconds, Fewer Is Better)

  Configuration               Result   SE      N   Min     Max
  Default                     9.719    0.005   3   9.71    9.73
  Ice Lake: Default           11.316   0.009   3   11.30   11.33
  Ice Lake: mitigations=off   11.566   0.145   4   11.29   11.98
  mitigations=off             9.983    0.024   3   9.95    10.03

PyBench

This test profile reports the total time of the different average timed test results from PyBench. PyBench reports average test times for different functions such as BuiltinFunctionCalls and NestedForLoops, with this total result providing a rough estimate as to Python's average performance on a given system. This test profile runs PyBench each time for 20 rounds. Learn more via the OpenBenchmarking.org test page.
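The structure of such a harness - run each micro-test for several rounds, average each test's time, then sum the averages - can be sketched as follows. The two micro-tests here are simplified stand-ins named after PyBench's real ones, not PyBench's actual implementations:

```python
import time

def builtin_function_calls():
    # Stand-in for PyBench's BuiltinFunctionCalls test.
    for _ in range(10_000):
        len("x"); abs(-1); min(1, 2)

def nested_for_loops():
    # Stand-in for PyBench's NestedForLoops test.
    total = 0
    for i in range(100):
        for j in range(100):
            total += 1
    return total

def total_of_averages(tests, rounds=5):
    """Average each test's wall time over `rounds`, then sum the averages (ms)."""
    total_ms = 0.0
    for test in tests:
        times = []
        for _ in range(rounds):
            start = time.perf_counter()
            test()
            times.append(time.perf_counter() - start)
        total_ms += sum(times) / rounds * 1000
    return total_ms

print(f"{total_of_averages([builtin_function_calls, nested_for_loops]):.2f} ms")
```

This is why the PyBench result below is a single total in milliseconds rather than one number per function.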

PyBench 2018-02-16 - Total For Average Test Times (Milliseconds, Fewer Is Better)

  Configuration               Result   SE     N   Min   Max
  Default                     738      5.24   3   730   748
  Ice Lake: Default           874      7.42   3   859   883
  Ice Lake: mitigations=off   878      9.53   3   860   892
  mitigations=off             764      8.33   3   747   772

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.
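For instance, the regex_compile benchmark reported below repeatedly compiles a set of regular expressions. The general shape of such a micro-benchmark, using only the standard library (the patterns here are illustrative, not PyPerformance's actual set), might be:

```python
import re
import time

PATTERNS = [r"\d+", r"[a-z]+@[a-z]+\.[a-z]{2,}", r"(foo|bar)baz*"]

def bench_regex_compile(iterations=200):
    """Compile each pattern `iterations` times, returning elapsed milliseconds."""
    start = time.perf_counter()
    for _ in range(iterations):
        re.purge()  # drop re's internal cache, forcing a genuine recompile
        for pat in PATTERNS:
            re.compile(pat)
    return (time.perf_counter() - start) * 1000

print(f"regex_compile: {bench_regex_compile():.1f} ms")
```

Because the workload is pure Python bytecode execution, these benchmarks are quite sensitive to mitigation-related overhead in system calls and context switches being absent or present.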

PyPerformance 1.0.0 - Benchmark: regex_compile (Milliseconds, Fewer Is Better)

  Configuration               Result   SE     N   Min   Max
  Default                     132      1.20   3   130   134
  Ice Lake: Default           157      1.86   3   153   159
  Ice Lake: mitigations=off   153      -      -   -     -
  mitigations=off             134      1.53   3   131   136
  Note: no SE or min/max detail was reported for Ice Lake: mitigations=off.

PyPerformance 1.0.0 - Benchmark: go (Milliseconds, Fewer Is Better)

  Configuration               Result   SE     N   Min   Max
  Default                     202      1.67   3   199   204
  Ice Lake: Default           240      2.33   3   236   244
  Ice Lake: mitigations=off   232      1.73   3   229   235
  mitigations=off             209      1.45   3   206   211

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL - Operation: Reflect (Seconds, Fewer Is Better)

  Configuration               Result   SE     N   Min     Max
  Default                     26.33    0.26   3   25.81   26.63
  Ice Lake: Default           30.68    0.38   3   29.91   31.10
  Ice Lake: mitigations=off   31.26    0.12   3   31.13   31.51
  mitigations=off             27.43    0.35   3   26.75   27.87

Darktable

Darktable is an open-source photography / workflow application. This test profile uses any system-installed Darktable program, or on Windows automatically downloads the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.2.1 - Test: Boat - Acceleration: CPU-only (Seconds, Fewer Is Better)

  Configuration               Result   SE     N    Min     Max
  Default                     16.49    0.11   14   15.04   16.90
  Ice Lake: Default           19.55    0.29   3    18.96   19.87
  Ice Lake: mitigations=off   19.58    0.22   6    18.46   19.86
  mitigations=off             17.11    0.16   9    15.82   17.33

librsvg

RSVG/librsvg is an SVG vector graphics library. This test profile times how long it takes to complete various operations by rsvg-convert. Learn more via the OpenBenchmarking.org test page.

librsvg - Operation: SVG Files To PNG (Seconds, Fewer Is Better)

  Configuration               Result   SE     N   Min     Max
  Default                     17.02    0.08   3   16.94   17.17
  Ice Lake: Default           20.20    0.03   3   20.14   20.24
  Ice Lake: mitigations=off   20.09    0.03   3   20.06   20.14
  mitigations=off             17.42    0.15   3   17.25   17.72
  Note: rsvg-convert version 2.50.1

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07 - Test: Socket Activity (Bogo Ops/s, More Is Better)

  Configuration               Result    SE      N    Min       Max
  Default                     3321.48   33.45   8    3163.35   3505.39
  Ice Lake: Default           2962.55   40.22   3    2914.62   3042.46
  Ice Lake: mitigations=off   3055.46   24.47   13   2969.03   3262.32
  mitigations=off             3510.09   38.12   15   3327.66   3927.07
  Note: (CC) gcc options: -O2 -std=gnu99 -lm -lcrypt -lrt -lz -ldl -lpthread -lc

G'MIC

G'MIC is an open-source framework for image processing. Learn more via the OpenBenchmarking.org test page.

G'MIC - Test: 3D Elevated Function In Random Colors, 100 Times (Seconds, Fewer Is Better)

  Configuration               Result   SE     N   Min     Max
  Default                     45.55    0.17   3   45.33   45.88
  Ice Lake: Default           53.96    0.32   3   53.41   54.53
  Ice Lake: mitigations=off   52.85    0.02   3   52.81   52.88
  mitigations=off             46.61    0.29   3   46.07   47.05
  Note: Version 2.4.5, Copyright (c) 2008-2019, David Tschumperle.

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: CPU - Model: resnet50 (ms, Fewer Is Better)

  Configuration               Result   SE     N   Min     Max     Run Min-Max
  Default                     49.98    0.06   3   49.89   50.10   46.96-62.34
  Ice Lake: Default           59.18    0.04   3   59.09   59.24   54.99-74.70
  Ice Lake: mitigations=off   57.55    1.25   4   53.87   59.15   51.93-71.08
  mitigations=off             54.03    0.08   3   53.89   54.15   50.79-72.47
  Note: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17 - Model: SqueezeNetV1.0 (ms, Fewer Is Better)

  Configuration               Result   SE     N   Min     Max     Run Min-Max
  Default                     11.56    0.06   3   11.48   11.67   11.25-27.29
  Ice Lake: Default           13.69    0.10   3   13.57   13.88   12.96-33.20
  Ice Lake: mitigations=off   13.38    0.22   3   12.98   13.74   11.73-33.30
  mitigations=off             12.13    0.08   3   12.02   12.30   10.65-29.64
  Note: (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: CPU - Model: alexnet (ms, Fewer Is Better)

  Configuration               Result   SE     N   Min     Max     Run Min-Max
  Default                     18.64    0.01   3   18.62   18.67   17.18-29.18
  Ice Lake: Default           22.05    0.02   3   22.02   22.09   20.64-34.28
  Ice Lake: mitigations=off   21.47    0.38   4   20.79   22.19   19.64-34.78
  mitigations=off             20.15    0.06   3   20.03   20.25   18.45-58.96
  Note: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: StyleBench - Browser: Google Chrome (Runs / Minute, More Is Better)

  Configuration               Result   SE     N   Min    Max
  Default                     39.6     0.12   3   39.4   39.8
  Ice Lake: Default           33.5     0.13   3   33.2   33.6
  Ice Lake: mitigations=off   34.0     0.03   3   33.9   34.0
  mitigations=off             38.5     0.03   3   38.4   38.5
  Note: chrome 86.0.4240.111 on all runs.

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: CPU - Model: blazeface (ms, Fewer Is Better)

  Configuration               Result   SE     N   Min    Max    Run Min-Max
  Default                     2.16     0.01   3   2.13   2.18   1.99-4.52
  Ice Lake: Default           2.55     0.02   3   2.52   2.58   2.32-14.92
  Ice Lake: mitigations=off   2.48     0.04   4   2.42   2.57   2.20-12.69
  mitigations=off             2.33     0.06   3   2.23   2.45   2.04-3.21
  Note: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: CanvasMark - Browser: Google Chrome (Score, More Is Better)

  Configuration               Result   SE       N    Min     Max
  Default                     17698    200.87   12   16711   18599
  Ice Lake: Default           14994    144.97   9    14456   15839
  Ice Lake: mitigations=off   15149    107.22   3    14949   15316
  mitigations=off             16864    151.73   10   16357   17640
  Note: chrome 86.0.4240.111 on all runs.

GIMP

GIMP is an open-source image manipulation program. This test profile uses the system-provided GIMP program, or on Windows relies upon a pre-packaged Windows binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.18 - Test: unsharp-mask (Seconds, Fewer Is Better)

  Configuration               Result   SE     N    Min     Max
  Default                     13.35    0.13   9    12.31   13.64
  Ice Lake: Default           15.71    0.21   5    14.90   16.09
  Ice Lake: mitigations=off   15.74    0.16   15   14.19   16.73
  mitigations=off             13.99    0.14   9    12.96   14.32

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17 - Model: inception-v3 (ms, Fewer Is Better)

  Configuration               Result   SE     N   Min     Max     Run Min-Max
  Default                     68.29    0.11   3   68.09   68.46   66.59-84.15
  Ice Lake: Default           80.48    0.50   3   79.93   81.47   77.49-99.85
  Ice Lake: mitigations=off   77.03    0.72   3   75.75   78.25   70.40-108.37
  mitigations=off             73.12    0.04   3   73.04   73.17   68.33-117.84
  Note: (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: StyleBench - Browser: Firefox (Runs / Minute, More Is Better)

  Configuration               Result   SE     N    Min    Max    Browser
  Default                     98.2     0.19   3    97.8   98.4   firefox 81.0.2
  Ice Lake: Default           84.6     0.72   15   82.0   90.7   firefox 82.0
  Ice Lake: mitigations=off   94.0     0.25   3    93.7   94.5   firefox 82.0
  mitigations=off             99.6     0.15   3    99.3   99.8   firefox 81.0.2

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: CPU - Model: yolov4-tiny (ms, Fewer Is Better)

  Configuration               Result   SE     N   Min     Max     Run Min-Max
  Default                     40.44    0.19   3   40.20   40.82   38.52-52.97
  Ice Lake: Default           47.55    0.04   3   47.50   47.62   45.86-62.55
  Ice Lake: mitigations=off   46.83    0.64   4   44.94   47.67   42.33-59.43
  mitigations=off             43.82    0.07   3   43.70   43.94   41.38-88.84
  Note: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: resnet18 (ms, Fewer Is Better)

  Configuration               Result   SE     N   Min     Max     Run Min-Max
  Default                     21.30    0.04   3   21.22   21.34   18.47-34.83
  Ice Lake: Default           25.02    0.04   3   24.98   25.10   22.42-37.92
  Ice Lake: mitigations=off   24.33    0.51   4   22.83   24.98   20.42-41.66
  mitigations=off             23.20    0.02   3   23.16   23.24   20.27-38.00
  Note: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

LibRaw

LibRaw is a RAW image decoder for digital camera photos. This test profile runs LibRaw's post-processing benchmark. Learn more via the OpenBenchmarking.org test page.

LibRaw 0.20 - Post-Processing Benchmark (Mpix/sec, More Is Better)

  Configuration               Result   SE     N    Min     Max
  Default                     26.64    0.41   15   23.58   28.27
  Ice Lake: Default           22.96    0.16   3    22.65   23.21
  Ice Lake: mitigations=off   22.92    0.31   3    22.52   23.54
  mitigations=off             26.88    0.26   3    26.59   27.39
  Note: (CXX) g++ options: -O2 -fopenmp -ljpeg -lz -lm

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL - Operation: Rotate 90 Degrees (Seconds, Fewer Is Better)

  Configuration               Result   SE     N   Min     Max
  Default                     37.98    0.19   3   37.61   38.19
  Ice Lake: Default           44.38    0.05   3   44.30   44.46
  Ice Lake: mitigations=off   44.44    0.27   3   43.91   44.75
  mitigations=off             40.71    0.59   3   39.68   41.74

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: crypto_pyaes (Milliseconds, Fewer Is Better)

  Configuration               Result   SE     N   Min    Max
  Default                     83.5     0.35   3   82.8   83.9
  Ice Lake: Default           97.6     0.52   3   96.6   98.4
  Ice Lake: mitigations=off   93.3     0.62   3   92.1   94.0
  mitigations=off             85.7     0.54   3   84.6   86.3

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: CPU - Model: googlenet (ms, Fewer Is Better)

  Configuration               Result   SE     N   Min     Max     Run Min-Max
  Default                     24.53    0.05   3   24.43   24.61   22.42-35.81
  Ice Lake: Default           28.59    0.03   3   28.55   28.66   26.28-41.46
  Ice Lake: mitigations=off   27.15    0.66   3   26.42   28.47   25.04-41.26
  mitigations=off             27.30    0.24   3   26.82   27.58   21.44-219.45
  Note: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: json_loads (Milliseconds, Fewer Is Better)

  Configuration               Result   SE     N   Min    Max
  Default                     18.6     0.07   3   18.5   18.7
  Ice Lake: Default           21.6     0.13   3   21.3   21.7
  Ice Lake: mitigations=off   21.3     0.18   3   21.0   21.6
  mitigations=off             18.9     0.10   3   18.7   19.0

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: CPU - Model: squeezenet (ms, Fewer Is Better)

  Configuration               Result   SE     N   Min     Max     Run Min-Max
  Default                     26.09    0.18   3   25.89   26.44   25.43-37.40
  Ice Lake: Default           30.24    0.01   3   30.22   30.26   29.25-42.03
  Ice Lake: mitigations=off   29.27    0.42   4   28.53   30.17   27.06-43.42
  mitigations=off             28.41    0.47   3   27.74   29.32   23.73-197.67
  Note: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: mobilenet (ms, Fewer Is Better)

  Configuration               Result   SE     N   Min     Max     Run Min-Max
  Default                     31.04    0.30   3   30.69   31.64   29.81-43.82
  Ice Lake: Default           35.88    0.02   3   35.86   35.92   34.42-48.29
  Ice Lake: mitigations=off   34.13    0.15   4   33.79   34.49   32.05-48.09
  mitigations=off             34.05    0.12   3   33.84   34.27   31.96-68.49
  Note: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

GIMP

GIMP is an open-source image manipulation program. This test profile uses the system-provided GIMP program, or on Windows relies upon a pre-packaged Windows binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.18 - Test: auto-levels (Seconds, Fewer Is Better)

  Configuration               Result   SE     N   Min     Max
  Default                     11.27    0.11   9   10.50   11.81
  Ice Lake: Default           13.01    0.15   7   12.14   13.22
  Ice Lake: mitigations=off   12.31    0.12   3   12.14   12.55
  mitigations=off             11.67    0.12   8   10.82   11.90

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: WASM imageConvolute - Browser: Google Chrome (ms, Fewer Is Better)

  Configuration               Result   SE     N   Min     Max
  Default                     28.26    0.10   3   28.13   28.46
  Ice Lake: Default           32.62    0.34   3   32.01   33.19
  Ice Lake: mitigations=off   32.61    0.09   3   32.44   32.75
  mitigations=off             29.30    0.25   3   29.05   29.79
  Note: chrome 86.0.4240.111 on all runs.

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: nbody (Milliseconds, Fewer Is Better)

  Configuration               Result   SE     N   Min    Max
  Default                     89.6     0.76   3   88.4   91.0
  Ice Lake: Default           103.0    -      -   -      -
  Ice Lake: mitigations=off   102.0    -      -   -      -
  mitigations=off             91.4     0.84   3   89.7   92.3
  Note: no SE or min/max detail was reported for the Ice Lake runs.

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: Octane - Browser: Google Chrome (Geometric Mean, More Is Better)

  Configuration               Result   SE        N   Min     Max
  Default                     61608    1033.11   3   60493   63672
  Ice Lake: Default           53717    67.56     3   53599   53833
  Ice Lake: mitigations=off   53635    159.67    3   53316   53797
  mitigations=off             58235    745.81    5   57022   61104
  Note: chrome 86.0.4240.111 on all runs.

Darktable

Darktable is an open-source photography / workflow application. This test profile uses any system-installed Darktable program, or on Windows automatically downloads the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.2.1 - Test: Server Rack - Acceleration: CPU-only (Seconds, Fewer Is Better)

  Configuration               Result   SE      N   Min    Max
  Default                     0.196    0.002   3   0.19   0.20
  Ice Lake: Default           0.225    0.001   3   0.22   0.23
  Ice Lake: mitigations=off   0.225    0.002   3   0.22   0.23
  mitigations=off             0.200    0.000   3   0.20   0.20

Tesseract OCR

Tesseract-OCR is the open-source optical character recognition (OCR) engine for the conversion of text within images to raw text output. This test profile relies upon a system-supplied Tesseract installation. Learn more via the OpenBenchmarking.org test page.

Tesseract OCR 4.1.1 - Time To OCR 7 Images (Seconds, Fewer Is Better)

  Configuration               Result   SE     N   Min     Max
  Default                     20.39    0.14   3   20.23   20.68
  Ice Lake: Default           23.12    0.11   3   22.97   23.32
  Ice Lake: mitigations=off   23.36    0.22   9   22.81   25.06
  mitigations=off             21.06    0.28   3   20.50   21.35

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: float (Milliseconds, Fewer Is Better)

  Configuration               Result   SE     N   Min     Max
  Default                     95.0     0.80   3   93.4    95.8
  Ice Lake: Default           108.0    1.00   3   106.0   109.0
  Ice Lake: mitigations=off   106.0    -      -   -       -
  mitigations=off             97.8     0.88   3   96.1    99.0
  Note: no SE or min/max detail was reported for Ice Lake: mitigations=off.

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: CPU - Model: vgg16 (ms, Fewer Is Better)

  Configuration               Result   SE     N   Min     Max     Run Min-Max
  Default                     71.04    0.12   3   70.81   71.22   67.45-85.82
  Ice Lake: Default           80.68    0.11   3   80.55   80.90   76.50-103.41
  Ice Lake: mitigations=off   75.59    1.31   4   72.81   79.13   70.50-95.89
  mitigations=off             75.98    0.19   3   75.69   76.33   71.70-102.99
  Note: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

DeepSpeech

Mozilla DeepSpeech is a speech-to-text engine powered by TensorFlow for machine learning and derived from Baidu's Deep Speech research paper. This test profile times the speech-to-text process for a roughly three-minute audio recording. Learn more via the OpenBenchmarking.org test page.

DeepSpeech 0.6 - Acceleration: CPU (Seconds, fewer is better)
  Default:                    60.88  (SE +/- 0.09, N = 3; min 60.7 / max 60.99)
  Ice Lake: Default:          68.73  (SE +/- 0.11, N = 3; min 68.61 / max 68.95)
  Ice Lake: mitigations=off:  67.98  (SE +/- 0.12, N = 3; min 67.82 / max 68.22)
  mitigations=off:            61.80  (SE +/- 0.16, N = 3; min 61.48 / max 61.99)

Renaissance

Renaissance is a suite of benchmarks designed to stress the Java JVM with workloads ranging from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.10.0 - Test: Akka Unbalanced Cobwebbed Tree (ms, fewer is better)
  Default:                    12529.40  (SE +/- 116.84, N = 5; min 12196.8 / max 12890.28)
  Ice Lake: Default:          13943.87  (SE +/- 137.47, N = 9; min 13275.84 / max 14552.3)
  Ice Lake: mitigations=off:  13670.59  (SE +/- 120.68, N = 15; min 12835.41 / max 14622.75)
  mitigations=off:            12954.09  (SE +/- 125.63, N = 5; min 12692.41 / max 13361.67)

Git

This test measures the time needed to carry out some sample Git operations on a static sample repository, a copy of the GNOME GTK tool-kit repository. Learn more via the OpenBenchmarking.org test page.

Git - Time To Complete Common Git Commands (Seconds, fewer is better)
  Default:                    47.23  (SE +/- 0.40, N = 3; min 46.44 / max 47.7)
  Ice Lake: Default:          52.04  (SE +/- 0.12, N = 3; min 51.92 / max 52.27)
  Ice Lake: mitigations=off:  52.09  (SE +/- 0.05, N = 3; min 52.02 / max 52.19)
  mitigations=off:            49.03  (SE +/- 0.66, N = 3; min 47.71 / max 49.75)
  1. git version 2.27.0

LevelDB

LevelDB is a key-value storage library developed by Google that supports Snappy data compression among other modern features. Learn more via the OpenBenchmarking.org test page.

LevelDB 1.22 - Benchmark: Random Delete (Microseconds Per Op, fewer is better)
  Default:                    24.73  (SE +/- 0.35, N = 12; min 21.14 / max 26.11)
  Ice Lake: Default:          23.32  (SE +/- 0.02, N = 3; min 23.27 / max 23.35)
  Ice Lake: mitigations=off:  23.95  (SE +/- 0.26, N = 3; min 23.63 / max 24.46)
  mitigations=off:            25.50  (SE +/- 0.29, N = 12; min 22.29 / max 25.95)
  1. (CXX) g++ options: -O3 -lsnappy -lpthread

GNU Octave Benchmark

This test profile measures how long it takes to complete several reference GNU Octave files via octave-benchmark. GNU Octave is used for numerical computations and is an open-source alternative to MATLAB. Learn more via the OpenBenchmarking.org test page.

GNU Octave Benchmark 5.2.0 (Seconds, fewer is better)
  Default:                    6.144  (SE +/- 0.012, N = 5; min 6.12 / max 6.19)
  Ice Lake: Default:          6.712  (SE +/- 0.026, N = 5; min 6.67 / max 6.81)
  Ice Lake: mitigations=off:  6.698  (SE +/- 0.019, N = 5; min 6.65 / max 6.76)
  mitigations=off:            6.232  (SE +/- 0.014, N = 5; min 6.19 / max 6.27)

Ethr

Ethr is a cross-platform network performance measurement tool written in Go by Microsoft that is capable of testing multiple protocols and different measurements. Learn more via the OpenBenchmarking.org test page.

Ethr 2019-01-02 - Server Address: localhost - Protocol: TCP - Test: Connections/s - Threads: 1 (Connections/sec, more is better)
  Default:                    12183  (SE +/- 17.64, N = 3; min 12150 / max 12210)
  Ice Lake: Default:          11837  (SE +/- 44.10, N = 3; min 11770 / max 11920)
  Ice Lake: mitigations=off:  12273  (SE +/- 37.12, N = 3; min 12200 / max 12320)
  mitigations=off:            12367  (SE +/- 95.63, N = 3; min 12210 / max 12540)
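A TCP connections/s test of this kind simply opens and closes connections to a local listener as fast as possible. A single-threaded sketch with the standard socket module (illustrative only; Ethr's own harness is written in Go and measures more carefully):

```python
import socket
import threading
import time

def tcp_connects_per_sec(duration=0.5):
    """Open and close TCP connections to a loopback listener and return
    connections per second -- a rough analogue of a connections/s test."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))      # port 0 = let the kernel pick a free port
    srv.listen(128)
    port = srv.getsockname()[1]

    def acceptor():
        while True:
            try:
                conn, _ = srv.accept()
                conn.close()
            except OSError:
                return              # listener closed; stop accepting

    threading.Thread(target=acceptor, daemon=True).start()

    count = 0
    deadline = time.perf_counter() + duration
    while time.perf_counter() < deadline:
        c = socket.create_connection(("127.0.0.1", port))
        c.close()
        count += 1
    srv.close()
    return count / duration
```

Each connect/accept/close cycle crosses the user/kernel boundary several times, so this style of test is a reasonable proxy for syscall-transition overhead.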

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: Maze Solver - Browser: Google Chrome (Seconds, fewer is better)
  Default:                    4.7  (SE +/- 0.00, N = 3)
  Ice Lake: Default:          4.7  (SE +/- 0.00, N = 3)
  Ice Lake: mitigations=off:  4.6  (SE +/- 0.03, N = 3; min 4.6 / max 4.7)
  mitigations=off:            4.8  (SE +/- 0.03, N = 3; min 4.7 / max 4.8)
  1. chrome 86.0.4240.111

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17 - Model: mobilenet-v1-1.0 (ms, fewer is better)
  Default:          10.52  (SE +/- 0.00, N = 3; min 10.51 / max 10.52; MIN 9.74 / MAX 26.11)
  mitigations=off:  10.09  (SE +/- 0.11, N = 3; min 9.98 / max 10.31; MIN 9.21 / MAX 30.26)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: Maze Solver - Browser: Firefox (Seconds, fewer is better)
  Default:                    5.3  (SE +/- 0.03, N = 3; min 5.3 / max 5.4)
  Ice Lake: Default:          5.4  (SE +/- 0.03, N = 3; min 5.4 / max 5.5)
  Ice Lake: mitigations=off:  5.4  (SE +/- 0.00, N = 3)
  mitigations=off:            5.2  (SE +/- 0.03, N = 3; min 5.2 / max 5.3)
  1. Default: firefox 81.0.2
  2. Ice Lake: Default: firefox 82.0
  3. Ice Lake: mitigations=off: firefox 82.0
  4. mitigations=off: firefox 81.0.2

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: CPU - Model: efficientnet-b0 (ms, fewer is better)
  Default:          13.50  (SE +/- 0.08, N = 3; min 13.41 / max 13.66; MIN 12.59 / MAX 25.81)
  mitigations=off:  13.12  (SE +/- 0.24, N = 4; min 12.63 / max 13.66; MIN 11.89 / MAX 29.29)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: mnasnet (ms, fewer is better)
  Default:          8.35  (SE +/- 0.70, N = 3; min 6.96 / max 9.07; MIN 5.2 / MAX 21.96)
  mitigations=off:  8.07  (SE +/- 0.91, N = 4; min 5.36 / max 9.11; MIN 5.21 / MAX 20.81)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: shufflenet-v2 (ms, fewer is better)
  Default:          5.54  (SE +/- 0.80, N = 3; min 3.93 / max 6.38; MIN 3.78 / MAX 17.92)
  mitigations=off:  5.78  (SE +/- 0.60, N = 4; min 3.98 / max 6.41; MIN 3.81 / MAX 18.56)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better)
  Default:          7.29  (SE +/- 1.14, N = 3; min 5.02 / max 8.45; MIN 4.88 / MAX 18.97)
  mitigations=off:  7.29  (SE +/- 0.76, N = 4; min 5.04 / max 8.44; MIN 4.87 / MAX 20.55)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better)
  Default:          8.27  (SE +/- 1.33, N = 3; min 5.62 / max 9.63; MIN 5.47 / MAX 22.52)
  mitigations=off:  8.22  (SE +/- 0.88, N = 4; min 5.63 / max 9.58; MIN 5.46 / MAX 24.56)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: MotionMark - Browser: Google Chrome (Score, more is better)
  Default:                    546.80  (SE +/- 6.75, N = 9; min 527.73 / max 583.38)
  Ice Lake: Default:          410.52  (SE +/- 12.09, N = 9; min 314.26 / max 428.36)
  Ice Lake: mitigations=off:  439.91  (SE +/- 10.06, N = 9; min 362.13 / max 458.87)
  mitigations=off:            523.87  (SE +/- 2.04, N = 3; min 520.09 / max 527.09)
  1. chrome 86.0.4240.111

Facebook RocksDB

This is a benchmark of Facebook's RocksDB, an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

Facebook RocksDB 6.3.6 - Test: Read While Writing (Op/s, more is better)
  Default:                    670995  (SE +/- 5302.48, N = 15; min 647652 / max 719461)
  Ice Lake: Default:          604305  (SE +/- 12034.86, N = 15; min 568062 / max 759644)
  Ice Lake: mitigations=off:  620330  (SE +/- 17734.70, N = 15; min 561461 / max 789535)
  mitigations=off:            652865  (SE +/- 6731.14, N = 12; min 620016 / max 711340)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fno-builtin-memcmp -fno-rtti -rdynamic -lpthread

Facebook RocksDB 6.3.6 - Test: Random Fill Sync (Op/s, more is better)
  Default:                    935   (SE +/- 21.21, N = 13; min 726 / max 1018)
  Ice Lake: Default:          1784  (SE +/- 23.13, N = 3; min 1753 / max 1829)
  Ice Lake: mitigations=off:  763   (SE +/- 39.22, N = 12; min 507 / max 997)
  mitigations=off:            993   (SE +/- 21.98, N = 14; min 764 / max 1087)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fno-builtin-memcmp -fno-rtti -rdynamic -lpthread

Facebook RocksDB 6.3.6 - Test: Sequential Fill (Op/s, more is better)
  Default:                    900631  (SE +/- 14748.19, N = 15; min 796904 / max 1003139)
  Ice Lake: Default:          803253  (SE +/- 11089.18, N = 15; min 741241 / max 948580)
  Ice Lake: mitigations=off:  861065  (SE +/- 41408.12, N = 12; min 456313 / max 1047137)
  mitigations=off:            747558  (SE +/- 30961.97, N = 12; min 614143 / max 913120)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fno-builtin-memcmp -fno-rtti -rdynamic -lpthread

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07 - Test: MMAP (Bogo Ops/s, more is better)
  Default:                    38.55  (SE +/- 0.32, N = 12; min 36.16 / max 41.26)
  Ice Lake: Default:          25.66  (SE +/- 1.09, N = 12; min 17.57 / max 32.73)
  Ice Lake: mitigations=off:  21.37  (SE +/- 1.41, N = 15; min 13.65 / max 34.06)
  mitigations=off:            25.00  (SE +/- 0.53, N = 15; min 21.13 / max 29.13)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lcrypt -lrt -lz -ldl -lpthread -lc
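The MMAP stressor repeatedly exercises memory-mapping system calls, one of the workload shapes most sensitive to kernel-entry mitigation costs. A rough analogue using Python's mmap module (not stress-ng's actual stressor, which is written in C and covers many more code paths):

```python
import mmap
import time

def mmap_ops_per_sec(duration=0.5, size=4096 * 16):
    """Repeatedly create, touch, and destroy anonymous memory mappings,
    returning map/unmap cycles per second."""
    ops = 0
    deadline = time.perf_counter() + duration
    while time.perf_counter() < deadline:
        m = mmap.mmap(-1, size)   # fd of -1 requests an anonymous mapping
        m[0] = 1                  # touch the first and last page so they
        m[size - 1] = 1           # are actually faulted in
        m.close()                 # munmap
        ops += 1
    return ops / duration
```

Each iteration costs an mmap, at least two page faults, and a munmap, so the rate is dominated by kernel-crossing overhead rather than raw CPU speed.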

LibreOffice

Various benchmarking operations with the LibreOffice open-source office suite. Learn more via the OpenBenchmarking.org test page.

LibreOffice - Test: 20 Documents To PDF (Seconds, fewer is better)
  Default:                    5.525  (SE +/- 0.054, N = 9; min 5.43 / max 5.96)
  Ice Lake: Default:          6.908  (SE +/- 0.131, N = 25; min 6.53 / max 8.85)
  Ice Lake: mitigations=off:  6.757  (SE +/- 0.061, N = 10; min 6.65 / max 7.3)
  mitigations=off:            5.703  (SE +/- 0.067, N = 6; min 5.56 / max 6.03)
  1. LibreOffice 7.0.2.2 00(Build:2)

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL - Operation: Wavelet Blur (Seconds, fewer is better)
  Default:                    50.29  (SE +/- 0.15, N = 3; min 49.99 / max 50.46)
  Ice Lake: Default:          59.70  (SE +/- 0.21, N = 3; min 59.45 / max 60.11)
  Ice Lake: mitigations=off:  62.58  (SE +/- 0.99, N = 15; min 59.39 / max 72.5)
  mitigations=off:            55.27  (SE +/- 0.71, N = 3; min 53.94 / max 56.35)

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: SqueezeNet (Microseconds, fewer is better)
  Default:                    564437  (SE +/- 4132.54, N = 3; min 556234 / max 569416)
  Ice Lake: Default:          674153  (SE +/- 5183.16, N = 15; min 611223 / max 682946)
  Ice Lake: mitigations=off:  646988  (SE +/- 12430.05, N = 14; min 490679 / max 677309)
  mitigations=off:            588506  (SE +/- 6328.07, N = 3; min 575850 / max 594929)

LevelDB

LevelDB is a key-value storage library developed by Google that supports Snappy data compression among other modern features. Learn more via the OpenBenchmarking.org test page.

LevelDB 1.22 - Benchmark: Fill Sync (Microseconds Per Op, fewer is better)
  Default:                    3493.28   (SE +/- 4.51, N = 15; min 3458.68 / max 3520.32)
  Ice Lake: Default:          4522.21   (SE +/- 260.74, N = 3; min 4112.65 / max 5006.55)
  Ice Lake: mitigations=off:  11614.36  (SE +/- 1222.10, N = 15; min 8024.55 / max 21514.35)
  mitigations=off:            8406.10   (SE +/- 58.40, N = 3; min 8302.5 / max 8504.6)
  1. (CXX) g++ options: -O3 -lsnappy -lpthread

LevelDB 1.22 - Benchmark: Fill Sync (MB/s, more is better)
  Default:                    0.3  (SE +/- 0.01, N = 15; min 0.2 / max 0.3)
  Ice Lake: Default:          0.2  (SE +/- 0.00, N = 3)
  Ice Lake: mitigations=off:  0.1  (SE +/- 0.00, N = 12)
  mitigations=off:            0.1  (SE +/- 0.00, N = 3)
  1. (CXX) g++ options: -O3 -lsnappy -lpthread
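The Fill Sync numbers are dominated by the cost of forcing every write to stable storage. That effect is easy to reproduce with a plain file: the sketch below (an illustration of the fsync cost, not LevelDB's actual write path) times buffered appends versus appends followed by fsync().

```python
import os
import tempfile
import time

def us_per_write(n=200, sync=False):
    """Append small records to a file and return microseconds per write.
    With sync=True each write is followed by fsync(), the durability
    guarantee a 'fill sync' style benchmark pays for on every operation."""
    with tempfile.NamedTemporaryFile(delete=False) as f:
        start = time.perf_counter()
        for _ in range(n):
            f.write(b"x" * 100)
            if sync:
                f.flush()              # push Python's buffer to the OS
                os.fsync(f.fileno())   # force the OS to commit to the device
        f.flush()
        elapsed = time.perf_counter() - start
    os.unlink(f.name)
    return elapsed / n * 1e6
```

On most hardware the sync=True figure is dramatically higher, which matches why this benchmark reports thousands of microseconds per op while buffered workloads report tens.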

eSpeak-NG Speech Engine

This test times how long it takes the eSpeak speech synthesizer to read Project Gutenberg's The Outline of Science and output to a WAV file. This test profile is now tracking the eSpeak-NG version of eSpeak. Learn more via the OpenBenchmarking.org test page.

eSpeak-NG Speech Engine 20200907 - Text-To-Speech Synthesis (Seconds, fewer is better)
  Default:                    28.04  (SE +/- 0.33, N = 4; min 27.05 / max 28.44)
  Ice Lake: Default:          35.80  (SE +/- 0.76, N = 20; min 31.24 / max 44.9)
  Ice Lake: mitigations=off:  38.96  (SE +/- 0.99, N = 20; min 33.06 / max 50.01)
  mitigations=off:            39.84  (SE +/- 1.01, N = 20; min 31.81 / max 48.79)
  1. (CC) gcc options: -O2 -std=c99

Renaissance

Renaissance is a suite of benchmarks designed to stress the Java JVM with workloads ranging from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.10.0 - Test: In-Memory Database Shootout (ms, fewer is better)
  Default:                    4605.17  (SE +/- 50.76, N = 5; min 4479.16 / max 4744.79)
  Ice Lake: Default:          5633.06  (SE +/- 93.41, N = 20; min 4987.14 / max 6984.36)
  Ice Lake: mitigations=off:  5821.31  (SE +/- 280.74, N = 20; min 4991.1 / max 10800.32)
  mitigations=off:            4991.34  (SE +/- 90.54, N = 25; min 4239.89 / max 6224.03)

Renaissance 0.10.0 - Test: Scala Dotty (ms, fewer is better)
  Default:                    1621.40  (SE +/- 18.99, N = 5; min 1575.81 / max 1691.01)
  Ice Lake: Default:          2078.59  (SE +/- 47.08, N = 17; min 1792.18 / max 2738.08)
  Ice Lake: mitigations=off:  2050.13  (SE +/- 22.98, N = 7; min 1957.16 / max 2115.74)
  mitigations=off:            1708.54  (SE +/- 22.58, N = 5; min 1654.67 / max 1770.39)

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1 - Java Test: Tradebeans (msec, fewer is better)
  Default:                    4080  (SE +/- 87.78, N = 16; min 3035 / max 4455)
  Ice Lake: Default:          5483  (SE +/- 42.46, N = 20; min 5089 / max 5824)
  Ice Lake: mitigations=off:  5293  (SE +/- 59.72, N = 20; min 4492 / max 5727)
  mitigations=off:            4222  (SE +/- 65.83, N = 20; min 3115 / max 4623)

DaCapo Benchmark 9.12-MR1 - Java Test: Tradesoap (msec, fewer is better)
  Default:                    8167   (SE +/- 195.83, N = 16; min 5299 / max 8709)
  Ice Lake: Default:          11708  (SE +/- 82.01, N = 18; min 10761 / max 12194)
  Ice Lake: mitigations=off:  10805  (SE +/- 112.50, N = 20; min 9264 / max 11509)
  mitigations=off:            8699   (SE +/- 58.57, N = 20; min 7832 / max 8985)

DaCapo Benchmark 9.12-MR1 - Java Test: Jython (msec, fewer is better)
  Default:                    3843  (SE +/- 38.95, N = 20; min 3466 / max 4094)
  Ice Lake: Default:          5101  (SE +/- 93.22, N = 16; min 4440 / max 5737)
  Ice Lake: mitigations=off:  4414  (SE +/- 45.07, N = 4; min 4323 / max 4537)
  mitigations=off:            4113  (SE +/- 50.55, N = 5; min 3939 / max 4219)

DaCapo Benchmark 9.12-MR1 - Java Test: H2 (msec, fewer is better)
  Default:                    3226  (SE +/- 43.85, N = 20; min 2653 / max 3443)
  Ice Lake: Default:          3330  (SE +/- 69.83, N = 17; min 2967 / max 3895)
  Ice Lake: mitigations=off:  3188  (SE +/- 36.55, N = 20; min 2873 / max 3535)
  mitigations=off:            3434  (SE +/- 37.47, N = 20; min 2919 / max 3669)

OSBench

OSBench is a collection of micro-benchmarks for measuring operating system primitives like the time to create threads/processes, launch programs, create files, and allocate memory. Learn more via the OpenBenchmarking.org test page.

OSBench - Test: Create Processes (us Per Event, fewer is better)
  Default:                    23.98  (SE +/- 0.76, N = 15; min 17.67 / max 27.16)
  Ice Lake: Default:          20.49  (SE +/- 0.29, N = 4; min 20.18 / max 21.36)
  Ice Lake: mitigations=off:  20.06  (SE +/- 0.19, N = 3; min 19.76 / max 20.41)
  mitigations=off:            21.31  (SE +/- 0.51, N = 15; min 17.93 / max 24.43)
  1. (CC) gcc options: -lm
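The create-processes test measures the cost of spawning and reaping short-lived processes, a pure kernel-primitive workload. A rough POSIX-only analogue in Python (an illustrative sketch, not OSBench's C implementation, so the absolute numbers will differ):

```python
import os
import time

def us_per_process(n=200):
    """Fork n child processes that exit immediately and return the
    average microseconds per create+reap cycle (POSIX only)."""
    start = time.perf_counter()
    for _ in range(n):
        pid = os.fork()
        if pid == 0:
            os._exit(0)       # child: exit immediately without cleanup
        os.waitpid(pid, 0)    # parent: reap the child before continuing
    return (time.perf_counter() - start) / n * 1e6
```

fork/exit/wait is almost entirely kernel time, which is exactly the code path that page-table isolation and related mitigations make more expensive.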

WireGuard + Linux Networking Stack Stress Test

This is a benchmark of the WireGuard secure VPN tunnel and a Linux networking stack stress test. The test runs on the local host but does require root permissions. It works by creating three namespaces: ns0 has a loopback device, while ns1 and ns2 each have a WireGuard device. Those two WireGuard devices send traffic through the loopback device of ns0, so the test winds up exercising encryption and decryption simultaneously -- a CPU- and scheduler-heavy workload. Learn more via the OpenBenchmarking.org test page.

WireGuard + Linux Networking Stack Stress Test (Seconds, fewer is better)
  Default:                    273.14  (SE +/- 2.00, N = 3; min 269.28 / max 276.01)
  Ice Lake: Default:          334.67  (SE +/- 6.73, N = 9; min 285.67 / max 351.26)
  Ice Lake: mitigations=off:  323.55  (SE +/- 7.10, N = 9; min 269.42 / max 338.06)
  mitigations=off:            281.83  (SE +/- 3.08, N = 3; min 276.34 / max 287)

FS-Mark

FS_Mark is designed to test a system's file-system performance. Learn more via the OpenBenchmarking.org test page.

FS-Mark 3.3 - Test: 4000 Files, 32 Sub Dirs, 1MB Size (Files/s, more is better)
  Default:                    61.4  (SE +/- 0.32, N = 3; min 61 / max 62)
  Ice Lake: Default:          39.4  (SE +/- 1.38, N = 15; min 32.2 / max 51.1)
  Ice Lake: mitigations=off:  50.6  (SE +/- 3.42, N = 12; min 38.9 / max 77.4)
  mitigations=off:            70.0  (SE +/- 1.02, N = 15; min 61.4 / max 75.7)
  1. (CC) gcc options: -static

FS-Mark 3.3 - Test: 5000 Files, 1MB Size, 4 Threads (Files/s, more is better)
  Default:                    186.2  (SE +/- 45.40, N = 9; min 87.9 / max 418.8)
  Ice Lake: Default:          189.6  (SE +/- 27.80, N = 10; min 63.2 / max 290)
  Ice Lake: mitigations=off:  72.0   (SE +/- 18.91, N = 9; min 36.8 / max 216)
  mitigations=off:            106.1  (SE +/- 3.97, N = 12; min 76 / max 115.5)
  1. (CC) gcc options: -static

FS-Mark 3.3 - Test: 1000 Files, 1MB Size (Files/s, more is better)
  Default:                    274.3  (SE +/- 1.18, N = 3; min 272.2 / max 276.3)
  Ice Lake: Default:          231.0  (SE +/- 13.94, N = 12; min 132.2 / max 263.3)
  Ice Lake: mitigations=off:  215.2  (SE +/- 13.14, N = 15; min 136.8 / max 265.1)
  mitigations=off:            71.4   (SE +/- 0.99, N = 15; min 60.9 / max 76.1)
  1. (CC) gcc options: -static
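FS-Mark's workload amounts to writing many files of a fixed size and syncing them to disk, then reporting files per second. A scaled-down stdlib sketch of the same shape (far smaller files and counts than FS-Mark's defaults, for illustration only):

```python
import os
import tempfile
import time

def files_per_sec(n=50, size=64 * 1024):
    """Write n files of `size` bytes, fsync each, and return files/s."""
    payload = b"\0" * size
    with tempfile.TemporaryDirectory() as d:
        start = time.perf_counter()
        for i in range(n):
            path = os.path.join(d, f"file{i:04d}")
            with open(path, "wb") as f:
                f.write(payload)
                f.flush()
                os.fsync(f.fileno())   # force each file to stable storage
        elapsed = time.perf_counter() - start
    return n / elapsed
```

Per-file create/write/fsync cycles mix VFS, page-cache, and block-layer work, so results are sensitive to both kernel mitigations and the storage device.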

SQLite

This is a simple benchmark of SQLite. At present this test profile just measures the time to perform a pre-defined number of insertions on an indexed database. Learn more via the OpenBenchmarking.org test page.

SQLite 3.30.1 - Threads / Copies: 1 (Seconds, fewer is better)
  Default:                    29.25  (SE +/- 0.08, N = 3; min 29.1 / max 29.37)
  Ice Lake: Default:          32.59  (SE +/- 0.35, N = 15; min 31.64 / max 35.94)
  Ice Lake: mitigations=off:  36.47  (SE +/- 1.60, N = 15; min 31.45 / max 55.81)
  mitigations=off:            63.99  (SE +/- 0.54, N = 12; min 58.99 / max 65.38)
  1. (CC) gcc options: -O2 -lz -lm -ldl -lpthread
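The workload this profile times can be approximated with Python's built-in sqlite3 module: create an indexed table and time a fixed number of insertions. This is only a sketch of the same shape of workload, not the test profile's actual harness (which runs the SQLite C library against an on-disk database):

```python
import sqlite3
import time

def timed_insertions(rows=20_000):
    """Insert `rows` records into an indexed table and return elapsed
    seconds. An in-memory database is used here to keep the sketch
    self-contained; the real test writes to disk."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE t (id INTEGER, payload TEXT)")
    con.execute("CREATE INDEX idx_t_id ON t (id)")  # index maintained per insert
    start = time.perf_counter()
    for i in range(rows):
        con.execute("INSERT INTO t VALUES (?, ?)", (i, "x" * 32))
    con.commit()
    elapsed = time.perf_counter() - start
    con.close()
    return elapsed
```

Against an on-disk database each transaction also pays journaling and sync costs, which is why filesystem behavior can swamp the CPU-mitigation effect in this test.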

136 Results Shown

SQLite
Renaissance
ctx_clock
Stress-NG
Caffe
InfluxDB
GEGL
Sockperf
G'MIC
Selenium
GEGL
Sockperf
Selenium:
  Octane - Firefox
  Jetstream - Firefox
Zstd Compression
Timed Apache Compilation
Timed Linux Kernel Compilation
Selenium
Timed GDB GNU Debugger Compilation
Stress-NG
Caffe
Selenium:
  CanvasMark - Firefox
  ARES-6 - Firefox
Darktable
ASTC Encoder:
  Thorough
  Medium
PyPerformance
Selenium
Renaissance
Stress-NG
Selenium
GEGL
GIMP
ASTC Encoder
Selenium
RawTherapee
Renaissance
TensorFlow Lite
GEGL
Selenium
RNNoise
Facebook RocksDB
Mobile Neural Network:
  MobileNetV2_224
  resnet-v2-50
Ethr
InfluxDB
TensorFlow Lite
GEGL
PyPerformance:
  pickle_pure_python
  chaos
  django_template
  pathlib
TensorFlow Lite
Selenium
TensorFlow Lite
GEGL
PyPerformance:
  raytrace
  2to3
Selenium
TensorFlow Lite
Selenium
SQLite Speedtest
Selenium:
  Jetstream - Google Chrome
  ARES-6 - Google Chrome
GIMP
PyBench
PyPerformance:
  regex_compile
  go
GEGL
Darktable
librsvg
Stress-NG
G'MIC
NCNN
Mobile Neural Network
NCNN
Selenium
NCNN
Selenium
GIMP
Mobile Neural Network
Selenium
NCNN:
  CPU - yolov4-tiny
  CPU - resnet18
LibRaw
GEGL
PyPerformance
NCNN
PyPerformance
NCNN:
  CPU - squeezenet
  CPU - mobilenet
GIMP
Selenium
PyPerformance
Selenium
Darktable
Tesseract OCR
PyPerformance
NCNN
DeepSpeech
Renaissance
Git
LevelDB
GNU Octave Benchmark
Ethr
Selenium
Mobile Neural Network
Selenium
NCNN:
  CPU - efficientnet-b0
  CPU - mnasnet
  CPU - shufflenet-v2
  CPU-v3-v3 - mobilenet-v3
  CPU-v2-v2 - mobilenet-v2
Selenium
Facebook RocksDB:
  Read While Writing
  Rand Fill Sync
  Seq Fill
Stress-NG
LibreOffice
GEGL
TensorFlow Lite
LevelDB:
  Fill Sync:
    Microseconds Per Op
    MB/s
eSpeak-NG Speech Engine
Renaissance:
  In-Memory Database Shootout
  Scala Dotty
DaCapo Benchmark:
  Tradebeans
  Tradesoap
  Jython
  H2
OSBench
WireGuard + Linux Networking Stack Stress Test
FS-Mark:
  4000 Files, 32 Sub Dirs, 1MB Size
  5000 Files, 1MB Size, 4 Threads
  1000 Files, 1MB Size
SQLite