Tiger Lake CPU Security Mitigations

Tests for a future article.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2010267-FI-MIT49760730
This result file includes tests from the following categories:

Web Browsers: 1 test
Timed Code Compilation: 3 tests
C/C++ Compiler Tests: 5 tests
CPU Massive: 11 tests
Creator Workloads: 12 tests
Database Test Suite: 5 tests
Disk Test Suite: 2 tests
Go Language Tests: 2 tests
HPC - High Performance Computing: 7 tests
Imaging: 7 tests
Java: 2 tests
Common Kernel Benchmarks: 8 tests
Machine Learning: 6 tests
Multi-Core: 4 tests
Networking Test Suite: 2 tests
NVIDIA GPU Compute: 2 tests
Productivity: 5 tests
Programmer / Developer System Benchmarks: 8 tests
Python: 2 tests
Server: 5 tests
Server CPU Tests: 9 tests
Single-Threaded: 5 tests
Speech: 3 tests
Telephony: 3 tests

Test Runs

  Default:                   October 22 2020 - 11 Hours, 52 Minutes
  mitigations=off:           October 23 2020 - 12 Hours, 28 Minutes
  Ice Lake: Default:         October 24 2020 - 15 Hours, 39 Minutes
  Ice Lake: mitigations=off: October 25 2020 - 18 Hours, 8 Minutes


System Details

Tiger Lake system (Default and mitigations=off runs):
  Processor: Intel Core i7-1165G7 @ 4.70GHz (4 Cores / 8 Threads)
  Motherboard: Dell 0GG9PT (1.0.3 BIOS)
  Chipset: Intel Tiger Lake-LP
  Memory: 16GB
  Disk: Kioxia KBG40ZNS256G NVMe 256GB
  Graphics: Intel UHD 3GB (1300MHz)
  Audio: Realtek ALC289
  Network: Intel Wi-Fi 6 AX201
  OS: Ubuntu 20.10
  Kernel: 5.8.0-25-generic (x86_64)
  Desktop: GNOME Shell 3.38.1
  Display Server: X Server 1.20.9
  Display Driver: modesetting 1.20.9
  OpenGL: 4.6 Mesa 20.2.1
  Vulkan: 1.2.145
  Compiler: GCC 10.2.0
  File-System: ext4
  Screen Resolution: 1920x1200

Ice Lake system (differences from above):
  Processor: Intel Core i7-1065G7 @ 3.90GHz (4 Cores / 8 Threads)
  Motherboard: Dell 06CDVY (1.0.9 BIOS)
  Chipset: Intel Device 34ef
  Disk: Toshiba KBG40ZPZ512G NVMe 512GB
  Graphics: Intel Iris Plus G7 3GB (1100MHz)
  Network: Intel Killer Wi-Fi 6 AX1650i 160MHz

Compiler Details (GCC configure flags):
  --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Disk Details:
  NONE / errors=remount-ro,relatime,rw

Processor Details:
  Default: Scaling Governor: intel_pstate powersave - CPU Microcode: 0x60 - Thermald 2.3
  mitigations=off: Scaling Governor: intel_pstate powersave - CPU Microcode: 0x60 - Thermald 2.3
  Ice Lake: Default: Scaling Governor: intel_pstate powersave - CPU Microcode: 0x78 - Thermald 2.3
  Ice Lake: mitigations=off: Scaling Governor: intel_pstate powersave - CPU Microcode: 0x78 - Thermald 2.3

Java Details:
  OpenJDK Runtime Environment (build 11.0.9+10-post-Ubuntu-0ubuntu1)

Python Details:
  Python 3.8.6

Security Details:
  Default: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
  mitigations=off: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Vulnerable + spectre_v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers + spectre_v2: Vulnerable IBPB: disabled STIBP: disabled + srbds: Not affected + tsx_async_abort: Not affected
  Ice Lake: Default: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
  Ice Lake: mitigations=off: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Vulnerable + spectre_v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers + spectre_v2: Vulnerable IBPB: disabled STIBP: disabled + srbds: Not affected + tsx_async_abort: Not affected
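The mitigation status strings above come from the kernel's sysfs interface. As a rough illustration (this is not part of the Phoronix Test Suite tooling), a short script can collect the same information on any Linux system; the directory path is the standard sysfs location:

```python
import pathlib

def read_vulnerabilities(base="/sys/devices/system/cpu/vulnerabilities"):
    # Map each vulnerability name to the kernel's status string,
    # e.g. {"meltdown": "Not affected", "spec_store_bypass": "Vulnerable"}.
    path = pathlib.Path(base)
    if not path.is_dir():  # non-Linux system or a kernel predating this interface
        return {}
    return {f.name: f.read_text().strip()
            for f in sorted(path.iterdir()) if f.is_file()}
```

Booting with mitigations=off flips several of these entries from "Mitigation of ..." to "Vulnerable", which is exactly the difference between the Default and mitigations=off rows above.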

Result Overview (Phoronix Test Suite): normalized comparison of the four configurations (Default, mitigations=off, Ice Lake: Default, Ice Lake: mitigations=off) on a 100% to 238% scale, spanning the tested workloads from SQLite, LevelDB, FS-Mark, and ctx_clock through Git, GNU Octave Benchmark, and Caffe.

Detailed results table: raw per-test values for all four configurations (machine-readable export; individual result graphs follow below and the full set is available on OpenBenchmarking.org).

SQLite

This is a simple benchmark of SQLite. At present this test profile just measures the time to perform a pre-defined number of insertions on an indexed database. Learn more via the OpenBenchmarking.org test page.
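As a rough sketch of what such a test measures (not the actual PTS test profile), timing a fixed number of insertions on an indexed SQLite database might look like this; the table and index names are illustrative:

```python
import sqlite3
import time

def timed_inserts(n=100_000):
    # In-memory database with an indexed table, mirroring the test profile's
    # idea of timing a pre-defined number of insertions on an indexed database.
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")
    con.execute("CREATE INDEX idx_val ON t (val)")
    start = time.perf_counter()
    with con:  # one transaction; per-row commits would dominate the timing
        con.executemany("INSERT INTO t (val) VALUES (?)",
                        ((f"row-{i}",) for i in range(n)))
    elapsed = time.perf_counter() - start
    con.close()
    return elapsed
```

The on-disk variant of this workload is syscall-heavy (writes, fsyncs), which is one reason SQLite shows the largest mitigation penalty in this result file.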

SQLite 3.30.1 - Threads / Copies: 8 (Seconds, Fewer Is Better):
  Default:                    99.14 (SE +/- 0.09, N = 3; min 99.04 / max 99.32)
  mitigations=off:           256.20 (SE +/- 0.46, N = 3; min 255.39 / max 256.97)
  Ice Lake: Default:         106.75 (SE +/- 1.42, N = 3; min 103.9 / max 108.2)
  Ice Lake: mitigations=off: 107.50 (SE +/- 0.90, N = 3; min 105.79 / max 108.84)
  1. (CC) gcc options: -O2 -lz -lm -ldl -lpthread

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM, with workloads ranging from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.10.0 - Test: Twitter HTTP Requests (ms, Fewer Is Better):
  Default:                   2487.37 (SE +/- 10.30, N = 5; min 2462.06 / max 2511.52)
  mitigations=off:           2533.81 (SE +/- 11.39, N = 5; min 2511.41 / max 2569.64)
  Ice Lake: Default:         3842.16 (SE +/- 45.02, N = 5; min 3669.4 / max 3918.64)
  Ice Lake: mitigations=off: 3525.29 (SE +/- 33.64, N = 25; min 3285.93 / max 3843.65)

ctx_clock

Ctx_clock is a simple test program to measure the context switch time in clock cycles. Learn more via the OpenBenchmarking.org test page.
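ctx_clock itself is a small C program that counts clock cycles. A comparable, much coarser measurement can be sketched in Python by ping-ponging a byte between two processes over pipes, where each round trip forces at least two context switches (the function name and round count here are illustrative):

```python
import os
import time

def pipe_pingpong_ns(rounds=10_000):
    # Two pipes: parent -> child and child -> parent.
    r1, w1 = os.pipe()
    r2, w2 = os.pipe()
    pid = os.fork()
    if pid == 0:  # child: echo each byte straight back
        os.close(w1); os.close(r2)
        for _ in range(rounds):
            os.read(r1, 1)
            os.write(w2, b"x")
        os._exit(0)
    os.close(r1); os.close(w2)
    start = time.perf_counter_ns()
    for _ in range(rounds):
        os.write(w1, b"x")
        os.read(r2, 1)
    elapsed = time.perf_counter_ns() - start
    os.waitpid(pid, 0)
    # Each round trip involves at least two context switches.
    return elapsed / (2 * rounds)
```

This reports wall-clock nanoseconds rather than cycles, and it includes syscall overhead (itself inflated by mitigations), so it is only indicative of the effect ctx_clock isolates.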

ctx_clock - Context Switch Time (Clocks, Fewer Is Better):
  Default:                   128 (SE +/- 1.20, N = 3; min 126 / avg 128.33 / max 130)
  mitigations=off:           127 (SE +/- 1.53, N = 3; min 125 / avg 127 / max 130)
  Ice Lake: Default:          83 (SE +/- 1.33, N = 3; min 80 / avg 82.67 / max 84)
  Ice Lake: mitigations=off:  83 (SE +/- 0.67, N = 3; min 82 / avg 82.67 / max 84)

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07 - Test: Malloc (Bogo Ops/s, More Is Better):
  Default:                   31930889.51 (SE +/- 309611.94, N = 3; min 31595618.05 / max 32549383.8)
  mitigations=off:           31273086.72 (SE +/- 346720.57, N = 3; min 30908682.14 / max 31966221.91)
  Ice Lake: Default:         21289487.35 (SE +/- 206722.39, N = 3; min 21067814.84 / max 21702562.6)
  Ice Lake: mitigations=off: 21342273.20 (SE +/- 157319.06, N = 15; min 20999628.09 / max 23237262.92)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lcrypt -lrt -lz -ldl -lpthread -lc

Caffe

This is a benchmark of the Caffe deep learning framework; it currently supports the AlexNet and GoogleNet models, with execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.

Caffe 2020-02-13 - Model: AlexNet - Acceleration: CPU - Iterations: 100 (Milli-Seconds, Fewer Is Better):
  Default:                   84931 (SE +/- 561.91, N = 3; min 83813 / max 85589)
  mitigations=off:           89846 (SE +/- 434.17, N = 3; min 88986 / max 90380)
  Ice Lake: Default:         66912 (SE +/- 396.22, N = 3; min 66145 / max 67468)
  Ice Lake: mitigations=off: 64140 (SE +/- 873.74, N = 3; min 62393 / max 65059)
  1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

InfluxDB

This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.

InfluxDB 1.8.2 - Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, More Is Better; no Ice Lake: Default result was recorded):
  Default:                   869356.1 (SE +/- 2712.83, N = 3; min 865387.4 / max 874544.4)
  mitigations=off:           774218.3 (SE +/- 11766.77, N = 3; min 752154.4 / max 792339.6)
  Ice Lake: mitigations=off: 627537.6 (SE +/- 6927.38, N = 12; min 597046.6 / max 660271.8)

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL - Operation: Scale (Seconds, Fewer Is Better):
  Default:                   6.035 (SE +/- 0.054, N = 15; min 5.53 / max 6.35)
  mitigations=off:           6.205 (SE +/- 0.051, N = 13; min 5.68 / max 6.33)
  Ice Lake: Default:         8.172 (SE +/- 0.084, N = 12; min 7.26 / max 8.36)
  Ice Lake: mitigations=off: 8.360 (SE +/- 0.082, N = 15; min 7.38 / max 8.97)

Sockperf

This is a network socket API performance benchmark. Learn more via the OpenBenchmarking.org test page.
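Sockperf is a C++ tool; the ping-pong latency idea it implements can be sketched with a loopback UDP echo, where half the measured round-trip time approximates one-way latency. The names below are illustrative, not sockperf's API:

```python
import socket
import threading
import time

def udp_pingpong_usec(rounds=1_000):
    # Minimal loopback echo server standing in for sockperf's server side.
    srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    srv.bind(("127.0.0.1", 0))
    addr = srv.getsockname()

    def echo():
        for _ in range(rounds):
            data, peer = srv.recvfrom(64)
            srv.sendto(data, peer)

    t = threading.Thread(target=echo, daemon=True)
    t.start()

    cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    cli.settimeout(5)
    start = time.perf_counter()
    for _ in range(rounds):
        cli.sendto(b"ping", addr)
        cli.recvfrom(64)
    elapsed = time.perf_counter() - start
    t.join()
    srv.close(); cli.close()
    # Half the average round-trip time, in microseconds.
    return elapsed / rounds / 2 * 1e6
```

Every iteration crosses the kernel network stack four times (two sends, two receives), which is why socket benchmarks like this are sensitive to syscall-entry mitigation overhead.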

Sockperf 3.4 - Test: Throughput (Messages Per Second, More Is Better):
  Default:                   767117 (SE +/- 9040.07, N = 25; min 632628 / max 801744)
  mitigations=off:           738032 (SE +/- 8183.20, N = 5; min 713039 / max 752112)
  Ice Lake: Default:         554537 (SE +/- 2421.79, N = 5; min 547029 / max 560837)
  Ice Lake: mitigations=off: 582151 (SE +/- 6985.56, N = 5; min 559547 / max 596407)
  1. (CXX) g++ options: --param -O3 -rdynamic -ldl -lpthread

G'MIC

G'MIC is an open-source framework for image processing. Learn more via the OpenBenchmarking.org test page.

G'MIC - Test: 2D Function Plotting, 1000 Times (Seconds, Fewer Is Better):
  Default:                    96.95 (SE +/- 0.35, N = 3; min 96.43 / max 97.61)
  mitigations=off:            99.94 (SE +/- 0.53, N = 3; min 99.02 / max 100.85)
  Ice Lake: Default:         131.59 (SE +/- 1.69, N = 4; min 128.78 / max 136.1)
  Ice Lake: mitigations=off: 132.09 (SE +/- 1.58, N = 12; min 126.6 / max 143.73)
  1. Version 2.4.5, Copyright (c) 2008-2019, David Tschumperle.

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: Kraken - Browser: Firefox (ms, Fewer Is Better):
  Default:                   671.9 (SE +/- 1.35, N = 3; min 670.2 / max 674.6)
  mitigations=off:           610.5 (SE +/- 0.73, N = 3; min 609.2 / max 611.7)
  Ice Lake: Default:         830.7 (SE +/- 1.20, N = 3; min 828.3 / max 832.2)
  Ice Lake: mitigations=off: 752.7 (SE +/- 1.72, N = 3; min 749.5 / max 755.4)
  Browser versions: Default / mitigations=off: firefox 81.0.2; Ice Lake runs: firefox 82.0

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL - Operation: Crop (Seconds, Fewer Is Better):
  Default:                   7.362 (SE +/- 0.018, N = 3; min 7.33 / max 7.39)
  mitigations=off:           7.648 (SE +/- 0.062, N = 3; min 7.57 / max 7.77)
  Ice Lake: Default:         9.591 (SE +/- 0.077, N = 15; min 8.87 / max 9.79)
  Ice Lake: mitigations=off: 9.921 (SE +/- 0.104, N = 12; min 9.06 / max 10.65)

Sockperf

This is a network socket API performance benchmark. Learn more via the OpenBenchmarking.org test page.

Sockperf 3.4 - Test: Latency Ping Pong (usec, Fewer Is Better):
  Default:                   2.861 (SE +/- 0.009, N = 5; min 2.84 / max 2.89)
  mitigations=off:           2.804 (SE +/- 0.028, N = 8; min 2.67 / max 2.88)
  Ice Lake: Default:         3.721 (SE +/- 0.020, N = 5; min 3.68 / max 3.79)
  Ice Lake: mitigations=off: 3.557 (SE +/- 0.034, N = 25; min 3.44 / max 4.09)
  1. (CXX) g++ options: --param -O3 -rdynamic -ldl -lpthread

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: Octane - Browser: Firefox (Geometric Mean, More Is Better):
  Default:                   39987 (SE +/- 455.76, N = 3; min 39385 / max 40881)
  mitigations=off:           45545 (SE +/- 534.74, N = 3; min 44892 / max 46605)
  Ice Lake: Default:         34398 (SE +/- 70.32, N = 3; min 34285 / max 34527)
  Ice Lake: mitigations=off: 39712 (SE +/- 94.24, N = 3; min 39578 / max 39894)
  Browser versions: Default / mitigations=off: firefox 81.0.2; Ice Lake runs: firefox 82.0

Selenium - Benchmark: Jetstream - Browser: Firefox (Score, More Is Better):
  Default:                   237.75 (SE +/- 0.24, N = 3; min 237.28 / max 238)
  mitigations=off:           263.99 (SE +/- 0.84, N = 3; min 262.63 / max 265.53)
  Ice Lake: Default:         199.70 (SE +/- 0.17, N = 3; min 199.47 / max 200.02)
  Ice Lake: mitigations=off: 224.67 (SE +/- 0.29, N = 3; min 224.11 / max 225.09)
  Browser versions: Default / mitigations=off: firefox 81.0.2; Ice Lake runs: firefox 82.0

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.
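The benchmark's metric is simply bytes processed per second of compression time. A minimal sketch of that calculation, using Python's built-in zlib as a stand-in since Zstd's Python bindings are a third-party dependency:

```python
import time
import zlib

def compress_mbps(data, level=3):
    # Time a single compress() call; report throughput in MB/s
    # plus the compression ratio for context.
    start = time.perf_counter()
    out = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    return len(data) / (1024 * 1024) / elapsed, len(data) / len(out)
```

The real test compresses an Ubuntu ISO with zstd itself; the point of the sketch is only the MB/s arithmetic, not the codec.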

Zstd Compression 1.4.5 - Compression Level: 3 (MB/s, More Is Better):
  Default:                   4153.2 (SE +/- 13.27, N = 3; min 4127.9 / max 4172.8)
  mitigations=off:           4093.1 (SE +/- 4.52, N = 3; min 4086.6 / max 4101.8)
  Ice Lake: Default:         3179.3 (SE +/- 5.57, N = 3; min 3169.5 / max 3188.8)
  Ice Lake: mitigations=off: 3165.2 (SE +/- 7.13, N = 3; min 3152.8 / max 3177.5)
  1. (CC) gcc options: -O3 -pthread -lz -llzma

Timed Apache Compilation

This test times how long it takes to build the Apache HTTPD web server. Learn more via the OpenBenchmarking.org test page.
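The timed-compilation profiles in this article (Apache here, plus the Linux kernel and GDB below) all reduce to wall-clocking a build command. A minimal sketch, assuming nothing about the PTS internals:

```python
import subprocess
import time

def time_command(argv):
    # Wall-clock one run of a command, discarding its output; the PTS
    # timed-build profiles measure end-to-end wall time in the same spirit.
    start = time.perf_counter()
    subprocess.run(argv, check=True, capture_output=True)
    return time.perf_counter() - start
```

For example, `time_command(["make", "-j8"])` in a prepared source tree would give one build-time sample; the test suite repeats such runs to produce the averages and SE values shown.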

Timed Apache Compilation 2.4.41 - Time To Compile (Seconds, Fewer Is Better):
  Default:                   33.65 (SE +/- 0.30, N = 3; min 33.05 / max 33.95)
  mitigations=off:           35.08 (SE +/- 0.32, N = 3; min 34.46 / max 35.47)
  Ice Lake: Default:         44.12 (SE +/- 0.64, N = 4; min 42.64 / max 45.73)
  Ice Lake: mitigations=off: 43.62 (SE +/- 0.48, N = 3; min 42.68 / max 44.26)

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 5.4 - Time To Compile (Seconds, Fewer Is Better):
  Default:                   252.14 (SE +/- 0.66, N = 3; min 251.38 / max 253.46)
  mitigations=off:           255.55 (SE +/- 0.61, N = 3; min 254.85 / max 256.77)
  Ice Lake: Default:         324.81 (SE +/- 0.97, N = 3; min 323.13 / max 326.5)
  Ice Lake: mitigations=off: 329.50 (SE +/- 1.55, N = 3; min 327.4 / max 332.53)

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: WASM collisionDetection - Browser: Firefox (ms, Fewer Is Better):
  Default:                   328.1 (SE +/- 3.80, N = 3; min 323.4 / max 335.6)
  mitigations=off:           320.9 (SE +/- 0.56, N = 3; min 320.1 / max 322)
  Ice Lake: Default:         415.4 (SE +/- 2.08, N = 3; min 411.8 / max 419)
  Ice Lake: mitigations=off: 417.5 (SE +/- 1.82, N = 3; min 413.9 / max 419.5)
  Browser versions: Default / mitigations=off: firefox 81.0.2; Ice Lake runs: firefox 82.0

Timed GDB GNU Debugger Compilation

This test times how long it takes to build the GNU Debugger (GDB) in a default configuration. Learn more via the OpenBenchmarking.org test page.

Timed GDB GNU Debugger Compilation 9.1 - Time To Compile (Seconds, Fewer Is Better):
  Default:                   168.62 (SE +/- 0.25, N = 3; min 168.36 / max 169.12)
  mitigations=off:           173.77 (SE +/- 0.29, N = 3; min 173.21 / max 174.18)
  Ice Lake: Default:         212.57 (SE +/- 0.47, N = 3; min 211.69 / max 213.28)
  Ice Lake: mitigations=off: 217.67 (SE +/- 2.35, N = 3; min 214.12 / max 222.1)

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07 - Test: Forking (Bogo Ops/s, More Is Better):
  Default:                   38181.93 (SE +/- 344.79, N = 3; min 37612.54 / max 38803.49)
  mitigations=off:           35111.57 (SE +/- 220.10, N = 3; min 34796.09 / max 35535.18)
  Ice Lake: Default:         29707.30 (SE +/- 369.67, N = 3; min 29039.99 / max 30316.6)
  Ice Lake: mitigations=off: 30270.52 (SE +/- 276.55, N = 3; min 29905.28 / max 30812.85)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lcrypt -lrt -lz -ldl -lpthread -lc

Caffe

This is a benchmark of the Caffe deep learning framework; it currently supports the AlexNet and GoogleNet models, with execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.

Caffe 2020-02-13 - Model: GoogleNet - Acceleration: CPU - Iterations: 100 (Milli-Seconds, Fewer Is Better):
  Default:                   216420 (SE +/- 393.32, N = 3; min 215784 / max 217139)
  mitigations=off:           226788 (SE +/- 351.65, N = 3; min 226174 / max 227392)
  Ice Lake: Default:         179934 (SE +/- 269.90, N = 3; min 179544 / max 180452)
  Ice Lake: mitigations=off: 176908 (SE +/- 739.79, N = 3; min 175487 / max 177975)
  1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: CanvasMark - Browser: Firefox (Score, More Is Better):
  Default:                   14748 (SE +/- 69.48, N = 3; min 14620 / max 14859)
  mitigations=off:           15233 (SE +/- 65.36, N = 3; min 15147 / max 15361)
  Ice Lake: Default:         11966 (SE +/- 98.75, N = 3; min 11820 / max 12154)
  Ice Lake: mitigations=off: 12286 (SE +/- 38.97, N = 3; min 12219 / max 12354)
  Browser versions: Default / mitigations=off: firefox 81.0.2; Ice Lake runs: firefox 82.0

OpenBenchmarking.orgms, Fewer Is BetterSeleniumBenchmark: ARES-6 - Browser: FirefoxDefaultmitigations=offIce Lake: DefaultIce Lake: mitigations=off1020304050SE +/- 0.37, N = 3SE +/- 0.31, N = 3SE +/- 0.19, N = 3SE +/- 0.32, N = 335.9534.2243.5640.091. Default: firefox 81.0.22. mitigations=off: firefox 81.0.23. Ice Lake: Default: firefox 82.04. Ice Lake: mitigations=off: firefox 82.0
OpenBenchmarking.orgms, Fewer Is BetterSeleniumBenchmark: ARES-6 - Browser: FirefoxDefaultmitigations=offIce Lake: DefaultIce Lake: mitigations=off918273645Min: 35.37 / Avg: 35.95 / Max: 36.63Min: 33.8 / Avg: 34.22 / Max: 34.82Min: 43.3 / Avg: 43.56 / Max: 43.93Min: 39.57 / Avg: 40.09 / Max: 40.681. Default: firefox 81.0.22. mitigations=off: firefox 81.0.23. Ice Lake: Default: firefox 82.04. Ice Lake: mitigations=off: firefox 82.0

Darktable

Darktable is an open-source photography workflow application. This test will use any system-installed Darktable program, or on Windows will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.2.1, Test: Masskrug, Acceleration: CPU-only (Seconds, Fewer Is Better; avg ± SE)
  Default:                    9.141 ± 0.128 (N = 13; Min: 7.64 / Max: 9.51)
  mitigations=off:            9.332 ± 0.139 (N = 12; Min: 7.81 / Max: 9.54)
  Ice Lake: Default:          11.618 ± 0.109 (N = 12; Min: 10.43 / Max: 11.76)
  Ice Lake: mitigations=off:  11.571 ± 0.087 (N = 15; Min: 10.45 / Max: 11.77)

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.0, Preset: Thorough (Seconds, Fewer Is Better; avg ± SE)
  Default:                    87.10 ± 0.46 (N = 3; Min: 86.22 / Max: 87.75)
  mitigations=off:            88.49 ± 0.39 (N = 3; Min: 87.71 / Max: 88.92)
  Ice Lake: Default:          110.44 ± 0.42 (N = 3; Min: 109.61 / Max: 110.87)
  Ice Lake: mitigations=off:  110.28 ± 0.53 (N = 3; Min: 109.22 / Max: 110.91)
  1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

ASTC Encoder 2.0, Preset: Medium (Seconds, Fewer Is Better; avg ± SE)
  Default:                    11.75 ± 0.08 (N = 15; Min: 10.70 / Max: 12.18)
  mitigations=off:            11.88 ± 0.10 (N = 12; Min: 10.76 / Max: 12.03)
  Ice Lake: Default:          14.85 ± 0.15 (N = 8; Min: 13.77 / Max: 15.03)
  Ice Lake: mitigations=off:  14.84 ± 0.15 (N = 8; Min: 13.79 / Max: 15.01)
  1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0, Benchmark: python_startup (Milliseconds, Fewer Is Better; avg ± SE)
  Default:                    6.36 ± 0.02 (N = 3; Min: 6.33 / Max: 6.38)
  mitigations=off:            6.67 ± 0.02 (N = 3; Min: 6.64 / Max: 6.69)
  Ice Lake: Default:          8.00 ± 0.02 (N = 3; Min: 7.97 / Max: 8.02)
  Ice Lake: mitigations=off:  7.87 ± 0.01 (N = 3; Min: 7.86 / Max: 7.88)

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers. Learn more via the OpenBenchmarking.org test page.

Selenium, Benchmark: Jetstream 2, Browser: Firefox (Score, More Is Better; avg ± SE)
  Default:                    108.42 ± 0.86 (N = 3; Min: 106.71 / Max: 109.40)
  mitigations=off:            115.08 ± 1.47 (N = 5; Min: 112.24 / Max: 119.73)
  Ice Lake: Default:          91.65 ± 0.36 (N = 3; Min: 90.93 / Max: 92.03)
  Ice Lake: mitigations=off:  99.72 ± 0.55 (N = 3; Min: 98.68 / Max: 100.58)
  Browsers: firefox 81.0.2 (Tiger Lake runs), firefox 82.0 (Ice Lake runs)

Renaissance

Renaissance is a suite of benchmarks designed to exercise the Java JVM, covering workloads from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.10.0, Test: Apache Spark ALS (ms, Fewer Is Better; avg ± SE)
  Default:                    3187.11 ± 17.71 (N = 5; Min: 3132.72 / Max: 3231.56)
  mitigations=off:            3388.54 ± 34.32 (N = 25; Min: 3003.66 / Max: 3702.41)
  Ice Lake: Default:          3982.65 ± 36.06 (N = 20; Min: 3638.03 / Max: 4222.22)
  Ice Lake: mitigations=off:  3873.67 ± 43.24 (N = 18; Min: 3539.65 / Max: 4153.31)

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07, Test: Context Switching (Bogo Ops/s, More Is Better; avg ± SE)
  Default:                    1413871.94 ± 19012.48 (N = 3; Min: 1386928.15 / Max: 1450580.61)
  mitigations=off:            1250676.61 ± 13179.93 (N = 7; Min: 1206267.88 / Max: 1297748.64)
  Ice Lake: Default:          1131793.83 ± 17116.57 (N = 3; Min: 1107449.86 / Max: 1164809.49)
  Ice Lake: mitigations=off:  1251244.80 ± 16138.47 (N = 3; Min: 1226301.61 / Max: 1281456.98)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lcrypt -lrt -lz -ldl -lpthread -lc
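For a more-is-better metric like Bogo Ops/s the cost works the other way around: the throughput lost with mitigations enabled is (off − default) / off. A sketch using the Ice Lake context-switching averages above:

```python
def throughput_loss_pct(default_avg: float, off_avg: float) -> float:
    """Percent throughput lost with mitigations enabled, for a
    more-is-better metric such as Bogo Ops/s."""
    return (off_avg - default_avg) / off_avg * 100.0

# Ice Lake Stress-NG context-switching averages from this result file
print(round(throughput_loss_pct(1131793.83, 1251244.80), 2))  # ≈ 9.55
```

Context switching is exactly the kind of kernel-entry-heavy workload where page-table isolation and related mitigations are expected to show up most strongly, which is consistent with the near double-digit delta here.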

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers. Learn more via the OpenBenchmarking.org test page.

Selenium, Benchmark: WebXPRT, Browser: Google Chrome (Score, More Is Better; avg ± SE)
  Default:                    281 ± 0.58 (N = 3; Min: 280 / Max: 282)
  mitigations=off:            269 ± 1.00 (N = 3; Min: 267 / Max: 270)
  Ice Lake: Default:          225 ± 2.08 (N = 3; Min: 222 / Max: 229)
  Ice Lake: mitigations=off:  227 (error data not recorded in this file)
  Browser: chrome 86.0.4240.111

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL, Operation: Cartoon (Seconds, Fewer Is Better; avg ± SE)
  Default:                    78.38 ± 0.33 (N = 3; Min: 77.77 / Max: 78.91)
  mitigations=off:            81.68 ± 0.18 (N = 3; Min: 81.34 / Max: 81.96)
  Ice Lake: Default:          92.52 ± 0.77 (N = 3; Min: 91.14 / Max: 93.80)
  Ice Lake: mitigations=off:  97.56 ± 1.15 (N = 3; Min: 95.33 / Max: 99.11)

GIMP

GIMP is an open-source image manipulation program. This test profile will use the system-provided GIMP program, or on Windows relies upon a pre-packaged binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.18, Test: resize (Seconds, Fewer Is Better; avg ± SE)
  Default:                    9.366 ± 0.066 (N = 15; Min: 8.55 / Max: 9.72)
  mitigations=off:            9.734 ± 0.100 (N = 12; Min: 8.66 / Max: 9.98)
  Ice Lake: Default:          11.644 ± 0.108 (N = 12; Min: 10.57 / Max: 12.01)
  Ice Lake: mitigations=off:  11.475 ± 0.113 (N = 9; Min: 10.57 / Max: 11.62)

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.0, Preset: Fast (Seconds, Fewer Is Better; avg ± SE)
  Default:                    6.88 ± 0.06 (N = 15; Min: 6.13 / Max: 7.15)
  mitigations=off:            7.10 ± 0.06 (N = 15; Min: 6.30 / Max: 7.32)
  Ice Lake: Default:          8.55 ± 0.12 (N = 12; Min: 7.22 / Max: 8.84)
  Ice Lake: mitigations=off:  8.52 ± 0.10 (N = 14; Min: 7.33 / Max: 8.82)
  1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers. Learn more via the OpenBenchmarking.org test page.

Selenium, Benchmark: WebXPRT, Browser: Firefox (Score, More Is Better; avg ± SE)
  Default:                    288.00 ± 1.00 (N = 2; Min: 287 / Max: 289)
  mitigations=off:            299.33 ± 0.33 (N = 3; Min: 299 / Max: 300)
  Ice Lake: Default:          240.67 ± 0.88 (N = 3; Min: 239 / Max: 242)
  Ice Lake: mitigations=off:  264.00 ± 1.53 (N = 3; Min: 261 / Max: 266)
  Browsers: firefox 81.0.2 (Tiger Lake runs), firefox 82.0 (Ice Lake runs)

RawTherapee

RawTherapee is a cross-platform, open-source multi-threaded RAW image processing program. Learn more via the OpenBenchmarking.org test page.

RawTherapee, Total Benchmark Time (Seconds, Fewer Is Better; avg ± SE)
  Default:                    99.06 ± 0.40 (N = 3; Min: 98.26 / Max: 99.48)
  mitigations=off:            101.52 ± 0.67 (N = 3; Min: 100.21 / Max: 102.38)
  Ice Lake: Default:          122.24 ± 1.39 (N = 6; Min: 115.28 / Max: 123.78)
  Ice Lake: mitigations=off:  118.84 ± 1.04 (N = 11; Min: 109.07 / Max: 120.66)
  1. RawTherapee, version 5.8, command line.

Renaissance

Renaissance is a suite of benchmarks designed to exercise the Java JVM, covering workloads from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.10.0, Test: Random Forest (ms, Fewer Is Better; avg ± SE)
  Default:                    2138.84 ± 23.75 (N = 25; Min: 1908.57 / Max: 2386.27)
  mitigations=off:            2187.74 ± 21.76 (N = 8; Min: 2112.39 / Max: 2273.88)
  Ice Lake: Default:          2560.72 ± 24.37 (N = 5; Min: 2497.77 / Max: 2612.35)
  Ice Lake: mitigations=off:  2633.24 ± 36.40 (N = 15; Min: 2350.31 / Max: 2852.48)

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23, Model: Mobilenet Quant (Microseconds, Fewer Is Better; avg ± SE)
  Default:                    372045.00 ± 2509.46 (N = 3; Min: 367028 / Max: 374674)
  mitigations=off:            394513.67 ± 2832.59 (N = 3; Min: 388866 / Max: 397723)
  Ice Lake: Default:          455175.00 ± 3093.36 (N = 3; Min: 449010 / Max: 458706)
  Ice Lake: mitigations=off:  456401.33 ± 3375.54 (N = 3; Min: 449652 / Max: 459909)

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL, Operation: Tile Glass (Seconds, Fewer Is Better; avg ± SE)
  Default:                    26.74 ± 0.28 (N = 3; Min: 26.20 / Max: 27.13)
  mitigations=off:            28.13 ± 0.26 (N = 3; Min: 27.72 / Max: 28.60)
  Ice Lake: Default:          31.43 ± 0.45 (N = 4; Min: 30.49 / Max: 32.66)
  Ice Lake: mitigations=off:  32.77 ± 0.37 (N = 15; Min: 30.89 / Max: 35.21)

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers. Learn more via the OpenBenchmarking.org test page.

Selenium, Benchmark: WASM collisionDetection, Browser: Google Chrome (ms, Fewer Is Better; avg ± SE)
  Default:                    281.39 ± 0.17 (N = 3; Min: 281.15 / Max: 281.72)
  mitigations=off:            283.51 ± 0.25 (N = 3; Min: 283.21 / Max: 284.01)
  Ice Lake: Default:          344.56 ± 0.58 (N = 3; Min: 343.72 / Max: 345.66)
  Ice Lake: mitigations=off:  341.81 ± 0.24 (N = 3; Min: 341.41 / Max: 342.26)
  Browser: chrome 86.0.4240.111

RNNoise

RNNoise is a recurrent neural network for audio noise reduction developed by Mozilla and Xiph.Org. This test profile is a single-threaded test measuring the time to denoise a sample 26 minute long 16-bit RAW audio file using this recurrent neural network noise suppression library. Learn more via the OpenBenchmarking.org test page.

RNNoise 2020-06-28 (Seconds, Fewer Is Better; avg ± SE)
  Default:                    21.01 ± 0.06 (N = 3; Min: 20.90 / Max: 21.09)
  mitigations=off:            21.69 ± 0.32 (N = 3; Min: 21.09 / Max: 22.20)
  Ice Lake: Default:          25.53 ± 0.01 (N = 3; Min: 25.51 / Max: 25.55)
  Ice Lake: mitigations=off:  25.71 ± 0.19 (N = 3; Min: 25.33 / Max: 25.98)
  1. (CC) gcc options: -O2 -pedantic -fvisibility=hidden
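Since the workload is a fixed 26-minute (1,560-second) audio clip, the denoise times above translate directly into a realtime factor, i.e. how many seconds of audio are processed per second of wall time:

```python
CLIP_SECONDS = 26 * 60  # length of the 16-bit RAW sample used by this test profile

def realtime_factor(denoise_seconds: float) -> float:
    """Seconds of audio denoised per second of wall-clock time."""
    return CLIP_SECONDS / denoise_seconds

# Tiger Lake default vs. Ice Lake default averages from above
print(round(realtime_factor(21.01)))  # ≈ 74x realtime
print(round(realtime_factor(25.53)))  # ≈ 61x realtime
```

Either way this single-threaded workload runs far faster than realtime, so the mitigation deltas here matter mostly as a proxy for single-core compute throughput.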

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

Facebook RocksDB 6.3.6, Test: Random Read (Op/s, More Is Better; avg ± SE)
  Default:                    15791135.67 ± 224908.53 (N = 3; Min: 15437905 / Max: 16208941)
  mitigations=off:            15304072.33 ± 150814.30 (N = 3; Min: 15125634 / Max: 15603897)
  Ice Lake: Default:          12925515.33 ± 123171.35 (N = 15; Min: 12559704 / Max: 14496776)
  Ice Lake: mitigations=off:  13732079.87 ± 208149.62 (N = 15; Min: 13179330 / Max: 16527585)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fno-builtin-memcmp -fno-rtti -rdynamic -lpthread

Mobile Neural Network

MNN is the Mobile Neural Network, a highly efficient and lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17, Model: MobileNetV2_224 (ms, Fewer Is Better; avg ± SE)
  Default:                    6.163 ± 0.008 (N = 3; Min: 6.15 / Max: 6.18; per-inference Min: 6.07 / Max: 22.29)
  mitigations=off:            6.896 ± 0.060 (N = 3; Min: 6.83 / Max: 7.02; per-inference Min: 6.03 / Max: 21.38)
  Ice Lake: Default:          7.487 ± 0.025 (N = 3; Min: 7.46 / Max: 7.54; per-inference Min: 7.00 / Max: 25.22)
  Ice Lake: mitigations=off:  7.529 ± 0.011 (N = 3; Min: 7.52 / Max: 7.55; per-inference Min: 6.99 / Max: 23.15)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2020-09-17, Model: resnet-v2-50 (ms, Fewer Is Better; avg ± SE)
  Default:                    54.54 ± 0.05 (N = 3; Min: 54.48 / Max: 54.64; per-inference Min: 52.47 / Max: 71.35)
  mitigations=off:            57.67 ± 0.10 (N = 3; Min: 57.47 / Max: 57.77; per-inference Min: 52.17 / Max: 88.00)
  Ice Lake: Default:          66.59 ± 0.34 (N = 3; Min: 66.23 / Max: 67.27; per-inference Min: 62.42 / Max: 109.96)
  Ice Lake: mitigations=off:  63.44 ± 0.20 (N = 3; Min: 63.08 / Max: 63.77; per-inference Min: 56.81 / Max: 91.45)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Ethr

Ethr is a cross-platform network performance measurement tool written in Go and developed by Microsoft, capable of testing multiple protocols and taking different measurements. Learn more via the OpenBenchmarking.org test page.

Ethr 2019-01-02, Server Address: localhost, Protocol: HTTP, Test: Bandwidth, Threads: 1 (Mbits/sec, More Is Better; avg ± SE)
  Default:                    1384.21 ± 0.30 (N = 3; Min: 1383.68 / Max: 1384.74; sample Min: 1380 / Max: 1400)
  mitigations=off:            1397.02 ± 1.23 (N = 3; Min: 1394.74 / Max: 1398.95; sample Min: 1390 / Max: 1410)
  Ice Lake: Default:          1645.61 ± 3.86 (N = 3; Min: 1637.89 / Max: 1649.47; sample Min: 1630 / Max: 1660)
  Ice Lake: mitigations=off:  1688.07 ± 1.15 (N = 3; Min: 1685.79 / Max: 1689.47; sample Min: 1670 / Max: 1700)

InfluxDB

This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.

InfluxDB 1.8.2, Concurrent Streams: 64, Batch Size: 10000, Tags: 2,5000,1, Points Per Series: 10000 (val/sec, More Is Better; avg ± SE)
  Default:                    881790.23 ± 1914.13 (N = 3; Min: 879405.3 / Max: 885576.1)
  mitigations=off:            860922.00 ± 4238.82 (N = 3; Min: 856190.0 / Max: 869379.7)
  Ice Lake: mitigations=off:  723413.23 ± 1358.93 (N = 3; Min: 720712.5 / Max: 725027.4)
  (No Ice Lake: Default result is recorded in this file for this test.)

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23, Model: NASNet Mobile (Microseconds, Fewer Is Better; avg ± SE)
  Default:                    406430.00 ± 2818.20 (N = 3; Min: 400797 / Max: 409416)
  mitigations=off:            430448.67 ± 3141.38 (N = 3; Min: 424167 / Max: 433691)
  Ice Lake: Default:          494187.33 ± 3688.05 (N = 3; Min: 486812 / Max: 497967)
  Ice Lake: mitigations=off:  495179.00 ± 2851.71 (N = 3; Min: 489486 / Max: 498324)

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL, Operation: Antialias (Seconds, Fewer Is Better; avg ± SE)
  Default:                    33.87 ± 0.23 (N = 3; Min: 33.41 / Max: 34.12)
  mitigations=off:            36.80 ± 0.59 (N = 3; Min: 35.61 / Max: 37.46)
  Ice Lake: Default:          41.11 ± 0.44 (N = 15; Min: 40.01 / Max: 44.91)
  Ice Lake: mitigations=off:  40.44 ± 0.31 (N = 3; Min: 39.81 / Max: 40.77)

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0, Benchmark: pickle_pure_python (Milliseconds, Fewer Is Better; avg ± SE)
  Default:                    334.33 ± 3.18 (N = 3; Min: 328 / Max: 338)
  mitigations=off:            341.33 ± 4.70 (N = 3; Min: 332 / Max: 347)
  Ice Lake: Default:          404.67 ± 3.33 (N = 3; Min: 398 / Max: 408)
  Ice Lake: mitigations=off:  399.33 ± 1.67 (N = 3; Min: 396 / Max: 401)

PyPerformance 1.0.0, Benchmark: chaos (Milliseconds, Fewer Is Better; avg ± SE)
  Default:                    83.17 ± 0.19 (N = 3; Min: 82.8 / Max: 83.4)
  mitigations=off:            85.37 ± 0.61 (N = 3; Min: 84.3 / Max: 86.4)
  Ice Lake: Default:          100.80 ± 0.76 (N = 3; Min: 99.4 / Max: 102.0)
  Ice Lake: mitigations=off:  98.27 ± 0.19 (N = 3; Min: 97.9 / Max: 98.5)

PyPerformance 1.0.0, Benchmark: django_template (Milliseconds, Fewer Is Better; avg ± SE)
  Default:                    38.20 ± 0.20 (N = 3; Min: 37.8 / Max: 38.4)
  mitigations=off:            39.27 ± 0.35 (N = 3; Min: 38.6 / Max: 39.8)
  Ice Lake: Default:          46.23 ± 0.23 (N = 3; Min: 45.8 / Max: 46.6)
  Ice Lake: mitigations=off:  44.93 ± 0.18 (N = 3; Min: 44.6 / Max: 45.2)

PyPerformance 1.0.0, Benchmark: pathlib (Milliseconds, Fewer Is Better; avg ± SE)
  Default:                    13.87 ± 0.09 (N = 3; Min: 13.7 / Max: 14.0)
  mitigations=off:            14.20 ± 0.15 (N = 3; Min: 13.9 / Max: 14.4)
  Ice Lake: Default:          16.77 ± 0.09 (N = 3; Min: 16.6 / Max: 16.9)
  Ice Lake: mitigations=off:  16.13 ± 0.03 (N = 3; Min: 16.1 / Max: 16.2)
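The result viewer offers an overall geometric mean across tests; for just the four PyPerformance benchmarks above it can be reproduced by hand. A minimal sketch using the Ice Lake averages from this file, taking the ratio of each mitigations=off time to the default time (values below 1.0 mean mitigations=off is faster):

```python
import math

# (default_avg, mitigations_off_avg) in ms for the Ice Lake PyPerformance runs above
pairs = [
    (404.67, 399.33),  # pickle_pure_python
    (100.80, 98.27),   # chaos
    (46.23, 44.93),    # django_template
    (16.77, 16.13),    # pathlib
]

ratios = [off / default for default, off in pairs]
geomean = math.exp(sum(map(math.log, ratios)) / len(ratios))
print(round(geomean, 3))  # ≈ 0.974, i.e. roughly 2.6% faster with mitigations off
```

A geometric mean of ratios is used rather than an arithmetic mean so that no single benchmark's absolute scale dominates the summary.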

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23, Model: Inception ResNet V2 (Microseconds, Fewer Is Better; avg ± SE)
  Default:                    7382540.00 ± 5564.33 (N = 3; Min: 7371560 / Max: 7389600)
  mitigations=off:            7765500.00 ± 1537.93 (N = 3; Min: 7762510 / Max: 7767620)
  Ice Lake: Default:          8917553.33 ± 2473.50 (N = 3; Min: 8912610 / Max: 8920190)
  Ice Lake: mitigations=off:  8905090.00 ± 14703.15 (N = 3; Min: 8875720 / Max: 8921040)

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers. Learn more via the OpenBenchmarking.org test page.

Selenium, Benchmark: Kraken, Browser: Google Chrome (ms, Fewer Is Better; avg ± SE)
  Default:                    668.23 ± 1.55 (N = 3; Min: 665.8 / Max: 671.1)
  mitigations=off:            701.83 ± 9.46 (N = 3; Min: 688.4 / Max: 720.1)
  Ice Lake: Default:          799.30 ± 2.47 (N = 3; Min: 796.3 / Max: 804.2)
  Ice Lake: mitigations=off:  805.97 ± 3.34 (N = 3; Min: 799.3 / Max: 809.6)
  Browser: chrome 86.0.4240.111

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23, Model: Inception V4 (Microseconds, Fewer Is Better; avg ± SE)
  Default:                    8166906.67 ± 4887.54 (N = 3; Min: 8158390 / Max: 8175320)
  mitigations=off:            8582943.33 ± 5358.28 (N = 3; Min: 8575380 / Max: 8593300)
  Ice Lake: Default:          9847253.33 ± 2111.67 (N = 3; Min: 9843520 / Max: 9850830)
  Ice Lake: mitigations=off:  9727076.67 ± 15084.72 (N = 3; Min: 9707650 / Max: 9756780)

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL, Operation: Color Enhance (Seconds, Fewer Is Better; avg ± SE)
  Default:                    48.38 ± 0.18 (N = 3; Min: 48.04 / Max: 48.65)
  mitigations=off:            51.64 ± 0.59 (N = 3; Min: 50.76 / Max: 52.76)
  Ice Lake: Default:          58.10 ± 0.65 (N = 6; Min: 56.97 / Max: 61.13)
  Ice Lake: mitigations=off:  58.33 ± 0.83 (N = 4; Min: 56.59 / Max: 60.57)

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0, Benchmark: raytrace (Milliseconds, Fewer Is Better; avg ± SE)
  Default:                    380.67 ± 0.88 (N = 3; Min: 379 / Max: 382)
  mitigations=off:            391.33 ± 1.76 (N = 3; Min: 388 / Max: 394)
  Ice Lake: Default:          459.00 ± 1.15 (N = 3; Min: 457 / Max: 461)
  Ice Lake: mitigations=off:  443.00 ± 0.58 (N = 3; Min: 442 / Max: 444)

PyPerformance 1.0.0, Benchmark: 2to3 (Milliseconds, Fewer Is Better; avg ± SE)
  Default:                    252.33 ± 1.20 (N = 3; Min: 250 / Max: 254)
  mitigations=off:            262.33 ± 1.67 (N = 3; Min: 259 / Max: 264)
  Ice Lake: Default:          303.33 ± 1.20 (N = 3; Min: 301 / Max: 305)
  Ice Lake: mitigations=off:  297.67 ± 1.33 (N = 3; Min: 295 / Max: 299)

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers. Learn more via the OpenBenchmarking.org test page.

Selenium, Benchmark: WASM imageConvolute, Browser: Firefox (ms, Fewer Is Better; avg ± SE)
  Default:                    27.94 ± 0.31 (N = 7; Min: 27.2 / Max: 29.5)
  mitigations=off:            25.30 ± 0.25 (N = 3; Min: 25.0 / Max: 25.8)
  Ice Lake: Default:          30.37 ± 0.41 (N = 3; Min: 29.6 / Max: 31.0)
  Ice Lake: mitigations=off:  28.53 ± 0.49 (N = 3; Min: 27.9 / Max: 29.5)
  Browsers: firefox 81.0.2 (Tiger Lake runs), firefox 82.0 (Ice Lake runs)

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23, Model: Mobilenet Float (Microseconds, Fewer Is Better; avg ± SE)
  Default:                    377902.00 ± 2564.11 (N = 3; Min: 372792 / Max: 380831)
  mitigations=off:            400068.67 ± 2425.81 (N = 3; Min: 395221 / Max: 402662)
  Ice Lake: Default:          452730.00 ± 7063.92 (N = 3; Min: 438634 / Max: 460599)
  Ice Lake: mitigations=off:  453532.00 ± 7243.74 (N = 3; Min: 439048 / Max: 461049)

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: Jetstream 2 - Browser: Google Chrome (Score, More Is Better)
  Default:                    165.91  (SE +/- 0.51, N = 3; Min: 164.92 / Avg: 165.91 / Max: 166.63)
  mitigations=off:            158.47  (SE +/- 0.72, N = 3; Min: 157.29 / Avg: 158.47 / Max: 159.78)
  Ice Lake: Default:          138.44  (SE +/- 0.40, N = 3; Min: 137.95 / Avg: 138.44 / Max: 139.23)
  Ice Lake: mitigations=off:  140.15  (SE +/- 0.64, N = 3; Min: 139.07 / Avg: 140.15 / Max: 141.28)
  All runs: chrome 86.0.4240.111

SQLite Speedtest

This is a benchmark of SQLite's speedtest1 benchmark program with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.

SQLite Speedtest 3.30 - Timed Time - Size 1,000 (Seconds, Fewer Is Better)
  Default:                    51.25  (SE +/- 0.29, N = 3; Min: 50.68 / Avg: 51.25 / Max: 51.58)
  mitigations=off:            53.14  (SE +/- 0.50, N = 3; Min: 52.14 / Avg: 53.14 / Max: 53.75)
  Ice Lake: Default:          61.22  (SE +/- 0.96, N = 3; Min: 60.13 / Avg: 61.21 / Max: 63.13)
  Ice Lake: mitigations=off:  59.99  (SE +/- 0.22, N = 3; Min: 59.68 / Avg: 59.99 / Max: 60.41)
  (CC) gcc options: -O2 -ldl -lz -lpthread
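The "timed time" here is simply the wall-clock seconds speedtest1 needs to finish its fixed workload. As a rough illustration of timing SQLite work in the same spirit (this uses Python's built-in sqlite3 module, not the speedtest1 program itself, and the workload is made up for the example):

```python
import sqlite3
import time

# Minimal sketch: time a batch of SQLite inserts; the real speedtest1
# program runs a much broader mix of statements.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER, b TEXT)")

start = time.perf_counter()
with conn:  # wrap the inserts in a single transaction
    conn.executemany("INSERT INTO t VALUES (?, ?)",
                     ((i, f"row-{i}") for i in range(100_000)))
elapsed = time.perf_counter() - start

rows = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(f"{rows} rows inserted in {elapsed:.3f} seconds")
```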

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: Jetstream - Browser: Google Chrome (Score, More Is Better)
  Default:                    289.55  (SE +/- 1.20, N = 3; Min: 287.69 / Avg: 289.55 / Max: 291.79)
  mitigations=off:            276.88  (SE +/- 0.73, N = 3; Min: 275.97 / Avg: 276.88 / Max: 278.32)
  Ice Lake: Default:          242.78  (SE +/- 0.87, N = 3; Min: 241.09 / Avg: 242.78 / Max: 243.96)
  Ice Lake: mitigations=off:  250.94  (SE +/- 0.84, N = 3; Min: 250.08 / Avg: 250.94 / Max: 252.63)
  All runs: chrome 86.0.4240.111

Selenium - Benchmark: ARES-6 - Browser: Google Chrome (ms, Fewer Is Better)
  Default:                    16.60  (SE +/- 0.13, N = 3; Min: 16.35 / Avg: 16.6 / Max: 16.74)
  mitigations=off:            17.32  (SE +/- 0.14, N = 3; Min: 17.05 / Avg: 17.32 / Max: 17.53)
  Ice Lake: Default:          19.76  (SE +/- 0.03, N = 3; Min: 19.71 / Avg: 19.76 / Max: 19.8)
  Ice Lake: mitigations=off:  19.77  (SE +/- 0.04, N = 3; Min: 19.7 / Avg: 19.77 / Max: 19.82)
  All runs: chrome 86.0.4240.111

GIMP

GIMP is an open-source image manipulation program. This test profile will use the system-provided GIMP program where available; on Windows it relies upon a pre-packaged binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.18 - Test: rotate (Seconds, Fewer Is Better)
  Default:                    9.719   (SE +/- 0.005, N = 3; Min: 9.71 / Avg: 9.72 / Max: 9.73)
  mitigations=off:            9.983   (SE +/- 0.024, N = 3; Min: 9.95 / Avg: 9.98 / Max: 10.03)
  Ice Lake: Default:          11.316  (SE +/- 0.009, N = 3; Min: 11.3 / Avg: 11.32 / Max: 11.33)
  Ice Lake: mitigations=off:  11.566  (SE +/- 0.145, N = 4; Min: 11.29 / Avg: 11.57 / Max: 11.98)

PyBench

This test profile reports the total of the average timed results for the different PyBench sub-tests. PyBench reports average test times for functions such as BuiltinFunctionCalls and NestedForLoops, with this total providing a rough estimate of Python's average performance on a given system. This test profile runs PyBench 20 rounds each time. Learn more via the OpenBenchmarking.org test page.

PyBench 2018-02-16 - Total For Average Test Times (Milliseconds, Fewer Is Better)
  Default:                    738  (SE +/- 5.24, N = 3; Min: 730 / Avg: 738.33 / Max: 748)
  mitigations=off:            764  (SE +/- 8.33, N = 3; Min: 747 / Avg: 763.67 / Max: 772)
  Ice Lake: Default:          874  (SE +/- 7.42, N = 3; Min: 859 / Avg: 873.67 / Max: 883)
  Ice Lake: mitigations=off:  878  (SE +/- 9.53, N = 3; Min: 860 / Avg: 878.33 / Max: 892)
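Since PyBench's headline number is a sum of sub-test averages, it can be reproduced from per-round timings like this (the sub-test timings below are hypothetical, not taken from these runs):

```python
import statistics

# Hypothetical per-round times (ms) for two PyBench sub-tests;
# the reported figure is the sum of each sub-test's average time.
rounds = {
    "BuiltinFunctionCalls": [120, 118, 122],
    "NestedForLoops": [95, 97, 96],
}

total = sum(statistics.mean(times) for times in rounds.values())
print(f"Total for average test times: {total:.0f} ms")  # 216 ms
```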

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: regex_compile (Milliseconds, Fewer Is Better)
  Default:                    132  (SE +/- 1.20, N = 3; Min: 130 / Avg: 132.33 / Max: 134)
  mitigations=off:            134  (SE +/- 1.53, N = 3; Min: 131 / Avg: 134 / Max: 136)
  Ice Lake: Default:          157  (SE +/- 1.86, N = 3; Min: 153 / Avg: 156.67 / Max: 159)
  Ice Lake: mitigations=off:  153

PyPerformance 1.0.0 - Benchmark: go (Milliseconds, Fewer Is Better)
  Default:                    202  (SE +/- 1.67, N = 3; Min: 199 / Avg: 202.33 / Max: 204)
  mitigations=off:            209  (SE +/- 1.45, N = 3; Min: 206 / Avg: 208.67 / Max: 211)
  Ice Lake: Default:          240  (SE +/- 2.33, N = 3; Min: 236 / Avg: 240.33 / Max: 244)
  Ice Lake: mitigations=off:  232  (SE +/- 1.73, N = 3; Min: 229 / Avg: 232 / Max: 235)

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL - Operation: Reflect (Seconds, Fewer Is Better)
  Default:                    26.33  (SE +/- 0.26, N = 3; Min: 25.81 / Avg: 26.33 / Max: 26.63)
  mitigations=off:            27.43  (SE +/- 0.35, N = 3; Min: 26.75 / Avg: 27.43 / Max: 27.87)
  Ice Lake: Default:          30.68  (SE +/- 0.38, N = 3; Min: 29.91 / Avg: 30.68 / Max: 31.1)
  Ice Lake: mitigations=off:  31.26  (SE +/- 0.12, N = 3; Min: 31.13 / Avg: 31.26 / Max: 31.51)

Darktable

Darktable is an open-source photography workflow application. This test will use any system-installed Darktable program, or on Windows will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.2.1 - Test: Boat - Acceleration: CPU-only (Seconds, Fewer Is Better)
  Default:                    16.49  (SE +/- 0.11, N = 14; Min: 15.04 / Avg: 16.49 / Max: 16.9)
  mitigations=off:            17.11  (SE +/- 0.16, N = 9; Min: 15.82 / Avg: 17.11 / Max: 17.33)
  Ice Lake: Default:          19.55  (SE +/- 0.29, N = 3; Min: 18.96 / Avg: 19.55 / Max: 19.87)
  Ice Lake: mitigations=off:  19.58  (SE +/- 0.22, N = 6; Min: 18.46 / Avg: 19.58 / Max: 19.86)

librsvg

RSVG/librsvg is an SVG vector graphics library. This test profile times how long it takes to complete various operations by rsvg-convert. Learn more via the OpenBenchmarking.org test page.

librsvg - Operation: SVG Files To PNG (Seconds, Fewer Is Better)
  Default:                    17.02  (SE +/- 0.08, N = 3; Min: 16.94 / Avg: 17.02 / Max: 17.17)
  mitigations=off:            17.42  (SE +/- 0.15, N = 3; Min: 17.25 / Avg: 17.42 / Max: 17.72)
  Ice Lake: Default:          20.20  (SE +/- 0.03, N = 3; Min: 20.14 / Avg: 20.2 / Max: 20.24)
  Ice Lake: mitigations=off:  20.09  (SE +/- 0.03, N = 3; Min: 20.06 / Avg: 20.09 / Max: 20.14)
  rsvg-convert version 2.50.1

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07 - Test: Socket Activity (Bogo Ops/s, More Is Better)
  Default:                    3321.48  (SE +/- 33.45, N = 8; Min: 3163.35 / Avg: 3321.48 / Max: 3505.39)
  mitigations=off:            3510.09  (SE +/- 38.12, N = 15; Min: 3327.66 / Avg: 3510.09 / Max: 3927.07)
  Ice Lake: Default:          2962.55  (SE +/- 40.22, N = 3; Min: 2914.62 / Avg: 2962.55 / Max: 3042.46)
  Ice Lake: mitigations=off:  3055.46  (SE +/- 24.47, N = 13; Min: 2969.03 / Avg: 3055.46 / Max: 3262.32)
  (CC) gcc options: -O2 -std=gnu99 -lm -lcrypt -lrt -lz -ldl -lpthread -lc

G'MIC

G'MIC is an open-source framework for image processing. Learn more via the OpenBenchmarking.org test page.

G'MIC - Test: 3D Elevated Function In Random Colors, 100 Times (Seconds, Fewer Is Better)
  Default:                    45.55  (SE +/- 0.17, N = 3; Min: 45.33 / Avg: 45.55 / Max: 45.88)
  mitigations=off:            46.61  (SE +/- 0.29, N = 3; Min: 46.07 / Avg: 46.61 / Max: 47.05)
  Ice Lake: Default:          53.96  (SE +/- 0.32, N = 3; Min: 53.41 / Avg: 53.96 / Max: 54.53)
  Ice Lake: mitigations=off:  52.85  (SE +/- 0.02, N = 3; Min: 52.81 / Avg: 52.85 / Max: 52.88)
  Version 2.4.5, Copyright (c) 2008-2019, David Tschumperle.

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: CPU - Model: resnet50 (ms, Fewer Is Better)
  Default:                    49.98  (SE +/- 0.06, N = 3; Min: 49.89 / Avg: 49.98 / Max: 50.1 | per-run MIN: 46.96 / MAX: 62.34)
  mitigations=off:            54.03  (SE +/- 0.08, N = 3; Min: 53.89 / Avg: 54.03 / Max: 54.15 | per-run MIN: 50.79 / MAX: 72.47)
  Ice Lake: Default:          59.18  (SE +/- 0.04, N = 3; Min: 59.09 / Avg: 59.18 / Max: 59.24 | per-run MIN: 54.99 / MAX: 74.7)
  Ice Lake: mitigations=off:  57.55  (SE +/- 1.25, N = 4; Min: 53.87 / Avg: 57.55 / Max: 59.15 | per-run MIN: 51.93 / MAX: 71.08)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Mobile Neural Network

MNN is the Mobile Neural Network, a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17 - Model: SqueezeNetV1.0 (ms, Fewer Is Better)
  Default:                    11.56  (SE +/- 0.06, N = 3; Min: 11.48 / Avg: 11.56 / Max: 11.67 | per-run MIN: 11.25 / MAX: 27.29)
  mitigations=off:            12.13  (SE +/- 0.08, N = 3; Min: 12.02 / Avg: 12.13 / Max: 12.3 | per-run MIN: 10.65 / MAX: 29.64)
  Ice Lake: Default:          13.69  (SE +/- 0.10, N = 3; Min: 13.57 / Avg: 13.69 / Max: 13.88 | per-run MIN: 12.96 / MAX: 33.2)
  Ice Lake: mitigations=off:  13.38  (SE +/- 0.22, N = 3; Min: 12.98 / Avg: 13.38 / Max: 13.74 | per-run MIN: 11.73 / MAX: 33.3)
  (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

NCNN


NCNN 20200916 - Target: CPU - Model: alexnet (ms, Fewer Is Better)
  Default:                    18.64  (SE +/- 0.01, N = 3; Min: 18.62 / Avg: 18.64 / Max: 18.67 | per-run MIN: 17.18 / MAX: 29.18)
  mitigations=off:            20.15  (SE +/- 0.06, N = 3; Min: 20.03 / Avg: 20.15 / Max: 20.25 | per-run MIN: 18.45 / MAX: 58.96)
  Ice Lake: Default:          22.05  (SE +/- 0.02, N = 3; Min: 22.02 / Avg: 22.05 / Max: 22.09 | per-run MIN: 20.64 / MAX: 34.28)
  Ice Lake: mitigations=off:  21.47  (SE +/- 0.38, N = 4; Min: 20.79 / Avg: 21.47 / Max: 22.19 | per-run MIN: 19.64 / MAX: 34.78)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Selenium


Selenium - Benchmark: StyleBench - Browser: Google Chrome (Runs / Minute, More Is Better)
  Default:                    39.6  (SE +/- 0.12, N = 3; Min: 39.4 / Avg: 39.6 / Max: 39.8)
  mitigations=off:            38.5  (SE +/- 0.03, N = 3; Min: 38.4 / Avg: 38.47 / Max: 38.5)
  Ice Lake: Default:          33.5  (SE +/- 0.13, N = 3; Min: 33.2 / Avg: 33.47 / Max: 33.6)
  Ice Lake: mitigations=off:  34.0  (SE +/- 0.03, N = 3; Min: 33.9 / Avg: 33.97 / Max: 34)
  All runs: chrome 86.0.4240.111

NCNN


NCNN 20200916 - Target: CPU - Model: blazeface (ms, Fewer Is Better)
  Default:                    2.16  (SE +/- 0.01, N = 3; Min: 2.13 / Avg: 2.16 / Max: 2.18 | per-run MIN: 1.99 / MAX: 4.52)
  mitigations=off:            2.33  (SE +/- 0.06, N = 3; Min: 2.23 / Avg: 2.33 / Max: 2.45 | per-run MIN: 2.04 / MAX: 3.21)
  Ice Lake: Default:          2.55  (SE +/- 0.02, N = 3; Min: 2.52 / Avg: 2.55 / Max: 2.58 | per-run MIN: 2.32 / MAX: 14.92)
  Ice Lake: mitigations=off:  2.48  (SE +/- 0.04, N = 4; Min: 2.42 / Avg: 2.48 / Max: 2.57 | per-run MIN: 2.2 / MAX: 12.69)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Selenium


Selenium - Benchmark: CanvasMark - Browser: Google Chrome (Score, More Is Better)
  Default:                    17698  (SE +/- 200.87, N = 12; Min: 16711 / Avg: 17697.67 / Max: 18599)
  mitigations=off:            16864  (SE +/- 151.73, N = 10; Min: 16357 / Avg: 16863.5 / Max: 17640)
  Ice Lake: Default:          14994  (SE +/- 144.97, N = 9; Min: 14456 / Avg: 14994.22 / Max: 15839)
  Ice Lake: mitigations=off:  15149  (SE +/- 107.22, N = 3; Min: 14949 / Avg: 15149 / Max: 15316)
  All runs: chrome 86.0.4240.111

GIMP


GIMP 2.10.18 - Test: unsharp-mask (Seconds, Fewer Is Better)
  Default:                    13.35  (SE +/- 0.13, N = 9; Min: 12.31 / Avg: 13.34 / Max: 13.64)
  mitigations=off:            13.99  (SE +/- 0.14, N = 9; Min: 12.96 / Avg: 13.99 / Max: 14.32)
  Ice Lake: Default:          15.71  (SE +/- 0.21, N = 5; Min: 14.9 / Avg: 15.71 / Max: 16.09)
  Ice Lake: mitigations=off:  15.74  (SE +/- 0.16, N = 15; Min: 14.19 / Avg: 15.74 / Max: 16.73)

Mobile Neural Network


Mobile Neural Network 2020-09-17 - Model: inception-v3 (ms, Fewer Is Better)
  Default:                    68.29  (SE +/- 0.11, N = 3; Min: 68.09 / Avg: 68.29 / Max: 68.46 | per-run MIN: 66.59 / MAX: 84.15)
  mitigations=off:            73.12  (SE +/- 0.04, N = 3; Min: 73.04 / Avg: 73.12 / Max: 73.17 | per-run MIN: 68.33 / MAX: 117.84)
  Ice Lake: Default:          80.48  (SE +/- 0.50, N = 3; Min: 79.93 / Avg: 80.48 / Max: 81.47 | per-run MIN: 77.49 / MAX: 99.85)
  Ice Lake: mitigations=off:  77.03  (SE +/- 0.72, N = 3; Min: 75.75 / Avg: 77.03 / Max: 78.25 | per-run MIN: 70.4 / MAX: 108.37)
  (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Selenium


Selenium - Benchmark: StyleBench - Browser: Firefox (Runs / Minute, More Is Better)
  Default:                    98.2  (SE +/- 0.19, N = 3; Min: 97.8 / Avg: 98.17 / Max: 98.4)  [firefox 81.0.2]
  mitigations=off:            99.6  (SE +/- 0.15, N = 3; Min: 99.3 / Avg: 99.57 / Max: 99.8)  [firefox 81.0.2]
  Ice Lake: Default:          84.6  (SE +/- 0.72, N = 15; Min: 82 / Avg: 84.55 / Max: 90.7)  [firefox 82.0]
  Ice Lake: mitigations=off:  94.0  (SE +/- 0.25, N = 3; Min: 93.7 / Avg: 94 / Max: 94.5)  [firefox 82.0]

NCNN


NCNN 20200916 - Target: CPU - Model: yolov4-tiny (ms, Fewer Is Better)
  Default:                    40.44  (SE +/- 0.19, N = 3; Min: 40.2 / Avg: 40.44 / Max: 40.82 | per-run MIN: 38.52 / MAX: 52.97)
  mitigations=off:            43.82  (SE +/- 0.07, N = 3; Min: 43.7 / Avg: 43.82 / Max: 43.94 | per-run MIN: 41.38 / MAX: 88.84)
  Ice Lake: Default:          47.55  (SE +/- 0.04, N = 3; Min: 47.5 / Avg: 47.55 / Max: 47.62 | per-run MIN: 45.86 / MAX: 62.55)
  Ice Lake: mitigations=off:  46.83  (SE +/- 0.64, N = 4; Min: 44.94 / Avg: 46.83 / Max: 47.67 | per-run MIN: 42.33 / MAX: 59.43)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: resnet18 (ms, Fewer Is Better)
  Default:                    21.30  (SE +/- 0.04, N = 3; Min: 21.22 / Avg: 21.3 / Max: 21.34 | per-run MIN: 18.47 / MAX: 34.83)
  mitigations=off:            23.20  (SE +/- 0.02, N = 3; Min: 23.16 / Avg: 23.2 / Max: 23.24 | per-run MIN: 20.27 / MAX: 38)
  Ice Lake: Default:          25.02  (SE +/- 0.04, N = 3; Min: 24.98 / Avg: 25.02 / Max: 25.1 | per-run MIN: 22.42 / MAX: 37.92)
  Ice Lake: mitigations=off:  24.33  (SE +/- 0.51, N = 4; Min: 22.83 / Avg: 24.33 / Max: 24.98 | per-run MIN: 20.42 / MAX: 41.66)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

LibRaw

LibRaw is a RAW image decoder for digital camera photos. This test profile runs LibRaw's post-processing benchmark. Learn more via the OpenBenchmarking.org test page.

LibRaw 0.20 - Post-Processing Benchmark (Mpix/sec, More Is Better)
  Default:                    26.64  (SE +/- 0.41, N = 15; Min: 23.58 / Avg: 26.64 / Max: 28.27)
  mitigations=off:            26.88  (SE +/- 0.26, N = 3; Min: 26.59 / Avg: 26.88 / Max: 27.39)
  Ice Lake: Default:          22.96  (SE +/- 0.16, N = 3; Min: 22.65 / Avg: 22.96 / Max: 23.21)
  Ice Lake: mitigations=off:  22.92  (SE +/- 0.31, N = 3; Min: 22.52 / Avg: 22.92 / Max: 23.54)
  (CXX) g++ options: -O2 -fopenmp -ljpeg -lz -lm

GEGL


GEGL - Operation: Rotate 90 Degrees (Seconds, Fewer Is Better)
  Default:                    37.98  (SE +/- 0.19, N = 3; Min: 37.61 / Avg: 37.98 / Max: 38.19)
  mitigations=off:            40.71  (SE +/- 0.59, N = 3; Min: 39.68 / Avg: 40.71 / Max: 41.74)
  Ice Lake: Default:          44.38  (SE +/- 0.05, N = 3; Min: 44.3 / Avg: 44.38 / Max: 44.46)
  Ice Lake: mitigations=off:  44.44  (SE +/- 0.27, N = 3; Min: 43.91 / Avg: 44.44 / Max: 44.75)

PyPerformance


PyPerformance 1.0.0 - Benchmark: crypto_pyaes (Milliseconds, Fewer Is Better)
  Default:                    83.5  (SE +/- 0.35, N = 3; Min: 82.8 / Avg: 83.5 / Max: 83.9)
  mitigations=off:            85.7  (SE +/- 0.54, N = 3; Min: 84.6 / Avg: 85.67 / Max: 86.3)
  Ice Lake: Default:          97.6  (SE +/- 0.52, N = 3; Min: 96.6 / Avg: 97.57 / Max: 98.4)
  Ice Lake: mitigations=off:  93.3  (SE +/- 0.62, N = 3; Min: 92.1 / Avg: 93.33 / Max: 94)

NCNN


NCNN 20200916 - Target: CPU - Model: googlenet (ms, Fewer Is Better)
  Default:                    24.53  (SE +/- 0.05, N = 3; Min: 24.43 / Avg: 24.53 / Max: 24.61 | per-run MIN: 22.42 / MAX: 35.81)
  mitigations=off:            27.30  (SE +/- 0.24, N = 3; Min: 26.82 / Avg: 27.3 / Max: 27.58 | per-run MIN: 21.44 / MAX: 219.45)
  Ice Lake: Default:          28.59  (SE +/- 0.03, N = 3; Min: 28.55 / Avg: 28.59 / Max: 28.66 | per-run MIN: 26.28 / MAX: 41.46)
  Ice Lake: mitigations=off:  27.15  (SE +/- 0.66, N = 3; Min: 26.42 / Avg: 27.15 / Max: 28.47 | per-run MIN: 25.04 / MAX: 41.26)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

PyPerformance


PyPerformance 1.0.0 - Benchmark: json_loads (Milliseconds, Fewer Is Better)
  Default:                    18.6  (SE +/- 0.07, N = 3; Min: 18.5 / Avg: 18.63 / Max: 18.7)
  mitigations=off:            18.9  (SE +/- 0.10, N = 3; Min: 18.7 / Avg: 18.9 / Max: 19)
  Ice Lake: Default:          21.6  (SE +/- 0.13, N = 3; Min: 21.3 / Avg: 21.57 / Max: 21.7)
  Ice Lake: mitigations=off:  21.3  (SE +/- 0.18, N = 3; Min: 21 / Avg: 21.33 / Max: 21.6)

NCNN


NCNN 20200916 - Target: CPU - Model: squeezenet (ms, Fewer Is Better)
  Default:                    26.09  (SE +/- 0.18, N = 3; Min: 25.89 / Avg: 26.09 / Max: 26.44 | per-run MIN: 25.43 / MAX: 37.4)
  mitigations=off:            28.41  (SE +/- 0.47, N = 3; Min: 27.74 / Avg: 28.41 / Max: 29.32 | per-run MIN: 23.73 / MAX: 197.67)
  Ice Lake: Default:          30.24  (SE +/- 0.01, N = 3; Min: 30.22 / Avg: 30.24 / Max: 30.26 | per-run MIN: 29.25 / MAX: 42.03)
  Ice Lake: mitigations=off:  29.27  (SE +/- 0.42, N = 4; Min: 28.53 / Avg: 29.27 / Max: 30.17 | per-run MIN: 27.06 / MAX: 43.42)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: mobilenet (ms, Fewer Is Better)
  Default:                    31.04  (SE +/- 0.30, N = 3; Min: 30.69 / Avg: 31.04 / Max: 31.64 | per-run MIN: 29.81 / MAX: 43.82)
  mitigations=off:            34.05  (SE +/- 0.12, N = 3; Min: 33.84 / Avg: 34.05 / Max: 34.27 | per-run MIN: 31.96 / MAX: 68.49)
  Ice Lake: Default:          35.88  (SE +/- 0.02, N = 3; Min: 35.86 / Avg: 35.88 / Max: 35.92 | per-run MIN: 34.42 / MAX: 48.29)
  Ice Lake: mitigations=off:  34.13  (SE +/- 0.15, N = 4; Min: 33.79 / Avg: 34.13 / Max: 34.49 | per-run MIN: 32.05 / MAX: 48.09)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

GIMP


GIMP 2.10.18 - Test: auto-levels (Seconds, Fewer Is Better)
  Default:                    11.27  (SE +/- 0.11, N = 9; Min: 10.5 / Avg: 11.27 / Max: 11.81)
  mitigations=off:            11.67  (SE +/- 0.12, N = 8; Min: 10.82 / Avg: 11.67 / Max: 11.9)
  Ice Lake: Default:          13.01  (SE +/- 0.15, N = 7; Min: 12.14 / Avg: 13.01 / Max: 13.22)
  Ice Lake: mitigations=off:  12.31  (SE +/- 0.12, N = 3; Min: 12.14 / Avg: 12.31 / Max: 12.55)

Selenium


Selenium - Benchmark: WASM imageConvolute - Browser: Google Chrome (ms, Fewer Is Better)
  Default:                    28.26  (SE +/- 0.10, N = 3; Min: 28.13 / Avg: 28.26 / Max: 28.46)
  mitigations=off:            29.30  (SE +/- 0.25, N = 3; Min: 29.05 / Avg: 29.3 / Max: 29.79)
  Ice Lake: Default:          32.62  (SE +/- 0.34, N = 3; Min: 32.01 / Avg: 32.62 / Max: 33.19)
  Ice Lake: mitigations=off:  32.61  (SE +/- 0.09, N = 3; Min: 32.44 / Avg: 32.61 / Max: 32.75)
  All runs: chrome 86.0.4240.111

PyPerformance


PyPerformance 1.0.0 - Benchmark: nbody (Milliseconds, Fewer Is Better)
  Default:                    89.6   (SE +/- 0.76, N = 3; Min: 88.4 / Avg: 89.6 / Max: 91)
  mitigations=off:            91.4   (SE +/- 0.84, N = 3; Min: 89.7 / Avg: 91.37 / Max: 92.3)
  Ice Lake: Default:          103.0
  Ice Lake: mitigations=off:  102.0

Selenium


Selenium - Benchmark: Octane - Browser: Google Chrome (Geometric Mean, More Is Better)
  Default:                    61608  (SE +/- 1033.11, N = 3; Min: 60493 / Avg: 61608 / Max: 63672)
  mitigations=off:            58235  (SE +/- 745.81, N = 5; Min: 57022 / Avg: 58234.6 / Max: 61104)
  Ice Lake: Default:          53717  (SE +/- 67.56, N = 3; Min: 53599 / Avg: 53717 / Max: 53833)
  Ice Lake: mitigations=off:  53635  (SE +/- 159.67, N = 3; Min: 53316 / Avg: 53635.33 / Max: 53797)
  All runs: chrome 86.0.4240.111
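Octane's headline number is a geometric mean of its sub-test scores rather than an arithmetic average, which keeps a single outlier sub-test from dominating the composite. A sketch with hypothetical sub-test scores (not taken from these runs):

```python
import statistics

# Hypothetical sub-test scores; the overall Octane figure is their
# geometric mean, not their arithmetic mean.
scores = [60000, 55000, 70000, 48000]

geo = statistics.geometric_mean(scores)
arith = statistics.mean(scores)
# The geometric mean is always <= the arithmetic mean for unequal values.
print(f"Geometric mean: {geo:.0f} (arithmetic mean: {arith:.0f})")
```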

Darktable


Darktable 3.2.1 - Test: Server Rack - Acceleration: CPU-only (Seconds, Fewer Is Better)
  Default:                    0.196  (SE +/- 0.002, N = 3; Min: 0.19 / Avg: 0.2 / Max: 0.2)
  mitigations=off:            0.200  (SE +/- 0.000, N = 3; Min: 0.2 / Avg: 0.2 / Max: 0.2)
  Ice Lake: Default:          0.225  (SE +/- 0.001, N = 3; Min: 0.22 / Avg: 0.23 / Max: 0.23)
  Ice Lake: mitigations=off:  0.225  (SE +/- 0.002, N = 3; Min: 0.22 / Avg: 0.23 / Max: 0.23)

Tesseract OCR

Tesseract-OCR is the open-source optical character recognition (OCR) engine for the conversion of text within images to raw text output. This test profile relies upon a system-supplied Tesseract installation. Learn more via the OpenBenchmarking.org test page.

Tesseract OCR 4.1.1 - Time To OCR 7 Images (Seconds, Fewer Is Better)
  Default:                    20.39  (SE +/- 0.14, N = 3; Min: 20.23 / Avg: 20.39 / Max: 20.68)
  mitigations=off:            21.06  (SE +/- 0.28, N = 3; Min: 20.5 / Avg: 21.06 / Max: 21.35)
  Ice Lake: Default:          23.12  (SE +/- 0.11, N = 3; Min: 22.97 / Avg: 23.12 / Max: 23.32)
  Ice Lake: mitigations=off:  23.36  (SE +/- 0.22, N = 9; Min: 22.81 / Avg: 23.36 / Max: 25.06)

PyPerformance


PyPerformance 1.0.0 - Benchmark: float (Milliseconds, Fewer Is Better)
  Default:                    95.0   (SE +/- 0.80, N = 3; Min: 93.4 / Avg: 95 / Max: 95.8)
  mitigations=off:            97.8   (SE +/- 0.88, N = 3; Min: 96.1 / Avg: 97.83 / Max: 99)
  Ice Lake: Default:          108.0  (SE +/- 1.00, N = 3; Min: 106 / Avg: 108 / Max: 109)
  Ice Lake: mitigations=off:  106.0

NCNN


NCNN 20200916 - Target: CPU - Model: vgg16 (ms, Fewer Is Better)
  Default:                    71.04  (SE +/- 0.12, N = 3; Min: 70.81 / Avg: 71.04 / Max: 71.22 | per-run MIN: 67.45 / MAX: 85.82)
  mitigations=off:            75.98  (SE +/- 0.19, N = 3; Min: 75.69 / Avg: 75.98 / Max: 76.33 | per-run MIN: 71.7 / MAX: 102.99)
  Ice Lake: Default:          80.68  (SE +/- 0.11, N = 3; Min: 80.55 / Avg: 80.68 / Max: 80.9 | per-run MIN: 76.5 / MAX: 103.41)
  Ice Lake: mitigations=off:  75.59  (SE +/- 1.31, N = 4; Min: 72.81 / Avg: 75.59 / Max: 79.13 | per-run MIN: 70.5 / MAX: 95.89)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

DeepSpeech

Mozilla DeepSpeech is a speech-to-text engine powered by TensorFlow for machine learning and derived from Baidu's Deep Speech research paper. This test profile times the speech-to-text process for a roughly three-minute audio recording. Learn more via the OpenBenchmarking.org test page.

DeepSpeech 0.6 - Acceleration: CPU (Seconds, Fewer Is Better)
  Default:                    60.88 (SE +/- 0.09, N = 3; Min: 60.70 / Max: 60.99)
  mitigations=off:            61.80 (SE +/- 0.16, N = 3; Min: 61.48 / Max: 61.99)
  Ice Lake: Default:          68.73 (SE +/- 0.11, N = 3; Min: 68.61 / Max: 68.95)
  Ice Lake: mitigations=off:  67.98 (SE +/- 0.12, N = 3; Min: 67.82 / Max: 68.22)

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM, spanning workloads from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.10.0 - Test: Akka Unbalanced Cobwebbed Tree (ms, Fewer Is Better)
  Default:                    12529.40 (SE +/- 116.84, N = 5; Min: 12196.80 / Max: 12890.28)
  mitigations=off:            12954.09 (SE +/- 125.63, N = 5; Min: 12692.41 / Max: 13361.67)
  Ice Lake: Default:          13943.87 (SE +/- 137.47, N = 9; Min: 13275.84 / Max: 14552.30)
  Ice Lake: mitigations=off:  13670.59 (SE +/- 120.68, N = 15; Min: 12835.41 / Max: 14622.75)

Git

This test measures the time needed to carry out some sample Git operations on an example static repository that happens to be a copy of the GNOME GTK tool-kit repository. Learn more via the OpenBenchmarking.org test page.

Git - Time To Complete Common Git Commands (Seconds, Fewer Is Better)
  Default:                    47.23 (SE +/- 0.40, N = 3; Min: 46.44 / Max: 47.70)
  mitigations=off:            49.03 (SE +/- 0.66, N = 3; Min: 47.71 / Max: 49.75)
  Ice Lake: Default:          52.04 (SE +/- 0.12, N = 3; Min: 51.92 / Max: 52.27)
  Ice Lake: mitigations=off:  52.09 (SE +/- 0.05, N = 3; Min: 52.02 / Max: 52.19)
1. git version 2.27.0

LevelDB

LevelDB is a key-value storage library developed by Google that supports Snappy data compression among other modern features. Learn more via the OpenBenchmarking.org test page.

LevelDB 1.22 - Benchmark: Random Delete (Microseconds Per Op, Fewer Is Better)
  Default:                    24.73 (SE +/- 0.35, N = 12; Min: 21.14 / Max: 26.11)
  mitigations=off:            25.50 (SE +/- 0.29, N = 12; Min: 22.29 / Max: 25.95)
  Ice Lake: Default:          23.32 (SE +/- 0.02, N = 3; Min: 23.27 / Max: 23.35)
  Ice Lake: mitigations=off:  23.95 (SE +/- 0.26, N = 3; Min: 23.63 / Max: 24.46)
1. (CXX) g++ options: -O3 -lsnappy -lpthread

GNU Octave Benchmark

This test profile measures how long it takes to complete several reference GNU Octave files via octave-benchmark. GNU Octave is used for numerical computations and is an open-source alternative to MATLAB. Learn more via the OpenBenchmarking.org test page.

GNU Octave Benchmark 5.2.0 (Seconds, Fewer Is Better)
  Default:                    6.144 (SE +/- 0.012, N = 5; Min: 6.12 / Max: 6.19)
  mitigations=off:            6.232 (SE +/- 0.014, N = 5; Min: 6.19 / Max: 6.27)
  Ice Lake: Default:          6.712 (SE +/- 0.026, N = 5; Min: 6.67 / Max: 6.81)
  Ice Lake: mitigations=off:  6.698 (SE +/- 0.019, N = 5; Min: 6.65 / Max: 6.76)

Ethr

Ethr is a cross-platform network performance measurement tool written in Go and developed by Microsoft, capable of testing multiple protocols and measurement types. Learn more via the OpenBenchmarking.org test page.

Ethr 2019-01-02 - Server Address: localhost - Protocol: TCP - Test: Connections/s - Threads: 1 (Connections/sec, More Is Better)
  Default:                    12183 (SE +/- 17.64, N = 3; Min: 12150 / Max: 12210)
  mitigations=off:            12367 (SE +/- 95.63, N = 3; Min: 12210 / Max: 12540)
  Ice Lake: Default:          11837 (SE +/- 44.10, N = 3; Min: 11770 / Max: 11920)
  Ice Lake: mitigations=off:  12273 (SE +/- 37.12, N = 3; Min: 12200 / Max: 12320)

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: Maze Solver - Browser: Google Chrome (Seconds, Fewer Is Better)
  Default:                    4.7 (SE +/- 0.00, N = 3; Min: 4.7 / Max: 4.7)
  mitigations=off:            4.8 (SE +/- 0.03, N = 3; Min: 4.7 / Max: 4.8)
  Ice Lake: Default:          4.7 (SE +/- 0.00, N = 3; Min: 4.7 / Max: 4.7)
  Ice Lake: mitigations=off:  4.6 (SE +/- 0.03, N = 3; Min: 4.6 / Max: 4.7)
1. chrome 86.0.4240.111

Mobile Neural Network

MNN is the Mobile Neural Network, a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17 - Model: mobilenet-v1-1.0 (ms, Fewer Is Better)
  Default:          10.52 (SE +/- 0.00, N = 3; Min: 10.51 / Max: 10.52; MIN: 9.74 / MAX: 26.11)
  mitigations=off:  10.09 (SE +/- 0.11, N = 3; Min: 9.98 / Max: 10.31; MIN: 9.21 / MAX: 30.26)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: Maze Solver - Browser: Firefox (Seconds, Fewer Is Better)
  Default:                    5.3 (SE +/- 0.03, N = 3; Min: 5.3 / Max: 5.4)
  mitigations=off:            5.2 (SE +/- 0.03, N = 3; Min: 5.2 / Max: 5.3)
  Ice Lake: Default:          5.4 (SE +/- 0.03, N = 3; Min: 5.4 / Max: 5.5)
  Ice Lake: mitigations=off:  5.4 (SE +/- 0.00, N = 3; Min: 5.4 / Max: 5.4)
1. Default: firefox 81.0.2
2. mitigations=off: firefox 81.0.2
3. Ice Lake: Default: firefox 82.0
4. Ice Lake: mitigations=off: firefox 82.0

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 (ms, Fewer Is Better)
Target: CPU - Model: efficientnet-b0
  Default:          13.50 (SE +/- 0.08, N = 3; Min: 13.41 / Max: 13.66; MIN: 12.59 / MAX: 25.81)
  mitigations=off:  13.12 (SE +/- 0.24, N = 4; Min: 12.63 / Max: 13.66; MIN: 11.89 / MAX: 29.29)
Target: CPU - Model: mnasnet
  Default:          8.35 (SE +/- 0.70, N = 3; Min: 6.96 / Max: 9.07; MIN: 5.20 / MAX: 21.96)
  mitigations=off:  8.07 (SE +/- 0.91, N = 4; Min: 5.36 / Max: 9.11; MIN: 5.21 / MAX: 20.81)
Target: CPU - Model: shufflenet-v2
  Default:          5.54 (SE +/- 0.80, N = 3; Min: 3.93 / Max: 6.38; MIN: 3.78 / MAX: 17.92)
  mitigations=off:  5.78 (SE +/- 0.60, N = 4; Min: 3.98 / Max: 6.41; MIN: 3.81 / MAX: 18.56)
Target: CPU-v3-v3 - Model: mobilenet-v3
  Default:          7.29 (SE +/- 1.14, N = 3; Min: 5.02 / Max: 8.45; MIN: 4.88 / MAX: 18.97)
  mitigations=off:  7.29 (SE +/- 0.76, N = 4; Min: 5.04 / Max: 8.44; MIN: 4.87 / MAX: 20.55)
Target: CPU-v2-v2 - Model: mobilenet-v2
  Default:          8.27 (SE +/- 1.33, N = 3; Min: 5.62 / Max: 9.63; MIN: 5.47 / MAX: 22.52)
  mitigations=off:  8.22 (SE +/- 0.88, N = 4; Min: 5.63 / Max: 9.58; MIN: 5.46 / MAX: 24.56)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: MotionMark - Browser: Google Chrome (Score, More Is Better)
  Default:                    546.80 (SE +/- 6.75, N = 9; Min: 527.73 / Max: 583.38)
  mitigations=off:            523.87 (SE +/- 2.04, N = 3; Min: 520.09 / Max: 527.09)
  Ice Lake: Default:          410.52 (SE +/- 12.09, N = 9; Min: 314.26 / Max: 428.36)
  Ice Lake: mitigations=off:  439.91 (SE +/- 10.06, N = 9; Min: 362.13 / Max: 458.87)
1. chrome 86.0.4240.111

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

Facebook RocksDB 6.3.6 (Op/s, More Is Better)
Test: Read While Writing
  Default:                    670995 (SE +/- 5302.48, N = 15; Min: 647652 / Max: 719461)
  mitigations=off:            652865 (SE +/- 6731.14, N = 12; Min: 620016 / Max: 711340)
  Ice Lake: Default:          604305 (SE +/- 12034.86, N = 15; Min: 568062 / Max: 759644)
  Ice Lake: mitigations=off:  620330 (SE +/- 17734.70, N = 15; Min: 561461 / Max: 789535)
Test: Random Fill Sync
  Default:                    935 (SE +/- 21.21, N = 13; Min: 726 / Max: 1018)
  mitigations=off:            993 (SE +/- 21.98, N = 14; Min: 764 / Max: 1087)
  Ice Lake: Default:          1784 (SE +/- 23.13, N = 3; Min: 1753 / Max: 1829)
  Ice Lake: mitigations=off:  763 (SE +/- 39.22, N = 12; Min: 507 / Max: 997)
Test: Sequential Fill
  Default:                    900631 (SE +/- 14748.19, N = 15; Min: 796904 / Max: 1003139)
  mitigations=off:            747558 (SE +/- 30961.97, N = 12; Min: 614143 / Max: 913120)
  Ice Lake: Default:          803253 (SE +/- 11089.18, N = 15; Min: 741241 / Max: 948580)
  Ice Lake: mitigations=off:  861065 (SE +/- 41408.12, N = 12; Min: 456313 / Max: 1047137)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fno-builtin-memcmp -fno-rtti -rdynamic -lpthread

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07 - Test: MMAP (Bogo Ops/s, More Is Better)
  Default:                    38.55 (SE +/- 0.32, N = 12; Min: 36.16 / Max: 41.26)
  mitigations=off:            25.00 (SE +/- 0.53, N = 15; Min: 21.13 / Max: 29.13)
  Ice Lake: Default:          25.66 (SE +/- 1.09, N = 12; Min: 17.57 / Max: 32.73)
  Ice Lake: mitigations=off:  21.37 (SE +/- 1.41, N = 15; Min: 13.65 / Max: 34.06)
1. (CC) gcc options: -O2 -std=gnu99 -lm -lcrypt -lrt -lz -ldl -lpthread -lc

LibreOffice

Various benchmarking operations with the LibreOffice open-source office suite. Learn more via the OpenBenchmarking.org test page.

LibreOffice - Test: 20 Documents To PDF (Seconds, Fewer Is Better)
  Default:                    5.525 (SE +/- 0.054, N = 9; Min: 5.43 / Max: 5.96)
  mitigations=off:            5.703 (SE +/- 0.067, N = 6; Min: 5.56 / Max: 6.03)
  Ice Lake: Default:          6.908 (SE +/- 0.131, N = 25; Min: 6.53 / Max: 8.85)
  Ice Lake: mitigations=off:  6.757 (SE +/- 0.061, N = 10; Min: 6.65 / Max: 7.30)
1. LibreOffice 7.0.2.2 00(Build:2)

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL - Operation: Wavelet Blur (Seconds, Fewer Is Better)
  Default:                    50.29 (SE +/- 0.15, N = 3; Min: 49.99 / Max: 50.46)
  mitigations=off:            55.27 (SE +/- 0.71, N = 3; Min: 53.94 / Max: 56.35)
  Ice Lake: Default:          59.70 (SE +/- 0.21, N = 3; Min: 59.45 / Max: 60.11)
  Ice Lake: mitigations=off:  62.58 (SE +/- 0.99, N = 15; Min: 59.39 / Max: 72.50)

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: SqueezeNet (Microseconds, Fewer Is Better)
  Default:                    564437 (SE +/- 4132.54, N = 3; Min: 556234 / Max: 569416)
  mitigations=off:            588506 (SE +/- 6328.07, N = 3; Min: 575850 / Max: 594929)
  Ice Lake: Default:          674153 (SE +/- 5183.16, N = 15; Min: 611223 / Max: 682946)
  Ice Lake: mitigations=off:  646988 (SE +/- 12430.05, N = 14; Min: 490679 / Max: 677309)

LevelDB

LevelDB is a key-value storage library developed by Google that supports Snappy data compression among other modern features. Learn more via the OpenBenchmarking.org test page.

LevelDB 1.22 - Benchmark: Fill Sync
Microseconds Per Op (Fewer Is Better)
  Default:                    3493.28 (SE +/- 4.51, N = 15; Min: 3458.68 / Max: 3520.32)
  mitigations=off:            8406.10 (SE +/- 58.40, N = 3; Min: 8302.50 / Max: 8504.60)
  Ice Lake: Default:          4522.21 (SE +/- 260.74, N = 3; Min: 4112.65 / Max: 5006.55)
  Ice Lake: mitigations=off:  11614.36 (SE +/- 1222.10, N = 15; Min: 8024.55 / Max: 21514.35)
MB/s (More Is Better)
  Default:                    0.3 (SE +/- 0.01, N = 15; Min: 0.2 / Max: 0.3)
  mitigations=off:            0.1 (SE +/- 0.00, N = 3)
  Ice Lake: Default:          0.2 (SE +/- 0.00, N = 3)
  Ice Lake: mitigations=off:  0.1 (SE +/- 0.00, N = 12)
1. (CXX) g++ options: -O3 -lsnappy -lpthread

eSpeak-NG Speech Engine

This test times how long it takes the eSpeak speech synthesizer to read Project Gutenberg's The Outline of Science and output to a WAV file. This test profile is now tracking the eSpeak-NG version of eSpeak. Learn more via the OpenBenchmarking.org test page.

eSpeak-NG Speech Engine 20200907 - Text-To-Speech Synthesis (Seconds, Fewer Is Better)
  Default:                    28.04 (SE +/- 0.33, N = 4; Min: 27.05 / Max: 28.44)
  mitigations=off:            39.84 (SE +/- 1.01, N = 20; Min: 31.81 / Max: 48.79)
  Ice Lake: Default:          35.80 (SE +/- 0.76, N = 20; Min: 31.24 / Max: 44.90)
  Ice Lake: mitigations=off:  38.96 (SE +/- 0.99, N = 20; Min: 33.06 / Max: 50.01)
1. (CC) gcc options: -O2 -std=c99

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM, spanning workloads from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.10.0 (ms, Fewer Is Better)
Test: In-Memory Database Shootout
  Default:                    4605.17 (SE +/- 50.76, N = 5; Min: 4479.16 / Max: 4744.79)
  mitigations=off:            4991.34 (SE +/- 90.54, N = 25; Min: 4239.89 / Max: 6224.03)
  Ice Lake: Default:          5633.06 (SE +/- 93.41, N = 20; Min: 4987.14 / Max: 6984.36)
  Ice Lake: mitigations=off:  5821.31 (SE +/- 280.74, N = 20; Min: 4991.10 / Max: 10800.32)
Test: Scala Dotty
  Default:                    1621.40 (SE +/- 18.99, N = 5; Min: 1575.81 / Max: 1691.01)
  mitigations=off:            1708.54 (SE +/- 22.58, N = 5; Min: 1654.67 / Max: 1770.39)
  Ice Lake: Default:          2078.59 (SE +/- 47.08, N = 17; Min: 1792.18 / Max: 2738.08)
  Ice Lake: mitigations=off:  2050.13 (SE +/- 22.98, N = 7; Min: 1957.16 / Max: 2115.74)

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1 (msec, Fewer Is Better)
Java Test: Tradebeans
  Default:                    4080 (SE +/- 87.78, N = 16; Min: 3035 / Max: 4455)
  mitigations=off:            4222 (SE +/- 65.83, N = 20; Min: 3115 / Max: 4623)
  Ice Lake: Default:          5483 (SE +/- 42.46, N = 20; Min: 5089 / Max: 5824)
  Ice Lake: mitigations=off:  5293 (SE +/- 59.72, N = 20; Min: 4492 / Max: 5727)
Java Test: Tradesoap
  Default:                    8167 (SE +/- 195.83, N = 16; Min: 5299 / Max: 8709)
  mitigations=off:            8699 (SE +/- 58.57, N = 20; Min: 7832 / Max: 8985)
  Ice Lake: Default:          11708 (SE +/- 82.01, N = 18; Min: 10761 / Max: 12194)
  Ice Lake: mitigations=off:  10805 (SE +/- 112.50, N = 20; Min: 9264 / Max: 11509)
Java Test: Jython
  Default:                    3843 (SE +/- 38.95, N = 20; Min: 3466 / Max: 4094)
  mitigations=off:            4113 (SE +/- 50.55, N = 5; Min: 3939 / Max: 4219)
  Ice Lake: Default:          5101 (SE +/- 93.22, N = 16; Min: 4440 / Max: 5737)
  Ice Lake: mitigations=off:  4414 (SE +/- 45.07, N = 4; Min: 4323 / Max: 4537)
Java Test: H2
  Default:                    3226 (SE +/- 43.85, N = 20; Min: 2653 / Max: 3443)
  mitigations=off:            3434 (SE +/- 37.47, N = 20; Min: 2919 / Max: 3669)
  Ice Lake: Default:          3330 (SE +/- 69.83, N = 17; Min: 2967 / Max: 3895)
  Ice Lake: mitigations=off:  3188 (SE +/- 36.55, N = 20; Min: 2873 / Max: 3535)

OSBench

OSBench is a collection of micro-benchmarks for measuring operating system primitives like time to create threads/processes, launching programs, creating files, and memory allocation. Learn more via the OpenBenchmarking.org test page.

OSBench - Test: Create Processes (us Per Event, Fewer Is Better)
  Default:                    23.98 (SE +/- 0.76, N = 15; Min: 17.67 / Max: 27.16)
  mitigations=off:            21.31 (SE +/- 0.51, N = 15; Min: 17.93 / Max: 24.43)
  Ice Lake: Default:          20.49 (SE +/- 0.29, N = 4; Min: 20.18 / Max: 21.36)
  Ice Lake: mitigations=off:  20.06 (SE +/- 0.19, N = 3; Min: 19.76 / Max: 20.41)
1. (CC) gcc options: -lm
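Process creation is syscall-dense, so it is the kind of primitive where mitigation overhead shows up directly. A minimal sketch of what a "create processes" micro-benchmark measures, written in Python rather than OSBench's actual C, with an assumed event count:

```python
import os
import time

# Time bare process creation: fork a child that exits immediately,
# then reap it, and report the average cost per event in microseconds.
# The event count is an illustrative assumption, not OSBench's value.
def time_process_creation(events=200):
    t0 = time.perf_counter()
    for _ in range(events):
        pid = os.fork()
        if pid == 0:
            os._exit(0)          # child: exit without running any Python cleanup
        os.waitpid(pid, 0)       # parent: reap the child before the next fork
    elapsed = time.perf_counter() - t0
    return elapsed / events * 1e6  # microseconds per event, the table's unit

if __name__ == "__main__":
    print(f"{time_process_creation():.2f} us per event")
```

Each iteration costs at least a fork() and a wait() syscall plus two kernel/user transitions, which is why entry/exit mitigations like KPTI inflate this metric.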

WireGuard + Linux Networking Stack Stress Test

This is a benchmark of the WireGuard secure VPN tunnel and Linux networking stack stress test. The test runs on the local host but does require root permissions to run. It works by creating three network namespaces: ns0 has a loopback device, while ns1 and ns2 each have WireGuard devices that send traffic through the loopback device of ns0. The end result is that the test exercises encryption and decryption at the same time, a pretty CPU- and scheduler-heavy workload. Learn more via the OpenBenchmarking.org test page.

WireGuard + Linux Networking Stack Stress Test (Seconds, Fewer Is Better)
  Default:                    273.14 (SE +/- 2.00, N = 3; Min: 269.28 / Max: 276.01)
  mitigations=off:            281.83 (SE +/- 3.08, N = 3; Min: 276.34 / Max: 287.00)
  Ice Lake: Default:          334.67 (SE +/- 6.73, N = 9; Min: 285.67 / Max: 351.26)
  Ice Lake: mitigations=off:  323.55 (SE +/- 7.10, N = 9; Min: 269.42 / Max: 338.06)

FS-Mark

FS_Mark is designed to test a system's file-system performance. Learn more via the OpenBenchmarking.org test page.

FS-Mark 3.3 (Files/s, More Is Better)
Test: 4000 Files, 32 Sub Dirs, 1MB Size
  Default:                    61.4 (SE +/- 0.32, N = 3; Min: 61.0 / Max: 62.0)
  mitigations=off:            70.0 (SE +/- 1.02, N = 15; Min: 61.4 / Max: 75.7)
  Ice Lake: Default:          39.4 (SE +/- 1.38, N = 15; Min: 32.2 / Max: 51.1)
  Ice Lake: mitigations=off:  50.6 (SE +/- 3.42, N = 12; Min: 38.9 / Max: 77.4)
Test: 5000 Files, 1MB Size, 4 Threads
  Default:                    186.2 (SE +/- 45.40, N = 9; Min: 87.9 / Max: 418.8)
  mitigations=off:            106.1 (SE +/- 3.97, N = 12; Min: 76.0 / Max: 115.5)
  Ice Lake: Default:          189.6 (SE +/- 27.80, N = 10; Min: 63.2 / Max: 290.0)
  Ice Lake: mitigations=off:  72.0 (SE +/- 18.91, N = 9; Min: 36.8 / Max: 216.0)
Test: 1000 Files, 1MB Size
  Default:                    274.3 (SE +/- 1.18, N = 3; Min: 272.2 / Max: 276.3)
  mitigations=off:            71.4 (SE +/- 0.99, N = 15; Min: 60.9 / Max: 76.1)
  Ice Lake: Default:          231.0 (SE +/- 13.94, N = 12; Min: 132.2 / Max: 263.3)
  Ice Lake: mitigations=off:  215.2 (SE +/- 13.14, N = 15; Min: 136.8 / Max: 265.1)
1. (CC) gcc options: -static

SQLite

This is a simple benchmark of SQLite. At present this test profile just measures the time to perform a pre-defined number of insertions on an indexed database. Learn more via the OpenBenchmarking.org test page.

SQLite 3.30.1 - Threads / Copies: 1 (Seconds, Fewer Is Better)
  Default:                    29.25 (SE +/- 0.08, N = 3; Min: 29.10 / Max: 29.37)
  mitigations=off:            63.99 (SE +/- 0.54, N = 12; Min: 58.99 / Max: 65.38)
  Ice Lake: Default:          32.59 (SE +/- 0.35, N = 15; Min: 31.64 / Max: 35.94)
  Ice Lake: mitigations=off:  36.47 (SE +/- 1.60, N = 15; Min: 31.45 / Max: 55.81)
1. (CC) gcc options: -O2 -lz -lm -ldl -lpthread
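The shape of this SQLite workload, timed inserts into an indexed table, can be sketched as follows. This is not the test profile's actual code; the schema, row count, and in-memory database are illustrative assumptions (the real test runs against on-disk storage, where sync/journal syscalls dominate and mitigation overhead is far more visible):

```python
import sqlite3
import time

# Minimal sketch: time a pre-defined number of insertions into an
# indexed SQLite table. Schema and row count are assumptions, and the
# default ":memory:" database avoids the disk I/O of the real test.
def timed_inserts(rows=10_000, path=":memory:"):
    conn = sqlite3.connect(path)
    cur = conn.cursor()
    cur.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")
    cur.execute("CREATE INDEX idx_val ON t (val)")  # the indexed column
    t0 = time.perf_counter()
    for i in range(rows):
        cur.execute("INSERT INTO t (val) VALUES (?)", (f"row-{i}",))
    conn.commit()
    elapsed = time.perf_counter() - t0
    count = cur.execute("SELECT COUNT(*) FROM t").fetchone()[0]
    conn.close()
    return elapsed, count

if __name__ == "__main__":
    secs, n = timed_inserts()
    print(f"{n} rows inserted in {secs:.3f} s")
```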

136 Results Shown

SQLite
Renaissance
ctx_clock
Stress-NG
Caffe
InfluxDB
GEGL
Sockperf
G'MIC
Selenium
GEGL
Sockperf
Selenium:
  Octane - Firefox
  Jetstream - Firefox
Zstd Compression
Timed Apache Compilation
Timed Linux Kernel Compilation
Selenium
Timed GDB GNU Debugger Compilation
Stress-NG
Caffe
Selenium:
  CanvasMark - Firefox
  ARES-6 - Firefox
Darktable
ASTC Encoder:
  Thorough
  Medium
PyPerformance
Selenium
Renaissance
Stress-NG
Selenium
GEGL
GIMP
ASTC Encoder
Selenium
RawTherapee
Renaissance
TensorFlow Lite
GEGL
Selenium
RNNoise
Facebook RocksDB
Mobile Neural Network:
  MobileNetV2_224
  resnet-v2-50
Ethr
InfluxDB
TensorFlow Lite
GEGL
PyPerformance:
  pickle_pure_python
  chaos
  django_template
  pathlib
TensorFlow Lite
Selenium
TensorFlow Lite
GEGL
PyPerformance:
  raytrace
  2to3
Selenium
TensorFlow Lite
Selenium
SQLite Speedtest
Selenium:
  Jetstream - Google Chrome
  ARES-6 - Google Chrome
GIMP
PyBench
PyPerformance:
  regex_compile
  go
GEGL
Darktable
librsvg
Stress-NG
G'MIC
NCNN
Mobile Neural Network
NCNN
Selenium
NCNN
Selenium
GIMP
Mobile Neural Network
Selenium
NCNN:
  CPU - yolov4-tiny
  CPU - resnet18
LibRaw
GEGL
PyPerformance
NCNN
PyPerformance
NCNN:
  CPU - squeezenet
  CPU - mobilenet
GIMP
Selenium
PyPerformance
Selenium
Darktable
Tesseract OCR
PyPerformance
NCNN
DeepSpeech
Renaissance
Git
LevelDB
GNU Octave Benchmark
Ethr
Selenium
Mobile Neural Network
Selenium
NCNN:
  CPU - efficientnet-b0
  CPU - mnasnet
  CPU - shufflenet-v2
  CPU-v3-v3 - mobilenet-v3
  CPU-v2-v2 - mobilenet-v2
Selenium
Facebook RocksDB:
  Read While Writing
  Rand Fill Sync
  Seq Fill
Stress-NG
LibreOffice
GEGL
TensorFlow Lite
LevelDB:
  Fill Sync:
    Microseconds Per Op
    MB/s
eSpeak-NG Speech Engine
Renaissance:
  In-Memory Database Shootout
  Scala Dotty
DaCapo Benchmark:
  Tradebeans
  Tradesoap
  Jython
  H2
OSBench
WireGuard + Linux Networking Stack Stress Test
FS-Mark:
  4000 Files, 32 Sub Dirs, 1MB Size
  5000 Files, 1MB Size, 4 Threads
  1000 Files, 1MB Size
SQLite