Core i5 Laptop

Intel Core i5-5300U testing with an HP 2216 (M71 Ver. 01.27 BIOS) motherboard and Intel HD 5500 3GB graphics on Ubuntu 20.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2008300-FI-COREI5LAP35
Test categories represented in this result file:

  Timed Code Compilation (2 tests)
  CPU Massive (6 tests)
  Creator Workloads (4 tests)
  HPC - High Performance Computing (6 tests)
  Imaging (2 tests)
  Machine Learning (3 tests)
  Multi-Core (6 tests)
  Programmer / Developer System Benchmarks (2 tests)
  Server CPU Tests (5 tests)


Test Runs

  Run 1 - August 28 2020 - test duration: 7 Hours, 53 Minutes
  Run 2 - August 29 2020 - test duration: 11 Hours, 6 Minutes
  Run 3 - August 29 2020 - test duration: 8 Hours, 1 Minute
  Average test duration: 9 Hours



Core i5 Laptop Benchmarks - OpenBenchmarking.org - Phoronix Test Suite

  Processor:          Intel Core i5-5300U @ 2.90GHz (2 Cores / 4 Threads)
  Motherboard:        HP 2216 (M71 Ver. 01.27 BIOS)
  Chipset:            Intel Broadwell-U-OPI
  Memory:             8GB
  Disk:               256GB MTFDDAK256MAM-1K
  Graphics:           Intel HD 5500 3GB (900MHz)
  Audio:              Intel Broadwell-U Audio
  Network:            Intel I218-LM + Intel 7265
  OS:                 Ubuntu 20.04
  Kernel:             5.4.0-33-generic (x86_64)
  Desktops:           GNOME Shell 3.36.1, GNOME Shell 3.36.4
  Display Server:     X Server 1.20.8
  Display Driver:     modesetting 1.20.8
  OpenGL:             4.6 Mesa 20.0.4
  Compiler:           GCC 9.3.0
  File-System:        ext4
  Screen Resolution:  1366x768

System Logs / Notes

  Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
  Processor notes: Scaling Governor: intel_pstate powersave; CPU Microcode: 0x2e
  Python notes: Python 3.8.2
  Security mitigations:
    itlb_multihit: KVM: Mitigation of Split huge pages
    l1tf: Mitigation of PTE Inversion; VMX: conditional cache flushes, SMT vulnerable
    mds: Mitigation of Clear buffers; SMT vulnerable
    meltdown: Mitigation of PTI
    spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp
    spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization
    spectre_v2: Mitigation of Full generic retpoline, IBPB: conditional, IBRS_FW, STIBP: conditional, RSB filling
    tsx_async_abort: Mitigation of Clear buffers; SMT vulnerable

Result overview: the three runs' aggregate scores fall within roughly 100% to 102% of one another across the twelve test suites covered: oneDNN, ECP-CANDLE, Rodinia, Timed Apache Compilation, Timed Linux Kernel Compilation, libavif avifenc, Geekbench, Darmstadt Automotive Parallel Heterogeneous Suite, Montage Astronomical Image Mosaic Engine, ASTC Encoder, NAMD, and TensorFlow Lite.
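One simple way to quantify how closely the three runs agree on a given test is the relative spread between the slowest and fastest run averages. A minimal sketch (the helper name is my own; the sample values are the Rodinia OpenMP LavaMD averages from the tables below):

```python
def relative_spread_pct(results):
    """Percentage gap between the slowest and fastest of several run averages."""
    return (max(results) / min(results) - 1.0) * 100.0

# Rodinia OpenMP LavaMD averages for the three runs, in seconds:
lavamd = [1838.83, 1918.78, 1841.51]
spread = relative_spread_pct(lavamd)  # ~4.3%, driven by Run 2's noisier samples
```

Most tests in this file spread far less than that; LavaMD's Run 2 is one of the few visibly noisy results.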

Condensed summary table omitted here; every test's per-run values are repeated in the detailed results below.

Rodinia

Rodinia is a suite focused on accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile currently utilizes select OpenCL, NVIDIA CUDA, and OpenMP test binaries. Learn more via the OpenBenchmarking.org test page.
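Each result in this file is reported as a per-run average with a standard error (SE +/- x, N = number of trials) plus the minimum and maximum trial times. A minimal sketch of how such a summary is derived from raw trial times (the function name and sample values are illustrative, not taken from the result file):

```python
from math import sqrt
from statistics import mean, stdev

def summarize(trials):
    """Average, standard error of the mean, min, and max for a set of trial times."""
    se = stdev(trials) / sqrt(len(trials))  # the SE +/- value shown in the tables
    return mean(trials), se, min(trials), max(trials)

# Illustrative trial times in seconds:
avg, se, lo, hi = summarize([1836.6, 1838.6, 1841.3])
```

The standard error shrinks with more trials, which is presumably why the noisier results in this file were re-run with N = 9 or N = 15 instead of the usual N = 3.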

Rodinia 3.1 (Seconds, fewer is better; compiled with: (CXX) g++ -O2 -lOpenCL)

Test: OpenMP LavaMD
  Run 1: 1838.83 (SE +/- 1.35, N = 3; min 1836.60 / max 1841.26)
  Run 2: 1918.78 (SE +/- 64.46, N = 9; min 1836.16 / max 2416.70)
  Run 3: 1841.51 (SE +/- 1.64, N = 3; min 1838.81 / max 1844.47)

Test: OpenMP HotSpot3D
  Run 1: 226.86 (SE +/- 1.00, N = 3; min 225.79 / max 228.86)
  Run 2: 225.70 (SE +/- 0.94, N = 3; min 224.05 / max 227.30)
  Run 3: 227.51 (SE +/- 0.26, N = 3; min 227.24 / max 228.02)

Test: OpenMP Leukocyte
  Run 1: 645.61 (SE +/- 1.40, N = 3; min 644.10 / max 648.41)
  Run 2: 643.29 (SE +/- 1.18, N = 3; min 641.65 / max 645.59)
  Run 3: 643.34 (SE +/- 0.92, N = 3; min 641.52 / max 644.46)

Test: OpenMP CFD Solver
  Run 1: 124.41 (SE +/- 0.50, N = 3; min 123.72 / max 125.39)
  Run 2: 116.73 (SE +/- 1.05, N = 3; min 115.10 / max 118.68)
  Run 3: 116.12 (SE +/- 1.09, N = 3; min 114.05 / max 117.72)

Test: OpenMP Streamcluster
  Run 1: 47.08 (SE +/- 0.02, N = 3; min 47.04 / max 47.10)
  Run 2: 46.79 (SE +/- 0.03, N = 3; min 46.74 / max 46.83)
  Run 3: 47.02 (SE +/- 0.05, N = 3; min 46.93 / max 47.08)

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.

NAMD 2.14 (days/ns, fewer is better)

Test: ATPase Simulation - 327,506 Atoms
  Run 1: 11.13 (SE +/- 0.02, N = 3; min 11.10 / max 11.16)
  Run 2: 11.14 (SE +/- 0.00, N = 3; min 11.13 / max 11.15)
  Run 3: 11.13 (SE +/- 0.02, N = 3; min 11.10 / max 11.17)
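NAMD's days/ns metric is the wall-clock days of computation needed per nanosecond of simulated time, which is why lower is better; the more familiar ns/day throughput figure is simply the reciprocal. A quick sketch (helper name is my own):

```python
def ns_per_day(days_per_ns):
    """Convert NAMD's days/ns metric to the conventional ns/day throughput."""
    return 1.0 / days_per_ns

# Run 1's 11.13 days/ns works out to roughly 0.09 ns of simulation per day
# on this 2-core laptop CPU.
throughput = ns_per_day(11.13)
```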

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 1.5 (ms, fewer is better; compiled with: (CXX) g++ -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl; the trailing MIN is the lowest time reported in the test output)

Harness: IP Batch 1D - Data Type: f32 - Engine: CPU
  Run 1: 23.30 (SE +/- 0.37, N = 3; min 22.66 / max 23.93; MIN 21.98)
  Run 2: 22.90 (SE +/- 0.06, N = 3; min 22.83 / max 23.03; MIN 22.05)
  Run 3: 22.78 (SE +/- 0.09, N = 3; min 22.63 / max 22.93; MIN 22.01)

Harness: IP Batch All - Data Type: f32 - Engine: CPU
  Run 1: 307.09 (SE +/- 0.82, N = 3; min 305.59 / max 308.43; MIN 301.63)
  Run 2: 304.02 (SE +/- 0.58, N = 3; min 303.24 / max 305.15; MIN 300.13)
  Run 3: 311.40 (SE +/- 1.46, N = 3; min 309.51 / max 314.28; MIN 302.30)

Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU
  Run 1: 34.93 (SE +/- 0.14, N = 3; min 34.64 / max 35.09; MIN 34.35)
  Run 2: 33.81 (SE +/- 0.13, N = 3; min 33.59 / max 34.04; MIN 33.41)
  Run 3: 35.10 (SE +/- 0.17, N = 3; min 34.77 / max 35.31; MIN 34.36)

Harness: Deconvolution Batch deconv_1d - Data Type: f32 - Engine: CPU
  Run 1: 27.86 (SE +/- 0.05, N = 3; min 27.75 / max 27.94; MIN 27.51)
  Run 2: 27.76 (SE +/- 0.03, N = 3; min 27.70 / max 27.78; MIN 27.42)
  Run 3: 27.78 (SE +/- 0.11, N = 3; min 27.61 / max 27.98; MIN 27.39)

Harness: Deconvolution Batch deconv_3d - Data Type: f32 - Engine: CPU
  Run 1: 37.94 (SE +/- 0.11, N = 3; min 37.81 / max 38.16; MIN 37.65)
  Run 2: 37.73 (SE +/- 0.01, N = 3; min 37.71 / max 37.76; MIN 37.58)
  Run 3: 37.84 (SE +/- 0.04, N = 3; min 37.77 / max 37.88; MIN 37.62)

Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU
  Run 1: 1554.43 (SE +/- 23.16, N = 3; min 1530.57 / max 1600.75; MIN 1515.46)
  Run 2: 1532.18 (SE +/- 9.40, N = 3; min 1513.74 / max 1544.61; MIN 1501.04)
  Run 3: 1628.52 (SE +/- 19.04, N = 15; min 1533.40 / max 1742.98; MIN 1523.42)

Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU
  Run 1: 810.74 (SE +/- 3.35, N = 3; min 806.07 / max 817.23; MIN 803.30)
  Run 2: 811.33 (SE +/- 7.02, N = 3; min 800.21 / max 824.31; MIN 798.93)
  Run 3: 824.36 (SE +/- 4.82, N = 3; min 817.69 / max 833.72; MIN 810.25)

Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU
  Run 1: 10.77 (SE +/- 0.04, N = 3; min 10.69 / max 10.85; MIN 10.23)
  Run 2: 10.25 (SE +/- 0.02, N = 3; min 10.22 / max 10.30; MIN 9.98)
  Run 3: 11.07 (SE +/- 0.12, N = 15; min 10.26 / max 11.64; MIN 10.02)

libavif avifenc

This is a test of the AOMedia libavif library, encoding a JPEG image to the AV1 Image File Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.7.3 (Seconds, fewer is better; compiled with: (CXX) g++ -O3 -fPIC)

Encoder Speed: 0
  Run 1: 489.48 (SE +/- 5.93, N = 3; min 477.63 / max 495.89)
  Run 2: 476.99 (SE +/- 0.60, N = 3; min 476.02 / max 478.09)
  Run 3: 477.14 (SE +/- 0.39, N = 3; min 476.44 / max 477.79)

Encoder Speed: 2
  Run 1: 284.94 (SE +/- 0.43, N = 3; min 284.33 / max 285.76)
  Run 2: 283.33 (SE +/- 0.11, N = 3; min 283.17 / max 283.54)
  Run 3: 282.66 (SE +/- 0.03, N = 3; min 282.60 / max 282.71)

Encoder Speed: 8
  Run 1: 16.64 (SE +/- 0.02, N = 3; min 16.61 / max 16.68)
  Run 2: 16.64 (SE +/- 0.01, N = 3; min 16.63 / max 16.65)
  Run 3: 16.65 (SE +/- 0.02, N = 3; min 16.62 / max 16.67)

Encoder Speed: 10
  Run 1: 14.73 (SE +/- 0.01, N = 3; min 14.71 / max 14.75)
  Run 2: 14.74 (SE +/- 0.02, N = 3; min 14.70 / max 14.77)
  Run 3: 14.75 (SE +/- 0.01, N = 3; min 14.73 / max 14.77)

Timed Apache Compilation

This test times how long it takes to build the Apache HTTPD web server. Learn more via the OpenBenchmarking.org test page.

Timed Apache Compilation 2.4.41 (Seconds, fewer is better)

Time To Compile
  Run 1: 74.29 (SE +/- 0.07, N = 3; min 74.22 / max 74.42)
  Run 2: 73.88 (SE +/- 0.05, N = 3; min 73.83 / max 73.99)
  Run 3: 74.87 (SE +/- 0.04, N = 3; min 74.81 / max 74.93)

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 5.4 (Seconds, fewer is better)

Time To Compile
  Run 1: 567.77 (SE +/- 1.11, N = 3; min 566.58 / max 569.99)
  Run 2: 564.36 (SE +/- 1.24, N = 3; min 562.60 / max 566.75)
  Run 3: 569.99 (SE +/- 1.19, N = 3; min 568.79 / max 572.36)

Montage Astronomical Image Mosaic Engine

Montage is an open-source astronomical image mosaic engine. This BSD-licensed astronomy software is developed by the California Institute of Technology, Pasadena. Learn more via the OpenBenchmarking.org test page.

Montage Astronomical Image Mosaic Engine 6.0 (Seconds, fewer is better; compiled with: (CC) gcc -std=gnu99 -lcfitsio -lm -O2)

Mosaic of M17, K band, 1.5 deg x 1.5 deg
  Run 1: 122.01 (SE +/- 0.11, N = 3; min 121.84 / max 122.21)
  Run 2: 122.22 (SE +/- 0.09, N = 3; min 122.06 / max 122.36)
  Run 3: 122.07 (SE +/- 0.05, N = 3; min 121.99 / max 122.15)

Darmstadt Automotive Parallel Heterogeneous Suite

DAPHNE is the Darmstadt Automotive Parallel HeterogeNEous benchmark suite, with OpenCL / CUDA / OpenMP test cases for evaluating programming models in the context of vehicle autonomous-driving capabilities. Learn more via the OpenBenchmarking.org test page.

Darmstadt Automotive Parallel Heterogeneous Suite (Test Cases Per Minute, more is better; compiled with: (CXX) g++ -O3 -std=c++11 -fopenmp)

Backend: OpenMP - Kernel: NDT Mapping
  Run 1: 427.68 (SE +/- 0.20, N = 3; min 427.37 / max 428.07)
  Run 2: 427.55 (SE +/- 0.32, N = 3; min 426.96 / max 428.04)
  Run 3: 427.87 (SE +/- 0.72, N = 3; min 427.00 / max 429.30)

Backend: OpenMP - Kernel: Points2Image
  Run 1: 14193.87 (SE +/- 20.11, N = 3; min 14155.38 / max 14223.20)
  Run 2: 14197.08 (SE +/- 6.44, N = 3; min 14184.43 / max 14205.49)
  Run 3: 14118.10 (SE +/- 7.85, N = 3; min 14106.83 / max 14133.21)

Backend: OpenMP - Kernel: Euclidean Cluster
  Run 1: 436.88 (SE +/- 0.43, N = 3; min 436.02 / max 437.42)
  Run 2: 436.58 (SE +/- 0.77, N = 3; min 435.04 / max 437.36)
  Run 3: 435.60 (SE +/- 0.03, N = 3; min 435.55 / max 435.66)

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile measures the average inference time. Learn more via the OpenBenchmarking.org test page.
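The TensorFlow Lite results are average per-inference latencies in microseconds, so e.g. SqueezeNet's roughly 1,439,837 us in Run 1 is about 1.44 seconds per inference. A small sketch of the unit conversions (helper names are my own):

```python
def us_to_seconds(microseconds):
    """Convert a latency in microseconds to seconds."""
    return microseconds / 1_000_000

def inferences_per_minute(microseconds):
    """Throughput implied by an average per-inference latency in microseconds."""
    return 60.0 / us_to_seconds(microseconds)

squeezenet_s = us_to_seconds(1_439_837)             # ~1.44 s per inference (Run 1)
squeezenet_rate = inferences_per_minute(1_439_837)  # ~41.7 inferences per minute
```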

TensorFlow Lite 2020-08-23 (Microseconds, fewer is better)

Model: SqueezeNet
  Run 1: 1439837 (SE +/- 221.84, N = 3; min 1439600 / max 1440280)
  Run 2: 1440517 (SE +/- 632.46, N = 3; min 1439830 / max 1441780)
  Run 3: 1439717 (SE +/- 29.63, N = 3; min 1439660 / max 1439760)

Model: Inception V4
  Run 1: 20827933 (SE +/- 3349.79, N = 3; min 20823500 / max 20834500)
  Run 2: 20848367 (SE +/- 19070.95, N = 3; min 20828600 / max 20886500)
  Run 3: 20848033 (SE +/- 17197.80, N = 3; min 20825100 / max 20881700)

Model: NASNet Mobile
  Run 1: 1007580 (SE +/- 283.08, N = 3; min 1007080 / max 1008060)
  Run 2: 1007653 (SE +/- 255.36, N = 3; min 1007150 / max 1007980)
  Run 3: 1008210 (SE +/- 152.75, N = 3; min 1007910 / max 1008410)

Model: Mobilenet Float
  Run 1: 976575 (SE +/- 151.72, N = 3; min 976342 / max 976860)
  Run 2: 976629 (SE +/- 93.51, N = 3; min 976508 / max 976813)
  Run 3: 976604 (SE +/- 73.18, N = 3; min 976516 / max 976749)

Model: Mobilenet Quant
  Run 1: 947522 (SE +/- 2867.44, N = 3; min 944220 / max 953234)
  Run 2: 944925 (SE +/- 231.29, N = 3; min 944584 / max 945366)
  Run 3: 944543 (SE +/- 262.66, N = 3; min 944134 / max 945033)

Model: Inception ResNet V2
  Run 1: 18852000 (SE +/- 1331.67, N = 3; min 18849800 / max 18854400)
  Run 2: 18851967 (SE +/- 5166.99, N = 3; min 18846700 / max 18862300)
  Run 3: 18853200 (SE +/- 1823.00, N = 3; min 18850900 / max 18856800)

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test covering both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.0 (Seconds, fewer is better; compiled with: (CXX) g++ -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread)

Preset: Fast
  Run 1: 10.49 (SE +/- 0.01, N = 3; min 10.48 / max 10.50)
  Run 2: 10.49 (SE +/- 0.00, N = 3; min 10.49 / max 10.50)
  Run 3: 10.50 (SE +/- 0.00, N = 3; min 10.49 / max 10.50)

Preset: Medium
  Run 1: 28.63 (SE +/- 0.01, N = 3; min 28.61 / max 28.64)
  Run 2: 28.65 (SE +/- 0.02, N = 3; min 28.62 / max 28.68)
  Run 3: 28.62 (SE +/- 0.01, N = 3; min 28.61 / max 28.63)

Preset: Thorough
  Run 1: 196.20 (SE +/- 0.12, N = 3; min 195.96 / max 196.38)
  Run 2: 196.44 (SE +/- 0.20, N = 3; min 196.16 / max 196.82)
  Run 3: 196.17 (SE +/- 0.05, N = 3; min 196.11 / max 196.28)

Preset: Exhaustive
  Run 1: 1600.33 (SE +/- 1.31, N = 3; min 1598.83 / max 1602.95)
  Run 2: 1601.44 (SE +/- 0.24, N = 3; min 1601.08 / max 1601.89)
  Run 3: 1601.56 (SE +/- 0.83, N = 3; min 1600.04 / max 1602.89)

ECP-CANDLE

The CANDLE benchmark codes implement deep learning architectures relevant to problems in cancer. These architectures address problems at different biological scales, specifically problems at the molecular, cellular and population scales. Learn more via the OpenBenchmarking.org test page.

ECP-CANDLE 0.3 (Seconds, fewer is better)

Benchmark: P1B2
  Run 1: 74.84
  Run 2: 72.77
  Run 3: 74.82

Benchmark: P3B1
  Run 1: 2586.14
  Run 2: 2551.64
  Run 3: 2549.29

Geekbench

This is a benchmark of Geekbench 5 Pro. The test profile automates the execution of Geekbench 5 under the Phoronix Test Suite and requires a valid Geekbench 5 Pro license key; the test will not run without one. Learn more via the OpenBenchmarking.org test page.
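On this 2-core / 4-thread CPU it is worth relating the multi-core and single-core scores: their ratio gives a rough multi-threaded scaling factor. This is a simplification, since Geekbench's subtests scale differently, but it is a quick sanity check (helper name is my own; scores are from the tables below):

```python
def mt_scaling(multi_score, single_score):
    """Rough multi-threaded scaling factor implied by Geekbench 5 scores."""
    return multi_score / single_score

# Run 1 scores: 1633 multi-core vs 789 single-core.
factor = mt_scaling(1633, 789)  # ~2.07x on 2 cores / 4 threads
```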

Geekbench 5

Test: CPU Multi Core (Score, more is better)
  Run 1: 1633 (SE +/- 1.33, N = 3; min 1632 / max 1636)
  Run 2: 1630 (SE +/- 2.19, N = 3; min 1627 / max 1634)
  Run 3: 1630 (per-trial data not reported)

Test: CPU Multi Core - Gaussian Blur (Mpixels/sec, more is better)
  Run 1: 77.8 (SE +/- 0.19, N = 3; min 77.4 / max 78.0)
  Run 2: 77.6 (SE +/- 0.15, N = 3; min 77.4 / max 77.9)
  Run 3: 77.8 (SE +/- 0.18, N = 3; min 77.5 / max 78.1)

Test: CPU Multi Core - Face Detection (images/sec, more is better)
  Run 1: 12.7 (SE +/- 0.00, N = 3; min 12.7 / max 12.7)
  Run 2: 12.6 (SE +/- 0.03, N = 3; min 12.6 / max 12.7)
  Run 3: 12.7 (SE +/- 0.03, N = 3; min 12.6 / max 12.7)

Test: CPU Multi Core - Horizon Detection (Gpixels/sec, more is better)
  Run 1: 43.2 (SE +/- 0.03, N = 3; min 43.1 / max 43.2)
  Run 2: 42.9 (SE +/- 0.21, N = 3; min 42.5 / max 43.2)
  Run 3: 43.1 (SE +/- 0.00, N = 3; min 43.1 / max 43.1)

Test: CPU Single Core (Score, more is better)
  Run 1: 789 (SE +/- 0.67, N = 3; min 788 / max 790)
  Run 2: 785 (SE +/- 0.58, N = 3; min 784 / max 786)
  Run 3: 785 (SE +/- 1.00, N = 3; min 784 / max 787)

Test: CPU Single Core - Gaussian Blur (Mpixels/sec, more is better)
  Run 1: 32.8 (SE +/- 1.39, N = 3; min 30.0 / max 34.3)
  Run 2: 32.1 (SE +/- 0.37, N = 3; min 31.4 / max 32.6)
  Run 3: 31.7 (SE +/- 0.34, N = 3; min 31.3 / max 32.4)

Test: CPU Single Core - Face Detection (images/sec, more is better)
  Run 1: 5.86 (SE +/- 0.07, N = 3; min 5.73 / max 5.95)
  Run 2: 5.92 (SE +/- 0.01, N = 3; min 5.91 / max 5.95)
  Run 3: 5.89 (SE +/- 0.00, N = 3; min 5.89 / max 5.89)

Test: CPU Single Core - Horizon Detection (Gpixels/sec, more is better)
  Run 1: 19.3 (SE +/- 0.06, N = 3; min 19.2 / max 19.4)
  Run 2: 19.2 (SE +/- 0.09, N = 3; min 19.1 / max 19.4)
  Run 3: 19.3 (SE +/- 0.03, N = 3; min 19.2 / max 19.3)