Sandy Bridge 2020

Intel Core i7-2700K testing with a BIOSTAR B75MU3B v5.0 (4.6.5 BIOS) and Intel Sandybridge Desktop 2GB on Ubuntu 20.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2009147-FI-SANDYBRID16
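As a point of reference, a minimal sketch of driving that comparison from a terminal, assuming the Phoronix Test Suite is already installed and on the PATH (the result identifier is the one from this page):

  # Fetch this public result file, install the same test profiles locally,
  # run them, and merge your numbers into the comparison
  phoronix-test-suite benchmark 2009147-FI-SANDYBRID16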

Test categories represented in this result file:

AV1: 4 Tests
BLAS (Basic Linear Algebra Sub-Routine) Tests: 2 Tests
Timed Code Compilation: 3 Tests
C/C++ Compiler Tests: 6 Tests
Compression Tests: 4 Tests
CPU Massive: 17 Tests
Creator Workloads: 17 Tests
Encoding: 4 Tests
Game Development: 2 Tests
HPC - High Performance Computing: 7 Tests
Imaging: 4 Tests
Java: 2 Tests
Common Kernel Benchmarks: 3 Tests
Machine Learning: 3 Tests
Multi-Core: 19 Tests
NVIDIA GPU Compute: 5 Tests
OCR: 2 Tests
Intel oneAPI: 4 Tests
Programmer / Developer System Benchmarks: 6 Tests
Python Tests: 3 Tests
Renderers: 3 Tests
Scientific Computing: 2 Tests
Server CPU Tests: 12 Tests
Single-Threaded: 2 Tests
Video Encoding: 4 Tests
Common Workstation Benchmarks: 2 Tests


Run Management

Result Identifier - Date Run - Test Duration
Core i7 2700K - May 26 2020 - 9 Hours, 58 Minutes
Intel Core i7 2700K - September 13 2020 - 8 Hours, 55 Minutes


Sandy Bridge 2020 - OpenBenchmarking.org - Phoronix Test Suite

Processor: Intel Core i7-2700K @ 3.90GHz (4 Cores / 8 Threads)
Motherboard: BIOSTAR B75MU3B v5.0 (4.6.5 BIOS)
Chipset: Intel 2nd Generation Core DRAM
Memory: 8GB
Disk: 525GB Crucial_CT525MX3 / 525GB Crucial CT525MX3
Graphics: Intel Sandybridge Desktop 2GB (1350MHz)
Audio: Realtek ALC662 rev1
Monitor: G237HL
Network: Realtek RTL8111/8168/8411
OS: Ubuntu 20.04
Kernel: 5.5.0-999-generic (x86_64) 20191221
Desktop: GNOME Shell 3.34.1
Display Server: X Server 1.20.5
Display Driver: modesetting 1.20.5
OpenGL: 3.3 Mesa 19.2.4
Compiler: GCC 9.2.1 20191130
File-System: ext4
Screen Resolution: 1920x1080

Sandy Bridge 2020 Benchmarks - System Logs
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-mutex --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: intel_pstate powersave - CPU Microcode: 0x2f
- Core i7 2700K: OpenJDK Runtime Environment (build 11.0.5+10-post-Ubuntu-2ubuntu1) - Python 2.7.17 + Python 3.8.2
- Security: itlb_multihit: KVM: Vulnerable + l1tf: Mitigation of PTE Inversion; VMX: vulnerable + mds: Vulnerable; SMT vulnerable + meltdown: Vulnerable + spec_store_bypass: Vulnerable + spectre_v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers + spectre_v2: Vulnerable IBPB: disabled STIBP: disabled + tsx_async_abort: Not affected

Core i7 2700K vs. Intel Core i7 2700K Comparison (Phoronix Test Suite) - largest deviations from baseline:

AOM AV1 - Speed 0 Two-Pass: 22.2%
Zstd Compression - Level 19: 16.7%
Timed Linux Kernel Compilation - Time To Compile: 16.2%
AOM AV1 - Speed 8 Realtime: 12.6%
AOM AV1 - Speed 6 Realtime: 12.6%
AOM AV1 - Speed 6 Two-Pass: 10.9%
AOM AV1 - Speed 4 Two-Pass: 10.3%
Zstd Compression - Level 3: 6.7%

Sandy Bridge 2020 - condensed result table (OpenBenchmarking.org); the individual results for each test follow in the detailed sections below.

SVT-AV1

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 CPU-based multi-threaded video encoder for the AV1 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 0.8, Encoder Mode: Enc Mode 0 - Input: 1080p (Frames Per Second, More Is Better) - Core i7 2700K: 0.006 (SE +/- 0.000, N = 3). (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1, Test: OpenMP LavaMD (Seconds, Fewer Is Better) - Intel Core i7 2700K: 1106.83 (SE +/- 1.99, N = 3). (CXX) g++ options: -O2 -lOpenCL

Java Gradle Build

This test runs Java software project builds using the Gradle build system. It is intended to give developers an idea as to the build performance for development activities and build servers. Learn more via the OpenBenchmarking.org test page.

Java Gradle Build, Gradle Build: Reactor (Seconds, Fewer Is Better) - Core i7 2700K: 296.81 (SE +/- 5.71, N = 9)

SVT-AV1

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 CPU-based multi-threaded video encoder for the AV1 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 0.8, Encoder Mode: Enc Mode 4 - Input: 1080p (Frames Per Second, More Is Better) - Core i7 2700K: 0.093 (SE +/- 0.000, N = 3). (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie

SVT-AV1 0.8, Encoder Mode: Enc Mode 8 - Input: 1080p (Frames Per Second, More Is Better) - Core i7 2700K: 0.495 (SE +/- 0.000, N = 3). (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1, Test: OpenMP Leukocyte (Seconds, Fewer Is Better) - Intel Core i7 2700K: 527.08 (SE +/- 1.01, N = 3). (CXX) g++ options: -O2 -lOpenCL

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library, which offers high-performance volume computation kernels and is part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenVKL 0.9, Benchmark: vklBenchmark (Items / Sec, More Is Better) - Core i7 2700K: 42.56 (SE +/- 0.41, N = 12). MIN: 1 / MAX: 118

YafaRay

YafaRay is an open-source, physically based Monte Carlo ray-tracing engine. Learn more via the OpenBenchmarking.org test page.

YafaRay 3.4.1, Total Time For Sample Scene (Seconds, Fewer Is Better) - Core i7 2700K: 476.90 (SE +/- 0.97, N = 3). (CXX) g++ options: -std=c++11 -O3 -ffast-math -rdynamic -ldl -lImath -lIlmImf -lIex -lHalf -lz -lIlmThread -lxml2 -lfreetype -lpthread

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.25, Backend: Eigen (Nodes Per Second, More Is Better) - Core i7 2700K: 116 (SE +/- 0.67, N = 3). (CXX) g++ options: -pthread

LeelaChessZero 0.25, Backend: BLAS (Nodes Per Second, More Is Better) - Core i7 2700K: 236 (SE +/- 2.40, N = 3). (CXX) g++ options: -pthread

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.
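As a rough illustration of what the "Encoder Speed" setting controls, a hedged avifenc sketch follows; the file names are placeholders and the exact option spelling can vary between libavif releases:

  # -s selects encoder speed: 0 is the slowest/highest-effort setting, 10 the fastest
  avifenc -s 0 input.jpg output-speed0.avif
  avifenc -s 10 input.jpg output-speed10.avif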

libavif avifenc 0.7.3, Encoder Speed: 0 (Seconds, Fewer Is Better) - Intel Core i7 2700K: 359.80 (SE +/- 1.05, N = 3). (CXX) g++ options: -O3 -fPIC

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.26, Backend: BLAS (Nodes Per Second, More Is Better) - Intel Core i7 2700K: 202 (SE +/- 0.88, N = 3). (CXX) g++ options: -flto -pthread

LeelaChessZero 0.26, Backend: Eigen (Nodes Per Second, More Is Better) - Intel Core i7 2700K: 205 (SE +/- 0.88, N = 3). (CXX) g++ options: -flto -pthread

LeelaChessZero 0.25, Backend: Random (Nodes Per Second, More Is Better) - Core i7 2700K: 135082 (SE +/- 400.88, N = 3). (CXX) g++ options: -pthread

LeelaChessZero 0.26, Backend: Random (Nodes Per Second, More Is Better) - Intel Core i7 2700K: 104711 (SE +/- 823.37, N = 3). (CXX) g++ options: -flto -pthread

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.

NAMD 2.14, ATPase Simulation - 327,506 Atoms (days/ns, Fewer Is Better) - Intel Core i7 2700K: 6.95032 (SE +/- 0.10733, N = 3)

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.
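For context, the "Speed N Two-Pass" and "Speed N Realtime" modes correspond to aomenc's --cpu-used setting combined with two-pass or real-time encoding. A hedged sketch, with placeholder file names and option spellings that may differ between aomenc builds:

  # Two-pass encode at speed (cpu-used) 4
  aomenc --passes=2 --cpu-used=4 -o output.ivf input.y4m
  # Single-pass real-time encode at speed 6
  aomenc --rt --cpu-used=6 -o output-rt.ivf input.y4m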

AOM AV1 2.0, Encoder Mode: Speed 0 Two-Pass (Frames Per Second, More Is Better) - Core i7 2700K: 0.11 (SE +/- 0.00, N = 3); Intel Core i7 2700K: 0.09 (SE +/- 0.00, N = 15). (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

pmbench

Pmbench is a Linux paging and virtual memory benchmark. This test profile will report the average page latency of the system. Learn more via the OpenBenchmarking.org test page.

pmbench, Concurrent Worker Threads: 8 - Read-Write Ratio: 50% (us - Average Page Latency, Fewer Is Better) - Intel Core i7 2700K: 0.1221 (SE +/- 0.0010, N = 15). (CC) gcc options: -lm -luuid -lxml2 -m64 -pthread

pmbench, Concurrent Worker Threads: 4 - Read-Write Ratio: 80% Reads 20% Writes (us - Average Page Latency, Fewer Is Better) - Intel Core i7 2700K: 0.1192 (SE +/- 0.0011, N = 15). (CC) gcc options: -lm -luuid -lxml2 -m64 -pthread

pmbench, Concurrent Worker Threads: 8 - Read-Write Ratio: 100% Writes (us - Average Page Latency, Fewer Is Better) - Intel Core i7 2700K: 0.0757 (SE +/- 0.0013, N = 15). (CC) gcc options: -lm -luuid -lxml2 -m64 -pthread

Build2

This test profile measures the time to bootstrap/install the build2 C++ build toolchain from source. Build2 is a cross-platform build toolchain for C/C++ code that offers Cargo-like features. Learn more via the OpenBenchmarking.org test page.

Build2 0.12, Time To Compile (Seconds, Fewer Is Better) - Core i7 2700K: 302.78 (SE +/- 2.03, N = 3)

pmbench

Pmbench is a Linux paging and virtual memory benchmark. This test profile will report the average page latency of the system. Learn more via the OpenBenchmarking.org test page.

pmbench, Concurrent Worker Threads: 8 - Read-Write Ratio: 100% Reads (us - Average Page Latency, Fewer Is Better) - Intel Core i7 2700K: 0.0693 (SE +/- 0.0019, N = 14). (CC) gcc options: -lm -luuid -lxml2 -m64 -pthread

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.
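A hedged sketch of the kind of decode being timed (the input file name is a placeholder):

  # Decode an AV1 bitstream to a raw Y4M file; the harness measures decode FPS
  dav1d -i chimera_1080p_10bit.ivf -o decoded.y4m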

dav1d 0.7.0, Video Input: Chimera 1080p 10-bit (FPS, More Is Better) - Core i7 2700K: 50.99 (SE +/- 0.07, N = 3). MIN: 34.01 / MAX: 117.03. (CC) gcc options: -pthread

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration. Learn more via the OpenBenchmarking.org test page.
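A hedged sketch of the equivalent manual steps, assuming a Linux 5.4 source tree in the current directory:

  # Generate a default configuration and time a parallel build on all CPU threads
  make defconfig
  time make -j"$(nproc)"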

Timed Linux Kernel Compilation 5.4, Time To Compile (Seconds, Fewer Is Better) - Core i7 2700K: 238.29 (SE +/- 0.54, N = 3); Intel Core i7 2700K: 276.79 (SE +/- 0.56, N = 3)

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.7.3, Encoder Speed: 2 (Seconds, Fewer Is Better) - Intel Core i7 2700K: 211.92 (SE +/- 0.25, N = 3). (CXX) g++ options: -O3 -fPIC

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1, Test: OpenMP HotSpot3D (Seconds, Fewer Is Better) - Intel Core i7 2700K: 205.37 (SE +/- 1.79, N = 3). (CXX) g++ options: -O2 -lOpenCL

G'MIC

G'MIC is an open-source framework for image processing. Learn more via the OpenBenchmarking.org test page.

G'MIC, Test: 2D Function Plotting, 1000 Times (Seconds, Fewer Is Better) - Core i7 2700K: 148.93 (SE +/- 2.19, N = 4). Version 2.4.5, Copyright (c) 2008-2019, David Tschumperle.

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0, Binary: Pathtracer - Model: Crown (Frames Per Second, More Is Better) - Core i7 2700K: 3.1204 (SE +/- 0.0109, N = 3). MIN: 3.09 / MAX: 3.22

Embree 3.9.0, Binary: Pathtracer - Model: Asian Dragon Obj (Frames Per Second, More Is Better) - Core i7 2700K: 3.5766 (SE +/- 0.0077, N = 3). MIN: 3.55 / MAX: 3.66

Hugin

Hugin is an open-source, cross-platform panorama photo stitcher software package. This test profile times how long it takes to run the assistant and panorama photo stitching on a set of images. Learn more via the OpenBenchmarking.org test page.

Hugin, Panorama Photo Assistant + Stitching Time (Seconds, Fewer Is Better) - Intel Core i7 2700K: 180.30 (SE +/- 0.71, N = 3)

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1, Test: OpenMP CFD Solver (Seconds, Fewer Is Better) - Intel Core i7 2700K: 179.38 (SE +/- 0.59, N = 3). (CXX) g++ options: -O2 -lOpenCL

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0, Binary: Pathtracer ISPC - Model: Crown (Frames Per Second, More Is Better) - Core i7 2700K: 3.4265 (SE +/- 0.0091, N = 3). MIN: 3.4 / MAX: 3.51

Embree 3.9.0, Binary: Pathtracer ISPC - Model: Asian Dragon Obj (Frames Per Second, More Is Better) - Core i7 2700K: 3.9974 (SE +/- 0.0112, N = 3). MIN: 3.97 / MAX: 4.09

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.
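To illustrate what the compression-level parameter means, a hedged example with a placeholder ISO path:

  # Level 3 is zstd's fast default region; level 19 trades speed for compression ratio
  zstd -3 ubuntu.iso -o ubuntu-l3.iso.zst
  zstd -19 ubuntu.iso -o ubuntu-l19.iso.zst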

Zstd Compression 1.4.5, Compression Level: 19 (MB/s, More Is Better) - Core i7 2700K: 16.1 (SE +/- 0.00, N = 3); Intel Core i7 2700K: 13.8 (SE +/- 0.03, N = 3). (CC) gcc options: -O3 -pthread -lz -llzma

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23, Model: Inception V4 (Microseconds, Fewer Is Better) - Intel Core i7 2700K: 11691267 (SE +/- 8096.36, N = 3)

TensorFlow Lite 2020-08-23, Model: Inception ResNet V2 (Microseconds, Fewer Is Better) - Intel Core i7 2700K: 10723333 (SE +/- 1729.48, N = 3)

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0, Binary: Pathtracer - Model: Asian Dragon (Frames Per Second, More Is Better) - Core i7 2700K: 3.8174 (SE +/- 0.0038, N = 3). MIN: 3.77 / MAX: 3.87

perf-bench

perf-bench, Benchmark: Epoll Wait (ops/sec, More Is Better) - Intel Core i7 2700K: 112223 (SE +/- 2960.93, N = 15). (CC) gcc options: -O6 -ggdb3 -funwind-tables -std=gnu99 -Xlinker -export-dynamic -lpthread -lrt -lm -ldl -lelf -lcrypto -lslang -lperl -lc -lcrypt -lpython2.7 -lutil -lz -llzma -lnuma

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.
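A hedged example of invoking one of the stressors reported below (worker count and duration are arbitrary):

  # Run the CPU stressor on 4 workers for 60 seconds and print bogo-ops/s metrics
  stress-ng --cpu 4 --timeout 60s --metrics-brief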

Stress-NG 0.11.07, Test: NUMA (Bogo Ops/s, More Is Better) - Core i7 2700K: 69.25 (SE +/- 0.83, N = 15). (CC) gcc options: -O2 -std=gnu99 -lm -laio -lcrypt -lrt -lz -ldl -lpthread -lc

perf-bench

perf-bench, Benchmark: Futex Lock-Pi (ops/sec, More Is Better) - Intel Core i7 2700K: 1854 (SE +/- 14.76, N = 14). (CC) gcc options: -O6 -ggdb3 -funwind-tables -std=gnu99 -Xlinker -export-dynamic -lpthread -lrt -lm -ldl -lelf -lcrypto -lslang -lperl -lc -lcrypt -lpython2.7 -lutil -lz -llzma -lnuma

Darmstadt Automotive Parallel Heterogeneous Suite

DAPHNE is the Darmstadt Automotive Parallel HeterogeNEous Benchmark Suite, with OpenCL / CUDA / OpenMP test cases for evaluating programming models in the context of vehicle autonomous-driving capabilities. Learn more via the OpenBenchmarking.org test page.

Darmstadt Automotive Parallel Heterogeneous Suite, Backend: OpenMP - Kernel: Points2Image (Test Cases Per Minute, More Is Better) - Intel Core i7 2700K: 14595.19 (SE +/- 7.89, N = 3). (CXX) g++ options: -O3 -std=c++11 -fopenmp

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0, Binary: Pathtracer ISPC - Model: Asian Dragon (Frames Per Second, More Is Better) - Core i7 2700K: 4.5036 (SE +/- 0.0044, N = 3). MIN: 4.47 / MAX: 4.57

Intel Open Image Denoise

Open Image Denoise is a denoising library for ray-tracing and is part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

Intel Open Image Denoise 1.2.0, Scene: Memorial (Images / Sec, More Is Better) - Core i7 2700K: 1.57 (SE +/- 0.00, N = 3)

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.
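A hedged sketch of running a subset of the suite; the -b benchmark-selection flag is assumed from pyperformance's CLI:

  # Run only the raytrace and 2to3 benchmarks and report their timings
  python3 -m pyperformance run -b raytrace,2to3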

PyPerformance 1.0.0, Benchmark: raytrace (Milliseconds, Fewer Is Better) - Intel Core i7 2700K: 722 (SE +/- 1.53, N = 3)

Geekbench

This is a benchmark of Geekbench 5 Pro. The test profile automates the execution of Geekbench 5 under the Phoronix Test Suite, assuming you have a valid license key for Geekbench 5 Pro. This test will not work without a valid license key for Geekbench Pro. Learn more via the OpenBenchmarking.org test page.

Geekbench 5, Test: CPU Multi Core - Horizon Detection (Gpixels/sec, More Is Better) - Intel Core i7 2700K: 70.1 (SE +/- 0.88, N = 3)

Geekbench 5, Test: CPU Multi Core - Face Detection (images/sec, More Is Better) - Intel Core i7 2700K: 20.3 (SE +/- 0.20, N = 3)

Geekbench 5, Test: CPU Multi Core - Gaussian Blur (Mpixels/sec, More Is Better) - Intel Core i7 2700K: 137.6 (SE +/- 0.32, N = 3)

Geekbench 5, Test: CPU Multi Core (Score, More Is Better) - Intel Core i7 2700K: 2475 (SE +/- 2.08, N = 3)

Montage Astronomical Image Mosaic Engine

Montage is an open-source astronomical image mosaic engine. This BSD-licensed astronomy software is developed by the California Institute of Technology, Pasadena. Learn more via the OpenBenchmarking.org test page.

Montage Astronomical Image Mosaic Engine 6.0, Mosaic of M17, K band, 1.5 deg x 1.5 deg (Seconds, Fewer Is Better) - Intel Core i7 2700K: 114.13 (SE +/- 1.65, N = 3). (CC) gcc options: -std=gnu99 -lcfitsio -lm -O2

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 1.5, Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, Fewer Is Better) - Intel Core i7 2700K: 15085.5 (SE +/- 77.74, N = 3). MIN: 13852.8. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0, Benchmark: 2to3 (Milliseconds, Fewer Is Better) - Intel Core i7 2700K: 549

PyPerformance 1.0.0, Benchmark: python_startup (Milliseconds, Fewer Is Better) - Intel Core i7 2700K: 11.3 (SE +/- 0.00, N = 3)

OCRMyPDF

OCRMyPDF adds an optical character recognition (OCR) text layer to scanned PDF files, producing new PDFs whose text is selectable, searchable, and copy-paste capable. OCRMyPDF leverages the Tesseract OCR engine and is written in Python. Learn more via the OpenBenchmarking.org test page.
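A hedged example of the kind of processing being timed (file names are placeholders):

  # Add a searchable OCR text layer to a scanned PDF using the Tesseract engine
  ocrmypdf scanned-input.pdf searchable-output.pdf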

OCRMyPDF 9.6.0+dfsg, Processing 60 Page PDF Document (Seconds, Fewer Is Better) - Intel Core i7 2700K: 88.08 (SE +/- 0.19, N = 3)

G'MIC

G'MIC is an open-source framework for image processing. Learn more via the OpenBenchmarking.org test page.

G'MIC, Test: 3D Elevated Function In Random Colors, 100 Times (Seconds, Fewer Is Better) - Core i7 2700K: 86.72 (SE +/- 0.08, N = 3). Version 2.4.5, Copyright (c) 2008-2019, David Tschumperle.

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.0, Encoder Mode: Speed 6 Realtime (Frames Per Second, More Is Better) - Core i7 2700K: 7.79 (SE +/- 0.01, N = 3); Intel Core i7 2700K: 6.92 (SE +/- 0.01, N = 3). (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.7.0, Video Input: Summer Nature 4K (FPS, More Is Better) - Core i7 2700K: 45.16 (SE +/- 0.13, N = 3). MIN: 42.82 / MAX: 49.01. (CC) gcc options: -pthread

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0, Benchmark: go (Milliseconds, Fewer Is Better) - Intel Core i7 2700K: 375 (SE +/- 0.67, N = 3)

Git

This test measures the time needed to carry out some sample Git operations on an example, static repository that happens to be a copy of the GNOME GTK tool-kit repository. Learn more via the OpenBenchmarking.org test page.
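A hedged sketch of the sort of everyday operations such a profile exercises, run inside the sample repository clone:

  # Typical repository commands timed by this kind of test
  git status
  git log --oneline -n 100
  git diff HEAD~1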

Git, Time To Complete Common Git Commands (Seconds, Fewer Is Better) - Core i7 2700K: 72.21 (SE +/- 0.07, N = 3). git version 2.24.0

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23, Model: NASNet Mobile (Microseconds, Fewer Is Better) - Intel Core i7 2700K: 1385507 (SE +/- 1374.58, N = 3)

Geekbench

This is a benchmark of Geekbench 5 Pro. The test profile automates the execution of Geekbench 5 under the Phoronix Test Suite, assuming you have a valid license key for Geekbench 5 Pro. This test will not work without a valid license key for Geekbench Pro. Learn more via the OpenBenchmarking.org test page.

Geekbench 5, Test: CPU Single Core - Horizon Detection (Gpixels/sec, More Is Better) - Intel Core i7 2700K: 20.4 (SE +/- 0.03, N = 3)

Geekbench 5, Test: CPU Single Core - Face Detection (images/sec, More Is Better) - Intel Core i7 2700K: 6.44 (SE +/- 0.02, N = 3)

Geekbench 5, Test: CPU Single Core - Gaussian Blur (Mpixels/sec, More Is Better) - Intel Core i7 2700K: 41.5 (SE +/- 0.30, N = 3)

Geekbench 5, Test: CPU Single Core (Score, More Is Better) - Intel Core i7 2700K: 802

Chaos Group V-RAY

This is a test of Chaos Group's V-RAY benchmark. Learn more via the OpenBenchmarking.org test page.

Chaos Group V-RAY 4.10.07, Mode: CPU (Ksamples, More Is Better) - Core i7 2700K: 3088 (SE +/- 14.68, N = 3)

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.7.0, Video Input: Chimera 1080p (FPS, More Is Better) - Core i7 2700K: 180.86 (SE +/- 0.14, N = 3). MIN: 131.03 / MAX: 328.44. (CC) gcc options: -pthread

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.0, Encoder Mode: Speed 6 Two-Pass (Frames Per Second, More Is Better) - Core i7 2700K: 1.52 (SE +/- 0.00, N = 3); Intel Core i7 2700K: 1.37 (SE +/- 0.00, N = 3). (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1, Test: OpenMP Streamcluster (Seconds, Fewer Is Better) - Intel Core i7 2700K: 64.97 (SE +/- 0.03, N = 3). (CXX) g++ options: -O2 -lOpenCL

oneDNN MKL-DNN

This is a test of the Intel oneDNN (formerly DNNL / Deep Neural Network Library / MKL-DNN) as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Learn more via the OpenBenchmarking.org test page.

oneDNN MKL-DNN 1.3, Harness: IP Batch All - Data Type: u8s8f32 (ms, Fewer Is Better) - Core i7 2700K: 1301.91 (SE +/- 6.82, N = 3). MIN: 1283.12. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -lrt -ldl

LuxCoreRender

LuxCoreRender is an open-source physically based renderer. This test profile is focused on running LuxCoreRender on the CPU as opposed to the OpenCL version. Learn more via the OpenBenchmarking.org test page.

LuxCoreRender 2.3, Scene: DLSC (M samples/sec, More Is Better) - Core i7 2700K: 0.39 (SE +/- 0.00, N = 3). MIN: 0.38 / MAX: 0.4

LuxCoreRender 2.3, Scene: Rainbow Colors and Prism (M samples/sec, More Is Better) - Core i7 2700K: 0.43 (SE +/- 0.00, N = 3). MIN: 0.42 / MAX: 0.48

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23, Model: SqueezeNet (Microseconds, Fewer Is Better) - Intel Core i7 2700K: 849978 (SE +/- 863.14, N = 3)

TensorFlow Lite 2020-08-23, Model: Mobilenet Float (Microseconds, Fewer Is Better) - Intel Core i7 2700K: 599156 (SE +/- 285.65, N = 3)

pmbench

Pmbench is a Linux paging and virtual memory benchmark. This test profile will report the average page latency of the system. Learn more via the OpenBenchmarking.org test page.

pmbench, Concurrent Worker Threads: 8 - Read-Write Ratio: 80% Reads 20% Writes (us - Average Page Latency, Fewer Is Better) - Intel Core i7 2700K: 0.1428 (SE +/- 0.0015, N = 3). (CC) gcc options: -lm -luuid -lxml2 -m64 -pthread

pmbench, Concurrent Worker Threads: 2 - Read-Write Ratio: 80% Reads 20% Writes (us - Average Page Latency, Fewer Is Better) - Intel Core i7 2700K: 0.1033 (SE +/- 0.0003, N = 3). (CC) gcc options: -lm -luuid -lxml2 -m64 -pthread

pmbench, Concurrent Worker Threads: 1 - Read-Write Ratio: 50% (us - Average Page Latency, Fewer Is Better) - Intel Core i7 2700K: 0.0834 (SE +/- 0.0001, N = 3). (CC) gcc options: -lm -luuid -lxml2 -m64 -pthread

pmbench, Concurrent Worker Threads: 2 - Read-Write Ratio: 50% (us - Average Page Latency, Fewer Is Better) - Intel Core i7 2700K: 0.0854 (SE +/- 0.0002, N = 3). (CC) gcc options: -lm -luuid -lxml2 -m64 -pthread

pmbench, Concurrent Worker Threads: 1 - Read-Write Ratio: 80% Reads 20% Writes (us - Average Page Latency, Fewer Is Better) - Intel Core i7 2700K: 0.1028 (SE +/- 0.0013, N = 3). (CC) gcc options: -lm -luuid -lxml2 -m64 -pthread

pmbench, Concurrent Worker Threads: 4 - Read-Write Ratio: 50% (us - Average Page Latency, Fewer Is Better) - Intel Core i7 2700K: 0.1030 (SE +/- 0.0001, N = 3). (CC) gcc options: -lm -luuid -lxml2 -m64 -pthread

pmbench, Concurrent Worker Threads: 1 - Read-Write Ratio: 100% Writes (us - Average Page Latency, Fewer Is Better) - Intel Core i7 2700K: 0.0432 (SE +/- 0.0000, N = 3). (CC) gcc options: -lm -luuid -lxml2 -m64 -pthread

pmbench, Concurrent Worker Threads: 4 - Read-Write Ratio: 100% Writes (us - Average Page Latency, Fewer Is Better) - Intel Core i7 2700K: 0.0634 (SE +/- 0.0001, N = 3). (CC) gcc options: -lm -luuid -lxml2 -m64 -pthread

pmbench, Concurrent Worker Threads: 2 - Read-Write Ratio: 100% Writes (us - Average Page Latency, Fewer Is Better) - Intel Core i7 2700K: 0.0445 (SE +/- 0.0001, N = 3). (CC) gcc options: -lm -luuid -lxml2 -m64 -pthread

pmbench, Concurrent Worker Threads: 1 - Read-Write Ratio: 100% Reads (us - Average Page Latency, Fewer Is Better) - Intel Core i7 2700K: 0.0400 (SE +/- 0.0001, N = 3). (CC) gcc options: -lm -luuid -lxml2 -m64 -pthread

pmbench, Concurrent Worker Threads: 4 - Read-Write Ratio: 100% Reads (us - Average Page Latency, Fewer Is Better) - Intel Core i7 2700K: 0.0602 (SE +/- 0.0001, N = 3). (CC) gcc options: -lm -luuid -lxml2 -m64 -pthread

pmbench, Concurrent Worker Threads: 2 - Read-Write Ratio: 100% Reads (us - Average Page Latency, Fewer Is Better) - Intel Core i7 2700K: 0.0409 (SE +/- 0.0001, N = 3). (CC) gcc options: -lm -luuid -lxml2 -m64 -pthread

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23, Model: Mobilenet Quant (Microseconds, Fewer Is Better) - Intel Core i7 2700K: 601047 (SE +/- 631.38, N = 3)

oneDNN MKL-DNN

This is a test of the Intel oneDNN (formerly DNNL / Deep Neural Network Library / MKL-DNN) as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Learn more via the OpenBenchmarking.org test page.

oneDNN MKL-DNN 1.3, Harness: IP Batch All - Data Type: f32 (ms, Fewer Is Better) - Core i7 2700K: 242.85 (SE +/- 0.21, N = 3). MIN: 239.64. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -lrt -ldl

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 1.5, Harness: IP Batch All - Data Type: f32 - Engine: CPU (ms, Fewer Is Better) - Intel Core i7 2700K: 436.32 (SE +/- 1.22, N = 3). MIN: 327.17. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

NeatBench

NeatBench is a benchmark of the cross-platform Neat Video software on the CPU, with optional GPU (OpenCL / CUDA) support. Learn more via the OpenBenchmarking.org test page.

NeatBench 5, Acceleration: CPU (FPS, More Is Better) - Core i7 2700K: 3.72 (SE +/- 0.03, N = 15)

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 1.5, Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, Fewer Is Better) - Intel Core i7 2700K: 5120.47 (SE +/- 35.94, N = 3). MIN: 4274.23. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

perf-bench

perf-bench, Benchmark: Memcpy 1MB (GB/sec, More Is Better) - Intel Core i7 2700K: 2.875747 (SE +/- 0.001270, N = 3). (CC) gcc options: -O6 -ggdb3 -funwind-tables -std=gnu99 -Xlinker -export-dynamic -lpthread -lrt -lm -ldl -lelf -lcrypto -lslang -lperl -lc -lcrypt -lpython2.7 -lutil -lz -llzma -lnuma

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.0, Encoder Mode: Speed 4 Two-Pass (Frames Per Second, More Is Better) - Core i7 2700K: 0.96 (SE +/- 0.00, N = 3); Intel Core i7 2700K: 0.87 (SE +/- 0.00, N = 3). (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0, Benchmark: regex_compile (Milliseconds, Fewer Is Better) - Intel Core i7 2700K: 249

7-Zip Compression

This is a test of 7-Zip using p7zip with its integrated benchmark feature or upstream 7-Zip for the Windows x64 build. Learn more via the OpenBenchmarking.org test page.
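For reference, p7zip ships an integrated benchmark mode; a minimal example:

  # Run 7-Zip's built-in compression/decompression benchmark (reports MIPS)
  7z b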

7-Zip Compression 16.02, Compress Speed Test (MIPS, More Is Better) - Core i7 2700K: 18179 (SE +/- 152.29, N = 3). (CXX) g++ options: -pipe -lpthread

Darmstadt Automotive Parallel Heterogeneous Suite

DAPHNE is the Darmstadt Automotive Parallel HeterogeNEous Benchmark Suite, with OpenCL / CUDA / OpenMP test cases for evaluating programming models in the context of vehicle autonomous-driving capabilities. Learn more via the OpenBenchmarking.org test page.

Darmstadt Automotive Parallel Heterogeneous Suite, Backend: OpenMP - Kernel: Euclidean Cluster (Test Cases Per Minute, More Is Better) - Intel Core i7 2700K: 484.04 (SE +/- 0.25, N = 3). (CXX) g++ options: -O3 -std=c++11 -fopenmp

oneDNN MKL-DNN

This is a test of the Intel oneDNN (formerly DNNL / Deep Neural Network Library / MKL-DNN) as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Learn more via the OpenBenchmarking.org test page.

oneDNN MKL-DNN 1.3, Harness: Recurrent Neural Network Training - Data Type: f32 (ms, Fewer Is Better) - Core i7 2700K: 2386.53 (SE +/- 17.32, N = 3). MIN: 2343.18. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -lrt -ldl

Timed Apache Compilation

This test times how long it takes to build the Apache HTTPD web server. Learn more via the OpenBenchmarking.org test page.

Timed Apache Compilation 2.4.41, Time To Compile (Seconds, Fewer Is Better) - Intel Core i7 2700K: 41.98 (SE +/- 0.05, N = 3)

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.5, Compression Level: 3 (MB/s, More Is Better) - Core i7 2700K: 1598.8 (SE +/- 0.89, N = 3); Intel Core i7 2700K: 1498.9 (SE +/- 7.86, N = 3). (CC) gcc options: -O3 -pthread -lz -llzma

Darmstadt Automotive Parallel Heterogeneous Suite

DAPHNE is the Darmstadt Automotive Parallel HeterogeNEous Benchmark Suite, with OpenCL / CUDA / OpenMP test cases for evaluating programming models in the context of vehicle autonomous-driving capabilities. Learn more via the OpenBenchmarking.org test page.

Darmstadt Automotive Parallel Heterogeneous Suite, Backend: OpenMP - Kernel: NDT Mapping (Test Cases Per Minute, More Is Better) - Intel Core i7 2700K: 381.57 (SE +/- 0.38, N = 3). (CXX) g++ options: -O3 -std=c++11 -fopenmp

Tesseract OCR

Tesseract-OCR is the open-source optical character recognition (OCR) engine for the conversion of text within images to raw text output. This test profile relies upon a system-supplied Tesseract installation. Learn more via the OpenBenchmarking.org test page.
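A hedged example of a single-image OCR invocation (file names are placeholders):

  # Recognize text in one image; the output is written to out.txt
  tesseract sample-page.png out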

Tesseract OCR 4.1.1, Time To OCR 7 Images (Seconds, Fewer Is Better) - Intel Core i7 2700K: 37.28 (SE +/- 0.03, N = 3)

oneDNN MKL-DNN

This is a test of the Intel oneDNN (formerly DNNL / Deep Neural Network Library / MKL-DNN) as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Learn more via the OpenBenchmarking.org test page.

oneDNN MKL-DNN 1.3, Harness: Recurrent Neural Network Inference - Data Type: f32 (ms, Fewer Is Better) - Core i7 2700K: 293.73 (SE +/- 1.13, N = 3). MIN: 290.38. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -lrt -ldl

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0, Benchmark: json_loads (Milliseconds, Fewer Is Better) - Intel Core i7 2700K: 37.6 (SE +/- 0.03, N = 3)

PyPerformance 1.0.0, Benchmark: nbody (Milliseconds, Fewer Is Better) - Intel Core i7 2700K: 185

PyPerformance 1.0.0, Benchmark: chaos (Milliseconds, Fewer Is Better) - Intel Core i7 2700K: 174 (SE +/- 0.33, N = 3)

PyPerformance 1.0.0, Benchmark: django_template (Milliseconds, Fewer Is Better) - Intel Core i7 2700K: 100.0 (SE +/- 0.03, N = 3)

PyPerformance 1.0.0, Benchmark: pathlib (Milliseconds, Fewer Is Better) - Intel Core i7 2700K: 34.4 (SE +/- 0.07, N = 3)

PyPerformance 1.0.0, Benchmark: float (Milliseconds, Fewer Is Better) - Intel Core i7 2700K: 170

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07, Test: CPU Stress (Bogo Ops/s, More Is Better) - Core i7 2700K: 1298.67 (SE +/- 5.96, N = 3). (CC) gcc options: -O2 -std=gnu99 -lm -laio -lcrypt -lrt -lz -ldl -lpthread -lc

Stress-NG 0.11.07, Test: Memory Copying (Bogo Ops/s, More Is Better) - Core i7 2700K: 661.69 (SE +/- 1.13, N = 3). (CC) gcc options: -O2 -std=gnu99 -lm -laio -lcrypt -lrt -lz -ldl -lpthread -lc

Stress-NG 0.11.07, Test: Vector Math (Bogo Ops/s, More Is Better) - Core i7 2700K: 20714.85 (SE +/- 13.16, N = 3). (CC) gcc options: -O2 -std=gnu99 -lm -laio -lcrypt -lrt -lz -ldl -lpthread -lc

Stress-NG 0.11.07, Test: MMAP (Bogo Ops/s, More Is Better) - Core i7 2700K: 32.88 (SE +/- 0.09, N = 3). (CC) gcc options: -O2 -std=gnu99 -lm -laio -lcrypt -lrt -lz -ldl -lpthread -lc

Stress-NG 0.11.07, Test: Glibc Qsort Data Sorting (Bogo Ops/s, More Is Better) - Core i7 2700K: 41.17 (SE +/- 0.04, N = 3). (CC) gcc options: -O2 -std=gnu99 -lm -laio -lcrypt -lrt -lz -ldl -lpthread -lc

Stress-NG 0.11.07, Test: Glibc C String Functions (Bogo Ops/s, More Is Better) - Core i7 2700K: 276081.12 (SE +/- 913.73, N = 3). (CC) gcc options: -O2 -std=gnu99 -lm -laio -lcrypt -lrt -lz -ldl -lpthread -lc

Stress-NG 0.11.07, Test: Matrix Math (Bogo Ops/s, More Is Better) - Core i7 2700K: 11863.24 (SE +/- 25.27, N = 3). (CC) gcc options: -O2 -std=gnu99 -lm -laio -lcrypt -lrt -lz -ldl -lpthread -lc

Stress-NG 0.11.07, Test: SENDFILE (Bogo Ops/s, More Is Better) - Core i7 2700K: 43610.36 (SE +/- 213.69, N = 3). (CC) gcc options: -O2 -std=gnu99 -lm -laio -lcrypt -lrt -lz -ldl -lpthread -lc

Stress-NG 0.11.07, Test: Malloc (Bogo Ops/s, More Is Better) - Core i7 2700K: 18594404.61 (SE +/- 158630.63, N = 3). (CC) gcc options: -O2 -std=gnu99 -lm -laio -lcrypt -lrt -lz -ldl -lpthread -lc

Stress-NG 0.11.07, Test: Crypto (Bogo Ops/s, More Is Better) - Core i7 2700K: 522.67 (SE +/- 6.82, N = 3). (CC) gcc options: -O2 -std=gnu99 -lm -laio -lcrypt -lrt -lz -ldl -lpthread -lc

Stress-NG 0.11.07, Test: MEMFD (Bogo Ops/s, More Is Better) - Core i7 2700K: 218.21 (SE +/- 1.14, N = 3). (CC) gcc options: -O2 -std=gnu99 -lm -laio -lcrypt -lrt -lz -ldl -lpthread -lc

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.0, Encoder Mode: Speed 8 Realtime (Frames Per Second, More Is Better) - Core i7 2700K: 21.65 (SE +/- 0.03, N = 3); Intel Core i7 2700K: 19.23 (SE +/- 0.02, N = 3). (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07, Test: CPU Cache (Bogo Ops/s, More Is Better) - Core i7 2700K: 11.57 (SE +/- 0.08, N = 3). (CC) gcc options: -O2 -std=gnu99 -lm -laio -lcrypt -lrt -lz -ldl -lpthread -lc

perf-bench

perf-bench, Benchmark: Futex Hash (ops/sec, More Is Better) - Intel Core i7 2700K: 4039340 (SE +/- 2867.09, N = 3). (CC) gcc options: -O6 -ggdb3 -funwind-tables -std=gnu99 -Xlinker -export-dynamic -lpthread -lrt -lm -ldl -lelf -lcrypto -lslang -lperl -lc -lcrypt -lpython2.7 -lutil -lz -llzma -lnuma

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07, Test: System V Message Passing (Bogo Ops/s, More Is Better) - Core i7 2700K: 6903863.29 (SE +/- 23852.20, N = 3). (CC) gcc options: -O2 -std=gnu99 -lm -laio -lcrypt -lrt -lz -ldl -lpthread -lc

Stress-NG 0.11.07, Test: Context Switching (Bogo Ops/s, More Is Better) - Core i7 2700K: 1056611.72 (SE +/- 5263.36, N = 3). (CC) gcc options: -O2 -std=gnu99 -lm -laio -lcrypt -lrt -lz -ldl -lpthread -lc

Stress-NG 0.11.07, Test: Socket Activity (Bogo Ops/s, More Is Better) - Core i7 2700K: 3094.45 (SE +/- 8.11, N = 3). (CC) gcc options: -O2 -std=gnu99 -lm -laio -lcrypt -lrt -lz -ldl -lpthread -lc

Stress-NG 0.11.07, Test: Semaphores (Bogo Ops/s, More Is Better) - Core i7 2700K: 806314.40 (SE +/- 597.10, N = 3). (CC) gcc options: -O2 -std=gnu99 -lm -laio -lcrypt -lrt -lz -ldl -lpthread -lc

Stress-NG 0.11.07, Test: Forking (Bogo Ops/s, More Is Better) - Core i7 2700K: 31877.90 (SE +/- 57.63, N = 3). (CC) gcc options: -O2 -std=gnu99 -lm -laio -lcrypt -lrt -lz -ldl -lpthread -lc

Stress-NG 0.11.07, Test: Atomic (Bogo Ops/s, More Is Better) - Core i7 2700K: 208549.95 (SE +/- 1309.28, N = 3). (CC) gcc options: -O2 -std=gnu99 -lm -laio -lcrypt -lrt -lz -ldl -lpthread -lc

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0, Benchmark: crypto_pyaes (Milliseconds, Fewer Is Better) - Intel Core i7 2700K: 150 (SE +/- 0.33, N = 3)

G'MIC

G'MIC is an open-source framework for image processing. Learn more via the OpenBenchmarking.org test page.

G'MIC, Test: Plotting Isosurface Of A 3D Volume, 1000 Times (Seconds, Fewer Is Better) - Core i7 2700K: 26.08 (SE +/- 0.26, N = 3). Version 2.4.5, Copyright (c) 2008-2019, David Tschumperle.

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0, Benchmark: pickle_pure_python (Milliseconds, Fewer Is Better) - Intel Core i7 2700K: 737 (SE +/- 0.33, N = 3)

oneDNN MKL-DNN

This is a test of the Intel oneDNN (formerly DNNL / Deep Neural Network Library / MKL-DNN) as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Learn more via the OpenBenchmarking.org test page.

oneDNN MKL-DNN 1.3, Harness: Deconvolution Batch deconv_1d - Data Type: u8s8f32 (ms, Fewer Is Better) - Core i7 2700K: 430.81 (SE +/- 0.21, N = 3). MIN: 428.17. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -lrt -ldl

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 1.5, Harness: Deconvolution Batch deconv_1d - Data Type: f32 - Engine: CPU (ms, Fewer Is Better) - Intel Core i7 2700K: 111.55 (SE +/- 1.00, N = 3). MIN: 71.15. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN MKL-DNN

This is a test of the Intel oneDNN (formerly DNNL / Deep Neural Network Library / MKL-DNN) as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Learn more via the OpenBenchmarking.org test page.

oneDNN MKL-DNN 1.3, Harness: Deconvolution Batch deconv_1d - Data Type: f32 (ms, Fewer Is Better) - Core i7 2700K: 45.21 (SE +/- 0.28, N = 3). MIN: 44.27. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -lrt -ldl

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.7.0, Video Input: Summer Nature 1080p (FPS, More Is Better) - Core i7 2700K: 175.04 (SE +/- 0.15, N = 3). MIN: 151.74 / MAX: 191.95. (CC) gcc options: -pthread

perf-bench

perf-bench, Benchmark: Sched Pipe (ops/sec, More Is Better) - Intel Core i7 2700K: 261021 (SE +/- 289.96, N = 3). (CC) gcc options: -O6 -ggdb3 -funwind-tables -std=gnu99 -Xlinker -export-dynamic -lpthread -lrt -lm -ldl -lelf -lcrypto -lslang -lperl -lc -lcrypt -lpython2.7 -lutil -lz -llzma -lnuma

GNU Octave Benchmark

This test profile measures how long it takes to complete several reference GNU Octave files via octave-benchmark. GNU Octave is used for numerical computations and is an open-source alternative to MATLAB. Learn more via the OpenBenchmarking.org test page.

GNU Octave Benchmark 5.2.0 (Seconds, Fewer Is Better) - Core i7 2700K: 10.53 (SE +/- 0.03, N = 5)

System GZIP Decompression

This simple test measures the time to decompress a gzipped tarball (the Qt5 toolkit source package). Learn more via the OpenBenchmarking.org test page.
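A hedged sketch of the operation being timed, with a placeholder tarball name:

  # Time decompression of a gzipped source tarball, discarding the output
  time gzip -dc qt-everywhere-src.tar.gz > /dev/null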

System GZIP Decompression (Seconds, Fewer Is Better) - Intel Core i7 2700K: 3.512 (SE +/- 0.043, N = 14)

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 1.5, Harness: IP Batch 1D - Data Type: f32 - Engine: CPU (ms, Fewer Is Better) - Intel Core i7 2700K: 73.27 (SE +/- 0.14, N = 3). MIN: 35.26. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN MKL-DNN

This is a test of the Intel oneDNN (formerly DNNL / Deep Neural Network Library / MKL-DNN) as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Learn more via the OpenBenchmarking.org test page.

oneDNN MKL-DNN 1.3, Harness: IP Batch 1D - Data Type: u8s8f32 (ms, Fewer Is Better) - Core i7 2700K: 89.63 (SE +/- 0.09, N = 3). MIN: 88.5. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -lrt -ldl

oneDNN MKL-DNN 1.3, Harness: IP Batch 1D - Data Type: f32 (ms, Fewer Is Better) - Core i7 2700K: 24.44 (SE +/- 0.06, N = 3). MIN: 23.41. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -lrt -ldl

DaCapo Benchmark

This test runs the DaCapo Benchmarks, which are written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1 - Java Test: H2 (OpenBenchmarking.org, msec, Fewer Is Better)
Core i7 2700K: 4377 (SE +/- 7.19, N = 4)
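
DaCapo itself is launched as a plain Java jar with the workload name (h2, jython, ...) as an argument, and the harness reports the elapsed time in msec when the workload passes. The sketch below scripts that from Python; the jar filename is an assumption about the locally downloaded release, and the "PASSED in ... msec" line is searched on both output streams since the stream it lands on can vary.

    import subprocess

    DACAPO_JAR = "dacapo-9.12-MR1-bach.jar"  # placeholder jar name

    result = subprocess.run(
        ["java", "-jar", DACAPO_JAR, "h2"],
        capture_output=True, text=True, check=True,
    )
    # Print the harness summary line, e.g. "... h2 PASSED in 4377 msec ...".
    for line in (result.stderr + result.stdout).splitlines():
        if "PASSED" in line:
            print(line)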

perf-bench

perf-bench - Benchmark: Memset 1MB (OpenBenchmarking.org, GB/sec, More Is Better)
Intel Core i7 2700K: 32.45 (SE +/- 0.01, N = 3)
1. (CC) gcc options: -O6 -ggdb3 -funwind-tables -std=gnu99 -Xlinker -export-dynamic -lpthread -lrt -lm -ldl -lelf -lcrypto -lslang -lperl -lc -lcrypt -lpython2.7 -lutil -lz -llzma -lnuma
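
perf bench mem memset reports how quickly a fixed-size buffer can be filled. A crude Python analogue using a 1 MB bytearray is shown below purely to illustrate how the GB/sec figure is derived; the absolute numbers will be nowhere near the optimized C routines being benchmarked.

    import time

    SIZE = 1_000_000  # 1 MB working set, matching the benchmark's buffer size
    ROUNDS = 2_000
    buf = bytearray(SIZE)
    zeros = bytes(SIZE)

    start = time.perf_counter()
    for _ in range(ROUNDS):
        buf[:] = zeros  # overwrite the whole buffer, a memset-like fill
    elapsed = time.perf_counter() - start

    print(f"{SIZE * ROUNDS / elapsed / 1e9:.2f} GB/sec")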

libavif avifenc

This is a test of the AOMedia libavif library, testing the encoding of a JPEG image to the AV1 Image File Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.7.3 - Encoder Speed: 8 (OpenBenchmarking.org, Seconds, Fewer Is Better)
Intel Core i7 2700K: 12.80 (SE +/- 0.01, N = 3)
1. (CXX) g++ options: -O3 -fPIC
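
avifenc exposes the encoder speed setting used in these results on its command line. The sketch below times one such encode from Python; the input/output filenames are placeholders, and --speed is assumed from avifenc's usage text (0 = slowest/best compression, 10 = fastest).

    import subprocess
    import time

    INPUT_JPEG = "sample.jpg"   # placeholder source image
    OUTPUT_AVIF = "sample.avif"

    start = time.perf_counter()
    # The result above corresponds to encoder speed 8.
    subprocess.run(["avifenc", "--speed", "8", INPUT_JPEG, OUTPUT_AVIF], check=True)
    print(f"{time.perf_counter() - start:.2f} seconds")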

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 1.5 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (OpenBenchmarking.org, ms, Fewer Is Better)
Intel Core i7 2700K: 26.33 (SE +/- 0.25, N = 3, MIN: 16.05)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

DaCapo Benchmark

This test runs the DaCapo Benchmarks, which are written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1 - Java Test: Jython (OpenBenchmarking.org, msec, Fewer Is Better)
Core i7 2700K: 6776 (SE +/- 53.85, N = 4)

libavif avifenc

This is a test of the AOMedia libavif library, testing the encoding of a JPEG image to the AV1 Image File Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.7.3 - Encoder Speed: 10 (OpenBenchmarking.org, Seconds, Fewer Is Better)
Intel Core i7 2700K: 11.35 (SE +/- 0.08, N = 3)
1. (CXX) g++ options: -O3 -fPIC

System ZLIB Decompression

This test measures the time to decompress a Linux kernel tarball using ZLIB. Learn more via the OpenBenchmarking.org test page.

System ZLIB Decompression 1.2.7 (OpenBenchmarking.org, ms, Fewer Is Better)
Intel Core i7 2700K: 2005.79 (SE +/- 16.57, N = 10)
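
Because Python's zlib module wraps the same library, this measurement can be approximated directly in a few lines. The tarball path is a placeholder; the wbits value tells zlib to accept a gzip- or zlib-framed stream automatically.

    import time
    import zlib

    TARBALL = "linux-kernel.tar.gz"  # placeholder: a gzip-compressed kernel tarball

    with open(TARBALL, "rb") as f:
        compressed = f.read()

    start = time.perf_counter()
    # wbits = MAX_WBITS | 32 lets zlib auto-detect zlib or gzip framing.
    decompressed = zlib.decompress(compressed, wbits=zlib.MAX_WBITS | 32)
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"{len(decompressed)} bytes in {elapsed_ms:.1f} ms")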

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 1.5 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (OpenBenchmarking.org, ms, Fewer Is Better)
Intel Core i7 2700K: 51.28 (SE +/- 0.22, N = 3, MIN: 39.27)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

perf-bench

perf-bench - Benchmark: Syscall Basic (OpenBenchmarking.org, ops/sec, More Is Better)
Intel Core i7 2700K: 21817982 (SE +/- 42666.35, N = 3)
1. (CC) gcc options: -O6 -ggdb3 -funwind-tables -std=gnu99 -Xlinker -export-dynamic -lpthread -lrt -lm -ldl -lelf -lcrypto -lslang -lperl -lc -lcrypt -lpython2.7 -lutil -lz -llzma -lnuma
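
perf bench syscall basic issues a trivial system call in a tight loop and reports the rate. The Python sketch below does the same with os.getppid(), which performs a real syscall on each call; interpreter overhead keeps the result far below the C figure, but the ops/sec derivation is the same.

    import os
    import time

    LOOPS = 1_000_000

    start = time.perf_counter()
    for _ in range(LOOPS):
        os.getppid()  # one cheap syscall per iteration
    elapsed = time.perf_counter() - start

    print(f"{LOOPS / elapsed:,.0f} syscalls/sec (interpreter overhead included)")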

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 1.5 - Harness: Deconvolution Batch deconv_3d - Data Type: f32 - Engine: CPU (OpenBenchmarking.org, ms, Fewer Is Better)
Intel Core i7 2700K: 172.92 (SE +/- 0.16, N = 3, MIN: 161.78)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN MKL-DNN

This is a test of Intel oneDNN (formerly DNNL / Deep Neural Network Library / MKL-DNN), an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Learn more via the OpenBenchmarking.org test page.

oneDNN MKL-DNN 1.3 - Harness: Deconvolution Batch deconv_3d - Data Type: u8s8f32 (OpenBenchmarking.org, ms, Fewer Is Better)
Core i7 2700K: 84.42 (SE +/- 0.05, N = 3, MIN: 83.35)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -lrt -ldl

oneDNN MKL-DNN 1.3 - Harness: Deconvolution Batch deconv_3d - Data Type: f32 (OpenBenchmarking.org, ms, Fewer Is Better)
Core i7 2700K: 66.53 (SE +/- 0.06, N = 3, MIN: 66.24)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -lrt -ldl

151 Results Shown

SVT-AV1
Rodinia
Java Gradle Build
SVT-AV1:
  Enc Mode 4 - 1080p
  Enc Mode 8 - 1080p
Rodinia
OpenVKL
YafaRay
LeelaChessZero:
  Eigen
  BLAS
libavif avifenc
LeelaChessZero:
  BLAS
  Eigen
LeelaChessZero
LeelaChessZero
NAMD
AOM AV1
pmbench:
  8 - 50%
  4 - 80% Reads 20% Writes
  8 - 100% Writes
Build2
pmbench
dav1d
Timed Linux Kernel Compilation
libavif avifenc
Rodinia
G'MIC
Embree:
  Pathtracer - Crown
  Pathtracer - Asian Dragon Obj
Hugin
Rodinia
Embree:
  Pathtracer ISPC - Crown
  Pathtracer ISPC - Asian Dragon Obj
Zstd Compression
TensorFlow Lite:
  Inception V4
  Inception ResNet V2
Embree
perf-bench
Stress-NG
perf-bench
Darmstadt Automotive Parallel Heterogeneous Suite
Embree
Intel Open Image Denoise
PyPerformance
Geekbench:
  CPU Multi Core - Horizon Detection
  CPU Multi Core - Face Detection
  CPU Multi Core - Gaussian Blur
  CPU Multi Core
Montage Astronomical Image Mosaic Engine
oneDNN
PyPerformance:
  2to3
  python_startup
OCRMyPDF
G'MIC
AOM AV1
dav1d
PyPerformance
Git
TensorFlow Lite
Geekbench:
  CPU Single Core - Horizon Detection
  CPU Single Core - Face Detection
  CPU Single Core - Gaussian Blur
  CPU Single Core
Chaos Group V-RAY
dav1d
AOM AV1
Rodinia
oneDNN MKL-DNN
LuxCoreRender:
  DLSC
  Rainbow Colors and Prism
TensorFlow Lite:
  SqueezeNet
  Mobilenet Float
pmbench:
  8 - 80% Reads 20% Writes
  2 - 80% Reads 20% Writes
  1 - 50%
  2 - 50%
  1 - 80% Reads 20% Writes
  4 - 50%
  1 - 100% Writes
  4 - 100% Writes
  2 - 100% Writes
  1 - 100% Reads
  4 - 100% Reads
  2 - 100% Reads
TensorFlow Lite
oneDNN MKL-DNN
oneDNN
NeatBench
oneDNN
perf-bench
AOM AV1
PyPerformance
7-Zip Compression
Darmstadt Automotive Parallel Heterogeneous Suite
oneDNN MKL-DNN
Timed Apache Compilation
Zstd Compression
Darmstadt Automotive Parallel Heterogeneous Suite
Tesseract OCR
oneDNN MKL-DNN
PyPerformance:
  json_loads
  nbody
  chaos
  django_template
  pathlib
  float
Stress-NG:
  CPU Stress
  Memory Copying
  Vector Math
  MMAP
  Glibc Qsort Data Sorting
  Glibc C String Functions
  Matrix Math
  SENDFILE
  Malloc
  Crypto
  MEMFD
AOM AV1
Stress-NG
perf-bench
Stress-NG:
  System V Message Passing
  Context Switching
  Socket Activity
  Semaphores
  Forking
  Atomic
PyPerformance
G'MIC
PyPerformance
oneDNN MKL-DNN
oneDNN
oneDNN MKL-DNN
dav1d
perf-bench
GNU Octave Benchmark
System GZIP Decompression
oneDNN
oneDNN MKL-DNN:
  IP Batch 1D - u8s8f32
  IP Batch 1D - f32
DaCapo Benchmark
perf-bench
libavif avifenc
oneDNN
DaCapo Benchmark
libavif avifenc
System ZLIB Decompression
oneDNN
perf-bench
oneDNN
oneDNN MKL-DNN:
  Deconvolution Batch deconv_3d - u8s8f32
  Deconvolution Batch deconv_3d - f32