Tiger Lake Linux 5.10

Intel Core i7-1165G7 testing with a Dell 0GG9PT (1.0.3 BIOS) and Intel UHD 3GB on Ubuntu 20.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2010255-FI-TIGERLAKE05
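
For anyone new to the Phoronix Test Suite, a minimal session for reproducing this comparison might look like the following; the package name assumes a Debian/Ubuntu system and the interactive prompts vary by Phoronix Test Suite version:

    # Install the Phoronix Test Suite, then run the referenced result file;
    # the tool fetches the matching test profiles and appends your system's
    # numbers to a local copy of this comparison.
    $ sudo apt install phoronix-test-suite
    $ phoronix-test-suite benchmark 2010255-FI-TIGERLAKE05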

Results in this file span the following test categories:

Web Browsers: 1 test
C/C++ Compiler Tests: 5 tests
CPU Massive: 10 tests
Creator Workloads: 6 tests
Cryptography: 3 tests
Database Test Suite: 4 tests
Desktop Graphics: 3 tests
Disk Test Suite: 2 tests
Game Development: 2 tests
HPC - High Performance Computing: 4 tests
Common Kernel Benchmarks: 13 tests
Machine Learning: 3 tests
Multi-Core: 6 tests
Networking Test Suite: 2 tests
NVIDIA GPU Compute: 4 tests
Intel oneAPI: 4 tests
Programmer / Developer System Benchmarks: 3 tests
Python Tests: 2 tests
Server: 5 tests
Server CPU Tests: 6 tests
Vulkan Compute: 3 tests

Run              Date               Test Duration
v5.9.1           October 24 2020    9 Hours, 31 Minutes
v5.10 Git Oct23  October 25 2020    9 Hours, 15 Minutes

Tiger Lake Linux 5.10 - System Details (OpenBenchmarking.org / Phoronix Test Suite)

Processor:          Intel Core i7-1165G7 @ 4.70GHz (4 Cores / 8 Threads)
Motherboard:        Dell 0GG9PT (1.0.3 BIOS)
Chipset:            Intel Tiger Lake-LP
Memory:             16GB
Disk:               Kioxia KBG40ZNS256G NVMe 256GB
Graphics:           Intel UHD 3GB (1300MHz)
Audio:              Realtek ALC289
Network:            Intel Wi-Fi 6 AX201
OS:                 Ubuntu 20.10
Kernels:            5.9.1-050901-generic (x86_64), 5.9.0-050900daily20201023-generic (x86_64)
Desktop:            GNOME Shell 3.38.1
Display Server:     X Server 1.20.9
Display Driver:     modesetting 1.20.9
OpenGL:             4.6 Mesa 20.2.1
OpenCL:             OpenCL 3.0
Vulkan:             1.2.145
Compiler:           GCC 10.2.0
File-System:        ext4
Screen Resolution:  1920x1200

Tiger Lake Linux 5.10 Benchmarks - System Notes:
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Disk mount options: v5.9.1: NONE / errors=remount-ro,relatime,rw; v5.10 Git Oct23: NONE / errors=remount-ro,no_fc,relatime,rw
- Processor notes: Scaling Governor: intel_pstate powersave - CPU Microcode: 0x60 - Thermald 2.3 - Python 3.8.6
- Security mitigations: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected

[Overview graph: v5.9.1 vs. v5.10 Git Oct23 comparison of per-test percentage differences, on a scale up to roughly +79.7%. The largest deltas fall in the I/O and scheduler-heavy workloads (Stress-NG MMAP and SENDFILE, IOR write, LevelDB overwrite/random fill/fill sync, Hackbench, eSpeak-NG, OSBench), with most remaining tests within a few percent; exact figures follow in the per-test results below.]

[Condensed results table: side-by-side v5.9.1 and v5.10 Git Oct23 values for every test in this file, covering Stress-NG, Hackbench, OSBench, oneAPI Level Zero Tests, Cryptsetup, SQLite, ctx_clock, NCNN, LevelDB, PostMark, dav1d, perf-bench, Tesseract, Sockperf, LeelaChessZero, ET: Legacy, Selenium, Waifu2x-NCNN, Ethr, TensorFlow Lite, Crypto++, WireGuard, RocksDB, OSPray, t-test1, Open Image Denoise, SQLite Speedtest, FFTE, GLmark2, Xonotic, IOR, OpenSSL, OpenVKL, RealSR-NCNN, eSpeak-NG, and a timed Linux kernel build. Individual results are broken out per test in the sections that follow.]

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.
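
As a rough sketch of what this profile exercises, the SENDFILE result below corresponds to running the sendfile stressor by hand along these lines; the stressor count and timeout are illustrative, not the exact options the test profile passes:

    # Spawn one sendfile stressor per online CPU for 60 seconds and print
    # the bogo-ops/s summary that the result below is based on.
    $ stress-ng --sendfile 0 --timeout 60s --metrics-brief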

Stress-NG 0.11.07, Test: SENDFILE (Bogo Ops/s, more is better)
    v5.10 Git Oct23: 51315.95 (SE +/- 667.09, N = 3; Min 50554.68 / Max 52645.48)
    v5.9.1: 57136.11 (SE +/- 715.05, N = 3; Min 56409.06 / Max 58566.14)
    1. (CC) gcc options: -O2 -std=gnu99 -lm -lcrypt -lrt -lz -ldl -lpthread -lc

Hackbench

This is a benchmark of Hackbench, a test of the Linux kernel scheduler. Learn more via the OpenBenchmarking.org test page.
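
Hackbench spawns groups of sender/receiver tasks that pass small messages between each other, so it stresses scheduler wakeups and context switching. A hand-run approximation of the 32-group process case below is shown here; exact flag spellings differ between hackbench builds, so treat this as a sketch:

    # 32 groups of communicating processes; lower wall-clock time is better.
    $ hackbench -g 32 -l 10000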

Hackbench, Count: 32 - Type: Process (Seconds, fewer is better)
    v5.10 Git Oct23: 254.16 (SE +/- 0.93, N = 3; Min 252.55 / Max 255.76)
    v5.9.1: 236.45 (SE +/- 1.56, N = 3; Min 233.56 / Max 238.9)
    1. (CC) gcc options: -lpthread

OSBench

OSBench is a collection of micro-benchmarks for measuring operating system primitives like time to create threads/processes, launching programs, creating files, and memory allocation. Learn more via the OpenBenchmarking.org test page.

OSBench, Test: Create Processes (us Per Event, fewer is better)
    v5.10 Git Oct23: 30.06 (SE +/- 0.33, N = 7; Min 28.42 / Max 31.03)
    v5.9.1: 28.80 (SE +/- 0.42, N = 15; Min 25.4 / Max 30.95)
    1. (CC) gcc options: -lm

oneAPI Level Zero Tests

This is benchmarking the collection of Intel oneAPI Level Zero Tests. Learn more via the OpenBenchmarking.org test page.

oneAPI Level Zero Tests, Test: Peak Kernel Launch Latency (us, fewer is better)
    v5.10 Git Oct23: 21.05 (SE +/- 0.07, N = 3; Min 20.92 / Max 21.15)
    v5.9.1: 21.75 (SE +/- 0.04, N = 3; Min 21.69 / Max 21.83)
    1. (CXX) g++ options: -ldl -pthread

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07, Test: Context Switching (Bogo Ops/s, more is better)
    v5.10 Git Oct23: 1405668.88 (SE +/- 24167.64, N = 3; Min 1367610.28 / Max 1450502.86)
    v5.9.1: 1450925.64 (SE +/- 19047.64, N = 3; Min 1425321.38 / Max 1488156.33)
    1. (CC) gcc options: -O2 -std=gnu99 -lm -lcrypt -lrt -lz -ldl -lpthread -lc

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.
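
The PBKDF2 figures below come from cryptsetup's built-in benchmark mode, which can be reproduced directly:

    # Reports PBKDF2 iterations/second for several hash algorithms plus
    # cipher throughput for the common LUKS ciphers.
    $ cryptsetup benchmark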

Cryptsetup 2.3.3, PBKDF2-sha512 (Iterations Per Second, more is better)
    v5.10 Git Oct23: 1984754 (SE +/- 8242.81, N = 3; Min 1974719 / Max 2001099)
    v5.9.1: 1967369 (SE +/- 7703.88, N = 3; Min 1956298 / Max 1982185)

Hackbench

This is a benchmark of Hackbench, a test of the Linux kernel scheduler. Learn more via the OpenBenchmarking.org test page.

Hackbench, Count: 16 - Type: Thread (Seconds, fewer is better)
    v5.10 Git Oct23: 116.76 (SE +/- 1.26, N = 3; Min 114.26 / Max 118.15)
    v5.9.1: 113.26 (SE +/- 1.27, N = 3; Min 110.78 / Max 114.97)
    1. (CC) gcc options: -lpthread

SQLite

This is a simple benchmark of SQLite. At present this test profile just measures the time to perform a pre-defined number of insertions on an indexed database. Learn more via the OpenBenchmarking.org test page.
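
A rough command-line equivalent of what this profile measures is timing a batch of inserts into an indexed table with the sqlite3 shell; the table layout and row count here are illustrative only:

    # Create an indexed table, then time a transaction of 100,000 inserts.
    $ sqlite3 bench.db "CREATE TABLE t(id INTEGER PRIMARY KEY, v TEXT); CREATE INDEX idx_v ON t(v);"
    $ time sqlite3 bench.db "WITH RECURSIVE c(x) AS (SELECT 1 UNION ALL SELECT x+1 FROM c WHERE x<100000) INSERT INTO t(v) SELECT hex(randomblob(16)) FROM c;"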

SQLite 3.30.1, Threads / Copies: 1 (Seconds, fewer is better)
    v5.10 Git Oct23: 65.71 (SE +/- 0.29, N = 3; Min 65.34 / Max 66.29)
    v5.9.1: 67.72 (SE +/- 0.64, N = 10; Min 66.03 / Max 73)
    1. (CC) gcc options: -O2 -lz -lm -ldl -lpthread

Hackbench

This is a benchmark of Hackbench, a test of the Linux kernel scheduler. Learn more via the OpenBenchmarking.org test page.

Hackbench, Count: 16 - Type: Process (Seconds, fewer is better)
    v5.10 Git Oct23: 118.33 (SE +/- 0.46, N = 3; Min 117.46 / Max 119.01)
    v5.9.1: 115.01 (SE +/- 1.21, N = 3; Min 112.62 / Max 116.56)
    1. (CC) gcc options: -lpthread

ctx_clock

Ctx_clock is a simple test program to measure the context switch time in clock cycles. Learn more via the OpenBenchmarking.org test page.

ctx_clock, Context Switch Time (Clocks, fewer is better)
    v5.10 Git Oct23: 128 (SE +/- 1.15, N = 3; Min 126 / Max 130)
    v5.9.1: 131

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916, Target: CPU - Model: shufflenet-v2 (ms, fewer is better)
    v5.10 Git Oct23: 5.26 (SE +/- 0.14, N = 3; Min 4.97 / Max 5.42; MIN: 3.67 / MAX: 8.96)
    v5.9.1: 5.38 (SE +/- 0.02, N = 3; Min 5.35 / Max 5.41; MIN: 5.27 / MAX: 8.74)
    1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

LevelDB

LevelDB is a key-value storage library developed by Google that supports making use of Snappy for data compression and has other modern features. Learn more via the OpenBenchmarking.org test page.

LevelDB 1.22, Benchmark: Random Delete (Microseconds Per Op, fewer is better)
    v5.10 Git Oct23: 24.65 (SE +/- 0.31, N = 12; Min 21.23 / Max 25.27)
    v5.9.1: 24.12 (SE +/- 0.19, N = 15; Min 21.51 / Max 24.56)
    1. (CXX) g++ options: -O3 -lsnappy -lpthread

PostMark

This is a test of NetApp's PostMark benchmark designed to simulate small-file testing similar to the tasks endured by web and mail servers. This test profile will set PostMark to perform 25,000 transactions with 500 files simultaneously with the file sizes ranging between 5 and 512 kilobytes. Learn more via the OpenBenchmarking.org test page.
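
The parameters quoted above map onto PostMark's interactive configuration commands; a roughly equivalent manual run is sketched below, with the command names recalled from the PostMark tool rather than taken from this test profile, so verify them against PostMark's own help output:

    # 500 simultaneous files, 25,000 transactions, file sizes 5 KB to 512 KB.
    $ postmark <<'EOF'
    set number 500
    set transactions 25000
    set size 5120 524288
    run
    quit
    EOF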

PostMark 1.51, Disk Transaction Performance (TPS, more is better)
    v5.10 Git Oct23: 8068 (SE +/- 106.38, N = 4; Min 7812 / Max 8333)
    v5.9.1: 8246 (SE +/- 92.08, N = 6; Min 8064 / Max 8620)
    1. (CC) gcc options: -O3

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.
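
The test simply times dav1d decoding a sample clip; a manual run points the CLI at an AV1 bitstream (the file name is a placeholder and the muxer flag is recalled from dav1d's CLI, so check dav1d --help):

    # Decode an AV1 elementary stream without writing output; dav1d prints
    # the average decode frame rate when it finishes.
    $ dav1d -i summer_nature_4k.ivf --muxer null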

dav1d 0.7.0, Video Input: Summer Nature 4K (FPS, more is better)
    v5.10 Git Oct23: 72.57 (SE +/- 0.84, N = 6; Min 71.48 / Max 76.78; MIN: 61.02 / MAX: 123.41)
    v5.9.1: 71.09 (SE +/- 0.74, N = 8; Min 69.43 / Max 76.15; MIN: 60.36 / MAX: 123.58)
    1. (CC) gcc options: -pthread

OSBench

OSBench is a collection of micro-benchmarks for measuring operating system primitives like time to create threads/processes, launching programs, creating files, and memory allocation. Learn more via the OpenBenchmarking.org test page.

OSBench, Test: Create Files (us Per Event, fewer is better)
    v5.10 Git Oct23: 10.26 (SE +/- 0.02, N = 3; Min 10.23 / Max 10.29)
    v5.9.1: 10.06 (SE +/- 0.05, N = 3; Min 9.97 / Max 10.15)
    1. (CC) gcc options: -lm

perf-bench

This test profile is used for running Linux perf-bench, the benchmark support within the Linux kernel's perf tool. Learn more via the OpenBenchmarking.org test page.
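
The results in this section map onto perf's built-in micro-benchmarks, which can be invoked individually once the perf tool for the running kernel is installed:

    # Scheduler pipe ping-pong, epoll wakeups, futex hashing, 1 MB memcpy/memset.
    $ perf bench sched pipe
    $ perf bench epoll wait
    $ perf bench futex hash
    $ perf bench mem memcpy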

perf-bench, Benchmark: Epoll Wait (ops/sec, more is better)
    v5.10 Git Oct23: 197137 (SE +/- 2028.32, N = 14; Min 192241 / Max 223191)
    v5.9.1: 193394 (SE +/- 2438.14, N = 13; Min 187628 / Max 222061)
    1. (CC) gcc options: -O6 -ggdb3 -funwind-tables -std=gnu99 -Xlinker -lpthread -lrt -lm -ldl -lelf -lcrypto -lslang -lz -llzma -lnuma

Tesseract

Tesseract is a fork of Cube 2 Sauerbraten with numerous graphics and game-play improvements. Tesseract has been in development since 2012 while its first release happened in May of 2014. Learn more via the OpenBenchmarking.org test page.

Tesseract 2014-05-12, Resolution: 1920 x 1200 (Frames Per Second, more is better)
    v5.10 Git Oct23: 137.70 (SE +/- 1.75, N = 3; Min 135.22 / Max 141.08)
    v5.9.1: 135.33 (SE +/- 0.17, N = 3; Min 135.01 / Max 135.59)

Sockperf

This is a network socket API performance benchmark. Learn more via the OpenBenchmarking.org test page.
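
Sockperf runs as a server/client pair; the throughput, ping-pong, and under-load numbers in this file correspond roughly to invocations like the following against a local server (port and runtime values are illustrative):

    # Terminal 1: start the server.
    $ sockperf server --tcp -p 11111
    # Terminal 2: the client modes used in this comparison.
    $ sockperf throughput --tcp -i 127.0.0.1 -p 11111 -t 10
    $ sockperf ping-pong  --tcp -i 127.0.0.1 -p 11111 -t 10
    $ sockperf under-load --tcp -i 127.0.0.1 -p 11111 -t 10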

Sockperf 3.4, Test: Throughput (Messages Per Second, more is better)
    v5.10 Git Oct23: 771964 (SE +/- 4973.96, N = 5; Min 755027 / Max 785778)
    v5.9.1: 759157 (SE +/- 2920.66, N = 5; Min 752054 / Max 766769)
    1. (CXX) g++ options: --param -O3 -rdynamic -ldl -lpthread

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.26, Backend: BLAS (Nodes Per Second, more is better)
    v5.10 Git Oct23: 124
    v5.9.1: 122
    1. (CXX) g++ options: -flto -pthread

LevelDB

LevelDB is a key-value storage library developed by Google that supports making use of Snappy for data compression and has other modern features. Learn more via the OpenBenchmarking.org test page.

LevelDB 1.22, Benchmark: Hot Read (Microseconds Per Op, fewer is better)
    v5.10 Git Oct23: 3.574 (SE +/- 0.038, N = 3; Min 3.52 / Max 3.65)
    v5.9.1: 3.522 (SE +/- 0.014, N = 3; Min 3.5 / Max 3.54)
    1. (CXX) g++ options: -O3 -lsnappy -lpthread

ET: Legacy

ETLegacy is an open-source engine evolution of Wolfenstein: Enemy Territory, a World War II era first person shooter that was released for free by Splash Damage using the id Tech 3 engine. Learn more via the OpenBenchmarking.org test page.

ET: Legacy 2.75, Renderer: Renderer2 - Resolution: 1920 x 1200 (Frames Per Second, more is better)
    v5.10 Git Oct23: 135.8 (SE +/- 1.45, N = 3; Min 134.3 / Max 138.7)
    v5.9.1: 133.9 (SE +/- 0.29, N = 3; Min 133.4 / Max 134.4)

oneAPI Level Zero Tests

This is benchmarking the collection of Intel oneAPI Level Zero Tests. Learn more via the OpenBenchmarking.org test page.

oneAPI Level Zero Tests, Test: Host-To-Device-To-Host Image Copy (GB/s, more is better)
    v5.10 Git Oct23: 21.48 (SE +/- 0.09, N = 3; Min 21.35 / Max 21.64)
    v5.9.1: 21.79 (SE +/- 0.04, N = 3; Min 21.71 / Max 21.86)
    1. (CXX) g++ options: -ldl -pthread

perf-bench

This test profile is used for running Linux perf-bench, the benchmark support within the Linux kernel's perf tool. Learn more via the OpenBenchmarking.org test page.

perf-bench, Benchmark: Memcpy 1MB (GB/sec, more is better)
    v5.10 Git Oct23: 26.35 (SE +/- 0.22, N = 12; Min 25.64 / Max 27.97)
    v5.9.1: 26.71 (SE +/- 0.28, N = 8; Min 25.87 / Max 28.5)
    1. (CC) gcc options: -O6 -ggdb3 -funwind-tables -std=gnu99 -Xlinker -lpthread -lrt -lm -ldl -lelf -lcrypto -lslang -lz -llzma -lnuma

perf-bench, Benchmark: Sched Pipe (ops/sec, more is better)
    v5.10 Git Oct23: 235613 (SE +/- 1754.35, N = 15; Min 231528 / Max 256633)
    v5.9.1: 232497 (SE +/- 1878.81, N = 15; Min 226402 / Max 253469)
    1. (CC) gcc options: -O6 -ggdb3 -funwind-tables -std=gnu99 -Xlinker -lpthread -lrt -lm -ldl -lelf -lcrypto -lslang -lz -llzma -lnuma

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers. Learn more via the OpenBenchmarking.org test page.

Selenium, Benchmark: StyleBench - Browser: Google Chrome (Runs / Minute, more is better)
    v5.10 Git Oct23: 40.2 (SE +/- 0.48, N = 3; Min 39.7 / Max 41.2)
    v5.9.1: 39.7 (SE +/- 0.20, N = 3; Min 39.4 / Max 40.1)
    1. chrome 86.0.4240.111

Selenium, Benchmark: Kraken - Browser: Google Chrome (ms, fewer is better)
    v5.10 Git Oct23: 660.3 (SE +/- 1.13, N = 3; Min 658.9 / Max 662.5)
    v5.9.1: 668.6 (SE +/- 1.82, N = 3; Min 665.2 / Max 671.4)
    1. chrome 86.0.4240.111

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07, Test: NUMA (Bogo Ops/s, more is better)
    v5.10 Git Oct23: 95.02 (SE +/- 1.54, N = 3; Min 93.37 / Max 98.1)
    v5.9.1: 93.88 (SE +/- 1.48, N = 3; Min 91.93 / Max 96.78)
    1. (CC) gcc options: -O2 -std=gnu99 -lm -lcrypt -lrt -lz -ldl -lpthread -lc

OSBench

OSBench is a collection of micro-benchmarks for measuring operating system primitives like time to create threads/processes, launching programs, creating files, and memory allocation. Learn more via the OpenBenchmarking.org test page.

OSBench, Test: Create Threads (us Per Event, fewer is better)
    v5.10 Git Oct23: 11.02 (SE +/- 0.12, N = 3; Min 10.78 / Max 11.17)
    v5.9.1: 10.89 (SE +/- 0.11, N = 3; Min 10.72 / Max 11.1)
    1. (CC) gcc options: -lm

Waifu2x-NCNN Vulkan

Waifu2x-NCNN is an NCNN neural network implementation of the Waifu2x converter project and accelerated using the Vulkan API. NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. This test profile times how long it takes to increase the resolution of a sample image with Vulkan. Learn more via the OpenBenchmarking.org test page.

Waifu2x-NCNN Vulkan 20200818, Scale: 2x - Denoise: 3 - TAA: No (Seconds, fewer is better)
    v5.10 Git Oct23: 4.130 (SE +/- 0.024, N = 3; Min 4.09 / Max 4.17)
    v5.9.1: 4.178 (SE +/- 0.030, N = 3; Min 4.12 / Max 4.22)

LevelDB

LevelDB is a key-value storage library developed by Google that supports making use of Snappy for data compression and has other modern features. Learn more via the OpenBenchmarking.org test page.

LevelDB 1.22, Benchmark: Random Fill (Microseconds Per Op, fewer is better)
    v5.10 Git Oct23: 22.45 (SE +/- 0.37, N = 3; Min 22.03 / Max 23.19)
    v5.9.1: 22.71 (SE +/- 0.24, N = 8; Min 22.12 / Max 24.22)
    1. (CXX) g++ options: -O3 -lsnappy -lpthread

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.7.0, Video Input: Summer Nature 1080p (FPS, more is better)
    v5.10 Git Oct23: 304.76 (SE +/- 2.52, N = 14; Min 298.09 / Max 337.15; MIN: 239.66 / MAX: 402.47)
    v5.9.1: 301.41 (SE +/- 2.82, N = 13; Min 294.11 / Max 333.92; MIN: 237.77 / MAX: 403.4)
    1. (CC) gcc options: -pthread

Ethr

Ethr is a cross-platform Golang-written network performance measurement tool developed by Microsoft that is capable of testing multiple protocols and different measurements. Learn more via the OpenBenchmarking.org test page.
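
Ethr likewise follows a client/server model; hedged examples of the localhost runs behind these results are below, with the flag spellings recalled from the ethr CLI rather than from this test profile, so double-check against ethr -h:

    # Terminal 1: server.
    $ ethr -s
    # Terminal 2: TCP latency, TCP connections/s and HTTP bandwidth, one thread each.
    $ ethr -c localhost -p tcp -t l -n 1
    $ ethr -c localhost -p tcp -t c -n 1
    $ ethr -c localhost -p http -t b -n 1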

Ethr 2019-01-02, Server Address: localhost - Protocol: TCP - Test: Latency - Threads: 1 (Microseconds, fewer is better)
    v5.10 Git Oct23: 9.10 (SE +/- 0.02, N = 3; Min 9.08 / Max 9.14; MIN: 8.09 / MAX: 18.26)
    v5.9.1: 9.20 (SE +/- 0.02, N = 3; Min 9.16 / Max 9.22; MIN: 8.05 / MAX: 13.47)

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07, Test: CPU Cache (Bogo Ops/s, more is better)
    v5.10 Git Oct23: 25.94 (SE +/- 0.22, N = 15; Min 24.47 / Max 27.97)
    v5.9.1: 25.66 (SE +/- 0.44, N = 3; Min 25.17 / Max 26.53)
    1. (CC) gcc options: -O2 -std=gnu99 -lm -lcrypt -lrt -lz -ldl -lpthread -lc

perf-bench

This test profile is used for running Linux perf-bench, the benchmark support within the Linux kernel's perf tool. Learn more via the OpenBenchmarking.org test page.

perf-bench, Benchmark: Futex Hash (ops/sec, more is better)
    v5.10 Git Oct23: 4047848 (SE +/- 59957.22, N = 4; Min 3982437 / Max 4227344)
    v5.9.1: 4006589 (SE +/- 58894.69, N = 4; Min 3920097 / Max 4178388)
    1. (CC) gcc options: -O6 -ggdb3 -funwind-tables -std=gnu99 -Xlinker -lpthread -lrt -lm -ldl -lelf -lcrypto -lslang -lz -llzma -lnuma

LevelDB

LevelDB is a key-value storage library developed by Google that supports making use of Snappy for data compression and has other modern features. Learn more via the OpenBenchmarking.org test page.

LevelDB 1.22, Benchmark: Random Fill (MB/s, more is better)
    v5.10 Git Oct23: 39.4 (SE +/- 0.64, N = 3; Min 38.1 / Max 40.1)
    v5.9.1: 39.0 (SE +/- 0.40, N = 8; Min 36.5 / Max 40)
    1. (CXX) g++ options: -O3 -lsnappy -lpthread

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23, Model: SqueezeNet (Microseconds, fewer is better)
    v5.10 Git Oct23: 556066 (SE +/- 6788.79, N = 3; Min 542500 / Max 563328)
    v5.9.1: 561695 (SE +/- 3535.45, N = 3; Min 554633 / Max 565533)

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup, PBKDF2-whirlpool (Iterations Per Second, more is better)
    v5.10 Git Oct23: 777888 (SE +/- 2315.33, N = 3; Min 775573 / Max 782519)
    v5.9.1: 770290 (SE +/- 3601.02, N = 3; Min 764268 / Max 776722)

Ethr

Ethr is a cross-platform Golang-written network performance measurement tool developed by Microsoft that is capable of testing multiple protocols and different measurements. Learn more via the OpenBenchmarking.org test page.

Ethr 2019-01-02, Server Address: localhost - Protocol: HTTP - Test: Bandwidth - Threads: 1 (Mbits/sec, more is better)
    v5.10 Git Oct23: 1394.21 (SE +/- 0.80, N = 3; Min 1393.16 / Max 1395.79; MIN: 1380 / MAX: 1410)
    v5.9.1: 1381.05 (SE +/- 0.30, N = 3; Min 1380.53 / Max 1381.58; MIN: 1370 / MAX: 1400)

LevelDB

LevelDB is a key-value storage library developed by Google that supports making use of Snappy for data compression and has other modern features. Learn more via the OpenBenchmarking.org test page.

LevelDB 1.22, Benchmark: Random Read (Microseconds Per Op, fewer is better)
    v5.10 Git Oct23: 3.519 (SE +/- 0.030, N = 3; Min 3.46 / Max 3.57)
    v5.9.1: 3.486 (SE +/- 0.017, N = 3; Min 3.45 / Max 3.5)
    1. (CXX) g++ options: -O3 -lsnappy -lpthread

OSPray

Intel OSPray is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray 1.8.5, Demo: San Miguel - Renderer: SciVis (FPS, more is better)
    v5.10 Git Oct23: 5.36 (SE +/- 0.01, N = 3; Min 5.35 / Max 5.38; MIN: 5.1 / MAX: 5.56)
    v5.9.1: 5.41 (SE +/- 0.01, N = 4; Min 5.41 / Max 5.43; MIN: 5.15 / MAX: 5.59)

t-test1

This is a test of t-test1 for basic memory allocator benchmarks. Note this test profile is currently very basic and the overall time does include the warmup time of the custom t-test1 compilation. Improvements welcome. Learn more via the OpenBenchmarking.org test page.

t-test1 2017-01-13, Threads: 1 (Seconds, fewer is better)
    v5.10 Git Oct23: 12.60 (SE +/- 0.02, N = 3; Min 12.58 / Max 12.64)
    v5.9.1: 12.50 (SE +/- 0.03, N = 3; Min 12.46 / Max 12.55)
    1. (CC) gcc options: -pthread

Ethr

Ethr is a cross-platform Golang-written network performance measurement tool developed by Microsoft that is capable of testing multiple protocols and different measurements. Learn more via the OpenBenchmarking.org test page.

Ethr 2019-01-02, Server Address: localhost - Protocol: TCP - Test: Connections/s - Threads: 1 (Connections/sec, more is better)
    v5.10 Git Oct23: 12060 (SE +/- 101.49, N = 3; Min 11860 / Max 12190)
    v5.9.1: 11960 (SE +/- 83.86, N = 3; Min 11820 / Max 12110)

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.26, Backend: OpenCL (Nodes Per Second, more is better)
    v5.10 Git Oct23: 1521 (SE +/- 1.76, N = 3; Min 1518 / Max 1524)
    v5.9.1: 1532 (SE +/- 2.19, N = 3; Min 1528 / Max 1535)
    1. (CXX) g++ options: -flto -pthread

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916, Target: CPU - Model: googlenet (ms, fewer is better)
    v5.10 Git Oct23: 23.97 (SE +/- 0.01, N = 3; Min 23.95 / Max 24; MIN: 22.03 / MAX: 36.54)
    v5.9.1: 24.14 (SE +/- 0.07, N = 3; Min 24.02 / Max 24.27; MIN: 22.04 / MAX: 36.82)
    1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.7.0, Video Input: Chimera 1080p (FPS, more is better)
    v5.10 Git Oct23: 308.81 (SE +/- 2.29, N = 15; Min 304.87 / Max 340.38; MIN: 189.88 / MAX: 692.4)
    v5.9.1: 306.78 (SE +/- 2.26, N = 14; Min 298.1 / Max 334.93; MIN: 190.06 / MAX: 675.61)
    1. (CC) gcc options: -pthread

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07, Test: MEMFD (Bogo Ops/s, more is better)
    v5.10 Git Oct23: 233.38 (SE +/- 2.75, N = 3; Min 229.92 / Max 238.81)
    v5.9.1: 234.87 (SE +/- 2.98, N = 5; Min 231.03 / Max 246.68)
    1. (CC) gcc options: -O2 -std=gnu99 -lm -lcrypt -lrt -lz -ldl -lpthread -lc

oneAPI Level Zero Tests

This is benchmarking the collection of Intel oneAPI Level Zero Tests. Learn more via the OpenBenchmarking.org test page.

oneAPI Level Zero Tests, Test: Peak Integer Compute (GFLOPS, more is better)
    v5.10 Git Oct23: 438.03 (SE +/- 1.68, N = 3; Min 435.05 / Max 440.88)
    v5.9.1: 440.81 (SE +/- 1.46, N = 3; Min 437.93 / Max 442.69)
    1. (CXX) g++ options: -ldl -pthread

WireGuard + Linux Networking Stack Stress Test

This is a benchmark of the WireGuard secure VPN tunnel and Linux networking stack stress test. The test runs on the local host but does require root permissions to run. The way it works is it creates three namespaces. ns0 has a loopback device. ns1 and ns2 each have wireguard devices. Those two wireguard devices send traffic through the loopback device of ns0. The end result of this is that tests wind up testing encryption and decryption at the same time -- a pretty CPU and scheduler-heavy workflow. Learn more via the OpenBenchmarking.org test page.
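
The namespace topology described above can be sketched by hand with iproute2 and the wg tool; this is only an illustrative outline of the wiring, not the script the test profile actually ships:

    # Three namespaces: ns0 carries the encrypted traffic over its loopback,
    # ns1 and ns2 each get a WireGuard interface.
    $ ip netns add ns0
    $ ip netns add ns1
    $ ip netns add ns2
    $ ip link add wg1 type wireguard
    $ ip link add wg2 type wireguard
    $ ip link set wg1 netns ns1
    $ ip link set wg2 netns ns2
    # Keys, peers and addresses would then be configured with `wg set` and
    # `ip addr` inside each namespace before pushing traffic through the tunnel.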

WireGuard + Linux Networking Stack Stress Test (Seconds, fewer is better)
    v5.10 Git Oct23: 276.65 (SE +/- 1.28, N = 3; Min 275.27 / Max 279.21)
    v5.9.1: 275.01 (SE +/- 2.88, N = 3; Min 269.92 / Max 279.87)

Crypto++

Crypto++ is a C++ class library of cryptographic algorithms. Learn more via the OpenBenchmarking.org test page.

Crypto++ 8.2, Test: Unkeyed Algorithms (MiB/second, more is better)
    v5.10 Git Oct23: 399.92 (SE +/- 1.42, N = 3; Min 397.17 / Max 401.89)
    v5.9.1: 402.20 (SE +/- 0.85, N = 3; Min 401.2 / Max 403.88)
    1. (CXX) g++ options: -g2 -O3 -fPIC -pthread -pipe

LevelDB

LevelDB is a key-value storage library developed by Google that supports making use of Snappy for data compression and has other modern features. Learn more via the OpenBenchmarking.org test page.

LevelDB 1.22, Benchmark: Sequential Fill (MB/s, more is better)
    v5.10 Git Oct23: 35.3 (SE +/- 0.42, N = 15; Min 34.5 / Max 41.1)
    v5.9.1: 35.1 (SE +/- 0.51, N = 12; Min 33.8 / Max 40.6)
    1. (CXX) g++ options: -O3 -lsnappy -lpthread

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916, Target: Vulkan GPU - Model: resnet18 (ms, fewer is better)
    v5.10 Git Oct23: 7.17 (SE +/- 0.08, N = 3; Min 7.05 / Max 7.32; MIN: 6.85 / MAX: 7.64)
    v5.9.1: 7.13 (SE +/- 0.05, N = 3; Min 7.04 / Max 7.2; MIN: 6.9 / MAX: 7.84)
    1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916, Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better)
    v5.10 Git Oct23: 5.46 (SE +/- 0.01, N = 3; Min 5.44 / Max 5.48; MIN: 5.28 / MAX: 6.04)
    v5.9.1: 5.49 (SE +/- 0.02, N = 3; Min 5.46 / Max 5.54; MIN: 5.29 / MAX: 6.41)
    1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

perf-bench

This test profile is used for running Linux perf-bench, the benchmark support within the Linux kernel's perf tool. Learn more via the OpenBenchmarking.org test page.

perf-bench, Benchmark: Futex Lock-Pi (ops/sec, more is better)
    v5.10 Git Oct23: 1830 (SE +/- 27.09, N = 3; Min 1779 / Max 1871)
    v5.9.1: 1840 (SE +/- 26.10, N = 3; Min 1788 / Max 1872)
    1. (CC) gcc options: -O6 -ggdb3 -funwind-tables -std=gnu99 -Xlinker -lpthread -lrt -lm -ldl -lelf -lcrypto -lslang -lz -llzma -lnuma

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23, Model: Inception ResNet V2 (Microseconds, fewer is better)
    v5.10 Git Oct23: 7365197 (SE +/- 8358.81, N = 3; Min 7349110 / Max 7377180)
    v5.9.1: 7325590 (SE +/- 2213.01, N = 3; Min 7322260 / Max 7329780)

SQLite Speedtest

This is a benchmark of SQLite's speedtest1 benchmark program with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.

SQLite Speedtest 3.30, Timed Time - Size 1,000 (Seconds, fewer is better)
    v5.10 Git Oct23: 51.08 (SE +/- 0.48, N = 3; Min 50.18 / Max 51.82)
    v5.9.1: 50.82 (SE +/- 0.42, N = 3; Min 50.02 / Max 51.43)
    1. (CC) gcc options: -O2 -ldl -lz -lpthread

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916, Target: Vulkan GPU - Model: squeezenet (ms, fewer is better)
    v5.10 Git Oct23: 11.43 (SE +/- 0.01, N = 3; Min 11.41 / Max 11.45; MIN: 11.18 / MAX: 14.82)
    v5.9.1: 11.49 (SE +/- 0.03, N = 3; Min 11.46 / Max 11.56; MIN: 11.29 / MAX: 12.01)
    1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Intel Open Image Denoise

Open Image Denoise is a denoising library for ray-tracing and part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

Intel Open Image Denoise 1.2.0, Scene: Memorial (Images / Sec, more is better)
    v5.10 Git Oct23: 5.89 (SE +/- 0.08, N = 3; Min 5.79 / Max 6.05)
    v5.9.1: 5.92 (SE +/- 0.10, N = 3; Min 5.8 / Max 6.11)

LevelDB

LevelDB is a key-value storage library developed by Google that supports making use of Snappy for data compression and has other modern features. Learn more via the OpenBenchmarking.org test page.

LevelDB 1.22, Benchmark: Sequential Fill (Microseconds Per Op, fewer is better)
    v5.10 Git Oct23: 25.13 (SE +/- 0.26, N = 15; Min 21.51 / Max 25.65)
    v5.9.1: 25.25 (SE +/- 0.32, N = 12; Min 21.77 / Max 26.16)
    1. (CXX) g++ options: -O3 -lsnappy -lpthread

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23, Model: Inception V4 (Microseconds, fewer is better)
    v5.10 Git Oct23: 8083893 (SE +/- 13211.50, N = 3; Min 8068350 / Max 8110170)
    v5.9.1: 8123040 (SE +/- 5070.03, N = 3; Min 8113210 / Max 8130110)

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916, Target: CPU - Model: blazeface (ms, fewer is better)
    v5.10 Git Oct23: 2.11 (SE +/- 0.00, N = 3; Min 2.1 / Max 2.11; MIN: 1.97 / MAX: 4.13)
    v5.9.1: 2.10 (SE +/- 0.00, N = 3; Min 2.1 / Max 2.11; MIN: 1.97 / MAX: 2.22)
    1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

t-test1

This is a test of t-test1 for basic memory allocator benchmarks. Note this test profile is currently very basic and the overall time does include the warmup time of the custom t-test1 compilation. Improvements welcome. Learn more via the OpenBenchmarking.org test page.

t-test1 2017-01-13, Threads: 2 (Seconds, fewer is better)
    v5.10 Git Oct23: 4.530 (SE +/- 0.007, N = 3; Min 4.52 / Max 4.54)
    v5.9.1: 4.509 (SE +/- 0.009, N = 3; Min 4.5 / Max 4.53)
    1. (CC) gcc options: -pthread

Sockperf

This is a network socket API performance benchmark. Learn more via the OpenBenchmarking.org test page.

Sockperf 3.4, Test: Latency Ping Pong (usec, fewer is better)
    v5.10 Git Oct23: 2.883 (SE +/- 0.007, N = 5; Min 2.86 / Max 2.9)
    v5.9.1: 2.871 (SE +/- 0.006, N = 5; Min 2.86 / Max 2.89)
    1. (CXX) g++ options: --param -O3 -rdynamic -ldl -lpthread

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916, Target: CPU - Model: mobilenet (ms, fewer is better)
    v5.10 Git Oct23: 30.16 (SE +/- 0.07, N = 3; Min 30.07 / Max 30.29; MIN: 28.87 / MAX: 80.32)
    v5.9.1: 30.27 (SE +/- 0.02, N = 3; Min 30.24 / Max 30.3; MIN: 29.76 / MAX: 41.79)
    1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.
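
RocksDB ships a db_bench utility, and a hand-run random-read pass similar to the result below looks roughly like this; the key counts and database path are illustrative rather than the profile's exact settings:

    # Populate a database sequentially, then measure random point lookups.
    $ db_bench --benchmarks=fillseq --num=1000000 --db=/tmp/rocksdb-bench
    $ db_bench --benchmarks=readrandom --use_existing_db=1 --num=1000000 --db=/tmp/rocksdb-bench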

Facebook RocksDB 6.3.6, Test: Random Read (Op/s, more is better)
    v5.10 Git Oct23: 16145632 (SE +/- 202717.01, N = 5; Min 15615820 / Max 16840137)
    v5.9.1: 16087477 (SE +/- 196352.27, N = 5; Min 15696012 / Max 16816630)
    1. (CXX) g++ options: -O3 -march=native -std=c++11 -fno-builtin-memcmp -fno-rtti -rdynamic -lpthread

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07, Test: Atomic (Bogo Ops/s, more is better)
    v5.10 Git Oct23: 272696.82 (SE +/- 3248.86, N = 15; Min 261380.29 / Max 299757.5)
    v5.9.1: 273678.63 (SE +/- 3321.50, N = 15; Min 261741.06 / Max 299907.08)
    1. (CC) gcc options: -O2 -std=gnu99 -lm -lcrypt -lrt -lz -ldl -lpthread -lc

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916, Target: CPU - Model: yolov4-tiny (ms, fewer is better)
    v5.10 Git Oct23: 39.52 (SE +/- 0.02, N = 3; Min 39.5 / Max 39.55; MIN: 38.45 / MAX: 50.32)
    v5.9.1: 39.66 (SE +/- 0.03, N = 3; Min 39.6 / Max 39.69; MIN: 38.45 / MAX: 49.68)
    1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OSBench

OSBench is a collection of micro-benchmarks for measuring operating system primitives like time to create threads/processes, launching programs, creating files, and memory allocation. Learn more via the OpenBenchmarking.org test page.

OSBench, Test: Launch Programs (us Per Event, fewer is better)
    v5.10 Git Oct23: 38.09 (SE +/- 0.04, N = 3; Min 38.04 / Max 38.17)
    v5.9.1: 37.96 (SE +/- 0.17, N = 3; Min 37.69 / Max 38.27)
    1. (CC) gcc options: -lm

oneAPI Level Zero Tests

This is benchmarking the collection of Intel oneAPI Level Zero Tests. Learn more via the OpenBenchmarking.org test page.

oneAPI Level Zero Tests, Test: Peak System Memory Copy to Shared Memory (GB/s, more is better)
    v5.10 Git Oct23: 14.62 (SE +/- 0.04, N = 3; Min 14.54 / Max 14.67)
    v5.9.1: 14.57 (SE +/- 0.08, N = 3; Min 14.41 / Max 14.65)
    1. (CXX) g++ options: -ldl -pthread

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916, Target: Vulkan GPU - Model: shufflenet-v2 (ms, fewer is better)
    v5.10 Git Oct23: 3.05 (SE +/- 0.03, N = 3; Min 2.99 / Max 3.08; MIN: 2.75 / MAX: 3.43)
    v5.9.1: 3.04 (SE +/- 0.01, N = 3; Min 3.02 / Max 3.07; MIN: 2.89 / MAX: 3.89)
    1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

oneAPI Level Zero Tests

This is benchmarking the collection of Intel oneAPI Level Zero Tests. Learn more via the OpenBenchmarking.org test page.

oneAPI Level Zero Tests, Test: Host-To-Device Bandwidth (GB/s, more is better)
    v5.10 Git Oct23: 26.63 (SE +/- 0.04, N = 3; Min 26.58 / Max 26.7)
    v5.9.1: 26.55 (SE +/- 0.02, N = 3; Min 26.51 / Max 26.58)
    1. (CXX) g++ options: -ldl -pthread

oneAPI Level Zero Tests, Test: Host-To-Device Bandwidth (usec, fewer is better)
    v5.10 Git Oct23: 10081.56 (SE +/- 14.81, N = 3; Min 10052.06 / Max 10098.5)
    v5.9.1: 10110.20 (SE +/- 7.90, N = 3; Min 10098.28 / Max 10125.15)
    1. (CXX) g++ options: -ldl -pthread

oneAPI Level Zero Tests, Test: Device-To-Host Bandwidth (usec, fewer is better)
    v5.10 Git Oct23: 10076.53 (SE +/- 4.26, N = 3; Min 10068.46 / Max 10082.92)
    v5.9.1: 10104.03 (SE +/- 11.22, N = 3; Min 10091.73 / Max 10126.44)
    1. (CXX) g++ options: -ldl -pthread

oneAPI Level Zero Tests, Test: Device-To-Host Bandwidth (GB/s, more is better)
    v5.10 Git Oct23: 26.64 (SE +/- 0.01, N = 3; Min 26.62 / Max 26.66)
    v5.9.1: 26.57 (SE +/- 0.03, N = 3; Min 26.51 / Max 26.6)
    1. (CXX) g++ options: -ldl -pthread

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: Vulkan GPU - Model: efficientnet-b0 (ms, Fewer Is Better)
  v5.10 Git Oct23: 11.00 (SE +/- 0.01, N = 3; Min: 10.99 / Max: 11.01; MIN: 10.91 / MAX: 11.48)
  v5.9.1: 11.03 (SE +/- 0.02, N = 3; Min: 11.01 / Max: 11.06; MIN: 10.91 / MAX: 11.24)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.
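As a rough illustration of what this measurement captures, the following Python sketch times repeated inferences with the TensorFlow Lite interpreter and reports the average in microseconds. The model path is a placeholder and the loop count is arbitrary; this is not the harness the test profile itself uses.

    import time
    import numpy as np
    import tensorflow as tf

    # Placeholder model path; not the file used by this test profile.
    interpreter = tf.lite.Interpreter(model_path="mobilenet_v1.tflite")
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]

    # Random input matching the model's expected shape and dtype.
    data = np.random.random_sample(tuple(inp["shape"])).astype(inp["dtype"])

    runs = 50
    start = time.perf_counter()
    for _ in range(runs):
        interpreter.set_tensor(inp["index"], data)
        interpreter.invoke()
        _ = interpreter.get_tensor(out["index"])
    elapsed = time.perf_counter() - start
    print(f"average inference time: {elapsed / runs * 1e6:.0f} microseconds")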

TensorFlow Lite 2020-08-23 - Model: Mobilenet Float (Microseconds, Fewer Is Better)
  v5.10 Git Oct23: 375910 (SE +/- 2855.00, N = 3; Min: 370204 / Max: 378948)
  v5.9.1: 376915 (SE +/- 2824.57, N = 3; Min: 371266 / Max: 379774)

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: Vulkan GPU - Model: vgg16 (ms, Fewer Is Better)
  v5.10 Git Oct23: 38.37 (SE +/- 0.02, N = 3; Min: 38.34 / Max: 38.4; MIN: 38.11 / MAX: 38.7)
  v5.9.1: 38.47 (SE +/- 0.02, N = 3; Min: 38.45 / Max: 38.5; MIN: 38.14 / MAX: 38.95)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

ET: Legacy

ETLegacy is an open-source engine evolution of Wolfenstein: Enemy Territory, a World War II-era first-person shooter that was released for free by Splash Damage using the id Tech 3 engine. Learn more via the OpenBenchmarking.org test page.

ET: Legacy 2.75 - Renderer: Default - Resolution: 1920 x 1200 (Frames Per Second, More Is Better)
  v5.10 Git Oct23: 201.8 (SE +/- 2.73, N = 4; Min: 198.3 / Max: 209.9)
  v5.9.1: 201.3 (SE +/- 2.08, N = 13; Min: 197.9 / Max: 226)

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: CPU - Model: squeezenet (ms, Fewer Is Better)
  v5.10 Git Oct23: 25.50 (SE +/- 0.02, N = 3; Min: 25.46 / Max: 25.54; MIN: 24.75 / MAX: 36.78)
  v5.9.1: 25.56 (SE +/- 0.03, N = 3; Min: 25.51 / Max: 25.62; MIN: 24.72 / MAX: 36.31)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2 (ms, Fewer Is Better)
  v5.10 Git Oct23: 4.53 (SE +/- 0.01, N = 3; Min: 4.51 / Max: 4.55; MIN: 4.08 / MAX: 5.38)
  v5.9.1: 4.54 (SE +/- 0.15, N = 3; Min: 4.33 / Max: 4.82; MIN: 4.03 / MAX: 5.62)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: Vulkan GPU - Model: alexnet (ms, Fewer Is Better)
  v5.10 Git Oct23: 9.44 (SE +/- 0.04, N = 3; Min: 9.39 / Max: 9.51; MIN: 9.16 / MAX: 9.76)
  v5.9.1: 9.42 (SE +/- 0.06, N = 3; Min: 9.33 / Max: 9.54; MIN: 8.59 / MAX: 9.88)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: Vulkan GPU - Model: mobilenet (ms, Fewer Is Better)
  v5.10 Git Oct23: 9.68 (SE +/- 0.02, N = 3; Min: 9.64 / Max: 9.72; MIN: 9.47 / MAX: 10.05)
  v5.9.1: 9.70 (SE +/- 0.01, N = 3; Min: 9.69 / Max: 9.71; MIN: 9.17 / MAX: 11.01)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

FFTE

FFTE is a package by Daisuke Takahashi to compute Discrete Fourier Transforms of 1-, 2-, and 3-dimensional sequences of length (2^p)*(3^q)*(5^r). Learn more via the OpenBenchmarking.org test page.
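The length constraint means FFTE only handles sizes whose prime factors are 2, 3, and 5; the N=256 run below is simply 2^8. A small Python check of that property, for illustration only:

    def is_valid_ffte_length(n: int) -> bool:
        """Return True if n is of the form (2^p)*(3^q)*(5^r)."""
        if n < 1:
            return False
        for radix in (2, 3, 5):
            while n % radix == 0:
                n //= radix
        return n == 1

    print(is_valid_ffte_length(256))  # True  (2^8)
    print(is_valid_ffte_length(240))  # True  (2^4 * 3 * 5)
    print(is_valid_ffte_length(7))    # False (contains the prime factor 7)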

FFTE 7.0 - N=256, 3D Complex FFT Routine (MFLOPS, More Is Better)
  v5.10 Git Oct23: 30990.26 (SE +/- 94.04, N = 3; Min: 30849.48 / Max: 31168.66)
  v5.9.1: 31053.12 (SE +/- 38.53, N = 3; Min: 30997.43 / Max: 31127.1)
  1. (F9X) gfortran options: -O3 -fomit-frame-pointer -fopenmp

perf-bench

This test profile is used for running Linux perf-bench, the benchmark support within the Linux kernel's perf tool. Learn more via the OpenBenchmarking.org test page.

perf-bench - Benchmark: Memset 1MB (GB/sec, More Is Better)
  v5.10 Git Oct23: 83.50 (SE +/- 1.03, N = 15; Min: 72.36 / Max: 85.84)
  v5.9.1: 83.67 (SE +/- 0.77, N = 15; Min: 76.31 / Max: 86.23)
  1. (CC) gcc options: -O6 -ggdb3 -funwind-tables -std=gnu99 -Xlinker -lpthread -lrt -lm -ldl -lelf -lcrypto -lslang -lz -llzma -lnuma

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07 - Test: Malloc (Bogo Ops/s, More Is Better)
  v5.10 Git Oct23: 32157875.81 (SE +/- 380954.86, N = 3; Min: 31631766.26 / Max: 32898200.29)
  v5.9.1: 32217820.87 (SE +/- 475699.37, N = 3; Min: 31581767.6 / Max: 33148584.69)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lcrypt -lrt -lz -ldl -lpthread -lc

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: Mobilenet Quant (Microseconds, Fewer Is Better)
  v5.10 Git Oct23: 369871 (SE +/- 2810.57, N = 3; Min: 364260 / Max: 372973)
  v5.9.1: 369191 (SE +/- 3037.19, N = 3; Min: 363123 / Max: 372460)

RealSR-NCNN

RealSR-NCNN is an NCNN neural network implementation of the RealSR project accelerated using the Vulkan API. RealSR is Real-World Super Resolution via Kernel Estimation and Noise Injection. NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. This test profile times how long it takes to increase the resolution of a sample image by a scale of 4x with Vulkan. Learn more via the OpenBenchmarking.org test page.

RealSR-NCNN 20200818 - Scale: 4x - TAA: No (Seconds, Fewer Is Better)
  v5.10 Git Oct23: 68.62 (SE +/- 0.32, N = 3; Min: 68.16 / Max: 69.23)
  v5.9.1: 68.52 (SE +/- 0.33, N = 3; Min: 67.89 / Max: 69.02)

RealSR-NCNN 20200818 - Scale: 4x - TAA: Yes (Seconds, Fewer Is Better)
  v5.10 Git Oct23: 538.64 (SE +/- 0.39, N = 3; Min: 537.88 / Max: 539.17)
  v5.9.1: 539.38 (SE +/- 0.37, N = 3; Min: 538.66 / Max: 539.86)

GLmark2

This is a test of Linaro's glmark2 port, currently using the X11 OpenGL 2.0 target. GLmark2 is a basic OpenGL benchmark. Learn more via the OpenBenchmarking.org test page.

GLmark2 2020.04 - Resolution: 1920 x 1200 (Score, More Is Better)
  v5.10 Git Oct23: 840
  v5.9.1: 839

OSBench

OSBench is a collection of micro-benchmarks for measuring operating system primitives such as the time to create threads and processes, launch programs, create files, and allocate memory. Learn more via the OpenBenchmarking.org test page.
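As a loose analogue of the "Launch Programs" micro-benchmark, the Python sketch below times how many microseconds it takes on average to launch a trivial program. /bin/true is an assumed target, and the interpreter overhead is far higher than OSBench's C implementation, so the numbers are only illustrative.

    import subprocess
    import time

    events = 200
    start = time.perf_counter()
    for _ in range(events):
        # Launch a trivial program and wait for it to exit.
        subprocess.run(["/bin/true"], check=True)
    elapsed = time.perf_counter() - start
    print(f"{elapsed / events * 1e6:.1f} us per launch event")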

OSBench - Test: Memory Allocations (Ns Per Event, Fewer Is Better)
  v5.10 Git Oct23: 67.66 (SE +/- 0.14, N = 3; Min: 67.45 / Max: 67.92)
  v5.9.1: 67.58 (SE +/- 0.07, N = 3; Min: 67.51 / Max: 67.72)
  1. (CC) gcc options: -lm

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers. Learn more via the OpenBenchmarking.org test page.
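A minimal sketch of driving Google Chrome through the Selenium Python bindings is shown below; the URL is a placeholder and a matching chromedriver must be on the PATH. The actual test profile drives established suites such as Kraken, Jetstream, and StyleBench and records the scores those pages report.

    import time
    from selenium import webdriver

    driver = webdriver.Chrome()  # requires chromedriver on the PATH
    try:
        start = time.perf_counter()
        driver.get("https://example.com/")  # placeholder page, not a real suite
        driver.execute_script("return document.title")  # force a round trip
        elapsed_ms = (time.perf_counter() - start) * 1000
        print(f"page load + script round trip: {elapsed_ms:.1f} ms")
    finally:
        driver.quit()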

Selenium - Benchmark: WASM collisionDetection - Browser: Google Chrome (ms, Fewer Is Better)
  v5.10 Git Oct23: 281.01 (SE +/- 0.43, N = 3; Min: 280.55 / Max: 281.87)
  v5.9.1: 281.32 (SE +/- 0.05, N = 3; Min: 281.23 / Max: 281.42)
  1. chrome 86.0.4240.111

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

Facebook RocksDB 6.3.6 - Test: Read While Writing (Op/s, More Is Better)
  v5.10 Git Oct23: 682940 (SE +/- 6146.43, N = 11; Min: 661540 / Max: 737752)
  v5.9.1: 683690 (SE +/- 5312.32, N = 15; Min: 662641 / Max: 748163)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fno-builtin-memcmp -fno-rtti -rdynamic -lpthread

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: CPU - Model: alexnet (ms, Fewer Is Better)
  v5.10 Git Oct23: 18.34 (SE +/- 0.00, N = 3; Min: 18.34 / Max: 18.34; MIN: 17.16 / MAX: 21.18)
  v5.9.1: 18.36 (SE +/- 0.02, N = 3; Min: 18.34 / Max: 18.4; MIN: 17.1 / MAX: 30.21)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: Vulkan GPU - Model: googlenet (ms, Fewer Is Better)
  v5.10 Git Oct23: 10.48 (SE +/- 0.01, N = 3; Min: 10.47 / Max: 10.49; MIN: 10.36 / MAX: 10.7)
  v5.9.1: 10.49 (SE +/- 0.04, N = 3; Min: 10.41 / Max: 10.56; MIN: 10.27 / MAX: 11.02)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: resnet18 (ms, Fewer Is Better)
  v5.10 Git Oct23: 20.98 (SE +/- 0.03, N = 3; Min: 20.93 / Max: 21.04; MIN: 18.48 / MAX: 25.7)
  v5.9.1: 21.00 (SE +/- 0.01, N = 3; Min: 20.98 / Max: 21.01; MIN: 18.44 / MAX: 23.65)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: efficientnet-b0 (ms, Fewer Is Better)
  v5.10 Git Oct23: 11.17 (SE +/- 0.04, N = 3; Min: 11.1 / Max: 11.25; MIN: 10.92 / MAX: 23.39)
  v5.9.1: 11.18 (SE +/- 0.04, N = 3; Min: 11.13 / Max: 11.26; MIN: 10.91 / MAX: 22.47)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Xonotic

This is a benchmark of Xonotic, a fork of the DarkPlaces-based Nexuiz game; development on Xonotic began in March 2010. Learn more via the OpenBenchmarking.org test page.

Xonotic 0.8.2 - Resolution: 1920 x 1200 - Effects Quality: Ultra (Frames Per Second, More Is Better)
  v5.10 Git Oct23: 161.06 (SE +/- 0.87, N = 3; Min: 160.14 / Max: 162.79; MIN: 82 / MAX: 269)
  v5.9.1: 160.92 (SE +/- 0.25, N = 3; Min: 160.62 / Max: 161.43; MIN: 81 / MAX: 269)

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.7.0 - Video Input: Chimera 1080p 10-bit (FPS, More Is Better)
  v5.10 Git Oct23: 64.52 (SE +/- 1.01, N = 3; Min: 63.49 / Max: 66.55; MIN: 40.89 / MAX: 203.17)
  v5.9.1: 64.47 (SE +/- 0.97, N = 3; Min: 63.49 / Max: 66.41; MIN: 41.01 / MAX: 203.78)
  1. (CC) gcc options: -pthread

IOR

IOR is a parallel I/O storage benchmark. Learn more via the OpenBenchmarking.org test page.

IOR 3.2.1 - Read Test (MB/s, More Is Better)
  v5.10 Git Oct23: 857.28 (SE +/- 2.95, N = 15; Min: 833.97 / Max: 878.08; MIN: 680.98 / MAX: 945.14)
  v5.9.1: 856.64 (SE +/- 11.16, N = 3; Min: 843.83 / Max: 878.87; MIN: 789.76 / MAX: 930.54)
  1. (CC) gcc options: -O2 -lm -pthread -lmpi

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: NASNet Mobile (Microseconds, Fewer Is Better)
  v5.10 Git Oct23: 404431 (SE +/- 2810.87, N = 3; Min: 398810 / Max: 407320)
  v5.9.1: 404136 (SE +/- 3396.00, N = 3; Min: 397344 / Max: 407541)

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: Vulkan GPU - Model: yolov4-tiny (ms, Fewer Is Better)
  v5.10 Git Oct23: 15.22 (SE +/- 0.01, N = 3; Min: 15.2 / Max: 15.24; MIN: 14.94 / MAX: 15.71)
  v5.9.1: 15.21 (SE +/- 0.02, N = 3; Min: 15.18 / Max: 15.24; MIN: 14.84 / MAX: 24.13)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: Vulkan GPU - Model: resnet50 (ms, Fewer Is Better)
  v5.10 Git Oct23: 15.42 (SE +/- 0.04, N = 3; Min: 15.38 / Max: 15.5; MIN: 15.22 / MAX: 15.68)
  v5.9.1: 15.41 (SE +/- 0.02, N = 3; Min: 15.38 / Max: 15.45; MIN: 15.05 / MAX: 15.58)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

oneAPI Level Zero Tests

This test benchmarks the collection of Intel oneAPI Level Zero Tests. Learn more via the OpenBenchmarking.org test page.

oneAPI Level Zero Tests - Test: Peak Float16 Global Memory Bandwidth (GB/s, More Is Better)
  v5.10 Git Oct23: 56.51 (SE +/- 0.08, N = 3; Min: 56.34 / Max: 56.62)
  v5.9.1: 56.54 (SE +/- 0.16, N = 3; Min: 56.23 / Max: 56.7)
  1. (CXX) g++ options: -ldl -pthread

oneAPI Level Zero Tests - Test: Peak Half-Precision Compute (GFLOPS, More Is Better)
  v5.10 Git Oct23: 3034.09 (SE +/- 1.26, N = 3; Min: 3032.47 / Max: 3036.58)
  v5.9.1: 3032.34 (SE +/- 3.59, N = 3; Min: 3025.72 / Max: 3038.07)
  1. (CXX) g++ options: -ldl -pthread

perf-bench

This test profile is used for running Linux perf-bench, the benchmark support within the Linux kernel's perf tool. Learn more via the OpenBenchmarking.org test page.

perf-bench - Benchmark: Syscall Basic (ops/sec, More Is Better)
  v5.10 Git Oct23: 22537830 (SE +/- 26155.84, N = 3; Min: 22510223 / Max: 22590114)
  v5.9.1: 22550148 (SE +/- 174877.88, N = 3; Min: 22360339 / Max: 22899466)
  1. (CC) gcc options: -O6 -ggdb3 -funwind-tables -std=gnu99 -Xlinker -lpthread -lrt -lm -ldl -lelf -lcrypto -lslang -lz -llzma -lnuma

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07 - Test: RdRand (Bogo Ops/s, More Is Better)
  v5.10 Git Oct23: 37790.91 (SE +/- 120.80, N = 3; Min: 37622.93 / Max: 38025.27)
  v5.9.1: 37771.12 (SE +/- 99.04, N = 3; Min: 37626.95 / Max: 37960.85)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lcrypt -lrt -lz -ldl -lpthread -lc

Xonotic

This is a benchmark of Xonotic, a fork of the DarkPlaces-based Nexuiz game; development on Xonotic began in March 2010. Learn more via the OpenBenchmarking.org test page.

Xonotic 0.8.2 - Resolution: 1920 x 1200 - Effects Quality: Ultimate (Frames Per Second, More Is Better)
  v5.10 Git Oct23: 123.44 (SE +/- 0.40, N = 3; Min: 122.97 / Max: 124.24; MIN: 35 / MAX: 224)
  v5.9.1: 123.38 (SE +/- 0.39, N = 3; Min: 122.97 / Max: 124.17; MIN: 36 / MAX: 213)

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: WASM imageConvolute - Browser: Google Chrome (ms, Fewer Is Better)
  v5.10 Git Oct23: 28.29 (SE +/- 0.14, N = 3; Min: 28.02 / Max: 28.45)
  v5.9.1: 28.30 (SE +/- 0.16, N = 3; Min: 27.99 / Max: 28.52)
  1. chrome 86.0.4240.111

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: CPU - Model: resnet50 (ms, Fewer Is Better)
  v5.10 Git Oct23: 49.17 (SE +/- 0.03, N = 3; Min: 49.11 / Max: 49.2; MIN: 46.94 / MAX: 66.12)
  v5.9.1: 49.19 (SE +/- 0.02, N = 3; Min: 49.16 / Max: 49.22; MIN: 46.9 / MAX: 61.16)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: Jetstream 2 - Browser: Google Chrome (Score, More Is Better)
  v5.10 Git Oct23: 164.77 (SE +/- 0.88, N = 3; Min: 163.58 / Max: 166.48)
  v5.9.1: 164.71 (SE +/- 1.42, N = 3; Min: 162.7 / Max: 167.47)
  1. chrome 86.0.4240.111

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library that offers high-performance volume computation kernels and is part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenVKL 0.9 - Benchmark: vklBenchmark (Items / Sec, More Is Better)
  v5.10 Git Oct23: 97.78 (SE +/- 0.07, N = 3; Min: 97.67 / Max: 97.92; MIN: 1 / MAX: 446)
  v5.9.1: 97.75 (SE +/- 0.30, N = 3; Min: 97.33 / Max: 98.33; MIN: 1 / MAX: 446)

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration. Learn more via the OpenBenchmarking.org test page.
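For reference, timing a kernel build by hand follows the same pattern; the sketch below assumes an already-unpacked source tree, a defconfig configuration, and four build jobs, none of which are guaranteed to match what the test profile does.

    import os
    import subprocess
    import time

    kernel_tree = os.path.expanduser("~/linux-5.4")  # hypothetical source path

    # Generate a default configuration, then time the full build.
    subprocess.run(["make", "defconfig"], cwd=kernel_tree, check=True)
    start = time.perf_counter()
    subprocess.run(["make", "-j4"], cwd=kernel_tree, check=True)
    print(f"time to compile: {time.perf_counter() - start:.2f} seconds")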

Timed Linux Kernel Compilation 5.4 - Time To Compile (Seconds, Fewer Is Better)
  v5.10 Git Oct23: 250.35 (SE +/- 1.03, N = 3; Min: 249.13 / Max: 252.38)
  v5.9.1: 250.27 (SE +/- 0.49, N = 3; Min: 249.44 / Max: 251.14)

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: CPU - Model: vgg16 (ms, Fewer Is Better)
  v5.10 Git Oct23: 69.59 (SE +/- 0.13, N = 3; Min: 69.34 / Max: 69.79; MIN: 65.36 / MAX: 81.27)
  v5.9.1: 69.60 (SE +/- 0.04, N = 3; Min: 69.53 / Max: 69.68; MIN: 65.61 / MAX: 81.18)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test measures the RSA 4096-bit performance of OpenSSL. Learn more via the OpenBenchmarking.org test page.
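As a rough illustration of a "signs per second" figure, the sketch below times RSA-4096 signatures through the Python cryptography package, which is itself backed by OpenSSL. It is not the OpenSSL benchmark this test profile runs, so absolute numbers will differ.

    import time
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    # Generate a throwaway RSA-4096 key, then time repeated signatures.
    key = rsa.generate_private_key(public_exponent=65537, key_size=4096)
    message = b"benchmark payload"

    count = 200
    start = time.perf_counter()
    for _ in range(count):
        key.sign(message, padding.PKCS1v15(), hashes.SHA256())
    elapsed = time.perf_counter() - start
    print(f"{count / elapsed:.1f} signs per second")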

OpenSSL 1.1.1 - RSA 4096-bit Performance (Signs Per Second, More Is Better)
  v5.10 Git Oct23: 921.0 (SE +/- 9.40, N = 13; Min: 908.2 / Max: 1033.5)
  v5.9.1: 920.9 (SE +/- 9.19, N = 14; Min: 902 / Max: 1039.2)
  1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

Waifu2x-NCNN Vulkan

Waifu2x-NCNN is an NCNN neural network implementation of the Waifu2x converter project accelerated using the Vulkan API. NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. This test profile times how long it takes to increase the resolution of a sample image with Vulkan. Learn more via the OpenBenchmarking.org test page.

Waifu2x-NCNN Vulkan 20200818 - Scale: 2x - Denoise: 3 - TAA: Yes (Seconds, Fewer Is Better)
  v5.10 Git Oct23: 27.23 (SE +/- 0.00, N = 3; Min: 27.22 / Max: 27.23)
  v5.9.1: 27.23 (SE +/- 0.02, N = 3; Min: 27.2 / Max: 27.27)

oneAPI Level Zero Tests

This test benchmarks the collection of Intel oneAPI Level Zero Tests. Learn more via the OpenBenchmarking.org test page.

oneAPI Level Zero Tests - Test: Peak Single-Precision Compute (GB/s, More Is Better)
  v5.10 Git Oct23: 1219.52 (SE +/- 0.00, N = 3; Min: 1219.52 / Max: 1219.53)
  v5.9.1: 1219.55 (SE +/- 0.01, N = 3; Min: 1219.54 / Max: 1219.57)
  1. (CXX) g++ options: -ldl -pthread

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.
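The cryptsetup benchmark reports how many key-derivation iterations the CPU sustains per second. The sketch below shows the same idea with Python's built-in PBKDF2-HMAC; hashlib offers no Whirlpool PRF, so SHA-512 stands in for the PBKDF2-whirlpool figure reported here.

    import hashlib
    import time

    iterations = 100_000
    start = time.perf_counter()
    # One PBKDF2 derivation performing `iterations` HMAC rounds.
    hashlib.pbkdf2_hmac("sha512", b"passphrase", b"salt" * 4, iterations)
    elapsed = time.perf_counter() - start
    print(f"{iterations / elapsed:,.0f} PBKDF2-HMAC-SHA512 iterations per second")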

Cryptsetup 2.3.3 - PBKDF2-whirlpool (Iterations Per Second, More Is Better)
  v5.10 Git Oct23: 793177 (SE +/- 1202.00, N = 3)

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: Maze Solver - Browser: Google Chrome (Seconds, Fewer Is Better)
  v5.10 Git Oct23: 4.7 (SE +/- 0.03, N = 3; Min: 4.6 / Max: 4.7)
  v5.9.1: 4.7 (SE +/- 0.00, N = 3; Min: 4.7 / Max: 4.7)
  1. chrome 86.0.4240.111

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: Vulkan GPU - Model: blazeface (ms, Fewer Is Better)
  v5.10 Git Oct23: 0.97 (SE +/- 0.01, N = 3; Min: 0.94 / Max: 0.99; MIN: 0.88 / MAX: 1.11)
  v5.9.1: 0.97 (SE +/- 0.02, N = 3; Min: 0.94 / Max: 0.99; MIN: 0.88 / MAX: 1.12)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: mnasnet (ms, Fewer Is Better)
  v5.10 Git Oct23: 7.73 (SE +/- 0.04, N = 3; Min: 7.65 / Max: 7.79; MIN: 7.34 / MAX: 19.51)
  v5.9.1: 7.73 (SE +/- 0.04, N = 3; Min: 7.67 / Max: 7.8; MIN: 7.58 / MAX: 19.03)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

LevelDB

LevelDB is a key-value storage library developed by Google that supports Snappy data compression and other modern features. Learn more via the OpenBenchmarking.org test page.
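For a feel of the key-value API being exercised, the sketch below uses the third-party plyvel binding to LevelDB, an assumption made purely for illustration; the benchmark itself drives LevelDB's own C++ benchmark workloads shown here (Fill Sync, Overwrite, Seek Random).

    import plyvel

    # Open (or create) a throwaway database directory.
    db = plyvel.DB("/tmp/leveldb-demo", create_if_missing=True)
    db.put(b"kernel", b"v5.10")          # overwrite-style write
    print(db.get(b"kernel"))             # b'v5.10'
    for key, value in db.iterator():     # iterate keys in sorted order
        print(key, value)
    db.close()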

LevelDB 1.22 - Benchmark: Fill Sync (MB/s, More Is Better)
  v5.10 Git Oct23: 0.1 (SE +/- 0.00, N = 3; Min: 0.1 / Max: 0.1)
  v5.9.1: 0.1 (SE +/- 0.00, N = 3; Min: 0.1 / Max: 0.1)
  1. (CXX) g++ options: -O3 -lsnappy -lpthread

OSPray

Intel OSPray is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray 1.8.5 - Demo: San Miguel - Renderer: Path Tracer (FPS, More Is Better)
  v5.10 Git Oct23: 0.46 (SE +/- 0.00, N = 3; Min: 0.46 / Max: 0.46; MIN: 0.45)
  v5.9.1: 0.46 (SE +/- 0.00, N = 3; Min: 0.46 / Max: 0.46; MIN: 0.45)

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

Facebook RocksDB 6.3.6 - Test: Random Fill Sync (Op/s, More Is Better)
  v5.10 Git Oct23: 906 (SE +/- 26.33, N = 14; Min: 622 / Max: 989)
  v5.9.1: 897 (SE +/- 29.07, N = 13; Min: 596 / Max: 990)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fno-builtin-memcmp -fno-rtti -rdynamic -lpthread

Facebook RocksDB 6.3.6 - Test: Sequential Fill (Op/s, More Is Better)
  v5.10 Git Oct23: 762838 (SE +/- 25204.67, N = 15; Min: 610637 / Max: 907685)
  v5.9.1: 767891 (SE +/- 30877.90, N = 12; Min: 573255 / Max: 882786)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fno-builtin-memcmp -fno-rtti -rdynamic -lpthread

Facebook RocksDB 6.3.6 - Test: Random Fill (Op/s, More Is Better)
  v5.10 Git Oct23: 366684 (SE +/- 14798.54, N = 15; Min: 287620 / Max: 505984)
  v5.9.1: 308103 (SE +/- 17977.32, N = 15; Min: 243762 / Max: 520700)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fno-builtin-memcmp -fno-rtti -rdynamic -lpthread

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: Vulkan GPU - Model: mnasnet (ms, Fewer Is Better)
  v5.10 Git Oct23: 4.81 (SE +/- 0.02, N = 3; Min: 4.79 / Max: 4.84; MIN: 4.28 / MAX: 5.17)
  v5.9.1: 5.11 (SE +/- 0.21, N = 3; Min: 4.87 / Max: 5.53; MIN: 4.3 / MAX: 7.06)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, Fewer Is Better)
  v5.10 Git Oct23: 6.35 (SE +/- 0.91, N = 3; Min: 4.54 / Max: 7.27; MIN: 4.48 / MAX: 11.58)
  v5.9.1: 6.36 (SE +/- 0.89, N = 3; Min: 4.58 / Max: 7.27; MIN: 4.47 / MAX: 13.1)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, Fewer Is Better)
  v5.10 Git Oct23: 6.85 (SE +/- 0.73, N = 3; Min: 5.45 / Max: 7.91; MIN: 5.34 / MAX: 12.7)
  v5.9.1: 7.01 (SE +/- 0.79, N = 3; Min: 5.44 / Max: 7.99; MIN: 5.33 / MAX: 12.18)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07 - Test: MMAP (Bogo Ops/s, More Is Better)
  v5.10 Git Oct23: 41.46 (SE +/- 0.67, N = 3; Min: 40.76 / Max: 42.79)
  v5.9.1: 23.07 (SE +/- 0.78, N = 15; Min: 18.9 / Max: 31.2)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lcrypt -lrt -lz -ldl -lpthread -lc

LevelDB

LevelDB is a key-value storage library developed by Google that supports Snappy data compression and other modern features. Learn more via the OpenBenchmarking.org test page.

LevelDB 1.22 - Benchmark: Seek Random (Microseconds Per Op, Fewer Is Better)
  v5.10 Git Oct23: 5.559 (SE +/- 0.122, N = 15; Min: 4.43 / Max: 5.88)
  v5.9.1: 5.749 (SE +/- 0.125, N = 15; Min: 4.57 / Max: 6.16)
  1. (CXX) g++ options: -O3 -lsnappy -lpthread

LevelDB 1.22 - Benchmark: Overwrite (Microseconds Per Op, Fewer Is Better)
  v5.10 Git Oct23: 26.50 (SE +/- 1.70, N = 12; Min: 21.63 / Max: 37.76)
  v5.9.1: 21.82 (SE +/- 0.09, N = 3; Min: 21.73 / Max: 22)
  1. (CXX) g++ options: -O3 -lsnappy -lpthread

LevelDB 1.22 - Benchmark: Overwrite (MB/s, More Is Better)
  v5.10 Git Oct23: 34.8 (SE +/- 1.96, N = 12; Min: 23.4 / Max: 40.9)
  v5.9.1: 40.5 (SE +/- 0.17, N = 3; Min: 40.2 / Max: 40.7)
  1. (CXX) g++ options: -O3 -lsnappy -lpthread

LevelDB 1.22 - Benchmark: Fill Sync (Microseconds Per Op, Fewer Is Better)
  v5.10 Git Oct23: 9507.38 (SE +/- 1780.01, N = 3; Min: 7611.33 / Max: 13064.83)
  v5.9.1: 8515.06 (SE +/- 87.54, N = 3; Min: 8393.47 / Max: 8684.95)
  1. (CXX) g++ options: -O3 -lsnappy -lpthread

eSpeak-NG Speech Engine

This test times how long it takes the eSpeak speech synthesizer to read Project Gutenberg's The Outline of Science and output to a WAV file. This test profile is now tracking the eSpeak-NG version of eSpeak. Learn more via the OpenBenchmarking.org test page.
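Timing such a synthesis by hand looks roughly like the sketch below, which shells out to the espeak-ng command with an input text file (-f) and a WAV output (-w). The input path is a placeholder rather than the Project Gutenberg text this profile reads.

    import subprocess
    import time

    start = time.perf_counter()
    # -f: read text from a file, -w: write synthesized audio to a WAV file.
    subprocess.run(["espeak-ng", "-f", "chapter.txt", "-w", "out.wav"], check=True)
    print(f"synthesis took {time.perf_counter() - start:.2f} seconds")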

eSpeak-NG Speech Engine 20200907 - Text-To-Speech Synthesis (Seconds, Fewer Is Better)
  v5.10 Git Oct23: 46.62 (SE +/- 2.27, N = 16; Min: 34.39 / Max: 65.22)
  v5.9.1: 35.07 (SE +/- 0.79, N = 16; Min: 31.49 / Max: 43.16)
  1. (CC) gcc options: -O2 -std=c99

Sockperf

This is a network socket API performance benchmark. Learn more via the OpenBenchmarking.org test page.
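Sockperf measures socket latency with its own client/server pair; the self-contained sketch below only illustrates the underlying pattern, a UDP ping-pong over loopback with the average round trip reported in microseconds.

    import socket
    import threading
    import time

    server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    server.bind(("127.0.0.1", 0))            # pick a free ephemeral port
    port = server.getsockname()[1]

    rounds = 1000

    def echo(n):
        # Echo each datagram straight back to the sender.
        for _ in range(n):
            data, peer = server.recvfrom(64)
            server.sendto(data, peer)

    threading.Thread(target=echo, args=(rounds,), daemon=True).start()

    client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    start = time.perf_counter()
    for _ in range(rounds):
        client.sendto(b"ping", ("127.0.0.1", port))
        client.recvfrom(64)
    elapsed = time.perf_counter() - start
    print(f"average round trip: {elapsed / rounds * 1e6:.1f} usec")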

Sockperf 3.4 - Test: Latency Under Load (usec, Fewer Is Better)
  v5.10 Git Oct23: 24.87 (SE +/- 1.43, N = 25; Min: 11.14 / Max: 30.48)
  v5.9.1: 25.74 (SE +/- 1.26, N = 25; Min: 7.91 / Max: 30.04)
  1. (CXX) g++ options: --param -O3 -rdynamic -ldl -lpthread

IOR

IOR is a parallel I/O storage benchmark. Learn more via the OpenBenchmarking.org test page.

IOR 3.2.1 - Write Test (MB/s, More Is Better)
  v5.10 Git Oct23: 91.02 (SE +/- 2.17, N = 15; Min: 76.3 / Max: 108.98; MIN: 17.47 / MAX: 147.46)
  v5.9.1: 74.15 (SE +/- 1.21, N = 3; Min: 72.61 / Max: 76.54; MIN: 21.68 / MAX: 122.31)
  1. (CC) gcc options: -O2 -lm -pthread -lmpi

135 Results Shown

Stress-NG
Hackbench
OSBench
oneAPI Level Zero Tests
Stress-NG
Cryptsetup
Hackbench
SQLite
Hackbench
ctx_clock
NCNN
LevelDB
PostMark
dav1d
OSBench
perf-bench
Tesseract
Sockperf
LeelaChessZero
LevelDB
ET: Legacy
oneAPI Level Zero Tests
perf-bench:
  Memcpy 1MB
  Sched Pipe
Selenium:
  StyleBench - Google Chrome
  Kraken - Google Chrome
Stress-NG
OSBench
Waifu2x-NCNN Vulkan
LevelDB
dav1d
Ethr
Stress-NG
perf-bench
LevelDB
TensorFlow Lite
Cryptsetup
Ethr
LevelDB
OSPray
t-test1
Ethr
LeelaChessZero
NCNN
dav1d
Stress-NG
oneAPI Level Zero Tests
WireGuard + Linux Networking Stack Stress Test
Crypto++
LevelDB
NCNN:
  Vulkan GPU - resnet18
  Vulkan GPU-v3-v3 - mobilenet-v3
perf-bench
TensorFlow Lite
SQLite Speedtest
NCNN
Intel Open Image Denoise
LevelDB
TensorFlow Lite
NCNN
t-test1
Sockperf
NCNN
Facebook RocksDB
Stress-NG
NCNN
OSBench
oneAPI Level Zero Tests
NCNN
oneAPI Level Zero Tests:
  Host-To-Device Bandwidth:
    GB/s
    usec
  Device-To-Host Bandwidth:
    usec
    GB/s
NCNN
TensorFlow Lite
NCNN
ET: Legacy
NCNN:
  CPU - squeezenet
  Vulkan GPU-v2-v2 - mobilenet-v2
  Vulkan GPU - alexnet
  Vulkan GPU - mobilenet
FFTE
perf-bench
Stress-NG
TensorFlow Lite
RealSR-NCNN:
  4x - No
  4x - Yes
GLmark2
OSBench
Selenium
Facebook RocksDB
NCNN:
  CPU - alexnet
  Vulkan GPU - googlenet
  CPU - resnet18
  CPU - efficientnet-b0
Xonotic
dav1d
IOR
TensorFlow Lite
NCNN:
  Vulkan GPU - yolov4-tiny
  Vulkan GPU - resnet50
oneAPI Level Zero Tests:
  Peak Float16 Global Memory Bandwidth
  Peak Half-Precision Compute
perf-bench
Stress-NG
Xonotic
Selenium
NCNN
Selenium
OpenVKL
Timed Linux Kernel Compilation
NCNN
OpenSSL
Waifu2x-NCNN Vulkan
oneAPI Level Zero Tests
Cryptsetup
Selenium
NCNN:
  Vulkan GPU - blazeface
  CPU - mnasnet
LevelDB
OSPray
Facebook RocksDB:
  Rand Fill Sync
  Seq Fill
  Rand Fill
NCNN:
  Vulkan GPU - mnasnet
  CPU-v3-v3 - mobilenet-v3
  CPU-v2-v2 - mobilenet-v2
Stress-NG
LevelDB:
  Seek Rand
  Overwrite
  Overwrite
  Fill Sync
eSpeak-NG Speech Engine
Sockperf
IOR