Tiger Lake Linux 5.10

Intel Core i7-1165G7 testing with a Dell 0GG9PT (1.0.3 BIOS) and Intel UHD 3GB on Ubuntu 20.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2010255-FI-TIGERLAKE05

The tests in this result file fall within the following categories:

- Web Browsers: 1 test
- C/C++ Compiler Tests: 5 tests
- CPU Massive: 10 tests
- Creator Workloads: 6 tests
- Cryptography: 3 tests
- Database Test Suite: 4 tests
- Desktop Graphics: 3 tests
- Disk Test Suite: 2 tests
- Game Development: 2 tests
- HPC - High Performance Computing: 4 tests
- Common Kernel Benchmarks: 13 tests
- Machine Learning: 3 tests
- Multi-Core: 6 tests
- Networking Test Suite: 2 tests
- NVIDIA GPU Compute: 4 tests
- Intel oneAPI: 4 tests
- Programmer / Developer System Benchmarks: 3 tests
- Python Tests: 2 tests
- Server: 5 tests
- Server CPU Tests: 6 tests
- Vulkan Compute: 3 tests


Result runs:
  v5.9.1: October 24 2020 (test duration: 9 Hours, 31 Minutes)
  v5.10 Git Oct23: October 25 2020 (test duration: 9 Hours, 15 Minutes)


Tiger Lake Linux 5.10 - OpenBenchmarking.org - Phoronix Test Suite

  Processor: Intel Core i7-1165G7 @ 4.70GHz (4 Cores / 8 Threads)
  Motherboard: Dell 0GG9PT (1.0.3 BIOS)
  Chipset: Intel Tiger Lake-LP
  Memory: 16GB
  Disk: Kioxia KBG40ZNS256G NVMe 256GB
  Graphics: Intel UHD 3GB (1300MHz)
  Audio: Realtek ALC289
  Network: Intel Wi-Fi 6 AX201
  OS: Ubuntu 20.10
  Kernels: 5.9.1-050901-generic (x86_64) / 5.9.0-050900daily20201023-generic (x86_64)
  Desktop: GNOME Shell 3.38.1
  Display Server: X Server 1.20.9
  Display Driver: modesetting 1.20.9
  OpenGL: 4.6 Mesa 20.2.1
  OpenCL: OpenCL 3.0
  Vulkan: 1.2.145
  Compiler: GCC 10.2.0
  File-System: ext4
  Screen Resolution: 1920x1200

Tiger Lake Linux 5.10 Benchmarks - System Logs:
  - Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
  - Disk details, v5.9.1: NONE / errors=remount-ro,relatime,rw
  - Disk details, v5.10 Git Oct23: NONE / errors=remount-ro,no_fc,relatime,rw
  - Processor details: Scaling Governor: intel_pstate powersave - CPU Microcode: 0x60 - Thermald 2.3
  - Python: 3.8.6
  - Security: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected

[Overview chart: per-test percentage differences, v5.9.1 vs. v5.10 Git Oct23. Largest spreads: Stress-NG MMAP 79.7%, eSpeak-NG Speech Engine Text-To-Speech Synthesis 32.9%, IOR Write Test 22.8%, LevelDB Overwrite 21.4%, Facebook RocksDB Rand Fill 19%, LevelDB Overwrite 16.4%, LevelDB Fill Sync 11.7%, Stress-NG SENDFILE 11.3%, Hackbench 32 - Process 7.5%, NCNN Vulkan GPU - mnasnet 6.2%, OSBench Create Processes 4.4%, Sockperf Latency Under Load 3.5%, LevelDB Seek Rand 3.4%, oneAPI Level Zero Peak Kernel Launch Latency 3.3%, Stress-NG Context Switching 3.2%, Hackbench 16 - Thread 3.1%, SQLite (1 thread) 3.1%, Hackbench 16 - Process 2.9%, ctx_clock Context Switch Time 2.3%, NCNN CPU-v2-v2 mobilenet-v2 2.3%, NCNN CPU shufflenet-v2 2.3%, LevelDB Rand Delete 2.2%, PostMark Disk Transaction Performance 2.2%, dav1d Summer Nature 4K 2.1%, OSBench Create Files 2%.]

[Combined results table: all test results for v5.9.1 and v5.10 Git Oct23 in a single summary view. The per-test sections that follow present the same data with standard error and min/max detail.]

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07, Test: SENDFILE (Bogo Ops/s, more is better):
  v5.9.1: 57136.11 (SE +/- 715.05, N = 3; Min: 56409.06 / Max: 58566.14)
  v5.10 Git Oct23: 51315.95 (SE +/- 667.09, N = 3; Min: 50554.68 / Max: 52645.48)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lcrypt -lrt -lz -ldl -lpthread -lc

Hackbench

This is a benchmark of Hackbench, a test of the Linux kernel scheduler. Learn more via the OpenBenchmarking.org test page.
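As a rough illustration of the kind of load Hackbench generates, the sketch below (in Python, with arbitrary pair, message-count, and payload-size choices; it is not the actual hackbench C implementation) forks pairs of sender and receiver processes that push small messages through Unix socket pairs and reports the total wall time:

```python
import os, socket, time

PAIRS = 4              # sender/receiver process pairs (hackbench scales this much higher)
MESSAGES = 10_000      # messages pushed through each pair
PAYLOAD = b"x" * 100   # small fixed-size message

def sender(sock):
    for _ in range(MESSAGES):
        sock.send(PAYLOAD)

def receiver(sock):
    for _ in range(MESSAGES):
        sock.recv(len(PAYLOAD))

start = time.perf_counter()
children = []
for _ in range(PAIRS):
    # Datagram socket pairs preserve message boundaries, so each recv() gets one message.
    a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_DGRAM)
    for sock, role in ((a, sender), (b, receiver)):
        pid = os.fork()
        if pid == 0:            # child process runs one side of the pair, then exits
            role(sock)
            os._exit(0)
        children.append(pid)
    a.close(); b.close()

for pid in children:
    os.waitpid(pid, 0)
print(f"{time.perf_counter() - start:.2f} seconds")
```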

Hackbench, Count: 32 - Type: Process (Seconds, fewer is better):
  v5.9.1: 236.45 (SE +/- 1.56, N = 3; Min: 233.56 / Max: 238.9)
  v5.10 Git Oct23: 254.16 (SE +/- 0.93, N = 3; Min: 252.55 / Max: 255.76)
  1. (CC) gcc options: -lpthread

OSBench

OSBench is a collection of micro-benchmarks for measuring operating system primitives such as the time to create threads and processes, launch programs, create files, and allocate memory. Learn more via the OpenBenchmarking.org test page.
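For a sense of what the "Create Processes" numbers below measure, a minimal Python sketch of the same idea (repeatedly fork a child, wait for it, and average the cost per event) might look like this; the iteration count is arbitrary, and interpreter overhead means the absolute figures will not match OSBench's C implementation:

```python
import os, time

N = 1000  # processes to create; an arbitrary choice for this sketch

start = time.perf_counter()
for _ in range(N):
    pid = os.fork()
    if pid == 0:          # child: exit immediately, only creation/teardown is timed
        os._exit(0)
    os.waitpid(pid, 0)    # parent: reap the child before creating the next one
elapsed = time.perf_counter() - start

print(f"{elapsed / N * 1e6:.2f} us per event")
```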

OSBench, Test: Create Processes (us Per Event, fewer is better):
  v5.9.1: 28.80 (SE +/- 0.42, N = 15; Min: 25.4 / Max: 30.95)
  v5.10 Git Oct23: 30.06 (SE +/- 0.33, N = 7; Min: 28.42 / Max: 31.03)
  1. (CC) gcc options: -lm

oneAPI Level Zero Tests

This is benchmarking the collection of Intel oneAPI Level Zero Tests. Learn more via the OpenBenchmarking.org test page.

oneAPI Level Zero Tests, Test: Peak Kernel Launch Latency (us, fewer is better):
  v5.9.1: 21.75 (SE +/- 0.04, N = 3; Min: 21.69 / Max: 21.83)
  v5.10 Git Oct23: 21.05 (SE +/- 0.07, N = 3; Min: 20.92 / Max: 21.15)
  1. (CXX) g++ options: -ldl -pthread

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07, Test: Context Switching (Bogo Ops/s, more is better):
  v5.9.1: 1450925.64 (SE +/- 19047.64, N = 3; Min: 1425321.38 / Max: 1488156.33)
  v5.10 Git Oct23: 1405668.88 (SE +/- 24167.64, N = 3; Min: 1367610.28 / Max: 1450502.86)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lcrypt -lrt -lz -ldl -lpthread -lc

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup 2.3.3, PBKDF2-sha512 (Iterations Per Second, more is better):
  v5.9.1: 1967369 (SE +/- 7703.88, N = 3; Min: 1956298 / Max: 1982185)
  v5.10 Git Oct23: 2028208 (SE +/- 3930.67, N = 3; Min: 2024277 / Max: 2036069)

Hackbench

This is a benchmark of Hackbench, a test of the Linux kernel scheduler. Learn more via the OpenBenchmarking.org test page.

Hackbench, Count: 16 - Type: Thread (Seconds, fewer is better):
  v5.9.1: 113.26 (SE +/- 1.27, N = 3; Min: 110.78 / Max: 114.97)
  v5.10 Git Oct23: 116.76 (SE +/- 1.26, N = 3; Min: 114.26 / Max: 118.15)
  1. (CC) gcc options: -lpthread

SQLite

This is a simple benchmark of SQLite. At present this test profile just measures the time to perform a pre-defined number of insertions on an indexed database. Learn more via the OpenBenchmarking.org test page.
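The measurement idea, timing a fixed number of inserts into an indexed table, can be sketched with Python's built-in sqlite3 module; the schema, row count, and transaction handling here are illustrative choices, not the test profile's actual workload:

```python
import sqlite3, time

N = 100_000  # number of insertions; the test profile uses its own predefined count

conn = sqlite3.connect("bench.db")
conn.execute("CREATE TABLE IF NOT EXISTS t (id INTEGER PRIMARY KEY, val TEXT)")
conn.execute("CREATE INDEX IF NOT EXISTS idx_val ON t (val)")  # indexed column

start = time.perf_counter()
with conn:  # one transaction; committing per row would instead stress fsync behavior
    conn.executemany(
        "INSERT INTO t (val) VALUES (?)",
        ((f"row-{i}",) for i in range(N)),
    )
print(f"{N} insertions in {time.perf_counter() - start:.2f} seconds")
conn.close()
```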

SQLite 3.30.1, Threads / Copies: 1 (Seconds, fewer is better):
  v5.9.1: 67.72 (SE +/- 0.64, N = 10; Min: 66.03 / Max: 73)
  v5.10 Git Oct23: 65.71 (SE +/- 0.29, N = 3; Min: 65.34 / Max: 66.29)
  1. (CC) gcc options: -O2 -lz -lm -ldl -lpthread

Hackbench

This is a benchmark of Hackbench, a test of the Linux kernel scheduler. Learn more via the OpenBenchmarking.org test page.

Hackbench, Count: 16 - Type: Process (Seconds, fewer is better):
  v5.9.1: 115.01 (SE +/- 1.21, N = 3; Min: 112.62 / Max: 116.56)
  v5.10 Git Oct23: 118.33 (SE +/- 0.46, N = 3; Min: 117.46 / Max: 119.01)
  1. (CC) gcc options: -lpthread

ctx_clock

Ctx_clock is a simple test program to measure the context switch time in clock cycles. Learn more via the OpenBenchmarking.org test page.
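ctx_clock itself is a small C program that counts cycles, but the underlying technique, forcing two processes to hand control back and forth, can be sketched in Python. This version reports nanoseconds from a pipe ping-pong rather than TSC clock cycles, so it only illustrates the approach:

```python
import os, time

ROUNDS = 20_000  # parent/child round trips; each one forces two context switches

# Two pipes make the parent and child strictly take turns.
p2c_r, p2c_w = os.pipe()
c2p_r, c2p_w = os.pipe()

if os.fork() == 0:                 # child: echo one byte per round
    for _ in range(ROUNDS):
        os.read(p2c_r, 1)
        os.write(c2p_w, b"x")
    os._exit(0)

start = time.perf_counter_ns()
for _ in range(ROUNDS):
    os.write(p2c_w, b"x")
    os.read(c2p_r, 1)
elapsed = time.perf_counter_ns() - start
os.wait()

print(f"~{elapsed / (2 * ROUNDS):.0f} ns per switch (includes pipe syscall overhead)")
```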

ctx_clock, Context Switch Time (Clocks, fewer is better):
  v5.9.1: 131
  v5.10 Git Oct23: 128 (SE +/- 1.15, N = 3; Min: 126 / Max: 130)

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916, Target: CPU - Model: shufflenet-v2 (ms, fewer is better):
  v5.9.1: 5.38 (SE +/- 0.02, N = 3; Min: 5.35 / Max: 5.41; MIN: 5.27 / MAX: 8.74)
  v5.10 Git Oct23: 5.26 (SE +/- 0.14, N = 3; Min: 4.97 / Max: 5.42; MIN: 3.67 / MAX: 8.96)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

LevelDB

LevelDB is a key-value storage library developed by Google that supports making use of Snappy for data compression and has other modern features. Learn more via the OpenBenchmarking.org test page.

LevelDB 1.22, Benchmark: Random Delete (Microseconds Per Op, fewer is better):
  v5.9.1: 24.12 (SE +/- 0.19, N = 15; Min: 21.51 / Max: 24.56)
  v5.10 Git Oct23: 24.65 (SE +/- 0.31, N = 12; Min: 21.23 / Max: 25.27)
  1. (CXX) g++ options: -O3 -lsnappy -lpthread

PostMark

This is a test of NetApp's PostMark benchmark designed to simulate small-file testing similar to the tasks endured by web and mail servers. This test profile will set PostMark to perform 25,000 transactions with 500 files simultaneously with the file sizes ranging between 5 and 512 kilobytes. Learn more via the OpenBenchmarking.org test page.
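A loose, hypothetical re-creation of that workload in Python is shown below; it builds a pool of 5 KB to 512 KB files and then runs transactions that mix reads and appends with creates and deletes, reporting transactions per second. The ratios and payload sizes are simplifications of what PostMark actually does:

```python
import os, random, time

FILES, TRANSACTIONS = 500, 25_000        # matches the counts described above
LOW, HIGH = 5 * 1024, 512 * 1024         # 5 KB to 512 KB file sizes
random.seed(0)
os.makedirs("pm_work", exist_ok=True)
paths = []

def create(i):
    path = os.path.join("pm_work", f"f{i}")
    with open(path, "wb") as f:
        f.write(os.urandom(random.randint(LOW, HIGH)))
    paths.append(path)

for i in range(FILES):                   # build the initial file set
    create(i)

start = time.perf_counter()
for t in range(TRANSACTIONS):
    target = random.choice(paths)
    if random.random() < 0.5:            # read or append an existing file
        with open(target, "rb") as f:
            f.read()
    else:
        with open(target, "ab") as f:
            f.write(os.urandom(4096))
    if random.random() < 0.5 or len(paths) < 2:   # paired create or delete
        create(FILES + t)
    else:
        os.remove(paths.pop(random.randrange(len(paths))))
elapsed = time.perf_counter() - start
print(f"{TRANSACTIONS / elapsed:.0f} transactions per second")
```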

PostMark 1.51, Disk Transaction Performance (TPS, more is better):
  v5.9.1: 8246 (SE +/- 92.08, N = 6; Min: 8064 / Max: 8620)
  v5.10 Git Oct23: 8068 (SE +/- 106.38, N = 4; Min: 7812 / Max: 8333)
  1. (CC) gcc options: -O3

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.7.0, Video Input: Summer Nature 4K (FPS, more is better):
  v5.9.1: 71.09 (SE +/- 0.74, N = 8; Min: 69.43 / Max: 76.15; MIN: 60.36 / MAX: 123.58)
  v5.10 Git Oct23: 72.57 (SE +/- 0.84, N = 6; Min: 71.48 / Max: 76.78; MIN: 61.02 / MAX: 123.41)
  1. (CC) gcc options: -pthread

OSBench

OSBench is a collection of micro-benchmarks for measuring operating system primitives like time to create threads/processes, launching programs, creating files, and memory allocation. Learn more via the OpenBenchmarking.org test page.

OSBench, Test: Create Files (us Per Event, fewer is better):
  v5.9.1: 10.06 (SE +/- 0.05, N = 3; Min: 9.97 / Max: 10.15)
  v5.10 Git Oct23: 10.26 (SE +/- 0.02, N = 3; Min: 10.23 / Max: 10.29)
  1. (CC) gcc options: -lm

perf-bench

This test profile is used for running Linux perf-bench, the benchmark support within the Linux kernel's perf tool. Learn more via the OpenBenchmarking.org test page.

perf-bench, Benchmark: Epoll Wait (ops/sec, more is better):
  v5.9.1: 193394 (SE +/- 2438.14, N = 13; Min: 187628 / Max: 222061)
  v5.10 Git Oct23: 197137 (SE +/- 2028.32, N = 14; Min: 192241 / Max: 223191)
  1. (CC) gcc options: -O6 -ggdb3 -funwind-tables -std=gnu99 -Xlinker -lpthread -lrt -lm -ldl -lelf -lcrypto -lslang -lz -llzma -lnuma

Tesseract

Tesseract is a fork of Cube 2 Sauerbraten with numerous graphics and game-play improvements. Tesseract has been in development since 2012 while its first release happened in May of 2014. Learn more via the OpenBenchmarking.org test page.

Tesseract 2014-05-12, Resolution: 1920 x 1200 (Frames Per Second, more is better):
  v5.9.1: 135.33 (SE +/- 0.17, N = 3; Min: 135.01 / Max: 135.59)
  v5.10 Git Oct23: 137.70 (SE +/- 1.75, N = 3; Min: 135.22 / Max: 141.08)

Sockperf

This is a network socket API performance benchmark. Learn more via the OpenBenchmarking.org test page.
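Sockperf's ping-pong mode boils down to bouncing a small message between a client and an echo server and averaging the round-trip time; a minimal Python sketch of that idea over loopback TCP (the port, payload size, and round count are arbitrary, and this is not sockperf's own harness) looks like this:

```python
import os, socket, time

PORT, ROUNDS, MSG = 15001, 10_000, b"x" * 14   # arbitrary port and 14-byte payload

if os.fork() == 0:                             # child: simple echo server
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    while (data := conn.recv(len(MSG))):
        conn.sendall(data)
    os._exit(0)

time.sleep(0.2)                                # crude wait for the listener to come up
cli = socket.create_connection(("127.0.0.1", PORT))
cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

start = time.perf_counter_ns()
for _ in range(ROUNDS):
    cli.sendall(MSG)
    cli.recv(len(MSG))
elapsed = time.perf_counter_ns() - start
cli.close()
print(f"average round trip: {elapsed / ROUNDS / 1000:.2f} usec")
```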

Sockperf 3.4, Test: Throughput (Messages Per Second, more is better):
  v5.9.1: 759157 (SE +/- 2920.66, N = 5; Min: 752054 / Max: 766769)
  v5.10 Git Oct23: 771964 (SE +/- 4973.96, N = 5; Min: 755027 / Max: 785778)
  1. (CXX) g++ options: --param -O3 -rdynamic -ldl -lpthread

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.26, Backend: BLAS (Nodes Per Second, more is better):
  v5.9.1: 122
  v5.10 Git Oct23: 124
  1. (CXX) g++ options: -flto -pthread

LevelDB

LevelDB is a key-value storage library developed by Google that supports making use of Snappy for data compression and has other modern features. Learn more via the OpenBenchmarking.org test page.

LevelDB 1.22, Benchmark: Hot Read (Microseconds Per Op, fewer is better):
  v5.9.1: 3.522 (SE +/- 0.014, N = 3; Min: 3.5 / Max: 3.54)
  v5.10 Git Oct23: 3.574 (SE +/- 0.038, N = 3; Min: 3.52 / Max: 3.65)
  1. (CXX) g++ options: -O3 -lsnappy -lpthread

ET: Legacy

ETLegacy is an open-source engine evolution of Wolfenstein: Enemy Territory, a World War II era first person shooter that was released for free by Splash Damage using the id Tech 3 engine. Learn more via the OpenBenchmarking.org test page.

ET: Legacy 2.75, Renderer: Renderer2 - Resolution: 1920 x 1200 (Frames Per Second, more is better):
  v5.9.1: 133.9 (SE +/- 0.29, N = 3; Min: 133.4 / Max: 134.4)
  v5.10 Git Oct23: 135.8 (SE +/- 1.45, N = 3; Min: 134.3 / Max: 138.7)

oneAPI Level Zero Tests

This is benchmarking the collection of Intel oneAPI Level Zero Tests. Learn more via the OpenBenchmarking.org test page.

oneAPI Level Zero Tests, Test: Host-To-Device-To-Host Image Copy (GB/s, more is better):
  v5.9.1: 21.79 (SE +/- 0.04, N = 3; Min: 21.71 / Max: 21.86)
  v5.10 Git Oct23: 21.48 (SE +/- 0.09, N = 3; Min: 21.35 / Max: 21.64)
  1. (CXX) g++ options: -ldl -pthread

perf-bench

This test profile is used for running Linux perf-bench, the benchmark support within the Linux kernel's perf tool. Learn more via the OpenBenchmarking.org test page.

perf-bench, Benchmark: Memcpy 1MB (GB/sec, more is better):
  v5.9.1: 26.71 (SE +/- 0.28, N = 8; Min: 25.87 / Max: 28.5)
  v5.10 Git Oct23: 26.35 (SE +/- 0.22, N = 12; Min: 25.64 / Max: 27.97)
  1. (CC) gcc options: -O6 -ggdb3 -funwind-tables -std=gnu99 -Xlinker -lpthread -lrt -lm -ldl -lelf -lcrypto -lslang -lz -llzma -lnuma

perf-bench, Benchmark: Sched Pipe (ops/sec, more is better):
  v5.9.1: 232497 (SE +/- 1878.81, N = 15; Min: 226402 / Max: 253469)
  v5.10 Git Oct23: 235613 (SE +/- 1754.35, N = 15; Min: 231528 / Max: 256633)
  1. (CC) gcc options: -O6 -ggdb3 -funwind-tables -std=gnu99 -Xlinker -lpthread -lrt -lm -ldl -lelf -lcrypto -lslang -lz -llzma -lnuma

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers. Learn more via the OpenBenchmarking.org test page.

Selenium, Benchmark: StyleBench - Browser: Google Chrome (Runs / Minute, more is better):
  v5.9.1: 39.7 (SE +/- 0.20, N = 3; Min: 39.4 / Max: 40.1)
  v5.10 Git Oct23: 40.2 (SE +/- 0.48, N = 3; Min: 39.7 / Max: 41.2)
  1. chrome 86.0.4240.111

Selenium, Benchmark: Kraken - Browser: Google Chrome (ms, fewer is better):
  v5.9.1: 668.6 (SE +/- 1.82, N = 3; Min: 665.2 / Max: 671.4)
  v5.10 Git Oct23: 660.3 (SE +/- 1.13, N = 3; Min: 658.9 / Max: 662.5)
  1. chrome 86.0.4240.111

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07, Test: NUMA (Bogo Ops/s, more is better):
  v5.9.1: 93.88 (SE +/- 1.48, N = 3; Min: 91.93 / Max: 96.78)
  v5.10 Git Oct23: 95.02 (SE +/- 1.54, N = 3; Min: 93.37 / Max: 98.1)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lcrypt -lrt -lz -ldl -lpthread -lc

OSBench

OSBench is a collection of micro-benchmarks for measuring operating system primitives like time to create threads/processes, launching programs, creating files, and memory allocation. Learn more via the OpenBenchmarking.org test page.

OSBench, Test: Create Threads (us Per Event, fewer is better):
  v5.9.1: 10.89 (SE +/- 0.11, N = 3; Min: 10.72 / Max: 11.1)
  v5.10 Git Oct23: 11.02 (SE +/- 0.12, N = 3; Min: 10.78 / Max: 11.17)
  1. (CC) gcc options: -lm

Waifu2x-NCNN Vulkan

Waifu2x-NCNN is an NCNN neural network implementation of the Waifu2x converter project, accelerated using the Vulkan API. NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. This test profile times how long it takes to increase the resolution of a sample image with Vulkan. Learn more via the OpenBenchmarking.org test page.

Waifu2x-NCNN Vulkan 20200818, Scale: 2x - Denoise: 3 - TAA: No (Seconds, fewer is better):
  v5.9.1: 4.178 (SE +/- 0.030, N = 3; Min: 4.12 / Max: 4.22)
  v5.10 Git Oct23: 4.130 (SE +/- 0.024, N = 3; Min: 4.09 / Max: 4.17)

LevelDB

LevelDB is a key-value storage library developed by Google that supports making use of Snappy for data compression and has other modern features. Learn more via the OpenBenchmarking.org test page.

LevelDB 1.22, Benchmark: Random Fill (Microseconds Per Op, fewer is better):
  v5.9.1: 22.71 (SE +/- 0.24, N = 8; Min: 22.12 / Max: 24.22)
  v5.10 Git Oct23: 22.45 (SE +/- 0.37, N = 3; Min: 22.03 / Max: 23.19)
  1. (CXX) g++ options: -O3 -lsnappy -lpthread

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.7.0, Video Input: Summer Nature 1080p (FPS, more is better):
  v5.9.1: 301.41 (SE +/- 2.82, N = 13; Min: 294.11 / Max: 333.92; MIN: 237.77 / MAX: 403.4)
  v5.10 Git Oct23: 304.76 (SE +/- 2.52, N = 14; Min: 298.09 / Max: 337.15; MIN: 239.66 / MAX: 402.47)
  1. (CC) gcc options: -pthread

Ethr

Ethr is a cross-platform network performance measurement tool written in Go and developed by Microsoft, capable of testing multiple protocols and different measurements. Learn more via the OpenBenchmarking.org test page.

Ethr 2019-01-02, Server Address: localhost - Protocol: TCP - Test: Latency - Threads: 1 (Microseconds, fewer is better):
  v5.9.1: 9.20 (SE +/- 0.02, N = 3; Min: 9.16 / Max: 9.22; MIN: 8.05 / MAX: 13.47)
  v5.10 Git Oct23: 9.10 (SE +/- 0.02, N = 3; Min: 9.08 / Max: 9.14; MIN: 8.09 / MAX: 18.26)

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07, Test: CPU Cache (Bogo Ops/s, more is better):
  v5.9.1: 25.66 (SE +/- 0.44, N = 3; Min: 25.17 / Max: 26.53)
  v5.10 Git Oct23: 25.94 (SE +/- 0.22, N = 15; Min: 24.47 / Max: 27.97)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lcrypt -lrt -lz -ldl -lpthread -lc

perf-bench

This test profile is used for running Linux perf-bench, the benchmark support within the Linux kernel's perf tool. Learn more via the OpenBenchmarking.org test page.

perf-bench, Benchmark: Futex Hash (ops/sec, more is better):
  v5.9.1: 4006589 (SE +/- 58894.69, N = 4; Min: 3920097 / Max: 4178388)
  v5.10 Git Oct23: 4047848 (SE +/- 59957.22, N = 4; Min: 3982437 / Max: 4227344)
  1. (CC) gcc options: -O6 -ggdb3 -funwind-tables -std=gnu99 -Xlinker -lpthread -lrt -lm -ldl -lelf -lcrypto -lslang -lz -llzma -lnuma

LevelDB

LevelDB is a key-value storage library developed by Google that supports making use of Snappy for data compression and has other modern features. Learn more via the OpenBenchmarking.org test page.

LevelDB 1.22, Benchmark: Random Fill (MB/s, more is better):
  v5.9.1: 39.0 (SE +/- 0.40, N = 8; Min: 36.5 / Max: 40)
  v5.10 Git Oct23: 39.4 (SE +/- 0.64, N = 3; Min: 38.1 / Max: 40.1)
  1. (CXX) g++ options: -O3 -lsnappy -lpthread

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.
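The metric being reported, the average time for a single forward pass of a model, can be reproduced with the TensorFlow Lite Python interpreter roughly as follows; the model path is a placeholder, and the test profile drives its own models and harness rather than this script:

```python
import time
import numpy as np
import tflite_runtime.interpreter as tflite   # or: from tensorflow import lite as tflite

# "mobilenet_v1.tflite" is a placeholder; substitute any .tflite model file.
interpreter = tflite.Interpreter(model_path="mobilenet_v1.tflite", num_threads=4)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]

data = np.zeros(inp["shape"], dtype=inp["dtype"])   # dummy input tensor
timings = []
for _ in range(50):                                  # 50 timed inferences
    interpreter.set_tensor(inp["index"], data)
    start = time.perf_counter()
    interpreter.invoke()
    timings.append((time.perf_counter() - start) * 1e6)

print(f"average inference: {sum(timings) / len(timings):.0f} microseconds")
```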

TensorFlow Lite 2020-08-23, Model: SqueezeNet (Microseconds, fewer is better):
  v5.9.1: 561695 (SE +/- 3535.45, N = 3; Min: 554633 / Max: 565533)
  v5.10 Git Oct23: 556066 (SE +/- 6788.79, N = 3; Min: 542500 / Max: 563328)

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup, PBKDF2-whirlpool (Iterations Per Second, more is better):
  v5.9.1: 770290 (SE +/- 3601.02, N = 3; Min: 764268 / Max: 776722)
  v5.10 Git Oct23: 777888 (SE +/- 2315.33, N = 3; Min: 775573 / Max: 782519)

Ethr

Ethr is a cross-platform network performance measurement tool written in Go and developed by Microsoft, capable of testing multiple protocols and different measurements. Learn more via the OpenBenchmarking.org test page.

Ethr 2019-01-02, Server Address: localhost - Protocol: HTTP - Test: Bandwidth - Threads: 1 (Mbits/sec, more is better):
  v5.9.1: 1381.05 (SE +/- 0.30, N = 3; Min: 1380.53 / Max: 1381.58; MIN: 1370 / MAX: 1400)
  v5.10 Git Oct23: 1394.21 (SE +/- 0.80, N = 3; Min: 1393.16 / Max: 1395.79; MIN: 1380 / MAX: 1410)

LevelDB

LevelDB is a key-value storage library developed by Google that supports making use of Snappy for data compression and has other modern features. Learn more via the OpenBenchmarking.org test page.

LevelDB 1.22, Benchmark: Random Read (Microseconds Per Op, fewer is better):
  v5.9.1: 3.486 (SE +/- 0.017, N = 3; Min: 3.45 / Max: 3.5)
  v5.10 Git Oct23: 3.519 (SE +/- 0.030, N = 3; Min: 3.46 / Max: 3.57)
  1. (CXX) g++ options: -O3 -lsnappy -lpthread

OSPray

Intel OSPray is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray 1.8.5, Demo: San Miguel - Renderer: SciVis (FPS, more is better):
  v5.9.1: 5.41 (SE +/- 0.01, N = 4; Min: 5.41 / Max: 5.43; MIN: 5.15 / MAX: 5.59)
  v5.10 Git Oct23: 5.36 (SE +/- 0.01, N = 3; Min: 5.35 / Max: 5.38; MIN: 5.1 / MAX: 5.56)

t-test1

This is a test of t-test1 for basic memory allocator benchmarks. Note this test profile is currently very basic and the overall time does include the warmup time of the custom t-test1 compilation. Improvements welcome. Learn more via the OpenBenchmarking.org test page.

t-test1 2017-01-13, Threads: 1 (Seconds, fewer is better):
  v5.9.1: 12.50 (SE +/- 0.03, N = 3; Min: 12.46 / Max: 12.55)
  v5.10 Git Oct23: 12.60 (SE +/- 0.02, N = 3; Min: 12.58 / Max: 12.64)
  1. (CC) gcc options: -pthread

Ethr

Ethr is a cross-platform network performance measurement tool written in Go and developed by Microsoft, capable of testing multiple protocols and different measurements. Learn more via the OpenBenchmarking.org test page.

Ethr 2019-01-02, Server Address: localhost - Protocol: TCP - Test: Connections/s - Threads: 1 (Connections/sec, more is better):
  v5.9.1: 11960 (SE +/- 83.86, N = 3; Min: 11820 / Max: 12110)
  v5.10 Git Oct23: 12060 (SE +/- 101.49, N = 3; Min: 11860 / Max: 12190)

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.26, Backend: OpenCL (Nodes Per Second, more is better):
  v5.9.1: 1532 (SE +/- 2.19, N = 3; Min: 1528 / Max: 1535)
  v5.10 Git Oct23: 1521 (SE +/- 1.76, N = 3; Min: 1518 / Max: 1524)
  1. (CXX) g++ options: -flto -pthread

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916, Target: CPU - Model: googlenet (ms, fewer is better):
  v5.9.1: 24.14 (SE +/- 0.07, N = 3; Min: 24.02 / Max: 24.27; MIN: 22.04 / MAX: 36.82)
  v5.10 Git Oct23: 23.97 (SE +/- 0.01, N = 3; Min: 23.95 / Max: 24; MIN: 22.03 / MAX: 36.54)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.7.0, Video Input: Chimera 1080p (FPS, more is better):
  v5.9.1: 306.78 (SE +/- 2.26, N = 14; Min: 298.1 / Max: 334.93; MIN: 190.06 / MAX: 675.61)
  v5.10 Git Oct23: 308.81 (SE +/- 2.29, N = 15; Min: 304.87 / Max: 340.38; MIN: 189.88 / MAX: 692.4)
  1. (CC) gcc options: -pthread

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07, Test: MEMFD (Bogo Ops/s, more is better):
  v5.9.1: 234.87 (SE +/- 2.98, N = 5; Min: 231.03 / Max: 246.68)
  v5.10 Git Oct23: 233.38 (SE +/- 2.75, N = 3; Min: 229.92 / Max: 238.81)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lcrypt -lrt -lz -ldl -lpthread -lc

oneAPI Level Zero Tests

This is benchmarking the collection of Intel oneAPI Level Zero Tests. Learn more via the OpenBenchmarking.org test page.

oneAPI Level Zero Tests, Test: Peak Integer Compute (GFLOPS, more is better):
  v5.9.1: 440.81 (SE +/- 1.46, N = 3; Min: 437.93 / Max: 442.69)
  v5.10 Git Oct23: 438.03 (SE +/- 1.68, N = 3; Min: 435.05 / Max: 440.88)
  1. (CXX) g++ options: -ldl -pthread

WireGuard + Linux Networking Stack Stress Test

This is a benchmark of the WireGuard secure VPN tunnel and a stress test of the Linux networking stack. The test runs on the local host but does require root permissions to run. It works by creating three namespaces: ns0 has a loopback device, while ns1 and ns2 each have WireGuard devices. Those two WireGuard devices send traffic through the loopback device of ns0. The end result is that the test exercises encryption and decryption at the same time -- a fairly CPU- and scheduler-heavy workload. Learn more via the OpenBenchmarking.org test page.
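The namespace topology described above can be set up with a handful of ip(8) commands; the Python sketch below only creates the three namespaces and moves a WireGuard device into each peer namespace. Key generation, addressing, peer configuration through ns0's loopback, and the traffic run itself are omitted here and handled by the actual test script; every step requires root:

```python
import subprocess

def run(*cmd):
    subprocess.run(cmd, check=True)   # every command below needs root privileges

# ns0 provides the loopback path; ns1 and ns2 hold the WireGuard endpoints.
for ns in ("ns0", "ns1", "ns2"):
    run("ip", "netns", "add", ns)
run("ip", "-n", "ns0", "link", "set", "lo", "up")

# Create a WireGuard device per peer namespace and move it there.
for dev, ns in (("wg1", "ns1"), ("wg2", "ns2")):
    run("ip", "link", "add", dev, "type", "wireguard")
    run("ip", "link", "set", dev, "netns", ns)
```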

WireGuard + Linux Networking Stack Stress Test (Seconds, fewer is better):
  v5.9.1: 275.01 (SE +/- 2.88, N = 3; Min: 269.92 / Max: 279.87)
  v5.10 Git Oct23: 276.65 (SE +/- 1.28, N = 3; Min: 275.27 / Max: 279.21)

Crypto++

Crypto++ is a C++ class library of cryptographic algorithms. Learn more via the OpenBenchmarking.org test page.

Crypto++ 8.2, Test: Unkeyed Algorithms (MiB/second, more is better):
  v5.9.1: 402.20 (SE +/- 0.85, N = 3; Min: 401.2 / Max: 403.88)
  v5.10 Git Oct23: 399.92 (SE +/- 1.42, N = 3; Min: 397.17 / Max: 401.89)
  1. (CXX) g++ options: -g2 -O3 -fPIC -pthread -pipe

LevelDB

LevelDB is a key-value storage library developed by Google that supports making use of Snappy for data compression and has other modern features. Learn more via the OpenBenchmarking.org test page.

LevelDB 1.22, Benchmark: Sequential Fill (MB/s, more is better):
  v5.9.1: 35.1 (SE +/- 0.51, N = 12; Min: 33.8 / Max: 40.6)
  v5.10 Git Oct23: 35.3 (SE +/- 0.42, N = 15; Min: 34.5 / Max: 41.1)
  1. (CXX) g++ options: -O3 -lsnappy -lpthread

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916, Target: Vulkan GPU - Model: resnet18 (ms, fewer is better):
  v5.9.1: 7.13 (SE +/- 0.05, N = 3; Min: 7.04 / Max: 7.2; MIN: 6.9 / MAX: 7.84)
  v5.10 Git Oct23: 7.17 (SE +/- 0.08, N = 3; Min: 7.05 / Max: 7.32; MIN: 6.85 / MAX: 7.64)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916, Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better):
  v5.9.1: 5.49 (SE +/- 0.02, N = 3; Min: 5.46 / Max: 5.54; MIN: 5.29 / MAX: 6.41)
  v5.10 Git Oct23: 5.46 (SE +/- 0.01, N = 3; Min: 5.44 / Max: 5.48; MIN: 5.28 / MAX: 6.04)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

perf-bench

This test profile is used for running Linux perf-bench, the benchmark support within the Linux kernel's perf tool. Learn more via the OpenBenchmarking.org test page.

perf-bench, Benchmark: Futex Lock-Pi (ops/sec, more is better):
  v5.9.1: 1840 (SE +/- 26.10, N = 3; Min: 1788 / Max: 1872)
  v5.10 Git Oct23: 1830 (SE +/- 27.09, N = 3; Min: 1779 / Max: 1871)
  1. (CC) gcc options: -O6 -ggdb3 -funwind-tables -std=gnu99 -Xlinker -lpthread -lrt -lm -ldl -lelf -lcrypto -lslang -lz -llzma -lnuma

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23, Model: Inception ResNet V2 (Microseconds, fewer is better):
  v5.9.1: 7325590 (SE +/- 2213.01, N = 3; Min: 7322260 / Max: 7329780)
  v5.10 Git Oct23: 7365197 (SE +/- 8358.81, N = 3; Min: 7349110 / Max: 7377180)

SQLite Speedtest

This is a benchmark of SQLite's speedtest1 benchmark program with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.

SQLite Speedtest 3.30, Timed Time - Size 1,000 (Seconds, fewer is better):
  v5.9.1: 50.82 (SE +/- 0.42, N = 3; Min: 50.02 / Max: 51.43)
  v5.10 Git Oct23: 51.08 (SE +/- 0.48, N = 3; Min: 50.18 / Max: 51.82)
  1. (CC) gcc options: -O2 -ldl -lz -lpthread

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916, Target: Vulkan GPU - Model: squeezenet (ms, fewer is better):
  v5.9.1: 11.49 (SE +/- 0.03, N = 3; Min: 11.46 / Max: 11.56; MIN: 11.29 / MAX: 12.01)
  v5.10 Git Oct23: 11.43 (SE +/- 0.01, N = 3; Min: 11.41 / Max: 11.45; MIN: 11.18 / MAX: 14.82)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Intel Open Image Denoise

Open Image Denoise is a denoising library for ray-tracing and part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

Intel Open Image Denoise 1.2.0, Scene: Memorial (Images / Sec, more is better):
  v5.9.1: 5.92 (SE +/- 0.10, N = 3; Min: 5.8 / Max: 6.11)
  v5.10 Git Oct23: 5.89 (SE +/- 0.08, N = 3; Min: 5.79 / Max: 6.05)

LevelDB

LevelDB is a key-value storage library developed by Google that supports making use of Snappy for data compression and has other modern features. Learn more via the OpenBenchmarking.org test page.

LevelDB 1.22, Benchmark: Sequential Fill (Microseconds Per Op, fewer is better):
  v5.9.1: 25.25 (SE +/- 0.32, N = 12; Min: 21.77 / Max: 26.16)
  v5.10 Git Oct23: 25.13 (SE +/- 0.26, N = 15; Min: 21.51 / Max: 25.65)
  1. (CXX) g++ options: -O3 -lsnappy -lpthread

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23, Model: Inception V4 (Microseconds, fewer is better):
  v5.9.1: 8123040 (SE +/- 5070.03, N = 3; Min: 8113210 / Max: 8130110)
  v5.10 Git Oct23: 8083893 (SE +/- 13211.50, N = 3; Min: 8068350 / Max: 8110170)

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916, Target: CPU - Model: blazeface (ms, fewer is better):
  v5.9.1: 2.10 (SE +/- 0.00, N = 3; Min: 2.1 / Max: 2.11; MIN: 1.97 / MAX: 2.22)
  v5.10 Git Oct23: 2.11 (SE +/- 0.00, N = 3; Min: 2.1 / Max: 2.11; MIN: 1.97 / MAX: 4.13)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

t-test1

This is a test of t-test1 for basic memory allocator benchmarks. Note this test profile is currently very basic and the overall time does include the warmup time of the custom t-test1 compilation. Improvements welcome. Learn more via the OpenBenchmarking.org test page.

t-test1 2017-01-13, Threads: 2 (Seconds, fewer is better):
  v5.9.1: 4.509 (SE +/- 0.009, N = 3; Min: 4.5 / Max: 4.53)
  v5.10 Git Oct23: 4.530 (SE +/- 0.007, N = 3; Min: 4.52 / Max: 4.54)
  1. (CC) gcc options: -pthread

Sockperf

This is a network socket API performance benchmark. Learn more via the OpenBenchmarking.org test page.

Sockperf 3.4, Test: Latency Ping Pong (usec, fewer is better):
  v5.9.1: 2.871 (SE +/- 0.006, N = 5; Min: 2.86 / Max: 2.89)
  v5.10 Git Oct23: 2.883 (SE +/- 0.007, N = 5; Min: 2.86 / Max: 2.9)
  1. (CXX) g++ options: --param -O3 -rdynamic -ldl -lpthread

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916, Target: CPU - Model: mobilenet (ms, fewer is better):
  v5.9.1: 30.27 (SE +/- 0.02, N = 3; Min: 30.24 / Max: 30.3; MIN: 29.76 / MAX: 41.79)
  v5.10 Git Oct23: 30.16 (SE +/- 0.07, N = 3; Min: 30.07 / Max: 30.29; MIN: 28.87 / MAX: 80.32)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

Facebook RocksDB 6.3.6, Test: Random Read (Op/s, more is better):
  v5.9.1: 16087477 (SE +/- 196352.27, N = 5; Min: 15696012 / Max: 16816630)
  v5.10 Git Oct23: 16145632 (SE +/- 202717.01, N = 5; Min: 15615820 / Max: 16840137)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fno-builtin-memcmp -fno-rtti -rdynamic -lpthread

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07, Test: Atomic (Bogo Ops/s, more is better):
  v5.9.1: 273678.63 (SE +/- 3321.50, N = 15; Min: 261741.06 / Max: 299907.08)
  v5.10 Git Oct23: 272696.82 (SE +/- 3248.86, N = 15; Min: 261380.29 / Max: 299757.5)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lcrypt -lrt -lz -ldl -lpthread -lc

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916, Target: CPU - Model: yolov4-tiny (ms, fewer is better):
  v5.9.1: 39.66 (SE +/- 0.03, N = 3; Min: 39.6 / Max: 39.69; MIN: 38.45 / MAX: 49.68)
  v5.10 Git Oct23: 39.52 (SE +/- 0.02, N = 3; Min: 39.5 / Max: 39.55; MIN: 38.45 / MAX: 50.32)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OSBench

OSBench is a collection of micro-benchmarks for measuring operating system primitives like time to create threads/processes, launching programs, creating files, and memory allocation. Learn more via the OpenBenchmarking.org test page.

OSBench, Test: Launch Programs (us Per Event, fewer is better):
  v5.9.1: 37.96 (SE +/- 0.17, N = 3; Min: 37.69 / Max: 38.27)
  v5.10 Git Oct23: 38.09 (SE +/- 0.04, N = 3; Min: 38.04 / Max: 38.17)
  1. (CC) gcc options: -lm

oneAPI Level Zero Tests

This is benchmarking the collection of Intel oneAPI Level Zero Tests. Learn more via the OpenBenchmarking.org test page.

oneAPI Level Zero Tests, Test: Peak System Memory Copy to Shared Memory (GB/s, more is better):
  v5.9.1: 14.57 (SE +/- 0.08, N = 3; Min: 14.41 / Max: 14.65)
  v5.10 Git Oct23: 14.62 (SE +/- 0.04, N = 3; Min: 14.54 / Max: 14.67)
  1. (CXX) g++ options: -ldl -pthread

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916, Target: Vulkan GPU - Model: shufflenet-v2 (ms, fewer is better):
  v5.9.1: 3.04 (SE +/- 0.01, N = 3; Min: 3.02 / Max: 3.07; MIN: 2.89 / MAX: 3.89)
  v5.10 Git Oct23: 3.05 (SE +/- 0.03, N = 3; Min: 2.99 / Max: 3.08; MIN: 2.75 / MAX: 3.43)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

oneAPI Level Zero Tests

This is benchmarking the collection of Intel oneAPI Level Zero Tests. Learn more via the OpenBenchmarking.org test page.

oneAPI Level Zero Tests, Test: Host-To-Device Bandwidth (GB/s, more is better):
  v5.9.1: 26.55 (SE +/- 0.02, N = 3; Min: 26.51 / Max: 26.58)
  v5.10 Git Oct23: 26.63 (SE +/- 0.04, N = 3; Min: 26.58 / Max: 26.7)
  1. (CXX) g++ options: -ldl -pthread

oneAPI Level Zero Tests, Test: Host-To-Device Bandwidth (usec, fewer is better):
  v5.9.1: 10110.20 (SE +/- 7.90, N = 3; Min: 10098.28 / Max: 10125.15)
  v5.10 Git Oct23: 10081.56 (SE +/- 14.81, N = 3; Min: 10052.06 / Max: 10098.5)
  1. (CXX) g++ options: -ldl -pthread

oneAPI Level Zero Tests, Test: Device-To-Host Bandwidth (usec, fewer is better):
  v5.9.1: 10104.03 (SE +/- 11.22, N = 3; Min: 10091.73 / Max: 10126.44)
  v5.10 Git Oct23: 10076.53 (SE +/- 4.26, N = 3; Min: 10068.46 / Max: 10082.92)
  1. (CXX) g++ options: -ldl -pthread

oneAPI Level Zero Tests, Test: Device-To-Host Bandwidth (GB/s, more is better):
  v5.9.1: 26.57 (SE +/- 0.03, N = 3; Min: 26.51 / Max: 26.6)
  v5.10 Git Oct23: 26.64 (SE +/- 0.01, N = 3; Min: 26.62 / Max: 26.66)
  1. (CXX) g++ options: -ldl -pthread

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: Vulkan GPU - Model: efficientnet-b0 (ms, Fewer Is Better)
  v5.9.1: 11.03 (SE +/- 0.02, N = 3; Min: 11.01 / Max: 11.06)
  v5.10 Git Oct23: 11.00 (SE +/- 0.01, N = 3; Min: 10.99 / Max: 11.01)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile measures the average inference time. Learn more via the OpenBenchmarking.org test page.
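
As a rough illustration of what "average inference time" means here, the following is a minimal Python sketch using the TensorFlow Lite interpreter; the model path and float32 input type are placeholders, not details taken from this result file.

    import time
    import numpy as np
    import tensorflow as tf

    # Placeholder model path; the test profile ships its own models.
    interpreter = tf.lite.Interpreter(model_path="mobilenet_v1.tflite")
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]

    # Random data shaped like the model input (assumes a float32 model).
    data = np.random.random_sample(inp["shape"]).astype(np.float32)
    runs = 50
    start = time.perf_counter()
    for _ in range(runs):
        interpreter.set_tensor(inp["index"], data)
        interpreter.invoke()
        _ = interpreter.get_tensor(out["index"])
    elapsed = time.perf_counter() - start
    print(f"Average inference time: {elapsed / runs * 1e6:.0f} microseconds")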

TensorFlow Lite 2020-08-23 - Model: Mobilenet Float (Microseconds, Fewer Is Better)
  v5.9.1: 376915 (SE +/- 2824.57, N = 3; Min: 371266 / Max: 379774)
  v5.10 Git Oct23: 375910 (SE +/- 2855.00, N = 3; Min: 370204 / Max: 378948)

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: Vulkan GPU - Model: vgg16 (ms, Fewer Is Better)
  v5.9.1: 38.47 (SE +/- 0.02, N = 3; Min: 38.45 / Max: 38.5)
  v5.10 Git Oct23: 38.37 (SE +/- 0.02, N = 3; Min: 38.34 / Max: 38.4)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

ET: Legacy

ET: Legacy is an open-source engine evolution of Wolfenstein: Enemy Territory, a World War II-era first-person shooter released for free by Splash Damage using the id Tech 3 engine. Learn more via the OpenBenchmarking.org test page.

ET: Legacy 2.75 - Renderer: Default - Resolution: 1920 x 1200 (Frames Per Second, More Is Better)
  v5.9.1: 201.3 (SE +/- 2.08, N = 13; Min: 197.9 / Max: 226)
  v5.10 Git Oct23: 201.8 (SE +/- 2.73, N = 4; Min: 198.3 / Max: 209.9)

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: CPU - Model: squeezenet (ms, Fewer Is Better)
  v5.9.1: 25.56 (SE +/- 0.03, N = 3; Min: 25.51 / Max: 25.62)
  v5.10 Git Oct23: 25.50 (SE +/- 0.02, N = 3; Min: 25.46 / Max: 25.54)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2 (ms, Fewer Is Better)
  v5.9.1: 4.54 (SE +/- 0.15, N = 3; Min: 4.33 / Max: 4.82)
  v5.10 Git Oct23: 4.53 (SE +/- 0.01, N = 3; Min: 4.51 / Max: 4.55)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: Vulkan GPU - Model: alexnet (ms, Fewer Is Better)
  v5.9.1: 9.42 (SE +/- 0.06, N = 3; Min: 9.33 / Max: 9.54)
  v5.10 Git Oct23: 9.44 (SE +/- 0.04, N = 3; Min: 9.39 / Max: 9.51)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: Vulkan GPU - Model: mobilenet (ms, Fewer Is Better)
  v5.9.1: 9.70 (SE +/- 0.01, N = 3; Min: 9.69 / Max: 9.71)
  v5.10 Git Oct23: 9.68 (SE +/- 0.02, N = 3; Min: 9.64 / Max: 9.72)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

FFTE

FFTE is a package by Daisuke Takahashi to compute Discrete Fourier Transforms of 1-, 2- and 3-dimensional sequences of length (2^p)*(3^q)*(5^r). Learn more via the OpenBenchmarking.org test page.
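
As a quick sanity check of that length constraint (not part of FFTE itself), a small Python helper can confirm that a transform size factors only into 2, 3 and 5; the N=256 used in the result below is 2^8.

    def is_ffte_length(n: int) -> bool:
        """Return True if n is of the form (2^p)*(3^q)*(5^r)."""
        if n < 1:
            return False
        for factor in (2, 3, 5):
            while n % factor == 0:
                n //= factor
        return n == 1

    print(is_ffte_length(256))  # True: 256 = 2^8
    print(is_ffte_length(240))  # True: 240 = 2^4 * 3 * 5
    print(is_ffte_length(77))   # False: 7 and 11 are not supported factors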

FFTE 7.0 - N=256, 3D Complex FFT Routine (MFLOPS, More Is Better)
  v5.9.1: 31053.12 (SE +/- 38.53, N = 3; Min: 30997.43 / Max: 31127.1)
  v5.10 Git Oct23: 30990.26 (SE +/- 94.04, N = 3; Min: 30849.48 / Max: 31168.66)
  1. (F9X) gfortran options: -O3 -fomit-frame-pointer -fopenmp

perf-bench

This test profile is used for running Linux perf-bench, the benchmark support within the Linux kernel's perf tool. Learn more via the OpenBenchmarking.org test page.

perf-bench - Benchmark: Memset 1MB (GB/sec, More Is Better)
  v5.9.1: 83.67 (SE +/- 0.77, N = 15; Min: 76.31 / Max: 86.23)
  v5.10 Git Oct23: 83.50 (SE +/- 1.03, N = 15; Min: 72.36 / Max: 85.84)
  1. (CC) gcc options: -O6 -ggdb3 -funwind-tables -std=gnu99 -Xlinker -lpthread -lrt -lm -ldl -lelf -lcrypto -lslang -lz -llzma -lnuma

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07 - Test: Malloc (Bogo Ops/s, More Is Better)
  v5.9.1: 32217820.87 (SE +/- 475699.37, N = 3; Min: 31581767.6 / Max: 33148584.69)
  v5.10 Git Oct23: 32157875.81 (SE +/- 380954.86, N = 3; Min: 31631766.26 / Max: 32898200.29)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lcrypt -lrt -lz -ldl -lpthread -lc

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile measures the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: Mobilenet Quant (Microseconds, Fewer Is Better)
  v5.9.1: 369191 (SE +/- 3037.19, N = 3; Min: 363123 / Max: 372460)
  v5.10 Git Oct23: 369871 (SE +/- 2810.57, N = 3; Min: 364260 / Max: 372973)

RealSR-NCNN

RealSR-NCNN is an NCNN neural network implementation of the RealSR project, accelerated using the Vulkan API. RealSR is Real-World Super-Resolution via Kernel Estimation and Noise Injection. NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. This test profile times how long it takes to increase the resolution of a sample image by a scale of 4x with Vulkan. Learn more via the OpenBenchmarking.org test page.

RealSR-NCNN 20200818 - Scale: 4x - TAA: No (Seconds, Fewer Is Better)
  v5.9.1: 68.52 (SE +/- 0.33, N = 3; Min: 67.89 / Max: 69.02)
  v5.10 Git Oct23: 68.62 (SE +/- 0.32, N = 3; Min: 68.16 / Max: 69.23)

RealSR-NCNN 20200818 - Scale: 4x - TAA: Yes (Seconds, Fewer Is Better)
  v5.9.1: 539.38 (SE +/- 0.37, N = 3; Min: 538.66 / Max: 539.86)
  v5.10 Git Oct23: 538.64 (SE +/- 0.39, N = 3; Min: 537.88 / Max: 539.17)

GLmark2

This is a test of Linaro's glmark2 port, currently using the X11 OpenGL 2.0 target. GLmark2 is a basic OpenGL benchmark. Learn more via the OpenBenchmarking.org test page.

GLmark2 2020.04 - Resolution: 1920 x 1200 (Score, More Is Better)
  v5.9.1: 839
  v5.10 Git Oct23: 840

OSBench

OSBench is a collection of micro-benchmarks for measuring operating system primitives like time to create threads/processes, launching programs, creating files, and memory allocation. Learn more via the OpenBenchmarking.org test page.
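
For context, a rough Python analogue of the "Launch Programs" micro-benchmark (OSBench itself is written in C) is sketched below; /bin/true is assumed to exist on the system.

    import subprocess
    import time

    events = 200
    start = time.perf_counter()
    for _ in range(events):
        # Each event spawns and waits for a trivial program.
        subprocess.run(["/bin/true"], check=True)
    elapsed = time.perf_counter() - start
    print(f"{elapsed / events * 1e6:.1f} us per event")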

OSBench - Test: Memory Allocations (Ns Per Event, Fewer Is Better)
  v5.9.1: 67.58 (SE +/- 0.07, N = 3; Min: 67.51 / Max: 67.72)
  v5.10 Git Oct23: 67.66 (SE +/- 0.14, N = 3; Min: 67.45 / Max: 67.92)
  1. (CC) gcc options: -lm

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers. Learn more via the OpenBenchmarking.org test page.
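
A minimal sketch of driving a browser the way this profile does, assuming Selenium's Python bindings and a matching chromedriver are installed; the URL is a placeholder rather than the actual benchmark pages the profile loads.

    import time
    from selenium import webdriver

    driver = webdriver.Chrome()  # requires chromedriver on the PATH
    try:
        start = time.perf_counter()
        driver.get("https://example.com/benchmark.html")  # placeholder URL
        print(f"Page loaded in {time.perf_counter() - start:.2f} s")
    finally:
        driver.quit()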

Selenium - Benchmark: WASM collisionDetection - Browser: Google Chrome (ms, Fewer Is Better)
  v5.9.1: 281.32 (SE +/- 0.05, N = 3; Min: 281.23 / Max: 281.42)
  v5.10 Git Oct23: 281.01 (SE +/- 0.43, N = 3; Min: 280.55 / Max: 281.87)
  1. chrome 86.0.4240.111

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

Facebook RocksDB 6.3.6 - Test: Read While Writing (Op/s, More Is Better)
  v5.9.1: 683690 (SE +/- 5312.32, N = 15; Min: 662641 / Max: 748163)
  v5.10 Git Oct23: 682940 (SE +/- 6146.43, N = 11; Min: 661540 / Max: 737752)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fno-builtin-memcmp -fno-rtti -rdynamic -lpthread

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: CPU - Model: alexnet (ms, Fewer Is Better)
  v5.9.1: 18.36 (SE +/- 0.02, N = 3; Min: 18.34 / Max: 18.4)
  v5.10 Git Oct23: 18.34 (SE +/- 0.00, N = 3; Min: 18.34 / Max: 18.34)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: Vulkan GPU - Model: googlenet (ms, Fewer Is Better)
  v5.9.1: 10.49 (SE +/- 0.04, N = 3; Min: 10.41 / Max: 10.56)
  v5.10 Git Oct23: 10.48 (SE +/- 0.01, N = 3; Min: 10.47 / Max: 10.49)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: resnet18 (ms, Fewer Is Better)
  v5.9.1: 21.00 (SE +/- 0.01, N = 3; Min: 20.98 / Max: 21.01)
  v5.10 Git Oct23: 20.98 (SE +/- 0.03, N = 3; Min: 20.93 / Max: 21.04)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: efficientnet-b0 (ms, Fewer Is Better)
  v5.9.1: 11.18 (SE +/- 0.04, N = 3; Min: 11.13 / Max: 11.26)
  v5.10 Git Oct23: 11.17 (SE +/- 0.04, N = 3; Min: 11.1 / Max: 11.25)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Xonotic

This is a benchmark of Xonotic, a fork of the DarkPlaces-based Nexuiz game. Development of Xonotic began in March 2010. Learn more via the OpenBenchmarking.org test page.

Xonotic 0.8.2 - Resolution: 1920 x 1200 - Effects Quality: Ultra (Frames Per Second, More Is Better)
  v5.9.1: 160.92 (SE +/- 0.25, N = 3; Min: 160.62 / Max: 161.43)
  v5.10 Git Oct23: 161.06 (SE +/- 0.87, N = 3; Min: 160.14 / Max: 162.79)

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.7.0 - Video Input: Chimera 1080p 10-bit (FPS, More Is Better)
  v5.9.1: 64.47 (SE +/- 0.97, N = 3; Min: 63.49 / Max: 66.41)
  v5.10 Git Oct23: 64.52 (SE +/- 1.01, N = 3; Min: 63.49 / Max: 66.55)
  1. (CC) gcc options: -pthread

IOR

IOR is a parallel I/O storage benchmark. Learn more via the OpenBenchmarking.org test page.

IOR 3.2.1 - Read Test (MB/s, More Is Better)
  v5.9.1: 856.64 (SE +/- 11.16, N = 3; Min: 843.83 / Max: 878.87)
  v5.10 Git Oct23: 857.28 (SE +/- 2.95, N = 15; Min: 833.97 / Max: 878.08)
  1. (CC) gcc options: -O2 -lm -pthread -lmpi

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile measures the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: NASNet Mobile (Microseconds, Fewer Is Better)
  v5.9.1: 404136 (SE +/- 3396.00, N = 3; Min: 397344 / Max: 407541)
  v5.10 Git Oct23: 404431 (SE +/- 2810.87, N = 3; Min: 398810 / Max: 407320)

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: Vulkan GPU - Model: yolov4-tiny (ms, Fewer Is Better)
  v5.9.1: 15.21 (SE +/- 0.02, N = 3; Min: 15.18 / Max: 15.24)
  v5.10 Git Oct23: 15.22 (SE +/- 0.01, N = 3; Min: 15.2 / Max: 15.24)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: Vulkan GPU - Model: resnet50 (ms, Fewer Is Better)
  v5.9.1: 15.41 (SE +/- 0.02, N = 3; Min: 15.38 / Max: 15.45)
  v5.10 Git Oct23: 15.42 (SE +/- 0.04, N = 3; Min: 15.38 / Max: 15.5)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

oneAPI Level Zero Tests

This benchmarks Intel's collection of oneAPI Level Zero Tests. Learn more via the OpenBenchmarking.org test page.

oneAPI Level Zero Tests - Test: Peak Float16 Global Memory Bandwidth (GB/s, More Is Better)
  v5.9.1: 56.54 (SE +/- 0.16, N = 3; Min: 56.23 / Max: 56.7)
  v5.10 Git Oct23: 56.51 (SE +/- 0.08, N = 3; Min: 56.34 / Max: 56.62)
  1. (CXX) g++ options: -ldl -pthread

oneAPI Level Zero Tests - Test: Peak Half-Precision Compute (GFLOPS, More Is Better)
  v5.9.1: 3032.34 (SE +/- 3.59, N = 3; Min: 3025.72 / Max: 3038.07)
  v5.10 Git Oct23: 3034.09 (SE +/- 1.26, N = 3; Min: 3032.47 / Max: 3036.58)
  1. (CXX) g++ options: -ldl -pthread

perf-bench

This test profile is used for running Linux perf-bench, the benchmark support within the Linux kernel's perf tool. Learn more via the OpenBenchmarking.org test page.

perf-bench - Benchmark: Syscall Basic (ops/sec, More Is Better)
  v5.9.1: 22550148 (SE +/- 174877.88, N = 3; Min: 22360339 / Max: 22899466)
  v5.10 Git Oct23: 22537830 (SE +/- 26155.84, N = 3; Min: 22510223 / Max: 22590114)
  1. (CC) gcc options: -O6 -ggdb3 -funwind-tables -std=gnu99 -Xlinker -lpthread -lrt -lm -ldl -lelf -lcrypto -lslang -lz -llzma -lnuma

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07 - Test: RdRand (Bogo Ops/s, More Is Better)
  v5.9.1: 37771.12 (SE +/- 99.04, N = 3; Min: 37626.95 / Max: 37960.85)
  v5.10 Git Oct23: 37790.91 (SE +/- 120.80, N = 3; Min: 37622.93 / Max: 38025.27)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lcrypt -lrt -lz -ldl -lpthread -lc

Xonotic

This is a benchmark of Xonotic, a fork of the DarkPlaces-based Nexuiz game. Development of Xonotic began in March 2010. Learn more via the OpenBenchmarking.org test page.

Xonotic 0.8.2 - Resolution: 1920 x 1200 - Effects Quality: Ultimate (Frames Per Second, More Is Better)
  v5.9.1: 123.38 (SE +/- 0.39, N = 3; Min: 122.97 / Max: 124.17)
  v5.10 Git Oct23: 123.44 (SE +/- 0.40, N = 3; Min: 122.97 / Max: 124.24)

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: WASM imageConvolute - Browser: Google Chrome (ms, Fewer Is Better)
  v5.9.1: 28.30 (SE +/- 0.16, N = 3; Min: 27.99 / Max: 28.52)
  v5.10 Git Oct23: 28.29 (SE +/- 0.14, N = 3; Min: 28.02 / Max: 28.45)
  1. chrome 86.0.4240.111

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: CPU - Model: resnet50 (ms, Fewer Is Better)
  v5.9.1: 49.19 (SE +/- 0.02, N = 3; Min: 49.16 / Max: 49.22)
  v5.10 Git Oct23: 49.17 (SE +/- 0.03, N = 3; Min: 49.11 / Max: 49.2)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: Jetstream 2 - Browser: Google Chrome (Score, More Is Better)
  v5.9.1: 164.71 (SE +/- 1.42, N = 3; Min: 162.7 / Max: 167.47)
  v5.10 Git Oct23: 164.77 (SE +/- 0.88, N = 3; Min: 163.58 / Max: 166.48)
  1. chrome 86.0.4240.111

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library, which offers high-performance volume computation kernels and is part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenVKL 0.9 - Benchmark: vklBenchmark (Items / Sec, More Is Better)
  v5.9.1: 97.75 (SE +/- 0.30, N = 3; Min: 97.33 / Max: 98.33)
  v5.10 Git Oct23: 97.78 (SE +/- 0.07, N = 3; Min: 97.67 / Max: 97.92)

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration. Learn more via the OpenBenchmarking.org test page.
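
In essence the profile does the equivalent of the following hedged Python sketch: time a parallel make of an already-configured kernel tree (the directory name here is illustrative, not taken from this result file).

    import os
    import subprocess
    import time

    start = time.perf_counter()
    # Assumes "linux-5.4/" already holds a configured kernel source tree.
    subprocess.run(["make", f"-j{os.cpu_count()}"], cwd="linux-5.4", check=True)
    print(f"Time To Compile: {time.perf_counter() - start:.2f} seconds")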

Timed Linux Kernel Compilation 5.4 - Time To Compile (Seconds, Fewer Is Better)
  v5.9.1: 250.27 (SE +/- 0.49, N = 3; Min: 249.44 / Max: 251.14)
  v5.10 Git Oct23: 250.35 (SE +/- 1.03, N = 3; Min: 249.13 / Max: 252.38)

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: CPU - Model: vgg16 (ms, Fewer Is Better)
  v5.9.1: 69.60 (SE +/- 0.04, N = 3; Min: 69.53 / Max: 69.68)
  v5.10 Git Oct23: 69.59 (SE +/- 0.13, N = 3; Min: 69.34 / Max: 69.79)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test measures the RSA 4096-bit performance of OpenSSL. Learn more via the OpenBenchmarking.org test page.
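
The metric is signs per second; a hedged Python sketch of the same idea using the third-party cryptography package (the test profile itself drives the openssl binary) looks like this:

    import time
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # Generate a 4096-bit key, then count signatures completed in ~3 seconds.
    key = rsa.generate_private_key(public_exponent=65537, key_size=4096)
    message = b"benchmark payload"

    signs = 0
    start = time.perf_counter()
    while time.perf_counter() - start < 3.0:
        key.sign(message, padding.PKCS1v15(), hashes.SHA256())
        signs += 1
    print(f"{signs / (time.perf_counter() - start):.1f} signs per second")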

OpenSSL 1.1.1 - RSA 4096-bit Performance (Signs Per Second, More Is Better)
  v5.9.1: 920.9 (SE +/- 9.19, N = 14; Min: 902 / Max: 1039.2)
  v5.10 Git Oct23: 921.0 (SE +/- 9.40, N = 13; Min: 908.2 / Max: 1033.5)
  1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

Waifu2x-NCNN Vulkan

Waifu2x-NCNN is an NCNN neural network implementation of the Waifu2x converter project, accelerated using the Vulkan API. NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. This test profile times how long it takes to increase the resolution of a sample image with Vulkan. Learn more via the OpenBenchmarking.org test page.

Waifu2x-NCNN Vulkan 20200818 - Scale: 2x - Denoise: 3 - TAA: Yes (Seconds, Fewer Is Better)
  v5.9.1: 27.23 (SE +/- 0.02, N = 3; Min: 27.2 / Max: 27.27)
  v5.10 Git Oct23: 27.23 (SE +/- 0.00, N = 3; Min: 27.22 / Max: 27.23)

oneAPI Level Zero Tests

This benchmarks Intel's collection of oneAPI Level Zero Tests. Learn more via the OpenBenchmarking.org test page.

oneAPI Level Zero Tests - Test: Peak Single-Precision Compute (GB/s, More Is Better)
  v5.9.1: 1219.55 (SE +/- 0.01, N = 3; Min: 1219.54 / Max: 1219.57)
  v5.10 Git Oct23: 1219.52 (SE +/- 0.00, N = 3; Min: 1219.52 / Max: 1219.53)
  1. (CXX) g++ options: -ldl -pthread

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.
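
For a feel of the PBKDF2 metric, here is a rough Python estimate of iterations per second using hashlib; hashlib has no Whirlpool PRF, so SHA-256 stands in for the PBKDF2-whirlpool variant reported below.

    import hashlib
    import time

    iterations = 200_000
    start = time.perf_counter()
    # One PBKDF2 derivation with a fixed iteration count; scale to iterations/sec.
    hashlib.pbkdf2_hmac("sha256", b"passphrase", b"salt" * 4, iterations)
    elapsed = time.perf_counter() - start
    print(f"~{iterations / elapsed:,.0f} PBKDF2 iterations per second")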

Cryptsetup 2.3.3 - PBKDF2-whirlpool (Iterations Per Second, More Is Better)
  v5.10 Git Oct23: 793177 (SE +/- 1202.00, N = 3)

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: Maze Solver - Browser: Google Chrome (Seconds, Fewer Is Better)
  v5.9.1: 4.7 (SE +/- 0.00, N = 3; Min: 4.7 / Max: 4.7)
  v5.10 Git Oct23: 4.7 (SE +/- 0.03, N = 3; Min: 4.6 / Max: 4.7)
  1. chrome 86.0.4240.111

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: Vulkan GPU - Model: blazeface (ms, Fewer Is Better)
  v5.9.1: 0.97 (SE +/- 0.02, N = 3; Min: 0.94 / Max: 0.99)
  v5.10 Git Oct23: 0.97 (SE +/- 0.01, N = 3; Min: 0.94 / Max: 0.99)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: mnasnet (ms, Fewer Is Better)
  v5.9.1: 7.73 (SE +/- 0.04, N = 3; Min: 7.67 / Max: 7.8)
  v5.10 Git Oct23: 7.73 (SE +/- 0.04, N = 3; Min: 7.65 / Max: 7.79)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

LevelDB

LevelDB is a key-value storage library developed by Google that can make use of Snappy for data compression and has other modern features. Learn more via the OpenBenchmarking.org test page.
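
Illustrative only: the test profile drives LevelDB's own db_bench tool, but the synchronous-write behaviour that "Fill Sync" stresses can be sketched in Python with the third-party plyvel binding (assumed installed).

    import time
    import plyvel

    db = plyvel.DB("/tmp/leveldb-sketch", create_if_missing=True)
    ops = 1_000
    start = time.perf_counter()
    for i in range(ops):
        # sync=True forces the write to be flushed to disk, as in "Fill Sync".
        db.put(f"key{i:08d}".encode(), b"x" * 100, sync=True)
    elapsed = time.perf_counter() - start
    db.close()
    print(f"{elapsed / ops * 1e6:.0f} microseconds per op")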

LevelDB 1.22 - Benchmark: Fill Sync (MB/s, More Is Better)
  v5.9.1: 0.1 (SE +/- 0.00, N = 3; Min: 0.1 / Max: 0.1)
  v5.10 Git Oct23: 0.1 (SE +/- 0.00, N = 3; Min: 0.1 / Max: 0.1)
  1. (CXX) g++ options: -O3 -lsnappy -lpthread

OSPray

Intel OSPray is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray 1.8.5 - Demo: San Miguel - Renderer: Path Tracer (FPS, More Is Better)
  v5.9.1: 0.46 (SE +/- 0.00, N = 3; Min: 0.46 / Max: 0.46)
  v5.10 Git Oct23: 0.46 (SE +/- 0.00, N = 3; Min: 0.46 / Max: 0.46)

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

Facebook RocksDB 6.3.6 - Test: Random Fill Sync (Op/s, More Is Better)
  v5.9.1: 897 (SE +/- 29.07, N = 13; Min: 596 / Max: 990)
  v5.10 Git Oct23: 906 (SE +/- 26.33, N = 14; Min: 622 / Max: 989)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fno-builtin-memcmp -fno-rtti -rdynamic -lpthread

Facebook RocksDB 6.3.6 - Test: Sequential Fill (Op/s, More Is Better)
  v5.9.1: 767891 (SE +/- 30877.90, N = 12; Min: 573255 / Max: 882786)
  v5.10 Git Oct23: 762838 (SE +/- 25204.67, N = 15; Min: 610637 / Max: 907685)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fno-builtin-memcmp -fno-rtti -rdynamic -lpthread

Facebook RocksDB 6.3.6 - Test: Random Fill (Op/s, More Is Better)
  v5.9.1: 308103 (SE +/- 17977.32, N = 15; Min: 243762 / Max: 520700)
  v5.10 Git Oct23: 366684 (SE +/- 14798.54, N = 15; Min: 287620 / Max: 505984)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fno-builtin-memcmp -fno-rtti -rdynamic -lpthread

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: Vulkan GPU - Model: mnasnet (ms, Fewer Is Better)
  v5.9.1: 5.11 (SE +/- 0.21, N = 3; Min: 4.87 / Max: 5.53)
  v5.10 Git Oct23: 4.81 (SE +/- 0.02, N = 3; Min: 4.79 / Max: 4.84)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, Fewer Is Better)
  v5.9.1: 6.36 (SE +/- 0.89, N = 3; Min: 4.58 / Max: 7.27)
  v5.10 Git Oct23: 6.35 (SE +/- 0.91, N = 3; Min: 4.54 / Max: 7.27)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, Fewer Is Better)
  v5.9.1: 7.01 (SE +/- 0.79, N = 3; Min: 5.44 / Max: 7.99)
  v5.10 Git Oct23: 6.85 (SE +/- 0.73, N = 3; Min: 5.45 / Max: 7.91)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07 - Test: MMAP (Bogo Ops/s, More Is Better)
  v5.9.1: 23.07 (SE +/- 0.78, N = 15; Min: 18.9 / Max: 31.2)
  v5.10 Git Oct23: 41.46 (SE +/- 0.67, N = 3; Min: 40.76 / Max: 42.79)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lcrypt -lrt -lz -ldl -lpthread -lc

LevelDB

LevelDB is a key-value storage library developed by Google that can make use of Snappy for data compression and has other modern features. Learn more via the OpenBenchmarking.org test page.

LevelDB 1.22 - Benchmark: Seek Random (Microseconds Per Op, Fewer Is Better)
  v5.9.1: 5.749 (SE +/- 0.125, N = 15; Min: 4.57 / Max: 6.16)
  v5.10 Git Oct23: 5.559 (SE +/- 0.122, N = 15; Min: 4.43 / Max: 5.88)
  1. (CXX) g++ options: -O3 -lsnappy -lpthread

LevelDB 1.22 - Benchmark: Overwrite (Microseconds Per Op, Fewer Is Better)
  v5.9.1: 21.82 (SE +/- 0.09, N = 3; Min: 21.73 / Max: 22)
  v5.10 Git Oct23: 26.50 (SE +/- 1.70, N = 12; Min: 21.63 / Max: 37.76)
  1. (CXX) g++ options: -O3 -lsnappy -lpthread

LevelDB 1.22 - Benchmark: Overwrite (MB/s, More Is Better)
  v5.9.1: 40.5 (SE +/- 0.17, N = 3; Min: 40.2 / Max: 40.7)
  v5.10 Git Oct23: 34.8 (SE +/- 1.96, N = 12; Min: 23.4 / Max: 40.9)
  1. (CXX) g++ options: -O3 -lsnappy -lpthread

LevelDB 1.22 - Benchmark: Fill Sync (Microseconds Per Op, Fewer Is Better)
  v5.9.1: 8515.06 (SE +/- 87.54, N = 3; Min: 8393.47 / Max: 8684.95)
  v5.10 Git Oct23: 9507.38 (SE +/- 1780.01, N = 3; Min: 7611.33 / Max: 13064.83)
  1. (CXX) g++ options: -O3 -lsnappy -lpthread

eSpeak-NG Speech Engine

This test times how long it takes the eSpeak speech synthesizer to read Project Gutenberg's The Outline of Science and output to a WAV file. This test profile is now tracking the eSpeak-NG version of eSpeak. Learn more via the OpenBenchmarking.org test page.
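
A hedged sketch of the measurement: time the espeak-ng command-line tool synthesizing a text file to WAV. The input filename is a placeholder, not the exact file the test profile ships.

    import subprocess
    import time

    start = time.perf_counter()
    # -f reads the input text from a file, -w writes the synthesized audio to WAV.
    subprocess.run(
        ["espeak-ng", "-f", "outline_of_science.txt", "-w", "speech.wav"],
        check=True,
    )
    print(f"Text-To-Speech Synthesis: {time.perf_counter() - start:.2f} seconds")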

eSpeak-NG Speech Engine 20200907 - Text-To-Speech Synthesis (Seconds, Fewer Is Better)
  v5.9.1: 35.07 (SE +/- 0.79, N = 16; Min: 31.49 / Max: 43.16)
  v5.10 Git Oct23: 46.62 (SE +/- 2.27, N = 16; Min: 34.39 / Max: 65.22)
  1. (CC) gcc options: -O2 -std=c99

Sockperf

This is a network socket API performance benchmark. Learn more via the OpenBenchmarking.org test page.
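
Not sockperf itself, but a minimal Python probe in the same spirit: measure the delivery latency of a small UDP message over loopback (sockperf's "Latency Under Load" additionally keeps a configurable message rate in flight).

    import socket
    import time

    receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    receiver.bind(("127.0.0.1", 0))          # let the OS pick a free port
    addr = receiver.getsockname()

    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    samples = []
    for _ in range(1000):
        start = time.perf_counter()
        sender.sendto(b"ping", addr)
        receiver.recvfrom(64)                # one-way delivery over loopback
        samples.append((time.perf_counter() - start) * 1e6)
    samples.sort()
    print(f"Median one-way latency: {samples[len(samples) // 2]:.2f} usec")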

Sockperf 3.4 - Test: Latency Under Load (usec, Fewer Is Better)
  v5.9.1: 25.74 (SE +/- 1.26, N = 25; Min: 7.91 / Max: 30.04)
  v5.10 Git Oct23: 24.87 (SE +/- 1.43, N = 25; Min: 11.14 / Max: 30.48)
  1. (CXX) g++ options: --param -O3 -rdynamic -ldl -lpthread

IOR

IOR is a parallel I/O storage benchmark. Learn more via the OpenBenchmarking.org test page.

IOR 3.2.1 - Write Test (MB/s, More Is Better)
  v5.9.1: 74.15 (SE +/- 1.21, N = 3; Min: 72.61 / Max: 76.54)
  v5.10 Git Oct23: 91.02 (SE +/- 2.17, N = 15; Min: 76.3 / Max: 108.98)
  1. (CC) gcc options: -O2 -lm -pthread -lmpi

135 Results Shown

Stress-NG
Hackbench
OSBench
oneAPI Level Zero Tests
Stress-NG
Cryptsetup
Hackbench
SQLite
Hackbench
ctx_clock
NCNN
LevelDB
PostMark
dav1d
OSBench
perf-bench
Tesseract
Sockperf
LeelaChessZero
LevelDB
ET: Legacy
oneAPI Level Zero Tests
perf-bench:
  Memcpy 1MB
  Sched Pipe
Selenium:
  StyleBench - Google Chrome
  Kraken - Google Chrome
Stress-NG
OSBench
Waifu2x-NCNN Vulkan
LevelDB
dav1d
Ethr
Stress-NG
perf-bench
LevelDB
TensorFlow Lite
Cryptsetup
Ethr
LevelDB
OSPray
t-test1
Ethr
LeelaChessZero
NCNN
dav1d
Stress-NG
oneAPI Level Zero Tests
WireGuard + Linux Networking Stack Stress Test
Crypto++
LevelDB
NCNN:
  Vulkan GPU - resnet18
  Vulkan GPU-v3-v3 - mobilenet-v3
perf-bench
TensorFlow Lite
SQLite Speedtest
NCNN
Intel Open Image Denoise
LevelDB
TensorFlow Lite
NCNN
t-test1
Sockperf
NCNN
Facebook RocksDB
Stress-NG
NCNN
OSBench
oneAPI Level Zero Tests
NCNN
oneAPI Level Zero Tests:
  Host-To-Device Bandwidth:
    GB/s
    usec
  Device-To-Host Bandwidth:
    usec
    GB/s
NCNN
TensorFlow Lite
NCNN
ET: Legacy
NCNN:
  CPU - squeezenet
  Vulkan GPU-v2-v2 - mobilenet-v2
  Vulkan GPU - alexnet
  Vulkan GPU - mobilenet
FFTE
perf-bench
Stress-NG
TensorFlow Lite
RealSR-NCNN:
  4x - No
  4x - Yes
GLmark2
OSBench
Selenium
Facebook RocksDB
NCNN:
  CPU - alexnet
  Vulkan GPU - googlenet
  CPU - resnet18
  CPU - efficientnet-b0
Xonotic
dav1d
IOR
TensorFlow Lite
NCNN:
  Vulkan GPU - yolov4-tiny
  Vulkan GPU - resnet50
oneAPI Level Zero Tests:
  Peak Float16 Global Memory Bandwidth
  Peak Half-Precision Compute
perf-bench
Stress-NG
Xonotic
Selenium
NCNN
Selenium
OpenVKL
Timed Linux Kernel Compilation
NCNN
OpenSSL
Waifu2x-NCNN Vulkan
oneAPI Level Zero Tests
Cryptsetup
Selenium
NCNN:
  Vulkan GPU - blazeface
  CPU - mnasnet
LevelDB
OSPray
Facebook RocksDB:
  Rand Fill Sync
  Seq Fill
  Rand Fill
NCNN:
  Vulkan GPU - mnasnet
  CPU-v3-v3 - mobilenet-v3
  CPU-v2-v2 - mobilenet-v2
Stress-NG
LevelDB:
  Seek Rand
  Overwrite
  Overwrite
  Fill Sync
eSpeak-NG Speech Engine
Sockperf
IOR