Ryzen 9 5900X Clear Linux

AMD Ryzen 9 5900X 12-Core testing with an ASUS ROG CROSSHAIR VIII HERO (2702 BIOS) and a Sapphire AMD Radeon RX 5600 OEM/5600 XT / 5700/5700 6GB graphics card, comparing Clear Linux 34100 against Fedora Workstation 33 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2012193-HA-2012181HA51
Test categories represented in this result file:

AV1: 3 Tests
Bioinformatics: 3 Tests
Chess Test Suite: 4 Tests
Timed Code Compilation: 2 Tests
C/C++ Compiler Tests: 15 Tests
Compression Tests: 3 Tests
CPU Massive: 20 Tests
Creator Workloads: 16 Tests
Database Test Suite: 4 Tests
Encoding: 5 Tests
Fortran Tests: 5 Tests
Game Development: 4 Tests
HPC - High Performance Computing: 18 Tests
Imaging: 4 Tests
Common Kernel Benchmarks: 3 Tests
Machine Learning: 8 Tests
Molecular Dynamics: 5 Tests
MPI Benchmarks: 5 Tests
Multi-Core: 17 Tests
NVIDIA GPU Compute: 9 Tests
OpenMPI Tests: 5 Tests
Programmer / Developer System Benchmarks: 8 Tests
Python: 2 Tests
Renderers: 3 Tests
Scientific Computing: 9 Tests
Server: 7 Tests
Server CPU Tests: 13 Tests
Single-Threaded: 5 Tests
Speech: 2 Tests
Telephony: 2 Tests
Texture Compression: 3 Tests
Video Encoding: 5 Tests
Vulkan Compute: 4 Tests
Common Workstation Benchmarks: 3 Tests

Result identifiers, run dates, and test durations:
Clear Linux 34100 - December 18 2020 - 8 Hours, 34 Minutes
Fedora Workstation 33 - December 18 2020 - 6 Hours, 38 Minutes


Ryzen 9 5900X Clear Linux - System Details

Processor: AMD Ryzen 9 5900X 12-Core @ 3.70GHz (12 Cores / 24 Threads) (both)
Motherboard: ASUS ROG CROSSHAIR VIII HERO (2702 BIOS) (both)
Chipset: AMD Starship/Matisse (both)
Memory: 16GB (both)
Disk: 1000GB Sabrent Rocket 4.0 1TB (Clear Linux 34100); 1000GB Sabrent Rocket 4.0 1TB + 15GB Ultra USB 3.0 (Fedora Workstation 33)
Graphics: Sapphire AMD Radeon RX 5600 OEM/5600 XT / 5700/5700 6GB (1780/875MHz) (both)
Audio: AMD Navi 10 HDMI Audio (both)
Monitor: ASUS VP28U (both)
Network: Realtek RTL8125 2.5GbE + Intel I211 (both)
OS: Clear Linux OS 34100 (Clear Linux 34100); Fedora 33 (Fedora Workstation 33)
Kernel: 5.9.15-1008.native (x86_64) (Clear Linux 34100); 5.9.14-200.fc33.x86_64 (x86_64) (Fedora Workstation 33)
Desktop: GNOME Shell 3.38.2 (both)
Display Server: X Server 1.20.10 (Clear Linux 34100); X Server + Wayland (Fedora Workstation 33)
Display Driver: modesetting 1.20.10 (both)
OpenGL: 4.6 Mesa 20.3.1 (LLVM 10.0.1) (Clear Linux 34100); 4.6 Mesa 20.2.4 (LLVM 11.0.0) (Fedora Workstation 33)
Vulkan: 1.2.145 (both)
Compiler: GCC 10.2.1 20201217 releases/gcc-10.2.0-643-g7cbb07d2fc + Clang 10.0.1 + LLVM 10.0.1 (Clear Linux 34100); GCC 10.2.1 20201125 + Clang 11.0.0 (Fedora Workstation 33)
File-System: ext4 (Clear Linux 34100); btrfs (Fedora Workstation 33)
Screen Resolution: 3840x2160 (both)

Environment Details
- Clear Linux 34100: FFLAGS="-g -O3 -feliminate-unused-debug-types -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=32 -m64 -fasynchronous-unwind-tables -Wp,-D_REENTRANT -ftree-loop-distribute-patterns -Wl,-z -Wl,now -Wl,-z -Wl,relro -malign-data=abi -fno-semantic-interposition -ftree-vectorize -ftree-loop-vectorize -Wl,--enable-new-dtags -Wa,-mbranches-within-32B-boundaries" CXXFLAGS="-g -O3 -feliminate-unused-debug-types -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=32 -Wformat -Wformat-security -m64 -fasynchronous-unwind-tables -Wp,-D_REENTRANT -ftree-loop-distribute-patterns -Wl,-z -Wl,now -Wl,-z -Wl,relro -fno-semantic-interposition -ffat-lto-objects -fno-trapping-math -Wl,-sort-common -Wl,--enable-new-dtags -mtune=skylake -Wa,-mbranches-within-32B-boundaries -fvisibility-inlines-hidden -Wl,--enable-new-dtags" MESA_GLSL_CACHE_DISABLE=0 FCFLAGS="-g -O3 -feliminate-unused-debug-types -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=32 -m64 -fasynchronous-unwind-tables -Wp,-D_REENTRANT -ftree-loop-distribute-patterns -Wl,-z -Wl,now -Wl,-z -Wl,relro -malign-data=abi -fno-semantic-interposition -ftree-vectorize -ftree-loop-vectorize -Wl,-sort-common -Wl,--enable-new-dtags" CFLAGS="-g -O3 -feliminate-unused-debug-types -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=32 -Wformat -Wformat-security -m64 -fasynchronous-unwind-tables -Wp,-D_REENTRANT -ftree-loop-distribute-patterns -Wl,-z -Wl,now -Wl,-z -Wl,relro -fno-semantic-interposition -ffat-lto-objects -fno-trapping-math -Wl,-sort-common -Wl,--enable-new-dtags -mtune=skylake -Wa,-mbranches-within-32B-boundaries" THEANO_FLAGS="floatX=float32,openmp=true,gcc.cxxflags="-ftree-vectorize -mavx""

Compiler Details
- Clear Linux 34100: --build=x86_64-generic-linux --disable-libmpx --disable-libunwind-exceptions --disable-multiarch --disable-vtable-verify --disable-werror --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-clocale=gnu --enable-default-pie --enable-gnu-indirect-function --enable-languages=c,c++,fortran,go --enable-ld=default --enable-libstdcxx-pch --enable-lto --enable-multilib --enable-plugin --enable-shared --enable-threads=posix --exec-prefix=/usr --includedir=/usr/include --target=x86_64-generic-linux --with-arch=westmere --with-gcc-major-version-only --with-glibc-version=2.19 --with-gnu-ld --with-isl --with-ppl=yes --with-tune=haswell
- Fedora Workstation 33: --build=x86_64-redhat-linux --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-initfini-array --enable-languages=c,c++,fortran,objc,obj-c++,ada,go,d,lto --enable-multilib --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=i686 --with-gcc-major-version-only --with-isl --with-linker-hash-style=gnu --with-tune=generic --without-cuda-driver

Disk Details
- Clear Linux 34100: MQ-DEADLINE / relatime,rw,stripe=256 / Block Size: 4096
- Fedora Workstation 33: NONE / relatime,rw,seclabel,space_cache,ssd,subvol=/home,subvolid=256 / Block Size: 4096

Processor Details
- Clear Linux 34100: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa201009
- Fedora Workstation 33: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0xa201009

Graphics Details
- Clear Linux 34100: GLAMOR

Python Details
- Clear Linux 34100: Python 3.9.1
- Fedora Workstation 33: Python 3.9.0

Security Details
- Clear Linux 34100: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional IBRS_FW STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected
- Fedora Workstation 33: SELinux + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional IBRS_FW STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Clear Linux 34100 vs. Fedora Workstation 33 Comparison (Phoronix Test Suite overview graph): per-test percentage differences between the two operating systems, ranging from the baseline up to roughly +297%. The largest deltas appear in simdjson (Kostya 297%, PartialTweets 287.5%, DistinctUserID 269.5%), Incompact3D Cylinder (162.2%), the LevelDB fill/overwrite tests (around 120%), simdjson LargeRandom (75.8%), and x265 Bosphorus 1080p (74.9%); the individual graphs below carry the full data.

Ryzen 9 5900X Clear Linux - combined results table for Clear Linux 34100 and Fedora Workstation 33 across all tests; the individual results are broken out in the sections below. As the per-test compiler notes indicate, most of the C/C++ test binaries on Clear Linux were built with -O3 and -mtune=skylake (plus the distribution's hardening flags), while the corresponding Fedora builds typically used -O2.

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.
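For a sense of the API this benchmark exercises, here is a minimal sketch of a DOM-style parse with simdjson in C++; the tiny JSON document and field names are invented for the illustration, whereas the benchmark itself parses the Kostya/PartialTweets/LargeRandom sample files bundled with simdjson.

    #include <iostream>
    #include <string>
    #include <string_view>
    #include "simdjson.h"

    int main() {
        // Tiny in-memory document; the benchmark parses much larger sample JSON files.
        simdjson::dom::parser parser;
        simdjson::padded_string json(std::string(R"({"id": 42, "name": "kostya"})"));

        simdjson::dom::element doc = parser.parse(json);  // throws simdjson_error on failure
        int64_t id = doc["id"];                           // typed field access
        std::string_view name = doc["name"];

        std::cout << "id=" << id << " name=" << name << "\n";
        return 0;
    }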

simdjson 0.7.1 - Throughput Test: Kostya (GB/s, more is better): Clear Linux 34100: 2.66 (SE +/- 0.03, N = 3); Fedora Workstation 33: 0.67 (SE +/- 0.01, N = 3)

simdjson 0.7.1 - Throughput Test: PartialTweets (GB/s, more is better): Clear Linux 34100: 3.41 (SE +/- 0.03, N = 3); Fedora Workstation 33: 0.88 (SE +/- 0.01, N = 3)

Incompact3D

Incompact3d is a Fortran-MPI based, finite-difference high-performance code for solving the incompressible Navier-Stokes equations together with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.
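For context only (not part of the test profile itself), the governing equations Incompact3d integrates are the incompressible Navier-Stokes equations plus optional passive-scalar transport, in the usual notation (velocity u, pressure p, density rho, kinematic viscosity nu, scalar phi, diffusivity kappa):

    \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}
        = -\frac{1}{\rho}\nabla p + \nu\,\nabla^{2}\mathbf{u},
    \qquad \nabla\cdot\mathbf{u} = 0,
    \qquad
    \frac{\partial \phi}{\partial t} + (\mathbf{u}\cdot\nabla)\phi = \kappa\,\nabla^{2}\phi .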

Incompact3D 2020-09-17 - Input: Cylinder (Seconds, fewer is better): Clear Linux 34100: 78.65 (SE +/- 1.07, N = 3); Fedora Workstation 33: 206.19 (SE +/- 0.25, N = 3). (gfortran + MPI build)

LevelDB

LevelDB is a key-value storage library developed by Google that supports making use of Snappy for data compression and has other modern features. Learn more via the OpenBenchmarking.org test page.
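As a rough illustration of the operations the fill and read benchmarks below exercise, here is a minimal LevelDB usage sketch in C++; the database path and keys are placeholders, and the published numbers come from LevelDB's bundled db_bench tool rather than code like this.

    #include <cassert>
    #include <string>
    #include <leveldb/db.h>

    int main() {
        leveldb::DB* db = nullptr;
        leveldb::Options options;
        options.create_if_missing = true;                 // create the database if absent
        leveldb::Status s = leveldb::DB::Open(options, "/tmp/leveldb-demo", &db);
        assert(s.ok());

        // "Fill"-style writes; sequential vs. random key order is what the
        // Seq Fill / Rand Fill / Overwrite benchmarks vary.
        for (int i = 0; i < 1000; ++i) {
            s = db->Put(leveldb::WriteOptions(),
                        "key" + std::to_string(i), "value-" + std::to_string(i));
            assert(s.ok());
        }

        // "Read"-style lookups (Hot Read / Random Read).
        std::string value;
        s = db->Get(leveldb::ReadOptions(), "key42", &value);
        assert(s.ok());

        delete db;
        return 0;
    }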

LevelDB 1.22 - Benchmark: Sequential Fill (Microseconds Per Op, fewer is better): Clear Linux 34100: 42.65 (SE +/- 0.02, N = 3); Fedora Workstation 33: 94.29 (SE +/- 0.85, N = 3)
LevelDB 1.22 - Benchmark: Sequential Fill (MB/s, more is better): Clear Linux 34100: 62.2 (SE +/- 0.03, N = 3); Fedora Workstation 33: 28.2 (SE +/- 0.24, N = 3)
LevelDB 1.22 - Benchmark: Random Fill (MB/s, more is better): Clear Linux 34100: 60.4 (SE +/- 0.19, N = 3); Fedora Workstation 33: 27.4 (SE +/- 0.03, N = 3)
LevelDB 1.22 - Benchmark: Random Fill (Microseconds Per Op, fewer is better): Clear Linux 34100: 43.95 (SE +/- 0.15, N = 3); Fedora Workstation 33: 96.67 (SE +/- 0.17, N = 3)
LevelDB 1.22 - Benchmark: Overwrite (Microseconds Per Op, fewer is better): Clear Linux 34100: 43.98 (SE +/- 0.09, N = 3); Fedora Workstation 33: 96.61 (SE +/- 0.10, N = 3)
LevelDB 1.22 - Benchmark: Overwrite (MB/s, more is better): Clear Linux 34100: 60.3 (SE +/- 0.12, N = 3); Fedora Workstation 33: 27.5 (SE +/- 0.03, N = 3)

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 0.7.1 - Throughput Test: LargeRandom (GB/s, more is better): Clear Linux 34100: 1.09 (SE +/- 0.01, N = 3); Fedora Workstation 33: 0.62 (SE +/- 0.01, N = 3)

x265

This is a simple test of the x265 encoder run on the CPU, measuring H.265 video encode performance with 1080p and 4K input options. Learn more via the OpenBenchmarking.org test page.

x265 3.4 - Video Input: Bosphorus 1080p (Frames Per Second, more is better): Clear Linux 34100: 86.35 (SE +/- 0.13, N = 3); Fedora Workstation 33: 49.37 (SE +/- 0.19, N = 3)

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.7.3 - Encoder Speed: 10 (Seconds, fewer is better): Clear Linux 34100: 3.161 (SE +/- 0.007, N = 3); Fedora Workstation 33: 4.880 (SE +/- 0.010, N = 3)

libavif avifenc 0.7.3 - Encoder Speed: 8 (Seconds, fewer is better): Clear Linux 34100: 3.287 (SE +/- 0.033, N = 3); Fedora Workstation 33: 5.052 (SE +/- 0.028, N = 3)

LibRaw

LibRaw is a RAW image decoder for digital camera photos. This test profile runs LibRaw's post-processing benchmark. Learn more via the OpenBenchmarking.org test page.

LibRaw 0.20 - Post-Processing Benchmark (Mpix/sec, more is better): Clear Linux 34100: 77.53 (SE +/- 0.09, N = 3); Fedora Workstation 33: 50.98 (SE +/- 0.25, N = 3)

PHPBench

PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to bench various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. Learn more via the OpenBenchmarking.org test page.

PHPBench 0.8.1 - PHP Benchmark Suite (Score, more is better): Clear Linux 34100: 1133092 (SE +/- 5322.04, N = 3); Fedora Workstation 33: 815981 (SE +/- 7560.09, N = 6)

x264

This is a simple test of the x264 encoder run on the CPU (OpenCL support disabled) with a sample video file. Learn more via the OpenBenchmarking.org test page.

x264 2019-12-17 - H.264 Video Encoding (Frames Per Second, more is better): Clear Linux 34100: 176.47 (SE +/- 1.96, N = 3); Fedora Workstation 33: 127.68 (SE +/- 0.92, N = 3)

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.
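The GET/SET results below come from Redis' own benchmarking tooling; purely as an illustration of what one SET/GET round trip looks like from a client, here is a small sketch using the hiredis C client (host, port, and key are placeholders, and the use of hiredis here is an assumption for illustration, not something this test profile does).

    #include <cstdio>
    #include <hiredis/hiredis.h>

    int main() {
        // Connect to a local Redis server (assumed to be listening on the default port).
        redisContext* ctx = redisConnect("127.0.0.1", 6379);
        if (ctx == nullptr || ctx->err) {
            std::fprintf(stderr, "connection error\n");
            return 1;
        }

        // One SET followed by one GET -- the two commands this test profile measures in bulk.
        redisReply* reply = static_cast<redisReply*>(
            redisCommand(ctx, "SET %s %s", "pts:key", "hello"));
        freeReplyObject(reply);

        reply = static_cast<redisReply*>(redisCommand(ctx, "GET %s", "pts:key"));
        if (reply != nullptr && reply->type == REDIS_REPLY_STRING)
            std::printf("GET -> %s\n", reply->str);
        freeReplyObject(reply);

        redisFree(ctx);
        return 0;
    }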

Redis 6.0.9 - Test: GET (Requests Per Second, more is better): Clear Linux 34100: 3889653.10 (SE +/- 60192.77, N = 15); Fedora Workstation 33: 2924174.98 (SE +/- 40605.62, N = 15)

C-Blosc

C-Blosc is a blocking, shuffling, high-performance lossless compression library for binary data, written in C. Learn more via the OpenBenchmarking.org test page.
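Below is a hedged sketch of the compressor API being measured, using the long-standing Blosc 1.x-style C calls (the 2.0 beta benchmarked here keeps this compatibility API but also offers a newer blosc2 context interface); the buffer size and compression level are arbitrary for the example.

    #include <cstdio>
    #include <vector>
    #include <blosc.h>

    int main() {
        blosc_init();
        blosc_set_compressor("blosclz");                  // the codec exercised by this test

        const size_t n = 1000000;
        std::vector<int> src(n);
        for (size_t i = 0; i < n; ++i) src[i] = static_cast<int>(i % 1000);

        const size_t nbytes = n * sizeof(int);
        std::vector<char> compressed(nbytes + BLOSC_MAX_OVERHEAD);
        std::vector<int> restored(n);

        // clevel 5, byte shuffle on, typesize = sizeof(int)
        int csize = blosc_compress(5, BLOSC_SHUFFLE, sizeof(int), nbytes,
                                   src.data(), compressed.data(), compressed.size());
        int dsize = blosc_decompress(compressed.data(), restored.data(), nbytes);

        std::printf("compressed %zu -> %d bytes, decompressed %d bytes\n", nbytes, csize, dsize);
        blosc_destroy();
        return 0;
    }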

C-Blosc 2.0 Beta 5 - Compressor: blosclz (MB/s, more is better): Clear Linux 34100: 15945.8 (SE +/- 67.29, N = 3); Fedora Workstation 33: 12305.4 (SE +/- 16.50, N = 3)

GLmark2

This is a test of Linaro's glmark2 port, currently using the X11 OpenGL 2.0 target. GLmark2 is a basic OpenGL benchmark. Learn more via the OpenBenchmarking.org test page.

GLmark2 2020.04 - Resolution: 3840 x 2160 (Score, more is better): Clear Linux 34100: 2075; Fedora Workstation 33: 2650

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.

Redis 6.0.9 - Test: SET (Requests Per Second, more is better): Clear Linux 34100: 2883714.25 (SE +/- 26710.78, N = 7); Fedora Workstation 33: 2267746.96 (SE +/- 33629.52, N = 13)

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4 Alpha - Speed: 10 (Frames Per Second, more is better): Clear Linux 34100: 5.217 (SE +/- 0.034, N = 3); Fedora Workstation 33: 4.183 (SE +/- 0.015, N = 3)

Timed MAFFT Alignment

This test performs an alignment of 100 pyruvate decarboxylase sequences. Learn more via the OpenBenchmarking.org test page.

Timed MAFFT Alignment 7.471 - Multiple Sequence Alignment - LSU RNA (Seconds, fewer is better): Clear Linux 34100: 7.218 (SE +/- 0.025, N = 3); Fedora Workstation 33: 8.856 (SE +/- 0.092, N = 5)

Numpy Benchmark

This is a test to obtain the general Numpy performance. Learn more via the OpenBenchmarking.org test page.

Numpy Benchmark (Score, more is better): Clear Linux 34100: 637.35 (SE +/- 6.77, N = 3); Fedora Workstation 33: 529.00 (SE +/- 3.81, N = 3)

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.
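RocksDB's API closely mirrors LevelDB's; the following minimal sketch shows the kind of Put/Get traffic that the Random Read and Read While Writing tests generate at much higher volume (the path and keys are placeholders; the actual numbers come from RocksDB's db_bench).

    #include <cassert>
    #include <string>
    #include <rocksdb/db.h>

    int main() {
        rocksdb::DB* db = nullptr;
        rocksdb::Options options;
        options.create_if_missing = true;
        rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/rocksdb-demo", &db);
        assert(s.ok());

        // Writer threads in "Read While Writing" keep issuing Puts like this...
        s = db->Put(rocksdb::WriteOptions(), "key42", "value42");
        assert(s.ok());

        // ...while reader threads issue random Gets against the keyspace.
        std::string value;
        s = db->Get(rocksdb::ReadOptions(), "key42", &value);
        assert(s.ok() && value == "value42");

        delete db;
        return 0;
    }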

Facebook RocksDB 6.3.6 - Test: Random Read (Op/s, more is better): Clear Linux 34100: 122200686 (SE +/- 125078.44, N = 3); Fedora Workstation 33: 101607762 (SE +/- 778043.91, N = 15)

DeepSpeech

Mozilla DeepSpeech is a speech-to-text engine powered by TensorFlow for machine learning and derived from Baidu's Deep Speech research paper. This test profile times the speech-to-text process for a roughly three minute audio recording. Learn more via the OpenBenchmarking.org test page.

DeepSpeech 0.6 - Acceleration: CPU (Seconds, fewer is better): Clear Linux 34100: 54.52 (SE +/- 0.28, N = 3); Fedora Workstation 33: 64.98 (SE +/- 0.30, N = 3)

SVT-AV1

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 CPU-based multi-threaded video encoder for the AV1 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 0.8 - Encoder Mode: Enc Mode 8 - Input: 1080p (Frames Per Second, more is better): Clear Linux 34100: 56.90 (SE +/- 0.03, N = 3); Fedora Workstation 33: 47.92 (SE +/- 0.26, N = 3)

SQLite Speedtest

This is a benchmark of SQLite's speedtest1 benchmark program with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.
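speedtest1 drives SQLite through a large, scripted mix of INSERT/SELECT/UPDATE workloads; a hedged miniature of that usage pattern through the SQLite C API looks like the following (the table and statements are invented for the example).

    #include <cstdio>
    #include <sqlite3.h>

    int main() {
        sqlite3* db = nullptr;
        if (sqlite3_open(":memory:", &db) != SQLITE_OK) return 1;  // in-memory database

        char* err = nullptr;
        sqlite3_exec(db, "CREATE TABLE t(i INTEGER PRIMARY KEY, v TEXT);",
                     nullptr, nullptr, &err);

        // Batched inserts inside one transaction, in the spirit of speedtest1's write phases.
        sqlite3_exec(db, "BEGIN;", nullptr, nullptr, &err);
        for (int i = 0; i < 1000; ++i) {
            char sql[128];
            std::snprintf(sql, sizeof(sql), "INSERT INTO t(v) VALUES('row %d');", i);
            sqlite3_exec(db, sql, nullptr, nullptr, &err);
        }
        sqlite3_exec(db, "COMMIT;", nullptr, nullptr, &err);

        sqlite3_close(db);
        return 0;
    }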

SQLite Speedtest 3.30 - Timed Time - Size 1,000 (Seconds, fewer is better): Clear Linux 34100: 40.06 (SE +/- 0.26, N = 3); Fedora Workstation 33: 46.70 (SE +/- 0.40, N = 3)

x265

This is a simple test of the x265 encoder run on the CPU, measuring H.265 video encode performance with 1080p and 4K input options. Learn more via the OpenBenchmarking.org test page.

x265 3.4 - Video Input: Bosphorus 4K (Frames Per Second, more is better): Clear Linux 34100: 24.96 (SE +/- 0.12, N = 3); Fedora Workstation 33: 21.74 (SE +/- 0.04, N = 3)

PyBench

This test profile reports the total time of the different average timed test results from PyBench. PyBench reports average test times for different functions such as BuiltinFunctionCalls and NestedForLoops, with this total result providing a rough estimate as to Python's average performance on a given system. This test profile runs PyBench each time for 20 rounds. Learn more via the OpenBenchmarking.org test page.

PyBench 2018-02-16 - Total For Average Test Times (Milliseconds, fewer is better): Clear Linux 34100: 737 (SE +/- 7.22, N = 3); Fedora Workstation 33: 836

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17 - Model: mobilenet-v1-1.0 (ms, fewer is better): Clear Linux 34100: 4.878 (SE +/- 0.014, N = 12); Fedora Workstation 33: 5.531 (SE +/- 0.016, N = 3)

Stockfish

This is a test of Stockfish, an advanced C++11 chess benchmark that can scale up to 128 CPU cores. Learn more via the OpenBenchmarking.org test page.

Stockfish 12 - Total Time (Nodes Per Second, more is better): Clear Linux 34100: 32794418 (SE +/- 392223.91, N = 3); Fedora Workstation 33: 36843836 (SE +/- 175887.84, N = 3)

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4 Alpha - Speed: 6 (Frames Per Second, more is better): Clear Linux 34100: 2.217 (SE +/- 0.002, N = 3); Fedora Workstation 33: 1.980 (SE +/- 0.002, N = 3)

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 5.4 - Time To Compile (Seconds, fewer is better): Clear Linux 34100: 44.40 (SE +/- 0.46, N = 5); Fedora Workstation 33: 49.57 (SE +/- 0.59, N = 4)

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.12 - Settings: UASTC Level 3 (Seconds, fewer is better): Clear Linux 34100: 33.13 (SE +/- 0.01, N = 3); Fedora Workstation 33: 36.89 (SE +/- 0.06, N = 3)

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17 - Model: resnet-v2-50 (ms, fewer is better): Clear Linux 34100: 23.69 (SE +/- 0.19, N = 12); Fedora Workstation 33: 26.24 (SE +/- 0.13, N = 3)

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4 Alpha - Speed: 5 (Frames Per Second, more is better): Clear Linux 34100: 1.639 (SE +/- 0.002, N = 3); Fedora Workstation 33: 1.491 (SE +/- 0.007, N = 3)

Node.js V8 Web Tooling Benchmark

Running the V8 project's Web-Tooling-Benchmark under Node.js. The Web-Tooling-Benchmark stresses JavaScript-related workloads common to web developers, such as Babel, TypeScript, and Babylon. This test profile measures the system's JavaScript performance with Node.js. Learn more via the OpenBenchmarking.org test page.

Node.js V8 Web Tooling Benchmark (runs/s, more is better; Node.js v14.15.1): Clear Linux 34100: 17.35 (SE +/- 0.07, N = 3); Fedora Workstation 33: 15.79 (SE +/- 0.07, N = 3)

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17 - Model: SqueezeNetV1.0 (ms, fewer is better): Clear Linux 34100: 5.574 (SE +/- 0.078, N = 12); Fedora Workstation 33: 6.115 (SE +/- 0.034, N = 3)

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

Facebook RocksDB 6.3.6 - Test: Read While Writing (Op/s, more is better): Clear Linux 34100: 3569258 (SE +/- 28070.38, N = 15); Fedora Workstation 33: 3262836 (SE +/- 35265.20, N = 4)

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.
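For reference, running a model through NCNN's C++ API looks roughly like the sketch below; the param/bin file names, blob names, and header path are assumptions for illustration, and the test profile itself runs Tencent's bundled benchmark binary over the stock model zoo.

    #include <ncnn/net.h>   // header path depends on how ncnn was installed

    int main() {
        ncnn::Net net;
        net.opt.use_vulkan_compute = false;   // false for the "CPU" targets, true for "Vulkan GPU"

        // Placeholder model files -- the benchmark uses the models shipped with ncnn.
        net.load_param("squeezenet_ssd.param");
        net.load_model("squeezenet_ssd.bin");

        ncnn::Mat in(300, 300, 3);            // dummy input tensor
        in.fill(0.5f);

        ncnn::Extractor ex = net.create_extractor();
        ex.input("data", in);                 // blob names depend on the model
        ncnn::Mat out;
        ex.extract("detection_out", out);
        return 0;
    }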

NCNN 20201218 - Target: Vulkan GPU - Model: yolov4-tiny (ms, fewer is better): Clear Linux 34100: 18.83 (SE +/- 0.08, N = 3); Fedora Workstation 33: 20.58 (SE +/- 0.43, N = 3)
NCNN 20201218 - Target: CPU - Model: regnety_400m (ms, fewer is better): Clear Linux 34100: 15.62 (SE +/- 0.07, N = 3); Fedora Workstation 33: 17.07 (SE +/- 0.16, N = 3)
NCNN 20201218 - Target: Vulkan GPU - Model: regnety_400m (ms, fewer is better): Clear Linux 34100: 15.59 (SE +/- 0.00, N = 3); Fedora Workstation 33: 16.98 (SE +/- 0.04, N = 3)

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17 - Model: inception-v3 (ms, fewer is better): Clear Linux 34100: 24.44 (SE +/- 0.20, N = 12); Fedora Workstation 33: 26.44 (SE +/- 0.16, N = 3)

SVT-AV1

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 CPU-based multi-threaded video encoder for the AV1 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 0.8 - Encoder Mode: Enc Mode 4 - Input: 1080p (Frames Per Second, more is better): Clear Linux 34100: 6.730 (SE +/- 0.019, N = 3); Fedora Workstation 33: 6.240 (SE +/- 0.004, N = 3)

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17 - Model: MobileNetV2_224 (ms, fewer is better): Clear Linux 34100: 2.844 (SE +/- 0.020, N = 12); Fedora Workstation 33: 3.058 (SE +/- 0.020, N = 3)

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: yolov4-tiny (ms, fewer is better): Clear Linux 34100: 18.85 (SE +/- 0.02, N = 3); Fedora Workstation 33: 20.25 (SE +/- 0.02, N = 3)

FFTE

FFTE is a package by Daisuke Takahashi to compute Discrete Fourier Transforms of 1-, 2- and 3- dimensional sequences of length (2^p)*(3^q)*(5^r). Learn more via the OpenBenchmarking.org test page.
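For reference, the transform FFTE evaluates is the standard discrete Fourier transform; the (2^p)*(3^q)*(5^r) length restriction is what allows mixed radix-2/3/5 butterflies:

    X_k = \sum_{n=0}^{N-1} x_n \, e^{-2\pi i k n / N}, \qquad k = 0,\dots,N-1, \qquad N = 2^{p}\,3^{q}\,5^{r}.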

FFTE 7.0 - N=256, 3D Complex FFT Routine (MFLOPS, more is better): Clear Linux 34100: 41134.20 (SE +/- 91.08, N = 3); Fedora Workstation 33: 38377.54 (SE +/- 23.97, N = 3). (gfortran build: -O3 -fomit-frame-pointer -fopenmp)

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.7.3 - Encoder Speed: 2 (Seconds, fewer is better): Clear Linux 34100: 33.69 (SE +/- 0.33, N = 6); Fedora Workstation 33: 36.10 (SE +/- 0.22, N = 3)

RealSR-NCNN

RealSR-NCNN is an NCNN neural network implementation of the RealSR project and accelerated using the Vulkan API. RealSR is the Real-World Super Resolution via Kernel Estimation and Noise Injection. NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. This test profile times how long it takes to increase the resolution of a sample image by a scale of 4x with Vulkan. Learn more via the OpenBenchmarking.org test page.

RealSR-NCNN 20200818 - Scale: 4x - TAA: Yes (Seconds, fewer is better): Clear Linux 34100: 84.91 (SE +/- 0.02, N = 3); Fedora Workstation 33: 79.94 (SE +/- 0.04, N = 3)

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.26 - Backend: BLAS (Nodes Per Second, more is better): Clear Linux 34100: 544; Fedora Workstation 33: 577 (SE +/- 2.33, N = 3)

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: Vulkan GPU - Model: squeezenet_ssd (ms, fewer is better): Clear Linux 34100: 13.27 (SE +/- 0.03, N = 3); Fedora Workstation 33: 14.07 (SE +/- 0.10, N = 3)

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.
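A minimal sketch of the LZ4 block API that underlies this compress/decompress measurement (the input buffer is a placeholder; the test itself runs the lz4 reference benchmark over an Ubuntu ISO):

    #include <cstdio>
    #include <vector>
    #include <lz4.h>

    int main() {
        const char src[] = "The quick brown fox jumps over the lazy dog. "
                           "The quick brown fox jumps over the lazy dog.";
        const int src_size = static_cast<int>(sizeof(src));

        std::vector<char> compressed(LZ4_compressBound(src_size));
        int csize = LZ4_compress_default(src, compressed.data(), src_size,
                                         static_cast<int>(compressed.size()));

        std::vector<char> restored(src_size);
        int dsize = LZ4_decompress_safe(compressed.data(), restored.data(),
                                        csize, src_size);

        std::printf("compressed %d -> %d bytes, decompressed %d bytes\n", src_size, csize, dsize);
        return 0;
    }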

LZ4 Compression 1.9.3 - Compression Level: 9 - Decompression Speed (MB/s, more is better): Clear Linux 34100: 13921.7 (SE +/- 40.00, N = 3); Fedora Workstation 33: 13168.4 (SE +/- 37.15, N = 3)

Timed HMMer Search

This test searches through the Pfam database of profile hidden Markov models. The search finds the domain structure of Drosophila Sevenless protein. Learn more via the OpenBenchmarking.org test page.

Timed HMMer Search 3.3.1 - Pfam Database Search (Seconds, fewer is better): Clear Linux 34100: 79.05 (SE +/- 0.04, N = 3); Fedora Workstation 33: 83.43 (SE +/- 0.07, N = 3)

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: squeezenet_ssd (ms, fewer is better): Clear Linux 34100: 13.39 (SE +/- 0.01, N = 3); Fedora Workstation 33: 14.13 (SE +/- 0.03, N = 3)

RNNoise

RNNoise is a recurrent neural network for audio noise reduction developed by Mozilla and Xiph.Org. This test profile is a single-threaded test measuring the time to denoise a sample 26 minute long 16-bit RAW audio file using this recurrent neural network noise suppression library. Learn more via the OpenBenchmarking.org test page.

RNNoise 2020-06-28 (Seconds, fewer is better): Clear Linux 34100: 14.51 (SE +/- 0.12, N = 3); Fedora Workstation 33: 15.22 (SE +/- 0.19, N = 3)

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 - Compression Level: 3 - Decompression Speed (MB/s, more is better): Clear Linux 34100: 13899.3 (SE +/- 32.58, N = 3); Fedora Workstation 33: 13253.7 (SE +/- 8.58, N = 3)

RealSR-NCNN

RealSR-NCNN is an NCNN neural network implementation of the RealSR project and accelerated using the Vulkan API. RealSR is the Real-World Super Resolution via Kernel Estimation and Noise Injection. NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. This test profile times how long it takes to increase the resolution of a sample image by a scale of 4x with Vulkan. Learn more via the OpenBenchmarking.org test page.

RealSR-NCNN 20200818 - Scale: 4x - TAA: No (Seconds, fewer is better): Clear Linux 34100: 12.32 (SE +/- 0.03, N = 3); Fedora Workstation 33: 11.78 (SE +/- 0.00, N = 3)

GROMACS

The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing on the CPU with the water_GMX50 data. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgNs Per Day, More Is BetterGROMACS 2020.3Water BenchmarkClear Linux 34100Fedora Workstation 330.29180.58360.87541.16721.459SE +/- 0.003, N = 3SE +/- 0.001, N = 31.2971.242-O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake-O21. (CXX) g++ options: -pthread -lrt -lpthread -lm -ldl
OpenBenchmarking.orgNs Per Day, More Is BetterGROMACS 2020.3Water BenchmarkClear Linux 34100Fedora Workstation 33246810Min: 1.29 / Avg: 1.3 / Max: 1.3Min: 1.24 / Avg: 1.24 / Max: 1.251. (CXX) g++ options: -pthread -lrt -lpthread -lm -ldl

LevelDB

LevelDB is a key-value storage library developed by Google that can make use of Snappy for data compression and has other modern features. Learn more via the OpenBenchmarking.org test page.
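
The read benchmarks in this group essentially time repeated Get() calls against a pre-populated store. A minimal, hypothetical usage sketch of the LevelDB C++ API (the database path is arbitrary):

    #include <cassert>
    #include <iostream>
    #include <string>
    #include "leveldb/db.h"

    int main()
    {
        leveldb::DB* db = nullptr;
        leveldb::Options options;
        options.create_if_missing = true;   // Snappy compression is used by default when available

        leveldb::Status s = leveldb::DB::Open(options, "/tmp/leveldb-demo", &db);
        assert(s.ok());

        db->Put(leveldb::WriteOptions(), "key1", "value1");   // the fill/overwrite tests hammer this path

        std::string value;
        db->Get(leveldb::ReadOptions(), "key1", &value);      // the hot/random read tests hammer this one
        std::cout << value << std::endl;

        delete db;
        return 0;
    }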

OpenBenchmarking.orgMicroseconds Per Op, Fewer Is BetterLevelDB 1.22Benchmark: Hot ReadClear Linux 34100Fedora Workstation 333691215SE +/- 0.11, N = 3SE +/- 0.05, N = 310.6811.14-O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake-O21. (CXX) g++ options: -lsnappy -lpthread
OpenBenchmarking.orgMicroseconds Per Op, Fewer Is BetterLevelDB 1.22Benchmark: Hot ReadClear Linux 34100Fedora Workstation 333691215Min: 10.56 / Avg: 10.68 / Max: 10.91Min: 11.09 / Avg: 11.14 / Max: 11.231. (CXX) g++ options: -lsnappy -lpthread

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU - Model: vgg16Clear Linux 34100Fedora Workstation 331224364860SE +/- 0.20, N = 3SE +/- 0.30, N = 350.8052.99-O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake - MIN: 50 / MAX: 53.22-O2 - MIN: 51.94 / MAX: 61.721. (CXX) g++ options: -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU - Model: vgg16Clear Linux 34100Fedora Workstation 331122334455Min: 50.4 / Avg: 50.8 / Max: 51.04Min: 52.39 / Avg: 52.99 / Max: 53.311. (CXX) g++ options: -rdynamic -lgomp -lpthread

LevelDB

LevelDB is a key-value storage library developed by Google that can make use of Snappy for data compression and has other modern features. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMicroseconds Per Op, Fewer Is BetterLevelDB 1.22Benchmark: Random ReadClear Linux 34100Fedora Workstation 333691215SE +/- 0.05, N = 3SE +/- 0.09, N = 310.7511.20-O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake-O21. (CXX) g++ options: -lsnappy -lpthread
OpenBenchmarking.orgMicroseconds Per Op, Fewer Is BetterLevelDB 1.22Benchmark: Random ReadClear Linux 34100Fedora Workstation 333691215Min: 10.66 / Avg: 10.75 / Max: 10.81Min: 11.09 / Avg: 11.2 / Max: 11.381. (CXX) g++ options: -lsnappy -lpthread

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: vgg16Clear Linux 34100Fedora Workstation 331224364860SE +/- 0.02, N = 3SE +/- 0.14, N = 351.0053.09-O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake - MIN: 50.51 / MAX: 51.55-O2 - MIN: 52.31 / MAX: 55.651. (CXX) g++ options: -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: vgg16Clear Linux 34100Fedora Workstation 331122334455Min: 50.98 / Avg: 51 / Max: 51.03Min: 52.8 / Avg: 53.09 / Max: 53.251. (CXX) g++ options: -rdynamic -lgomp -lpthread

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.
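
The fill tests are driven by RocksDB's db_bench tool and conceptually amount to a stream of Put() calls, optionally with synchronous writes. A minimal sketch of the underlying API, with an arbitrary database path:

    #include <cassert>
    #include <string>
    #include "rocksdb/db.h"

    int main()
    {
        rocksdb::DB* db = nullptr;
        rocksdb::Options options;
        options.create_if_missing = true;

        rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/rocksdb-demo", &db);
        assert(s.ok());

        rocksdb::WriteOptions wopts;
        // wopts.sync = true;   // the separate "Random Fill Sync" test forces a sync on every write
        db->Put(wopts, "key-42", "value");   // "Random Fill" repeats this with random keys

        std::string value;
        db->Get(rocksdb::ReadOptions(), "key-42", &value);

        delete db;
        return 0;
    }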

OpenBenchmarking.orgOp/s, More Is BetterFacebook RocksDB 6.3.6Test: Random FillClear Linux 34100Fedora Workstation 33300K600K900K1200K1500KSE +/- 1801.14, N = 3SE +/- 771.67, N = 315148851459897-O21. (CXX) g++ options: -O3 -march=native -std=c++11 -fno-builtin-memcmp -fno-rtti -rdynamic -lpthread
OpenBenchmarking.orgOp/s, More Is BetterFacebook RocksDB 6.3.6Test: Random FillClear Linux 34100Fedora Workstation 33300K600K900K1200K1500KMin: 1511682 / Avg: 1514885.33 / Max: 1517914Min: 1458369 / Avg: 1459897 / Max: 14608491. (CXX) g++ options: -O3 -march=native -std=c++11 -fno-builtin-memcmp -fno-rtti -rdynamic -lpthread

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: alexnetClear Linux 34100Fedora Workstation 333691215SE +/- 0.05, N = 3SE +/- 0.04, N = 39.9810.35-O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake - MIN: 9.87 / MAX: 10.17-O2 - MIN: 10.26 / MAX: 10.531. (CXX) g++ options: -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: alexnetClear Linux 34100Fedora Workstation 333691215Min: 9.91 / Avg: 9.98 / Max: 10.08Min: 10.3 / Avg: 10.35 / Max: 10.421. (CXX) g++ options: -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU - Model: alexnetClear Linux 34100Fedora Workstation 333691215SE +/- 0.04, N = 3SE +/- 0.02, N = 39.9410.30-O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake - MIN: 9.85 / MAX: 10.09-O2 - MIN: 10.2 / MAX: 10.511. (CXX) g++ options: -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU - Model: alexnetClear Linux 34100Fedora Workstation 333691215Min: 9.9 / Avg: 9.94 / Max: 10.02Min: 10.26 / Avg: 10.3 / Max: 10.321. (CXX) g++ options: -rdynamic -lgomp -lpthread

Himeno Benchmark

The Himeno benchmark is a linear solver for the pressure Poisson equation using a point-Jacobi method. Learn more via the OpenBenchmarking.org test page.
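
As a rough illustration of a point-Jacobi sweep (the real Himeno kernel is a 3-D 19-point stencil with variable coefficients, so this 2-D five-point version is only a simplified sketch):

    #include <cstdio>
    #include <vector>

    // Jacobi iterations for u_xx + u_yy = f on a unit square with zero Dirichlet boundaries.
    int main()
    {
        const int N = 64;
        const double h = 1.0 / (N - 1);
        std::vector<double> u(N * N, 0.0), unew(N * N, 0.0), f(N * N, 1.0);

        for (int iter = 0; iter < 1000; ++iter) {
            for (int i = 1; i < N - 1; ++i)
                for (int j = 1; j < N - 1; ++j)
                    unew[i * N + j] = 0.25 * (u[(i - 1) * N + j] + u[(i + 1) * N + j] +
                                              u[i * N + j - 1] + u[i * N + j + 1] -
                                              h * h * f[i * N + j]);
            u.swap(unew);   // every point is updated from the previous iterate only
        }
        std::printf("u at centre: %g\n", u[(N / 2) * N + N / 2]);
        return 0;
    }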

OpenBenchmarking.orgMFLOPS, More Is BetterHimeno Benchmark 3.0Poisson Pressure SolverClear Linux 34100Fedora Workstation 3311002200330044005500SE +/- 64.56, N = 15SE +/- 75.77, N = 155152.285336.69-pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake1. (CC) gcc options: -O3 -mavx2
OpenBenchmarking.orgMFLOPS, More Is BetterHimeno Benchmark 3.0Poisson Pressure SolverClear Linux 34100Fedora Workstation 339001800270036004500Min: 4834.94 / Avg: 5152.28 / Max: 5704.92Min: 4932.09 / Avg: 5336.69 / Max: 5809.031. (CC) gcc options: -O3 -mavx2

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgM samples/s, More Is BetterIndigoBench 4.4Acceleration: CPU - Scene: SupercarClear Linux 34100Fedora Workstation 33246810SE +/- 0.008, N = 3SE +/- 0.022, N = 37.2136.970
OpenBenchmarking.orgM samples/s, More Is BetterIndigoBench 4.4Acceleration: CPU - Scene: SupercarClear Linux 34100Fedora Workstation 333691215Min: 7.2 / Avg: 7.21 / Max: 7.23Min: 6.94 / Avg: 6.97 / Max: 7.01

ASTC Encoder

ASTC Encoder (astcenc) is an encoder for the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterASTC Encoder 2.0Preset: ThoroughClear Linux 34100Fedora Workstation 3348121620SE +/- 0.02, N = 3SE +/- 0.02, N = 315.0215.511. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread
OpenBenchmarking.orgSeconds, Fewer Is BetterASTC Encoder 2.0Preset: ThoroughClear Linux 34100Fedora Workstation 3348121620Min: 15 / Avg: 15.02 / Max: 15.06Min: 15.49 / Avg: 15.51 / Max: 15.541. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgM samples/s, More Is BetterIndigoBench 4.4Acceleration: CPU - Scene: BedroomClear Linux 34100Fedora Workstation 330.77471.54942.32413.09883.8735SE +/- 0.004, N = 3SE +/- 0.012, N = 33.4433.335
OpenBenchmarking.orgM samples/s, More Is BetterIndigoBench 4.4Acceleration: CPU - Scene: BedroomClear Linux 34100Fedora Workstation 33246810Min: 3.44 / Avg: 3.44 / Max: 3.45Min: 3.31 / Avg: 3.34 / Max: 3.35

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 3 - Compression SpeedClear Linux 34100Fedora Workstation 3320406080100SE +/- 0.81, N = 3SE +/- 0.44, N = 372.1074.411. (CC) gcc options: -O3
OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 3 - Compression SpeedClear Linux 34100Fedora Workstation 331428425670Min: 70.96 / Avg: 72.1 / Max: 73.67Min: 73.57 / Avg: 74.41 / Max: 75.071. (CC) gcc options: -O3

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.
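
For illustration, the single-shot API the benchmark stresses looks like the sketch below, here at level 19 to match the slower run; the test itself feeds the Ubuntu ISO through the zstd tool rather than this toy buffer.

    #include <cstdio>
    #include <cstring>
    #include <vector>
    #include <zstd.h>

    int main()
    {
        const char src[] = "a small stand-in for the Ubuntu ISO compressed by this test";
        const size_t srcSize = sizeof(src);

        std::vector<char> dst(ZSTD_compressBound(srcSize));
        const size_t cSize = ZSTD_compress(dst.data(), dst.size(), src, srcSize, 19);
        if (ZSTD_isError(cSize)) { std::puts(ZSTD_getErrorName(cSize)); return 1; }

        std::vector<char> out(srcSize);
        const size_t dSize = ZSTD_decompress(out.data(), out.size(), dst.data(), cSize);

        std::printf("%zu -> %zu bytes, round-trip ok: %d\n", srcSize, cSize,
                    !ZSTD_isError(dSize) && dSize == srcSize &&
                    std::memcmp(src, out.data(), srcSize) == 0);
        return 0;
    }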

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.4.5Compression Level: 19Clear Linux 34100Fedora Workstation 331020304050SE +/- 0.10, N = 3SE +/- 0.03, N = 343.745.1-pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake1. (CC) gcc options: -O3 -pthread -lz
OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.4.5Compression Level: 19Clear Linux 34100Fedora Workstation 33918273645Min: 43.5 / Avg: 43.7 / Max: 43.8Min: 45.1 / Avg: 45.13 / Max: 45.21. (CC) gcc options: -O3 -pthread -lz

miniFE

MiniFE is a finite element mini-application serving as a proxy for unstructured implicit finite element codes. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgCG Mflops, More Is BetterminiFE 2.2Problem Size: SmallClear Linux 34100Fedora Workstation 339001800270036004500SE +/- 1.02, N = 3SE +/- 1.15, N = 34164.284296.56-lmpi_cxx1. (CXX) g++ options: -O3 -fopenmp -pthread -lmpi
OpenBenchmarking.orgCG Mflops, More Is BetterminiFE 2.2Problem Size: SmallClear Linux 34100Fedora Workstation 337001400210028003500Min: 4163.2 / Avg: 4164.28 / Max: 4166.31Min: 4294.83 / Avg: 4296.56 / Max: 4298.741. (CXX) g++ options: -O3 -fopenmp -pthread -lmpi

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
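
The benchmark invokes the cwebp command-line tool, but the work maps onto libwebp's simple encoding API; a minimal sketch of the lossless path (the image here is a tiny synthetic stand-in, not the 6000x4000 test input):

    #include <cstdint>
    #include <cstdio>
    #include <vector>
    #include <webp/encode.h>

    int main()
    {
        const int w = 64, h = 64;
        std::vector<uint8_t> rgba(w * h * 4, 255);   // opaque white test image

        uint8_t* output = nullptr;
        // The lossy "Quality 100" settings would instead use WebPEncodeRGBA(..., 100.0f, &output).
        const size_t size = WebPEncodeLosslessRGBA(rgba.data(), w, h, w * 4, &output);
        std::printf("encoded %zu bytes of WebP\n", size);

        WebPFree(output);
        return 0;
    }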

OpenBenchmarking.orgEncode Time - Seconds, Fewer Is BetterWebP Image Encode 1.1Encode Settings: Quality 100, Lossless, Highest CompressionClear Linux 34100Fedora Workstation 33612182430SE +/- 0.13, N = 3SE +/- 0.15, N = 326.6427.48-O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -ltiff-O21. (CC) gcc options: -fvisibility=hidden -pthread -lm -ljpeg -lpng16
OpenBenchmarking.orgEncode Time - Seconds, Fewer Is BetterWebP Image Encode 1.1Encode Settings: Quality 100, Lossless, Highest CompressionClear Linux 34100Fedora Workstation 33612182430Min: 26.49 / Avg: 26.64 / Max: 26.89Min: 27.18 / Avg: 27.48 / Max: 27.661. (CC) gcc options: -fvisibility=hidden -pthread -lm -ljpeg -lpng16

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU - Model: mnasnetClear Linux 34100Fedora Workstation 330.85951.7192.57853.4384.2975SE +/- 0.01, N = 3SE +/- 0.02, N = 33.713.82-O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake - MIN: 3.69 / MAX: 3.9-O2 - MIN: 3.75 / MAX: 4.341. (CXX) g++ options: -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU - Model: mnasnetClear Linux 34100Fedora Workstation 33246810Min: 3.7 / Avg: 3.71 / Max: 3.73Min: 3.79 / Avg: 3.82 / Max: 3.851. (CXX) g++ options: -rdynamic -lgomp -lpthread

asmFish

This is a test of asmFish, an advanced chess benchmark written in Assembly. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgNodes/second, More Is BetterasmFish 2018-07-231024 Hash Memory, 26 DepthClear Linux 34100Fedora Workstation 3311M22M33M44M55MSE +/- 565989.30, N = 4SE +/- 218738.44, N = 35018248048873668
OpenBenchmarking.orgNodes/second, More Is BetterasmFish 2018-07-231024 Hash Memory, 26 DepthClear Linux 34100Fedora Workstation 339M18M27M36M45MMin: 48950805 / Avg: 50182479.5 / Max: 51694980Min: 48484829 / Avg: 48873668 / Max: 49241704

Chaos Group V-RAY

This is a test of Chaos Group's V-RAY benchmark. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgKsamples, More Is BetterChaos Group V-RAY 4.10.07Mode: CPUClear Linux 34100Fedora Workstation 336K12K18K24K30KSE +/- 88.64, N = 3SE +/- 117.22, N = 32699626344
OpenBenchmarking.orgKsamples, More Is BetterChaos Group V-RAY 4.10.07Mode: CPUClear Linux 34100Fedora Workstation 335K10K15K20K25KMin: 26876 / Avg: 26996 / Max: 27169Min: 26177 / Avg: 26344 / Max: 26570

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 9 - Compression SpeedClear Linux 34100Fedora Workstation 331632486480SE +/- 0.99, N = 3SE +/- 0.44, N = 369.9868.301. (CC) gcc options: -O3
OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 9 - Compression SpeedClear Linux 34100Fedora Workstation 331428425670Min: 68.93 / Avg: 69.98 / Max: 71.95Min: 67.55 / Avg: 68.3 / Max: 69.071. (CC) gcc options: -O3

Timed FFmpeg Compilation

This test times how long it takes to build the FFmpeg multimedia library. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed FFmpeg Compilation 4.2.2Time To CompileClear Linux 34100Fedora Workstation 33714212835SE +/- 0.07, N = 3SE +/- 0.02, N = 330.7330.04
OpenBenchmarking.orgSeconds, Fewer Is BetterTimed FFmpeg Compilation 4.2.2Time To CompileClear Linux 34100Fedora Workstation 33714212835Min: 30.6 / Avg: 30.73 / Max: 30.81Min: 30.01 / Avg: 30.04 / Max: 30.07

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU-v3-v3 - Model: mobilenet-v3Clear Linux 34100Fedora Workstation 330.9181.8362.7543.6724.59SE +/- 0.04, N = 3SE +/- 0.02, N = 34.083.99-O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake - MIN: 3.94 / MAX: 16.5-O2 - MIN: 3.92 / MAX: 4.571. (CXX) g++ options: -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU-v3-v3 - Model: mobilenet-v3Clear Linux 34100Fedora Workstation 33246810Min: 4.01 / Avg: 4.08 / Max: 4.13Min: 3.94 / Avg: 3.99 / Max: 4.021. (CXX) g++ options: -rdynamic -lgomp -lpthread

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterTNN 0.2.3Target: CPU - Model: MobileNet v2Clear Linux 34100Fedora Workstation 3350100150200250SE +/- 1.24, N = 3SE +/- 2.37, N = 3210.69215.40-O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake - MIN: 205.04 / MAX: 218.74-O2 - MIN: 209.05 / MAX: 237.631. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -rdynamic -ldl
OpenBenchmarking.orgms, Fewer Is BetterTNN 0.2.3Target: CPU - Model: MobileNet v2Clear Linux 34100Fedora Workstation 334080120160200Min: 209.35 / Avg: 210.69 / Max: 213.16Min: 212.9 / Avg: 215.4 / Max: 220.141. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -rdynamic -ldl

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: blazefaceClear Linux 34100Fedora Workstation 330.41180.82361.23541.64722.059SE +/- 0.01, N = 3SE +/- 0.01, N = 31.831.79-O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake - MIN: 1.8 / MAX: 2.01-O2 - MIN: 1.76 / MAX: 2.461. (CXX) g++ options: -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: blazefaceClear Linux 34100Fedora Workstation 33246810Min: 1.82 / Avg: 1.83 / Max: 1.84Min: 1.78 / Avg: 1.79 / Max: 1.81. (CXX) g++ options: -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: resnet18Clear Linux 34100Fedora Workstation 333691215SE +/- 0.02, N = 3SE +/- 0.05, N = 312.5912.87-O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake - MIN: 12.5 / MAX: 12.77-O2 - MIN: 12.71 / MAX: 13.861. (CXX) g++ options: -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: resnet18Clear Linux 34100Fedora Workstation 3348121620Min: 12.57 / Avg: 12.59 / Max: 12.62Min: 12.79 / Avg: 12.87 / Max: 12.971. (CXX) g++ options: -rdynamic -lgomp -lpthread

Coremark

This is a test of the EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgIterations/Sec, More Is BetterCoremark 1.0CoreMark Size 666 - Iterations Per SecondClear Linux 34100Fedora Workstation 33140K280K420K560K700KSE +/- 9204.52, N = 3SE +/- 1428.64, N = 3657805.49645061.66-O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake1. (CC) gcc options: -O2 -lrt" -lrt
OpenBenchmarking.orgIterations/Sec, More Is BetterCoremark 1.0CoreMark Size 666 - Iterations Per SecondClear Linux 34100Fedora Workstation 33110K220K330K440K550KMin: 639403.22 / Avg: 657805.49 / Max: 667439.17Min: 642426.95 / Avg: 645061.66 / Max: 647336.481. (CC) gcc options: -O2 -lrt" -lrt

Crafty

This is a performance test of Crafty, an advanced open-source chess engine. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgNodes Per Second, More Is BetterCrafty 25.2Elapsed TimeClear Linux 34100Fedora Workstation 333M6M9M12M15MSE +/- 145230.66, N = 3SE +/- 45937.01, N = 311748939119765301. (CC) gcc options: -pthread -lstdc++ -fprofile-use -lm
OpenBenchmarking.orgNodes Per Second, More Is BetterCrafty 25.2Elapsed TimeClear Linux 34100Fedora Workstation 332M4M6M8M10MMin: 11525989 / Avg: 11748938.67 / Max: 12021646Min: 11884788 / Avg: 11976530 / Max: 120266651. (CC) gcc options: -pthread -lstdc++ -fprofile-use -lm

Dolfyn

Dolfyn is a Computational Fluid Dynamics (CFD) code using modern numerical simulation techniques. The Dolfyn test profile measures the execution time of the computational fluid dynamics demos bundled with Dolfyn. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterDolfyn 0.527Computational Fluid DynamicsClear Linux 34100Fedora Workstation 333691215SE +/- 0.12, N = 3SE +/- 0.15, N = 313.2212.99
OpenBenchmarking.orgSeconds, Fewer Is BetterDolfyn 0.527Computational Fluid DynamicsClear Linux 34100Fedora Workstation 3348121620Min: 12.97 / Avg: 13.22 / Max: 13.35Min: 12.76 / Avg: 12.99 / Max: 13.27

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU - Model: resnet50Clear Linux 34100Fedora Workstation 33612182430SE +/- 0.08, N = 3SE +/- 0.18, N = 324.0523.65-O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake - MIN: 23.68 / MAX: 24.47-O2 - MIN: 23.13 / MAX: 24.841. (CXX) g++ options: -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU - Model: resnet50Clear Linux 34100Fedora Workstation 33612182430Min: 23.95 / Avg: 24.05 / Max: 24.2Min: 23.3 / Avg: 23.65 / Max: 23.891. (CXX) g++ options: -rdynamic -lgomp -lpthread

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgEncode Time - Seconds, Fewer Is BetterWebP Image Encode 1.1Encode Settings: Quality 100, LosslessClear Linux 34100Fedora Workstation 333691215SE +/- 0.16, N = 3SE +/- 0.15, N = 312.7412.95-O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -ltiff-O21. (CC) gcc options: -fvisibility=hidden -pthread -lm -ljpeg -lpng16
OpenBenchmarking.orgEncode Time - Seconds, Fewer Is BetterWebP Image Encode 1.1Encode Settings: Quality 100, LosslessClear Linux 34100Fedora Workstation 3348121620Min: 12.42 / Avg: 12.74 / Max: 12.92Min: 12.65 / Avg: 12.95 / Max: 13.111. (CC) gcc options: -fvisibility=hidden -pthread -lm -ljpeg -lpng16

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: shufflenet-v2Clear Linux 34100Fedora Workstation 331.09582.19163.28744.38325.479SE +/- 0.02, N = 3SE +/- 0.01, N = 34.874.79-O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake - MIN: 4.81 / MAX: 5.05-O2 - MIN: 4.74 / MAX: 5.21. (CXX) g++ options: -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: shufflenet-v2Clear Linux 34100Fedora Workstation 33246810Min: 4.85 / Avg: 4.87 / Max: 4.9Min: 4.78 / Avg: 4.79 / Max: 4.821. (CXX) g++ options: -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: mnasnetClear Linux 34100Fedora Workstation 330.85951.7192.57853.4384.2975SE +/- 0.03, N = 3SE +/- 0.02, N = 33.763.82-O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake - MIN: 3.7 / MAX: 3.97-O2 - MIN: 3.75 / MAX: 4.91. (CXX) g++ options: -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: mnasnetClear Linux 34100Fedora Workstation 33246810Min: 3.72 / Avg: 3.76 / Max: 3.82Min: 3.79 / Avg: 3.82 / Max: 3.851. (CXX) g++ options: -rdynamic -lgomp -lpthread

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgEncode Time - Seconds, Fewer Is BetterWebP Image Encode 1.1Encode Settings: Quality 100, Highest CompressionClear Linux 34100Fedora Workstation 331.23952.4793.71854.9586.1975SE +/- 0.056, N = 3SE +/- 0.010, N = 35.5095.425-O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -ltiff-O21. (CC) gcc options: -fvisibility=hidden -pthread -lm -ljpeg -lpng16
OpenBenchmarking.orgEncode Time - Seconds, Fewer Is BetterWebP Image Encode 1.1Encode Settings: Quality 100, Highest CompressionClear Linux 34100Fedora Workstation 33246810Min: 5.4 / Avg: 5.51 / Max: 5.58Min: 5.41 / Avg: 5.43 / Max: 5.451. (CC) gcc options: -fvisibility=hidden -pthread -lm -ljpeg -lpng16

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU - Model: googlenetClear Linux 34100Fedora Workstation 333691215SE +/- 0.13, N = 3SE +/- 0.06, N = 312.2012.02-O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake - MIN: 11.96 / MAX: 19.56-O2 - MIN: 11.89 / MAX: 18.241. (CXX) g++ options: -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU - Model: googlenetClear Linux 34100Fedora Workstation 3348121620Min: 12.04 / Avg: 12.2 / Max: 12.46Min: 11.96 / Avg: 12.02 / Max: 12.131. (CXX) g++ options: -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU - Model: shufflenet-v2Clear Linux 34100Fedora Workstation 331.0892.1783.2674.3565.445SE +/- 0.01, N = 3SE +/- 0.03, N = 34.844.77-O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake - MIN: 4.78 / MAX: 4.97-O2 - MIN: 4.69 / MAX: 5.171. (CXX) g++ options: -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU - Model: shufflenet-v2Clear Linux 34100Fedora Workstation 33246810Min: 4.82 / Avg: 4.84 / Max: 4.86Min: 4.71 / Avg: 4.77 / Max: 4.831. (CXX) g++ options: -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: efficientnet-b0Clear Linux 34100Fedora Workstation 331.1882.3763.5644.7525.94SE +/- 0.02, N = 3SE +/- 0.02, N = 35.215.28-O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake - MIN: 5.17 / MAX: 5.42-O2 - MIN: 5.22 / MAX: 5.641. (CXX) g++ options: -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: efficientnet-b0Clear Linux 34100Fedora Workstation 33246810Min: 5.19 / Avg: 5.21 / Max: 5.25Min: 5.25 / Avg: 5.28 / Max: 5.321. (CXX) g++ options: -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: resnet50Clear Linux 34100Fedora Workstation 33612182430SE +/- 0.05, N = 3SE +/- 0.02, N = 324.1623.84-O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake - MIN: 23.93 / MAX: 24.64-O2 - MIN: 23.59 / MAX: 24.791. (CXX) g++ options: -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: resnet50Clear Linux 34100Fedora Workstation 33612182430Min: 24.07 / Avg: 24.16 / Max: 24.22Min: 23.8 / Avg: 23.84 / Max: 23.881. (CXX) g++ options: -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU - Model: resnet18Clear Linux 34100Fedora Workstation 333691215SE +/- 0.09, N = 3SE +/- 0.04, N = 312.6712.83-O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake - MIN: 12.49 / MAX: 12.92-O2 - MIN: 12.66 / MAX: 13.761. (CXX) g++ options: -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU - Model: resnet18Clear Linux 34100Fedora Workstation 3348121620Min: 12.57 / Avg: 12.67 / Max: 12.86Min: 12.75 / Avg: 12.83 / Max: 12.871. (CXX) g++ options: -rdynamic -lgomp -lpthread

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.4.5Compression Level: 3Clear Linux 34100Fedora Workstation 3311002200330044005500SE +/- 15.97, N = 3SE +/- 42.48, N = 35038.55099.8-pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake1. (CC) gcc options: -O3 -pthread -lz
OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.4.5Compression Level: 3Clear Linux 34100Fedora Workstation 339001800270036004500Min: 5010.5 / Avg: 5038.53 / Max: 5065.8Min: 5049.9 / Avg: 5099.8 / Max: 5184.31. (CC) gcc options: -O3 -pthread -lz

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU-v2-v2 - Model: mobilenet-v2Clear Linux 34100Fedora Workstation 330.95851.9172.87553.8344.7925SE +/- 0.00, N = 3SE +/- 0.02, N = 34.264.21-O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake - MIN: 4.21 / MAX: 4.43-O2 - MIN: 4.14 / MAX: 4.861. (CXX) g++ options: -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU-v2-v2 - Model: mobilenet-v2Clear Linux 34100Fedora Workstation 33246810Min: 4.26 / Avg: 4.26 / Max: 4.27Min: 4.18 / Avg: 4.21 / Max: 4.241. (CXX) g++ options: -rdynamic -lgomp -lpthread

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgOp/s, More Is BetterFacebook RocksDB 6.3.6Test: Sequential FillClear Linux 34100Fedora Workstation 33400K800K1200K1600K2000KSE +/- 20216.20, N = 3SE +/- 13510.56, N = 317113701691544-O21. (CXX) g++ options: -O3 -march=native -std=c++11 -fno-builtin-memcmp -fno-rtti -rdynamic -lpthread
OpenBenchmarking.orgOp/s, More Is BetterFacebook RocksDB 6.3.6Test: Sequential FillClear Linux 34100Fedora Workstation 33300K600K900K1200K1500KMin: 1671952 / Avg: 1711369.67 / Max: 1738874Min: 1671558 / Avg: 1691544.33 / Max: 17172861. (CXX) g++ options: -O3 -march=native -std=c++11 -fno-builtin-memcmp -fno-rtti -rdynamic -lpthread

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: googlenetClear Linux 34100Fedora Workstation 333691215SE +/- 0.05, N = 3SE +/- 0.05, N = 312.1211.98-O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake - MIN: 11.99 / MAX: 12.53-O2 - MIN: 11.8 / MAX: 12.271. (CXX) g++ options: -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: googlenetClear Linux 34100Fedora Workstation 3348121620Min: 12.07 / Avg: 12.12 / Max: 12.22Min: 11.88 / Avg: 11.98 / Max: 12.061. (CXX) g++ options: -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: mobilenetClear Linux 34100Fedora Workstation 333691215SE +/- 0.05, N = 3SE +/- 0.06, N = 312.2612.40-O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake - MIN: 12.1 / MAX: 12.52-O2 - MIN: 12.19 / MAX: 13.561. (CXX) g++ options: -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: mobilenetClear Linux 34100Fedora Workstation 3348121620Min: 12.21 / Avg: 12.26 / Max: 12.37Min: 12.3 / Avg: 12.4 / Max: 12.521. (CXX) g++ options: -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU - Model: mobilenetClear Linux 34100Fedora Workstation 333691215SE +/- 0.05, N = 3SE +/- 0.09, N = 312.2712.41-O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake - MIN: 12.09 / MAX: 12.49-O2 - MIN: 12.21 / MAX: 13.611. (CXX) g++ options: -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU - Model: mobilenetClear Linux 34100Fedora Workstation 3348121620Min: 12.21 / Avg: 12.27 / Max: 12.37Min: 12.28 / Avg: 12.41 / Max: 12.591. (CXX) g++ options: -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU - Model: efficientnet-b0Clear Linux 34100Fedora Workstation 331.19482.38963.58444.77925.974SE +/- 0.08, N = 3SE +/- 0.02, N = 35.265.31-O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake - MIN: 5.15 / MAX: 25.33-O2 - MIN: 5.24 / MAX: 6.311. (CXX) g++ options: -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU - Model: efficientnet-b0Clear Linux 34100Fedora Workstation 33246810Min: 5.17 / Avg: 5.26 / Max: 5.41Min: 5.29 / Avg: 5.31 / Max: 5.361. (CXX) g++ options: -rdynamic -lgomp -lpthread

ASTC Encoder

ASTC Encoder (astcenc) is an encoder for the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterASTC Encoder 2.0Preset: ExhaustiveClear Linux 34100Fedora Workstation 33306090120150SE +/- 0.14, N = 3SE +/- 0.17, N = 3121.78122.861. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread
OpenBenchmarking.orgSeconds, Fewer Is BetterASTC Encoder 2.0Preset: ExhaustiveClear Linux 34100Fedora Workstation 3320406080100Min: 121.51 / Avg: 121.78 / Max: 121.96Min: 122.55 / Avg: 122.86 / Max: 123.121. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgns/day, More Is BetterLAMMPS Molecular Dynamics Simulator 29Oct2020Model: Rhodopsin ProteinClear Linux 34100Fedora Workstation 333691215SE +/- 0.02, N = 3SE +/- 0.07, N = 310.7510.84-O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake-O21. (CXX) g++ options: -pthread -lm
OpenBenchmarking.orgns/day, More Is BetterLAMMPS Molecular Dynamics Simulator 29Oct2020Model: Rhodopsin ProteinClear Linux 34100Fedora Workstation 333691215Min: 10.72 / Avg: 10.75 / Max: 10.78Min: 10.73 / Avg: 10.84 / Max: 10.971. (CXX) g++ options: -pthread -lm

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2Clear Linux 34100Fedora Workstation 330.95631.91262.86893.82524.7815SE +/- 0.01, N = 3SE +/- 0.00, N = 34.254.22-O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake - MIN: 4.19 / MAX: 4.43-O2 - MIN: 4.15 / MAX: 5.181. (CXX) g++ options: -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2Clear Linux 34100Fedora Workstation 33246810Min: 4.24 / Avg: 4.25 / Max: 4.26Min: 4.22 / Avg: 4.22 / Max: 4.221. (CXX) g++ options: -rdynamic -lgomp -lpthread

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgNodes Per Second, More Is BetterLeelaChessZero 0.26Backend: EigenClear Linux 34100Fedora Workstation 33120240360480600SE +/- 1.73, N = 3571575-O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake1. (CXX) g++ options: -flto -pthread
OpenBenchmarking.orgNodes Per Second, More Is BetterLeelaChessZero 0.26Backend: EigenClear Linux 34100Fedora Workstation 33100200300400500Min: 572 / Avg: 575 / Max: 5781. (CXX) g++ options: -flto -pthread

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 2.90Blend File: Classroom - Compute: CPU-OnlyClear Linux 34100Fedora Workstation 3360120180240300SE +/- 0.40, N = 3SE +/- 0.34, N = 3278.82280.61
OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 2.90Blend File: Classroom - Compute: CPU-OnlyClear Linux 34100Fedora Workstation 3350100150200250Min: 278.12 / Avg: 278.82 / Max: 279.52Min: 279.95 / Avg: 280.61 / Max: 281.08

Waifu2x-NCNN Vulkan

Waifu2x-NCNN is an NCNN neural network implementation of the Waifu2x converter project accelerated using the Vulkan API. NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. This test profile times how long it takes to increase the resolution of a sample image with Vulkan. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterWaifu2x-NCNN Vulkan 20200818Scale: 2x - Denoise: 3 - TAA: YesClear Linux 34100Fedora Workstation 33246810SE +/- 0.016, N = 3SE +/- 0.029, N = 36.4036.367
OpenBenchmarking.orgSeconds, Fewer Is BetterWaifu2x-NCNN Vulkan 20200818Scale: 2x - Denoise: 3 - TAA: YesClear Linux 34100Fedora Workstation 333691215Min: 6.38 / Avg: 6.4 / Max: 6.43Min: 6.31 / Avg: 6.37 / Max: 6.4

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU - Model: blazefaceClear Linux 34100Fedora Workstation 330.41180.82361.23541.64722.059SE +/- 0.00, N = 3SE +/- 0.03, N = 31.821.83-O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake - MIN: 1.8 / MAX: 1.96-O2 - MIN: 1.76 / MAX: 2.711. (CXX) g++ options: -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU - Model: blazefaceClear Linux 34100Fedora Workstation 33246810Min: 1.81 / Avg: 1.82 / Max: 1.82Min: 1.78 / Avg: 1.83 / Max: 1.861. (CXX) g++ options: -rdynamic -lgomp -lpthread

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 2.90Blend File: Barbershop - Compute: CPU-OnlyClear Linux 34100Fedora Workstation 3380160240320400SE +/- 0.66, N = 3SE +/- 0.57, N = 3368.91370.23
OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 2.90Blend File: Barbershop - Compute: CPU-OnlyClear Linux 34100Fedora Workstation 3370140210280350Min: 367.59 / Avg: 368.91 / Max: 369.6Min: 369.54 / Avg: 370.23 / Max: 371.36

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.
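
Each measured inference corresponds to a single Invoke() on a loaded model. A minimal sketch with the TensorFlow Lite C++ API follows; the model file name and thread count are placeholders rather than values recorded in this result file.

    #include <cstdio>
    #include <memory>
    #include "tensorflow/lite/interpreter.h"
    #include "tensorflow/lite/kernels/register.h"
    #include "tensorflow/lite/model.h"

    int main()
    {
        auto model = tflite::FlatBufferModel::BuildFromFile("inception_v4.tflite");  // placeholder path
        if (!model) return 1;

        tflite::ops::builtin::BuiltinOpResolver resolver;
        std::unique_ptr<tflite::Interpreter> interpreter;
        tflite::InterpreterBuilder(*model, resolver)(&interpreter);

        interpreter->SetNumThreads(24);   // e.g. one per hardware thread on the 5900X
        interpreter->AllocateTensors();
        interpreter->Invoke();            // the reported microseconds average many such calls

        std::printf("output tensors: %zu\n", interpreter->outputs().size());
        return 0;
    }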

OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2020-08-23Model: Inception V4Clear Linux 34100Fedora Workstation 33400K800K1200K1600K2000KSE +/- 2313.79, N = 3SE +/- 2996.90, N = 317906871797047
OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2020-08-23Model: Inception V4Clear Linux 34100Fedora Workstation 33300K600K900K1200K1500KMin: 1786060 / Avg: 1790686.67 / Max: 1793080Min: 1791410 / Avg: 1797046.67 / Max: 1801630

Crypto++

Crypto++ is a C++ class library of cryptographic algorithms. Learn more via the OpenBenchmarking.org test page.
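
As a flavour of the API being timed, the sketch below hashes a string with SHA-256 through Crypto++'s pipelining classes; the test itself runs the library's own benchmarking mode across many keyed and unkeyed algorithms, so this is only illustrative.

    #include <iostream>
    #include <string>
    #include <cryptopp/filters.h>
    #include <cryptopp/hex.h>
    #include <cryptopp/sha.h>

    int main()
    {
        // SHA-256 is one of the unkeyed algorithms; the "Keyed Algorithms" run covers
        // HMAC, CMAC and similar constructions instead.
        const std::string message = "benchmark me";
        std::string digest;

        CryptoPP::SHA256 hash;
        CryptoPP::StringSource ss(message, true,
            new CryptoPP::HashFilter(hash,
                new CryptoPP::HexEncoder(
                    new CryptoPP::StringSink(digest))));

        std::cout << digest << std::endl;
        return 0;
    }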

OpenBenchmarking.orgMiB/second, More Is BetterCrypto++ 8.2Test: Keyed AlgorithmsClear Linux 34100Fedora Workstation 332004006008001000SE +/- 11.97, N = 3SE +/- 6.03, N = 3836.14833.88-fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake-g21. (CXX) g++ options: -O3 -pipe -fPIC -pthread
OpenBenchmarking.orgMiB/second, More Is BetterCrypto++ 8.2Test: Keyed AlgorithmsClear Linux 34100Fedora Workstation 33150300450600750Min: 818.9 / Avg: 836.14 / Max: 859.14Min: 822.24 / Avg: 833.88 / Max: 842.41. (CXX) g++ options: -O3 -pipe -fPIC -pthread

High Performance Conjugate Gradient

HPCG is the High Performance Conjugate Gradient benchmark, a newer scientific benchmark from Sandia National Labs focused on supercomputer testing with more modern, real-world workloads than HPCC. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgGFLOP/s, More Is BetterHigh Performance Conjugate Gradient 3.1Clear Linux 34100Fedora Workstation 331.11742.23483.35224.46965.587SE +/- 0.00436, N = 3SE +/- 0.01365, N = 34.954004.96637-lmpi_cxx1. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -pthread -lmpi
OpenBenchmarking.orgGFLOP/s, More Is BetterHigh Performance Conjugate Gradient 3.1Clear Linux 34100Fedora Workstation 33246810Min: 4.95 / Avg: 4.95 / Max: 4.96Min: 4.94 / Avg: 4.97 / Max: 4.981. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -pthread -lmpi

RawTherapee

RawTherapee is a cross-platform, open-source multi-threaded RAW image processing program. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterRawTherapeeTotal Benchmark TimeClear Linux 34100Fedora Workstation 331122334455SE +/- 0.07, N = 3SE +/- 0.08, N = 349.5349.421. Clear Linux 34100: RawTherapee, version , command line.2. Fedora Workstation 33: RawTherapee, version 5.8, command line.
OpenBenchmarking.orgSeconds, Fewer Is BetterRawTherapeeTotal Benchmark TimeClear Linux 34100Fedora Workstation 331020304050Min: 49.43 / Avg: 49.53 / Max: 49.66Min: 49.28 / Avg: 49.42 / Max: 49.561. Clear Linux 34100: RawTherapee, version , command line.2. Fedora Workstation 33: RawTherapee, version 5.8, command line.

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgOp/s, More Is BetterFacebook RocksDB 6.3.6Test: Random Fill SyncClear Linux 34100Fedora Workstation 33400K800K1200K1600K2000KSE +/- 15598.52, N = 7SE +/- 5894.66, N = 316535441650643-O21. (CXX) g++ options: -O3 -march=native -std=c++11 -fno-builtin-memcmp -fno-rtti -rdynamic -lpthread
OpenBenchmarking.orgOp/s, More Is BetterFacebook RocksDB 6.3.6Test: Random Fill SyncClear Linux 34100Fedora Workstation 33300K600K900K1200K1500KMin: 1596274 / Avg: 1653544.14 / Max: 1706487Min: 1639563 / Avg: 1650643 / Max: 16596711. (CXX) g++ options: -O3 -march=native -std=c++11 -fno-builtin-memcmp -fno-rtti -rdynamic -lpthread

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterTNN 0.2.3Target: CPU - Model: SqueezeNet v1.1Clear Linux 34100Fedora Workstation 3350100150200250SE +/- 2.18, N = 3SE +/- 2.26, N = 3211.42211.26-O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake - MIN: 208.03 / MAX: 215.75-O2 - MIN: 206.71 / MAX: 214.841. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -rdynamic -ldl
OpenBenchmarking.orgms, Fewer Is BetterTNN 0.2.3Target: CPU - Model: SqueezeNet v1.1Clear Linux 34100Fedora Workstation 334080120160200Min: 208.25 / Avg: 211.42 / Max: 215.6Min: 206.95 / Avg: 211.26 / Max: 214.61. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -rdynamic -ldl

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3Clear Linux 34100Fedora Workstation 330.90231.80462.70693.60924.5115SE +/- 0.01, N = 3SE +/- 0.04, N = 34.014.01-O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake - MIN: 3.95 / MAX: 4.7-O2 - MIN: 3.94 / MAX: 4.741. (CXX) g++ options: -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3Clear Linux 34100Fedora Workstation 33246810Min: 4 / Avg: 4.01 / Max: 4.02Min: 3.96 / Avg: 4.01 / Max: 4.091. (CXX) g++ options: -rdynamic -lgomp -lpthread

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgns/day, More Is BetterLAMMPS Molecular Dynamics Simulator 29Oct2020Model: 20k AtomsClear Linux 341003691215SE +/- 0.01, N = 310.781. (CXX) g++ options: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -pthread -lm

simdjson

This is a benchmark of SIMDJSON, a high-performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.
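
A minimal sketch of the DOM API exposed by the 0.7 series (assuming exceptions are enabled, which is the default); the DistinctUserID test itself walks a large Twitter-style document and collects the unique user ids:

    #include <cstdint>
    #include <iostream>
    #include "simdjson.h"

    using namespace simdjson;   // brings in the ""_padded literal

    int main()
    {
        dom::parser parser;
        dom::element doc = parser.parse(R"({"user":{"id":12345}})"_padded);  // throws on parse error
        uint64_t id = doc["user"]["id"];
        std::cout << "user id: " << id << std::endl;
        return 0;
    }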

OpenBenchmarking.orgGB/s, More Is Bettersimdjson 0.7.1Throughput Test: DistinctUserIDClear Linux 34100Fedora Workstation 330.78981.57962.36943.15923.949SE +/- 0.03, N = 3SE +/- 0.02, N = 123.510.95-O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake-O21. (CXX) g++ options: -pthread
OpenBenchmarking.orgGB/s, More Is Bettersimdjson 0.7.1Throughput Test: DistinctUserIDClear Linux 34100Fedora Workstation 33246810Min: 3.47 / Avg: 3.51 / Max: 3.57Min: 0.9 / Avg: 0.95 / Max: 1.081. (CXX) g++ options: -pthread

Crypto++

Crypto++ is a C++ class library of cryptographic algorithms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMiB/second, More Is BetterCrypto++ 8.2Test: Unkeyed AlgorithmsClear Linux 34100Fedora Workstation 33110220330440550SE +/- 14.20, N = 15SE +/- 1.90, N = 3498.19460.79-fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake-g21. (CXX) g++ options: -O3 -pipe -fPIC -pthread
OpenBenchmarking.orgMiB/second, More Is BetterCrypto++ 8.2Test: Unkeyed AlgorithmsClear Linux 34100Fedora Workstation 3390180270360450Min: 440.54 / Avg: 498.19 / Max: 559.37Min: 457.45 / Avg: 460.79 / Max: 464.031. (CXX) g++ options: -O3 -pipe -fPIC -pthread

Betsy GPU Compressor

Betsy is an open-source GPU texture compressor supporting various compression techniques. Betsy is written in GLSL for Vulkan/OpenGL (compute shader) support for GPU-based texture compression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBetsy GPU Compressor 1.1 BetaCodec: ETC2 RGB - Quality: HighestClear Linux 34100Fedora Workstation 33246810SE +/- 0.152, N = 15SE +/- 0.195, N = 157.4727.503-O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake1. (CXX) g++ options: -O2 -lpthread -ldl
OpenBenchmarking.orgSeconds, Fewer Is BetterBetsy GPU Compressor 1.1 BetaCodec: ETC2 RGB - Quality: HighestClear Linux 34100Fedora Workstation 333691215Min: 7.3 / Avg: 7.47 / Max: 9.59Min: 7.25 / Avg: 7.5 / Max: 10.221. (CXX) g++ options: -O2 -lpthread -ldl

OpenBenchmarking.orgSeconds, Fewer Is BetterBetsy GPU Compressor 1.1 BetaCodec: ETC1 - Quality: HighestClear Linux 34100Fedora Workstation 33246810SE +/- 0.158, N = 15SE +/- 0.198, N = 156.6736.748-O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake1. (CXX) g++ options: -O2 -lpthread -ldl
OpenBenchmarking.orgSeconds, Fewer Is BetterBetsy GPU Compressor 1.1 BetaCodec: ETC1 - Quality: HighestClear Linux 34100Fedora Workstation 333691215Min: 6.5 / Avg: 6.67 / Max: 8.89Min: 6.5 / Avg: 6.75 / Max: 9.521. (CXX) g++ options: -O2 -lpthread -ldl

122 Results Shown

simdjson:
  Kostya
  PartialTweets
Incompact3D
LevelDB:
  Seq Fill:
    Microseconds Per Op
    MB/s
  Rand Fill:
    MB/s
    Microseconds Per Op
  Overwrite:
    Microseconds Per Op
    MB/s
simdjson
x265
libavif avifenc:
  10
  8
LibRaw
PHPBench
x264
Redis
C-Blosc
GLmark2
Redis
rav1e
Timed MAFFT Alignment
Numpy Benchmark
Facebook RocksDB
DeepSpeech
SVT-AV1
SQLite Speedtest
x265
PyBench
Mobile Neural Network
Stockfish
rav1e
Timed Linux Kernel Compilation
Basis Universal
Mobile Neural Network
rav1e
Node.js V8 Web Tooling Benchmark
Mobile Neural Network
Facebook RocksDB
NCNN:
  Vulkan GPU - yolov4-tiny
  CPU - regnety_400m
  Vulkan GPU - regnety_400m
Mobile Neural Network
SVT-AV1
Mobile Neural Network
NCNN
FFTE
libavif avifenc
RealSR-NCNN
LeelaChessZero
NCNN
LZ4 Compression
Timed HMMer Search
NCNN
RNNoise
LZ4 Compression
RealSR-NCNN
GROMACS
LevelDB
NCNN
LevelDB
NCNN
Facebook RocksDB
NCNN:
  CPU - alexnet
  Vulkan GPU - alexnet
Himeno Benchmark
IndigoBench
ASTC Encoder
IndigoBench
LZ4 Compression
Zstd Compression
miniFE
WebP Image Encode
NCNN
asmFish
Chaos Group V-RAY
LZ4 Compression
Timed FFmpeg Compilation
NCNN
TNN
NCNN:
  CPU - blazeface
  CPU - resnet18
Coremark
Crafty
Dolfyn
NCNN
WebP Image Encode
NCNN:
  CPU - shufflenet-v2
  CPU - mnasnet
WebP Image Encode
NCNN:
  Vulkan GPU - googlenet
  Vulkan GPU - shufflenet-v2
  CPU - efficientnet-b0
  CPU - resnet50
  Vulkan GPU - resnet18
Zstd Compression
NCNN
Facebook RocksDB
NCNN:
  CPU - googlenet
  CPU - mobilenet
  Vulkan GPU - mobilenet
  Vulkan GPU - efficientnet-b0
ASTC Encoder
LAMMPS Molecular Dynamics Simulator
NCNN
LeelaChessZero
Blender
Waifu2x-NCNN Vulkan
NCNN
Blender
TensorFlow Lite
Crypto++
High Performance Conjugate Gradient
RawTherapee
Facebook RocksDB
TNN
NCNN
LAMMPS Molecular Dynamics Simulator
simdjson
Crypto++
Betsy GPU Compressor:
  ETC2 RGB - Highest
  ETC1 - Highest