Ryzen 9 5900X Clear Linux

AMD Ryzen 9 5900X 12-Core testing with an ASUS ROG CROSSHAIR VIII HERO (2702 BIOS) and Sapphire AMD Radeon RX 5600 OEM/5600 XT / 5700/5700 6GB, comparing Clear Linux 34100, Fedora Workstation 33, openSUSE Tumbleweed, and Manjaro Linux 20.2 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2012201-HA-2012192HA27
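For anyone unfamiliar with the Phoronix Test Suite workflow, a minimal reproduction looks roughly like the sketch below; the result ID is the one quoted above, while the second command is PTS's standard system-summary helper and is worth double-checking against phoronix-test-suite help on your install.

  # fetch the same test selection and append your own system to this comparison
  phoronix-test-suite benchmark 2012201-HA-2012192HA27
  # optional: review what PTS detects about your hardware/software first
  phoronix-test-suite system-info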

Test categories represented in this result file:

AV1: 3 tests
Bioinformatics: 3 tests
Chess Test Suite: 4 tests
Timed Code Compilation: 2 tests
C/C++ Compiler Tests: 15 tests
Compression Tests: 3 tests
CPU Massive: 20 tests
Creator Workloads: 16 tests
Database Test Suite: 4 tests
Encoding: 5 tests
Fortran Tests: 5 tests
Game Development: 4 tests
HPC - High Performance Computing: 18 tests
Imaging: 4 tests
Common Kernel Benchmarks: 3 tests
Machine Learning: 8 tests
Molecular Dynamics: 5 tests
MPI Benchmarks: 5 tests
Multi-Core: 17 tests
NVIDIA GPU Compute: 9 tests
OpenMPI Tests: 5 tests
Programmer / Developer System Benchmarks: 8 tests
Python: 2 tests
Renderers: 3 tests
Scientific Computing: 9 tests
Server: 7 tests
Server CPU Tests: 13 tests
Single-Threaded: 5 tests
Speech: 2 tests
Telephony: 2 tests
Texture Compression: 3 tests
Video Encoding: 5 tests
Vulkan Compute: 4 tests
Common Workstation Benchmarks: 3 tests

Test Runs (Result Identifier - Date Run - Test Duration):

Clear Linux 34100 - December 18 2020 - 8 Hours, 34 Minutes
Fedora Workstation 33 - December 18 2020 - 6 Hours, 38 Minutes
openSUSE Tumbleweed - December 19 2020 - 13 Hours, 43 Minutes
Manjaro Linux 20.2 - December 19 2020 - 9 Hours, 16 Minutes
Average test duration per run: 9 Hours, 33 Minutes



Ryzen 9 5900X Clear Linux - System Details

Clear Linux 34100:
  Processor: AMD Ryzen 9 5900X 12-Core @ 3.70GHz (12 Cores / 24 Threads)
  Motherboard: ASUS ROG CROSSHAIR VIII HERO (2702 BIOS)
  Chipset: AMD Starship/Matisse
  Memory: 16GB
  Disk: 1000GB Sabrent Rocket 4.0 1TB
  Graphics: Sapphire AMD Radeon RX 5600 OEM/5600 XT / 5700/5700 6GB (1780/875MHz)
  Audio: AMD Navi 10 HDMI Audio
  Monitor: ASUS VP28U
  Network: Realtek RTL8125 2.5GbE + Intel I211
  OS: Clear Linux OS 34100
  Kernel: 5.9.15-1008.native (x86_64)
  Desktop: GNOME Shell 3.38.2
  Display Server: X Server 1.20.10
  Display Driver: modesetting 1.20.10
  OpenGL: 4.6 Mesa 20.3.1 (LLVM 10.0.1)
  Vulkan: 1.2.145
  Compiler: GCC 10.2.1 20201217 releases/gcc-10.2.0-643-g7cbb07d2fc + Clang 10.0.1 + LLVM 10.0.1
  File-System: ext4
  Screen Resolution: 3840x2160

The remaining runs use the same machine; only the components reported differently for each run are listed below.

Fedora Workstation 33:
  Disk: 1000GB Sabrent Rocket 4.0 1TB + 15GB Ultra USB 3.0
  OS: Fedora 33
  Kernel: 5.9.14-200.fc33.x86_64 (x86_64)
  Display Server: X Server + Wayland
  OpenGL: 4.6 Mesa 20.2.4 (LLVM 11.0.0)
  Compiler: GCC 10.2.1 20201125 + Clang 11.0.0
  File-System: btrfs

openSUSE Tumbleweed:
  Graphics: AMD Radeon RX 5600 OEM/5600 XT / 5700/5700 6GB (1780/875MHz)
  OS: openSUSE Tumbleweed 20201216
  Kernel: 5.9.14-1-default (x86_64)
  Desktop: KDE Plasma 5.20.4
  Display Server: X Server 1.20.10
  Display Driver: amdgpu 19.1.0
  Vulkan: 1.2.131
  Compiler: GCC 10.2.1 20201202 [revision e563687cf9d3d1278f45aaebd03e0f66531076c9]

Manjaro Linux 20.2:
  Graphics: Sapphire AMD Radeon RX 5600 OEM/5600 XT / 5700/5700 6GB (1780/875MHz)
  OS: ManjaroLinux 20.2
  Kernel: 5.9.11-3-MANJARO (x86_64)
  Desktop: Xfce 4.14
  Display Driver: modesetting 1.20.10
  OpenGL: 4.6 Mesa 20.2.3 (LLVM 11.0.0)
  Compiler: GCC 10.2.0
  File-System: ext4

Environment Details
- Clear Linux 34100:
  FFLAGS="-g -O3 -feliminate-unused-debug-types -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=32 -m64 -fasynchronous-unwind-tables -Wp,-D_REENTRANT -ftree-loop-distribute-patterns -Wl,-z -Wl,now -Wl,-z -Wl,relro -malign-data=abi -fno-semantic-interposition -ftree-vectorize -ftree-loop-vectorize -Wl,--enable-new-dtags -Wa,-mbranches-within-32B-boundaries"
  CXXFLAGS="-g -O3 -feliminate-unused-debug-types -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=32 -Wformat -Wformat-security -m64 -fasynchronous-unwind-tables -Wp,-D_REENTRANT -ftree-loop-distribute-patterns -Wl,-z -Wl,now -Wl,-z -Wl,relro -fno-semantic-interposition -ffat-lto-objects -fno-trapping-math -Wl,-sort-common -Wl,--enable-new-dtags -mtune=skylake -Wa,-mbranches-within-32B-boundaries -fvisibility-inlines-hidden -Wl,--enable-new-dtags"
  MESA_GLSL_CACHE_DISABLE=0
  FCFLAGS="-g -O3 -feliminate-unused-debug-types -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=32 -m64 -fasynchronous-unwind-tables -Wp,-D_REENTRANT -ftree-loop-distribute-patterns -Wl,-z -Wl,now -Wl,-z -Wl,relro -malign-data=abi -fno-semantic-interposition -ftree-vectorize -ftree-loop-vectorize -Wl,-sort-common -Wl,--enable-new-dtags"
  CFLAGS="-g -O3 -feliminate-unused-debug-types -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=32 -Wformat -Wformat-security -m64 -fasynchronous-unwind-tables -Wp,-D_REENTRANT -ftree-loop-distribute-patterns -Wl,-z -Wl,now -Wl,-z -Wl,relro -fno-semantic-interposition -ffat-lto-objects -fno-trapping-math -Wl,-sort-common -Wl,--enable-new-dtags -mtune=skylake -Wa,-mbranches-within-32B-boundaries"
  THEANO_FLAGS="floatX=float32,openmp=true,gcc.cxxflags="-ftree-vectorize -mavx""

Compiler Details
- Clear Linux 34100: --build=x86_64-generic-linux --disable-libmpx --disable-libunwind-exceptions --disable-multiarch --disable-vtable-verify --disable-werror --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-clocale=gnu --enable-default-pie --enable-gnu-indirect-function --enable-languages=c,c++,fortran,go --enable-ld=default --enable-libstdcxx-pch --enable-lto --enable-multilib --enable-plugin --enable-shared --enable-threads=posix --exec-prefix=/usr --includedir=/usr/include --target=x86_64-generic-linux --with-arch=westmere --with-gcc-major-version-only --with-glibc-version=2.19 --with-gnu-ld --with-isl --with-ppl=yes --with-tune=haswell
- Fedora Workstation 33: --build=x86_64-redhat-linux --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-initfini-array --enable-languages=c,c++,fortran,objc,obj-c++,ada,go,d,lto --enable-multilib --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=i686 --with-gcc-major-version-only --with-isl --with-linker-hash-style=gnu --with-tune=generic --without-cuda-driver
- openSUSE Tumbleweed: --build=x86_64-suse-linux --disable-libcc1 --disable-libssp --disable-libstdcxx-pch --disable-libvtv --disable-werror --enable-cet=auto --enable-checking=release --enable-gnu-indirect-function --enable-languages=c,c++,objc,fortran,obj-c++,ada,go,d --enable-libphobos --enable-libstdcxx-allocator=new --enable-link-mutex --enable-linux-futex --enable-multilib --enable-offload-targets=nvptx-none,amdgcn-amdhsa, --enable-plugin --enable-ssp --enable-version-specific-runtime-libs --host=x86_64-suse-linux --mandir=/usr/share/man --with-arch-32=x86-64 --with-build-config=bootstrap-lto-lean --with-gcc-major-version-only --with-slibdir=/lib64 --with-tune=generic --without-cuda-driver --without-system-libunwind
- Manjaro Linux 20.2: --disable-libssp --disable-libstdcxx-pch --disable-libunwind-exceptions --disable-werror --enable-__cxa_atexit --enable-cet=auto --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-default-ssp --enable-gnu-indirect-function --enable-gnu-unique-object --enable-install-libiberty --enable-languages=c,c++,ada,fortran,go,lto,objc,obj-c++,d --enable-lto --enable-multilib --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-isl --with-linker-hash-style=gnu

Disk Details
- Clear Linux 34100: MQ-DEADLINE / relatime,rw,stripe=256 / Block Size: 4096
- Fedora Workstation 33: NONE / relatime,rw,seclabel,space_cache,ssd,subvol=/home,subvolid=256 / Block Size: 4096
- openSUSE Tumbleweed: NONE / relatime,rw,space_cache,ssd,subvol=/@/home,subvolid=262 / Block Size: 4096
- Manjaro Linux 20.2: NONE / noatime,rw / Block Size: 4096

Processor Details
- Clear Linux 34100: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa201009
- Fedora Workstation 33: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0xa201009
- openSUSE Tumbleweed: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0xa201009
- Manjaro Linux 20.2: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0xa201009

Graphics Details
- Clear Linux 34100, openSUSE Tumbleweed, Manjaro Linux 20.2: GLAMOR

Python Details
- Clear Linux 34100: Python 3.9.1
- Fedora Workstation 33: Python 3.9.0
- openSUSE Tumbleweed: Python 3.8.6
- Manjaro Linux 20.2: Python 3.8.6

Security Details
- Clear Linux 34100: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional IBRS_FW STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected
- Fedora Workstation 33: SELinux + otherwise the same mitigation set as Clear Linux 34100
- openSUSE Tumbleweed: the same mitigation set as Clear Linux 34100
- Manjaro Linux 20.2: the same mitigation set as Clear Linux 34100

Logarithmic Result Overview (Phoronix Test Suite), comparing Clear Linux 34100, Fedora Workstation 33, openSUSE Tumbleweed, and Manjaro Linux 20.2 across: LAMMPS Molecular Dynamics Simulator, simdjson, Incompact3D, LevelDB, PHPBench, LibRaw, x265, x264, libavif avifenc, Redis, Timed Linux Kernel Compilation, C-Blosc, Numpy Benchmark, Timed MAFFT Alignment, SQLite Speedtest, DeepSpeech, PyBench, rav1e, Mobile Neural Network, SVT-AV1, Stockfish, Basis Universal, Timed FFmpeg Compilation, Zstd Compression, Node.js V8 Web Tooling Benchmark, LeelaChessZero, Timed HMMer Search, FFTE, Crafty, Crypto++, RealSR-NCNN, NCNN, Betsy GPU Compressor, GROMACS, Himeno Benchmark, RawTherapee, IndigoBench, Coremark, WebP Image Encode, asmFish, TNN, Chaos Group V-RAY, ASTC Encoder, LZ4 Compression, Dolfyn, High Performance Conjugate Gradient, Blender, Waifu2x-NCNN Vulkan, and TensorFlow Lite.

Ryzen 9 5900X Clear Linux - detailed result table: per-test, per-configuration figures for Clear Linux 34100, Fedora Workstation 33, openSUSE Tumbleweed, and Manjaro Linux 20.2 across all of the tests listed in the overview above. The same results are broken out test by test in the sections that follow.

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.
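The Rhodopsin protein and 20k-atom cases measured here are the standard benchmark decks shipped in the LAMMPS source tree; outside of PTS, a hand run would look roughly like the following sketch, assuming a stock LAMMPS build and the bundled bench/ inputs.

  # run the stock rhodopsin benchmark deck on all cores via MPI
  mpirun -np $(nproc) lmp -in bench/in.rhodo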

LAMMPS Molecular Dynamics Simulator 29Oct2020, Model: Rhodopsin Protein (ns/day; more is better):
  Clear Linux 34100: 10.754 (SE +/- 0.020, N = 3; min 10.72 / max 10.78)
  Fedora Workstation 33: 10.839 (SE +/- 0.069, N = 3; min 10.73 / max 10.97)
  openSUSE Tumbleweed: 1.110 (SE +/- 0.012, N = 3; min 1.1 / max 1.13)
  Manjaro Linux 20.2: 10.926 (SE +/- 0.004, N = 3; min 10.92 / max 10.93)
  Compiler flag notes: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -pthread; -O2 -pthread; -O3 -pthread. 1. (CXX) g++ options: -lm

LAMMPS Molecular Dynamics Simulator 29Oct2020, Model: 20k Atoms (ns/day; more is better):
  Clear Linux 34100: 10.778 (SE +/- 0.007, N = 3; min 10.76 / max 10.79)
  openSUSE Tumbleweed: 1.185 (SE +/- 0.002, N = 3; min 1.18 / max 1.19)
  Manjaro Linux 20.2: 10.891 (SE +/- 0.017, N = 3; min 10.86 / max 10.92)
  Compiler flag notes: -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -pthread; -pthread. 1. (CXX) g++ options: -O3 -lm

Incompact3D

Incompact3d is a Fortran-MPI based, finite-difference high-performance code for solving the incompressible Navier-Stokes equation and as many scalar transport equations as you need. Learn more via the OpenBenchmarking.org test page.

Incompact3D 2020-09-17, Input: Cylinder (Seconds; fewer is better):
  Clear Linux 34100: 78.65 (SE +/- 1.07, N = 3; min 77.55 / max 80.8)
  Fedora Workstation 33: 206.19 (SE +/- 0.25, N = 3; min 205.76 / max 206.64)
  openSUSE Tumbleweed: 217.17 (SE +/- 2.97, N = 3; min 212.81 / max 222.84)
  Manjaro Linux 20.2: 212.79 (SE +/- 0.60, N = 3; min 211.59 / max 213.48)
  1. (F9X) gfortran options: -cpp -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

LevelDB

LevelDB is a key-value storage library developed by Google that supports making use of Snappy for data compression and has other modern features. Learn more via the OpenBenchmarking.org test page.
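The fill, overwrite, and read figures below come from LevelDB's bundled db_bench tool; run by hand it looks roughly like this sketch, with workload names taken from db_bench's --benchmarks flag.

  # sequential fill, random fill, overwrite, random read and hot read passes
  ./db_bench --benchmarks=fillseq,fillrandom,overwrite,readrandom,readhot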

LevelDB 1.22, Benchmark: Sequential Fill (Microseconds Per Op; fewer is better):
  Clear Linux 34100: 42.65 (SE +/- 0.02, N = 3; min 42.61 / max 42.68)
  Fedora Workstation 33: 94.29 (SE +/- 0.85, N = 3; min 93.16 / max 95.95)
  openSUSE Tumbleweed: 61.27 (SE +/- 0.27, N = 3; min 60.81 / max 61.73)
  Manjaro Linux 20.2: 75.96 (SE +/- 0.25, N = 3; min 75.51 / max 76.38)
  Compiler flag notes (in system order): -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake; -O2; -O3; -O3. 1. (CXX) g++ options: -lsnappy -lpthread

LevelDB 1.22, Benchmark: Sequential Fill (MB/s; more is better):
  Clear Linux 34100: 62.2 (SE +/- 0.03, N = 3; min 62.2 / max 62.3)
  Fedora Workstation 33: 28.2 (SE +/- 0.24, N = 3; min 27.7 / max 28.5)
  openSUSE Tumbleweed: 43.3 (SE +/- 0.17, N = 3; min 43 / max 43.6)
  Manjaro Linux 20.2: 34.9 (SE +/- 0.15, N = 3; min 34.7 / max 35.2)
  Compiler flag notes (in system order): -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake; -O2; -O3; -O3. 1. (CXX) g++ options: -lsnappy -lpthread

LevelDB 1.22, Benchmark: Random Fill (MB/s; more is better):
  Clear Linux 34100: 60.4 (SE +/- 0.19, N = 3; min 60 / max 60.6)
  Fedora Workstation 33: 27.4 (SE +/- 0.03, N = 3; min 27.4 / max 27.5)
  openSUSE Tumbleweed: 44.6 (SE +/- 0.64, N = 3; min 43.5 / max 45.7)
  Manjaro Linux 20.2: 34.9 (SE +/- 0.32, N = 3; min 34.5 / max 35.5)
  Compiler flag notes (in system order): -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake; -O2; -O3; -O3. 1. (CXX) g++ options: -lsnappy -lpthread

LevelDB 1.22, Benchmark: Random Fill (Microseconds Per Op; fewer is better):
  Clear Linux 34100: 43.95 (SE +/- 0.15, N = 3; min 43.79 / max 44.26)
  Fedora Workstation 33: 96.67 (SE +/- 0.17, N = 3; min 96.42 / max 96.99)
  openSUSE Tumbleweed: 59.46 (SE +/- 0.86, N = 3; min 58.03 / max 61.01)
  Manjaro Linux 20.2: 76.05 (SE +/- 0.66, N = 3; min 74.74 / max 76.81)
  Compiler flag notes (in system order): -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake; -O2; -O3; -O3. 1. (CXX) g++ options: -lsnappy -lpthread

LevelDB 1.22, Benchmark: Overwrite (Microseconds Per Op; fewer is better):
  Clear Linux 34100: 43.98 (SE +/- 0.09, N = 3; min 43.83 / max 44.16)
  Fedora Workstation 33: 96.61 (SE +/- 0.10, N = 3; min 96.43 / max 96.78)
  openSUSE Tumbleweed: 60.21 (SE +/- 0.61, N = 3; min 59.41 / max 61.41)
  Manjaro Linux 20.2: 76.02 (SE +/- 0.47, N = 3; min 75.22 / max 76.83)
  Compiler flag notes (in system order): -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake; -O2; -O3; -O3. 1. (CXX) g++ options: -lsnappy -lpthread

LevelDB 1.22, Benchmark: Overwrite (MB/s; more is better):
  Clear Linux 34100: 60.3 (SE +/- 0.12, N = 3; min 60.1 / max 60.5)
  Fedora Workstation 33: 27.5 (SE +/- 0.03, N = 3; min 27.4 / max 27.5)
  openSUSE Tumbleweed: 44.1 (SE +/- 0.44, N = 3; min 43.2 / max 44.6)
  Manjaro Linux 20.2: 34.9 (SE +/- 0.23, N = 3; min 34.5 / max 35.3)
  Compiler flag notes (in system order): -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake; -O2; -O3; -O3. 1. (CXX) g++ options: -lsnappy -lpthread

x265

This is a simple test of the x265 encoder run on the CPU with 1080p and 4K options for H.265 video encode performance with x265. Learn more via the OpenBenchmarking.org test page.
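Outside of the test profile, a comparable CPU-only encode can be approximated with the x265 CLI; the input file name below is only a placeholder and the preset is illustrative rather than the profile's exact settings.

  # 1080p H.265 encode using all CPU threads
  x265 --input bosphorus_1080p.y4m --preset medium --output out.hevc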

x265 3.4, Video Input: Bosphorus 1080p (Frames Per Second; more is better):
  Clear Linux 34100: 86.35 (SE +/- 0.13, N = 3; min 86.16 / max 86.59)
  Fedora Workstation 33: 49.37 (SE +/- 0.19, N = 3; min 49.17 / max 49.75)
  openSUSE Tumbleweed: 50.09 (SE +/- 0.40, N = 3; min 49.34 / max 50.72)
  Manjaro Linux 20.2: 48.05 (SE +/- 0.13, N = 3; min 47.91 / max 48.32)
  Compiler flag notes (in system order): -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -lnuma; -O2; -O3; -O3. 1. (CXX) g++ options: -rdynamic -lpthread -lrt -ldl

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 0.7.1, Throughput Test: LargeRandom (GB/s; more is better):
  Clear Linux 34100: 1.09 (SE +/- 0.01, N = 3; min 1.08 / max 1.11)
  Fedora Workstation 33: 0.62 (SE +/- 0.01, N = 3; min 0.61 / max 0.64)
  openSUSE Tumbleweed: 0.63 (SE +/- 0.00, N = 15; min 0.61 / max 0.65)
  Manjaro Linux 20.2: 0.66 (SE +/- 0.01, N = 3; min 0.64 / max 0.67)
  Compiler flag notes (in system order): -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake; -O2; -O3; -O3. 1. (CXX) g++ options: -pthread

PHPBench

PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to bench various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. Learn more via the OpenBenchmarking.org test page.

PHPBench 0.8.1, PHP Benchmark Suite (Score; more is better):
  Clear Linux 34100: 1133092 (SE +/- 5322.04, N = 3; min 1122502 / max 1139315)
  Fedora Workstation 33: 815981 (SE +/- 7560.09, N = 6; min 789210 / max 840793)
  openSUSE Tumbleweed: 717062 (SE +/- 7938.18, N = 5; min 691364 / max 741464)
  Manjaro Linux 20.2: 855526 (SE +/- 9185.55, N = 3; min 838656 / max 870260)

LibRaw

LibRaw is a RAW image decoder for digital camera photos. This test profile runs LibRaw's post-processing benchmark. Learn more via the OpenBenchmarking.org test page.

LibRaw 0.20, Post-Processing Benchmark (Mpix/sec; more is better):
  Clear Linux 34100: 77.53 (SE +/- 0.09, N = 3; min 77.43 / max 77.71)
  Fedora Workstation 33: 50.98 (SE +/- 0.25, N = 3; min 50.65 / max 51.47)
  openSUSE Tumbleweed: 53.04 (SE +/- 0.19, N = 3; min 52.85 / max 53.42)
  Manjaro Linux 20.2: 49.81 (SE +/- 0.21, N = 3; min 49.44 / max 50.16)
  Compiler flag notes (in system order): -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -lz; -O2 -lz; -O2 -lz; -O2 -ljasper. 1. (CXX) g++ options: -fopenmp -ljpeg -lm

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.
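avifenc exposes the encoder effort through its speed switch, which is exactly what the Encoder Speed 8 and 10 results below vary; a hand-run equivalent looks roughly like this, with the input file name purely illustrative.

  # higher -s values trade compression efficiency for faster encoding
  avifenc -s 10 input.jpg output_s10.avif
  avifenc -s 8 input.jpg output_s8.avif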

libavif avifenc 0.7.3, Encoder Speed: 10 (Seconds; fewer is better):
  Clear Linux 34100: 3.161 (SE +/- 0.007, N = 3; min 3.15 / max 3.18)
  Fedora Workstation 33: 4.880 (SE +/- 0.010, N = 3; min 4.86 / max 4.89)
  openSUSE Tumbleweed: 4.024 (SE +/- 0.023, N = 3; min 3.99 / max 4.07)
  Manjaro Linux 20.2: 4.317 (SE +/- 0.039, N = 3; min 4.24 / max 4.36)
  1. (CXX) g++ options: -O3 -fPIC

libavif avifenc 0.7.3, Encoder Speed: 8 (Seconds; fewer is better):
  Clear Linux 34100: 3.287 (SE +/- 0.033, N = 3; min 3.24 / max 3.35)
  Fedora Workstation 33: 5.052 (SE +/- 0.028, N = 3; min 5.01 / max 5.11)
  openSUSE Tumbleweed: 4.245 (SE +/- 0.026, N = 3; min 4.2 / max 4.28)
  Manjaro Linux 20.2: 4.415 (SE +/- 0.040, N = 3; min 4.34 / max 4.48)
  1. (CXX) g++ options: -O3 -fPIC

x264

This is a simple test of the x264 encoder run on the CPU (OpenCL support disabled) with a sample video file. Learn more via the OpenBenchmarking.org test page.

x264 2019-12-17, H.264 Video Encoding (Frames Per Second; more is better):
  Clear Linux 34100: 176.47 (SE +/- 1.96, N = 3; min 172.82 / max 179.54)
  Fedora Workstation 33: 127.68 (SE +/- 0.92, N = 3; min 126.51 / max 129.5)
  openSUSE Tumbleweed: 139.96 (SE +/- 1.55, N = 15; min 127.79 / max 148.55)
  Manjaro Linux 20.2: 135.55 (SE +/- 0.62, N = 3; min 134.37 / max 136.45)
  Compiler flag notes: -pipe -fexceptions -fstack-protector -ffat-lto-objects -fno-trapping-math -mtune=skylake; -llsmash -lavformat -lavcodec -lswresample -lavutil -lbz2 -lz -lswscale. 1. (CC) gcc options: -ldl -m64 -lm -lpthread -O3 -ffast-math -std=gnu99 -fPIC -fomit-frame-pointer -fno-tree-vectorize

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration. Learn more via the OpenBenchmarking.org test page.
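In essence this test times a default-configuration kernel build; a rough manual equivalent on any of these systems, assuming a kernel source tree is already unpacked, is:

  # time a default-configuration kernel build using every core
  make defconfig
  time make -j$(nproc)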

Timed Linux Kernel Compilation 5.4, Time To Compile (Seconds; fewer is better):
  Clear Linux 34100: 44.40 (SE +/- 0.46, N = 5; min 43.61 / max 46.05)
  Fedora Workstation 33: 49.57 (SE +/- 0.59, N = 4; min 48.67 / max 51.32)
  openSUSE Tumbleweed: 46.77 (SE +/- 0.60, N = 3; min 45.67 / max 47.73)
  Manjaro Linux 20.2: 57.72 (SE +/- 0.69, N = 4; min 56.84 / max 59.77)

GLmark2

This is a test of Linaro's glmark2 port, currently using the X11 OpenGL 2.0 target. GLmark2 is a basic OpenGL benchmark. Learn more via the OpenBenchmarking.org test page.

GLmark2 2020.04, Resolution: 3840 x 2160 (Score; more is better):
  Clear Linux 34100: 2075
  Fedora Workstation 33: 2650
  openSUSE Tumbleweed: 2044

C-Blosc

A simple, compressed, fast and persistent data store library for C. Learn more via the OpenBenchmarking.org test page.

C-Blosc 2.0 Beta 5, Compressor: blosclz (MB/s; more is better):
  Clear Linux 34100: 15945.8 (SE +/- 67.29, N = 3; min 15818.4 / max 16047.1)
  Fedora Workstation 33: 12305.4 (SE +/- 16.50, N = 3; min 12272.9 / max 12326.7)
  openSUSE Tumbleweed: 13460.2 (SE +/- 104.04, N = 3; min 13253.8 / max 13586.1)
  Manjaro Linux 20.2: 12762.1 (SE +/- 33.34, N = 3; min 12695.4 / max 12796.7)
  Compiler flag notes: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake. 1. (CXX) g++ options: -rdynamic

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.
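The SET and GET figures in this comparison correspond to redis-benchmark style request loops against a local server; a hand-run approximation, with request count purely illustrative, looks like this:

  # start a local instance and issue one million SET operations against it
  redis-server --daemonize yes
  redis-benchmark -t set -n 1000000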

Redis 6.0.9, Test: SET (Requests Per Second; more is better):
  Clear Linux 34100: 2883714.25 (SE +/- 26710.78, N = 7; min 2785782.75 / max 2985170)
  Fedora Workstation 33: 2267746.96 (SE +/- 33629.52, N = 13; min 2101243.75 / max 2457317)
  openSUSE Tumbleweed: 2500491.50 (SE +/- 15750.23, N = 3; min 2469214.75 / max 2519375.25)
  Manjaro Linux 20.2: 2389612.53 (SE +/- 21549.23, N = 15; min 2257336.5 / max 2591171)
  Compiler flag notes: -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake. 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.
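rav1e's speed levels 5, 6, and 10 measured in this comparison map directly to its -s flag; a manual run is roughly the following, with the input file name only illustrative.

  # faster speed levels trade compression efficiency for encode speed
  rav1e input.y4m -s 10 -o output.ivf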

rav1e 0.4 Alpha, Speed: 10 (Frames Per Second; more is better):
  Clear Linux 34100: 5.217 (SE +/- 0.034, N = 3; min 5.17 / max 5.28)
  Fedora Workstation 33: 4.183 (SE +/- 0.015, N = 3; min 4.15 / max 4.2)
  openSUSE Tumbleweed: 4.142 (SE +/- 0.032, N = 15; min 3.98 / max 4.4)
  Manjaro Linux 20.2: 4.157 (SE +/- 0.044, N = 5; min 4.04 / max 4.3)

Numpy Benchmark

This is a test to obtain the general Numpy performance. Learn more via the OpenBenchmarking.org test page.

Numpy Benchmark (Score; more is better):
  Clear Linux 34100: 637.35 (SE +/- 6.77, N = 3; min 627.44 / max 650.3)
  Fedora Workstation 33: 529.00 (SE +/- 3.81, N = 3; min 524.01 / max 536.49)
  openSUSE Tumbleweed: 515.63 (SE +/- 2.19, N = 3; min 512.69 / max 519.91)
  Manjaro Linux 20.2: 512.06 (SE +/- 6.13, N = 3; min 504.15 / max 524.12)

Timed MAFFT Alignment

This test performs an alignment of 100 pyruvate decarboxylase sequences. Learn more via the OpenBenchmarking.org test page.
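MAFFT itself is a single-command aligner, so outside the harness the timed step amounts to something like the following, with the FASTA input name only a placeholder.

  # align the sequence set using automatic strategy selection
  time mafft --auto sequences.fasta > aligned.fasta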

Timed MAFFT Alignment 7.471, Multiple Sequence Alignment - LSU RNA (Seconds; fewer is better):
  Clear Linux 34100: 7.218 (SE +/- 0.025, N = 3; min 7.19 / max 7.27)
  Fedora Workstation 33: 8.856 (SE +/- 0.092, N = 5; min 8.68 / max 9.21)
  openSUSE Tumbleweed: 8.271 (SE +/- 0.079, N = 3; min 8.12 / max 8.38)
  Manjaro Linux 20.2: 8.535 (SE +/- 0.049, N = 3; min 8.44 / max 8.6)
  1. (CC) gcc options: -std=c99 -O3 -lm -lpthread

SQLite Speedtest

This is a benchmark of SQLite's speedtest1 benchmark program with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.
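speedtest1 is the benchmark driver that ships in the SQLite source tree; the increased problem size mentioned above corresponds to its size argument, so a rough manual equivalent (assuming speedtest1 has been built from the SQLite sources) is:

  # run the bundled speedtest1 driver at problem size 1,000
  time ./speedtest1 --size 1000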

SQLite Speedtest 3.30, Timed Time - Size 1,000 (Seconds; fewer is better):
  Clear Linux 34100: 40.06 (SE +/- 0.26, N = 3; min 39.54 / max 40.37)
  Fedora Workstation 33: 46.70 (SE +/- 0.40, N = 3; min 45.94 / max 47.28)
  openSUSE Tumbleweed: 48.51 (SE +/- 0.39, N = 3; min 47.75 / max 49.02)
  Manjaro Linux 20.2: 43.97 (SE +/- 0.37, N = 3; min 43.34 / max 44.61)
  Compiler flag notes (in system order): -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake; -O2; -O2; -O2. 1. (CC) gcc options: -ldl -lz -lpthread

DeepSpeech

Mozilla DeepSpeech is a speech-to-text engine powered by TensorFlow for machine learning and derived from Baidu's Deep Speech research paper. This test profile times the speech-to-text process for a roughly three minute audio recording. Learn more via the OpenBenchmarking.org test page.
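The timed step is a straightforward transcription with the DeepSpeech 0.6 CLI; the model and audio file names below are placeholders for whatever acoustic model and recording are used.

  # transcribe a roughly three minute recording on the CPU
  time deepspeech --model deepspeech-0.6.0-models.pbmm --audio recording.wav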

DeepSpeech 0.6, Acceleration: CPU (Seconds; fewer is better):
  Clear Linux 34100: 54.52 (SE +/- 0.28, N = 3; min 53.97 / max 54.9)
  Fedora Workstation 33: 64.98 (SE +/- 0.30, N = 3; min 64.41 / max 65.4)
  openSUSE Tumbleweed: 65.84 (SE +/- 0.23, N = 3; min 65.39 / max 66.17)
  Manjaro Linux 20.2: 65.69 (SE +/- 0.52, N = 3; min 64.75 / max 66.57)

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.
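RocksDB's db_bench utility drives these workloads; the Random Read and Read While Writing results correspond to benchmark names along these lines, with the thread count purely illustrative.

  # random point reads, then reads concurrent with a writer
  ./db_bench --benchmarks=readrandom --threads=24
  ./db_bench --benchmarks=readwhilewriting --threads=24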

Facebook RocksDB 6.3.6, Test: Random Read (Op/s; more is better):
  Clear Linux 34100: 122200686 (SE +/- 125078.44, N = 3; min 121969407 / max 122398887)
  Fedora Workstation 33: 101607762 (SE +/- 778043.91, N = 15; min 98595008 / max 105890201)
  Manjaro Linux 20.2: 104499893 (SE +/- 496140.82, N = 3; min 103587153 / max 105293378)
  Compiler flag notes: -O2. 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fno-builtin-memcmp -fno-rtti -rdynamic -lpthread

PyBench

This test profile reports the total time of the different average timed test results from PyBench. PyBench reports average test times for different functions such as BuiltinFunctionCalls and NestedForLoops, with this total result providing a rough estimate as to Python's average performance on a given system. This test profile runs PyBench each time for 20 rounds. Learn more via the OpenBenchmarking.org test page.

PyBench 2018-02-16, Total For Average Test Times (Milliseconds; fewer is better):
  Clear Linux 34100: 737 (SE +/- 7.22, N = 3; min 724 / max 749)
  Fedora Workstation 33: 836
  openSUSE Tumbleweed: 880 (SE +/- 6.94, N = 15; min 847 / max 928)
  Manjaro Linux 20.2: 734 (SE +/- 4.67, N = 3; min 725 / max 741)

SVT-AV1

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 CPU-based multi-threaded video encoder for the AV1 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.
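SVT-AV1's encoder modes map to the SvtAv1EncApp preset/enc-mode switch; a rough manual equivalent of the Enc Mode 8 run is sketched below, noting that the raw YUV file name and dimensions are placeholders and that the exact flag name for the mode has changed across SVT-AV1 releases (newer builds use --preset, older ones -enc-mode).

  # Enc Mode 8 style 1080p encode from raw YUV input
  SvtAv1EncApp -i input_1080p.yuv -w 1920 -h 1080 --preset 8 -b output.ivf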

SVT-AV1 0.8, Encoder Mode: Enc Mode 8 - Input: 1080p (Frames Per Second; more is better):
  Clear Linux 34100: 56.90 (SE +/- 0.03, N = 3; min 56.87 / max 56.96)
  Fedora Workstation 33: 47.92 (SE +/- 0.26, N = 3; min 47.39 / max 48.22)
  openSUSE Tumbleweed: 49.15 (SE +/- 0.49, N = 3; min 48.54 / max 50.13)
  Manjaro Linux 20.2: 48.99 (SE +/- 0.27, N = 3; min 48.54 / max 49.46)
  1. (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie

Mobile Neural Network

MNN is the Mobile Neural Network, a highly efficient and lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17, Model: mobilenet-v1-1.0 (ms; fewer is better):
  Clear Linux 34100: 4.878 (SE +/- 0.014, N = 12; min 4.77 / max 4.99)
  Fedora Workstation 33: 5.531 (SE +/- 0.016, N = 3; min 5.5 / max 5.56)
  openSUSE Tumbleweed: 5.767 (SE +/- 0.012, N = 15; min 5.71 / max 5.85)
  Manjaro Linux 20.2: 5.640 (SE +/- 0.013, N = 15; min 5.56 / max 5.75)
  Compiler flag notes: -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake. Reported per-run ranges (in system order): MIN: 4.75 / MAX: 12.01; MIN: 5.47 / MAX: 11.63; MIN: 5.53 / MAX: 7.31; MIN: 5.53 / MAX: 6.15. 1. (CXX) g++ options: -O3 -std=c++11 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2020-09-17, Model: resnet-v2-50 (ms; fewer is better):
  Clear Linux 34100: 23.69 (SE +/- 0.19, N = 12; min 23.04 / max 24.99)
  Fedora Workstation 33: 26.24 (SE +/- 0.13, N = 3; min 26.01 / max 26.47)
  openSUSE Tumbleweed: 27.90 (SE +/- 0.12, N = 15; min 27.38 / max 29.06)
  Manjaro Linux 20.2: 26.96 (SE +/- 0.16, N = 15; min 25.97 / max 28)
  Compiler flag notes: -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake. Reported per-run ranges (in system order): MIN: 22.84 / MAX: 30.95; MIN: 25.85 / MAX: 32.13; MIN: 26.26 / MAX: 32.15; MIN: 25.75 / MAX: 28.44. 1. (CXX) g++ options: -O3 -std=c++11 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.
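zstd also ships a built-in benchmark mode, which is a quick way to sanity-check the level-3 throughput reported here on your own machine; the ISO file name below is only a placeholder.

  # built-in benchmark of compression level 3 on a sample file
  zstd -b3 ubuntu.iso
  # or simply time a real multi-threaded level-3 compression
  time zstd -3 -T0 ubuntu.iso -o ubuntu.iso.zst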

Zstd Compression 1.4.5, Compression Level: 3 (MB/s; more is better):
  Clear Linux 34100: 5038.5 (SE +/- 15.97, N = 3; min 5010.5 / max 5065.8)
  Fedora Workstation 33: 5099.8 (SE +/- 42.48, N = 3; min 5049.9 / max 5184.3)
  openSUSE Tumbleweed: 5666.1 (SE +/- 14.87, N = 3; min 5636.9 / max 5685.5)
  Manjaro Linux 20.2: 5926.2 (SE +/- 25.96, N = 3; min 5874.5 / max 5956)
  Compiler flag notes: -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake; -llzma -llz4. 1. (CC) gcc options: -O3 -pthread -lz

x265

This is a simple test of the x265 encoder run on the CPU with 1080p and 4K options for H.265 video encode performance with x265. Learn more via the OpenBenchmarking.org test page.

x265 3.4, Video Input: Bosphorus 4K (Frames Per Second; more is better):
  Clear Linux 34100: 24.96 (SE +/- 0.12, N = 3; min 24.84 / max 25.2)
  Fedora Workstation 33: 21.74 (SE +/- 0.04, N = 3; min 21.68 / max 21.82)
  openSUSE Tumbleweed: 23.60 (SE +/- 0.12, N = 3; min 23.42 / max 23.83)
  Manjaro Linux 20.2: 23.36 (SE +/- 0.04, N = 3; min 23.3 / max 23.43)
  Compiler flag notes (in system order): -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -lnuma; -O2; -O3; -O3. 1. (CXX) g++ options: -rdynamic -lpthread -lrt -ldl

Mobile Neural Network

MNN is the Mobile Neural Network, a highly efficient and lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17, Model: inception-v3 (ms; fewer is better):
  Clear Linux 34100: 24.44 (SE +/- 0.20, N = 12; min 23.14 / max 25.7)
  Fedora Workstation 33: 26.44 (SE +/- 0.16, N = 3; min 26.13 / max 26.61)
  openSUSE Tumbleweed: 27.87 (SE +/- 0.14, N = 15; min 27.22 / max 29.21)
  Manjaro Linux 20.2: 26.82 (SE +/- 0.17, N = 15; min 25.84 / max 28.32)
  Compiler flag notes: -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake. Reported per-run ranges (in system order): MIN: 22.88 / MAX: 33.11; MIN: 25.92 / MAX: 31.94; MIN: 26.23 / MAX: 61.98; MIN: 25.69 / MAX: 28.82. 1. (CXX) g++ options: -O3 -std=c++11 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU - Model: regnety_400m (ms; fewer is better):
  Clear Linux 34100: 15.62 (SE +/- 0.07, N = 3; min 15.51 / max 15.74)
  Fedora Workstation 33: 17.07 (SE +/- 0.16, N = 3; min 16.84 / max 17.37)
  openSUSE Tumbleweed: 17.58 (SE +/- 0.08, N = 3; min 17.43 / max 17.68)
  Manjaro Linux 20.2: 17.45 (SE +/- 0.10, N = 3; min 17.27 / max 17.61)
  Flags and reported ranges (in system order): -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake, MIN: 15.47 / MAX: 15.86; -O2, MIN: 16.77 / MAX: 18.29; -O3, MIN: 16.86 / MAX: 19.25; -O3, MIN: 17.19 / MAX: 17.82. 1. (CXX) g++ options: -rdynamic -lgomp -lpthread

Stockfish

This is a test of Stockfish, an advanced C++11 chess benchmark that can scale up to 128 CPU cores. Learn more via the OpenBenchmarking.org test page.
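Stockfish ships a built-in bench command that searches a fixed position set and reports nodes per second, the same metric shown below; the hash size and thread count in the second line are illustrative values for exercising all cores.

  # default built-in benchmark
  stockfish bench
  # larger run using a 1024 MB hash and 24 threads
  stockfish bench 1024 24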

Stockfish 12, Total Time (Nodes Per Second; more is better):
  Clear Linux 34100: 32794418 (SE +/- 392223.91, N = 3; min 32046589 / max 33373473)
  Fedora Workstation 33: 36843836 (SE +/- 175887.84, N = 3; min 36622103 / max 37191209)
  openSUSE Tumbleweed: 36844638 (SE +/- 247663.48, N = 15; min 34601659 / max 38716380)
  Manjaro Linux 20.2: 35678816 (SE +/- 324696.44, N = 7; min 34576271 / max 36959168)
  Compiler flag notes: -pipe -fexceptions -fstack-protector -ffat-lto-objects -fno-trapping-math -mtune=skylake. 1. (CXX) g++ options: -m64 -lpthread -O3 -fno-exceptions -std=c++17 -pedantic -msse -msse3 -mpopcnt -msse4.1 -mssse3 -msse2 -flto -flto=jobserver

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4 Alpha, Speed: 5 (Frames Per Second; more is better):
  Clear Linux 34100: 1.639 (SE +/- 0.002, N = 3; min 1.64 / max 1.64)
  Fedora Workstation 33: 1.491 (SE +/- 0.007, N = 3; min 1.48 / max 1.5)
  openSUSE Tumbleweed: 1.490 (SE +/- 0.005, N = 3; min 1.48 / max 1.5)
  Manjaro Linux 20.2: 1.462 (SE +/- 0.014, N = 5; min 1.41 / max 1.5)

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: Vulkan GPU - Model: regnety_400m (ms; fewer is better):
  Clear Linux 34100: 15.59 (SE +/- 0.00, N = 3; min 15.59 / max 15.6)
  Fedora Workstation 33: 16.98 (SE +/- 0.04, N = 3; min 16.9 / max 17.03)
  openSUSE Tumbleweed: 17.34 (SE +/- 0.33, N = 3; min 16.71 / max 17.82)
  Manjaro Linux 20.2: 17.47 (SE +/- 0.09, N = 3; min 17.32 / max 17.62)
  Flags and reported ranges (in system order): -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake, MIN: 15.53 / MAX: 15.8; -O2, MIN: 16.82 / MAX: 18.78; -O3, MIN: 16.22 / MAX: 18.78; -O3, MIN: 17.24 / MAX: 18.49. 1. (CXX) g++ options: -rdynamic -lgomp -lpthread

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4 Alpha, Speed: 6 (Frames Per Second; more is better):
  Clear Linux 34100: 2.217 (SE +/- 0.002, N = 3; min 2.21 / max 2.22)
  Fedora Workstation 33: 1.980 (SE +/- 0.002, N = 3; min 1.98 / max 1.98)
  openSUSE Tumbleweed: 2.009 (SE +/- 0.028, N = 3; min 1.97 / max 2.06)
  Manjaro Linux 20.2: 2.007 (SE +/- 0.011, N = 3; min 1.99 / max 2.03)

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.
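The UASTC Level 3 setting below corresponds to basisu's UASTC mode with its quality level switch; a rough manual equivalent, with the PNG input name only illustrative, is:

  # encode a texture to UASTC at level 3
  basisu -uastc -uastc_level 3 texture.png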

Basis Universal 1.12, Settings: UASTC Level 3 (Seconds; fewer is better):
  Clear Linux 34100: 33.13 (SE +/- 0.01, N = 3; min 33.11 / max 33.15)
  Fedora Workstation 33: 36.89 (SE +/- 0.06, N = 3; min 36.77 / max 36.96)
  openSUSE Tumbleweed: 33.54 (SE +/- 0.04, N = 3; min 33.46 / max 33.58)
  Manjaro Linux 20.2: 34.00 (SE +/- 0.06, N = 3; min 33.93 / max 34.11)
  Compiler flag notes (in system order): -O3; -O2; -O3; -O3. 1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -rdynamic -lm -lpthread

Timed FFmpeg Compilation

This test times how long it takes to build the FFmpeg multimedia library. Learn more via the OpenBenchmarking.org test page.
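As with the kernel build test above, this is a timed configure-and-make of the FFmpeg tree; the manual equivalent is roughly:

  # configure and time a parallel build of the FFmpeg sources
  ./configure
  time make -j$(nproc)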

Timed FFmpeg Compilation 4.2.2, Time To Compile (Seconds; fewer is better):
  Clear Linux 34100: 30.73 (SE +/- 0.07, N = 3; min 30.6 / max 30.81)
  Fedora Workstation 33: 30.04 (SE +/- 0.02, N = 3; min 30.01 / max 30.07)
  openSUSE Tumbleweed: 30.15 (SE +/- 0.10, N = 3; min 29.95 / max 30.28)
  Manjaro Linux 20.2: 33.35 (SE +/- 0.07, N = 3; min 33.23 / max 33.46)

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU - Model: yolov4-tiny (ms; fewer is better):
  Clear Linux 34100: 18.85 (SE +/- 0.02, N = 3; min 18.81 / max 18.88)
  Fedora Workstation 33: 20.25 (SE +/- 0.02, N = 3; min 20.22 / max 20.29)
  openSUSE Tumbleweed: 20.92 (SE +/- 0.41, N = 3; min 20.44 / max 21.73)
  Manjaro Linux 20.2: 19.84 (SE +/- 0.02, N = 3; min 19.81 / max 19.88)
  Flags and reported ranges (in system order): -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake, MIN: 18.67 / MAX: 19.82; -O2, MIN: 20.08 / MAX: 29.3; -O3, MIN: 19.68 / MAX: 22.55; -O3, MIN: 19.65 / MAX: 20.26. 1. (CXX) g++ options: -rdynamic -lgomp -lpthread

NCNN 20201218, Target: Vulkan GPU - Model: yolov4-tiny (ms; fewer is better):
  Clear Linux 34100: 18.83 (SE +/- 0.08, N = 3; min 18.73 / max 18.98)
  Fedora Workstation 33: 20.58 (SE +/- 0.43, N = 3; min 19.97 / max 21.42)
  openSUSE Tumbleweed: 20.89 (SE +/- 0.49, N = 3; min 20.22 / max 21.84)
  Manjaro Linux 20.2: 19.82 (SE +/- 0.03, N = 3; min 19.78 / max 19.89)
  Flags and reported ranges (in system order): -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake, MIN: 18.62 / MAX: 19.15; -O2, MIN: 19.83 / MAX: 22.44; -O3, MIN: 19.47 / MAX: 22.86; -O3, MIN: 19.57 / MAX: 20.22. 1. (CXX) g++ options: -rdynamic -lgomp -lpthread

Mobile Neural Network

MNN is the Mobile Neural Network, a highly efficient and lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17, Model: MobileNetV2_224 (ms; fewer is better):
  Clear Linux 34100: 2.844 (SE +/- 0.020, N = 12; min 2.71 / max 2.94)
  Fedora Workstation 33: 3.058 (SE +/- 0.020, N = 3; min 3.03 / max 3.1)
  openSUSE Tumbleweed: 3.138 (SE +/- 0.014, N = 15; min 3.07 / max 3.28)
  Manjaro Linux 20.2: 3.006 (SE +/- 0.015, N = 15; min 2.91 / max 3.13)
  Compiler flag notes: -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake. Reported per-run ranges (in system order): MIN: 2.64 / MAX: 10.76; MIN: 2.97 / MAX: 3.98; MIN: 2.9 / MAX: 4.82; MIN: 2.84 / MAX: 3.34. 1. (CXX) g++ options: -O3 -std=c++11 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Node.js V8 Web Tooling Benchmark

Running the V8 project's Web-Tooling-Benchmark under Node.js. The Web-Tooling-Benchmark stresses JavaScript-related workloads common to web developers, such as Babel, TypeScript, and Babylon. This test profile can test the system's JavaScript performance with Node.js. Learn more via the OpenBenchmarking.org test page.
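The overall runs/s figure is an aggregate across the individual tool workloads; to my understanding the Web-Tooling-Benchmark combines the per-tool rates with a geometric mean. A minimal sketch of that aggregation, using made-up per-tool scores:

    from math import prod

    # Hypothetical per-tool throughputs in runs/s (illustrative values only).
    scores = {"babel": 12.4, "typescript": 18.9, "babylon": 21.3, "prettier": 14.7}

    overall = prod(scores.values()) ** (1.0 / len(scores))
    print(f"aggregate score: {overall:.2f} runs/s")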

Node.js V8 Web Tooling Benchmark (runs/s; more is better)
  Clear Linux 34100:      17.35 (SE +/- 0.07, N = 3; Min: 17.22 / Max: 17.44)
  Fedora Workstation 33:  15.79 (SE +/- 0.07, N = 3; Min: 15.68 / Max: 15.91)
  openSUSE Tumbleweed:    16.48 (SE +/- 0.16, N = 3; Min: 16.27 / Max: 16.79)
  Manjaro Linux 20.2:     16.22 (SE +/- 0.19, N = 3; Min: 16.00 / Max: 16.61)
  Notes: Node.js versions - Clear Linux 34100: 14.15.1; Fedora Workstation 33: 14.15.1; openSUSE Tumbleweed: 14.15.1; Manjaro Linux 20.2: 15.3.0

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17 - Model: SqueezeNetV1.0 (ms; fewer is better)
  Clear Linux 34100:      5.574 (SE +/- 0.078, N = 12; Min: 5.16 / Max: 6.05)
  Fedora Workstation 33:  6.115 (SE +/- 0.034, N = 3;  Min: 6.05 / Max: 6.17)
  openSUSE Tumbleweed:    6.122 (SE +/- 0.048, N = 15; Min: 5.76 / Max: 6.48)
  Manjaro Linux 20.2:     6.025 (SE +/- 0.082, N = 15; Min: 5.22 / Max: 6.38)
  Notes: Clear Linux flags: -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake. (CXX) g++ options: -O3 -std=c++11 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Facebook RocksDB

This is a benchmark of Facebook's RocksDB, an embeddable persistent key-value store for fast storage that is based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

Facebook RocksDB 6.3.6 - Test: Read While Writing (Op/s; more is better)
  Clear Linux 34100:      3569258 (SE +/- 28070.38, N = 15; Min: 3437965 / Max: 3796429)
  Fedora Workstation 33:  3262836 (SE +/- 35265.20, N = 4;  Min: 3179976 / Max: 3349878)
  Manjaro Linux 20.2:     3369065 (SE +/- 25174.97, N = 11; Min: 3265079 / Max: 3514782)
  Notes: (CXX) g++ options: -O3 -march=native -std=c++11 -fno-builtin-memcmp -fno-rtti -rdynamic -lpthread; one result is additionally annotated -O2 in the source graph.

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: Vulkan GPU - Model: squeezenet_ssd (ms; fewer is better)
  Clear Linux 34100:      13.27 (SE +/- 0.03, N = 3; Min: 13.21 / Max: 13.32)
  Fedora Workstation 33:  14.07 (SE +/- 0.10, N = 3; Min: 13.88 / Max: 14.22)
  openSUSE Tumbleweed:    14.44 (SE +/- 0.19, N = 3; Min: 14.13 / Max: 14.79)
  Manjaro Linux 20.2:     14.04 (SE +/- 0.02, N = 3; Min: 14.01 / Max: 14.07)
  Notes: Clear Linux flags: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake; Fedora: -O2; openSUSE: -O3; Manjaro: -O3. (CXX) g++ options: -rdynamic -lgomp -lpthread

LevelDB

LevelDB is a key-value storage library developed by Google that can use Snappy for data compression and offers other modern features. Learn more via the OpenBenchmarking.org test page.
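The LevelDB results below are reported as microseconds per operation, which is simply the elapsed wall time divided by the number of operations performed. A small worked example with hypothetical numbers:

    # Hypothetical read phase: total wall time and operation count.
    elapsed_seconds = 4.2
    operations = 400_000

    us_per_op = elapsed_seconds * 1e6 / operations
    print(f"{us_per_op:.2f} microseconds per op")   # 10.50 in this example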

LevelDB 1.22 - Benchmark: Random Read (Microseconds Per Op; fewer is better)
  Clear Linux 34100:      10.75 (SE +/- 0.05, N = 3; Min: 10.66 / Max: 10.81)
  Fedora Workstation 33:  11.20 (SE +/- 0.09, N = 3; Min: 11.09 / Max: 11.38)
  openSUSE Tumbleweed:    10.31 (SE +/- 0.03, N = 3; Min: 10.26 / Max: 10.35)
  Manjaro Linux 20.2:     10.56 (SE +/- 0.05, N = 3; Min: 10.45 / Max: 10.62)

LevelDB 1.22 - Benchmark: Hot Read (Microseconds Per Op; fewer is better)
  Clear Linux 34100:      10.68 (SE +/- 0.11, N = 3; Min: 10.56 / Max: 10.91)
  Fedora Workstation 33:  11.14 (SE +/- 0.05, N = 3; Min: 11.09 / Max: 11.23)
  openSUSE Tumbleweed:    10.26 (SE +/- 0.10, N = 3; Min: 10.05 / Max: 10.39)
  Manjaro Linux 20.2:     10.46 (SE +/- 0.03, N = 3; Min: 10.41 / Max: 10.52)
  Notes: Clear Linux flags: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake; Fedora: -O2; openSUSE: -O3; Manjaro: -O3. (CXX) g++ options: -lsnappy -lpthread

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: Vulkan GPU - Model: vgg16 (ms; fewer is better)
  Clear Linux 34100:      50.80 (SE +/- 0.20, N = 3; Min: 50.40 / Max: 51.04)
  Fedora Workstation 33:  52.99 (SE +/- 0.30, N = 3; Min: 52.39 / Max: 53.31)
  openSUSE Tumbleweed:    55.15 (SE +/- 0.35, N = 3; Min: 54.66 / Max: 55.83)
  Manjaro Linux 20.2:     53.31 (SE +/- 0.04, N = 3; Min: 53.22 / Max: 53.35)
  Notes: Clear Linux flags: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake; Fedora: -O2; openSUSE: -O3; Manjaro: -O3. (CXX) g++ options: -rdynamic -lgomp -lpthread

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine driven by neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.26 - Backend: Eigen (Nodes Per Second; more is better)
  Clear Linux 34100:      571
  Fedora Workstation 33:  575 (SE +/- 1.73, N = 3; Min: 572 / Max: 578)
  openSUSE Tumbleweed:    570 (SE +/- 5.55, N = 3; Min: 560 / Max: 579)
  Manjaro Linux 20.2:     618
  Notes: Clear Linux flags: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake. (CXX) g++ options: -flto -pthread

Timed HMMer Search

This test searches through the Pfam database of profile hidden Markov models. The search finds the domain structure of the Drosophila Sevenless protein. Learn more via the OpenBenchmarking.org test page.

Timed HMMer Search 3.3.1 - Pfam Database Search (Seconds; fewer is better)
  Clear Linux 34100:      79.05 (SE +/- 0.04, N = 3; Min: 78.98 / Max: 79.09)
  Fedora Workstation 33:  83.43 (SE +/- 0.07, N = 3; Min: 83.34 / Max: 83.56)
  openSUSE Tumbleweed:    79.94 (SE +/- 0.04, N = 3; Min: 79.88 / Max: 80.02)
  Manjaro Linux 20.2:     85.69 (SE +/- 0.06, N = 3; Min: 85.61 / Max: 85.80)
  Notes: Clear Linux flags: -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake. (CC) gcc options: -O3 -pthread -lhmmer -leasel -lm

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: squeezenet_ssd (ms; fewer is better)
  Clear Linux 34100:      13.39 (SE +/- 0.01, N = 3; Min: 13.36 / Max: 13.40)
  Fedora Workstation 33:  14.13 (SE +/- 0.03, N = 3; Min: 14.10 / Max: 14.18)
  openSUSE Tumbleweed:    14.46 (SE +/- 0.07, N = 3; Min: 14.36 / Max: 14.60)
  Manjaro Linux 20.2:     14.07 (SE +/- 0.01, N = 3; Min: 14.05 / Max: 14.08)
  Notes: Clear Linux flags: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake; Fedora: -O2; openSUSE: -O3; Manjaro: -O3. (CXX) g++ options: -rdynamic -lgomp -lpthread

SVT-AV1

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 CPU-based multi-threaded video encoder for the AV1 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 0.8 - Encoder Mode: Enc Mode 4 - Input: 1080p (Frames Per Second; more is better)
  Clear Linux 34100:      6.730 (SE +/- 0.019, N = 3; Min: 6.69 / Max: 6.75)
  Fedora Workstation 33:  6.240 (SE +/- 0.004, N = 3; Min: 6.23 / Max: 6.24)
  openSUSE Tumbleweed:    6.396 (SE +/- 0.008, N = 3; Min: 6.38 / Max: 6.41)
  Manjaro Linux 20.2:     6.458 (SE +/- 0.026, N = 3; Min: 6.42 / Max: 6.51)
  Notes: (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: vgg16 (ms; fewer is better)
  Clear Linux 34100:      51.00 (SE +/- 0.02, N = 3; Min: 50.98 / Max: 51.03)
  Fedora Workstation 33:  53.09 (SE +/- 0.14, N = 3; Min: 52.80 / Max: 53.25)
  openSUSE Tumbleweed:    55.00 (SE +/- 0.25, N = 3; Min: 54.67 / Max: 55.49)
  Manjaro Linux 20.2:     53.29 (SE +/- 0.04, N = 3; Min: 53.20 / Max: 53.34)
  Notes: Clear Linux flags: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake; Fedora: -O2; openSUSE: -O3; Manjaro: -O3. (CXX) g++ options: -rdynamic -lgomp -lpthread

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.
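The compression and decompression speeds below are throughput figures: bytes processed divided by wall time. A minimal sketch of that measurement follows; Python's standard library has no LZ4 codec, so zlib stands in purely to make the example runnable:

    import time
    import zlib  # stand-in codec; the benchmark itself measures LZ4, not zlib

    payload = (b"phoronix test suite " * 4096) * 32   # a few MB of compressible data

    start = time.perf_counter()
    compressed = zlib.compress(payload, level=6)
    elapsed = time.perf_counter() - start

    mb = len(payload) / 1e6
    print(f"{mb:.1f} MB compressed at {mb / elapsed:.1f} MB/s "
          f"(ratio {len(payload) / len(compressed):.1f}:1)")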

LZ4 Compression 1.9.3 - Compression Level: 9 - Compression Speed (MB/s; more is better)
  Clear Linux 34100:      69.98 (SE +/- 0.99, N = 3; Min: 68.93 / Max: 71.95)
  Fedora Workstation 33:  68.30 (SE +/- 0.44, N = 3; Min: 67.55 / Max: 69.07)
  openSUSE Tumbleweed:    73.61 (SE +/- 0.78, N = 3; Min: 72.09 / Max: 74.68)
  Manjaro Linux 20.2:     71.11 (SE +/- 0.36, N = 3; Min: 70.41 / Max: 71.59)
  Notes: (CC) gcc options: -O3

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: Vulkan GPU - Model: alexnet (ms; fewer is better)
  Clear Linux 34100:      9.94  (SE +/- 0.04, N = 3; Min: 9.90  / Max: 10.02)
  Fedora Workstation 33:  10.30 (SE +/- 0.02, N = 3; Min: 10.26 / Max: 10.32)
  openSUSE Tumbleweed:    10.70 (SE +/- 0.03, N = 3; Min: 10.64 / Max: 10.75)
  Manjaro Linux 20.2:     10.23 (SE +/- 0.01, N = 3; Min: 10.22 / Max: 10.25)
  Notes: Clear Linux flags: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake; Fedora: -O2; openSUSE: -O3; Manjaro: -O3. (CXX) g++ options: -rdynamic -lgomp -lpthread

FFTE

FFTE is a package by Daisuke Takahashi to compute Discrete Fourier Transforms of 1-, 2- and 3-dimensional sequences of length (2^p)*(3^q)*(5^r). Learn more via the OpenBenchmarking.org test page.
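In other words, valid transform lengths have no prime factor larger than 5; the N=256 run below qualifies because 256 = 2^8. A quick check in Python:

    def is_valid_ffte_length(n: int) -> bool:
        # Accepts lengths of the form (2^p)*(3^q)*(5^r).
        for factor in (2, 3, 5):
            while n % factor == 0:
                n //= factor
        return n == 1

    print(is_valid_ffte_length(256))   # True: 2^8
    print(is_valid_ffte_length(360))   # True: 2^3 * 3^2 * 5
    print(is_valid_ffte_length(112))   # False: contains a factor of 7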

FFTE 7.0 - N=256, 3D Complex FFT Routine (MFLOPS; more is better)
  Clear Linux 34100:      41134.20 (SE +/- 91.08,  N = 3; Min: 41006.55 / Max: 41310.57)
  Fedora Workstation 33:  38377.54 (SE +/- 23.97,  N = 3; Min: 38341.06 / Max: 38422.72)
  openSUSE Tumbleweed:    40596.64 (SE +/- 182.66, N = 3; Min: 40272.22 / Max: 40904.29)
  Manjaro Linux 20.2:     38722.69 (SE +/- 14.31,  N = 3; Min: 38700.52 / Max: 38749.44)
  Notes: (F9X) gfortran options: -O3 -fomit-frame-pointer -fopenmp

libavif avifenc

This is a test of the AOMedia libavif library, timing the encoding of a JPEG image to the AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.7.3 - Encoder Speed: 2 (Seconds; fewer is better)
  Clear Linux 34100:      33.69 (SE +/- 0.33, N = 6; Min: 33.17 / Max: 35.25)
  Fedora Workstation 33:  36.10 (SE +/- 0.22, N = 3; Min: 35.70 / Max: 36.46)
  openSUSE Tumbleweed:    35.27 (SE +/- 0.13, N = 3; Min: 35.00 / Max: 35.43)
  Manjaro Linux 20.2:     35.50 (SE +/- 0.29, N = 3; Min: 35.12 / Max: 36.07)
  Notes: (CXX) g++ options: -O3 -fPIC

Crafty

This is a performance test of Crafty, an advanced open-source chess engine. Learn more via the OpenBenchmarking.org test page.

Crafty 25.2 - Elapsed Time (Nodes Per Second; more is better)
  Clear Linux 34100:      11748939 (SE +/- 145230.66, N = 3; Min: 11525989 / Max: 12021646)
  Fedora Workstation 33:  11976530 (SE +/- 45937.01,  N = 3; Min: 11884788 / Max: 12026665)
  openSUSE Tumbleweed:    12189062 (SE +/- 114094.23, N = 3; Min: 11970058 / Max: 12354066)
  Manjaro Linux 20.2:     11385580 (SE +/- 64897.72,  N = 3; Min: 11255798 / Max: 11452088)
  Notes: (CC) gcc options: -pthread -lstdc++ -fprofile-use -lm

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: alexnet (ms; fewer is better)
  Clear Linux 34100:      9.98  (SE +/- 0.05, N = 3; Min: 9.91  / Max: 10.08)
  Fedora Workstation 33:  10.35 (SE +/- 0.04, N = 3; Min: 10.30 / Max: 10.42)
  openSUSE Tumbleweed:    10.68 (SE +/- 0.02, N = 3; Min: 10.64 / Max: 10.71)
  Manjaro Linux 20.2:     10.24 (SE +/- 0.01, N = 3; Min: 10.23 / Max: 10.27)
  Notes: Clear Linux flags: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake; Fedora: -O2; openSUSE: -O3; Manjaro: -O3. (CXX) g++ options: -rdynamic -lgomp -lpthread

Facebook RocksDB

This is a benchmark of Facebook's RocksDB, an embeddable persistent key-value store for fast storage that is based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

Facebook RocksDB 6.3.6 - Test: Random Fill (Op/s; more is better)
  Clear Linux 34100:      1514885 (SE +/- 1801.14,  N = 3; Min: 1511682 / Max: 1517914)
  Fedora Workstation 33:  1459897 (SE +/- 771.67,   N = 3; Min: 1458369 / Max: 1460849)
  Manjaro Linux 20.2:     1423774 (SE +/- 13401.89, N = 3; Min: 1407545 / Max: 1450363)
  Notes: (CXX) g++ options: -O3 -march=native -std=c++11 -fno-builtin-memcmp -fno-rtti -rdynamic -lpthread; one result is additionally annotated -O2 in the source graph.

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: resnet18 (ms; fewer is better)
  Clear Linux 34100:      12.59 (SE +/- 0.02, N = 3; Min: 12.57 / Max: 12.62)
  Fedora Workstation 33:  12.87 (SE +/- 0.05, N = 3; Min: 12.79 / Max: 12.97)
  openSUSE Tumbleweed:    13.39 (SE +/- 0.02, N = 3; Min: 13.35 / Max: 13.41)
  Manjaro Linux 20.2:     12.82 (SE +/- 0.01, N = 3; Min: 12.81 / Max: 12.83)

NCNN 20201218 - Target: Vulkan GPU - Model: resnet18 (ms; fewer is better)
  Clear Linux 34100:      12.67 (SE +/- 0.09, N = 3; Min: 12.57 / Max: 12.86)
  Fedora Workstation 33:  12.83 (SE +/- 0.04, N = 3; Min: 12.75 / Max: 12.87)
  openSUSE Tumbleweed:    13.47 (SE +/- 0.06, N = 3; Min: 13.41 / Max: 13.59)
  Manjaro Linux 20.2:     12.83 (SE +/- 0.02, N = 3; Min: 12.80 / Max: 12.86)
  Notes: Clear Linux flags: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake; Fedora: -O2; openSUSE: -O3; Manjaro: -O3. (CXX) g++ options: -rdynamic -lgomp -lpthread

RealSR-NCNN

RealSR-NCNN is an NCNN neural network implementation of the RealSR project, accelerated using the Vulkan API. RealSR is Real-World Super-Resolution via Kernel Estimation and Noise Injection. NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. This test profile times how long it takes to increase the resolution of a sample image by a scale of 4x with Vulkan. Learn more via the OpenBenchmarking.org test page.
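Since the test upscales by a factor of 4 in each dimension, the output image carries 16 times the pixels of the input. A quick illustration with a purely hypothetical input size:

    # Hypothetical input resolution, for illustration only.
    width, height = 1000, 1500
    out_w, out_h = 4 * width, 4 * height
    print(f"{width}x{height} -> {out_w}x{out_h} "
          f"({(out_w * out_h) // (width * height)}x the pixel count)")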

RealSR-NCNN 20200818 - Scale: 4x - TAA: Yes (Seconds; fewer is better)
  Clear Linux 34100:      84.91 (SE +/- 0.02, N = 3; Min: 84.87 / Max: 84.94)
  Fedora Workstation 33:  79.94 (SE +/- 0.04, N = 3; Min: 79.89 / Max: 80.02)
  openSUSE Tumbleweed:    80.73 (SE +/- 0.12, N = 3; Min: 80.48 / Max: 80.88)
  Manjaro Linux 20.2:     79.91 (SE +/- 0.01, N = 3; Min: 79.88 / Max: 79.93)

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine driven by neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.26 - Backend: BLAS (Nodes Per Second; more is better)
  Clear Linux 34100:      544
  Fedora Workstation 33:  577 (SE +/- 2.33, N = 3; Min: 572 / Max: 579)
  Notes: Clear Linux flags: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake. (CXX) g++ options: -flto -pthread

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 - Compression Level: 9 - Decompression Speed (MB/s; more is better)
  Clear Linux 34100:      13921.7 (SE +/- 40.00, N = 3; Min: 13877.9 / Max: 14001.6)
  Fedora Workstation 33:  13168.4 (SE +/- 37.15, N = 3; Min: 13094.7 / Max: 13213.3)
  openSUSE Tumbleweed:    13130.1 (SE +/- 52.10, N = 3; Min: 13033.2 / Max: 13211.7)
  Manjaro Linux 20.2:     13271.9 (SE +/- 38.21, N = 3; Min: 13197.0 / Max: 13322.6)
  Notes: (CC) gcc options: -O3

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: efficientnet-b0 (ms; fewer is better)
  Clear Linux 34100:      5.21 (SE +/- 0.02, N = 3; Min: 5.19 / Max: 5.25)
  Fedora Workstation 33:  5.28 (SE +/- 0.02, N = 3; Min: 5.25 / Max: 5.32)
  openSUSE Tumbleweed:    5.52 (SE +/- 0.03, N = 3; Min: 5.46 / Max: 5.58)
  Manjaro Linux 20.2:     5.31 (SE +/- 0.01, N = 3; Min: 5.30 / Max: 5.34)

NCNN 20201218 - Target: Vulkan GPU - Model: resnet50 (ms; fewer is better)
  Clear Linux 34100:      24.05 (SE +/- 0.08, N = 3; Min: 23.95 / Max: 24.20)
  Fedora Workstation 33:  23.65 (SE +/- 0.18, N = 3; Min: 23.30 / Max: 23.89)
  openSUSE Tumbleweed:    24.98 (SE +/- 0.14, N = 3; Min: 24.72 / Max: 25.20)
  Manjaro Linux 20.2:     23.65 (SE +/- 0.02, N = 3; Min: 23.62 / Max: 23.67)
  Notes: Clear Linux flags: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake; Fedora: -O2; openSUSE: -O3; Manjaro: -O3. (CXX) g++ options: -rdynamic -lgomp -lpthread

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 - Compression Level: 3 - Decompression Speed (MB/s; more is better)
  Clear Linux 34100:      13899.3 (SE +/- 32.58, N = 3;  Min: 13836.4 / Max: 13945.4)
  Fedora Workstation 33:  13253.7 (SE +/- 8.58,  N = 3;  Min: 13236.7 / Max: 13264.1)
  openSUSE Tumbleweed:    13178.8 (SE +/- 35.67, N = 3;  Min: 13135.6 / Max: 13249.6)
  Manjaro Linux 20.2:     13418.1 (SE +/- 16.97, N = 11; Min: 13312.6 / Max: 13483.9)
  Notes: (CC) gcc options: -O3

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: Vulkan GPU - Model: googlenet (ms; fewer is better)
  Clear Linux 34100:      12.20 (SE +/- 0.13, N = 3; Min: 12.04 / Max: 12.46)
  Fedora Workstation 33:  12.02 (SE +/- 0.06, N = 3; Min: 11.96 / Max: 12.13)
  openSUSE Tumbleweed:    12.61 (SE +/- 0.04, N = 3; Min: 12.53 / Max: 12.66)
  Manjaro Linux 20.2:     11.97 (SE +/- 0.01, N = 3; Min: 11.94 / Max: 11.98)

NCNN 20201218 - Target: CPU - Model: resnet50 (ms; fewer is better)
  Clear Linux 34100:      24.16 (SE +/- 0.05, N = 3; Min: 24.07 / Max: 24.22)
  Fedora Workstation 33:  23.84 (SE +/- 0.02, N = 3; Min: 23.80 / Max: 23.88)
  openSUSE Tumbleweed:    24.94 (SE +/- 0.14, N = 3; Min: 24.79 / Max: 25.22)
  Manjaro Linux 20.2:     23.68 (SE +/- 0.03, N = 3; Min: 23.65 / Max: 23.73)

NCNN 20201218 - Target: Vulkan GPU - Model: mobilenet (ms; fewer is better)
  Clear Linux 34100:      12.27 (SE +/- 0.05, N = 3; Min: 12.21 / Max: 12.37)
  Fedora Workstation 33:  12.41 (SE +/- 0.09, N = 3; Min: 12.28 / Max: 12.59)
  openSUSE Tumbleweed:    12.78 (SE +/- 0.15, N = 3; Min: 12.49 / Max: 12.99)
  Manjaro Linux 20.2:     12.15 (SE +/- 0.03, N = 3; Min: 12.10 / Max: 12.20)
  Notes: Clear Linux flags: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake; Fedora: -O2; openSUSE: -O3; Manjaro: -O3. (CXX) g++ options: -rdynamic -lgomp -lpthread

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility, using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless, Highest Compression (Encode Time - Seconds; fewer is better)
  Clear Linux 34100:      26.64 (SE +/- 0.13, N = 3; Min: 26.49 / Max: 26.89)
  Fedora Workstation 33:  27.48 (SE +/- 0.15, N = 3; Min: 27.18 / Max: 27.66)
  openSUSE Tumbleweed:    26.14 (SE +/- 0.10, N = 3; Min: 26.03 / Max: 26.33)
  Manjaro Linux 20.2:     27.22 (SE +/- 0.10, N = 3; Min: 27.12 / Max: 27.43)
  Notes: Clear Linux flags: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -ltiff; Fedora: -O2; openSUSE: -O2; Manjaro: -O2. (CC) gcc options: -fvisibility=hidden -pthread -lm -ljpeg -lpng16

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: Vulkan GPU - Model: mnasnet (ms; fewer is better)
  Clear Linux 34100:      3.71 (SE +/- 0.01, N = 3; Min: 3.70 / Max: 3.73)
  Fedora Workstation 33:  3.82 (SE +/- 0.02, N = 3; Min: 3.79 / Max: 3.85)
  openSUSE Tumbleweed:    3.90 (SE +/- 0.02, N = 3; Min: 3.85 / Max: 3.93)
  Manjaro Linux 20.2:     3.79 (SE +/- 0.01, N = 3; Min: 3.77 / Max: 3.81)
  Notes: Clear Linux flags: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake; Fedora: -O2; openSUSE: -O3; Manjaro: -O3. (CXX) g++ options: -rdynamic -lgomp -lpthread

RNNoise

RNNoise is a recurrent neural network for audio noise reduction developed by Mozilla and Xiph.Org. This test profile is a single-threaded test measuring the time to denoise a sample 26-minute-long 16-bit RAW audio file using this recurrent neural network noise suppression library. Learn more via the OpenBenchmarking.org test page.
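Since the input clip is roughly 26 minutes of audio, the wall times below translate into a real-time factor of roughly 100x on this CPU. A quick back-of-the-envelope check using the averages from the results that follow:

    audio_seconds = 26 * 60   # ~26 minutes of 16-bit RAW audio, per the test description
    results = {"Clear Linux 34100": 14.51,
               "Fedora Workstation 33": 15.22,
               "openSUSE Tumbleweed": 15.21}
    for distro, wall_seconds in results.items():
        print(f"{distro}: ~{audio_seconds / wall_seconds:.0f}x faster than real time")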

RNNoise 2020-06-28 (Seconds; fewer is better)
  Clear Linux 34100:      14.51 (SE +/- 0.12, N = 3; Min: 14.27 / Max: 14.64)
  Fedora Workstation 33:  15.22 (SE +/- 0.19, N = 3; Min: 14.84 / Max: 15.47)
  openSUSE Tumbleweed:    15.21 (SE +/- 0.10, N = 3; Min: 15.05 / Max: 15.40)
  Notes: Clear Linux flags: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake; Fedora: -O2; openSUSE: -O2. (CC) gcc options: -pedantic -fvisibility=hidden -lm

RealSR-NCNN

RealSR-NCNN is an NCNN neural network implementation of the RealSR project, accelerated using the Vulkan API. RealSR is Real-World Super-Resolution via Kernel Estimation and Noise Injection. NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. This test profile times how long it takes to increase the resolution of a sample image by a scale of 4x with Vulkan. Learn more via the OpenBenchmarking.org test page.

RealSR-NCNN 20200818 - Scale: 4x - TAA: No (Seconds; fewer is better)
  Clear Linux 34100:      12.32 (SE +/- 0.03, N = 3; Min: 12.29 / Max: 12.37)
  Fedora Workstation 33:  11.78 (SE +/- 0.00, N = 3; Min: 11.78 / Max: 11.79)
  openSUSE Tumbleweed:    11.84 (SE +/- 0.03, N = 3; Min: 11.79 / Max: 11.89)
  Manjaro Linux 20.2:     11.78 (SE +/- 0.02, N = 3; Min: 11.75 / Max: 11.82)

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: mobilenet (ms; fewer is better)
  Clear Linux 34100:      12.26 (SE +/- 0.05, N = 3; Min: 12.21 / Max: 12.37)
  Fedora Workstation 33:  12.40 (SE +/- 0.06, N = 3; Min: 12.30 / Max: 12.52)
  openSUSE Tumbleweed:    12.72 (SE +/- 0.10, N = 3; Min: 12.53 / Max: 12.86)
  Manjaro Linux 20.2:     12.16 (SE +/- 0.02, N = 3; Min: 12.13 / Max: 12.18)

NCNN 20201218 - Target: Vulkan GPU - Model: efficientnet-b0 (ms; fewer is better)
  Clear Linux 34100:      5.26 (SE +/- 0.08, N = 3; Min: 5.17 / Max: 5.41)
  Fedora Workstation 33:  5.31 (SE +/- 0.02, N = 3; Min: 5.29 / Max: 5.36)
  openSUSE Tumbleweed:    5.50 (SE +/- 0.02, N = 3; Min: 5.46 / Max: 5.54)
  Manjaro Linux 20.2:     5.29 (SE +/- 0.02, N = 3; Min: 5.26 / Max: 5.31)

NCNN 20201218 - Target: CPU - Model: googlenet (ms; fewer is better)
  Clear Linux 34100:      12.12 (SE +/- 0.05, N = 3; Min: 12.07 / Max: 12.22)
  Fedora Workstation 33:  11.98 (SE +/- 0.05, N = 3; Min: 11.88 / Max: 12.06)
  openSUSE Tumbleweed:    12.52 (SE +/- 0.01, N = 2; Min: 12.50 / Max: 12.53)
  Manjaro Linux 20.2:     11.98 (SE +/- 0.00, N = 3; Min: 11.98 / Max: 11.99)
  Notes: Clear Linux flags: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake; Fedora: -O2; openSUSE: -O3; Manjaro: -O3. (CXX) g++ options: -rdynamic -lgomp -lpthread

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

IndigoBench 4.4 - Acceleration: CPU - Scene: Supercar (M samples/s; more is better)
  Clear Linux 34100:      7.213 (SE +/- 0.008, N = 3; Min: 7.20 / Max: 7.23)
  Fedora Workstation 33:  6.970 (SE +/- 0.022, N = 3; Min: 6.94 / Max: 7.01)
  openSUSE Tumbleweed:    7.117 (SE +/- 0.038, N = 3; Min: 7.05 / Max: 7.18)
  Manjaro Linux 20.2:     6.906 (SE +/- 0.018, N = 3; Min: 6.87 / Max: 6.93)

GROMACS

This is a test of the GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package, running on the CPU with the water_GMX50 data set. Learn more via the OpenBenchmarking.org test page.
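GROMACS reports throughput as simulated nanoseconds per day of wall time, so the figures below can be inverted to get the wall time needed per simulated nanosecond. Using the averages from the results that follow:

    results = {"Clear Linux 34100": 1.297,
               "Fedora Workstation 33": 1.242,
               "openSUSE Tumbleweed": 1.275,
               "Manjaro Linux 20.2": 1.297}
    for distro, ns_per_day in results.items():
        print(f"{distro}: {24.0 / ns_per_day:.1f} hours of wall time per simulated ns")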

GROMACS 2020.3 - Water Benchmark (Ns Per Day; more is better)
  Clear Linux 34100:      1.297 (SE +/- 0.003, N = 3; Min: 1.29 / Max: 1.30)
  Fedora Workstation 33:  1.242 (SE +/- 0.001, N = 3; Min: 1.24 / Max: 1.25)
  openSUSE Tumbleweed:    1.275 (SE +/- 0.001, N = 3; Min: 1.27 / Max: 1.28)
  Manjaro Linux 20.2:     1.297 (SE +/- 0.005, N = 3; Min: 1.29 / Max: 1.31)
  Notes: Clear Linux flags: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -ldl; Fedora: -O2 -ldl; openSUSE: -O3; Manjaro: -O3. (CXX) g++ options: -pthread -lrt -lpthread -lm

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: mnasnet (ms; fewer is better)
  Clear Linux 34100:      3.76 (SE +/- 0.03, N = 3; Min: 3.72 / Max: 3.82)
  Fedora Workstation 33:  3.82 (SE +/- 0.02, N = 3; Min: 3.79 / Max: 3.85)
  openSUSE Tumbleweed:    3.92 (SE +/- 0.01, N = 3; Min: 3.91 / Max: 3.93)
  Manjaro Linux 20.2:     3.80 (SE +/- 0.01, N = 3; Min: 3.79 / Max: 3.82)

NCNN 20201218 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms; fewer is better)
  Clear Linux 34100:      4.26 (SE +/- 0.00, N = 3; Min: 4.26 / Max: 4.27)
  Fedora Workstation 33:  4.21 (SE +/- 0.02, N = 3; Min: 4.18 / Max: 4.24)
  openSUSE Tumbleweed:    4.38 (SE +/- 0.03, N = 3; Min: 4.35 / Max: 4.44)
  Manjaro Linux 20.2:     4.22 (SE +/- 0.00, N = 3; Min: 4.21 / Max: 4.22)
  Notes: Clear Linux flags: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake; Fedora: -O2; openSUSE: -O3; Manjaro: -O3. (CXX) g++ options: -rdynamic -lgomp -lpthread

Himeno Benchmark

The Himeno benchmark is a linear solver for the pressure Poisson equation using a point-Jacobi method. Learn more via the OpenBenchmarking.org test page.
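For reference, a point-Jacobi sweep updates each grid point from its neighbors' values of the previous iteration. The sketch below applies the textbook 2-D five-point version to a Poisson problem with a unit source term; Himeno itself runs a 3-D variant, so this only illustrates the method and is not the benchmark's code (requires NumPy):

    import numpy as np

    n, h = 64, 1.0 / 65                     # interior grid size and spacing
    f = np.ones((n + 2, n + 2))             # right-hand side of -laplace(u) = f
    u = np.zeros_like(f)                    # zero initial guess and boundary

    for _ in range(200):                    # fixed number of Jacobi sweeps
        u_new = u.copy()
        u_new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                    u[1:-1, :-2] + u[1:-1, 2:] +
                                    h * h * f[1:-1, 1:-1])
        u = u_new

    print("max |u| after 200 sweeps:", float(np.abs(u).max()))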

Himeno Benchmark 3.0 - Poisson Pressure Solver (MFLOPS; more is better)
  Clear Linux 34100:      5152.28 (SE +/- 64.56, N = 15; Min: 4834.94 / Max: 5704.92)
  Fedora Workstation 33:  5336.69 (SE +/- 75.77, N = 15; Min: 4932.09 / Max: 5809.03)
  openSUSE Tumbleweed:    5132.67 (SE +/- 61.66, N = 15; Min: 4761.21 / Max: 5503.96)
  Manjaro Linux 20.2:     5194.66 (SE +/- 78.13, N = 15; Min: 4821.53 / Max: 5694.72)
  Notes: Clear Linux flags: -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake. (CC) gcc options: -O3 -mavx2

Facebook RocksDB

This is a benchmark of Facebook's RocksDB, an embeddable persistent key-value store for fast storage that is based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

Facebook RocksDB 6.3.6 - Test: Random Fill Sync (Op/s; more is better)
  Clear Linux 34100:      1653544 (SE +/- 15598.52, N = 7; Min: 1596274 / Max: 1706487)
  Fedora Workstation 33:  1650643 (SE +/- 5894.66,  N = 3; Min: 1639563 / Max: 1659671)
  Manjaro Linux 20.2:     1590531 (SE +/- 9801.98,  N = 3; Min: 1574628 / Max: 1608410)
  Notes: (CXX) g++ options: -O3 -march=native -std=c++11 -fno-builtin-memcmp -fno-rtti -rdynamic -lpthread; one result is additionally annotated -O2 in the source graph.

RawTherapee

RawTherapee is a cross-platform, open-source multi-threaded RAW image processing program. Learn more via the OpenBenchmarking.org test page.

RawTherapee - Total Benchmark Time (Seconds; fewer is better)
  Clear Linux 34100:      49.53 (SE +/- 0.07, N = 3; Min: 49.43 / Max: 49.66)
  Fedora Workstation 33:  49.42 (SE +/- 0.08, N = 3; Min: 49.28 / Max: 49.56)
  openSUSE Tumbleweed:    51.32 (SE +/- 0.11, N = 3; Min: 51.13 / Max: 51.50)
  Manjaro Linux 20.2:     51.29 (SE +/- 0.06, N = 3; Min: 51.17 / Max: 51.36)
  Notes: RawTherapee 5.8 (command line) on Fedora Workstation 33, openSUSE Tumbleweed and Manjaro Linux 20.2; the Clear Linux 34100 log records no version string.

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

IndigoBench 4.4 - Acceleration: CPU - Scene: Bedroom (M samples/s; more is better)
  Clear Linux 34100:      3.443 (SE +/- 0.004, N = 3; Min: 3.44 / Max: 3.45)
  Fedora Workstation 33:  3.335 (SE +/- 0.012, N = 3; Min: 3.31 / Max: 3.35)
  openSUSE Tumbleweed:    3.316 (SE +/- 0.002, N = 3; Min: 3.31 / Max: 3.32)
  Manjaro Linux 20.2:     3.341 (SE +/- 0.005, N = 3; Min: 3.33 / Max: 3.35)

Facebook RocksDB

This is a benchmark of Facebook's RocksDB, an embeddable persistent key-value store for fast storage that is based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

Facebook RocksDB 6.3.6 - Test: Sequential Fill (Op/s; more is better)
  Clear Linux 34100:      1711370 (SE +/- 20216.20, N = 3; Min: 1671952 / Max: 1738874)
  Fedora Workstation 33:  1691544 (SE +/- 13510.56, N = 3; Min: 1671558 / Max: 1717286)
  Manjaro Linux 20.2:     1648272 (SE +/- 6507.68,  N = 3; Min: 1641537 / Max: 1661285)
  Notes: (CXX) g++ options: -O3 -march=native -std=c++11 -fno-builtin-memcmp -fno-rtti -rdynamic -lpthread; one result is additionally annotated -O2 in the source graph.

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 - Compression Level: 3 - Compression Speed (MB/s; more is better)
  Clear Linux 34100:      72.10 (SE +/- 0.81, N = 3;  Min: 70.96 / Max: 73.67)
  Fedora Workstation 33:  74.41 (SE +/- 0.44, N = 3;  Min: 73.57 / Max: 75.07)
  openSUSE Tumbleweed:    74.70 (SE +/- 0.59, N = 3;  Min: 73.57 / Max: 75.57)
  Manjaro Linux 20.2:     74.59 (SE +/- 0.55, N = 11; Min: 72.43 / Max: 76.59)
  Notes: (CC) gcc options: -O3

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms; fewer is better)
  Clear Linux 34100:      4.08 (SE +/- 0.04, N = 3; Min: 4.01 / Max: 4.13)
  Fedora Workstation 33:  3.99 (SE +/- 0.02, N = 3; Min: 3.94 / Max: 4.02)
  openSUSE Tumbleweed:    4.13 (SE +/- 0.06, N = 3; Min: 4.02 / Max: 4.20)
  Manjaro Linux 20.2:     4.01 (SE +/- 0.04, N = 3; Min: 3.97 / Max: 4.10)

NCNN 20201218 - Target: Vulkan GPU - Model: blazeface (ms; fewer is better)
  Clear Linux 34100:      1.82 (SE +/- 0.00, N = 3; Min: 1.81 / Max: 1.82)
  Fedora Workstation 33:  1.83 (SE +/- 0.03, N = 3; Min: 1.78 / Max: 1.86)
  openSUSE Tumbleweed:    1.81 (SE +/- 0.02, N = 3; Min: 1.77 / Max: 1.84)
  Manjaro Linux 20.2:     1.77 (SE +/- 0.01, N = 3; Min: 1.76 / Max: 1.78)

NCNN 20201218 - Target: CPU - Model: blazeface (ms; fewer is better)
  Clear Linux 34100:      1.83 (SE +/- 0.01, N = 3; Min: 1.82 / Max: 1.84)
  Fedora Workstation 33:  1.79 (SE +/- 0.01, N = 3; Min: 1.78 / Max: 1.80)
  openSUSE Tumbleweed:    1.85 (SE +/- 0.03, N = 3; Min: 1.82 / Max: 1.91)
  Manjaro Linux 20.2:     1.81 (SE +/- 0.02, N = 3; Min: 1.78 / Max: 1.85)
  Notes: Clear Linux flags: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake; Fedora: -O2; openSUSE: -O3; Manjaro: -O3. (CXX) g++ options: -rdynamic -lgomp -lpthread

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.2.3 - Target: CPU - Model: SqueezeNet v1.1 (ms; fewer is better)
  Clear Linux 34100:      211.42 (SE +/- 2.18, N = 3; Min: 208.25 / Max: 215.60)
  Fedora Workstation 33:  211.26 (SE +/- 2.26, N = 3; Min: 206.95 / Max: 214.60)
  openSUSE Tumbleweed:    211.55 (SE +/- 2.12, N = 3; Min: 208.01 / Max: 215.34)
  Manjaro Linux 20.2:     204.74 (SE +/- 0.04, N = 3; Min: 204.65 / Max: 204.79)
  Notes: Clear Linux flags: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake; Fedora: -O2; openSUSE: -O3; Manjaro: -O3. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -rdynamic -ldl

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test covering both compression and decompression. Learn more via the OpenBenchmarking.org test page.
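ASTC encodes every block into a fixed 128-bit payload, so the effective bit rate is set by the block footprint chosen at encode time (the "Thorough" preset tested here controls search effort rather than block size). A few common 2-D block sizes as examples:

    # Each ASTC block is stored in 128 bits regardless of its footprint.
    for w, h in [(4, 4), (6, 6), (8, 8), (12, 12)]:
        print(f"{w}x{h} blocks: {128 / (w * h):.2f} bits per pixel")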

ASTC Encoder 2.0 - Preset: Thorough (Seconds; fewer is better)
  Clear Linux 34100:      15.02 (SE +/- 0.02, N = 3; Min: 15.00 / Max: 15.06)
  Fedora Workstation 33:  15.51 (SE +/- 0.02, N = 3; Min: 15.49 / Max: 15.54)
  openSUSE Tumbleweed:    15.02 (SE +/- 0.01, N = 3; Min: 15.01 / Max: 15.05)
  Manjaro Linux 20.2:     15.38 (SE +/- 0.01, N = 3; Min: 15.37 / Max: 15.39)
  Notes: (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility, using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless (Encode Time - Seconds; fewer is better)
  Clear Linux 34100:      12.74 (SE +/- 0.16, N = 3; Min: 12.42 / Max: 12.92)
  Fedora Workstation 33:  12.95 (SE +/- 0.15, N = 3; Min: 12.65 / Max: 13.11)
  openSUSE Tumbleweed:    12.55 (SE +/- 0.07, N = 3; Min: 12.44 / Max: 12.67)
  Manjaro Linux 20.2:     12.84 (SE +/- 0.12, N = 7; Min: 12.49 / Max: 13.49)
  Notes: Clear Linux flags: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -ltiff; Fedora: -O2; openSUSE: -O2; Manjaro: -O2. (CC) gcc options: -fvisibility=hidden -pthread -lm -ljpeg -lpng16

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.5 - Compression Level: 19 (MB/s; more is better)
  Clear Linux 34100:      43.7 (SE +/- 0.10, N = 3; Min: 43.5 / Max: 43.8)
  Fedora Workstation 33:  45.1 (SE +/- 0.03, N = 3; Min: 45.1 / Max: 45.2)
  openSUSE Tumbleweed:    44.3 (SE +/- 0.10, N = 3; Min: 44.1 / Max: 44.4)
  Manjaro Linux 20.2:     45.1 (SE +/- 0.03, N = 3; Min: 45.1 / Max: 45.2)
  Notes: Clear Linux flags: -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake; also noted in the source graph: -llzma -llz4. (CC) gcc options: -O3 -pthread -lz

miniFE

MiniFE is a finite element mini-application representative of unstructured implicit finite element codes. Learn more via the OpenBenchmarking.org test page.
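The figure of merit below is the Mflops rate of miniFE's conjugate gradient (CG) solve. For orientation, a compact textbook CG iteration on a small symmetric positive-definite system looks like the following; this is only a reference sketch, not miniFE's implementation (requires NumPy):

    import numpy as np

    def conjugate_gradient(A, b, tol=1e-8, max_iter=200):
        # Textbook CG for a symmetric positive-definite system A x = b.
        x = np.zeros_like(b)
        r = b - A @ x
        p = r.copy()
        rs_old = r @ r
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rs_old / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs_old) * p
            rs_old = rs_new
        return x

    n = 50
    A = (np.diag(np.full(n, 2.0)) +
         np.diag(np.full(n - 1, -1.0), 1) +
         np.diag(np.full(n - 1, -1.0), -1))   # 1-D Laplacian, SPD
    b = np.ones(n)
    x = conjugate_gradient(A, b)
    print("residual norm:", float(np.linalg.norm(b - A @ x)))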

miniFE 2.2 - Problem Size: Small (CG Mflops; more is better)
  Clear Linux 34100:      4164.28 (SE +/- 1.02,  N = 3; Min: 4163.20 / Max: 4166.31)
  Fedora Workstation 33:  4296.56 (SE +/- 1.15,  N = 3; Min: 4294.83 / Max: 4298.74)
  Manjaro Linux 20.2:     4287.12 (SE +/- 14.68, N = 3; Min: 4257.92 / Max: 4304.33)
  Notes: (CXX) g++ options: -O3 -fopenmp -pthread -lmpi; two of the results are additionally annotated -lmpi_cxx in the source graph.

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2 (ms; fewer is better)
  Clear Linux 34100:      4.25 (SE +/- 0.01, N = 3; Min: 4.24 / Max: 4.26)
  Fedora Workstation 33:  4.22 (SE +/- 0.00, N = 3; Min: 4.22 / Max: 4.22)
  openSUSE Tumbleweed:    4.34 (SE +/- 0.04, N = 3; Min: 4.26 / Max: 4.41)
  Manjaro Linux 20.2:     4.21 (SE +/- 0.01, N = 3; Min: 4.18 / Max: 4.23)
  Notes: Clear Linux flags: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake; Fedora: -O2; openSUSE: -O3; Manjaro: -O3. (CXX) g++ options: -rdynamic -lgomp -lpthread

Coremark

This is a test of the EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.

Coremark 1.0 - CoreMark Size 666 - Iterations Per Second (Iterations/Sec; more is better)
  Clear Linux 34100:      657805.49 (SE +/- 9204.52, N = 3; Min: 639403.22 / Max: 667439.17)
  Fedora Workstation 33:  645061.66 (SE +/- 1428.64, N = 3; Min: 642426.95 / Max: 647336.48)
  openSUSE Tumbleweed:    656037.07 (SE +/- 452.73,  N = 3; Min: 655319.92 / Max: 656874.37)
  Manjaro Linux 20.2:     664622.86 (SE +/- 1576.18, N = 3; Min: 661886.38 / Max: 667346.37)
  Notes: Clear Linux flags: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake. (CC) gcc options: -O2 -lrt" -lrt

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3 (ms; fewer is better)
  Clear Linux 34100:      4.01 (SE +/- 0.01, N = 3; Min: 4.00 / Max: 4.02)
  Fedora Workstation 33:  4.01 (SE +/- 0.04, N = 3; Min: 3.96 / Max: 4.09)
  openSUSE Tumbleweed:    4.10 (SE +/- 0.10, N = 3; Min: 3.90 / Max: 4.22)
  Manjaro Linux 20.2:     3.99 (SE +/- 0.01, N = 3; Min: 3.98 / Max: 4.01)
  Notes: Clear Linux flags: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake; Fedora: -O2; openSUSE: -O3; Manjaro: -O3. (CXX) g++ options: -rdynamic -lgomp -lpthread

asmFish

This is a test of asmFish, an advanced chess benchmark written in Assembly. Learn more via the OpenBenchmarking.org test page.

asmFish 2018-07-23 - 1024 Hash Memory, 26 Depth (Nodes/second; more is better)
  Clear Linux 34100:      50182480 (SE +/- 565989.30, N = 4; Min: 48950805 / Max: 51694980)
  Fedora Workstation 33:  48873668 (SE +/- 218738.44, N = 3; Min: 48484829 / Max: 49241704)
  openSUSE Tumbleweed:    49805796 (SE +/- 107149.97, N = 3; Min: 49678081 / Max: 50018684)
  Manjaro Linux 20.2:     49400573 (SE +/- 557666.32, N = 3; Min: 48497467 / Max: 50418937)

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility, using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
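
The "Quality 100, Highest Compression" settings correspond to libwebp's quality=100 and method=6 (maximum compression effort). A minimal sketch of that encode through the advanced libwebp API, with a small synthetic buffer standing in for the sample photo, might look like this:

    // Sketch of a libwebp encode at quality 100, method 6; the input buffer is synthetic.
    #include <webp/encode.h>
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    int main() {
        const int w = 640, h = 480;                 // stand-in for the 6000x4000 sample image
        std::vector<uint8_t> rgba(w * h * 4, 255);  // solid white placeholder

        WebPConfig config;
        WebPConfigInit(&config);
        config.quality = 100;   // "Quality 100"
        config.method  = 6;     // "Highest Compression" effort level
        if (!WebPValidateConfig(&config)) return 1;

        WebPPicture pic;
        WebPPictureInit(&pic);
        pic.width = w;
        pic.height = h;
        pic.use_argb = 1;
        WebPPictureImportRGBA(&pic, rgba.data(), w * 4);

        WebPMemoryWriter writer;
        WebPMemoryWriterInit(&writer);
        pic.writer = WebPMemoryWrite;
        pic.custom_ptr = &writer;

        int ok = WebPEncode(&config, &pic);
        std::printf("encoded %zu bytes (ok=%d)\n", writer.size, ok);

        WebPPictureFree(&pic);
        WebPMemoryWriterClear(&writer);
        return ok ? 0 : 1;
    }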

WebP Image Encode 1.1 - Encode Settings: Quality 100, Highest Compression (Encode Time - Seconds, fewer is better)
  Clear Linux 34100:      5.509  SE +/- 0.056, N = 3  Min: 5.40 / Avg: 5.51 / Max: 5.58  [-O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -ltiff]
  Fedora Workstation 33:  5.425  SE +/- 0.010, N = 3  Min: 5.41 / Avg: 5.43 / Max: 5.45  [-O2]
  openSUSE Tumbleweed:    5.429  SE +/- 0.013, N = 3  Min: 5.40 / Avg: 5.43 / Max: 5.45  [-O2]
  Manjaro Linux 20.2:     5.370  SE +/- 0.056, N = 3  Min: 5.27 / Avg: 5.37 / Max: 5.47  [-O2]
  1. (CC) gcc options: -fvisibility=hidden -pthread -lm -ljpeg -lpng16

Chaos Group V-RAY

This is a test of Chaos Group's V-RAY benchmark. Learn more via the OpenBenchmarking.org test page.

Chaos Group V-RAY 4.10.07 - Mode: CPU (Ksamples, more is better)
  Clear Linux 34100:      26996  SE +/- 88.64, N = 3   Min: 26876 / Avg: 26996 / Max: 27169
  Fedora Workstation 33:  26344  SE +/- 117.22, N = 3  Min: 26177 / Avg: 26344 / Max: 26570
  openSUSE Tumbleweed:    26571  SE +/- 114.91, N = 3  Min: 26343 / Avg: 26571 / Max: 26710
  Manjaro Linux 20.2:     26539  SE +/- 84.01, N = 3   Min: 26387 / Avg: 26539 / Max: 26677

Dolfyn

Dolfyn is a Computational Fluid Dynamics (CFD) code using modern numerical simulation techniques. The Dolfyn test profile measures the execution time of the computational fluid dynamics demos bundled with Dolfyn. Learn more via the OpenBenchmarking.org test page.

Dolfyn 0.527 - Computational Fluid Dynamics (Seconds, fewer is better)
  Clear Linux 34100:      13.22  SE +/- 0.12, N = 3  Min: 12.97 / Avg: 13.22 / Max: 13.35
  Fedora Workstation 33:  12.99  SE +/- 0.15, N = 3  Min: 12.76 / Avg: 12.99 / Max: 13.27
  openSUSE Tumbleweed:    13.06  SE +/- 0.15, N = 3  Min: 12.77 / Avg: 13.06 / Max: 13.23
  Manjaro Linux 20.2:     13.31  SE +/- 0.19, N = 3  Min: 13.03 / Avg: 13.31 / Max: 13.66

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.2.3 - Target: CPU - Model: MobileNet v2 (ms, fewer is better)
  Clear Linux 34100:      210.69  SE +/- 1.24, N = 3  Min: 209.35 / Avg: 210.69 / Max: 213.16  [-O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake - MIN: 205.04 / MAX: 218.74]
  Fedora Workstation 33:  215.40  SE +/- 2.37, N = 3  Min: 212.90 / Avg: 215.40 / Max: 220.14  [-O2 - MIN: 209.05 / MAX: 237.63]
  openSUSE Tumbleweed:    214.82  SE +/- 2.08, N = 3  Min: 210.94 / Avg: 214.82 / Max: 218.07  [-O3 - MIN: 209.08 / MAX: 253.05]
  Manjaro Linux 20.2:     210.81  SE +/- 1.32, N = 3  Min: 209.12 / Avg: 210.81 / Max: 213.41  [-O3 - MIN: 208.71 / MAX: 224.05]
  1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -rdynamic -ldl

Crypto++

Crypto++ is a C++ class library of cryptographic algorithms. Learn more via the OpenBenchmarking.org test page.
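
The keyed-algorithm results below cover constructs that take a secret key, such as HMACs and ciphers, streamed over large buffers and reported in MiB/second. As a small illustrative sketch (not the benchmark harness itself), an HMAC-SHA256 over a short message with a hypothetical all-zero demo key looks like this in Crypto++:

    // Illustrative Crypto++ HMAC-SHA256 example; the key and message are placeholders.
    #include <cryptopp/hmac.h>
    #include <cryptopp/sha.h>
    #include <cryptopp/filters.h>
    #include <cryptopp/hex.h>
    #include <cryptopp/secblock.h>
    #include <cstring>
    #include <iostream>
    #include <string>

    int main() {
        CryptoPP::SecByteBlock key(32);
        std::memset(key, 0, key.size());         // demo key: all zeros

        std::string message = "benchmark payload";
        std::string mac;

        CryptoPP::HMAC<CryptoPP::SHA256> hmac(key, key.size());
        CryptoPP::StringSource ss(message, true,
            new CryptoPP::HashFilter(hmac,
                new CryptoPP::HexEncoder(
                    new CryptoPP::StringSink(mac))));

        std::cout << "HMAC-SHA256: " << mac << '\n';
        return 0;
    }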

Crypto++ 8.2 - Test: Keyed Algorithms (MiB/second, more is better)
  Clear Linux 34100:      836.14  SE +/- 11.97, N = 3  Min: 818.90 / Avg: 836.14 / Max: 859.14  [-fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake]
  Fedora Workstation 33:  833.88  SE +/- 6.03, N = 3   Min: 822.24 / Avg: 833.88 / Max: 842.40  [-g2]
  openSUSE Tumbleweed:    840.82  SE +/- 5.38, N = 3   Min: 831.18 / Avg: 840.82 / Max: 849.79  [-g2]
  Manjaro Linux 20.2:     852.33  SE +/- 4.37, N = 3   Min: 845.25 / Avg: 852.33 / Max: 860.31  [-g2]
  1. (CXX) g++ options: -O3 -pipe -fPIC -pthread

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: shufflenet-v2 (ms, fewer is better)
  Clear Linux 34100:      4.87  SE +/- 0.02, N = 3  Min: 4.85 / Avg: 4.87 / Max: 4.90  [-O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake - MIN: 4.81 / MAX: 5.05]
  Fedora Workstation 33:  4.79  SE +/- 0.01, N = 3  Min: 4.78 / Avg: 4.79 / Max: 4.82  [-O2 - MIN: 4.74 / MAX: 5.2]
  openSUSE Tumbleweed:    4.89  SE +/- 0.02, N = 3  Min: 4.86 / Avg: 4.89 / Max: 4.91  [-O3 - MIN: 4.69 / MAX: 5.59]
  Manjaro Linux 20.2:     4.83  SE +/- 0.02, N = 3  Min: 4.80 / Avg: 4.83 / Max: 4.86  [-O3 - MIN: 4.77 / MAX: 5.02]
  1. (CXX) g++ options: -rdynamic -lgomp -lpthread

High Performance Conjugate Gradient

HPCG is the High Performance Conjugate Gradient, a newer scientific benchmark from Sandia National Labs focused on supercomputer testing with modern, real-world workloads compared to HPCC. Learn more via the OpenBenchmarking.org test page.
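
The kernel HPCG stresses is the conjugate gradient method: repeated matrix-vector products, dot products, and vector updates. As a toy illustration only (HPCG itself runs a large sparse 3D problem with MPI and OpenMP), a minimal CG solve on a tiny dense symmetric positive definite system looks like this:

    // Toy conjugate gradient solve; not the HPCG code, just the same numerical kernel in miniature.
    #include <array>
    #include <cmath>
    #include <cstdio>

    constexpr int N = 3;
    using Vec = std::array<double, N>;
    using Mat = std::array<std::array<double, N>, N>;

    static Vec matvec(const Mat& A, const Vec& x) {
        Vec y{};
        for (int i = 0; i < N; ++i)
            for (int j = 0; j < N; ++j)
                y[i] += A[i][j] * x[j];
        return y;
    }

    static double dot(const Vec& a, const Vec& b) {
        double s = 0;
        for (int i = 0; i < N; ++i) s += a[i] * b[i];
        return s;
    }

    int main() {
        Mat A = {{{4, 1, 0}, {1, 3, 1}, {0, 1, 2}}};   // symmetric positive definite
        Vec b = {1, 2, 3};
        Vec x{};                                       // start from x = 0
        Vec r = b, p = r;                              // residual r = b - A*x = b
        double rs = dot(r, r);
        for (int k = 0; k < 50 && std::sqrt(rs) > 1e-10; ++k) {
            Vec Ap = matvec(A, p);
            double alpha = rs / dot(p, Ap);
            for (int i = 0; i < N; ++i) { x[i] += alpha * p[i]; r[i] -= alpha * Ap[i]; }
            double rs_new = dot(r, r);
            for (int i = 0; i < N; ++i) p[i] = r[i] + (rs_new / rs) * p[i];
            rs = rs_new;
        }
        std::printf("x = %.4f %.4f %.4f\n", x[0], x[1], x[2]);
        return 0;
    }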

High Performance Conjugate Gradient 3.1 (GFLOP/s, more is better)
  Clear Linux 34100:      4.95400  SE +/- 0.00436, N = 3  Min: 4.95 / Avg: 4.95 / Max: 4.96
  Fedora Workstation 33:  4.96637  SE +/- 0.01365, N = 3  Min: 4.94 / Avg: 4.97 / Max: 4.98
  openSUSE Tumbleweed:    4.87737  SE +/- 0.00725, N = 3  Min: 4.86 / Avg: 4.88 / Max: 4.89
  Manjaro Linux 20.2:     4.97642  SE +/- 0.00250, N = 3  Min: 4.97 / Avg: 4.98 / Max: 4.98
  Two of the above configurations additionally link with -lmpi_cxx.
  1. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -pthread -lmpi

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: Vulkan GPU - Model: shufflenet-v2 (ms, fewer is better)
  Clear Linux 34100:      4.84  SE +/- 0.01, N = 3  Min: 4.82 / Avg: 4.84 / Max: 4.86  [-O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake - MIN: 4.78 / MAX: 4.97]
  Fedora Workstation 33:  4.77  SE +/- 0.03, N = 3  Min: 4.71 / Avg: 4.77 / Max: 4.83  [-O2 - MIN: 4.69 / MAX: 5.17]
  openSUSE Tumbleweed:    4.86  SE +/- 0.06, N = 3  Min: 4.74 / Avg: 4.86 / Max: 4.94  [-O3 - MIN: 4.57 / MAX: 5.83]
  Manjaro Linux 20.2:     4.77  SE +/- 0.02, N = 3  Min: 4.75 / Avg: 4.77 / Max: 4.80  [-O3 - MIN: 4.73 / MAX: 4.95]
  1. (CXX) g++ options: -rdynamic -lgomp -lpthread

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.0 - Preset: Exhaustive (Seconds, fewer is better)
  Clear Linux 34100:      121.78  SE +/- 0.14, N = 3  Min: 121.51 / Avg: 121.78 / Max: 121.96
  Fedora Workstation 33:  122.86  SE +/- 0.17, N = 3  Min: 122.55 / Avg: 122.86 / Max: 123.12
  openSUSE Tumbleweed:    120.82  SE +/- 0.11, N = 3  Min: 120.59 / Avg: 120.82 / Max: 120.95
  Manjaro Linux 20.2:     122.41  SE +/- 0.15, N = 3  Min: 122.11 / Avg: 122.41 / Max: 122.59
  1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

Blender 2.90 - Blend File: Classroom - Compute: CPU-Only (Seconds, fewer is better)
  Clear Linux 34100:      278.82  SE +/- 0.40, N = 3  Min: 278.12 / Avg: 278.82 / Max: 279.52
  Fedora Workstation 33:  280.61  SE +/- 0.34, N = 3  Min: 279.95 / Avg: 280.61 / Max: 281.08
  openSUSE Tumbleweed:    278.31  SE +/- 0.69, N = 3  Min: 277.13 / Avg: 278.31 / Max: 279.51
  Manjaro Linux 20.2:     280.45  SE +/- 0.08, N = 3  Min: 280.31 / Avg: 280.45 / Max: 280.60

Blender 2.90 - Blend File: Barbershop - Compute: CPU-Only (Seconds, fewer is better)
  Clear Linux 34100:      368.91  SE +/- 0.66, N = 3  Min: 367.59 / Avg: 368.91 / Max: 369.60
  Fedora Workstation 33:  370.23  SE +/- 0.57, N = 3  Min: 369.54 / Avg: 370.23 / Max: 371.36
  openSUSE Tumbleweed:    371.84  SE +/- 0.60, N = 3  Min: 370.63 / Avg: 371.84 / Max: 372.50
  Manjaro Linux 20.2:     370.25  SE +/- 0.17, N = 3  Min: 369.91 / Avg: 370.25 / Max: 370.46

Waifu2x-NCNN Vulkan

Waifu2x-NCNN is an NCNN-based implementation of the Waifu2x converter project, accelerated using the Vulkan API. NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. This test profile times how long it takes to increase the resolution of a sample image with Vulkan. Learn more via the OpenBenchmarking.org test page.

Waifu2x-NCNN Vulkan 20200818 - Scale: 2x - Denoise: 3 - TAA: Yes (Seconds, fewer is better)
  Clear Linux 34100:      6.403  SE +/- 0.016, N = 3  Min: 6.38 / Avg: 6.40 / Max: 6.43
  Fedora Workstation 33:  6.367  SE +/- 0.029, N = 3  Min: 6.31 / Avg: 6.37 / Max: 6.40
  openSUSE Tumbleweed:    6.403  SE +/- 0.009, N = 3  Min: 6.39 / Avg: 6.40 / Max: 6.42
  Manjaro Linux 20.2:     6.371  SE +/- 0.013, N = 3  Min: 6.35 / Avg: 6.37 / Max: 6.40

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile measures the average inference time. Learn more via the OpenBenchmarking.org test page.
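
A minimal sketch of measuring average inference time with the TensorFlow Lite C++ API is shown below; the model file name is a placeholder and the input tensors are simply left zero-filled, unlike the real test profile:

    // Sketch of timing TFLite CPU inference; "inception_v4.tflite" is a placeholder file.
    #include "tensorflow/lite/interpreter.h"
    #include "tensorflow/lite/kernels/register.h"
    #include "tensorflow/lite/model.h"
    #include <chrono>
    #include <cstdio>
    #include <memory>

    int main() {
        auto model = tflite::FlatBufferModel::BuildFromFile("inception_v4.tflite");
        if (!model) return 1;

        tflite::ops::builtin::BuiltinOpResolver resolver;
        std::unique_ptr<tflite::Interpreter> interpreter;
        tflite::InterpreterBuilder(*model, resolver)(&interpreter);
        if (!interpreter || interpreter->AllocateTensors() != kTfLiteOk) return 1;

        const int runs = 20;
        auto start = std::chrono::steady_clock::now();
        for (int i = 0; i < runs; ++i)
            interpreter->Invoke();                 // input tensors left zero-filled
        auto us = std::chrono::duration_cast<std::chrono::microseconds>(
                      std::chrono::steady_clock::now() - start).count();
        std::printf("average inference: %lld us\n", static_cast<long long>(us / runs));
        return 0;
    }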

TensorFlow Lite 2020-08-23 - Model: Inception V4 (Microseconds, fewer is better)
  Clear Linux 34100:      1790687  SE +/- 2313.79, N = 3  Min: 1786060 / Avg: 1790686.67 / Max: 1793080
  Fedora Workstation 33:  1797047  SE +/- 2996.90, N = 3  Min: 1791410 / Avg: 1797046.67 / Max: 1801630
  openSUSE Tumbleweed:    1800740  SE +/- 2174.70, N = 3  Min: 1797040 / Avg: 1800740 / Max: 1804570
  Manjaro Linux 20.2:     1793763  SE +/- 1706.75, N = 3  Min: 1790350 / Avg: 1793763.33 / Max: 1795500

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.

Redis 6.0.9 - Test: GET (Requests Per Second, more is better)
  Clear Linux 34100:      3889653.10  SE +/- 60192.77, N = 15  Min: 3559630 / Avg: 3889653.10 / Max: 4292944.50  [-pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake]
  Fedora Workstation 33:  2924174.98  SE +/- 40605.62, N = 15  Min: 2688258 / Avg: 2924174.98 / Max: 3175212.75
  openSUSE Tumbleweed:    3088595.60  SE +/- 76386.02, N = 12  Min: 2801748 / Avg: 3088595.60 / Max: 3650686.25
  Manjaro Linux 20.2:     2978426.57  SE +/- 40325.61, N = 15  Min: 2793743 / Avg: 2978426.57 / Max: 3247065
  1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.
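
For context, the throughput kernels below parse large JSON documents and walk particular fields (tweets, user IDs, and so on). A small sketch of the DOM-style parsing they exercise, assuming a local twitter.json sample and simdjson's exception-enabled API, is:

    // Small simdjson DOM parsing sketch; "twitter.json" is a placeholder sample file.
    #include "simdjson.h"
    #include <iostream>

    int main() {
        simdjson::dom::parser parser;
        simdjson::dom::element doc = parser.load("twitter.json");  // throws on error

        // Iterate the tweets, loosely in the spirit of the PartialTweets kernel.
        simdjson::dom::array tweets = doc["statuses"];
        size_t count = 0;
        for (simdjson::dom::element tweet : tweets) {
            (void)tweet;
            ++count;
        }
        std::cout << "tweets: " << count << '\n';
        return 0;
    }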

simdjson 0.7.1 - Throughput Test: DistinctUserID (GB/s, more is better)
  Clear Linux 34100:      3.51  SE +/- 0.03, N = 3   Min: 3.47 / Avg: 3.51 / Max: 3.57  [-O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake]
  Fedora Workstation 33:  0.95  SE +/- 0.02, N = 12  Min: 0.90 / Avg: 0.95 / Max: 1.08  [-O2]
  openSUSE Tumbleweed:    1.08  SE +/- 0.00, N = 3   Min: 1.08 / Avg: 1.08 / Max: 1.08  [-O3]
  Manjaro Linux 20.2:     0.96  SE +/- 0.04, N = 12  Min: 0.56 / Avg: 0.96 / Max: 1.13  [-O3]
  1. (CXX) g++ options: -pthread

simdjson 0.7.1 - Throughput Test: PartialTweets (GB/s, more is better)
  Clear Linux 34100:      3.41  SE +/- 0.03, N = 3   Min: 3.37 / Avg: 3.41 / Max: 3.46  [-O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake]
  Fedora Workstation 33:  0.88  SE +/- 0.01, N = 3   Min: 0.87 / Avg: 0.88 / Max: 0.90  [-O2]
  openSUSE Tumbleweed:    1.05  SE +/- 0.00, N = 3   Min: 1.05 / Avg: 1.05 / Max: 1.05  [-O3]
  Manjaro Linux 20.2:     0.96  SE +/- 0.02, N = 15  Min: 0.91 / Avg: 0.96 / Max: 1.07  [-O3]
  1. (CXX) g++ options: -pthread

simdjson 0.7.1 - Throughput Test: Kostya (GB/s, more is better)
  Clear Linux 34100:      2.66  SE +/- 0.03, N = 3   Min: 2.63 / Avg: 2.66 / Max: 2.73  [-O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake]
  Fedora Workstation 33:  0.67  SE +/- 0.01, N = 3   Min: 0.66 / Avg: 0.67 / Max: 0.68  [-O2]
  openSUSE Tumbleweed:    0.85  SE +/- 0.03, N = 15  Min: 0.67 / Avg: 0.85 / Max: 0.97  [-O3]
  Manjaro Linux 20.2:     0.88  SE +/- 0.03, N = 15  Min: 0.70 / Avg: 0.88 / Max: 1.12  [-O3]
  1. (CXX) g++ options: -pthread

Crypto++

Crypto++ is a C++ class library of cryptographic algorithms. Learn more via the OpenBenchmarking.org test page.

Crypto++ 8.2 - Test: Unkeyed Algorithms (MiB/second, more is better)
  Clear Linux 34100:      498.19  SE +/- 14.20, N = 15  Min: 440.54 / Avg: 498.19 / Max: 559.37  [-fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake]
  Fedora Workstation 33:  460.79  SE +/- 1.90, N = 3    Min: 457.45 / Avg: 460.79 / Max: 464.03  [-g2]
  openSUSE Tumbleweed:    442.69  SE +/- 0.39, N = 3    Min: 441.91 / Avg: 442.69 / Max: 443.13  [-g2]
  Manjaro Linux 20.2:     498.67  SE +/- 12.24, N = 15  Min: 437.83 / Avg: 498.67 / Max: 550.31  [-g2]
  1. (CXX) g++ options: -O3 -pipe -fPIC -pthread

Betsy GPU Compressor

Betsy is an open-source GPU texture compressor implementing various compression techniques. Betsy is written in GLSL with Vulkan/OpenGL (compute shader) support for GPU-based texture compression. Learn more via the OpenBenchmarking.org test page.

Betsy GPU Compressor 1.1 Beta - Codec: ETC2 RGB - Quality: Highest (Seconds, fewer is better)
  Clear Linux 34100:      7.472  SE +/- 0.152, N = 15  Min: 7.30 / Avg: 7.47 / Max: 9.59  [-O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake]
  Fedora Workstation 33:  7.503  SE +/- 0.195, N = 15  Min: 7.25 / Avg: 7.50 / Max: 10.22
  openSUSE Tumbleweed:    7.326  SE +/- 0.182, N = 15  Min: 7.05 / Avg: 7.33 / Max: 9.86
  Manjaro Linux 20.2:     7.663  SE +/- 0.179, N = 15  Min: 7.45 / Avg: 7.66 / Max: 10.18
  Two of the above configurations additionally report -O3.
  1. (CXX) g++ options: -O2 -lpthread -ldl

Betsy GPU Compressor 1.1 Beta - Codec: ETC1 - Quality: Highest (Seconds, fewer is better)
  Clear Linux 34100:      6.673  SE +/- 0.158, N = 15  Min: 6.50 / Avg: 6.67 / Max: 8.89  [-O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake]
  Fedora Workstation 33:  6.748  SE +/- 0.198, N = 15  Min: 6.50 / Avg: 6.75 / Max: 9.52
  openSUSE Tumbleweed:    6.551  SE +/- 0.183, N = 15  Min: 6.30 / Avg: 6.55 / Max: 9.12
  Manjaro Linux 20.2:     6.861  SE +/- 0.186, N = 15  Min: 6.63 / Avg: 6.86 / Max: 9.46
  Two of the above configurations additionally report -O3.
  1. (CXX) g++ options: -O2 -lpthread -ldl

122 Results Shown

LAMMPS Molecular Dynamics Simulator:
  Rhodopsin Protein
  20k Atoms
Incompact3D
LevelDB:
  Seq Fill:
    Microseconds Per Op
    MB/s
  Rand Fill:
    MB/s
    Microseconds Per Op
  Overwrite:
    Microseconds Per Op
    MB/s
x265
simdjson
PHPBench
LibRaw
libavif avifenc:
  10
  8
x264
Timed Linux Kernel Compilation
GLmark2
C-Blosc
Redis
rav1e
Numpy Benchmark
Timed MAFFT Alignment
SQLite Speedtest
DeepSpeech
Facebook RocksDB
PyBench
SVT-AV1
Mobile Neural Network:
  mobilenet-v1-1.0
  resnet-v2-50
Zstd Compression
x265
Mobile Neural Network
NCNN
Stockfish
rav1e
NCNN
rav1e
Basis Universal
Timed FFmpeg Compilation
NCNN:
  CPU - yolov4-tiny
  Vulkan GPU - yolov4-tiny
Mobile Neural Network
Node.js V8 Web Tooling Benchmark
Mobile Neural Network
Facebook RocksDB
NCNN
LevelDB:
  Rand Read
  Hot Read
NCNN
LeelaChessZero
Timed HMMer Search
NCNN
SVT-AV1
NCNN
LZ4 Compression
NCNN
FFTE
libavif avifenc
Crafty
NCNN
Facebook RocksDB
NCNN:
  CPU - resnet18
  Vulkan GPU - resnet18
RealSR-NCNN
LeelaChessZero
LZ4 Compression
NCNN:
  CPU - efficientnet-b0
  Vulkan GPU - resnet50
LZ4 Compression
NCNN:
  Vulkan GPU - googlenet
  CPU - resnet50
  Vulkan GPU - mobilenet
WebP Image Encode
NCNN
RNNoise
RealSR-NCNN
NCNN:
  CPU - mobilenet
  Vulkan GPU - efficientnet-b0
  CPU - googlenet
IndigoBench
GROMACS
NCNN:
  CPU - mnasnet
  CPU-v2-v2 - mobilenet-v2
Himeno Benchmark
Facebook RocksDB
RawTherapee
IndigoBench
Facebook RocksDB
LZ4 Compression
NCNN:
  CPU-v3-v3 - mobilenet-v3
  Vulkan GPU - blazeface
  CPU - blazeface
TNN
ASTC Encoder
WebP Image Encode
Zstd Compression
miniFE
NCNN
Coremark
NCNN
asmFish
WebP Image Encode
Chaos Group V-RAY
Dolfyn
TNN
Crypto++
NCNN
High Performance Conjugate Gradient
NCNN
ASTC Encoder
Blender:
  Classroom - CPU-Only
  Barbershop - CPU-Only
Waifu2x-NCNN Vulkan
TensorFlow Lite
Redis
simdjson:
  DistinctUserID
  PartialTweets
  Kostya
Crypto++
Betsy GPU Compressor:
  ETC2 RGB - Highest
  ETC1 - Highest