Fedora 32 vs. Fedora 33 Beta Benchmarks

Intel Core i9-10900K testing on Fedora 32 and Fedora 33 Beta by Michael Larabel.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2009290-FI-FEDORA85348
Run Management

Result Identifier                Date Run             Test Duration
F32 Workstation                  September 26 2020    9 Hours, 27 Minutes
F32 Workstation Updated          September 27 2020    9 Hours, 35 Minutes
F33 Workstation Beta             September 28 2020    9 Hours, 51 Minutes
Fedora Workstation 32 Updated    (not listed)         9 Hours, 38 Minutes



All three runs used the same hardware: Intel Core i9-10900K @ 5.30GHz (10 Cores / 20 Threads), Gigabyte Z490 AORUS MASTER (F3 BIOS) motherboard, Intel Comet Lake PCH chipset, 16GB of memory, Samsung SSD 970 EVO 250GB, Gigabyte AMD Radeon RX 5500/5500M / Pro 5500M 8GB (1890/875MHz) graphics, Realtek ALC1220 audio, DELL P2415Q monitor, Intel Device 15f3 + Intel Wi-Fi 6 AX201 networking (also reported as Intel + Intel-AC 9462/9560), and a 3840x2160 screen resolution.

F32 Workstation: Fedora 32 with the 5.6.6-300.fc32.x86_64 (x86_64) kernel, GNOME Shell 3.36.1, X Server + Wayland, OpenGL 4.6 Mesa 20.0.4 (LLVM 10.0.0), GCC 10.0.1 20200328 + Clang 10.0.1, ext4 file-system.

F32 Workstation Updated: Fedora 32 with the 5.8.11-200.fc32.x86_64 (x86_64) kernel, GNOME Shell 3.36.6, X Server 1.20.8 + Wayland, modesetting 1.20.8 display driver, OpenGL 4.6 Mesa 20.1.8 (LLVM 10.0.1), GCC 10.2.1 20200723 + Clang 10.0.1, ext4 file-system.

F33 Workstation Beta: Fedora 33 with the 5.8.11-300.fc33.x86_64 (x86_64) kernel, GNOME Shell 3.38.0, X Server + Wayland, OpenGL 4.6 Mesa 20.2.0-rc4 (LLVM 11.0.0), GCC 10.2.1 20200826 + Clang 11.0.0, btrfs file-system.

OpenBenchmarking.org notes:

Compiler Details: --build=x86_64-redhat-linux --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-initfini-array --enable-languages=c,c++,fortran,objc,obj-c++,ada,go,d,lto --enable-multilib --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=i686 --with-gcc-major-version-only --with-isl --with-linker-hash-style=gnu --with-tune=generic --without-cuda-driver

Disk Details:
- F32 Workstation: NONE / relatime,rw,seclabel
- F32 Workstation Updated: NONE / relatime,rw,seclabel
- F33 Workstation Beta: NONE / relatime,rw,seclabel,space_cache,ssd,subvol=/home,subvolid=256

Processor Details: Scaling Governor: intel_pstate powersave - CPU Microcode: 0xc8

Java Details:
- F32 Workstation: OpenJDK Runtime Environment (build 1.8.0_242-b08)
- F32 Workstation Updated: OpenJDK Runtime Environment (build 1.8.0_265-b01)
- F33 Workstation Beta: OpenJDK Runtime Environment 18.9 (build 11.0.9-ea+6)

Python Details:
- F32 Workstation: Python 3.8.2
- F32 Workstation Updated: Python 3.8.5
- F33 Workstation Beta: Python 3.9.0rc2

Security Details:
- F32 Workstation: SELinux + itlb_multihit: KVM: Mitigation of Split huge pages + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + tsx_async_abort: Not affected
- F32 Workstation Updated: SELinux + itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
- F33 Workstation Beta: SELinux + itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Result Overview (Phoronix Test Suite) - relative performance of F32 Workstation, F32 Workstation Updated, and F33 Workstation Beta across the tested suites: SQLite, RealSR-NCNN, LevelDB, SQLite Speedtest, LibRaw, G'MIC, Systemd Total Boot Time, Hierarchical INTegration, DaCapo Benchmark, Renaissance, Timed FFmpeg Compilation, Bork File Encrypter, LibreOffice, GIMP, TSCP, Perl Benchmarks, Dolfyn, PHPBench, Timed Linux Kernel Compilation, Crafty, BYTE Unix Benchmark, PyPerformance, Timed MPlayer Compilation, PyBench, Mobile Neural Network, Git, Timed MAFFT Alignment, OpenVKL, Stockfish, Hugin, libavif avifenc, ASTC Encoder, Tesseract OCR, Timed GDB GNU Debugger Compilation, High Performance Conjugate Gradient, glibc bench, Basemark GPU, Timed Apache Compilation, Zstd Compression, dav1d, SVT-AV1, Intel Open Image Denoise, Rodinia, RawTherapee, Unigine Superposition, System GZIP Decompression, Timed PHP Compilation, LuxCoreRender, DeepSpeech, NAMD, TNN, Unigine Heaven, Darktable, GROMACS, x265, NCNN, LAMMPS Molecular Dynamics Simulator, Blender, WebP Image Encode, Incompact3D, Embree, Caffe, TensorFlow Lite.

Full side-by-side results for all tests are contained in the OpenBenchmarking.org result file 2009290-FI-FEDORA85348; the individual results follow below, test by test.

SQLite

This is a simple benchmark of SQLite. At present this test profile just measures the time to perform a pre-defined number of insertions on an indexed database. Learn more via the OpenBenchmarking.org test page.
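
The flavor of this workload can be sketched with Python's standard sqlite3 module; the table layout, row count, and timing loop below are illustrative assumptions rather than the actual test profile.

    import sqlite3, time

    conn = sqlite3.connect("bench.db")                # throwaway database file
    conn.execute("CREATE TABLE IF NOT EXISTS t (k INTEGER, v TEXT)")
    conn.execute("CREATE INDEX IF NOT EXISTS idx_k ON t (k)")   # insertions hit an index

    start = time.perf_counter()
    for i in range(100_000):                          # pre-defined number of insertions
        conn.execute("INSERT INTO t (k, v) VALUES (?, ?)", (i, "payload"))
    conn.commit()
    print(f"{time.perf_counter() - start:.2f} seconds")
    conn.close()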

SQLite 3.30.1 - Threads / Copies: 1 (Seconds, Fewer Is Better): F32 Workstation: 43.34 (SE +/- 0.08, N = 3); F32 Workstation Updated: 43.76 (SE +/- 0.12, N = 3); F33 Workstation Beta: 144.41 (SE +/- 0.21, N = 3). 1. (CC) gcc options: -O2 -lz -lm -ldl -lpthread

RealSR-NCNN

RealSR-NCNN is an NCNN neural network implementation of the RealSR project and accelerated using the Vulkan API. RealSR is the Real-World Super Resolution via Kernel Estimation and Noise Injection. NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. This test profile times how long it takes to increase the resolution of a sample image by a scale of 4x with Vulkan. Learn more via the OpenBenchmarking.org test page.

RealSR-NCNN 20200818 - Scale: 4x - TAA: No (Seconds, Fewer Is Better): F32 Workstation: 25.20 (SE +/- 0.20, N = 3); F32 Workstation Updated: 25.16 (SE +/- 0.17, N = 3); F33 Workstation Beta: 15.45 (SE +/- 0.04, N = 3).

LevelDB

LevelDB is a key-value storage library developed by Google that supports making use of Snappy for data compression and has other modern features. Learn more via the OpenBenchmarking.org test page.

LevelDB 1.22 - Benchmark: Sequential Fill (Microseconds Per Op, Fewer Is Better): F32 Workstation: 40.67 (SE +/- 0.05, N = 3); F32 Workstation Updated: 42.62 (SE +/- 0.16, N = 3); F33 Workstation Beta: 61.82 (SE +/- 0.86, N = 4). 1. (CXX) g++ options: -O2 -lsnappy -lpthread

LevelDB 1.22 - Benchmark: Sequential Fill (MB/s, More Is Better): F32 Workstation: 54.4 (SE +/- 0.09, N = 3); F32 Workstation Updated: 51.9 (SE +/- 0.20, N = 3); F33 Workstation Beta: 35.8 (SE +/- 0.50, N = 4). 1. (CXX) g++ options: -O2 -lsnappy -lpthread

LevelDB 1.22 - Benchmark: Random Delete (Microseconds Per Op, Fewer Is Better): F32 Workstation: 40.70 (SE +/- 0.33, N = 3); F32 Workstation Updated: 42.76 (SE +/- 0.26, N = 3); F33 Workstation Beta: 61.27 (SE +/- 0.40, N = 3). 1. (CXX) g++ options: -O2 -lsnappy -lpthread

LevelDB 1.22 - Benchmark: Random Fill (MB/s, More Is Better): F32 Workstation: 52.8 (SE +/- 0.12, N = 3); F32 Workstation Updated: 51.0 (SE +/- 0.39, N = 3); F33 Workstation Beta: 36.8 (SE +/- 0.12, N = 3). 1. (CXX) g++ options: -O2 -lsnappy -lpthread

LevelDB 1.22 - Benchmark: Random Fill (Microseconds Per Op, Fewer Is Better): F32 Workstation: 41.83 (SE +/- 0.09, N = 3); F32 Workstation Updated: 43.30 (SE +/- 0.33, N = 3); F33 Workstation Beta: 59.91 (SE +/- 0.19, N = 3). 1. (CXX) g++ options: -O2 -lsnappy -lpthread

Systemd Total Boot Time

This test uses systemd-analyze to report the entire boot time. Learn more via the OpenBenchmarking.org test page.
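
A quick way to take the same measurement on your own machine is to call systemd-analyze directly; the minimal Python wrapper below assumes the usual "Startup finished in ..." summary line is present.

    import subprocess

    # systemd-analyze reports the firmware, loader, kernel and userspace portions of boot
    out = subprocess.run(["systemd-analyze", "time"],
                         capture_output=True, text=True, check=True).stdout
    print(out.splitlines()[0])    # e.g. "Startup finished in ... (firmware) + ... (userspace) = ..."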

Systemd Total Boot Time - Test: Userspace (ms, Fewer Is Better): F32 Workstation: 8010; F32 Workstation Updated: 8503; F33 Workstation Beta: 11379.

LevelDB

LevelDB is a key-value storage library developed by Google that supports making use of Snappy for data compression and has other modern features. Learn more via the OpenBenchmarking.org test page.

LevelDB 1.22 - Benchmark: Overwrite (MB/s, More Is Better): F32 Workstation: 52.5 (SE +/- 0.52, N = 3); F32 Workstation Updated: 51.2 (SE +/- 0.13, N = 3); F33 Workstation Beta: 37.1 (SE +/- 0.38, N = 3). 1. (CXX) g++ options: -O2 -lsnappy -lpthread

LevelDB 1.22 - Benchmark: Overwrite (Microseconds Per Op, Fewer Is Better): F32 Workstation: 42.02 (SE +/- 0.42, N = 3); F32 Workstation Updated: 43.16 (SE +/- 0.12, N = 3); F33 Workstation Beta: 59.44 (SE +/- 0.61, N = 3). 1. (CXX) g++ options: -O2 -lsnappy -lpthread

G'MIC

G'MIC is an open-source framework for image processing. Learn more via the OpenBenchmarking.org test page.

G'MIC - Test: Plotting Isosurface Of A 3D Volume, 1000 Times (Seconds, Fewer Is Better): F32 Workstation: 16.67 (SE +/- 0.01, N = 3); F32 Workstation Updated: 16.42 (SE +/- 0.09, N = 3); F33 Workstation Beta: 11.93 (SE +/- 0.02, N = 3). 1. Version 2.9.0

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.10.0 - Test: Scala Dotty (ms, Fewer Is Better): F32 Workstation: 1013.04 (SE +/- 12.25, N = 5); F32 Workstation Updated: 1007.09 (SE +/- 8.96, N = 5); F33 Workstation Beta: 1364.50 (SE +/- 7.12, N = 5).

Systemd Total Boot Time

This test uses systemd-analyze to report the entire boot time. Learn more via the OpenBenchmarking.org test page.

Systemd Total Boot Time - Test: Total (ms, Fewer Is Better): F32 Workstation: 11811; F32 Workstation Updated: 12286; F33 Workstation Beta: 15157.

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.10.0 - Test: Savina Reactors.IO (ms, Fewer Is Better): F32 Workstation: 14395.21 (SE +/- 118.55, N = 5); F32 Workstation Updated: 11682.15 (SE +/- 62.43, N = 5); F33 Workstation Beta: 13907.63 (SE +/- 150.42, N = 20).

SQLite Speedtest

This is a benchmark of SQLite's speedtest1 benchmark program with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.

SQLite Speedtest 3.30 - Timed Time - Size 1,000 (Seconds, Fewer Is Better): F32 Workstation: 43.68 (SE +/- 0.01, N = 3); F32 Workstation Updated: 44.92 (SE +/- 0.13, N = 3); F33 Workstation Beta: 52.43 (SE +/- 0.24, N = 3). 1. (CC) gcc options: -O2 -ldl -lz -lpthread

LibRaw

LibRaw is a RAW image decoder for digital camera photos. This test profile runs LibRaw's post-processing benchmark. Learn more via the OpenBenchmarking.org test page.

LibRaw 0.20 - Post-Processing Benchmark (Mpix/sec, More Is Better): F32 Workstation: 45.92 (SE +/- 0.06, N = 3); F32 Workstation Updated: 38.44 (SE +/- 0.14, N = 3); F33 Workstation Beta: 42.23 (SE +/- 0.17, N = 3, -llcms2). 1. (CXX) g++ options: -O2 -fopenmp -ljpeg -lz -lm

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.10.0 - Test: Twitter HTTP Requests (ms, Fewer Is Better): F32 Workstation: 1881.03 (SE +/- 3.54, N = 5); F32 Workstation Updated: 1918.09 (SE +/- 8.57, N = 5); F33 Workstation Beta: 2229.41 (SE +/- 11.76, N = 5).

glibc bench

The GNU C Library project provides the core libraries for the GNU system and GNU/Linux systems, as well as many other systems that use Linux as the kernel. These libraries provide critical APIs including ISO C11, POSIX.1-2008, BSD, OS-specific APIs and more. Learn more via the OpenBenchmarking.org test page.
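
glibc bench reports nanoseconds per call for individual C library routines such as exp. As a rough stand-in, the same per-call arithmetic can be done with Python's timeit against math.exp; the loop count is an assumption and the interpreter overhead means the numbers are not directly comparable to the C benchmark.

    import math, timeit

    calls = 1_000_000
    seconds = timeit.timeit("math.exp(1.2345)", globals=globals(), number=calls)
    print(f"{seconds / calls * 1e9:.2f} ns per call (includes interpreter overhead)")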

glibc bench 1.0 - Benchmark: exp (nanoseconds, Fewer Is Better): F32 Workstation: 4.92230 (SE +/- 0.00966, N = 3); F32 Workstation Updated: 4.45337 (SE +/- 0.05716, N = 3); F33 Workstation Beta: 4.22917 (SE +/- 0.05192, N = 13).

Systemd Total Boot Time

This test uses systemd-analyze to report the entire boot time. Learn more via the OpenBenchmarking.org test page.

Systemd Total Boot Time - Test: Firmware (ms, Fewer Is Better): F32 Workstation: 8135; F32 Workstation Updated: 8102; F33 Workstation Beta: 9383.

glibc bench

The GNU C Library project provides the core libraries for the GNU system and GNU/Linux systems, as well as many other systems that use Linux as the kernel. These libraries provide critical APIs including ISO C11, POSIX.1-2008, BSD, OS-specific APIs and more. Learn more via the OpenBenchmarking.org test page.

glibc bench 1.0 - Benchmark: ffs (nanoseconds, Fewer Is Better): F32 Workstation: 1.30988 (SE +/- 0.00037, N = 3); F32 Workstation Updated: 1.30415 (SE +/- 0.00223, N = 3); F33 Workstation Beta: 1.50054 (SE +/- 0.00080, N = 3).

glibc bench 1.0 - Benchmark: modf (nanoseconds, Fewer Is Better): F32 Workstation: 1.51705 (SE +/- 0.00073, N = 3); F32 Workstation Updated: 1.73393 (SE +/- 0.00166, N = 3); F33 Workstation Beta: 1.53834 (SE +/- 0.00134, N = 3).

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: json_loads (Milliseconds, Fewer Is Better): F32 Workstation: 22.3 (SE +/- 0.03, N = 3); F32 Workstation Updated: 23.3 (SE +/- 0.07, N = 3); F33 Workstation Beta: 20.5 (SE +/- 0.03, N = 3).

glibc bench

The GNU C Library project provides the core libraries for the GNU system and GNU/Linux systems, as well as many other systems that use Linux as the kernel. These libraries provide critical APIs including ISO C11, POSIX.1-2008, BSD, OS-specific APIs and more. Learn more via the OpenBenchmarking.org test page.

glibc bench 1.0 - Benchmark: sqrt (nanoseconds, Fewer Is Better): F32 Workstation: 1.70406 (SE +/- 0.00081, N = 3); F32 Workstation Updated: 1.50804 (SE +/- 0.00204, N = 3); F33 Workstation Beta: 1.50804 (SE +/- 0.00096, N = 3).

Hierarchical INTegration

This test runs the U.S. Department of Energy's Ames Laboratory Hierarchical INTegration (HINT) benchmark. Learn more via the OpenBenchmarking.org test page.

Hierarchical INTegration 1.0 - Test: FLOAT (QUIPs, More Is Better): F32 Workstation: 467186547.12 (SE +/- 662338.05, N = 3); F32 Workstation Updated: 451902708.21 (SE +/- 65392.17, N = 3); F33 Workstation Beta: 497803514.88 (SE +/- 518748.60, N = 3). 1. (CC) gcc options: -O3 -march=native -lm

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: pathlib (Milliseconds, Fewer Is Better): F32 Workstation: 15.9 (SE +/- 0.06, N = 3); F32 Workstation Updated: 15.6 (SE +/- 0.03, N = 3); F33 Workstation Beta: 14.5 (SE +/- 0.03, N = 3).

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1 - Java Test: Tradebeans (msec, Fewer Is Better): F32 Workstation: 2305 (SE +/- 25.55, N = 4); F32 Workstation Updated: 2294 (SE +/- 4.70, N = 4); F33 Workstation Beta: 2492 (SE +/- 24.83, N = 4).

Timed FFmpeg Compilation

This test times how long it takes to build FFmpeg. Learn more via the OpenBenchmarking.org test page.

Timed FFmpeg Compilation 4.2.2 - Time To Compile (Seconds, Fewer Is Better): F32 Workstation: 44.03 (SE +/- 0.11, N = 3); F32 Workstation Updated: 44.16 (SE +/- 0.08, N = 3); F33 Workstation Beta: 40.86 (SE +/- 0.10, N = 3).

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.10.0 - Test: Apache Spark ALS (ms, Fewer Is Better): F32 Workstation: 1647.29 (SE +/- 3.97, N = 5); F32 Workstation Updated: 1642.11 (SE +/- 13.90, N = 5); F33 Workstation Beta: 1772.41 (SE +/- 9.62, N = 5).

Renaissance 0.10.0 - Test: Random Forest (ms, Fewer Is Better): F32 Workstation: 1508.22 (SE +/- 15.97, N = 5); F32 Workstation Updated: 1512.19 (SE +/- 16.39, N = 5); F33 Workstation Beta: 1622.64 (SE +/- 15.99, N = 5).

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1 - Java Test: Jython (msec, Fewer Is Better): F32 Workstation: 3317 (SE +/- 12.70, N = 4); F32 Workstation Updated: 3313 (SE +/- 15.35, N = 4); F33 Workstation Beta: 3557 (SE +/- 27.37, N = 4).

Bork File Encrypter

Bork is a small, cross-platform file encryption utility. It is written in Java and designed to be included along with the files it encrypts for long-term storage. This test measures the amount of time it takes to encrypt a sample file. Learn more via the OpenBenchmarking.org test page.

Bork File Encrypter 1.4 - File Encryption Time (Seconds, Fewer Is Better): F32 Workstation: 7.155 (SE +/- 0.050, N = 3); F32 Workstation Updated: 7.121 (SE +/- 0.018, N = 3); F33 Workstation Beta: 7.605 (SE +/- 0.019, N = 3).

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: crypto_pyaes (Milliseconds, Fewer Is Better): F32 Workstation: 100.0; F32 Workstation Updated: 101.0; F33 Workstation Beta: 95.8. (An SE of +/- 0.03 with N = 3 was reported for one of the runs.)

LibreOffice

Various benchmarking operations with the LibreOffice open-source office suite. Learn more via the OpenBenchmarking.org test page.
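
The measured operation here is a headless batch conversion of documents to PDF. Something similar can be timed by hand with the soffice command; the input file set and output directory below are placeholders.

    import glob, subprocess, time

    docs = glob.glob("docs/*.odt")                    # placeholder input documents
    start = time.perf_counter()
    subprocess.run(["soffice", "--headless", "--convert-to", "pdf",
                    "--outdir", "out"] + docs, check=True)
    print(f"Converted {len(docs)} documents in {time.perf_counter() - start:.3f} s")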

LibreOffice - Test: 20 Documents To PDF (Seconds, Fewer Is Better): F32 Workstation: 6.688 (SE +/- 0.053, N = 20); F32 Workstation Updated: 6.578 (SE +/- 0.034, N = 25); F33 Workstation Beta: 6.932 (SE +/- 0.057, N = 5). 1. F32 Workstation: LibreOffice 6.4.2.2 40(Build:2) 2. F32 Workstation Updated: LibreOffice 6.4.6.2 40(Build:2) 3. F33 Workstation Beta: LibreOffice 7.0.1.2 00(Build:2)

Systemd Total Boot Time

This test uses systemd-analyze to report the entire boot time. Learn more via the OpenBenchmarking.org test page.

Systemd Total Boot Time - Test: Loader (ms, Fewer Is Better): F32 Workstation: 1872; F32 Workstation Updated: 1777; F33 Workstation Beta: 1780.

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.10.0 - Test: Genetic Algorithm Using Jenetics + Futures (ms, Fewer Is Better): F32 Workstation: 1244.37 (SE +/- 13.46, N = 25); F32 Workstation Updated: 1284.90 (SE +/- 9.32, N = 5); F33 Workstation Beta: 1310.73 (SE +/- 10.40, N = 25).

GIMP

GIMP is an open-source image manipulation program. This test profile uses the system-provided GIMP program; on Windows it relies upon a pre-packaged Windows binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.20 - Test: auto-levels (Seconds, Fewer Is Better): F32 Workstation: 9.507 (SE +/- 0.013, N = 3); F32 Workstation Updated: 9.341 (SE +/- 0.023, N = 3); F33 Workstation Beta: 9.827 (SE +/- 0.018, N = 3).

GIMP 2.10.20 - Test: unsharp-mask (Seconds, Fewer Is Better): F32 Workstation: 11.62 (SE +/- 0.02, N = 3); F32 Workstation Updated: 11.59 (SE +/- 0.03, N = 3); F33 Workstation Beta: 12.15 (SE +/- 0.01, N = 3).

GIMP 2.10.20 - Test: resize (Seconds, Fewer Is Better): F32 Workstation: 6.101 (SE +/- 0.082, N = 3); F32 Workstation Updated: 5.957 (SE +/- 0.064, N = 3); F33 Workstation Beta: 6.237 (SE +/- 0.050, N = 3).

TSCP

This is a performance test of TSCP, Tom Kerrigan's Simple Chess Program, which has a built-in performance benchmark. Learn more via the OpenBenchmarking.org test page.

TSCP 1.81 - AI Chess Performance (Nodes Per Second, More Is Better): F32 Workstation: 1627597 (SE +/- 1177.47, N = 5); F32 Workstation Updated: 1672068 (SE +/- 1242.63, N = 5); F33 Workstation Beta: 1700991 (SE +/- 1964.11, N = 5). 1. (CC) gcc options: -O3 -march=native

Perl Benchmarks

Perl benchmark suite that can be used to compare the relative speed of different versions of perl. Learn more via the OpenBenchmarking.org test page.

Perl Benchmarks - Test: Pod2html (Seconds, Fewer Is Better): F32 Workstation: 0.09959376 (SE +/- 0.00036967, N = 3); F32 Workstation Updated: 0.09762625 (SE +/- 0.00021074, N = 3); F33 Workstation Beta: 0.10192816 (SE +/- 0.00060182, N = 3).

Perl Benchmarks - Test: Interpreter (Seconds, Fewer Is Better): F32 Workstation: 0.00081210 (SE +/- 0.00000261, N = 3); F32 Workstation Updated: 0.00079401 (SE +/- 0.00000392, N = 3); F33 Workstation Beta: 0.00082846 (SE +/- 0.00001305, N = 3).

Mobile Neural Network

MNN is the Mobile Neural Network, a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17 - Model: SqueezeNetV1.0 (ms, Fewer Is Better): F32 Workstation: 4.214 (SE +/- 0.038, N = 3, MIN: 4.06 / MAX: 17.74); F32 Workstation Updated: 4.395 (SE +/- 0.027, N = 3, MIN: 4.28 / MAX: 17.83); F33 Workstation Beta: 4.278 (SE +/- 0.016, N = 3, MIN: 4.16 / MAX: 10.02). 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Dolfyn

Dolfyn is a Computational Fluid Dynamics (CFD) code employing modern numerical simulation techniques. The Dolfyn test profile measures the execution time of the computational fluid dynamics demos bundled with Dolfyn. Learn more via the OpenBenchmarking.org test page.

Dolfyn 0.527 - Computational Fluid Dynamics (Seconds, Fewer Is Better): F32 Workstation: 15.59 (SE +/- 0.02, N = 3); F32 Workstation Updated: 15.83 (SE +/- 0.07, N = 3); F33 Workstation Beta: 16.23 (SE +/- 0.10, N = 3).

G'MIC

G'MIC is an open-source framework for image processing. Learn more via the OpenBenchmarking.org test page.

G'MIC - Test: 2D Function Plotting, 1000 Times (Seconds, Fewer Is Better): F32 Workstation: 92.92 (SE +/- 0.14, N = 3); F32 Workstation Updated: 93.13 (SE +/- 0.43, N = 3); F33 Workstation Beta: 89.70 (SE +/- 0.48, N = 3). 1. Version 2.9.0

PHPBench

PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to bench various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. The number of iterations used is 1,000,000. Learn more via the OpenBenchmarking.org test page.

PHPBench 0.8.1 - PHP Benchmark Suite (Score, More Is Better): F32 Workstation: 848415 (SE +/- 2757.69, N = 3); F32 Workstation Updated: 853865 (SE +/- 4199.16, N = 3); F33 Workstation Beta: 879324 (SE +/- 800.41, N = 3).

GIMP

GIMP is an open-source image manipulation program. This test profile uses the system-provided GIMP program; on Windows it relies upon a pre-packaged Windows binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.20 - Test: rotate (Seconds, Fewer Is Better): F32 Workstation: 9.534 (SE +/- 0.076, N = 3); F32 Workstation Updated: 9.378 (SE +/- 0.011, N = 3); F33 Workstation Beta: 9.716 (SE +/- 0.060, N = 3).

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration. Learn more via the OpenBenchmarking.org test page.
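
Timing a default-configuration kernel build by hand follows the same pattern; the source directory and job count below are assumptions.

    import multiprocessing, subprocess, time

    src = "linux-5.4"                                  # assumed kernel source tree
    subprocess.run(["make", "defconfig"], cwd=src, check=True)
    start = time.perf_counter()
    subprocess.run(["make", f"-j{multiprocessing.cpu_count()}"], cwd=src, check=True)
    print(f"Time to compile: {time.perf_counter() - start:.2f} s")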

Timed Linux Kernel Compilation 5.4 - Time To Compile (Seconds, Fewer Is Better): F32 Workstation: 61.68 (SE +/- 0.33, N = 3); F32 Workstation Updated: 62.74 (SE +/- 1.06, N = 3); F33 Workstation Beta: 60.68 (SE +/- 0.60, N = 3).

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: go (Milliseconds, Fewer Is Better): F32 Workstation: 215; F32 Workstation Updated: 218; F33 Workstation Beta: 211. (SEs of +/- 1.00 and +/- 0.33 with N = 3 were reported for two of the runs.)

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1 - Java Test: Tradesoap (msec, Fewer Is Better): F32 Workstation: 2999 (SE +/- 30.37, N = 4); F32 Workstation Updated: 2930 (SE +/- 19.64, N = 19); F33 Workstation Beta: 3027 (SE +/- 18.63, N = 4).

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: CPU - Model: blazeface (ms, Fewer Is Better): F32 Workstation: 1.31 (SE +/- 0.01, N = 3, MIN: 1.23 / MAX: 1.36); F32 Workstation Updated: 1.33 (SE +/- 0.01, N = 3, MIN: 1.24 / MAX: 2.53); F33 Workstation Beta: 1.35 (SE +/- 0.02, N = 3, MIN: 1.28 / MAX: 4.89). 1. (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: 2to3 (Milliseconds, Fewer Is Better): F32 Workstation: 264; F32 Workstation Updated: 265; F33 Workstation Beta: 272.

PyPerformance 1.0.0 - Benchmark: nbody (Milliseconds, Fewer Is Better): F32 Workstation: 108; F32 Workstation Updated: 108; F33 Workstation Beta: 111.

Crafty

This is a performance test of Crafty, an advanced open-source chess engine. Learn more via the OpenBenchmarking.org test page.

Crafty 25.2 - Elapsed Time (Nodes Per Second, More Is Better): F32 Workstation: 10356565 (SE +/- 5404.30, N = 3); F32 Workstation Updated: 10452567 (SE +/- 13836.18, N = 3); F33 Workstation Beta: 10625150 (SE +/- 21467.07, N = 3). 1. (CC) gcc options: -pthread -lstdc++ -fprofile-use -lm

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.
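
The profile times the avifenc command converting a JPEG to AVIF at a given encoder speed; a hand-rolled equivalent (input file name assumed) looks like this.

    import subprocess, time

    for speed in (0, 2, 8, 10):                        # encoder speed levels covered by the test
        start = time.perf_counter()
        subprocess.run(["avifenc", "--speed", str(speed),
                        "sample.jpg", f"out-s{speed}.avif"], check=True)
        print(f"speed {speed}: {time.perf_counter() - start:.3f} s")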

libavif avifenc 0.7.3 - Encoder Speed: 8 (Seconds, Fewer Is Better): F32 Workstation: 5.133 (SE +/- 0.008, N = 3); F32 Workstation Updated: 5.209 (SE +/- 0.020, N = 3); F33 Workstation Beta: 5.265 (SE +/- 0.002, N = 3). 1. (CXX) g++ options: -O3 -fPIC

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.0 - Preset: Fast (Seconds, Fewer Is Better): F32 Workstation: 4.52 (SE +/- 0.01, N = 3); F32 Workstation Updated: 4.63 (SE +/- 0.01, N = 3); F33 Workstation Beta: 4.59 (SE +/- 0.00, N = 3). 1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 - Test: OpenMP HotSpot3D (Seconds, Fewer Is Better): F32 Workstation: 82.35 (SE +/- 0.05, N = 3); F32 Workstation Updated: 84.33 (SE +/- 0.07, N = 3); F33 Workstation Beta: 83.27 (SE +/- 0.36, N = 3). 1. (CXX) g++ options: -O2 -lOpenCL

BYTE Unix Benchmark

This is a test of BYTE. Learn more via the OpenBenchmarking.org test page.

BYTE Unix Benchmark 3.6 - Computational Test: Dhrystone 2 (LPS, More Is Better): F32 Workstation: 52011851.2 (SE +/- 727737.47, N = 3); F32 Workstation Updated: 51604321.4 (SE +/- 135665.15, N = 3); F33 Workstation Beta: 50797941.3 (SE +/- 8678.89, N = 3).

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: CPU - Model: mnasnet (ms, Fewer Is Better): F32 Workstation: 3.87 (SE +/- 0.03, N = 3, MIN: 3.77 / MAX: 10.47); F32 Workstation Updated: 3.84 (SE +/- 0.04, N = 3, MIN: 3.65 / MAX: 5.23); F33 Workstation Beta: 3.93 (SE +/- 0.11, N = 3, MIN: 3.65 / MAX: 7.47). 1. (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: googlenet (ms, Fewer Is Better): F32 Workstation: 13.74 (SE +/- 0.28, N = 3, MIN: 13.08 / MAX: 32.76); F32 Workstation Updated: 13.43 (SE +/- 0.01, N = 3, MIN: 12.99 / MAX: 26.07); F33 Workstation Beta: 13.46 (SE +/- 0.29, N = 3, MIN: 12.81 / MAX: 18.22). 1. (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.10.0 - Test: Akka Unbalanced Cobwebbed Tree (ms, Fewer Is Better): F32 Workstation: 9211.35 (SE +/- 99.99, N = 5); F32 Workstation Updated: 9004.04 (SE +/- 75.72, N = 5); F33 Workstation Beta: 9163.82 (SE +/- 91.23, N = 9).

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

AI Benchmark Alpha 0.1.2 - Device Inference Score (Score, More Is Better): F32 Workstation: 1456; F32 Workstation Updated: 1424.

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.7.3 - Encoder Speed: 10 (Seconds, Fewer Is Better): F32 Workstation: 4.965 (SE +/- 0.021, N = 3); F32 Workstation Updated: 5.041 (SE +/- 0.034, N = 3); F33 Workstation Beta: 5.074 (SE +/- 0.008, N = 3). 1. (CXX) g++ options: -O3 -fPIC

glibc bench

The GNU C Library project provides the core libraries for the GNU system and GNU/Linux systems, as well as many other systems that use Linux as the kernel. These libraries provide critical APIs including ISO C11, POSIX.1-2008, BSD, OS-specific APIs and more. Learn more via the OpenBenchmarking.org test page.

glibc bench 1.0 - Benchmark: sinh (nanoseconds, Fewer Is Better): F32 Workstation: 6.79088 (SE +/- 0.00115, N = 3); F32 Workstation Updated: 6.80360 (SE +/- 0.00079, N = 3); F33 Workstation Beta: 6.93852 (SE +/- 0.00116, N = 3).

Mobile Neural Network

MNN is the Mobile Neural Network, a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17 - Model: resnet-v2-50 (ms, Fewer Is Better): F32 Workstation: 24.91 (SE +/- 0.01, N = 3, MIN: 24.66 / MAX: 38.14); F32 Workstation Updated: 25.43 (SE +/- 0.26, N = 3, MIN: 24.69 / MAX: 39.57); F33 Workstation Beta: 25.20 (SE +/- 0.24, N = 3, MIN: 24.79 / MAX: 32.12). 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: CPU - Model: alexnet (ms, Fewer Is Better): F32 Workstation: 15.50 (SE +/- 0.03, N = 3, MIN: 15.3 / MAX: 24.64); F32 Workstation Updated: 15.48 (SE +/- 0.02, N = 3, MIN: 15.29 / MAX: 23.87); F33 Workstation Beta: 15.19 (SE +/- 0.02, N = 3, MIN: 15.01 / MAX: 22.42). 1. (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: vgg16 (ms, Fewer Is Better): F32 Workstation: 70.12 (SE +/- 0.05, N = 3, MIN: 69.69 / MAX: 82.73); F32 Workstation Updated: 70.41 (SE +/- 0.02, N = 3, MIN: 69.98 / MAX: 80.65); F33 Workstation Beta: 69.03 (SE +/- 0.17, N = 3, MIN: 68.48 / MAX: 76.07). 1. (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.
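
cryptsetup ships its own benchmark mode, which is what this profile wraps; running it directly and keeping only the PBKDF2 lines is a short sketch.

    import subprocess

    out = subprocess.run(["cryptsetup", "benchmark"],
                         capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        if "PBKDF2" in line:                           # e.g. PBKDF2-sha512 iterations per second
            print(line)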

Cryptsetup 2.3.4 - PBKDF2-sha512 (Iterations Per Second, More Is Better): F32 Workstation Updated: 2077772 (SE +/- 3634.51, N = 3); F33 Workstation Beta: 2042682 (SE +/- 1325.33, N = 3); Fedora Workstation 32 Updated: 2083292 (SE +/- 5504.00, N = 3).

Timed MPlayer Compilation

This test times how long it takes to build the MPlayer open-source media player program. Learn more via the OpenBenchmarking.org test page.

Timed MPlayer Compilation 1.4 - Time To Compile (Seconds, Fewer Is Better): F32 Workstation: 27.73 (SE +/- 0.03, N = 3); F32 Workstation Updated: 27.51 (SE +/- 0.02, N = 3); F33 Workstation Beta: 27.20 (SE +/- 0.02, N = 3).

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: pickle_pure_python (Milliseconds, Fewer Is Better): F32 Workstation: 371; F32 Workstation Updated: 378; F33 Workstation Beta: 374. (SEs of +/- 0.33 with N = 3 were reported for two of the runs.)

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: CPU - Model: efficientnet-b0 (ms, Fewer Is Better): F32 Workstation: 5.85 (SE +/- 0.02, N = 3, MIN: 5.63 / MAX: 18.12); F32 Workstation Updated: 5.88 (SE +/- 0.03, N = 3, MIN: 5.68 / MAX: 7.26); F33 Workstation Beta: 5.96 (SE +/- 0.10, N = 3, MIN: 5.76 / MAX: 8.06). 1. (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

AI Benchmark Alpha 0.1.2 - Device AI Score (Score, More Is Better): F32 Workstation: 2616; F32 Workstation Updated: 2569.

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: float (Milliseconds, Fewer Is Better): F32 Workstation: 95.9 (SE +/- 0.17, N = 3); F32 Workstation Updated: 96.4 (SE +/- 0.22, N = 3); F33 Workstation Beta: 94.7 (SE +/- 0.09, N = 3).

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: CPU - Model: yolov4-tiny (ms, Fewer Is Better): F32 Workstation: 25.50 (SE +/- 0.12, N = 3, MIN: 24.82 / MAX: 38.76); F32 Workstation Updated: 25.06 (SE +/- 0.05, N = 3, MIN: 24.6 / MAX: 33.92); F33 Workstation Beta: 25.23 (SE +/- 0.14, N = 3, MIN: 24.69 / MAX: 31.6). 1. (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread

Basemark GPU

This is a benchmark of Basemark GPU. For this test profile to work, you must have a valid license/copy of BasemarkGPU in your Phoronix Test Suite download cache. This test profile simply automates the execution of BasemarkGPU and you must already have the Windows .zip or Linux .tar.gz in the download cache. Learn more via the OpenBenchmarking.org test page.

Basemark GPU 1.2 - Renderer: Vulkan - Resolution: 3840 x 2160 - Graphics Preset: Medium (FPS, More Is Better): F32 Workstation: 221.19 (SE +/- 0.07, N = 3, MIN: 140.91 / MAX: 370.01); F32 Workstation Updated: 219.01 (SE +/- 0.27, N = 3, MIN: 140.81 / MAX: 371.24); F33 Workstation Beta: 222.83 (SE +/- 0.13, N = 3, MIN: 141.32 / MAX: 370.49).

Caffe

This is a benchmark of the Caffe deep learning framework and currently supports the AlexNet and GoogleNet models with execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.

Caffe 2020-02-13 - Model: AlexNet - Acceleration: CPU - Iterations: 200 (Milli-Seconds, Fewer Is Better): F32 Workstation: 217352 (SE +/- 96.33, N = 3); F32 Workstation Updated: 217359 (SE +/- 162.52, N = 3); F33 Workstation Beta: 213793 (SE +/- 34.91, N = 3). 1. (CXX) g++ options: -fPIC -O2 -rdynamic -lboost_system -lboost_thread -lboost_filesystem -lboost_chrono -lboost_date_time -lboost_atomic -lglog -lgflags -lprotobuf -lpthread -lhdf5_cpp -lhdf5 -lhdf5_hl_cpp -lhdf5_hl -llmdb -lopenblas

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.7.0 - Video Input: Chimera 1080p (FPS, More Is Better): F32 Workstation: 778.80 (SE +/- 1.16, N = 3, MIN: 607.84 / MAX: 1013.58); F32 Workstation Updated: 783.94 (SE +/- 0.79, N = 3, MIN: 609.29 / MAX: 1041.56); F33 Workstation Beta: 791.70 (SE +/- 0.59, N = 3, MIN: 610.59 / MAX: 1080.54). 1. (CC) gcc options: -pthread

LuxCoreRender

LuxCoreRender is an open-source physically based renderer. This test profile is focused on running LuxCoreRender on the CPU as opposed to the OpenCL version. Learn more via the OpenBenchmarking.org test page.

LuxCoreRender 2.3 - Scene: Rainbow Colors and Prism (M samples/sec, More Is Better): F32 Workstation: 2.50 (SE +/- 0.00, N = 3, MIN: 2.44 / MAX: 2.53); F32 Workstation Updated: 2.48 (SE +/- 0.00, N = 3, MIN: 2.43 / MAX: 2.51); F33 Workstation Beta: 2.46 (SE +/- 0.01, N = 3, MIN: 2.41 / MAX: 2.51).

LevelDB

LevelDB is a key-value storage library developed by Google that supports making use of Snappy for data compression and has other modern features. Learn more via the OpenBenchmarking.org test page.

LevelDB 1.22 - Benchmark: Hot Read (Microseconds Per Op, Fewer Is Better): F32 Workstation: 8.800 (SE +/- 0.098, N = 7); F32 Workstation Updated: 8.758 (SE +/- 0.089, N = 8); F33 Workstation Beta: 8.660 (SE +/- 0.119, N = 3). 1. (CXX) g++ options: -O2 -lsnappy -lpthread

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.7.0 - Video Input: Summer Nature 1080p (FPS, More Is Better): F32 Workstation: 726.16 (SE +/- 7.11, N = 3, MIN: 483.53 / MAX: 795.12); F32 Workstation Updated: 736.88 (SE +/- 0.93, N = 3, MIN: 621.65 / MAX: 800.57); F33 Workstation Beta: 737.41 (SE +/- 0.76, N = 3, MIN: 625.08 / MAX: 802.04). 1. (CC) gcc options: -pthread

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
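
cwebp exposes the same encode settings used in these results; timing a few of them by hand (input image name assumed) is straightforward.

    import subprocess, time

    settings = {
        "default":               [],
        "quality 100":           ["-q", "100"],
        "quality 100, lossless": ["-q", "100", "-lossless"],
    }
    for name, args in settings.items():
        start = time.perf_counter()
        subprocess.run(["cwebp", *args, "photo.jpg", "-o", "out.webp"],
                       check=True, capture_output=True)
        print(f"{name}: {time.perf_counter() - start:.3f} s")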

WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless (Encode Time - Seconds, Fewer Is Better): F32 Workstation: 14.79 (SE +/- 0.01, N = 3); F32 Workstation Updated: 14.86 (SE +/- 0.00, N = 3); F33 Workstation Beta: 15.01 (SE +/- 0.01, N = 3). 1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16

Git

This test measures the time needed to carry out some sample Git operations on an example, static repository that happens to be a copy of the GNOME GTK tool-kit repository. Learn more via the OpenBenchmarking.org test page.
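
The workload is a series of everyday Git commands against a local clone; a minimal timing harness over a few such commands (repository path and command list assumed, not the profile's exact set) could look like this.

    import subprocess, time

    repo = "gtk"                                       # assumed local repository clone
    commands = [["git", "status"],
                ["git", "log", "--oneline", "-1000"],
                ["git", "diff", "HEAD~100"]]
    start = time.perf_counter()
    for cmd in commands:
        subprocess.run(cmd, cwd=repo, check=True, capture_output=True)
    print(f"{time.perf_counter() - start:.2f} s for {len(commands)} commands")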

Git - Time To Complete Common Git Commands (Seconds, Fewer Is Better): F32 Workstation: 40.81 (SE +/- 0.15, N = 3); F32 Workstation Updated: 40.45 (SE +/- 0.22, N = 3); F33 Workstation Beta: 40.23 (SE +/- 0.13, N = 3). 1. F32 Workstation: git version 2.26.0 2. F32 Workstation Updated: git version 2.26.0 3. F33 Workstation Beta: git version 2.28.0

Timed MAFFT Alignment

This test performs an alignment of 100 pyruvate decarboxylase sequences. Learn more via the OpenBenchmarking.org test page.

Timed MAFFT Alignment 7.471 - Multiple Sequence Alignment - LSU RNA (Seconds, Fewer Is Better): F32 Workstation: 8.614 (SE +/- 0.047, N = 3); F32 Workstation Updated: 8.561 (SE +/- 0.050, N = 3); F33 Workstation Beta: 8.491 (SE +/- 0.045, N = 3). 1. (CC) gcc options: -std=c99 -O3 -lm -lpthread

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: python_startup (Milliseconds, Fewer Is Better): F32 Workstation: 6.94 (SE +/- 0.01, N = 3); F32 Workstation Updated: 7.01 (SE +/- 0.00, N = 3); F33 Workstation Beta: 6.91 (SE +/- 0.01, N = 3).

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless, Highest Compression (Encode Time - Seconds, Fewer Is Better): F32 Workstation: 30.51 (SE +/- 0.07, N = 3); F32 Workstation Updated: 30.93 (SE +/- 0.03, N = 3); F33 Workstation Beta: 30.89 (SE +/- 0.01, N = 3). 1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: django_template (Milliseconds, Fewer Is Better): F32 Workstation: 44.4 (SE +/- 0.18, N = 3); F32 Workstation Updated: 44.6 (SE +/- 0.13, N = 3); F33 Workstation Beta: 45.0 (SE +/- 0.20, N = 3).

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library that offers high-performance volume computation kernels and part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenVKL 0.9 - Benchmark: vklBenchmark (Items / Sec, More Is Better): F32 Workstation: 192.61 (SE +/- 0.81, N = 3, MIN: 1 / MAX: 771); F32 Workstation Updated: 191.42 (SE +/- 1.62, N = 3, MIN: 1 / MAX: 781); F33 Workstation Beta: 193.94 (SE +/- 0.25, N = 3, MIN: 1 / MAX: 778).

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

AI Benchmark Alpha 0.1.2 - Device Training Score (Score, More Is Better): F32 Workstation: 1160; F32 Workstation Updated: 1145.

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.
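
Compressing a large file with the zstd command line at the levels used by the test (3 and 19) yields comparable throughput figures; the input file below is a placeholder.

    import os, subprocess, time

    src = "sample.iso"                                 # placeholder large input file
    for level in (3, 19):
        start = time.perf_counter()
        subprocess.run(["zstd", f"-{level}", "-f", "-o", "out.zst", src], check=True)
        elapsed = time.perf_counter() - start
        print(f"level {level}: {os.path.getsize(src) / elapsed / 1e6:.1f} MB/s")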

Zstd Compression 1.4.5 - Compression Level: 3 (MB/s, More Is Better): F32 Workstation: 2894.8 (SE +/- 24.28, N = 3); F32 Workstation Updated: 2900.5 (SE +/- 10.98, N = 3); F33 Workstation Beta: 2863.3 (SE +/- 16.00, N = 3). 1. (CC) gcc options: -O3 -pthread -lz -llzma

Stockfish

This is a test of Stockfish, an advanced C++11 chess benchmark that can scale up to 128 CPU cores. Learn more via the OpenBenchmarking.org test page.

Stockfish 9 - Total Time (Nodes Per Second, More Is Better): F32 Workstation: 37073146 (SE +/- 180894.45, N = 3); F32 Workstation Updated: 37544563 (SE +/- 201253.35, N = 3); F33 Workstation Beta: 37081066 (SE +/- 265859.79, N = 3). 1. (CXX) g++ options: -m64 -lpthread -fno-exceptions -std=c++11 -pedantic -O3 -msse -msse3 -mpopcnt -flto

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: chaos (Milliseconds, Fewer Is Better): F32 Workstation: 94.8 (SE +/- 0.09, N = 3); F32 Workstation Updated: 96.0 (SE +/- 0.07, N = 3); F33 Workstation Beta: 96.0 (SE +/- 0.03, N = 3).

PyBench

This test profile reports the total time of the different average timed test results from PyBench. PyBench reports average test times for different functions such as BuiltinFunctionCalls and NestedForLoops, with this total result providing a rough estimate as to Python's average performance on a given system. This test profile runs PyBench each time for 20 rounds. Learn more via the OpenBenchmarking.org test page.

PyBench 2018-02-16 - Total For Average Test Times (Milliseconds, Fewer Is Better): F32 Workstation: 893 (SE +/- 0.88, N = 3); F32 Workstation Updated: 897 (SE +/- 1.73, N = 3); F33 Workstation Beta: 886 (SE +/- 9.68, N = 3).

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100 (Encode Time - Seconds, Fewer Is Better): F32 Workstation: 1.962 (SE +/- 0.004, N = 3); F32 Workstation Updated: 1.944 (SE +/- 0.005, N = 3); F33 Workstation Beta: 1.938 (SE +/- 0.002, N = 3). 1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16

SVT-AV1

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 CPU-based multi-threaded video encoder for the AV1 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 0.8 - Encoder Mode: Enc Mode 0 - Input: 1080p (Frames Per Second, More Is Better): F32 Workstation: 0.167 (SE +/- 0.000, N = 3); F32 Workstation Updated: 0.165 (SE +/- 0.000, N = 3); F33 Workstation Beta: 0.166 (SE +/- 0.001, N = 3). 1. (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100, Highest Compression (Encode Time - Seconds, Fewer Is Better): F32 Workstation: 5.952 (SE +/- 0.007, N = 3); F32 Workstation Updated: 5.881 (SE +/- 0.004, N = 3); F33 Workstation Beta: 5.881 (SE +/- 0.010, N = 3). 1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.0 - Preset: Medium (Seconds, Fewer Is Better): F32 Workstation: 6.67 (SE +/- 0.04, N = 3); F32 Workstation Updated: 6.75 (SE +/- 0.01, N = 3); F33 Workstation Beta: 6.71 (SE +/- 0.00, N = 3). 1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

TNN

TNN is an open-source deep learning reasoning framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.2.3 - Target: CPU - Model: MobileNet v2 (ms, Fewer Is Better): F32 Workstation: 270.61 (SE +/- 1.46, N = 3, MIN: 263.97 / MAX: 283.59); F32 Workstation Updated: 273.82 (SE +/- 0.64, N = 3, MIN: 271.95 / MAX: 283.74); F33 Workstation Beta: 273.59 (SE +/- 1.11, N = 3, MIN: 270.76 / MAX: 280.55). 1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O2 -rdynamic -ldl

Hugin

Hugin is an open-source, cross-platform panorama photo stitcher software package. This test profile times how long it takes to run the assistant and panorama photo stitching on a set of images. Learn more via the OpenBenchmarking.org test page.

Hugin - Panorama Photo Assistant + Stitching Time (Seconds, Fewer Is Better): F32 Workstation: 38.68 (SE +/- 0.20, N = 3); F32 Workstation Updated: 38.36 (SE +/- 0.23, N = 3); F33 Workstation Beta: 38.80 (SE +/- 0.17, N = 3).

Darktable

Darktable is an open-source photography / workflow application. This test will use any system-installed Darktable program or, on Windows, will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.2.1 - Test: Server Rack - Acceleration: CPU-only (Seconds, Fewer Is Better): F32 Workstation: 0.176, F32 Workstation Updated: 0.178, F33 Workstation Beta: 0.178

glibc bench

The GNU C Library project provides the core libraries for the GNU system and GNU/Linux systems, as well as many other systems that use Linux as the kernel. These libraries provide critical APIs including ISO C11, POSIX.1-2008, BSD, OS-specific APIs and more. Learn more via the OpenBenchmarking.org test page.

glibc bench 1.0 - Benchmark: sincos (nanoseconds, Fewer Is Better): F32 Workstation: 12.41, F32 Workstation Updated: 12.28, F33 Workstation Beta: 12.29
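
The sincos figure is a per-call latency in nanoseconds. As a rough illustration of what such a microbenchmark measures, glibc's sincos can be called directly through ctypes and averaged over many iterations; this is only a sketch of the idea (Python and ctypes overhead dominate), not a substitute for glibc's own bench-math harness:

    # Sketch only: call glibc's sincos via ctypes and report an average per-call time.
    # Assumes a Linux system with libm.so.6; numbers include Python/ctypes overhead.
    import ctypes, time

    libm = ctypes.CDLL("libm.so.6")
    libm.sincos.argtypes = [ctypes.c_double,
                            ctypes.POINTER(ctypes.c_double),
                            ctypes.POINTER(ctypes.c_double)]
    libm.sincos.restype = None

    s, c = ctypes.c_double(), ctypes.c_double()
    n = 1_000_000
    start = time.perf_counter()
    for _ in range(n):
        libm.sincos(0.5, ctypes.byref(s), ctypes.byref(c))
    print(f"~{(time.perf_counter() - start) / n * 1e9:.1f} ns per call (incl. overhead)")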

G'MIC

G'MIC is an open-source framework for image processing. Learn more via the OpenBenchmarking.org test page.

G'MIC - Test: 3D Elevated Function In Random Colors, 100 Times (Seconds, Fewer Is Better): F32 Workstation: 55.39, F32 Workstation Updated: 55.79, F33 Workstation Beta: 55.98

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 - Test: OpenMP Leukocyte (Seconds, Fewer Is Better): F32 Workstation: 84.74, F32 Workstation Updated: 85.33, F33 Workstation Beta: 84.45

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility, using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Default (Encode Time - Seconds, Fewer Is Better): F32 Workstation: 1.263, F32 Workstation Updated: 1.250, F33 Workstation Beta: 1.250

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: CPU - Model: resnet18 (ms, Fewer Is Better): F32 Workstation: 13.86, F32 Workstation Updated: 13.72, F33 Workstation Beta: 13.83

Tesseract OCR

Tesseract-OCR is the open-source optical character recognition (OCR) engine for the conversion of text within images to raw text output. This test profile relies upon a system-supplied Tesseract installation. Learn more via the OpenBenchmarking.org test page.

Tesseract OCR 4.1.1 - Time To OCR 7 Images (Seconds, Fewer Is Better): F32 Workstation: 20.05, F32 Workstation Updated: 20.25, F33 Workstation Beta: 20.17

High Performance Conjugate Gradient

HPCG is the High Performance Conjugate Gradient benchmark, a newer scientific benchmark from Sandia National Laboratories focused on supercomputer testing with modern, real-world workloads compared to HPCC. Learn more via the OpenBenchmarking.org test page.

High Performance Conjugate Gradient 3.1 (GFLOP/s, More Is Better): F32 Workstation: 4.35057, F32 Workstation Updated: 4.34479, F33 Workstation Beta: 4.38740

Timed GDB GNU Debugger Compilation

This test times how long it takes to build the GNU Debugger (GDB) in a default configuration. Learn more via the OpenBenchmarking.org test page.

Timed GDB GNU Debugger Compilation 9.1 - Time To Compile (Seconds, Fewer Is Better): F32 Workstation: 76.08, F32 Workstation Updated: 76.20, F33 Workstation Beta: 75.47
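
The timed compilation tests (GDB, Apache, PHP, the Linux kernel, and so on) all follow the same basic pattern: configure the source tree once, then time a parallel make. A hedged sketch of that measurement, assuming an already-extracted and configured tree in a placeholder directory named gdb-9.1:

    # Sketch of a timed build; the directory name and job count are placeholders,
    # not the test profile's exact configuration.
    import os, subprocess, time

    jobs = os.cpu_count() or 1
    start = time.perf_counter()
    subprocess.run(["make", f"-j{jobs}"], cwd="gdb-9.1", check=True,
                   stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    print(f"Time to compile: {time.perf_counter() - start:.2f} s")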

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: raytrace (Milliseconds, Fewer Is Better): F32 Workstation: 423, F32 Workstation Updated: 427, F33 Workstation Beta: 425

LevelDB

LevelDB is a key-value storage library developed by Google that supports making use of Snappy for data compression and has other modern features. Learn more via the OpenBenchmarking.org test page.

LevelDB 1.22 - Benchmark: Seek Random (Microseconds Per Op, Fewer Is Better): F32 Workstation: 10.72, F32 Workstation Updated: 10.69, F33 Workstation Beta: 10.63

LevelDB 1.22 - Benchmark: Random Read (Microseconds Per Op, Fewer Is Better): F32 Workstation: 8.874, F32 Workstation Updated: 8.796, F33 Workstation Beta: 8.800

SVT-AV1

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 CPU-based multi-threaded video encoder for the AV1 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 0.8 - Encoder Mode: Enc Mode 4 - Input: 1080p (Frames Per Second, More Is Better): F32 Workstation: 4.554, F32 Workstation Updated: 4.541, F33 Workstation Beta: 4.516

Caffe

This is a benchmark of the Caffe deep learning framework that currently supports the AlexNet and GoogleNet models with execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.

Caffe 2020-02-13 - Model: GoogleNet - Acceleration: CPU - Iterations: 100 (Milli-Seconds, Fewer Is Better): F32 Workstation: 241304, F32 Workstation Updated: 241538, F33 Workstation Beta: 243255

glibc bench

The GNU C Library project provides the core libraries for the GNU system and GNU/Linux systems, as well as many other systems that use Linux as the kernel. These libraries provide critical APIs including ISO C11, POSIX.1-2008, BSD, OS-specific APIs and more. Learn more via the OpenBenchmarking.org test page.

glibc bench 1.0 - Benchmark: ffsll (nanoseconds, Fewer Is Better): F32 Workstation: 1.31800, F32 Workstation Updated: 1.30750, F33 Workstation Beta: 1.30833

Timed Apache Compilation

This test times how long it takes to build the Apache HTTPD web server. Learn more via the OpenBenchmarking.org test page.

Timed Apache Compilation 2.4.41 - Time To Compile (Seconds, Fewer Is Better): F32 Workstation: 17.37, F32 Workstation Updated: 17.45, F33 Workstation Beta: 17.50

Unigine Superposition

This test calculates the average frame-rate within the Superposition demo for the Unigine engine, released in 2017. This engine is extremely demanding on the system's graphics card. Learn more via the OpenBenchmarking.org test page.

Unigine Superposition 1.0 - Resolution: 1920 x 1080 - Mode: Fullscreen - Quality: Low - Renderer: OpenGL (Frames Per Second, More Is Better): F32 Workstation: 131.6, F32 Workstation Updated: 132.3, F33 Workstation Beta: 132.6

Darktable

Darktable is an open-source photography workflow application. This test will use any system-installed Darktable program or, on Windows, will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.2.1 - Test: Server Room - Acceleration: CPU-only (Seconds, Fewer Is Better): F32 Workstation: 3.601, F32 Workstation Updated: 3.627, F33 Workstation Beta: 3.628

glibc bench

The GNU C Library project provides the core libraries for the GNU system and GNU/Linux systems, as well as many other systems that use Linux as the kernel. These libraries provide critical APIs including ISO C11, POSIX.1-2008, BSD, OS-specific APIs and more. Learn more via the OpenBenchmarking.org test page.

glibc bench 1.0 - Benchmark: asinh (nanoseconds, Fewer Is Better): F32 Workstation: 7.70467, F32 Workstation Updated: 7.66474, F33 Workstation Beta: 7.64781

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - PBKDF2-whirlpool (Iterations Per Second, More Is Better): F32 Workstation: 880663, F32 Workstation Updated: 880666, F33 Workstation Beta: 874299
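
These figures come from cryptsetup's built-in benchmark mode. A small sketch that runs it and pulls out the whirlpool line; the exact output layout is an assumption and can vary between cryptsetup releases:

    # Sketch: run "cryptsetup benchmark" and print the PBKDF2-whirlpool result line.
    import subprocess

    out = subprocess.run(["cryptsetup", "benchmark"], check=True,
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        if "whirlpool" in line.lower():
            print(line.strip())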

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: regex_compile (Milliseconds, Fewer Is Better): F32 Workstation: 150, F32 Workstation Updated: 150, F33 Workstation Beta: 149
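
The regex_compile workload repeatedly compiles a set of regular expressions under the pyperformance/pyperf harness. The core operation can be illustrated with a plain timeit sketch; the pattern below is chosen arbitrarily and this is not the pyperformance harness itself:

    # Illustration only: time re.compile of a single pattern.
    import re, timeit

    pattern = r"(\d{4})-(\d{2})-(\d{2})[T ](\d{2}):(\d{2}):(\d{2})"

    def compile_once():
        re.purge()          # drop the module-level cache so each compile is real work
        return re.compile(pattern)

    per_call = timeit.timeit(compile_once, number=10_000) / 10_000
    print(f"re.compile: {per_call * 1e6:.1f} us per call")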

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 - Test: OpenMP LavaMD (Seconds, Fewer Is Better): F32 Workstation: 173.77, F32 Workstation Updated: 173.85, F33 Workstation Beta: 172.72

Intel Open Image Denoise

Open Image Denoise is a denoising library for ray-tracing and part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

Intel Open Image Denoise 1.2.0 - Scene: Memorial (Images / Sec, More Is Better): F32 Workstation: 11.03, F32 Workstation Updated: 11.10, F33 Workstation Beta: 11.09

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17 - Model: inception-v3 (ms, Fewer Is Better): F32 Workstation: 29.43, F32 Workstation Updated: 29.55, F33 Workstation Beta: 29.37

Caffe

This is a benchmark of the Caffe deep learning framework that currently supports the AlexNet and GoogleNet models with execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.

Caffe 2020-02-13 - Model: GoogleNet - Acceleration: CPU - Iterations: 200 (Milli-Seconds, Fewer Is Better): F32 Workstation: 482825, F32 Workstation Updated: 483369, F33 Workstation Beta: 485875

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 - Test: OpenMP CFD Solver (Seconds, Fewer Is Better): F32 Workstation: 18.21, F32 Workstation Updated: 18.10, F33 Workstation Beta: 18.10

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: CPU - Model: mobilenet (ms, Fewer Is Better): F32 Workstation: 16.48, F32 Workstation Updated: 16.38, F33 Workstation Beta: 16.41

Systemd Total Boot Time

This test uses systemd-analyze to report the entire boot time. Learn more via the OpenBenchmarking.org test page.

Systemd Total Boot Time - Test: Kernel (ms, Fewer Is Better): F32 Workstation: 3801, F32 Workstation Updated: 3783, F33 Workstation Beta: 3778
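
The boot-time numbers are taken from systemd-analyze. A sketch that reads the kernel component out of its summary line; the regex assumes the usual "Startup finished in X.XXXs (kernel) + ..." format, which is not guaranteed on every configuration:

    # Sketch: extract the kernel portion of the boot time from systemd-analyze.
    import re, subprocess

    out = subprocess.run(["systemd-analyze", "time"], check=True,
                         capture_output=True, text=True).stdout
    m = re.search(r"([\d.]+)s \(kernel\)", out)
    if m:
        print(f"Kernel boot time: {float(m.group(1)) * 1000:.0f} ms")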

RawTherapee

RawTherapee is a cross-platform, open-source multi-threaded RAW image processing program. Learn more via the OpenBenchmarking.org test page.

RawTherapee - Total Benchmark Time (Seconds, Fewer Is Better): F32 Workstation: 55.19, F32 Workstation Updated: 55.18, F33 Workstation Beta: 55.51

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, Fewer Is Better): F32 Workstation: 5.02, F32 Workstation Updated: 5.05, F33 Workstation Beta: 5.02

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17 - Model: mobilenet-v1-1.0 (ms, Fewer Is Better): F32 Workstation: 2.886, F32 Workstation Updated: 2.890, F33 Workstation Beta: 2.873

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test covering both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.0 - Preset: Thorough (Seconds, Fewer Is Better): F32 Workstation: 20.94, F32 Workstation Updated: 21.06, F33 Workstation Beta: 21.03

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: CPU - Model: resnet50 (ms, Fewer Is Better): F32 Workstation: 24.81, F32 Workstation Updated: 24.67, F33 Workstation Beta: 24.78

System GZIP Decompression

This simple test measures the time to decompress a gzipped tarball (the Qt5 toolkit source package). Learn more via the OpenBenchmarking.org test page.

System GZIP Decompression (Seconds, Fewer Is Better): F32 Workstation: 2.481, F32 Workstation Updated: 2.467, F33 Workstation Beta: 2.469
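
This test simply times the system gzip decompressing a tarball. A hedged sketch of the same idea, with a placeholder file name rather than the exact Qt5 source package the profile uses:

    # Sketch: time the system gzip decompressing a tarball to stdout (discarded).
    # "qt-everywhere-src.tar.gz" is a placeholder input file name.
    import subprocess, time

    start = time.perf_counter()
    subprocess.run(["gzip", "-dc", "qt-everywhere-src.tar.gz"],
                   check=True, stdout=subprocess.DEVNULL)
    print(f"Decompression time: {time.perf_counter() - start:.3f} s")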

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.

NAMD 2.14 - ATPase Simulation - 327,506 Atoms (days/ns, Fewer Is Better): F32 Workstation: 1.21105, F32 Workstation Updated: 1.21355, F33 Workstation Beta: 1.20679

DeepSpeech

Mozilla DeepSpeech is a speech-to-text engine powered by TensorFlow for machine learning and derived from Baidu's Deep Speech research paper. This test profile times the speech-to-text process for a roughly three-minute audio recording. Learn more via the OpenBenchmarking.org test page.

DeepSpeech 0.6 - Acceleration: CPU (Seconds, Fewer Is Better): F32 Workstation: 79.99, F32 Workstation Updated: 79.89, F33 Workstation Beta: 79.54

Timed PHP Compilation

This test times how long it takes to build PHP 7. Learn more via the OpenBenchmarking.org test page.

Timed PHP Compilation 7.4.2 - Time To Compile (Seconds, Fewer Is Better): F32 Workstation: 46.89, F32 Workstation Updated: 46.84, F33 Workstation Beta: 46.63

Unigine Heaven

This test calculates the average frame-rate within the Heaven demo for the Unigine engine. This engine is extremely demanding on the system's graphics card. Learn more via the OpenBenchmarking.org test page.

Unigine Heaven 4.0 - Resolution: 1920 x 1080 - Mode: Fullscreen - Renderer: OpenGL (Frames Per Second, More Is Better): F32 Workstation: 89.57, F32 Workstation Updated: 90.03, F33 Workstation Beta: 89.69

glibc bench

The GNU C Library project provides the core libraries for the GNU system and GNU/Linux systems, as well as many other systems that use Linux as the kernel. These libraries provide critical APIs including ISO C11, POSIX.1-2008, BSD, OS-specific APIs and more. Learn more via the OpenBenchmarking.org test page.

glibc bench 1.0 - Benchmark: log2 (nanoseconds, Fewer Is Better): F32 Workstation: 5.98713, F32 Workstation Updated: 5.95714, F33 Workstation Beta: 5.97794

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.7.3 - Encoder Speed: 2 (Seconds, Fewer Is Better): F32 Workstation: 43.62, F32 Workstation Updated: 43.59, F33 Workstation Beta: 43.41

LuxCoreRender

LuxCoreRender is an open-source physically based renderer. This test profile is focused on running LuxCoreRender on the CPU as opposed to the OpenCL version. Learn more via the OpenBenchmarking.org test page.

LuxCoreRender 2.3 - Scene: DLSC (M samples/sec, More Is Better): F32 Workstation: 2.17, F32 Workstation Updated: 2.17, F33 Workstation Beta: 2.18

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.7.0 - Video Input: Chimera 1080p 10-bit (FPS, More Is Better): F32 Workstation: 133.76, F32 Workstation Updated: 133.15, F33 Workstation Beta: 133.16

SVT-AV1

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 CPU-based multi-threaded video encoder for the AV1 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 0.8 - Encoder Mode: Enc Mode 8 - Input: 1080p (Frames Per Second, More Is Better): F32 Workstation: 42.03, F32 Workstation Updated: 41.85, F33 Workstation Beta: 41.85

Unigine Superposition

This test calculates the average frame-rate within the Superposition demo for the Unigine engine, released in 2017. This engine is extremely demanding on the system's graphics card. Learn more via the OpenBenchmarking.org test page.

Unigine Superposition 1.0 - Resolution: 1920 x 1080 - Mode: Fullscreen - Quality: High - Renderer: OpenGL (Frames Per Second, More Is Better): F32 Workstation: 45.1, F32 Workstation Updated: 45.2, F33 Workstation Beta: 45.3

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: NASNet Mobile (Microseconds, Fewer Is Better): F32 Workstation: 148849, F32 Workstation Updated: 148954, F33 Workstation Beta: 148309
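
The TensorFlow Lite numbers are average CPU inference times. A minimal sketch of timing a single .tflite model with the Python interpreter API; the model path and the zero-filled input are placeholders rather than what the test profile actually feeds in:

    # Sketch: time repeated CPU inference of a .tflite model with the TF Lite interpreter.
    # "model.tflite" is a placeholder; real runs use representative input data.
    import time
    import numpy as np
    import tensorflow as tf

    interpreter = tf.lite.Interpreter(model_path="model.tflite")
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    data = np.zeros(inp["shape"], dtype=inp["dtype"])

    runs = 50
    start = time.perf_counter()
    for _ in range(runs):
        interpreter.set_tensor(inp["index"], data)
        interpreter.invoke()
    print(f"Average inference: {(time.perf_counter() - start) / runs * 1e6:.0f} us")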

GROMACS

The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing on the CPU with the water_GMX50 data. Learn more via the OpenBenchmarking.org test page.

GROMACS 2020.3 - Water Benchmark (Ns Per Day, More Is Better): F32 Workstation: 0.963, F32 Workstation Updated: 0.963, F33 Workstation Beta: 0.959

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.7.3 - Encoder Speed: 0 (Seconds, Fewer Is Better): F32 Workstation: 73.05, F32 Workstation Updated: 73.35, F33 Workstation Beta: 73.09

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

Blender 2.90 - Blend File: Fishy Cat - Compute: CPU-Only (Seconds, Fewer Is Better): F32 Workstation: 160.54, F32 Workstation Updated: 161.16, F33 Workstation Beta: 161.08

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test covering both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.0 - Preset: Exhaustive (Seconds, Fewer Is Better): F32 Workstation: 168.75, F32 Workstation Updated: 168.89, F33 Workstation Beta: 169.39

x265

This is a simple test of the x265 encoder run on the CPU with a sample 1080p video file. Learn more via the OpenBenchmarking.org test page.

x265 3.1.2 - H.265 1080p Video Encoding (Frames Per Second, More Is Better): F32 Workstation: 72.76, F32 Workstation Updated: 72.78, F33 Workstation Beta: 73.03

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: CPU - Model: squeezenet (ms, Fewer Is Better): F32 Workstation: 14.18, F32 Workstation Updated: 14.16, F33 Workstation Beta: 14.13

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 24Aug2020 - Model: Rhodopsin Protein (ns/day, More Is Better): F32 Workstation: 8.452, F32 Workstation Updated: 8.424, F33 Workstation Beta: 8.430

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.5 - Compression Level: 19 (MB/s, More Is Better): F32 Workstation: 30.3, F32 Workstation Updated: 30.2, F33 Workstation Beta: 30.2
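
The Zstd result is level-19 compression throughput. A sketch that times the zstd CLI on an input file and derives MB/s from the input size; "ubuntu.iso" is a placeholder for the sample ISO the test profile downloads:

    # Sketch: time "zstd -19" on an input file and report input MB/s.
    import os, subprocess, time

    src = "ubuntu.iso"
    start = time.perf_counter()
    subprocess.run(["zstd", "-19", "-f", "-o", src + ".zst", src],
                   check=True, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    elapsed = time.perf_counter() - start
    print(f"{os.path.getsize(src) / elapsed / 1e6:.1f} MB/s at level 19")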

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 - Test: OpenMP Streamcluster (Seconds, Fewer Is Better): F32 Workstation: 17.36, F32 Workstation Updated: 17.35, F33 Workstation Beta: 17.30

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

Blender 2.90 - Blend File: BMW27 - Compute: CPU-Only (Seconds, Fewer Is Better): F32 Workstation: 114.03, F32 Workstation Updated: 114.33, F33 Workstation Beta: 114.04

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: Mobilenet Quant (Microseconds, Fewer Is Better): F32 Workstation: 118059, F32 Workstation Updated: 117787, F33 Workstation Beta: 118095

glibc bench

The GNU C Library project provides the core libraries for the GNU system and GNU/Linux systems, as well as many other systems that use Linux as the kernel. These libraries provide critical APIs including ISO C11, POSIX.1-2008, BSD, OS-specific APIs and more. Learn more via the OpenBenchmarking.org test page.

glibc bench 1.0 - Benchmark: pthread_once (nanoseconds, Fewer Is Better): F32 Workstation: 1.31984, F32 Workstation Updated: 1.31641, F33 Workstation Beta: 1.31843

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, Fewer Is Better): F32 Workstation: 4.02, F32 Workstation Updated: 4.03, F33 Workstation Beta: 4.02

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0 - Binary: Pathtracer ISPC - Model: Asian Dragon (Frames Per Second, More Is Better): F32 Workstation: 20.33, F32 Workstation Updated: 20.38, F33 Workstation Beta: 20.37

Caffe

This is a benchmark of the Caffe deep learning framework that currently supports the AlexNet and GoogleNet models with execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.

Caffe 2020-02-13 - Model: AlexNet - Acceleration: CPU - Iterations: 100 (Milli-Seconds, Fewer Is Better): F32 Workstation: 108663, F32 Workstation Updated: 108780, F33 Workstation Beta: 108919

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.2.3 - Target: CPU - Model: SqueezeNet v1.1 (ms, Fewer Is Better): F32 Workstation: 260.73, F32 Workstation Updated: 260.93, F33 Workstation Beta: 260.32

Darktable

Darktable is an open-source photography workflow application. This test will use any system-installed Darktable program or, on Windows, will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.2.1 - Test: Boat - Acceleration: CPU-only (Seconds, Fewer Is Better): F32 Workstation: 14.17, F32 Workstation Updated: 14.15, F33 Workstation Beta: 14.18

glibc bench

The GNU C Library project provides the core libraries for the GNU system and GNU/Linux systems, as well as many other systems that use Linux as the kernel. These libraries provide critical APIs including ISO C11, POSIX.1-2008, BSD, OS-specific APIs and more. Learn more via the OpenBenchmarking.org test page.

glibc bench 1.0 - Benchmark: cos (nanoseconds, Fewer Is Better): F32 Workstation: 38.75, F32 Workstation Updated: 38.82, F33 Workstation Beta: 38.74

Incompact3D

Incompact3d is a Fortran-MPI based, finite-difference, high-performance code for solving the incompressible Navier-Stokes equations along with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.

Incompact3D 2020-09-17 - Input: Cylinder (Seconds, Fewer Is Better): F32 Workstation: 238.62, F32 Workstation Updated: 239.09, F33 Workstation Beta: 238.75

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0 - Binary: Pathtracer - Model: Asian Dragon (Frames Per Second, More Is Better): F32 Workstation: 17.67, F32 Workstation Updated: 17.67, F33 Workstation Beta: 17.70

Basemark GPU

This is a benchmark of Basemark GPU. For this test profile to work, you must have a valid license/copy of BasemarkGPU in your Phoronix Test Suite download cache. This test profile simply automates the execution of BasemarkGPU and you must already have the Windows .zip or Linux .tar.gz in the download cache. Learn more via the OpenBenchmarking.org test page.

Basemark GPU 1.2 - Renderer: Vulkan - Resolution: 3840 x 2160 - Graphics Preset: High (FPS, More Is Better): F32 Workstation: 45.45, F32 Workstation Updated: 45.37, F33 Workstation Beta: 45.41

glibc bench

The GNU C Library project provides the core libraries for the GNU system and GNU/Linux systems, as well as many other systems that use Linux as the kernel. These libraries provide critical APIs including ISO C11, POSIX.1-2008, BSD, OS-specific APIs and more. Learn more via the OpenBenchmarking.org test page.

glibc bench 1.0 - Benchmark: sin (nanoseconds, Fewer Is Better): F32 Workstation: 38.39, F32 Workstation Updated: 38.33, F33 Workstation Beta: 38.33

Darktable

Darktable is an open-source photography workflow application. This test will use any system-installed Darktable program or, on Windows, will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.2.1 - Test: Masskrug - Acceleration: CPU-only (Seconds, Fewer Is Better): F32 Workstation: 4.074, F32 Workstation Updated: 4.073, F33 Workstation Beta: 4.067

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: Mobilenet Float (Microseconds, Fewer Is Better): F32 Workstation: 114879, F32 Workstation Updated: 114742, F33 Workstation Beta: 114727

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.7.0 - Video Input: Summer Nature 4K (FPS, More Is Better): F32 Workstation: 185.43, F32 Workstation Updated: 185.52, F33 Workstation Beta: 185.58

glibc bench

The GNU C Library project provides the core libraries for the GNU system and GNU/Linux systems, as well as many other systems that use Linux as the kernel. These libraries provide critical APIs including ISO C11, POSIX.1-2008, BSD, OS-specific APIs and more. Learn more via the OpenBenchmarking.org test page.

glibc bench 1.0 - Benchmark: tanh (nanoseconds, Fewer Is Better): F32 Workstation: 10.41, F32 Workstation Updated: 10.41, F33 Workstation Beta: 10.40

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: SqueezeNet (Microseconds, Fewer Is Better): F32 Workstation: 167977, F32 Workstation Updated: 167905, F33 Workstation Beta: 167925

glibc bench

The GNU C Library project provides the core libraries for the GNU system and GNU/Linux systems, as well as many other systems that use Linux as the kernel. These libraries provide critical APIs including ISO C11, POSIX.1-2008, BSD, OS-specific APIs and more. Learn more via the OpenBenchmarking.org test page.

glibc bench 1.0 - Benchmark: atanh (nanoseconds, Fewer Is Better): F32 Workstation: 9.40863, F32 Workstation Updated: 9.40694, F33 Workstation Beta: 9.41062

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: Inception V4 (Microseconds, Fewer Is Better): F32 Workstation: 2378650, F32 Workstation Updated: 2377947, F33 Workstation Beta: 2378677

TensorFlow Lite 2020-08-23 - Model: Inception ResNet V2 (Microseconds, Fewer Is Better): F32 Workstation: 2144707, F32 Workstation Updated: 2144570, F33 Workstation Beta: 2145147

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: CPU - Model: shufflenet-v2 (ms, Fewer Is Better): F32 Workstation: 3.23, F32 Workstation Updated: 3.20, F33 Workstation Beta: 3.20

Renaissance

Renaissance is a suite of benchmarks designed to test the Java Virtual Machine, with workloads ranging from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.10.0 - Test: Apache Spark PageRank (ms, Fewer Is Better): F32 Workstation: 3732.07, F32 Workstation Updated: 3559.83, F33 Workstation Beta: 3086.50

174 Results Shown

SQLite
RealSR-NCNN
LevelDB:
  Seq Fill:
    Microseconds Per Op
    MB/s
  Rand Delete:
    Microseconds Per Op
  Rand Fill:
    MB/s
    Microseconds Per Op
Systemd Total Boot Time
LevelDB:
  Overwrite:
    MB/s
    Microseconds Per Op
G'MIC
Renaissance
Systemd Total Boot Time
Renaissance
SQLite Speedtest
LibRaw
Renaissance
glibc bench
Systemd Total Boot Time
glibc bench:
  ffs
  modf
PyPerformance
glibc bench
Hierarchical INTegration
PyPerformance
DaCapo Benchmark
Timed FFmpeg Compilation
Renaissance:
  Apache Spark ALS
  Rand Forest
DaCapo Benchmark
Bork File Encrypter
PyPerformance
LibreOffice
Systemd Total Boot Time
Renaissance
GIMP:
  auto-levels
  unsharp-mask
  resize
TSCP
Perl Benchmarks:
  Pod2html
  Interpreter
Mobile Neural Network
Dolfyn
G'MIC
PHPBench
GIMP
Timed Linux Kernel Compilation
PyPerformance
DaCapo Benchmark
NCNN
PyPerformance:
  2to3
  nbody
Crafty
libavif avifenc
ASTC Encoder
Rodinia
BYTE Unix Benchmark
NCNN:
  CPU - mnasnet
  CPU - googlenet
Renaissance
AI Benchmark Alpha
libavif avifenc
glibc bench
Mobile Neural Network
NCNN:
  CPU - alexnet
  CPU - vgg16
Cryptsetup
Timed MPlayer Compilation
PyPerformance
NCNN
AI Benchmark Alpha
PyPerformance
NCNN
Basemark GPU
Caffe
dav1d
LuxCoreRender
LevelDB
dav1d
WebP Image Encode
Git
Timed MAFFT Alignment
PyPerformance
WebP Image Encode
PyPerformance
OpenVKL
AI Benchmark Alpha
Zstd Compression
Stockfish
PyPerformance
PyBench
WebP Image Encode
SVT-AV1
WebP Image Encode
ASTC Encoder
TNN
Hugin
Darktable
glibc bench
G'MIC
Rodinia
WebP Image Encode
NCNN
Tesseract OCR
High Performance Conjugate Gradient
Timed GDB GNU Debugger Compilation
PyPerformance
LevelDB:
  Seek Rand
  Rand Read
SVT-AV1
Caffe
glibc bench
Timed Apache Compilation
Unigine Superposition
Darktable
glibc bench
Cryptsetup
PyPerformance
Rodinia
Intel Open Image Denoise
Mobile Neural Network
Caffe
Rodinia
NCNN
Systemd Total Boot Time
RawTherapee
NCNN
Mobile Neural Network
ASTC Encoder
NCNN
System GZIP Decompression
NAMD
DeepSpeech
Timed PHP Compilation
Unigine Heaven
glibc bench
libavif avifenc
LuxCoreRender
dav1d
SVT-AV1
Unigine Superposition
TensorFlow Lite
GROMACS
libavif avifenc
Blender
ASTC Encoder
x265
NCNN
LAMMPS Molecular Dynamics Simulator
Zstd Compression
Rodinia
Blender
TensorFlow Lite
glibc bench
NCNN
Embree
Caffe
TNN
Darktable
glibc bench
Incompact3D
Embree
Basemark GPU
glibc bench
Darktable
TensorFlow Lite
dav1d
glibc bench
TensorFlow Lite
glibc bench
TensorFlow Lite:
  Inception V4
  Inception ResNet V2
NCNN
Renaissance