Fedora 32 vs. Fedora 33 Beta Benchmarks

Intel Core i9-10900K testing on Fedora 32 and Fedora 33 Beta by Michael Larabel.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2009290-FI-FEDORA85348
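If you prefer to script that comparison instead of typing the command by hand, a minimal Python sketch is shown below; it simply shells out to the same command and assumes the phoronix-test-suite client is already installed and on your PATH.

    import subprocess

    # Download result file 2009290-FI-FEDORA85348 and run the same tests locally
    # so the Phoronix Test Suite can compare this system against the results above.
    subprocess.run(["phoronix-test-suite", "benchmark", "2009290-FI-FEDORA85348"],
                   check=True)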
Test Runs

F32 Workstation: September 26 2020 (9 Hours, 27 Minutes test duration)
F32 Workstation Updated: September 27 2020 (9 Hours, 35 Minutes test duration)
F33 Workstation Beta: September 28 2020 (9 Hours, 51 Minutes test duration)
Fedora Workstation 32 Updated: 9 Hours, 38 Minutes test duration



System Details

Shared hardware across all runs:
- Processor: Intel Core i9-10900K @ 5.30GHz (10 Cores / 20 Threads)
- Motherboard: Gigabyte Z490 AORUS MASTER (F3 BIOS)
- Chipset: Intel Comet Lake PCH
- Memory: 16GB
- Disk: Samsung SSD 970 EVO 250GB
- Graphics: Gigabyte AMD Radeon RX 5500/5500M / Pro 5500M 8GB (1890/875MHz)
- Audio: Realtek ALC1220
- Monitor: DELL P2415Q
- Network: Intel Device 15f3 + Intel Wi-Fi 6 AX201 / Intel + Intel-AC 9462/9560
- Screen Resolution: 3840x2160

Per-run software:
- F32 Workstation: Fedora 32, kernel 5.6.6-300.fc32.x86_64 (x86_64), GNOME Shell 3.36.1, X Server + Wayland, OpenGL 4.6 Mesa 20.0.4 (LLVM 10.0.0), GCC 10.0.1 20200328 + Clang 10.0.1, ext4 file-system
- F32 Workstation Updated: Fedora 32, kernel 5.8.11-200.fc32.x86_64 (x86_64), GNOME Shell 3.36.6, X Server 1.20.8 + Wayland, modesetting 1.20.8 display driver, OpenGL 4.6 Mesa 20.1.8 (LLVM 10.0.1), GCC 10.2.1 20200723 + Clang 10.0.1, ext4 file-system
- F33 Workstation Beta: Fedora 33, kernel 5.8.11-300.fc33.x86_64 (x86_64), GNOME Shell 3.38.0, X Server + Wayland, OpenGL 4.6 Mesa 20.2.0-rc4 (LLVM 11.0.0), GCC 10.2.1 20200826 + Clang 11.0.0, btrfs file-system

Compiler Details: --build=x86_64-redhat-linux --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-initfini-array --enable-languages=c,c++,fortran,objc,obj-c++,ada,go,d,lto --enable-multilib --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=i686 --with-gcc-major-version-only --with-isl --with-linker-hash-style=gnu --with-tune=generic --without-cuda-driver

Disk Details:
- F32 Workstation: NONE / relatime,rw,seclabel
- F32 Workstation Updated: NONE / relatime,rw,seclabel
- F33 Workstation Beta: NONE / relatime,rw,seclabel,space_cache,ssd,subvol=/home,subvolid=256

Processor Details: Scaling Governor: intel_pstate powersave - CPU Microcode: 0xc8

Java Details:
- F32 Workstation: OpenJDK Runtime Environment (build 1.8.0_242-b08)
- F32 Workstation Updated: OpenJDK Runtime Environment (build 1.8.0_265-b01)
- F33 Workstation Beta: OpenJDK Runtime Environment 18.9 (build 11.0.9-ea+6)

Python Details:
- F32 Workstation: Python 3.8.2
- F32 Workstation Updated: Python 3.8.5
- F33 Workstation Beta: Python 3.9.0rc2

Security Details:
- F32 Workstation: SELinux + itlb_multihit: KVM: Mitigation of Split huge pages + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + tsx_async_abort: Not affected
- F32 Workstation Updated: SELinux + itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
- F33 Workstation Beta: SELinux + itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Result Overview: composite Phoronix Test Suite chart of the relative performance of F32 Workstation, F32 Workstation Updated, and F33 Workstation Beta across all tested workloads, led by SQLite, RealSR-NCNN, LevelDB, SQLite Speedtest, and LibRaw and running through the remaining test profiles to Caffe and TensorFlow Lite.

Combined Result Table: the full side-by-side data matrix for F32 Workstation, F32 Workstation Updated, F33 Workstation Beta, and Fedora Workstation 32 Updated (including additional workloads such as the timed compilation, image-editing, and encoder tests) is available via the OpenBenchmarking.org result file noted above; the individual results are broken out test by test below.

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.7.0, Video Input: Chimera 1080p (FPS, more is better): F33 Workstation Beta 791.70; F32 Workstation Updated 783.94; F32 Workstation 778.80

dav1d 0.7.0, Video Input: Summer Nature 4K (FPS, more is better): F33 Workstation Beta 185.58; F32 Workstation Updated 185.52; F32 Workstation 185.43

dav1d 0.7.0, Video Input: Summer Nature 1080p (FPS, more is better): F33 Workstation Beta 737.41; F32 Workstation Updated 736.88; F32 Workstation 726.16

dav1d 0.7.0, Video Input: Chimera 1080p 10-bit (FPS, more is better): F32 Workstation 133.76; F33 Workstation Beta 133.16; F32 Workstation Updated 133.15

Basemark GPU

This is a benchmark of Basemark GPU. For this test profile to work, you must have a valid license/copy of BasemarkGPU in your Phoronix Test Suite download cache. This test profile simply automates the execution of BasemarkGPU and you must already have the Windows .zip or Linux .tar.gz in the download cache. Learn more via the OpenBenchmarking.org test page.

Basemark GPU 1.2, Renderer: Vulkan - Resolution: 3840 x 2160 - Graphics Preset: High (FPS, more is better): F32 Workstation 45.45; F33 Workstation Beta 45.41; F32 Workstation Updated 45.37

Basemark GPU 1.2, Renderer: Vulkan - Resolution: 3840 x 2160 - Graphics Preset: Medium (FPS, more is better): F33 Workstation Beta 222.83; F32 Workstation 221.19; F32 Workstation Updated 219.01

Unigine Heaven

This test calculates the average frame-rate within the Heaven demo for the Unigine engine. This engine is extremely demanding on the system's graphics card. Learn more via the OpenBenchmarking.org test page.

Unigine Heaven 4.0, Resolution: 1920 x 1080 - Mode: Fullscreen - Renderer: OpenGL (Frames Per Second, more is better): F32 Workstation Updated 90.03; F33 Workstation Beta 89.69; F32 Workstation 89.57

Unigine Superposition

This test calculates the average frame-rate within the Superposition demo for the Unigine engine, released in 2017. This engine is extremely demanding on the system's graphics card. Learn more via the OpenBenchmarking.org test page.

Unigine Superposition 1.0, Resolution: 1920 x 1080 - Mode: Fullscreen - Quality: Low - Renderer: OpenGL (Frames Per Second, more is better): F33 Workstation Beta 132.6; F32 Workstation Updated 132.3; F32 Workstation 131.6

Unigine Superposition 1.0, Resolution: 1920 x 1080 - Mode: Fullscreen - Quality: High - Renderer: OpenGL (Frames Per Second, more is better): F33 Workstation Beta 45.3; F32 Workstation Updated 45.2; F32 Workstation 45.1

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0, Binary: Pathtracer - Model: Asian Dragon (Frames Per Second, more is better): F33 Workstation Beta 17.70; F32 Workstation Updated 17.67; F32 Workstation 17.67

Embree 3.9.0, Binary: Pathtracer ISPC - Model: Asian Dragon (Frames Per Second, more is better): F32 Workstation Updated 20.38; F33 Workstation Beta 20.37; F32 Workstation 20.33

SVT-AV1

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 CPU-based multi-threaded video encoder for the AV1 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 0.8, Encoder Mode: Enc Mode 0 - Input: 1080p (Frames Per Second, more is better): F32 Workstation 0.167; F33 Workstation Beta 0.166; F32 Workstation Updated 0.165

SVT-AV1 0.8, Encoder Mode: Enc Mode 4 - Input: 1080p (Frames Per Second, more is better): F32 Workstation 4.554; F32 Workstation Updated 4.541; F33 Workstation Beta 4.516

SVT-AV1 0.8, Encoder Mode: Enc Mode 8 - Input: 1080p (Frames Per Second, more is better): F32 Workstation 42.03; F33 Workstation Beta 41.85; F32 Workstation Updated 41.85

x265

This is a simple test of the x265 encoder run on the CPU with a sample 1080p video file. Learn more via the OpenBenchmarking.org test page.

x265 3.1.2, H.265 1080p Video Encoding (Frames Per Second, more is better): F33 Workstation Beta 73.03; F32 Workstation Updated 72.78; F32 Workstation 72.76

High Performance Conjugate Gradient

HPCG is the High Performance Conjugate Gradient, a newer scientific benchmark from Sandia National Labs focused on supercomputer testing with modern real-world workloads compared to HPCC. Learn more via the OpenBenchmarking.org test page.

High Performance Conjugate Gradient 3.1 (GFLOP/s, more is better): F33 Workstation Beta 4.38740; F32 Workstation 4.35057; F32 Workstation Updated 4.34479

Intel Open Image Denoise

Open Image Denoise is a denoising library for ray-tracing and is part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

Intel Open Image Denoise 1.2.0, Scene: Memorial (Images / Sec, more is better): F32 Workstation Updated 11.10; F33 Workstation Beta 11.09; F32 Workstation 11.03

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library, which offers high-performance volume computation kernels and is part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenVKL 0.9, Benchmark: vklBenchmark (Items / Sec, more is better): F33 Workstation Beta 193.94; F32 Workstation 192.61; F32 Workstation Updated 191.42

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup, PBKDF2-whirlpool (Iterations Per Second, more is better): F32 Workstation Updated 880666; F32 Workstation 880663; F33 Workstation Beta 874299

Cryptsetup 2.3.4, PBKDF2-sha512 (Iterations Per Second, more is better): Fedora Workstation 32 Updated 2083292; F32 Workstation Updated 2077772; F33 Workstation Beta 2045338
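As a rough illustration of what the iterations-per-second figure means, the sketch below times PBKDF2-HMAC-SHA512 in pure Python with hashlib. It is only a sketch of the metric, not the cryptsetup benchmark itself, and the iteration count chosen here is arbitrary.

    import hashlib, os, time

    # Time PBKDF2-HMAC-SHA512 and report iterations per second.
    # Illustrative only; cryptsetup benchmarks its own PBKDF2 implementation.
    password, salt = b"benchmark", os.urandom(16)
    iterations = 500_000  # arbitrary work factor for this sketch
    start = time.perf_counter()
    hashlib.pbkdf2_hmac("sha512", password, salt, iterations)
    elapsed = time.perf_counter() - start
    print(f"{iterations / elapsed:,.0f} PBKDF2-sha512 iterations/second")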

BYTE Unix Benchmark

This is a test of BYTE. Learn more via the OpenBenchmarking.org test page.

BYTE Unix Benchmark 3.6, Computational Test: Dhrystone 2 (LPS, more is better): F32 Workstation 52011851.2; F32 Workstation Updated 51604321.4; F33 Workstation Beta 50797941.3

LuxCoreRender

LuxCoreRender is an open-source physically based renderer. This test profile is focused on running LuxCoreRender on the CPU as opposed to the OpenCL version. Learn more via the OpenBenchmarking.org test page.

LuxCoreRender 2.3, Scene: DLSC (M samples/sec, more is better): F33 Workstation Beta 2.18; F32 Workstation Updated 2.17; F32 Workstation 2.17

LuxCoreRender 2.3, Scene: Rainbow Colors and Prism (M samples/sec, more is better): F32 Workstation 2.50; F32 Workstation Updated 2.48; F33 Workstation Beta 2.46

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.5, Compression Level: 3 (MB/s, more is better): F32 Workstation Updated 2900.5; F32 Workstation 2894.8; F33 Workstation Beta 2863.3

Zstd Compression 1.4.5, Compression Level: 19 (MB/s, more is better): F32 Workstation 30.3; F33 Workstation Beta 30.2; F32 Workstation Updated 30.2
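The general shape of such a measurement is sketched below: compress one input file at levels 3 and 19 and report MB/s. It shells out to the zstd command-line tool and uses a placeholder input path, so treat it as an illustration rather than the exact test profile.

    import os, subprocess, time

    INPUT = "ubuntu.iso"  # placeholder; the test profile uses a sample Ubuntu ISO

    for level in (3, 19):
        out = f"{INPUT}.lvl{level}.zst"
        start = time.perf_counter()
        # -f overwrites any existing output; -T0 enables multithreaded compression
        subprocess.run(["zstd", f"-{level}", "-T0", "-f", INPUT, "-o", out], check=True)
        elapsed = time.perf_counter() - start
        mbps = os.path.getsize(INPUT) / (1024 * 1024) / elapsed
        print(f"level {level}: {mbps:.1f} MB/s")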

LevelDB

LevelDB is a key-value storage library developed by Google that supports making use of Snappy for data compression and has other modern features. Learn more via the OpenBenchmarking.org test page.

LevelDB 1.22, Benchmark: Overwrite (MB/s, more is better): F32 Workstation 52.5; F32 Workstation Updated 51.2; F33 Workstation Beta 37.1

LevelDB 1.22, Benchmark: Random Fill (MB/s, more is better): F32 Workstation 52.8; F32 Workstation Updated 51.0; F33 Workstation Beta 36.8

LevelDB 1.22, Benchmark: Sequential Fill (MB/s, more is better): F32 Workstation 54.4; F32 Workstation Updated 51.9; F33 Workstation Beta 35.8

LibRaw

LibRaw is a RAW image decoder for digital camera photos. This test profile runs LibRaw's post-processing benchmark. Learn more via the OpenBenchmarking.org test page.

LibRaw 0.20, Post-Processing Benchmark (Mpix/sec, more is better): F32 Workstation 45.92; F33 Workstation Beta 42.23; F32 Workstation Updated 38.44

Crafty

This is a performance test of Crafty, an advanced open-source chess engine. Learn more via the OpenBenchmarking.org test page.

Crafty 25.2, Elapsed Time (Nodes Per Second, more is better): F33 Workstation Beta 10625150; F32 Workstation Updated 10452567; F32 Workstation 10356565

TSCP

This is a performance test of TSCP, Tom Kerrigan's Simple Chess Program, which has a built-in performance benchmark. Learn more via the OpenBenchmarking.org test page.

TSCP 1.81, AI Chess Performance (Nodes Per Second, more is better): F33 Workstation Beta 1700991; F32 Workstation Updated 1672068; F32 Workstation 1627597

Stockfish

This is a test of Stockfish, an advanced C++11 chess benchmark that can scale up to 128 CPU cores. Learn more via the OpenBenchmarking.org test page.

Stockfish 9, Total Time (Nodes Per Second, more is better): F32 Workstation Updated 37544563; F33 Workstation Beta 37081066; F32 Workstation 37073146

GROMACS

The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing on the CPU with the water_GMX50 data. Learn more via the OpenBenchmarking.org test page.

GROMACS 2020.3, Water Benchmark (Ns Per Day, more is better): F32 Workstation Updated 0.963; F32 Workstation 0.963; F33 Workstation Beta 0.959

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 24Aug2020, Model: Rhodopsin Protein (ns/day, more is better): F32 Workstation 8.452; F33 Workstation Beta 8.430; F32 Workstation Updated 8.424

Hierarchical INTegration

This test runs the U.S. Department of Energy's Ames Laboratory Hierarchical INTegration (HINT) benchmark. Learn more via the OpenBenchmarking.org test page.

Hierarchical INTegration 1.0, Test: FLOAT (QUIPs, more is better): F33 Workstation Beta 497803514.88; F32 Workstation 467186547.12; F32 Workstation Updated 451902708.21

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

AI Benchmark Alpha 0.1.2, Device Inference Score (Score, more is better): F32 Workstation 1456; F32 Workstation Updated 1424

AI Benchmark Alpha 0.1.2, Device Training Score (Score, more is better): F32 Workstation 1160; F32 Workstation Updated 1145

AI Benchmark Alpha 0.1.2, Device AI Score (Score, more is better): F32 Workstation 2616; F32 Workstation Updated 2569

PHPBench

PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to bench various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. The number of iterations used is 1,000,000. Learn more via the OpenBenchmarking.org test page.

PHPBench 0.8.1, PHP Benchmark Suite (Score, more is better): F33 Workstation Beta 879324; F32 Workstation Updated 853865; F32 Workstation 848415

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.

NAMD 2.14, ATPase Simulation - 327,506 Atoms (days/ns, fewer is better): F33 Workstation Beta 1.20679; F32 Workstation 1.21105; F32 Workstation Updated 1.21355

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1, Encode Settings: Default (Encode Time - Seconds, fewer is better): F32 Workstation Updated 1.250; F33 Workstation Beta 1.250; F32 Workstation 1.263

WebP Image Encode 1.1, Encode Settings: Quality 100 (Encode Time - Seconds, fewer is better): F33 Workstation Beta 1.938; F32 Workstation Updated 1.944; F32 Workstation 1.962

WebP Image Encode 1.1, Encode Settings: Quality 100, Lossless (Encode Time - Seconds, fewer is better): F32 Workstation 14.79; F32 Workstation Updated 14.86; F33 Workstation Beta 15.01

WebP Image Encode 1.1, Encode Settings: Quality 100, Highest Compression (Encode Time - Seconds, fewer is better): F32 Workstation Updated 5.881; F33 Workstation Beta 5.881; F32 Workstation 5.952

WebP Image Encode 1.1, Encode Settings: Quality 100, Lossless, Highest Compression (Encode Time - Seconds, fewer is better): F32 Workstation 30.51; F33 Workstation Beta 30.89; F32 Workstation Updated 30.93
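For reference, encode settings of this kind map onto cwebp command-line flags roughly as sketched below. The input filename is a placeholder and the flag combinations are illustrative rather than the exact arguments used by the test profile.

    import subprocess, time

    INPUT = "sample.jpg"  # placeholder for the 6000x4000 JPEG input

    settings = {
        "Default": [],
        "Quality 100": ["-q", "100"],
        "Quality 100, Lossless": ["-lossless", "-q", "100"],
        "Quality 100, Highest Compression": ["-q", "100", "-m", "6"],
        "Quality 100, Lossless, Highest Compression": ["-lossless", "-q", "100", "-m", "6"],
    }

    for name, flags in settings.items():
        start = time.perf_counter()
        # cwebp: -q sets quality, -lossless enables lossless mode, -m picks the
        # compression method (6 = slowest/best), -o names the output file
        subprocess.run(["cwebp", *flags, INPUT, "-o", "out.webp"], check=True)
        print(f"{name}: {time.perf_counter() - start:.3f} seconds")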

LevelDB

LevelDB is a key-value storage library developed by Google that supports making use of Snappy for data compression and has other modern features. Learn more via the OpenBenchmarking.org test page.

LevelDB 1.22, Benchmark: Hot Read (Microseconds Per Op, fewer is better): F33 Workstation Beta 8.660; F32 Workstation Updated 8.758; F32 Workstation 8.800

LevelDB 1.22, Benchmark: Overwrite (Microseconds Per Op, fewer is better): F32 Workstation 42.02; F32 Workstation Updated 43.16; F33 Workstation Beta 59.44

LevelDB 1.22, Benchmark: Random Fill (Microseconds Per Op, fewer is better): F32 Workstation 41.83; F32 Workstation Updated 43.30; F33 Workstation Beta 59.91

LevelDB 1.22, Benchmark: Random Read (Microseconds Per Op, fewer is better): F32 Workstation Updated 8.796; F33 Workstation Beta 8.800; F32 Workstation 8.874

LevelDB 1.22, Benchmark: Seek Random (Microseconds Per Op, fewer is better): F33 Workstation Beta 10.63; F32 Workstation Updated 10.69; F32 Workstation 10.72

LevelDB 1.22, Benchmark: Random Delete (Microseconds Per Op, fewer is better): F32 Workstation 40.70; F32 Workstation Updated 42.76; F33 Workstation Beta 61.27

LevelDB 1.22, Benchmark: Sequential Fill (Microseconds Per Op, fewer is better): F32 Workstation 40.67; F32 Workstation Updated 42.62; F33 Workstation Beta 61.82

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23, Model: SqueezeNet (Microseconds, fewer is better): F32 Workstation Updated 167905; F33 Workstation Beta 167925; F32 Workstation 167977

TensorFlow Lite 2020-08-23, Model: Inception V4 (Microseconds, fewer is better): F32 Workstation Updated 2377947; F32 Workstation 2378650; F33 Workstation Beta 2378677

TensorFlow Lite 2020-08-23, Model: NASNet Mobile (Microseconds, fewer is better): F33 Workstation Beta 148309; F32 Workstation 148849; F32 Workstation Updated 148954

TensorFlow Lite 2020-08-23, Model: Mobilenet Float (Microseconds, fewer is better): F33 Workstation Beta 114727; F32 Workstation Updated 114742; F32 Workstation 114879

TensorFlow Lite 2020-08-23, Model: Mobilenet Quant (Microseconds, fewer is better): F32 Workstation Updated 117787; F32 Workstation 118059; F33 Workstation Beta 118095

TensorFlow Lite 2020-08-23, Model: Inception ResNet V2 (Microseconds, fewer is better): F32 Workstation Updated 2144570; F32 Workstation 2144707; F33 Workstation Beta 2145147
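A minimal sketch of how an average-inference-time figure like this can be produced with the TensorFlow Lite Python interpreter is shown below; the model path is a placeholder, and the Phoronix test profile uses its own native benchmark binary rather than this script.

    import time
    import numpy as np
    import tensorflow as tf

    # Placeholder model path; any .tflite model (e.g. a SqueezeNet export) works.
    interpreter = tf.lite.Interpreter(model_path="squeezenet.tflite")
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    dummy = np.zeros(inp["shape"], dtype=inp["dtype"])

    runs = 50
    start = time.perf_counter()
    for _ in range(runs):
        interpreter.set_tensor(inp["index"], dummy)
        interpreter.invoke()
    avg_us = (time.perf_counter() - start) / runs * 1e6
    print(f"average inference time: {avg_us:.0f} microseconds")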

Caffe

This is a benchmark of the Caffe deep learning framework and currently supports the AlexNet and GoogleNet models with execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.

Caffe 2020-02-13, Model: AlexNet - Acceleration: CPU - Iterations: 100 (Milli-Seconds, fewer is better): F32 Workstation 108663; F32 Workstation Updated 108780; F33 Workstation Beta 108919

Caffe 2020-02-13, Model: AlexNet - Acceleration: CPU - Iterations: 200 (Milli-Seconds, fewer is better): F33 Workstation Beta 213793; F32 Workstation 217352; F32 Workstation Updated 217359

Caffe 2020-02-13, Model: GoogleNet - Acceleration: CPU - Iterations: 100 (Milli-Seconds, fewer is better): F32 Workstation 241304; F32 Workstation Updated 241538; F33 Workstation Beta 243255

Caffe 2020-02-13, Model: GoogleNet - Acceleration: CPU - Iterations: 200 (Milli-Seconds, fewer is better): F32 Workstation 482825; F32 Workstation Updated 483369; F33 Workstation Beta 485875

PyBench

This test profile reports the total time of the different average timed test results from PyBench. PyBench reports average test times for different functions such as BuiltinFunctionCalls and NestedForLoops, with this total result providing a rough estimate as to Python's average performance on a given system. This test profile runs PyBench each time for 20 rounds. Learn more via the OpenBenchmarking.org test page.

PyBench 2018-02-16, Total For Average Test Times (Milliseconds, fewer is better): F33 Workstation Beta 886; F32 Workstation 893; F32 Workstation Updated 897
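The flavour of the measurement is easy to reproduce with the standard timeit module: time a handful of micro-operations over several rounds, keep each function's average, and sum those averages. This is only a sketch of the idea, not PyBench's actual test list.

    import timeit

    # A few stand-in micro-benchmarks in the spirit of BuiltinFunctionCalls,
    # NestedForLoops, etc.; PyBench's real suite is much larger.
    tests = {
        "builtin_calls": "len('x'); abs(-1); min(1, 2)",
        "nested_loops": "for i in range(10):\n    for j in range(10):\n        pass",
        "string_concat": "s = ''\nfor i in range(50):\n    s += 'a'",
    }

    rounds, total_ms = 20, 0.0
    for name, stmt in tests.items():
        times = timeit.repeat(stmt, repeat=rounds, number=1000)
        avg_ms = sum(times) / rounds * 1000
        total_ms += avg_ms
        print(f"{name}: {avg_ms:.2f} ms")
    print(f"total of averages: {total_ms:.2f} ms")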

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0, Benchmark: go (Milliseconds, fewer is better): F33 Workstation Beta 211; F32 Workstation 215; F32 Workstation Updated 218

PyPerformance 1.0.0, Benchmark: 2to3 (Milliseconds, fewer is better): F32 Workstation 264; F32 Workstation Updated 265; F33 Workstation Beta 272

PyPerformance 1.0.0, Benchmark: chaos (Milliseconds, fewer is better): F32 Workstation 94.8; F32 Workstation Updated 96.0; F33 Workstation Beta 96.0

PyPerformance 1.0.0, Benchmark: float (Milliseconds, fewer is better): F33 Workstation Beta 94.7; F32 Workstation 95.9; F32 Workstation Updated 96.4

PyPerformance 1.0.0, Benchmark: nbody (Milliseconds, fewer is better): F32 Workstation 108; F32 Workstation Updated 108; F33 Workstation Beta 111

PyPerformance 1.0.0, Benchmark: pathlib (Milliseconds, fewer is better): F33 Workstation Beta 14.5; F32 Workstation Updated 15.6; F32 Workstation 15.9

PyPerformance 1.0.0, Benchmark: raytrace (Milliseconds, fewer is better): F32 Workstation 423; F33 Workstation Beta 425; F32 Workstation Updated 427

PyPerformance 1.0.0, Benchmark: json_loads (Milliseconds, fewer is better): F33 Workstation Beta 20.5; F32 Workstation 22.3; F32 Workstation Updated 23.3

PyPerformance 1.0.0, Benchmark: crypto_pyaes (Milliseconds, fewer is better): F33 Workstation Beta 95.8; F32 Workstation 100.0; F32 Workstation Updated 101.0

PyPerformance 1.0.0, Benchmark: regex_compile (Milliseconds, fewer is better): F33 Workstation Beta 149; F32 Workstation 150; F32 Workstation Updated 150

PyPerformance 1.0.0, Benchmark: python_startup (Milliseconds, fewer is better): F33 Workstation Beta 6.91; F32 Workstation 6.94; F32 Workstation Updated 7.01

PyPerformance 1.0.0, Benchmark: django_template (Milliseconds, fewer is better): F32 Workstation 44.4; F32 Workstation Updated 44.6; F33 Workstation Beta 45.0

PyPerformance 1.0.0, Benchmark: pickle_pure_python (Milliseconds, fewer is better): F32 Workstation 371; F33 Workstation Beta 374; F32 Workstation Updated 378

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.10.0, Test: Scala Dotty (ms, fewer is better): F32 Workstation Updated 1007.09; F32 Workstation 1013.04; F33 Workstation Beta 1364.50

Renaissance 0.10.0, Test: Random Forest (ms, fewer is better): F32 Workstation 1508.22; F32 Workstation Updated 1512.19; F33 Workstation Beta 1622.64

Renaissance 0.10.0, Test: Apache Spark ALS (ms, fewer is better): F32 Workstation Updated 1642.11; F32 Workstation 1647.29; F33 Workstation Beta 1772.41

Renaissance 0.10.0, Test: Savina Reactors.IO (ms, fewer is better): F32 Workstation Updated 11682.15; F33 Workstation Beta 13907.63; F32 Workstation 14395.21

Renaissance 0.10.0, Test: Apache Spark PageRank (ms, fewer is better): F33 Workstation Beta 3086.50; F32 Workstation Updated 3559.83; F32 Workstation 3732.07

Renaissance 0.10.0, Test: Twitter HTTP Requests (ms, fewer is better): F32 Workstation 1881.03; F32 Workstation Updated 1918.09; F33 Workstation Beta 2229.41

Renaissance 0.10.0, Test: Akka Unbalanced Cobwebbed Tree (ms, fewer is better): F32 Workstation Updated 9004.04; F33 Workstation Beta 9163.82; F32 Workstation 9211.35

Renaissance 0.10.0, Test: Genetic Algorithm Using Jenetics + Futures (ms, fewer is better): F32 Workstation 1244.37; F32 Workstation Updated 1284.90; F33 Workstation Beta 1310.73

Mobile Neural Network

MNN is the Mobile Neural Network, a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17, Model: SqueezeNetV1.0 (ms, fewer is better): F32 Workstation 4.214; F33 Workstation Beta 4.278; F32 Workstation Updated 4.395

Mobile Neural Network 2020-09-17, Model: resnet-v2-50 (ms, fewer is better): F32 Workstation 24.91; F33 Workstation Beta 25.20; F32 Workstation Updated 25.43

Mobile Neural Network 2020-09-17, Model: mobilenet-v1-1.0 (ms, fewer is better): F33 Workstation Beta 2.873; F32 Workstation 2.886; F32 Workstation Updated 2.890

Mobile Neural Network 2020-09-17, Model: inception-v3 (ms, fewer is better): F33 Workstation Beta 29.37; F32 Workstation 29.43; F32 Workstation Updated 29.55

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916, Target: CPU - Model: squeezenet (ms, fewer is better): F33 Workstation Beta 14.13; F32 Workstation Updated 14.16; F32 Workstation 14.18

NCNN 20200916, Target: CPU - Model: mobilenet (ms, fewer is better): F32 Workstation Updated 16.38; F33 Workstation Beta 16.41; F32 Workstation 16.48

NCNN 20200916, Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better): F32 Workstation 5.02; F33 Workstation Beta 5.02; F32 Workstation Updated 5.05

NCNN 20200916, Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better): F32 Workstation 4.02; F33 Workstation Beta 4.02; F32 Workstation Updated 4.03

NCNN 20200916, Target: CPU - Model: shufflenet-v2 (ms, fewer is better): F32 Workstation Updated 3.20; F33 Workstation Beta 3.20; F32 Workstation 3.23

NCNN 20200916, Target: CPU - Model: mnasnet (ms, fewer is better): F32 Workstation Updated 3.84; F32 Workstation 3.87; F33 Workstation Beta 3.93

NCNN 20200916, Target: CPU - Model: efficientnet-b0 (ms, fewer is better): F32 Workstation 5.85; F32 Workstation Updated 5.88; F33 Workstation Beta 5.96

NCNN 20200916, Target: CPU - Model: blazeface (ms, fewer is better): F32 Workstation 1.31; F32 Workstation Updated 1.33; F33 Workstation Beta 1.35

NCNN 20200916, Target: CPU - Model: googlenet (ms, fewer is better): F32 Workstation Updated 13.43; F33 Workstation Beta 13.46; F32 Workstation 13.74

NCNN 20200916, Target: CPU - Model: vgg16 (ms, fewer is better): F33 Workstation Beta 69.03; F32 Workstation 70.12; F32 Workstation Updated 70.41

NCNN 20200916, Target: CPU - Model: resnet18 (ms, fewer is better): F32 Workstation Updated 13.72; F33 Workstation Beta 13.83; F32 Workstation 13.86

NCNN 20200916, Target: CPU - Model: alexnet (ms, fewer is better): F33 Workstation Beta 15.19; F32 Workstation Updated 15.48; F32 Workstation 15.50

NCNN 20200916, Target: CPU - Model: resnet50 (ms, fewer is better): F32 Workstation Updated 24.67; F33 Workstation Beta 24.78; F32 Workstation 24.81

NCNN 20200916, Target: CPU - Model: yolov4-tiny (ms, fewer is better): F32 Workstation Updated 25.06; F33 Workstation Beta 25.23; F32 Workstation 25.50

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.2.3, Target: CPU - Model: MobileNet v2 (ms, fewer is better): F32 Workstation 270.61; F33 Workstation Beta 273.59; F32 Workstation Updated 273.82

TNN 0.2.3, Target: CPU - Model: SqueezeNet v1.1 (ms, fewer is better): F33 Workstation Beta 260.32; F32 Workstation 260.73; F32 Workstation Updated 260.93

Systemd Total Boot Time

This test uses systemd-analyze to report the entire boot time. Learn more via the OpenBenchmarking.org test page.

Systemd Total Boot Time, Test: Total (ms, fewer is better): F32 Workstation 11811; F32 Workstation Updated 12286; F33 Workstation Beta 15157

Systemd Total Boot Time, Test: Kernel (ms, fewer is better): F33 Workstation Beta 3778; F32 Workstation Updated 3783; F32 Workstation 3801

Systemd Total Boot Time, Test: Loader (ms, fewer is better): F32 Workstation Updated 1777; F33 Workstation Beta 1780; F32 Workstation 1872

Systemd Total Boot Time, Test: Firmware (ms, fewer is better): F32 Workstation Updated 8102; F32 Workstation 8135; F33 Workstation Beta 9383

Systemd Total Boot Time, Test: Userspace (ms, fewer is better): F32 Workstation 8010; F32 Workstation Updated 8503; F33 Workstation Beta 11379
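These figures correspond to what systemd-analyze reports for the current boot. A minimal way to capture the same breakdown from Python is sketched below; output parsing is left out because the exact wording varies between systemd versions.

    import subprocess

    # Print the firmware/loader/kernel/userspace breakdown systemd recorded for
    # the current boot; systemd-analyze must be available on the system.
    result = subprocess.run(["systemd-analyze", "time"],
                            capture_output=True, text=True, check=True)
    print(result.stdout.strip())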

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1, Java Test: Jython (msec, fewer is better): F32 Workstation Updated 3313; F32 Workstation 3317; F33 Workstation Beta 3557

DaCapo Benchmark 9.12-MR1, Java Test: Tradesoap (msec, fewer is better): F32 Workstation Updated 2930; F32 Workstation 2999; F33 Workstation Beta 3027

DaCapo Benchmark 9.12-MR1, Java Test: Tradebeans (msec, fewer is better): F32 Workstation Updated 2294; F32 Workstation 2305; F33 Workstation Beta 2492

glibc bench

The GNU C Library project provides the core libraries for the GNU system and GNU/Linux systems, as well as many other systems that use Linux as the kernel. These libraries provide critical APIs including ISO C11, POSIX.1-2008, BSD, OS-specific APIs and more. Learn more via the OpenBenchmarking.org test page.

glibc bench 1.0 - Benchmark: cos (nanoseconds, fewer is better): F33 Workstation Beta: 38.74 (SE +/- 0.01, N = 3); F32 Workstation: 38.75 (SE +/- 0.01, N = 3); F32 Workstation Updated: 38.82 (SE +/- 0.03, N = 3)

glibc bench 1.0 - Benchmark: exp (nanoseconds, fewer is better): F33 Workstation Beta: 4.22917 (SE +/- 0.05192, N = 13); F32 Workstation Updated: 4.45337 (SE +/- 0.05716, N = 3); F32 Workstation: 4.92230 (SE +/- 0.00966, N = 3)

glibc bench 1.0 - Benchmark: ffs (nanoseconds, fewer is better): F32 Workstation Updated: 1.30415 (SE +/- 0.00223, N = 3); F32 Workstation: 1.30988 (SE +/- 0.00037, N = 3); F33 Workstation Beta: 1.50054 (SE +/- 0.00080, N = 3)

glibc bench 1.0 - Benchmark: sin (nanoseconds, fewer is better): F32 Workstation Updated: 38.33 (SE +/- 0.00, N = 3); F33 Workstation Beta: 38.33 (SE +/- 0.01, N = 3); F32 Workstation: 38.39 (SE +/- 0.01, N = 3)

glibc bench 1.0 - Benchmark: log2 (nanoseconds, fewer is better): F32 Workstation Updated: 5.95714 (SE +/- 0.00325, N = 3); F33 Workstation Beta: 5.97794 (SE +/- 0.00132, N = 3); F32 Workstation: 5.98713 (SE +/- 0.00206, N = 3)

glibc bench 1.0 - Benchmark: modf (nanoseconds, fewer is better): F32 Workstation: 1.51705 (SE +/- 0.00073, N = 3); F33 Workstation Beta: 1.53834 (SE +/- 0.00134, N = 3); F32 Workstation Updated: 1.73393 (SE +/- 0.00166, N = 3)

glibc bench 1.0 - Benchmark: sinh (nanoseconds, fewer is better): F32 Workstation: 6.79088 (SE +/- 0.00115, N = 3); F32 Workstation Updated: 6.80360 (SE +/- 0.00079, N = 3); F33 Workstation Beta: 6.93852 (SE +/- 0.00116, N = 3)

glibc bench 1.0 - Benchmark: sqrt (nanoseconds, fewer is better): F32 Workstation Updated: 1.50804 (SE +/- 0.00204, N = 3); F33 Workstation Beta: 1.50804 (SE +/- 0.00096, N = 3); F32 Workstation: 1.70406 (SE +/- 0.00081, N = 3)

glibc bench 1.0 - Benchmark: tanh (nanoseconds, fewer is better): F33 Workstation Beta: 10.40 (SE +/- 0.00, N = 3); F32 Workstation Updated: 10.41 (SE +/- 0.01, N = 3); F32 Workstation: 10.41 (SE +/- 0.01, N = 3)

glibc bench 1.0 - Benchmark: asinh (nanoseconds, fewer is better): F33 Workstation Beta: 7.64781 (SE +/- 0.01673, N = 3); F32 Workstation Updated: 7.66474 (SE +/- 0.00926, N = 3); F32 Workstation: 7.70467 (SE +/- 0.01979, N = 3)

glibc bench 1.0 - Benchmark: atanh (nanoseconds, fewer is better): F32 Workstation Updated: 9.40694 (SE +/- 0.00341, N = 3); F32 Workstation: 9.40863 (SE +/- 0.00186, N = 3); F33 Workstation Beta: 9.41062 (SE +/- 0.00191, N = 3)

glibc bench 1.0 - Benchmark: ffsll (nanoseconds, fewer is better): F32 Workstation Updated: 1.30750 (SE +/- 0.00037, N = 3); F33 Workstation Beta: 1.30833 (SE +/- 0.00037, N = 3); F32 Workstation: 1.31800 (SE +/- 0.00039, N = 3)

glibc bench 1.0 - Benchmark: sincos (nanoseconds, fewer is better): F32 Workstation Updated: 12.28 (SE +/- 0.04, N = 3); F33 Workstation Beta: 12.29 (SE +/- 0.05, N = 3); F32 Workstation: 12.41 (SE +/- 0.13, N = 3)

glibc bench 1.0 - Benchmark: pthread_once (nanoseconds, fewer is better): F32 Workstation Updated: 1.31641 (SE +/- 0.00312, N = 3); F33 Workstation Beta: 1.31843 (SE +/- 0.00071, N = 3); F32 Workstation: 1.31984 (SE +/- 0.00022, N = 3)
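The glibc bench numbers above are per-call latencies in nanoseconds. As a rough illustration only of that style of measurement (this is not the actual glibc-bench harness, and the iteration count and choice of sin() are arbitrary), the following Python sketch times direct calls into the system libm via ctypes:

import ctypes
import ctypes.util
import time

# Load the system math library; assumes a Linux/glibc environment.
libm_path = ctypes.util.find_library("m") or "libm.so.6"
libm = ctypes.CDLL(libm_path)
libm.sin.restype = ctypes.c_double
libm.sin.argtypes = [ctypes.c_double]

def time_per_call(func, arg, iterations=1_000_000):
    """Return the mean wall-clock time per call in nanoseconds."""
    start = time.perf_counter_ns()
    for _ in range(iterations):
        func(arg)
    end = time.perf_counter_ns()
    return (end - start) / iterations

if __name__ == "__main__":
    # Python/ctypes call overhead dominates here; glibc-bench measures the
    # raw C function latency, so absolute numbers will differ considerably.
    print(f"sin(): ~{time_per_call(libm.sin, 0.5):.1f} ns per call (incl. ctypes overhead)")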

SQLite

This is a simple benchmark of SQLite. At present this test profile just measures the time to perform a pre-defined number of insertions on an indexed database. Learn more via the OpenBenchmarking.org test page.

SQLite 3.30.1 - Threads / Copies: 1 (Seconds, fewer is better): F32 Workstation: 43.34 (SE +/- 0.08, N = 3); F32 Workstation Updated: 43.761 (SE +/- 0.12, N = 3); F33 Workstation Beta: 44.41 (SE +/- 0.21, N = 3) [1. (CC) gcc options: -O2 -lz -lm -ldl -lpthread]
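As a minimal sketch of the pattern this test measures (timed insertions into an indexed database), assuming an arbitrary row count and schema rather than the test profile's own workload, the equivalent in Python's sqlite3 module might look like:

import sqlite3
import time

def timed_inserts(db_path="bench.db", rows=100_000):
    """Insert `rows` records into an indexed table and return elapsed seconds."""
    conn = sqlite3.connect(db_path)
    cur = conn.cursor()
    cur.execute("CREATE TABLE IF NOT EXISTS t (id INTEGER PRIMARY KEY, payload TEXT)")
    cur.execute("CREATE INDEX IF NOT EXISTS idx_payload ON t (payload)")
    start = time.perf_counter()
    cur.executemany(
        "INSERT INTO t (payload) VALUES (?)",
        ((f"row-{i}",) for i in range(rows)),
    )
    conn.commit()
    conn.close()
    return time.perf_counter() - start

if __name__ == "__main__":
    print(f"{timed_inserts():.2f} seconds")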

RealSR-NCNN

RealSR-NCNN is an NCNN-based neural network implementation of the RealSR project, accelerated using the Vulkan API. RealSR is a real-world super-resolution method based on kernel estimation and noise injection. NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. This test profile times how long it takes to increase the resolution of a sample image by a scale of 4x with Vulkan. Learn more via the OpenBenchmarking.org test page.

RealSR-NCNN 20200818 - Scale: 4x - TAA: No (Seconds, fewer is better): F33 Workstation Beta: 15.45 (SE +/- 0.04, N = 3); F32 Workstation Updated: 25.16 (SE +/- 0.17, N = 3); F32 Workstation: 25.20 (SE +/- 0.20, N = 3)

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 - Test: OpenMP LavaMD (Seconds, fewer is better): F33 Workstation Beta: 172.72 (SE +/- 0.00, N = 3); F32 Workstation: 173.77 (SE +/- 0.62, N = 3); F32 Workstation Updated: 173.85 (SE +/- 0.47, N = 3) [1. (CXX) g++ options: -O2 -lOpenCL]

Rodinia 3.1 - Test: OpenMP HotSpot3D (Seconds, fewer is better): F32 Workstation: 82.35 (SE +/- 0.05, N = 3); F33 Workstation Beta: 83.27 (SE +/- 0.36, N = 3); F32 Workstation Updated: 84.33 (SE +/- 0.07, N = 3) [1. (CXX) g++ options: -O2 -lOpenCL]

Rodinia 3.1 - Test: OpenMP Leukocyte (Seconds, fewer is better): F33 Workstation Beta: 84.45 (SE +/- 0.56, N = 3); F32 Workstation: 84.74 (SE +/- 1.12, N = 3); F32 Workstation Updated: 85.33 (SE +/- 0.76, N = 3) [1. (CXX) g++ options: -O2 -lOpenCL]

Rodinia 3.1 - Test: OpenMP CFD Solver (Seconds, fewer is better): F32 Workstation Updated: 18.10 (SE +/- 0.04, N = 3); F33 Workstation Beta: 18.10 (SE +/- 0.04, N = 3); F32 Workstation: 18.21 (SE +/- 0.01, N = 3) [1. (CXX) g++ options: -O2 -lOpenCL]

Rodinia 3.1 - Test: OpenMP Streamcluster (Seconds, fewer is better): F33 Workstation Beta: 17.30 (SE +/- 0.00, N = 3); F32 Workstation Updated: 17.35 (SE +/- 0.01, N = 3); F32 Workstation: 17.36 (SE +/- 0.00, N = 3) [1. (CXX) g++ options: -O2 -lOpenCL]

Incompact3D

Incompact3d is a Fortran-MPI based, finite-difference high-performance code for solving the incompressible Navier-Stokes equations along with any number of scalar transport equations. Learn more via the OpenBenchmarking.org test page.

Incompact3D 2020-09-17 - Input: Cylinder (Seconds, fewer is better): F32 Workstation: 238.62 (SE +/- 0.07, N = 3); F33 Workstation Beta: 238.75 (SE +/- 0.22, N = 3); F32 Workstation Updated: 239.09 (SE +/- 0.49, N = 3) [1. (F9X) gfortran options: -cpp -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi]
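For reference only, the incompressible Navier-Stokes equations mentioned above can be written in their standard dimensional form as the following (Incompact3d's own non-dimensional formulation and any scalar transport or forcing terms differ in detail):

\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\mathbf{u} = -\frac{1}{\rho}\nabla p + \nu \nabla^{2}\mathbf{u}, \qquad \nabla \cdot \mathbf{u} = 0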

Timed MAFFT Alignment

This test performs an alignment of 100 pyruvate decarboxylase sequences. Learn more via the OpenBenchmarking.org test page.

Timed MAFFT Alignment 7.471 - Multiple Sequence Alignment - LSU RNA (Seconds, fewer is better): F33 Workstation Beta: 8.491 (SE +/- 0.045, N = 3); F32 Workstation Updated: 8.561 (SE +/- 0.050, N = 3); F32 Workstation: 8.614 (SE +/- 0.047, N = 3) [1. (CC) gcc options: -std=c99 -O3 -lm -lpthread]

Bork File Encrypter

Bork is a small, cross-platform file encryption utility. It is written in Java and designed to be included along with the files it encrypts for long-term storage. This test measures the amount of time it takes to encrypt a sample file. Learn more via the OpenBenchmarking.org test page.

Bork File Encrypter 1.4 - File Encryption Time (Seconds, fewer is better): F32 Workstation Updated: 7.121 (SE +/- 0.018, N = 3); F32 Workstation: 7.155 (SE +/- 0.050, N = 3); F33 Workstation Beta: 7.605 (SE +/- 0.019, N = 3)

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.7.3 - Encoder Speed: 0 (Seconds, fewer is better): F32 Workstation: 73.05 (SE +/- 0.17, N = 3); F33 Workstation Beta: 73.09 (SE +/- 0.12, N = 3); F32 Workstation Updated: 73.35 (SE +/- 0.54, N = 3) [1. (CXX) g++ options: -O3 -fPIC]

libavif avifenc 0.7.3 - Encoder Speed: 2 (Seconds, fewer is better): F33 Workstation Beta: 43.41 (SE +/- 0.08, N = 3); F32 Workstation Updated: 43.59 (SE +/- 0.24, N = 3); F32 Workstation: 43.62 (SE +/- 0.11, N = 3) [1. (CXX) g++ options: -O3 -fPIC]

libavif avifenc 0.7.3 - Encoder Speed: 8 (Seconds, fewer is better): F32 Workstation: 5.133 (SE +/- 0.008, N = 3); F32 Workstation Updated: 5.209 (SE +/- 0.020, N = 3); F33 Workstation Beta: 5.265 (SE +/- 0.002, N = 3) [1. (CXX) g++ options: -O3 -fPIC]

libavif avifenc 0.7.3 - Encoder Speed: 10 (Seconds, fewer is better): F32 Workstation: 4.965 (SE +/- 0.021, N = 3); F32 Workstation Updated: 5.041 (SE +/- 0.034, N = 3); F33 Workstation Beta: 5.074 (SE +/- 0.008, N = 3) [1. (CXX) g++ options: -O3 -fPIC]
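The avifenc results above sweep the encoder speed setting from 0 (slowest) to 10 (fastest). As an illustrative sketch only, not the test profile's own harness, a comparable sweep can be timed around the avifenc command line's -s/--speed option; the input JPEG path here is a placeholder:

import subprocess
import time

SPEEDS = (0, 2, 8, 10)       # matches the encoder speeds tested above
INPUT_JPEG = "sample.jpg"    # placeholder input image

for speed in SPEEDS:
    start = time.perf_counter()
    # Encode the JPEG to AVIF at the given speed; output name is arbitrary.
    subprocess.run(
        ["avifenc", "-s", str(speed), INPUT_JPEG, f"out-s{speed}.avif"],
        check=True,
        capture_output=True,
    )
    print(f"speed {speed}: {time.perf_counter() - start:.3f} s")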

Timed Apache Compilation

This test times how long it takes to build the Apache HTTPD web server. Learn more via the OpenBenchmarking.org test page.

Timed Apache Compilation 2.4.41 - Time To Compile (Seconds, fewer is better): F32 Workstation: 17.37 (SE +/- 0.09, N = 3); F32 Workstation Updated: 17.45 (SE +/- 0.15, N = 3); F33 Workstation Beta: 17.50 (SE +/- 0.01, N = 3)
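This and the following timed compilation tests (FFmpeg, GDB, the Linux kernel, MPlayer, PHP) all come down to timing a parallel build of an already-configured source tree. A generic sketch of that measurement, with a placeholder source directory rather than the test profiles' own trees, is:

import os
import subprocess
import time

def time_build(source_dir, jobs=os.cpu_count()):
    """Time a parallel `make` in an already-configured source tree."""
    # Best-effort clean so the timed build starts from scratch.
    subprocess.run(["make", "clean"], cwd=source_dir, capture_output=True)
    start = time.perf_counter()
    subprocess.run(["make", f"-j{jobs}"], cwd=source_dir, check=True,
                   capture_output=True)
    return time.perf_counter() - start

if __name__ == "__main__":
    # Placeholder path; the actual test profiles download and configure
    # their own copies of each source tree.
    print(f"{time_build('httpd-2.4.41'):.2f} seconds")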

Timed FFmpeg Compilation

This test times how long it takes to build FFmpeg. Learn more via the OpenBenchmarking.org test page.

Timed FFmpeg Compilation 4.2.2 - Time To Compile (Seconds, fewer is better): F33 Workstation Beta: 40.86 (SE +/- 0.10, N = 3); F32 Workstation: 44.03 (SE +/- 0.11, N = 3); F32 Workstation Updated: 44.16 (SE +/- 0.08, N = 3)

Timed GDB GNU Debugger Compilation

This test times how long it takes to build the GNU Debugger (GDB) in a default configuration. Learn more via the OpenBenchmarking.org test page.

Timed GDB GNU Debugger Compilation 9.1 - Time To Compile (Seconds, fewer is better): F33 Workstation Beta: 75.47 (SE +/- 0.16, N = 3); F32 Workstation: 76.08 (SE +/- 0.37, N = 3); F32 Workstation Updated: 76.20 (SE +/- 0.35, N = 3)

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 5.4 - Time To Compile (Seconds, fewer is better): F33 Workstation Beta: 60.68 (SE +/- 0.60, N = 3); F32 Workstation: 61.68 (SE +/- 0.33, N = 3); F32 Workstation Updated: 62.74 (SE +/- 1.06, N = 3)

Timed MPlayer Compilation

This test times how long it takes to build the MPlayer open-source media player program. Learn more via the OpenBenchmarking.org test page.

Timed MPlayer Compilation 1.4 - Time To Compile (Seconds, fewer is better): F33 Workstation Beta: 27.20 (SE +/- 0.02, N = 3); F32 Workstation Updated: 27.51 (SE +/- 0.02, N = 3); F32 Workstation: 27.73 (SE +/- 0.03, N = 3)

Timed PHP Compilation

This test times how long it takes to build PHP 7. Learn more via the OpenBenchmarking.org test page.

Timed PHP Compilation 7.4.2 - Time To Compile (Seconds, fewer is better): F33 Workstation Beta: 46.63 (SE +/- 0.22, N = 3); F32 Workstation Updated: 46.84 (SE +/- 0.19, N = 3); F32 Workstation: 46.89 (SE +/- 0.08, N = 3)

DeepSpeech

Mozilla DeepSpeech is a speech-to-text engine powered by TensorFlow for machine learning and derived from Baidu's Deep Speech research paper. This test profile times the speech-to-text process for a roughly three minute audio recording. Learn more via the OpenBenchmarking.org test page.

DeepSpeech 0.6 - Acceleration: CPU (Seconds, fewer is better): F33 Workstation Beta: 79.54 (SE +/- 0.24, N = 3); F32 Workstation Updated: 79.89 (SE +/- 0.74, N = 3); F32 Workstation: 79.99 (SE +/- 0.73, N = 3)

System GZIP Decompression

This simple test measures the time to decompress a gzipped tarball (the Qt5 toolkit source package). Learn more via the OpenBenchmarking.org test page.

System GZIP Decompression (Seconds, fewer is better): F32 Workstation Updated: 2.467 (SE +/- 0.007, N = 3); F33 Workstation Beta: 2.469 (SE +/- 0.011, N = 3); F32 Workstation: 2.481 (SE +/- 0.019, N = 3)
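This test simply times the system gzip binary decompressing a gzipped tarball. A minimal sketch of that measurement, using a placeholder tarball path rather than the test's bundled Qt5 source package, is:

import shutil
import subprocess
import time

TARBALL = "qt-everywhere-src.tar.gz"   # placeholder path to a gzipped tarball

# Work on a scratch copy so the original archive is preserved.
shutil.copy(TARBALL, "scratch.tar.gz")

start = time.perf_counter()
subprocess.run(["gzip", "-d", "-f", "scratch.tar.gz"], check=True)
print(f"{time.perf_counter() - start:.3f} seconds")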

ASTC Encoder

ASTC Encoder (astcenc) is an encoder for the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test covering both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.0 - Preset: Fast (Seconds, fewer is better): F32 Workstation: 4.52 (SE +/- 0.01, N = 3); F33 Workstation Beta: 4.59 (SE +/- 0.00, N = 3); F32 Workstation Updated: 4.63 (SE +/- 0.01, N = 3) [1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread]

ASTC Encoder 2.0 - Preset: Medium (Seconds, fewer is better): F32 Workstation: 6.67 (SE +/- 0.04, N = 3); F33 Workstation Beta: 6.71 (SE +/- 0.00, N = 3); F32 Workstation Updated: 6.75 (SE +/- 0.01, N = 3) [1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread]

ASTC Encoder 2.0 - Preset: Thorough (Seconds, fewer is better): F32 Workstation: 20.94 (SE +/- 0.01, N = 3); F33 Workstation Beta: 21.03 (SE +/- 0.00, N = 3); F32 Workstation Updated: 21.06 (SE +/- 0.01, N = 3) [1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread]

ASTC Encoder 2.0 - Preset: Exhaustive (Seconds, fewer is better): F32 Workstation: 168.75 (SE +/- 0.07, N = 3); F32 Workstation Updated: 168.89 (SE +/- 0.02, N = 3); F33 Workstation Beta: 169.39 (SE +/- 0.36, N = 3) [1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread]

SQLite Speedtest

This is a benchmark of SQLite's speedtest1 benchmark program with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.

SQLite Speedtest 3.30 - Timed Time - Size 1,000 (Seconds, fewer is better): F32 Workstation: 43.68 (SE +/- 0.01, N = 3); F32 Workstation Updated: 44.92 (SE +/- 0.13, N = 3); F33 Workstation Beta: 52.43 (SE +/- 0.24, N = 3) [1. (CC) gcc options: -O2 -ldl -lz -lpthread]

Darktable

Darktable is an open-source photography workflow application. This test will use any system-installed Darktable program, or on Windows it will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.2.1 - Test: Boat - Acceleration: CPU-only (Seconds, fewer is better): F32 Workstation Updated: 14.15 (SE +/- 0.02, N = 3); F32 Workstation: 14.17 (SE +/- 0.01, N = 3); F33 Workstation Beta: 14.18 (SE +/- 0.00, N = 3)

Darktable 3.2.1 - Test: Masskrug - Acceleration: CPU-only (Seconds, fewer is better): F33 Workstation Beta: 4.067 (SE +/- 0.003, N = 3); F32 Workstation Updated: 4.073 (SE +/- 0.002, N = 3); F32 Workstation: 4.074 (SE +/- 0.003, N = 3)

Darktable 3.2.1 - Test: Server Rack - Acceleration: CPU-only (Seconds, fewer is better): F32 Workstation: 0.176 (SE +/- 0.000, N = 3); F32 Workstation Updated: 0.178 (SE +/- 0.000, N = 3); F33 Workstation Beta: 0.178 (SE +/- 0.000, N = 3)

Darktable 3.2.1 - Test: Server Room - Acceleration: CPU-only (Seconds, fewer is better): F32 Workstation: 3.601 (SE +/- 0.004, N = 3); F32 Workstation Updated: 3.627 (SE +/- 0.007, N = 3); F33 Workstation Beta: 3.628 (SE +/- 0.002, N = 3)

GIMP

GIMP is an open-source image manipulation program. This test profile will use the system-provided GIMP program, or on Windows it relies upon a pre-packaged Windows binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.20 - Test: resize (Seconds, fewer is better): F32 Workstation Updated: 5.957 (SE +/- 0.064, N = 3); F32 Workstation: 6.101 (SE +/- 0.082, N = 3); F33 Workstation Beta: 6.237 (SE +/- 0.050, N = 3)

GIMP 2.10.20 - Test: rotate (Seconds, fewer is better): F32 Workstation Updated: 9.378 (SE +/- 0.011, N = 3); F32 Workstation: 9.534 (SE +/- 0.076, N = 3); F33 Workstation Beta: 9.716 (SE +/- 0.060, N = 3)

GIMP 2.10.20 - Test: auto-levels (Seconds, fewer is better): F32 Workstation Updated: 9.341 (SE +/- 0.023, N = 3); F32 Workstation: 9.507 (SE +/- 0.013, N = 3); F33 Workstation Beta: 9.827 (SE +/- 0.018, N = 3)

GIMP 2.10.20 - Test: unsharp-mask (Seconds, fewer is better): F32 Workstation Updated: 11.59 (SE +/- 0.03, N = 3); F32 Workstation: 11.62 (SE +/- 0.02, N = 3); F33 Workstation Beta: 12.15 (SE +/- 0.01, N = 3)

G'MIC

G'MIC is an open-source framework for image processing. Learn more via the OpenBenchmarking.org test page.

G'MIC - Test: 2D Function Plotting, 1000 Times (Seconds, fewer is better): F33 Workstation Beta: 89.70 (SE +/- 0.48, N = 3); F32 Workstation: 92.92 (SE +/- 0.14, N = 3); F32 Workstation Updated: 93.13 (SE +/- 0.43, N = 3) [1. Version 2.9.0]

G'MIC - Test: Plotting Isosurface Of A 3D Volume, 1000 Times (Seconds, fewer is better): F33 Workstation Beta: 11.93 (SE +/- 0.02, N = 3); F32 Workstation Updated: 16.42 (SE +/- 0.09, N = 3); F32 Workstation: 16.67 (SE +/- 0.01, N = 3) [1. Version 2.9.0]

G'MIC - Test: 3D Elevated Function In Random Colors, 100 Times (Seconds, fewer is better): F32 Workstation: 55.39 (SE +/- 0.53, N = 3); F32 Workstation Updated: 55.79 (SE +/- 0.11, N = 3); F33 Workstation Beta: 55.98 (SE +/- 0.29, N = 3) [1. Version 2.9.0]

Hugin

Hugin is an open-source, cross-platform panorama photo stitcher software package. This test profile times how long it takes to run the assistant and panorama photo stitching on a set of images. Learn more via the OpenBenchmarking.org test page.

Hugin - Panorama Photo Assistant + Stitching Time (Seconds, fewer is better): F32 Workstation Updated: 38.36 (SE +/- 0.23, N = 3); F32 Workstation: 38.68 (SE +/- 0.20, N = 3); F33 Workstation Beta: 38.80 (SE +/- 0.17, N = 3)

LibreOffice

This test performs various benchmarking operations with the LibreOffice open-source office suite. Learn more via the OpenBenchmarking.org test page.

LibreOffice - Test: 20 Documents To PDF (Seconds, fewer is better): F32 Workstation Updated: 6.578 (SE +/- 0.034, N = 25); F32 Workstation: 6.688 (SE +/- 0.053, N = 20); F33 Workstation Beta: 6.932 (SE +/- 0.057, N = 5) [1. F32 Workstation Updated: LibreOffice 6.4.6.2 40(Build:2); 2. F32 Workstation: LibreOffice 6.4.2.2 40(Build:2); 3. F33 Workstation Beta: LibreOffice 7.0.1.2 00(Build:2)]
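The LibreOffice test converts a batch of 20 documents to PDF. A headless batch conversion of that kind can be approximated with LibreOffice's --headless --convert-to options; the document set below is a placeholder and this is not necessarily the exact invocation the test profile uses:

import glob
import subprocess
import time

docs = sorted(glob.glob("documents/*.odt"))   # placeholder set of input documents

start = time.perf_counter()
# Convert all documents to PDF in one headless LibreOffice invocation.
subprocess.run(
    ["libreoffice", "--headless", "--convert-to", "pdf", "--outdir", "pdf-out", *docs],
    check=True,
    capture_output=True,
)
print(f"converted {len(docs)} documents in {time.perf_counter() - start:.3f} s")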

RawTherapee

RawTherapee is a cross-platform, open-source multi-threaded RAW image processing program. Learn more via the OpenBenchmarking.org test page.

RawTherapee - Total Benchmark Time (Seconds, fewer is better): F32 Workstation Updated: 55.18 (SE +/- 0.06, N = 3); F32 Workstation: 55.19 (SE +/- 0.05, N = 3); F33 Workstation Beta: 55.51 (SE +/- 0.04, N = 3) [1. RawTherapee, version 5.8, command line.]

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

Blender 2.90 - Blend File: BMW27 - Compute: CPU-Only (Seconds, fewer is better): F32 Workstation: 114.03 (SE +/- 0.15, N = 3); F33 Workstation Beta: 114.04 (SE +/- 0.29, N = 3); F32 Workstation Updated: 114.33 (SE +/- 0.18, N = 3)

Blender 2.90 - Blend File: Fishy Cat - Compute: CPU-Only (Seconds, fewer is better): F32 Workstation: 160.54 (SE +/- 0.40, N = 3); F33 Workstation Beta: 161.08 (SE +/- 0.07, N = 3); F32 Workstation Updated: 161.16 (SE +/- 0.22, N = 3)

Git

This test measures the time needed to carry out some sample Git operations on an example static repository that happens to be a copy of the GNOME GTK tool-kit repository. Learn more via the OpenBenchmarking.org test page.

Git - Time To Complete Common Git Commands (Seconds, fewer is better): F33 Workstation Beta: 40.23 (SE +/- 0.13, N = 3); F32 Workstation Updated: 40.45 (SE +/- 0.22, N = 3); F32 Workstation: 40.81 (SE +/- 0.15, N = 3) [1. F33 Workstation Beta: git version 2.28.0; 2. F32 Workstation Updated: git version 2.26.0; 3. F32 Workstation: git version 2.26.0]
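The Git test times a series of common Git commands against the static repository copy described above. The exact command list is defined by the test profile; purely as an illustration, timing a handful of representative read-heavy commands over a local clone might look like:

import subprocess
import time

REPO = "gtk"   # placeholder path to a local clone of the GTK repository

# Representative commands (not the test profile's exact list).
COMMANDS = (
    ["git", "status"],
    ["git", "log", "--oneline"],
    ["git", "diff", "HEAD~100"],
    ["git", "gc"],
)

start = time.perf_counter()
for cmd in COMMANDS:
    subprocess.run(cmd, cwd=REPO, check=True, capture_output=True)
print(f"{time.perf_counter() - start:.2f} seconds")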

Tesseract OCR

Tesseract-OCR is the open-source optical character recognition (OCR) engine for the conversion of text within images to raw text output. This test profile relies upon a system-supplied Tesseract installation. Learn more via the OpenBenchmarking.org test page.

Tesseract OCR 4.1.1 - Time To OCR 7 Images (Seconds, fewer is better): F32 Workstation: 20.05 (SE +/- 0.06, N = 3); F33 Workstation Beta: 20.17 (SE +/- 0.03, N = 3); F32 Workstation Updated: 20.25 (SE +/- 0.02, N = 3)
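The Tesseract run above times OCR over seven sample images using the system-supplied tesseract binary. A comparable measurement over a placeholder image set (again, a sketch rather than the test profile itself) is:

import glob
import subprocess
import time

images = sorted(glob.glob("samples/*.png"))   # placeholder image set

start = time.perf_counter()
for i, image in enumerate(images):
    # `tesseract <image> <output base>` writes recognized text to <output base>.txt
    subprocess.run(["tesseract", image, f"ocr-out-{i}"], check=True,
                   capture_output=True)
print(f"OCR of {len(images)} images took {time.perf_counter() - start:.2f} s")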

Dolfyn

Dolfyn is a Computational Fluid Dynamics (CFD) code implementing modern numerical simulation techniques. The Dolfyn test profile measures the execution time of the computational fluid dynamics demos that are bundled with Dolfyn. Learn more via the OpenBenchmarking.org test page.

Dolfyn 0.527 - Computational Fluid Dynamics (Seconds, fewer is better): F32 Workstation: 15.59 (SE +/- 0.02, N = 3); F32 Workstation Updated: 15.83 (SE +/- 0.07, N = 3); F33 Workstation Beta: 16.23 (SE +/- 0.10, N = 3)

Perl Benchmarks

This is a Perl benchmark suite that can be used to compare the relative speed of different versions of Perl. Learn more via the OpenBenchmarking.org test page.

Perl Benchmarks - Test: Pod2html (Seconds, fewer is better): F32 Workstation Updated: 0.09762625 (SE +/- 0.00021074, N = 3); F32 Workstation: 0.09959376 (SE +/- 0.00036967, N = 3); F33 Workstation Beta: 0.10192816 (SE +/- 0.00060182, N = 3)

Perl Benchmarks - Test: Interpreter (Seconds, fewer is better): F32 Workstation Updated: 0.00079401 (SE +/- 0.00000392, N = 3); F32 Workstation: 0.00081210 (SE +/- 0.00000261, N = 3); F33 Workstation Beta: 0.00082846 (SE +/- 0.00001305, N = 3)

174 Results Shown

dav1d:
  Chimera 1080p
  Summer Nature 4K
  Summer Nature 1080p
  Chimera 1080p 10-bit
Basemark GPU:
  Vulkan - 3840 x 2160 - High
  Vulkan - 3840 x 2160 - Medium
Unigine Heaven
Unigine Superposition:
  1920 x 1080 - Fullscreen - Low - OpenGL
  1920 x 1080 - Fullscreen - High - OpenGL
Embree:
  Pathtracer - Asian Dragon
  Pathtracer ISPC - Asian Dragon
SVT-AV1:
  Enc Mode 0 - 1080p
  Enc Mode 4 - 1080p
  Enc Mode 8 - 1080p
x265
High Performance Conjugate Gradient
Intel Open Image Denoise
OpenVKL
Cryptsetup:
  PBKDF2-whirlpool
  PBKDF2-sha512
BYTE Unix Benchmark
LuxCoreRender:
  DLSC
  Rainbow Colors and Prism
Zstd Compression:
  3
  19
LevelDB:
  Overwrite
  Rand Fill
  Seq Fill
LibRaw
Crafty
TSCP
Stockfish
GROMACS
LAMMPS Molecular Dynamics Simulator
Hierarchical INTegration
AI Benchmark Alpha:
  Device Inference Score
  Device Training Score
  Device AI Score
PHPBench
NAMD
WebP Image Encode:
  Default
  Quality 100
  Quality 100, Lossless
  Quality 100, Highest Compression
  Quality 100, Lossless, Highest Compression
LevelDB:
  Hot Read
  Overwrite
  Rand Fill
  Rand Read
  Seek Rand
  Rand Delete
  Seq Fill
TensorFlow Lite:
  SqueezeNet
  Inception V4
  NASNet Mobile
  Mobilenet Float
  Mobilenet Quant
  Inception ResNet V2
Caffe:
  AlexNet - CPU - 100
  AlexNet - CPU - 200
  GoogleNet - CPU - 100
  GoogleNet - CPU - 200
PyBench
PyPerformance:
  go
  2to3
  chaos
  float
  nbody
  pathlib
  raytrace
  json_loads
  crypto_pyaes
  regex_compile
  python_startup
  django_template
  pickle_pure_python
Renaissance:
  Scala Dotty
  Rand Forest
  Apache Spark ALS
  Savina Reactors.IO
  Apache Spark PageRank
  Twitter HTTP Requests
  Akka Unbalanced Cobwebbed Tree
  Genetic Algorithm Using Jenetics + Futures
Mobile Neural Network:
  SqueezeNetV1.0
  resnet-v2-50
  mobilenet-v1-1.0
  inception-v3
NCNN:
  CPU - squeezenet
  CPU - mobilenet
  CPU-v2-v2 - mobilenet-v2
  CPU-v3-v3 - mobilenet-v3
  CPU - shufflenet-v2
  CPU - mnasnet
  CPU - efficientnet-b0
  CPU - blazeface
  CPU - googlenet
  CPU - vgg16
  CPU - resnet18
  CPU - alexnet
  CPU - resnet50
  CPU - yolov4-tiny
TNN:
  CPU - MobileNet v2
  CPU - SqueezeNet v1.1
Systemd Total Boot Time:
  Total
  Kernel
  Loader
  Firmware
  Userspace
DaCapo Benchmark:
  Jython
  Tradesoap
  Tradebeans
glibc bench:
  cos
  exp
  ffs
  sin
  log2
  modf
  sinh
  sqrt
  tanh
  asinh
  atanh
  ffsll
  sincos
  pthread_once
SQLite
RealSR-NCNN
Rodinia:
  OpenMP LavaMD
  OpenMP HotSpot3D
  OpenMP Leukocyte
  OpenMP CFD Solver
  OpenMP Streamcluster
Incompact3D
Timed MAFFT Alignment
Bork File Encrypter
libavif avifenc:
  0
  2
  8
  10
Timed Apache Compilation
Timed FFmpeg Compilation
Timed GDB GNU Debugger Compilation
Timed Linux Kernel Compilation
Timed MPlayer Compilation
Timed PHP Compilation
DeepSpeech
System GZIP Decompression
ASTC Encoder:
  Fast
  Medium
  Thorough
  Exhaustive
SQLite Speedtest
Darktable:
  Boat - CPU-only
  Masskrug - CPU-only
  Server Rack - CPU-only
  Server Room - CPU-only
GIMP:
  resize
  rotate
  auto-levels
  unsharp-mask
G'MIC:
  2D Function Plotting, 1000 Times
  Plotting Isosurface Of A 3D Volume, 1000 Times
  3D Elevated Function In Rand Colors, 100 Times
Hugin
LibreOffice
RawTherapee
Blender:
  BMW27 - CPU-Only
  Fishy Cat - CPU-Only
Git
Tesseract OCR
Dolfyn
Perl Benchmarks:
  Pod2html
  Interpreter