5950x sundat

AMD Ryzen 9 5950X 16-Core testing with an ASUS ROG CROSSHAIR VIII HERO (WI-FI) (4402 BIOS) and llvmpipe on Ubuntu 22.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2302265-NE-5950XSUND37
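For scripted or repeated comparisons, the same command can also be driven from Python; this is a minimal sketch that assumes the Phoronix Test Suite is already installed and on the PATH.

import subprocess

# Run the comparison against the published result file; the Phoronix Test Suite
# will prompt for which tests to install/run unless batch mode has been configured.
subprocess.run(
    ["phoronix-test-suite", "benchmark", "2302265-NE-5950XSUND37"],
    check=True,
)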
This result file spans the following test categories: C/C++ Compiler Tests (5 tests), CPU Massive (4 tests), Creator Workloads (8 tests), Database Test Suite (4 tests), Encoding (5 tests), HPC - High Performance Computing (2 tests), Multi-Core (10 tests), Intel oneAPI (2 tests), Programmer / Developer System Benchmarks (2 tests), Python Tests (2 tests), Server (4 tests), Server CPU Tests (2 tests), Video Encoding (5 tests).

Result identifiers and run dates:
  a: February 26 2023 (Test Duration: 4 Hours, 23 Minutes)
  b: February 26 2023 (Test Duration: 4 Hours, 21 Minutes)
  Average Test Duration: 4 Hours, 22 Minutes


5950x Sundat Benchmarks - System Details (OpenBenchmarking.org / Phoronix Test Suite)

Processor: AMD Ryzen 9 5950X 16-Core @ 3.40GHz (16 Cores / 32 Threads)
Motherboard: ASUS ROG CROSSHAIR VIII HERO (WI-FI) (4402 BIOS)
Chipset: AMD Starship/Matisse
Memory: 32GB
Disk: 500GB Western Digital WDS500G3X0C-00SJG0 + 2000GB
Graphics: llvmpipe
Audio: Intel Device 4f92
Network: Realtek RTL8125 2.5GbE + Intel I211 + Intel Wi-Fi 6 AX200
OS: Ubuntu 22.10
Kernel: 5.19.0-23-generic (x86_64)
Desktop: GNOME Shell 43.0
Display Server: X Server 1.21.1.4
OpenGL: 4.5 Mesa 22.2.1 (LLVM 15.0.2 256 bits)
Vulkan: 1.3.224
Compiler: GCC 12.2.0
File-System: ext4
Screen Resolution: 3840x2160

System Logs:
- Transparent Huge Pages: madvise
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-U8K4Qv/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-U8K4Qv/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled)
- CPU Microcode: 0xa201025
- OpenJDK Runtime Environment (build 11.0.17+8-post-Ubuntu-1ubuntu2)
- Python 3.10.7
- Security: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected

a vs. b Comparison (Phoronix Test Suite): bar chart of the per-test percentage differences between the two runs. The largest deltas are in the ONNX Runtime CPU Standard-executor results (up to 60.7% for super-resolution-10), with smaller spreads across Apache Spark, AOM AV1, CockroachDB, ClickHouse, Zstd Compression, Timed Linux Kernel Compilation, and Embree results.
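The percentages in the comparison chart are relative differences between the two runs. As a worked example (using the ONNX Runtime super-resolution-10 - CPU - Standard inferences-per-second values reported further down), the delta can be recomputed as follows; the snippet is illustrative, not part of the Phoronix Test Suite.

# Relative difference between run a and run b for a "more is better" metric.
a = 166.40  # super-resolution-10 - CPU - Standard, run a (Inferences Per Second)
b = 103.55  # same test, run b

delta_pct = (max(a, b) / min(a, b) - 1.0) * 100.0
print(f"{delta_pct:.1f}%")  # prints 60.7%, matching the chart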

5950x sundat: condensed results table covering every benchmark result for runs a and b; the same data is broken out in the individual result graphs below.

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested, or alternatively an allmodconfig configuration that builds all possible kernel modules. Learn more via the OpenBenchmarking.org test page.
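To illustrate what is being measured, here is a rough Python sketch that times a defconfig build; the kernel source path is a placeholder and the job count simply uses all available CPU threads, which may not match the test profile's exact invocation.

import os
import subprocess
import time

kernel_src = "/path/to/linux-6.1"  # placeholder path to an extracted kernel tree
jobs = str(os.cpu_count() or 1)

subprocess.run(["make", "defconfig"], cwd=kernel_src, check=True)
subprocess.run(["make", "clean"], cwd=kernel_src, check=True)

start = time.perf_counter()
subprocess.run(["make", f"-j{jobs}"], cwd=kernel_src, check=True)
print(f"Build time: {time.perf_counter() - start:.2f} seconds")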

Timed Linux Kernel Compilation 6.1, Build: allmodconfig (Seconds, Fewer Is Better): a: 812.17, b: 831.22

BRL-CAD

BRL-CAD is a cross-platform, open-source solid modeling system with built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.

BRL-CAD 7.34, VGR Performance Metric (More Is Better): a: 273690, b: 274004 [1. (CXX) g++ options: -std=c++14 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -ltcl8.6 -lregex_brl -lz_brl -lnetpbm -ldl -lm -ltk8.6]

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library that offers high-performance volume computation kernels and is part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenVKL 1.3.1 (Items / Sec, More Is Better):
  Benchmark: vklBenchmark Scalar: a: 142 (MIN: 14 / MAX: 2649), b: 143 (MIN: 14 / MAX: 2679)
  Benchmark: vklBenchmark ISPC: a: 227 (MIN: 30 / MAX: 2846), b: 228 (MIN: 30 / MAX: 2850)

ClickHouse

ClickHouse is an open-source, high-performance OLAP data management system. This test profile uses ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ / https://github.com/ClickHouse/ClickBench/tree/main/clickhouse with the 100 million rows web analytics dataset. The reported value is the query processing time using the geometric mean of all separate queries performed as an aggregate. Learn more via the OpenBenchmarking.org test page.
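Since the reported figure is a geometric mean over the individual ClickBench queries, it can be reproduced from per-query timings along these lines; the timings below are made-up placeholders, not values from this result file.

import math

# Hypothetical per-query runtimes in seconds for a handful of queries.
query_times = [0.12, 0.45, 1.30, 0.08, 2.10]

geo_mean_s = math.exp(sum(math.log(t) for t in query_times) / len(query_times))
queries_per_minute = 60.0 / geo_mean_s
print(f"{queries_per_minute:.2f} queries per minute (geometric mean)")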

ClickHouse 22.12.3.5 (Queries Per Minute, Geo Mean, More Is Better):
  100M Rows Hits Dataset, Third Run: a: 134.32 (MIN: 6.63 / MAX: 8571.43), b: 136.05 (MIN: 6.35 / MAX: 8571.43)
  100M Rows Hits Dataset, Second Run: a: 130.72 (MIN: 8.12 / MAX: 6666.67), b: 132.24 (MIN: 8.21 / MAX: 6666.67)
  100M Rows Hits Dataset, First Run / Cold Cache: a: 118.22 (MIN: 5.57 / MAX: 7500), b: 122.32 (MIN: 5.84 / MAX: 6666.67)

Apache Spark

This is a benchmark of Apache Spark with its PySpark interface. Apache Spark is an open-source unified analytics engine for large-scale data processing and dealing with big data. This test profile benchmarks Apache Spark in a single-system configuration using spark-submit. The test makes use of DIYBigData's pyspark-benchmark (https://github.com/DIYBigData/pyspark-benchmark/) for generating test data and running various Apache Spark operations. Learn more via the OpenBenchmarking.org test page.
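The Spark timings below come from operations of the kind sketched here with PySpark; the row counts and join shape are simplified stand-ins for what the pyspark-benchmark scripts actually generate.

import time

from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("spark-benchmark-sketch").getOrCreate()

# Two synthetic DataFrames, loosely in the spirit of the benchmark's test data.
big = spark.range(10_000_000).withColumnRenamed("id", "key")
small = spark.range(100_000).withColumnRenamed("id", "key")

start = time.perf_counter()
big.join(broadcast(small), "key").count()  # broadcast inner join
print(f"Broadcast inner join: {time.perf_counter() - start:.2f} s")

spark.stop()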

Apache Spark 3.3 (Seconds, Fewer Is Better):
  Row Count: 40000000 - Partitions: 100 - Broadcast Inner Join Test Time: a: 44.50, b: 44.09
  Row Count: 40000000 - Partitions: 100 - Inner Join Test Time: a: 45.15, b: 43.05
  Row Count: 40000000 - Partitions: 100 - Repartition Test Time: a: 40.28, b: 39.59
  Row Count: 40000000 - Partitions: 100 - Group By Test Time: a: 30.31, b: 30.19
  Row Count: 40000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe: a: 5.09, b: 5.07
  Row Count: 40000000 - Partitions: 100 - Calculate Pi Benchmark: a: 87.08, b: 86.18
  Row Count: 40000000 - Partitions: 100 - SHA-512 Benchmark Time: a: 58.63, b: 58.76
  Row Count: 20000000 - Partitions: 100 - Broadcast Inner Join Test Time: a: 22.02, b: 22.00
  Row Count: 20000000 - Partitions: 100 - Inner Join Test Time: a: 21.64, b: 22.33
  Row Count: 20000000 - Partitions: 100 - Repartition Test Time: a: 19.79, b: 20.49
  Row Count: 20000000 - Partitions: 100 - Group By Test Time: a: 10.34, b: 10.46
  Row Count: 20000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe: a: 5.09, b: 5.05
  Row Count: 20000000 - Partitions: 100 - Calculate Pi Benchmark: a: 86.47, b: 86.27
  Row Count: 20000000 - Partitions: 100 - SHA-512 Benchmark Time: a: 31.53, b: 29.92
  Row Count: 10000000 - Partitions: 100 - Broadcast Inner Join Test Time: a: 11.43, b: 11.05
  Row Count: 10000000 - Partitions: 100 - Inner Join Test Time: a: 11.98, b: 11.27
  Row Count: 10000000 - Partitions: 100 - Repartition Test Time: a: 10.05, b: 10.06
  Row Count: 10000000 - Partitions: 100 - Group By Test Time: a: 6.91, b: 6.73
  Row Count: 10000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe: a: 5.05, b: 5.06
  Row Count: 10000000 - Partitions: 100 - Calculate Pi Benchmark: a: 86.31, b: 86.63
  Row Count: 10000000 - Partitions: 100 - SHA-512 Benchmark Time: a: 15.86, b: 15.92

GROMACS

The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing with the water_GMX50 data. This test profile allows selecting between CPU and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.

GROMACS 2023, Implementation: MPI CPU - Input: water_GMX50_bare (Ns Per Day, More Is Better): a: 1.270, b: 1.278 [1. (CXX) g++ options: -O3]

VVenC

VVenC is the Fraunhofer Versatile Video Encoder as a fast/efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe). The vvenc software is published under The Clear BSD License. Learn more via the OpenBenchmarking.org test page.

VVenC 1.7, Video Input: Bosphorus 4K - Video Preset: Fast (Frames Per Second, More Is Better): a: 4.477, b: 4.501 [1. (CXX) g++ options: -O3 -flto=auto -fno-fat-lto-objects]

Apache Spark

This is a benchmark of Apache Spark with its PySpark interface. Apache Spark is an open-source unified analytics engine for large-scale data processing and dealing with big data. This test profile benchmarks Apache Spark in a single-system configuration using spark-submit. The test makes use of DIYBigData's pyspark-benchmark (https://github.com/DIYBigData/pyspark-benchmark/) for generating test data and running various Apache Spark operations. Learn more via the OpenBenchmarking.org test page.

Apache Spark 3.3 (Seconds, Fewer Is Better):
  Row Count: 1000000 - Partitions: 100 - Broadcast Inner Join Test Time: a: 1.40, b: 1.55
  Row Count: 1000000 - Partitions: 100 - Inner Join Test Time: a: 1.649008243, b: 1.62
  Row Count: 1000000 - Partitions: 100 - Repartition Test Time: a: 1.98, b: 1.95
  Row Count: 1000000 - Partitions: 100 - Group By Test Time: a: 3.48, b: 3.43
  Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe: a: 5.07, b: 5.04
  Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark: a: 86.52, b: 86.45
  Row Count: 1000000 - Partitions: 100 - SHA-512 Benchmark Time: a: 3.20, b: 3.17

CockroachDB

CockroachDB is a cloud-native, distributed SQL database for data-intensive applications. This test profile uses a server-less CockroachDB configuration to test various CockroachDB workloads on the local host with a single node. Learn more via the OpenBenchmarking.org test page.
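CockroachDB speaks the PostgreSQL wire protocol, so a KV-style mixed read/write loop against a local single-node instance can be sketched with psycopg2; the connection string, table, and 10%-read mix are assumptions for illustration rather than the workload generator the test profile uses.

import psycopg2

# Assumes an insecure single-node CockroachDB listening on the default port 26257.
conn = psycopg2.connect("postgresql://root@localhost:26257/defaultdb?sslmode=disable")
conn.autocommit = True

with conn.cursor() as cur:
    cur.execute("CREATE TABLE IF NOT EXISTS kv (k INT PRIMARY KEY, v BYTES)")
    # Roughly 10% reads / 90% writes, loosely mirroring the "KV, 10% Reads" workload.
    for i in range(1000):
        if i % 10 == 0:
            cur.execute("SELECT v FROM kv WHERE k = %s", (i,))
            cur.fetchone()
        else:
            cur.execute("UPSERT INTO kv (k, v) VALUES (%s, %s)",
                        (i, psycopg2.Binary(b"x" * 64)))

conn.close()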

CockroachDB 22.2 (ops/s, More Is Better):
  Workload: KV, 60% Reads - Concurrency: 1024: a: 51077.9, b: 53184.2
  Workload: KV, 50% Reads - Concurrency: 1024: a: 49144.6, b: 48708.5
  Workload: KV, 95% Reads - Concurrency: 1024: a: 71285.1, b: 70079.9
  Workload: KV, 10% Reads - Concurrency: 1024: a: 41319.9, b: 41158.6
  Workload: KV, 10% Reads - Concurrency: 128: a: 50930.0, b: 54375.8
  Workload: KV, 60% Reads - Concurrency: 128: a: 71625.1, b: 73703.7
  Workload: KV, 95% Reads - Concurrency: 128: a: 95581.2, b: 93880.3
  Workload: KV, 50% Reads - Concurrency: 128: a: 66349.7, b: 67666.0
  Workload: MoVR - Concurrency: 128: a: 729.5, b: 737.4
  Workload: MoVR - Concurrency: 1024: a: 737.4, b: 737.4

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.
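The "Standard" and "Parallel" executor labels below appear to correspond to ONNX Runtime's sequential and parallel execution modes; a minimal inference sketch is shown here, with the model path and random input being assumptions rather than the harness the test profile uses.

import numpy as np
import onnxruntime as ort

opts = ort.SessionOptions()
# Parallel executor; use ort.ExecutionMode.ORT_SEQUENTIAL for the sequential mode.
opts.execution_mode = ort.ExecutionMode.ORT_PARALLEL

sess = ort.InferenceSession(
    "super-resolution-10.onnx",  # hypothetical local copy of the Model Zoo file
    sess_options=opts,
    providers=["CPUExecutionProvider"],
)

inp = sess.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
result = sess.run(None, {inp.name: np.random.rand(*shape).astype(np.float32)})
print(len(result), "output tensor(s)")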

ONNX Runtime 1.14 (Device: CPU):
  Model: fcn-resnet101-11 - Executor: Parallel - Inference Time Cost (ms), Fewer Is Better: a: 723.97, b: 722.56
  Model: fcn-resnet101-11 - Executor: Parallel - Inferences Per Second, More Is Better: a: 1.38126, b: 1.38396
  Model: GPT-2 - Executor: Parallel - Inference Time Cost (ms), Fewer Is Better: a: 9.25704, b: 9.27425
  Model: GPT-2 - Executor: Parallel - Inferences Per Second, More Is Better: a: 107.98, b: 107.78
  Model: fcn-resnet101-11 - Executor: Standard - Inference Time Cost (ms), Fewer Is Better: a: 682.98, b: 469.09
  Model: fcn-resnet101-11 - Executor: Standard - Inferences Per Second, More Is Better: a: 1.46417, b: 2.13179
  Model: bertsquad-12 - Executor: Parallel - Inference Time Cost (ms), Fewer Is Better: a: 74.48, b: 73.34
  Model: bertsquad-12 - Executor: Parallel - Inferences Per Second, More Is Better: a: 13.43, b: 13.63
  Model: GPT-2 - Executor: Standard - Inference Time Cost (ms), Fewer Is Better: a: 8.33864, b: 8.14204
  Model: GPT-2 - Executor: Standard - Inferences Per Second, More Is Better: a: 119.88, b: 122.78
  Model: yolov4 - Executor: Parallel - Inference Time Cost (ms), Fewer Is Better: a: 119.39, b: 120.14
  Model: yolov4 - Executor: Parallel - Inferences Per Second, More Is Better: a: 8.37549, b: 8.32324
  Model: bertsquad-12 - Executor: Standard - Inference Time Cost (ms), Fewer Is Better: a: 58.89, b: 58.56
  Model: bertsquad-12 - Executor: Standard - Inferences Per Second, More Is Better: a: 16.98, b: 17.08
  Model: ArcFace ResNet-100 - Executor: Parallel - Inference Time Cost (ms), Fewer Is Better: a: 35.89, b: 36.10
  Model: ArcFace ResNet-100 - Executor: Parallel - Inferences Per Second, More Is Better: a: 27.86, b: 27.70
  Model: yolov4 - Executor: Standard - Inference Time Cost (ms), Fewer Is Better: a: 129.50, b: 97.70
  Model: yolov4 - Executor: Standard - Inferences Per Second, More Is Better: a: 7.72199, b: 10.23540
  Model: ArcFace ResNet-100 - Executor: Standard - Inference Time Cost (ms), Fewer Is Better: a: 37.57, b: 30.93
  Model: ArcFace ResNet-100 - Executor: Standard - Inferences Per Second, More Is Better: a: 26.61, b: 32.33
  Model: Faster R-CNN R-50-FPN-int8 - Executor: Standard - Inference Time Cost (ms), Fewer Is Better: a: 24.18, b: 24.33
  Model: Faster R-CNN R-50-FPN-int8 - Executor: Standard - Inferences Per Second, More Is Better: a: 41.35, b: 41.10
  Model: Faster R-CNN R-50-FPN-int8 - Executor: Parallel - Inference Time Cost (ms), Fewer Is Better: a: 27.74, b: 27.63
  Model: Faster R-CNN R-50-FPN-int8 - Executor: Parallel - Inferences Per Second, More Is Better: a: 36.04, b: 36.19
  Model: CaffeNet 12-int8 - Executor: Parallel - Inference Time Cost (ms), Fewer Is Better: a: 1.44574, b: 1.48798
  Model: CaffeNet 12-int8 - Executor: Parallel - Inferences Per Second, More Is Better: a: 691.24, b: 671.58
  Model: CaffeNet 12-int8 - Executor: Standard - Inference Time Cost (ms), Fewer Is Better: a: 1.40262, b: 1.24424
  Model: CaffeNet 12-int8 - Executor: Standard - Inferences Per Second, More Is Better: a: 712.70, b: 803.47
  Model: ResNet50 v1-12-int8 - Executor: Parallel - Inference Time Cost (ms), Fewer Is Better: a: 4.28386, b: 4.22687
  Model: ResNet50 v1-12-int8 - Executor: Parallel - Inferences Per Second, More Is Better: a: 233.40, b: 236.54
  Model: ResNet50 v1-12-int8 - Executor: Standard - Inference Time Cost (ms), Fewer Is Better: a: 4.24607, b: 4.15881
  Model: ResNet50 v1-12-int8 - Executor: Standard - Inferences Per Second, More Is Better: a: 235.49, b: 240.42
  Model: super-resolution-10 - Executor: Parallel - Inference Time Cost (ms), Fewer Is Better: a: 9.84858, b: 9.86564
  Model: super-resolution-10 - Executor: Parallel - Inferences Per Second, More Is Better: a: 101.53, b: 101.35
  Model: super-resolution-10 - Executor: Standard - Inference Time Cost (ms), Fewer Is Better: a: 6.00923, b: 9.65702
  Model: super-resolution-10 - Executor: Standard - Inferences Per Second, More Is Better: a: 166.40, b: 103.55
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt (common to all ONNX Runtime results above)

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.6, Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K (Frames Per Second, More Is Better): a: 9.71, b: 9.84 [1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm]

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.

VP9 libvpx Encoding 1.13, Speed: Speed 0 - Input: Bosphorus 4K (Frames Per Second, More Is Better): a: 8.62, b: 8.55 [1. (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=gnu++11]

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
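The throughput figures below are MB/s over the silesia.tar corpus; a rough equivalent using the Python zstandard bindings looks like this, with the file path being an assumption.

import time

import zstandard as zstd

with open("silesia.tar", "rb") as f:  # assumed local copy of the corpus
    data = f.read()
mb = len(data) / 1e6

cctx = zstd.ZstdCompressor(level=19)
start = time.perf_counter()
blob = cctx.compress(data)
print(f"compress:   {mb / (time.perf_counter() - start):.1f} MB/s")

dctx = zstd.ZstdDecompressor()
start = time.perf_counter()
dctx.decompress(blob)
print(f"decompress: {mb / (time.perf_counter() - start):.1f} MB/s")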

Zstd Compression 1.5.4 (MB/s, More Is Better):
  Compression Level: 19, Long Mode - Decompression Speed: a: 1617.9, b: 1633.3
  Compression Level: 19, Long Mode - Compression Speed: a: 10, b: 10
  1. (CC) gcc options: -O3 -pthread -lz -llzma

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested or alternatively an allmodconfig for building all possible kernel modules for the build. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 6.1, Build: defconfig (Seconds, Fewer Is Better): a: 69.97, b: 71.40

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.4 (MB/s, More Is Better):
  Compression Level: 19 - Decompression Speed: a: 1701.6, b: 1672.0
  Compression Level: 19 - Compression Speed: a: 18.5, b: 18.7
  1. (CC) gcc options: -O3 -pthread -lz -llzma

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.6, Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4K (Frames Per Second, More Is Better): a: 0.31, b: 0.31 [1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm]

VVenC

VVenC is the Fraunhofer Versatile Video Encoder as a fast/efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe). The vvenc software is published under The Clear BSD License. Learn more via the OpenBenchmarking.org test page.

VVenC 1.7, Video Input: Bosphorus 4K - Video Preset: Faster (Frames Per Second, More Is Better): a: 9.462, b: 9.489 [1. (CXX) g++ options: -O3 -flto=auto -fno-fat-lto-objects]

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.4 (MB/s, More Is Better):
  Compression Level: 8 - Decompression Speed: a: 1887.7, b: 1853.1
  Compression Level: 8 - Compression Speed: a: 432.2, b: 429.5
  Compression Level: 3, Long Mode - Decompression Speed: a: 1757.8, b: 1797.9
  Compression Level: 3, Long Mode - Compression Speed: a: 722.2, b: 732.1
  Compression Level: 3 - Decompression Speed: a: 1722.4, b: 1768.7
  Compression Level: 3 - Compression Speed: a: 2781.2, b: 2803.5
  Compression Level: 8, Long Mode - Decompression Speed: a: 1896.5, b: 1912.1
  Compression Level: 8, Long Mode - Compression Speed: a: 429.5, b: 434.8
  Compression Level: 12 - Decompression Speed: a: 1882.7, b: 1916.8
  Compression Level: 12 - Compression Speed: a: 128.0, b: 127.7
  1. (CC) gcc options: -O3 -pthread -lz -llzma

uvg266

uvg266 is an open-source VVC/H.266 (Versatile Video Coding) encoder based on Kvazaar as part of the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

uvg266 0.4.1, Video Input: Bosphorus 4K - Video Preset: Slow (Frames Per Second, More Is Better): a: 9.72, b: 9.74

RocksDB

This is a benchmark of Meta/Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.
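The test names below (Random Fill, Read While Writing, and so on) correspond to RocksDB's bundled db_bench workloads; for a feel of the API itself, here is a small put/get sketch assuming the third-party python-rocksdb bindings are installed (the benchmark itself does not use Python).

import rocksdb  # third-party python-rocksdb package, assumed installed

opts = rocksdb.Options(create_if_missing=True)
db = rocksdb.DB("/tmp/rocksdb-sketch", opts)

# Fill some keys, then read one back - loosely like "Rand Fill" / "Rand Read".
for i in range(10_000):
    db.put(f"key{i:08d}".encode(), b"v" * 100)

assert db.get(b"key00000042") == b"v" * 100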

RocksDB 7.9.2 (Op/s, More Is Better):
  Test: Random Fill Sync: a: 27798, b: 27749
  Test: Random Fill: a: 1273257, b: 1278893
  Test: Update Random: a: 733550, b: 737184
  Test: Read Random Write Random: a: 2510683, b: 2501578
  Test: Read While Writing: a: 3594042, b: 3623493
  Test: Random Read: a: 94785075, b: 94703211
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

uvg266

uvg266 is an open-source VVC/H.266 (Versatile Video Coding) encoder based on Kvazaar as part of the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

uvg266 0.4.1, Video Input: Bosphorus 4K - Video Preset: Medium (Frames Per Second, More Is Better): a: 10.92, b: 10.90

VVenC

VVenC is the Fraunhofer Versatile Video Encoder as a fast/efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe). The vvenc software is published under The Clear BSD License. Learn more via the OpenBenchmarking.org test page.

VVenC 1.7, Video Input: Bosphorus 1080p - Video Preset: Fast (Frames Per Second, More Is Better): a: 11.05, b: 11.20 [1. (CXX) g++ options: -O3 -flto=auto -fno-fat-lto-objects]

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.6, Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K (Frames Per Second, More Is Better): a: 16.98, b: 17.27 [1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm]

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.2 (Frames Per Second, More Is Better):
  Video Input: Bosphorus 4K - Video Preset: Slow: a: 15.16, b: 15.17
  Video Input: Bosphorus 4K - Video Preset: Medium: a: 15.46, b: 15.61
  1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs (and GPUs via SYCL) and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 4.0 (Frames Per Second, More Is Better):
  Binary: Pathtracer ISPC - Model: Asian Dragon Obj: a: 21.26 (MIN: 21.15 / MAX: 21.83), b: 21.44 (MIN: 21.31 / MAX: 21.79)
  Binary: Pathtracer - Model: Asian Dragon Obj: a: 21.89 (MIN: 21.71 / MAX: 22.21), b: 22.10 (MIN: 21.99 / MAX: 22.41)

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.

VP9 libvpx Encoding 1.13, Speed: Speed 0 - Input: Bosphorus 1080p (Frames Per Second, More Is Better): a: 17.25, b: 17.07 [1. (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=gnu++11]

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.6, Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, More Is Better): a: 20.32, b: 20.46 [1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm]

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.

VP9 libvpx Encoding 1.13, Speed: Speed 5 - Input: Bosphorus 4K (Frames Per Second, More Is Better): a: 20.56, b: 20.67 [1. (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=gnu++11]

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs (and GPUs via SYCL) and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 4.0 (Frames Per Second, More Is Better):
  Binary: Pathtracer ISPC - Model: Crown: a: 22.60 (MIN: 22.37 / MAX: 22.99), b: 22.73 (MIN: 22.52 / MAX: 23.11)
  Binary: Pathtracer - Model: Crown: a: 23.75 (MIN: 23.53 / MAX: 24.25), b: 23.81 (MIN: 23.57 / MAX: 24.24)
  Binary: Pathtracer - Model: Asian Dragon: a: 24.32 (MIN: 24.2 / MAX: 24.72), b: 24.88 (MIN: 24.75 / MAX: 25.43)
  Binary: Pathtracer ISPC - Model: Asian Dragon: a: 24.51 (MIN: 24.39 / MAX: 24.8), b: 24.60 (MIN: 24.42 / MAX: 25.04)

VVenC

VVenC is the Fraunhofer Versatile Video Encoder as a fast/efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe). The vvenc software is published under The Clear BSD License. Learn more via the OpenBenchmarking.org test page.

VVenC 1.7, Video Input: Bosphorus 1080p - Video Preset: Faster (Frames Per Second, More Is Better): a: 25.96, b: 26.31 [1. (CXX) g++ options: -O3 -flto=auto -fno-fat-lto-objects]

RocksDB

This is a benchmark of Meta/Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

RocksDB 7.9.2, Test: Sequential Fill (Op/s, More Is Better): a: 1454965, b: 1441186 [1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread]

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.6, Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, More Is Better): a: 0.91, b: 0.92 [1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm]

uvg266

uvg266 is an open-source VVC/H.266 (Versatile Video Coding) encoder based on Kvazaar as part of the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

uvg266 0.4.1, Video Input: Bosphorus 4K - Video Preset: Very Fast (Frames Per Second, More Is Better): a: 30.93, b: 30.80

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.6, Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, More Is Better): a: 40.10, b: 40.55 [1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm]

uvg266

uvg266 is an open-source VVC/H.266 (Versatile Video Coding) encoder based on Kvazaar as part of the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

uvg266 0.4.1, Video Input: Bosphorus 4K - Video Preset: Super Fast (Frames Per Second, More Is Better): a: 32.98, b: 32.64

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.2, Video Input: Bosphorus 4K - Video Preset: Very Fast (Frames Per Second, More Is Better): a: 34.26, b: 34.51 [1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt]

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.

VP9 libvpx Encoding 1.13, Speed: Speed 5 - Input: Bosphorus 1080p (Frames Per Second, More Is Better): a: 36.07, b: 36.61 [1. (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=gnu++11]

uvg266

uvg266 is an open-source VVC/H.266 (Versatile Video Coding) encoder based on Kvazaar as part of the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

uvg266 0.4.1 (Frames Per Second, More Is Better):
  Video Input: Bosphorus 4K - Video Preset: Ultra Fast: a: 38.68, b: 38.61
  Video Input: Bosphorus 1080p - Video Preset: Slow: a: 41.50, b: 41.17

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.2, Video Input: Bosphorus 4K - Video Preset: Super Fast (Frames Per Second, More Is Better): a: 44.11, b: 44.13 [1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt]

uvg266

uvg266 is an open-source VVC/H.266 (Versatile Video Coding) encoder based on Kvazaar as part of the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

uvg266 0.4.1, Video Input: Bosphorus 1080p - Video Preset: Medium (Frames Per Second, More Is Better): a: 46.51, b: 46.30

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.6 (Frames Per Second, More Is Better):
  Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K: a: 55.92, b: 51.37
  Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K: a: 58.91, b: 59.54
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.2, Video Input: Bosphorus 4K - Video Preset: Ultra Fast (Frames Per Second, More Is Better): a: 57.01, b: 57.02 [1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt]

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.6 (Frames Per Second, More Is Better):
  Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K: a: 63.21, b: 64.48
  Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K: a: 63.05, b: 65.40
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.2 (Frames Per Second, More Is Better):
  Video Input: Bosphorus 1080p - Video Preset: Slow: a: 62.18, b: 62.39
  Video Input: Bosphorus 1080p - Video Preset: Medium: a: 64.52, b: 64.66
  1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.6 (Frames Per Second, More Is Better):
  Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p: a: 104.28, b: 99.56
  Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p: a: 115.45, b: 110.58
  Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p: a: 119.21, b: 116.63
  Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p: a: 123.88, b: 122.97
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

uvg266

uvg266 is an open-source VVC/H.266 (Versatile Video Coding) encoder based on Kvazaar as part of the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

uvg266 0.4.1 (Frames Per Second, More Is Better):
  Video Input: Bosphorus 1080p - Video Preset: Very Fast: a: 123.09, b: 123.24
  Video Input: Bosphorus 1080p - Video Preset: Super Fast: a: 131.42, b: 131.34

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.2, Video Input: Bosphorus 1080p - Video Preset: Very Fast (Frames Per Second, More Is Better): a: 135.84, b: 136.82 [1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt]

uvg266

uvg266 is an open-source VVC/H.266 (Versatile Video Coding) encoder based on Kvazaar as part of the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

uvg266 0.4.1, Video Input: Bosphorus 1080p - Video Preset: Ultra Fast (Frames Per Second, More Is Better): a: 151.22, b: 150.45

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.2 (Frames Per Second, More Is Better):
  Video Input: Bosphorus 1080p - Video Preset: Super Fast: a: 175.59, b: 176.19
  Video Input: Bosphorus 1080p - Video Preset: Ultra Fast: a: 218.35, b: 218.96
  1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

152 Results Shown

Timed Linux Kernel Compilation
BRL-CAD
OpenVKL:
  vklBenchmark Scalar
  vklBenchmark ISPC
ClickHouse:
  100M Rows Hits Dataset, Third Run
  100M Rows Hits Dataset, Second Run
  100M Rows Hits Dataset, First Run / Cold Cache
Apache Spark:
  40000000 - 100 - Broadcast Inner Join Test Time
  40000000 - 100 - Inner Join Test Time
  40000000 - 100 - Repartition Test Time
  40000000 - 100 - Group By Test Time
  40000000 - 100 - Calculate Pi Benchmark Using Dataframe
  40000000 - 100 - Calculate Pi Benchmark
  40000000 - 100 - SHA-512 Benchmark Time
  20000000 - 100 - Broadcast Inner Join Test Time
  20000000 - 100 - Inner Join Test Time
  20000000 - 100 - Repartition Test Time
  20000000 - 100 - Group By Test Time
  20000000 - 100 - Calculate Pi Benchmark Using Dataframe
  20000000 - 100 - Calculate Pi Benchmark
  20000000 - 100 - SHA-512 Benchmark Time
  10000000 - 100 - Broadcast Inner Join Test Time
  10000000 - 100 - Inner Join Test Time
  10000000 - 100 - Repartition Test Time
  10000000 - 100 - Group By Test Time
  10000000 - 100 - Calculate Pi Benchmark Using Dataframe
  10000000 - 100 - Calculate Pi Benchmark
  10000000 - 100 - SHA-512 Benchmark Time
GROMACS
VVenC
Apache Spark:
  1000000 - 100 - Broadcast Inner Join Test Time
  1000000 - 100 - Inner Join Test Time
  1000000 - 100 - Repartition Test Time
  1000000 - 100 - Group By Test Time
  1000000 - 100 - Calculate Pi Benchmark Using Dataframe
  1000000 - 100 - Calculate Pi Benchmark
  1000000 - 100 - SHA-512 Benchmark Time
CockroachDB:
  KV, 60% Reads - 1024
  KV, 50% Reads - 1024
  KV, 95% Reads - 1024
  KV, 10% Reads - 1024
  KV, 10% Reads - 128
  KV, 60% Reads - 128
  KV, 95% Reads - 128
  KV, 50% Reads - 128
  MoVR - 128
  MoVR - 1024
ONNX Runtime:
  fcn-resnet101-11 - CPU - Parallel:
    Inference Time Cost (ms)
    Inferences Per Second
  GPT-2 - CPU - Parallel:
    Inference Time Cost (ms)
    Inferences Per Second
  fcn-resnet101-11 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  bertsquad-12 - CPU - Parallel:
    Inference Time Cost (ms)
    Inferences Per Second
  GPT-2 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  yolov4 - CPU - Parallel:
    Inference Time Cost (ms)
    Inferences Per Second
  bertsquad-12 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  ArcFace ResNet-100 - CPU - Parallel:
    Inference Time Cost (ms)
    Inferences Per Second
  yolov4 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  ArcFace ResNet-100 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  Faster R-CNN R-50-FPN-int8 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  Faster R-CNN R-50-FPN-int8 - CPU - Parallel:
    Inference Time Cost (ms)
    Inferences Per Second
  CaffeNet 12-int8 - CPU - Parallel:
    Inference Time Cost (ms)
    Inferences Per Second
  CaffeNet 12-int8 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  ResNet50 v1-12-int8 - CPU - Parallel:
    Inference Time Cost (ms)
    Inferences Per Second
  ResNet50 v1-12-int8 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  super-resolution-10 - CPU - Parallel:
    Inference Time Cost (ms)
    Inferences Per Second
  super-resolution-10 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
AOM AV1
VP9 libvpx Encoding
Zstd Compression:
  19, Long Mode - Decompression Speed
  19, Long Mode - Compression Speed
Timed Linux Kernel Compilation
Zstd Compression:
  19 - Decompression Speed
  19 - Compression Speed
AOM AV1
VVenC
Zstd Compression:
  8 - Decompression Speed
  8 - Compression Speed
  3, Long Mode - Decompression Speed
  3, Long Mode - Compression Speed
  3 - Decompression Speed
  3 - Compression Speed
  8, Long Mode - Decompression Speed
  8, Long Mode - Compression Speed
  12 - Decompression Speed
  12 - Compression Speed
uvg266
RocksDB:
  Rand Fill Sync
  Rand Fill
  Update Rand
  Read Rand Write Rand
  Read While Writing
  Rand Read
uvg266
VVenC
AOM AV1
Kvazaar:
  Bosphorus 4K - Slow
  Bosphorus 4K - Medium
Embree:
  Pathtracer ISPC - Asian Dragon Obj
  Pathtracer - Asian Dragon Obj
VP9 libvpx Encoding
AOM AV1
VP9 libvpx Encoding
Embree:
  Pathtracer ISPC - Crown
  Pathtracer - Crown
  Pathtracer - Asian Dragon
  Pathtracer ISPC - Asian Dragon
VVenC
RocksDB
AOM AV1
uvg266
AOM AV1
uvg266
Kvazaar
VP9 libvpx Encoding
uvg266:
  Bosphorus 4K - Ultra Fast
  Bosphorus 1080p - Slow
Kvazaar
uvg266
AOM AV1:
  Speed 8 Realtime - Bosphorus 4K
  Speed 6 Realtime - Bosphorus 4K
Kvazaar
AOM AV1:
  Speed 9 Realtime - Bosphorus 4K
  Speed 10 Realtime - Bosphorus 4K
Kvazaar:
  Bosphorus 1080p - Slow
  Bosphorus 1080p - Medium
AOM AV1:
  Speed 9 Realtime - Bosphorus 1080p
  Speed 6 Realtime - Bosphorus 1080p
  Speed 10 Realtime - Bosphorus 1080p
  Speed 8 Realtime - Bosphorus 1080p
uvg266:
  Bosphorus 1080p - Very Fast
  Bosphorus 1080p - Super Fast
Kvazaar
uvg266
Kvazaar:
  Bosphorus 1080p - Super Fast
  Bosphorus 1080p - Ultra Fast