11900K Summer 2022

Intel Core i9-11900K testing with an ASUS ROG MAXIMUS XIII HERO (1007 BIOS) motherboard and ASUS Intel RKL GT1 3GB graphics on Ubuntu 21.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2206153-PTS-11900KSU17
Test categories represented in this result file:
AV1 (5 tests)
BLAS (Basic Linear Algebra Sub-Routine) (3 tests)
C++ Boost (2 tests)
Timed Code Compilation (2 tests)
C/C++ Compiler (9 tests)
CPU Massive (11 tests)
Creator Workloads (15 tests)
Database Test Suite (2 tests)
Encoding (8 tests)
Game Development (3 tests)
HPC - High Performance Computing (7 tests)
Imaging (2 tests)
Java (2 tests)
Common Kernel Benchmarks (3 tests)
LAPACK (Linear Algebra Pack) (2 tests)
Machine Learning (3 tests)
MPI Benchmarks (3 tests)
Multi-Core (16 tests)
NVIDIA GPU Compute (2 tests)
Intel oneAPI (3 tests)
OpenMPI (4 tests)
Programmer / Developer System Benchmarks (3 tests)
Python (4 tests)
Quantum Mechanics (2 tests)
Raytracing (2 tests)
Renderers (3 tests)
Scientific Computing (4 tests)
Server (3 tests)
Server CPU (10 tests)
Texture Compression (2 tests)
Video Encoding (8 tests)

Result Identifier | Date Run | Test Duration
A | June 14 2022 | 4 Hours, 38 Minutes
B | June 14 2022 | 4 Hours, 37 Minutes
C | June 15 2022 | 4 Hours, 37 Minutes
D | June 15 2022 | 2 Hours, 14 Minutes
Average Test Duration: 4 Hours, 2 Minutes



11900K Summer 2022 - System Details

Processor: Intel Core i9-11900K @ 5.10GHz (8 Cores / 16 Threads)
Motherboard: ASUS ROG MAXIMUS XIII HERO (1007 BIOS)
Chipset: Intel Tiger Lake-H
Memory: 32GB
Disk: 2000GB Corsair Force MP600
Graphics: ASUS Intel RKL GT1 3GB (1300MHz)
Audio: Intel Tiger Lake-H HD Audio
Monitor: MX279
Network: 2 x Intel I225-V + Intel Wi-Fi 6 AX210/AX211/AX411
OS: Ubuntu 21.10
Kernel: 5.15.0-051500rc7daily20211029-generic (x86_64) 20211028
Desktop: GNOME Shell 40.5
Display Server: X Server 1.20.13 + Wayland
OpenGL: 4.6 Mesa 22.0.0-devel (git-f13d486 2021-11-03 impish-oibaf-ppa)
Vulkan: 1.2.195
Compiler: GCC 11.2.0
File-System: ext4
Screen Resolution: 1920x1080

System Logs
- Transparent Huge Pages: madvise
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-ZPT0kp/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-ZPT0kp/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: intel_pstate powersave (EPP: balance_performance)
- CPU Microcode: 0x40
- Thermald 2.4.6
- OpenJDK Runtime Environment (build 11.0.15+10-Ubuntu-0ubuntu0.21.10.1)
- Python 3.9.7
- Security mitigations: itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling; srbds: Not affected; tsx_async_abort: Not affected

Result Overview (runs A/B/C/D, per-test results normalized from 100% to 122%): QMCPACK, SVT-VP9, Parallel BZIP2 Compression, SVT-AV1, oneDNN, Nettle, dav1d, x264, AOM AV1, libavif avifenc, perf-bench, Glibc Benchmarks, OSPray, SVT-HEVC, Timed Gem5 Compilation, libgav1, Timed MPlayer Compilation, simdjson, Etcpak, Renaissance, OSPray Studio, Quantum ESPRESSO
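The overview above normalizes each run against the slowest result per test (100%). When many heterogeneous benchmarks are folded into one figure, the geometric mean is the conventional aggregate, because ratios of geometric means do not depend on which run is chosen as the baseline. A minimal sketch with made-up per-test speedup ratios (illustrative values, not taken from this result file):

```python
import math

def geomean(values):
    """Geometric mean via a log-space sum: the standard aggregate
    for normalized (ratio-scale) benchmark scores."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Hypothetical per-test ratios of run B vs. run A (1.0 = identical speed)
ratios_b_vs_a = [1.10, 0.95, 1.22, 1.00]
overall = geomean(ratios_b_vs_a)  # single "overall speedup" figure
```

An arithmetic mean of ratios over-rewards a single large win; the geometric mean treats a 2x speedup and a 2x slowdown as cancelling out.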

[Condensed 11900K Summer 2022 results table: all benchmarks across configurations A/B/C/D; individual results follow below.]

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inference and training accelerator. This test profile runs ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: bertsquad-12 - Device: CPU - Executor: Standard
Inferences Per Minute (more is better): C: 642, B: 1010, A: 1015
(CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt
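The "Inferences Per Minute" figure is simply the reciprocal of mean per-inference latency scaled to sixty seconds. A rough timing sketch of that conversion; `run_inference` here is a hypothetical stand-in for an actual ONNX Runtime session call, which is not shown:

```python
import time

def inferences_per_minute(run_inference, warmup=3, iters=50):
    """Time a callable and convert its mean latency into the
    inferences-per-minute metric used above."""
    for _ in range(warmup):           # discard cold-start iterations
        run_inference()
    start = time.perf_counter()
    for _ in range(iters):
        run_inference()
    elapsed = time.perf_counter() - start
    return 60.0 * iters / elapsed     # calls per minute

# A per-inference latency of ~59 ms would land near run A's figure:
# 60 / 0.059 is roughly 1017 inferences per minute.
```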

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU
ms (fewer is better): A: 4358.46 (MIN: 3112.17), C: 3113.19 (MIN: 3107.55), D: 3110.84 (MIN: 3105.51), B: 3107.86 (MIN: 3100.51)
(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: Glibc C String Functions
Bogo Ops/s (more is better): B: 1359296.88, A: 1700830.81, C: 1733539.89
(CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lsctp -lz -pthread

oneDNN


oneDNN 2.6 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU
ms (fewer is better): A: 2245.91 (MIN: 1825.27), C: 1839.49 (MIN: 1826.96), B: 1833.40 (MIN: 1828.11), D: 1831.95 (MIN: 1828.17)
(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

QMCPACK

QMCPACK is a modern, high-performance, open-source Quantum Monte Carlo (QMC) simulation code, using MPI for this benchmark of the H2O example code. QMCPACK is a production-level many-body ab initio Quantum Monte Carlo code for computing the electronic structure of atoms, molecules, and solids, and is supported by the U.S. Department of Energy. Learn more via the OpenBenchmarking.org test page.

QMCPACK 3.13 - Input: simple-H2O
Total Execution Time - Seconds (fewer is better): A: 26.69, B: 22.65, D: 22.64, C: 21.87
(CXX) g++ options: -finline-limit=1000 -fstrict-aliasing -funroll-all-loops -ffast-math -march=native -O3 -lm -ldl

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of its Open Visual Cloud / Scalable Video Technology (SVT); development has since moved to the Alliance for Open Media as part of upstream AV1 work. The test runs the CPU-based multi-threaded AV1 encoder against a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.0 - Encoder Mode: Preset 10 - Input: Bosphorus 4K
Frames Per Second (more is better): A: 67.48, B: 79.17, C: 79.41, D: 79.62
(CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq -pie

SVT-VP9

This is a test of SVT-VP9, the Intel Open Visual Cloud Scalable Video Technology CPU-based multi-threaded encoder for the VP9 video format, run against a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Tuning: PSNR/SSIM Optimized - Input: Bosphorus 4K
Frames Per Second (more is better): A: 51.50, B: 60.65, D: 60.66, C: 60.71
(CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

AOM AV1

AOM AV1 3.3 - Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K
Frames Per Second (more is better): C: 62.05, B: 71.70, D: 71.97, A: 72.48
(CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Renaissance

Renaissance is a suite of benchmarks designed to exercise the JVM, with workloads ranging from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Scala Dotty
ms (fewer is better): D: 703.8 (MIN: 522.46 / MAX: 1350.04), C: 689.1 (MIN: 509.57 / MAX: 1367.83), A: 636.7 (MIN: 526.61 / MAX: 1306.21), B: 616.2 (MIN: 507.78 / MAX: 1396.3)

Stress-NG


Stress-NG 0.14 - Test: Futex
Bogo Ops/s (more is better): A: 2019792.94, B: 2102295.18, C: 2247778.96
(CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lsctp -lz -pthread

SVT-VP9


SVT-VP9 0.3 - Tuning: VMAF Optimized - Input: Bosphorus 4K
Frames Per Second (more is better): A: 49.54, C: 53.02, D: 53.77, B: 54.02
(CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

Renaissance


Renaissance 0.14 - Test: Savina Reactors.IO
ms (fewer is better): A: 7837.9 (MIN: 7837.85 / MAX: 11331.76), D: 7760.0 (MAX: 10905.77), B: 7456.8 (MAX: 11038.64), C: 7232.3 (MIN: 6221.45 / MAX: 10912.79)

Renaissance 0.14 - Test: Apache Spark PageRank
ms (fewer is better): B: 2757.9 (MIN: 2482.23 / MAX: 2957.59), D: 2719.0 (MIN: 2491.58 / MAX: 2936.02), A: 2693.3 (MIN: 2463.11 / MAX: 2826.4), C: 2560.8 (MIN: 2382.1 / MAX: 2770.21)

SVT-VP9


SVT-VP9 0.3 - Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080p
Frames Per Second (more is better): A: 183.60, C: 196.43, D: 196.51, B: 197.19
(CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

AOM AV1

AOM AV1 3.3 - Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K
Frames Per Second (more is better): A: 64.44, D: 67.18, C: 68.35, B: 68.64
(CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Stress-NG


Stress-NG 0.14 - Test: Atomic
Bogo Ops/s (more is better): B: 344544.20, A: 357591.98, C: 366961.11
(CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lsctp -lz -pthread

AOM AV1

AOM AV1 3.3 - Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p
Frames Per Second (more is better): A: 176.81, C: 186.32, D: 187.30, B: 188.18
(CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

perf-bench

This test profile is used for running Linux perf-bench, the benchmark support within the Linux kernel's perf tool. Learn more via the OpenBenchmarking.org test page.

perf-bench - Benchmark: Memset 1MB
GB/sec (more is better): D: 54.12, A: 54.19, C: 55.72, B: 57.45
(CC) gcc options: -O6 -ggdb3 -funwind-tables -std=gnu99 -lunwind-x86_64 -lunwind -llzma -Xlinker -lpthread -lrt -lm -ldl -lelf -lcrypto -lpython3.9 -lcrypt -lutil -lz -lnuma

Glibc Benchmarks

The GNU C Library project provides the core libraries for the GNU system and GNU/Linux systems, as well as many other systems that use Linux as the kernel. These libraries provide critical APIs including ISO C11, POSIX.1-2008, BSD, OS-specific APIs and more. This test profile makes use of Glibc's "benchtests" integrated benchmark suite. Learn more via the OpenBenchmarking.org test page.

Glibc Benchmarks - Benchmark: sinh
ns (fewer is better): D: 18.14, A: 17.21, C: 17.18, B: 17.14
(CC) gcc options: -pie -nostdlib -nostartfiles -lgcc -lgcc_s
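Glibc's benchtests report per-call latency in nanoseconds by invoking each libm routine in a tight loop and dividing the elapsed time by the call count. The same methodology can be approximated from Python, though interpreter overhead dominates, so the absolute numbers land far above the ~17 ns glibc measures for `sinh` here; this is only a sketch of the measurement idea:

```python
import math
import timeit

# Time many calls and divide: per-call latency in nanoseconds.
CALLS = 200_000
total_s = timeit.timeit(lambda: math.sinh(1.5), number=CALLS)
ns_per_call = total_s / CALLS * 1e9   # includes Python call overhead
```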

perf-bench


perf-bench - Benchmark: Futex Hash
ops/sec (more is better): D: 6460586, B: 6685401, C: 6810542, A: 6819705
(CC) gcc options: -O6 -ggdb3 -funwind-tables -std=gnu99 -lunwind-x86_64 -lunwind -llzma -Xlinker -lpthread -lrt -lm -ldl -lelf -lcrypto -lpython3.9 -lcrypt -lutil -lz -lnuma

Stress-NG


Stress-NG 0.14 - Test: IO_uring
Bogo Ops/s (more is better): A: 24227.96, C: 25003.00, B: 25422.10
(CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lsctp -lz -pthread

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 1.0 - Video Input: Summer Nature 4K
FPS (more is better): D: 189.52, C: 189.63, B: 194.82, A: 198.82
(CC) gcc options: -pthread -lm

oneDNN


oneDNN 2.6 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU
ms (fewer is better): A: 8.06688 (MIN: 4.81), B: 8.03869 (MIN: 4.84), D: 7.72180 (MIN: 4.81), C: 7.70230 (MIN: 4.77)
(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

oneDNN 2.6 - Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU
ms (fewer is better): C: 5.44701 (MIN: 4.95), A: 5.38692 (MIN: 4.8), D: 5.22850 (MIN: 4.75), B: 5.20558 (MIN: 4.65)
(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

Nettle

GNU Nettle is a low-level cryptographic library used by GnuTLS and other software. Learn more via the OpenBenchmarking.org test page.

Nettle 3.8 - Test: poly1305-aes
Mbyte/s (more is better): B: 4442.12, C: 4637.23, D: 4638.26, A: 4641.58
(CC) gcc options: -O2 -ggdb3 -lnettle -lm -lcrypto

dav1d


dav1d 1.0 - Video Input: Chimera 1080p 10-bit
FPS (more is better): C: 471.44, D: 476.40, B: 490.12, A: 492.46
(CC) gcc options: -pthread -lm

Parallel BZIP2 Compression

This test measures the time needed to compress a file (FreeBSD-13.0-RELEASE-amd64-memstick.img) using Parallel BZIP2 compression. Learn more via the OpenBenchmarking.org test page.

Parallel BZIP2 Compression 1.1.13 - FreeBSD-13.0-RELEASE-amd64-memstick.img Compression
Seconds (fewer is better): B: 8.053, C: 8.033, D: 7.764, A: 7.734
(CXX) g++ options: -O2 -pthread -lbz2 -lpthread
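Parallel BZIP2 gets its speedup from the bzip2 format's independently compressed blocks: the input is split into chunks, each chunk is compressed on its own thread, and the resulting streams are concatenated, which standard bunzip2 can still decode. A toy sketch of that scheme using Python's standard library (the chunk size and worker count are arbitrary illustrative choices, not pbzip2's defaults):

```python
import bz2
from concurrent.futures import ThreadPoolExecutor

def parallel_bz2(data: bytes, chunk_size: int = 900_000, workers: int = 4) -> bytes:
    """Compress chunks independently, pbzip2-style, and concatenate the
    resulting bzip2 streams. CPython's bz2 releases the GIL during
    compression, so the threads can genuinely overlap."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return b"".join(pool.map(bz2.compress, chunks))

# bz2.decompress accepts concatenated streams, so the output round-trips:
payload = b"example payload " * 100_000
restored = bz2.decompress(parallel_bz2(payload))
```

Splitting the input costs a little compression ratio (each block starts with an empty dictionary), which is the classic throughput-vs-ratio trade pbzip2 makes.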

SVT-HEVC

This is a test of SVT-HEVC, the Intel Open Visual Cloud Scalable Video Technology CPU-based multi-threaded encoder for the HEVC / H.265 video format, run against a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 7 - Input: Bosphorus 4K
Frames Per Second (more is better): C: 42.38, D: 42.54, B: 42.55, A: 43.99
(CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

SVT-AV1


SVT-AV1 1.0 - Encoder Mode: Preset 12 - Input: Bosphorus 1080p
Frames Per Second (more is better): A: 408.32, C: 419.71, D: 422.85, B: 423.77
(CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq -pie

Renaissance


Renaissance 0.14 - Test: Genetic Algorithm Using Jenetics + Futures
ms (fewer is better): C: 1344.6 (MIN: 1327.22 / MAX: 1358.81), A: 1326.7 (MIN: 1307.06 / MAX: 1345.19), D: 1308.1 (MIN: 1265.88 / MAX: 1358.77), B: 1297.7 (MIN: 1270.51 / MAX: 1321.22)

Nettle


Nettle 3.8 - Test: sha512
Mbyte/s (more is better): B: 721.29, A: 722.49, D: 726.78, C: 747.25
(CC) gcc options: -O2 -ggdb3 -lnettle -lm -lcrypto

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as ultimately the successor to WebP. WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading compared to WebP. Learn more via the OpenBenchmarking.org test page.

WebP2 Image Encode 20220422 - Encode Settings: Default
Seconds (fewer is better): C: 3.368, B: 3.295, A: 3.251
(CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl

SVT-AV1


SVT-AV1 1.0 - Encoder Mode: Preset 10 - Input: Bosphorus 1080p
Frames Per Second (more is better): D: 219.67, A: 225.40, C: 226.72, B: 227.35
(CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq -pie

AOM AV1

AOM AV1 3.3 - Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p
Frames Per Second (more is better): C: 11.90, A: 11.93, B: 11.94, D: 12.31
(CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

x264

This is a multi-threaded test of the x264 video encoder run on the CPU with a choice of 1080p or 4K video input. Learn more via the OpenBenchmarking.org test page.

x264 2022-02-22 - Video Input: Bosphorus 1080p
Frames Per Second (more is better): C: 125.68, D: 128.89, B: 129.21, A: 129.99
(CC) gcc options: -ldl -m64 -lm -lpthread -O3 -flto

SVT-AV1


SVT-AV1 1.0 - Encoder Mode: Preset 12 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
A: 103.41 | B: 106.23 | D: 106.51 | C: 106.92
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq -pie

Facebook RocksDB

This is a benchmark of Facebook's RocksDB, an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

Facebook RocksDB 7.0.1 - Test: Read While Writing (Op/s, More Is Better)
A: 2152704 | B: 2199309 | C: 2223231
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

TensorFlow Lite

This is a benchmark of TensorFlow Lite, the TensorFlow implementation focused on machine learning for mobile, IoT, edge, and similar use cases. Current Linux support is limited to running on CPUs. This test profile measures the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2022-05-18 - Model: Mobilenet Quant (Microseconds, Fewer Is Better)
B: 2997.03 | A: 2930.31 | C: 2902.28

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
C: 1.51319 (MIN: 1.43) | D: 1.50262 (MIN: 1.4) | A: 1.47653 (MIN: 1.39) | B: 1.46563 (MIN: 1.39)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

AOM AV1

AOM AV1 3.3 - Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
D: 197.60 | C: 201.74 | A: 203.50 | B: 203.95
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Glibc Benchmarks

The GNU C Library project provides the core libraries for the GNU system and GNU/Linux systems, as well as many other systems that use Linux as the kernel. These libraries provide critical APIs including ISO C11, POSIX.1-2008, BSD, OS-specific APIs and more. This test profile makes use of Glibc's "benchtests" integrated benchmark suite. Learn more via the OpenBenchmarking.org test page.

Glibc Benchmarks - Benchmark: sin (ns, Fewer Is Better)
C: 46.86 | B: 46.37 | D: 45.56 | A: 45.56
1. (CC) gcc options: -pie -nostdlib -nostartfiles -lgcc -lgcc_s

Glibc Benchmarks - Benchmark: atanh (ns, Fewer Is Better)
C: 27.77 | D: 27.35 | B: 27.03 | A: 27.02
1. (CC) gcc options: -pie -nostdlib -nostartfiles -lgcc -lgcc_s

perf-bench

This test profile is used for running Linux perf-bench, the benchmark support within the Linux kernel's perf tool. Learn more via the OpenBenchmarking.org test page.

perf-bench - Benchmark: Epoll Wait (ops/sec, More Is Better)
A: 127515 | C: 128524 | B: 130239 | D: 131078
1. (CC) gcc options: -O6 -ggdb3 -funwind-tables -std=gnu99 -lunwind-x86_64 -lunwind -llzma -Xlinker -lpthread -lrt -lm -ldl -lelf -lcrypto -lpython3.9 -lcrypt -lutil -lz -lnuma

TensorFlow Lite

This is a benchmark of TensorFlow Lite, the TensorFlow implementation focused on machine learning for mobile, IoT, edge, and similar use cases. Current Linux support is limited to running on CPUs. This test profile measures the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2022-05-18 - Model: NASNet Mobile (Microseconds, Fewer Is Better)
A: 7571.79 | C: 7532.16 | B: 7367.44

Glibc Benchmarks

The GNU C Library project provides the core libraries for the GNU system and GNU/Linux systems, as well as many other systems that use Linux as the kernel. These libraries provide critical APIs including ISO C11, POSIX.1-2008, BSD, OS-specific APIs and more. This test profile makes use of Glibc's "benchtests" integrated benchmark suite. Learn more via the OpenBenchmarking.org test page.

Glibc Benchmarks - Benchmark: exp (ns, Fewer Is Better)
D: 10.30 | C: 10.28 | A: 10.04 | B: 10.03
1. (CC) gcc options: -pie -nostdlib -nostartfiles -lgcc -lgcc_s

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
B: 3189.95 (MIN: 3078.21) | A: 3118.35 (MIN: 3113.21) | D: 3106.54 (MIN: 3101.69) | C: 3106.14 (MIN: 3100.97)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

TensorFlow Lite

This is a benchmark of TensorFlow Lite, the TensorFlow implementation focused on machine learning for mobile, IoT, edge, and similar use cases. Current Linux support is limited to running on CPUs. This test profile measures the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2022-05-18 - Model: Mobilenet Float (Microseconds, Fewer Is Better)
B: 1459.18 | A: 1437.54 | C: 1422.45

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: bertsquad-12 - Device: CPU - Executor: Parallel (Inferences Per Minute, More Is Better)
B: 500 | C: 511 | A: 512
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

perf-bench

This test profile is used for running Linux perf-bench, the benchmark support within the Linux kernel's perf tool. Learn more via the OpenBenchmarking.org test page.

perf-bench - Benchmark: Sched Pipe (ops/sec, More Is Better)
B: 315267 | C: 316210 | A: 318866 | D: 322747
1. (CC) gcc options: -O6 -ggdb3 -funwind-tables -std=gnu99 -lunwind-x86_64 -lunwind -llzma -Xlinker -lpthread -lrt -lm -ldl -lelf -lcrypto -lpython3.9 -lcrypt -lutil -lz -lnuma

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of its Open Visual Cloud / Scalable Video Technology (SVT); development has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based, multi-threaded video encoder for the AV1 format, tested here with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.0 - Encoder Mode: Preset 4 - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
C: 6.636 | A: 6.689 | B: 6.740 | D: 6.772
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq -pie

Glibc Benchmarks

The GNU C Library project provides the core libraries for the GNU system and GNU/Linux systems, as well as many other systems that use Linux as the kernel. These libraries provide critical APIs including ISO C11, POSIX.1-2008, BSD, OS-specific APIs and more. This test profile makes use of Glibc's "benchtests" integrated benchmark suite. Learn more via the OpenBenchmarking.org test page.

Glibc Benchmarks - Benchmark: pthread_once (ns, Fewer Is Better)
C: 3.38731 | B: 3.32250 | D: 3.32184 | A: 3.32140
1. (CC) gcc options: -pie -nostdlib -nostartfiles -lgcc -lgcc_s

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: NUMA (Bogo Ops/s, More Is Better)
C: 314.08 | B: 316.03 | A: 320.08
1. (CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lsctp -lz -pthread

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM, with workloads ranging from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Apache Spark ALS (ms, Fewer Is Better)
D: 20797.9 (MIN: 20754.97 / MAX: 20902.04) | C: 20526.5 (MIN: 20468.32 / MAX: 20594.78) | B: 20524.6 (MIN: 20452.4 / MAX: 20618.01) | A: 20416.4 (MIN: 20337.14 / MAX: 20472.95)

Renaissance 0.14 - Test: Apache Spark Bayes (ms, Fewer Is Better)
C: 1389.8 (MIN: 1032.48 / MAX: 1390.69) | A: 1380.4 (MIN: 1030.73) | B: 1371.9 (MIN: 1025.2) | D: 1365.4 (MIN: 1013.57)

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of its Open Visual Cloud / Scalable Video Technology (SVT); development has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based, multi-threaded video encoder for the AV1 format, tested here with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.0 - Encoder Mode: Preset 8 - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
B: 109.33 | D: 110.00 | A: 110.73 | C: 111.24
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq -pie

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.10 - Encoder Speed: 6, Lossless (Seconds, Fewer Is Better)
D: 12.28 | A: 12.22 | C: 12.15 | B: 12.08
1. (CXX) g++ options: -O3 -fPIC -lm

AOM AV1

AOM AV1 3.3 - Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
B: 143.21 | C: 143.74 | D: 143.85 | A: 145.53
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Facebook RocksDB

This is a benchmark of Facebook's RocksDB, an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

Facebook RocksDB 7.0.1 - Test: Random Read (Op/s, More Is Better)
C: 55668863 | B: 56499644 | A: 56567623
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: Context Switching (Bogo Ops/s, More Is Better)
A: 1924100.28 | C: 1946582.89 | B: 1955167.18
1. (CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lsctp -lz -pthread

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM, with workloads ranging from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Akka Unbalanced Cobwebbed Tree (ms, Fewer Is Better)
C: 9803.8 (MIN: 7656.59 / MAX: 9803.83) | A: 9752.9 (MIN: 7618.53) | B: 9690.2 (MIN: 7503.63) | D: 9650.6 (MIN: 7528)

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: Forking (Bogo Ops/s, More Is Better)
A: 76531.56 | C: 77378.24 | B: 77706.13
1. (CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lsctp -lz -pthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
D: 0.726471 (MIN: 0.67) | A: 0.720371 (MIN: 0.67) | C: 0.716538 (MIN: 0.66) | B: 0.715645 (MIN: 0.67)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

SVT-VP9

This is a test of SVT-VP9, the Intel Open Visual Cloud Scalable Video Technology CPU-based, multi-threaded video encoder for the VP9 format, tested with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Tuning: VMAF Optimized - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
A: 180.10 | D: 182.34 | C: 182.43 | B: 182.72
1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel (Inferences Per Minute, More Is Better)
A: 70 | C: 70 | B: 71
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing is currently supported via NVIDIA OptiX and NVIDIA CUDA, as well as HIP for AMD Radeon GPUs. Learn more via the OpenBenchmarking.org test page.

Blender 3.2 - Blend File: Classroom - Compute: CPU-Only (Seconds, Fewer Is Better)
B: 365.15 | C: 361.23 | A: 360.15

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of its Open Visual Cloud / Scalable Video Technology (SVT); development has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based, multi-threaded video encoder for the AV1 format, tested here with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.0 - Encoder Mode: Preset 4 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
A: 2.261 | B: 2.272 | D: 2.284 | C: 2.292
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq -pie

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: super-resolution-10 - Device: CPU - Executor: Parallel (Inferences Per Minute, More Is Better)
B: 4525 | A: 4547 | C: 4586
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM, with workloads ranging from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: ALS Movie Lens (ms, Fewer Is Better)
B: 13827.4 (MIN: 13827.36 / MAX: 15305.31) | A: 13808.3 (MIN: 13808.28 / MAX: 15301.71) | C: 13710.1 (MIN: 13710.09 / MAX: 15212.77) | D: 13643.6 (MAX: 15019.04)

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 1.0 - Video Input: Summer Nature 1080p (FPS, More Is Better)
A: 893.33 | C: 903.95 | D: 904.36 | B: 904.56
1. (CC) gcc options: -pthread -lm

SVT-HEVC

This is a test of SVT-HEVC, the Intel Open Visual Cloud Scalable Video Technology CPU-based, multi-threaded video encoder for the HEVC / H.265 format, tested with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 10 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
D: 81.04 | A: 81.12 | C: 81.88 | B: 82.05
1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

libgav1

Libgav1 is an AV1 decoder developed by Google for AV1 profile 0/1 compliance. Learn more via the OpenBenchmarking.org test page.

libgav1 0.17 - Video Input: Chimera 1080p 10-bit (FPS, More Is Better)
B: 77.68 | C: 78.21 | A: 78.41 | D: 78.62
1. (CXX) g++ options: -O3 -lrt

perf-bench

This test profile is used for running Linux perf-bench, the benchmark support within the Linux kernel's perf tool. Learn more via the OpenBenchmarking.org test page.

perf-bench - Benchmark: Futex Lock-Pi (ops/sec, More Is Better)
B: 1076 | A: 1082 | C: 1087 | D: 1089
1. (CC) gcc options: -O6 -ggdb3 -funwind-tables -std=gnu99 -lunwind-x86_64 -lunwind -llzma -Xlinker -lpthread -lrt -lm -ldl -lelf -lcrypto -lpython3.9 -lcrypt -lutil -lz -lnuma

Glibc Benchmarks

The GNU C Library project provides the core libraries for the GNU system and GNU/Linux systems, as well as many other systems that use Linux as the kernel. These libraries provide critical APIs including ISO C11, POSIX.1-2008, BSD, OS-specific APIs and more. This test profile makes use of Glibc's "benchtests" integrated benchmark suite. Learn more via the OpenBenchmarking.org test page.

Glibc Benchmarks - Benchmark: cos (ns, Fewer Is Better)
A: 54.45 | B: 53.84 | C: 53.81 | D: 53.80
1. (CC) gcc options: -pie -nostdlib -nostartfiles -lgcc -lgcc_s

Glibc Benchmarks - Benchmark: ffs (ns, Fewer Is Better)
B: 3.37358 | D: 3.33422 | C: 3.33392 | A: 3.33365
1. (CC) gcc options: -pie -nostdlib -nostartfiles -lgcc -lgcc_s

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: CPU Cache (Bogo Ops/s, More Is Better)
C: 219.93 | A: 221.73 | B: 222.46
1. (CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lsctp -lz -pthread

Glibc Benchmarks

The GNU C Library project provides the core libraries for the GNU system and GNU/Linux systems, as well as many other systems that use Linux as the kernel. These libraries provide critical APIs including ISO C11, POSIX.1-2008, BSD, OS-specific APIs and more. This test profile makes use of Glibc's "benchtests" integrated benchmark suite. Learn more via the OpenBenchmarking.org test page.

Glibc Benchmarks - Benchmark: ffsll (ns, Fewer Is Better)
C: 3.37318 | B: 3.33819 | A: 3.33510 | D: 3.33486
1. (CC) gcc options: -pie -nostdlib -nostartfiles -lgcc -lgcc_s

SVT-HEVC

This is a test of SVT-HEVC, the Intel Open Visual Cloud Scalable Video Technology CPU-based, multi-threaded video encoder for the HEVC / H.265 format, tested with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 10 - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
C: 269.66 | A: 269.91 | B: 272.36 | D: 272.73
1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
D: 3.10236 (MIN: 3.03) | A: 3.08878 (MIN: 3.02) | C: 3.06944 (MIN: 3.03) | B: 3.06780 (MIN: 3.02)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

AOM AV1

AOM AV1 3.3 - Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K (Frames Per Second, More Is Better)
B: 9.79 | C: 9.79 | D: 9.81 | A: 9.90
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

AOM AV1 3.3 - Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better)
C: 16.03 | D: 16.06 | B: 16.13 | A: 16.21
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM, with workloads ranging from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: In-Memory Database Shootout (ms, Fewer Is Better)
C: 2381.9 (MIN: 2244.67 / MAX: 2504.25) | B: 2363.6 (MIN: 2214.57 / MAX: 2486.06) | D: 2355.9 (MIN: 2208.8 / MAX: 2429.73) | A: 2355.5 (MIN: 2184.59 / MAX: 2554.54)

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel (Inferences Per Minute, More Is Better)
C: 1255 | A: 1262 | B: 1269
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

OSPray

OSPray 2.9 - Benchmark: particle_volume/ao/real_time (Items Per Second, More Is Better)
B: 36.02 | C: 36.17 | D: 36.34 | A: 36.41

OSPray 2.9 - Benchmark: gravity_spheres_volume/dim_512/scivis/real_time (Items Per Second, More Is Better)
B: 4.18832 | C: 4.19096 | A: 4.20308 | D: 4.23414

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.10 - Encoder Speed: 0 (Seconds, Fewer Is Better)
C: 150.01 | D: 149.48 | A: 148.59 | B: 148.41
1. (CXX) g++ options: -O3 -fPIC -lm

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM, with workloads ranging from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Finagle HTTP Requests (ms, Fewer Is Better)
A: 2720.8 (MIN: 2537.94 / MAX: 2897.99) | B: 2710.6 (MIN: 2527.4 / MAX: 2864.03) | C: 2707.7 (MIN: 2487.03 / MAX: 2841.18) | D: 2692.1 (MIN: 2507.61 / MAX: 2821.13)

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.10 - Encoder Speed: 6 (Seconds, Fewer Is Better)
D: 9.467 | C: 9.462 | A: 9.426 | B: 9.370
1. (CXX) g++ options: -O3 -fPIC -lm

WebP2 Image Encode

This is a test of Google's libwebp2 library using the WebP2 image encode utility with a sample 6000x4000 pixel JPEG image as input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as the eventual successor to WebP. Compared to WebP, it supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation, and full multi-threading. Learn more via the OpenBenchmarking.org test page.

WebP2 Image Encode 20220422 - Encode Settings: Quality 75, Compression Effort 7 (Seconds, Fewer Is Better)
C: 181.39 | B: 180.44 | A: 179.59
1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl

OSPray Studio

Intel OSPray Studio is an open-source, interactive visualization and ray-tracing software package. OSPray Studio makes use of Intel OSPray, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray Studio 0.10 - Camera: 2 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer (ms, Fewer Is Better)
B: 1957 | C: 1956 | D: 1953 | A: 1938
1. (CXX) g++ options: -O3 -lm -ldl

TensorFlow Lite

This is a benchmark of TensorFlow Lite, the TensorFlow implementation focused on machine learning for mobile, IoT, edge, and similar use cases. Current Linux support is limited to running on CPUs. This test profile measures the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2022-05-18 - Model: Inception V4 (Microseconds, Fewer Is Better)
A: 26841.2 | C: 26584.0 | B: 26581.3

OSPray Studio

Intel OSPray Studio is an open-source, interactive visualization and ray-tracing software package. OSPray Studio makes use of Intel OSPray, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray Studio 0.10 - Camera: 2 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer (ms, Fewer Is Better)
B: 31244 | C: 31173 | A: 30947
1. (CXX) g++ options: -O3 -lm -ldl

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 1.0 - Video Input: Chimera 1080p (FPS, More Is Better)
B: 713.79 | A: 720.11 | C: 720.40 | D: 720.46
1. (CC) gcc options: -pthread -lm

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: GPT-2 - Device: CPU - Executor: Standard (Inferences Per Minute, More Is Better)
C: 6640 | A: 6701 | B: 6702
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

GROMACS

The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing with the water_GMX50 data. This test profile allows selecting between CPU and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.

GROMACS 2022.1 - Implementation: MPI CPU - Input: water_GMX50_bare (Ns Per Day, More Is Better)
A: 0.995 | B: 1.001 | C: 1.004
1. (CXX) g++ options: -O3

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: MEMFD (Bogo Ops/s, More Is Better)
B: 1004.36 | A: 1012.73 | C: 1013.30
1. (CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lsctp -lz -pthread

simdjson

This is a benchmark of SIMDJSON, a high-performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects such as Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 2.0 - Throughput Test: Kostya (GB/s, More Is Better)
C: 4.65 | D: 4.65 | B: 4.66 | A: 4.69
1. (CXX) g++ options: -O3

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.10 - Encoder Speed: 2 (Seconds, Fewer Is Better)
A: 70.49 | B: 70.03 | D: 69.98 | C: 69.90
1. (CXX) g++ options: -O3 -fPIC -lm

TensorFlow Lite

This is a benchmark of TensorFlow Lite, the TensorFlow implementation focused on machine learning for mobile, IoT, edge, and similar use cases. Current Linux support is limited to running on CPUs. This test profile measures the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2022-05-18 - Model: Inception ResNet V2 (Microseconds, Fewer Is Better)
A: 26210.1 | B: 26131.7 | C: 25991.2

AOM AV1

AOM AV1 3.3 - Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better)
D: 48.04 | C: 48.22 | A: 48.31 | B: 48.44
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

x264

This is a multi-threaded test of the x264 video encoder run on the CPU with a choice of 1080p or 4K video input. Learn more via the OpenBenchmarking.org test page.

x264 2022-02-22 - Video Input: Bosphorus 4K (Frames Per Second, More Is Better)
B: 30.80 | C: 30.89 | D: 30.95 | A: 31.05
1. (CC) gcc options: -ldl -m64 -lm -lpthread -O3 -flto

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of its Open Visual Cloud / Scalable Video Technology (SVT); development has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based, multi-threaded video encoder for the AV1 format, tested here with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.0 - Encoder Mode: Preset 8 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
B: 31.36 | D: 31.38 | A: 31.43 | C: 31.62
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq -pie

OSPray

OSPray 2.9 - Benchmark: gravity_spheres_volume/dim_512/ao/real_time (Items Per Second, More Is Better)
C: 4.22833 | A: 4.24782 | B: 4.24824 | D: 4.26224

OSPray 2.9 - Benchmark: particle_volume/scivis/real_time (Items Per Second, More Is Better)
B: 35.56 | A: 35.65 | C: 35.78 | D: 35.84

OSPray Studio

Intel OSPray Studio is an open-source, interactive visualization and ray-tracing software package. OSPray Studio makes use of Intel OSPray, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray Studio 0.10 - Camera: 3 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer (ms, Fewer Is Better)
B: 37170 | C: 37059 | A: 36885
1. (CXX) g++ options: -O3 -lm -ldl

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: Glibc Qsort Data Sorting (Bogo Ops/s, More Is Better)
A: 182.60 | C: 183.73 | B: 183.97
1. (CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lsctp -lz -pthread

TensorFlow Lite

This is a benchmark of TensorFlow Lite, the TensorFlow implementation focused on machine learning for mobile, IoT, edge, and similar use cases. Current Linux support is limited to running on CPUs. This test profile measures the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2022-05-18 - Model: SqueezeNet (Microseconds, Fewer Is Better)
A: 1822.43 | B: 1813.38 | C: 1808.92

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: Malloc (Bogo Ops/s, More Is Better)
A: 9452055.42 | B: 9467042.29 | C: 9521309.34
1. (CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lsctp -lz -pthread

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing is currently supported via NVIDIA OptiX and NVIDIA CUDA, as well as HIP for AMD Radeon GPUs. Learn more via the OpenBenchmarking.org test page.

Blender 3.2 - Blend File: Barbershop - Compute: CPU-Only (Seconds, Fewer Is Better)
C: 1455.10 | A: 1447.66 | B: 1444.64

InfluxDB

This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.

InfluxDB 1.8.2 - Concurrent Streams: 64 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, more is better): B: 1805397.9, C: 1814759.0, A: 1818333.8
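InfluxDB Inch generates batches of points in InfluxDB's line protocol (`measurement,tags fields timestamp`). A hedged sketch of what such a batch might look like — the measurement, tag, and field names below are made up for illustration; only the three-tag-key shape loosely mirrors the "Tags: 2,5000,1" cardinality configuration:

```python
def make_points(batch_size=5, series=0):
    """Build InfluxDB line-protocol strings: measurement,tags fields timestamp.

    Names/values are illustrative only; Inch generates its own
    synthetic series according to the tag-cardinality setting.
    """
    lines = []
    for i in range(batch_size):
        tags = f"tag0={series % 2},tag1={series % 5000},tag2=0"
        fields = f"v={float(i)}"
        timestamp = 1_650_000_000_000_000_000 + i  # nanosecond precision
        lines.append(f"m0,{tags} {fields} {timestamp}")
    return lines

batch = make_points()
print(batch[0])
```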

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as ultimately the successor to WebP. Compared to WebP, WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading. Learn more via the OpenBenchmarking.org test page.

WebP2 Image Encode 20220422 - Encode Settings: Quality 100, Lossless Compression (Seconds, fewer is better): C: 792.95, A: 791.39, B: 787.34. [(CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl]

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

Facebook RocksDB 7.0.1 - Test: Read Random Write Random (Op/s, more is better): A: 2027221, C: 2041553, B: 2041642. [(CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread]
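The "Read Random Write Random" workload interleaves random point reads and random writes against the store. A sketch of that access pattern, using a plain dict as a stand-in for RocksDB (the read/write ratio and key space here are illustrative choices, not db_bench's defaults):

```python
import random

def read_random_write_random(ops=10_000, key_space=1_000, read_pct=90, seed=42):
    """Mixed random read/write workload against a key-value store.

    A dict stands in for RocksDB; the benchmark issues the same kind of
    interleaved random gets and puts and reports the op/s rate.
    """
    rng = random.Random(seed)
    store = {}
    reads = writes = hits = 0
    for _ in range(ops):
        key = rng.randrange(key_space)
        if rng.randrange(100) < read_pct:
            reads += 1
            if key in store:  # random point read; may miss early on
                hits += 1
        else:
            writes += 1
            store[key] = b"value"  # random point write
    return reads, writes, hits

reads, writes, hits = read_random_write_random()
print(reads, writes, hits)
```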

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.10 - Encoder Speed: 10, Lossless (Seconds, fewer is better): D: 5.386, C: 5.381, B: 5.362, A: 5.349. [(CXX) g++ options: -O3 -fPIC -lm]

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: super-resolution-10 - Device: CPU - Executor: Standard (Inferences Per Minute, more is better): B: 4528, C: 4548, A: 4558. [(CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt]

ONNX Runtime 1.11 - Model: yolov4 - Device: CPU - Executor: Parallel (Inferences Per Minute, more is better): A: 306, C: 306, B: 308. [(CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt]

Glibc Benchmarks

The GNU C Library project provides the core libraries for the GNU system and GNU/Linux systems, as well as many other systems that use Linux as the kernel. These libraries provide critical APIs including ISO C11, POSIX.1-2008, BSD, OS-specific APIs and more. This test profile makes use of Glibc's "benchtests" integrated benchmark suite. Learn more via the OpenBenchmarking.org test page.

Glibc Benchmarks - Benchmark: log2 (ns, fewer is better): D: 15.24, A: 15.15, C: 15.14, B: 15.14. [(CC) gcc options: -pie -nostdlib -nostartfiles -lgcc -lgcc_s]
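Glibc's benchtests time individual library calls and report nanoseconds per call, as in the log2 result above. The measurement itself can be sketched in Python (math.log2 ultimately dispatches to the platform's C math library, but interpreter overhead means the absolute numbers will not match benchtests'):

```python
import math
import time

def ns_per_call(fn, arg, iters=100_000):
    """Average wall-clock nanoseconds per call, benchtests-style."""
    start = time.perf_counter_ns()
    for _ in range(iters):
        fn(arg)
    return (time.perf_counter_ns() - start) / iters

print(f"log2: {ns_per_call(math.log2, 12345.0):.2f} ns per call")
```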

AOM AV1

AOM AV1 3.3 - Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, more is better): C: 11.25, B: 11.27, D: 11.27, A: 11.32. [(CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm]

OSPray

OSPray 2.9 - Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time (Items Per Second, more is better): B: 5.06027, D: 5.08714, C: 5.08966, A: 5.09145

Timed Gem5 Compilation

This test times how long it takes to compile Gem5. Gem5 is a simulator for computer system architecture research and is widely used across industry and academia. Learn more via the OpenBenchmarking.org test page.

Timed Gem5 Compilation 21.2 - Time To Compile (Seconds, fewer is better): B: 378.93, C: 378.81, D: 378.74, A: 376.68

AOM AV1

AOM AV1 3.3 - Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, more is better): D: 30.29, B: 30.30, C: 30.33, A: 30.47. [(CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm]

Google Draco

Draco is a library developed by Google for compressing/decompressing 3D geometric meshes and point clouds. This test profile uses some Artec3D PLY models as the sample 3D model input formats for Draco compression/decompression. Learn more via the OpenBenchmarking.org test page.

Google Draco 1.5.0 - Model: Lion (ms, fewer is better): A: 3569, B: 3562, C: 3548. [(CXX) g++ options: -O3]

InfluxDB

This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.

InfluxDB 1.8.2 - Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, more is better): B: 1804674.6, A: 1806268.7, C: 1815340.8

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms, fewer is better): C: 4.02480 (min: 3.92), A: 4.01177 (min: 3.9), B: 4.00817 (min: 3.9), D: 4.00116 (min: 3.91). [(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread]

simdjson

This is a benchmark of simdjson, a high-performance JSON parser. simdjson aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 2.0 - Throughput Test: TopTweet (GB/s, more is better): D: 8.47, C: 8.51, A: 8.52, B: 8.52. [(CXX) g++ options: -O3]

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): D: 1.30110 (min: 1.25), C: 1.29680 (min: 1.24), A: 1.29412 (min: 1.25), B: 1.29350 (min: 1.25). [(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread]

AOM AV1

AOM AV1 3.3 - Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K (Frames Per Second, more is better): C: 5.14, B: 5.15, D: 5.15, A: 5.17. [(CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm]

OSPray Studio

Intel OSPray Studio is an open-source, interactive visualization and ray-tracing software package. OSPray Studio makes use of Intel OSPray, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray Studio 0.10 - Camera: 1 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer (ms, fewer is better): A: 1918, D: 1912, C: 1911, B: 1907. [(CXX) g++ options: -O3 -lm -ldl]

Nettle

GNU Nettle is a low-level cryptographic library used by GnuTLS and other software. Learn more via the OpenBenchmarking.org test page.

Nettle 3.8 - Test: chacha (Mbyte/s, more is better): B: 1511.89 (min: 679.88 / max: 4633.29), C: 1517.83 (min: 679.93 / max: 4633.69), A: 1519.06 (min: 679.9 / max: 4635.71), D: 1520.61 (min: 680.19 / max: 4644.14). [(CC) gcc options: -O2 -ggdb3 -lnettle -lm -lcrypto]

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: Crypto (Bogo Ops/s, more is better): B: 13448.13, C: 13507.57, A: 13517.57. [(CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lsctp -lz -pthread]

Quantum ESPRESSO

Quantum ESPRESSO is an integrated suite of open-source computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials. Learn more via the OpenBenchmarking.org test page.

Quantum ESPRESSO 7.0 - Input: AUSURF112 (Seconds, fewer is better): A: 1542.31, D: 1536.74, C: 1535.90, B: 1534.64. [(F9X) gfortran options: -pthread -fopenmp -ldevXlib -lopenblas -lFoX_dom -lFoX_sax -lFoX_wxml -lFoX_common -lFoX_utils -lFoX_fsys -lfftw3_omp -lfftw3 -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz]

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

Facebook RocksDB 7.0.1 - Test: Update Random (Op/s, more is better): C: 700065, B: 703316, A: 703459. [(CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread]

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 7 - Input: Bosphorus 1080p (Frames Per Second, more is better): A: 137.02, B: 137.36, C: 137.39, D: 137.68. [(CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt]

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: SENDFILE (Bogo Ops/s, more is better): B: 228041.03, C: 228816.34, A: 229138.15. [(CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lsctp -lz -pthread]

Stress-NG 0.14 - Test: CPU Stress (Bogo Ops/s, more is better): B: 23119.63, C: 23206.52, A: 23228.30. [(CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lsctp -lz -pthread]

Stress-NG 0.14 - Test: Socket Activity (Bogo Ops/s, more is better): C: 9236.06, B: 9245.36, A: 9279.07. [(CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lsctp -lz -pthread]

perf-bench

This test profile is used for running Linux perf-bench, the benchmark support within the Linux kernel's perf tool. Learn more via the OpenBenchmarking.org test page.

perf-bench - Benchmark: Memcpy 1MB (GB/sec, more is better): D: 31.36, B: 31.36, A: 31.36, C: 31.49. [(CC) gcc options: -O6 -ggdb3 -funwind-tables -std=gnu99 -lunwind-x86_64 -lunwind -llzma -Xlinker -lpthread -lrt -lm -ldl -lelf -lcrypto -lpython3.9 -lcrypt -lutil -lz -lnuma]
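The memcpy benchmark copies a fixed-size buffer repeatedly and reports GB/sec of data moved. A rough Python equivalent using a memoryview bulk copy (the 1 MB buffer matches the "Memcpy 1MB" configuration, but interpreter overhead means the figure is not comparable to perf's):

```python
import time

def memcpy_gb_per_sec(size=1 << 20, iters=200):
    """Copy a `size`-byte buffer `iters` times and report GB/s."""
    src = bytes(size)
    dst = bytearray(size)
    view = memoryview(dst)
    start = time.perf_counter()
    for _ in range(iters):
        view[:] = src  # bulk copy, like memcpy(dst, src, size)
    elapsed = time.perf_counter() - start
    return (size * iters) / elapsed / 1e9

print(f"{memcpy_gb_per_sec():.2f} GB/s")
```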

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: Matrix Math (Bogo Ops/s, more is better): B: 57205.06, C: 57451.66, A: 57452.05. [(CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lsctp -lz -pthread]

libgav1

Libgav1 is an AV1 decoder developed by Google for AV1 profile 0/1 compliance. Learn more via the OpenBenchmarking.org test page.

libgav1 0.17 - Video Input: Chimera 1080p (FPS, more is better): B: 260.27, C: 260.44, D: 260.77, A: 261.38. [(CXX) g++ options: -O3 -lrt]

Nettle

GNU Nettle is a low-level cryptographic library used by GnuTLS and other software. Learn more via the OpenBenchmarking.org test page.

Nettle 3.8 - Test: aes256 (Mbyte/s, more is better): D: 12155.79 (min: 7981.08 / max: 20658.54), C: 12195.87 (min: 7973.01 / max: 20659.96), B: 12205.23 (min: 7983.96 / max: 20655.99), A: 12207.52 (min: 7986.83 / max: 20656.74). [(CC) gcc options: -O2 -ggdb3 -lnettle -lm -lcrypto]

OSPray Studio

Intel OSPray Studio is an open-source, interactive visualization and ray-tracing software package. OSPray Studio makes use of Intel OSPray, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray Studio 0.10 - Camera: 2 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, fewer is better): B: 65765, A: 65641, C: 65487. [(CXX) g++ options: -O3 -lm -ldl]

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet," focused on providing open-source, very fast ETC and S3 texture compression support. The test profile uses an 8K x 8K game texture as a sample input. Learn more via the OpenBenchmarking.org test page.

Etcpak 1.0 - Benchmark: Single-Threaded - Configuration: ETC2 (Mpx/s, more is better): D: 306.41, C: 306.91, B: 307.01, A: 307.60. [(CXX) g++ options: -O3 -march=native -std=c++11 -lpthread]

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as ultimately the successor to WebP. Compared to WebP, WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading. Learn more via the OpenBenchmarking.org test page.

WebP2 Image Encode 20220422 - Encode Settings: Quality 95, Compression Effort 7 (Seconds, fewer is better): A: 371.22, B: 370.69, C: 369.82. [(CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl]

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Random Forest (ms, fewer is better): C: 481.8 (min: 437.62 / max: 607.1), A: 480.8 (min: 434.42 / max: 584.45), D: 480.4 (min: 443.53 / max: 630.56), B: 480.0 (min: 439.84 / max: 608)

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: GPT-2 - Device: CPU - Executor: Parallel (Inferences Per Minute, more is better): C: 5376, A: 5384, B: 5396. [(CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt]

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, fewer is better): C: 1832.56 (min: 1827.43), B: 1831.92 (min: 1827.22), A: 1831.84 (min: 1827.31), D: 1825.78 (min: 1821.07). [(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread]

oneDNN 2.6 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): D: 3.44775 (min: 3.28), B: 3.44465 (min: 3.29), A: 3.43567 (min: 3.29), C: 3.43528 (min: 3.29). [(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread]

GPAW

GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE). Learn more via the OpenBenchmarking.org test page.

GPAW 22.1 - Input: Carbon Nanotube (Seconds, fewer is better): A: 446.99, C: 445.45, B: 445.39. [(CC) gcc options: -shared -fwrapv -O2 -lxc -lblas -lmpi]

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): A: 3115.29 (min: 3100.97), C: 3114.22 (min: 3107.51), D: 3108.58 (min: 3103.14), B: 3104.69 (min: 3099.04). [(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread]

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard (Inferences Per Minute, more is better): B: 2071, C: 2076, A: 2078. [(CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt]

OSPray Studio

Intel OSPray Studio is an open-source, interactive visualization and ray-tracing software package. OSPray Studio makes use of Intel OSPray, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray Studio 0.10 - Camera: 1 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, fewer is better): A: 64451, B: 64393, C: 64242. [(CXX) g++ options: -O3 -lm -ldl]

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles benchmark with various sample files. GPU compute via NVIDIA OptiX, NVIDIA CUDA, and HIP for AMD Radeon GPUs is currently supported. Learn more via the OpenBenchmarking.org test page.

Blender 3.2 - Blend File: Pabellon Barcelona - Compute: CPU-Only (Seconds, fewer is better): A: 457.21, C: 456.21, B: 455.80

Google Draco

Draco is a library developed by Google for compressing/decompressing 3D geometric meshes and point clouds. This test profile uses some Artec3D PLY models as the sample 3D model input formats for Draco compression/decompression. Learn more via the OpenBenchmarking.org test page.

Google Draco 1.5.0 - Model: Church Facade (ms, fewer is better): A: 5131, C: 5122, B: 5117. [(CXX) g++ options: -O3]

Timed MPlayer Compilation

This test times how long it takes to build the MPlayer open-source media player program. Learn more via the OpenBenchmarking.org test page.

Timed MPlayer Compilation 1.5 - Time To Compile (Seconds, fewer is better): B: 32.85, C: 32.79, A: 32.76, D: 32.76

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms, fewer is better): D: 3.24005 (min: 3.19), C: 3.23367 (min: 3.18), A: 3.23261 (min: 3.18), B: 3.23137 (min: 3.18). [(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread]

libgav1

Libgav1 is an AV1 decoder developed by Google for AV1 profile 0/1 compliance. Learn more via the OpenBenchmarking.org test page.

libgav1 0.17 - Video Input: Summer Nature 1080p (FPS, more is better): A: 335.57, D: 335.89, C: 336.42, B: 336.45. [(CXX) g++ options: -O3 -lrt]

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as ultimately the successor to WebP. Compared to WebP, WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading. Learn more via the OpenBenchmarking.org test page.

WebP2 Image Encode 20220422 - Encode Settings: Quality 100, Compression Effort 5 (Seconds, fewer is better): A: 4.979, C: 4.968, B: 4.966. [(CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl]

OSPray Studio

Intel OSPray Studio is an open-source, interactive visualization and ray-tracing software package. OSPray Studio makes use of Intel OSPray, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray Studio 0.10 - Camera: 3 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, fewer is better): C: 77555, B: 77543, A: 77357. [(CXX) g++ options: -O3 -lm -ldl]

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles benchmark with various sample files. GPU compute via NVIDIA OptiX, NVIDIA CUDA, and HIP for AMD Radeon GPUs is currently supported. Learn more via the OpenBenchmarking.org test page.

Blender 3.2 - Blend File: BMW27 - Compute: CPU-Only (Seconds, fewer is better): A: 131.60, B: 131.44, C: 131.29

OSPray Studio

Intel OSPray Studio is an open-source, interactive visualization and ray-tracing software package. OSPray Studio makes use of Intel OSPray, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray Studio 0.10 - Camera: 1 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer (ms, fewer is better): C: 30584, B: 30572, A: 30515. [(CXX) g++ options: -O3 -lm -ldl]

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Tuning: Visual Quality Optimized - Input: Bosphorus 4K (Frames Per Second, more is better): D: 50.30, A: 50.36, C: 50.37, B: 50.41. [(CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm]

SVT-VP9 0.3 - Tuning: Visual Quality Optimized - Input: Bosphorus 1080p (Frames Per Second, more is better): A: 161.18, C: 161.45, D: 161.49, B: 161.53. [(CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm]

Java JMH

This very basic test profile runs the stock Java JMH benchmark via Maven. Learn more via the OpenBenchmarking.org test page.

Java JMH - Throughput (Ops/s, more is better): C: 25481229119.71, B: 25513909701.72, A: 25536369445.47

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 1 - Input: Bosphorus 1080p (Frames Per Second, more is better): A: 9.28, C: 9.29, B: 9.30, D: 9.30. [(CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt]

libgav1

Libgav1 is an AV1 decoder developed by Google for AV1 profile 0/1 compliance. Learn more via the OpenBenchmarking.org test page.

libgav1 0.17 - Video Input: Summer Nature 4K (FPS, more is better): B: 84.49, D: 84.51, A: 84.58, C: 84.66. [(CXX) g++ options: -O3 -lrt]

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: yolov4 - Device: CPU - Executor: Standard (Inferences Per Minute, more is better): C: 525, A: 526, B: 526. [(CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt]

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: Memory Copying (Bogo Ops/s, more is better): B: 1607.21, A: 1607.49, C: 1610.06. [(CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lsctp -lz -pthread]

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, fewer is better): A: 4.33830 (min: 4.22), B: 4.33567 (min: 4.25), C: 4.33558 (min: 4.24), D: 4.33067 (min: 4.23). [(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread]

Glibc Benchmarks

The GNU C Library project provides the core libraries for the GNU system and GNU/Linux systems, as well as many other systems that use Linux as the kernel. These libraries provide critical APIs including ISO C11, POSIX.1-2008, BSD, OS-specific APIs and more. This test profile makes use of Glibc's "benchtests" integrated benchmark suite. Learn more via the OpenBenchmarking.org test page.

Glibc Benchmarks - Benchmark: asinh (ns, fewer is better): D: 21.09, B: 21.08, C: 21.08, A: 21.06. [(CC) gcc options: -pie -nostdlib -nostartfiles -lgcc -lgcc_s]

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: Semaphores (Bogo Ops/s, more is better): B: 1286303.11, A: 1287975.24, C: 1288557.18. [(CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lsctp -lz -pthread]

OSPray Studio

Intel OSPray Studio is an open-source, interactive visualization and ray-tracing software package. OSPray Studio makes use of Intel OSPray, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray Studio 0.10 - Camera: 3 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer (ms, fewer is better): B: 2332, C: 2331, A: 2328. [(CXX) g++ options: -O3 -lm -ldl]

Glibc Benchmarks

The GNU C Library project provides the core libraries for the GNU system and GNU/Linux systems, as well as many other systems that use Linux as the kernel. These libraries provide critical APIs including ISO C11, POSIX.1-2008, BSD, OS-specific APIs and more. This test profile makes use of Glibc's "benchtests" integrated benchmark suite. Learn more via the OpenBenchmarking.org test page.

Glibc Benchmarks - Benchmark: modf (ns, fewer is better): A: 4.66291, D: 4.65987, C: 4.65758, B: 4.65510. [(CC) gcc options: -pie -nostdlib -nostartfiles -lgcc -lgcc_s]

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, fewer is better): A: 14.14 (min: 14.06), C: 14.13 (min: 14.04), B: 14.12 (min: 14.03), D: 14.12 (min: 14.04). [(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread]

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported and HIP for AMD Radeon GPUs. Learn more via the OpenBenchmarking.org test page.

Blender 3.2 - Blend File: Fishy Cat - Compute: CPU-Only (Seconds, fewer is better): B: 173.53, C: 173.32, A: 173.26

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: MMAP (Bogo Ops/s, more is better): C: 258.02, B: 258.18, A: 258.42. [(CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lsctp -lz -pthread]

perf-bench

This test profile is used for running Linux perf-bench, the benchmark support within the Linux kernel's perf tool. Learn more via the OpenBenchmarking.org test page.

perf-bench - Benchmark: Syscall Basic (ops/sec, more is better): C: 22538675, B: 22542556, A: 22549195, D: 22573195. [(CC) gcc options: -O6 -ggdb3 -funwind-tables -std=gnu99 -lunwind-x86_64 -lunwind -llzma -Xlinker -lpthread -lrt -lm -ldl -lelf -lcrypto -lpython3.9 -lcrypt -lutil -lz -lnuma]

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: System V Message Passing (Bogo Ops/s, more is better): B: 11517263.47, C: 11528872.37, A: 11534338.15. [(CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lsctp -lz -pthread]
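The System V message passing stressor bounces small messages between processes via msgsnd()/msgrcv(). Python's standard library has no System V IPC wrapper, so this sketch uses multiprocessing.Queue as a loose analog of the same producer/consumer pattern (Linux fork start method assumed):

```python
import multiprocessing as mp

def producer(queue, n):
    """Send n small messages, then a sentinel, like repeated msgsnd()."""
    for i in range(n):
        queue.put(i)
    queue.put(None)

def count_messages(n=1_000):
    """Receive until the sentinel arrives, like repeated msgrcv()."""
    queue = mp.Queue()
    proc = mp.Process(target=producer, args=(queue, n))
    proc.start()
    received = 0
    while queue.get() is not None:
        received += 1
    proc.join()
    return received

if __name__ == "__main__":
    print(count_messages())
```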

simdjson

This is a benchmark of simdjson, a high-performance JSON parser. simdjson aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 2.0 - Throughput Test: PartialTweets (GB/s, more is better): A: 6.99, B: 7.00, C: 7.00, D: 7.00. [(CXX) g++ options: -O3]
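The simdjson results above are GB/s of JSON parsed; the metric itself is simply bytes of input divided by parse time. A Python sketch of the same measurement using the stdlib json module (orders of magnitude slower than simdjson's SIMD parser, and the sample document is made up for illustration):

```python
import json
import time

def parse_gb_per_sec(doc: bytes, iters=2_000):
    """Parse `doc` repeatedly and report GB of input parsed per second."""
    start = time.perf_counter()
    for _ in range(iters):
        parsed = json.loads(doc)
    elapsed = time.perf_counter() - start
    return len(doc) * iters / elapsed / 1e9, parsed

doc = b'{"user": {"id": 42, "name": "a"}, "retweets": [1, 2, 3]}'
rate, parsed = parse_gb_per_sec(doc)
print(f"{rate:.4f} GB/s, retweets={parsed['retweets']}")
```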

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: Vector Math (Bogo Ops/s, more is better): A: 58868.12, C: 58922.31, B: 58952.20. [(CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lsctp -lz -pthread]

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total "perf" time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU. ms, Fewer Is Better. A: 18.79 (min 18.6) | C: 18.78 (min 18.58) | D: 18.78 (min 18.6) | B: 18.76 (min 18.56). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

Glibc Benchmarks

The GNU C Library project provides the core libraries for the GNU system and GNU/Linux systems, as well as many other systems that use Linux as the kernel. These libraries provide critical APIs including ISO C11, POSIX.1-2008, BSD, OS-specific APIs and more. This test profile makes use of Glibc's "benchtests" integrated benchmark suite. Learn more via the OpenBenchmarking.org test page.

Glibc Benchmarks - Benchmark: tanh. ns, Fewer Is Better. C: 26.13 | D: 26.11 | A: 26.10 | B: 26.09. 1. (CC) gcc options: -pie -nostdlib -nostartfiles -lgcc -lgcc_s
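The glibc benchtests report nanoseconds per call of a libm routine. The same per-call figure can be approximated for any function with a timed loop; this sketch times Python's math.tanh (which wraps the C library's tanh but adds interpreter overhead, so absolute numbers will not match glibc's):

```python
# Sketch: approximate nanoseconds-per-call for a math routine.
# Python-level call overhead dominates, so this illustrates the metric,
# not glibc's actual benchtest harness.
import math
import timeit


def ns_per_call(func, arg: float, calls: int = 100_000) -> float:
    elapsed = timeit.timeit(lambda: func(arg), number=calls)
    return elapsed / calls * 1e9  # seconds per call -> nanoseconds per call


if __name__ == "__main__":
    print(f"tanh: {ns_per_call(math.tanh, 0.5):.1f} ns/call")
```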

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total "perf" time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU. ms, Fewer Is Better. A: 8.55826 (min 8.44) | D: 8.55027 (min 8.42) | B: 8.54762 (min 8.42) | C: 8.54691 (min 8.39). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

oneDNN 2.6 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU. ms, Fewer Is Better. A: 1833.14 (min 1828.37) | C: 1831.34 (min 1827.21) | B: 1831.29 (min 1826.61) | D: 1830.71 (min 1826.24). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

oneDNN 2.6 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU. ms, Fewer Is Better. C: 11.97 (min 11.9) | A: 11.97 (min 11.9) | B: 11.96 (min 11.89) | D: 11.96 (min 11.89). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

simdjson

This is a benchmark of simdjson, a high-performance JSON parser. simdjson aims to be the fastest JSON parser and is used by projects including Microsoft FishStore, Yandex ClickHouse, and Shopify. Learn more via the OpenBenchmarking.org test page.

simdjson 2.0 - Throughput Test: DistinctUserID. GB/s, More Is Better. B: 8.59 | C: 8.59 | D: 8.59 | A: 8.60. 1. (CXX) g++ options: -O3

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: x86_64 RdRand. Bogo Ops/s, More Is Better. C: 77859.90 | B: 77948.33 | A: 77949.10. 1. (CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lsctp -lz -pthread

OSPray

OSPray 2.9 - Benchmark: particle_volume/pathtracer/real_time. Items Per Second, More Is Better. D: 297.35 | A: 297.46 | B: 297.50 | C: 297.68

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total "perf" time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU. ms, Fewer Is Better. C: 0.826686 (min 0.82) | D: 0.826144 (min 0.82) | B: 0.826044 (min 0.81) | A: 0.825884 (min 0.81). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet," focused on providing open-source, very fast ETC and S3 texture compression support. The test profile uses an 8K x 8K game texture as a sample input. Learn more via the OpenBenchmarking.org test page.

Etcpak 1.0 - Benchmark: Multi-Threaded - Configuration: ETC2. Mpx/s, More Is Better. C: 2656.20 | B: 2656.51 | A: 2658.51 | D: 2658.62. 1. (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread
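Etcpak's Mpx/s figure is pixels compressed per second, in millions. A small arithmetic sketch: at roughly 2658 Mpx/s, the 8K x 8K sample texture (about 67.1 megapixels) is encoded in roughly 25 ms.

```python
# Sketch of the Mpx/s metric: millions of pixels compressed per second,
# and the inverse (how long one texture takes at a given rate).
def mpx_per_s(width: int, height: int, seconds: float) -> float:
    return width * height / seconds / 1e6


def seconds_for(width: int, height: int, mpxs: float) -> float:
    return width * height / (mpxs * 1e6)


if __name__ == "__main__":
    # 8192 x 8192 texture at the ~2658 Mpx/s measured above: roughly 25 ms
    print(f"{seconds_for(8192, 8192, 2658.5) * 1000:.1f} ms")
```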

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total "perf" time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU. ms, Fewer Is Better. B: 17.15 (min 16.79) | C: 17.15 (min 16.78) | A: 17.15 (min 16.73) | D: 17.14 (min 16.81). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

oneDNN 2.6 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU. ms, Fewer Is Better. C: 11.12 (min 11.07) | B: 11.12 (min 11.06) | D: 11.12 (min 11.05) | A: 11.11 (min 11.05). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

oneDNN 2.6 - Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU. ms, Fewer Is Better. A: 16.16 (min 16.07) | C: 16.16 (min 16.07) | D: 16.15 (min 16.07) | B: 16.15 (min 16.07). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

Glibc Benchmarks

The GNU C Library project provides the core libraries for the GNU system and GNU/Linux systems, as well as many other systems that use Linux as the kernel. These libraries provide critical APIs including ISO C11, POSIX.1-2008, BSD, OS-specific APIs and more. This test profile makes use of Glibc's "benchtests" integrated benchmark suite. Learn more via the OpenBenchmarking.org test page.

Glibc Benchmarks - Benchmark: sincos. ns, Fewer Is Better. B: 31.72 | A: 31.72 | C: 31.71 | D: 31.71. 1. (CC) gcc options: -pie -nostdlib -nostartfiles -lgcc -lgcc_s

Glibc Benchmarks - Benchmark: sqrt. ns, Fewer Is Better. D: 3.99406 | A: 3.99377 | C: 3.99370 | B: 3.99317. 1. (CC) gcc options: -pie -nostdlib -nostartfiles -lgcc -lgcc_s

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inference and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard. Inferences Per Minute, More Is Better. A: 104 | B: 104 | C: 104. 1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 1 - Input: Bosphorus 4K. Frames Per Second, More Is Better. A: 2.32 | B: 2.32 | C: 2.32 | D: 2.32. 1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

AOM AV1

AOM AV1 3.3 - Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p. Frames Per Second, More Is Better. A: 0.42 | B: 0.42 | C: 0.42 | D: 0.42. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

AOM AV1 3.3 - Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4K. Frames Per Second, More Is Better. A: 0.16 | B: 0.16 | C: 0.16 | D: 0.16. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

simdjson

This is a benchmark of simdjson, a high-performance JSON parser. simdjson aims to be the fastest JSON parser and is used by projects including Microsoft FishStore, Yandex ClickHouse, and Shopify. Learn more via the OpenBenchmarking.org test page.

simdjson 2.0 - Throughput Test: LargeRandom. GB/s, More Is Better. A: 1.52 | B: 1.52 | C: 1.52 | D: 1.52. 1. (CXX) g++ options: -O3

197 Results Shown

ONNX Runtime
oneDNN
Stress-NG
oneDNN
QMCPACK
SVT-AV1
SVT-VP9
AOM AV1
Renaissance
Stress-NG
SVT-VP9
Renaissance:
  Savina Reactors.IO
  Apache Spark PageRank
SVT-VP9
AOM AV1
Stress-NG
AOM AV1
perf-bench
Glibc Benchmarks
perf-bench
Stress-NG
dav1d
oneDNN:
  Deconvolution Batch shapes_1d - f32 - CPU
  IP Shapes 3D - bf16bf16bf16 - CPU
Nettle
dav1d
Parallel BZIP2 Compression
SVT-HEVC
SVT-AV1
Renaissance
Nettle
WebP2 Image Encode
SVT-AV1
AOM AV1
x264
SVT-AV1
Facebook RocksDB
TensorFlow Lite
oneDNN
AOM AV1
Glibc Benchmarks:
  sin
  atanh
perf-bench
TensorFlow Lite
Glibc Benchmarks
oneDNN
TensorFlow Lite
ONNX Runtime
perf-bench
SVT-AV1
Glibc Benchmarks
Stress-NG
Renaissance:
  Apache Spark ALS
  Apache Spark Bayes
SVT-AV1
libavif avifenc
AOM AV1
Facebook RocksDB
Stress-NG
Renaissance
Stress-NG
oneDNN
SVT-VP9
ONNX Runtime
Blender
SVT-AV1
ONNX Runtime
Renaissance
dav1d
SVT-HEVC
libgav1
perf-bench
Glibc Benchmarks:
  cos
  ffs
Stress-NG
Glibc Benchmarks
SVT-HEVC
oneDNN
AOM AV1:
  Speed 6 Two-Pass - Bosphorus 4K
  Speed 6 Realtime - Bosphorus 4K
Renaissance
ONNX Runtime
OSPray:
  particle_volume/ao/real_time
  gravity_spheres_volume/dim_512/scivis/real_time
libavif avifenc
Renaissance
libavif avifenc
WebP2 Image Encode
OSPray Studio
TensorFlow Lite
OSPray Studio
dav1d
ONNX Runtime
GROMACS
Stress-NG
simdjson
libavif avifenc
TensorFlow Lite
AOM AV1
x264
SVT-AV1
OSPray:
  gravity_spheres_volume/dim_512/ao/real_time
  particle_volume/scivis/real_time
OSPray Studio
Stress-NG
TensorFlow Lite
Stress-NG
Blender
InfluxDB
WebP2 Image Encode
Facebook RocksDB
libavif avifenc
ONNX Runtime:
  super-resolution-10 - CPU - Standard
  yolov4 - CPU - Parallel
Glibc Benchmarks
AOM AV1
OSPray
Timed Gem5 Compilation
AOM AV1
Google Draco
InfluxDB
oneDNN
simdjson
oneDNN
AOM AV1
OSPray Studio
Nettle
Stress-NG
Quantum ESPRESSO
Facebook RocksDB
SVT-HEVC
Stress-NG:
  SENDFILE
  CPU Stress
  Socket Activity
perf-bench
Stress-NG
libgav1
Nettle
OSPray Studio
Etcpak
WebP2 Image Encode
Renaissance
ONNX Runtime
oneDNN:
  Recurrent Neural Network Inference - f32 - CPU
  Matrix Multiply Batch Shapes Transformer - bf16bf16bf16 - CPU
GPAW
oneDNN
ONNX Runtime
OSPray Studio
Blender
Google Draco
Timed MPlayer Compilation
oneDNN
libgav1
WebP2 Image Encode
OSPray Studio
Blender
OSPray Studio
SVT-VP9:
  Visual Quality Optimized - Bosphorus 4K
  Visual Quality Optimized - Bosphorus 1080p
Java JMH
SVT-HEVC
libgav1
ONNX Runtime
Stress-NG
oneDNN
Glibc Benchmarks
Stress-NG
OSPray Studio
Glibc Benchmarks
oneDNN
Blender
Stress-NG
perf-bench
Stress-NG
simdjson
Stress-NG
oneDNN
Glibc Benchmarks
oneDNN:
  IP Shapes 1D - bf16bf16bf16 - CPU
  Recurrent Neural Network Inference - bf16bf16bf16 - CPU
  Convolution Batch Shapes Auto - u8s8f32 - CPU
simdjson
Stress-NG
OSPray
oneDNN
Etcpak
oneDNN:
  Deconvolution Batch shapes_3d - bf16bf16bf16 - CPU
  IP Shapes 3D - f32 - CPU
  Convolution Batch Shapes Auto - bf16bf16bf16 - CPU
Glibc Benchmarks:
  sincos
  sqrt
ONNX Runtime
SVT-HEVC
AOM AV1:
  Speed 0 Two-Pass - Bosphorus 1080p
  Speed 0 Two-Pass - Bosphorus 4K
simdjson