11900K Summer 2022

Intel Core i9-11900K testing with an ASUS ROG MAXIMUS XIII HERO (1007 BIOS) motherboard and ASUS Intel RKL GT1 3GB graphics on Ubuntu 21.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2206153-PTS-11900KSU17
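For reference, a minimal shell sketch of reproducing this comparison locally; the package-install step is an assumption (Debian/Ubuntu packaging) and is not part of the original result file, while the benchmark command itself is the one quoted above:

  # Install the Phoronix Test Suite (assumes Debian/Ubuntu packaging; a git
  # checkout of phoronix-test-suite can be used instead).
  sudo apt install phoronix-test-suite

  # Run the same test selection as this result file and merge your own numbers
  # into it for a side-by-side comparison.
  phoronix-test-suite benchmark 2206153-PTS-11900KSU17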

Test categories represented in this result file:

AV1: 5 tests
BLAS (Basic Linear Algebra Sub-Routine) Tests: 3 tests
C++ Boost Tests: 2 tests
Timed Code Compilation: 2 tests
C/C++ Compiler Tests: 9 tests
CPU Massive: 11 tests
Creator Workloads: 15 tests
Database Test Suite: 2 tests
Encoding: 8 tests
Game Development: 3 tests
HPC - High Performance Computing: 7 tests
Imaging: 2 tests
Java: 2 tests
Common Kernel Benchmarks: 3 tests
LAPACK (Linear Algebra Pack) Tests: 2 tests
Machine Learning: 3 tests
MPI Benchmarks: 3 tests
Multi-Core: 16 tests
NVIDIA GPU Compute: 2 tests
Intel oneAPI: 3 tests
OpenMPI Tests: 4 tests
Programmer / Developer System Benchmarks: 3 tests
Python Tests: 4 tests
Quantum Mechanics: 2 tests
Raytracing: 2 tests
Renderers: 3 tests
Scientific Computing: 4 tests
Server: 3 tests
Server CPU Tests: 10 tests
Texture Compression: 2 tests
Video Encoding: 8 tests


Run Management

Result Identifier | Date Run | Test Duration
A | June 14 2022 | 4 Hours, 38 Minutes
B | June 14 2022 | 4 Hours, 37 Minutes
C | June 15 2022 | 4 Hours, 37 Minutes
D | June 15 2022 | 2 Hours, 14 Minutes
Average test duration: 4 Hours, 2 Minutes



11900K Summer 2022 Benchmarks - OpenBenchmarking.org / Phoronix Test Suite

Processor: Intel Core i9-11900K @ 5.10GHz (8 Cores / 16 Threads)
Motherboard: ASUS ROG MAXIMUS XIII HERO (1007 BIOS)
Chipset: Intel Tiger Lake-H
Memory: 32GB
Disk: 2000GB Corsair Force MP600
Graphics: ASUS Intel RKL GT1 3GB (1300MHz)
Audio: Intel Tiger Lake-H HD Audio
Monitor: MX279
Network: 2 x Intel I225-V + Intel Wi-Fi 6 AX210/AX211/AX411
OS: Ubuntu 21.10
Kernel: 5.15.0-051500rc7daily20211029-generic (x86_64) 20211028
Desktop: GNOME Shell 40.5
Display Server: X Server 1.20.13 + Wayland
OpenGL: 4.6 Mesa 22.0.0-devel (git-f13d486 2021-11-03 impish-oibaf-ppa)
Vulkan: 1.2.195
Compiler: GCC 11.2.0
File-System: ext4
Screen Resolution: 1920x1080

System Logs
- Transparent Huge Pages: madvise
- Compiler configure flags: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-ZPT0kp/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-ZPT0kp/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: intel_pstate powersave (EPP: balance_performance)
- CPU Microcode: 0x40
- Thermald 2.4.6
- OpenJDK Runtime Environment (build 11.0.15+10-Ubuntu-0ubuntu0.21.10.1)
- Python 3.9.7
- Security: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Result Overview (Phoronix Test Suite, runs A/B/C/D, relative performance on a 100% to 122% scale) covering: QMCPACK, SVT-VP9, Parallel BZIP2 Compression, SVT-AV1, oneDNN, Nettle, dav1d, x264, AOM AV1, libavif avifenc, perf-bench, Glibc Benchmarks, OSPray, SVT-HEVC, Timed Gem5 Compilation, libgav1, Timed MPlayer Compilation, simdjson, Etcpak, Quantum ESPRESSO, Renaissance, and OSPray Studio.

11900K Summer 2022 - condensed results table for runs A through D (the individual per-test results are presented in the sections below).

Quantum ESPRESSO

Quantum ESPRESSO is an integrated suite of Open-Source computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials. Learn more via the OpenBenchmarking.org test page.

Quantum ESPRESSO 7.0 - Input: AUSURF112 (Seconds, fewer is better): A: 1542.31, B: 1534.64, C: 1535.90, D: 1536.74

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported, as is HIP for AMD Radeon GPUs. Learn more via the OpenBenchmarking.org test page.

Blender 3.2 - Blend File: Barbershop, Compute: CPU-Only (Seconds, fewer is better): A: 1447.66, B: 1444.64, C: 1455.10

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development, ultimately intended as the successor to WebP. Compared to WebP, WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading. Learn more via the OpenBenchmarking.org test page.

WebP2 Image Encode 20220422 - Quality 100, Lossless Compression (Seconds, fewer is better): A: 791.39, B: 787.34, C: 792.95

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.
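As a rough illustration of how Renaissance is normally driven outside of the Phoronix Test Suite, a hedged shell sketch follows; the jar filename, the -r repetitions flag, and the benchmark identifier (page-rank) are assumptions based on the upstream Renaissance distribution rather than details taken from this result file:

  # Run a single Renaissance benchmark from the bundle jar (filename assumed
  # for the 0.14 release); -r sets the number of repetitions.
  java -jar renaissance-gpl-0.14.0.jar -r 10 page-rank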

Renaissance 0.14 - Test: Apache Spark ALS (ms, fewer is better): A: 20416.4, B: 20524.6, C: 20526.5, D: 20797.9

Java JMH

This very basic test profile runs the stock benchmark of the Java JMH benchmark via Maven. Learn more via the OpenBenchmarking.org test page.

Java JMH - Throughput (Ops/s, more is better): A: 25536369445.47, B: 25513909701.72, C: 25481229119.71

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported, as is HIP for AMD Radeon GPUs. Learn more via the OpenBenchmarking.org test page.

Blender 3.2 - Blend File: Pabellon Barcelona, Compute: CPU-Only (Seconds, fewer is better): A: 457.21, B: 455.80, C: 456.21

GPAW

GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE). Learn more via the OpenBenchmarking.org test page.

GPAW 22.1 - Input: Carbon Nanotube (Seconds, fewer is better): A: 446.99, B: 445.39, C: 445.45

OSPray

OSPray 2.9 - Benchmark: particle_volume/pathtracer/real_time (Items Per Second, more is better): A: 297.46, B: 297.50, C: 297.68, D: 297.35

Timed Gem5 Compilation

This test times how long it takes to compile Gem5. Gem5 is a simulator for computer system architecture research. Gem5 is widely used for computer architecture research within the industry, academia, and more. Learn more via the OpenBenchmarking.org test page.

Timed Gem5 Compilation 21.2 - Time To Compile (Seconds, fewer is better): A: 376.68, B: 378.93, C: 378.81, D: 378.74

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development, ultimately intended as the successor to WebP. Compared to WebP, WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading. Learn more via the OpenBenchmarking.org test page.

WebP2 Image Encode 20220422 - Quality 95, Compression Effort 7 (Seconds, fewer is better): A: 371.22, B: 370.69, C: 369.82

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported, as is HIP for AMD Radeon GPUs. Learn more via the OpenBenchmarking.org test page.

Blender 3.2 - Blend File: Classroom, Compute: CPU-Only (Seconds, fewer is better): A: 360.15, B: 365.15, C: 361.23

OSPray

OSPray 2.9 - Benchmark: particle_volume/scivis/real_time (Items Per Second, more is better): A: 35.65, B: 35.56, C: 35.78, D: 35.84

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: ALS Movie Lens (ms, fewer is better): A: 13808.3, B: 13827.4, C: 13710.1, D: 13643.6

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 1, Input: Bosphorus 4K (Frames Per Second, more is better): A: 2.32, B: 2.32, C: 2.32, D: 2.32

OSPray

OSPray 2.9 - Benchmark: particle_volume/ao/real_time (Items Per Second, more is better): A: 36.41, B: 36.02, C: 36.17, D: 36.34

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Akka Unbalanced Cobwebbed Tree (ms, fewer is better): A: 9752.9, B: 9690.2, C: 9803.8, D: 9650.6

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development, ultimately intended as the successor to WebP. Compared to WebP, WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading. Learn more via the OpenBenchmarking.org test page.

WebP2 Image Encode 20220422 - Quality 75, Compression Effort 7 (Seconds, fewer is better): A: 179.59, B: 180.44, C: 181.39

GROMACS

The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing with the water_GMX50 data. This test profile allows selecting between CPU and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.

GROMACS 2022.1 - Implementation: MPI CPU, Input: water_GMX50_bare (Ns Per Day, more is better): A: 0.995, B: 1.001, C: 1.004

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported and HIP for AMD Radeon GPUs. Learn more via the OpenBenchmarking.org test page.

Blender 3.2 - Blend File: Fishy Cat, Compute: CPU-Only (Seconds, fewer is better): A: 173.26, B: 173.53, C: 173.32

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.10 - Encoder Speed: 0 (Seconds, fewer is better): A: 148.59, B: 148.41, C: 150.01, D: 149.48

AOM AV1

AOM AV1 3.3 - Speed 4 Two-Pass, Input: Bosphorus 4K (Frames Per Second, more is better): A: 5.17, B: 5.15, C: 5.14, D: 5.15

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported, as is HIP for AMD Radeon GPUs. Learn more via the OpenBenchmarking.org test page.

Blender 3.2 - Blend File: BMW27, Compute: CPU-Only (Seconds, fewer is better): A: 131.60, B: 131.44, C: 131.29

OSPray Studio

Intel OSPray Studio is an open-source, interactive visualization and ray-tracing software package. OSPray Studio makes use of Intel OSPray, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray Studio 0.10 - Camera 3, 1080p, 1 Sample Per Pixel, Path Tracer (ms, fewer is better): A: 2328, B: 2332, C: 2331

AOM AV1

AOM AV1 3.3 - Speed 0 Two-Pass, Input: Bosphorus 4K (Frames Per Second, more is better): A: 0.16, B: 0.16, C: 0.16, D: 0.16

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 (Inferences Per Minute, more is better):
fcn-resnet101-11 - CPU - Parallel: A: 70, B: 71, C: 70
fcn-resnet101-11 - CPU - Standard: A: 104, B: 104, C: 104
GPT-2 - CPU - Parallel: A: 5384, B: 5396, C: 5376
bertsquad-12 - CPU - Parallel: A: 512, B: 500, C: 511
yolov4 - CPU - Parallel: A: 306, B: 308, C: 306
ArcFace ResNet-100 - CPU - Parallel: A: 1262, B: 1269, C: 1255
GPT-2 - CPU - Standard: A: 6701, B: 6702, C: 6640
bertsquad-12 - CPU - Standard: A: 1015, B: 1010, C: 642
ArcFace ResNet-100 - CPU - Standard: A: 2078, B: 2071, C: 2076

OSPray Studio

Intel OSPray Studio is an open-source, interactive visualization and ray-tracing software package. OSPray Studio makes use of Intel OSPray, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray Studio 0.10 - Camera 3, 1080p, 16 Samples Per Pixel, Path Tracer (ms, fewer is better): A: 36885, B: 37170, C: 37059

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 (Inferences Per Minute, more is better):
yolov4 - CPU - Standard: A: 526, B: 526, C: 525
super-resolution-10 - CPU - Parallel: A: 4547, B: 4525, C: 4586
super-resolution-10 - CPU - Standard: A: 4558, B: 4528, C: 4548

OSPray

OSPray 2.9 (Items Per Second, more is better):
gravity_spheres_volume/dim_512/scivis/real_time: A: 4.20308, B: 4.18832, C: 4.19096, D: 4.23414
gravity_spheres_volume/dim_512/ao/real_time: A: 4.24782, B: 4.24824, C: 4.22833, D: 4.26224

OSPray Studio

Intel OSPray Studio is an open-source, interactive visualization and ray-tracing software package. OSPray Studio makes use of Intel OSPray, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray Studio 0.10 (ms, fewer is better):
Camera 2, 1080p, 1 Sample Per Pixel, Path Tracer: A: 1938, B: 1957, C: 1956, D: 1953
Camera 1, 1080p, 1 Sample Per Pixel, Path Tracer: A: 1918, B: 1907, C: 1911, D: 1912

OSPray

OSPray 2.9 - Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time (Items Per Second, more is better): A: 5.09145, B: 5.06027, C: 5.08966, D: 5.08714

libgav1

Libgav1 is an AV1 decoder developed by Google for AV1 profile 0/1 compliance. Learn more via the OpenBenchmarking.org test page.

libgav1 0.17 - Video Input: Chimera 1080p 10-bit (FPS, more is better): A: 78.41, B: 77.68, C: 78.21, D: 78.62

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Savina Reactors.IO (ms, fewer is better): A: 7837.9, B: 7456.8, C: 7232.3, D: 7760.0

OSPray Studio

Intel OSPray Studio is an open-source, interactive visualization and ray-tracing software package. OSPray Studio makes use of Intel OSPray, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray Studio 0.10 (ms, fewer is better):
Camera 2, 1080p, 16 Samples Per Pixel, Path Tracer: A: 30947, B: 31244, C: 31173
Camera 1, 1080p, 16 Samples Per Pixel, Path Tracer: A: 30515, B: 30572, C: 30584

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 2.0 - Throughput Test: Kostya (GB/s, more is better): A: 4.69, B: 4.66, C: 4.65, D: 4.65

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Apache Spark PageRank (ms, fewer is better): A: 2693.3, B: 2757.9, C: 2560.8, D: 2719.0

OSPray Studio

Intel OSPray Studio is an open-source, interactive visualization and ray-tracing software package. OSPray Studio makes use of Intel OSPray, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray Studio 0.10 - Camera 3, 1080p, 32 Samples Per Pixel, Path Tracer (ms, fewer is better): A: 77357, B: 77543, C: 77555

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 (ms, fewer is better):
Recurrent Neural Network Training - u8s8f32 - CPU: A: 4358.46, B: 3107.86, C: 3113.19, D: 3110.84
Recurrent Neural Network Training - f32 - CPU: A: 3118.35, B: 3189.95, C: 3106.14, D: 3106.54
Recurrent Neural Network Training - bf16bf16bf16 - CPU: A: 3115.29, B: 3104.69, C: 3114.22, D: 3108.58
Recurrent Neural Network Inference - u8s8f32 - CPU: A: 2245.91, B: 1833.40, C: 1839.49, D: 1831.95
Recurrent Neural Network Inference - f32 - CPU: A: 1831.84, B: 1831.92, C: 1832.56, D: 1825.78
Recurrent Neural Network Inference - bf16bf16bf16 - CPU: A: 1833.14, B: 1831.29, C: 1831.34, D: 1830.71

AOM AV1

AOM AV1 3.3 - Speed 6 Two-Pass, Input: Bosphorus 4K (Frames Per Second, more is better): A: 9.90, B: 9.79, C: 9.79, D: 9.81

OSPray Studio

Intel OSPray Studio is an open-source, interactive visualization and ray-tracing software package. OSPray Studio makes use of Intel OSPray, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray Studio 0.10 - Camera 2, 1080p, 32 Samples Per Pixel, Path Tracer (ms, fewer is better): A: 65641, B: 65765, C: 65487

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.0 - Preset 4, Input: Bosphorus 4K (Frames Per Second, more is better): A: 2.261, B: 2.272, C: 2.292, D: 2.284

OSPray Studio

Intel OSPray Studio is an open-source, interactive visualization and ray-tracing software package. OSPray Studio makes use of Intel OSPray, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray Studio 0.10 - Camera 1, 1080p, 32 Samples Per Pixel, Path Tracer (ms, fewer is better): A: 64451, B: 64393, C: 64242

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.10 - Encoder Speed: 2 (Seconds, fewer is better): A: 70.49, B: 70.03, C: 69.90, D: 69.98

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Genetic Algorithm Using Jenetics + Futures (ms, fewer is better): A: 1326.7, B: 1297.7, C: 1344.6, D: 1308.1

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 2.0 (GB/s, more is better):
DistinctUserID: A: 8.60, B: 8.59, C: 8.59, D: 8.59
TopTweet: A: 8.52, B: 8.52, C: 8.51, D: 8.47
PartialTweets: A: 6.99, B: 7.00, C: 7.00, D: 7.00

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 1, Input: Bosphorus 1080p (Frames Per Second, more is better): A: 9.28, B: 9.30, C: 9.29, D: 9.30

InfluxDB

This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.

InfluxDB 1.8.2 (val/sec, more is better):
4 Concurrent Streams, Batch Size 10000, Tags 2,5000,1, 10000 Points Per Series: A: 1806268.7, B: 1804674.6, C: 1815340.8
64 Concurrent Streams, Batch Size 10000, Tags 2,5000,1, 10000 Points Per Series: A: 1818333.8, B: 1805397.9, C: 1814759.0

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2022-05-18 (Microseconds, fewer is better):
Inception V4: A: 26841.2, B: 26581.3, C: 26584.0
Inception ResNet V2: A: 26210.1, B: 26131.7, C: 25991.2
NASNet Mobile: A: 7571.79, B: 7367.44, C: 7532.16
Mobilenet Float: A: 1437.54, B: 1459.18, C: 1422.45
SqueezeNet: A: 1822.43, B: 1813.38, C: 1808.92

AOM AV1

AOM AV1 3.3 - Speed 4 Two-Pass, Input: Bosphorus 1080p (Frames Per Second, more is better): A: 11.32, B: 11.27, C: 11.25, D: 11.27

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2022-05-18 - Model: Mobilenet Quant (Microseconds, fewer is better): A: 2930.31, B: 2997.03, C: 2902.28

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

Facebook RocksDB 7.0.1 (Op/s, more is better):
Read While Writing: A: 2152704, B: 2199309, C: 2223231
Random Read: A: 56567623, B: 56499644, C: 55666863
Update Random: A: 703459, B: 703316, C: 700065
Read Random Write Random: A: 2027221, B: 2041642, C: 2041553

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 2.0 - Throughput Test: LargeRandom (GB/s, more is better): A: 1.52, B: 1.52, C: 1.52, D: 1.52

AOM AV1

AOM AV1 3.3 (Frames Per Second, more is better):
Speed 6 Realtime - Bosphorus 1080p: A: 11.93, B: 11.94, C: 11.90, D: 12.31
Speed 0 Two-Pass - Bosphorus 1080p: A: 0.42, B: 0.42, C: 0.42, D: 0.42

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Scala Dotty (ms, fewer is better): A: 636.7, B: 616.2, C: 689.1, D: 703.8

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet", focused on providing open-source, very fast ETC and S3 texture compression support. The test profile uses an 8K x 8K game texture as a sample input. Learn more via the OpenBenchmarking.org test page.

Etcpak 1.0 - Single-Threaded, Configuration: ETC2 (Mpx/s, more is better): A: 307.60, B: 307.01, C: 306.91, D: 306.41

libgav1

Libgav1 is an AV1 decoder developed by Google for AV1 profile 0/1 compliance. Learn more via the OpenBenchmarking.org test page.

libgav1 0.17 - Video Input: Summer Nature 4K (FPS, more is better): A: 84.58, B: 84.49, C: 84.66, D: 84.51

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 (ms, fewer is better):
Apache Spark Bayes: A: 1380.4, B: 1371.9, C: 1389.8, D: 1365.4
In-Memory Database Shootout: A: 2355.5, B: 2363.6, C: 2381.9, D: 2355.9

AOM AV1

AOM AV1 3.3 - Speed 6 Realtime, Input: Bosphorus 4K (Frames Per Second, more is better): A: 16.21, B: 16.13, C: 16.03, D: 16.06

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Finagle HTTP Requests (ms, fewer is better): A: 2720.8, B: 2710.6, C: 2707.7, D: 2692.1

libgav1

Libgav1 is an AV1 decoder developed by Google for AV1 profile 0/1 compliance. Learn more via the OpenBenchmarking.org test page.

libgav1 0.17 - Video Input: Chimera 1080p (FPS, more is better): A: 261.38, B: 260.27, C: 260.44, D: 260.77

Timed MPlayer Compilation

This test times how long it takes to build the MPlayer open-source media player program. Learn more via the OpenBenchmarking.org test page.

Timed MPlayer Compilation 1.5 - Time To Compile (Seconds, fewer is better): A: 32.76, B: 32.85, C: 32.79, D: 32.76

perf-bench

This test profile is used for running Linux perf-bench, the benchmark support within the Linux kernel's perf tool. Learn more via the OpenBenchmarking.org test page.
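For orientation, a hedged shell sketch of invoking the same micro-benchmarks directly through perf; the subcommand names mirror the benchmark labels used in this result file but should be checked against the perf build in use:

  # Scheduler pipe ping-pong benchmark (matches "Sched Pipe" below).
  perf bench sched pipe
  # Futex hashing benchmark (matches "Futex Hash").
  perf bench futex hash
  # Epoll wakeup benchmark (matches "Epoll Wait").
  perf bench epoll wait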

perf-bench - Benchmark: Epoll Wait (ops/sec, more is better): A: 127515, B: 130239, C: 128524, D: 131078

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.
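As a point of reference, a minimal shell sketch of running a couple of stress-ng stressors by hand; the stressor and option names (--cpu, --matrix, --timeout, --metrics-brief) are standard stress-ng flags, but this exact invocation is an illustration rather than the Phoronix Test Suite's own command line:

  # Run the CPU and matrix-math stressors on all online CPUs (count 0 = all)
  # for 60 seconds and print bogo-ops/s summaries comparable to the figures below.
  stress-ng --cpu 0 --matrix 0 --timeout 60s --metrics-brief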

Stress-NG 0.14 (Bogo Ops/s, more is better):
x86_64 RdRand: A: 77949.10, B: 77948.33, C: 77859.90
Malloc: A: 9452055.42, B: 9467042.29, C: 9521309.34
NUMA: A: 320.08, B: 316.03, C: 314.08
MMAP: A: 258.42, B: 258.18, C: 258.02
Futex: A: 2019792.94, B: 2102295.18, C: 2247778.96
Atomic: A: 357591.98, B: 344544.20, C: 366961.11
Memory Copying: A: 1607.49, B: 1607.21, C: 1610.06
CPU Stress: A: 23228.30, B: 23119.63, C: 23206.52
CPU Cache: A: 221.73, B: 222.46, C: 219.93
IO_uring: A: 24227.96, B: 25422.10, C: 25003.00
MEMFD: A: 1012.73, B: 1004.36, C: 1013.30
System V Message Passing: A: 11534338.15, B: 11517263.47, C: 11528872.37
Matrix Math: A: 57452.05, B: 57205.06, C: 57451.66
Semaphores: A: 1287975.24, B: 1286303.11, C: 1288557.18
Crypto: A: 13517.57, B: 13448.13, C: 13507.57
Glibc Qsort Data Sorting: A: 182.60, B: 183.97, C: 183.73
Glibc C String Functions: A: 1700830.81, B: 1359296.88, C: 1733539.89
Context Switching: A: 1924100.28, B: 1955167.18, C: 1946582.89
Socket Activity: A: 9279.07, B: 9245.36, C: 9236.06
SENDFILE: A: 229138.15, B: 228041.03, C: 228816.34
Vector Math: A: 58868.12, B: 58952.20, C: 58922.31
Forking: A: 76531.56, B: 77706.13, C: 77378.24

perf-bench

This test profile is used for running Linux perf-bench, the benchmark support within the Linux kernel's perf tool. Learn more via the OpenBenchmarking.org test page.

perf-bench (ops/sec, more is better):
Futex Lock-Pi: A: 1082, B: 1076, C: 1087, D: 1089
Futex Hash: A: 6819705, B: 6685401, C: 6810542, D: 6460586

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Random Forest (ms, fewer is better): A: 480.8, B: 480.0, C: 481.8, D: 480.4

QMCPACK

QMCPACK is a modern high-performance open-source Quantum Monte Carlo (QMC) simulation code making use of MPI for this benchmark of the H2O example code. QMCPACK is an open-source production-level many-body ab initio Quantum Monte Carlo code for computing the electronic structure of atoms, molecules, and solids. QMCPACK is supported by the U.S. Department of Energy. Learn more via the OpenBenchmarking.org test page.

QMCPACK 3.13 - Input: simple-H2O (Total Execution Time in Seconds, fewer is better): A: 26.69, B: 22.65, C: 21.87, D: 22.64

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.0 - Preset 4, Input: Bosphorus 1080p (Frames Per Second, more is better): A: 6.689, B: 6.740, C: 6.636, D: 6.772

AOM AV1

AOM AV1 3.3, Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, more is better): D: 30.29, C: 30.33, B: 30.30, A: 30.47
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.
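
The harness names below correspond to benchdnn drivers and batch files bundled with oneDNN; a rough standalone sketch following the oneDNN 2.x benchdnn conventions (the driver flag, data-type config, and batch file path are assumptions to verify against the installed version):

  ./benchdnn --ip --mode=P --cfg=f32 --batch=inputs/ip/shapes_1d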

oneDNN 2.6, Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): D: 18.78 (min 18.6), C: 18.78 (min 18.58), B: 18.76 (min 18.56), A: 18.79 (min 18.6)
oneDNN 2.6, Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, fewer is better): D: 7.72180 (min 4.81), C: 7.70230 (min 4.77), B: 8.03869 (min 4.84), A: 8.06688 (min 4.81)
oneDNN 2.6, Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): D: 0.826144 (min 0.82), C: 0.826686 (min 0.82), B: 0.826044 (min 0.81), A: 0.825884 (min 0.81)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.0, Encoder Mode: Preset 8 - Input: Bosphorus 4K (Frames Per Second, more is better): D: 31.38, C: 31.62, B: 31.36, A: 31.43
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq -pie

x264

This is a multi-threaded test of the x264 video encoder run on the CPU with a choice of 1080p or 4K video input. Learn more via the OpenBenchmarking.org test page.
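
A comparable standalone run (the preset choice and input file name are illustrative; the PTS profile supplies its own Bosphorus sample):

  x264 --preset medium --threads auto -o output.264 Bosphorus_3840x2160.y4m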

x264 2022-02-22, Video Input: Bosphorus 4K (Frames Per Second, more is better): D: 30.95, C: 30.89, B: 30.80, A: 31.05
1. (CC) gcc options: -ldl -m64 -lm -lpthread -O3 -flto

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.
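
A rough standalone decode of one of the samples (the file name and thread count are illustrative):

  dav1d --threads 8 -i Chimera_1080p.ivf -o decoded.y4m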

dav1d 1.0, Video Input: Summer Nature 4K (FPS, more is better): D: 189.52, C: 189.63, B: 194.82, A: 198.82
dav1d 1.0, Video Input: Chimera 1080p 10-bit (FPS, more is better): D: 476.40, C: 471.44, B: 490.12, A: 492.46
1. (CC) gcc options: -pthread -lm

perf-bench

This test profile is used for running Linux perf-bench, the benchmark support within the Linux kernel's perf tool. Learn more via the OpenBenchmarking.org test page.

perf-bench, Benchmark: Sched Pipe (ops/sec, more is better): D: 322747, C: 316210, B: 315267, A: 318866
1. (CC) gcc options: -O6 -ggdb3 -funwind-tables -std=gnu99 -lunwind-x86_64 -lunwind -llzma -Xlinker -lpthread -lrt -lm -ldl -lelf -lcrypto -lpython3.9 -lcrypt -lutil -lz -lnuma

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6, Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms, fewer is better): D: 4.00116 (min 3.91), C: 4.02480 (min 3.92), B: 4.00817 (min 3.9), A: 4.01177 (min 3.9)
oneDNN 2.6, Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): D: 8.55027 (min 8.42), C: 8.54691 (min 8.39), B: 8.54762 (min 8.42), A: 8.55826 (min 8.44)
oneDNN 2.6, Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): D: 0.726471 (min 0.67), C: 0.716538 (min 0.66), B: 0.715645 (min 0.67), A: 0.720371 (min 0.67)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.
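
A rough standalone equivalent of the tuning-7 4K case below (the raw YUV input name and resolution flags are assumed from the upstream SvtHevcEncApp conventions):

  SvtHevcEncApp -encMode 7 -w 3840 -h 2160 -i Bosphorus_4K.yuv -b output.265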

SVT-HEVC 1.5.0, Tuning: 7 - Input: Bosphorus 4K (Frames Per Second, more is better): D: 42.54, C: 42.38, B: 42.55, A: 43.99
1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

AOM AV1

AOM AV1 3.3, Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K (Frames Per Second, more is better): D: 48.04, C: 48.22, B: 48.44, A: 48.31
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 1.0, Video Input: Chimera 1080p (FPS, more is better): D: 720.46, C: 720.40, B: 713.79, A: 720.11
1. (CC) gcc options: -pthread -lm

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3, Tuning: Visual Quality Optimized - Input: Bosphorus 4K (Frames Per Second, more is better): D: 50.30, C: 50.37, B: 50.41, A: 50.36
1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.
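
Rough standalone equivalents of the speed-6 lossy and lossless runs (the input image is a placeholder):

  avifenc -s 6 input.jpg output.avif
  avifenc -s 6 -l input.jpg output-lossless.avif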

libavif avifenc 0.10, Encoder Speed: 6, Lossless (Seconds, fewer is better): D: 12.28, C: 12.15, B: 12.08, A: 12.22
1. (CXX) g++ options: -O3 -fPIC -lm

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3, Tuning: VMAF Optimized - Input: Bosphorus 4K (Frames Per Second, more is better): D: 53.77, C: 53.02, B: 54.02, A: 49.54
1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6, Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms, fewer is better): D: 3.24005 (min 3.19), C: 3.23367 (min 3.18), B: 3.23137 (min 3.18), A: 3.23261 (min 3.18)
oneDNN 2.6, Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): D: 3.44775 (min 3.28), C: 3.43528 (min 3.29), B: 3.44465 (min 3.29), A: 3.43567 (min 3.29)
oneDNN 2.6, Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): D: 1.30110 (min 1.25), C: 1.29680 (min 1.24), B: 1.29350 (min 1.25), A: 1.29412 (min 1.25)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

perf-bench

This test profile is used for running Linux perf-bench, the benchmark support within the Linux kernel's perf tool. Learn more via the OpenBenchmarking.org test page.

perf-bench, Benchmark: Memcpy 1MB (GB/sec, more is better): D: 31.36, C: 31.49, B: 31.36, A: 31.36
1. (CC) gcc options: -O6 -ggdb3 -funwind-tables -std=gnu99 -lunwind-x86_64 -lunwind -llzma -Xlinker -lpthread -lrt -lm -ldl -lelf -lcrypto -lpython3.9 -lcrypt -lutil -lz -lnuma

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3, Tuning: PSNR/SSIM Optimized - Input: Bosphorus 4K (Frames Per Second, more is better): D: 60.66, C: 60.71, B: 60.65, A: 51.50
1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

libgav1

Libgav1 is an AV1 decoder developed by Google for AV1 profile 0/1 compliance. Learn more via the OpenBenchmarking.org test page.

libgav1 0.17, Video Input: Summer Nature 1080p (FPS, more is better): D: 335.89, C: 336.42, B: 336.45, A: 335.57
1. (CXX) g++ options: -O3 -lrt

AOM AV1

AOM AV1 3.3, Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K (Frames Per Second, more is better): D: 67.18, C: 68.35, B: 68.64, A: 64.44
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.10, Encoder Speed: 6 (Seconds, fewer is better): D: 9.467, C: 9.462, B: 9.370, A: 9.426
1. (CXX) g++ options: -O3 -fPIC -lm

AOM AV1

AOM AV1 3.3, Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K (Frames Per Second, more is better): D: 71.97, C: 62.05, B: 71.70, A: 72.48
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6, Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, fewer is better): D: 11.12 (min 11.05), C: 11.12 (min 11.07), B: 11.12 (min 11.06), A: 11.11 (min 11.05)
oneDNN 2.6, Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): D: 5.22850 (min 4.75), C: 5.44701 (min 4.95), B: 5.20558 (min 4.65), A: 5.38692 (min 4.8)
oneDNN 2.6, Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): D: 3.10236 (min 3.03), C: 3.06944 (min 3.03), B: 3.06780 (min 3.02), A: 3.08878 (min 3.02)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.0, Encoder Mode: Preset 10 - Input: Bosphorus 4K (Frames Per Second, more is better): D: 79.62, C: 79.41, B: 79.17, A: 67.48
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq -pie

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0, Tuning: 10 - Input: Bosphorus 4K (Frames Per Second, more is better): D: 81.04, C: 81.88, B: 82.05, A: 81.12
1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

Parallel BZIP2 Compression

This test measures the time needed to compress a file (FreeBSD-13.0-RELEASE-amd64-memstick.img) using Parallel BZIP2 compression. Learn more via the OpenBenchmarking.org test page.
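
A rough standalone equivalent (the thread count is illustrative; -k keeps the original image):

  pbzip2 -p16 -9 -k FreeBSD-13.0-RELEASE-amd64-memstick.img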

Parallel BZIP2 Compression 1.1.13, FreeBSD-13.0-RELEASE-amd64-memstick.img Compression (Seconds, fewer is better): D: 7.764, C: 8.033, B: 8.053, A: 7.734
1. (CXX) g++ options: -O2 -pthread -lbz2 -lpthread

perf-bench

This test profile is used for running Linux perf-bench, the benchmark support within the Linux kernel's perf tool. Learn more via the OpenBenchmarking.org test page.

perf-bench, Benchmark: Memset 1MB (GB/sec, more is better): D: 54.12, C: 55.72, B: 57.45, A: 54.19
1. (CC) gcc options: -O6 -ggdb3 -funwind-tables -std=gnu99 -lunwind-x86_64 -lunwind -llzma -Xlinker -lpthread -lrt -lm -ldl -lelf -lcrypto -lpython3.9 -lcrypt -lutil -lz -lnuma

Google Draco

Draco is a library developed by Google for compressing/decompressing 3D geometric meshes and point clouds. This test profile uses some Artec3D PLY models as the sample 3D model input formats for Draco compression/decompression. Learn more via the OpenBenchmarking.org test page.
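
A rough standalone encode/decode pair (model file names and compression level are placeholders):

  draco_encoder -i lion.ply -o lion.drc -cl 10
  draco_decoder -i lion.drc -o lion_decoded.ply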

Google Draco 1.5.0, Model: Church Facade (ms, fewer is better): C: 5122, B: 5117, A: 5131
1. (CXX) g++ options: -O3

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.0, Encoder Mode: Preset 12 - Input: Bosphorus 4K (Frames Per Second, more is better): D: 106.51, C: 106.92, B: 106.23, A: 103.41
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq -pie

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6, Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, fewer is better): D: 14.12 (min 14.04), C: 14.13 (min 14.04), B: 14.12 (min 14.03), A: 14.14 (min 14.06)
oneDNN 2.6, Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): D: 11.96 (min 11.89), C: 11.97 (min 11.9), B: 11.96 (min 11.89), A: 11.97 (min 11.9)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet," focused on providing open-source, very fast ETC and S3 texture compression support. The test profile uses an 8K x 8K game texture as a sample input. Learn more via the OpenBenchmarking.org test page.

Etcpak 1.0, Benchmark: Multi-Threaded - Configuration: ETC2 (Mpx/s, more is better): D: 2658.62, C: 2656.20, B: 2656.51, A: 2658.51
1. (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6, Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): D: 16.15 (min 16.07), C: 16.16 (min 16.07), B: 16.15 (min 16.07), A: 16.16 (min 16.07)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.0, Encoder Mode: Preset 8 - Input: Bosphorus 1080p (Frames Per Second, more is better): D: 110.00, C: 111.24, B: 109.33, A: 110.73
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq -pie

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.10, Encoder Speed: 10, Lossless (Seconds, fewer is better): D: 5.386, C: 5.381, B: 5.362, A: 5.349
1. (CXX) g++ options: -O3 -fPIC -lm

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as the eventual successor to WebP. WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading support compared to WebP. Learn more via the OpenBenchmarking.org test page.

WebP2 Image Encode 20220422, Encode Settings: Quality 100, Compression Effort 5 (Seconds, fewer is better): C: 4.968, B: 4.966, A: 4.979
1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0, Tuning: 7 - Input: Bosphorus 1080p (Frames Per Second, more is better): D: 137.68, C: 137.39, B: 137.36, A: 137.02
1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

x264

This is a multi-threaded test of the x264 video encoder run on the CPU with a choice of 1080p or 4K video input. Learn more via the OpenBenchmarking.org test page.

x264 2022-02-22, Video Input: Bosphorus 1080p (Frames Per Second, more is better): D: 128.89, C: 125.68, B: 129.21, A: 129.99
1. (CC) gcc options: -ldl -m64 -lm -lpthread -O3 -flto

Google Draco

Draco is a library developed by Google for compressing/decompressing 3D geometric meshes and point clouds. This test profile uses some Artec3D PLY models as the sample 3D model input formats for Draco compression/decompression. Learn more via the OpenBenchmarking.org test page.

Google Draco 1.5.0, Model: Lion (ms, fewer is better): C: 3548, B: 3562, A: 3569
1. (CXX) g++ options: -O3

Nettle

GNU Nettle is a low-level cryptographic library used by GnuTLS and other software. Learn more via the OpenBenchmarking.org test page.
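
Nettle ships its own nettle-benchmark utility for standalone throughput numbers; a minimal sketch (the optional algorithm filter argument is assumed from upstream usage):

  nettle-benchmark aes256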

Nettle 3.8, Test: aes256 (Mbyte/s, more is better): D: 12155.79 (min 7981.08 / max 20658.54), C: 12195.87 (min 7973.01 / max 20659.96), B: 12205.23 (min 7983.96 / max 20655.99), A: 12207.52 (min 7986.83 / max 20656.74)
1. (CC) gcc options: -O2 -ggdb3 -lnettle -lm -lcrypto

perf-bench

This test profile is used for running Linux perf-bench, the benchmark support within the Linux kernel's perf tool. Learn more via the OpenBenchmarking.org test page.

perf-bench, Benchmark: Syscall Basic (ops/sec, more is better): D: 22573195, C: 22538675, B: 22542556, A: 22549195
1. (CC) gcc options: -O6 -ggdb3 -funwind-tables -std=gnu99 -lunwind-x86_64 -lunwind -llzma -Xlinker -lpthread -lrt -lm -ldl -lelf -lcrypto -lpython3.9 -lcrypt -lutil -lz -lnuma

AOM AV1

AOM AV1 3.3, Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p (Frames Per Second, more is better): D: 143.85, C: 143.74, B: 143.21, A: 145.53
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Glibc Benchmarks

The GNU C Library project provides the core libraries for the GNU system and GNU/Linux systems, as well as many other systems that use Linux as the kernel. These libraries provide critical APIs including ISO C11, POSIX.1-2008, BSD, OS-specific APIs and more. This test profile makes use of Glibc's "benchtests" integrated benchmark suite. Learn more via the OpenBenchmarking.org test page.
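
The benchtests are driven from a configured glibc build tree; a minimal sketch (the BENCHSET subset name is an assumption, see benchtests/README in the glibc sources):

  make bench BENCHSET="bench-math"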

Glibc Benchmarks, Benchmark: exp (ns, fewer is better): D: 10.30, C: 10.28, B: 10.03, A: 10.04
1. (CC) gcc options: -pie -nostdlib -nostartfiles -lgcc -lgcc_s

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 1.0, Video Input: Summer Nature 1080p (FPS, more is better): D: 904.36, C: 903.95, B: 904.56, A: 893.33
1. (CC) gcc options: -pthread -lm

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3, Tuning: Visual Quality Optimized - Input: Bosphorus 1080p (Frames Per Second, more is better): D: 161.49, C: 161.45, B: 161.53, A: 161.18
SVT-VP9 0.3, Tuning: VMAF Optimized - Input: Bosphorus 1080p (Frames Per Second, more is better): D: 182.34, C: 182.43, B: 182.72, A: 180.10
1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as the eventual successor to WebP. WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading support compared to WebP. Learn more via the OpenBenchmarking.org test page.

WebP2 Image Encode 20220422, Encode Settings: Default (Seconds, fewer is better): C: 3.368, B: 3.295, A: 3.251
1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl

AOM AV1

AOM AV1 3.3, Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p (Frames Per Second, more is better): D: 187.30, C: 186.32, B: 188.18, A: 176.81
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3, Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080p (Frames Per Second, more is better): D: 196.51, C: 196.43, B: 197.19, A: 183.60
1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

Glibc Benchmarks

The GNU C Library project provides the core libraries for the GNU system and GNU/Linux systems, as well as many other systems that use Linux as the kernel. These libraries provide critical APIs including ISO C11, POSIX.1-2008, BSD, OS-specific APIs and more. This test profile makes use of Glibc's "benchtests" integrated benchmark suite. Learn more via the OpenBenchmarking.org test page.

Glibc Benchmarks, Benchmark: sin (ns, fewer is better): D: 45.56, C: 46.86, B: 46.37, A: 45.56
1. (CC) gcc options: -pie -nostdlib -nostartfiles -lgcc -lgcc_s

AOM AV1

AOM AV1 3.3, Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p (Frames Per Second, more is better): D: 197.60, C: 201.74, B: 203.95, A: 203.50
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6, Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): D: 17.14 (min 16.81), C: 17.15 (min 16.78), B: 17.15 (min 16.79), A: 17.15 (min 16.73)
oneDNN 2.6, Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, fewer is better): D: 4.33067 (min 4.23), C: 4.33558 (min 4.24), B: 4.33567 (min 4.25), A: 4.33830 (min 4.22)
oneDNN 2.6, Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): D: 1.50262 (min 1.4), C: 1.51319 (min 1.43), B: 1.46563 (min 1.39), A: 1.47653 (min 1.39)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.0, Encoder Mode: Preset 10 - Input: Bosphorus 1080p (Frames Per Second, more is better): D: 219.67, C: 226.72, B: 227.35, A: 225.40
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq -pie

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0, Tuning: 10 - Input: Bosphorus 1080p (Frames Per Second, more is better): D: 272.73, C: 269.66, B: 272.36, A: 269.91
1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

Glibc Benchmarks

The GNU C Library project provides the core libraries for the GNU system and GNU/Linux systems, as well as many other systems that use Linux as the kernel. These libraries provide critical APIs including ISO C11, POSIX.1-2008, BSD, OS-specific APIs and more. This test profile makes use of Glibc's "benchtests" integrated benchmark suite. Learn more via the OpenBenchmarking.org test page.

Glibc Benchmarks, Benchmark: sincos (ns, fewer is better): D: 31.71, C: 31.71, B: 31.72, A: 31.72
1. (CC) gcc options: -pie -nostdlib -nostartfiles -lgcc -lgcc_s

Nettle

GNU Nettle is a low-level cryptographic library used by GnuTLS and other software. Learn more via the OpenBenchmarking.org test page.

Nettle 3.8, Test: sha512 (Mbyte/s, more is better): D: 726.78, C: 747.25, B: 721.29, A: 722.49
1. (CC) gcc options: -O2 -ggdb3 -lnettle -lm -lcrypto

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.0, Encoder Mode: Preset 12 - Input: Bosphorus 1080p (Frames Per Second, more is better): D: 422.85, C: 419.71, B: 423.77, A: 408.32
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq -pie

Nettle

GNU Nettle is a low-level cryptographic library used by GnuTLS and other software. Learn more via the OpenBenchmarking.org test page.

Nettle 3.8, Test: chacha (Mbyte/s, more is better): D: 1520.61 (min 680.19 / max 4644.14), C: 1517.83 (min 679.93 / max 4633.69), B: 1511.89 (min 679.88 / max 4633.29), A: 1519.06 (min 679.9 / max 4635.71)
1. (CC) gcc options: -O2 -ggdb3 -lnettle -lm -lcrypto

Glibc Benchmarks

The GNU C Library project provides the core libraries for the GNU system and GNU/Linux systems, as well as many other systems that use Linux as the kernel. These libraries provide critical APIs including ISO C11, POSIX.1-2008, BSD, OS-specific APIs and more. This test profile makes use of Glibc's "benchtests" integrated benchmark suite. Learn more via the OpenBenchmarking.org test page.

Glibc Benchmarks, Benchmark: cos (ns, fewer is better): D: 53.80, C: 53.81, B: 53.84, A: 54.45
Glibc Benchmarks, Benchmark: atanh (ns, fewer is better): D: 27.35, C: 27.77, B: 27.03, A: 27.02
Glibc Benchmarks, Benchmark: tanh (ns, fewer is better): D: 26.11, C: 26.13, B: 26.09, A: 26.10
Glibc Benchmarks, Benchmark: sinh (ns, fewer is better): D: 18.14, C: 17.18, B: 17.14, A: 17.21
Glibc Benchmarks, Benchmark: ffs (ns, fewer is better): D: 3.33422, C: 3.33392, B: 3.37358, A: 3.33365
Glibc Benchmarks, Benchmark: asinh (ns, fewer is better): D: 21.09, C: 21.08, B: 21.08, A: 21.06
Glibc Benchmarks, Benchmark: pthread_once (ns, fewer is better): D: 3.32184, C: 3.38731, B: 3.32250, A: 3.32140
Glibc Benchmarks, Benchmark: ffsll (ns, fewer is better): D: 3.33486, C: 3.37318, B: 3.33819, A: 3.33510
Glibc Benchmarks, Benchmark: sqrt (ns, fewer is better): D: 3.99406, C: 3.99370, B: 3.99317, A: 3.99377
Glibc Benchmarks, Benchmark: log2 (ns, fewer is better): D: 15.24, C: 15.14, B: 15.14, A: 15.15
Glibc Benchmarks, Benchmark: modf (ns, fewer is better): D: 4.65987, C: 4.65758, B: 4.65510, A: 4.66291
1. (CC) gcc options: -pie -nostdlib -nostartfiles -lgcc -lgcc_s

Nettle

GNU Nettle is a low-level cryptographic library used by GnuTLS and other software. Learn more via the OpenBenchmarking.org test page.

Nettle 3.8, Test: poly1305-aes (Mbyte/s, more is better): D: 4638.26, C: 4637.23, B: 4442.12, A: 4641.58
1. (CC) gcc options: -O2 -ggdb3 -lnettle -lm -lcrypto

197 Results Shown

Quantum ESPRESSO
Blender
WebP2 Image Encode
Renaissance
Java JMH
Blender
GPAW
OSPray
Timed Gem5 Compilation
WebP2 Image Encode
Blender
OSPray
Renaissance
SVT-HEVC
OSPray
Renaissance
WebP2 Image Encode
GROMACS
Blender
libavif avifenc
AOM AV1
Blender
OSPray Studio
AOM AV1
ONNX Runtime:
  fcn-resnet101-11 - CPU - Parallel
  fcn-resnet101-11 - CPU - Standard
  GPT-2 - CPU - Parallel
  bertsquad-12 - CPU - Parallel
  yolov4 - CPU - Parallel
  ArcFace ResNet-100 - CPU - Parallel
  GPT-2 - CPU - Standard
  bertsquad-12 - CPU - Standard
  ArcFace ResNet-100 - CPU - Standard
OSPray Studio
ONNX Runtime:
  yolov4 - CPU - Standard
  super-resolution-10 - CPU - Parallel
  super-resolution-10 - CPU - Standard
OSPray:
  gravity_spheres_volume/dim_512/scivis/real_time
  gravity_spheres_volume/dim_512/ao/real_time
OSPray Studio:
  2 - 1080p - 1 - Path Tracer
  1 - 1080p - 1 - Path Tracer
OSPray
libgav1
Renaissance
OSPray Studio:
  2 - 1080p - 16 - Path Tracer
  1 - 1080p - 16 - Path Tracer
simdjson
Renaissance
OSPray Studio
oneDNN:
  Recurrent Neural Network Training - u8s8f32 - CPU
  Recurrent Neural Network Training - f32 - CPU
  Recurrent Neural Network Training - bf16bf16bf16 - CPU
  Recurrent Neural Network Inference - u8s8f32 - CPU
  Recurrent Neural Network Inference - f32 - CPU
  Recurrent Neural Network Inference - bf16bf16bf16 - CPU
AOM AV1
OSPray Studio
SVT-AV1
OSPray Studio
libavif avifenc
Renaissance
simdjson:
  DistinctUserID
  TopTweet
  PartialTweets
SVT-HEVC
InfluxDB:
  4 - 10000 - 2,5000,1 - 10000
  64 - 10000 - 2,5000,1 - 10000
TensorFlow Lite:
  Inception V4
  Inception ResNet V2
  NASNet Mobile
  Mobilenet Float
  SqueezeNet
AOM AV1
TensorFlow Lite
Facebook RocksDB:
  Read While Writing
  Rand Read
  Update Rand
  Read Rand Write Rand
simdjson
AOM AV1:
  Speed 6 Realtime - Bosphorus 1080p
  Speed 0 Two-Pass - Bosphorus 1080p
Renaissance
Etcpak
libgav1
Renaissance:
  Apache Spark Bayes
  In-Memory Database Shootout
AOM AV1
Renaissance
libgav1
Timed MPlayer Compilation
perf-bench
Stress-NG:
  x86_64 RdRand
  Malloc
  NUMA
  MMAP
  Futex
  Atomic
  Memory Copying
  CPU Stress
  CPU Cache
  IO_uring
  MEMFD
  System V Message Passing
  Matrix Math
  Semaphores
  Crypto
  Glibc Qsort Data Sorting
  Glibc C String Functions
  Context Switching
  Socket Activity
  SENDFILE
  Vector Math
  Forking
perf-bench:
  Futex Lock-Pi
  Futex Hash
Renaissance
QMCPACK
SVT-AV1
AOM AV1
oneDNN:
  Deconvolution Batch shapes_1d - bf16bf16bf16 - CPU
  Deconvolution Batch shapes_1d - f32 - CPU
  Deconvolution Batch shapes_1d - u8s8f32 - CPU
SVT-AV1
x264
dav1d:
  Summer Nature 4K
  Chimera 1080p 10-bit
perf-bench
oneDNN:
  IP Shapes 1D - f32 - CPU
  IP Shapes 1D - bf16bf16bf16 - CPU
  IP Shapes 1D - u8s8f32 - CPU
SVT-HEVC
AOM AV1
dav1d
SVT-VP9
libavif avifenc
SVT-VP9
oneDNN:
  Matrix Multiply Batch Shapes Transformer - f32 - CPU
  Matrix Multiply Batch Shapes Transformer - bf16bf16bf16 - CPU
  Matrix Multiply Batch Shapes Transformer - u8s8f32 - CPU
perf-bench
SVT-VP9
libgav1
AOM AV1
libavif avifenc
AOM AV1
oneDNN:
  IP Shapes 3D - f32 - CPU
  IP Shapes 3D - bf16bf16bf16 - CPU
  IP Shapes 3D - u8s8f32 - CPU
SVT-AV1
SVT-HEVC
Parallel BZIP2 Compression
perf-bench
Google Draco
SVT-AV1
oneDNN:
  Convolution Batch Shapes Auto - f32 - CPU
  Convolution Batch Shapes Auto - u8s8f32 - CPU
Etcpak
oneDNN
SVT-AV1
libavif avifenc
WebP2 Image Encode
SVT-HEVC
x264
Google Draco
Nettle
perf-bench
AOM AV1
Glibc Benchmarks
dav1d
SVT-VP9:
  Visual Quality Optimized - Bosphorus 1080p
  VMAF Optimized - Bosphorus 1080p
WebP2 Image Encode
AOM AV1
SVT-VP9
Glibc Benchmarks
AOM AV1
oneDNN:
  Deconvolution Batch shapes_3d - bf16bf16bf16 - CPU
  Deconvolution Batch shapes_3d - f32 - CPU
  Deconvolution Batch shapes_3d - u8s8f32 - CPU
SVT-AV1
SVT-HEVC
Glibc Benchmarks
Nettle
SVT-AV1
Nettle
Glibc Benchmarks:
  cos
  atanh
  tanh
  sinh
  ffs
  asinh
  pthread_once
  ffsll
  sqrt
  log2
  modf
Nettle