core-i5-12400-april

Intel Core i5-12400 testing with an ASRock B660M-HDV (3.02 BIOS) and llvmpipe on Ubuntu 22.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2204077-NE-COREI512423
Tests in this comparison span the following OpenBenchmarking.org categories: AV1 (3 tests), Timed Code Compilation (2 tests), C/C++ Compiler Tests (3 tests), CPU Massive (3 tests), Creator Workloads (6 tests), Encoding (3 tests), HPC - High Performance Computing (2 tests), Machine Learning (2 tests), Multi-Core (9 tests), Intel oneAPI (3 tests), Programmer / Developer System Benchmarks (2 tests), Raytracing (2 tests), Renderers (2 tests), Server CPU Tests (2 tests), Video Encoding (3 tests).

Test Runs

Result Identifier      Date             Test Duration
A                      April 06 2022    1 Hour, 59 Minutes
Intel Core i5-12400    April 06 2022    6 Minutes
B                      April 06 2022    5 Hours, 40 Minutes
C                      April 07 2022    5 Hours, 38 Minutes
Average                                 3 Hours, 21 Minutes



core-i5-12400-april System Details (runs A, Intel Core i5-12400, B, C)

Processor: Intel Core i5-12400 @ 5.60GHz (6 Cores / 12 Threads)
Motherboard: ASRock B660M-HDV (3.02 BIOS)
Chipset: Intel Device 7aa7
Memory: 16GB
Disk: 512GB Sabrent
Graphics: llvmpipe
Audio: Realtek ALC897
Network: Intel
OS: Ubuntu 22.04
Kernel: 5.15.0-18-generic (x86_64)
Desktop: GNOME Shell 41.3
Display Server: X Server 1.20.14
OpenGL: 4.5 Mesa 21.2.2 (LLVM 12.0.1 256 bits) / 4.5 Mesa 22.0.1 (LLVM 13.0.1 256 bits), varying between runs
Vulkan: 1.1.182 / 1.2.204, varying between runs
Compiler: GCC 11.2.0
File-System: ext4
Screen Resolution: 1920x1080

Kernel Details
- Transparent Huge Pages: madvise

Compiler Details
- A: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-iOLsLC/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-iOLsLC/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Intel Core i5-12400, B, C: identical to A except that the --enable-offload-targets paths point under /build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/ rather than /build/gcc-11-iOLsLC/gcc-11-11.2.0/debian/

Processor Details
- A, Intel Core i5-12400, B, C: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0x18
- Thermald: 2.4.7 (A); 2.4.9 (Intel Core i5-12400, B, C)

Java Details
- A, B, C: OpenJDK Runtime Environment (build 11.0.14.1+1-Ubuntu-0ubuntu1)

Python Details
- A: Python 3.9.12
- B: Python 3.10.4
- C: Python 3.10.4

Security Details
- itlb_multihit: Not affected
- l1tf: Not affected
- mds: Not affected
- meltdown: Not affected
- spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp
- spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization
- spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling
- srbds: Not affected
- tsx_async_abort: Not affected

core-i5-12400-april Result Overview: side-by-side values for all 76 tests (dav1d, AOM AV1, OSPray, Timed MPlayer Compilation, Parallel BZIP2 Compression, oneDNN, OSPray Studio, Timed Wasmer Compilation, Facebook RocksDB, Java JMH, ONNX Runtime, libavif avifenc) across runs A, Intel Core i5-12400, B, and C. The individual results below carry the same values together with per-run error data.

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 1.0 - FPS, More Is Better
  Video Input: Chimera 1080p
    A: 561.82 | B: 612.15 (SE +/- 1.17, N = 3; min 610.11 / max 614.16) | C: 612.41 (SE +/- 1.26, N = 3; min 610.02 / max 614.28)
  Video Input: Summer Nature 4K
    A: 172.71 | B: 191.89 (SE +/- 0.26, N = 3; min 191.38 / max 192.21) | C: 192.02 (SE +/- 0.29, N = 3; min 191.44 / max 192.35)
  Video Input: Summer Nature 1080p
    A: 710.02 | B: 795.53 (SE +/- 1.98, N = 3; min 791.58 / max 797.86) | C: 796.46 (SE +/- 3.38, N = 3; min 789.78 / max 800.73)
  Video Input: Chimera 1080p 10-bit
    A: 481.75 | B: 536.82 (SE +/- 0.72, N = 3; min 535.39 / max 537.71) | C: 536.42 (SE +/- 0.74, N = 3; min 534.95 / max 537.26)
  1. (CC) gcc options: -pthread -lm
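Each multi-run result in this file is reported as an average with a standard error ("SE +/- x, N = y") plus a min/max spread. A minimal Python sketch of how those figures are derived is shown below; the three sample FPS values are reconstructed from run B's Chimera 1080p entry (min 610.11, avg 612.15, max 614.16 with N = 3 implies a middle run near 612.18) and are illustrative rather than taken from the raw logs.

import math
import statistics

# Per-run FPS samples (reconstructed from the reported min/avg/max, N = 3)
samples = [610.11, 612.18, 614.16]

avg = statistics.mean(samples)                            # ~612.15
se = statistics.stdev(samples) / math.sqrt(len(samples))  # ~1.17, standard error of the mean

print(f"Avg: {avg:.2f}  SE +/- {se:.2f}  N = {len(samples)}")
print(f"Min: {min(samples):.2f} / Max: {max(samples):.2f}")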

AOM AV1

AOM AV1 3.3 - Frames Per Second, More Is Better
  Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4K
    A: 0.13 | B: 0.14 (SE +/- 0.00, N = 3; min 0.14 / max 0.14) | C: 0.14 (SE +/- 0.00, N = 3; min 0.14 / max 0.14)
  Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K
    A: 4.17 | B: 4.70 (SE +/- 0.00, N = 3; min 4.69 / max 4.7) | C: 4.70 (SE +/- 0.00, N = 3; min 4.7 / max 4.71)
  Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K
    A: 13.91 | B: 15.08 (SE +/- 0.13, N = 15; min 14.18 / max 15.72) | C: 15.02 (SE +/- 0.07, N = 3; min 14.88 / max 15.09)
  Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K
    A: 8.00 | B: 8.99 (SE +/- 0.00, N = 3; min 8.98 / max 8.99) | C: 8.92 (SE +/- 0.07, N = 3; min 8.77 / max 8.99)
  Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K
    A: 42.47 | B: 45.65 (SE +/- 0.02, N = 3; min 45.61 / max 45.69) | C: 45.68 (SE +/- 0.07, N = 3; min 45.59 / max 45.81)
  Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K
    A: 63.16 | B: 68.14 (SE +/- 0.12, N = 3; min 67.94 / max 68.37) | C: 68.39 (SE +/- 0.04, N = 3; min 68.31 / max 68.43)
  Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K
    A: 61.38 | B: 72.38 (SE +/- 0.14, N = 3; min 72.16 / max 72.63) | C: 72.41 (SE +/- 0.05, N = 3; min 72.32 / max 72.46)
  Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p
    A: 0.34 | B: 0.37 (SE +/- 0.00, N = 3; min 0.37 / max 0.37) | C: 0.37 (SE +/- 0.00, N = 3; min 0.37 / max 0.37)
  Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p
    A: 9.58 | B: 10.39 (SE +/- 0.01, N = 3; min 10.36 / max 10.41) | C: 10.37 (SE +/- 0.04, N = 3; min 10.3 / max 10.41)
  Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p
    A: 11.67 | B: 12.36 (SE +/- 0.04, N = 3; min 12.29 / max 12.42) | C: 12.41 (SE +/- 0.10, N = 8; min 11.72 / max 12.66)
  Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p
    A: 24.11 | B: 26.97 (SE +/- 0.01, N = 3; min 26.94 / max 26.99) | C: 26.95 (SE +/- 0.00, N = 3; min 26.95 / max 26.96)
  Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p
    A: 107.06 | B: 116.51 (SE +/- 0.41, N = 3; min 115.73 / max 117.14) | C: 114.84 (SE +/- 0.29, N = 3; min 114.27 / max 115.2)
  Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p
    A: 124.60 | B: 149.25 (SE +/- 2.69, N = 15; min 141.99 / max 174.98) | C: 142.90 (SE +/- 0.31, N = 3; min 142.33 / max 143.38)
  Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p
    A: 133.21 | B: 150.33 (SE +/- 0.90, N = 3; min 149.23 / max 152.11) | C: 149.66 (SE +/- 1.27, N = 3; min 147.22 / max 151.46)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

OSPray

OSPray 2.9 - Items Per Second, More Is Better
  Benchmark: particle_volume/ao/real_time
    A: 12.97 | B: 14.21 (SE +/- 0.03, N = 3; min 14.16 / max 14.26) | C: 14.16 (SE +/- 0.04, N = 3; min 14.11 / max 14.23)
  Benchmark: particle_volume/scivis/real_time
    A: 12.32 | B: 13.89 (SE +/- 0.01, N = 3; min 13.88 / max 13.9) | C: 13.88 (SE +/- 0.02, N = 3; min 13.85 / max 13.93)
  Benchmark: particle_volume/pathtracer/real_time
    A: 163.59 | B: 182.11 (SE +/- 0.02, N = 3; min 182.09 / max 182.16) | C: 182.04 (SE +/- 0.10, N = 3; min 181.85 / max 182.21)
  Benchmark: gravity_spheres_volume/dim_512/ao/real_time
    A: 1.83804 | B: 2.00038 (SE +/- 0.00584, N = 3; min 1.99 / max 2.01) | C: 1.98550 (SE +/- 0.00129, N = 3; min 1.98 / max 1.99)
  Benchmark: gravity_spheres_volume/dim_512/scivis/real_time
    A: 1.79913 | B: 1.96577 (SE +/- 0.00094, N = 3; min 1.96 / max 1.97) | C: 1.95220 (SE +/- 0.00260, N = 3; min 1.95 / max 1.96)
  Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time
    A: 2.47222 | B: 2.73146 (SE +/- 0.00279, N = 3; min 2.73 / max 2.74) | C: 2.73399 (SE +/- 0.00158, N = 3; min 2.73 / max 2.74)

Timed MPlayer Compilation

This test times how long it takes to build the MPlayer open-source media player program. Learn more via the OpenBenchmarking.org test page.

Timed MPlayer Compilation 1.5 - Time To Compile - Seconds, Fewer Is Better
  A: 42.43 | B: 39.11 (SE +/- 0.09, N = 3; min 38.94 / max 39.24) | C: 39.06 (SE +/- 0.08, N = 3; min 38.91 / max 39.19)

Parallel BZIP2 Compression

This test measures the time needed to compress a file (FreeBSD-13.0-RELEASE-amd64-memstick.img) using Parallel BZIP2 compression. Learn more via the OpenBenchmarking.org test page.
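As a rough illustration of what this result measures, the sketch below times pbzip2 compressing a large file with one worker per CPU thread. It assumes pbzip2 is installed and uses a placeholder input path; it is not the test profile itself.

import os
import subprocess
import time

src = "FreeBSD-13.0-RELEASE-amd64-memstick.img"  # placeholder path to the image being compressed
threads = os.cpu_count() or 1

start = time.time()
# -f overwrites any existing .bz2, -k keeps the input, -p sets the number of worker threads
subprocess.run(["pbzip2", "-f", "-k", f"-p{threads}", src], check=True)
print(f"Compressed in {time.time() - start:.3f} s using {threads} threads")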

Parallel BZIP2 Compression 1.1.13 - FreeBSD-13.0-RELEASE-amd64-memstick.img Compression - Seconds, Fewer Is Better
  A: 10.955 | B: 9.890 (SE +/- 0.106, N = 3; min 9.69 / max 10.05) | C: 10.031 (SE +/- 0.023, N = 3; min 10 / max 10.08)
  1. (CXX) g++ options: -O2 -pthread -lbz2 -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total performance time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - ms, Fewer Is Better
  Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU
    A: 113.72400 | B: 3.74052 (SE +/- 0.00457, N = 3; min 3.73 / max 3.75) | C: 3.73962 (SE +/- 0.00589, N = 3; min 3.73 / max 3.75) | MIN: 51.18
  Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU
    A: 64.37370 | B: 9.90106 (SE +/- 0.01269, N = 3; min 9.88 / max 9.92) | C: 9.84082 (SE +/- 0.00524, N = 3; min 9.83 / max 9.85) | MIN: 14.76 / 9.49 / 9.41
  Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU
    A: 28.32680 | B: 0.957803 (SE +/- 0.001376, N = 3; min 0.96 / max 0.96) | C: 0.959336 (SE +/- 0.001531, N = 3; min 0.96 / max 0.96) | MIN: 0.92
  Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU
    A: 27.91720 | B: 2.10894 (SE +/- 0.00109, N = 3; min 2.11 / max 2.11) | C: 2.09714 (SE +/- 0.00196, N = 3; min 2.09 / max 2.1) | MIN: 2.08
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl

Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU
  A, B, C: The test run did not produce a result.

Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU
  A, B, C: The test run did not produce a result.

oneDNN 2.6 - ms, Fewer Is Better (continued)
  Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU
    A: 47.69 | B: 14.33 (SE +/- 0.01, N = 3; min 14.3 / max 14.35) | C: 14.33 (SE +/- 0.01, N = 3; min 14.3 / max 14.34) | MIN: 17.03 / 14.19 / 14.16
  Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU
    A: 162.85300 | B: 9.06388 (SE +/- 0.17309, N = 15; min 8.34 / max 10.26) | C: 8.90115 (SE +/- 0.13280, N = 15; min 8.51 / max 10.13) | MIN: 6.38
  Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU
    A: 46.67010 | B: 7.69890 (SE +/- 0.00768, N = 3; min 7.68 / max 7.71) | C: 7.81726 (SE +/- 0.10400, N = 3; min 7.7 / max 8.03) | MIN: 16.15 / 7.57 / 7.6
  Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU
    A: 50.70 | B: 13.76 (SE +/- 0.02, N = 3; min 13.71 / max 13.78) | C: 13.80 (SE +/- 0.03, N = 3; min 13.75 / max 13.84) | MIN: 19.83 / 13.16 / 13.2
  Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU
    A: 46.39830 | B: 1.42641 (SE +/- 0.00232, N = 3; min 1.42 / max 1.43) | C: 1.42738 (SE +/- 0.00246, N = 3; min 1.42 / max 1.43) | MIN: 1.4
  Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU
    A: 17.21520 | B: 1.98156 (SE +/- 0.00178, N = 3; min 1.98 / max 1.98) | C: 1.99365 (SE +/- 0.01869, N = 3; min 1.96 / max 2.02) | MIN: 1.97
  Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU
    A: 29463.30 | B: 4095.96 (SE +/- 3.35, N = 3; min 4091.12 / max 4102.4) | C: 4086.37 (SE +/- 12.29, N = 3; min 4061.81 / max 4099.3) | MIN: 14417.6 / 4045.08 / 4011.27
  Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU
    A: 34134.50 | B: 2264.90 (SE +/- 0.80, N = 3; min 2263.38 / max 2266.06) | C: 2264.46 (SE +/- 3.21, N = 3; min 2258.06 / max 2268.13) | MIN: 18540.8
  Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU
    A: 34757.40 | B: 4102.34 (SE +/- 1.65, N = 3; min 4099.36 / max 4105.06) | C: 4101.91 (SE +/- 3.15, N = 3; min 4095.66 / max 4105.72) | MIN: 18514.7
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl

Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU
  A, B, C: The test run did not produce a result.

Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU
  A, B, C: The test run did not produce a result.

Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU
  A, B, C: The test run did not produce a result.

oneDNN 2.6 - ms, Fewer Is Better (continued)
  Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU
    A: 28488.20 | B: 2264.98 (SE +/- 1.35, N = 3; min 2262.36 / max 2266.84) | C: 2261.84 (SE +/- 5.98, N = 3; min 2250.41 / max 2270.58) | MIN: 12419
  Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU
    A: 71.82680 | B: 2.68966 (SE +/- 0.00256, N = 3; min 2.68 / max 2.69) | C: 2.69025 (SE +/- 0.00320, N = 3; min 2.68 / max 2.69) | MIN: 3.12
  Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU
    A: 36805.20 | B: 4099.42 (SE +/- 7.00, N = 3; min 4085.43 / max 4106.64) | C: 4098.30 (SE +/- 6.25, N = 3; min 4085.8 / max 4104.9) | MIN: 16567
  Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU
    A: 24467.50 | B: 2271.69 (SE +/- 4.51, N = 3; min 2266.8 / max 2280.7) | C: 2270.92 (SE +/- 3.05, N = 3; min 2265.12 / max 2275.48) | MIN: 9924.5
  Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU
    A: 53.70110 | B: 1.04716 (SE +/- 0.00153, N = 3; min 1.04 / max 1.05) | C: 1.05341 (SE +/- 0.00112, N = 3; min 1.05 / max 1.06) | MIN: 1.03
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl

Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU
  A, B, C: The test run did not produce a result.

OSPray Studio

Intel OSPray Studio is an open-source, interactive visualization and ray-tracing software package. OSPray Studio makes use of Intel OSPray, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPray builds on Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray Studio 0.10 - ms, Fewer Is Better
  Camera: 1 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer
    A: 3398 | B: 3114 (SE +/- 7.42, N = 3; min 3099 / max 3123) | C: 3108 (SE +/- 10.02, N = 3; min 3088 / max 3119)
  Camera: 2 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer
    A: 3471 | B: 3165 (SE +/- 0.58, N = 3; min 3164 / max 3166) | C: 3163 (SE +/- 0.33, N = 3; min 3163 / max 3164)
  Camera: 3 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer
    A: 4189 | B: 3740 (SE +/- 3.53, N = 3; min 3735 / max 3747) | C: 3735 (SE +/- 2.85, N = 3; min 3729 / max 3738)
  Camera: 1 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer
    A: 56352 | B: 50151 (SE +/- 21.38, N = 3; min 50120 / max 50192) | C: 50418 (SE +/- 272.17, N = 3; min 50143 / max 50962)
  Camera: 1 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer
    A: 115145 | B: 102548 (SE +/- 146.65, N = 3; min 102256 / max 102715) | C: 102588 (SE +/- 84.91, N = 3; min 102434 / max 102727)
  Camera: 2 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer
    A: 57299 | B: 50897 (SE +/- 44.19, N = 3; min 50809 / max 50948) | C: 50954 (SE +/- 27.83, N = 3; min 50916 / max 51008)
  Camera: 2 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer
    A: 116689 | B: 104124 (SE +/- 167.02, N = 3; min 103790 / max 104296) | C: 104044 (SE +/- 63.66, N = 3; min 103922 / max 104137)
  Camera: 3 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer
    A: 68568 | B: 62054 (SE +/- 147.67, N = 3; min 61759 / max 62202) | C: 62054 (SE +/- 140.67, N = 3; min 61773 / max 62197)
  Camera: 3 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer
    A: 136929 | B: 122366 (SE +/- 170.37, N = 3; min 122025 / max 122542) | C: 122245 (SE +/- 103.73, N = 3; min 122044 / max 122390)
  1. (CXX) g++ options: -O3 -lm -ldl

Timed Wasmer Compilation

This test times how long it takes to compile Wasmer. Wasmer is written in the Rust programming language and is a WebAssembly runtime implementation that supports WASI and Emscripten. This test profile builds Wasmer with the Cranelift and Singlepass compiler features enabled. Learn more via the OpenBenchmarking.org test page.
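For orientation only, a hedged sketch of timing such a build from a Wasmer 2.2 checkout is shown below. The cargo feature names are taken from the description above; the exact packages and features the test profile enables may differ, so treat the invocation as an assumption rather than the profile's actual command.

import subprocess
import time

start = time.time()
# Assumed invocation: build the workspace in release mode with the Cranelift and
# Singlepass compiler backends enabled (feature names taken from the description above).
subprocess.run(
    ["cargo", "build", "--release", "--features", "cranelift,singlepass"],
    cwd="wasmer",  # placeholder path to a Wasmer 2.2 source checkout
    check=True,
)
print(f"Time To Compile: {time.time() - start:.2f} s")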

Timed Wasmer Compilation 2.2 - Time To Compile - Seconds, Fewer Is Better
  A: 85.49 | B: 77.91 (SE +/- 0.11, N = 3; min 77.7 / max 78.07) | C: 77.82 (SE +/- 0.23, N = 3; min 77.35 / max 78.07)
  1. (CC) gcc options: -m64 -ldl -lxkbcommon -lgcc_s -lutil -lrt -lpthread -lm -lc -pie -nodefaultlibs

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.
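The four RocksDB results below correspond to standard db_bench workload names. The sketch below shows how one might reproduce similar workloads with a locally built db_bench binary; the key count and thread count are placeholders and will not exactly match the test profile's configuration.

import subprocess

# Mapping of the result names below to upstream db_bench workload names
workloads = {
    "Random Read": "readrandom",
    "Update Random": "updaterandom",
    "Read While Writing": "readwhilewriting",
    "Read Random Write Random": "readrandomwriterandom",
}

for label, bench in workloads.items():
    print(f"== {label} ==")
    # Placeholder key count and thread count; the test profile's settings differ.
    subprocess.run(["./db_bench", f"--benchmarks={bench}",
                    "--num=1000000", "--threads=6"], check=True)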

Facebook RocksDB 7.0.1 - Op/s, More Is Better
  Test: Random Read
    A: 45697206 | B: 49594691 (SE +/- 269279.25, N = 3; min 49313751 / max 50133079) | C: 49606865 (SE +/- 309988.44, N = 3; min 49262931 / max 50225554)
  Test: Update Random
    A: 679780 | B: 742880 (SE +/- 1507.03, N = 3; min 739923 / max 744864) | C: 743135 (SE +/- 847.59, N = 3; min 741675 / max 744611)
  Test: Read While Writing
    A: 1467166 | B: 1558793 (SE +/- 7170.07, N = 3; min 1548377 / max 1572537) | C: 1598728 (SE +/- 12505.02, N = 3; min 1577138 / max 1620456)
  Test: Read Random Write Random
    A: 1497554 | B: 1631656 (SE +/- 8702.90, N = 3; min 1617061 / max 1647167) | C: 1621120 (SE +/- 1481.37, N = 3; min 1618298 / max 1623312)
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Java JMH

This very basic test profile runs the stock Java JMH (Java Microbenchmark Harness) benchmark via Maven. Learn more via the OpenBenchmarking.org test page.

Java JMH - Throughput - Ops/s, More Is Better
  A: 20972409216.50 | B: 23002668637.36 | C: 22882301161.56

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.
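A minimal sketch of measuring CPU inference throughput with the onnxruntime Python API is shown below, including the execution-mode setting that loosely corresponds to the Parallel vs. Standard executor split in the results; the model path and input shape are placeholders, and this is not the test profile itself.

import time
import numpy as np
import onnxruntime as ort

opts = ort.SessionOptions()
# "Executor: Parallel" roughly maps to ORT_PARALLEL; "Standard" to ORT_SEQUENTIAL.
opts.execution_mode = ort.ExecutionMode.ORT_PARALLEL

sess = ort.InferenceSession("model.onnx", sess_options=opts,
                            providers=["CPUExecutionProvider"])
inp = sess.get_inputs()[0]
x = np.random.rand(1, 3, 224, 224).astype(np.float32)  # placeholder input shape

runs, start = 0, time.time()
while time.time() - start < 10.0:  # sample for roughly ten seconds
    sess.run(None, {inp.name: x})
    runs += 1
print(f"{runs / (time.time() - start) * 60:.1f} inferences per minute")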

ONNX Runtime 1.11 - Inferences Per Minute, More Is Better
  Model: GPT-2 - Device: CPU - Executor: Parallel
    A: 5745 | B: 6076 (SE +/- 5.13, N = 3; min 6068.5 / max 6085.5) | C: 6049 (SE +/- 8.66, N = 3; min 6033.5 / max 6063.5)
  Model: GPT-2 - Device: CPU - Executor: Standard
    A: 6878 | B: 7205 (SE +/- 6.71, N = 3; min 7196 / max 7218) | C: 7194 (SE +/- 2.52, N = 3; min 7189 / max 7197.5)
  Model: yolov4 - Device: CPU - Executor: Parallel
    A: 277 | B: 301 (SE +/- 0.88, N = 3; min 299 / max 302) | C: 301 (SE +/- 1.36, N = 3; min 299.5 / max 304)
  Model: yolov4 - Device: CPU - Executor: Standard
    A: 292 | B: 309 (SE +/- 0.17, N = 3; min 308.5 / max 309) | C: 309 (SE +/- 0.17, N = 3; min 308.5 / max 309)
  Model: bertsquad-12 - Device: CPU - Executor: Parallel
    A: 427 | B: 443 (SE +/- 4.83, N = 12; min 416.5 / max 466) | C: 438 (SE +/- 3.68, N = 12; min 428 / max 464)
  Model: bertsquad-12 - Device: CPU - Executor: Standard
    A: 439 | B: 458 (SE +/- 0.17, N = 3; min 458 / max 458.5) | C: 458 (SE +/- 0.17, N = 3; min 458 / max 458.5)
  Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel
    A: 44 | B: 56 (SE +/- 0.44, N = 3; min 55.5 / max 57) | C: 57 (SE +/- 0.17, N = 3; min 56.5 / max 57)
  Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
    A: 42 | B: 44 (SE +/- 0.00, N = 3; min 43.5 / max 43.5) | C: 44 (SE +/- 0.00, N = 3; min 43.5 / max 43.5)
  Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel
    A: 853 | B: 944 (SE +/- 3.33, N = 3; min 940.5 / max 950.5) | C: 944 (SE +/- 2.96, N = 3; min 939.5 / max 949.5)
  Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
    A: 869 | B: 884 (SE +/- 0.00, N = 3; min 884 / max 884) | C: 884 (SE +/- 0.17, N = 3; min 883.5 / max 884)
  Model: super-resolution-10 - Device: CPU - Executor: Parallel
    A: 2845 | B: 3247 (SE +/- 4.73, N = 3; min 3239.5 / max 3255.5) | C: 3257 (SE +/- 11.90, N = 3; min 3241 / max 3280.5)
  Model: super-resolution-10 - Device: CPU - Executor: Standard
    A: 2822 | B: 2940 (SE +/- 12.08, N = 3; min 2928 / max 2964.5) | C: 2963 (SE +/- 0.67, N = 3; min 2962 / max 2964)
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

libavif avifenc

This test of the AOMedia libavif library measures the time needed to encode a JPEG image to the AV1 Image File Format (AVIF). Learn more via the OpenBenchmarking.org test page.
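A hedged sketch of the kind of measurement these results represent is shown below: timing avifenc at several encoder speeds from Python. It assumes avifenc is on the PATH and that the -s (speed) and --lossless options behave as in recent libavif releases; the input and output file names are placeholders.

import subprocess
import time

def time_encode(speed: int, lossless: bool = False) -> float:
    # Assumed avifenc options: -s selects encoder speed, --lossless requests lossless encoding.
    cmd = ["avifenc", "-s", str(speed)]
    if lossless:
        cmd.append("--lossless")
    cmd += ["input.jpg", "output.avif"]  # placeholder file names
    start = time.time()
    subprocess.run(cmd, check=True, capture_output=True)
    return time.time() - start

for speed, lossless in [(0, False), (2, False), (6, False), (6, True), (10, True)]:
    label = f"Encoder Speed: {speed}" + (", Lossless" if lossless else "")
    print(f"{label}: {time_encode(speed, lossless):.3f} s")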

libavif avifenc 0.10 - Seconds, Fewer Is Better
  Encoder Speed: 0
    Intel Core i5-12400: 189.74 | B: 191.08 (SE +/- 0.09, N = 3; min 190.92 / max 191.23) | C: 191.48 (SE +/- 0.20, N = 3; min 191.09 / max 191.79)
  Encoder Speed: 2
    Intel Core i5-12400: 85.16 | B: 85.97 (SE +/- 0.15, N = 3; min 85.66 / max 86.14) | C: 85.94 (SE +/- 0.30, N = 3; min 85.34 / max 86.29)
  Encoder Speed: 6
    Intel Core i5-12400: 12.21 | B: 12.21 (SE +/- 0.03, N = 3; min 12.18 / max 12.27) | C: 12.19 (SE +/- 0.02, N = 3; min 12.16 / max 12.22)
  Encoder Speed: 6, Lossless
    Intel Core i5-12400: 14.92 | B: 14.98 (SE +/- 0.03, N = 3; min 14.93 / max 15.01) | C: 15.00 (SE +/- 0.03, N = 3; min 14.96 / max 15.05)
  Encoder Speed: 10, Lossless
    Intel Core i5-12400: 6.200 | B: 6.247 (SE +/- 0.026, N = 3; min 6.2 / max 6.28) | C: 6.257 (SE +/- 0.029, N = 3; min 6.2 / max 6.3)
  1. (CXX) g++ options: -O3 -fPIC -lm

76 Results Shown

dav1d:
  Chimera 1080p
  Summer Nature 4K
  Summer Nature 1080p
  Chimera 1080p 10-bit
AOM AV1:
  Speed 0 Two-Pass - Bosphorus 4K
  Speed 4 Two-Pass - Bosphorus 4K
  Speed 6 Realtime - Bosphorus 4K
  Speed 6 Two-Pass - Bosphorus 4K
  Speed 8 Realtime - Bosphorus 4K
  Speed 9 Realtime - Bosphorus 4K
  Speed 10 Realtime - Bosphorus 4K
  Speed 0 Two-Pass - Bosphorus 1080p
  Speed 4 Two-Pass - Bosphorus 1080p
  Speed 6 Realtime - Bosphorus 1080p
  Speed 6 Two-Pass - Bosphorus 1080p
  Speed 8 Realtime - Bosphorus 1080p
  Speed 9 Realtime - Bosphorus 1080p
  Speed 10 Realtime - Bosphorus 1080p
OSPray:
  particle_volume/ao/real_time
  particle_volume/scivis/real_time
  particle_volume/pathtracer/real_time
  gravity_spheres_volume/dim_512/ao/real_time
  gravity_spheres_volume/dim_512/scivis/real_time
  gravity_spheres_volume/dim_512/pathtracer/real_time
Timed MPlayer Compilation
Parallel BZIP2 Compression
oneDNN:
  IP Shapes 1D - f32 - CPU
  IP Shapes 3D - f32 - CPU
  IP Shapes 1D - u8s8f32 - CPU
  IP Shapes 3D - u8s8f32 - CPU
  Convolution Batch Shapes Auto - f32 - CPU
  Deconvolution Batch shapes_1d - f32 - CPU
  Deconvolution Batch shapes_3d - f32 - CPU
  Convolution Batch Shapes Auto - u8s8f32 - CPU
  Deconvolution Batch shapes_1d - u8s8f32 - CPU
  Deconvolution Batch shapes_3d - u8s8f32 - CPU
  Recurrent Neural Network Training - f32 - CPU
  Recurrent Neural Network Inference - f32 - CPU
  Recurrent Neural Network Training - u8s8f32 - CPU
  Recurrent Neural Network Inference - u8s8f32 - CPU
  Matrix Multiply Batch Shapes Transformer - f32 - CPU
  Recurrent Neural Network Training - bf16bf16bf16 - CPU
  Recurrent Neural Network Inference - bf16bf16bf16 - CPU
  Matrix Multiply Batch Shapes Transformer - u8s8f32 - CPU
OSPray Studio:
  1 - 1080p - 1 - Path Tracer
  2 - 1080p - 1 - Path Tracer
  3 - 1080p - 1 - Path Tracer
  1 - 1080p - 16 - Path Tracer
  1 - 1080p - 32 - Path Tracer
  2 - 1080p - 16 - Path Tracer
  2 - 1080p - 32 - Path Tracer
  3 - 1080p - 16 - Path Tracer
  3 - 1080p - 32 - Path Tracer
Timed Wasmer Compilation
Facebook RocksDB:
  Rand Read
  Update Rand
  Read While Writing
  Read Rand Write Rand
Java JMH
ONNX Runtime:
  GPT-2 - CPU - Parallel
  GPT-2 - CPU - Standard
  yolov4 - CPU - Parallel
  yolov4 - CPU - Standard
  bertsquad-12 - CPU - Parallel
  bertsquad-12 - CPU - Standard
  fcn-resnet101-11 - CPU - Parallel
  fcn-resnet101-11 - CPU - Standard
  ArcFace ResNet-100 - CPU - Parallel
  ArcFace ResNet-100 - CPU - Standard
  super-resolution-10 - CPU - Parallel
  super-resolution-10 - CPU - Standard
libavif avifenc:
  0
  2
  6
  6, Lossless
  10, Lossless