3950X April

AMD Ryzen 9 3950X 16-Core testing with an ASUS ROG CROSSHAIR VII HERO (WI-FI) motherboard (3103 BIOS) and a Sapphire AMD Radeon RX 470/480/570/570X/580/580X/590 4GB graphics card on Ubuntu 20.04, via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2204120-NE-3950XAPRI19
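
For reference, a minimal shell sketch of that workflow (the result ID is the one shown above; the rest is standard Phoronix Test Suite usage):

    # Assumes the phoronix-test-suite package is already installed on the system.
    phoronix-test-suite benchmark 2204120-NE-3950XAPRI19   # fetches the needed test profiles, runs them locally, and compares against this result file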

Test categories represented in this result file: AV1 (3 tests), C/C++ Compiler Tests (3 tests), CPU Massive (2 tests), Creator Workloads (6 tests), Encoding (3 tests), Common Kernel Benchmarks (2 tests), Multi-Core (8 tests), Intel oneAPI (3 tests), Raytracing (2 tests), Renderers (2 tests), Video Encoding (3 tests).

Run Management

Result Identifier    Date Run         Test Duration
A                    April 11 2022    1 Hour, 26 Minutes
B                    April 11 2022    4 Hours, 15 Minutes
3                    April 12 2022    4 Hours, 4 Minutes


3950X April - System Configuration (identical for runs A, B, and 3)

Processor: AMD Ryzen 9 3950X 16-Core @ 3.50GHz (16 Cores / 32 Threads)
Motherboard: ASUS ROG CROSSHAIR VII HERO (WI-FI) (3103 BIOS)
Chipset: AMD Starship/Matisse
Memory: 16GB
Disk: Samsung SSD 970 EVO 250GB
Graphics: Sapphire AMD Radeon RX 470/480/570/570X/580/580X/590 4GB (1260/1750MHz)
Audio: AMD Ellesmere HDMI Audio
Monitor: DELL S2409W
Network: Intel I211 + Realtek RTL8822BE 802.11a/b/g/n/ac
OS: Ubuntu 20.04
Kernel: 5.11.0-43-generic (x86_64)
Desktop: GNOME Shell 3.36.4
Display Server: X Server 1.20.13
Vulkan: 1.2.128
Compiler: GCC 9.3.0
File-System: ext4
Screen Resolution: 1920x1080

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-HskZEa/gcc-9-9.3.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: acpi-cpufreq ondemand (Boost: Enabled) - CPU Microcode: 0x8701021
Java Details: OpenJDK Runtime Environment (build 11.0.13+8-Ubuntu-0ubuntu1.20.04)
Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional STIBP: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Result Overview (Phoronix Test Suite): relative comparison of runs A, B, and 3 across OSPray, Parallel BZIP2 Compression, perf-bench, oneDNN, Timed MPlayer Compilation, Facebook RocksDB, AOM AV1, libavif avifenc, libgav1, Java JMH, and OSPray Studio; the overview chart's scale spans 100% to 103%.

[Condensed per-test results table omitted: it lists the values for runs A, B, and 3 across all 70 tests, and the same figures appear in the individual result entries below.]

perf-bench

This test profile is used for running Linux perf-bench, the benchmark support within the Linux kernel's perf tool. Learn more via the OpenBenchmarking.org test page.
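
The same micro-benchmarks can also be launched directly through the kernel's perf tool; a minimal sketch, assuming a perf build with the corresponding benchmarks compiled in (subcommand names follow perf's own naming):

    perf bench epoll wait       # epoll wakeup micro-benchmark
    perf bench syscall basic    # minimal syscall (getppid) loop
    perf bench mem memset       # memset throughput, 1MB default buffer
    perf bench mem memcpy       # memcpy throughput, 1MB default buffer
    perf bench futex hash       # futex hashing throughput
    perf bench futex lock-pi    # priority-inheritance futex lock/unlock
    perf bench sched pipe       # pipe-based scheduler ping-pong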

perf-bench - Benchmark: Epoll Wait (ops/sec, more is better): A: 29976; B: 32098; 3: 33429

OSPray

OSPray 2.9 - Benchmark: gravity_spheres_volume/dim_512/ao/real_time (Items Per Second, more is better): A: 2.55935; B: 2.61399; 3: 2.35183

OSPray 2.9 - Benchmark: gravity_spheres_volume/dim_512/scivis/real_time (Items Per Second, more is better): A: 2.46619; B: 2.50856; 3: 2.26083

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.
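
To reproduce just this suite locally, the corresponding Phoronix Test Suite profile can be run on its own; a minimal sketch (pts/onednn is assumed here to be the OpenBenchmarking.org profile name behind these runs):

    phoronix-test-suite install pts/onednn     # fetches and builds oneDNN together with its benchdnn harness
    phoronix-test-suite benchmark pts/onednn   # prompts for harness, data type, and engine, then runs benchdnn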

oneDNN 2.6 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, fewer is better): A: 11.04; B: 10.65; 3: 10.28

perf-bench

perf-bench - Benchmark: Syscall Basic (ops/sec, more is better): A: 20487873; B: 21592179; 3: 20407433

AOM AV1

AOM AV1 3.3 - Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4K (Frames Per Second, more is better): A: 0.19; B: 0.18; 3: 0.18

perf-bench

perf-bench - Benchmark: Memset 1MB (GB/sec, more is better): A: 73.52; B: 72.61; 3: 69.68

perf-bench - Benchmark: Futex Lock-Pi (ops/sec, more is better): A: 482; B: 461; 3: 461

AOM AV1

AOM AV1 3.3 - Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K (Frames Per Second, more is better): A: 49.39; B: 51.51; 3: 50.85

oneDNN

oneDNN 2.6 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): A: 2653.01; B: 2596.92; 3: 2556.63

oneDNN 2.6 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, fewer is better): A: 2677.16; B: 2638.95; 3: 2581.96

oneDNN 2.6 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): A: 0.679415; B: 0.680813; 3: 0.701676

AOM AV1

AOM AV1 3.3 - Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K (Frames Per Second, more is better): A: 41.47; B: 40.21; 3: 40.53

oneDNN

oneDNN 2.6 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, fewer is better): A: 4.17083; B: 4.29655; 3: 4.21857

perf-bench

perf-bench - Benchmark: Memcpy 1MB (GB/sec, more is better): A: 14.47; B: 14.44; 3: 14.87

oneDNN

oneDNN 2.6 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): A: 2600.06; B: 2610.48; 3: 2540.71

AOM AV1

AOM AV1 3.3 - Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, more is better): A: 0.45; B: 0.45; 3: 0.44

AOM AV1 3.3 - Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p (Frames Per Second, more is better): A: 95.77; B: 94.49; 3: 96.49

AOM AV1 3.3 - Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K (Frames Per Second, more is better): A: 14.47; B: 14.35; 3: 14.64

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.
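
The same kind of encode can be run with the standalone avifenc tool from libavif; a minimal sketch, where input.jpg and output.avif are placeholder file names (-s selects the encoder speed, --lossless enables lossless encoding):

    avifenc -s 6 input.jpg output.avif              # Encoder Speed: 6
    avifenc -s 6 --lossless input.jpg output.avif   # Encoder Speed: 6, Lossless
    avifenc -s 0 input.jpg output.avif              # Encoder Speed: 0 (slowest, highest effort)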

libavif avifenc 0.10 - Encoder Speed: 6, Lossless (Seconds, fewer is better): A: 9.998; B: 9.814; 3: 9.899

AOM AV1

AOM AV1 3.3 - Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p (Frames Per Second, more is better): A: 115.90; B: 118.04; 3: 117.74

AOM AV1 3.3 - Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p (Frames Per Second, more is better): A: 8.52; B: 8.37; 3: 8.45

AOM AV1 3.3 - Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p (Frames Per Second, more is better): A: 124.99; B: 123.89; 3: 122.97

libavif avifenc

libavif avifenc 0.10 - Encoder Speed: 6 (Seconds, fewer is better): A: 6.459; B: 6.555; 3: 6.565

Parallel BZIP2 Compression

This test measures the time needed to compress a file (FreeBSD-13.0-RELEASE-amd64-memstick.img) using Parallel BZIP2 compression. Learn more via the OpenBenchmarking.org test page.
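
A roughly equivalent standalone invocation, assuming the pbzip2 binary and the same FreeBSD memstick image (the thread count is a placeholder matching this machine's 32 hardware threads):

    pbzip2 -p32 -k FreeBSD-13.0-RELEASE-amd64-memstick.img   # -p32: compress with 32 threads, -k: keep the input file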

Parallel BZIP2 Compression 1.1.13 - FreeBSD-13.0-RELEASE-amd64-memstick.img Compression (Seconds, fewer is better): A: 4.342; B: 4.400; 3: 4.348

libavif avifenc

libavif avifenc 0.10 - Encoder Speed: 10, Lossless (Seconds, fewer is better): A: 5.622; B: 5.666; 3: 5.596

oneDNN

oneDNN 2.6 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, fewer is better): A: 5129.54; B: 5136.73; 3: 5075.49

oneDNN 2.6 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): A: 23.24; B: 23.04; 3: 22.99

oneDNN 2.6 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): A: 5135.11; B: 5149.07; 3: 5093.46

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.
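
These workloads correspond to benchmark modes of RocksDB's own db_bench utility; a minimal sketch of comparable standalone runs (the exact key counts and thread settings used by the test profile are not reproduced here):

    db_bench --benchmarks=readwhilewriting        # Test: Read While Writing
    db_bench --benchmarks=readrandom              # Test: Random Read
    db_bench --benchmarks=readrandomwriterandom   # Test: Read Random Write Random
    db_bench --benchmarks=updaterandom            # Test: Update Random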

Facebook RocksDB 7.0.1 - Test: Read While Writing (Op/s, more is better): A: 3222053; B: 3254000; 3: 3256160

oneDNN

oneDNN 2.6 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): A: 2.54310; B: 2.55214; 3: 2.56316

oneDNN 2.6 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms, fewer is better): A: 0.789443; B: 0.792154; 3: 0.795531

OSPray

OSPray 2.9 - Benchmark: particle_volume/ao/real_time (Items Per Second, more is better): A: 23.47; B: 23.59; 3: 23.64

OSPray 2.9 - Benchmark: particle_volume/scivis/real_time (Items Per Second, more is better): A: 22.41; B: 22.52; 3: 22.58

Facebook RocksDB

Facebook RocksDB 7.0.1 - Test: Random Read (Op/s, more is better): A: 81933471; B: 81329052; 3: 81500459

libavif avifenc

libavif avifenc 0.10 - Encoder Speed: 2 (Seconds, fewer is better): A: 56.07; B: 56.38; 3: 56.02

oneDNN

oneDNN 2.6 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, fewer is better): A: 4.37595; B: 4.40032; 3: 4.37283

Facebook RocksDB

Facebook RocksDB 7.0.1 - Test: Read Random Write Random (Op/s, more is better): A: 2368822; B: 2383189; 3: 2380613

oneDNN

oneDNN 2.6 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): A: 5117.44; B: 5119.53; 3: 5091.02

AOM AV1

AOM AV1 3.3 - Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, more is better): A: 28.26; B: 28.26; 3: 28.11

Timed MPlayer Compilation

This test times how long it takes to build the MPlayer open-source media player program. Learn more via the OpenBenchmarking.org test page.
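
In essence the profile times a parallel build from the MPlayer source tree; a minimal sketch of what is being measured (the configure options used by the actual test profile are not shown here):

    ./configure                  # run inside the extracted MPlayer source tree
    time make -j$(nproc)         # timed parallel build across all 32 hardware threads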

Timed MPlayer Compilation 1.5 - Time To Compile (Seconds, fewer is better): A: 21.95; B: 22.07; 3: 21.97

perf-bench

perf-bench - Benchmark: Futex Hash (ops/sec, more is better): A: 4799340; B: 4819775; 3: 4823523

OSPray Studio

Intel OSPray Studio is an open-source, interactive visualization and ray-tracing software package. OSPray Studio makes use of Intel OSPray, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.
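
To rerun just these renders, the matching Phoronix Test Suite profile can be invoked directly; a minimal sketch (pts/ospray-studio is assumed to be the profile name behind these results):

    phoronix-test-suite benchmark pts/ospray-studio   # prompts for camera, resolution, samples per pixel, and renderer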

OSPray Studio 0.10 - Camera: 2 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, fewer is better): A: 66843; B: 67035; 3: 66701

AOM AV1

AOM AV1 3.3 - Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K (Frames Per Second, more is better): A: 6.03; B: 6.00; 3: 6.00

oneDNN

oneDNN 2.6 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, fewer is better): A: 20.62; B: 20.58; 3: 20.53

libgav1

Libgav1 is an AV1 decoder developed by Google for AV1 profile 0/1 compliance. Learn more via the OpenBenchmarking.org test page.
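
These decode runs can likewise be repeated on their own through the Phoronix Test Suite; a minimal sketch (pts/libgav1 is assumed to be the profile name behind these results):

    phoronix-test-suite benchmark pts/libgav1   # prompts for the video input (Chimera / Summer Nature clips) and runs the libgav1 decoder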

libgav1 0.17 - Video Input: Chimera 1080p (FPS, more is better): A: 146.78; B: 147.37; 3: 146.99

OSPray Studio

OSPray Studio 0.10 - Camera: 2 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer (ms, fewer is better): A: 30384; B: 30283; 3: 30296

OSPray

OSPray 2.9 - Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time (Items Per Second, more is better): A: 4.06373; B: 4.06095; 3: 4.05074

OSPray Studio

OSPray Studio 0.10 - Camera: 3 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer (ms, fewer is better): A: 2216; B: 2219; 3: 2223

oneDNN

oneDNN 2.6 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): A: 0.790537; B: 0.788060; 3: 0.789330

OSPray

OSPray 2.9 - Benchmark: particle_volume/pathtracer/real_time (Items Per Second, more is better): A: 241.19; B: 241.55; 3: 241.93

OSPray Studio

OSPray Studio 0.10 - Camera: 3 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer (ms, fewer is better): A: 35312; B: 35299; 3: 35404

oneDNN

oneDNN 2.6 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms, fewer is better): A: 4.66061; B: 4.66399; 3: 4.65093

libavif avifenc

libavif avifenc 0.10 - Encoder Speed: 0 (Seconds, fewer is better): A: 112.11; B: 112.41; 3: 112.24

libgav1

libgav1 0.17 - Video Input: Chimera 1080p 10-bit (FPS, more is better): A: 51.81; B: 51.69; 3: 51.75

OSPray Studio

OSPray Studio 0.10 - Camera: 1 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer (ms, fewer is better): A: 29566; B: 29498; 3: 29541

libgav1

libgav1 0.17 - Video Input: Summer Nature 1080p (FPS, more is better): A: 198.41; B: 198.03; 3: 198.48

Facebook RocksDB

Facebook RocksDB 7.0.1 - Test: Update Random (Op/s, more is better): A: 654775; B: 656248; 3: 654806

OSPray Studio

OSPray Studio 0.10 - Camera: 1 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer (ms, fewer is better): A: 1854; B: 1852; 3: 1850

AOM AV1

AOM AV1 3.3 - Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, more is better): A: 10.28; B: 10.29; 3: 10.27

AOM AV1 3.3 - Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K (Frames Per Second, more is better): A: 10.92; B: 10.94; 3: 10.94

oneDNN

oneDNN 2.6 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): A: 1.29867; B: 1.30043; 3: 1.30085

perf-bench

perf-bench - Benchmark: Sched Pipe (ops/sec, more is better): A: 403883; B: 403760; 3: 403227

OSPray Studio

OSPray Studio 0.10 - Camera: 2 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer (ms, fewer is better): A: 1904; B: 1906; 3: 1907

libgav1

libgav1 0.17 - Video Input: Summer Nature 4K (FPS, more is better): A: 63.77; B: 63.67; 3: 63.71

OSPray Studio

OSPray Studio 0.10 - Camera: 1 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, fewer is better): A: 65388; B: 65291; 3: 65296

OSPray Studio 0.10 - Camera: 3 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, fewer is better): A: 76950; B: 76891; 3: 76892

Java JMH

This very basic test profile runs the stock benchmark of the Java JMH benchmark via Maven. Learn more via the OpenBenchmarking.org test page.
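
The description suggests the stock JMH sample benchmark built via Maven; a minimal sketch of that standard JMH flow (the archetype group/artifact IDs are JMH's documented ones, while the sample project name is a placeholder):

    mvn archetype:generate -DinteractiveMode=false \
        -DarchetypeGroupId=org.openjdk.jmh -DarchetypeArtifactId=jmh-java-benchmark-archetype \
        -DgroupId=org.sample -DartifactId=jmh-sample -Dversion=1.0
    cd jmh-sample && mvn package        # builds the self-contained benchmarks.jar
    java -jar target/benchmarks.jar     # runs the generated JMH throughput benchmark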

Java JMH - Throughput (Ops/s, more is better): A: 32629111123.55; B: 32638201689.92; 3: 32625899282.11

oneDNN

oneDNN 2.6 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): A: 1.84739; B: 1.84743; 3: 1.84706

The remaining oneDNN 2.6 bf16bf16bf16 configurations did not produce a result on any of the three runs (A, B, 3):

Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU
Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU
Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU
Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU
Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU
Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU

AOM AV1

AOM AV1 3.3 - Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K (Frames Per Second, more is better): A: 49.58; B: 51.47; 3: 52.26

70 Results Shown

perf-bench
OSPray:
  gravity_spheres_volume/dim_512/ao/real_time
  gravity_spheres_volume/dim_512/scivis/real_time
oneDNN
perf-bench
AOM AV1
perf-bench:
  Memset 1MB
  Futex Lock-Pi
AOM AV1
oneDNN:
  Recurrent Neural Network Inference - bf16bf16bf16 - CPU
  Recurrent Neural Network Inference - f32 - CPU
  Matrix Multiply Batch Shapes Transformer - u8s8f32 - CPU
AOM AV1
oneDNN
perf-bench
oneDNN
AOM AV1:
  Speed 0 Two-Pass - Bosphorus 1080p
  Speed 8 Realtime - Bosphorus 1080p
  Speed 6 Realtime - Bosphorus 4K
libavif avifenc
AOM AV1:
  Speed 9 Realtime - Bosphorus 1080p
  Speed 6 Realtime - Bosphorus 1080p
  Speed 10 Realtime - Bosphorus 1080p
libavif avifenc
Parallel BZIP2 Compression
libavif avifenc
oneDNN:
  Recurrent Neural Network Training - f32 - CPU
  Convolution Batch Shapes Auto - u8s8f32 - CPU
  Recurrent Neural Network Training - bf16bf16bf16 - CPU
Facebook RocksDB
oneDNN:
  Deconvolution Batch shapes_3d - u8s8f32 - CPU
  Matrix Multiply Batch Shapes Transformer - f32 - CPU
OSPray:
  particle_volume/ao/real_time
  particle_volume/scivis/real_time
Facebook RocksDB
libavif avifenc
oneDNN
Facebook RocksDB
oneDNN
AOM AV1
Timed MPlayer Compilation
perf-bench
OSPray Studio
AOM AV1
oneDNN
libgav1
OSPray Studio
OSPray
OSPray Studio
oneDNN
OSPray
OSPray Studio
oneDNN
libavif avifenc
libgav1
OSPray Studio
libgav1
Facebook RocksDB
OSPray Studio
AOM AV1:
  Speed 4 Two-Pass - Bosphorus 1080p
  Speed 6 Two-Pass - Bosphorus 4K
oneDNN
perf-bench
OSPray Studio
libgav1
OSPray Studio:
  1 - 1080p - 32 - Path Tracer
  3 - 1080p - 32 - Path Tracer
Java JMH
oneDNN
AOM AV1