june

AMD Ryzen 9 3900XT 12-Core testing with an MSI MEG X570 GODLIKE (MS-7C34) v1.0 (1.B3 BIOS) and AMD Radeon RX 56/64 8GB on Ubuntu 22.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2206054-PTS-JUNE759444
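
As a quick usage sketch (assuming only that the Phoronix Test Suite is already installed; nothing beyond what the page states), that command is run from a terminal as:

    phoronix-test-suite benchmark 2206054-PTS-JUNE759444

The suite then downloads the referenced test profiles, runs them locally, and lets you save your own numbers alongside the A/B/C/D runs in this file.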

Tests in this result file by category:

AV1: 2 Tests
C/C++ Compiler Tests: 5 Tests
CPU Massive: 8 Tests
Creator Workloads: 8 Tests
Encoding: 5 Tests
HPC - High Performance Computing: 4 Tests
Imaging: 2 Tests
Java: 2 Tests
Common Kernel Benchmarks: 2 Tests
Machine Learning: 3 Tests
Multi-Core: 7 Tests
Server: 2 Tests
Server CPU Tests: 8 Tests
Video Encoding: 5 Tests


Run Management

Result Identifier    Date Run        Test Duration
A                    June 04 2022    2 Hours, 46 Minutes
B                    June 04 2022    2 Hours, 27 Minutes
C                    June 05 2022    2 Hours, 27 Minutes
D                    June 05 2022    2 Hours, 27 Minutes
Average test duration across the four runs: 2 Hours, 32 Minutes



Processor: AMD Ryzen 9 3900XT 12-Core @ 3.80GHz (12 Cores / 24 Threads)
Motherboard: MSI MEG X570 GODLIKE (MS-7C34) v1.0 (1.B3 BIOS)
Chipset: AMD Starship/Matisse
Memory: 16GB
Disk: 500GB Seagate FireCuda 520 SSD ZP500GM30002
Graphics: AMD Radeon RX 56/64 8GB (1630/945MHz)
Audio: AMD Vega 10 HDMI Audio
Monitor: ASUS MG28U
Network: Realtek Device 2600 + Realtek Killer E3000 2.5GbE + Intel Wi-Fi 6 AX200
OS: Ubuntu 22.04
Kernel: 5.15.0-22-generic (x86_64)
Desktop: GNOME Shell 41.3
Display Server: X Server + Wayland
OpenGL: 4.6 Mesa 21.3.5 (LLVM 12.0.1) / 4.6 Mesa 22.0.1 (LLVM 13.0.1 DRM 3.42) (differs between runs)
Vulkan: 1.2.195 / 1.3.204 (differs between runs)
Compiler: GCC 11.2.0
File-System: ext4
Screen Resolution: 3840x2160

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: A: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-XWYfV6/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-XWYfV6/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v. B, C, and D report the identical configuration apart from the GCC build directory (/build/gcc-11-gBFGDP/ instead of /build/gcc-11-XWYfV6/) in the --enable-offload-targets paths.
Processor Details: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0x8701021
Graphics Details: BAR1 / Visible vRAM Size: 256 MB - vBIOS Version: 113-D0500100-102
Java Details: OpenJDK Runtime Environment (build 11.0.15+10-Ubuntu-0ubuntu0.22.04.1)
Python Details: A: Python 3.10.2 - B: Python 3.10.4 - C: Python 3.10.4 - D: Python 3.10.4
Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional STIBP: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Logarithmic Result Overview (Phoronix Test Suite) - test suites covered: oneDNN, TensorFlow Lite, GROMACS, ONNX Runtime, perf-bench, SVT-HEVC, InfluxDB, SVT-AV1, Glibc Benchmarks, SVT-VP9, libavif avifenc, Stress-NG, Java JMH, Etcpak, Nettle, WebP2 Image Encode, Renaissance, simdjson, x264, GravityMark.

(Condensed per-test result table for runs A/B/C/D; the individual results are charted in the sections that follow.)

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as ultimately the successor to WebP. WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading support compared to WebP. Learn more via the OpenBenchmarking.org test page.

WebP2 Image Encode 20220422 - Encode Settings: Quality 100, Lossless Compression - Seconds, Fewer Is Better - A: 742.93, B: 721.68, C: 722.89, D: 721.40 [(CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl]

Java JMH

This very basic test profile runs the stock Java JMH benchmark via Maven. Learn more via the OpenBenchmarking.org test page.

Java JMH - Throughput - Ops/s, More Is Better - A: 23214280399.04, B: 24255951266.37, C: 24266478961.70, D: 24234601965.18

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as ultimately the successor to WebP. WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading support compared to WebP. Learn more via the OpenBenchmarking.org test page.

WebP2 Image Encode 20220422 - Encode Settings: Quality 95, Compression Effort 7 - Seconds, Fewer Is Better - A: 323.97, B: 309.66, C: 308.71, D: 310.50 [(CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl]

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: ALS Movie Lens - ms, Fewer Is Better - A: 12757.9 (min 12757.86, max 14048.24), B: 12616.7 (min 12616.69, max 13750.37), C: 12677.3 (min 12677.26, max 13923.11), D: 12638.5 (max 13822.5)

Renaissance 0.14 - Test: Akka Unbalanced Cobwebbed Tree - ms, Fewer Is Better - A: 12754.4 (min 10167.69, max 12754.41), B: 12943.4 (min 10264.93, max 12943.41), C: 12759.6 (min 10128.47), D: 12947.6 (min 10307.6, max 12947.63)

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 1 - Input: Bosphorus 4K - Frames Per Second, More Is Better - A: 2.82, B: 2.95, C: 2.96, D: 2.95 [(CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt]

GravityMark

GravityMark 1.53 - Frames Per Second, More Is Better:
Resolution: 1920 x 1080 - Renderer: OpenGL - A: 107.2, B: 106.5, C: 106.5, D: 106.3
Resolution: 1920 x 1080 - Renderer: Vulkan - A: 105.3, B: 105.3, C: 105.6, D: 105.5
Resolution: 2560 x 1440 - Renderer: OpenGL - A: 91.1, B: 90.4, C: 91.7, D: 91.2
Resolution: 3840 x 2160 - Renderer: OpenGL - A: 65.7, B: 65.3, C: 65.2, D: 65.7
Resolution: 2560 x 1440 - Renderer: Vulkan - A: 90.6, B: 90.4, C: 90.5, D: 90.7
Resolution: 3840 x 2160 - Renderer: Vulkan - A: 65.0, B: 64.5, C: 64.4, D: 65.0

GROMACS

This is a test of the GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package with the water_GMX50 data. This test profile allows selecting between CPU and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.

GROMACS 2022.1 - Implementation: MPI CPU - Input: water_GMX50_bare - Ns Per Day, More Is Better - A: 1.047, B: 1.138, C: 1.140, D: 1.155 [(CXX) g++ options: -O3]

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as ultimately the successor to WebP. WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading support compared to WebP. Learn more via the OpenBenchmarking.org test page.

WebP2 Image Encode 20220422 - Encode Settings: Quality 75, Compression Effort 7 - Seconds, Fewer Is Better - A: 151.78, B: 146.32, C: 148.84, D: 147.24 [(CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl]

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Genetic Algorithm Using Jenetics + Futures - ms, Fewer Is Better - A: 2666.5 (min 2472.42, max 2807.84), B: 2888.2 (min 2858.54, max 2927.17), C: 2871.2 (min 2838.52, max 2898.82), D: 2903.9 (min 2862.13, max 2943.57)

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.10 - Encoder Speed: 0 - Seconds, Fewer Is Better - A: 134.16, B: 129.53, C: 128.78, D: 128.40 [(CXX) g++ options: -O3 -fPIC -lm]

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU - ms, Fewer Is Better - A: 33601.70, B: 4101.51, C: 4092.33, D: 4110.74 (MIN: 21286.5) [(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl]

oneDNN 2.6 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU - ms, Fewer Is Better - A: 35077.90, B: 4105.68, C: 4117.24, D: 4100.65 (MIN: 21314.5) [(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl]

InfluxDB

This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.

InfluxDB 1.8.2 - Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 - val/sec, More Is Better - A: 909621.0, B: 852735.1, C: 847482.0, D: 816036.0

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Inferences Per Minute, More Is Better:
Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel - A: 60, B: 68, C: 68, D: 68
Model: fcn-resnet101-11 - Device: CPU - Executor: Standard - A: 59, B: 78, C: 60, D: 88
Model: GPT-2 - Device: CPU - Executor: Parallel - A: 4580, B: 4858, C: 4859, D: 4795
Model: bertsquad-12 - Device: CPU - Executor: Parallel - A: 432, B: 449, C: 449, D: 451
Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel - A: 1117, B: 1165, C: 1163, D: 1165
Model: yolov4 - Device: CPU - Executor: Parallel - A: 244, B: 256, C: 257, D: 257
All ONNX Runtime results: (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU - ms, Fewer Is Better - A: 32226.30, B: 4089.48, C: 4133.63, D: 4126.65 (MIN: 17963.3) [(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl]

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Inferences Per Minute, More Is Better:
Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard - A: 1119, B: 1660, C: 1183, D: 1154
Model: GPT-2 - Device: CPU - Executor: Standard - A: 5775, B: 6058, C: 6445, D: 5962
Model: bertsquad-12 - Device: CPU - Executor: Standard - A: 547, B: 571, C: 567, D: 570
Model: yolov4 - Device: CPU - Executor: Standard - A: 365, B: 323, C: 337, D: 453
Model: super-resolution-10 - Device: CPU - Executor: Parallel - A: 4541, B: 4619, C: 4586, D: 4730
Model: super-resolution-10 - Device: CPU - Executor: Standard - A: 4435, B: 4324, C: 4511, D: 4423
All ONNX Runtime results: (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU - ms, Fewer Is Better - A: 30995.10, B: 2496.51, C: 2485.86, D: 2495.97 (MIN: 14054.2) [(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl]

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Savina Reactors.IO - ms, Fewer Is Better - A: 8321.1 (max 12754.85), B: 7969.9 (min 7969.86, max 11204.55), C: 7858.7 (max 11840.34), D: 8459.7 (max 12201.65)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU - ms, Fewer Is Better - A: 28223.80, B: 2483.66, C: 2492.88, D: 2473.22 (MIN: 12718) [(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl]

oneDNN 2.6 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU - ms, Fewer Is Better - A: 25968.50, B: 2485.34, C: 2458.19, D: 2458.93 (MIN: 12631) [(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl]

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Apache Spark PageRank - ms, Fewer Is Better - A: 3174.2 (min 2810.59, max 3285.06), B: 3153.2 (min 2764.94, max 3198.77), C: 3116.0 (min 2658.74, max 3170.73), D: 3117.0 (min 2793.69, max 3226.06)

Renaissance 0.14 - Test: Apache Spark ALS - ms, Fewer Is Better - A: 3211.3 (min 3059.26, max 3335.58), B: 3133.9 (min 3035.49, max 3252.39), C: 3137.4 (min 3005.95, max 3263.19), D: 3162.4 (min 3046.43, max 3270.26)

InfluxDB

This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.

InfluxDB 1.8.2 - Concurrent Streams: 64 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 - val/sec, More Is Better - A: 1236274.6, B: 1240324.6, C: 1237035.8, D: 1195550.6

InfluxDB 1.8.2 - Concurrent Streams: 1024 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 - val/sec, More Is Better - A: 1293571.6, B: 1291849.9, C: 1291195.0, D: 1240380.9

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2022-05-18 - Microseconds, Fewer Is Better:
Model: Inception V4 - A: 2445920.0, B: 42001.0, C: 42840.4, D: 42166.7
Model: Inception ResNet V2 - A: 2330410.0, B: 42072.8, C: 42165.8, D: 41956.7
Model: NASNet Mobile - A: 2271630.0, B: 15355.6, C: 15359.5, D: 15326.0

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: In-Memory Database Shootout - ms, Fewer Is Better - A: 3614.9 (min 3226.39, max 3786.41), B: 3646.0 (min 3383.09, max 4041.23), C: 3824.8 (min 3519.07, max 4246.27), D: 3732.7 (min 3478.56, max 4080.38)

Renaissance 0.14 - Test: Apache Spark Bayes - ms, Fewer Is Better - A: 2126.2 (min 1633.81, max 2126.21), B: 2104.4 (min 1617.62, max 2358.18), C: 2149.7 (min 1657.38), D: 2113.8 (min 1613.04)

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 2.0 - Throughput Test: PartialTweets - GB/s, More Is Better - A: 3.83, B: 3.99, C: 4.00, D: 4.00 [(CXX) g++ options: -O3]

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.0 - Encoder Mode: Preset 4 - Input: Bosphorus 4K - Frames Per Second, More Is Better - A: 2.467, B: 2.563, C: 2.556, D: 2.561 [(CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq -pie]

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.10 - Encoder Speed: 2 - Seconds, Fewer Is Better - A: 66.03, B: 62.50, C: 63.47, D: 62.15 [(CXX) g++ options: -O3 -fPIC -lm]

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 2.0 - Throughput Test: DistinctUserID - GB/s, More Is Better - A: 4.59, B: 4.72, C: 4.55, D: 4.72 [(CXX) g++ options: -O3]

simdjson 2.0 - Throughput Test: TopTweet - GB/s, More Is Better - A: 4.68, B: 4.61, C: 4.61, D: 4.66 [(CXX) g++ options: -O3]

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2022-05-18 - Microseconds, Fewer Is Better:
Model: SqueezeNet - A: 206917.00, B: 2898.87, C: 2894.31, D: 2888.78
Model: Mobilenet Float - A: 240371.00, B: 2411.06, C: 2435.60, D: 2401.33
Model: Mobilenet Quant - A: 44820.6, B: 3096.7, C: 3146.6, D: 3113.9

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Scala Dotty - ms, Fewer Is Better - A: 778.7 (min 646.98, max 1439.03), B: 833.9 (min 636.78, max 1324.63), C: 729.0 (min 618.39, max 1281.64), D: 818.3 (min 618.81, max 1331.73)

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet", focused on providing open-source, very fast ETC and S3 texture compression support. The test profile uses an 8K x 8K game texture as a sample input. Learn more via the OpenBenchmarking.org test page.

Etcpak 1.0 - Benchmark: Single-Threaded - Configuration: DXT1 - Mpx/s, More Is Better - A: 236.90, B: 237.86, C: 249.28, D: 238.63 [(CXX) g++ options: -O3 -march=native -std=c++11 -lpthread]

Etcpak 1.0 - Benchmark: Single-Threaded - Configuration: ETC2 - Mpx/s, More Is Better - A: 232.17, B: 246.14, C: 246.81, D: 247.11 [(CXX) g++ options: -O3 -march=native -std=c++11 -lpthread]

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 2.0 - Throughput Test: Kostya - GB/s, More Is Better - A: 2.93, B: 2.92, C: 2.89, D: 2.98 [(CXX) g++ options: -O3]

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 1 - Input: Bosphorus 1080p - Frames Per Second, More Is Better - A: 11.00, B: 11.63, C: 11.63, D: 11.61 [(CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt]

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 2.0 - Throughput Test: LargeRandom - GB/s, More Is Better - A: 1.01, B: 1.04, C: 1.04, D: 1.05 [(CXX) g++ options: -O3]

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Finagle HTTP Requests - ms, Fewer Is Better - A: 3814.1 (min 3534.91, max 3897.61), B: 3571.9 (min 3345.25, max 3795.91), C: 3529.2 (min 3302.49, max 3811.65), D: 3587.1 (min 3364.31, max 3763.07)

Renaissance 0.14 - Test: Random Forest - ms, Fewer Is Better - A: 743.4 (min 665.12, max 887.13), B: 721.2 (min 609.48, max 834.11), C: 730.9 (min 607.63, max 901.62), D: 751.7 (min 622.63, max 907.43)

perf-bench

This test profile is used for running Linux perf-bench, the benchmark support within the Linux kernel's perf tool. Learn more via the OpenBenchmarking.org test page.
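
As a rough illustration only (these subcommands come from the perf tool itself, not from this result file, and naming can vary slightly between perf versions), the perf-bench cases in this comparison map to invocations along the lines of:

    perf bench epoll wait
    perf bench futex hash
    perf bench sched pipe
    perf bench mem memcpy

with the test profile reporting the throughput figure (ops/sec or GB/sec) that perf prints.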

perf-bench - Benchmark: Epoll Wait - ops/sec, More Is Better - A: 42469, B: 45429, C: 45264, D: 45400 [(CC) gcc options: -pthread -shared -Xlinker -O6 -ggdb3 -funwind-tables -std=gnu99 -lnuma]

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Bogo Ops/s, More Is Better:
Test: NUMA - A: 211.02, B: 328.20, C: 326.62, D: 331.54
Test: Malloc - A: 16082432.37, B: 16996416.51, C: 16871709.82, D: 16827898.02
Test: Atomic - A: 576128.96, B: 577614.81, C: 580078.48, D: 576825.66
Test: Futex - A: 2786611.95, B: 2871422.77, C: 2787436.17, D: 3025044.82
Test: MMAP - A: 296.27, B: 300.58, C: 300.57, D: 298.14
Test: IO_uring - A: 35619.31, B: 34951.82, C: 34815.95, D: 35188.01
Test: Memory Copying - A: 4844.13, B: 5038.57, C: 5039.59, D: 5048.06
Test: MEMFD - A: 850.42, B: 871.88, C: 869.61, D: 867.30
Test: Glibc Qsort Data Sorting - A: 188.30, B: 196.50, C: 197.10, D: 195.83
Test: CPU Cache - A: 156.96, B: 152.09, C: 158.82, D: 156.92
Test: Matrix Math - A: 58365.05, B: 60985.00, C: 61156.12, D: 61125.26
Test: System V Message Passing - A: 7796050.70, B: 8003229.07, C: 7995516.42, D: 7971710.38
Test: Glibc C String Functions - A: 1991624.53, B: 2026796.06, C: 2080090.43, D: 2087104.71
Test: Context Switching - A: 4520622.35, B: 4833096.84, C: 4842631.56, D: 4808874.54
Test: Socket Activity - A: 9558.26, B: 9191.93, C: 9185.34, D: 9193.24
Test: Vector Math - A: 86846.17, B: 90749.35, C: 90779.38, D: 90727.65
Test: Semaphores - A: 2483545.87, B: 2481083.26, C: 2485080.45, D: 2481553.24
Test: CPU Stress - A: 29249.94, B: 30044.54, C: 29333.09, D: 29002.22
Test: SENDFILE - A: 219949.89, B: 216302.59, C: 246261.31, D: 192604.94
Test: Forking - A: 52768.88, B: 54543.49, C: 54932.41, D: 54797.96
Test: Crypto - A: 21561.58, B: 22488.17, C: 22493.23, D: 22466.90
All Stress-NG results: (CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

perf-bench

This test profile is used for running Linux perf-bench, the benchmark support within the Linux kernel's perf tool. Learn more via the OpenBenchmarking.org test page.

perf-bench - Benchmark: Futex Lock-Pi - ops/sec, More Is Better - A: 542, B: 669, C: 663, D: 649 [(CC) gcc options: -pthread -shared -Xlinker -O6 -ggdb3 -funwind-tables -std=gnu99 -lnuma]

perf-bench - Benchmark: Futex Hash - ops/sec, More Is Better - A: 4693851, B: 4904887, C: 4910773, D: 4910651 [(CC) gcc options: -pthread -shared -Xlinker -O6 -ggdb3 -funwind-tables -std=gnu99 -lnuma]

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.0 - Encoder Mode: Preset 4 - Input: Bosphorus 1080p - Frames Per Second, More Is Better - A: 6.229, B: 6.440, C: 6.435, D: 6.428 [(CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq -pie]

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU - ms, Fewer Is Better - A: 92.1343, B: 5.65439, C: 8.23900, D: 7.65826 (MIN: 5.12) [(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl]

oneDNN 2.6 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU - ms, Fewer Is Better - A: 47.9482, B: 2.44129, C: 2.44352, D: 2.43000 (MIN: 12.07) [(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl]

perf-bench

This test profile is used for running Linux perf-bench, the benchmark support within the Linux kernel's perf tool. Learn more via the OpenBenchmarking.org test page.

perf-bench - Benchmark: Memcpy 1MB - GB/sec, More Is Better - A: 14.62, B: 16.24, C: 14.77, D: 14.85 [(CC) gcc options: -pthread -shared -Xlinker -O6 -ggdb3 -funwind-tables -std=gnu99 -lnuma]

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.0 - Encoder Mode: Preset 8 - Input: Bosphorus 4K - Frames Per Second, More Is Better - A: 30.78, B: 36.48, C: 36.35, D: 36.59 [(CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq -pie]

x264

This is a multi-threaded test of the x264 video encoder run on the CPU with a choice of 1080p or 4K video input. Learn more via the OpenBenchmarking.org test page.

x264 2022-02-22 - Video Input: Bosphorus 4K - Frames Per Second, More Is Better - A: 35.58, B: 35.64, C: 35.36, D: 34.85 [(CC) gcc options: -ldl -m64 -lm -lpthread -O3 -flto]

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU - ms, Fewer Is Better - A: 73.1795, B: 4.70466, C: 4.71020, D: 4.70911 (MIN: 4.61) [(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl]

oneDNN 2.6 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU - ms, Fewer Is Better - A: 59.1302, B: 1.80376, C: 1.79634, D: 1.81319 (MIN: 1.76) [(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl]

perf-bench

This test profile is used for running Linux perf-bench, the benchmark support within the Linux kernel's perf tool. Learn more via the OpenBenchmarking.org test page.

perf-bench - Benchmark: Sched Pipe - ops/sec, More Is Better - A: 326839, B: 340408, C: 351971, D: 344956 [(CC) gcc options: -pthread -shared -Xlinker -O6 -ggdb3 -funwind-tables -std=gnu99 -lnuma]

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Frames Per Second, More Is Better:
Tuning: VMAF Optimized - Input: Bosphorus 4K - A: 44.54, B: 45.21, C: 45.12, D: 45.20
Tuning: PSNR/SSIM Optimized - Input: Bosphorus 4K - A: 43.96, B: 46.88, C: 46.52, D: 46.83
Tuning: Visual Quality Optimized - Input: Bosphorus 4K - A: 45.49, B: 49.51, C: 49.84, D: 49.86
All SVT-VP9 results: (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 7 - Input: Bosphorus 4K - Frames Per Second, More Is Better - A: 42.20, B: 50.39, C: 50.44, D: 50.62 [(CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt]

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU - ms, Fewer Is Better - A: 30.0269, B: 1.24528, C: 1.16856, D: 1.29107 (MIN: 2.16) [(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl]

oneDNN 2.6 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU - ms, Fewer Is Better - A: 36.1728, B: 1.85271, C: 2.25790, D: 1.99646 (MIN: 2.27) [(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl]

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.10 - Encoder Speed: 6, Lossless - Seconds, Fewer Is Better - A: 11.43, B: 10.88, C: 10.76, D: 10.98 [(CXX) g++ options: -O3 -fPIC -lm]

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU - ms, Fewer Is Better - A: 73.36 (min 49.92), B: 11.98 (min 11.88), C: 11.98 (min 11.88), D: 11.38 (min 11.24) [(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl]

oneDNN 2.6 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU - ms, Fewer Is Better - A: 36.7089, B: 0.933945, C: 0.930888, D: 0.924820 (MIN: 0.87) [(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl]

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.0 - Encoder Mode: Preset 10 - Input: Bosphorus 4K - Frames Per Second, More Is Better - A: 72.34, B: 76.45, C: 78.41, D: 79.79 [(CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq -pie]

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 10 - Input: Bosphorus 4K - Frames Per Second, More Is Better - A: 75.56, B: 78.87, C: 79.86, D: 79.60 [(CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt]

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.10 - Encoder Speed: 6 - Seconds, Fewer Is Better - A: 8.089, B: 7.654, C: 7.718, D: 7.619 [(CXX) g++ options: -O3 -fPIC -lm]

perf-bench

This test profile is used for running Linux perf-bench, the benchmark support within the Linux kernel's perf tool. Learn more via the OpenBenchmarking.org test page.

perf-bench - Benchmark: Memset 1MB - GB/sec, More Is Better - A: 69.02, B: 72.01, C: 73.96, D: 72.41 [(CC) gcc options: -pthread -shared -Xlinker -O6 -ggdb3 -funwind-tables -std=gnu99 -lnuma]

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.0 - Encoder Mode: Preset 12 - Input: Bosphorus 4K - Frames Per Second, More Is Better - A: 95.65, B: 98.21, C: 98.22, D: 99.57 [(CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq -pie]

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU - ms, Fewer Is Better - A: 41.19 (min 27.53), B: 24.91 (min 24.56), C: 24.96 (min 24.62), D: 24.85 (min 24.57) [(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl]

oneDNN 2.6 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU - ms, Fewer Is Better - A: 36.96 (min 22.25), B: 22.59 (min 22.13), C: 22.59 (min 22.18), D: 22.50 (min 21.97) [(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl]

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.10 - Encoder Speed: 10, Lossless - Seconds, Fewer Is Better - A: 5.950, B: 5.648, C: 5.601, D: 5.644 [(CXX) g++ options: -O3 -fPIC -lm]

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet", focused on providing open-source, very fast ETC and S3 texture compression support. The test profile uses an 8K x 8K game texture as a sample input. Learn more via the OpenBenchmarking.org test page.

Etcpak 1.0 - Benchmark: Multi-Threaded - Configuration: DXT1 - Mpx/s, More Is Better - A: 3072.05, B: 3156.58, C: 3169.25, D: 3162.38 [(CXX) g++ options: -O3 -march=native -std=c++11 -lpthread]

Etcpak 1.0 - Benchmark: Multi-Threaded - Configuration: ETC2 - Mpx/s, More Is Better - A: 3061.54, B: 3146.37, C: 3144.30, D: 3154.65 [(CXX) g++ options: -O3 -march=native -std=c++11 -lpthread]

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.0 - Encoder Mode: Preset 8 - Input: Bosphorus 1080p (Frames Per Second, More Is Better): A: 116.76, B: 117.45, C: 117.33, D: 119.13 [(CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq -pie]

perf-bench

This test profile runs Linux perf-bench, the benchmarking functionality built into the Linux kernel's perf tool. Learn more via the OpenBenchmarking.org test page.

perf-bench - Benchmark: Syscall Basic (ops/sec, More Is Better): A: 17962305, B: 19410883, C: 17850063, D: 19384879 [(CC) gcc options: -pthread -shared -Xlinker -O6 -ggdb3 -funwind-tables -std=gnu99 -lnuma]

Nettle

GNU Nettle is a low-level cryptographic library used by GnuTLS and other software. Learn more via the OpenBenchmarking.org test page.
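Nettle ships a small nettle-benchmark utility that produces per-algorithm throughput figures like the ones below; passing an algorithm name to restrict the run is an assumption about its optional filter argument:

    # benchmark only the AES-256 routines (filter argument assumed)
    nettle-benchmark aes256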

Nettle 3.8 - Test: aes256 (Mbyte/s, More Is Better): A: 6059.02 (min: 4394.53 / max: 9359.29), B: 6131.07 (min: 4421.4 / max: 9534.58), C: 6541.11 (min: 4692.01 / max: 10192.6), D: 6515.50 (min: 4701.43 / max: 10170.91) [(CC) gcc options: -O2 -ggdb3 -lnettle -lm -lcrypto]

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as ultimately the successor to WebP. Compared to WebP, WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading. Learn more via the OpenBenchmarking.org test page.

WebP2 Image Encode 20220422 - Encode Settings: Quality 100, Compression Effort 5 (Seconds, Fewer Is Better): A: 4.242, B: 4.110, C: 4.140, D: 4.140 [(CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl]

x264

This is a multi-threaded test of the x264 video encoder run on the CPU with a choice of 1080p or 4K video input. Learn more via the OpenBenchmarking.org test page.
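A comparable standalone 1080p encode looks like the sketch below; the input file name is a placeholder and the test profile's exact encoder settings are not reproduced here:

    # discard the bitstream so only encoder throughput is measured
    x264 --preset medium -o /dev/null Bosphorus_1920x1080.y4m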

x264 2022-02-22 - Video Input: Bosphorus 1080p (Frames Per Second, More Is Better): A: 137.76, B: 141.25, C: 140.99, D: 141.00 [(CC) gcc options: -ldl -m64 -lm -lpthread -O3 -flto]

Glibc Benchmarks

The GNU C Library project provides the core libraries for the GNU system and GNU/Linux systems, as well as many other systems that use Linux as the kernel. These libraries provide critical APIs including ISO C11, POSIX.1-2008, BSD, OS-specific APIs and more. This test profile makes use of Glibc's "benchtests" integrated benchmark suite. Learn more via the OpenBenchmarking.org test page.
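The benchtests suite is driven from a configured glibc build tree; a minimal sketch, assuming the build directory has already been configured and built:

    # from the glibc build directory: build and run the microbenchmarks
    # (per-function results such as exp, sin and sqrt end up under benchtests/)
    make bench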

Glibc Benchmarks - Benchmark: exp (ns, Fewer Is Better): A: 16.19, B: 16.44, C: 16.15, D: 16.38 [(CC) gcc options: -pie -nostdlib -nostartfiles -lgcc -lgcc_s]

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based, multi-threaded video encoder for the HEVC / H.265 video format; this test feeds it a sample YUV video file. Learn more via the OpenBenchmarking.org test page.
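The "Tuning" number maps onto the encoder's encMode preset; a sketch of an equivalent standalone run, where the -encMode flag name and the input file are assumptions based on the SVT-HEVC sample application:

    # SvtHevcEncApp is SVT-HEVC's sample encoder; the input clip is a placeholder
    SvtHevcEncApp -i Bosphorus_1920x1080.yuv -w 1920 -h 1080 -encMode 7 -b out.265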

SVT-HEVC 1.5.0 - Tuning: 7 - Input: Bosphorus 1080p (Frames Per Second, More Is Better): A: 151.75, B: 159.49, C: 158.60, D: 158.69 [(CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt]

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based, multi-threaded video encoder for the VP9 video format; this test feeds it a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Tuning: Visual Quality Optimized - Input: Bosphorus 1080p (Frames Per Second, More Is Better): A: 168.00, B: 176.38, C: 176.32, D: 180.36 [(CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm]

SVT-VP9 0.3 - Tuning: VMAF Optimized - Input: Bosphorus 1080p (Frames Per Second, More Is Better): A: 179.51, B: 184.60, C: 183.53, D: 185.11 [(CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm]

SVT-VP9 0.3 - Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080p (Frames Per Second, More Is Better): A: 180.00, B: 187.87, C: 187.19, D: 187.34 [(CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm]

Glibc Benchmarks

The GNU C Library project provides the core libraries for the GNU system and GNU/Linux systems, as well as many other systems that use Linux as the kernel. These libraries provide critical APIs including ISO C11, POSIX.1-2008, BSD, OS-specific APIs and more. This test profile makes use of Glibc's "benchtests" integrated benchmark suite. Learn more via the OpenBenchmarking.org test page.

Glibc Benchmarks - Benchmark: sin (ns, Fewer Is Better): A: 65.33, B: 62.17, C: 64.73, D: 60.51 [(CC) gcc options: -pie -nostdlib -nostartfiles -lgcc -lgcc_s]

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as ultimately the successor to WebP. Compared to WebP, WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading. Learn more via the OpenBenchmarking.org test page.

WebP2 Image Encode 20220422 - Encode Settings: Default (Seconds, Fewer Is Better): A: 2.972, B: 2.886, C: 3.003, D: 2.896 [(CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl]

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, Fewer Is Better): A: 23.21980 (min: 5.62), B: 5.25702 (min: 5.18), C: 5.26468 (min: 5.18), D: 5.27803 (min: 5.19) [(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl]

oneDNN 2.6 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better): A: 12.03100 (min: 3.34), B: 3.40615 (min: 3.27), C: 3.38355 (min: 3.26), D: 3.40609 (min: 3.29) [(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl]

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based, multi-threaded encoder for the AV1 video format; this test feeds it a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.0 - Encoder Mode: Preset 10 - Input: Bosphorus 1080p (Frames Per Second, More Is Better): A: 223.22, B: 232.83, C: 230.17, D: 226.79 [(CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq -pie]

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based, multi-threaded video encoder for the HEVC / H.265 video format; this test feeds it a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 10 - Input: Bosphorus 1080p (Frames Per Second, More Is Better): A: 274.85, B: 290.42, C: 287.22, D: 288.05 [(CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt]

Glibc Benchmarks

The GNU C Library project provides the core libraries for the GNU system and GNU/Linux systems, as well as many other systems that use Linux as the kernel. These libraries provide critical APIs including ISO C11, POSIX.1-2008, BSD, OS-specific APIs and more. This test profile makes use of Glibc's "benchtests" integrated benchmark suite. Learn more via the OpenBenchmarking.org test page.

Glibc Benchmarks - Benchmark: sincos (ns, Fewer Is Better): A: 45.98, B: 41.74, C: 41.74, D: 42.89 [(CC) gcc options: -pie -nostdlib -nostartfiles -lgcc -lgcc_s]

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based, multi-threaded encoder for the AV1 video format; this test feeds it a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.0 - Encoder Mode: Preset 12 - Input: Bosphorus 1080p (Frames Per Second, More Is Better): A: 352.21, B: 350.93, C: 351.99, D: 372.28 [(CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq -pie]

Nettle

GNU Nettle is a low-level cryptographic library used by GnuTLS and other software. Learn more via the OpenBenchmarking.org test page.

Nettle 3.8 - Test: sha512 (Mbyte/s, More Is Better): A: 637.04, B: 658.67, C: 637.71, D: 637.39 [(CC) gcc options: -O2 -ggdb3 -lnettle -lm -lcrypto]

Glibc Benchmarks

The GNU C Library project provides the core libraries for the GNU system and GNU/Linux systems, as well as many other systems that use Linux as the kernel. These libraries provide critical APIs including ISO C11, POSIX.1-2008, BSD, OS-specific APIs and more. This test profile makes use of Glibc's "benchtests" integrated benchmark suite. Learn more via the OpenBenchmarking.org test page.

Glibc Benchmarks - Benchmark: cos (ns, Fewer Is Better): A: 73.92, B: 68.60, C: 70.38, D: 68.63 [(CC) gcc options: -pie -nostdlib -nostartfiles -lgcc -lgcc_s]

Glibc Benchmarks - Benchmark: pthread_once (ns, Fewer Is Better): A: 6.23097, B: 6.05998, C: 6.02658, D: 6.09718 [(CC) gcc options: -pie -nostdlib -nostartfiles -lgcc -lgcc_s]

Glibc Benchmarks - Benchmark: tanh (ns, Fewer Is Better): A: 38.37, B: 35.61, C: 38.06, D: 38.33 [(CC) gcc options: -pie -nostdlib -nostartfiles -lgcc -lgcc_s]

Glibc Benchmarks - Benchmark: sinh (ns, Fewer Is Better): A: 26.98, B: 24.82, C: 26.85, D: 26.98 [(CC) gcc options: -pie -nostdlib -nostartfiles -lgcc -lgcc_s]

Glibc Benchmarks - Benchmark: atanh (ns, Fewer Is Better): A: 38.34, B: 37.46, C: 35.84, D: 35.87 [(CC) gcc options: -pie -nostdlib -nostartfiles -lgcc -lgcc_s]

Glibc Benchmarks - Benchmark: asinh (ns, Fewer Is Better): A: 30.64, B: 31.75, C: 29.44, D: 29.43 [(CC) gcc options: -pie -nostdlib -nostartfiles -lgcc -lgcc_s]

Glibc Benchmarks - Benchmark: ffs (ns, Fewer Is Better): A: 6.09947, B: 5.68026, C: 6.08636, D: 5.69216 [(CC) gcc options: -pie -nostdlib -nostartfiles -lgcc -lgcc_s]

Glibc Benchmarks - Benchmark: modf (ns, Fewer Is Better): A: 7.09614, B: 6.53862, C: 6.71077, D: 6.54065 [(CC) gcc options: -pie -nostdlib -nostartfiles -lgcc -lgcc_s]

Glibc Benchmarks - Benchmark: log2 (ns, Fewer Is Better): A: 21.53, B: 19.46, C: 20.73, D: 19.50 [(CC) gcc options: -pie -nostdlib -nostartfiles -lgcc -lgcc_s]

Glibc Benchmarks - Benchmark: ffsll (ns, Fewer Is Better): A: 7.04916, B: 6.46721, C: 6.93142, D: 6.46463 [(CC) gcc options: -pie -nostdlib -nostartfiles -lgcc -lgcc_s]

Glibc Benchmarks - Benchmark: sqrt (ns, Fewer Is Better): A: 7.86042, B: 7.27802, C: 7.46367, D: 7.28112 [(CC) gcc options: -pie -nostdlib -nostartfiles -lgcc -lgcc_s]

Nettle

GNU Nettle is a low-level cryptographic library used by GnuTLS and other software. Learn more via the OpenBenchmarking.org test page.

Nettle 3.8 - Test: chacha (Mbyte/s, More Is Better): A: 1068.17 (min: 514.62 / max: 3098.54), B: 1185.58 (min: 574.66 / max: 3420.69), C: 1151.97 (min: 558.73 / max: 3316.95), D: 1095.00 (min: 529.55 / max: 3168.56) [(CC) gcc options: -O2 -ggdb3 -lnettle -lm -lcrypto]

Nettle 3.8 - Test: poly1305-aes (Mbyte/s, More Is Better): A: 3216.07, B: 3212.21, C: 3217.70, D: 3234.76 [(CC) gcc options: -O2 -ggdb3 -lnettle -lm -lcrypto]

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU

A: The test run did not produce a result.

B: The test run did not produce a result.

C: The test run did not produce a result.

D: The test run did not produce a result.

Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU

A: The test run did not produce a result.

B: The test run did not produce a result.

C: The test run did not produce a result.

D: The test run did not produce a result.

Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU

A: The test run did not produce a result.

B: The test run did not produce a result.

C: The test run did not produce a result.

D: The test run did not produce a result.

Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU

A: The test run did not produce a result.

B: The test run did not produce a result.

C: The test run did not produce a result.

D: The test run did not produce a result.

Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU

A: The test run did not produce a result.

B: The test run did not produce a result.

C: The test run did not produce a result.

D: The test run did not produce a result.

Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU

A: The test run did not produce a result.

B: The test run did not produce a result.

C: The test run did not produce a result.

D: The test run did not produce a result.
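One plausible reason all of the bf16bf16bf16 harnesses above produced no result on any of the four runs is that this CPU does not advertise the bfloat16 instruction extensions oneDNN can target; a quick, assumption-laden check:

    # Zen 2 parts generally do not expose avx512_bf16 or amx_bf16
    grep -o -m1 -E 'avx512_bf16|amx_bf16' /proc/cpuinfo || echo "no bf16 ISA flags"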

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Test: x86_64 RdRand

A: The test run did not produce a result. E: stress-ng: error: [2367028] No stress workers invoked (one or more were unsupported)

B: The test run did not produce a result. E: stress-ng: error: [4176560] No stress workers invoked (one or more were unsupported)

C: The test run did not produce a result. E: stress-ng: error: [1694758] No stress workers invoked (one or more were unsupported)

D: The test run did not produce a result. E: stress-ng: error: [3404707] No stress workers invoked (one or more were unsupported)
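The "No stress workers invoked" error means stress-ng treated the x86_64 RdRand stressor as unsupported on this setup; a way to double-check the CPU flag and retry the stressor directly (worker count and duration below are arbitrary):

    # does the CPU advertise the rdrand instruction?
    grep -o -m1 rdrand /proc/cpuinfo
    # run the stressor by hand for 30 seconds with 4 workers
    stress-ng --rdrand 4 --timeout 30s --metrics-brief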

145 Results Shown

WebP2 Image Encode
Java JMH
WebP2 Image Encode
Renaissance:
  ALS Movie Lens
  Akka Unbalanced Cobwebbed Tree
SVT-HEVC
GravityMark:
  1920 x 1080 - OpenGL
  1920 x 1080 - Vulkan
  2560 x 1440 - OpenGL
  3840 x 2160 - OpenGL
  2560 x 1440 - Vulkan
  3840 x 2160 - Vulkan
GROMACS
WebP2 Image Encode
Renaissance
libavif avifenc
oneDNN:
  Recurrent Neural Network Training - u8s8f32 - CPU
  Recurrent Neural Network Training - bf16bf16bf16 - CPU
InfluxDB
ONNX Runtime:
  fcn-resnet101-11 - CPU - Parallel
  fcn-resnet101-11 - CPU - Standard
  GPT-2 - CPU - Parallel
  bertsquad-12 - CPU - Parallel
  ArcFace ResNet-100 - CPU - Parallel
  yolov4 - CPU - Parallel
oneDNN
ONNX Runtime:
  ArcFace ResNet-100 - CPU - Standard
  GPT-2 - CPU - Standard
  bertsquad-12 - CPU - Standard
  yolov4 - CPU - Standard
  super-resolution-10 - CPU - Parallel
  super-resolution-10 - CPU - Standard
oneDNN
Renaissance
oneDNN:
  Recurrent Neural Network Inference - bf16bf16bf16 - CPU
  Recurrent Neural Network Inference - f32 - CPU
Renaissance:
  Apache Spark PageRank
  Apache Spark ALS
InfluxDB:
  64 - 10000 - 2,5000,1 - 10000
  1024 - 10000 - 2,5000,1 - 10000
TensorFlow Lite:
  Inception V4
  Inception ResNet V2
  NASNet Mobile
Renaissance:
  In-Memory Database Shootout
  Apache Spark Bayes
simdjson
SVT-AV1
libavif avifenc
simdjson:
  DistinctUserID
  TopTweet
TensorFlow Lite:
  SqueezeNet
  Mobilenet Float
  Mobilenet Quant
Renaissance
Etcpak:
  Single-Threaded - DXT1
  Single-Threaded - ETC2
simdjson
SVT-HEVC
simdjson
Renaissance:
  Finagle HTTP Requests
  Rand Forest
perf-bench
Stress-NG:
  NUMA
  Malloc
  Atomic
  Futex
  MMAP
  IO_uring
  Memory Copying
  MEMFD
  Glibc Qsort Data Sorting
  CPU Cache
  Matrix Math
  System V Message Passing
  Glibc C String Functions
  Context Switching
  Socket Activity
  Vector Math
  Semaphores
  CPU Stress
  SENDFILE
  Forking
  Crypto
perf-bench:
  Futex Lock-Pi
  Futex Hash
SVT-AV1
oneDNN:
  Deconvolution Batch shapes_1d - f32 - CPU
  Deconvolution Batch shapes_1d - u8s8f32 - CPU
perf-bench
SVT-AV1
x264
oneDNN:
  IP Shapes 1D - f32 - CPU
  IP Shapes 1D - u8s8f32 - CPU
perf-bench
SVT-VP9:
  VMAF Optimized - Bosphorus 4K
  PSNR/SSIM Optimized - Bosphorus 4K
  Visual Quality Optimized - Bosphorus 4K
SVT-HEVC
oneDNN:
  Matrix Multiply Batch Shapes Transformer - f32 - CPU
  Matrix Multiply Batch Shapes Transformer - u8s8f32 - CPU
libavif avifenc
oneDNN:
  IP Shapes 3D - f32 - CPU
  IP Shapes 3D - u8s8f32 - CPU
SVT-AV1
SVT-HEVC
libavif avifenc
perf-bench
SVT-AV1
oneDNN:
  Convolution Batch Shapes Auto - u8s8f32 - CPU
  Convolution Batch Shapes Auto - f32 - CPU
libavif avifenc
Etcpak:
  Multi-Threaded - DXT1
  Multi-Threaded - ETC2
SVT-AV1
perf-bench
Nettle
WebP2 Image Encode
x264
Glibc Benchmarks
SVT-HEVC
SVT-VP9:
  Visual Quality Optimized - Bosphorus 1080p
  VMAF Optimized - Bosphorus 1080p
  PSNR/SSIM Optimized - Bosphorus 1080p
Glibc Benchmarks
WebP2 Image Encode
oneDNN:
  Deconvolution Batch shapes_3d - f32 - CPU
  Deconvolution Batch shapes_3d - u8s8f32 - CPU
SVT-AV1
SVT-HEVC
Glibc Benchmarks
SVT-AV1
Nettle
Glibc Benchmarks:
  cos
  pthread_once
  tanh
  sinh
  atanh
  asinh
  ffs
  modf
  log2
  ffsll
  sqrt
Nettle:
  chacha
  poly1305-aes