3900xt-november

AMD Ryzen 9 3900XT 12-Core testing with a MSI MEG X570 GODLIKE (MS-7C34) v1.0 (1.B3 BIOS) and AMD Radeon RX 56/64 8GB on Ubuntu 22.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2211180-SYST-3900XTN38
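The comparison above can be reproduced end to end roughly as follows. The install step is an assumption about a Debian/Ubuntu setup; the Phoronix Test Suite can also be run from a git checkout or the upstream package.

```shell
# Install the Phoronix Test Suite (Debian/Ubuntu package shown; this is an
# assumption about the environment, other install methods exist).
sudo apt-get install -y phoronix-test-suite

# Fetch the public result file 2211180-SYST-3900XTN38 and run the same
# tests locally, appending this system's numbers for side-by-side comparison.
phoronix-test-suite benchmark 2211180-SYST-3900XTN38
```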

Run Management

Run  Date              Test Duration
a    November 17 2022  5 Hours, 24 Minutes
aa   November 17 2022  5 Hours, 21 Minutes
b    November 17 2022  13 Hours, 42 Minutes



3900xt-november - OpenBenchmarking.org / Phoronix Test Suite system details:

Processor: AMD Ryzen 9 3900XT 12-Core @ 3.80GHz (12 Cores / 24 Threads)
Motherboard: MSI MEG X570 GODLIKE (MS-7C34) v1.0 (1.B3 BIOS)
Chipset: AMD Starship/Matisse
Memory: 16GB
Disk: 500GB Seagate FireCuda 520 SSD ZP500GM30002
Graphics: AMD Radeon RX 56/64 8GB (1630/945MHz)
Audio: AMD Vega 10 HDMI Audio
Monitor: ASUS MG28U
Network: Realtek Device 2600 + Realtek Killer E3000 2.5GbE + Intel Wi-Fi 6 AX200
OS: Ubuntu 22.04
Kernel: 5.15.0-47-generic (x86_64)
Desktop: GNOME Shell 42.2
Display Server: X Server + Wayland
OpenGL: 4.6 Mesa 22.0.1 (LLVM 13.0.1 DRM 3.42)
Vulkan: 1.3.204
Compiler: GCC 11.2.0
File-System: ext4
Screen Resolution: 3840x2160

3900xt-november Benchmarks - System Logs:
- Transparent Huge Pages: madvise
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled)
- CPU Microcode: 0x8701021
- BAR1 / Visible vRAM Size: 256 MB - vBIOS Version: 113-D0500100-102
- Python 3.10.4
- Security mitigations: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Mitigation of untrained return thunk; SMT enabled with STIBP protection + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Result Overview (Phoronix Test Suite; runs a, aa, b; relative performance, 100% to ~118%): spaCy, oneDNN, Stress-NG, nekRS, EnCodec, Xmrig, JPEG XL Decoding libjxl, nginx, QuadRay, JPEG XL libjxl, SMHasher, AOM AV1, FLAC Audio Encoding, Cpuminer-Opt, libavif avifenc, Y-Cruncher, OpenRadioss, miniBUDE, Neural Magic DeepSparse, OpenFOAM, FFmpeg, Libplacebo, TensorFlow.
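The result-overview chart expresses each run relative to the slowest run per test, so the slowest always sits at 100%. A minimal sketch of that normalization, using made-up sample values rather than figures from this result file:

```python
# Hypothetical higher-is-better scores for the three runs (illustrative only,
# not taken from this result file).
scores = {"a": 97.0, "aa": 103.0, "b": 100.0}

def normalize(per_run: dict) -> dict:
    """Express each run as a percentage of the slowest run (slowest = 100%)."""
    base = min(per_run.values())
    return {run: 100.0 * value / base for run, value in per_run.items()}

rel = normalize(scores)  # the fastest run lands a few percent above 100
```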

Flattened per-test result table omitted: raw figures for runs a, aa, and b across all test profiles (TensorFlow, SMHasher, nekRS, OpenRadioss, FFmpeg, OpenFOAM, JPEG XL libjxl, JPEG XL Decoding libjxl, libavif avifenc, Xmrig, AOM AV1, Cpuminer-Opt, oneDNN, Libplacebo, spaCy, nginx, miniBUDE, Y-Cruncher, Neural Magic DeepSparse, EnCodec, QuadRay, Stress-NG, FLAC Audio Encoding). The per-test graphs and the linked OpenBenchmarking.org result file 2211180-SYST-3900XTN38 carry these figures with error estimates.

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10, Device: CPU - Batch Size: 512 - Model: GoogLeNet (images/sec, more is better):
  b: 28.33 | aa: 28.71 | a: 28.79

SMHasher

SMHasher is a hash function tester supporting various algorithms and able to make use of AVX and other modern CPU instruction set extensions. Learn more via the OpenBenchmarking.org test page.

SMHasher 2022-08-22, Hash: SHA3-256 (cycles/hash, fewer is better):
  b: 2564.21 | aa: 2620.22 | a: 2598.05 (SE +/- 19.10, N = 7)
  1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

SMHasher 2022-08-22, Hash: SHA3-256 (MiB/sec, more is better):
  b: 151.76 | aa: 148.14 | a: 149.08 (SE +/- 1.41, N = 7)
  1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
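Each graph reports "SE +/- x, N = n": the standard error of the mean over n trials, i.e. the sample standard deviation divided by sqrt(N). A minimal sketch of that calculation, with illustrative sample readings that are not taken from this file:

```python
import math
import statistics

def standard_error(samples):
    """Standard error of the mean: sample standard deviation / sqrt(N)."""
    return statistics.stdev(samples) / math.sqrt(len(samples))

# Three hypothetical cycles/hash readings for one trial set:
trials = [2598.0, 2604.0, 2592.0]
se = standard_error(trials)
```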

nekRS

nekRS is an open-source Navier-Stokes solver based on the spectral element method. It supports both CPU and GPU/accelerator execution, though this test profile is currently configured for CPU execution. nekRS is part of the Nek5000 effort of the Mathematics and Computer Science (MCS) division at Argonne National Laboratory. This nekRS benchmark is primarily relevant to large-core-count HPC servers and may otherwise be very time consuming. Learn more via the OpenBenchmarking.org test page.

nekRS 22.0, Input: TurboPipe Periodic (FLOP/s, more is better):
  b: 32544333333 | aa: 32575200000 | a: 31559900000 (SE +/- 20784155.29, N = 3)
  1. (CXX) g++ options: -fopenmp -O2 -march=native -mtune=native -ftree-vectorize -lmpi_cxx -lmpi

OpenRadioss

OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. It is based on Altair Radioss, which was open-sourced in 2022. This solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test currently uses a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2022.10.13, Model: INIVOL and Fluid Structure Interaction Drop Container (Seconds, fewer is better):
  b: 631.33 | aa: 650.25 | a: 633.97 (SE +/- 0.82, N = 3)

TensorFlow


TensorFlow 2.10, Device: CPU - Batch Size: 256 - Model: GoogLeNet (images/sec, more is better):
  b: 28.65 | aa: 28.67 | a: 28.64

TensorFlow 2.10, Device: CPU - Batch Size: 512 - Model: AlexNet (images/sec, more is better):
  b: 110.02 | aa: 110.49 | a: 110.36 (SE +/- 0.06, N = 3)

TensorFlow 2.10, Device: CPU - Batch Size: 64 - Model: ResNet-50 (images/sec, more is better):
  b: 10.12 | aa: 10.06 | a: 10.09

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

JPEG XL libjxl 0.7, Input: JPEG - Quality: 100 (MP/s, more is better):
  b: 0.68 | aa: 0.68 | a: 0.68 (SE +/- 0.00, N = 3)
  1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

JPEG XL libjxl 0.7, Input: PNG - Quality: 100 (MP/s, more is better):
  b: 0.69 | aa: 0.69 | a: 0.69 (SE +/- 0.01, N = 3)
  1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

TensorFlow


TensorFlow 2.10, Device: CPU - Batch Size: 32 - Model: ResNet-50 (images/sec, more is better):
  b: 10.33 | aa: 10.29 | a: 10.29 (SE +/- 0.01, N = 3)

miniBUDE

MiniBUDE is a mini-application covering the core computation of the Bristol University Docking Engine (BUDE). This test profile currently makes use of the OpenMP implementation of miniBUDE for CPU benchmarking. Learn more via the OpenBenchmarking.org test page.

miniBUDE 20210901, Implementation: OpenMP - Input Deck: BM2 (Billion Interactions/s, more is better):
  b: 16.41 | aa: 16.41 | a: 16.39 (SE +/- 0.00, N = 3)
  1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm

miniBUDE 20210901, Implementation: OpenMP - Input Deck: BM2 (GFInst/s, more is better):
  b: 410.14 | aa: 410.31 | a: 409.86 (SE +/- 0.03, N = 3)
  1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm

OpenRadioss


OpenRadioss 2022.10.13, Model: Bird Strike on Windshield (Seconds, fewer is better):
  b: 287.33 | aa: 286.84 | a: 285.69 (SE +/- 0.10, N = 3)

FFmpeg

This is a benchmark of the FFmpeg multimedia framework. The FFmpeg test profile makes use of a modified version of vbench from Columbia University's Architecture and Design Lab (ARCADE) [http://arcade.cs.columbia.edu/vbench/], a benchmark for video-as-a-service workloads. The test profile offers a range of vbench scenarios based on freely distributable video content, with the option of using the x264 or x265 video encoders for transcoding. Learn more via the OpenBenchmarking.org test page.

FFmpeg 5.1.2, Encoder: libx264 - Scenario: Upload (FPS, more is better):
  b: 12.67 | aa: 12.64 | a: 12.64 (SE +/- 0.06, N = 3)
  1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg 5.1.2, Encoder: libx264 - Scenario: Upload (Seconds, fewer is better):
  b: 199.35 | aa: 199.81 | a: 199.73 (SE +/- 0.94, N = 3)
  1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg 5.1.2, Encoder: libx265 - Scenario: Platform (FPS, more is better):
  b: 31.55 | aa: 31.49 | a: 31.31 (SE +/- 0.03, N = 3)
  1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg 5.1.2, Encoder: libx265 - Scenario: Platform (Seconds, fewer is better):
  b: 240.09 | aa: 240.52 | a: 241.93 (SE +/- 0.20, N = 3)
  1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg 5.1.2, Encoder: libx265 - Scenario: Video On Demand (FPS, more is better):
  b: 31.56 | aa: 31.62 | a: 31.45 (SE +/- 0.07, N = 3)
  1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg 5.1.2, Encoder: libx265 - Scenario: Video On Demand (Seconds, fewer is better):
  b: 240.03 | aa: 239.59 | a: 240.84 (SE +/- 0.53, N = 3)
  1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

TensorFlow


TensorFlow 2.10, Device: CPU - Batch Size: 256 - Model: AlexNet (images/sec, more is better):
  b: 109.30 | aa: 109.38 | a: 109.23 (SE +/- 0.06, N = 3)

OpenFOAM

OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.

OpenFOAM 10, Input: drivaerFastback, Small Mesh Size - Execution Time (Seconds, fewer is better):
  b: 360.00 | aa: 361.48 | a: 363.22
  1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm

OpenFOAM 10, Input: drivaerFastback, Small Mesh Size - Mesh Time (Seconds, fewer is better):
  b: 42.08 | aa: 41.85 | a: 41.82
  1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm

TensorFlow


TensorFlow 2.10, Device: CPU - Batch Size: 64 - Model: GoogLeNet (images/sec, more is better):
  b: 30.50 | aa: 30.51 | a: 30.55 (SE +/- 0.01, N = 3)

FFmpeg


FFmpeg 5.1.2, Encoder: libx265 - Scenario: Upload (FPS, more is better):
  b: 15.37 | aa: 15.33 | a: 15.32 (SE +/- 0.03, N = 3)
  1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg 5.1.2, Encoder: libx265 - Scenario: Upload (Seconds, fewer is better):
  b: 164.26 | aa: 164.72 | a: 164.80 (SE +/- 0.31, N = 3)
  1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg 5.1.2, Encoder: libx264 - Scenario: Platform (FPS, more is better):
  b: 48.62 | aa: 49.01 | a: 48.84 (SE +/- 0.08, N = 3)
  1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg 5.1.2, Encoder: libx264 - Scenario: Platform (Seconds, fewer is better):
  b: 155.81 | aa: 154.57 | a: 155.11 (SE +/- 0.25, N = 3)
  1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg 5.1.2, Encoder: libx264 - Scenario: Video On Demand (FPS, more is better):
  b: 48.77 | aa: 48.85 | a: 48.83 (SE +/- 0.09, N = 3)
  1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg 5.1.2, Encoder: libx264 - Scenario: Video On Demand (Seconds, fewer is better):
  b: 155.31 | aa: 155.06 | a: 155.14 (SE +/- 0.30, N = 3)
  1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

TensorFlow


TensorFlow 2.10, Device: CPU - Batch Size: 16 - Model: ResNet-50 (images/sec, more is better):
  b: 10.44 | aa: 10.44 | a: 10.39 (SE +/- 0.02, N = 3)

OpenRadioss


OpenRadioss 2022.10.13, Model: Rubber O-Ring Seal Installation (Seconds, fewer is better):
  b: 130.06 | aa: 130.82 | a: 131.00 (SE +/- 0.35, N = 3)

JPEG XL libjxl


JPEG XL libjxl 0.7, Input: JPEG - Quality: 80 (MP/s, more is better):
  b: 8.84 | aa: 8.98 | a: 8.68 (SE +/- 0.03, N = 3)
  1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

JPEG XL libjxl 0.7, Input: PNG - Quality: 80 (MP/s, more is better):
  b: 9.12 | aa: 9.13 | a: 9.08 (SE +/- 0.02, N = 3)
  1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.11, Encoder Speed: 0 (Seconds, fewer is better):
  b: 130.15 | aa: 131.57 | a: 130.36 (SE +/- 0.52, N = 3)
  1. (CXX) g++ options: -O3 -fPIC -lm

OpenRadioss


OpenRadioss 2022.10.13, Model: Bumper Beam (Seconds, fewer is better):
  b: 127.37 | aa: 126.51 | a: 127.42 (SE +/- 0.16, N = 3)

Xmrig

Xmrig is an open-source, cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight, and AstroBWT. This test profile is set up to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.

Xmrig 6.18.1, Variant: Monero - Hash Count: 1M (H/s, more is better):
  b: 8175.8 | aa: 7945.8 | a: 7982.6 (SE +/- 88.16, N = 3)
  1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

TensorFlow


TensorFlow 2.10, Device: CPU - Batch Size: 32 - Model: GoogLeNet (images/sec, more is better):
  b: 30.48 | aa: 30.58 | a: 30.46 (SE +/- 0.01, N = 3)

OpenRadioss


OpenRadioss 2022.10.13, Model: Cell Phone Drop Test (Seconds, fewer is better):
  b: 104.27 | aa: 104.11 | a: 104.51 (SE +/- 0.05, N = 3)

JPEG XL libjxl


JPEG XL libjxl 0.7, Input: JPEG - Quality: 90 (MP/s, more is better):
  b: 8.80 | aa: 8.76 | a: 8.62 (SE +/- 0.08, N = 3)
  1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

JPEG XL libjxl 0.7, Input: PNG - Quality: 90 (MP/s, more is better):
  b: 9.07 | aa: 9.03 | a: 9.02 (SE +/- 0.02, N = 3)
  1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5, Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K (Frames Per Second, more is better):
  b: 7.41 | aa: 7.45 | a: 7.41 (SE +/- 0.01, N = 3)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the CPU mining hash speed for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

Cpuminer-Opt 3.20.3, Algorithm: Garlicoin (kH/s, more is better):
  b: 2112.41 | aa: 1960.71 | a: 1939.65 (SE +/- 24.17, N = 15)
  1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Xmrig


Xmrig 6.18.1 - Variant: Wownero, Hash Count: 1M (H/s, More Is Better)
b: 10577.3 | aa: 10640.0 | a: 10466.6 [SE +/- 4.48, N = 3]
Build flags: (CXX) g++ -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

FFmpeg

This is a benchmark of the FFmpeg multimedia framework. The FFmpeg test profile makes use of a modified version of vbench, a benchmark for video-as-a-service workloads from Columbia University's Architecture and Design Lab (ARCADE) [http://arcade.cs.columbia.edu/vbench/]. The test profile offers a range of vbench scenarios based on freely distributable video content and the option of using the x264 or x265 video encoders for transcoding. Learn more via the OpenBenchmarking.org test page.

FFmpeg 5.1.2 - Encoder: libx265, Scenario: Live (FPS, More Is Better)
b: 75.10 | aa: 75.50 | a: 75.38 [SE +/- 0.73, N = 3]
Build flags: (CXX) g++ -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg 5.1.2 - Encoder: libx265, Scenario: Live (Seconds, Fewer Is Better)
b: 67.26 | aa: 66.89 | a: 66.99 [SE +/- 0.65, N = 3]
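
The FPS and Seconds figures above come from the same libx265 Live runs, so for a fixed-length input they should imply the same frame count. A quick cross-check in Python (the roughly 5,050-frame clip length is inferred from this run's numbers, not stated by the test profile):

```python
# Cross-check: FPS x elapsed seconds should give the same implied frame
# count for every run of the same fixed-length clip.
# Values are the b/aa/a results above (FFmpeg 5.1.2, libx265, Live).
fps = [75.10, 75.50, 75.38]
seconds = [67.26, 66.89, 66.99]

implied_frames = [f * s for f, s in zip(fps, seconds)]
print([round(x) for x in implied_frames])  # all three land near ~5050 frames

# The three runs should agree on the clip length to well under 1%.
spread = max(implied_frames) - min(implied_frames)
assert spread / min(implied_frames) < 0.01
```

The same check applies to the libx264 Live pair further down, which implies the same frame count.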

oneDNN

oneDNN 2.7 - Harness: Recurrent Neural Network Training, Data Type: f32, Engine: CPU (ms, Fewer Is Better)
b: 4147.65 (MIN: 4122.17) | aa: 4145.83 (MIN: 4135.73) | a: 4075.57 (MIN: 4066.65) [SE +/- 9.23, N = 3]
Build flags: (CXX) g++ -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

AOM AV1

AOM AV1 3.5 - Encoder Mode: Speed 0 Two-Pass, Input: Bosphorus 4K (Frames Per Second, More Is Better)
b: 0.25 | aa: 0.25 | a: 0.25 [SE +/- 0.00, N = 3]

oneDNN

oneDNN 2.7 - Harness: Recurrent Neural Network Inference, Data Type: f32, Engine: CPU (ms, Fewer Is Better)
b: 2481.61 (MIN: 2470.87) | aa: 2484.90 (MIN: 2478.59) | a: 2503.12 (MIN: 2492.43) [SE +/- 1.05, N = 3]

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU, Batch Size: 64, Model: AlexNet (images/sec, More Is Better)
b: 93.79 | aa: 93.88 | a: 93.80 [SE +/- 0.06, N = 3]

Libplacebo

Libplacebo is a multimedia rendering library based on the core rendering code of the MPV player. The libplacebo benchmark relies on the Vulkan API and tests various primitives. Learn more via the OpenBenchmarking.org test page.

Libplacebo 5.229.1 - Test: av1_grain_lap (FPS, More Is Better)
b: 2093.70 | aa: 2106.73 | a: 2073.69 [SE +/- 8.76, N = 3]
Build flags: (CXX) g++ -lm -pthread -lglslang -lMachineIndependent -lOSDependent -lHLSL -lOGLCompiler -lGenericCodeGen -lSPVRemapper -lSPIRV -lSPIRV-Tools-opt -lSPIRV-Tools -lpthread -ldl -std=c++11 -O2 -fvisibility=hidden -fPIC -MD -MQ -MF

Libplacebo 5.229.1 - Test: hdr_lut (FPS, More Is Better)
b: 2840.16 | aa: 2806.16 | a: 2797.23 [SE +/- 46.08, N = 3]

Libplacebo 5.229.1 - Test: hdr_peakdetect (FPS, More Is Better)
b: 2603.19 | aa: 2598.97 | a: 2610.20 [SE +/- 4.71, N = 3]

Libplacebo 5.229.1 - Test: polar_nocompute (FPS, More Is Better)
b: 961.48 | aa: 948.28 | a: 977.72 [SE +/- 3.01, N = 3]

Libplacebo 5.229.1 - Test: deband_heavy (FPS, More Is Better)
b: 474.53 | aa: 479.65 | a: 476.59 [SE +/- 3.08, N = 3]

Cpuminer-Opt

Cpuminer-Opt 3.20.3 - Algorithm: Blake-2 S (kH/s, More Is Better)
b: 519433 | aa: 528060 | a: 524680 [SE +/- 4099.93, N = 10]

oneDNN

oneDNN 2.7 - Harness: Deconvolution Batch shapes_1d, Data Type: f32, Engine: CPU (ms, Fewer Is Better)
b: 6.22396 (MIN: 4.61) | aa: 5.47125 (MIN: 4.57) | a: 8.22740 (MIN: 7) [SE +/- 0.14562, N = 15]

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.11 - Encoder Speed: 2 (Seconds, Fewer Is Better)
b: 64.37 | aa: 64.02 | a: 63.12 [SE +/- 0.20, N = 3]
Build flags: (CXX) g++ -O3 -fPIC -lm

AOM AV1

AOM AV1 3.5 - Encoder Mode: Speed 6 Two-Pass, Input: Bosphorus 4K (Frames Per Second, More Is Better)
b: 12.71 | aa: 12.68 | a: 12.67 [SE +/- 0.06, N = 3]

spaCy

spaCy is an open-source library for advanced natural language processing (NLP) that leverages Python and is a leading NLP solution. This test profile times the spaCy CPU performance with various models. Learn more via the OpenBenchmarking.org test page.

spaCy 3.4.1 - Model: en_core_web_trf (tokens/sec, More Is Better)
b: 550 | aa: 833 | a: 832

spaCy 3.4.1 - Model: en_core_web_lg (tokens/sec, More Is Better)
b: 11978 | aa: 12217 | a: 12196

TensorFlow

TensorFlow 2.10 - Device: CPU, Batch Size: 16, Model: GoogLeNet (images/sec, More Is Better)
b: 31.85 | aa: 31.83 | a: 31.89 [SE +/- 0.03, N = 3]

JPEG XL Decoding libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities, with JPEG XL offering better image quality and compression than legacy JPEG. This test profile is suited for JPEG XL decode performance testing to a PNG output file; the pts/jpexl test is for encode performance. The JPEG XL encoding/decoding is done using the libjxl codebase. Learn more via the OpenBenchmarking.org test page.
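
The MP/s unit reported below is megapixels of image data decoded per second. As an illustration of how such a figure is derived (the dimensions, frame count, and timing here are made-up example values, not the actual inputs used by this test profile):

```python
# Illustrative derivation of an MP/s (megapixels per second) figure.
# Width, height, frame count, and elapsed time are hypothetical values,
# not the inputs used by this test profile.
def megapixels_per_second(width, height, frames, elapsed_s):
    return (width * height * frames) / (elapsed_s * 1e6)

# e.g. decoding 100 frames of a 1920x1080 image in 4.2 seconds:
print(round(megapixels_per_second(1920, 1080, 100, 4.2), 2))  # -> 49.37
```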

JPEG XL Decoding libjxl 0.7 - CPU Threads: 1 (MP/s, More Is Better)
b: 48.24 | aa: 47.21 | a: 48.69 [SE +/- 0.56, N = 3]

nginx

This is a benchmark of the lightweight Nginx HTTP(S) web server. This Nginx web server benchmark test profile makes use of the wrk program for facilitating the HTTP requests over a fixed period of time with a configurable number of concurrent clients/connections. HTTPS with a self-signed OpenSSL certificate is used by this test for local benchmarking. Learn more via the OpenBenchmarking.org test page.

nginx 1.23.2 - Connections: 1000 (Requests Per Second, More Is Better)
b: 66302.86 | aa: 65507.92 | a: 65135.63
Build flags: (CC) gcc -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

nginx 1.23.2 - Connections: 500 (Requests Per Second, More Is Better)
b: 76562.05 | aa: 76617.70 | a: 75259.81

nginx 1.23.2 - Connections: 200 (Requests Per Second, More Is Better)
b: 82789.42 | aa: 84210.42 | a: 83513.14

nginx 1.23.2 - Connections: 100 (Requests Per Second, More Is Better)
b: 83591.65 | aa: 86726.10 | a: 87222.68

miniBUDE

MiniBUDE is a mini application for the core computation of the Bristol University Docking Engine (BUDE). This test profile currently makes use of the OpenMP implementation of miniBUDE for CPU benchmarking. Learn more via the OpenBenchmarking.org test page.

miniBUDE 20210901 - Implementation: OpenMP, Input Deck: BM1 (Billion Interactions/s, More Is Better)
b: 16.17 | aa: 16.27 | a: 16.23 [SE +/- 0.04, N = 3]
Build flags: (CC) gcc -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm

miniBUDE 20210901 - Implementation: OpenMP, Input Deck: BM1 (GFInst/s, More Is Better)
b: 404.25 | aa: 406.71 | a: 405.82 [SE +/- 0.95, N = 3]
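
The two miniBUDE metrics describe the same runs, and dividing GFInst/s by Billion Interactions/s gives an essentially constant ratio of about 25 across all three runs. The per-interaction instruction count is inferred from the numbers in this file, not taken from miniBUDE's documentation:

```python
# miniBUDE reports the same runs two ways; the ratio between the metrics is
# near-constant because a fixed number of FP instructions is counted per
# atom-pair interaction (~25 here, inferred from this run's numbers).
interactions = [16.17, 16.27, 16.23]    # Billion Interactions/s (b, aa, a)
gfinst = [404.25, 406.71, 405.82]       # GFInst/s (b, aa, a)

ratios = [g / i for g, i in zip(gfinst, interactions)]
print([round(r, 2) for r in ratios])  # -> [25.0, 25.0, 25.0]
assert max(ratios) - min(ratios) < 0.05
```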

AOM AV1

AOM AV1 3.5 - Encoder Mode: Speed 4 Two-Pass, Input: Bosphorus 1080p (Frames Per Second, More Is Better)
b: 15.12 | aa: 15.01 | a: 15.08 [SE +/- 0.08, N = 3]

TensorFlow

TensorFlow 2.10 - Device: CPU, Batch Size: 32, Model: AlexNet (images/sec, More Is Better)
b: 76.37 | aa: 76.25 | a: 76.22 [SE +/- 0.07, N = 3]

Y-Cruncher

Y-Cruncher is a multi-threaded Pi benchmark capable of computing Pi to trillions of digits. Learn more via the OpenBenchmarking.org test page.
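
y-cruncher's own algorithms are heavily optimized (Chudnovsky-class series with large-integer FFT multiplication); as a minimal, unrelated illustration of the idea of computing Pi digits with arbitrary-precision integer arithmetic, Machin's formula pi/4 = 4*arctan(1/5) - arctan(1/239) can be sketched:

```python
# Minimal arbitrary-precision Pi sketch via Machin's formula. This is NOT
# y-cruncher's algorithm, only an illustration of fixed-point Pi digits.
def arctan_inv(x, digits):
    # arctan(1/x) scaled by 10**(digits + 10), via the Taylor series,
    # with 10 guard digits to absorb truncation error.
    scale = 10 ** (digits + 10)
    term = scale // x
    total, n, sign = term, 1, 1
    while term:
        term //= x * x
        n += 2
        sign = -sign
        total += sign * term // n
    return total

def pi_digits(digits):
    pi = 4 * (4 * arctan_inv(5, digits) - arctan_inv(239, digits))
    return pi // 10 ** 10  # drop the guard digits

print(str(pi_digits(50))[:10])  # -> 3141592653
```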

Y-Cruncher 0.7.10.9513 - Pi Digits To Calculate: 1B (Seconds, Fewer Is Better)
b: 41.43 | aa: 41.84 | a: 41.52 [SE +/- 0.11, N = 3]

FFmpeg

FFmpeg 5.1.2 - Encoder: libx264, Scenario: Live (FPS, More Is Better)
b: 204.73 | aa: 205.03 | a: 204.75 [SE +/- 0.39, N = 3]

FFmpeg 5.1.2 - Encoder: libx264, Scenario: Live (Seconds, Fewer Is Better)
b: 24.67 | aa: 24.63 | a: 24.66 [SE +/- 0.05, N = 3]

SMHasher

SMHasher is a hash function tester supporting various algorithms and able to make use of AVX and other modern CPU instruction set extensions. Learn more via the OpenBenchmarking.org test page.

SMHasher 2022-08-22 - Hash: FarmHash128 (cycles/hash, Fewer Is Better)
b: 61.81 | aa: 60.34 | a: 64.10 [SE +/- 0.72, N = 15]
Build flags: (CXX) g++ -march=native -O3 -flto -fno-fat-lto-objects

SMHasher 2022-08-22 - Hash: FarmHash128 (MiB/sec, More Is Better)
b: 16361.64 | aa: 16614.28 | a: 15884.24 [SE +/- 128.88, N = 15]

oneDNN

oneDNN 2.7 - Harness: Matrix Multiply Batch Shapes Transformer, Data Type: f32, Engine: CPU (ms, Fewer Is Better)
b: 1.74121 (MIN: 1.02) | aa: 1.27895 (MIN: 1.15) | a: 1.88170 (MIN: 1.76) [SE +/- 0.12895, N = 15]

SMHasher

SMHasher 2022-08-22 - Hash: MeowHash x86_64 AES-NI (cycles/hash, Fewer Is Better)
b: 55.88 | aa: 56.98 | a: 57.74 [SE +/- 0.39, N = 15]

SMHasher 2022-08-22 - Hash: MeowHash x86_64 AES-NI (MiB/sec, More Is Better)
b: 38610.17 | aa: 37904.32 | a: 37183.28 [SE +/- 285.07, N = 15]

JPEG XL Decoding libjxl

JPEG XL Decoding libjxl 0.7 - CPU Threads: All (MP/s, More Is Better)
b: 148.59 | aa: 148.61 | a: 146.63 [SE +/- 0.27, N = 3]

SMHasher

SMHasher 2022-08-22 - Hash: Spooky32 (cycles/hash, Fewer Is Better)
b: 49.50 | aa: 48.98 | a: 49.06 [SE +/- 0.31, N = 15]

SMHasher 2022-08-22 - Hash: Spooky32 (MiB/sec, More Is Better)
b: 14856.10 | aa: 15026.60 | a: 14909.02 [SE +/- 98.93, N = 15]

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90, Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
b: 182.26 | aa: 181.46 | a: 180.61

Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90, Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
b: 32.91 | aa: 33.06 | a: 33.21

TensorFlow

TensorFlow 2.10 - Device: CPU, Batch Size: 16, Model: AlexNet (images/sec, More Is Better)
b: 55.49 | aa: 55.47 | a: 55.63 [SE +/- 0.04, N = 3]

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB, Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
b: 618.14 | aa: 617.74 | a: 611.84

Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB, Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
b: 9.6761 | aa: 9.7085 | a: 9.7748

EnCodec

EnCodec is a Facebook/Meta-developed AI-based method of compressing audio files using High Fidelity Neural Audio Compression. EnCodec is designed to provide codec compression at 6 kbps using its novel AI-powered compression technique. The test profile uses a lengthy JFK speech as the audio input for benchmarking, and the performance measurement is the time to encode the EnCodec file from WAV. Learn more via the OpenBenchmarking.org test page.
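
A constant-bitrate target translates directly into output size: B kbps yields B*1000/8 bytes per second of audio. A back-of-the-envelope sketch (container/framing overhead ignored; the one-minute duration is an arbitrary example, not the length of the JFK input):

```python
# Constant-bitrate arithmetic for the target bandwidths compared below.
# Container/framing overhead is ignored; the duration is an example value.
def encoded_size_bytes(kbps, duration_s):
    return kbps * 1000 / 8 * duration_s

# One minute of audio at each target bandwidth from this comparison:
for kbps in (24, 6, 3, 1.5):
    print(kbps, "kbps ->", encoded_size_bytes(kbps, 60), "bytes/min")
# e.g. 24 kbps -> 180000.0 bytes per minute of audio
```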

EnCodec 0.1.1 - Target Bandwidth: 24 kbps (Seconds, Fewer Is Better)
b: 54.86 | aa: 54.43 | a: 54.42

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2, Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
b: 146.64 | aa: 146.11 | a: 146.48

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2, Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
b: 40.91 | aa: 41.06 | a: 40.95

Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003, Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
b: 609.83 | aa: 617.03 | a: 617.65

Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003, Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
b: 9.8238 | aa: 9.7230 | a: 9.7133

Cpuminer-Opt

Cpuminer-Opt 3.20.3 - Algorithm: Skeincoin (kH/s, More Is Better)
b: 105667 | aa: 108620 | a: 106880 [SE +/- 1171.67, N = 3]

Cpuminer-Opt 3.20.3 - Algorithm: Triple SHA-256, Onecoin (kH/s, More Is Better)
b: 205100 | aa: 204950 | a: 204970 [SE +/- 26.46, N = 3]

Cpuminer-Opt 3.20.3 - Algorithm: Myriad-Groestl (kH/s, More Is Better)
b: 21397 | aa: 21260 | a: 21130 [SE +/- 153.44, N = 3]

Cpuminer-Opt 3.20.3 - Algorithm: Quad SHA-256, Pyrite (kH/s, More Is Better)
b: 102017 | aa: 102180 | a: 102170 [SE +/- 58.12, N = 3]

Cpuminer-Opt 3.20.3 - Algorithm: x25x (kH/s, More Is Better)
b: 628.47 | aa: 631.74 | a: 631.15 [SE +/- 0.96, N = 3]

Cpuminer-Opt 3.20.3 - Algorithm: Magi (kH/s, More Is Better)
b: 600.87 | aa: 603.33 | a: 603.58 [SE +/- 0.50, N = 3]

Cpuminer-Opt 3.20.3 - Algorithm: LBC, LBRY Credits (kH/s, More Is Better)
b: 31143 | aa: 31820 | a: 31260 [SE +/- 18.56, N = 3]

Cpuminer-Opt 3.20.3 - Algorithm: Ringcoin (kH/s, More Is Better)
b: 2730.30 | aa: 2709.03 | a: 2705.27 [SE +/- 7.56, N = 3]

Cpuminer-Opt 3.20.3 - Algorithm: Deepcoin (kH/s, More Is Better)
b: 11660 | aa: 11720 | a: 11710 [SE +/- 5.77, N = 3]

Cpuminer-Opt 3.20.3 - Algorithm: scrypt (kH/s, More Is Better)
b: 225.89 | aa: 226.59 | a: 226.42 [SE +/- 0.18, N = 3]

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90, Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
b: 49.49 | aa: 49.23 | a: 48.49

Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90, Scenario: Synchronous Single-Stream (items/sec, More Is Better)
b: 20.20 | aa: 20.31 | a: 20.62

EnCodec

EnCodec 0.1.1 - Target Bandwidth: 6 kbps (Seconds, Fewer Is Better)
b: 48.22 | aa: 48.56 | a: 47.33

EnCodec 0.1.1 - Target Bandwidth: 3 kbps (Seconds, Fewer Is Better)
b: 48.45 | aa: 47.66 | a: 47.23

AOM AV1

AOM AV1 3.5 - Encoder Mode: Speed 0 Two-Pass, Input: Bosphorus 1080p (Frames Per Second, More Is Better)
b: 0.73 | aa: 0.73 | a: 0.73 [SE +/- 0.00, N = 3]

EnCodec

EnCodec 0.1.1 - Target Bandwidth: 1.5 kbps (Seconds, Fewer Is Better)
b: 46.62 | aa: 46.08 | a: 45.14

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003, Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
b: 125.05 | aa: 124.76 | a: 124.79

Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003, Scenario: Synchronous Single-Stream (items/sec, More Is Better)
b: 7.9966 | aa: 8.0151 | a: 8.0129

Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB, Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
b: 125.27 | aa: 125.17 | a: 125.35

Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB, Scenario: Synchronous Single-Stream (items/sec, More Is Better)
b: 7.9824 | aa: 7.9890 | a: 7.9776

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2, Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
b: 33.34 | aa: 33.32 | a: 33.39

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2, Scenario: Synchronous Single-Stream (items/sec, More Is Better)
b: 29.99 | aa: 30.00 | a: 29.95
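
In the synchronous single-stream scenario the two DeepSparse metrics are reciprocals of one another when one item is processed per batch; that batch-size-1 reading is inferred from the numbers in this file, not stated by the test profile. Checking it against the Token Classification results above:

```python
# Single-stream DeepSparse metrics should satisfy items/sec ~= 1000 / (ms/batch)
# if each batch holds one item (inferred from this run, not a documented fact).
ms_per_batch = [125.05, 124.76, 124.79]   # b, aa, a (NLP Token Classification)
items_per_sec = [7.9966, 8.0151, 8.0129]  # b, aa, a

for ms, reported in zip(ms_per_batch, items_per_sec):
    derived = 1000.0 / ms
    assert abs(derived - reported) / reported < 0.005
print("reciprocal relation holds to <0.5%")
```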

AOM AV1

AOM AV1 3.5 - Encoder Mode: Speed 6 Realtime, Input: Bosphorus 4K (Frames Per Second, More Is Better)
b: 23.13 | aa: 23.20 | a: 23.10 [SE +/- 0.06, N = 3]

SMHasher

SMHasher 2022-08-22 - Hash: t1ha2_atonce (cycles/hash, Fewer Is Better)
b: 33.68 | aa: 34.55 | a: 32.25 [SE +/- 0.28, N = 15]

SMHasher 2022-08-22 - Hash: t1ha2_atonce (MiB/sec, More Is Better)
b: 16085.48 | aa: 15706.92 | a: 16686.57 [SE +/- 116.42, N = 15]

SMHasher 2022-08-22 - Hash: t1ha0_aes_avx2 x86_64 (cycles/hash, Fewer Is Better)
b: 33.61 | aa: 32.88 | a: 34.81 [SE +/- 0.30, N = 15]

SMHasher 2022-08-22 - Hash: t1ha0_aes_avx2 x86_64 (MiB/sec, More Is Better)
b: 68310.66 | aa: 69896.41 | a: 66796.46 [SE +/- 549.14, N = 15]

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli, Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
b: 73.77 | aa: 74.01 | a: 73.71

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli, Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
b: 81.31 | aa: 81.04 | a: 81.37

Neural Magic DeepSparse 1.1 - Model: CV Detection, YOLOv5s COCO, Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
b: 114.40 | aa: 113.64 | a: 113.46

Neural Magic DeepSparse 1.1 - Model: CV Detection, YOLOv5s COCO, Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
b: 52.42 | aa: 52.78 | a: 52.85

FLAC Audio Encoding

This test times how long it takes to encode a sample WAV file to FLAC audio format ten times using the --best preset settings. Learn more via the OpenBenchmarking.org test page.

FLAC Audio Encoding 1.4 - WAV To FLAC (Seconds, Fewer Is Better)
b: 17.21 | aa: 17.33 | a: 17.26 [SE +/- 0.04, N = 5]
Build flags: (CXX) g++ -O3 -fvisibility=hidden -logg -lm

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet, Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
b: 53.44 | aa: 53.41 | a: 53.41

Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet, Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
b: 112.12 | aa: 112.26 | a: 112.28

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli, Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
b: 16.83 | aa: 16.77 | a: 16.82

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli, Scenario: Synchronous Single-Stream (items/sec, More Is Better)
b: 59.41 | aa: 59.62 | a: 59.45

Neural Magic DeepSparse 1.1 - Model: CV Detection, YOLOv5s COCO, Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
b: 23.60 | aa: 23.57 | a: 23.56

Neural Magic DeepSparse 1.1 - Model: CV Detection, YOLOv5s COCO, Scenario: Synchronous Single-Stream (items/sec, More Is Better)
b: 42.36 | aa: 42.41 | a: 42.43

Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet, Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
b: 12.76 | aa: 12.75 | a: 12.75

Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet, Scenario: Synchronous Single-Stream (items/sec, More Is Better)
b: 78.31 | aa: 78.41 | a: 78.39

Y-Cruncher

Y-Cruncher 0.7.10.9513 - Pi Digits To Calculate: 500M (Seconds, Fewer Is Better)
b: 19.53 | aa: 19.53 | a: 19.55 [SE +/- 0.02, N = 3]

QuadRay

VectorChief's QuadRay is a real-time ray-tracing engine written to support SIMD across ARM, MIPS, PPC, and x86/x86_64 processors. QuadRay supports SSE/SSE2/SSE4 and AVX/AVX2/AVX-512 usage on Intel/AMD CPUs. Learn more via the OpenBenchmarking.org test page.

QuadRay 2022.05.25 - Scene: 5, Resolution: 4K (FPS, More Is Better)
b: 0.66 | aa: 0.66 | a: 0.66 [SE +/- 0.00, N = 3]
Build flags: (CXX) g++ -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

QuadRay 2022.05.25 - Scene: 2, Resolution: 4K (FPS, More Is Better)
b: 2.86 | aa: 2.90 | a: 2.89 [SE +/- 0.01, N = 3]

QuadRay 2022.05.25 - Scene: 3, Resolution: 4K (FPS, More Is Better)
b: 2.49 | aa: 2.43 | a: 2.46 [SE +/- 0.01, N = 3]

QuadRay 2022.05.25 - Scene: 1, Resolution: 4K (FPS, More Is Better)
b: 10.14 | aa: 9.63 | a: 10.21 [SE +/- 0.04, N = 3]

QuadRay 2022.05.25 - Scene: 5, Resolution: 1080p (FPS, More Is Better)
b: 2.63 | aa: 2.62 | a: 2.64 [SE +/- 0.00, N = 3]

QuadRay 2022.05.25 - Scene: 2, Resolution: 1080p (FPS, More Is Better)
b: 11.17 | aa: 11.22 | a: 11.12 [SE +/- 0.06, N = 3]

QuadRay 2022.05.25 - Scene: 3, Resolution: 1080p (FPS, More Is Better)
b: 9.70 | aa: 9.81 | a: 9.71 [SE +/- 0.10, N = 3]

QuadRay 2022.05.25 - Scene: 1, Resolution: 1080p (FPS, More Is Better)
b: 39.26 | aa: 38.50 | a: 39.48 [SE +/- 0.13, N = 3]

AOM AV1

AOM AV1 3.5 - Encoder Mode: Speed 6 Two-Pass, Input: Bosphorus 1080p (Frames Per Second, More Is Better)
b: 42.07 | aa: 42.17 | a: 41.57 [SE +/- 0.26, N = 3]

AOM AV1 3.5 - Encoder Mode: Speed 8 Realtime, Input: Bosphorus 4K (Frames Per Second, More Is Better)
b: 34.60 | aa: 34.91 | a: 34.46 [SE +/- 0.08, N = 3]

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14.06 - Bogo Ops/s, More Is Better
Note: (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

Test                          b              aa             a
Context Switching             3796353.99     4103283.77     4076744.52
CPU Cache                     149.79         152.46         158.89
MEMFD                         770.89         773.07         747.31
Futex                         2563831.67     2717409.38     2600010.05
NUMA                          260.02         261.26         260.68
System V Message Passing      7654922.05     7932169.44     7925418.15
Mutex                         6344621.56     6371966.37     6319425.86
Memory Copying                3456.53        3592.78        3599.42
Socket Activity               8961.20        8500.17        8595.20
Matrix Math                   58814.38       61203.21       61078.06
Malloc                        13767394.33    13924963.43    13779284.62
Semaphores                    2466594.73     2465656.19     2463565.68
Forking                       40197.60       40572.79       40818.92
Crypto                        22434.69       22525.14       22523.79
MMAP                          293.29         292.10         294.05
IO_uring                      4168.54        8765.88        5126.99
Atomic                        571674.68      575708.53      575564.96
Glibc Qsort Data Sorting      184.77         192.23         180.77
CPU Stress                    30793.53       31954.78       32900.39
Glibc C String Functions      1531955.88     2035501.44     2048965.25
Vector Math                   87587.66       90998.92       90999.65
SENDFILE                      205666.12      215163.99      215308.14

oneDNN

oneDNN 2.7 - Harness: IP Shapes 1D, Data Type: f32, Engine: CPU
ms, Fewer Is Better (SE +/- 0.01435, N = 3)
  b: 4.73086 (min 4.49) | aa: 4.74859 (min 4.55) | a: 4.69026 (min 4.49)
  Note: (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 9 Realtime, Input: Bosphorus 4K
Frames Per Second, More Is Better (SE +/- 0.12, N = 3)
  b: 47.60 | aa: 47.62 | a: 47.42

AOM AV1 3.5 - Encoder Mode: Speed 10 Realtime, Input: Bosphorus 4K
Frames Per Second, More Is Better (SE +/- 0.11, N = 3)
  b: 48.23 | aa: 47.95 | a: 47.55

AOM AV1 3.5 - Encoder Mode: Speed 6 Realtime, Input: Bosphorus 1080p
Frames Per Second, More Is Better (SE +/- 0.20, N = 3)
  b: 45.64 | aa: 46.98 | a: 45.20

SMHasher

SMHasher is a hash function tester supporting various algorithms and able to make use of AVX and other modern CPU instruction set extensions. Learn more via the OpenBenchmarking.org test page.

SMHasher 2022-08-22 - Hash: fasthash32
cycles/hash, Fewer Is Better (SE +/- 0.48, N = 4)
  b: 36.69 | aa: 35.26 | a: 36.90
  Note: (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

SMHasher 2022-08-22 - Hash: fasthash32
MiB/sec, More Is Better (SE +/- 77.96, N = 4)
  b: 6686.25 | aa: 6917.32 | a: 6658.92

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.11 - Encoder Speed: 6, Lossless
Seconds, Fewer Is Better (SE +/- 0.04, N = 3)
  b: 10.15 | aa: 10.09 | a: 10.15
  Note: (CXX) g++ options: -O3 -fPIC -lm

oneDNN

oneDNN 2.7 - Harness: IP Shapes 3D, Data Type: f32, Engine: CPU
ms, Fewer Is Better (SE +/- 0.01, N = 3)
  b: 12.03 (min 11.91) | aa: 12.03 (min 11.93) | a: 12.06 (min 11.96)

SMHasher

SMHasher 2022-08-22 - Hash: FarmHash32 x86_64 AVX
cycles/hash, Fewer Is Better (SE +/- 0.70, N = 3)
  b: 42.58 | aa: 41.13 | a: 40.07

SMHasher 2022-08-22 - Hash: FarmHash32 x86_64 AVX
MiB/sec, More Is Better (SE +/- 369.00, N = 3)
  b: 27788.51 | aa: 28582.22 | a: 29093.03

AOM AV1

AOM AV1 3.5 - Encoder Mode: Speed 8 Realtime, Input: Bosphorus 1080p
Frames Per Second, More Is Better (SE +/- 0.06, N = 3)
  b: 84.93 | aa: 85.90 | a: 85.22

SMHasher

SMHasher 2022-08-22 - Hash: wyhash
cycles/hash, Fewer Is Better (SE +/- 0.37, N = 3)
  b: 25.91 | aa: 25.29 | a: 25.31

SMHasher 2022-08-22 - Hash: wyhash
MiB/sec, More Is Better (SE +/- 338.95, N = 3)
  b: 23571.26 | aa: 24070.66 | a: 23915.75
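The "Normalize Results" view option rescales each graph so the runs are easy to compare at a glance. A sketch of that kind of normalization applied to the wyhash throughput numbers above (the mapping of values to run identifiers follows the source legend order and is an assumption):

```python
def normalize(results):
    """Scale each result to a percentage of the best (highest) value."""
    best = max(results.values())
    return {run: round(100.0 * value / best, 1) for run, value in results.items()}

# wyhash bulk throughput (MiB/sec, higher is better)
print(normalize({"b": 23571.26, "aa": 24070.66, "a": 23915.75}))
# → {'b': 97.9, 'aa': 100.0, 'a': 99.4}
```

This form assumes a more-is-better metric; for fewer-is-better results (seconds, ms, cycles/hash) the minimum would serve as the 100% baseline instead.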

oneDNN

oneDNN 2.7 - Harness: Convolution Batch Shapes Auto, Data Type: f32, Engine: CPU
ms, Fewer Is Better (SE +/- 0.02, N = 3)
  b: 22.52 (min 21.81) | aa: 22.55 (min 22.11) | a: 22.54 (min 22.24)

libavif avifenc

libavif avifenc 0.11 - Encoder Speed: 6
Seconds, Fewer Is Better (SE +/- 0.028, N = 3)
  b: 6.019 | aa: 6.076 | a: 6.049

AOM AV1

AOM AV1 3.5 - Encoder Mode: Speed 9 Realtime, Input: Bosphorus 1080p
Frames Per Second, More Is Better (SE +/- 0.48, N = 3)
  b: 107.41 | aa: 108.91 | a: 108.47

AOM AV1 3.5 - Encoder Mode: Speed 10 Realtime, Input: Bosphorus 1080p
Frames Per Second, More Is Better (SE +/- 0.74, N = 3)
  b: 110.26 | aa: 111.49 | a: 109.52

libavif avifenc

libavif avifenc 0.11 - Encoder Speed: 10, Lossless
Seconds, Fewer Is Better (SE +/- 0.026, N = 3)
  b: 5.497 | aa: 5.505 | a: 5.480

oneDNN

oneDNN 2.7 - Harness: Deconvolution Batch shapes_3d, Data Type: f32, Engine: CPU
ms, Fewer Is Better (SE +/- 0.00411, N = 3)
  b: 5.28794 (min 5.2) | aa: 5.25072 (min 5.17) | a: 5.27435 (min 5.18)

184 Results Shown

TensorFlow
SMHasher:
  SHA3-256:
    cycles/hash
    MiB/sec
nekRS
OpenRadioss
TensorFlow:
  CPU - 256 - GoogLeNet
  CPU - 512 - AlexNet
  CPU - 64 - ResNet-50
JPEG XL libjxl:
  JPEG - 100
  PNG - 100
TensorFlow
miniBUDE:
  OpenMP - BM2:
    Billion Interactions/s
    GFInst/s
OpenRadioss
FFmpeg:
  libx264 - Upload:
    FPS
    Seconds
  libx265 - Platform:
    FPS
    Seconds
  libx265 - Video On Demand:
    FPS
    Seconds
TensorFlow
OpenFOAM:
  drivaerFastback, Small Mesh Size - Execution Time
  drivaerFastback, Small Mesh Size - Mesh Time
TensorFlow
FFmpeg:
  libx265 - Upload:
    FPS
    Seconds
  libx264 - Platform:
    FPS
    Seconds
  libx264 - Video On Demand:
    FPS
    Seconds
TensorFlow
OpenRadioss
JPEG XL libjxl:
  JPEG - 80
  PNG - 80
libavif avifenc
OpenRadioss
Xmrig
TensorFlow
OpenRadioss
JPEG XL libjxl:
  JPEG - 90
  PNG - 90
AOM AV1
Cpuminer-Opt
Xmrig
FFmpeg:
  libx265 - Live:
    FPS
    Seconds
oneDNN
AOM AV1
oneDNN
TensorFlow
Libplacebo:
  av1_grain_lap
  hdr_lut
  hdr_peakdetect
  polar_nocompute
  deband_heavy
Cpuminer-Opt
oneDNN
libavif avifenc
AOM AV1
spaCy:
  en_core_web_trf
  en_core_web_lg
TensorFlow
JPEG XL Decoding libjxl
nginx:
  1000
  500
  200
  100
miniBUDE:
  OpenMP - BM1:
    Billion Interactions/s
    GFInst/s
AOM AV1
TensorFlow
Y-Cruncher
FFmpeg:
  libx264 - Live:
    FPS
    Seconds
SMHasher:
  FarmHash128:
    cycles/hash
    MiB/sec
oneDNN
SMHasher:
  MeowHash x86_64 AES-NI:
    cycles/hash
    MiB/sec
JPEG XL Decoding libjxl
SMHasher:
  Spooky32:
    cycles/hash
    MiB/sec
Neural Magic DeepSparse:
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
TensorFlow
Neural Magic DeepSparse:
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream:
    ms/batch
    items/sec
EnCodec
Neural Magic DeepSparse:
  NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
Cpuminer-Opt:
  Skeincoin
  Triple SHA-256, Onecoin
  Myriad-Groestl
  Quad SHA-256, Pyrite
  x25x
  Magi
  LBC, LBRY Credits
  Ringcoin
  Deepcoin
  scrypt
Neural Magic DeepSparse:
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Synchronous Single-Stream:
    ms/batch
    items/sec
EnCodec:
  6 kbps
  3 kbps
AOM AV1
EnCodec
Neural Magic DeepSparse:
  NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream:
    ms/batch
    items/sec
  NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream:
    ms/batch
    items/sec
  NLP Text Classification, BERT base uncased SST2 - Synchronous Single-Stream:
    ms/batch
    items/sec
AOM AV1
SMHasher:
  t1ha2_atonce:
    cycles/hash
    MiB/sec
  t1ha0_aes_avx2 x86_64:
    cycles/hash
    MiB/sec
Neural Magic DeepSparse:
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  CV Detection,YOLOv5s COCO - Asynchronous Multi-Stream:
    ms/batch
    items/sec
FLAC Audio Encoding
Neural Magic DeepSparse:
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream:
    ms/batch
    items/sec
  CV Detection,YOLOv5s COCO - Synchronous Single-Stream:
    ms/batch
    items/sec
  CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream:
    ms/batch
    items/sec
Y-Cruncher
QuadRay:
  5 - 4K
  2 - 4K
  3 - 4K
  1 - 4K
  5 - 1080p
  2 - 1080p
  3 - 1080p
  1 - 1080p
AOM AV1:
  Speed 6 Two-Pass - Bosphorus 1080p
  Speed 8 Realtime - Bosphorus 4K
Stress-NG:
  Context Switching
  CPU Cache
  MEMFD
  Futex
  NUMA
  System V Message Passing
  Mutex
  Memory Copying
  Socket Activity
  Matrix Math
  Malloc
  Semaphores
  Forking
  Crypto
  MMAP
  IO_uring
  Atomic
  Glibc Qsort Data Sorting
  CPU Stress
  Glibc C String Functions
  Vector Math
  SENDFILE
oneDNN
AOM AV1:
  Speed 9 Realtime - Bosphorus 4K
  Speed 10 Realtime - Bosphorus 4K
  Speed 6 Realtime - Bosphorus 1080p
SMHasher:
  fasthash32:
    cycles/hash
    MiB/sec
libavif avifenc
oneDNN
SMHasher:
  FarmHash32 x86_64 AVX:
    cycles/hash
    MiB/sec
AOM AV1
SMHasher:
  wyhash:
    cycles/hash
    MiB/sec
oneDNN
libavif avifenc
AOM AV1:
  Speed 9 Realtime - Bosphorus 1080p
  Speed 10 Realtime - Bosphorus 1080p
libavif avifenc
oneDNN