Ryzen 7 7700X

AMD Ryzen 7 7700X 8-Core testing with an ASRock X670E PG Lightning (1.11 BIOS) and XFX AMD Radeon RX 6400 4GB on Ubuntu 22.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2210285-PTS-RYZEN77743
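For reference, a minimal sketch of that workflow on Ubuntu (assuming the phoronix-test-suite package is available through your distribution; the install step is illustrative):

  sudo apt-get install phoronix-test-suite                  # install the Phoronix Test Suite (Ubuntu/Debian)
  phoronix-test-suite benchmark 2210285-PTS-RYZEN77743      # fetch this result file, run the same tests, and compare

The benchmark command downloads the referenced result file from OpenBenchmarking.org, installs the needed test profiles, runs them locally, and presents your numbers alongside the A and B runs below.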
Test categories covered in this result file:

AV1 2 Tests
C/C++ Compiler Tests 3 Tests
CPU Massive 4 Tests
Creator Workloads 7 Tests
Cryptocurrency Benchmarks, CPU Mining Tests 2 Tests
Cryptography 3 Tests
Encoding 3 Tests
HPC - High Performance Computing 5 Tests
Imaging 3 Tests
Machine Learning 4 Tests
Multi-Core 5 Tests
Python Tests 3 Tests
Server CPU Tests 2 Tests
Video Encoding 2 Tests

Run Management

Result Identifier - Date - Test Duration
A - November 29 2022 - 2 Hours, 34 Minutes
B - November 29 2022 - 2 Hours, 38 Minutes


HTML result view exported from: https://openbenchmarking.org/result/2210285-PTS-RYZEN77743&sro&grs.

Both result identifiers (A and B) were run on the same system configuration:

Processor: AMD Ryzen 7 7700X 8-Core @ 5.57GHz (8 Cores / 16 Threads)
Motherboard: ASRock X670E PG Lightning (1.11 BIOS)
Chipset: AMD Device 14d8
Memory: 32GB
Disk: 1000GB Western Digital WDS100T1X0E-00AFY0
Graphics: XFX AMD Radeon RX 6400 4GB (2320/1000MHz)
Audio: AMD Navi 21 HDMI Audio
Monitor: ASUS MG28U
Network: Realtek RTL8125 2.5GbE
OS: Ubuntu 22.04
Kernel: 5.17.0-1013-oem (x86_64)
Desktop: GNOME Shell 42.2
Display Server: X Server 1.21.1.3 + Wayland
OpenGL: 4.6 Mesa 22.2.0-devel (git-a9610ab740) (LLVM 13.0.1 DRM 3.44)
Vulkan: 1.3.219
Compiler: GCC 11.2.0
File-System: ext4
Screen Resolution: 3840x2160

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: amd-pstate schedutil (Boost: Enabled) - CPU Microcode: 0xa601203
Python Details: Python 3.10.4
Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Results overview for result identifiers A and B; the individual per-test results are detailed below.

OpenRadioss

Model: Rubber O-Ring Seal Installation

Seconds, Fewer Is Better. OpenRadioss 2022.10.13. A: 119.08; B: 143.60

OpenRadioss

Model: Cell Phone Drop Test

Seconds, Fewer Is Better. OpenRadioss 2022.10.13. A: 82.66; B: 93.67

OpenRadioss

Model: Bumper Beam

Seconds, Fewer Is Better. OpenRadioss 2022.10.13. A: 120.80; B: 136.59

AOM AV1

Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p

Frames Per Second, More Is Better. AOM AV1 3.5. A: 71.08; B: 64.53. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

AOM AV1

Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p

Frames Per Second, More Is Better. AOM AV1 3.5. A: 202.0; B: 183.6. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

OpenRadioss

Model: Bird Strike on Windshield

Seconds, Fewer Is Better. OpenRadioss 2022.10.13. A: 252.55; B: 277.84

JPEG XL Decoding libjxl

CPU Threads: All

MP/s, More Is Better. JPEG XL Decoding libjxl 0.7. A: 293.80; B: 318.09

QuadRay

Scene: 5 - Resolution: 4K

FPS, More Is Better. QuadRay 2022.05.25. A: 0.96; B: 0.89. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

QuadRay

Scene: 5 - Resolution: 1080p

FPS, More Is Better. QuadRay 2022.05.25. A: 3.90; B: 3.62. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

OpenRadioss

Model: INIVOL and Fluid Structure Interaction Drop Container

Seconds, Fewer Is Better. OpenRadioss 2022.10.13. A: 526.03; B: 565.63

Xmrig

Variant: Monero - Hash Count: 1M

H/s, More Is Better. Xmrig 6.18.1. A: 7555.0; B: 7050.3. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Cpuminer-Opt

Algorithm: Blake-2 S

kH/s, More Is Better. Cpuminer-Opt 3.20.3. A: 843890; B: 889870. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

JPEG XL libjxl

Input: JPEG - Quality: 100

MP/s, More Is Better. JPEG XL libjxl 0.7. A: 0.78; B: 0.82. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU

ms, Fewer Is Better. oneDNN 2.7. A: 5.85680 (MIN: 4.04); B: 6.10103 (MIN: 4.03). (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

AOM AV1

Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p

Frames Per Second, More Is Better. AOM AV1 3.5. A: 133.52; B: 139.06. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Cpuminer-Opt

Algorithm: LBC, LBRY Credits

kH/s, More Is Better. Cpuminer-Opt 3.20.3. A: 72970; B: 75300. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

libavif avifenc

Encoder Speed: 10, Lossless

Seconds, Fewer Is Better. libavif avifenc 0.11. A: 4.016; B: 4.141. (CXX) g++ options: -O3 -fPIC -lm

QuadRay

Scene: 3 - Resolution: 1080p

FPS, More Is Better. QuadRay 2022.05.25. A: 11.69; B: 12.00. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

Neural Magic DeepSparse

Model: CV Detection,YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better. Neural Magic DeepSparse 1.1. A: 63.35; B: 64.85

Neural Magic DeepSparse

Model: CV Detection,YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better. Neural Magic DeepSparse 1.1. A: 63.07; B: 61.62

Cpuminer-Opt

Algorithm: Quad SHA-256, Pyrite

kH/s, More Is Better. Cpuminer-Opt 3.20.3. A: 162810; B: 159450. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU

ms, Fewer Is Better. oneDNN 2.7. A: 8.96869 (MIN: 7.11); B: 9.13721 (MIN: 7.1). (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

JPEG XL Decoding libjxl

CPU Threads: 1

MP/s, More Is Better. JPEG XL Decoding libjxl 0.7. A: 64.73; B: 65.81

oneDNN

Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU

ms, Fewer Is Better. oneDNN 2.7. A: 1.18542 (MIN: 0.86); B: 1.20283 (MIN: 0.86). (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

libavif avifenc

Encoder Speed: 6, Lossless

Seconds, Fewer Is Better. libavif avifenc 0.11. A: 8.133; B: 8.018. (CXX) g++ options: -O3 -fPIC -lm

Cpuminer-Opt

Algorithm: Ringcoin

kH/s, More Is Better. Cpuminer-Opt 3.20.3. A: 2220.83; B: 2252.00. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

AOM AV1

Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p

Frames Per Second, More Is Better. AOM AV1 3.5. A: 173.63; B: 176.02. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

AOM AV1

Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K

Frames Per Second, More Is Better. AOM AV1 3.5. A: 50.44; B: 51.13. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

spaCy

Model: en_core_web_lg

tokens/sec, More Is Better. spaCy 3.4.1. A: 18697; B: 18445

AOM AV1

Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p

Frames Per Second, More Is Better. AOM AV1 3.5. A: 0.76; B: 0.75. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Xmrig

Variant: Wownero - Hash Count: 1M

H/s, More Is Better. Xmrig 6.18.1. A: 9679.3; B: 9558.7. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

AOM AV1

Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K

Frames Per Second, More Is Better. AOM AV1 3.5. A: 68.68; B: 69.48. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Cpuminer-Opt

Algorithm: x25x

kH/s, More Is Better. Cpuminer-Opt 3.20.3. A: 570.19; B: 563.72. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

JPEG XL libjxl

Input: PNG - Quality: 100

MP/s, More Is Better. JPEG XL libjxl 0.7. A: 0.90; B: 0.91. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

oneDNN

Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU

ms, Fewer Is Better. oneDNN 2.7. A: 0.315521 (MIN: 0.23); B: 0.318917 (MIN: 0.23). (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU

ms, Fewer Is Better. oneDNN 2.7. A: 1352.66 (MIN: 1252.52); B: 1339.00 (MIN: 1239.2). (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

SMHasher

Hash: SHA3-256

MiB/sec, More Is Better. SMHasher 2022-08-22. A: 171.75; B: 170.07. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

oneDNN

Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU

ms, Fewer Is Better. oneDNN 2.7. A: 0.702154 (MIN: 0.53); B: 0.708957 (MIN: 0.52). (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

JPEG XL libjxl

Input: JPEG - Quality: 80

MP/s, More Is Better. JPEG XL libjxl 0.7. A: 11.12; B: 11.03. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

Cpuminer-Opt

Algorithm: scrypt

kH/s, More Is Better. Cpuminer-Opt 3.20.3. A: 315.65; B: 318.16. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

libavif avifenc

Encoder Speed: 6

Seconds, Fewer Is Better. libavif avifenc 0.11. A: 5.333; B: 5.291. (CXX) g++ options: -O3 -fPIC -lm

Y-Cruncher

Pi Digits To Calculate: 500M

Seconds, Fewer Is Better. Y-Cruncher 0.7.10.9513. A: 13.27; B: 13.17

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better. Neural Magic DeepSparse 1.1. A: 6.1328; B: 6.1783

TensorFlow

Device: CPU - Batch Size: 16 - Model: AlexNet

images/sec, More Is Better. TensorFlow 2.10. A: 121.96; B: 122.86

oneDNN

Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU

ms, Fewer Is Better. oneDNN 2.7. A: 1.62133 (MIN: 1.18); B: 1.63323 (MIN: 1.19). (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

JPEG XL libjxl

Input: PNG - Quality: 90

MP/s, More Is Better. JPEG XL libjxl 0.7. A: 11.27; B: 11.19. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU

ms, Fewer Is Better. oneDNN 2.7. A: 9.39398 (MIN: 7.19); B: 9.32817 (MIN: 7.18). (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better. Neural Magic DeepSparse 1.1. A: 85.48; B: 84.90

Cpuminer-Opt

Algorithm: Myriad-Groestl

kH/s, More Is Better. Cpuminer-Opt 3.20.3. A: 42500; B: 42210. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better. Neural Magic DeepSparse 1.1. A: 46.78; B: 47.10

FLAC Audio Encoding

WAV To FLAC

Seconds, Fewer Is Better. FLAC Audio Encoding 1.4. A: 11.73; B: 11.65. (CXX) g++ options: -O3 -fvisibility=hidden -logg -lm

spaCy

Model: en_core_web_trf

tokens/sec, More Is Better. spaCy 3.4.1. A: 969; B: 975

JPEG XL libjxl

Input: PNG - Quality: 80

MP/s, More Is Better. JPEG XL libjxl 0.7. A: 11.44; B: 11.37. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better. Neural Magic DeepSparse 1.1. A: 649.69; B: 645.72

SMHasher

Hash: t1ha2_atonce

MiB/sec, More Is Better. SMHasher 2022-08-22. A: 16567.35; B: 16469.72. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

QuadRay

Scene: 1 - Resolution: 1080p

FPS, More Is Better. QuadRay 2022.05.25. A: 49.15; B: 49.44. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

oneDNN

Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU

ms, Fewer Is Better. oneDNN 2.7. A: 3.68436 (MIN: 2.76); B: 3.70584 (MIN: 2.78). (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Cpuminer-Opt

Algorithm: Deepcoin

kH/s, More Is Better. Cpuminer-Opt 3.20.3. A: 10480; B: 10420. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

SMHasher

Hash: FarmHash32 x86_64 AVX

MiB/sec, More Is Better. SMHasher 2022-08-22. A: 29625.18; B: 29456.09. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

SMHasher

Hash: t1ha0_aes_avx2 x86_64

MiB/sec, More Is Better. SMHasher 2022-08-22. A: 81587.16; B: 81123.62. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

SMHasher

Hash: wyhash

MiB/sec, More Is Better. SMHasher 2022-08-22. A: 24952.00; B: 24812.67. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

SMHasher

Hash: FarmHash128

MiB/sec, More Is Better. SMHasher 2022-08-22. A: 15031.76; B: 14949.14. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

JPEG XL libjxl

Input: JPEG - Quality: 90

MP/s, More Is Better. JPEG XL libjxl 0.7. A: 10.96; B: 10.90. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

SMHasher

Hash: MeowHash x86_64 AES-NI

MiB/sec, More Is Better. SMHasher 2022-08-22. A: 43409.48; B: 43176.20. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

Cpuminer-Opt

Algorithm: Garlicoin

kH/s, More Is Better. Cpuminer-Opt 3.20.3. A: 4639.52; B: 4617.41. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

SMHasher

Hash: fasthash32

MiB/sec, More Is Better. SMHasher 2022-08-22. A: 6990.21; B: 6957.30. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

TensorFlow

Device: CPU - Batch Size: 32 - Model: AlexNet

images/sec, More Is Better. TensorFlow 2.10. A: 165.42; B: 166.20

QuadRay

Scene: 1 - Resolution: 4K

FPS, More Is Better. QuadRay 2022.05.25. A: 12.96; B: 13.02. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream

items/sec, More Is Better. Neural Magic DeepSparse 1.1. A: 54.85; B: 54.60

SMHasher

Hash: Spooky32

MiB/sec, More Is Better. SMHasher 2022-08-22. A: 16441.99; B: 16367.89. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream

ms/batch, Fewer Is Better. Neural Magic DeepSparse 1.1. A: 18.22; B: 18.31

QuadRay

Scene: 2 - Resolution: 1080p

FPS, More Is Better. QuadRay 2022.05.25. A: 13.88; B: 13.94. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream

ms/batch, Fewer Is Better. Neural Magic DeepSparse 1.1. A: 25.65; B: 25.76

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream

items/sec, More Is Better. Neural Magic DeepSparse 1.1. A: 38.97; B: 38.81

TensorFlow

Device: CPU - Batch Size: 256 - Model: AlexNet

images/sec, More Is Better. TensorFlow 2.10. A: 229.53; B: 230.41

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU

ms, Fewer Is Better. oneDNN 2.7. A: 1341.09 (MIN: 1239.39); B: 1346.17 (MIN: 1243.24). (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Cpuminer-Opt

Algorithm: Skeincoin

kH/s, More Is Better. Cpuminer-Opt 3.20.3. A: 134680; B: 134190. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

AOM AV1

Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K

Frames Per Second, More Is Better. AOM AV1 3.5. A: 36.10; B: 36.23. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

TensorFlow

Device: CPU - Batch Size: 64 - Model: AlexNet

images/sec, More Is Better. TensorFlow 2.10. A: 199.61; B: 200.32

AOM AV1

Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p

Frames Per Second, More Is Better. AOM AV1 3.5. A: 49.91; B: 49.74. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU

ms, Fewer Is Better. oneDNN 2.7. A: 2.87135 (MIN: 2.17); B: 2.88067 (MIN: 2.18). (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Cpuminer-Opt

Algorithm: Triple SHA-256, Onecoin

kH/s, More Is Better. Cpuminer-Opt 3.20.3. A: 235830; B: 235080. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

QuadRay

Scene: 3 - Resolution: 4K

FPS, More Is Better. QuadRay 2022.05.25. A: 3.15; B: 3.14. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

TensorFlow

Device: CPU - Batch Size: 32 - Model: GoogLeNet

images/sec, More Is Better. TensorFlow 2.10. A: 89.76; B: 90.04

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU

ms, Fewer Is Better. oneDNN 2.7. A: 1.13062 (MIN: 0.86); B: 1.13382 (MIN: 0.86). (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

AOM AV1

Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p

Frames Per Second, More Is Better. AOM AV1 3.5. A: 18.20; B: 18.25. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

oneDNN

Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU

ms, Fewer Is Better. oneDNN 2.7. A: 1.46205 (MIN: 1.1); B: 1.46602 (MIN: 1.1). (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

QuadRay

Scene: 2 - Resolution: 4K

FPS, More Is Better. QuadRay 2022.05.25. A: 3.70; B: 3.69. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

Y-Cruncher

Pi Digits To Calculate: 1B

Seconds, Fewer Is Better. Y-Cruncher 0.7.10.9513. A: 29.20; B: 29.12

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream

ms/batch, Fewer Is Better. Neural Magic DeepSparse 1.1. A: 8.5304; B: 8.5531

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream

items/sec, More Is Better. Neural Magic DeepSparse 1.1. A: 117.13; B: 116.82

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better. Neural Magic DeepSparse 1.1. A: 6.1219; B: 6.1062

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better. Neural Magic DeepSparse 1.1. A: 53.73; B: 53.60

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better. Neural Magic DeepSparse 1.1. A: 27.84; B: 27.90

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better. Neural Magic DeepSparse 1.1. A: 143.58; B: 143.25

oneDNN

Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU

ms, Fewer Is Better. oneDNN 2.7. A: 0.532325 (MIN: 0.39); B: 0.531154 (MIN: 0.39). (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU

ms, Fewer Is Better. oneDNN 2.7. A: 2613.39 (MIN: 2492.17); B: 2618.97 (MIN: 2498.77). (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Cpuminer-Opt

Algorithm: Magi

kH/s, More Is Better. Cpuminer-Opt 3.20.3. A: 536.39; B: 537.52. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better. Neural Magic DeepSparse 1.1. A: 74.42; B: 74.58

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU

ms, Fewer Is Better. oneDNN 2.7. A: 1342.13 (MIN: 1250.2); B: 1339.39 (MIN: 1240.61). (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

libavif avifenc

Encoder Speed: 2

Seconds, Fewer Is Better. libavif avifenc 0.11. A: 53.78; B: 53.89. (CXX) g++ options: -O3 -fPIC -lm

TensorFlow

Device: CPU - Batch Size: 64 - Model: ResNet-50

images/sec, More Is Better. TensorFlow 2.10. A: 29.61; B: 29.67

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU

ms, Fewer Is Better. oneDNN 2.7. A: 4.25152 (MIN: 3.11); B: 4.24314 (MIN: 3.11). (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream

ms/batch, Fewer Is Better. Neural Magic DeepSparse 1.1. A: 163.89; B: 163.58

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream

items/sec, More Is Better. Neural Magic DeepSparse 1.1. A: 6.1015; B: 6.1128

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU

ms, Fewer Is Better. oneDNN 2.7. A: 4.50304 (MIN: 3.56); B: 4.49511 (MIN: 3.56). (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

TensorFlow

Device: CPU - Batch Size: 32 - Model: ResNet-50

images/sec, More Is Better. TensorFlow 2.10. A: 29.81; B: 29.76

libavif avifenc

Encoder Speed: 0

Seconds, Fewer Is Better. libavif avifenc 0.11. A: 113.39; B: 113.20. (CXX) g++ options: -O3 -fPIC -lm

Neural Magic DeepSparse

Model: CV Detection,YOLOv5s COCO - Scenario: Synchronous Single-Stream

items/sec, More Is Better. Neural Magic DeepSparse 1.1. A: 57.13; B: 57.04

Neural Magic DeepSparse

Model: CV Detection,YOLOv5s COCO - Scenario: Synchronous Single-Stream

ms/batch, Fewer Is Better. Neural Magic DeepSparse 1.1. A: 17.49; B: 17.52

TensorFlow

Device: CPU - Batch Size: 256 - Model: ResNet-50

images/sec, More Is Better. TensorFlow 2.10. A: 29.65; B: 29.61

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better. Neural Magic DeepSparse 1.1. A: 648.55; B: 649.40

oneDNN

Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU

ms, Fewer Is Better. oneDNN 2.7. A: 2622.21 (MIN: 2493.44); B: 2618.84 (MIN: 2492.95). (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU

ms, Fewer Is Better. oneDNN 2.7. A: 9.47087 (MIN: 7.28); B: 9.45878 (MIN: 7.27). (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

TensorFlow

Device: CPU - Batch Size: 64 - Model: GoogLeNet

images/sec, More Is Better. TensorFlow 2.10. A: 89.21; B: 89.32

TensorFlow

Device: CPU - Batch Size: 512 - Model: AlexNet

images/sec, More Is Better. TensorFlow 2.10. A: 231.51; B: 231.79

AOM AV1

Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K

Frames Per Second, More Is Better. AOM AV1 3.5. A: 8.70; B: 8.69. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU

ms, Fewer Is Better. oneDNN 2.7. A: 0.942884 (MIN: 0.72); B: 0.943897 (MIN: 0.72). (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

TensorFlow

Device: CPU - Batch Size: 512 - Model: GoogLeNet

images/sec, More Is Better. TensorFlow 2.10. A: 88.99; B: 89.08

AOM AV1

Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K

Frames Per Second, More Is Better. AOM AV1 3.5. A: 70.93; B: 71.00. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

oneDNN

Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU

ms, Fewer Is Better. oneDNN 2.7. A: 3.03247 (MIN: 2.29); B: 3.02950 (MIN: 2.29). (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

TensorFlow

Device: CPU - Batch Size: 256 - Model: GoogLeNet

images/sec, More Is Better. TensorFlow 2.10. A: 88.97; B: 88.89

TensorFlow

Device: CPU - Batch Size: 16 - Model: ResNet-50

images/sec, More Is Better. TensorFlow 2.10. A: 29.65; B: 29.67

oneDNN

Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU

ms, Fewer Is Better. oneDNN 2.7. A: 5.36365 (MIN: 4.16); B: 5.36040 (MIN: 4.17). (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

TensorFlow

Device: CPU - Batch Size: 16 - Model: GoogLeNet

images/sec, More Is Better. TensorFlow 2.10. A: 90.58; B: 90.62

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream

items/sec, More Is Better. Neural Magic DeepSparse 1.1. A: 6.0820; B: 6.0845

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream

ms/batch, Fewer Is Better. Neural Magic DeepSparse 1.1. A: 164.41; B: 164.34

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better. Neural Magic DeepSparse 1.1. A: 43.38; B: 43.38

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better. Neural Magic DeepSparse 1.1. A: 92.19; B: 92.17

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream

items/sec, More Is Better. Neural Magic DeepSparse 1.1. A: 76.29; B: 76.30

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream

ms/batch, Fewer Is Better. Neural Magic DeepSparse 1.1. A: 13.10; B: 13.10

oneDNN

Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU

ms, Fewer Is Better. oneDNN 2.7. A: 2618.14 (MIN: 2504.65); B: 2618.26 (MIN: 2500.17). (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

AOM AV1

Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K

Frames Per Second, More Is Better. AOM AV1 3.5. A: 15.83; B: 15.83. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

AOM AV1

Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4K

Frames Per Second, More Is Better. AOM AV1 3.5. A: 0.26; B: 0.26. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

SMHasher

Hash: MeowHash x86_64 AES-NI

cycles/hash, Fewer Is Better. SMHasher 2022-08-22. A: 56.05; B: 56.35. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

SMHasher

Hash: t1ha0_aes_avx2 x86_64

cycles/hash, Fewer Is Better. SMHasher 2022-08-22. A: 25.55; B: 25.77. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

SMHasher

Hash: FarmHash32 x86_64 AVX

cycles/hash, Fewer Is Better. SMHasher 2022-08-22. A: 33.00; B: 33.10. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

SMHasher

Hash: t1ha2_atonce

cycles/hash, Fewer Is Better. SMHasher 2022-08-22. A: 25.84; B: 25.85. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

SMHasher

Hash: FarmHash128

cycles/hash, Fewer Is Better. SMHasher 2022-08-22. A: 58.62; B: 58.98. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

SMHasher

Hash: fasthash32

cycles/hash, Fewer Is Better. SMHasher 2022-08-22. A: 27.77; B: 27.92. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

SMHasher

Hash: Spooky32

cycles/hash, Fewer Is Better. SMHasher 2022-08-22. A: 33.91; B: 34.11. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

SMHasher

Hash: SHA3-256

cycles/hash, Fewer Is Better. SMHasher 2022-08-22. A: 2314.14; B: 2340.81. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

SMHasher

Hash: wyhash

cycles/hash, Fewer Is Better. SMHasher 2022-08-22. A: 18.00; B: 18.23. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects


Phoronix Test Suite v10.8.4