Ryzen 7 7700X

Tests for a future article. AMD Ryzen 7 7700X 8-Core testing with an ASRock X670E PG Lightning (1.11 BIOS) and an XFX AMD Radeon RX 6400 4GB on Ubuntu 22.04 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2210280-PTS-RYZEN77736&export=txt&grr&rdt&rro.
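The per-test sections below list each metric for runs B and A side by side, with a note on whether higher or lower is better. As a quick illustration (not part of the exported data), here is a minimal Python sketch for turning a pair of A/B values from this report into a relative difference, flipping the sign for "Fewer Is Better" metrics; the three sample tuples are copied from results shown later in this file:

# Minimal sketch: compare runs A and B for a few results copied from this report.
# "higher_is_better" controls whether a positive delta means run A is ahead.

def relative_delta(a, b, higher_is_better=True):
    """Return run A's advantage over run B as a percentage."""
    delta = (a - b) / b * 100.0
    return delta if higher_is_better else -delta

results = [
    # (name, run A, run B, higher is better) -- values taken from the charts below
    ("Xmrig: Monero - 1M (H/s)", 7555.0, 7050.3, True),
    ("OpenRadioss: Bumper Beam (Seconds)", 120.80, 136.59, False),
    ("AOM AV1: Speed 10 Realtime - Bosphorus 1080p (FPS)", 202.0, 183.6, True),
]

for name, a, b, higher in results:
    print(f"{name}: A is {relative_delta(a, b, higher):+.1f}% vs. B")

With these three samples, run A comes out ahead by roughly 7%, 12%, and 10% respectively.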

System configuration for runs A and B:

Processor: AMD Ryzen 7 7700X 8-Core @ 5.57GHz (8 Cores / 16 Threads)
Motherboard: ASRock X670E PG Lightning (1.11 BIOS)
Chipset: AMD Device 14d8
Memory: 32GB
Disk: 1000GB Western Digital WDS100T1X0E-00AFY0
Graphics: XFX AMD Radeon RX 6400 4GB (2320/1000MHz)
Audio: AMD Navi 21 HDMI Audio
Monitor: ASUS MG28U
Network: Realtek RTL8125 2.5GbE
OS: Ubuntu 22.04
Kernel: 5.17.0-1013-oem (x86_64)
Desktop: GNOME Shell 42.2
Display Server: X Server 1.21.1.3 + Wayland
OpenGL: 4.6 Mesa 22.2.0-devel (git-a9610ab740) (LLVM 13.0.1 DRM 3.44)
Vulkan: 1.3.219
Compiler: GCC 11.2.0
File-System: ext4
Screen Resolution: 3840x2160

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: amd-pstate schedutil (Boost: Enabled) - CPU Microcode: 0xa601203
Python Details: Python 3.10.4
Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Overview of all benchmark results for runs A and B; the individual results for both runs are listed per test below.

TensorFlow

Device: CPU - Batch Size: 256 - Model: ResNet-50

TensorFlow 2.10 - images/sec, More Is Better
B: 29.61
A: 29.65

TensorFlow

Device: CPU - Batch Size: 512 - Model: GoogLeNet

TensorFlow 2.10 - images/sec, More Is Better
B: 89.08
A: 88.99

OpenRadioss

Model: INIVOL and Fluid Structure Interaction Drop Container

OpenRadioss 2022.10.13 - Seconds, Fewer Is Better
B: 565.63
A: 526.03

SMHasher

Hash: SHA3-256

SMHasher 2022-08-22 - cycles/hash, Fewer Is Better
B: 2340.81
A: 2314.14
1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

SMHasher

Hash: SHA3-256

SMHasher 2022-08-22 - MiB/sec, More Is Better
B: 170.07
A: 171.75
1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

TensorFlow

Device: CPU - Batch Size: 256 - Model: GoogLeNet

TensorFlow 2.10 - images/sec, More Is Better
B: 88.89
A: 88.97

JPEG XL libjxl

Input: JPEG - Quality: 100

JPEG XL libjxl 0.7 - MP/s, More Is Better
B: 0.82
A: 0.78
1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

OpenRadioss

Model: Bird Strike on Windshield

OpenRadioss 2022.10.13 - Seconds, Fewer Is Better
B: 277.84
A: 252.55

JPEG XL libjxl

Input: PNG - Quality: 100

JPEG XL libjxl 0.7 - MP/s, More Is Better
B: 0.91
A: 0.90
1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

TensorFlow

Device: CPU - Batch Size: 512 - Model: AlexNet

TensorFlow 2.10 - images/sec, More Is Better
B: 231.79
A: 231.51

TensorFlow

Device: CPU - Batch Size: 64 - Model: ResNet-50

TensorFlow 2.10 - images/sec, More Is Better
B: 29.67
A: 29.61

Xmrig

Variant: Monero - Hash Count: 1M

Xmrig 6.18.1 - H/s, More Is Better
B: 7050.3
A: 7555.0
1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

OpenRadioss

Model: Rubber O-Ring Seal Installation

OpenRadioss 2022.10.13 - Seconds, Fewer Is Better
B: 143.60
A: 119.08

OpenRadioss

Model: Bumper Beam

OpenRadioss 2022.10.13 - Seconds, Fewer Is Better
B: 136.59
A: 120.80

TensorFlow

Device: CPU - Batch Size: 256 - Model: AlexNet

TensorFlow 2.10 - images/sec, More Is Better
B: 230.41
A: 229.53

TensorFlow

Device: CPU - Batch Size: 32 - Model: ResNet-50

TensorFlow 2.10 - images/sec, More Is Better
B: 29.76
A: 29.81

libavif avifenc

Encoder Speed: 0

libavif avifenc 0.11 - Seconds, Fewer Is Better
B: 113.20
A: 113.39
1. (CXX) g++ options: -O3 -fPIC -lm

JPEG XL libjxl

Input: JPEG - Quality: 80

JPEG XL libjxl 0.7 - MP/s, More Is Better
B: 11.03
A: 11.12
1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

Xmrig

Variant: Wownero - Hash Count: 1M

Xmrig 6.18.1 - H/s, More Is Better
B: 9558.7
A: 9679.3
1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

JPEG XL libjxl

Input: PNG - Quality: 80

JPEG XL libjxl 0.7 - MP/s, More Is Better
B: 11.37
A: 11.44
1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

OpenRadioss

Model: Cell Phone Drop Test

OpenRadioss 2022.10.13 - Seconds, Fewer Is Better
B: 93.67
A: 82.66

AOM AV1

Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K

AOM AV1 3.5 - Frames Per Second, More Is Better
B: 8.69
A: 8.70
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

JPEG XL libjxl

Input: JPEG - Quality: 90

JPEG XL libjxl 0.7 - MP/s, More Is Better
B: 10.90
A: 10.96
1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

JPEG XL libjxl

Input: PNG - Quality: 90

JPEG XL libjxl 0.7 - MP/s, More Is Better
B: 11.19
A: 11.27
1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

TensorFlow

Device: CPU - Batch Size: 64 - Model: GoogLeNet

TensorFlow 2.10 - images/sec, More Is Better
B: 89.32
A: 89.21

oneDNN

Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU

oneDNN 2.7 - ms, Fewer Is Better
B: 2618.84 (MIN: 2492.95)
A: 2622.21 (MIN: 2493.44)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 2.7 - ms, Fewer Is Better
B: 2618.97 (MIN: 2498.77)
A: 2613.39 (MIN: 2492.17)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU

oneDNN 2.7 - ms, Fewer Is Better
B: 2618.26 (MIN: 2500.17)
A: 2618.14 (MIN: 2504.65)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

AOM AV1

Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4K

AOM AV1 3.5 - Frames Per Second, More Is Better
B: 0.26
A: 0.26
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 2.7 - ms, Fewer Is Better
B: 1339.39 (MIN: 1240.61)
A: 1342.13 (MIN: 1250.2)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU

oneDNN 2.7 - ms, Fewer Is Better
B: 1346.17 (MIN: 1243.24)
A: 1341.09 (MIN: 1239.39)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU

oneDNN 2.7 - ms, Fewer Is Better
B: 1339.00 (MIN: 1239.2)
A: 1352.66 (MIN: 1252.52)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

spaCy

Model: en_core_web_trf

spaCy 3.4.1 - tokens/sec, More Is Better
B: 975
A: 969

spaCy

Model: en_core_web_lg

spaCy 3.4.1 - tokens/sec, More Is Better
B: 18445
A: 18697

TensorFlow

Device: CPU - Batch Size: 16 - Model: ResNet-50

TensorFlow 2.10 - images/sec, More Is Better
B: 29.67
A: 29.65

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.1 - ms/batch, Fewer Is Better
B: 645.72
A: 649.69

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.1 - items/sec, More Is Better
B: 6.1783
A: 6.1328

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.1 - ms/batch, Fewer Is Better
B: 84.90
A: 85.48

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.1 - items/sec, More Is Better
B: 47.10
A: 46.78

libavif avifenc

Encoder Speed: 2

libavif avifenc 0.11 - Seconds, Fewer Is Better
B: 53.89
A: 53.78
1. (CXX) g++ options: -O3 -fPIC -lm

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.1 - ms/batch, Fewer Is Better
B: 649.40
A: 648.55

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.1 - items/sec, More Is Better
B: 6.1062
A: 6.1219

AOM AV1

Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K

AOM AV1 3.5 - Frames Per Second, More Is Better
B: 15.83
A: 15.83
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.1 - ms/batch, Fewer Is Better
B: 74.58
A: 74.42

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.1 - items/sec, More Is Better
B: 53.60
A: 53.73

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.1 - ms/batch, Fewer Is Better
B: 163.58
A: 163.89

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.1 - items/sec, More Is Better
B: 6.1128
A: 6.1015

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.1 - ms/batch, Fewer Is Better
B: 164.34
A: 164.41

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.1 - items/sec, More Is Better
B: 6.0845
A: 6.0820

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.1 - ms/batch, Fewer Is Better
B: 25.76
A: 25.65

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.1 - items/sec, More Is Better
B: 38.81
A: 38.97

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.1 - ms/batch, Fewer Is Better
B: 43.38
A: 43.38

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.1 - items/sec, More Is Better
B: 92.17
A: 92.19

TensorFlow

Device: CPU - Batch Size: 32 - Model: GoogLeNet

TensorFlow 2.10 - images/sec, More Is Better
B: 90.04
A: 89.76

JPEG XL Decoding libjxl

CPU Threads: 1

JPEG XL Decoding libjxl 0.7 - MP/s, More Is Better
B: 65.81
A: 64.73

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.1 - ms/batch, Fewer Is Better
B: 18.31
A: 18.22

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.1 - items/sec, More Is Better
B: 54.60
A: 54.85

Neural Magic DeepSparse

Model: CV Detection,YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.1 - ms/batch, Fewer Is Better
B: 64.85
A: 63.35

Neural Magic DeepSparse

Model: CV Detection,YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.1 - items/sec, More Is Better
B: 61.62
A: 63.07

AOM AV1

Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p

AOM AV1 3.5 - Frames Per Second, More Is Better
B: 18.25
A: 18.20
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.1 - ms/batch, Fewer Is Better
B: 27.90
A: 27.84

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.1 - items/sec, More Is Better
B: 143.25
A: 143.58

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.1 - ms/batch, Fewer Is Better
B: 13.10
A: 13.10

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.1 - items/sec, More Is Better
B: 76.30
A: 76.29

Neural Magic DeepSparse

Model: CV Detection,YOLOv5s COCO - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.1 - ms/batch, Fewer Is Better
B: 17.52
A: 17.49

Neural Magic DeepSparse

Model: CV Detection,YOLOv5s COCO - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.1 - items/sec, More Is Better
B: 57.04
A: 57.13

TensorFlow

Device: CPU - Batch Size: 64 - Model: AlexNet

TensorFlow 2.10 - images/sec, More Is Better
B: 200.32
A: 199.61

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.1 - ms/batch, Fewer Is Better
B: 8.5531
A: 8.5304

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.1 - items/sec, More Is Better
B: 116.82
A: 117.13

Y-Cruncher

Pi Digits To Calculate: 1B

Y-Cruncher 0.7.10.9513 - Seconds, Fewer Is Better
B: 29.12
A: 29.20

Cpuminer-Opt

Algorithm: Deepcoin

Cpuminer-Opt 3.20.3 - kH/s, More Is Better
B: 10420
A: 10480
1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Cpuminer-Opt

Algorithm: Magi

Cpuminer-Opt 3.20.3 - kH/s, More Is Better
B: 537.52
A: 536.39
1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Cpuminer-Opt

Algorithm: Quad SHA-256, Pyrite

Cpuminer-Opt 3.20.3 - kH/s, More Is Better
B: 159450
A: 162810
1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Cpuminer-Opt

Algorithm: scrypt

Cpuminer-Opt 3.20.3 - kH/s, More Is Better
B: 318.16
A: 315.65
1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Cpuminer-Opt

Algorithm: Triple SHA-256, Onecoin

Cpuminer-Opt 3.20.3 - kH/s, More Is Better
B: 235080
A: 235830
1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Cpuminer-Opt

Algorithm: LBC, LBRY Credits

Cpuminer-Opt 3.20.3 - kH/s, More Is Better
B: 75300
A: 72970
1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Cpuminer-Opt

Algorithm: Ringcoin

Cpuminer-Opt 3.20.3 - kH/s, More Is Better
B: 2252.00
A: 2220.83
1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Cpuminer-Opt

Algorithm: Skeincoin

Cpuminer-Opt 3.20.3 - kH/s, More Is Better
B: 134190
A: 134680
1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Cpuminer-Opt

Algorithm: Myriad-Groestl

Cpuminer-Opt 3.20.3 - kH/s, More Is Better
B: 42210
A: 42500
1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Cpuminer-Opt

Algorithm: x25x

Cpuminer-Opt 3.20.3 - kH/s, More Is Better
B: 563.72
A: 570.19
1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Cpuminer-Opt

Algorithm: Garlicoin

Cpuminer-Opt 3.20.3 - kH/s, More Is Better
B: 4617.41
A: 4639.52
1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Cpuminer-Opt

Algorithm: Blake-2 S

Cpuminer-Opt 3.20.3 - kH/s, More Is Better
B: 889870
A: 843890
1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

AOM AV1

Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p

AOM AV1 3.5 - Frames Per Second, More Is Better
B: 0.75
A: 0.76
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

TensorFlow

Device: CPU - Batch Size: 32 - Model: AlexNet

TensorFlow 2.10 - images/sec, More Is Better
B: 166.20
A: 165.42

TensorFlow

Device: CPU - Batch Size: 16 - Model: GoogLeNet

TensorFlow 2.10 - images/sec, More Is Better
B: 90.62
A: 90.58

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 2.7 - ms, Fewer Is Better
B: 9.32817 (MIN: 7.18)
A: 9.39398 (MIN: 7.19)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU

oneDNN 2.7 - ms, Fewer Is Better
B: 6.10103 (MIN: 4.03)
A: 5.85680 (MIN: 4.04)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU

oneDNN 2.7 - ms, Fewer Is Better
B: 0.943897 (MIN: 0.72)
A: 0.942884 (MIN: 0.72)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

JPEG XL Decoding libjxl

CPU Threads: All

JPEG XL Decoding libjxl 0.7 - MP/s, More Is Better
B: 318.09
A: 293.80

QuadRay

Scene: 5 - Resolution: 4K

QuadRay 2022.05.25 - FPS, More Is Better
B: 0.89
A: 0.96
1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

QuadRay

Scene: 2 - Resolution: 4K

QuadRay 2022.05.25 - FPS, More Is Better
B: 3.69
A: 3.70
1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

QuadRay

Scene: 3 - Resolution: 4K

QuadRay 2022.05.25 - FPS, More Is Better
B: 3.14
A: 3.15
1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

QuadRay

Scene: 1 - Resolution: 4K

QuadRay 2022.05.25 - FPS, More Is Better
B: 13.02
A: 12.96
1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

QuadRay

Scene: 5 - Resolution: 1080p

QuadRay 2022.05.25 - FPS, More Is Better
B: 3.62
A: 3.90
1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

QuadRay

Scene: 3 - Resolution: 1080p

QuadRay 2022.05.25 - FPS, More Is Better
B: 12.00
A: 11.69
1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

QuadRay

Scene: 2 - Resolution: 1080p

QuadRay 2022.05.25 - FPS, More Is Better
B: 13.94
A: 13.88
1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

QuadRay

Scene: 1 - Resolution: 1080p

QuadRay 2022.05.25 - FPS, More Is Better
B: 49.44
A: 49.15
1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

AOM AV1

Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K

AOM AV1 3.5 - Frames Per Second, More Is Better
B: 36.23
A: 36.10
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

TensorFlow

Device: CPU - Batch Size: 16 - Model: AlexNet

TensorFlow 2.10 - images/sec, More Is Better
B: 122.86
A: 121.96

AOM AV1

Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p

AOM AV1 3.5 - Frames Per Second, More Is Better
B: 49.74
A: 49.91
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

oneDNN

Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU

oneDNN 2.7 - ms, Fewer Is Better
B: 3.70584 (MIN: 2.78)
A: 3.68436 (MIN: 2.76)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 2.7 - ms, Fewer Is Better
B: 1.46602 (MIN: 1.1)
A: 1.46205 (MIN: 1.1)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU

oneDNN 2.7 - ms, Fewer Is Better
B: 0.708957 (MIN: 0.52)
A: 0.702154 (MIN: 0.53)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Y-Cruncher

Pi Digits To Calculate: 500M

Y-Cruncher 0.7.10.9513 - Seconds, Fewer Is Better
B: 13.17
A: 13.27

AOM AV1

Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K

AOM AV1 3.5 - Frames Per Second, More Is Better
B: 51.13
A: 50.44
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

oneDNN

Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 2.7 - ms, Fewer Is Better
B: 0.531154 (MIN: 0.39)
A: 0.532325 (MIN: 0.39)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU

oneDNN 2.7 - ms, Fewer Is Better
B: 1.63323 (MIN: 1.19)
A: 1.62133 (MIN: 1.18)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU

oneDNN 2.7 - ms, Fewer Is Better
B: 0.318917 (MIN: 0.23)
A: 0.315521 (MIN: 0.23)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

FLAC Audio Encoding

WAV To FLAC

FLAC Audio Encoding 1.4 - Seconds, Fewer Is Better
B: 11.65
A: 11.73
1. (CXX) g++ options: -O3 -fvisibility=hidden -logg -lm

SMHasher

Hash: FarmHash128

SMHasher 2022-08-22 - cycles/hash, Fewer Is Better
B: 58.98
A: 58.62
1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

SMHasher

Hash: FarmHash128

SMHasher 2022-08-22 - MiB/sec, More Is Better
B: 14949.14
A: 15031.76
1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

AOM AV1

Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K

AOM AV1 3.5 - Frames Per Second, More Is Better
B: 69.48
A: 68.68
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

SMHasher

Hash: MeowHash x86_64 AES-NI

SMHasher 2022-08-22 - cycles/hash, Fewer Is Better
B: 56.35
A: 56.05
1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

SMHasher

Hash: MeowHash x86_64 AES-NI

SMHasher 2022-08-22 - MiB/sec, More Is Better
B: 43176.20
A: 43409.48
1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

AOM AV1

Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K

AOM AV1 3.5 - Frames Per Second, More Is Better
B: 71.00
A: 70.93
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

oneDNN

Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU

oneDNN 2.7 - ms, Fewer Is Better
B: 5.36040 (MIN: 4.17)
A: 5.36365 (MIN: 4.16)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 2.7 - ms, Fewer Is Better
B: 3.02950 (MIN: 2.29)
A: 3.03247 (MIN: 2.29)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU

oneDNN 2.7 - ms, Fewer Is Better
B: 1.20283 (MIN: 0.86)
A: 1.18542 (MIN: 0.86)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

AOM AV1

Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p

AOM AV1 3.5 - Frames Per Second, More Is Better
B: 64.53
A: 71.08
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

libavif avifenc

Encoder Speed: 6, Lossless

libavif avifenc 0.11 - Seconds, Fewer Is Better
B: 8.018
A: 8.133
1. (CXX) g++ options: -O3 -fPIC -lm

SMHasher

Hash: Spooky32

SMHasher 2022-08-22 - cycles/hash, Fewer Is Better
B: 34.11
A: 33.91
1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

SMHasher

Hash: Spooky32

SMHasher 2022-08-22 - MiB/sec, More Is Better
B: 16367.89
A: 16441.99
1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU

oneDNN 2.7 - ms, Fewer Is Better
B: 9.45878 (MIN: 7.27)
A: 9.47087 (MIN: 7.28)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU

oneDNN 2.7 - ms, Fewer Is Better
B: 9.13721 (MIN: 7.1)
A: 8.96869 (MIN: 7.11)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 2.7 - ms, Fewer Is Better
B: 4.24314 (MIN: 3.11)
A: 4.25152 (MIN: 3.11)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

SMHasher

Hash: FarmHash32 x86_64 AVX

SMHasher 2022-08-22 - cycles/hash, Fewer Is Better
B: 33.10
A: 33.00
1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

SMHasher

Hash: FarmHash32 x86_64 AVX

SMHasher 2022-08-22 - MiB/sec, More Is Better
B: 29456.09
A: 29625.18
1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

SMHasher

Hash: fasthash32

SMHasher 2022-08-22 - cycles/hash, Fewer Is Better
B: 27.92
A: 27.77
1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

SMHasher

Hash: fasthash32

SMHasher 2022-08-22 - MiB/sec, More Is Better
B: 6957.30
A: 6990.21
1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

libavif avifenc

Encoder Speed: 6

libavif avifenc 0.11 - Seconds, Fewer Is Better
B: 5.291
A: 5.333
1. (CXX) g++ options: -O3 -fPIC -lm

SMHasher

Hash: t1ha2_atonce

SMHasher 2022-08-22 - cycles/hash, Fewer Is Better
B: 25.85
A: 25.84
1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

SMHasher

Hash: t1ha2_atonce

SMHasher 2022-08-22 - MiB/sec, More Is Better
B: 16469.72
A: 16567.35
1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

SMHasher

Hash: t1ha0_aes_avx2 x86_64

SMHasher 2022-08-22 - cycles/hash, Fewer Is Better
B: 25.77
A: 25.55
1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

SMHasher

Hash: t1ha0_aes_avx2 x86_64

SMHasher 2022-08-22 - MiB/sec, More Is Better
B: 81123.62
A: 81587.16
1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

AOM AV1

Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p

AOM AV1 3.5 - Frames Per Second, More Is Better
B: 139.06
A: 133.52
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

libavif avifenc

Encoder Speed: 10, Lossless

libavif avifenc 0.11 - Seconds, Fewer Is Better
B: 4.141
A: 4.016
1. (CXX) g++ options: -O3 -fPIC -lm

SMHasher

Hash: wyhash

SMHasher 2022-08-22 - cycles/hash, Fewer Is Better
B: 18.23
A: 18.00
1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

SMHasher

Hash: wyhash

SMHasher 2022-08-22 - MiB/sec, More Is Better
B: 24812.67
A: 24952.00
1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

AOM AV1

Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p

AOM AV1 3.5 - Frames Per Second, More Is Better
B: 176.02
A: 173.63
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

AOM AV1

Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p

AOM AV1 3.5 - Frames Per Second, More Is Better
B: 183.6
A: 202.0
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU

oneDNN 2.7 - ms, Fewer Is Better
B: 4.49511 (MIN: 3.56)
A: 4.50304 (MIN: 3.56)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 2.7 - ms, Fewer Is Better
B: 2.88067 (MIN: 2.18)
A: 2.87135 (MIN: 2.17)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU

oneDNN 2.7 - ms, Fewer Is Better
B: 1.13382 (MIN: 0.86)
A: 1.13062 (MIN: 0.86)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl


Phoronix Test Suite v10.8.5