Ryzen 7 7700X tests for a future article: AMD Ryzen 7 7700X 8-Core testing with an ASRock X670E PG Lightning motherboard (1.11 BIOS) and an XFX AMD Radeon RX 6400 4GB graphics card on Ubuntu 22.04, via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2210280-PTS-RYZEN77736&grr&rdt
Ryzen 7 7700X System Details (runs A and B, identical configuration):

Processor: AMD Ryzen 7 7700X 8-Core @ 5.57GHz (8 Cores / 16 Threads)
Motherboard: ASRock X670E PG Lightning (1.11 BIOS)
Chipset: AMD Device 14d8
Memory: 32GB
Disk: 1000GB Western Digital WDS100T1X0E-00AFY0
Graphics: XFX AMD Radeon RX 6400 4GB (2320/1000MHz)
Audio: AMD Navi 21 HDMI Audio
Monitor: ASUS MG28U
Network: Realtek RTL8125 2.5GbE
OS: Ubuntu 22.04
Kernel: 5.17.0-1013-oem (x86_64)
Desktop: GNOME Shell 42.2
Display Server: X Server 1.21.1.3 + Wayland
OpenGL: 4.6 Mesa 22.2.0-devel (git-a9610ab740) (LLVM 13.0.1 DRM 3.44)
Vulkan: 1.3.219
Compiler: GCC 11.2.0
File-System: ext4
Screen Resolution: 3840x2160

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: amd-pstate schedutil (Boost: Enabled); CPU Microcode: 0xa601203
Python Details: Python 3.10.4
Security Details: itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling; srbds: Not affected; tsx_async_abort: Not affected
Ryzen 7 7700X Results Overview

Tests (in order): tensorflow: CPU - 256 - ResNet-50 tensorflow: CPU - 512 - GoogLeNet openradioss: INIVOL and Fluid Structure Interaction Drop Container smhasher: SHA3-256 smhasher: SHA3-256 tensorflow: CPU - 256 - GoogLeNet jpegxl: JPEG - 100 openradioss: Bird Strike on Windshield jpegxl: PNG - 100 tensorflow: CPU - 512 - AlexNet tensorflow: CPU - 64 - ResNet-50 xmrig: Monero - 1M openradioss: Rubber O-Ring Seal Installation openradioss: Bumper Beam tensorflow: CPU - 256 - AlexNet tensorflow: CPU - 32 - ResNet-50 avifenc: 0 jpegxl: JPEG - 80 xmrig: Wownero - 1M jpegxl: PNG - 80 openradioss: Cell Phone Drop Test aom-av1: Speed 4 Two-Pass - Bosphorus 4K jpegxl: JPEG - 90 jpegxl: PNG - 90 tensorflow: CPU - 64 - GoogLeNet onednn: Recurrent Neural Network Training - u8s8f32 - CPU onednn: Recurrent Neural Network Training - bf16bf16bf16 - CPU onednn: Recurrent Neural Network Training - f32 - CPU aom-av1: Speed 0 Two-Pass - Bosphorus 4K onednn: Recurrent Neural Network Inference - bf16bf16bf16 - CPU onednn: Recurrent Neural Network Inference - f32 - CPU onednn: Recurrent Neural Network Inference - u8s8f32 - CPU spacy: en_core_web_trf spacy: en_core_web_lg tensorflow: CPU - 16 - ResNet-50 deepsparse: NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream deepsparse: NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream deepsparse: NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream deepsparse: NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream avifenc: 2 deepsparse: NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream deepsparse: NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream aom-av1: Speed 6 Two-Pass - Bosphorus 4K deepsparse: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream deepsparse: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream deepsparse: NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream deepsparse: NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream deepsparse: NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream deepsparse: NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream deepsparse: NLP Text Classification, BERT base uncased SST2 - Synchronous Single-Stream deepsparse: NLP Text Classification, BERT base uncased SST2 - Synchronous Single-Stream deepsparse: NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream deepsparse: NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream tensorflow: CPU - 32 - GoogLeNet jpegxl-decode: 1 deepsparse: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Synchronous Single-Stream deepsparse: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Synchronous Single-Stream deepsparse: CV Detection,YOLOv5s COCO - Asynchronous Multi-Stream deepsparse: CV Detection,YOLOv5s COCO - Asynchronous Multi-Stream aom-av1: Speed 4 Two-Pass - Bosphorus 1080p deepsparse: CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream deepsparse: CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream deepsparse: NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream deepsparse: NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream deepsparse: CV Detection,YOLOv5s COCO - Synchronous Single-Stream deepsparse: CV Detection,YOLOv5s COCO - Synchronous Single-Stream tensorflow: CPU - 64 - AlexNet deepsparse: CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream deepsparse: CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream y-cruncher: 1B cpuminer-opt: Deepcoin cpuminer-opt: Magi cpuminer-opt: Quad SHA-256, Pyrite cpuminer-opt: scrypt cpuminer-opt: Triple SHA-256, Onecoin cpuminer-opt: LBC, LBRY Credits cpuminer-opt: Ringcoin cpuminer-opt: Skeincoin cpuminer-opt: Myriad-Groestl cpuminer-opt: x25x cpuminer-opt: Garlicoin cpuminer-opt: Blake-2 S aom-av1: Speed 0 Two-Pass - Bosphorus 1080p tensorflow: CPU - 32 - AlexNet tensorflow: CPU - 16 - GoogLeNet onednn: Deconvolution Batch shapes_1d - bf16bf16bf16 - CPU onednn: Deconvolution Batch shapes_1d - f32 - CPU onednn: Deconvolution Batch shapes_1d - u8s8f32 - CPU jpegxl-decode: All quadray: 5 - 4K quadray: 2 - 4K quadray: 3 - 4K quadray: 1 - 4K quadray: 5 - 1080p quadray: 3 - 1080p quadray: 2 - 1080p quadray: 1 - 1080p aom-av1: Speed 6 Realtime - Bosphorus 4K tensorflow: CPU - 16 - AlexNet aom-av1: Speed 6 Two-Pass - Bosphorus 1080p onednn: IP Shapes 1D - f32 - CPU onednn: IP Shapes 1D - bf16bf16bf16 - CPU onednn: IP Shapes 1D - u8s8f32 - CPU y-cruncher: 500M aom-av1: Speed 8 Realtime - Bosphorus 4K onednn: Matrix Multiply Batch Shapes Transformer - bf16bf16bf16 - CPU onednn: Matrix Multiply Batch Shapes Transformer - f32 - CPU onednn: Matrix Multiply Batch Shapes Transformer - u8s8f32 - CPU encode-flac: WAV To FLAC smhasher: FarmHash128 smhasher: FarmHash128 aom-av1: Speed 9 Realtime - Bosphorus 4K smhasher: MeowHash x86_64 AES-NI smhasher: MeowHash x86_64 AES-NI aom-av1: Speed 10 Realtime - Bosphorus 4K onednn: IP Shapes 3D - f32 - CPU onednn: IP Shapes 3D - bf16bf16bf16 - CPU onednn: IP Shapes 3D - u8s8f32 - CPU aom-av1: Speed 6 Realtime - Bosphorus 1080p avifenc: 6, Lossless smhasher: Spooky32 smhasher: Spooky32 onednn: Convolution Batch Shapes Auto - f32 - CPU onednn: Convolution Batch Shapes Auto - u8s8f32 - CPU onednn: Convolution Batch Shapes Auto - bf16bf16bf16 - CPU smhasher: FarmHash32 x86_64 AVX smhasher: FarmHash32 x86_64 AVX smhasher: fasthash32 smhasher: fasthash32 avifenc: 6 smhasher: t1ha2_atonce smhasher: t1ha2_atonce smhasher: t1ha0_aes_avx2 x86_64 smhasher: t1ha0_aes_avx2 x86_64 aom-av1: Speed 8 Realtime - Bosphorus 1080p avifenc: 10, Lossless smhasher: wyhash smhasher: wyhash aom-av1: Speed 9 Realtime - Bosphorus 1080p aom-av1: Speed 10 Realtime - Bosphorus 1080p onednn: Deconvolution Batch shapes_3d - f32 - CPU onednn: Deconvolution Batch shapes_3d - bf16bf16bf16 - CPU onednn: Deconvolution Batch shapes_3d - u8s8f32 - CPU

Run A values (same order as the test list): 29.65 88.99 526.03 2314.136 171.75 88.97 0.78 252.55 0.9 231.51 29.61 7555 119.08 120.8 229.53 29.81 113.385 11.12 9679.3 11.44 82.66 8.7 10.96 11.27 89.21 2622.21 2613.39 2618.14 0.26 1342.13 1341.09 1352.66 969 18697 29.65 649.6877 6.1328 85.4797 46.7825 53.778 648.5471 6.1219 15.83 74.4238 53.7261 163.8857 6.1015 164.4084 6.082 25.6505 38.9737 43.3761 92.1859 89.76 64.73 18.2242 54.8484 63.3466 63.0672 18.2 27.8418 143.5754 13.1011 76.2889 17.4926 57.1294 199.61 8.5304 117.1267 29.196 10480 536.39 162810 315.65 235830 72970 2220.83 134680 42500 570.19 4639.52 843890 0.76 165.42 90.58 9.39398 5.8568 0.942884 293.8 0.96 3.7 3.15 12.96 3.9 11.69 13.88 49.15 36.1 121.96 49.91 3.68436 1.46205 0.702154 13.27 50.44 0.532325 1.62133 0.315521 11.725 58.621 15031.76 68.68 56.05 43409.48 70.93 5.36365 3.03247 1.18542 71.08 8.133 33.911 16441.99 9.47087 8.96869 4.25152 32.997 29625.18 27.767 6990.21 5.333 25.839 16567.35 25.548 81587.16 133.52 4.016 18.003 24952 173.63 202 4.50304 2.87135 1.13062

Run B values (same order as the test list): 29.61 89.08 565.63 2340.808 170.07 88.89 0.82 277.84 0.91 231.79 29.67 7050.3 143.6 136.59 230.41 29.76 113.2 11.03 9558.7 11.37 93.67 8.69 10.9 11.19 89.32 2618.84 2618.97 2618.26 0.26 1339.39 1346.17 1339 975 18445 29.67 645.7179 6.1783 84.8958 47.1029 53.887 649.4019 6.1062 15.83 74.5803 53.5962 163.5813 6.1128 164.3439 6.0845 25.7611 38.8079 43.383 92.1718 90.04 65.81 18.3064 54.5991 64.8487 61.624 18.25 27.9049 143.2511 13.1002 76.2969 17.5178 57.0418 200.32 8.5531 116.8177 29.118 10420 537.52 159450 318.16 235080 75300 2252 134190 42210 563.72 4617.41 889870 0.75 166.2 90.62 9.32817 6.10103 0.943897 318.09 0.89 3.69 3.14 13.02 3.62 12 13.94 49.44 36.23 122.86 49.74 3.70584 1.46602 0.708957 13.17 51.13 0.531154 1.63323 0.318917 11.652 58.982 14949.14 69.48 56.348 43176.2 71 5.3604 3.0295 1.20283 64.53 8.018 34.114 16367.89 9.45878 9.13721 4.24314 33.098 29456.09 27.919 6957.3 5.291 25.85 16469.72 25.768 81123.62 139.06 4.141 18.231 24812.67 176.02 183.6 4.49511 2.88067 1.13382
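Since A and B are repeated runs of the same configuration, the run-to-run spread can be summarized directly from the values above. A minimal Python sketch (the `pct_delta` helper is illustrative, not part of the Phoronix Test Suite; values copied from the OpenRadioss results in this file, where fewer seconds is better, so a positive delta means run B was slower):

```python
def pct_delta(a: float, b: float) -> float:
    """Return the change from run A to run B as a percentage of A."""
    return (b - a) / a * 100.0

# (test, run A seconds, run B seconds) -- taken from the OpenRadioss results above
openradioss = [
    ("Bird Strike on Windshield", 252.55, 277.84),
    ("Rubber O-Ring Seal Installation", 119.08, 143.60),
    ("Bumper Beam", 120.80, 136.59),
]

for name, a, b in openradioss:
    print(f"{name}: {pct_delta(a, b):+.1f}%")
# Bird Strike on Windshield: +10.0%
# Rubber O-Ring Seal Installation: +20.6%
# Bumper Beam: +13.1%
```

Deltas of this size between identical runs suggest the OpenRadioss results here are noisy, while most of the other tests agree within about 1%.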
TensorFlow 2.10, Device: CPU - Batch Size: 256 - Model: ResNet-50 (images/sec, more is better): A: 29.65, B: 29.61
TensorFlow 2.10, Device: CPU - Batch Size: 512 - Model: GoogLeNet (images/sec, more is better): A: 88.99, B: 89.08
OpenRadioss 2022.10.13, Model: INIVOL and Fluid Structure Interaction Drop Container (Seconds, fewer is better): A: 526.03, B: 565.63
SMHasher 2022-08-22, Hash: SHA3-256 (cycles/hash, fewer is better): A: 2314.14, B: 2340.81 [(CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects]
SMHasher 2022-08-22, Hash: SHA3-256 (MiB/sec, more is better): A: 171.75, B: 170.07
TensorFlow 2.10, Device: CPU - Batch Size: 256 - Model: GoogLeNet (images/sec, more is better): A: 88.97, B: 88.89
JPEG XL libjxl 0.7, Input: JPEG - Quality: 100 (MP/s, more is better): A: 0.78, B: 0.82 [(CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic]
OpenRadioss 2022.10.13, Model: Bird Strike on Windshield (Seconds, fewer is better): A: 252.55, B: 277.84
JPEG XL libjxl 0.7, Input: PNG - Quality: 100 (MP/s, more is better): A: 0.90, B: 0.91
TensorFlow 2.10, Device: CPU - Batch Size: 512 - Model: AlexNet (images/sec, more is better): A: 231.51, B: 231.79
TensorFlow 2.10, Device: CPU - Batch Size: 64 - Model: ResNet-50 (images/sec, more is better): A: 29.61, B: 29.67
Xmrig 6.18.1, Variant: Monero - Hash Count: 1M (H/s, more is better): A: 7555.0, B: 7050.3 [(CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc]
OpenRadioss 2022.10.13, Model: Rubber O-Ring Seal Installation (Seconds, fewer is better): A: 119.08, B: 143.60
OpenRadioss 2022.10.13, Model: Bumper Beam (Seconds, fewer is better): A: 120.80, B: 136.59
TensorFlow 2.10, Device: CPU - Batch Size: 256 - Model: AlexNet (images/sec, more is better): A: 229.53, B: 230.41
TensorFlow 2.10, Device: CPU - Batch Size: 32 - Model: ResNet-50 (images/sec, more is better): A: 29.81, B: 29.76
libavif avifenc 0.11, Encoder Speed: 0 (Seconds, fewer is better): A: 113.39, B: 113.20 [(CXX) g++ options: -O3 -fPIC -lm]
JPEG XL libjxl 0.7, Input: JPEG - Quality: 80 (MP/s, more is better): A: 11.12, B: 11.03
Xmrig 6.18.1, Variant: Wownero - Hash Count: 1M (H/s, more is better): A: 9679.3, B: 9558.7
JPEG XL libjxl 0.7, Input: PNG - Quality: 80 (MP/s, more is better): A: 11.44, B: 11.37
OpenRadioss 2022.10.13, Model: Cell Phone Drop Test (Seconds, fewer is better): A: 82.66, B: 93.67
AOM AV1 3.5, Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K (Frames Per Second, more is better): A: 8.70, B: 8.69 [(CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm]
JPEG XL libjxl 0.7, Input: JPEG - Quality: 90 (MP/s, more is better): A: 10.96, B: 10.90
JPEG XL libjxl 0.7, Input: PNG - Quality: 90 (MP/s, more is better): A: 11.27, B: 11.19
TensorFlow 2.10, Device: CPU - Batch Size: 64 - Model: GoogLeNet (images/sec, more is better): A: 89.21, B: 89.32
oneDNN 2.7, Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): A: 2622.21 (min 2493.44), B: 2618.84 (min 2492.95) [(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl]
oneDNN 2.7, Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): A: 2613.39 (min 2492.17), B: 2618.97 (min 2498.77)
oneDNN 2.7, Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, fewer is better): A: 2618.14 (min 2504.65), B: 2618.26 (min 2500.17)
AOM AV1 3.5, Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4K (Frames Per Second, more is better): A: 0.26, B: 0.26
oneDNN 2.7, Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): A: 1342.13 (min 1250.20), B: 1339.39 (min 1240.61)
oneDNN 2.7, Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, fewer is better): A: 1341.09 (min 1239.39), B: 1346.17 (min 1243.24)
oneDNN 2.7, Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): A: 1352.66 (min 1252.52), B: 1339.00 (min 1239.20)
spaCy 3.4.1, Model: en_core_web_trf (tokens/sec, more is better): A: 969, B: 975
spaCy 3.4.1, Model: en_core_web_lg (tokens/sec, more is better): A: 18697, B: 18445
TensorFlow 2.10, Device: CPU - Batch Size: 16 - Model: ResNet-50 (images/sec, more is better): A: 29.65, B: 29.67
Neural Magic DeepSparse 1.1, Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): A: 649.69, B: 645.72
Neural Magic DeepSparse 1.1, Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (items/sec, more is better): A: 6.1328, B: 6.1783
Neural Magic DeepSparse 1.1, Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): A: 85.48, B: 84.90
Neural Magic DeepSparse 1.1, Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (items/sec, more is better): A: 46.78, B: 47.10
libavif avifenc 0.11, Encoder Speed: 2 (Seconds, fewer is better): A: 53.78, B: 53.89
Neural Magic DeepSparse 1.1, Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): A: 648.55, B: 649.40
Neural Magic DeepSparse 1.1, Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (items/sec, more is better): A: 6.1219, B: 6.1062
AOM AV1 3.5, Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K (Frames Per Second, more is better): A: 15.83, B: 15.83
Neural Magic DeepSparse 1.1, Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): A: 74.42, B: 74.58
Neural Magic DeepSparse 1.1, Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (items/sec, more is better): A: 53.73, B: 53.60
Neural Magic DeepSparse 1.1, Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): A: 163.89, B: 163.58
Neural Magic DeepSparse 1.1, Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (items/sec, more is better): A: 6.1015, B: 6.1128
Neural Magic DeepSparse 1.1, Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): A: 164.41, B: 164.34
Neural Magic DeepSparse 1.1, Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (items/sec, more is better): A: 6.0820, B: 6.0845
Neural Magic DeepSparse 1.1, Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): A: 25.65, B: 25.76
Neural Magic DeepSparse 1.1, Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream (items/sec, more is better): A: 38.97, B: 38.81
Neural Magic DeepSparse 1.1, Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): A: 43.38, B: 43.38
Neural Magic DeepSparse 1.1, Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (items/sec, more is better): A: 92.19, B: 92.17
TensorFlow 2.10, Device: CPU - Batch Size: 32 - Model: GoogLeNet (images/sec, more is better): A: 89.76, B: 90.04
JPEG XL Decoding libjxl 0.7, CPU Threads: 1 (MP/s, more is better): A: 64.73, B: 65.81
Neural Magic DeepSparse 1.1, Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): A: 18.22, B: 18.31
Neural Magic DeepSparse 1.1, Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream (items/sec, more is better): A: 54.85, B: 54.60
Neural Magic DeepSparse 1.1, Model: CV Detection,YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): A: 63.35, B: 64.85
Neural Magic DeepSparse 1.1, Model: CV Detection,YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (items/sec, more is better): A: 63.07, B: 61.62
AOM AV1 3.5, Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, more is better): A: 18.20, B: 18.25
Neural Magic DeepSparse 1.1, Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): A: 27.84, B: 27.90
Neural Magic DeepSparse 1.1, Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (items/sec, more is better): A: 143.58, B: 143.25
Neural Magic DeepSparse 1.1, Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): A: 13.10, B: 13.10
Neural Magic DeepSparse 1.1, Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (items/sec, more is better): A: 76.29, B: 76.30
Neural Magic DeepSparse 1.1, Model: CV Detection,YOLOv5s COCO - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): A: 17.49, B: 17.52
Neural Magic DeepSparse 1.1, Model: CV Detection,YOLOv5s COCO - Scenario: Synchronous Single-Stream (items/sec, more is better): A: 57.13, B: 57.04
TensorFlow 2.10, Device: CPU - Batch Size: 64 - Model: AlexNet (images/sec, more is better): A: 199.61, B: 200.32
Neural Magic DeepSparse 1.1, Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): A: 8.5304, B: 8.5531
Neural Magic DeepSparse 1.1, Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (items/sec, more is better): A: 117.13, B: 116.82
Y-Cruncher 0.7.10.9513, Pi Digits To Calculate: 1B (Seconds, fewer is better): A: 29.20, B: 29.12
Cpuminer-Opt 3.20.3, Algorithm: Deepcoin (kH/s, more is better): A: 10480, B: 10420 [(CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp]
Cpuminer-Opt 3.20.3, Algorithm: Magi (kH/s, more is better): A: 536.39, B: 537.52
Cpuminer-Opt 3.20.3, Algorithm: Quad SHA-256, Pyrite (kH/s, more is better): A: 162810, B: 159450
Cpuminer-Opt 3.20.3, Algorithm: scrypt (kH/s, more is better): A: 315.65, B: 318.16
Cpuminer-Opt 3.20.3, Algorithm: Triple SHA-256, Onecoin (kH/s, more is better): A: 235830, B: 235080
Cpuminer-Opt 3.20.3, Algorithm: LBC, LBRY Credits (kH/s, more is better): A: 72970, B: 75300
Cpuminer-Opt 3.20.3, Algorithm: Ringcoin (kH/s, more is better): A: 2220.83, B: 2252.00
Cpuminer-Opt 3.20.3, Algorithm: Skeincoin (kH/s, more is better): A: 134680, B: 134190
Cpuminer-Opt 3.20.3, Algorithm: Myriad-Groestl (kH/s, more is better): A: 42500, B: 42210
Cpuminer-Opt 3.20.3, Algorithm: x25x (kH/s, more is better): A: 570.19, B: 563.72
Cpuminer-Opt 3.20.3, Algorithm: Garlicoin (kH/s, more is better): A: 4639.52, B: 4617.41
Cpuminer-Opt 3.20.3, Algorithm: Blake-2 S (kH/s, more is better): A: 843890, B: 889870
AOM AV1 Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p A B 0.171 0.342 0.513 0.684 0.855 0.76 0.75 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
TensorFlow Device: CPU - Batch Size: 32 - Model: AlexNet OpenBenchmarking.org images/sec, More Is Better TensorFlow 2.10 Device: CPU - Batch Size: 32 - Model: AlexNet A B 40 80 120 160 200 165.42 166.20
TensorFlow Device: CPU - Batch Size: 16 - Model: GoogLeNet OpenBenchmarking.org images/sec, More Is Better TensorFlow 2.10 Device: CPU - Batch Size: 16 - Model: GoogLeNet A B 20 40 60 80 100 90.58 90.62
oneDNN Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.7 Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU A B 3 6 9 12 15 9.39398 9.32817 MIN: 7.19 MIN: 7.18 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
oneDNN Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.7 Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU A B 2 4 6 8 10 5.85680 6.10103 MIN: 4.04 MIN: 4.03 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
oneDNN Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.7 Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU A B 0.2124 0.4248 0.6372 0.8496 1.062 0.942884 0.943897 MIN: 0.72 MIN: 0.72 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
JPEG XL Decoding libjxl CPU Threads: All OpenBenchmarking.org MP/s, More Is Better JPEG XL Decoding libjxl 0.7 CPU Threads: All A B 70 140 210 280 350 293.80 318.09
QuadRay Scene: 5 - Resolution: 4K OpenBenchmarking.org FPS, More Is Better QuadRay 2022.05.25 Scene: 5 - Resolution: 4K A B 0.216 0.432 0.648 0.864 1.08 0.96 0.89 1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread
QuadRay Scene: 2 - Resolution: 4K OpenBenchmarking.org FPS, More Is Better QuadRay 2022.05.25 Scene: 2 - Resolution: 4K A B 0.8325 1.665 2.4975 3.33 4.1625 3.70 3.69 1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread
QuadRay Scene: 3 - Resolution: 4K OpenBenchmarking.org FPS, More Is Better QuadRay 2022.05.25 Scene: 3 - Resolution: 4K A B 0.7088 1.4176 2.1264 2.8352 3.544 3.15 3.14 1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread
QuadRay Scene: 1 - Resolution: 4K OpenBenchmarking.org FPS, More Is Better QuadRay 2022.05.25 Scene: 1 - Resolution: 4K A B 3 6 9 12 15 12.96 13.02 1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread
QuadRay Scene: 5 - Resolution: 1080p OpenBenchmarking.org FPS, More Is Better QuadRay 2022.05.25 Scene: 5 - Resolution: 1080p A B 0.8775 1.755 2.6325 3.51 4.3875 3.90 3.62 1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread
QuadRay Scene: 3 - Resolution: 1080p OpenBenchmarking.org FPS, More Is Better QuadRay 2022.05.25 Scene: 3 - Resolution: 1080p A B 3 6 9 12 15 11.69 12.00 1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread
QuadRay Scene: 2 - Resolution: 1080p OpenBenchmarking.org FPS, More Is Better QuadRay 2022.05.25 Scene: 2 - Resolution: 1080p A B 4 8 12 16 20 13.88 13.94 1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread
QuadRay Scene: 1 - Resolution: 1080p OpenBenchmarking.org FPS, More Is Better QuadRay 2022.05.25 Scene: 1 - Resolution: 1080p A B 11 22 33 44 55 49.15 49.44 1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread
AOM AV1 Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K A B 8 16 24 32 40 36.10 36.23 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
TensorFlow Device: CPU - Batch Size: 16 - Model: AlexNet OpenBenchmarking.org images/sec, More Is Better TensorFlow 2.10 Device: CPU - Batch Size: 16 - Model: AlexNet A B 30 60 90 120 150 121.96 122.86
AOM AV1 Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p A B 11 22 33 44 55 49.91 49.74 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
oneDNN Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.7 Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU A B 0.8338 1.6676 2.5014 3.3352 4.169 3.68436 3.70584 MIN: 2.76 MIN: 2.78 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
oneDNN Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.7 Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU A B 0.3299 0.6598 0.9897 1.3196 1.6495 1.46205 1.46602 MIN: 1.1 MIN: 1.1 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
oneDNN Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.7 Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU A B 0.1595 0.319 0.4785 0.638 0.7975 0.702154 0.708957 MIN: 0.53 MIN: 0.52 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
Y-Cruncher Pi Digits To Calculate: 500M OpenBenchmarking.org Seconds, Fewer Is Better Y-Cruncher 0.7.10.9513 Pi Digits To Calculate: 500M A B 3 6 9 12 15 13.27 13.17
AOM AV1 Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K A B 12 24 36 48 60 50.44 51.13 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
oneDNN Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.7 Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU A B 0.1198 0.2396 0.3594 0.4792 0.599 0.532325 0.531154 MIN: 0.39 MIN: 0.39 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
oneDNN Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.7 Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU A B 0.3675 0.735 1.1025 1.47 1.8375 1.62133 1.63323 MIN: 1.18 MIN: 1.19 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
oneDNN Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.7 Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU A B 0.0718 0.1436 0.2154 0.2872 0.359 0.315521 0.318917 MIN: 0.23 MIN: 0.23 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
FLAC Audio Encoding WAV To FLAC OpenBenchmarking.org Seconds, Fewer Is Better FLAC Audio Encoding 1.4 WAV To FLAC A B 3 6 9 12 15 11.73 11.65 1. (CXX) g++ options: -O3 -fvisibility=hidden -logg -lm
SMHasher Hash: FarmHash128 OpenBenchmarking.org cycles/hash, Fewer Is Better SMHasher 2022-08-22 Hash: FarmHash128 A B 13 26 39 52 65 58.62 58.98 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
SMHasher Hash: FarmHash128 OpenBenchmarking.org MiB/sec, More Is Better SMHasher 2022-08-22 Hash: FarmHash128 A B 3K 6K 9K 12K 15K 15031.76 14949.14 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
AOM AV1 Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K A B 15 30 45 60 75 68.68 69.48 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
SMHasher Hash: MeowHash x86_64 AES-NI OpenBenchmarking.org cycles/hash, Fewer Is Better SMHasher 2022-08-22 Hash: MeowHash x86_64 AES-NI A B 13 26 39 52 65 56.05 56.35 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
SMHasher Hash: MeowHash x86_64 AES-NI OpenBenchmarking.org MiB/sec, More Is Better SMHasher 2022-08-22 Hash: MeowHash x86_64 AES-NI A B 9K 18K 27K 36K 45K 43409.48 43176.20 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
AOM AV1 Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K A B 16 32 48 64 80 70.93 71.00 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
oneDNN Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.7 Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU A B 1.2068 2.4136 3.6204 4.8272 6.034 5.36365 5.36040 MIN: 4.16 MIN: 4.17 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
oneDNN Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.7 Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU A B 0.6823 1.3646 2.0469 2.7292 3.4115 3.03247 3.02950 MIN: 2.29 MIN: 2.29 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
oneDNN Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.7 Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU A B 0.2706 0.5412 0.8118 1.0824 1.353 1.18542 1.20283 MIN: 0.86 MIN: 0.86 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
AOM AV1 Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p A B 16 32 48 64 80 71.08 64.53 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
libavif avifenc Encoder Speed: 6, Lossless OpenBenchmarking.org Seconds, Fewer Is Better libavif avifenc 0.11 Encoder Speed: 6, Lossless A B 2 4 6 8 10 8.133 8.018 1. (CXX) g++ options: -O3 -fPIC -lm
SMHasher Hash: Spooky32 OpenBenchmarking.org cycles/hash, Fewer Is Better SMHasher 2022-08-22 Hash: Spooky32 A B 8 16 24 32 40 33.91 34.11 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
SMHasher Hash: Spooky32 OpenBenchmarking.org MiB/sec, More Is Better SMHasher 2022-08-22 Hash: Spooky32 A B 4K 8K 12K 16K 20K 16441.99 16367.89 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
oneDNN Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.7 Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU A B 3 6 9 12 15 9.47087 9.45878 MIN: 7.28 MIN: 7.27 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
oneDNN Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.7 Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU A B 3 6 9 12 15 8.96869 9.13721 MIN: 7.11 MIN: 7.1 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
oneDNN Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.7 Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU A B 0.9566 1.9132 2.8698 3.8264 4.783 4.25152 4.24314 MIN: 3.11 MIN: 3.11 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
SMHasher Hash: FarmHash32 x86_64 AVX OpenBenchmarking.org cycles/hash, Fewer Is Better SMHasher 2022-08-22 Hash: FarmHash32 x86_64 AVX A B 8 16 24 32 40 33.00 33.10 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
SMHasher Hash: FarmHash32 x86_64 AVX OpenBenchmarking.org MiB/sec, More Is Better SMHasher 2022-08-22 Hash: FarmHash32 x86_64 AVX A B 6K 12K 18K 24K 30K 29625.18 29456.09 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
SMHasher Hash: fasthash32 OpenBenchmarking.org cycles/hash, Fewer Is Better SMHasher 2022-08-22 Hash: fasthash32 A B 7 14 21 28 35 27.77 27.92 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
SMHasher Hash: fasthash32 OpenBenchmarking.org MiB/sec, More Is Better SMHasher 2022-08-22 Hash: fasthash32 A B 1500 3000 4500 6000 7500 6990.21 6957.30 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
libavif avifenc Encoder Speed: 6 OpenBenchmarking.org Seconds, Fewer Is Better libavif avifenc 0.11 Encoder Speed: 6 A B 1.1999 2.3998 3.5997 4.7996 5.9995 5.333 5.291 1. (CXX) g++ options: -O3 -fPIC -lm
SMHasher Hash: t1ha2_atonce OpenBenchmarking.org cycles/hash, Fewer Is Better SMHasher 2022-08-22 Hash: t1ha2_atonce A B 6 12 18 24 30 25.84 25.85 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
SMHasher Hash: t1ha2_atonce OpenBenchmarking.org MiB/sec, More Is Better SMHasher 2022-08-22 Hash: t1ha2_atonce A B 4K 8K 12K 16K 20K 16567.35 16469.72 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
SMHasher Hash: t1ha0_aes_avx2 x86_64 OpenBenchmarking.org cycles/hash, Fewer Is Better SMHasher 2022-08-22 Hash: t1ha0_aes_avx2 x86_64 A B 6 12 18 24 30 25.55 25.77 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
SMHasher Hash: t1ha0_aes_avx2 x86_64 OpenBenchmarking.org MiB/sec, More Is Better SMHasher 2022-08-22 Hash: t1ha0_aes_avx2 x86_64 A B 20K 40K 60K 80K 100K 81587.16 81123.62 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
AOM AV1 Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p A B 30 60 90 120 150 133.52 139.06 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
libavif avifenc Encoder Speed: 10, Lossless OpenBenchmarking.org Seconds, Fewer Is Better libavif avifenc 0.11 Encoder Speed: 10, Lossless A B 0.9317 1.8634 2.7951 3.7268 4.6585 4.016 4.141 1. (CXX) g++ options: -O3 -fPIC -lm
SMHasher Hash: wyhash OpenBenchmarking.org cycles/hash, Fewer Is Better SMHasher 2022-08-22 Hash: wyhash A B 4 8 12 16 20 18.00 18.23 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
SMHasher Hash: wyhash OpenBenchmarking.org MiB/sec, More Is Better SMHasher 2022-08-22 Hash: wyhash A B 5K 10K 15K 20K 25K 24952.00 24812.67 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
AOM AV1 Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p A B 40 80 120 160 200 173.63 176.02 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
AOM AV1 Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p A B 40 80 120 160 200 202.0 183.6 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
oneDNN Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.7 Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU A B 1.0132 2.0264 3.0396 4.0528 5.066 4.50304 4.49511 MIN: 3.56 MIN: 3.56 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
oneDNN Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.7 Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU A B 0.6482 1.2964 1.9446 2.5928 3.241 2.87135 2.88067 MIN: 2.17 MIN: 2.18 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
oneDNN Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.7 Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU A B 0.2551 0.5102 0.7653 1.0204 1.2755 1.13062 1.13382 MIN: 0.86 MIN: 0.86 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
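To quantify the run-to-run spread between runs A and B, the percent difference of B relative to A can be computed for any result. The sketch below is not part of the Phoronix Test Suite output; it simply applies (B - A) / A to three value pairs copied from the results above.

```python
# Percent difference of run B relative to run A, (B - A) / A * 100.
# The three value pairs are copied verbatim from the results above.
results = {
    "Cpuminer-Opt scrypt (kH/s)": (315.65, 318.16),
    "Cpuminer-Opt Blake-2 S (kH/s)": (843890, 889870),
    "AOM AV1 Speed 10 Realtime 1080p (FPS)": (202.0, 183.6),
}

for name, (a, b) in results.items():
    delta = (b - a) / a * 100.0
    # Positive delta means run B produced the higher number; whether that is
    # "better" depends on each test's more/fewer-is-better direction.
    print(f"{name}: {delta:+.2f}%")
```

By this measure the scrypt result moves by under 1% between runs, while the Speed 10 Realtime 1080p result swings by roughly 9%, which is worth keeping in mind when reading the single-digit-percent differences elsewhere in the tables.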
Phoronix Test Suite v10.8.5