2 x Intel Xeon Gold 5220R testing with a TYAN S7106 (V2.01.B40 BIOS) and ASPEED on Ubuntu 20.04 via the Phoronix Test Suite.
a:
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-Av3uEd/gcc-9-9.4.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0x5003302
Python Notes: Python 2.7.18 + Python 3.8.10
Security Notes: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + retbleed: Mitigation of Stuffing + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Mitigation of TSX disabled
b, bb:
Processor: 2 x Intel Xeon Gold 5220R @ 3.90GHz (36 Cores / 72 Threads), Motherboard: TYAN S7106 (V2.01.B40 BIOS), Chipset: Intel Sky Lake-E DMI3 Registers, Memory: 94GB, Disk: 500GB Samsung SSD 860, Graphics: ASPEED, Monitor: VE228, Network: 2 x Intel I210 + 2 x QLogic cLOM8214 1/10GbE
OS: Ubuntu 20.04, Kernel: 6.1.0-phx (x86_64), Desktop: GNOME Shell 3.36.9, Display Server: X Server 1.20.13, Compiler: GCC 9.4.0, File-System: ext4, Screen Resolution: 1920x1080
Result Overview (runs a, b, bb): Memcached, PostgreSQL, TensorFlow, Timed LLVM Compilation, ClickHouse, oneDNN, SVT-AV1, John The Ripper, Build2, Zstd Compression, FFmpeg, Timed Godot Game Engine Compilation, Timed FFmpeg Compilation, SPECFEM3D, Timed Node.js Compilation, Embree, OpenSSL, VVenC, uvg266
Raw per-test result listing for runs a, b, and bb, covering OpenSSL, FFmpeg, Embree, SVT-AV1, uvg266, VVenC, TensorFlow, Neural Magic DeepSparse, Zstd compression, RocksDB, Memcached, ClickHouse, John The Ripper, nginx, PostgreSQL (pgbench), oneDNN, Draco, SPECFEM3D, timed compilation of FFmpeg/Godot/LLVM/Node.js/Build2, and Blender; selected results are charted in detail below.
OpenSSL OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.
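For reference, the measurement behind these figures can be reproduced outside the Phoronix Test Suite with OpenSSL's own "openssl speed" command. The sketch below is a minimal Python wrapper, assuming the openssl binary is on PATH; the output handling is illustrative rather than the test profile's own result extraction.

```python
# Minimal sketch: drive "openssl speed" directly and capture its throughput output.
# Assumes the openssl binary is on PATH; the tail-of-output print below is
# illustrative, not the Phoronix Test Suite's own parsing.
import subprocess

def openssl_speed(algorithm="sha256", seconds=3, workers=72):
    cmd = [
        "openssl", "speed",
        "-evp", algorithm,          # benchmark the EVP implementation of the algorithm
        "-seconds", str(seconds),   # measurement time per block size
        "-multi", str(workers),     # fork this many parallel workers (72 threads here)
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout

if __name__ == "__main__":
    out = openssl_speed("sha256")
    for line in out.splitlines()[-4:]:   # the bytes/second summary table is at the end
        print(line)
```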
OpenSSL 3.1 (byte/s, more is better)
Algorithm: SHA256 - bb: 9206384930, a: 9173311330, b: 9102032020
Algorithm: SHA512 - bb: 10193373610, b: 10167598450, a: 10145725330
Algorithm: ChaCha20 - a: 150176760350, b: 150074005090, bb: 150059242840
Algorithm: AES-128-GCM - b: 159781972650, bb: 159160934620, a: 159151287770
Algorithm: AES-256-GCM - b: 118596420950, bb: 118499993460, a: 118250262800
Algorithm: ChaCha20-Poly1305 - a: 80516466550, bb: 80513755000, b: 80512557330
1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl
FFmpeg This is a benchmark of the FFmpeg multimedia framework. The FFmpeg test profile makes use of a modified version of vbench from Columbia University's Architecture and Design Lab (ARCADE) [http://arcade.cs.columbia.edu/vbench/], a benchmark for video-as-a-service workloads. The test profile offers a range of vbench scenarios based on freely distributable video content and the choice of the x264 or x265 video encoder for transcoding. Learn more via the OpenBenchmarking.org test page.
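As a rough illustration of what these FPS figures represent, the sketch below times a libx264/libx265 transcode with the ffmpeg CLI and derives frames per second from the wall-clock duration. The input file name is a placeholder and the stderr parsing is an assumption, not the vbench harness itself.

```python
# Rough sketch of the kind of measurement behind the FFmpeg results: transcode a
# clip and derive FPS from wall-clock time. "input.y4m" is a placeholder, not a
# file shipped by the test profile, and the "frame=" parsing is an assumption.
import re
import subprocess
import time

def transcode_fps(src, encoder="libx264"):
    cmd = ["ffmpeg", "-y", "-i", src, "-c:v", encoder, "-f", "null", "-"]
    start = time.perf_counter()
    proc = subprocess.run(cmd, capture_output=True, text=True, check=True)
    elapsed = time.perf_counter() - start
    # ffmpeg reports progress on stderr as "frame=  NNN ..."; take the final count.
    frames = [int(m) for m in re.findall(r"frame=\s*(\d+)", proc.stderr)]
    return frames[-1] / elapsed if frames else 0.0

if __name__ == "__main__":
    print(f"{transcode_fps('input.y4m'):.2f} FPS")
```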
FFmpeg 6.0 (FPS, more is better)
Encoder: libx264 - Scenario: Live - b: 153.96, bb: 153.82, a: 152.11
Encoder: libx265 - Scenario: Live - b: 43.62, bb: 40.56, a: 40.28
Encoder: libx264 - Scenario: Upload - bb: 10.47, a: 10.44, b: 10.42
Encoder: libx265 - Scenario: Upload - bb: 10.70, a: 10.64, b: 10.58
Encoder: libx264 - Scenario: Platform - bb: 38.21, a: 38.19, b: 38.07
Encoder: libx265 - Scenario: Platform - b: 20.52, bb: 20.50, a: 20.45
Encoder: libx264 - Scenario: Video On Demand - bb: 38.28, b: 38.16, a: 38.03
Encoder: libx265 - Scenario: Video On Demand - bb: 20.68, a: 20.66, b: 20.61
1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
Embree
Embree 4.0.1 (frames per second, more is better)
Binary: Pathtracer - Model: Crown - bb: 28.75 (min 28.39 / max 29.15), a: 28.64 (min 28.27 / max 29.08), b: 28.49 (min 28.14 / max 29)
Binary: Pathtracer ISPC - Model: Crown - a: 29.71 (min 29.3 / max 30.17), bb: 29.54 (min 29.06 / max 30.1), b: 29.50 (min 29.05 / max 30.07)
Binary: Pathtracer - Model: Asian Dragon - b: 33.18 (min 32.97 / max 33.5), bb: 33.17 (min 32.95 / max 33.5), a: 33.03 (min 32.82 / max 33.48)
Binary: Pathtracer - Model: Asian Dragon Obj - a: 29.60 (min 29.37 / max 30.06), bb: 29.49 (min 29.24 / max 29.77), b: 29.44 (min 29.24 / max 29.86)
Binary: Pathtracer ISPC - Model: Asian Dragon - a: 37.88 (min 37.64 / max 38.3), bb: 37.72 (min 37.47 / max 38.19), b: 37.71 (min 37.4 / max 38.13)
Binary: Pathtracer ISPC - Model: Asian Dragon Obj - b: 32.56 (min 32.27 / max 32.95), a: 32.50 (min 32.25 / max 32.9), bb: 32.44 (min 32.17 / max 32.79)
SVT-AV1
SVT-AV1 1.4 (frames per second, more is better)
Encoder Mode: Preset 4 - Input: Bosphorus 4K - bb: 2.647, b: 2.608, a: 2.550
Encoder Mode: Preset 8 - Input: Bosphorus 4K - a: 36.41, b: 36.23, bb: 35.89
Encoder Mode: Preset 12 - Input: Bosphorus 4K - b: 117.74, bb: 117.56, a: 110.28
Encoder Mode: Preset 13 - Input: Bosphorus 4K - b: 111.15, a: 110.64, bb: 110.29
Encoder Mode: Preset 4 - Input: Bosphorus 1080p - b: 6.344, a: 6.223, bb: 6.222
Encoder Mode: Preset 8 - Input: Bosphorus 1080p - b: 86.95, bb: 86.48, a: 85.66
Encoder Mode: Preset 12 - Input: Bosphorus 1080p - b: 235.27, a: 232.25, bb: 228.89
Encoder Mode: Preset 13 - Input: Bosphorus 1080p - b: 201.73, bb: 201.44, a: 200.80
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
VVenC
VVenC 1.7 (frames per second, more is better)
Video Input: Bosphorus 4K - Video Preset: Fast - a: 3.657, bb: 3.634, b: 3.579
Video Input: Bosphorus 4K - Video Preset: Faster - b: 6.026, a: 5.991, bb: 5.970
Video Input: Bosphorus 1080p - Video Preset: Fast - bb: 9.746, b: 9.641, a: 9.605
Video Input: Bosphorus 1080p - Video Preset: Faster - b: 16.90, a: 16.73, bb: 16.68
1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto -lpthread
TensorFlow This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that the Phoronix Test Suite also offers pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if complementary metrics are desired. Learn more via the OpenBenchmarking.org test page.
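The sketch below shows how the reference tf_cnn_benchmarks.py script is typically invoked for CPU runs at varying batch sizes; the script path is a placeholder and the flag names follow the upstream tensorflow/benchmarks documentation rather than this test profile's exact wrapper.

```python
# Sketch of invoking the reference tf_cnn_benchmarks.py script for CPU runs.
# The script path is a placeholder; flag names follow the upstream
# tensorflow/benchmarks documentation, not necessarily this profile's wrapper.
import subprocess

def run_cnn_benchmark(model="alexnet", batch_size=16):
    cmd = [
        "python3", "tf_cnn_benchmarks.py",   # from the tensorflow/benchmarks repo
        "--device=cpu",
        "--data_format=NHWC",                # channels-last layout for CPU execution
        f"--model={model}",
        f"--batch_size={batch_size}",
    ]
    subprocess.run(cmd, check=True)          # prints "total images/sec: ..." when done

if __name__ == "__main__":
    for bs in (16, 32, 64):
        run_cnn_benchmark("alexnet", bs)
```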
TensorFlow 2.12 (images/sec, more is better)
Device: CPU - Batch Size: 16 - Model: AlexNet - b: 84.48, a: 83.20, bb: 82.22
RocksDB 8.0 (Op/s, more is better)
Test: Random Read - a: 115209715, bb: 114830146
Test: Update Random - a: 233375, bb: 228703
Test: Sequential Fill - a: 259650, bb: 249987
Test: Random Fill Sync - bb: 7527, a: 6971
Test: Read While Writing - a: 4910200, bb: 4875480
Test: Read Random Write Random - bb: 2251517, a: 2228918
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
Memcached Memcached is a high performance, distributed memory object caching system. This Memcached test profile makes use of memtier_benchmark for executing this CPU/memory-focused server benchmark. Learn more via the OpenBenchmarking.org test page.
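A minimal memtier_benchmark invocation against a local memcached instance, mirroring the set:get ratios above, might look like the following; the thread/client counts and test duration are illustrative choices, not the test profile's exact parameters.

```python
# Sketch of a memtier_benchmark run against a local memcached server. Assumes
# memcached is already listening on the given port; thread/client counts and the
# 60 s duration are illustrative, not this test profile's exact settings.
import subprocess

def run_memtier(ratio="1:5", port=11211):
    cmd = [
        "memtier_benchmark",
        "--protocol=memcache_binary",
        f"--port={port}",
        f"--ratio={ratio}",        # sets:gets, e.g. 1:5, 1:10, 1:100 as above
        "--threads=4",
        "--clients=50",
        "--test-time=60",          # seconds
        "--hide-histogram",
    ]
    subprocess.run(cmd, check=True)  # reports Ops/sec per operation type

if __name__ == "__main__":
    for r in ("1:5", "1:10", "1:100"):
        run_memtier(r)
```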
Memcached 1.6.19 (Ops/sec, more is better)
Set To Get Ratio: 1:5 - a: 2007905.01, bb: 1937351.98, b: 1672007.69
Set To Get Ratio: 1:10 - a: 1719545.48, bb: 1700246.18, b: 1478446.78
Set To Get Ratio: 1:100 - bb: 1587757.54, b: 1550585.25, a: 1290869.25
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
ClickHouse ClickHouse is an open-source, high-performance OLAP data management system. This test profile uses ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ / https://github.com/ClickHouse/ClickBench/tree/main/clickhouse with the 100 million rows web analytics dataset. The reported value aggregates the processing times of all of the separate queries as a geometric mean, expressed in queries per minute. Learn more via the OpenBenchmarking.org test page.
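Since the reported figure is a geometric-mean aggregate, the sketch below shows one plausible way such an aggregation could be computed from per-query runtimes; the sample values are placeholders and the exact aggregation used by the test profile may differ in detail.

```python
# One plausible aggregation for a "queries per minute, geo mean" figure:
# geometric mean of per-query runtimes converted to a rate. The runtimes below
# are made-up placeholders, not measured ClickBench data.
from statistics import geometric_mean

def queries_per_minute(query_times_seconds):
    return 60.0 / geometric_mean(query_times_seconds)

if __name__ == "__main__":
    sample_runtimes = [0.12, 0.48, 1.93, 0.27, 0.81]   # placeholder data only
    print(f"{queries_per_minute(sample_runtimes):.2f} queries/minute (geo mean)")
```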
ClickHouse 22.12.3.5 (queries per minute, geo mean, more is better)
100M Rows Hits Dataset, First Run / Cold Cache - a: 163.13 (min 16.41 / max 1200), b: 157.77 (min 16.57 / max 1621.62), bb: 154.30 (min 15.97 / max 1578.95)
100M Rows Hits Dataset, Second Run - b: 211.36 (min 18.65 / max 2500), a: 205.05 (min 18.52 / max 1363.64), bb: 200.24 (min 18.21 / max 1714.29)
100M Rows Hits Dataset, Third Run - bb: 211.12 (min 18.35 / max 1875), b: 210.46 (min 18.39 / max 1276.6), a: 206.05 (min 18.75 / max 1090.91)
nginx This is a benchmark of the lightweight Nginx HTTP(S) web server. This Nginx web server benchmark test profile makes use of the wrk program for facilitating the HTTP requests over a fixed period of time with a configurable number of concurrent clients/connections. HTTPS with a self-signed OpenSSL certificate is used by this test for local benchmarking. Learn more via the OpenBenchmarking.org test page.
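The load-generation step used by this profile (and the Apache one below) is wrk; a minimal sketch of driving it across the tested connection counts follows. The URL and port are placeholders, since the real profile provisions its own server and self-signed certificate.

```python
# Sketch of the wrk load-generation step: fixed-duration HTTPS runs at several
# connection counts. The URL/port are placeholders; the real test profile sets
# up its own server and certificate.
import subprocess

def run_wrk(connections, url="https://localhost:8089/index.html"):
    cmd = [
        "wrk",
        "-t", "16",                  # worker threads
        "-c", str(connections),      # concurrent connections (100/200/500/1000 above)
        "-d", "30s",                 # test duration
        url,
    ]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return out.stdout                # includes a "Requests/sec:" summary line

if __name__ == "__main__":
    for c in (100, 200, 500, 1000):
        print(run_wrk(c))
```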
nginx 1.23.2 (requests per second, more is better)
Connections: 100 - a: 152180.63, bb: 151733.73
Connections: 200 - bb: 149820.05, a: 147630.66
Connections: 500 - bb: 143874.15, a: 141761.59
Connections: 1000 - bb: 139997.58, a: 139200.66
1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2
Apache HTTP Server This is a test of the Apache HTTPD web server. This Apache HTTPD web server benchmark test profile makes use of the wrk program for facilitating the HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.
Concurrent Requests: 100
a: The test quit with a non-zero exit status. E: unable to connect to 127.0.0.1:8088 Connection refused
bb: The test quit with a non-zero exit status. E: unable to connect to 127.0.0.1:8088 Connection refused
Concurrent Requests: 200
a: The test quit with a non-zero exit status. E: unable to connect to 127.0.0.1:8088 Connection refused
bb: The test quit with a non-zero exit status. E: unable to connect to 127.0.0.1:8088 Connection refused
Concurrent Requests: 500
a: The test quit with a non-zero exit status. E: unable to connect to 127.0.0.1:8088 Connection refused
bb: The test quit with a non-zero exit status. E: unable to connect to 127.0.0.1:8088 Connection refused
Concurrent Requests: 1000
a: The test quit with a non-zero exit status. E: unable to connect to 127.0.0.1:8088 Connection refused
bb: The test quit with a non-zero exit status. E: unable to connect to 127.0.0.1:8088 Connection refused
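All of the Apache runs above aborted because nothing was listening on 127.0.0.1:8088. A pre-flight connectivity check along these lines, with the host and port taken from the error messages, would catch that before launching the load generator; it is a generic sketch, not part of the test profile.

```python
# Minimal pre-flight check that the benchmark target is accepting connections
# before driving load at it. Host/port match the "Connection refused" errors above.
import socket

def server_is_up(host="127.0.0.1", port=8088, timeout=2.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    print("server reachable" if server_is_up() else "server unreachable: skip benchmark")
```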
OpenSSL
OpenSSL 3.1 (sign/s, more is better)
Algorithm: RSA4096 - bb: 8109.8, a: 8029.4, b: 8026.7
1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl
PostgreSQL 15 (TPS, more is better)
Scaling Factor: 1 - Clients: 50 - Mode: Read Write - b: 811, a: 809, bb: 808
Scaling Factor: 100 - Clients: 50 - Mode: Read Only - a: 994655, bb: 993660, b: 991262
Scaling Factor: 100 - Clients: 50 - Mode: Read Write - a: 3946, b: 3610, bb: 3031
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenSSL
OpenSSL 3.1 (verify/s, more is better)
Algorithm: RSA4096 - b: 536533.8, a: 534883.8, bb: 534630.5
1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl
oneDNN This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.
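A sketch of driving the bundled benchdnn harness for a convolution problem list in performance mode is shown below; the driver and mode flags follow the benchdnn documentation, but the data-type flag and the batch-file path should be treated as assumptions rather than this test profile's exact invocation.

```python
# Sketch of a benchdnn performance-mode run for the convolution driver. The
# --dt flag and the inputs/conv/shapes_auto batch path are assumptions based on
# the benchdnn documentation, not this test profile's exact command line.
import subprocess

def run_benchdnn(batch_file="inputs/conv/shapes_auto"):
    cmd = [
        "./benchdnn",
        "--conv",                    # convolution driver
        "--mode=P",                  # performance (timing) mode
        "--dt=f32",                  # data type, matching the f32 results above
        f"--batch={batch_file}",     # problem list ("shapes_auto" mirrors the
                                     # "Convolution Batch Shapes Auto" harness name)
    ]
    subprocess.run(cmd, check=True)  # prints per-problem and total perf times

if __name__ == "__main__":
    run_benchdnn()
```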
oneDNN 3.1 (ms, fewer is better)
Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU - b: 1.71780 (min 1.64), bb: 1.73257 (min 1.64), a: 1.73354 (min 1.65)
Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU - b: 3.09364 (min 1.81), bb: 3.17173 (min 1.84), a: 3.45599 (min 1.83)
Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU - a: 1.63878 (min 1.5), bb: 1.69336 (min 1.56), b: 1.74816 (min 1.56)
Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU - b: 1.28042 (min 1.07), bb: 1.28253 (min 1.1), a: 1.29978 (min 1.08)
Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU - bb: 5.68651 (min 5.55), b: 5.68728 (min 5.54), a: 5.69353 (min 5.55)
Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU - bb: 3.33073 (min 2.9), a: 4.05730 (min 2.84), b: 4.30617 (min 2.98)
Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU - a: 3.52970 (min 2.77), bb: 4.77498 (min 2.76), b: 4.78799 (min 2.75)
Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU - bb: 14.65 (min 12.43), a: 14.75 (min 12.24), b: 14.80 (min 12.39)
Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU - b: 2.73821 (min 2.72), bb: 2.73830 (min 2.72), a: 2.74347 (min 2.72)
Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU - bb: 4.35862 (min 2.47), a: 4.39463 (min 2.48), b: 4.90144 (min 2.4)
Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU - bb: 0.562273 (min 0.54), b: 0.562744 (min 0.54), a: 0.563515 (min 0.54)
Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU - b: 0.688332 (min 0.68), bb: 0.691037 (min 0.68), a: 0.693934 (min 0.69)
Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU - bb: 1374.19 (min 1367.79), a: 1384.14 (min 1379.57), b: 1386.62 (min 1378.88)
Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU - a: 786.00 (min 779.97), b: 788.65 (min 781.04), bb: 789.00 (min 779.2)
Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU - bb: 1377.45 (min 1371.04), a: 1379.97 (min 1367.35), b: 1385.41 (min 1377.53)
Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU - bb: 6.35249 (min 6.3), a: 6.35319 (min 6.3), b: 6.35868 (min 6.3)
Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU - a: 8.67071 (min 8.58), bb: 8.67211 (min 8.57), b: 8.68912 (min 8.58)
Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU - b: 9.51019 (min 9.44), bb: 9.51252 (min 9.44), a: 9.56667 (min 9.44)
Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU - b: 785.39 (min 775.34), bb: 787.69 (min 778.68), a: 788.80 (min 782.96)
Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU - a: 1374.37 (min 1363.77), bb: 1380.03 (min 1374.88), b: 1381.33 (min 1376.64)
Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU - a: 786.92 (min 779.8), bb: 789.81 (min 783.36), b: 797.59 (min 783.04)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
PostgreSQL This is a benchmark of PostgreSQL using its integrated pgbench tool for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.
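The pgbench steps behind these numbers follow the usual pattern of initializing a database at a given scaling factor and then running read-only (-S) or read-write runs with 50 clients; the database name, job count, and duration below are illustrative choices.

```python
# Sketch of the usual pgbench workflow: initialize at a scaling factor, then run
# read-only (-S, select-only) or read-write (default TPC-B-like) tests with 50
# clients. Database name, -j job count and -T duration are illustrative choices.
import subprocess

def pgbench_init(scale, dbname="pgbench_test"):
    subprocess.run(["pgbench", "-i", "-s", str(scale), dbname], check=True)

def pgbench_run(clients=50, read_only=True, dbname="pgbench_test"):
    cmd = ["pgbench", "-c", str(clients), "-j", "8", "-T", "60", dbname]
    if read_only:
        cmd.insert(1, "-S")          # built-in select-only script
    subprocess.run(cmd, check=True)  # prints tps and average latency

if __name__ == "__main__":
    for scale in (1, 100):
        pgbench_init(scale)
        pgbench_run(read_only=True)
        pgbench_run(read_only=False)
```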
PostgreSQL 15 (average latency in ms, fewer is better)
Scaling Factor: 1 - Clients: 50 - Mode: Read Only - a: 0.047, b: 0.047, bb: 0.047
Scaling Factor: 1 - Clients: 50 - Mode: Read Write - b: 61.66, a: 61.84, bb: 61.91
Scaling Factor: 100 - Clients: 50 - Mode: Read Only - a: 0.05, b: 0.05, bb: 0.05
Scaling Factor: 100 - Clients: 50 - Mode: Read Write - a: 12.67, b: 13.85, bb: 16.50
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
SPECFEM3D simulates acoustic (fluid), elastic (solid), coupled acoustic/elastic, poroelastic or seismic wave propagation in any type of conforming mesh of hexahedra. This test profile currently relies on CPU-based execution of SPECFEM3D and uses a variety of its built-in examples/models for benchmarking. Learn more via the OpenBenchmarking.org test page.
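For orientation, a built-in SPECFEM3D example is normally executed as a mesher / database-generation / solver sequence under MPI, roughly as sketched below; the binary names follow the upstream SPECFEM3D layout, while the process count and bin directory are placeholders rather than this test profile's settings.

```python
# Sketch of running a built-in SPECFEM3D example: mesher, database generation,
# then the solver, all under MPI. Binary names follow the upstream SPECFEM3D
# layout; the process count and bin directory are placeholders.
import subprocess

def run_specfem3d(nproc=36, bindir="./bin"):
    for exe in ("xmeshfem3D", "xgenerate_databases", "xspecfem3D"):
        subprocess.run(["mpirun", "-np", str(nproc), f"{bindir}/{exe}"], check=True)

if __name__ == "__main__":
    run_specfem3d()
```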
SPECFEM3D 4.0 (seconds, fewer is better)
Model: Mount St. Helens - b: 24.13, a: 24.15, bb: 24.64
Model: Layered Halfspace - a: 63.68, b: 64.46, bb: 64.48
Model: Tomographic Model - bb: 25.01, b: 25.27, a: 25.35
Model: Homogeneous Halfspace - bb: 31.07, b: 31.09, a: 31.38
Model: Water-layered Halfspace - a: 60.24, bb: 60.51, b: 61.37
1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
FFmpeg
FFmpeg 6.0 (seconds, fewer is better)
Encoder: libx264 - Scenario: Live - b: 32.80, bb: 32.83, a: 33.20
Encoder: libx265 - Scenario: Live - b: 115.77, bb: 124.51, a: 125.37
Encoder: libx264 - Scenario: Upload - bb: 241.13, a: 241.76, b: 242.24
Encoder: libx265 - Scenario: Upload - bb: 235.94, a: 237.24, b: 238.65
Encoder: libx264 - Scenario: Platform - bb: 198.27, a: 198.35, b: 198.96
Encoder: libx265 - Scenario: Platform - b: 369.22, bb: 369.54, a: 370.34
Encoder: libx264 - Scenario: Video On Demand - bb: 197.90, b: 198.50, a: 199.18
Encoder: libx265 - Scenario: Video On Demand - bb: 366.29, a: 366.57, b: 367.61
1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
Blender Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.
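The CPU-only result below essentially measures a background Cycles render of a demo scene from the command line, roughly as sketched here; the .blend path is a placeholder for the BMW27 demo file.

```python
# Sketch of a background, CPU-only Cycles render of one frame from the command
# line. "bmw27.blend" is a placeholder path to the demo scene.
import subprocess
import time

def render_cpu(blend_file="bmw27.blend", frame=1):
    cmd = ["blender", "-b", blend_file, "-f", str(frame),
           "--", "--cycles-device", "CPU"]   # force the Cycles CPU backend
    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    return time.perf_counter() - start

if __name__ == "__main__":
    print(f"Render time: {render_cpu():.2f} s")
```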
Blender 3.5 (seconds, fewer is better)
Blend File: BMW27 - Compute: CPU-Only - bb: 55.76, a: 56.07
a:
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-Av3uEd/gcc-9-9.4.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0x5003302
Python Notes: Python 2.7.18 + Python 3.8.10
Security Notes: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + retbleed: Mitigation of Stuffing + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Mitigation of TSX disabled
Testing initiated at 2 April 2023 03:16 by user phoronix.
b:
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-Av3uEd/gcc-9-9.4.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0x5003302
Python Notes: Python 2.7.18 + Python 3.8.10
Security Notes: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + retbleed: Mitigation of Stuffing + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Mitigation of TSX disabled
Testing initiated at 1 April 2023 18:24 by user phoronix.
bb:
Processor: 2 x Intel Xeon Gold 5220R @ 3.90GHz (36 Cores / 72 Threads), Motherboard: TYAN S7106 (V2.01.B40 BIOS), Chipset: Intel Sky Lake-E DMI3 Registers, Memory: 94GB, Disk: 500GB Samsung SSD 860, Graphics: ASPEED, Monitor: VE228, Network: 2 x Intel I210 + 2 x QLogic cLOM8214 1/10GbE
OS: Ubuntu 20.04, Kernel: 6.1.0-phx (x86_64), Desktop: GNOME Shell 3.36.9, Display Server: X Server 1.20.13, Compiler: GCC 9.4.0, File-System: ext4, Screen Resolution: 1920x1080
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-Av3uEd/gcc-9-9.4.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0x5003302
Python Notes: Python 2.7.18 + Python 3.8.10
Security Notes: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + retbleed: Mitigation of Stuffing + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Mitigation of TSX disabled
Testing initiated at 2 April 2023 02:36 by user phoronix.