AMD Ryzen 9 7900X Linux
AMD Ryzen 9 7900X 12-Core testing with an ASRock X670E PG Lightning (1.11 BIOS) and XFX AMD Radeon RX 6400 4GB on Ubuntu 22.10 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2211114-PTS-AMDRYZEN44&sro&grs .
AMD Ryzen 9 7900X Linux - System Details (configurations a and b)
Processor: AMD Ryzen 9 7900X 12-Core @ 5.73GHz (12 Cores / 24 Threads)
Motherboard: ASRock X670E PG Lightning (1.11 BIOS)
Chipset: AMD Device 14d8
Memory: 32GB
Disk: 1000GB Western Digital WDS100T1X0E-00AFY0
Graphics: XFX AMD Radeon RX 6400 4GB (2320/1000MHz)
Audio: AMD Navi 21/23
Monitor: ASUS MG28U
Network: Realtek RTL8125 2.5GbE
OS: Ubuntu 22.10
Kernel: 5.19.0-23-generic (x86_64)
Desktop: GNOME Shell 43.0
Display Server: X Server + Wayland
OpenGL: 4.6 Mesa 22.2.1 (LLVM 15.0.2 DRM 3.47)
Vulkan: 1.3.224
Compiler: GCC 12.2.0
File-System: ext4
Screen Resolution: 3840x2160
Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-U8K4Qv/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-U8K4Qv/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Disk Details: NONE / errors=remount-ro,relatime,rw / Block Size: 4096
Processor Details: Scaling Governor: amd-pstate schedutil (Boost: Enabled); CPU Microcode: 0xa601203
Python Details: Python 3.10.7
Security Details: itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; retbleed: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected; srbds: Not affected; tsx_async_abort: Not affected
Result Overview: side-by-side raw values for configurations a and b across the full test matrix - cpuminer-opt, AOM AV1, PostgreSQL pgbench, Stress-NG, srsRAN, JPEG XL (libjxl), oneDNN, ClickHouse, QuadRay, Mobile Neural Network, SMHasher, Xmrig, nginx, dragonflydb, OpenFOAM, Blosc, Natron, deepsparse, OpenVINO, FFmpeg, BRL-CAD, TensorFlow, Blender, OpenRadioss, Encodec, y-cruncher, nekRS, spaCy, miniBUDE, 7-Zip, STREAM, avifenc, FLAC encoding, Linux kernel unpack, and timed code compilation (Erlang, Node.js, PHP, Python, Wasmer). The per-test results with the a and b values follow below.
Cpuminer-Opt 3.20.3 - Algorithm: Blake-2 S (kH/s, More Is Better) - a: 1343070, b: 1029150. 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
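Two-run results like the Blake-2 S numbers above are easiest to read as a relative delta between configurations a and b. A minimal sketch (the helper name is mine, not part of the Phoronix Test Suite):

```python
# Relative difference between two benchmark runs.
# For a "More Is Better" metric, a negative delta means run b regressed.
def relative_delta(a: float, b: float) -> float:
    """Return (b - a) / a as a percentage."""
    return (b - a) / a * 100.0

# Blake-2 S (kH/s, more is better): a = 1343070, b = 1029150
print(f"b vs a: {relative_delta(1343070, 1029150):+.1f}%")  # roughly -23.4%
```

The same helper applies to every pairwise result in this report; only the sign convention flips for "Fewer Is Better" metrics.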
AOM AV1 3.5 - Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p (Frames Per Second, More Is Better) - a: 66.86, b: 78.10. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
PostgreSQL 15 - Scaling Factor: 100 - Clients: 50 - Mode: Read Write - Average Latency (ms, Fewer Is Better) - a: 1.129, b: 0.995. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
PostgreSQL 15 - Scaling Factor: 100 - Clients: 50 - Mode: Read Write (TPS, More Is Better) - a: 44304, b: 50233
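The pgbench latency and throughput figures are two views of the same measurement: with N concurrent clients, average latency in milliseconds is approximately N * 1000 / TPS. A quick consistency check against the numbers above (a sketch, not pgbench's own code):

```python
# pgbench sanity check: with `clients` concurrent connections,
# average latency (ms) ~= clients * 1000 / TPS.
def expected_latency_ms(clients: int, tps: float) -> float:
    return clients * 1000.0 / tps

# Run a: 50 clients at 44304 TPS -> reported 1.129 ms
print(round(expected_latency_ms(50, 44304), 3))
# Run b: 50 clients at 50233 TPS -> reported 0.995 ms
print(round(expected_latency_ms(50, 50233), 3))
```

Both reproduce the reported average-latency values, so the latency charts carry no information beyond the TPS charts for this workload.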
Stress-NG 0.14.06 - Test: Mutex (Bogo Ops/s, More Is Better) - a: 10556564.45, b: 11885212.76. 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
PostgreSQL 15 - Scaling Factor: 100 - Clients: 1 - Mode: Read Only - Average Latency (ms, Fewer Is Better) - a: 0.016, b: 0.018
srsRAN 22.04.1 - Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM (UE Mb/s, More Is Better) - a: 122.9, b: 137.5. 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -ldl -lpthread -lm
Stress-NG 0.14.06 - Test: Socket Activity (Bogo Ops/s, More Is Better) - a: 21588.47, b: 19398.62
Stress-NG 0.14.06 - Test: IO_uring (Bogo Ops/s, More Is Better) - a: 24405.52, b: 27013.00
JPEG XL Decoding libjxl 0.7 - CPU Threads: All (MP/s, More Is Better) - a: 179.93, b: 198.29
oneDNN 2.7 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better) - a: 0.337307 (MIN: 0.32), b: 0.369351 (MIN: 0.34). 1. (CXX) g++ options: -O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -fopenmp -msse4.1 -fPIC -pie -ldl
ClickHouse 22.5.4.19 - 100M Rows Web Analytics Dataset, First Run / Cold Cache (Queries Per Minute, Geo Mean, More Is Better) - a: 231.71 (MIN: 16.43 / MAX: 8571.43), b: 252.20 (MIN: 15.78 / MAX: 30000). 1. ClickHouse server version 22.5.4.19 (official build).
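As the unit label says, the ClickHouse composite score is a geometric mean of per-query rates, which keeps a single very fast or very slow query from dominating the summary the way an arithmetic mean would. A minimal sketch with made-up per-query values (not taken from this result):

```python
import math

def geo_mean(values):
    """Geometric mean: the n-th root of the product of n values,
    computed in log space to avoid overflow on long lists."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Hypothetical per-query queries-per-minute values:
qpm = [120.0, 480.0, 30.0, 960.0]
print(round(geo_mean(qpm), 1))  # ~201.8, well below the arithmetic mean of 397.5
```

The wide MIN/MAX spread reported above (roughly 16 to 30000 queries per minute) is exactly the situation where this choice of mean matters.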
Stress-NG 0.14.06 - Test: Futex (Bogo Ops/s, More Is Better) - a: 3968210.34, b: 3648065.00
PostgreSQL 15 - Scaling Factor: 100 - Clients: 1 - Mode: Read Only (TPS, More Is Better) - a: 61322, b: 56707
QuadRay 2022.05.25 - Scene: 5 - Resolution: 4K (FPS, More Is Better) - a: 1.44, b: 1.55. 1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread
Stress-NG 0.14.06 - Test: Context Switching (Bogo Ops/s, More Is Better) - a: 7269058.58, b: 7822736.20
Stress-NG 0.14.06 - Test: Crypto (Bogo Ops/s, More Is Better) - a: 34722.62, b: 32274.88
JPEG XL Decoding libjxl 0.7 - CPU Threads: 1 (MP/s, More Is Better) - a: 68.43, b: 73.37
srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM (eNb Mb/s, More Is Better) - a: 626.3, b: 668.5
Mobile Neural Network 2.1 - Model: MobileNetV2_224 (ms, Fewer Is Better) - a: 2.857 (MIN: 2.82 / MAX: 3.32), b: 3.040 (MIN: 3.01 / MAX: 3.95). 1. (CXX) g++ options: -O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -std=c++11 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
AOM AV1 3.5 - Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better) - a: 68.39, b: 72.65
JPEG XL libjxl 0.7 - Input: PNG - Quality: 80 (MP/s, More Is Better) - a: 12.02, b: 12.74. 1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic
Stress-NG 0.14.06 - Test: NUMA (Bogo Ops/s, More Is Better) - a: 604.96, b: 570.87
JPEG XL libjxl 0.7 - Input: PNG - Quality: 90 (MP/s, More Is Better) - a: 11.96, b: 12.67
AOM AV1 3.5 - Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better) - a: 69.66, b: 73.51
AOM AV1 3.5 - Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better) - a: 51.88, b: 54.63
JPEG XL libjxl 0.7 - Input: JPEG - Quality: 90 (MP/s, More Is Better) - a: 11.78, b: 12.40
ClickHouse 22.5.4.19 - 100M Rows Web Analytics Dataset, Second Run (Queries Per Minute, Geo Mean, More Is Better) - a: 259.88 (MIN: 15.29 / MAX: 10000), b: 273.31 (MIN: 18.51 / MAX: 20000)
Cpuminer-Opt 3.20.3 - Algorithm: Triple SHA-256, Onecoin (kH/s, More Is Better) - a: 322550, b: 338820
oneDNN 2.7 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better) - a: 0.521835 (MIN: 0.48), b: 0.496825 (MIN: 0.48)
SMHasher 2022-08-22 - Hash: FarmHash32 x86_64 AVX (MiB/sec, More Is Better) - a: 29550.62, b: 31036.72. 1. (CXX) g++ options: -march=native -O3 -flto=auto -fno-fat-lto-objects
Xmrig 6.18.1 - Variant: Monero - Hash Count: 1M (H/s, More Is Better) - a: 12605.1, b: 13238.9. 1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc
Stress-NG 0.14.06 - Test: Matrix Math (Bogo Ops/s, More Is Better) - a: 105410.52, b: 100434.75
Mobile Neural Network 2.1 - Model: nasnet (ms, Fewer Is Better) - a: 9.676 (MIN: 9.56 / MAX: 25.58), b: 10.135 (MIN: 9.9 / MAX: 59.36)
JPEG XL libjxl 0.7 - Input: JPEG - Quality: 80 (MP/s, More Is Better) - a: 11.93, b: 12.48
QuadRay 2022.05.25 - Scene: 5 - Resolution: 1080p (FPS, More Is Better) - a: 5.71, b: 5.96
JPEG XL libjxl 0.7 - Input: JPEG - Quality: 100 (MP/s, More Is Better) - a: 0.97, b: 0.93
srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM (UE Mb/s, More Is Better) - a: 239.3, b: 249.1
PostgreSQL 15 - Scaling Factor: 1 - Clients: 100 - Mode: Read Only (TPS, More Is Better) - a: 674178, b: 700101
SMHasher 2022-08-22 - Hash: SHA3-256 (MiB/sec, More Is Better) - a: 161.66, b: 155.70
oneDNN 2.7 - Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better) - a: 1.52187 (MIN: 1.45), b: 1.57970 (MIN: 1.48)
AOM AV1 3.5 - Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p (Frames Per Second, More Is Better) - a: 119.52, b: 124.05
Mobile Neural Network 2.1 - Model: squeezenetv1.1 (ms, Fewer Is Better) - a: 2.308 (MIN: 2.28 / MAX: 8.5), b: 2.389 (MIN: 2.35 / MAX: 2.65)
PostgreSQL 15 - Scaling Factor: 1 - Clients: 100 - Mode: Read Only - Average Latency (ms, Fewer Is Better) - a: 0.148, b: 0.143
oneDNN 2.7 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better) - a: 0.789632 (MIN: 0.76), b: 0.763137 (MIN: 0.73)
AOM AV1 3.5 - Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better) - a: 35.96, b: 37.19
oneDNN 2.7 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, Fewer Is Better) - a: 4.04957 (MIN: 3.23), b: 3.91733 (MIN: 3.25)
Stress-NG 0.14.06 - Test: CPU Cache (Bogo Ops/s, More Is Better) - a: 140.39, b: 145.06
PostgreSQL 15 - Scaling Factor: 100 - Clients: 100 - Mode: Read Write - Average Latency (ms, Fewer Is Better) - a: 1.723, b: 1.668
PostgreSQL 15 - Scaling Factor: 100 - Clients: 100 - Mode: Read Write (TPS, More Is Better) - a: 58054, b: 59960
nginx 1.23.2 - Connections: 1000 (Requests Per Second, More Is Better) - a: 125963.01, b: 122285.31. 1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2
Cpuminer-Opt 3.20.3 - Algorithm: Garlicoin (kH/s, More Is Better) - a: 3641.88, b: 3537.11
Stress-NG 0.14.06 - Test: Forking (Bogo Ops/s, More Is Better) - a: 85644.09, b: 88150.54
AOM AV1 Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4K OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4K a b 0.081 0.162 0.243 0.324 0.405 0.35 0.36 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
Mobile Neural Network Model: mobilenetV3 OpenBenchmarking.org ms, Fewer Is Better Mobile Neural Network 2.1 Model: mobilenetV3 a b 0.3299 0.6598 0.9897 1.3196 1.6495 1.428 1.466 MIN: 1.41 / MAX: 1.8 MIN: 1.45 / MAX: 1.9 1. (CXX) g++ options: -O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -std=c++11 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
libavif avifenc Encoder Speed: 2 OpenBenchmarking.org Seconds, Fewer Is Better libavif avifenc 0.11 Encoder Speed: 2 a b 10 20 30 40 50 41.37 42.38 1. (CXX) g++ options: -O3 -fPIC -lm
Cpuminer-Opt Algorithm: Deepcoin OpenBenchmarking.org kH/s, More Is Better Cpuminer-Opt 3.20.3 Algorithm: Deepcoin a b 3K 6K 9K 12K 15K 15910 16300 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenFOAM Input: drivaerFastback, Small Mesh Size - Mesh Time OpenBenchmarking.org Seconds, Fewer Is Better OpenFOAM 10 Input: drivaerFastback, Small Mesh Size - Mesh Time a b 6 12 18 24 30 25.20 25.81 1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -ldynamicMesh -lgenericPatchFields -lOpenFOAM -ldl -lm
Mobile Neural Network Model: resnet-v2-50 OpenBenchmarking.org ms, Fewer Is Better Mobile Neural Network 2.1 Model: resnet-v2-50 a b 3 6 9 12 15 12.86 12.58 MIN: 12.77 / MAX: 13.9 MIN: 12.5 / MAX: 17.47 1. (CXX) g++ options: -O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -std=c++11 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
C-Blosc Test: blosclz bitshuffle OpenBenchmarking.org MB/s, More Is Better C-Blosc 2.3 Test: blosclz bitshuffle a b 2K 4K 6K 8K 10K 11242.2 11489.5 1. (CC) gcc options: -std=gnu99 -O3 -lrt -lm
Stress-NG Test: Atomic OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Atomic a b 40K 80K 120K 160K 200K 203812.22 199497.81 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
C-Blosc Test: blosclz shuffle OpenBenchmarking.org MB/s, More Is Better C-Blosc 2.3 Test: blosclz shuffle a b 5K 10K 15K 20K 25K 20713.1 21142.2 1. (CC) gcc options: -std=gnu99 -O3 -lrt -lm
Dragonflydb Clients: 50 - Set To Get Ratio: 1:5 OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 0.6 Clients: 50 - Set To Get Ratio: 1:5 a b 1.1M 2.2M 3.3M 4.4M 5.5M 5133428.52 5031752.18 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
Mobile Neural Network Model: inception-v3 OpenBenchmarking.org ms, Fewer Is Better Mobile Neural Network 2.1 Model: inception-v3 a b 5 10 15 20 25 21.16 20.74 MIN: 20.91 / MAX: 27.75 MIN: 20.46 / MAX: 22.29 1. (CXX) g++ options: -O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -std=c++11 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
PostgreSQL Scaling Factor: 100 - Clients: 50 - Mode: Read Only OpenBenchmarking.org TPS, More Is Better PostgreSQL 15 Scaling Factor: 100 - Clients: 50 - Mode: Read Only a b 160K 320K 480K 640K 800K 722315 736786 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
JPEG XL libjxl Input: PNG - Quality: 100 OpenBenchmarking.org MP/s, More Is Better JPEG XL libjxl 0.7 Input: PNG - Quality: 100 a b 0.2363 0.4726 0.7089 0.9452 1.1815 1.05 1.03 1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic
Cpuminer-Opt Algorithm: Quad SHA-256, Pyrite OpenBenchmarking.org kH/s, More Is Better Cpuminer-Opt 3.20.3 Algorithm: Quad SHA-256, Pyrite a b 50K 100K 150K 200K 250K 231140 235620 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenFOAM Input: drivaerFastback, Medium Mesh Size - Mesh Time OpenBenchmarking.org Seconds, Fewer Is Better OpenFOAM 10 Input: drivaerFastback, Medium Mesh Size - Mesh Time a b 50 100 150 200 250 205.73 209.59 1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -ldynamicMesh -lgenericPatchFields -lOpenFOAM -ldl -lm
PostgreSQL Scaling Factor: 1 - Clients: 1 - Mode: Read Only OpenBenchmarking.org TPS, More Is Better PostgreSQL 15 Scaling Factor: 1 - Clients: 1 - Mode: Read Only a b 13K 26K 39K 52K 65K 59102 58025 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
Natron Input: Spaceship OpenBenchmarking.org FPS, More Is Better Natron 2.4.3 Input: Spaceship a b 1.2375 2.475 3.7125 4.95 6.1875 5.5 5.4
Timed Erlang/OTP Compilation Time To Compile OpenBenchmarking.org Seconds, Fewer Is Better Timed Erlang/OTP Compilation 25.0 Time To Compile a b 15 30 45 60 75 65.65 64.45
QuadRay Scene: 2 - Resolution: 4K OpenBenchmarking.org FPS, More Is Better QuadRay 2022.05.25 Scene: 2 - Resolution: 4K a b 1.2645 2.529 3.7935 5.058 6.3225 5.52 5.62 1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread
Dragonflydb Clients: 200 - Set To Get Ratio: 1:5 OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 0.6 Clients: 200 - Set To Get Ratio: 1:5 a b 1.1M 2.2M 3.3M 4.4M 5.5M 4968213.38 5055548.28 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
ClickHouse 100M Rows Web Analytics Dataset, Third Run OpenBenchmarking.org Queries Per Minute, Geo Mean, More Is Better ClickHouse 22.5.4.19 100M Rows Web Analytics Dataset, Third Run a b 60 120 180 240 300 269.06 273.74 MIN: 18.14 / MAX: 20000 MIN: 19.14 / MAX: 30000 1. ClickHouse server version 22.5.4.19 (official build).
QuadRay Scene: 2 - Resolution: 1080p OpenBenchmarking.org FPS, More Is Better QuadRay 2022.05.25 Scene: 2 - Resolution: 1080p a b 5 10 15 20 25 20.95 21.31 1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread
oneDNN Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.7 Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU a b 0.2226 0.4452 0.6678 0.8904 1.113 0.973041 0.989482 MIN: 0.93 MIN: 0.94 1. (CXX) g++ options: -O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -fopenmp -msse4.1 -fPIC -pie -ldl
AOM AV1 Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p a b 30 60 90 120 150 150.63 153.13 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
Mobile Neural Network Model: SqueezeNetV1.0 OpenBenchmarking.org ms, Fewer Is Better Mobile Neural Network 2.1 Model: SqueezeNetV1.0 a b 0.844 1.688 2.532 3.376 4.22 3.692 3.751 MIN: 3.63 / MAX: 6.2 MIN: 3.69 / MAX: 6.32 1. (CXX) g++ options: -O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -std=c++11 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
Neural Magic DeepSparse Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.1 Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream a b 5 10 15 20 25 20.07 20.39
Neural Magic DeepSparse Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.1 Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream a b 11 22 33 44 55 49.82 49.03
srsRAN Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM OpenBenchmarking.org UE Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM a b 50 100 150 200 250 227.6 231.2 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -ldl -lpthread -lm
PostgreSQL Scaling Factor: 1 - Clients: 50 - Mode: Read Only - Average Latency OpenBenchmarking.org ms, Fewer Is Better PostgreSQL 15 Scaling Factor: 1 - Clients: 50 - Mode: Read Only - Average Latency a b 0.0146 0.0292 0.0438 0.0584 0.073 0.064 0.065 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
nginx Connections: 500 OpenBenchmarking.org Requests Per Second, More Is Better nginx 1.23.2 Connections: 500 a b 30K 60K 90K 120K 150K 135867.53 133787.04 1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2
PostgreSQL Scaling Factor: 100 - Clients: 50 - Mode: Read Only - Average Latency OpenBenchmarking.org ms, Fewer Is Better PostgreSQL 15 Scaling Factor: 100 - Clients: 50 - Mode: Read Only - Average Latency a b 0.0155 0.031 0.0465 0.062 0.0775 0.069 0.068 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
libavif avifenc Encoder Speed: 6, Lossless OpenBenchmarking.org Seconds, Fewer Is Better libavif avifenc 0.11 Encoder Speed: 6, Lossless a b 2 4 6 8 10 6.224 6.315 1. (CXX) g++ options: -O3 -fPIC -lm
PostgreSQL Scaling Factor: 1 - Clients: 1 - Mode: Read Write - Average Latency OpenBenchmarking.org ms, Fewer Is Better PostgreSQL 15 Scaling Factor: 1 - Clients: 1 - Mode: Read Write - Average Latency a b 0.0803 0.1606 0.2409 0.3212 0.4015 0.352 0.357 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
Dragonflydb Clients: 200 - Set To Get Ratio: 1:1 OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 0.6 Clients: 200 - Set To Get Ratio: 1:1 a b 1000K 2000K 3000K 4000K 5000K 4831108.14 4763803.93 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenFOAM Input: drivaerFastback, Small Mesh Size - Execution Time OpenBenchmarking.org Seconds, Fewer Is Better OpenFOAM 10 Input: drivaerFastback, Small Mesh Size - Execution Time a b 40 80 120 160 200 178.55 176.09 1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -ldynamicMesh -lgenericPatchFields -lOpenFOAM -ldl -lm
PostgreSQL Scaling Factor: 1 - Clients: 1 - Mode: Read Write OpenBenchmarking.org TPS, More Is Better PostgreSQL 15 Scaling Factor: 1 - Clients: 1 - Mode: Read Write a b 600 1200 1800 2400 3000 2838 2799 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
Cpuminer-Opt Algorithm: Magi OpenBenchmarking.org kH/s, More Is Better Cpuminer-Opt 3.20.3 Algorithm: Magi a b 200 400 600 800 1000 863.91 852.17 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
srsRAN Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM OpenBenchmarking.org eNb Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM a b 130 260 390 520 650 603.1 611.3 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -ldl -lpthread -lm
srsRAN Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM OpenBenchmarking.org UE Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM a b 50 100 150 200 250 239.2 242.4 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -ldl -lpthread -lm
AOM AV1 Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K a b 4 8 12 16 20 17.60 17.83 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
BRL-CAD VGR Performance Metric OpenBenchmarking.org VGR Performance Metric, More Is Better BRL-CAD 7.32.6 VGR Performance Metric a b 70K 140K 210K 280K 350K 308235 304290 1. (CXX) g++ options: -std=c++11 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -pedantic -ldl -lm
OpenVINO Model: Machine Translation EN To DE FP16 - Device: CPU OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.2.dev Model: Machine Translation EN To DE FP16 - Device: CPU a b 13 26 39 52 65 55.16 55.87 MIN: 45.9 / MAX: 75.35 MIN: 43.51 / MAX: 79.9 1. (CXX) g++ options: -O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -fsigned-char -ffunction-sections -fdata-sections -fno-strict-overflow -fwrapv -pie -ldl
OpenVINO Model: Machine Translation EN To DE FP16 - Device: CPU OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.2.dev Model: Machine Translation EN To DE FP16 - Device: CPU a b 20 40 60 80 100 108.68 107.32 1. (CXX) g++ options: -O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -fsigned-char -ffunction-sections -fdata-sections -fno-strict-overflow -fwrapv -pie -ldl
SMHasher Hash: MeowHash x86_64 AES-NI OpenBenchmarking.org MiB/sec, More Is Better SMHasher 2022-08-22 Hash: MeowHash x86_64 AES-NI a b 9K 18K 27K 36K 45K 42374.80 41856.36 1. (CXX) g++ options: -march=native -O3 -flto=auto -fno-fat-lto-objects
nginx Connections: 200 OpenBenchmarking.org Requests Per Second, More Is Better nginx 1.23.2 Connections: 200 a b 30K 60K 90K 120K 150K 137835.76 136170.71 1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2
Cpuminer-Opt Algorithm: scrypt OpenBenchmarking.org kH/s, More Is Better Cpuminer-Opt 3.20.3 Algorithm: scrypt a b 110 220 330 440 550 515.94 509.75 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenVINO Model: Vehicle Detection FP16 - Device: CPU OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.2.dev Model: Vehicle Detection FP16 - Device: CPU a b 2 4 6 8 10 8.35 8.25 MIN: 6.05 / MAX: 14.71 MIN: 4.84 / MAX: 17.53 1. (CXX) g++ options: -O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -fsigned-char -ffunction-sections -fdata-sections -fno-strict-overflow -fwrapv -pie -ldl
OpenVINO Model: Vehicle Detection FP16 - Device: CPU OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.2.dev Model: Vehicle Detection FP16 - Device: CPU a b 160 320 480 640 800 717.90 726.25 1. (CXX) g++ options: -O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -fsigned-char -ffunction-sections -fdata-sections -fno-strict-overflow -fwrapv -pie -ldl
Dragonflydb Clients: 50 - Set To Get Ratio: 1:1 OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 0.6 Clients: 50 - Set To Get Ratio: 1:1 a b 1000K 2000K 3000K 4000K 5000K 4846900.63 4791381.14 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
FFmpeg Encoder: libx265 - Scenario: Platform OpenBenchmarking.org Seconds, Fewer Is Better FFmpeg 5.1.2 Encoder: libx265 - Scenario: Platform a b 40 80 120 160 200 169.47 167.54 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
FFmpeg Encoder: libx265 - Scenario: Platform OpenBenchmarking.org FPS, More Is Better FFmpeg 5.1.2 Encoder: libx265 - Scenario: Platform a b 10 20 30 40 50 44.70 45.21 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
Dragonflydb Clients: 50 - Set To Get Ratio: 5:1 OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 0.6 Clients: 50 - Set To Get Ratio: 5:1 a b 1000K 2000K 3000K 4000K 5000K 4692530.64 4640660.41 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
Cpuminer-Opt Algorithm: Myriad-Groestl OpenBenchmarking.org kH/s, More Is Better Cpuminer-Opt 3.20.3 Algorithm: Myriad-Groestl a b 11K 22K 33K 44K 55K 50470 51020 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
srsRAN Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM OpenBenchmarking.org eNb Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM a b 140 280 420 560 700 650.9 643.9 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -ldl -lpthread -lm
PostgreSQL Scaling Factor: 1 - Clients: 50 - Mode: Read Only OpenBenchmarking.org TPS, More Is Better PostgreSQL 15 Scaling Factor: 1 - Clients: 50 - Mode: Read Only a b 200K 400K 600K 800K 1000K 779915 772349 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
nginx Connections: 100 OpenBenchmarking.org Requests Per Second, More Is Better nginx 1.23.2 Connections: 100 a b 30K 60K 90K 120K 150K 137650.85 136318.84 1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2
QuadRay Scene: 1 - Resolution: 4K OpenBenchmarking.org FPS, More Is Better QuadRay 2022.05.25 Scene: 1 - Resolution: 4K a b 5 10 15 20 25 19.81 20.00 1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread
srsRAN Test: OFDM_Test OpenBenchmarking.org Samples / Second, More Is Better srsRAN 22.04.1 Test: OFDM_Test a b 50M 100M 150M 200M 250M 222900000 225000000 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -ldl -lpthread -lm
SMHasher Hash: wyhash OpenBenchmarking.org MiB/sec, More Is Better SMHasher 2022-08-22 Hash: wyhash a b 5K 10K 15K 20K 25K 23038.00 22824.89 1. (CXX) g++ options: -march=native -O3 -flto=auto -fno-fat-lto-objects
libavif avifenc Encoder Speed: 6 OpenBenchmarking.org Seconds, Fewer Is Better libavif avifenc 0.11 Encoder Speed: 6 a b 0.848 1.696 2.544 3.392 4.24 3.769 3.736 1. (CXX) g++ options: -O3 -fPIC -lm
Neural Magic DeepSparse Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.1 Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream a b 20 40 60 80 100 95.71 94.87
Stress-NG Test: Glibc Qsort Data Sorting OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Glibc Qsort Data Sorting a b 60 120 180 240 300 285.64 283.15 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Neural Magic DeepSparse Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.1 Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream a b 3 6 9 12 15 10.44 10.54
libavif avifenc Encoder Speed: 0 OpenBenchmarking.org Seconds, Fewer Is Better libavif avifenc 0.11 Encoder Speed: 0 a b 20 40 60 80 100 84.04 84.76 1. (CXX) g++ options: -O3 -fPIC -lm
OpenVINO Model: Person Detection FP16 - Device: CPU OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.2.dev Model: Person Detection FP16 - Device: CPU a b 1.3275 2.655 3.9825 5.31 6.6375 5.90 5.85 1. (CXX) g++ options: -O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -fsigned-char -ffunction-sections -fdata-sections -fno-strict-overflow -fwrapv -pie -ldl
Timed Node.js Compilation Time To Compile OpenBenchmarking.org Seconds, Fewer Is Better Timed Node.js Compilation 18.8 Time To Compile a b 70 140 210 280 350 314.64 312.04
PostgreSQL Scaling Factor: 100 - Clients: 1 - Mode: Read Write - Average Latency OpenBenchmarking.org ms, Fewer Is Better PostgreSQL 15 Scaling Factor: 100 - Clients: 1 - Mode: Read Write - Average Latency a b 0.0833 0.1666 0.2499 0.3332 0.4165 0.370 0.367 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
Neural Magic DeepSparse Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.1 Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream a b 16 32 48 64 80 72.28 71.70
Timed CPython Compilation Build Configuration: Default OpenBenchmarking.org Seconds, Fewer Is Better Timed CPython Compilation 3.10.6 Build Configuration: Default a b 3 6 9 12 15 13.04 13.14
Neural Magic DeepSparse Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.1 Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream a b 20 40 60 80 100 82.99 83.64
OpenRadioss Model: Bird Strike on Windshield OpenBenchmarking.org Seconds, Fewer Is Better OpenRadioss 2022.10.13 Model: Bird Strike on Windshield a b 40 80 120 160 200 193.52 192.08
Stress-NG Test: Malloc OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Malloc a b 5M 10M 15M 20M 25M 24051732.07 24222411.86 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
PostgreSQL Scaling Factor: 100 - Clients: 1 - Mode: Read Write OpenBenchmarking.org TPS, More Is Better PostgreSQL 15 Scaling Factor: 100 - Clients: 1 - Mode: Read Write a b 600 1200 1800 2400 3000 2705 2724 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
OpenRadioss Model: Rubber O-Ring Seal Installation OpenBenchmarking.org Seconds, Fewer Is Better OpenRadioss 2022.10.13 Model: Rubber O-Ring Seal Installation a b 20 40 60 80 100 78.35 77.83
FFmpeg Encoder: libx264 - Scenario: Video On Demand OpenBenchmarking.org Seconds, Fewer Is Better FFmpeg 5.1.2 Encoder: libx264 - Scenario: Video On Demand a b 20 40 60 80 100 109.33 110.05 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
PostgreSQL Scaling Factor: 100 - Clients: 100 - Mode: Read Only - Average Latency OpenBenchmarking.org ms, Fewer Is Better PostgreSQL 15 Scaling Factor: 100 - Clients: 100 - Mode: Read Only - Average Latency a b 0.0344 0.0688 0.1032 0.1376 0.172 0.152 0.153 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
FFmpeg Encoder: libx264 - Scenario: Video On Demand OpenBenchmarking.org FPS, More Is Better FFmpeg 5.1.2 Encoder: libx264 - Scenario: Video On Demand a b 15 30 45 60 75 69.28 68.83 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
EnCodec Target Bandwidth: 24 kbps OpenBenchmarking.org Seconds, Fewer Is Better EnCodec 0.1.1 Target Bandwidth: 24 kbps a b 7 14 21 28 35 28.90 28.71
SMHasher Hash: fasthash32 OpenBenchmarking.org MiB/sec, More Is Better SMHasher 2022-08-22 Hash: fasthash32 a b 1400 2800 4200 5600 7000 6548.49 6508.85 1. (CXX) g++ options: -march=native -O3 -flto=auto -fno-fat-lto-objects
Timed Wasmer Compilation Time To Compile OpenBenchmarking.org Seconds, Fewer Is Better Timed Wasmer Compilation 2.3 Time To Compile a b 8 16 24 32 40 35.70 35.92 1. (CC) gcc options: -m64 -ldl -lgcc_s -lutil -lrt -lpthread -lm -lc -pie -nodefaultlibs
AOM AV1 Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K a b 3 6 9 12 15 10.08 10.14 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
Mobile Neural Network Model: mobilenet-v1-1.0 OpenBenchmarking.org ms, Fewer Is Better Mobile Neural Network 2.1 Model: mobilenet-v1-1.0 a b 0.7315 1.463 2.1945 2.926 3.6575 3.251 3.233 MIN: 3.21 / MAX: 3.45 MIN: 3.19 / MAX: 3.48 1. (CXX) g++ options: -O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -std=c++11 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
Y-Cruncher Pi Digits To Calculate: 500M OpenBenchmarking.org Seconds, Fewer Is Better Y-Cruncher 0.7.10.9513 Pi Digits To Calculate: 500M a b 3 6 9 12 15 10.28 10.23
FFmpeg Encoder: libx265 - Scenario: Video On Demand OpenBenchmarking.org Seconds, Fewer Is Better FFmpeg 5.1.2 Encoder: libx265 - Scenario: Video On Demand a b 40 80 120 160 200 168.30 167.40 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
FFmpeg Encoder: libx265 - Scenario: Video On Demand OpenBenchmarking.org FPS, More Is Better FFmpeg 5.1.2 Encoder: libx265 - Scenario: Video On Demand a b 10 20 30 40 50 45.01 45.25 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
Stress-NG Test: Memory Copying OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Memory Copying a b 1300 2600 3900 5200 6500 6023.06 6055.08 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Cpuminer-Opt Algorithm: x25x OpenBenchmarking.org kH/s, More Is Better Cpuminer-Opt 3.20.3 Algorithm: x25x a b 200 400 600 800 1000 885.22 889.91 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
PostgreSQL Scaling Factor: 100 - Clients: 100 - Mode: Read Only OpenBenchmarking.org TPS, More Is Better PostgreSQL 15 Scaling Factor: 100 - Clients: 100 - Mode: Read Only a b 140K 280K 420K 560K 700K 656243 652856 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
OpenVINO Model: Person Detection FP32 - Device: CPU OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.2.dev Model: Person Detection FP32 - Device: CPU a b 200 400 600 800 1000 1015.76 1020.98 MIN: 925.99 / MAX: 1191.5 MIN: 907.77 / MAX: 1188.53 1. (CXX) g++ options: -O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -fsigned-char -ffunction-sections -fdata-sections -fno-strict-overflow -fwrapv -pie -ldl
OpenVINO Model: Person Detection FP32 - Device: CPU OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.2.dev Model: Person Detection FP32 - Device: CPU a b 1.3208 2.6416 3.9624 5.2832 6.604 5.87 5.84 1. (CXX) g++ options: -O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -fsigned-char -ffunction-sections -fdata-sections -fno-strict-overflow -fwrapv -pie -ldl
oneDNN Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.7 Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU a b 300 600 900 1200 1500 1468.84 1461.51 MIN: 1463.43 MIN: 1456.01 1. (CXX) g++ options: -O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -fopenmp -msse4.1 -fPIC -pie -ldl
FLAC Audio Encoding WAV To FLAC OpenBenchmarking.org Seconds, Fewer Is Better FLAC Audio Encoding 1.4 WAV To FLAC a b 3 6 9 12 15 11.73 11.79 1. (CXX) g++ options: -O3 -fvisibility=hidden -logg -lm
Blender Blend File: Fishy Cat - Compute: CPU-Only OpenBenchmarking.org Seconds, Fewer Is Better Blender 3.3 Blend File: Fishy Cat - Compute: CPU-Only a b 20 40 60 80 100 85.49 85.08
Stress-NG Test: MMAP OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: MMAP a b 60 120 180 240 300 288.78 290.15 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
OpenVINO 2022.2.dev - Model: Person Detection FP16 - Device: CPU (ms; fewer is better): a: 1014.92 (min 862.19 / max 1179.73), b: 1019.72 (min 606.77 / max 1204.46) [g++ options: -O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -fsigned-char -ffunction-sections -fdata-sections -fno-strict-overflow -fwrapv -pie -ldl]
oneDNN 2.7 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms; fewer is better): a: 1461.42 (min 1455.91), b: 1468.12 (min 1463.07) [g++ options: -O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -fopenmp -msse4.1 -fPIC -pie -ldl]
QuadRay 2022.05.25 - Scene: 1 - Resolution: 1080p (FPS; more is better): a: 76.23, b: 76.57 [g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread]
srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM (eNb Mb/s; more is better): a: 601.3, b: 598.7 [g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -ldl -lpthread -lm]
SMHasher 2022-08-22 - Hash: Spooky32 (MiB/sec; more is better): a: 15384.04, b: 15317.97 [g++ options: -march=native -O3 -flto=auto -fno-fat-lto-objects]
AOM AV1 3.5 - Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p (FPS; more is better): a: 44.20, b: 44.39 [g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm]
FFmpeg 5.1.2 - Encoder: libx264 - Scenario: Platform (FPS; more is better): a: 69.14, b: 69.43 [g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma]
FFmpeg 5.1.2 - Encoder: libx264 - Scenario: Platform (seconds; fewer is better): a: 109.55, b: 109.10
Timed CPython Compilation 3.10.6 - Build Configuration: Released Build, PGO + LTO Optimized (seconds; fewer is better): a: 183.36, b: 182.61
nekRS 22.0 - Input: TurboPipe Periodic (FLOP/s; more is better): a: 64871200000, b: 65118300000 [g++ options: -fopenmp -O2 -march=native -mtune=native -ftree-vectorize -lmpi_cxx -lmpi]
SMHasher 2022-08-22 - Hash: t1ha2_atonce (MiB/sec; more is better): a: 14943.50, b: 14998.97
OpenVINO 2022.2.dev - Model: Person Vehicle Bike Detection FP16 - Device: CPU (FPS; more is better): a: 1249.34, b: 1244.72
Xmrig 6.18.1 - Variant: Wownero - Hash Count: 1M (H/s; more is better): a: 15466.9, b: 15523.4 [g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc]
OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (FPS; more is better): a: 33642.44, b: 33761.40
EnCodec 0.1.1 - Target Bandwidth: 3 kbps (seconds; fewer is better): a: 24.99, b: 25.06
PostgreSQL 15 - Scaling Factor: 1 - Clients: 50 - Mode: Read Write - Average Latency (ms; fewer is better): a: 18.37, b: 18.42 [gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm]
PostgreSQL 15 - Scaling Factor: 1 - Clients: 50 - Mode: Read Write (TPS; more is better): a: 2722, b: 2714
OpenVINO 2022.2.dev - Model: Face Detection FP16 - Device: CPU (FPS; more is better): a: 10.69, b: 10.72
FFmpeg 5.1.2 - Encoder: libx264 - Scenario: Upload (FPS; more is better): a: 18.13, b: 18.08
FFmpeg 5.1.2 - Encoder: libx264 - Scenario: Upload (seconds; fewer is better): a: 139.28, b: 139.66
AOM AV1 3.5 - Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p (FPS; more is better): a: 148.98, b: 148.57
Blender 3.3 - Blend File: Classroom - Compute: CPU-Only (seconds; fewer is better): a: 169.11, b: 169.57
oneDNN 2.7 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms; fewer is better): a: 751.24 (min 746.29), b: 749.22 (min 744.44)
Timed PHP Compilation 8.1.9 - Time To Compile (seconds; fewer is better): a: 39.37, b: 39.47
Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (ms/batch; fewer is better): a: 114.31, b: 114.00
Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (items/sec; more is better): a: 8.7480, b: 8.7713
spaCy 3.4.1 - Model: en_core_web_trf (tokens/sec; more is better): a: 1514, b: 1518
FFmpeg 5.1.2 - Encoder: libx265 - Scenario: Upload (seconds; fewer is better): a: 113.65, b: 113.36
Stress-NG 0.14.06 - Test: SENDFILE (Bogo Ops/s; more is better): a: 386040.06, b: 387020.54 [gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread]
oneDNN 2.7 - Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU (ms; fewer is better): a: 1.84240 (min 1.8), b: 1.83776 (min 1.79)
PostgreSQL 15 - Scaling Factor: 1 - Clients: 100 - Mode: Read Write - Average Latency (ms; fewer is better): a: 41.06, b: 40.96
PostgreSQL 15 - Scaling Factor: 1 - Clients: 100 - Mode: Read Write (TPS; more is better): a: 2436, b: 2442
EnCodec 0.1.1 - Target Bandwidth: 6 kbps (seconds; fewer is better): a: 25.19, b: 25.25
Stress-NG 0.14.06 - Test: Semaphores (Bogo Ops/s; more is better): a: 2647043.16, b: 2653440.36
OpenVINO 2022.2.dev - Model: Vehicle Detection FP16-INT8 - Device: CPU (ms; fewer is better): a: 4.33 (min 3.47 / max 11.88), b: 4.34 (min 3.47 / max 12)
OpenVINO 2022.2.dev - Model: Face Detection FP16 - Device: CPU (ms; fewer is better): a: 559.91 (min 532.56 / max 595.37), b: 558.64 (min 531.73 / max 584.86)
libavif avifenc 0.11 - Encoder Speed: 10, Lossless (seconds; fewer is better): a: 3.535, b: 3.543 [g++ options: -O3 -fPIC -lm]
FFmpeg 5.1.2 - Encoder: libx265 - Scenario: Upload (FPS; more is better): a: 22.22, b: 22.27
EnCodec 0.1.1 - Target Bandwidth: 1.5 kbps (seconds; fewer is better): a: 24.33, b: 24.28
oneDNN 2.7 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms; fewer is better): a: 3.28570 (min 3.23), b: 3.27851 (min 3.23)
OpenRadioss 2022.10.13 - Model: Bumper Beam (seconds; fewer is better): a: 100.34, b: 100.56
FFmpeg 5.1.2 - Encoder: libx265 - Scenario: Live (FPS; more is better): a: 114.72, b: 114.97
Unpacking The Linux Kernel 5.19 - linux-5.19.tar.xz (seconds; fewer is better): a: 4.64, b: 4.63
Stress-NG 0.14.06 - Test: System V Message Passing (Bogo Ops/s; more is better): a: 12824745.14, b: 12852176.64
AOM AV1 3.5 - Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p (FPS; more is better): a: 19.19, b: 19.23
OpenVINO 2022.2.dev - Model: Person Vehicle Bike Detection FP16 - Device: CPU (ms; fewer is better): a: 4.80 (min 3.6 / max 10.09), b: 4.81 (min 3.75 / max 12.95)
QuadRay 2022.05.25 - Scene: 3 - Resolution: 4K (FPS; more is better): a: 4.83, b: 4.84
FFmpeg 5.1.2 - Encoder: libx265 - Scenario: Live (seconds; fewer is better): a: 44.02, b: 43.93
Stream 2013-01-17 - Type: Add (MB/s; more is better): a: 44030.5, b: 43942.1 [gcc options: -mcmodel=medium -O3 -march=native -fopenmp]
OpenFOAM 10 - Input: drivaerFastback, Medium Mesh Size - Execution Time (seconds; fewer is better): a: 2217.78, b: 2222.24 [g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -ldynamicMesh -lgenericPatchFields -lOpenFOAM -ldl -lm]
OpenRadioss 2022.10.13 - Model: INIVOL and Fluid Structure Interaction Drop Container (seconds; fewer is better): a: 337.94, b: 338.61
TensorFlow 2.10 - Device: CPU - Batch Size: 16 - Model: AlexNet (images/sec; more is better): a: 158.36, b: 158.05
miniBUDE 20210901 - Implementation: OpenMP - Input Deck: BM2 (Billion Interactions/s; more is better): a: 40.97, b: 41.05 [gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm]
Stress-NG 0.14.06 - Test: Vector Math (Bogo Ops/s; more is better): a: 107491.73, b: 107282.73
miniBUDE 20210901 - Implementation: OpenMP - Input Deck: BM2 (GFInst/s; more is better): a: 1024.35, b: 1026.34
Stress-NG 0.14.06 - Test: CPU Stress (Bogo Ops/s; more is better): a: 44344.82, b: 44430.82
Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream (items/sec; more is better): a: 56.91, b: 57.01
Blender 3.3 - Blend File: BMW27 - Compute: CPU-Only (seconds; fewer is better): a: 66.07, b: 66.19
Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream (ms/batch; fewer is better): a: 17.57, b: 17.54
SMHasher 2022-08-22 - Hash: FarmHash128 (MiB/sec; more is better): a: 15702.24, b: 15674.60
Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (items/sec; more is better): a: 223.14, b: 223.53
7-Zip Compression 22.01 - Test: Decompression Rating (MIPS; more is better): a: 136525, b: 136291 [g++ options: -lpthread -ldl -O2 -fPIC]
oneDNN 2.7 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms; fewer is better): a: 1464.71 (min 1458.51), b: 1462.28 (min 1457.55)
TensorFlow 2.10 - Device: CPU - Batch Size: 64 - Model: GoogLeNet (images/sec; more is better): a: 123.2, b: 123.4
QuadRay 2022.05.25 - Scene: 3 - Resolution: 1080p (FPS; more is better): a: 18.52, b: 18.55
Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (ms/batch; fewer is better): a: 26.87, b: 26.82
Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (ms/batch; fewer is better): a: 41.82, b: 41.76
Cpuminer-Opt 3.20.3 - Algorithm: Ringcoin (kH/s; more is better): a: 3286.46, b: 3291.46 [g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp]
7-Zip Compression 22.01 - Test: Compression Rating (MIPS; more is better): a: 154326, b: 154559
FFmpeg 5.1.2 - Encoder: libx264 - Scenario: Live (FPS; more is better): a: 299.42, b: 298.97
oneDNN 2.7 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms; fewer is better): a: 749.50 (min 744.26), b: 748.41 (min 743.53)
FFmpeg 5.1.2 - Encoder: libx264 - Scenario: Live (seconds; fewer is better): a: 16.87, b: 16.89
Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (items/sec; more is better): a: 143.41, b: 143.61
Neural Magic DeepSparse 1.1 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (ms/batch; fewer is better): a: 63.41, b: 63.33
TensorFlow 2.10 - Device: CPU - Batch Size: 32 - Model: AlexNet (images/sec; more is better): a: 227.01, b: 227.31
Dragonflydb 0.6 - Clients: 200 - Set To Get Ratio: 5:1 (Ops/sec; more is better): a: 4624845.97, b: 4630859.67 [g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre]
Cpuminer-Opt 3.20.3 - Algorithm: Skeincoin (kH/s; more is better): a: 209170, b: 209440
oneDNN 2.7 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms; fewer is better): a: 0.547255 (min 0.53), b: 0.546550 (min 0.53)
Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (ms/batch; fewer is better): a: 630.95, b: 631.76
TensorFlow 2.10 - Device: CPU - Batch Size: 64 - Model: ResNet-50 (images/sec; more is better): a: 39.48, b: 39.53
Stream 2013-01-17 - Type: Triad (MB/s; more is better): a: 44032.4, b: 44086.5
OpenRadioss 2022.10.13 - Model: Cell Phone Drop Test (seconds; fewer is better): a: 66.53, b: 66.45
SMHasher 2022-08-22 - Hash: t1ha0_aes_avx2 x86_64 (MiB/sec; more is better): a: 75924.15, b: 76015.27
OpenVINO 2022.2.dev - Model: Vehicle Detection FP16-INT8 - Device: CPU (FPS; more is better): a: 1384.25, b: 1382.60
TensorFlow 2.10 - Device: CPU - Batch Size: 256 - Model: AlexNet (images/sec; more is better): a: 348.90, b: 348.49
Neural Magic DeepSparse 1.1 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (items/sec; more is better): a: 94.59, b: 94.70
Stream 2013-01-17 - Type: Copy (MB/s; more is better): a: 60212.6, b: 60281.9
oneDNN 2.7 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms; fewer is better): a: 5.75764 (min 5.67), b: 5.75116 (min 5.63)
TensorFlow 2.10 - Device: CPU - Batch Size: 32 - Model: ResNet-50 (images/sec; more is better): a: 39.35, b: 39.31
TensorFlow 2.10 - Device: CPU - Batch Size: 64 - Model: AlexNet (images/sec; more is better): a: 289.65, b: 289.36
srsRAN 22.04.1 - Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM (eNb Mb/s; more is better): a: 200.6, b: 200.8
TensorFlow 2.10 - Device: CPU - Batch Size: 256 - Model: GoogLeNet (images/sec; more is better): a: 121.77, b: 121.89
Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (items/sec; more is better): a: 9.4344, b: 9.4433
Cpuminer-Opt 3.20.3 - Algorithm: LBC, LBRY Credits (kH/s; more is better): a: 109350, b: 109450
Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (ms/batch; fewer is better): a: 632.76, b: 632.21
Stream 2013-01-17 - Type: Scale (MB/s; more is better): a: 39775.5, b: 39742.5
TensorFlow 2.10 - Device: CPU - Batch Size: 512 - Model: GoogLeNet (images/sec; more is better): a: 121.89, b: 121.98
oneDNN 2.7 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms; fewer is better): a: 749.60 (min 744.63), b: 749.09 (min 744.22)
spaCy 3.4.1 - Model: en_core_web_lg (tokens/sec; more is better): a: 18996, b: 19009
OpenVINO 2022.2.dev - Model: Face Detection FP16-INT8 - Device: CPU (ms; fewer is better): a: 287.38 (min 273.25 / max 343.02), b: 287.56 (min 272.86 / max 322.17)
oneDNN 2.7 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms; fewer is better): a: 0.553097 (min 0.54), b: 0.553435 (min 0.54)
Stress-NG 0.14.06 - Test: Glibc C String Functions (Bogo Ops/s; more is better): a: 3786314.58, b: 3784061.09
Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (items/sec; more is better): a: 137.93, b: 138.01
oneDNN 2.7 - Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU (ms; fewer is better): a: 5.19899 (min 5.05), b: 5.19599 (min 5.07)
Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (items/sec; more is better): a: 91.57, b: 91.52
oneDNN 2.7 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms; fewer is better): a: 5.38305 (min 5.3), b: 5.38606 (min 5.31)
oneDNN 2.7 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (ms; fewer is better): a: 0.179975 (min 0.17), b: 0.179875 (min 0.17)
Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (ms/batch; fewer is better): a: 7.2448, b: 7.2408
TensorFlow 2.10 - Device: CPU - Batch Size: 256 - Model: ResNet-50 (images/sec; more is better): a: 39.80, b: 39.82
oneDNN 2.7 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms; fewer is better): a: 2.20161 (min 2.04), b: 2.20270 (min 2.04)
OpenVINO 2022.2.dev - Model: Face Detection FP16-INT8 - Device: CPU (FPS; more is better): a: 20.84, b: 20.83
oneDNN 2.7 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU (ms; fewer is better): a: 0.293989 (min 0.28), b: 0.293853 (min 0.28)
TensorFlow 2.10 - Device: CPU - Batch Size: 32 - Model: GoogLeNet (images/sec; more is better): a: 124.88, b: 124.83
Y-Cruncher 0.7.10.9513 - Pi Digits To Calculate: 1B (seconds; fewer is better): a: 21.89, b: 21.88
Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (items/sec; more is better): a: 9.4567, b: 9.4533
Blender 3.3 - Blend File: Barbershop - Compute: CPU-Only (seconds; fewer is better): a: 635.91, b: 635.69
TensorFlow 2.10 - Device: CPU - Batch Size: 16 - Model: ResNet-50 (images/sec; more is better): a: 38.17, b: 38.16
Neural Magic DeepSparse 1.1 - Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream (ms/batch; fewer is better): a: 13.54, b: 13.54
miniBUDE 20210901 - Implementation: OpenMP - Input Deck: BM1 (GFInst/s; more is better): a: 1007.17, b: 1007.41
OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16 - Device: CPU (FPS; more is better): a: 1071.83, b: 1071.58
Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (items/sec; more is better): a: 8.7785, b: 8.7765
Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (ms/batch; fewer is better): a: 113.91, b: 113.94
miniBUDE 20210901 - Implementation: OpenMP - Input Deck: BM1 (Billion Interactions/s; more is better): a: 40.29, b: 40.30
OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (FPS; more is better): a: 2168.46, b: 2168.92
Neural Magic DeepSparse 1.1 - Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream (items/sec; more is better): a: 73.82, b: 73.80
oneDNN Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.7 Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU a b 0.4172 0.8344 1.2516 1.6688 2.086 1.85387 1.85422 MIN: 1.78 MIN: 1.78 1. (CXX) g++ options: -O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -fopenmp -msse4.1 -fPIC -pie -ldl
Stress-NG Test: MEMFD OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: MEMFD a b 200 400 600 800 1000 1052.54 1052.36 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Neural Magic DeepSparse Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.1 Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream a b 15 30 45 60 75 65.50 65.51
oneDNN Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.7 Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU a b 0.7011 1.4022 2.1033 2.8044 3.5055 3.11616 3.11582 MIN: 3.01 MIN: 3.02 1. (CXX) g++ options: -O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -fopenmp -msse4.1 -fPIC -pie -ldl
TensorFlow Device: CPU - Batch Size: 512 - Model: AlexNet OpenBenchmarking.org images/sec, More Is Better TensorFlow 2.10 Device: CPU - Batch Size: 512 - Model: AlexNet a b 80 160 240 320 400 357.68 357.70
OpenVINO Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.2.dev Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU a b 10K 20K 30K 40K 50K 47791.89 47792.77 1. (CXX) g++ options: -O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -fsigned-char -ffunction-sections -fdata-sections -fno-strict-overflow -fwrapv -pie -ldl
OpenFOAM Input: motorBike - Execution Time OpenBenchmarking.org Seconds, Fewer Is Better OpenFOAM 10 Input: motorBike - Execution Time b 0.5981 1.1962 1.7943 2.3924 2.9905 2.65814 1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -ldynamicMesh -lgenericPatchFields -lOpenFOAM -ldl -lm
OpenVINO Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.2.dev Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU a b 0.0563 0.1126 0.1689 0.2252 0.2815 0.25 0.25 MIN: 0.16 / MAX: 10.57 MIN: 0.15 / MAX: 7.46 1. (CXX) g++ options: -O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -fsigned-char -ffunction-sections -fdata-sections -fno-strict-overflow -fwrapv -pie -ldl
OpenVINO Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.2.dev Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU a b 0.0788 0.1576 0.2364 0.3152 0.394 0.35 0.35 MIN: 0.22 / MAX: 7.64 MIN: 0.22 / MAX: 7.62 1. (CXX) g++ options: -O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -fsigned-char -ffunction-sections -fdata-sections -fno-strict-overflow -fwrapv -pie -ldl
OpenVINO Model: Weld Porosity Detection FP16-INT8 - Device: CPU OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.2.dev Model: Weld Porosity Detection FP16-INT8 - Device: CPU a b 1.2443 2.4886 3.7329 4.9772 6.2215 5.53 5.53 MIN: 2.94 / MAX: 12.44 MIN: 2.88 / MAX: 12.68 1. (CXX) g++ options: -O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -fsigned-char -ffunction-sections -fdata-sections -fno-strict-overflow -fwrapv -pie -ldl
OpenVINO Model: Weld Porosity Detection FP16 - Device: CPU OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.2.dev Model: Weld Porosity Detection FP16 - Device: CPU a b 1.2578 2.5156 3.7734 5.0312 6.289 5.59 5.59 MIN: 3.05 / MAX: 12.81 MIN: 2.83 / MAX: 10.65 1. (CXX) g++ options: -O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -fsigned-char -ffunction-sections -fdata-sections -fno-strict-overflow -fwrapv -pie -ldl
Blender Blend File: Pabellon Barcelona - Compute: CPU-Only OpenBenchmarking.org Seconds, Fewer Is Better Blender 3.3 Blend File: Pabellon Barcelona - Compute: CPU-Only a b 50 100 150 200 250 211.89 211.89
TensorFlow Device: CPU - Batch Size: 16 - Model: GoogLeNet OpenBenchmarking.org images/sec, More Is Better TensorFlow 2.10 Device: CPU - Batch Size: 16 - Model: GoogLeNet a b 30 60 90 120 150 123.22 123.22
PostgreSQL Scaling Factor: 1 - Clients: 1 - Mode: Read Only - Average Latency OpenBenchmarking.org ms, Fewer Is Better PostgreSQL 15 Scaling Factor: 1 - Clients: 1 - Mode: Read Only - Average Latency a b 0.0038 0.0076 0.0114 0.0152 0.019 0.017 0.017 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
AOM AV1 Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p a b 0.2273 0.4546 0.6819 0.9092 1.1365 1.01 1.01 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
srsRAN Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM OpenBenchmarking.org UE Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM a b 50 100 150 200 250 251 251 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -ldl -lpthread -lm
SMHasher Hash: MeowHash x86_64 AES-NI OpenBenchmarking.org cycles/hash, Fewer Is Better SMHasher 2022-08-22 Hash: MeowHash x86_64 AES-NI a b 13 26 39 52 65 57.82 58.32 1. (CXX) g++ options: -march=native -O3 -flto=auto -fno-fat-lto-objects
SMHasher Hash: t1ha0_aes_avx2 x86_64 OpenBenchmarking.org cycles/hash, Fewer Is Better SMHasher 2022-08-22 Hash: t1ha0_aes_avx2 x86_64 a b 7 14 21 28 35 27.63 27.51 1. (CXX) g++ options: -march=native -O3 -flto=auto -fno-fat-lto-objects
SMHasher Hash: FarmHash32 x86_64 AVX OpenBenchmarking.org cycles/hash, Fewer Is Better SMHasher 2022-08-22 Hash: FarmHash32 x86_64 AVX a b 8 16 24 32 40 35.61 34.03 1. (CXX) g++ options: -march=native -O3 -flto=auto -fno-fat-lto-objects
SMHasher Hash: t1ha2_atonce OpenBenchmarking.org cycles/hash, Fewer Is Better SMHasher 2022-08-22 Hash: t1ha2_atonce a b 6 12 18 24 30 27.27 27.24 1. (CXX) g++ options: -march=native -O3 -flto=auto -fno-fat-lto-objects
SMHasher Hash: FarmHash128 OpenBenchmarking.org cycles/hash, Fewer Is Better SMHasher 2022-08-22 Hash: FarmHash128 a b 14 28 42 56 70 64.58 64.66 1. (CXX) g++ options: -march=native -O3 -flto=auto -fno-fat-lto-objects
SMHasher Hash: fasthash32 OpenBenchmarking.org cycles/hash, Fewer Is Better SMHasher 2022-08-22 Hash: fasthash32 a b 7 14 21 28 35 30.06 30.10 1. (CXX) g++ options: -march=native -O3 -flto=auto -fno-fat-lto-objects
SMHasher Hash: Spooky32 OpenBenchmarking.org cycles/hash, Fewer Is Better SMHasher 2022-08-22 Hash: Spooky32 a b 8 16 24 32 40 36.37 36.58 1. (CXX) g++ options: -march=native -O3 -flto=auto -fno-fat-lto-objects
SMHasher Hash: SHA3-256 OpenBenchmarking.org cycles/hash, Fewer Is Better SMHasher 2022-08-22 Hash: SHA3-256 a b 500 1000 1500 2000 2500 2499.01 2530.42 1. (CXX) g++ options: -march=native -O3 -flto=auto -fno-fat-lto-objects
SMHasher Hash: wyhash OpenBenchmarking.org cycles/hash, Fewer Is Better SMHasher 2022-08-22 Hash: wyhash a b 5 10 15 20 25 19.61 19.92 1. (CXX) g++ options: -march=native -O3 -flto=auto -fno-fat-lto-objects
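Most tests above differ only fractionally between runs a and b; the SMHasher results show the largest spreads. A minimal sketch of how one might quantify the run-to-run delta for such paired results (plain Python; the values are copied from the SMHasher rows above, where fewer cycles/hash is better):

```python
# Paired results (run a, run b) in cycles/hash, from the SMHasher rows above.
results = {
    "wyhash": (19.61, 19.92),
    "SHA3-256": (2499.01, 2530.42),
    "FarmHash32 x86_64 AVX": (35.61, 34.03),
}

def pct_delta(a: float, b: float) -> float:
    """Relative difference of run b versus run a, in percent (positive = b slower here)."""
    return round((b - a) / a * 100, 2)

for name, (a, b) in results.items():
    print(f"{name}: a={a}, b={b}, delta={pct_delta(a, b):+.2f}%")
```

Deltas of a percent or two, as seen for wyhash and SHA3-256, are within ordinary run-to-run noise for cycle-level microbenchmarks; only FarmHash32's roughly 4% swing stands out.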
Phoronix Test Suite v10.8.5