Cascade Lake Refresh: tests for a future article. 2 x Intel Xeon Gold 5220R testing with a TYAN S7106 (V2.01.B40 BIOS) and ASPEED graphics on Ubuntu 20.04 via the Phoronix Test Suite.
HTML result view exported from https://openbenchmarking.org/result/2212183-NE-CASCADELA10&grr&sor.
Cascade Lake Refresh - System Configuration (identical for runs a and b):
Processor: 2 x Intel Xeon Gold 5220R @ 3.90GHz (36 Cores / 72 Threads)
Motherboard: TYAN S7106 (V2.01.B40 BIOS)
Chipset: Intel Sky Lake-E DMI3 Registers
Memory: 94GB
Disk: 500GB Samsung SSD 860
Graphics: ASPEED
Monitor: VE228
Network: 2 x Intel I210 + 2 x QLogic cLOM8214 1/10GbE
OS: Ubuntu 20.04
Kernel: 5.9.0-050900rc6-generic (x86_64) 20200920
Desktop: GNOME Shell 3.36.4
Display Server: X Server 1.20.13
Compiler: GCC 9.4.0
File-System: ext4
Screen Resolution: 1920x1080
Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-Av3uEd/gcc-9-9.4.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Disk Details: MQ-DEADLINE / errors=remount-ro,relatime,rw / Block Size: 4096
Processor Details: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0x5003102
Java Details: OpenJDK Runtime Environment (build 11.0.17+8-post-Ubuntu-1ubuntu220.04)
Python Details: Python 2.7.18 + Python 3.8.10
Security Details: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Mitigation of TSX disabled
Results Overview: raw side-by-side values for configurations a and b across the full benchmark set (BRL-CAD, OpenFOAM, WebP2, TensorFlow, Blender, LAMMPS, OpenVKL, Mobile Neural Network, NCNN, OSPRay Studio, miniBUDE, PostgreSQL, Apache Spark, CockroachDB, Stargate, OpenVINO, DeepSparse, SVT-AV1, stress-ng, and others) are available on the linked OpenBenchmarking.org result page; the detailed per-test results follow below.
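As a reading aid for the a/b pairs below, here is a minimal Python sketch (an illustration only, not part of the Phoronix Test Suite or of this export) for computing the relative difference between the two configurations for any single result:

# Hypothetical helper, not from the Phoronix Test Suite: compares one a/b result pair.
def relative_diff(a: float, b: float, lower_is_better: bool) -> float:
    """Return the percent advantage of configuration b over configuration a.

    Positive means b performed better, negative means a performed better.
    For "Fewer Is Better" metrics (seconds, ms, latency) a lower value wins,
    so the ratio is inverted.
    """
    if lower_is_better:
        return (a / b - 1.0) * 100.0
    return (b / a - 1.0) * 100.0

# Worked example with the BRL-CAD VGR Performance Metric below (More Is Better):
# a = 257085, b = 251911, so b trails a by roughly 2%.
print(f"{relative_diff(257085, 251911, lower_is_better=False):+.2f}%")  # prints -2.01%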
BRL-CAD 7.32.6 - VGR Performance Metric (VGR Performance Metric, More Is Better): a: 257085, b: 251911. 1. (CXX) g++ options: -std=c++11 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -pedantic -pthread -ldl -lm
OpenFOAM 10 - Input: drivaerFastback, Medium Mesh Size - Execution Time (Seconds, Fewer Is Better): a: 899.70, b: 898.34. 1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm
OpenFOAM 10 - Input: drivaerFastback, Medium Mesh Size - Mesh Time (Seconds, Fewer Is Better): a: 187.22, b: 187.20. 1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm
WebP2 Image Encode 20220823 - Encode Settings: Quality 100, Lossless Compression (MP/s, More Is Better): a: 0.03, b: 0.03. 1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl -lpthread
TensorFlow 2.10 - Device: CPU - Batch Size: 64 - Model: VGG-16 (images/sec, More Is Better): a: 10.85, b: 10.96.
Blender 3.4 - Blend File: Barbershop - Compute: CPU-Only (Seconds, Fewer Is Better): a: 592.36, b: 591.97.
LAMMPS Molecular Dynamics Simulator 23Jun2022 - Model: 20k Atoms (ns/day, More Is Better): a: 15.75, b: 15.68. 1. (CXX) g++ options: -O3 -pthread -lm -ldl
Timed Linux Kernel Compilation 6.1 - Build: allmodconfig (Seconds, Fewer Is Better): a: 543.75, b: 545.45.
JPEG XL libjxl 0.7 - Input: JPEG - Quality: 100 (MP/s, More Is Better): a: 0.49, b: 0.46. 1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -pthread -latomic
JPEG XL libjxl 0.7 - Input: PNG - Quality: 100 (MP/s, More Is Better): a: 0.49, b: 0.48. 1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -pthread -latomic
OpenVKL 1.3.1 - Benchmark: vklBenchmark ISPC (Items / Sec, More Is Better): a: 362 (MIN: 53 / MAX: 2735), b: 353 (MIN: 52 / MAX: 2736).
OpenVKL 1.3.1 - Benchmark: vklBenchmark Scalar (Items / Sec, More Is Better): a: 176 (MIN: 22 / MAX: 2104), b: 171 (MIN: 22 / MAX: 1573).
Timed CPython Compilation 3.10.6 - Build Configuration: Released Build, PGO + LTO Optimized (Seconds, Fewer Is Better): a: 382.19, b: 382.30.
TensorFlow 2.10 - Device: CPU - Batch Size: 64 - Model: ResNet-50 (images/sec, More Is Better): a: 19.21, b: 22.42.
TensorFlow 2.10 - Device: CPU - Batch Size: 32 - Model: VGG-16 (images/sec, More Is Better): a: 10.22, b: 10.18.
OpenRadioss 2022.10.13 - Model: INIVOL and Fluid Structure Interaction Drop Container (Seconds, Fewer Is Better): a: 275.47, b: 275.88.
Timed Node.js Compilation 18.8 - Time To Compile (Seconds, Fewer Is Better): a: 275.44, b: 276.40.
nekRS 22.0 - Input: TurboPipe Periodic (FLOP/s, More Is Better): a: 124608000000, b: 124198000000. 1. (CXX) g++ options: -fopenmp -O2 -march=native -mtune=native -ftree-vectorize -pthread -lmpi_cxx -lmpi
Mobile Neural Network 2.1 - Model: inception-v3 (ms, Fewer Is Better): a: 36.76 (MIN: 36.21 / MAX: 49.89), b: 36.17 (MIN: 35.24 / MAX: 154.36).
Mobile Neural Network 2.1 - Model: mobilenet-v1-1.0 (ms, Fewer Is Better): a: 3.695 (MIN: 3.57 / MAX: 4.07), b: 3.487 (MIN: 3.38 / MAX: 3.9).
Mobile Neural Network 2.1 - Model: MobileNetV2_224 (ms, Fewer Is Better): a: 5.143 (MIN: 4.38 / MAX: 18.18), b: 5.221 (MIN: 4.5 / MAX: 26.7).
Mobile Neural Network 2.1 - Model: SqueezeNetV1.0 (ms, Fewer Is Better): a: 7.533 (MIN: 7.39 / MAX: 7.71), b: 7.588 (MIN: 7.43 / MAX: 7.98).
Mobile Neural Network 2.1 - Model: resnet-v2-50 (ms, Fewer Is Better): a: 15.24 (MIN: 14.25 / MAX: 102.15), b: 14.66 (MIN: 14.18 / MAX: 125.14).
Mobile Neural Network 2.1 - Model: squeezenetv1.1 (ms, Fewer Is Better): a: 4.677 (MIN: 4.5 / MAX: 5.08), b: 4.686 (MIN: 4.5 / MAX: 13.84).
Mobile Neural Network 2.1 - Model: mobilenetV3 (ms, Fewer Is Better): a: 3.386 (MIN: 3.3 / MAX: 3.62), b: 3.334 (MIN: 3.22 / MAX: 3.71).
Mobile Neural Network 2.1 - Model: nasnet (ms, Fewer Is Better): a: 22.53 (MIN: 22.21 / MAX: 22.9), b: 22.37 (MIN: 22.08 / MAX: 29.94).
1. (CXX) g++ options for all Mobile Neural Network results: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenRadioss 2022.10.13 - Model: Bird Strike on Windshield (Seconds, Fewer Is Better): a: 205.44, b: 201.72.
TensorFlow 2.10 - Device: CPU - Batch Size: 32 - Model: ResNet-50 (images/sec, More Is Better): a: 17.90, b: 17.97.
TensorFlow 2.10 - Device: CPU - Batch Size: 16 - Model: VGG-16 (images/sec, More Is Better): a: 9.16, b: 9.15.
OSPRay Studio 0.11 - Camera: 2 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, Fewer Is Better): a: 29041, b: 29061. 1. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread
Blender 3.4 - Blend File: Pabellon Barcelona - Compute: CPU-Only (Seconds, Fewer Is Better): a: 192.12, b: 191.19.
OSPRay Studio 0.11 - Camera: 1 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, Fewer Is Better): a: 28318, b: 28332. 1. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread
WebP2 Image Encode 20220823 - Encode Settings: Quality 95, Compression Effort 7 (MP/s, More Is Better): a: 0.13, b: 0.13. 1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl -lpthread
NCNN 20220729 - Target: CPU - Model: FastestDet (ms, Fewer Is Better): a: 15.69 (MIN: 15.41 / MAX: 17.05), b: 14.43 (MIN: 14.16 / MAX: 15.42).
NCNN 20220729 - Target: CPU - Model: vision_transformer (ms, Fewer Is Better): a: 173.64 (MIN: 166.7 / MAX: 356.98), b: 172.93 (MIN: 167.69 / MAX: 270.28).
NCNN 20220729 - Target: CPU - Model: regnety_400m (ms, Fewer Is Better): a: 48.89 (MIN: 48.18 / MAX: 56.99), b: 51.14 (MIN: 50.45 / MAX: 52.71).
NCNN 20220729 - Target: CPU - Model: squeezenet_ssd (ms, Fewer Is Better): a: 29.15 (MIN: 28.61 / MAX: 32.07), b: 27.75 (MIN: 26.83 / MAX: 101.49).
NCNN 20220729 - Target: CPU - Model: yolov4-tiny (ms, Fewer Is Better): a: 31.71 (MIN: 31.39 / MAX: 33.18), b: 32.91 (MIN: 32.4 / MAX: 34.49).
NCNN 20220729 - Target: CPU - Model: resnet50 (ms, Fewer Is Better): a: 26.10 (MIN: 24.7 / MAX: 56.89), b: 26.12 (MIN: 25.69 / MAX: 29.48).
NCNN 20220729 - Target: CPU - Model: alexnet (ms, Fewer Is Better): a: 8.45 (MIN: 8.26 / MAX: 11.64), b: 9.25 (MIN: 8.49 / MAX: 68.95).
NCNN 20220729 - Target: CPU - Model: resnet18 (ms, Fewer Is Better): a: 13.35 (MIN: 13.14 / MAX: 15.08), b: 13.82 (MIN: 13.57 / MAX: 15.57).
NCNN 20220729 - Target: CPU - Model: vgg16 (ms, Fewer Is Better): a: 35.89 (MIN: 34.05 / MAX: 85.91), b: 39.21 (MIN: 38.84 / MAX: 55.18).
NCNN 20220729 - Target: CPU - Model: googlenet (ms, Fewer Is Better): a: 21.60 (MIN: 21.26 / MAX: 22.11), b: 23.01 (MIN: 22.72 / MAX: 23.66).
NCNN 20220729 - Target: CPU - Model: blazeface (ms, Fewer Is Better): a: 5.70 (MIN: 5.55 / MAX: 6.94), b: 5.94 (MIN: 5.79 / MAX: 7.16).
NCNN 20220729 - Target: CPU - Model: efficientnet-b0 (ms, Fewer Is Better): a: 15.0 (MIN: 14.75 / MAX: 15.78), b: 16.3 (MIN: 16 / MAX: 20.8).
NCNN 20220729 - Target: CPU - Model: mnasnet (ms, Fewer Is Better): a: 10.43 (MIN: 10.23 / MAX: 11.32), b: 11.03 (MIN: 10.77 / MAX: 11.59).
NCNN 20220729 - Target: CPU - Model: shufflenet-v2 (ms, Fewer Is Better): a: 11.83 (MIN: 11.61 / MAX: 13.09), b: 12.81 (MIN: 12.55 / MAX: 14.41).
NCNN 20220729 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, Fewer Is Better): a: 10.44 (MIN: 10.18 / MAX: 12.88), b: 11.24 (MIN: 10.99 / MAX: 12.7).
NCNN 20220729 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, Fewer Is Better): a: 11.44 (MIN: 11.11 / MAX: 20.16), b: 12.46 (MIN: 12.01 / MAX: 23.54).
NCNN 20220729 - Target: CPU - Model: mobilenet (ms, Fewer Is Better): a: 23.38 (MIN: 22.74 / MAX: 43.44), b: 22.93 (MIN: 22.51 / MAX: 24.62).
1. (CXX) g++ options for all NCNN results: -O3 -rdynamic -lgomp -lpthread -pthread
OSPRay Studio 0.11 - Camera: 3 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer (ms, Fewer Is Better): a: 17367, b: 17342. 1. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread
OSPRay Studio 0.11 - Camera: 1 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer (ms, Fewer Is Better): a: 14186, b: 14166. 1. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread
OSPRay Studio 0.11 - Camera: 2 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer (ms, Fewer Is Better): a: 14608, b: 14535. 1. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread
miniBUDE 20210901 - Implementation: OpenMP - Input Deck: BM2 (Billion Interactions/s, More Is Better): a: 31.05, b: 31.50. 1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm
miniBUDE 20210901 - Implementation: OpenMP - Input Deck: BM2 (GFInst/s, More Is Better): a: 776.35, b: 787.50. 1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm
Blender 3.4 - Blend File: Classroom - Compute: CPU-Only (Seconds, Fewer Is Better): a: 164.77, b: 162.95.
OSPRay Studio 0.11 - Camera: 3 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, Fewer Is Better): a: 146883, b: 147053. 1. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread
JPEG XL libjxl 0.7 - Input: JPEG - Quality: 80 (MP/s, More Is Better): a: 7.78, b: 7.68. 1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -pthread -latomic
Stargate Digital Audio Workstation 22.11.5 - Sample Rate: 192000 - Buffer Size: 512 (Render Ratio, More Is Better): a: 1.044224, b: 1.093912. 1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions
JPEG XL libjxl 0.7 - Input: PNG - Quality: 80 (MP/s, More Is Better): a: 8.15, b: 7.93. 1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -pthread -latomic
PostgreSQL 15 - Scaling Factor: 100 - Clients: 500 - Mode: Read Write - Average Latency (ms, Fewer Is Better): a: 65.68, b: 66.89.
PostgreSQL 15 - Scaling Factor: 100 - Clients: 500 - Mode: Read Write (TPS, More Is Better): a: 7613, b: 7475.
PostgreSQL 15 - Scaling Factor: 100 - Clients: 250 - Mode: Read Write - Average Latency (ms, Fewer Is Better): a: 34.87, b: 35.48.
PostgreSQL 15 - Scaling Factor: 100 - Clients: 250 - Mode: Read Write (TPS, More Is Better): a: 7170, b: 7046.
PostgreSQL 15 - Scaling Factor: 100 - Clients: 500 - Mode: Read Only - Average Latency (ms, Fewer Is Better): a: 0.575, b: 0.577.
PostgreSQL 15 - Scaling Factor: 100 - Clients: 500 - Mode: Read Only (TPS, More Is Better): a: 869450, b: 866030.
PostgreSQL 15 - Scaling Factor: 100 - Clients: 100 - Mode: Read Write - Average Latency (ms, Fewer Is Better): a: 16.54, b: 16.98.
PostgreSQL 15 - Scaling Factor: 100 - Clients: 100 - Mode: Read Write (TPS, More Is Better): a: 6045, b: 5890.
PostgreSQL 15 - Scaling Factor: 100 - Clients: 100 - Mode: Read Only - Average Latency (ms, Fewer Is Better): a: 0.113, b: 0.111.
PostgreSQL 15 - Scaling Factor: 100 - Clients: 100 - Mode: Read Only (TPS, More Is Better): a: 882773, b: 899014.
PostgreSQL 15 - Scaling Factor: 100 - Clients: 50 - Mode: Read Write - Average Latency (ms, Fewer Is Better): a: 9.744, b: 11.326.
PostgreSQL 15 - Scaling Factor: 100 - Clients: 50 - Mode: Read Write (TPS, More Is Better): a: 5131, b: 4415.
PostgreSQL 15 - Scaling Factor: 100 - Clients: 250 - Mode: Read Only - Average Latency (ms, Fewer Is Better): a: 0.271, b: 0.273.
PostgreSQL 15 - Scaling Factor: 100 - Clients: 250 - Mode: Read Only (TPS, More Is Better): a: 922066, b: 915519.
PostgreSQL 15 - Scaling Factor: 100 - Clients: 1 - Mode: Read Write - Average Latency (ms, Fewer Is Better): a: 1.491, b: 3.065.
PostgreSQL 15 - Scaling Factor: 100 - Clients: 1 - Mode: Read Write (TPS, More Is Better): a: 671, b: 326.
PostgreSQL 15 - Scaling Factor: 100 - Clients: 50 - Mode: Read Only - Average Latency (ms, Fewer Is Better): a: 0.077, b: 0.077.
PostgreSQL 15 - Scaling Factor: 100 - Clients: 50 - Mode: Read Only (TPS, More Is Better): a: 652544, b: 652647.
PostgreSQL 15 - Scaling Factor: 100 - Clients: 1 - Mode: Read Only - Average Latency (ms, Fewer Is Better): a: 0.045, b: 0.046.
PostgreSQL 15 - Scaling Factor: 100 - Clients: 1 - Mode: Read Only (TPS, More Is Better): a: 21981, b: 21860.
1. (CC) gcc options for all PostgreSQL results: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
AOM AV1 3.5 - Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K (Frames Per Second, More Is Better): a: 5.62, b: 5.94. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
Numenta Anomaly Benchmark 1.1 - Detector: KNN CAD (Seconds, Fewer Is Better): a: 135.46, b: 147.75.
OSPRay Studio 0.11 - Camera: 2 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, Fewer Is Better): a: 125037, b: 125174. 1. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread
OSPRay Studio 0.11 - Camera: 3 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer (ms, Fewer Is Better): a: 4309, b: 4311. 1. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread
OpenFOAM 10 - Input: drivaerFastback, Small Mesh Size - Execution Time (Seconds, Fewer Is Better): a: 96.03, b: 95.55. 1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm
OpenFOAM 10 - Input: drivaerFastback, Small Mesh Size - Mesh Time (Seconds, Fewer Is Better): a: 34.01, b: 33.88. 1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm
OSPRay Studio 0.11 - Camera: 1 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer (ms, Fewer Is Better): a: 3528, b: 3529. 1. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread
OSPRay Studio 0.11 - Camera: 1 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, Fewer Is Better): a: 122216, b: 122843. 1. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread
PostgreSQL 15 - Scaling Factor: 1 - Clients: 500 - Mode: Read Write - Average Latency (ms, Fewer Is Better): a: 1682.55, b: 2481.62.
PostgreSQL 15 - Scaling Factor: 1 - Clients: 500 - Mode: Read Write (TPS, More Is Better): a: 297, b: 201.
PostgreSQL 15 - Scaling Factor: 1 - Clients: 250 - Mode: Read Write - Average Latency (ms, Fewer Is Better): a: 565.86, b: 985.48.
PostgreSQL 15 - Scaling Factor: 1 - Clients: 250 - Mode: Read Write (TPS, More Is Better): a: 442, b: 254.
PostgreSQL 15 - Scaling Factor: 1 - Clients: 100 - Mode: Read Write - Average Latency (ms, Fewer Is Better): a: 160.47, b: 231.89.
PostgreSQL 15 - Scaling Factor: 1 - Clients: 100 - Mode: Read Write (TPS, More Is Better): a: 623, b: 431.
PostgreSQL 15 - Scaling Factor: 1 - Clients: 500 - Mode: Read Only - Average Latency (ms, Fewer Is Better): a: 0.537, b: 0.540.
PostgreSQL 15 - Scaling Factor: 1 - Clients: 500 - Mode: Read Only (TPS, More Is Better): a: 930850, b: 925569.
PostgreSQL 15 - Scaling Factor: 1 - Clients: 50 - Mode: Read Write - Average Latency (ms, Fewer Is Better): a: 69.87, b: 70.48.
PostgreSQL 15 - Scaling Factor: 1 - Clients: 50 - Mode: Read Write (TPS, More Is Better): a: 716, b: 709.
PostgreSQL 15 - Scaling Factor: 1 - Clients: 1 - Mode: Read Only - Average Latency (ms, Fewer Is Better): a: 0.043, b: 0.044.
PostgreSQL 15 - Scaling Factor: 1 - Clients: 1 - Mode: Read Only (TPS, More Is Better): a: 23407, b: 22579.
PostgreSQL 15 - Scaling Factor: 1 - Clients: 250 - Mode: Read Only - Average Latency (ms, Fewer Is Better): a: 0.255, b: 0.256.
PostgreSQL 15 - Scaling Factor: 1 - Clients: 250 - Mode: Read Only (TPS, More Is Better): a: 979407, b: 976093.
PostgreSQL 15 - Scaling Factor: 1 - Clients: 1 - Mode: Read Write - Average Latency (ms, Fewer Is Better): a: 1.231, b: 1.392.
PostgreSQL 15 - Scaling Factor: 1 - Clients: 1 - Mode: Read Write (TPS, More Is Better): a: 812, b: 718.
PostgreSQL 15 - Scaling Factor: 1 - Clients: 100 - Mode: Read Only - Average Latency (ms, Fewer Is Better): a: 0.103, b: 0.103.
PostgreSQL 15 - Scaling Factor: 1 - Clients: 100 - Mode: Read Only (TPS, More Is Better): a: 968554, b: 971447.
PostgreSQL 15 - Scaling Factor: 1 - Clients: 50 - Mode: Read Only - Average Latency (ms, Fewer Is Better): a: 0.074, b: 0.074.
PostgreSQL 15 - Scaling Factor: 1 - Clients: 50 - Mode: Read Only (TPS, More Is Better): a: 675675, b: 672851.
1. (CC) gcc options for all PostgreSQL results: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OSPRay Studio Camera: 2 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.11 Camera: 2 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer a b 800 1600 2400 3200 4000 3617 3624 1. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread
TensorFlow Device: CPU - Batch Size: 16 - Model: ResNet-50 OpenBenchmarking.org images/sec, More Is Better TensorFlow 2.10 Device: CPU - Batch Size: 16 - Model: ResNet-50 a b 4 8 12 16 20 14.77 14.74
OpenRadioss Model: Bumper Beam OpenBenchmarking.org Seconds, Fewer Is Better OpenRadioss 2022.10.13 Model: Bumper Beam b a 30 60 90 120 150 124.78 125.04
JPEG XL libjxl Input: JPEG - Quality: 90 OpenBenchmarking.org MP/s, More Is Better JPEG XL libjxl 0.7 Input: JPEG - Quality: 90 a b 2 4 6 8 10 7.69 7.56 1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -pthread -latomic
OSPRay Studio Camera: 3 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.11 Camera: 3 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer b a 7K 14K 21K 28K 35K 34637 34647 1. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread
Timed Erlang/OTP Compilation Time To Compile OpenBenchmarking.org Seconds, Fewer Is Better Timed Erlang/OTP Compilation 25.0 Time To Compile a b 30 60 90 120 150 121.47 122.71
JPEG XL libjxl Input: PNG - Quality: 90 OpenBenchmarking.org MP/s, More Is Better JPEG XL libjxl 0.7 Input: PNG - Quality: 90 a b 2 4 6 8 10 8.03 7.89 1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -pthread -latomic
OpenRadioss 2022.10.13 - Model: Rubber O-Ring Seal Installation (Seconds, fewer is better): a: 114.39, b: 114.90
OSPRay Studio 0.11 - Camera: 3 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer (ms, fewer is better): a: 1085, b: 1085
libavif avifenc 0.11 - Encoder Speed: 0 (Seconds, fewer is better): b: 111.68, a: 115.42
OSPRay Studio 0.11 - Camera: 2 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer (ms, fewer is better): a: 914, b: 915
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 2000 - Broadcast Inner Join Test Time (Seconds, fewer is better): a: 4.26, b: 6.22
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 2000 - Inner Join Test Time (Seconds, fewer is better): a: 4.67, b: 5.81
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 2000 - Repartition Test Time (Seconds, fewer is better): b: 5.16, a: 5.61
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 2000 - Group By Test Time (Seconds, fewer is better): b: 7.68, a: 8.24
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 2000 - Calculate Pi Benchmark Using Dataframe (Seconds, fewer is better): a: 4.78, b: 4.85
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 2000 - Calculate Pi Benchmark (Seconds, fewer is better): a: 64.83, b: 65.80
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 2000 - SHA-512 Benchmark Time (Seconds, fewer is better): a: 5.48, b: 5.71
OSPRay Studio 0.11 - Camera: 1 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer (ms, fewer is better): a: 888, b: 890
CockroachDB 22.2 - Workload: KV, 50% Reads - Concurrency: 1024 (ops/s, more is better): b: 63998.7, a: 61797.2
CockroachDB 22.2 - Workload: KV, 60% Reads - Concurrency: 1024 (ops/s, more is better): b: 70300.1, a: 67899.4
CockroachDB 22.2 - Workload: KV, 95% Reads - Concurrency: 1024 (ops/s, more is better): a: 94378.8, b: 92938.9
CockroachDB 22.2 - Workload: KV, 10% Reads - Concurrency: 1024 (ops/s, more is better): b: 39590.8, a: 38341.3
CockroachDB 22.2 - Workload: KV, 50% Reads - Concurrency: 512 (ops/s, more is better): a: 49424.7, b: 46804.8
CockroachDB 22.2 - Workload: KV, 95% Reads - Concurrency: 512 (ops/s, more is better): b: 96577.9, a: 96569.8
CockroachDB 22.2 - Workload: KV, 60% Reads - Concurrency: 512 (ops/s, more is better): b: 66868.9, a: 58375.2
CockroachDB 22.2 - Workload: KV, 10% Reads - Concurrency: 512 (ops/s, more is better): b: 34085.9, a: 29267.4
CockroachDB 22.2 - Workload: KV, 60% Reads - Concurrency: 256 (ops/s, more is better): a: 30111.3, b: 27413.2
CockroachDB 22.2 - Workload: KV, 10% Reads - Concurrency: 256 (ops/s, more is better): b: 17242.8, a: 13759.6
CockroachDB 22.2 - Workload: KV, 95% Reads - Concurrency: 256 (ops/s, more is better): b: 99986.0, a: 98923.9
CockroachDB 22.2 - Workload: KV, 50% Reads - Concurrency: 256 (ops/s, more is better): a: 24394.6, b: 23024.3
CockroachDB 22.2 - Workload: KV, 10% Reads - Concurrency: 128 (ops/s, more is better): a: 6941.0, b: 6717.4
CockroachDB 22.2 - Workload: KV, 50% Reads - Concurrency: 128 (ops/s, more is better): a: 12434.3, b: 10945.5
CockroachDB 22.2 - Workload: KV, 60% Reads - Concurrency: 128 (ops/s, more is better): a: 15412.3, b: 14191.7
CockroachDB 22.2 - Workload: KV, 95% Reads - Concurrency: 128 (ops/s, more is better): a: 89867.4, b: 86632.5
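A compact way to read the paired a/b throughput results is the percentage lead of the faster run over the slower one; a small helper, fed with the KV 10% reads figures above:

```python
# Percent advantage of the faster run over the slower one, per concurrency level.
# Values are the CockroachDB KV 10% reads ops/s results listed above.
kv_10pct_reads = {
    1024: {"a": 38341.3, "b": 39590.8},
    512:  {"a": 29267.4, "b": 34085.9},
    256:  {"a": 13759.6, "b": 17242.8},
    128:  {"a": 6941.0,  "b": 6717.4},
}

for concurrency, runs in sorted(kv_10pct_reads.items()):
    winner = max(runs, key=runs.get)
    loser = min(runs, key=runs.get)
    lead = (runs[winner] - runs[loser]) / runs[loser] * 100
    print(f"concurrency {concurrency}: {winner} leads {loser} by {lead:.1f}%")
```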
Stargate Digital Audio Workstation 22.11.5 - Sample Rate: 192000 - Buffer Size: 1024 (Render Ratio, more is better): b: 1.650224, a: 1.601382
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 - Broadcast Inner Join Test Time (Seconds, fewer is better): b: 1.54, a: 2.50
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 - Inner Join Test Time (Seconds, fewer is better): b: 2.84, a: 3.23
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 - Repartition Test Time (Seconds, fewer is better): a: 2.63, b: 4.09
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 - Group By Test Time (Seconds, fewer is better): b: 6.24, a: 6.47
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe (Seconds, fewer is better): a: 4.69, b: 4.71
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark (Seconds, fewer is better): b: 65.95, a: 66.06
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 - SHA-512 Benchmark Time (Seconds, fewer is better): a: 5.07, b: 5.07
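For this 1,000,000-row dataset the 2000-partition Spark runs are consistently slower than the 100-partition runs, presumably reflecting per-partition scheduling overhead at this small data size; a minimal comparison using the faster of the two runs for each cell:

```python
# Compare the same Spark operations at 100 vs 2000 partitions, taking the
# faster of runs a/b for each cell. Times in seconds, copied from above.
best_time = {
    "Broadcast Inner Join": {100: 1.54, 2000: 4.26},
    "Inner Join":           {100: 2.84, 2000: 4.67},
    "Repartition":          {100: 2.63, 2000: 5.16},
    "Group By":             {100: 6.24, 2000: 7.68},
}

for op, t in best_time.items():
    print(f"{op}: {t[2000] / t[100]:.1f}x slower with 2000 partitions")
```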
Xmrig 6.18.1 - Variant: Monero - Hash Count: 1M (H/s, more is better): b: 10411.8, a: 9938.1
CockroachDB 22.2 - Workload: MoVR - Concurrency: 512 (ops/s, more is better): a: 111.5, b: 108.1
CockroachDB 22.2 - Workload: MoVR - Concurrency: 1024 (ops/s, more is better): a: 111.1, b: 109.2
CockroachDB 22.2 - Workload: MoVR - Concurrency: 256 (ops/s, more is better): b: 228.8, a: 202.9
CockroachDB 22.2 - Workload: MoVR - Concurrency: 128 (ops/s, more is better): b: 226.4, a: 226.3
Numenta Anomaly Benchmark 1.1 - Detector: Earthgecko Skyline (Seconds, fewer is better): b: 95.43, a: 99.19
TensorFlow 2.10 - Device: CPU - Batch Size: 64 - Model: GoogLeNet (images/sec, more is better): a: 77.03, b: 76.16
AOM AV1 3.5 - Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K (Frames Per Second, more is better): a: 9.12, b: 9.08
OSPRay Studio 0.11 - Camera: 3 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer (ms, fewer is better): a: 78464, b: 78883
Stargate Digital Audio Workstation 22.11.5 - Sample Rate: 96000 - Buffer Size: 512 (Render Ratio, more is better): a: 1.840618, b: 1.765949
WebP2 Image Encode 20220823 - Encode Settings: Quality 75, Compression Effort 7 (MP/s, more is better): a: 0.28, b: 0.27
Node.js V8 Web Tooling Benchmark (runs/s, more is better): b: 10.14, a: 9.93
OSPRay Studio 0.11 - Camera: 2 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer (ms, fewer is better): b: 67325, a: 67352
JPEG XL Decoding libjxl 0.7 - CPU Threads: 1 (MP/s, more is better): a: 33.54, b: 32.92
oneDNN 2.7 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, fewer is better): b: 1379.94 (MIN: 1371.97), a: 1380.95 (MIN: 1374.35)
oneDNN 2.7 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): a: 1380.06 (MIN: 1374.84), b: 1380.45 (MIN: 1373.02)
oneDNN 2.7 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): b: 1381.57 (MIN: 1372.52), a: 1409.26 (MIN: 1374.19)
OSPRay Studio 0.11 - Camera: 1 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer (ms, fewer is better): a: 65864, b: 66209
AOM AV1 3.5 - Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, more is better): b: 10.53, a: 10.53
Blender 3.4 - Blend File: Fishy Cat - Compute: CPU-Only (Seconds, fewer is better): b: 75.59, a: 75.80
AOM AV1 3.5 - Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4K (Frames Per Second, more is better): b: 0.27, a: 0.27
oneDNN 2.7 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): a: 781.00 (MIN: 770.18), b: 789.36 (MIN: 781.32)
oneDNN 2.7 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, fewer is better): a: 784.13 (MIN: 777.45), b: 892.23 (MIN: 775.18)
oneDNN 2.7 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): a: 786.58 (MIN: 780.44), b: 801.80 (MIN: 779.44)
ClickHouse 22.5.4.19 - 100M Rows Web Analytics Dataset, Third Run (Queries Per Minute, Geo Mean, more is better): b: 220.22 (MIN: 27 / MAX: 6000), a: 198.59 (MIN: 26.85 / MAX: 1578.95)
ClickHouse 22.5.4.19 - 100M Rows Web Analytics Dataset, Second Run (Queries Per Minute, Geo Mean, more is better): b: 213.89 (MIN: 26.86 / MAX: 5454.55), a: 205.73 (MIN: 28.36 / MAX: 6000)
ClickHouse 22.5.4.19 - 100M Rows Web Analytics Dataset, First Run / Cold Cache (Queries Per Minute, Geo Mean, more is better): b: 205.30 (MIN: 22.56 / MAX: 6000), a: 142.18 (MIN: 12.92 / MAX: 2000)
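The ClickHouse figures are geometric means of per-query queries-per-minute rates, which keeps a few very fast or very slow queries from dominating the summary. A minimal sketch of the aggregation itself, using made-up per-query rates:

```python
import math

def geo_mean(values):
    # n-th root of the product, computed via logs to avoid overflow
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Hypothetical per-query QPM rates, purely to illustrate the aggregation.
print(round(geo_mean([27.0, 150.0, 480.0, 6000.0]), 2))
```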
Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): a: 963.81, b: 966.40
Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (items/sec, more is better): a: 18.65, b: 18.36
Stargate Digital Audio Workstation 22.11.5 - Sample Rate: 96000 - Buffer Size: 1024 (Render Ratio, more is better): a: 2.617020, b: 2.019155
SVT-AV1 1.4 - Encoder Mode: Preset 4 - Input: Bosphorus 4K (Frames Per Second, more is better): a: 2.631, b: 2.524
memtier_benchmark 1.4 - Protocol: Redis - Clients: 50 - Set To Get Ratio: 10:1 (Ops/sec, more is better): b: 1504675.76, a: 1501800.97
Dragonflydb 0.6 - Clients: 200 - Set To Get Ratio: 1:1 (Ops/sec, more is better): a: 994293.90, b: 890721.27
Dragonflydb 0.6 - Clients: 200 - Set To Get Ratio: 5:1 (Ops/sec, more is better): a: 942134.52, b: 839630.53
Dragonflydb 0.6 - Clients: 200 - Set To Get Ratio: 1:5 (Ops/sec, more is better): a: 1044459.26, b: 937646.51
Dragonflydb 0.6 - Clients: 50 - Set To Get Ratio: 5:1 (Ops/sec, more is better): b: 962500.62, a: 959933.80
Dragonflydb 0.6 - Clients: 50 - Set To Get Ratio: 1:1 (Ops/sec, more is better): b: 930057.66, a: 707625.49
Dragonflydb 0.6 - Clients: 50 - Set To Get Ratio: 1:5 (Ops/sec, more is better): b: 1038762.14, a: 989118.75
Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): a: 963.57, b: 963.91
Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (items/sec, more is better): b: 18.67, a: 18.62
JPEG XL Decoding libjxl 0.7 - CPU Threads: All (MP/s, more is better): a: 80.94, b: 80.26
OpenVINO 2022.2.dev - Model: Person Detection FP32 - Device: CPU (ms, fewer is better): a: 2199.69 (MIN: 1690.02 / MAX: 2489.41), b: 2205.86 (MIN: 1652.91 / MAX: 2622.55)
OpenVINO 2022.2.dev - Model: Person Detection FP32 - Device: CPU (FPS, more is better): a: 4.03, b: 4.01
OpenVINO 2022.2.dev - Model: Person Detection FP16 - Device: CPU (ms, fewer is better): a: 2192.85 (MIN: 1620.89 / MAX: 2642.98), b: 2193.51 (MIN: 1583.87 / MAX: 2542.24)
OpenVINO 2022.2.dev - Model: Person Detection FP16 - Device: CPU (FPS, more is better): a: 4.04, b: 4.03
Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): b: 196.70, a: 197.13
Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (items/sec, more is better): b: 91.21, a: 91.19
OpenRadioss 2022.10.13 - Model: Cell Phone Drop Test (Seconds, fewer is better): b: 57.26, a: 57.28
OpenVINO 2022.2.dev - Model: Face Detection FP16 - Device: CPU (ms, fewer is better): a: 1283.20 (MIN: 918.94 / MAX: 1425.48), b: 1283.73 (MIN: 888.06 / MAX: 1519.14)
OpenVINO 2022.2.dev - Model: Face Detection FP16 - Device: CPU (FPS, more is better): b: 6.94, a: 6.94
OpenVINO 2022.2.dev - Model: Face Detection FP16-INT8 - Device: CPU (ms, fewer is better): a: 347.32 (MIN: 298.61 / MAX: 447.14), b: 347.82 (MIN: 277.09 / MAX: 423.22)
OpenVINO 2022.2.dev - Model: Face Detection FP16-INT8 - Device: CPU (FPS, more is better): a: 25.83, b: 25.82
Xmrig 6.18.1 - Variant: Wownero - Hash Count: 1M (H/s, more is better): a: 16896.5, b: 16579.3
OpenVINO 2022.2.dev - Model: Machine Translation EN To DE FP16 - Device: CPU (ms, fewer is better): a: 115.20 (MIN: 98.32 / MAX: 349.37), b: 115.95 (MIN: 93.14 / MAX: 374.14)
OpenVINO 2022.2.dev - Model: Machine Translation EN To DE FP16 - Device: CPU (FPS, more is better): a: 78.04, b: 77.53
TensorFlow 2.10 - Device: CPU - Batch Size: 32 - Model: GoogLeNet (images/sec, more is better): b: 61.82, a: 61.69
OpenVINO 2022.2.dev - Model: Person Vehicle Bike Detection FP16 - Device: CPU (ms, fewer is better): a: 13.06 (MIN: 11.64 / MAX: 41.68), b: 13.30 (MIN: 12.04 / MAX: 41.55)
OpenVINO 2022.2.dev - Model: Person Vehicle Bike Detection FP16 - Device: CPU (FPS, more is better): a: 687.33, b: 674.84
OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16 - Device: CPU (ms, fewer is better): b: 50.86 (MIN: 47.78 / MAX: 92.75), a: 51.00 (MIN: 47.98 / MAX: 86.41)
OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16 - Device: CPU (FPS, more is better): b: 706.42, a: 704.60
OpenVINO 2022.2.dev - Model: Vehicle Detection FP16 - Device: CPU (ms, fewer is better): a: 21.30 (MIN: 17.61 / MAX: 58.97), b: 21.76 (MIN: 17.7 / MAX: 58.16)
OpenVINO 2022.2.dev - Model: Vehicle Detection FP16 - Device: CPU (FPS, more is better): a: 421.71, b: 412.75
OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (ms, fewer is better): a: 0.98 (MIN: 0.59 / MAX: 15.38), b: 1.18 (MIN: 0.59 / MAX: 10.36)
OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (FPS, more is better): a: 36342.30, b: 30254.31
OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (ms, fewer is better): a: 13.17 (MIN: 12.73 / MAX: 25.6), b: 13.19 (MIN: 9.34 / MAX: 27.01)
OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (FPS, more is better): a: 2717.47, b: 2714.67
OpenVINO 2022.2.dev - Model: Vehicle Detection FP16-INT8 - Device: CPU (ms, fewer is better): a: 6.42 (MIN: 6.2 / MAX: 22.94), b: 6.45 (MIN: 6.23 / MAX: 21.87)
OpenVINO 2022.2.dev - Model: Vehicle Detection FP16-INT8 - Device: CPU (FPS, more is better): a: 1395.73, b: 1388.97
OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (ms, fewer is better): a: 1.64 (MIN: 1.55 / MAX: 8.55), b: 1.64 (MIN: 1.55 / MAX: 9.24)
OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (FPS, more is better): a: 21846.45, b: 21782.56
Facebook RocksDB 7.5.3 - Test: Update Random (Op/s, more is better): a: 174193, b: 172766
Facebook RocksDB 7.5.3 - Test: Read Random Write Random (Op/s, more is better): a: 2545133, b: 2534266
GraphicsMagick 1.3.38 - Operation: Resizing (Iterations Per Minute, more is better): b: 1098, a: 1092
Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): b: 28.57, a: 29.56
Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream (items/sec, more is better): b: 34.99, a: 33.82
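In the synchronous single-stream scenario one batch is processed at a time, so the items/sec figure is roughly the reciprocal of the per-batch latency; a quick check against the BERT base uncased SST2 single-stream numbers above:

```python
# items/sec ~= 1000 / (ms per batch) for the single-stream scenario.
reported = {"b": (28.57, 34.99), "a": (29.56, 33.82)}  # (ms/batch, items/sec)

for run, (ms_per_batch, items_per_sec) in reported.items():
    derived = 1000 / ms_per_batch
    print(f"run {run}: derived {derived:.2f} items/sec, reported {items_per_sec}")
```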
Facebook RocksDB 7.5.3 - Test: Read While Writing (Op/s, more is better): b: 5254605, a: 5155074
Facebook RocksDB 7.5.3 - Test: Random Read (Op/s, more is better): b: 120010661, a: 107719684
GraphicsMagick 1.3.38 - Operation: Sharpen (Iterations Per Minute, more is better): a: 220, b: 218
GraphicsMagick 1.3.38 - Operation: Noise-Gaussian (Iterations Per Minute, more is better): a: 392, b: 391
GraphicsMagick 1.3.38 - Operation: Swirl (Iterations Per Minute, more is better): a: 816, b: 798
GraphicsMagick 1.3.38 - Operation: Enhanced (Iterations Per Minute, more is better): a: 506, b: 502
GraphicsMagick 1.3.38 - Operation: Rotate (Iterations Per Minute, more is better): a: 594, b: 575
GraphicsMagick 1.3.38 - Operation: HWB Color Space (Iterations Per Minute, more is better): a: 735, b: 719
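The GraphicsMagick results are expressed as iterations per minute; dividing into 60,000 ms gives the average time per operation, shown here for the run-a values above:

```python
# Convert iterations-per-minute into average milliseconds per operation.
run_a = {"Resizing": 1092, "Sharpen": 220, "Rotate": 594, "HWB Color Space": 735}

for operation, iterations_per_minute in run_a.items():
    print(f"{operation}: {60_000 / iterations_per_minute:.1f} ms per iteration")
```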
Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): a: 114.35, b: 115.50
Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (items/sec, more is better): a: 8.7441, b: 8.6568
libavif avifenc 0.11 - Encoder Speed: 2 (Seconds, fewer is better): b: 58.23, a: 59.90
Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): a: 114.27, b: 115.35
Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (items/sec, more is better): a: 8.7499, b: 8.6682
Timed PHP Compilation 8.1.9 - Time To Compile (Seconds, fewer is better): a: 57.28, b: 59.27
Blender 3.4 - Blend File: BMW27 - Compute: CPU-Only (Seconds, fewer is better): a: 55.54, b: 55.56
Stargate Digital Audio Workstation 22.11.5 - Sample Rate: 480000 - Buffer Size: 512 (Render Ratio, more is better): a: 3.394244, b: 2.559217
spaCy 3.4.1 - Model: en_core_web_trf (tokens/sec, more is better): a: 2007, b: 1950
spaCy 3.4.1 - Model: en_core_web_lg (tokens/sec, more is better): a: 10874, b: 10838
WebP Image Encode 1.2.4 - Encode Settings: Quality 100, Lossless, Highest Compression (MP/s, more is better): b: 0.44, a: 0.44
Stargate Digital Audio Workstation 22.11.5 - Sample Rate: 480000 - Buffer Size: 1024 (Render Ratio, more is better): b: 3.23369, a: 2.90554
Timed Linux Kernel Compilation 6.1 - Build: defconfig (Seconds, fewer is better): a: 52.65, b: 52.69
TensorFlow 2.10 - Device: CPU - Batch Size: 64 - Model: AlexNet (images/sec, more is better): a: 143.41, b: 142.07
Stargate Digital Audio Workstation 22.11.5 - Sample Rate: 44100 - Buffer Size: 512 (Render Ratio, more is better): a: 3.281164, b: 2.979784
Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): a: 180.54, b: 181.80
Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (items/sec, more is better): a: 99.48, b: 98.77
EnCodec 0.1.1 - Target Bandwidth: 24 kbps (Seconds, fewer is better): a: 48.81, b: 53.47
Numenta Anomaly Benchmark 1.1 - Detector: Contextual Anomaly Detector OSE (Seconds, fewer is better): a: 50.62, b: 50.91
srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM (UE Mb/s, more is better): a: 130.9, b: 129.2
srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM (eNb Mb/s, more is better): a: 324.2, b: 321.2
7-Zip Compression 22.01 - Test: Decompression Rating (MIPS, more is better): a: 158350, b: 157899
7-Zip Compression 22.01 - Test: Compression Rating (MIPS, more is better): a: 171677, b: 169842
AOM AV1 3.5 - Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K (Frames Per Second, more is better): a: 13.58, b: 13.24
Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): a: 23.90, b: 23.95
Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream (items/sec, more is better): a: 41.81, b: 41.72
Stargate Digital Audio Workstation 22.11.5 - Sample Rate: 44100 - Buffer Size: 1024 (Render Ratio, more is better): a: 3.634671, b: 3.584121
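Assuming the Stargate render ratio is a real-time factor, i.e. seconds of audio rendered per second of wall-clock time (an interpretation, not something stated in the export), the 44.1 kHz / 1024-sample-buffer ratios above would translate into roughly:

```python
# Assumption: render ratio = seconds of audio rendered per wall-clock second.
for run, ratio in {"a": 3.634671, "b": 3.584121}.items():
    print(f"run {run}: ~{60 / ratio:.1f} s of wall-clock time to render 60 s of audio")
```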
Neural Magic DeepSparse Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.1 Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream a b 20 40 60 80 100 99.33 99.83
Neural Magic DeepSparse Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.1 Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream a b 40 80 120 160 200 180.90 180.12
AOM AV1 Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p b a 5 10 15 20 25 20.13 19.60 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
EnCodec Target Bandwidth: 1.5 kbps OpenBenchmarking.org Seconds, Fewer Is Better EnCodec 0.1.1 Target Bandwidth: 1.5 kbps a b 10 20 30 40 50 41.89 44.75
TensorFlow Device: CPU - Batch Size: 16 - Model: GoogLeNet OpenBenchmarking.org images/sec, More Is Better TensorFlow 2.10 Device: CPU - Batch Size: 16 - Model: GoogLeNet a b 11 22 33 44 55 46.37 46.18
srsRAN Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM OpenBenchmarking.org UE Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM a b 30 60 90 120 150 118.9 117.9 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -lpthread -ldl -lm
srsRAN Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM OpenBenchmarking.org eNb Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM a b 60 120 180 240 300 294.0 291.4 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -lpthread -ldl -lm
Neural Magic DeepSparse Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.1 Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream a b 4 8 12 16 20 14.89 14.92
Neural Magic DeepSparse Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.1 Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream a b 15 30 45 60 75 67.09 66.97
Neural Magic DeepSparse Model: CV Detection,YOLOv5s COCO - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.1 Model: CV Detection,YOLOv5s COCO - Scenario: Asynchronous Multi-Stream a b 30 60 90 120 150 135.75 136.68
Neural Magic DeepSparse Model: CV Detection,YOLOv5s COCO - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.1 Model: CV Detection,YOLOv5s COCO - Scenario: Asynchronous Multi-Stream a b 30 60 90 120 150 132.44 131.59
srsRAN Test: OFDM_Test OpenBenchmarking.org Samples / Second, More Is Better srsRAN 22.04.1 Test: OFDM_Test b a 30M 60M 90M 120M 150M 120000000 119900000 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -lpthread -ldl -lm
EnCodec Target Bandwidth: 6 kbps OpenBenchmarking.org Seconds, Fewer Is Better EnCodec 0.1.1 Target Bandwidth: 6 kbps b a 9 18 27 36 45 39.85 40.51
AOM AV1 Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K b a 4 8 12 16 20 16.23 16.04 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
Neural Magic DeepSparse Model: CV Detection,YOLOv5s COCO - Scenario: Synchronous Single-Stream OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.1 Model: CV Detection,YOLOv5s COCO - Scenario: Synchronous Single-Stream b a 5 10 15 20 25 18.35 18.41
Neural Magic DeepSparse Model: CV Detection,YOLOv5s COCO - Scenario: Synchronous Single-Stream OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.1 Model: CV Detection,YOLOv5s COCO - Scenario: Synchronous Single-Stream b a 12 24 36 48 60 54.45 54.26
Neural Magic DeepSparse Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.1 Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream a b 12 24 36 48 60 50.95 51.01
Neural Magic DeepSparse Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.1 Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream a b 80 160 240 320 400 352.86 352.63
EnCodec Target Bandwidth: 3 kbps OpenBenchmarking.org Seconds, Fewer Is Better EnCodec 0.1.1 Target Bandwidth: 3 kbps b a 9 18 27 36 45 35.12 40.41
Neural Magic DeepSparse Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.1 Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream b a 2 4 6 8 10 8.7659 8.7764
Neural Magic DeepSparse Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.1 Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream b a 30 60 90 120 150 113.89 113.75
TensorFlow Device: CPU - Batch Size: 32 - Model: AlexNet OpenBenchmarking.org images/sec, More Is Better TensorFlow 2.10 Device: CPU - Batch Size: 32 - Model: AlexNet a b 20 40 60 80 100 106.64 105.78
Numenta Anomaly Benchmark Detector: Bayesian Changepoint OpenBenchmarking.org Seconds, Fewer Is Better Numenta Anomaly Benchmark 1.1 Detector: Bayesian Changepoint a b 8 16 24 32 40 34.99 35.73
ASTC Encoder Preset: Exhaustive OpenBenchmarking.org MT/s, More Is Better ASTC Encoder 4.0 Preset: Exhaustive b a 0.4147 0.8294 1.2441 1.6588 2.0735 1.8432 1.8422 1. (CXX) g++ options: -O3 -flto -pthread
Stress-NG Test: Futex OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Futex b a 200K 400K 600K 800K 1000K 1152673.55 1051825.91 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lbsd -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Cpuminer-Opt Algorithm: Skeincoin OpenBenchmarking.org kH/s, More Is Better Cpuminer-Opt 3.20.3 Algorithm: Skeincoin a b 50K 100K 150K 200K 250K 253670 253330 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
Cpuminer-Opt Algorithm: Magi OpenBenchmarking.org kH/s, More Is Better Cpuminer-Opt 3.20.3 Algorithm: Magi b a 150 300 450 600 750 703.92 701.91 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
Stress-NG Test: Context Switching OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Context Switching b a 1.5M 3M 4.5M 6M 7.5M 7049324.49 6260590.45 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lbsd -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Stress-NG Test: x86_64 RdRand OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: x86_64 RdRand b a 110K 220K 330K 440K 550K 503602.92 503507.78 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lbsd -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Stress-NG Test: Atomic OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Atomic a b 30K 60K 90K 120K 150K 126894.32 116358.20 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lbsd -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Aircrack-ng OpenBenchmarking.org k/s, More Is Better Aircrack-ng 1.7 a b 20K 40K 60K 80K 100K 99753.68 98829.75 1. (CXX) g++ options: -std=gnu++17 -O3 -fvisibility=hidden -fcommon -rdynamic -lnl-3 -lnl-genl-3 -lpcre -lpthread -lz -lssl -lcrypto -lhwloc -ldl -lm -lbsd -pthread
Stress-NG Test: System V Message Passing OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: System V Message Passing a b 800K 1600K 2400K 3200K 4000K 3795441.99 3677320.93 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lbsd -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Stress-NG Test: Semaphores OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Semaphores a b 1000K 2000K 3000K 4000K 5000K 4585186.59 4572788.09 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lbsd -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Stress-NG Test: Malloc OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Malloc a b 20M 40M 60M 80M 100M 109293721.23 109184164.47 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lbsd -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Stress-NG Test: NUMA OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: NUMA a b 120 240 360 480 600 577.50 572.28 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lbsd -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Stress-NG Test: Memory Copying OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Memory Copying b a 1000 2000 3000 4000 5000 4691.35 4644.12 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lbsd -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Stress-NG Test: Forking OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Forking b a 9K 18K 27K 36K 45K 42163.36 41988.02 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lbsd -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Stress-NG Test: MEMFD OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: MEMFD a b 200 400 600 800 1000 1021.04 1016.64 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lbsd -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Stress-NG Test: Matrix Math OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Matrix Math a b 30K 60K 90K 120K 150K 124052.48 123275.06 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lbsd -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Stress-NG Test: Crypto OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Crypto b a 7K 14K 21K 28K 35K 34689.96 34685.11 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lbsd -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Stress-NG Test: Glibc Qsort Data Sorting OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Glibc Qsort Data Sorting b a 90 180 270 360 450 424.2 417.0 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lbsd -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Cpuminer-Opt Algorithm: Garlicoin OpenBenchmarking.org kH/s, More Is Better Cpuminer-Opt 3.20.3 Algorithm: Garlicoin b a 3K 6K 9K 12K 15K 12940 12910 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
Stress-NG Test: CPU Cache OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: CPU Cache a b 20 40 60 80 100 89.3 76.5 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lbsd -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Stress-NG Test: MMAP OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: MMAP a b 120 240 360 480 600 564.44 561.69 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lbsd -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Stress-NG Test: Socket Activity OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Socket Activity a b 6K 12K 18K 24K 30K 26882.84 20197.23 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lbsd -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Stress-NG Test: CPU Stress OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: CPU Stress a b 13K 26K 39K 52K 65K 59047.56 58316.18 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lbsd -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Stress-NG Test: SENDFILE OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: SENDFILE b a 100K 200K 300K 400K 500K 444590.02 437831.75 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lbsd -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Stress-NG Test: Mutex OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Mutex b a 4M 8M 12M 16M 20M 17219955.71 17168753.52 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lbsd -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Stress-NG Test: Vector Math OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Vector Math a b 20K 40K 60K 80K 100K 103156.98 103019.74 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lbsd -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Stress-NG Test: Glibc C String Functions OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Glibc C String Functions a b 400K 800K 1200K 1600K 2000K 1715303.02 1703193.63 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lbsd -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Cpuminer-Opt 3.20.3 - Algorithm: Myriad-Groestl (kH/s; more is better): a: 11570, b: 12000
Cpuminer-Opt 3.20.3 - Algorithm: x25x (kH/s; more is better): a: 1252.09, b: 1253.15
Cpuminer-Opt 3.20.3 - Algorithm: Ringcoin (kH/s; more is better): a: 3292.87, b: 3325.86
Cpuminer-Opt 3.20.3 - Algorithm: Deepcoin (kH/s; more is better): a: 28150, b: 28140
Cpuminer-Opt 3.20.3 - Algorithm: Blake-2 S (kH/s; more is better): a: 2216930, b: 2239380
Cpuminer-Opt 3.20.3 - Algorithm: Quad SHA-256, Pyrite (kH/s; more is better): a: 401450, b: 403630
Cpuminer-Opt 3.20.3 - Algorithm: scrypt (kH/s; more is better): a: 951.15, b: 946.10
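Cpuminer-Opt also offers an offline benchmark mode that can reproduce a single-algorithm hash-rate number outside the harness. A minimal sketch, assuming a cpuminer-opt build whose binary is named cpuminer and whose usual --benchmark, -a and -t options are available:

    import os
    import subprocess

    # Benchmark the scrypt algorithm with one worker thread per logical CPU.
    # --benchmark runs without connecting to a mining pool and reports the hash rate.
    subprocess.run(
        [
            "cpuminer",
            "--benchmark",
            "-a", "scrypt",
            "-t", str(os.cpu_count()),
        ],
        check=True,
    )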
Natron 2.4.3 - Input: Spaceship (FPS; more is better): a: 3.7, b: 3.7
Redis 7.0.4 - Test: SET - Parallel Connections: 1000 (requests per second; more is better): a: 2202580.5, b: 2204248.0
Cpuminer-Opt 3.20.3 - Algorithm: Triple SHA-256, Onecoin (kH/s; more is better): a: 598720, b: 598300
Cpuminer-Opt 3.20.3 - Algorithm: LBC, LBRY Credits (kH/s; more is better): a: 182930, b: 183720
Redis 7.0.4 - Test: SET - Parallel Connections: 500 (requests per second; more is better): a: 2217236.25, b: 2232581.50
AOM AV1 3.5 - Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K (FPS; more is better): a: 22.76, b: 22.59
AOM AV1 3.5 - Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p (FPS; more is better): a: 0.72, b: 0.73
AOM AV1 3.5 - Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K (FPS; more is better): a: 22.89, b: 22.75
Redis 7.0.4 - Test: SET - Parallel Connections: 50 (requests per second; more is better): a: 2242796.0, b: 2244608.5
SVT-AV1 1.4 - Encoder Mode: Preset 4 - Input: Bosphorus 1080p (FPS; more is better): a: 6.245, b: 6.385
Y-Cruncher 0.7.10.9513 - Pi Digits To Calculate: 1B (seconds; fewer is better): a: 22.39, b: 22.43
miniBUDE 20210901 - Implementation: OpenMP - Input Deck: BM1 (billion interactions/s; more is better): a: 29.43, b: 30.27
miniBUDE 20210901 - Implementation: OpenMP - Input Deck: BM1 (GFInst/s; more is better): a: 735.75, b: 756.81
Redis 7.0.4 - Test: GET - Parallel Connections: 50 (requests per second; more is better): a: 2617818.25, b: 2404467.25
TensorFlow 2.10 - Device: CPU - Batch Size: 16 - Model: AlexNet (images/sec; more is better): a: 79.17, b: 79.09
AOM AV1 3.5 - Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p (FPS; more is better): a: 23.87, b: 26.43
Redis 7.0.4 - Test: GET - Parallel Connections: 1000 (requests per second; more is better): a: 2852955.50, b: 2805580.75
Redis 7.0.4 - Test: GET - Parallel Connections: 500 (requests per second; more is better): a: 2908381.75, b: 2700561.50
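The Redis SET/GET results in this comparison sweep the parallel connection count from 50 to 1000. For a rough independent check against a locally running server, the stock redis-benchmark tool can sweep the same connection counts; this is a minimal sketch rather than the exact harness behind the figures above, and it assumes a Redis server is already listening on the default localhost:6379.

    import subprocess

    # Sweep the same client-connection counts used above; -q prints one
    # requests-per-second summary line per command.
    for clients in (50, 500, 1000):
        subprocess.run(
            [
                "redis-benchmark",
                "-t", "set,get",
                "-n", "1000000",
                "-c", str(clients),
                "-q",
            ],
            check=True,
        )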
srsRAN 22.04.1 - Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM (UE Mb/s; more is better): a: 65.7, b: 65.5
srsRAN 22.04.1 - Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM (eNb Mb/s; more is better): a: 100.3, b: 99.9
srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM (UE Mb/s; more is better): a: 136.3, b: 134.7
srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM (eNb Mb/s; more is better): a: 320.8, b: 318.8
SVT-AV1 1.4 - Encoder Mode: Preset 8 - Input: Bosphorus 4K (FPS; more is better): a: 36.59, b: 36.39
FLAC Audio Encoding 1.4 - WAV To FLAC (seconds; fewer is better): a: 22.41, b: 22.58
oneDNN 2.7 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms; fewer is better): a: 11.48 (min 9.71), b: 11.56 (min 9.44)
oneDNN 2.7 - Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU (ms; fewer is better): a: 9.85430 (min 9.66), b: 9.81832 (min 9.66)
oneDNN 2.7 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms; fewer is better): a: 0.568809 (min 0.54), b: 0.567734 (min 0.54)
srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM (UE Mb/s; more is better): a: 126.1, b: 124.8
srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM (eNb Mb/s; more is better): a: 293.3, b: 289.2
Timed CPython Compilation 3.10.6 - Build Configuration: Default (seconds; fewer is better): a: 19.74, b: 19.84
WebP Image Encode 1.2.4 - Encode Settings: Quality 100, Lossless (MP/s; more is better): a: 1.25, b: 1.24
ASTC Encoder 4.0 - Preset: Thorough (MT/s; more is better): a: 18.06, b: 18.09
AOM AV1 3.5 - Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p (FPS; more is better): a: 38.05, b: 37.75
oneDNN 2.7 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms; fewer is better): a: 1.72077 (min 1.64), b: 1.71954 (min 1.65)
oneDNN 2.7 - Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU (ms; fewer is better): a: 5.70137 (min 5.56), b: 5.71409 (min 5.56)
oneDNN 2.7 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (ms; fewer is better): a: 1.37489 (min 1.28), b: 1.38109 (min 1.29)
Y-Cruncher 0.7.10.9513 - Pi Digits To Calculate: 500M (seconds; fewer is better): a: 11.12, b: 11.32
ASTC Encoder 4.0 - Preset: Fast (MT/s; more is better): a: 364.55, b: 364.35
AOM AV1 3.5 - Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p (FPS; more is better): a: 46.50, b: 46.38
Numenta Anomaly Benchmark 1.1 - Detector: Relative Entropy (seconds; fewer is better): a: 13.36, b: 13.36
AOM AV1 3.5 - Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p (FPS; more is better): a: 48.30, b: 46.73
oneDNN 2.7 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU (ms; fewer is better): a: 2.55381 (min 2.3), b: 2.59337 (min 2.34)
oneDNN 2.7 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms; fewer is better): a: 0.477096 (min 0.46), b: 0.481461 (min 0.46)
oneDNN 2.7 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (ms; fewer is better): a: 0.235557 (min 0.21), b: 0.235658 (min 0.21)
SVT-AV1 1.4 - Encoder Mode: Preset 12 - Input: Bosphorus 4K (FPS; more is better): a: 120.58, b: 118.27
libavif avifenc 0.11 - Encoder Speed: 6, Lossless (seconds; fewer is better): a: 10.043, b: 9.978
SVT-AV1 1.4 - Encoder Mode: Preset 8 - Input: Bosphorus 1080p (FPS; more is better): a: 85.02, b: 86.18
oneDNN 2.7 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms; fewer is better): a: 3.79612 (min 3.75), b: 3.80003 (min 3.75)
oneDNN 2.7 - Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU (ms; fewer is better): a: 3.40068 (min 2.9), b: 3.31851 (min 2.91)
oneDNN 2.7 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms; fewer is better): a: 1.25783 (min 1.17), b: 1.25446 (min 1.17)
Unpacking The Linux Kernel 5.19 - linux-5.19.tar.xz (seconds; fewer is better): a: 8.579, b: 8.584
WebP Image Encode 1.2.4 - Encode Settings: Quality 100, Highest Compression (MP/s; more is better): a: 3.03, b: 3.03
ASTC Encoder 4.0 - Preset: Medium (MT/s; more is better): a: 138.09, b: 136.69
SVT-AV1 1.4 - Encoder Mode: Preset 13 - Input: Bosphorus 4K (FPS; more is better): a: 106.68, b: 110.56
Numenta Anomaly Benchmark 1.1 - Detector: Windowed Gaussian (seconds; fewer is better): a: 7.529, b: 7.476
libavif avifenc 0.11 - Encoder Speed: 10, Lossless (seconds; fewer is better): a: 7.053, b: 7.052
oneDNN 2.7 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms; fewer is better): a: 7.47846 (min 7.39), b: 7.48562 (min 7.39)
oneDNN 2.7 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms; fewer is better): a: 6.94903 (min 6.87), b: 6.94611 (min 6.87)
oneDNN 2.7 - Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU (ms; fewer is better): a: 6.35819 (min 6.32), b: 6.41270 (min 6.32)
libavif avifenc 0.11 - Encoder Speed: 6 (seconds; fewer is better): a: 5.298, b: 5.280
SVT-AV1 1.4 - Encoder Mode: Preset 12 - Input: Bosphorus 1080p (FPS; more is better): a: 227.23, b: 227.06
SVT-AV1 1.4 - Encoder Mode: Preset 13 - Input: Bosphorus 1080p (FPS; more is better): a: 201.87, b: 199.97
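The SVT-AV1 1080p results above compare several presets on the Bosphorus clip. To run a comparable sweep by hand, the SvtAv1EncApp reference binary can be looped over the same presets; a minimal sketch, assuming SvtAv1EncApp is installed and "bosphorus_1080p.y4m" is a locally available copy of the test clip (hypothetical filename):

    import subprocess

    # Encode the clip at each preset; frame size and rate are read from the
    # Y4M header, and each run writes its own .ivf bitstream.
    for preset in ("8", "12", "13"):
        subprocess.run(
            [
                "SvtAv1EncApp",
                "--preset", preset,
                "-i", "bosphorus_1080p.y4m",
                "-b", f"bosphorus_preset{preset}.ivf",
            ],
            check=True,
        )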
WebP2 Image Encode 20220823 - Encode Settings: Default (MP/s; more is better): a: 6.59, b: 6.76
WebP2 Image Encode 20220823 - Encode Settings: Quality 100, Compression Effort 5 (MP/s; more is better): a: 7.27, b: 7.45
oneDNN 2.7 - Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU (ms; fewer is better): a: 9.61455 (min 9.53), b: 9.63450 (min 9.52)
oneDNN 2.7 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms; fewer is better): a: 2.80983 (min 2.78), b: 2.81784 (min 2.78)
oneDNN 2.7 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms; fewer is better): a: 0.693146 (min 0.68), b: 0.694553 (min 0.68)
WebP Image Encode 1.2.4 - Encode Settings: Quality 100 (MP/s; more is better): a: 8.92, b: 8.91
LAMMPS Molecular Dynamics Simulator 23Jun2022 - Model: Rhodopsin Protein (ns/day; more is better): a: 17.95, b: 13.09
WebP Image Encode 1.2.4 - Encode Settings: Default (MP/s; more is better): a: 13.83, b: 13.92
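Most of the results in this comparison separate the two configurations by only a few percent, while a handful (the Stress-NG CPU Cache and Socket Activity stressors, Redis GET at 500 connections, LAMMPS Rhodopsin) show much larger gaps. A small worked example of turning a pair of a/b values from the list above into a percentage lead for the faster configuration:

    # Worked example using values quoted from the results above.
    results = {
        # name: (value for a, value for b, True if more is better)
        "Stress-NG CPU Cache (Bogo Ops/s)": (89.3, 76.5, True),
        "Redis GET, 500 connections (RPS)": (2908381.75, 2700561.50, True),
        "LAMMPS Rhodopsin (ns/day)": (17.95, 13.09, True),
        "Y-Cruncher 500M digits (seconds)": (11.12, 11.32, False),
    }

    for name, (a, b, more_is_better) in results.items():
        a_leads = (a > b) if more_is_better else (a < b)
        lead = "a" if a_leads else "b"
        delta = abs(a - b) / min(a, b) * 100.0
        print(f"{name}: configuration {lead} leads by {delta:.1f}%")

Running this prints, for example, that configuration a leads the LAMMPS Rhodopsin result by roughly 37%, versus well under 2% on the Y-Cruncher 500M run.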
Phoronix Test Suite v10.8.5