Tests for a future article. Intel Core i5-12600K testing with an ASUS PRIME Z690-P WIFI D4 (0605 BIOS) and ASUS Intel ADL-S GT1 15GB on Ubuntu 22.04 via the Phoronix Test Suite.
a
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Disk Notes: NONE / errors=remount-ro,relatime,rw / Block Size: 4096
Processor Notes: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0x2c - Thermald 2.4.9
Java Notes: OpenJDK Runtime Environment (build 11.0.20.1+1-post-Ubuntu-0ubuntu122.04)
Python Notes: Python 3.10.12
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
b Processor: Intel Core i5-12600K @ 6.30GHz (10 Cores / 16 Threads), Motherboard: ASUS PRIME Z690-P WIFI D4 (0605 BIOS), Chipset: Intel Device 7aa7, Memory: 16GB, Disk: 1000GB Western Digital WDS100T1X0E-00AFY0, Graphics: ASUS Intel ADL-S GT1 15GB (1450MHz), Audio: Realtek ALC897, Monitor: ASUS MG28U, Network: Realtek RTL8125 2.5GbE + Intel Device 7af0
OS: Ubuntu 22.04, Kernel: 5.19.0-051900rc6daily20220716-generic (x86_64), Desktop: GNOME Shell 42.1, Display Server: X Server 1.21.1.3 + Wayland, OpenGL: 4.6 Mesa 22.0.1, OpenCL: OpenCL 3.0, Vulkan: 1.2.204, Compiler: GCC 11.4.0, File-System: ext4, Screen Resolution: 3840x2160
oktoberfest - OpenBenchmarking.org / Phoronix Test Suite result overview ("Oktoberfest Benchmarks"): system component table and system logs for the tested Intel Core i5-12600K platform, duplicating the hardware, software, and notes details listed above.
a vs. b Comparison (Phoronix Test Suite, per-test difference from baseline): the largest deltas between the two runs are in oneDNN Recurrent Neural Network Training f32 CPU (29.5%) and Recurrent Neural Network Inference f32 CPU (21.7%), followed by AOM AV1 realtime encoding at various speed levels and resolutions (roughly 3% to 15%), NCNN (mobilenet-v2, yolov4-tiny, vision_transformer, alexnet, resnet50, mobilenet, mnasnet, FastestDet), Liquid-DSP, OpenRadioss (Bumper Beam, Cell Phone Drop Test), SVT-AV1 Preset 12/13 at 1080p, Neural Magic DeepSparse, Memcached 1:10, Apache HTTP Server, libavif avifenc, and Stress-NG Poll, with the remaining deltas at or below roughly 2%.
oktoberfest - full result table (configurations a and b): per-test values for Stress-NG, nekRS, dav1d, AOM AV1, Embree, SVT-AV1, VVenC, HPCG, libxsmm, Intel Open Image Denoise, TensorFlow, OpenVKL, OSPRay, Neural Magic DeepSparse, Palabos, QuantLib, Apache Cassandra, Memcached, nginx, Apache HTTP Server, Liquid-DSP, BRL-CAD, oneDNN, OSPRay Studio, NCNN, SQLite, OpenRadioss, Z3 Theorem Prover, easyWave, libavif avifenc, timed GCC / Godot / Build2 compilation, Opus encoding, eSpeak-NG, DuckDB, Blender, Whisper.cpp, and QMCPACK; selected results are broken out individually in the sections below.
Stress-NG 0.16.04 (Bogo Ops/s, more is better):
Test: Pipe - b: 7160556.83, a: 7171062.72
Test: Poll - b: 1217218.91, a: 1241130.30
Test: Zlib - b: 1129.89, a: 1130.39
Test: Cloning - a: 860.48, b: 865.39
Test: Pthread - a: 186367.10, b: 186386.61
Test: AVL Tree - a: 81.66, b: 82.10
Test: AVX-512 VNNI - a: 1228899.23, b: 1229104.91
Test: Floating Point - a: 4377.45, b: 4412.33
Test: Matrix 3D Math - a: 1401.51, b: 1418.21
Test: Vector Shuffle - a: 9447.65, b: 9457.69
Test: Mixed Scheduler - a: 10186.26, b: 10290.81
Test: Wide Vector Math - b: 484394.85, a: 484455.75
Test: Fused Multiply-Add - a: 16384409.66, b: 16391410.53
Test: Vector Floating Point - b: 19468.51, a: 19538.35
1. (CXX) g++ options: -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz
nekRS
nekRS is an open-source Navier-Stokes solver based on the spectral element method. nekRS supports both CPU and GPU/accelerator execution, though this test profile is currently configured for CPU execution. nekRS is part of the Nek5000 effort of the Mathematics and Computer Science (MCS) division at Argonne National Laboratory. This nekRS benchmark is primarily relevant to large core count HPC servers and otherwise may be very time consuming on smaller systems. Learn more via the OpenBenchmarking.org test page.
nekRS 23.0 (flops/rank, more is better):
Input: Kershaw - b: 3260840000, a: 3261960000
Input: TurboPipe Periodic - b: 4079830000
1. (CXX) g++ options: -fopenmp -O2 -march=native -mtune=native -ftree-vectorize -rdynamic -lmpi_cxx -lmpi
Input: TurboPipe Periodic
a: The test quit with a non-zero exit status. E: mpirun noticed that process rank 1 with PID 0 on node phoronix-System-Product-Name exited on signal 11 (Segmentation fault).
AOM AV1 3.7 (Frames Per Second, more is better):
Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K - a: 6.77, b: 6.77
Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K - a: 63.82, b: 71.58
Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K - a: 14.25, b: 14.26
Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K - b: 68.16, a: 68.67
Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K - a: 69.71, b: 79.90
Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K - a: 75.67, b: 81.38
Encoder Mode: Speed 11 Realtime - Input: Bosphorus 4K - b: 70.73, a: 73.19
Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p - a: 0.73, b: 0.73
Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p - a: 15.99, b: 15.99
Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p - a: 146.21, b: 166.68
Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p - b: 46.68, a: 46.78
Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p - b: 162.27, a: 183.46
Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p - b: 184.04, a: 199.96
Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p - a: 178.89, b: 188.01
Encoder Mode: Speed 11 Realtime - Input: Bosphorus 1080p - b: 190.21, a: 193.68
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
Embree
Embree 4.3 (Frames Per Second, more is better):
Binary: Pathtracer - Model: Crown - b: 12.62 (MIN: 12.52 / MAX: 12.78), a: 12.71 (MIN: 12.62 / MAX: 12.94)
Binary: Pathtracer ISPC - Model: Crown - a: 13.49 (MIN: 13.36 / MAX: 13.73), b: 13.54 (MIN: 13.42 / MAX: 13.73)
Binary: Pathtracer - Model: Asian Dragon - b: 14.91 (MIN: 14.84 / MAX: 15.18), a: 14.97 (MIN: 14.91 / MAX: 15.16)
Binary: Pathtracer - Model: Asian Dragon Obj - b: 13.48 (MIN: 13.41 / MAX: 13.7), a: 13.51 (MIN: 13.43 / MAX: 13.66)
Binary: Pathtracer ISPC - Model: Asian Dragon - a: 16.66 (MIN: 16.56 / MAX: 16.95), b: 16.69 (MIN: 16.6 / MAX: 16.96)
Binary: Pathtracer ISPC - Model: Asian Dragon Obj - a: 14.51 (MIN: 14.43 / MAX: 14.71), b: 14.57 (MIN: 14.49 / MAX: 14.75)
SVT-AV1
SVT-AV1 1.7 (Frames Per Second, more is better):
Encoder Mode: Preset 4 - Input: Bosphorus 4K - b: 3.455, a: 3.475
Encoder Mode: Preset 8 - Input: Bosphorus 4K - b: 44.94, a: 44.98
Encoder Mode: Preset 12 - Input: Bosphorus 4K - b: 109.21, a: 109.88
Encoder Mode: Preset 13 - Input: Bosphorus 4K - a: 111.13, b: 111.54
Encoder Mode: Preset 4 - Input: Bosphorus 1080p - a: 11.20, b: 11.22
Encoder Mode: Preset 8 - Input: Bosphorus 1080p - a: 92.68, b: 93.34
Encoder Mode: Preset 12 - Input: Bosphorus 1080p - b: 427.53, a: 437.37
Encoder Mode: Preset 13 - Input: Bosphorus 1080p - b: 499.07, a: 516.20
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
VVenC
VVenC is the Fraunhofer Versatile Video Encoder, a fast and efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe). The vvenc software is published under the Clear BSD License. Learn more via the OpenBenchmarking.org test page.
VVenC 1.9 (Frames Per Second, more is better):
Video Input: Bosphorus 4K - Video Preset: Fast - b: 4.912, a: 4.931
Video Input: Bosphorus 4K - Video Preset: Faster - a: 10.18, b: 10.23
Video Input: Bosphorus 1080p - Video Preset: Fast - a: 15.11, b: 15.24
Video Input: Bosphorus 1080p - Video Preset: Faster - b: 33.09, a: 33.29
1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto
HPCG - X Y Z: 144 144 144 - RT: 60
a: The test quit with a non-zero exit status. E: cat: 'HPCG-Benchmark*.txt': No such file or directory
b: The test quit with a non-zero exit status. E: cat: 'HPCG-Benchmark*.txt': No such file or directory
libxsmm Libxsmm is an open-source library for specialized dense and sparse matrix operations and deep learning primitives. Libxsmm supports making use of Intel AMX, AVX-512, and other modern CPU instruction set capabilities. Learn more via the OpenBenchmarking.org test page.
libxsmm 2-1.17-3645 (GFLOPS/s, more is better):
M N K: 128 - a: 247.5, b: 248.0
M N K: 32 - a: 96.3, b: 96.4
M N K: 64 - b: 167.0, a: 167.9
1. (CXX) g++ options: -dynamic -Bstatic -static-libgcc -lgomp -lm -lrt -ldl -lquadmath -lstdc++ -pthread -fPIC -std=c++14 -O2 -fopenmp -fopenmp-simd -funroll-loops -ftree-vectorize -fdata-sections -ffunction-sections -fvisibility=hidden -march=core-avx2
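For context on what the M N K figures above measure, here is a minimal sketch (my own illustration using NumPy's GEMM, not libxsmm's specialized kernels) that times square matrix multiplies at the same sizes and converts them to GFLOPS, counting 2*M*N*K floating-point operations per multiply:

```python
import time
import numpy as np

def gemm_gflops(n: int, iters: int = 200) -> float:
    """Time an n x n x n matrix multiply and return GFLOPS (2*n^3 flops per GEMM)."""
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)
    np.dot(a, b)                        # warm-up
    start = time.perf_counter()
    for _ in range(iters):
        np.dot(a, b)
    elapsed = time.perf_counter() - start
    return (2.0 * n ** 3 * iters) / elapsed / 1e9

for n in (32, 64, 128):                 # same M = N = K sizes as the libxsmm results above
    print(f"M N K: {n} -> {gemm_gflops(n):.1f} GFLOPS")
```

At these small sizes the per-call overhead dominates, which is precisely the regime libxsmm's JIT-generated small-matrix kernels are designed to optimize.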
TensorFlow This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.
TensorFlow 2.12 (images/sec, more is better):
Device: CPU - Batch Size: 16 - Model: ResNet-50 - a: 17.01, b: 17.09
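The test profile drives the reference tf_cnn_benchmarks harness; as a rough, assumed analog of the images/sec metric (forward passes only with random weights, which is not necessarily the same workload mix as the reference benchmark), a Keras sketch could look like this:

```python
import time
import numpy as np
import tensorflow as tf

model = tf.keras.applications.ResNet50(weights=None)       # random weights; throughput only
batch = np.random.rand(16, 224, 224, 3).astype("float32")  # batch size 16, as in the result above

model.predict(batch, verbose=0)                             # warm-up / graph build
steps = 20
start = time.perf_counter()
for _ in range(steps):
    model.predict(batch, verbose=0)
elapsed = time.perf_counter() - start
print(f"{steps * 16 / elapsed:.2f} images/sec on CPU")
```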
OSPRay Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.
OSPRay 2.12 (Items Per Second, more is better):
Benchmark: particle_volume/ao/real_time - a: 5.74695, b: 5.76091
Benchmark: gravity_spheres_volume/dim_512/ao/real_time - b: 2.63451, a: 2.63693
Benchmark: gravity_spheres_volume/dim_512/scivis/real_time - b: 2.54893, a: 2.55887
Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time - b: 3.86334, a: 3.88095
Palabos The Palabos library is a framework for general purpose Computational Fluid Dynamics (CFD). Palabos uses a kernel based on the Lattice Boltzmann method. This test profile uses the Palabos MPI-based Cavity3D benchmark. Learn more via the OpenBenchmarking.org test page.
Palabos 2.3 (Mega Site Updates Per Second, more is better):
Grid Size: 100 - a: 51.24, b: 51.31
Grid Size: 400 - b: 76.59, a: 76.76
1. (CXX) g++ options: -std=c++17 -pedantic -O3 -rdynamic -lcrypto -lcurl -lsz -lz -ldl -lm
QuantLib
QuantLib is an open-source library/framework around quantitative finance for modeling, trading and risk management scenarios. QuantLib is written in C++ with Boost, and its built-in benchmark reports the QuantLib Benchmark Index score. Learn more via the OpenBenchmarking.org test page.
QuantLib 1.32 (MFLOPS, more is better):
Configuration: Multi-Threaded - b: 39967.5, a: 40312.3
Configuration: Single-Threaded - a: 4562.6, b: 4573.9
1. (CXX) g++ options: -O3 -march=native -fPIE -pie
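QuantLib's built-in benchmark exercises its internal Benchmark Index suite; purely to illustrate the kind of workload the library handles, here is a minimal, assumed sketch using the separate QuantLib-Python bindings (not the C++ benchmark binary) that prices a single European call analytically:

```python
import QuantLib as ql

today = ql.Date(29, 10, 2023)
ql.Settings.instance().evaluationDate = today

# Flat market assumptions: spot 100, 5% risk-free rate, no dividends, 20% volatility
spot = ql.QuoteHandle(ql.SimpleQuote(100.0))
rates = ql.YieldTermStructureHandle(ql.FlatForward(today, 0.05, ql.Actual365Fixed()))
divs = ql.YieldTermStructureHandle(ql.FlatForward(today, 0.00, ql.Actual365Fixed()))
vol = ql.BlackVolTermStructureHandle(
    ql.BlackConstantVol(today, ql.TARGET(), 0.20, ql.Actual365Fixed()))
process = ql.BlackScholesMertonProcess(spot, divs, rates, vol)

option = ql.VanillaOption(ql.PlainVanillaPayoff(ql.Option.Call, 105.0),
                          ql.EuropeanExercise(ql.Date(29, 4, 2024)))
option.setPricingEngine(ql.AnalyticEuropeanEngine(process))
print(f"European call NPV: {option.NPV():.4f}")
```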
Memcached
Memcached is a high performance, distributed memory object caching system. This Memcached test profile makes use of memtier_benchmark for executing this CPU/memory-focused server benchmark. Learn more via the OpenBenchmarking.org test page.
Memcached 1.6.19 (Ops/sec, more is better):
Set To Get Ratio: 1:10 - b: 3183290.59, a: 3279849.87
Set To Get Ratio: 1:100 - a: 3016454.90, b: 3059379.28
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
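memtier_benchmark is the actual load generator used by the test profile; the sketch below is only an assumed, simplified illustration of what a 1:10 set-to-get ratio means, using the pymemcache client against a local memcached on its default port:

```python
import random
import time
from pymemcache.client.base import Client

client = Client(("127.0.0.1", 11211))            # assumes a local memcached instance

keys = [f"key-{i}" for i in range(1000)]
for k in keys:                                   # pre-populate so gets can hit
    client.set(k, b"x" * 64)

ops, start = 0, time.perf_counter()
while time.perf_counter() - start < 5.0:         # fixed 5-second measurement window
    k = random.choice(keys)
    if ops % 11 == 0:                            # roughly one set per ten gets (1:10 ratio)
        client.set(k, b"x" * 64)
    else:
        client.get(k)
    ops += 1
print(f"{ops / (time.perf_counter() - start):.0f} ops/sec over a single connection")
```

A single synchronous Python client is nowhere near the millions of ops/sec memtier_benchmark achieves with many pipelined connections; the point is only the ratio and the fixed-window measurement.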
nginx
This is a benchmark of the lightweight Nginx HTTP(S) web server. This Nginx benchmark test profile makes use of the wrk program for facilitating the HTTP requests over a fixed period of time with a configurable number of concurrent clients/connections. HTTPS with a self-signed OpenSSL certificate is used by this test for local benchmarking. Learn more via the OpenBenchmarking.org test page.
nginx 1.23.2 (Requests Per Second, more is better):
Connections: 100 - b: 107751.55, a: 108258.46
Connections: 200 - b: 102479.68, a: 102547.95
Connections: 500 - b: 88913.24, a: 89266.33
Connections: 1000 - b: 82032.62, a: 82318.98
1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2
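wrk is what actually generates the load here; as a rough, assumed Python illustration of the methodology (many concurrent clients, a fixed time window, self-signed TLS accepted), one could write:

```python
import time
from concurrent.futures import ThreadPoolExecutor

import requests
import urllib3

urllib3.disable_warnings()                      # self-signed certificate, skip verification
URL = "https://localhost:8443/"                 # assumed local nginx test instance
DURATION, WORKERS = 10.0, 100                   # fixed window and concurrent "connections"

def worker(deadline: float) -> int:
    session, done = requests.Session(), 0
    while time.perf_counter() < deadline:
        session.get(URL, verify=False)
        done += 1
    return done

deadline = time.perf_counter() + DURATION
with ThreadPoolExecutor(max_workers=WORKERS) as pool:
    totals = list(pool.map(worker, [deadline] * WORKERS))
print(f"{sum(totals) / DURATION:.0f} requests/sec")
```

Python threads cannot approach wrk's event-driven throughput; this only mirrors the fixed-period, fixed-concurrency measurement, not the absolute numbers above.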
Apache HTTP Server
This is a test of the Apache HTTPD web server. This Apache HTTPD benchmark test profile makes use of the wrk program for facilitating the HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.
Apache HTTP Server 2.4.56 (Requests Per Second, more is better):
Concurrent Requests: 100 - b: 145590.45, a: 148966.44
1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2
Liquid-DSP LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.
Liquid-DSP 1.6 (samples/s, more is better):
Threads: 1 - Buffer Length: 256 - Filter Length: 32 - a: 57879000, b: 57881000
Threads: 1 - Buffer Length: 256 - Filter Length: 57 - a: 78801000, b: 80378000
Threads: 2 - Buffer Length: 256 - Filter Length: 32 - b: 115690000, a: 115700000
Threads: 2 - Buffer Length: 256 - Filter Length: 57 - a: 159380000, b: 160510000
Threads: 4 - Buffer Length: 256 - Filter Length: 32 - b: 221680000, a: 221860000
Threads: 4 - Buffer Length: 256 - Filter Length: 57 - b: 303380000, a: 305790000
Threads: 8 - Buffer Length: 256 - Filter Length: 32 - b: 400370000, a: 402980000
Threads: 8 - Buffer Length: 256 - Filter Length: 57 - b: 515540000, a: 518790000
Threads: 1 - Buffer Length: 256 - Filter Length: 512 - b: 20781000, a: 20784000
Threads: 16 - Buffer Length: 256 - Filter Length: 32 - b: 750070000, a: 757790000
Threads: 16 - Buffer Length: 256 - Filter Length: 57 - b: 676010000, a: 695800000
Threads: 2 - Buffer Length: 256 - Filter Length: 512 - b: 41499000, a: 41548000
Threads: 4 - Buffer Length: 256 - Filter Length: 512 - a: 80680000, b: 81117000
Threads: 8 - Buffer Length: 256 - Filter Length: 512 - a: 130420000, b: 131280000
Threads: 16 - Buffer Length: 256 - Filter Length: 512 - a: 185450000, b: 196230000
1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
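The benchmark streams buffers of samples through an FIR filter of the given length and reports samples per second; as an assumed single-threaded analog in NumPy/SciPy (not liquid-dsp's C API), the measurement looks roughly like this:

```python
import time
import numpy as np
from scipy.signal import lfilter

def fir_throughput(filter_len: int, buffer_len: int = 256, iters: int = 20000) -> float:
    """Push complex sample buffers through an FIR filter; return samples processed per second."""
    taps = np.random.rand(filter_len).astype(np.float32)
    buf = (np.random.rand(buffer_len) + 1j * np.random.rand(buffer_len)).astype(np.complex64)
    start = time.perf_counter()
    for _ in range(iters):
        lfilter(taps, [1.0], buf)               # pure FIR: denominator is 1
    elapsed = time.perf_counter() - start
    return buffer_len * iters / elapsed

for flen in (32, 57, 512):                      # same filter lengths as the runs above
    print(f"filter length {flen}: {fir_throughput(flen) / 1e6:.2f} Msamples/s")
```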
BRL-CAD BRL-CAD is a cross-platform, open-source solid modeling system with built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.
BRL-CAD 7.36 (VGR Performance Metric, more is better):
VGR Performance Metric - b: 215193, a: 215292
1. (CXX) g++ options: -std=c++14 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -ltcl8.6 -lregex_brl -lz_brl -lnetpbm -ldl -lm -ltk8.6
oneDNN
This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.
oneDNN 3.3 (ms, fewer is better), Data Type: f32 - Engine: CPU:
Harness: IP Shapes 1D - a: 3.89503 (MIN: 3.75), b: 3.89409 (MIN: 3.75)
Harness: IP Shapes 3D - a: 10.80 (MIN: 10.69), b: 10.69 (MIN: 10.6)
Harness: Convolution Batch Shapes Auto - a: 14.35 (MIN: 14.15), b: 14.34 (MIN: 14.17)
Harness: Deconvolution Batch shapes_1d - a: 9.24791 (MIN: 5.33), b: 9.23030 (MIN: 5.49)
Harness: Deconvolution Batch shapes_3d - a: 8.04899 (MIN: 8.02), b: 8.04648 (MIN: 8.02)
Harness: Recurrent Neural Network Training - a: 5530.13 (MIN: 4272.6), b: 4270.30 (MIN: 4261.15)
Harness: Recurrent Neural Network Inference - a: 2636.78 (MIN: 2149.96), b: 2167.50 (MIN: 2155.94)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
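benchdnn is oneDNN's own harness and reports the total perf time per problem; as a loose, assumed analog of timing a single f32 CPU primitive, the sketch below times a convolution through PyTorch, which dispatches to oneDNN on x86 CPUs (the shapes are arbitrary, not benchdnn's):

```python
import time
import torch

torch.set_num_threads(16)                       # match the 16 threads of the CPU under test
conv = torch.nn.Conv2d(64, 64, kernel_size=3, padding=1)
x = torch.randn(32, 64, 56, 56)                 # arbitrary f32 batch

with torch.no_grad():
    conv(x)                                     # warm-up
    iters = 50
    start = time.perf_counter()
    for _ in range(iters):
        conv(x)
    elapsed = time.perf_counter() - start
print(f"{elapsed / iters * 1000:.2f} ms per convolution")
```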
OSPRay Studio Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.
OSPRay Studio 0.13 (ms, fewer is better):
Camera: 1 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU - a: 9734, b: 9716
NCNN 20230517 (ms, fewer is better):
Target: CPU-v2-v2 - Model: mobilenet-v2 - b: 3.34 (MIN: 3.33 / MAX: 3.56), a: 2.98 (MIN: 2.96 / MAX: 3.1)
Target: CPU-v3-v3 - Model: mobilenet-v3 - b: 2.41 (MIN: 2.38 / MAX: 2.55), a: 2.38 (MIN: 2.36 / MAX: 2.6)
Target: CPU - Model: shufflenet-v2 - b: 2.27 (MIN: 2.26 / MAX: 2.55), a: 2.24 (MIN: 2.23 / MAX: 2.28)
Target: CPU - Model: mnasnet - b: 2.60 (MIN: 2.58 / MAX: 2.73), a: 2.54 (MIN: 2.52 / MAX: 2.6)
Target: CPU - Model: efficientnet-b0 - b: 4.91 (MIN: 4.86 / MAX: 5.12), a: 4.87 (MIN: 4.83 / MAX: 5)
Target: CPU - Model: blazeface - b: 0.76 (MIN: 0.75 / MAX: 0.77), a: 0.75 (MIN: 0.74 / MAX: 0.78)
Target: CPU - Model: googlenet - a: 7.77 (MIN: 7.62 / MAX: 7.91), b: 7.73 (MIN: 7.62 / MAX: 7.86)
Target: CPU - Model: vgg16 - a: 37.31 (MIN: 37.22 / MAX: 37.65), b: 36.62 (MIN: 36.48 / MAX: 43.16)
Target: CPU - Model: resnet18 - a: 5.52 (MIN: 5.35 / MAX: 5.68), b: 5.50 (MIN: 5.39 / MAX: 5.62)
Target: CPU - Model: alexnet - b: 5.14 (MIN: 5.08 / MAX: 5.22), a: 4.96 (MIN: 4.89 / MAX: 5.09)
Target: CPU - Model: resnet50 - b: 13.15 (MIN: 12.98 / MAX: 13.35), a: 12.77 (MIN: 12.66 / MAX: 13.17)
Target: CPU - Model: yolov4-tiny - b: 17.59 (MIN: 17.43 / MAX: 23.14), a: 16.64 (MIN: 16.5 / MAX: 16.86)
Target: CPU - Model: squeezenet_ssd - b: 6.99 (MIN: 6.91 / MAX: 7.2), a: 6.86 (MIN: 6.78 / MAX: 6.97)
Target: CPU - Model: regnety_400m - b: 6.05 (MIN: 6.02 / MAX: 6.18), a: 5.98 (MIN: 5.94 / MAX: 6.11)
Target: CPU - Model: vision_transformer - a: 100.28 (MIN: 94 / MAX: 163.09), b: 95.33 (MIN: 94.07 / MAX: 152.19)
Target: CPU - Model: FastestDet - b: 3.09 (MIN: 3.08 / MAX: 3.18), a: 3.00 (MIN: 2.96 / MAX: 3.14)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
SQLite This is a simple benchmark of SQLite. At present this test profile just measures the time to perform a pre-defined number of insertions on an indexed database with a variable number of concurrent repetitions -- up to the maximum number of CPU threads available. Learn more via the OpenBenchmarking.org test page.
SQLite 3.41.2 (Seconds, fewer is better):
Threads / Copies: 1 - a: 8.943, b: 8.908
1. (CC) gcc options: -O2 -lreadline -ltermcap -lz -lm
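The test profile drives the sqlite3 binary itself; a minimal, assumed Python sketch of the same idea (time a fixed number of insertions into an indexed table, with the row count and commit policy being my own choices) could be:

```python
import sqlite3
import time

con = sqlite3.connect("bench.db")
con.execute("CREATE TABLE IF NOT EXISTS t (id INTEGER, payload TEXT)")
con.execute("CREATE INDEX IF NOT EXISTS t_id ON t (id)")      # indexed, as in the test profile

N = 10_000                                                    # assumed insertion count
start = time.perf_counter()
for i in range(N):
    con.execute("INSERT INTO t VALUES (?, ?)", (i, "x" * 32))
    con.commit()                                              # commit per insert to hit storage
print(f"{N} insertions in {time.perf_counter() - start:.3f} seconds")
con.close()
```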
OpenRadioss
OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss and was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/ and https://github.com/OpenRadioss/ModelExchange/tree/main/Examples. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.
OpenRadioss 2023.09.15 (Seconds, fewer is better):
Model: Bumper Beam - b: 233.21, a: 222.32
easyWave The easyWave software allows simulating tsunami generation and propagation in the context of early warning systems. EasyWave supports making use of OpenMP for CPU multi-threading and there are also GPU ports available but not currently incorporated as part of this test profile. The easyWave tsunami generation software is run with one of the example/reference input files for measuring the CPU execution time. Learn more via the OpenBenchmarking.org test page.
easyWave r34 (Seconds, fewer is better):
Input: e2Asean Grid + BengkuluSept2007 Source - Time: 240 - b: 9.060, a: 9.046
Input: e2Asean Grid + BengkuluSept2007 Source - Time: 1200 - b: 186.00, a: 185.88
1. (CXX) g++ options: -O3 -fopenmp
Timed Godot Game Engine Compilation This test times how long it takes to compile the Godot Game Engine. Godot is a popular, open-source, cross-platform 2D/3D game engine and is built using the SCons build system and targeting the X11 platform. Learn more via the OpenBenchmarking.org test page.
Timed Godot Game Engine Compilation 4.0 (Seconds, fewer is better):
Time To Compile - a: 323.86
Time To Compile
b: The test quit with a non-zero exit status. E: scene/3d/skeleton_3d.cpp:1054:1: internal compiler error: Segmentation fault
Timed LLVM Compilation This test times how long it takes to compile/build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.
Build System: Ninja
a: The test quit with a non-zero exit status. E: /usr/include/c++/11/bits/vector.tcc:253:5: internal compiler error: Segmentation fault
b: The test quit with a non-zero exit status. E: llvm-16.0.0.src/lib/Target/AMDGPU/AMDGPURegBankCombiner.cpp:476: internal compiler error: Segmentation fault
Build System: Unix Makefiles
a: The test quit with a non-zero exit status. E: /usr/include/c++/11/bits/stl_map.h:524:7: internal compiler error: Segmentation fault
b: The test quit with a non-zero exit status. E: llvm-16.0.0.src/lib/AsmParser/LLParser.cpp:6604:1: internal compiler error: Segmentation fault
Timed Node.js Compilation This test profile times how long it takes to build/compile Node.js itself from source. Node.js is a JavaScript run-time built from the Chrome V8 JavaScript engine while itself is written in C/C++. Learn more via the OpenBenchmarking.org test page.
Time To Compile
a: The test quit with a non-zero exit status. E: ../deps/v8/src/heap/marking-worklist.h:187:53: internal compiler error: Segmentation fault
b: The test quit with a non-zero exit status. E: ../src/util-inl.h:563:6: internal compiler error: Segmentation fault
Build2
This test profile measures the time to bootstrap/install the build2 C++ build toolchain from source. Build2 is a cross-platform build toolchain for C/C++ code with Cargo-like features. Learn more via the OpenBenchmarking.org test page.
Build2 0.15 (Seconds, fewer is better):
Time To Compile - a: 137.60
Time To Compile
b: The test quit with a non-zero exit status. E: build2-toolchain-0.15.0/build2/libbuild2/script/run.cxx:94:5: internal compiler error: Segmentation fault
Opus Codec Encoding Opus is an open audio codec. Opus is a lossy audio compression format designed primarily for interactive real-time applications over the Internet. This test uses Opus-Tools and measures the time required to encode a WAV file to Opus five times. Learn more via the OpenBenchmarking.org test page.
Opus Codec Encoding 1.4 (Seconds, fewer is better):
WAV To Opus Encode - b: 20.54, a: 20.51
1. (CXX) g++ options: -O3 -fvisibility=hidden -logg -lm
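The measurement is simply the wall-clock time to run the opusenc tool from Opus-Tools five times over the same WAV file; an assumed Python sketch (file names are placeholders) would be:

```python
import subprocess
import time

WAV = "input.wav"                      # placeholder for the test profile's sample WAV file
start = time.perf_counter()
for i in range(5):                     # the test encodes the WAV file five times
    subprocess.run(["opusenc", WAV, f"out-{i}.opus"],
                   check=True, capture_output=True)
print(f"Total encode time: {time.perf_counter() - start:.2f} seconds")
```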
DuckDB 0.9.1 (Seconds, fewer is better):
Benchmark: TPC-H Parquet - a: 97.82 (SE +/- 0.18, N = 3)
1. (CXX) g++ options: -O3 -rdynamic -lssl -lcrypto -ldl
Benchmark: TPC-H Parquet
b: The test run did not produce a result.
Whisper.cpp Whisper.cpp is a port of OpenAI's Whisper model in C/C++. Whisper.cpp is developed by Georgi Gerganov for transcribing WAV audio files to text / speech recognition. Whisper.cpp supports ARM NEON, x86 AVX, and other advanced CPU features. Learn more via the OpenBenchmarking.org test page.
Whisper.cpp 1.4 (Seconds, fewer is better):
Model: ggml-base.en - Input: 2016 State of the Union - b: 246.69, a: 244.73
Model: ggml-small.en - Input: 2016 State of the Union - a: 820.20, b: 818.91
Model: ggml-medium.en - Input: 2016 State of the Union - b: 2736.77, a: 2723.05
1. (CXX) g++ options: -O3 -std=c++11 -fPIC -pthread
QMCPACK
QMCPACK is a modern, high-performance, open-source Quantum Monte Carlo (QMC) simulation code making use of MPI for this benchmark of the H2O example code. QMCPACK is an open-source, production-level, many-body ab initio Quantum Monte Carlo code for computing the electronic structure of atoms, molecules, and solids. QMCPACK is supported by the U.S. Department of Energy. Learn more via the OpenBenchmarking.org test page.
QMCPACK 3.17.1 (Total Execution Time - Seconds, fewer is better):
Input: H4_ae - b: 40.51, a: 39.88
Input: Li2_STO_ae - a: 445.38, b: 445.23
Input: LiH_ae_MSD - b: 202.34, a: 201.74
Input: simple-H2O - b: 44.36, a: 43.89
Input: O_ae_pyscf_UHF - a: 314.45, b: 313.84
Input: FeCO6_b3lyp_gms - a: 341.81, b: 338.60
1. (CXX) g++ options: -fopenmp -foffload=disable -finline-limit=1000 -fstrict-aliasing -funroll-all-loops -ffast-math -march=native -O3 -lm -ldl
a: Testing initiated at 29 October 2023 06:06 by user pts.
b: Testing initiated at 29 October 2023 13:59 by user pts.