tgls: Tests for a future article. Intel Core i7-1185G7 testing with a Dell 0DXP1F (3.7.0 BIOS) and Intel Xe TGL GT2 15GB graphics on Ubuntu 22.04 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2311139-PTS-TGLS087809&grr.
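OpenBenchmarking.org result IDs can be replayed locally through the Phoronix Test Suite, which is handy for checking these numbers against your own hardware. A minimal sketch, assuming the phoronix-test-suite CLI is installed and on the PATH:

```python
# Minimal sketch: replay this OpenBenchmarking.org comparison locally with the
# Phoronix Test Suite. Assumes the `phoronix-test-suite` CLI is installed and
# on PATH; PTS will still prompt interactively for any per-test options.
import subprocess

RESULT_ID = "2311139-PTS-TGLS087809"  # taken from the result URL above

# `phoronix-test-suite benchmark <result-id>` downloads the public result file
# and runs the same test selection on the local system for comparison.
subprocess.run(["phoronix-test-suite", "benchmark", RESULT_ID], check=True)
```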
tgls - System Details (configurations a, b, c):
Processor: Intel Core i7-1185G7 @ 4.80GHz (4 Cores / 8 Threads)
Motherboard: Dell 0DXP1F (3.7.0 BIOS)
Chipset: Intel Tiger Lake-LP
Memory: 16GB
Disk: Micron 2300 NVMe 512GB
Graphics: Intel Xe TGL GT2 15GB (1350MHz)
Audio: Realtek ALC289
Network: Intel Wi-Fi 6 AX201
OS: Ubuntu 22.04
Kernel: 5.19.0-46-generic (x86_64) for a and b; 6.2.0-36-generic (x86_64) for c
Desktop: GNOME Shell 42.2
Display Server: X Server + Wayland
OpenGL: 4.6 Mesa 22.0.1
OpenCL: OpenCL 3.0
Vulkan: 1.3.204
Compiler: GCC 11.4.0
File-System: ext4
Screen Resolution: 1920x1200
Kernel Details - Transparent Huge Pages: madvise
Compiler Details - --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details - a, b: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0xa6 - Thermald 2.4.9; c: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0xac - Thermald 2.4.9
Java Details - OpenJDK Runtime Environment (build 11.0.20.1+1-post-Ubuntu-0ubuntu122.04)
Python Details - Python 3.10.12
Security Details - a: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected; b: same as a; c: gather_data_sampling: Mitigation of Microcode + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected
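The only configuration differences between runs a/b and run c are the kernel (5.19.0-46 vs. 6.2.0-36) and the CPU microcode revision (0xa6 vs. 0xac, which brings the gather_data_sampling mitigation noted above). A minimal sketch for reading those same details back on a Linux system; the sysfs paths assume the intel_pstate driver used in these runs:

```python
# Minimal sketch: read back the processor/security details recorded above.
# Paths assume Linux with the intel_pstate cpufreq driver (as on this system).
from pathlib import Path

def first_match(path: str, prefix: str) -> str:
    """Return the first line in `path` starting with `prefix`."""
    for line in Path(path).read_text().splitlines():
        if line.startswith(prefix):
            return line
    return f"{prefix}: not found"

print(first_match("/proc/cpuinfo", "microcode"))                 # e.g. 0xa6 (a, b) vs 0xac (c)
print(Path("/proc/sys/kernel/osrelease").read_text().strip())    # kernel version

cpu0 = Path("/sys/devices/system/cpu/cpu0/cpufreq")
print("governor:", (cpu0 / "scaling_governor").read_text().strip())
print("EPP:", (cpu0 / "energy_performance_preference").read_text().strip())

# Mitigation status, matching the Security Details notes (e.g. gather_data_sampling on c).
for f in sorted(Path("/sys/devices/system/cpu/vulnerabilities").iterdir()):
    print(f.name + ":", f.read_text().strip())
```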
tgls - Result Overview: the exported summary table lists every test with its values for configurations a, b, and c. Tests covered: Timed GCC Compilation (build-gcc), Timed Gem5 Compilation (build-gem5), OSPRay Studio, NCNN, OpenVKL, BRL-CAD, QMCPACK, OpenRadioss, Intel Open Image Denoise (oidn), libavif avifenc, VVenC, FFmpeg, CloverLeaf, easyWave, Embree, SVT-AV1, Apache Cassandra, Timed FFmpeg Compilation (build-ffmpeg), Neural Magic DeepSparse, oneDNN, OpenVINO, DaCapo Benchmark (dacapobench), QuantLib, Stress-NG, Cpuminer-Opt, C-Blosc (blosc), and RabbitMQ. The per-test results for a, b, and c are charted individually in the sections that follow.
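When comparing configurations across a spread of tests like this, the usual summary is a geometric mean of per-test ratios, so no single long-running benchmark dominates. An illustrative sketch using three of the "fewer is better" results charted below (Timed GCC Compilation, Timed Gem5 Compilation, and the first OSPRay Studio configuration), comparing configuration c against a:

```python
# Illustrative sketch: geometric mean of c-vs-a ratios for three of the
# "fewer is better" results charted below. Values are copied from those charts.
from math import prod

results = {  # test: (a, c), lower is better
    "Timed GCC Compilation 13.2": (2337.79, 2374.92),
    "Timed Gem5 Compilation 23.0.1": (1296.13, 1309.17),
    "OSPRay Studio 0.13, camera 3, 4K, 32 spp": (1025430, 1119430),
}

ratios = [c / a for a, c in results.values()]
geomean = prod(ratios) ** (1 / len(ratios))
print(f"c takes {geomean:.3f}x the time of a (~{(geomean - 1) * 100:.1f}% slower) on these three tests")
# Expect roughly 1.04x, i.e. about 4% slower, for this small subset.
```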
Timed GCC Compilation 13.2 - Time To Compile (Seconds, fewer is better): a: 2337.79, b: 2342.49, c: 2374.92
Timed Gem5 Compilation 23.0.1 - Time To Compile (Seconds, fewer is better): a: 1296.13, b: 1299.28, c: 1309.17
OSPRay Studio 0.13 - Camera: 3 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU (ms, fewer is better): a: 1025430, b: 1024292, c: 1119430
NCNN 20230517 - Target: CPU - Model: FastestDet (ms, fewer is better): a: 5.05, b: 57.01, c: 64.89. MIN/MAX: 3.5 / 623.64; 3.54 / 903.29. 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
NCNN 20230517 - Target: CPU - Model: vision_transformer (ms, fewer is better): a: 185.96, b: 2056.91, c: 1952.08. MIN/MAX: 381.09 / 3140.23; 384.65 / 2889.65.
NCNN 20230517 - Target: CPU - Model: regnety_400m (ms, fewer is better): a: 10.75, b: 143.48, c: 127.58. MIN/MAX: 7.77 / 1315.68; 7.7 / 964.1.
NCNN 20230517 - Target: CPU - Model: squeezenet_ssd (ms, fewer is better): a: 11.91, b: 174.87, c: 174.48. MIN/MAX: 9.79 / 788.66; 9.81 / 819.34.
NCNN 20230517 - Target: CPU - Model: yolov4-tiny (ms, fewer is better): a: 26.69, b: 354.78, c: 336.66. MIN/MAX: 22.96 / 2085.26; 30.41 / 1419.81.
NCNN 20230517 - Target: CPU - Model: resnet50 (ms, fewer is better): a: 23.94, b: 372.91, c: 349.97. MIN/MAX: 23.69 / 1184.42; 19.77 / 1364.09.
NCNN 20230517 - Target: CPU - Model: alexnet (ms, fewer is better): a: 7.44, b: 119.06, c: 141.51. MIN/MAX: 6.3 / 796.52; 6.25 / 1066.41.
NCNN 20230517 - Target: CPU - Model: resnet18 (ms, fewer is better): a: 9.44, b: 132.13, c: 149.31. MIN/MAX: 7.58 / 744.32; 7.75 / 1201.56.
NCNN 20230517 - Target: CPU - Model: vgg16 (ms, fewer is better): a: 49.76, b: 563.22, c: 534.25. MIN/MAX: 198.43 / 1375.45; 87.21 / 1172.33.
NCNN 20230517 - Target: CPU - Model: googlenet (ms, fewer is better): a: 13.84, b: 196.54, c: 180.35. MIN/MAX: 11 / 1629.29; 10.97 / 1001.7.
NCNN 20230517 - Target: CPU - Model: blazeface (ms, fewer is better): a: 1.29, b: 13.67, c: 17.89. MIN/MAX: 0.9 / 442.99; 0.92 / 528.88.
NCNN 20230517 - Target: CPU - Model: efficientnet-b0 (ms, fewer is better): a: 7.17, b: 88.38, c: 91.44. MIN/MAX: 5.49 / 973.69; 5.45 / 732.45.
NCNN 20230517 - Target: CPU - Model: mnasnet (ms, fewer is better): a: 3.77, b: 59.72, c: 63.80. MIN/MAX: 3.62 / 740.07; 3.6 / 607.47.
NCNN 20230517 - Target: CPU - Model: shufflenet-v2 (ms, fewer is better): a: 3.45, b: 55.66, c: 59.68. MIN/MAX: 3.29 / 1450.73; 3.23 / 851.01.
NCNN 20230517 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better): a: 3.50, b: 54.67, c: 56.24. MIN/MAX: 3.37 / 1059.45; 3.38 / 450.47.
NCNN 20230517 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better): a: 4.42, b: 76.64, c: 71.04. MIN/MAX: 4.27 / 801.15; 4.41 / 684.5.
NCNN 20230517 - Target: CPU - Model: mobilenet (ms, fewer is better): a: 18.82, b: 273.80, c: 265.17. MIN/MAX: 22.92 / 1358.79; 23.2 / 1294.55.
NCNN 20230517 - Target: Vulkan GPU - Model: FastestDet (ms, fewer is better): a: 4.83, b: 62.98, c: 55.01. MIN/MAX: 3.49 / 651.5; 3.49 / 519.61.
NCNN 20230517 - Target: Vulkan GPU - Model: vision_transformer (ms, fewer is better): a: 182.09, b: 2069.02, c: 1933.05. MIN/MAX: 1126.73 / 3158.74; 513.75 / 3020.65.
NCNN 20230517 - Target: Vulkan GPU - Model: regnety_400m (ms, fewer is better): a: 8.05, b: 132.95, c: 133.77. MIN/MAX: 7.8 / 1665.01; 7.72 / 816.16.
NCNN 20230517 - Target: Vulkan GPU - Model: squeezenet_ssd (ms, fewer is better): a: 11.90, b: 177.20, c: 164.33. MIN/MAX: 9.87 / 889.9; 9.8 / 1016.83.
NCNN 20230517 - Target: Vulkan GPU - Model: yolov4-tiny (ms, fewer is better): a: 26.68, b: 347.17, c: 341.71. MIN/MAX: 22.81 / 1008.19; 29 / 1084.18.
NCNN 20230517 - Target: Vulkan GPU - Model: resnet50 (ms, fewer is better): a: 23.98, b: 354.34, c: 328.87. MIN/MAX: 30.09 / 1285.23; 30.98 / 1188.71.
NCNN 20230517 - Target: Vulkan GPU - Model: alexnet (ms, fewer is better): a: 6.81, b: 122.08, c: 124.98. MIN/MAX: 6.21 / 909.4; 6.26 / 828.69.
NCNN 20230517 - Target: Vulkan GPU - Model: resnet18 (ms, fewer is better): a: 8.36, b: 130.84, c: 134.78. MIN/MAX: 7.64 / 808.51; 7.75 / 881.26.
NCNN 20230517 - Target: Vulkan GPU - Model: vgg16 (ms, fewer is better): a: 47.81, b: 542.46, c: 541.65. MIN/MAX: 171.77 / 1341.09; 93.35 / 1265.69.
NCNN 20230517 - Target: Vulkan GPU - Model: googlenet (ms, fewer is better): a: 11.36, b: 184.62, c: 178.40. MIN/MAX: 10.94 / 775.13; 10.91 / 877.14.
NCNN 20230517 - Target: Vulkan GPU - Model: blazeface (ms, fewer is better): a: 0.92, b: 21.67, c: 15.59. MIN/MAX: 0.89 / 816.69; 0.92 / 443.62.
NCNN 20230517 - Target: Vulkan GPU - Model: efficientnet-b0 (ms, fewer is better): a: 5.61, b: 86.30, c: 97.95. MIN/MAX: 5.43 / 937.59; 5.43 / 957.03.
NCNN 20230517 - Target: Vulkan GPU - Model: mnasnet (ms, fewer is better): a: 3.77, b: 62.71, c: 63.33. MIN/MAX: 3.61 / 584.51; 3.55 / 651.17.
NCNN 20230517 - Target: Vulkan GPU - Model: shufflenet-v2 (ms, fewer is better): a: 3.40, b: 54.38, c: 56.62. MIN/MAX: 3.2 / 675.09; 3.26 / 747.31.
NCNN 20230517 - Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better): a: 3.48, b: 66.64, c: 54.85. MIN/MAX: 3.32 / 878.52; 3.5 / 559.94.
NCNN 20230517 - Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better): a: 4.39, b: 75.30, c: 66.65. MIN/MAX: 4.37 / 730.97; 4.26 / 485.03.
NCNN 20230517 - Target: Vulkan GPU - Model: mobilenet (ms, fewer is better): a: 18.97, b: 282.29, c: 266.51. MIN/MAX: 15.13 / 1073.47; 16.56 / 1096.05.
OSPRay Studio 0.13 - Camera: 2 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU (ms, fewer is better): a: 869726, b: 870122, c: 951381
OSPRay Studio 0.13 - Camera: 1 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU (ms, fewer is better): a: 858149, b: 856610, c: 943953
OpenVKL 2.0.0 - Benchmark: vklBenchmarkCPU ISPC (Items / Sec, more is better): a: 124, b: 124, c: 105. MIN/MAX: 8 / 1979; 8 / 1977; 7 / 1847.
BRL-CAD 7.36 - VGR Performance Metric (VGR Performance Metric, more is better): a: 54646, b: 54938, c: 53711. 1. (CXX) g++ options: -std=c++14 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -ltcl8.6 -lregex_brl -lz_brl -lnetpbm -ldl -lm -ltk8.6
QMCPACK 3.17.1 - Input: Li2_STO_ae (Total Execution Time - Seconds, fewer is better): a: 687.22, b: 653.08, c: 675.81. 1. (CXX) g++ options: -fopenmp -foffload=disable -finline-limit=1000 -fstrict-aliasing -funroll-all-loops -ffast-math -march=native -O3 -lm -ldl
OpenRadioss 2023.09.15 - Model: Bird Strike on Windshield (Seconds, fewer is better): a: 653.24, b: 650.51, c: 657.96
OpenVKL 2.0.0 - Benchmark: vklBenchmarkCPU Scalar (Items / Sec, more is better): a: 41, b: 40, c: 37. MIN/MAX: 3 / 708; 3 / 653; 3 / 629.
OSPRay Studio 0.13 - Camera: 3 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU (ms, fewer is better): a: 514920, b: 514145, c: 561556
OpenRadioss 2023.09.15 - Model: Rubber O-Ring Seal Installation (Seconds, fewer is better): a: 482.80, b: 479.06, c: 480.70
OSPRay Studio 0.13 - Camera: 2 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU (ms, fewer is better): a: 436532, b: 438395, c: 476659
OSPRay Studio 0.13 - Camera: 1 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU (ms, fewer is better): a: 429631, b: 429276, c: 470269
Intel Open Image Denoise 2.1 - Run: RTLightmap.hdr.4096x4096 - Device: CPU-Only (Images / Sec, more is better): a: 0.07, b: 0.07, c: 0.07
libavif avifenc 1.0 - Encoder Speed: 0 (Seconds, fewer is better): a: 381.27, b: 383.21, c: 392.47. 1. (CXX) g++ options: -O3 -fPIC -lm
VVenC 1.9 - Video Input: Bosphorus 4K - Video Preset: Fast (Frames Per Second, more is better): a: 1.738, b: 1.733, c: 1.679. 1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto
FFmpeg 6.1 - Encoder: libx265 - Scenario: Video On Demand (FPS, more is better): a: 26.42, b: 26.43, c: 25.88. 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
FFmpeg 6.1 - Encoder: libx265 - Scenario: Platform (FPS, more is better): a: 26.42, b: 26.40, c: 25.93
FFmpeg 6.1 - Encoder: libx265 - Scenario: Upload (FPS, more is better): a: 12.63, b: 12.69, c: 12.76
OSPRay Studio 0.13 - Camera: 3 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU (ms, fewer is better): a: 260252, b: 261073, c: 285007
FFmpeg 6.1 - Encoder: libx264 - Scenario: Upload (FPS, more is better): a: 13.29, b: 13.26, c: 13.11
OpenRadioss 2023.09.15 - Model: Cell Phone Drop Test (Seconds, fewer is better): a: 245.14, b: 243.36, c: 241.64
OSPRay Studio 0.13 - Camera: 2 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU (ms, fewer is better): a: 221087, b: 222412, c: 242527
OSPRay Studio 0.13 - Camera: 1 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU (ms, fewer is better): a: 217399, b: 218393, c: 239831
QMCPACK 3.17.1 - Input: O_ae_pyscf_UHF (Total Execution Time - Seconds, fewer is better): a: 222.05, b: 222.61, c: 239.07
CloverLeaf 1.3 - Input: clover_bm64_short (Seconds, fewer is better): a: 206.97, b: 203.68, c: 203.49. 1. (F9X) gfortran options: -O3 -march=native -funroll-loops -fopenmp
Intel Open Image Denoise 2.1 - Run: RT.hdr_alb_nrm.3840x2160 - Device: CPU-Only (Images / Sec, more is better): a: 0.15, b: 0.15, c: 0.15
Intel Open Image Denoise 2.1 - Run: RT.ldr_alb_nrm.3840x2160 - Device: CPU-Only (Images / Sec, more is better): a: 0.15, b: 0.15, c: 0.15
easyWave r34 - Input: e2Asean Grid + BengkuluSept2007 Source - Time: 1200 (Seconds, fewer is better): a: 189.03, b: 188.68, c: 188.52. 1. (CXX) g++ options: -O3 -fopenmp
Embree 4.3 - Binary: Pathtracer - Model: Asian Dragon Obj (Frames Per Second, more is better): a: 4.4332, b: 4.4038, c: 4.4308. MIN/MAX: 4.41 / 4.52; 4.38 / 4.47; 4.4 / 4.51.
Embree 4.3 - Binary: Pathtracer - Model: Crown (Frames Per Second, more is better): a: 4.1966, b: 4.2047, c: 4.2112. MIN/MAX: 4.17 / 4.26; 4.18 / 4.26; 4.19 / 4.27.
FFmpeg 6.1 - Encoder: libx264 - Scenario: Video On Demand (FPS, more is better): a: 49.70, b: 50.01, c: 49.79
FFmpeg 6.1 - Encoder: libx264 - Scenario: Platform (FPS, more is better): a: 50.35, b: 51.03, c: 49.38
libavif avifenc 1.0 - Encoder Speed: 2 (Seconds, fewer is better): a: 168.17, b: 168.56, c: 172.81
VVenC 1.9 - Video Input: Bosphorus 4K - Video Preset: Faster (Frames Per Second, more is better): a: 3.843, b: 3.858, c: 3.711
Embree 4.3 - Binary: Pathtracer ISPC - Model: Asian Dragon Obj (Frames Per Second, more is better): a: 5.2742, b: 5.2679, c: 5.0433. MIN/MAX: 5.25 / 5.34; 5.24 / 5.33; 5.01 / 5.11.
Embree 4.3 - Binary: Pathtracer ISPC - Model: Crown (Frames Per Second, more is better): a: 4.9067, b: 4.9376, c: 4.6745. MIN/MAX: 4.86 / 5; 4.9 / 5.01; 4.64 / 4.75.
Embree 4.3 - Binary: Pathtracer - Model: Asian Dragon (Frames Per Second, more is better): a: 4.8695, b: 4.8785, c: 4.8776. MIN/MAX: 4.85 / 4.91; 4.86 / 4.92; 4.86 / 4.94.
OSPRay Studio 0.13 - Camera: 3 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU (ms, fewer is better): a: 132433, b: 132979, c: 145066
SVT-AV1 1.7 - Encoder Mode: Preset 4 - Input: Bosphorus 4K (Frames Per Second, more is better): a: 1.209, b: 1.211, c: 1.175. 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
Apache Cassandra 4.1.3 - Test: Writes (Op/s, more is better): a: 43984, b: 37649, c: 37916
Timed FFmpeg Compilation 6.1 - Time To Compile (Seconds, fewer is better): a: 128.96, b: 128.75, c: 130.59
OSPRay Studio 0.13 - Camera: 2 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU (ms, fewer is better): a: 112613, b: 113201, c: 123787
OSPRay Studio 0.13 - Camera: 1 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU (ms, fewer is better): a: 111136, b: 111078, c: 122118
Embree 4.3 - Binary: Pathtracer ISPC - Model: Asian Dragon (Frames Per Second, more is better): a: 6.1486, b: 6.1218, c: 5.8890. MIN/MAX: 6.11 / 6.24; 6.09 / 6.21; 5.86 / 5.95.
QMCPACK 3.17.1 - Input: FeCO6_b3lyp_gms (Total Execution Time - Seconds, fewer is better): a: 110.52, b: 110.13, c: 141.41
CloverLeaf 1.3 - Input: clover_bm (Seconds, fewer is better): a: 117.10, b: 116.20, c: 117.62
Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): a: 469.68, b: 466.25, c: 485.92
Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream (items/sec, more is better): a: 4.2577, b: 4.2784, c: 4.1156
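Each DeepSparse asynchronous multi-stream workload is reported twice: as latency in ms/batch (previous chart) and as throughput in items/sec (this chart). Multiplying the two is a quick consistency check, since the product approximates the number of batches in flight. A small sketch using the BERT-Large numbers above; the reading that two streams were in flight is my own inference from the arithmetic, not something recorded in the result file:

```python
# Sanity check: latency (ms/batch) x throughput (items/sec) should be roughly
# constant and equal to the number of concurrent streams in an async
# multi-stream run. Values copied from the two BERT-Large charts above;
# the "two streams" interpretation is an inference, not recorded in the results.
latency_ms = {"a": 469.68, "b": 466.25, "c": 485.92}   # ms/batch, fewer is better
throughput = {"a": 4.2577, "b": 4.2784, "c": 4.1156}   # items/sec, more is better

for cfg in ("a", "b", "c"):
    in_flight = throughput[cfg] * latency_ms[cfg] / 1000.0  # items/sec * sec/batch
    print(f"{cfg}: ~{in_flight:.2f} items in flight")       # prints ~2.00 for each config
```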
VVenC 1.9 - Video Input: Bosphorus 1080p - Video Preset: Fast (Frames Per Second, more is better): a: 5.428, b: 5.415, c: 5.343
oneDNN 3.3 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): a: 7179.51, b: 7178.23, c: 7176.24. MIN: 7137.59; 7137.21; 7139.04. 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
oneDNN 3.3 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): a: 7189.12, b: 7178.37, c: 7178.94. MIN: 7157.19; 7134.3; 7128.67.
oneDNN 3.3 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, fewer is better): a: 7128.22, b: 7138.73, c: 7174.36. MIN: 7074.03; 7092.66; 7126.73.
OSPRay Studio 0.13 - Camera: 3 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU (ms, fewer is better): a: 7982, b: 7987, c: 8722
OSPRay Studio 0.13 - Camera: 2 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU (ms, fewer is better): a: 6768, b: 6786, c: 7425
OSPRay Studio 0.13 - Camera: 1 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU (ms, fewer is better): a: 6673, b: 6704, c: 7343
oneDNN 3.3 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, fewer is better): a: 3701.08, b: 3699.05, c: 3703.50. MIN: 3672.95; 3670.29; 3669.47.
FFmpeg 6.1 - Encoder: libx265 - Scenario: Live (FPS, more is better): a: 79.05, b: 79.50, c: 79.15
oneDNN 3.3 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): a: 3703.86, b: 3697.25, c: 3699.85. MIN: 3673.82; 3666.72; 3668.59.
oneDNN 3.3 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): a: 3671.69, b: 3680.74, c: 3680.01. MIN: 3629.79; 3643.32; 3636.2.
QMCPACK 3.17.1 - Input: LiH_ae_MSD (Total Execution Time - Seconds, fewer is better): a: 83.22, b: 83.17, c: 87.02
OSPRay Studio 0.13 - Camera: 1 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU (ms, fewer is better): a: 26473, b: 26522, c: 33836
QMCPACK 3.17.1 - Input: H4_ae (Total Execution Time - Seconds, fewer is better): a: 76.19, b: 69.78, c: 75.96
OpenVINO 2023.2.dev - Model: Face Detection FP16 - Device: CPU (ms, fewer is better): a: 3600.29, b: 3719.15, c: 3922.05. MIN/MAX: 3227.71 / 3788.77; 2883.49 / 4105.58; 2940.15 / 4178.87. 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenVINO 2023.2.dev - Model: Face Detection FP16 - Device: CPU (FPS, more is better): a: 1.11, b: 1.07, c: 1.01
DaCapo Benchmark 23.11 - Java Test: Apache Cassandra (msec, fewer is better): a: 7845, b: 7797, c: 7848
OpenVINO 2023.2.dev - Model: Face Detection FP16-INT8 - Device: CPU (ms, fewer is better): a: 985.39, b: 1059.61, c: 1061.06. MIN/MAX: 643.01 / 1010.66; 796.92 / 1223.56; 813.72 / 1252.69.
OpenVINO 2023.2.dev - Model: Face Detection FP16-INT8 - Device: CPU (FPS, more is better): a: 4.05, b: 3.75, c: 3.75
OpenVINO 2023.2.dev - Model: Machine Translation EN To DE FP16 - Device: CPU (ms, fewer is better): a: 287.28, b: 312.41, c: 312.33. MIN/MAX: 200.6 / 301.65; 250.11 / 403.55; 237.04 / 398.05.
OpenVINO 2023.2.dev - Model: Machine Translation EN To DE FP16 - Device: CPU (FPS, more is better): a: 13.90, b: 12.79, c: 12.79
OpenVINO 2023.2.dev - Model: Person Detection FP32 - Device: CPU (ms, fewer is better): a: 344.94, b: 370.73, c: 372.34. MIN/MAX: 331.68 / 379.01; 305.49 / 506.4; 298.79 / 478.39.
OpenVINO 2023.2.dev - Model: Person Detection FP32 - Device: CPU (FPS, more is better): a: 11.59, b: 10.78, c: 10.72
OpenVINO 2023.2.dev - Model: Person Detection FP16 - Device: CPU (ms, fewer is better): a: 346.37, b: 369.62, c: 370.61. MIN/MAX: 328.56 / 384.23; 213.11 / 517.81; 200.89 / 476.06.
OpenVINO 2023.2.dev - Model: Person Detection FP16 - Device: CPU (FPS, more is better): a: 11.52, b: 10.81, c: 10.77
OpenVINO 2023.2.dev - Model: Road Segmentation ADAS FP16-INT8 - Device: CPU (ms, fewer is better): a: 49.36, b: 52.86, c: 52.94. MIN/MAX: 35.42 / 65.52; 37.97 / 96.18; 37.37 / 105.98.
OpenVINO 2023.2.dev - Model: Road Segmentation ADAS FP16-INT8 - Device: CPU (FPS, more is better): a: 81.00, b: 75.63, c: 75.52
OpenVINO 2023.2.dev - Model: Person Vehicle Bike Detection FP16 - Device: CPU (ms, fewer is better): a: 27.41, b: 29.13, c: 29.26. MIN/MAX: 19.48 / 43.38; 17.21 / 62; 17.66 / 65.48.
OpenVINO 2023.2.dev - Model: Person Vehicle Bike Detection FP16 - Device: CPU (FPS, more is better): a: 145.83, b: 137.20, c: 136.61
OpenVINO 2023.2.dev - Model: Road Segmentation ADAS FP16 - Device: CPU (ms, fewer is better): a: 97.39, b: 103.02, c: 103.24. MIN/MAX: 89.34 / 113.31; 73.24 / 176.26; 72.38 / 172.52.
OpenVINO 2023.2.dev - Model: Road Segmentation ADAS FP16 - Device: CPU (FPS, more is better): a: 41.06, b: 38.82, c: 38.72
OpenVINO 2023.2.dev - Model: Handwritten English Recognition FP16-INT8 - Device: CPU (ms, fewer is better): a: 65.40, b: 73.40, c: 72.42. MIN/MAX: 52.18 / 86.91; 40.77 / 125.7; 40.47 / 145.92.
OpenVINO 2023.2.dev - Model: Handwritten English Recognition FP16-INT8 - Device: CPU (FPS, more is better): a: 61.12, b: 54.46, c: 55.20
OpenVINO 2023.2.dev - Model: Handwritten English Recognition FP16 - Device: CPU (ms, fewer is better): a: 75.35, b: 83.53, c: 83.82. MIN/MAX: 54.28 / 95.91; 51.8 / 139.82; 41.32 / 136.87.
OpenVINO 2023.2.dev - Model: Handwritten English Recognition FP16 - Device: CPU (FPS, more is better): a: 53.07, b: 47.86, c: 47.69
OpenVINO 2023.2.dev - Model: Vehicle Detection FP16-INT8 - Device: CPU (ms, fewer is better): a: 17.97, b: 18.93, c: 18.88. MIN/MAX: 13.13 / 31.2; 10.24 / 58.75; 10.24 / 56.85.
OpenVINO 2023.2.dev - Model: Vehicle Detection FP16-INT8 - Device: CPU (FPS, more is better): a: 222.27, b: 211.09, c: 211.63
OpenVINO 2023.2.dev - Model: Face Detection Retail FP16-INT8 - Device: CPU (ms, fewer is better): a: 6.04, b: 6.59, c: 6.56. MIN/MAX: 3.47 / 20.67; 3.81 / 34.71; 3.98 / 36.21.
OpenVINO 2023.2.dev - Model: Face Detection Retail FP16-INT8 - Device: CPU (FPS, more is better): a: 660.54, b: 606.16, c: 608.75
OpenVINO 2023.2.dev - Model: Vehicle Detection FP16 - Device: CPU (ms, fewer is better): a: 48.04, b: 51.72, c: 51.79. MIN/MAX: 39.51 / 63.6; 25.15 / 115.97; 27.85 / 114.6.
OpenVINO 2023.2.dev - Model: Vehicle Detection FP16 - Device: CPU (FPS, more is better): a: 83.21, b: 77.29, c: 77.19
OpenVINO 2023.2.dev - Model: Weld Porosity Detection FP16 - Device: CPU (ms, fewer is better): a: 37.89, b: 41.18, c: 41.07. MIN/MAX: 19.79 / 50.35; 21.43 / 94.08; 22.76 / 99.47.
OpenVINO 2023.2.dev - Model: Weld Porosity Detection FP16 - Device: CPU (FPS, more is better): a: 105.49, b: 97.09, c: 97.33
OpenVINO 2023.2.dev - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (ms, fewer is better): a: 10.38, b: 10.91, c: 10.86. MIN/MAX: 5.52 / 22.1; 5.56 / 40.59; 5.41 / 42.52.
OpenVINO 2023.2.dev - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (FPS, more is better): a: 384.36, b: 366.17, c: 368.00
OpenVINO 2023.2.dev - Model: Face Detection Retail FP16 - Device: CPU (ms, fewer is better): a: 14.73, b: 15.29, c: 15.17. MIN/MAX: 7.6 / 27.51; 7.76 / 45.51; 7.75 / 46.13.
OpenVINO 2023.2.dev - Model: Face Detection Retail FP16 - Device: CPU (FPS, more is better): a: 271.08, b: 261.21, c: 263.43
OpenVINO Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.2.dev Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU a b c 0.1193 0.2386 0.3579 0.4772 0.5965 0.47 0.53 0.53 MIN: 0.3 / MAX: 16.79 MIN: 0.28 / MAX: 30.87 MIN: 0.29 / MAX: 32.04 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenVINO Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU a b c 2K 4K 6K 8K 10K 8360.99 7457.91 7460.73 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenVINO Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.2.dev Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU a b c 0.3218 0.6436 0.9654 1.2872 1.609 1.37 1.40 1.43 MIN: 0.79 / MAX: 14.03 MIN: 0.73 / MAX: 32.28 MIN: 0.72 / MAX: 46.48 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenVINO Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU a b c 600 1200 1800 2400 3000 2874.25 2834.17 2763.40 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
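Each OpenVINO entry above pairs an average inference latency (ms) with the corresponding throughput (FPS) for one pre-trained model on the CPU device. As a rough illustration of how such a pair can be measured with the OpenVINO Python API, here is a minimal sketch; the model path, input shape, and iteration count are placeholder assumptions and are not taken from the test profile.

    # Hedged sketch: time synchronous CPU inference with the OpenVINO Python API.
    # "model.xml" and the 1x3x224x224 random input are placeholders for
    # illustration; they are not the models or shapes used by the test profile.
    import time
    import numpy as np
    from openvino.runtime import Core

    core = Core()
    model = core.read_model("model.xml")          # hypothetical IR model path
    compiled = core.compile_model(model, "CPU")
    request = compiled.create_infer_request()

    data = np.random.rand(1, 3, 224, 224).astype(np.float32)

    latencies_ms = []
    for _ in range(100):
        start = time.perf_counter()
        request.infer({0: data})                  # key 0 = first model input
        latencies_ms.append((time.perf_counter() - start) * 1000.0)

    mean_ms = sum(latencies_ms) / len(latencies_ms)
    print(f"mean latency: {mean_ms:.2f} ms, throughput: {1000.0 / mean_ms:.2f} FPS")

This synchronous loop only approximates the batch-1 case; the asynchronous, multi-request setup used for throughput-oriented runs will report higher FPS than 1000 divided by the mean latency.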
Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): a: 29.05, b: 27.95, c: 35.13
Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec, more is better): a: 68.74, b: 71.43, c: 56.86
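The DeepSparse entries report batch latency (ms/batch) and throughput (items/sec) for sparse and quantized transformer models executed on the CPU in an asynchronous multi-stream scenario. The sketch below shows, in hedged form, how such a model can be loaded through DeepSparse's Pipeline API; the task name and the SparseZoo stub are placeholders rather than the exact model benchmarked here, and output fields may differ across DeepSparse releases.

    # Hedged sketch: run a sparse INT8 question-answering model with DeepSparse.
    # The "zoo:..." stub is a placeholder; substitute a real SparseZoo stub or a
    # local ONNX model path.
    from deepsparse import Pipeline

    qa = Pipeline.create(
        task="question_answering",
        model_path="zoo:example/sparse-quantized-bert-stub",  # placeholder
    )
    out = qa(
        question="What engine runs the model?",
        context="DeepSparse executes sparse INT8 transformers on the CPU.",
    )
    print(out)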
QuantLib 1.32 - Configuration: Multi-Threaded (MFLOPS, more is better): a: 13707.0, b: 13689.3, c: 13287.0
Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): a: 12.14, b: 12.66, c: 14.25
Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec, more is better): a: 164.34, b: 157.58, c: 140.06
Neural Magic DeepSparse 1.5 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): a: 590.17, b: 560.72, c: 590.37
Neural Magic DeepSparse 1.5 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (items/sec, more is better): a: 3.3887, b: 3.5666, c: 3.3874
Neural Magic DeepSparse 1.5 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): a: 292.80, b: 296.40, c: 301.40
Neural Magic DeepSparse 1.5 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream (items/sec, more is better): a: 6.8296, b: 6.7467, c: 6.6347
FFmpeg 6.1 - Encoder: libx264 - Scenario: Live (FPS, more is better): a: 189.77, b: 185.32, c: 177.32
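The FFmpeg "Live" scenario measures how quickly libx264 can encode relative to real time. A rough way to make a comparable measurement outside the test suite is to time an encode to the null muxer, as sketched below; the input filename and preset are placeholders and not the exact arguments the test profile passes to ffmpeg.

    # Hedged sketch: time a libx264 encode with FFmpeg to the null muxer.
    # "input.y4m" and the preset are placeholder choices.
    import subprocess, time

    cmd = [
        "ffmpeg", "-y", "-i", "input.y4m",
        "-c:v", "libx264", "-preset", "veryfast",
        "-f", "null", "-",
    ]
    start = time.perf_counter()
    subprocess.run(cmd, check=True, capture_output=True)
    print(f"encode wall time: {time.perf_counter() - start:.2f} s")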
Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): a: 136.21, b: 145.16, c: 136.01
Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (items/sec, more is better): a: 14.68, b: 13.77, c: 14.70
Neural Magic DeepSparse 1.5 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): a: 501.90, b: 504.86, c: 514.94
Neural Magic DeepSparse 1.5 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (items/sec, more is better): a: 3.9794, b: 3.9535, c: 3.8837
DaCapo Benchmark 23.11 - Java Test: Eclipse (msec, fewer is better): a: 14549, b: 14537, c: 14307
DaCapo Benchmark 23.11 - Java Test: Apache Tomcat (msec, fewer is better): a: 14326, b: 14327, c: 14456
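The DaCapo figures are wall-clock completion times reported by the harness for each Java workload. Outside the test suite the harness is normally driven straight from its jar, roughly as below; the jar filename and the iteration count are assumptions for illustration, and the lower-case workload names follow the harness's own naming.

    # Hedged sketch: run one DaCapo workload and let the harness print its
    # "PASSED in ... msec" timing. Jar name and -n count are assumed values.
    import subprocess

    subprocess.run(
        ["java", "-jar", "dacapo-23.11-chopin.jar", "-n", "3", "eclipse"],
        check=True,
    )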
VVenC 1.9 - Video Input: Bosphorus 1080p - Video Preset: Faster (Frames Per Second, more is better): a: 13.20, b: 13.21, c: 13.17
Neural Magic DeepSparse 1.5 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): a: 121.50, b: 122.59, c: 124.18
Neural Magic DeepSparse 1.5 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (items/sec, more is better): a: 16.44, b: 16.31, c: 16.08
OSPRay Studio 0.13 - Camera: 3 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU (ms, fewer is better): a: 33081, b: 33073, c: 39043
Neural Magic DeepSparse 1.5 - Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): a: 31.29, b: 31.27, c: 32.75
Neural Magic DeepSparse 1.5 - Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream (items/sec, more is better): a: 63.85, b: 63.89, c: 61.01
DaCapo Benchmark 23.11 - Java Test: H2 Database Engine (msec, fewer is better): a: 5250, b: 5440, c: 5419
OSPRay Studio 0.13 - Camera: 2 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU (ms, fewer is better): a: 31575, b: 31507, c: 31324
SVT-AV1 1.7 - Encoder Mode: Preset 8 - Input: Bosphorus 4K (Frames Per Second, more is better): a: 14.86, b: 14.84, c: 14.76
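The SVT-AV1 results report encoder frame rate for the Bosphorus clips at each preset. A comparable standalone run drives the SvtAv1EncApp binary that the benchmark builds, roughly as sketched below; the input file is a placeholder, and only the encoder's documented --preset, -i, and -b options are assumed here, not the full flag set the test profile uses.

    # Hedged sketch: invoke the SVT-AV1 reference encoder app directly.
    # "Bosphorus_1920x1080.y4m" is a placeholder input clip.
    import subprocess

    subprocess.run(
        [
            "SvtAv1EncApp",
            "--preset", "8",
            "-i", "Bosphorus_1920x1080.y4m",
            "-b", "output.ivf",
        ],
        check=True,
    )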
Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): a: 72.27, b: 66.28, c: 71.47
Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (items/sec, more is better): a: 27.66, b: 30.16, c: 27.97
Neural Magic DeepSparse 1.5 - Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): a: 36.73, b: 36.80, c: 37.01
Neural Magic DeepSparse 1.5 - Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream (items/sec, more is better): a: 54.41, b: 54.31, c: 53.99
Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): a: 86.90, b: 92.70, c: 94.63
Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec, more is better): a: 23.01, b: 21.57, c: 21.13
Neural Magic DeepSparse 1.5 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): a: 37.08, b: 37.15, c: 37.24
Neural Magic DeepSparse 1.5 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (items/sec, more is better): a: 53.89, b: 53.77, c: 53.66
Neural Magic DeepSparse 1.5 - Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): a: 4.6989, b: 5.0779, c: 5.4230
Neural Magic DeepSparse 1.5 - Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec, more is better): a: 423.50, b: 391.97, c: 367.11
Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): a: 93.65, b: 88.48, c: 92.45
Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (items/sec, more is better): a: 21.35, b: 22.59, c: 21.62
SVT-AV1 1.7 - Encoder Mode: Preset 4 - Input: Bosphorus 1080p (Frames Per Second, more is better): a: 4.593, b: 4.801, c: 4.250
Stress-NG 0.16.04 - Test: IO_uring (Bogo Ops/s, more is better): a: 335053.51, b: 350882.75, c: 335942.42
DaCapo Benchmark 23.11 - Java Test: Apache Lucene Search Index (msec, fewer is better): a: 3397, b: 3655, c: 3541
DaCapo Benchmark 23.11 - Java Test: Tradebeans (msec, fewer is better): a: 7683, b: 7183, c: 10909
QMCPACK 3.17.1 - Input: simple-H2O (Total Execution Time in Seconds, fewer is better): a: 31.22, b: 31.76, c: 31.75
Stress-NG 0.16.04 - Test: MMAP (Bogo Ops/s, more is better): a: 69.50, b: 69.31, c: 36.33
Stress-NG 0.16.04 - Test: Cloning (Bogo Ops/s, more is better): a: 773.96, b: 801.63, c: 773.35
Stress-NG 0.16.04 - Test: Malloc (Bogo Ops/s, more is better): a: 1275453.62, b: 1281240.89, c: 1028677.94
Stress-NG 0.16.04 - Test: MEMFD (Bogo Ops/s, more is better): a: 151.83, b: 163.56, c: 155.80
Cpuminer-Opt 23.5 - Algorithm: Garlicoin (kH/s, more is better): a: 530.61, b: 451.17, c: 456.13
Stress-NG 0.16.04 - Test: x86_64 RdRand (Bogo Ops/s, more is better): a: 1190.72, b: 1175.31, c: 1107.03
Stress-NG 0.16.04 - Test: Zlib (Bogo Ops/s, more is better): a: 339.75, b: 328.60, c: 327.19
Cpuminer-Opt 23.5 - Algorithm: Quad SHA-256, Pyrite (kH/s, more is better): a: 8552.11, b: 7883.39, c: 7888.21
Stress-NG 0.16.04 - Test: NUMA (Bogo Ops/s, more is better): a: 91.38, b: 90.40, c: 67.68
Stress-NG 0.16.04 - Test: Atomic (Bogo Ops/s, more is better): a: 297.71, b: 298.85, c: 325.29
Stress-NG 0.16.04 - Test: Floating Point (Bogo Ops/s, more is better): a: 1644.27, b: 1377.23, c: 1350.58
Stress-NG 0.16.04 - Test: Function Call (Bogo Ops/s, more is better): a: 2533.58, b: 2144.47, c: 2126.84
Stress-NG 0.16.04 - Test: Pthread (Bogo Ops/s, more is better): a: 87060.44, b: 83444.87, c: 77712.52
Stress-NG 0.16.04 - Test: Hash (Bogo Ops/s, more is better): a: 814204.59, b: 761130.78, c: 808351.87
Stress-NG 0.16.04 - Test: Futex (Bogo Ops/s, more is better): a: 1556806.04, b: 1573811.12, c: 1409614.10
Stress-NG 0.16.04 - Test: SENDFILE (Bogo Ops/s, more is better): a: 74606.13, b: 58965.05, c: 60163.92
Stress-NG 0.16.04 - Test: Vector Floating Point (Bogo Ops/s, more is better): a: 11875.81, b: 9743.95, c: 8453.72
Stress-NG 0.16.04 - Test: Fused Multiply-Add (Bogo Ops/s, more is better): a: 4206478.30, b: 3753786.13, c: 4280472.18
Stress-NG 0.16.04 - Test: Memory Copying (Bogo Ops/s, more is better): a: 1450.68, b: 1175.93, c: 1179.30
Stress-NG 0.16.04 - Test: Matrix 3D Math (Bogo Ops/s, more is better): a: 1503.69, b: 1497.30, c: 1479.59
Stress-NG 0.16.04 - Test: Mixed Scheduler (Bogo Ops/s, more is better): a: 3933.30, b: 3841.89, c: 3666.05
Stress-NG 0.16.04 - Test: AVL Tree (Bogo Ops/s, more is better): a: 38.00, b: 38.26, c: 38.60
Stress-NG 0.16.04 - Test: Forking (Bogo Ops/s, more is better): a: 19571.06, b: 17963.97, c: 15226.03
Stress-NG 0.16.04 - Test: Crypto (Bogo Ops/s, more is better): a: 7870.33, b: 7909.16, c: 7843.28
Stress-NG 0.16.04 - Test: Mutex (Bogo Ops/s, more is better): a: 1800239.74, b: 1732350.97, c: 1454355.06
Stress-NG 0.16.04 - Test: Poll (Bogo Ops/s, more is better): a: 583755.41, b: 577995.91, c: 577081.19
Stress-NG 0.16.04 - Test: Pipe (Bogo Ops/s, more is better): a: 3286500.94, b: 2941192.56, c: 2989197.58
Stress-NG 0.16.04 - Test: Matrix Math (Bogo Ops/s, more is better): a: 19810.07, b: 18411.16, c: 16983.95
Stress-NG 0.16.04 - Test: CPU Cache (Bogo Ops/s, more is better): a: 788775.98, b: 614301.31, c: 541147.88
Stress-NG 0.16.04 - Test: System V Message Passing (Bogo Ops/s, more is better): a: 6382311.82, b: 5221036.03, c: 4574104.46
Stress-NG 0.16.04 - Test: Glibc Qsort Data Sorting (Bogo Ops/s, more is better): a: 104.16, b: 85.03, c: 85.43
Stress-NG 0.16.04 - Test: Glibc C String Functions (Bogo Ops/s, more is better): a: 3597888.50, b: 3169430.98, c: 2687720.74
Stress-NG 0.16.04 - Test: Context Switching (Bogo Ops/s, more is better): a: 1325769.49, b: 1291162.21, c: 1089870.73
Stress-NG 0.16.04 - Test: Wide Vector Math (Bogo Ops/s, more is better): a: 259593.94, b: 230331.68, c: 230559.53
Stress-NG 0.16.04 - Test: Socket Activity (Bogo Ops/s, more is better): a: 6702.46, b: 4404.68, c: 5819.48
Stress-NG 0.16.04 - Test: Vector Shuffle (Bogo Ops/s, more is better): a: 40150.80, b: 34114.33, c: 33753.34
Stress-NG 0.16.04 - Test: AVX-512 VNNI (Bogo Ops/s, more is better): a: 575521.32, b: 467858.43, c: 497416.91
Stress-NG 0.16.04 - Test: Vector Math (Bogo Ops/s, more is better): a: 19001.52, b: 15774.03, c: 15404.09
Stress-NG 0.16.04 - Test: Semaphores (Bogo Ops/s, more is better): a: 8568428.80, b: 7131559.06, c: 9046086.62
Stress-NG 0.16.04 - Test: CPU Stress (Bogo Ops/s, more is better): a: 8190.18, b: 7691.29, c: 6915.21
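The Stress-NG "Bogo Ops/s" figures come from the tool's own per-stressor operation counters rather than from any external timer. For a rough standalone comparison, the same metric can be pulled from stress-ng directly, as in the sketch below; the stressor, worker count, and runtime are arbitrary choices, not the values the test profile configures for each stressor.

    # Hedged sketch: run a single stress-ng stressor and print its metrics table.
    # --cpu 8 and --timeout 30s are arbitrary example settings.
    import subprocess

    out = subprocess.run(
        ["stress-ng", "--cpu", "8", "--timeout", "30s", "--metrics-brief"],
        check=True, capture_output=True, text=True,
    )
    # stress-ng writes its log, including the bogo ops/s summary, to stderr.
    print(out.stderr)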
Cpuminer-Opt 23.5 - Algorithm: Triple SHA-256, Onecoin (kH/s, more is better): a: 9651.39, b: 9651.58, c: 9653.77
Cpuminer-Opt 23.5 - Algorithm: Blake-2 S (kH/s, more is better): a: 18920, b: 23420, c: 22910
Cpuminer-Opt 23.5 - Algorithm: LBC, LBRY Credits (kH/s, more is better): a: 1723.39, b: 2080.34, c: 1755.69
Cpuminer-Opt 23.5 - Algorithm: Skeincoin (kH/s, more is better): a: 3997.83, b: 5234.04, c: 4034.91
Cpuminer-Opt 23.5 - Algorithm: Deepcoin (kH/s, more is better): a: 992.85, b: 1232.90, c: 1185.50
Cpuminer-Opt 23.5 - Algorithm: Myriad-Groestl (kH/s, more is better): a: 1627.74, b: 1337.88, c: 1647.77
Cpuminer-Opt 23.5 - Algorithm: scrypt (kH/s, more is better): a: 44.52, b: 36.67, c: 36.39
Cpuminer-Opt 23.5 - Algorithm: Magi (kH/s, more is better): a: 79.32, b: 78.74, c: 78.68
Cpuminer-Opt 23.5 - Algorithm: Ringcoin (kH/s, more is better): a: 526.74, b: 436.98, c: 436.97
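Cpuminer-Opt reports a hash rate per algorithm in benchmark mode, with no pool connection involved. A standalone run along those lines is sketched below; the algorithm name and thread count are illustrative, and only the long-standing --algo, --benchmark, and --threads options are assumed.

    # Hedged sketch: run cpuminer-opt's built-in benchmark for one algorithm for
    # about a minute, then stop it. "scrypt" and 8 threads are arbitrary values.
    import subprocess

    try:
        subprocess.run(
            ["cpuminer", "--algo", "scrypt", "--benchmark", "--threads", "8"],
            timeout=60,
        )
    except subprocess.TimeoutExpired:
        pass  # benchmark mode runs until interrupted; hash rates print as it runs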
DaCapo Benchmark 23.11 - Java Test: Spring Boot (msec, fewer is better): a: 5151, b: 5416, c: 5444
QuantLib 1.32 - Configuration: Single-Threaded (MFLOPS, more is better): a: 3489.3, b: 3488.1, c: 3375.6
DaCapo Benchmark 23.11 - Java Test: Apache Lucene Search Engine (msec, fewer is better): a: 6558, b: 6472, c: 6687
libavif avifenc 1.0 - Encoder Speed: 6, Lossless (Seconds, fewer is better): a: 22.31, b: 22.19, c: 22.45
DaCapo Benchmark 23.11 - Java Test: GraphChi (msec, fewer is better): a: 4418, b: 4400, c: 4428
DaCapo Benchmark 23.11 - Java Test: jMonkeyEngine (msec, fewer is better): a: 6914, b: 6893, c: 6876
DaCapo Benchmark 23.11 - Java Test: Tradesoap (msec, fewer is better): a: 2943, b: 2939, c: 2947
oneDNN 3.3 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): a: 2.15486 (min 1.9), b: 2.22896 (min 1.94), c: 2.18576 (min 1.9)
oneDNN 3.3 - Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): a: 42.98 (min 41.08), b: 43.39 (min 41.62), c: 43.16 (min 41.01)
oneDNN 3.3 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, fewer is better): a: 11.33250 (min 9.35), b: 10.26470 (min 8.95), c: 9.93517 (min 9)
DaCapo Benchmark 23.11 - Java Test: BioJava Biological Data Framework (msec, fewer is better): a: 6163, b: 6160, c: 6249
DaCapo Benchmark 23.11 - Java Test: Apache Kafka (msec, fewer is better): a: 5346, b: 5341, c: 5351
DaCapo Benchmark 23.11 - Java Test: Jython (msec, fewer is better): a: 4755, b: 4585, c: 4654
SVT-AV1 1.7 - Encoder Mode: Preset 12 - Input: Bosphorus 4K (Frames Per Second, more is better): a: 37.70, b: 37.90, c: 33.44
DaCapo Benchmark 23.11 - Java Test: H2O In-Memory Platform For Machine Learning (msec, fewer is better): a: 3223, b: 3772, c: 3335
oneDNN 3.3 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms, fewer is better): a: 6.76989 (min 6.04), b: 6.76791 (min 6), c: 7.39175 (min 7.07)
oneDNN 3.3 - Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): a: 21.44 (min 17.25), b: 24.37 (min 23.9), c: 22.28 (min 17.81)
oneDNN 3.3 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): a: 1.58930 (min 1.37), b: 1.59378 (min 1.39), c: 1.59090 (min 1.4)
SVT-AV1 1.7 - Encoder Mode: Preset 8 - Input: Bosphorus 1080p (Frames Per Second, more is better): a: 42.13, b: 41.54, c: 38.79
libavif avifenc 1.0 - Encoder Speed: 6 (Seconds, fewer is better): a: 14.73, b: 14.83, c: 14.95
SVT-AV1 1.7 - Encoder Mode: Preset 13 - Input: Bosphorus 4K (Frames Per Second, more is better): a: 42.30, b: 42.44, c: 43.14
DaCapo Benchmark 23.11 - Java Test: PMD Source Code Analyzer (msec, fewer is better): a: 3315, b: 3645, c: 3187
DaCapo Benchmark 23.11 - Java Test: Zxing 1D/2D Barcode Image Processing (msec, fewer is better): a: 3123, b: 3195, c: 3198
easyWave r34 - Input: e2Asean Grid + BengkuluSept2007 Source - Time: 240 (Seconds, fewer is better): a: 10.613, b: 9.924, c: 9.620
libavif avifenc 1.0 - Encoder Speed: 10, Lossless (Seconds, fewer is better): a: 9.678, b: 9.506, c: 9.597
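The avifenc timings measure how long a single AVIF encode takes at a given speed setting, with and without lossless mode. A comparable manual run is sketched below; the input image is a placeholder, and only avifenc's documented -s (speed) and -l (lossless) options are assumed, not the source image or flags the test profile uses.

    # Hedged sketch: time a lossless AVIF encode at speed 6.
    # "input.png" is a placeholder source image.
    import subprocess, time

    start = time.perf_counter()
    subprocess.run(
        ["avifenc", "-l", "-s", "6", "input.png", "output.avif"],
        check=True, capture_output=True,
    )
    print(f"encode time: {time.perf_counter() - start:.2f} s")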
oneDNN 3.3 - Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): a: 5.96832 (min 4.97), b: 5.92005 (min 4.94), c: 5.93496 (min 5.03)
oneDNN 3.3 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, fewer is better): a: 6.08399 (min 5.89), b: 6.09457 (min 5.85), c: 6.07427 (min 5.87)
oneDNN 3.3 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): a: 2.40204 (min 2.29), b: 2.44303 (min 2.3), c: 2.40573 (min 2.3)
C-Blosc 2.11 - Test: blosclz noshuffle - Buffer Size: 256MB (MB/s, more is better): a: 4883.8, b: 5007.6, c: 4922.9
C-Blosc 2.11 - Test: blosclz bitshuffle - Buffer Size: 256MB (MB/s, more is better): a: 5086.8, b: 5087.2, c: 5010.3
C-Blosc 2.11 - Test: blosclz shuffle - Buffer Size: 256MB (MB/s, more is better): a: 5080.8, b: 5182.4, c: 5107.4
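The C-Blosc results compare the blosclz codec with no shuffle, byte shuffle, and bit shuffle filters across several buffer sizes. The effect of the filter choice can be illustrated with the python-blosc bindings, as sketched below; the buffer contents and typesize are arbitrary, and the throughput it prints is only a rough analogue of the multi-threaded numbers above.

    # Hedged sketch: compare blosclz throughput with the three shuffle filters
    # using the python-blosc bindings. The ~64 MB float64 buffer is arbitrary.
    import time
    import numpy as np
    import blosc

    data = np.random.rand(8 * 1024 * 1024).tobytes()  # ~64 MB of float64 bytes

    for name, flt in [("noshuffle", blosc.NOSHUFFLE),
                      ("shuffle", blosc.SHUFFLE),
                      ("bitshuffle", blosc.BITSHUFFLE)]:
        start = time.perf_counter()
        compressed = blosc.compress(data, typesize=8, cname="blosclz", shuffle=flt)
        elapsed = time.perf_counter() - start
        print(f"{name}: {len(data) / elapsed / 1e6:.1f} MB/s, "
              f"ratio {len(data) / len(compressed):.2f}")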
DaCapo Benchmark 23.11 - Java Test: Avrora AVR Simulation Framework (msec, fewer is better): a: 2554, b: 2524, c: 2612
C-Blosc 2.11 - Test: blosclz noshuffle - Buffer Size: 128MB (MB/s, more is better): a: 6398.4, b: 6413.8, c: 6391.1
DaCapo Benchmark 23.11 - Java Test: Apache Xalan XSLT (msec, fewer is better): a: 1248, b: 1438, c: 1264
C-Blosc 2.11 - Test: blosclz bitshuffle - Buffer Size: 128MB (MB/s, more is better): a: 6601.2, b: 6738.5, c: 6470.2
DaCapo Benchmark 23.11 - Java Test: Batik SVG Toolkit (msec, fewer is better): a: 1392, b: 1321, c: 1362
C-Blosc 2.11 - Test: blosclz shuffle - Buffer Size: 128MB (MB/s, more is better): a: 6629.3, b: 6779.3, c: 6665.5
oneDNN 3.3 - Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): a: 50.88 (min 50.58), b: 51.01 (min 50.58), c: 50.97 (min 50.58)
oneDNN 3.3 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): a: 7.88294 (min 7.71), b: 8.05175 (min 7.66), c: 7.95953 (min 7.73)
oneDNN 3.3 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, fewer is better): a: 8.49762 (min 8.21), b: 8.44523 (min 8.17), c: 8.47369 (min 8.23)
C-Blosc 2.11 - Test: blosclz noshuffle - Buffer Size: 64MB (MB/s, more is better): a: 7456.9, b: 7472.9, c: 7452.8
C-Blosc 2.11 - Test: blosclz bitshuffle - Buffer Size: 64MB (MB/s, more is better): a: 7691.6, b: 7799.7, c: 7839.9
C-Blosc 2.11 - Test: blosclz shuffle - Buffer Size: 64MB (MB/s, more is better): a: 7801.5, b: 8005.2, c: 7884.2
C-Blosc 2.11 - Test: blosclz noshuffle - Buffer Size: 32MB (MB/s, more is better): a: 8124.8, b: 8273.4, c: 8221.1
C-Blosc 2.11 - Test: blosclz bitshuffle - Buffer Size: 32MB (MB/s, more is better): a: 8299.4, b: 8646.1, c: 8660.0
C-Blosc 2.11 - Test: blosclz shuffle - Buffer Size: 32MB (MB/s, more is better): a: 8499.9, b: 8751.4, c: 8437.5
C-Blosc 2.11 - Test: blosclz noshuffle - Buffer Size: 16MB (MB/s, more is better): a: 8558.6, b: 8842.0, c: 8780.8
C-Blosc 2.11 - Test: blosclz shuffle - Buffer Size: 16MB (MB/s, more is better): a: 8877.9, b: 9133.5, c: 8841.6
C-Blosc 2.11 - Test: blosclz bitshuffle - Buffer Size: 16MB (MB/s, more is better): a: 9000.6, b: 9267.8, c: 9052.0
C-Blosc 2.11 - Test: blosclz noshuffle - Buffer Size: 8MB (MB/s, more is better): a: 9850.2, b: 10107.7, c: 9885.9
C-Blosc 2.11 - Test: blosclz bitshuffle - Buffer Size: 8MB (MB/s, more is better): a: 9769.5, b: 10085.2, c: 10073.9
C-Blosc 2.11 - Test: blosclz shuffle - Buffer Size: 8MB (MB/s, more is better): a: 10149.8, b: 10851.9, c: 9898.3
DaCapo Benchmark 23.11 - Java Test: FOP Print Formatter (msec, fewer is better): a: 709, b: 702, c: 690
SVT-AV1 1.7 - Encoder Mode: Preset 12 - Input: Bosphorus 1080p (Frames Per Second, more is better): a: 191.61, b: 189.29, c: 221.84
oneDNN 3.3 - Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): a: 35.72 (min 35.34), b: 35.78 (min 35.46), c: 35.73 (min 35.41)
oneDNN 3.3 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, fewer is better): a: 9.87897 (min 9.55), b: 10.07040 (min 9.57), c: 10.03890 (min 9.6)
oneDNN 3.3 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): a: 2.45216 (min 2.21), b: 2.44729 (min 2.21), c: 2.47290 (min 2.21)
SVT-AV1 1.7 - Encoder Mode: Preset 13 - Input: Bosphorus 1080p (Frames Per Second, more is better): a: 290.54, b: 289.78, c: 288.16
Phoronix Test Suite v10.8.5