tgls: Tests for a future article. Intel Core i7-1185G7 testing with a Dell 0DXP1F (3.7.0 BIOS) and Intel Xe TGL GT2 15GB graphics on Ubuntu 22.04 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2311139-PTS-TGLS087809&rdt&grs.
tgls - System Configuration (runs a, b, c)
Processor: Intel Core i7-1185G7 @ 4.80GHz (4 Cores / 8 Threads)
Motherboard: Dell 0DXP1F (3.7.0 BIOS)
Chipset: Intel Tiger Lake-LP
Memory: 16GB
Disk: Micron 2300 NVMe 512GB
Graphics: Intel Xe TGL GT2 15GB (1350MHz)
Audio: Realtek ALC289
Network: Intel Wi-Fi 6 AX201
OS: Ubuntu 22.04
Kernel: 5.19.0-46-generic (x86_64) / 6.2.0-36-generic (x86_64)
Desktop: GNOME Shell 42.2
Display Server: X Server + Wayland
OpenGL: 4.6 Mesa 22.0.1
OpenCL: OpenCL 3.0
Vulkan: 1.3.204
Compiler: GCC 11.4.0
File-System: ext4
Screen Resolution: 1920x1200
Kernel Details - Transparent Huge Pages: madvise
Compiler Details - --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details - a: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0xa6 - Thermald 2.4.9; b: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0xa6 - Thermald 2.4.9; c: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0xac - Thermald 2.4.9
Java Details - OpenJDK Runtime Environment (build 11.0.20.1+1-post-Ubuntu-0ubuntu122.04)
Python Details - Python 3.10.12
Security Details - a, b: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected; c: gather_data_sampling: Mitigation of Microcode + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected
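The Kernel Details and Processor Details entries above (Transparent Huge Pages mode, scaling governor, EPP, CPU microcode) come from standard Linux interfaces. Below is a minimal sketch, not part of the original result export, of how those same values can be read back on a system like this; it assumes the usual intel_pstate/cpufreq sysfs layout and /proc/cpuinfo format.

```python
# Illustrative sketch: read the environment details reported above
# (THP mode, scaling governor, EPP, CPU microcode) from standard
# Linux interfaces. Paths assume intel_pstate and the common
# cpufreq/transparent_hugepage sysfs layout.
from pathlib import Path

def read(path: str) -> str:
    p = Path(path)
    return p.read_text().strip() if p.exists() else "n/a"

# Transparent Huge Pages setting, e.g. "always [madvise] never"
thp = read("/sys/kernel/mm/transparent_hugepage/enabled")

# Scaling governor and energy-performance preference for CPU 0,
# e.g. "powersave" and "balance_performance"
governor = read("/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor")
epp = read("/sys/devices/system/cpu/cpu0/cpufreq/energy_performance_preference")

# CPU microcode revision as reported by the kernel, e.g. "0xa6" or "0xac"
microcode = "n/a"
cpuinfo = Path("/proc/cpuinfo")
if cpuinfo.exists():
    for line in cpuinfo.read_text().splitlines():
        if line.startswith("microcode"):
            microcode = line.split(":", 1)[1].strip()
            break

print(f"THP: {thp}")
print(f"Scaling governor: {governor} (EPP: {epp})")
print(f"CPU microcode: {microcode}")
```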
tgls ncnn: Vulkan GPU - blazeface ncnn: Vulkan GPU-v3-v3 - mobilenet-v3 ncnn: CPU - alexnet ncnn: Vulkan GPU - alexnet ncnn: Vulkan GPU - efficientnet-b0 ncnn: CPU-v2-v2 - mobilenet-v2 ncnn: CPU - shufflenet-v2 ncnn: Vulkan GPU-v2-v2 - mobilenet-v2 ncnn: CPU - mnasnet ncnn: Vulkan GPU - mnasnet ncnn: Vulkan GPU - shufflenet-v2 ncnn: Vulkan GPU - regnety_400m ncnn: Vulkan GPU - googlenet ncnn: Vulkan GPU - resnet18 ncnn: CPU-v3-v3 - mobilenet-v3 ncnn: CPU - resnet18 ncnn: CPU - resnet50 ncnn: Vulkan GPU - squeezenet_ssd ncnn: Vulkan GPU - mobilenet ncnn: Vulkan GPU - resnet50 ncnn: CPU - squeezenet_ssd ncnn: CPU - mobilenet ncnn: CPU - googlenet ncnn: CPU - blazeface ncnn: CPU - regnety_400m ncnn: CPU - yolov4-tiny ncnn: Vulkan GPU - FastestDet ncnn: Vulkan GPU - yolov4-tiny ncnn: CPU - FastestDet ncnn: CPU - efficientnet-b0 ncnn: Vulkan GPU - vision_transformer ncnn: Vulkan GPU - vgg16 ncnn: CPU - vgg16 ncnn: CPU - vision_transformer stress-ng: MMAP stress-ng: Socket Activity dacapobench: Tradebeans stress-ng: CPU Cache stress-ng: Vector Floating Point stress-ng: System V Message Passing stress-ng: NUMA stress-ng: Glibc C String Functions cpuminer-opt: Skeincoin stress-ng: Forking qmcpack: FeCO6_b3lyp_gms ospray-studio: 1 - 4K - 1 - Path Tracer - CPU stress-ng: Semaphores stress-ng: SENDFILE deepsparse: BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Stream deepsparse: BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Stream stress-ng: Malloc cpuminer-opt: Deepcoin cpuminer-opt: Blake-2 S stress-ng: Mutex stress-ng: Memory Copying stress-ng: Vector Math cpuminer-opt: Myriad-Groestl stress-ng: AVX-512 VNNI stress-ng: Glibc Qsort Data Sorting cpuminer-opt: scrypt stress-ng: Floating Point stress-ng: Context Switching cpuminer-opt: LBC, LBRY Credits cpuminer-opt: Ringcoin stress-ng: Function Call stress-ng: Vector Shuffle stress-ng: CPU Stress openvkl: vklBenchmarkCPU ISPC ospray-studio: 3 - 4K - 1 - Path Tracer - CPU cpuminer-opt: Garlicoin deepsparse: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Stream deepsparse: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Stream svt-av1: Preset 12 - Bosphorus 1080p dacapobench: H2O In-Memory Platform For Machine Learning cassandra: Writes stress-ng: Matrix Math deepsparse: ResNet-50, Sparse INT8 - Asynchronous Multi-Stream deepsparse: ResNet-50, Sparse INT8 - Asynchronous Multi-Stream dacapobench: Apache Xalan XSLT dacapobench: PMD Source Code Analyzer onednn: Deconvolution Batch shapes_1d - f32 - CPU stress-ng: Fused Multiply-Add onednn: IP Shapes 1D - bf16bf16bf16 - CPU svt-av1: Preset 12 - Bosphorus 4K svt-av1: Preset 4 - Bosphorus 1080p openvino: Age Gender Recognition Retail 0013 FP16-INT8 - CPU stress-ng: Wide Vector Math openvino: Handwritten English Recognition FP16-INT8 - CPU openvino: Handwritten English Recognition FP16-INT8 - CPU openvino: Age Gender Recognition Retail 0013 FP16-INT8 - CPU stress-ng: Pthread stress-ng: Pipe stress-ng: Futex openvino: Handwritten English Recognition FP16 - CPU openvino: Handwritten English Recognition FP16 - CPU openvkl: vklBenchmarkCPU Scalar easywave: e2Asean Grid + BengkuluSept2007 Source - 240 ospray-studio: 1 - 1080p - 32 - Path Tracer - CPU ospray-studio: 1 - 4K - 32 - Path Tracer - CPU ospray-studio: 1 - 1080p - 1 - Path Tracer - CPU ospray-studio: 1 - 1080p - 16 - Path Tracer - CPU ospray-studio: 2 - 1080p - 16 - Path Tracer - CPU openvino: Face Detection FP16 - CPU ospray-studio: 2 - 1080p 
- 1 - Path Tracer - CPU ospray-studio: 2 - 1080p - 32 - Path Tracer - CPU blosc: blosclz shuffle - 8MB ospray-studio: 1 - 4K - 16 - Path Tracer - CPU ospray-studio: 3 - 1080p - 16 - Path Tracer - CPU ospray-studio: 3 - 1080p - 32 - Path Tracer - CPU ospray-studio: 2 - 4K - 32 - Path Tracer - CPU ospray-studio: 3 - 4K - 32 - Path Tracer - CPU ospray-studio: 3 - 1080p - 1 - Path Tracer - CPU stress-ng: Atomic ospray-studio: 3 - 4K - 16 - Path Tracer - CPU onednn: IP Shapes 1D - f32 - CPU ospray-studio: 2 - 4K - 16 - Path Tracer - CPU qmcpack: H4_ae openvino: Face Detection Retail FP16-INT8 - CPU deepsparse: NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream deepsparse: NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream openvino: Face Detection Retail FP16-INT8 - CPU openvino: Face Detection FP16 - CPU deepsparse: CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Stream deepsparse: CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Stream openvino: Machine Translation EN To DE FP16 - CPU openvino: Weld Porosity Detection FP16 - CPU openvino: Machine Translation EN To DE FP16 - CPU openvino: Weld Porosity Detection FP16 - CPU svt-av1: Preset 8 - Bosphorus 1080p cpuminer-opt: Quad SHA-256, Pyrite openvino: Person Detection FP32 - CPU openvino: Face Detection FP16-INT8 - CPU openvino: Person Detection FP32 - CPU openvino: Vehicle Detection FP16 - CPU openvino: Vehicle Detection FP16 - CPU stress-ng: MEMFD openvino: Face Detection FP16-INT8 - CPU qmcpack: O_ae_pyscf_UHF dacapobench: Apache Lucene Search Index stress-ng: x86_64 RdRand stress-ng: Mixed Scheduler openvino: Road Segmentation ADAS FP16-INT8 - CPU openvino: Road Segmentation ADAS FP16-INT8 - CPU ffmpeg: libx264 - Live openvino: Person Detection FP16 - CPU stress-ng: Hash openvino: Person Detection FP16 - CPU openvino: Person Vehicle Bike Detection FP16 - CPU openvino: Person Vehicle Bike Detection FP16 - CPU deepsparse: NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream deepsparse: NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream openvino: Road Segmentation ADAS FP16 - CPU openvino: Road Segmentation ADAS FP16 - CPU deepsparse: CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream deepsparse: CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream dacapobench: Spring Boot embree: Pathtracer ISPC - Crown dacapobench: Batik SVG Toolkit openvino: Vehicle Detection FP16-INT8 - CPU openvino: Vehicle Detection FP16-INT8 - CPU deepsparse: NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream deepsparse: NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream qmcpack: Li2_STO_ae openvino: Weld Porosity Detection FP16-INT8 - CPU openvino: Weld Porosity Detection FP16-INT8 - CPU deepsparse: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Asynchronous Multi-Stream deepsparse: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Asynchronous Multi-Stream stress-ng: IO_uring qmcpack: LiH_ae_MSD embree: Pathtracer ISPC - Asian Dragon Obj embree: Pathtracer ISPC - Asian Dragon openvino: Age Gender Recognition Retail 0013 FP16 - CPU blosc: blosclz bitshuffle - 32MB deepsparse: BERT-Large, NLP Question Answering - Asynchronous Multi-Stream blosc: blosclz bitshuffle - 128MB openvino: Age Gender Recognition Retail 0013 FP16 - CPU vvenc: Bosphorus 4K - Faster deepsparse: BERT-Large, NLP Question Answering - Asynchronous Multi-Stream stress-ng: Zlib openvino: Face 
Detection Retail FP16 - CPU openvino: Face Detection Retail FP16 - CPU blosc: blosclz shuffle - 32MB dacapobench: Jython stress-ng: Cloning dacapobench: H2 Database Engine vvenc: Bosphorus 4K - Fast dacapobench: Avrora AVR Simulation Framework onednn: Deconvolution Batch shapes_1d - u8s8f32 - CPU quantlib: Single-Threaded ffmpeg: libx264 - Platform dacapobench: Apache Lucene Search Engine blosc: blosclz noshuffle - 16MB blosc: blosclz shuffle - 16MB blosc: blosclz bitshuffle - 8MB quantlib: Multi-Threaded svt-av1: Preset 4 - Bosphorus 4K blosc: blosclz bitshuffle - 16MB avifenc: 0 deepsparse: CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream deepsparse: CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream avifenc: 2 dacapobench: FOP Print Formatter blosc: blosclz noshuffle - 8MB blosc: blosclz shuffle - 64MB deepsparse: NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream blosc: blosclz noshuffle - 256MB deepsparse: NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream dacapobench: Zxing 1D/2D Barcode Image Processing brl-cad: VGR Performance Metric blosc: blosclz shuffle - 128MB deepsparse: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream deepsparse: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream onednn: Convolution Batch Shapes Auto - u8s8f32 - CPU ffmpeg: libx265 - Video On Demand blosc: blosclz shuffle - 256MB svt-av1: Preset 13 - Bosphorus 4K onednn: Deconvolution Batch shapes_3d - f32 - CPU blosc: blosclz bitshuffle - 64MB ffmpeg: libx265 - Platform blosc: blosclz noshuffle - 32MB avifenc: 10, Lossless qmcpack: simple-H2O cloverleaf: clover_bm64_short onednn: IP Shapes 3D - u8s8f32 - CPU dacapobench: Eclipse stress-ng: Matrix 3D Math vvenc: Bosphorus 1080p - Fast build-gcc: Time To Compile stress-ng: AVL Tree blosc: blosclz bitshuffle - 256MB avifenc: 6 openradioss: Cell Phone Drop Test dacapobench: BioJava Biological Data Framework build-ffmpeg: Time To Compile ffmpeg: libx264 - Upload cloverleaf: clover_bm stress-ng: Poll avifenc: 6, Lossless openradioss: Bird Strike on Windshield onednn: Deconvolution Batch shapes_3d - u8s8f32 - CPU ffmpeg: libx265 - Upload build-gem5: Time To Compile onednn: Deconvolution Batch shapes_1d - bf16bf16bf16 - CPU dacapobench: Apache Tomcat stress-ng: Crypto svt-av1: Preset 13 - Bosphorus 1080p onednn: IP Shapes 3D - bf16bf16bf16 - CPU cpuminer-opt: Magi ospray-studio: 2 - 4K - 1 - Path Tracer - CPU openradioss: Rubber O-Ring Seal Installation deepsparse: ResNet-50, Baseline - Asynchronous Multi-Stream deepsparse: ResNet-50, Baseline - Asynchronous Multi-Stream svt-av1: Preset 8 - Bosphorus 4K embree: Pathtracer - Asian Dragon Obj dacapobench: Apache Cassandra onednn: Recurrent Neural Network Training - f32 - CPU dacapobench: GraphChi ffmpeg: libx264 - Video On Demand onednn: Convolution Batch Shapes Auto - f32 - CPU ffmpeg: libx265 - Live dacapobench: jMonkeyEngine deepsparse: CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream deepsparse: CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream blosc: blosclz noshuffle - 128MB embree: Pathtracer - Crown onednn: IP Shapes 3D - f32 - CPU onednn: IP Shapes 1D - u8s8f32 - CPU easywave: e2Asean Grid + BengkuluSept2007 Source - 1200 dacapobench: Tradesoap blosc: blosclz noshuffle - 64MB onednn: Convolution Batch Shapes Auto - bf16bf16bf16 - CPU vvenc: Bosphorus 1080p - Faster onednn: Recurrent 
Neural Network Inference - u8s8f32 - CPU dacapobench: Apache Kafka embree: Pathtracer - Asian Dragon onednn: Recurrent Neural Network Inference - bf16bf16bf16 - CPU onednn: Deconvolution Batch shapes_3d - bf16bf16bf16 - CPU onednn: Recurrent Neural Network Training - bf16bf16bf16 - CPU onednn: Recurrent Neural Network Inference - f32 - CPU onednn: Recurrent Neural Network Training - u8s8f32 - CPU cpuminer-opt: Triple SHA-256, Onecoin oidn: RTLightmap.hdr.4096x4096 - CPU-Only oidn: RT.ldr_alb_nrm.3840x2160 - CPU-Only oidn: RT.hdr_alb_nrm.3840x2160 - CPU-Only rabbitmq: Simple 2 Publishers + 4 Consumers a b c 0.92 3.48 7.44 6.81 5.61 4.42 3.45 4.39 3.77 3.77 3.4 8.05 11.36 8.36 3.5 9.44 23.94 11.9 18.97 23.98 11.91 18.82 13.84 1.29 10.75 26.69 4.83 26.68 5.05 7.17 182.09 47.81 49.76 185.96 69.5 6702.46 7683 788775.98 11875.81 6382311.82 91.38 3597888.5 3997.83 19571.06 110.52 26473 8568428.8 74606.13 29.051 68.7416 1275453.62 992.85 18920 1800239.74 1450.68 19001.52 1627.74 575521.32 104.16 44.52 1644.27 1325769.49 1723.39 526.74 2533.58 40150.8 8190.18 124 33081 530.61 12.139 164.3389 191.606 3223 43984 19810.07 4.6989 423.4998 1248 3315 11.3325 4206478.3 21.4412 37.702 4.593 0.47 259593.94 65.4 61.12 8360.99 87060.44 3286500.94 1556806.04 53.07 75.35 41 10.613 217399 858149 6673 111136 112613 1.11 6768 221087 10149.8 429631 132433 260252 869726 1025430 7982 297.71 514920 6.76989 436532 76.19 6.04 72.268 27.6648 660.54 3600.29 86.8994 23.0079 287.28 37.89 13.9 105.49 42.127 8552.11 11.59 4.05 344.94 48.04 83.21 151.83 985.39 222.05 3397 1190.72 3933.3 81 49.36 189.77 346.37 814204.59 11.52 27.41 145.83 136.2099 14.6802 41.06 97.39 93.6543 21.3453 5151 4.9067 1392 17.97 222.27 3.3887 590.1719 687.22 10.38 384.36 31.2889 63.8512 335053.51 83.217 5.2742 6.1486 1.37 8299.4 469.6783 6601.2 2874.25 3.843 4.2577 339.75 14.73 271.08 8499.9 4755 773.96 5250 1.738 2554 2.15486 3489.3 50.35 6558 8558.6 8877.9 9769.5 13707 1.209 9000.6 381.267 6.8296 292.8017 168.165 709 9850.2 7801.5 501.8987 4883.8 3.9794 3123 54646 6629.3 16.4408 121.4974 7.88294 26.42 5080.8 42.302 9.87897 7691.6 26.42 8124.8 9.678 31.221 206.97 2.40204 14549 1503.69 5.428 2337.786 38 5086.8 14.727 245.14 6163 128.964 13.29 117.10 583755.41 22.311 653.24 2.45216 12.63 1296.126 42.983 14326 7870.33 290.542 5.96832 79.32 31575 482.8 36.725 54.411 14.86 4.4332 7845 7128.22 4418 49.70 8.49762 79.05 6914 37.0815 53.8907 6398.4 4.1966 6.08399 1.5893 189.03 2943 7456.9 50.8812 13.201 3671.69 5346 4.8695 3703.86 35.7172 7189.12 3701.08 7179.51 9651.39 0.07 0.15 0.15 21.67 66.64 119.06 122.08 86.3 76.64 55.66 75.3 59.72 62.71 54.38 132.95 184.62 130.84 54.67 132.13 372.91 177.2 282.29 354.34 174.87 273.8 196.54 13.67 143.48 354.78 62.98 347.17 57.01 88.38 2069.02 542.46 563.22 2056.91 69.31 4404.68 7183 614301.31 9743.95 5221036.03 90.4 3169430.98 5234.04 17963.97 110.13 26522 7131559.06 58965.05 27.9534 71.4338 1281240.89 1232.9 23420 1732350.97 1175.93 15774.03 1337.88 467858.43 85.03 36.67 1377.23 1291162.21 2080.34 436.98 2144.47 34114.33 7691.29 124 33073 451.17 12.66 157.5828 189.286 3772 37649 18411.16 5.0779 391.9655 1438 3645 10.2647 3753786.13 24.3748 37.903 4.801 0.53 230331.68 73.4 54.46 7457.91 83444.87 2941192.56 1573811.12 47.86 83.53 40 9.924 218393 856610 6704 111078 113201 1.07 6786 222412 10851.9 429276 132979 261073 870122 1024292 7987 298.85 514145 6.76791 438395 69.78 6.59 66.2808 30.1633 606.16 3719.15 92.6993 21.5687 312.41 41.18 12.79 97.09 41.54 7883.39 10.78 3.75 370.73 51.72 77.29 163.56 1059.61 222.61 3655 
1175.31 3841.89 75.63 52.86 185.32 369.62 761130.78 10.81 29.13 137.2 145.1634 13.7748 38.82 103.02 88.4776 22.5938 5416 4.9376 1321 18.93 211.09 3.5666 560.7205 653.08 10.91 366.17 31.2656 63.8942 350882.75 83.167 5.2679 6.1218 1.4 8646.1 466.2515 6738.5 2834.17 3.858 4.2784 328.6 15.29 261.21 8751.4 4585 801.63 5440 1.733 2524 2.22896 3488.1 51.03 6472 8842 9133.5 10085.2 13689.3 1.211 9267.8 383.211 6.7467 296.3991 168.557 702 10107.7 8005.2 504.8649 5007.6 3.9535 3195 54938 6779.3 16.3077 122.5885 8.05175 26.43 5182.4 42.441 10.0704 7799.7 26.40 8273.4 9.506 31.756 203.68 2.44303 14537 1497.3 5.415 2342.485 38.26 5087.2 14.834 243.36 6160 128.753 13.26 116.20 577995.91 22.193 650.51 2.44729 12.69 1299.277 43.3943 14327 7909.16 289.778 5.92005 78.74 31507 479.06 36.7954 54.3057 14.841 4.4038 7797 7138.73 4400 50.01 8.44523 79.50 6893 37.1469 53.7706 6413.8 4.2047 6.09457 1.59378 188.679 2939 7472.9 51.0121 13.207 3680.74 5341 4.8785 3697.25 35.7772 7178.37 3699.05 7178.23 9651.58 0.07 0.15 0.15 15.59 54.85 141.51 124.98 97.95 71.04 59.68 66.65 63.8 63.33 56.62 133.77 178.4 134.78 56.24 149.31 349.97 164.33 266.51 328.87 174.48 265.17 180.35 17.89 127.58 336.66 55.01 341.71 64.89 91.44 1933.05 541.65 534.25 1952.08 36.33 5819.48 10909 541147.88 8453.72 4574104.46 67.68 2687720.74 4034.91 15226.03 141.41 33836 9046086.62 60163.92 35.1265 56.8622 1028677.94 1185.5 22910 1454355.06 1179.3 15404.09 1647.77 497416.91 85.43 36.39 1350.58 1089870.73 1755.69 436.97 2126.84 33753.34 6915.21 105 39043 456.13 14.2479 140.0582 221.835 3335 37916 16983.95 5.423 367.1136 1264 3187 9.93517 4280472.18 22.2758 33.441 4.25 0.53 230559.53 72.42 55.2 7460.73 77712.52 2989197.58 1409614.1 47.69 83.82 37 9.62 239831 943953 7343 122118 123787 1.01 7425 242527 9898.3 470269 145066 285007 951381 1119430 8722 325.29 561556 7.39175 476659 75.96 6.56 71.468 27.9721 608.75 3922.05 94.6327 21.1279 312.33 41.07 12.79 97.33 38.791 7888.21 10.72 3.75 372.34 51.79 77.19 155.8 1061.06 239.07 3541 1107.03 3666.05 75.52 52.94 177.32 370.61 808351.87 10.77 29.26 136.61 136.0052 14.6976 38.72 103.24 92.4501 21.6236 5444 4.6745 1362 18.88 211.63 3.3874 590.3738 675.81 10.86 368 32.7516 61.0055 335942.42 87.017 5.0433 5.889 1.43 8660 485.9236 6470.2 2763.4 3.711 4.1156 327.19 15.17 263.43 8437.5 4654 773.35 5419 1.679 2612 2.18576 3375.6 49.38 6687 8780.8 8841.6 10073.9 13287 1.175 9052 392.474 6.6347 301.4003 172.811 690 9885.9 7884.2 514.9428 4922.9 3.8837 3198 53711 6665.5 16.0836 124.1804 7.95953 25.88 5107.4 43.136 10.0389 7839.9 25.93 8221.1 9.597 31.753 203.49 2.40573 14307 1479.59 5.343 2374.917 38.6 5010.3 14.951 241.64 6249 130.586 13.11 117.62 577081.19 22.449 657.96 2.4729 12.76 1309.168 43.1633 14456 7843.28 288.155 5.93496 78.68 31324 480.7 37.0116 53.9931 14.759 4.4308 7848 7174.36 4428 49.79 8.47369 79.15 6876 37.2447 53.6553 6391.1 4.2112 6.07427 1.5909 188.515 2947 7452.8 50.9697 13.174 3680.01 5351 4.8776 3699.85 35.7339 7178.94 3703.5 7176.24 9653.77 0.07 0.15 0.15 OpenBenchmarking.org
NCNN 20230517 - Target: Vulkan GPU - Model: blazeface (ms, fewer is better): a: 0.92, b: 21.67, c: 15.59. MIN: 0.89 / MAX: 816.69; MIN: 0.92 / MAX: 443.62. 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread (common to all NCNN results)
NCNN 20230517 - Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better): a: 3.48, b: 66.64, c: 54.85. MIN: 3.32 / MAX: 878.52; MIN: 3.5 / MAX: 559.94
NCNN 20230517 - Target: CPU - Model: alexnet (ms, fewer is better): a: 7.44, b: 119.06, c: 141.51. MIN: 6.3 / MAX: 796.52; MIN: 6.25 / MAX: 1066.41
NCNN 20230517 - Target: Vulkan GPU - Model: alexnet (ms, fewer is better): a: 6.81, b: 122.08, c: 124.98. MIN: 6.21 / MAX: 909.4; MIN: 6.26 / MAX: 828.69
NCNN 20230517 - Target: Vulkan GPU - Model: efficientnet-b0 (ms, fewer is better): a: 5.61, b: 86.30, c: 97.95. MIN: 5.43 / MAX: 937.59; MIN: 5.43 / MAX: 957.03
NCNN 20230517 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better): a: 4.42, b: 76.64, c: 71.04. MIN: 4.27 / MAX: 801.15; MIN: 4.41 / MAX: 684.5
NCNN 20230517 - Target: CPU - Model: shufflenet-v2 (ms, fewer is better): a: 3.45, b: 55.66, c: 59.68. MIN: 3.29 / MAX: 1450.73; MIN: 3.23 / MAX: 851.01
NCNN 20230517 - Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better): a: 4.39, b: 75.30, c: 66.65. MIN: 4.37 / MAX: 730.97; MIN: 4.26 / MAX: 485.03
NCNN 20230517 - Target: CPU - Model: mnasnet (ms, fewer is better): a: 3.77, b: 59.72, c: 63.80. MIN: 3.62 / MAX: 740.07; MIN: 3.6 / MAX: 607.47
NCNN 20230517 - Target: Vulkan GPU - Model: mnasnet (ms, fewer is better): a: 3.77, b: 62.71, c: 63.33. MIN: 3.61 / MAX: 584.51; MIN: 3.55 / MAX: 651.17
NCNN 20230517 - Target: Vulkan GPU - Model: shufflenet-v2 (ms, fewer is better): a: 3.40, b: 54.38, c: 56.62. MIN: 3.2 / MAX: 675.09; MIN: 3.26 / MAX: 747.31
NCNN 20230517 - Target: Vulkan GPU - Model: regnety_400m (ms, fewer is better): a: 8.05, b: 132.95, c: 133.77. MIN: 7.8 / MAX: 1665.01; MIN: 7.72 / MAX: 816.16
NCNN 20230517 - Target: Vulkan GPU - Model: googlenet (ms, fewer is better): a: 11.36, b: 184.62, c: 178.40. MIN: 10.94 / MAX: 775.13; MIN: 10.91 / MAX: 877.14
NCNN 20230517 - Target: Vulkan GPU - Model: resnet18 (ms, fewer is better): a: 8.36, b: 130.84, c: 134.78. MIN: 7.64 / MAX: 808.51; MIN: 7.75 / MAX: 881.26
NCNN 20230517 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better): a: 3.50, b: 54.67, c: 56.24. MIN: 3.37 / MAX: 1059.45; MIN: 3.38 / MAX: 450.47
NCNN 20230517 - Target: CPU - Model: resnet18 (ms, fewer is better): a: 9.44, b: 132.13, c: 149.31. MIN: 7.58 / MAX: 744.32; MIN: 7.75 / MAX: 1201.56
NCNN 20230517 - Target: CPU - Model: resnet50 (ms, fewer is better): a: 23.94, b: 372.91, c: 349.97. MIN: 23.69 / MAX: 1184.42; MIN: 19.77 / MAX: 1364.09
NCNN 20230517 - Target: Vulkan GPU - Model: squeezenet_ssd (ms, fewer is better): a: 11.90, b: 177.20, c: 164.33. MIN: 9.87 / MAX: 889.9; MIN: 9.8 / MAX: 1016.83
NCNN 20230517 - Target: Vulkan GPU - Model: mobilenet (ms, fewer is better): a: 18.97, b: 282.29, c: 266.51. MIN: 15.13 / MAX: 1073.47; MIN: 16.56 / MAX: 1096.05
NCNN 20230517 - Target: Vulkan GPU - Model: resnet50 (ms, fewer is better): a: 23.98, b: 354.34, c: 328.87. MIN: 30.09 / MAX: 1285.23; MIN: 30.98 / MAX: 1188.71
NCNN 20230517 - Target: CPU - Model: squeezenet_ssd (ms, fewer is better): a: 11.91, b: 174.87, c: 174.48. MIN: 9.79 / MAX: 788.66; MIN: 9.81 / MAX: 819.34
NCNN 20230517 - Target: CPU - Model: mobilenet (ms, fewer is better): a: 18.82, b: 273.80, c: 265.17. MIN: 22.92 / MAX: 1358.79; MIN: 23.2 / MAX: 1294.55
NCNN 20230517 - Target: CPU - Model: googlenet (ms, fewer is better): a: 13.84, b: 196.54, c: 180.35. MIN: 11 / MAX: 1629.29; MIN: 10.97 / MAX: 1001.7
NCNN 20230517 - Target: CPU - Model: blazeface (ms, fewer is better): a: 1.29, b: 13.67, c: 17.89. MIN: 0.9 / MAX: 442.99; MIN: 0.92 / MAX: 528.88
NCNN 20230517 - Target: CPU - Model: regnety_400m (ms, fewer is better): a: 10.75, b: 143.48, c: 127.58. MIN: 7.77 / MAX: 1315.68; MIN: 7.7 / MAX: 964.1
NCNN 20230517 - Target: CPU - Model: yolov4-tiny (ms, fewer is better): a: 26.69, b: 354.78, c: 336.66. MIN: 22.96 / MAX: 2085.26; MIN: 30.41 / MAX: 1419.81
NCNN 20230517 - Target: Vulkan GPU - Model: FastestDet (ms, fewer is better): a: 4.83, b: 62.98, c: 55.01. MIN: 3.49 / MAX: 651.5; MIN: 3.49 / MAX: 519.61
NCNN 20230517 - Target: Vulkan GPU - Model: yolov4-tiny (ms, fewer is better): a: 26.68, b: 347.17, c: 341.71. MIN: 22.81 / MAX: 1008.19; MIN: 29 / MAX: 1084.18
NCNN 20230517 - Target: CPU - Model: FastestDet (ms, fewer is better): a: 5.05, b: 57.01, c: 64.89. MIN: 3.5 / MAX: 623.64; MIN: 3.54 / MAX: 903.29
NCNN 20230517 - Target: CPU - Model: efficientnet-b0 (ms, fewer is better): a: 7.17, b: 88.38, c: 91.44. MIN: 5.49 / MAX: 973.69; MIN: 5.45 / MAX: 732.45
NCNN 20230517 - Target: Vulkan GPU - Model: vision_transformer (ms, fewer is better): a: 182.09, b: 2069.02, c: 1933.05. MIN: 1126.73 / MAX: 3158.74; MIN: 513.75 / MAX: 3020.65
NCNN 20230517 - Target: Vulkan GPU - Model: vgg16 (ms, fewer is better): a: 47.81, b: 542.46, c: 541.65. MIN: 171.77 / MAX: 1341.09; MIN: 93.35 / MAX: 1265.69
NCNN 20230517 - Target: CPU - Model: vgg16 (ms, fewer is better): a: 49.76, b: 563.22, c: 534.25. MIN: 198.43 / MAX: 1375.45; MIN: 87.21 / MAX: 1172.33
NCNN 20230517 - Target: CPU - Model: vision_transformer (ms, fewer is better): a: 185.96, b: 2056.91, c: 1952.08. MIN: 381.09 / MAX: 3140.23; MIN: 384.65 / MAX: 2889.65
Stress-NG 0.16.04 - Test: MMAP (Bogo Ops/s, more is better): a: 69.50, b: 69.31, c: 36.33. 1. (CXX) g++ options: -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz (common to all Stress-NG results)
Stress-NG 0.16.04 - Test: Socket Activity (Bogo Ops/s, more is better): a: 6702.46, b: 4404.68, c: 5819.48
DaCapo Benchmark 23.11 - Java Test: Tradebeans (msec, fewer is better): a: 7683, b: 7183, c: 10909
Stress-NG 0.16.04 - Test: CPU Cache (Bogo Ops/s, more is better): a: 788775.98, b: 614301.31, c: 541147.88
Stress-NG 0.16.04 - Test: Vector Floating Point (Bogo Ops/s, more is better): a: 11875.81, b: 9743.95, c: 8453.72
Stress-NG 0.16.04 - Test: System V Message Passing (Bogo Ops/s, more is better): a: 6382311.82, b: 5221036.03, c: 4574104.46
Stress-NG 0.16.04 - Test: NUMA (Bogo Ops/s, more is better): a: 91.38, b: 90.40, c: 67.68
Stress-NG 0.16.04 - Test: Glibc C String Functions (Bogo Ops/s, more is better): a: 3597888.50, b: 3169430.98, c: 2687720.74
Cpuminer-Opt 23.5 - Algorithm: Skeincoin (kH/s, more is better): a: 3997.83, b: 5234.04, c: 4034.91. 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp (common to all Cpuminer-Opt results)
Stress-NG 0.16.04 - Test: Forking (Bogo Ops/s, more is better): a: 19571.06, b: 17963.97, c: 15226.03
QMCPACK 3.17.1 - Input: FeCO6_b3lyp_gms (Total Execution Time - Seconds, fewer is better): a: 110.52, b: 110.13, c: 141.41. 1. (CXX) g++ options: -fopenmp -foffload=disable -finline-limit=1000 -fstrict-aliasing -funroll-all-loops -ffast-math -march=native -O3 -lm -ldl (common to all QMCPACK results)
OSPRay Studio 0.13 - Camera: 1 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU (ms, fewer is better): a: 26473, b: 26522, c: 33836
Stress-NG 0.16.04 - Test: Semaphores (Bogo Ops/s, more is better): a: 8568428.80, b: 7131559.06, c: 9046086.62
Stress-NG 0.16.04 - Test: SENDFILE (Bogo Ops/s, more is better): a: 74606.13, b: 58965.05, c: 60163.92
Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): a: 29.05, b: 27.95, c: 35.13
Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec, more is better): a: 68.74, b: 71.43, c: 56.86
Stress-NG 0.16.04 - Test: Malloc (Bogo Ops/s, more is better): a: 1275453.62, b: 1281240.89, c: 1028677.94
Cpuminer-Opt 23.5 - Algorithm: Deepcoin (kH/s, more is better): a: 992.85, b: 1232.90, c: 1185.50
Cpuminer-Opt 23.5 - Algorithm: Blake-2 S (kH/s, more is better): a: 18920, b: 23420, c: 22910
Stress-NG 0.16.04 - Test: Mutex (Bogo Ops/s, more is better): a: 1800239.74, b: 1732350.97, c: 1454355.06
Stress-NG 0.16.04 - Test: Memory Copying (Bogo Ops/s, more is better): a: 1450.68, b: 1175.93, c: 1179.30
Stress-NG 0.16.04 - Test: Vector Math (Bogo Ops/s, more is better): a: 19001.52, b: 15774.03, c: 15404.09
Cpuminer-Opt 23.5 - Algorithm: Myriad-Groestl (kH/s, more is better): a: 1627.74, b: 1337.88, c: 1647.77
Stress-NG 0.16.04 - Test: AVX-512 VNNI (Bogo Ops/s, more is better): a: 575521.32, b: 467858.43, c: 497416.91
Stress-NG 0.16.04 - Test: Glibc Qsort Data Sorting (Bogo Ops/s, more is better): a: 104.16, b: 85.03, c: 85.43
Cpuminer-Opt 23.5 - Algorithm: scrypt (kH/s, more is better): a: 44.52, b: 36.67, c: 36.39
Stress-NG 0.16.04 - Test: Floating Point (Bogo Ops/s, more is better): a: 1644.27, b: 1377.23, c: 1350.58
Stress-NG 0.16.04 - Test: Context Switching (Bogo Ops/s, more is better): a: 1325769.49, b: 1291162.21, c: 1089870.73
Cpuminer-Opt 23.5 - Algorithm: LBC, LBRY Credits (kH/s, more is better): a: 1723.39, b: 2080.34, c: 1755.69
Cpuminer-Opt 23.5 - Algorithm: Ringcoin (kH/s, more is better): a: 526.74, b: 436.98, c: 436.97
Stress-NG 0.16.04 - Test: Function Call (Bogo Ops/s, more is better): a: 2533.58, b: 2144.47, c: 2126.84
Stress-NG 0.16.04 - Test: Vector Shuffle (Bogo Ops/s, more is better): a: 40150.80, b: 34114.33, c: 33753.34
Stress-NG 0.16.04 - Test: CPU Stress (Bogo Ops/s, more is better): a: 8190.18, b: 7691.29, c: 6915.21
OpenVKL 2.0.0 - Benchmark: vklBenchmarkCPU ISPC (Items / Sec, more is better): a: 124, b: 124, c: 105. MIN: 8 / MAX: 1979; MIN: 8 / MAX: 1977; MIN: 7 / MAX: 1847
OSPRay Studio 0.13 - Camera: 3 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU (ms, fewer is better): a: 33081, b: 33073, c: 39043
Cpuminer-Opt 23.5 - Algorithm: Garlicoin (kH/s, more is better): a: 530.61, b: 451.17, c: 456.13
Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): a: 12.14, b: 12.66, c: 14.25
Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec, more is better): a: 164.34, b: 157.58, c: 140.06
SVT-AV1 1.7 - Encoder Mode: Preset 12 - Input: Bosphorus 1080p (Frames Per Second, more is better): a: 191.61, b: 189.29, c: 221.84. 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq (common to all SVT-AV1 results)
DaCapo Benchmark 23.11 - Java Test: H2O In-Memory Platform For Machine Learning (msec, fewer is better): a: 3223, b: 3772, c: 3335
Apache Cassandra 4.1.3 - Test: Writes (Op/s, more is better): a: 43984, b: 37649, c: 37916
Stress-NG 0.16.04 - Test: Matrix Math (Bogo Ops/s, more is better): a: 19810.07, b: 18411.16, c: 16983.95
Neural Magic DeepSparse 1.5 - Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): a: 4.6989, b: 5.0779, c: 5.4230
Neural Magic DeepSparse 1.5 - Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec, more is better): a: 423.50, b: 391.97, c: 367.11
DaCapo Benchmark 23.11 - Java Test: Apache Xalan XSLT (msec, fewer is better): a: 1248, b: 1438, c: 1264
DaCapo Benchmark 23.11 - Java Test: PMD Source Code Analyzer (msec, fewer is better): a: 3315, b: 3645, c: 3187
oneDNN 3.3 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, fewer is better): a: 11.33250, b: 10.26470, c: 9.93517. MIN: 9.35; MIN: 8.95; MIN: 9. 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl (common to all oneDNN results)
Stress-NG 0.16.04 - Test: Fused Multiply-Add (Bogo Ops/s, more is better): a: 4206478.30, b: 3753786.13, c: 4280472.18
oneDNN 3.3 - Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): a: 21.44, b: 24.37, c: 22.28. MIN: 17.25; MIN: 23.9; MIN: 17.81
SVT-AV1 1.7 - Encoder Mode: Preset 12 - Input: Bosphorus 4K (Frames Per Second, more is better): a: 37.70, b: 37.90, c: 33.44
SVT-AV1 1.7 - Encoder Mode: Preset 4 - Input: Bosphorus 1080p (Frames Per Second, more is better): a: 4.593, b: 4.801, c: 4.250
OpenVINO 2023.2.dev - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (ms, fewer is better): a: 0.47, b: 0.53, c: 0.53. MIN: 0.3 / MAX: 16.79; MIN: 0.28 / MAX: 30.87; MIN: 0.29 / MAX: 32.04. 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl (common to all OpenVINO results)
Stress-NG 0.16.04 - Test: Wide Vector Math (Bogo Ops/s, more is better): a: 259593.94, b: 230331.68, c: 230559.53
OpenVINO 2023.2.dev - Model: Handwritten English Recognition FP16-INT8 - Device: CPU (ms, fewer is better): a: 65.40, b: 73.40, c: 72.42. MIN: 52.18 / MAX: 86.91; MIN: 40.77 / MAX: 125.7; MIN: 40.47 / MAX: 145.92
OpenVINO 2023.2.dev - Model: Handwritten English Recognition FP16-INT8 - Device: CPU (FPS, more is better): a: 61.12, b: 54.46, c: 55.20
OpenVINO 2023.2.dev - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (FPS, more is better): a: 8360.99, b: 7457.91, c: 7460.73
Stress-NG 0.16.04 - Test: Pthread (Bogo Ops/s, more is better): a: 87060.44, b: 83444.87, c: 77712.52
Stress-NG 0.16.04 - Test: Pipe (Bogo Ops/s, more is better): a: 3286500.94, b: 2941192.56, c: 2989197.58
Stress-NG 0.16.04 - Test: Futex (Bogo Ops/s, more is better): a: 1556806.04, b: 1573811.12, c: 1409614.10
OpenVINO 2023.2.dev - Model: Handwritten English Recognition FP16 - Device: CPU (FPS, more is better): a: 53.07, b: 47.86, c: 47.69
OpenVINO 2023.2.dev - Model: Handwritten English Recognition FP16 - Device: CPU (ms, fewer is better): a: 75.35, b: 83.53, c: 83.82. MIN: 54.28 / MAX: 95.91; MIN: 51.8 / MAX: 139.82; MIN: 41.32 / MAX: 136.87
OpenVKL 2.0.0 - Benchmark: vklBenchmarkCPU Scalar (Items / Sec, more is better): a: 41, b: 40, c: 37. MIN: 3 / MAX: 708; MIN: 3 / MAX: 653; MIN: 3 / MAX: 629
easyWave r34 - Input: e2Asean Grid + BengkuluSept2007 Source - Time: 240 (Seconds, fewer is better): a: 10.613, b: 9.924, c: 9.620. 1. (CXX) g++ options: -O3 -fopenmp (common to all easyWave results)
OSPRay Studio Camera: 1 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.13 Camera: 1 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU a b c 50K 100K 150K 200K 250K 217399 218393 239831
OSPRay Studio Camera: 1 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.13 Camera: 1 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU a b c 200K 400K 600K 800K 1000K 858149 856610 943953
OSPRay Studio Camera: 1 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.13 Camera: 1 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU a b c 1600 3200 4800 6400 8000 6673 6704 7343
OSPRay Studio Camera: 1 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.13 Camera: 1 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU a b c 30K 60K 90K 120K 150K 111136 111078 122118
OSPRay Studio Camera: 2 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.13 Camera: 2 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU a b c 30K 60K 90K 120K 150K 112613 113201 123787
OpenVINO Model: Face Detection FP16 - Device: CPU OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Face Detection FP16 - Device: CPU a b c 0.2498 0.4996 0.7494 0.9992 1.249 1.11 1.07 1.01 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OSPRay Studio Camera: 2 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.13 Camera: 2 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU a b c 1600 3200 4800 6400 8000 6768 6786 7425
OSPRay Studio Camera: 2 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.13 Camera: 2 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU a b c 50K 100K 150K 200K 250K 221087 222412 242527
C-Blosc Test: blosclz shuffle - Buffer Size: 8MB OpenBenchmarking.org MB/s, More Is Better C-Blosc 2.11 Test: blosclz shuffle - Buffer Size: 8MB a b c 2K 4K 6K 8K 10K 10149.8 10851.9 9898.3 1. (CC) gcc options: -std=gnu99 -O3 -ldl -lrt -lm
OSPRay Studio Camera: 1 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.13 Camera: 1 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU a b c 100K 200K 300K 400K 500K 429631 429276 470269
OSPRay Studio Camera: 3 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.13 Camera: 3 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU a b c 30K 60K 90K 120K 150K 132433 132979 145066
OSPRay Studio Camera: 3 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.13 Camera: 3 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU a b c 60K 120K 180K 240K 300K 260252 261073 285007
OSPRay Studio Camera: 2 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.13 Camera: 2 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU a b c 200K 400K 600K 800K 1000K 869726 870122 951381
OSPRay Studio Camera: 3 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.13 Camera: 3 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU a b c 200K 400K 600K 800K 1000K 1025430 1024292 1119430
OSPRay Studio Camera: 3 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.13 Camera: 3 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU a b c 2K 4K 6K 8K 10K 7982 7987 8722
Stress-NG Test: Atomic OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.16.04 Test: Atomic a b c 70 140 210 280 350 297.71 298.85 325.29 1. (CXX) g++ options: -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz
OSPRay Studio Camera: 3 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.13 Camera: 3 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU a b c 120K 240K 360K 480K 600K 514920 514145 561556
oneDNN Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU a b c 2 4 6 8 10 6.76989 6.76791 7.39175 MIN: 6.04 MIN: 6 MIN: 7.07 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OSPRay Studio Camera: 2 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.13 Camera: 2 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU a b c 100K 200K 300K 400K 500K 436532 438395 476659
QMCPACK Input: H4_ae OpenBenchmarking.org Total Execution Time - Seconds, Fewer Is Better QMCPACK 3.17.1 Input: H4_ae a b c 20 40 60 80 100 76.19 69.78 75.96 1. (CXX) g++ options: -fopenmp -foffload=disable -finline-limit=1000 -fstrict-aliasing -funroll-all-loops -ffast-math -march=native -O3 -lm -ldl
OpenVINO 2023.2.dev - Model: Face Detection Retail FP16-INT8 - Device: CPU (ms, fewer is better): a: 6.04 (min 3.47, max 20.67), b: 6.59 (min 3.81, max 34.71), c: 6.56 (min 3.98, max 36.21) [(CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl]
Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): a: 72.27, b: 66.28, c: 71.47
Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (items/sec, more is better): a: 27.66, b: 30.16, c: 27.97
OpenVINO 2023.2.dev - Model: Face Detection Retail FP16-INT8 - Device: CPU (FPS, more is better): a: 660.54, b: 606.16, c: 608.75
OpenVINO 2023.2.dev - Model: Face Detection FP16 - Device: CPU (ms, fewer is better): a: 3600.29 (min 3227.71, max 3788.77), b: 3719.15 (min 2883.49, max 4105.58), c: 3922.05 (min 2940.15, max 4178.87)
Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): a: 86.90, b: 92.70, c: 94.63
Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec, more is better): a: 23.01, b: 21.57, c: 21.13
OpenVINO 2023.2.dev - Model: Machine Translation EN To DE FP16 - Device: CPU (ms, fewer is better): a: 287.28 (min 200.6, max 301.65), b: 312.41 (min 250.11, max 403.55), c: 312.33 (min 237.04, max 398.05)
OpenVINO 2023.2.dev - Model: Weld Porosity Detection FP16 - Device: CPU (ms, fewer is better): a: 37.89 (min 19.79, max 50.35), b: 41.18 (min 21.43, max 94.08), c: 41.07 (min 22.76, max 99.47)
OpenVINO 2023.2.dev - Model: Machine Translation EN To DE FP16 - Device: CPU (FPS, more is better): a: 13.90, b: 12.79, c: 12.79
OpenVINO 2023.2.dev - Model: Weld Porosity Detection FP16 - Device: CPU (FPS, more is better): a: 105.49, b: 97.09, c: 97.33
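Because the listing mixes fewer-is-better latencies with more-is-better throughput figures, run-to-run comparisons are easier once everything is expressed as relative performance against run a. A minimal sketch follows, assuming the usual convention of inverting latencies before normalizing; the example values are the OpenVINO Face Detection Retail FP16-INT8 latency and FPS results above, and relative_perf is a hypothetical helper.

# Minimal sketch: normalize mixed-direction results so that, for every
# test, run a = 1.00 and higher is better.

def relative_perf(values, lower_is_better):
    """Scale a (run a, run b, run c) tuple so run a becomes 1.00."""
    if lower_is_better:
        values = [1.0 / v for v in values]  # invert latencies first
    base = values[0]
    return [v / base for v in values]

# OpenVINO Face Detection Retail FP16-INT8, Device: CPU (from the listing above)
latency_ms = (6.04, 6.59, 6.56)            # fewer is better
throughput_fps = (660.54, 606.16, 608.75)  # more is better

print(relative_perf(latency_ms, lower_is_better=True))      # roughly [1.00, 0.92, 0.92]
print(relative_perf(throughput_fps, lower_is_better=False)) # roughly [1.00, 0.92, 0.92]

Both views tell the same story for this model: runs b and c land around 8% behind run a.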
SVT-AV1 1.7 - Encoder Mode: Preset 8 - Input: Bosphorus 1080p (Frames Per Second, more is better): a: 42.13, b: 41.54, c: 38.79 [(CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq]
Cpuminer-Opt 23.5 - Algorithm: Quad SHA-256, Pyrite (kH/s, more is better): a: 8552.11, b: 7883.39, c: 7888.21 [(CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp]
OpenVINO 2023.2.dev - Model: Person Detection FP32 - Device: CPU (FPS, more is better): a: 11.59, b: 10.78, c: 10.72
OpenVINO 2023.2.dev - Model: Face Detection FP16-INT8 - Device: CPU (FPS, more is better): a: 4.05, b: 3.75, c: 3.75
OpenVINO 2023.2.dev - Model: Person Detection FP32 - Device: CPU (ms, fewer is better): a: 344.94 (min 331.68, max 379.01), b: 370.73 (min 305.49, max 506.4), c: 372.34 (min 298.79, max 478.39)
OpenVINO 2023.2.dev - Model: Vehicle Detection FP16 - Device: CPU (ms, fewer is better): a: 48.04 (min 39.51, max 63.6), b: 51.72 (min 25.15, max 115.97), c: 51.79 (min 27.85, max 114.6)
OpenVINO 2023.2.dev - Model: Vehicle Detection FP16 - Device: CPU (FPS, more is better): a: 83.21, b: 77.29, c: 77.19
Stress-NG 0.16.04 - Test: MEMFD (Bogo Ops/s, more is better): a: 151.83, b: 163.56, c: 155.80
OpenVINO 2023.2.dev - Model: Face Detection FP16-INT8 - Device: CPU (ms, fewer is better): a: 985.39 (min 643.01, max 1010.66), b: 1059.61 (min 796.92, max 1223.56), c: 1061.06 (min 813.72, max 1252.69)
QMCPACK 3.17.1 - Input: O_ae_pyscf_UHF (total execution time in seconds, fewer is better): a: 222.05, b: 222.61, c: 239.07
DaCapo Benchmark 23.11 - Java Test: Apache Lucene Search Index (msec, fewer is better): a: 3397, b: 3655, c: 3541
Stress-NG 0.16.04 - Test: x86_64 RdRand (Bogo Ops/s, more is better): a: 1190.72, b: 1175.31, c: 1107.03
Stress-NG 0.16.04 - Test: Mixed Scheduler (Bogo Ops/s, more is better): a: 3933.30, b: 3841.89, c: 3666.05
OpenVINO 2023.2.dev - Model: Road Segmentation ADAS FP16-INT8 - Device: CPU (FPS, more is better): a: 81.00, b: 75.63, c: 75.52
OpenVINO 2023.2.dev - Model: Road Segmentation ADAS FP16-INT8 - Device: CPU (ms, fewer is better): a: 49.36 (min 35.42, max 65.52), b: 52.86 (min 37.97, max 96.18), c: 52.94 (min 37.37, max 105.98)
FFmpeg 6.1 - Encoder: libx264 - Scenario: Live (FPS, more is better): a: 189.77, b: 185.32, c: 177.32 [(CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma]
OpenVINO 2023.2.dev - Model: Person Detection FP16 - Device: CPU (ms, fewer is better): a: 346.37 (min 328.56, max 384.23), b: 369.62 (min 213.11, max 517.81), c: 370.61 (min 200.89, max 476.06)
Stress-NG 0.16.04 - Test: Hash (Bogo Ops/s, more is better): a: 814204.59, b: 761130.78, c: 808351.87
OpenVINO 2023.2.dev - Model: Person Detection FP16 - Device: CPU (FPS, more is better): a: 11.52, b: 10.81, c: 10.77
OpenVINO 2023.2.dev - Model: Person Vehicle Bike Detection FP16 - Device: CPU (ms, fewer is better): a: 27.41 (min 19.48, max 43.38), b: 29.13 (min 17.21, max 62), c: 29.26 (min 17.66, max 65.48)
OpenVINO 2023.2.dev - Model: Person Vehicle Bike Detection FP16 - Device: CPU (FPS, more is better): a: 145.83, b: 137.20, c: 136.61
Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): a: 136.21, b: 145.16, c: 136.01
Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (items/sec, more is better): a: 14.68, b: 13.77, c: 14.70
OpenVINO 2023.2.dev - Model: Road Segmentation ADAS FP16 - Device: CPU (FPS, more is better): a: 41.06, b: 38.82, c: 38.72
OpenVINO 2023.2.dev - Model: Road Segmentation ADAS FP16 - Device: CPU (ms, fewer is better): a: 97.39 (min 89.34, max 113.31), b: 103.02 (min 73.24, max 176.26), c: 103.24 (min 72.38, max 172.52)
Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): a: 93.65, b: 88.48, c: 92.45
Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (items/sec, more is better): a: 21.35, b: 22.59, c: 21.62
DaCapo Benchmark 23.11 - Java Test: Spring Boot (msec, fewer is better): a: 5151, b: 5416, c: 5444
Embree 4.3 - Binary: Pathtracer ISPC - Model: Crown (Frames Per Second, more is better): a: 4.9067 (min 4.86, max 5), b: 4.9376 (min 4.9, max 5.01), c: 4.6745 (min 4.64, max 4.75)
DaCapo Benchmark 23.11 - Java Test: Batik SVG Toolkit (msec, fewer is better): a: 1392, b: 1321, c: 1362
OpenVINO 2023.2.dev - Model: Vehicle Detection FP16-INT8 - Device: CPU (ms, fewer is better): a: 17.97 (min 13.13, max 31.2), b: 18.93 (min 10.24, max 58.75), c: 18.88 (min 10.24, max 56.85)
OpenVINO 2023.2.dev - Model: Vehicle Detection FP16-INT8 - Device: CPU (FPS, more is better): a: 222.27, b: 211.09, c: 211.63
Neural Magic DeepSparse 1.5 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (items/sec, more is better): a: 3.3887, b: 3.5666, c: 3.3874
Neural Magic DeepSparse 1.5 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): a: 590.17, b: 560.72, c: 590.37
QMCPACK 3.17.1 - Input: Li2_STO_ae (total execution time in seconds, fewer is better): a: 687.22, b: 653.08, c: 675.81
OpenVINO 2023.2.dev - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (ms, fewer is better): a: 10.38 (min 5.52, max 22.1), b: 10.91 (min 5.56, max 40.59), c: 10.86 (min 5.41, max 42.52)
OpenVINO 2023.2.dev - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (FPS, more is better): a: 384.36, b: 366.17, c: 368.00
Neural Magic DeepSparse 1.5 - Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): a: 31.29, b: 31.27, c: 32.75
Neural Magic DeepSparse 1.5 - Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream (items/sec, more is better): a: 63.85, b: 63.89, c: 61.01
Stress-NG 0.16.04 - Test: IO_uring (Bogo Ops/s, more is better): a: 335053.51, b: 350882.75, c: 335942.42
QMCPACK 3.17.1 - Input: LiH_ae_MSD (total execution time in seconds, fewer is better): a: 83.22, b: 83.17, c: 87.02
Embree 4.3 - Binary: Pathtracer ISPC - Model: Asian Dragon Obj (Frames Per Second, more is better): a: 5.2742 (min 5.25, max 5.34), b: 5.2679 (min 5.24, max 5.33), c: 5.0433 (min 5.01, max 5.11)
Embree 4.3 - Binary: Pathtracer ISPC - Model: Asian Dragon (Frames Per Second, more is better): a: 6.1486 (min 6.11, max 6.24), b: 6.1218 (min 6.09, max 6.21), c: 5.8890 (min 5.86, max 5.95)
OpenVINO 2023.2.dev - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (ms, fewer is better): a: 1.37 (min 0.79, max 14.03), b: 1.40 (min 0.73, max 32.28), c: 1.43 (min 0.72, max 46.48)
C-Blosc 2.11 - Test: blosclz bitshuffle - Buffer Size: 32MB (MB/s, more is better): a: 8299.4, b: 8646.1, c: 8660.0 [(CC) gcc options: -std=gnu99 -O3 -ldl -lrt -lm]
Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): a: 469.68, b: 466.25, c: 485.92
C-Blosc 2.11 - Test: blosclz bitshuffle - Buffer Size: 128MB (MB/s, more is better): a: 6601.2, b: 6738.5, c: 6470.2
OpenVINO 2023.2.dev - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (FPS, more is better): a: 2874.25, b: 2834.17, c: 2763.40
VVenC 1.9 - Video Input: Bosphorus 4K - Video Preset: Faster (Frames Per Second, more is better): a: 3.843, b: 3.858, c: 3.711 [(CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto]
Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream (items/sec, more is better): a: 4.2577, b: 4.2784, c: 4.1156
Stress-NG 0.16.04 - Test: Zlib (Bogo Ops/s, more is better): a: 339.75, b: 328.60, c: 327.19
OpenVINO 2023.2.dev - Model: Face Detection Retail FP16 - Device: CPU (ms, fewer is better): a: 14.73 (min 7.6, max 27.51), b: 15.29 (min 7.76, max 45.51), c: 15.17 (min 7.75, max 46.13)
OpenVINO 2023.2.dev - Model: Face Detection Retail FP16 - Device: CPU (FPS, more is better): a: 271.08, b: 261.21, c: 263.43
C-Blosc 2.11 - Test: blosclz shuffle - Buffer Size: 32MB (MB/s, more is better): a: 8499.9, b: 8751.4, c: 8437.5
DaCapo Benchmark 23.11 - Java Test: Jython (msec, fewer is better): a: 4755, b: 4585, c: 4654
Stress-NG 0.16.04 - Test: Cloning (Bogo Ops/s, more is better): a: 773.96, b: 801.63, c: 773.35
DaCapo Benchmark 23.11 - Java Test: H2 Database Engine (msec, fewer is better): a: 5250, b: 5440, c: 5419
VVenC 1.9 - Video Input: Bosphorus 4K - Video Preset: Fast (Frames Per Second, more is better): a: 1.738, b: 1.733, c: 1.679
DaCapo Benchmark 23.11 - Java Test: Avrora AVR Simulation Framework (msec, fewer is better): a: 2554, b: 2524, c: 2612
oneDNN 3.3 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): a: 2.15486 (min 1.9), b: 2.22896 (min 1.94), c: 2.18576 (min 1.9)
QuantLib 1.32 - Configuration: Single-Threaded (MFLOPS, more is better): a: 3489.3, b: 3488.1, c: 3375.6 [(CXX) g++ options: -O3 -march=native -fPIE -pie]
FFmpeg 6.1 - Encoder: libx264 - Scenario: Platform (FPS, more is better): a: 50.35, b: 51.03, c: 49.38
DaCapo Benchmark 23.11 - Java Test: Apache Lucene Search Engine (msec, fewer is better): a: 6558, b: 6472, c: 6687
C-Blosc 2.11 - Test: blosclz noshuffle - Buffer Size: 16MB (MB/s, more is better): a: 8558.6, b: 8842.0, c: 8780.8
C-Blosc 2.11 - Test: blosclz shuffle - Buffer Size: 16MB (MB/s, more is better): a: 8877.9, b: 9133.5, c: 8841.6
C-Blosc 2.11 - Test: blosclz bitshuffle - Buffer Size: 8MB (MB/s, more is better): a: 9769.5, b: 10085.2, c: 10073.9
QuantLib 1.32 - Configuration: Multi-Threaded (MFLOPS, more is better): a: 13707.0, b: 13689.3, c: 13287.0
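The QuantLib pair above also gives a quick read on threading gains: dividing each run's multi-threaded MFLOPS by its single-threaded figure yields the effective speed-up. A minimal sketch of that arithmetic, with the values copied from the two QuantLib results above:

# Minimal sketch: per-run multi-threaded vs. single-threaded speed-up
# for QuantLib 1.32, using the MFLOPS figures from the listing above.

single = {"a": 3489.3, "b": 3488.1, "c": 3375.6}
multi = {"a": 13707.0, "b": 13689.3, "c": 13287.0}

for run in ("a", "b", "c"):
    speedup = multi[run] / single[run]
    print(f"run {run}: {speedup:.2f}x")  # roughly 3.9x in each run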
SVT-AV1 1.7 - Encoder Mode: Preset 4 - Input: Bosphorus 4K (Frames Per Second, more is better): a: 1.209, b: 1.211, c: 1.175
C-Blosc 2.11 - Test: blosclz bitshuffle - Buffer Size: 16MB (MB/s, more is better): a: 9000.6, b: 9267.8, c: 9052.0
libavif avifenc 1.0 - Encoder Speed: 0 (seconds, fewer is better): a: 381.27, b: 383.21, c: 392.47 [(CXX) g++ options: -O3 -fPIC -lm]
Neural Magic DeepSparse 1.5 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream (items/sec, more is better): a: 6.8296, b: 6.7467, c: 6.6347
Neural Magic DeepSparse 1.5 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): a: 292.80, b: 296.40, c: 301.40
libavif avifenc 1.0 - Encoder Speed: 2 (seconds, fewer is better): a: 168.17, b: 168.56, c: 172.81
DaCapo Benchmark 23.11 - Java Test: FOP Print Formatter (msec, fewer is better): a: 709, b: 702, c: 690
C-Blosc 2.11 - Test: blosclz noshuffle - Buffer Size: 8MB (MB/s, more is better): a: 9850.2, b: 10107.7, c: 9885.9
C-Blosc 2.11 - Test: blosclz shuffle - Buffer Size: 64MB (MB/s, more is better): a: 7801.5, b: 8005.2, c: 7884.2
Neural Magic DeepSparse 1.5 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): a: 501.90, b: 504.86, c: 514.94
C-Blosc 2.11 - Test: blosclz noshuffle - Buffer Size: 256MB (MB/s, more is better): a: 4883.8, b: 5007.6, c: 4922.9
Neural Magic DeepSparse 1.5 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (items/sec, more is better): a: 3.9794, b: 3.9535, c: 3.8837
DaCapo Benchmark 23.11 - Java Test: Zxing 1D/2D Barcode Image Processing (msec, fewer is better): a: 3123, b: 3195, c: 3198
BRL-CAD 7.36 - VGR Performance Metric (more is better): a: 54646, b: 54938, c: 53711 [(CXX) g++ options: -std=c++14 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -ltcl8.6 -lregex_brl -lz_brl -lnetpbm -ldl -lm -ltk8.6]
C-Blosc 2.11 - Test: blosclz shuffle - Buffer Size: 128MB (MB/s, more is better): a: 6629.3, b: 6779.3, c: 6665.5
Neural Magic DeepSparse 1.5 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (items/sec, more is better): a: 16.44, b: 16.31, c: 16.08
Neural Magic DeepSparse 1.5 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): a: 121.50, b: 122.59, c: 124.18
oneDNN 3.3 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): a: 7.88294 (min 7.71), b: 8.05175 (min 7.66), c: 7.95953 (min 7.73)
FFmpeg 6.1 - Encoder: libx265 - Scenario: Video On Demand (FPS, more is better): a: 26.42, b: 26.43, c: 25.88
C-Blosc 2.11 - Test: blosclz shuffle - Buffer Size: 256MB (MB/s, more is better): a: 5080.8, b: 5182.4, c: 5107.4
SVT-AV1 1.7 - Encoder Mode: Preset 13 - Input: Bosphorus 4K (Frames Per Second, more is better): a: 42.30, b: 42.44, c: 43.14
oneDNN 3.3 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, fewer is better): a: 9.87897 (min 9.55), b: 10.07040 (min 9.57), c: 10.03890 (min 9.6)
C-Blosc 2.11 - Test: blosclz bitshuffle - Buffer Size: 64MB (MB/s, more is better): a: 7691.6, b: 7799.7, c: 7839.9
FFmpeg 6.1 - Encoder: libx265 - Scenario: Platform (FPS, more is better): a: 26.42, b: 26.40, c: 25.93
C-Blosc 2.11 - Test: blosclz noshuffle - Buffer Size: 32MB (MB/s, more is better): a: 8124.8, b: 8273.4, c: 8221.1
libavif avifenc 1.0 - Encoder Speed: 10, Lossless (seconds, fewer is better): a: 9.678, b: 9.506, c: 9.597
QMCPACK 3.17.1 - Input: simple-H2O (total execution time in seconds, fewer is better): a: 31.22, b: 31.76, c: 31.75
CloverLeaf 1.3 - Input: clover_bm64_short (seconds, fewer is better): a: 206.97, b: 203.68, c: 203.49 [(F9X) gfortran options: -O3 -march=native -funroll-loops -fopenmp]
oneDNN 3.3 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): a: 2.40204 (min 2.29), b: 2.44303 (min 2.3), c: 2.40573 (min 2.3)
DaCapo Benchmark 23.11 - Java Test: Eclipse (msec, fewer is better): a: 14549, b: 14537, c: 14307
Stress-NG 0.16.04 - Test: Matrix 3D Math (Bogo Ops/s, more is better): a: 1503.69, b: 1497.30, c: 1479.59
VVenC 1.9 - Video Input: Bosphorus 1080p - Video Preset: Fast (Frames Per Second, more is better): a: 5.428, b: 5.415, c: 5.343
Timed GCC Compilation 13.2 - Time To Compile (seconds, fewer is better): a: 2337.79, b: 2342.49, c: 2374.92
Stress-NG 0.16.04 - Test: AVL Tree (Bogo Ops/s, more is better): a: 38.00, b: 38.26, c: 38.60
C-Blosc 2.11 - Test: blosclz bitshuffle - Buffer Size: 256MB (MB/s, more is better): a: 5086.8, b: 5087.2, c: 5010.3
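With the blosclz bitshuffle results now covering every buffer size from 8MB to 256MB, a single geometric-mean figure per run summarizes the throughput trend across sizes. A minimal sketch using run a's values from this listing; the geometric mean here is just the standard formula, not an official OpenBenchmarking aggregate.

# Minimal sketch: geometric mean of C-Blosc 2.11 "blosclz bitshuffle"
# throughput (MB/s) across buffer sizes for run a, values taken from
# the listing above.
import math

run_a_mbps = {
    "8MB": 9769.5,
    "16MB": 9000.6,
    "32MB": 8299.4,
    "64MB": 7691.6,
    "128MB": 6601.2,
    "256MB": 5086.8,
}

geomean = math.exp(sum(math.log(v) for v in run_a_mbps.values()) / len(run_a_mbps))
print(f"run a geometric mean: {geomean:.0f} MB/s")  # roughly 7600 MB/s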
libavif avifenc 1.0 - Encoder Speed: 6 (seconds, fewer is better): a: 14.73, b: 14.83, c: 14.95
OpenRadioss 2023.09.15 - Model: Cell Phone Drop Test (seconds, fewer is better): a: 245.14, b: 243.36, c: 241.64
DaCapo Benchmark 23.11 - Java Test: BioJava Biological Data Framework (msec, fewer is better): a: 6163, b: 6160, c: 6249
Timed FFmpeg Compilation 6.1 - Time To Compile (seconds, fewer is better): a: 128.96, b: 128.75, c: 130.59
FFmpeg 6.1 - Encoder: libx264 - Scenario: Upload (FPS, more is better): a: 13.29, b: 13.26, c: 13.11
CloverLeaf 1.3 - Input: clover_bm (seconds, fewer is better): a: 117.10, b: 116.20, c: 117.62
Stress-NG 0.16.04 - Test: Poll (Bogo Ops/s, more is better): a: 583755.41, b: 577995.91, c: 577081.19
libavif avifenc 1.0 - Encoder Speed: 6, Lossless (seconds, fewer is better): a: 22.31, b: 22.19, c: 22.45
OpenRadioss 2023.09.15 - Model: Bird Strike on Windshield (seconds, fewer is better): a: 653.24, b: 650.51, c: 657.96
oneDNN 3.3 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): a: 2.45216 (min 2.21), b: 2.44729 (min 2.21), c: 2.47290 (min 2.21)
FFmpeg 6.1 - Encoder: libx265 - Scenario: Upload (FPS, more is better): a: 12.63, b: 12.69, c: 12.76
Timed Gem5 Compilation 23.0.1 - Time To Compile (seconds, fewer is better): a: 1296.13, b: 1299.28, c: 1309.17
oneDNN 3.3 - Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): a: 42.98 (min 41.08), b: 43.39 (min 41.62), c: 43.16 (min 41.01)
DaCapo Benchmark 23.11 - Java Test: Apache Tomcat (msec, fewer is better): a: 14326, b: 14327, c: 14456
Stress-NG 0.16.04 - Test: Crypto (Bogo Ops/s, more is better): a: 7870.33, b: 7909.16, c: 7843.28
SVT-AV1 1.7 - Encoder Mode: Preset 13 - Input: Bosphorus 1080p (Frames Per Second, more is better): a: 290.54, b: 289.78, c: 288.16
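For the video encoders it can be easier to reason about per-frame cost than raw FPS: 1000 / FPS gives the average milliseconds spent per frame. A minimal sketch of that conversion, using the SVT-AV1 Preset 13 (1080p) and Preset 4 (4K) run-a figures from this listing.

# Minimal sketch: convert encoder throughput (FPS) into average time
# per frame (ms). Example values are SVT-AV1 1.7 run-a results from
# the listing above.

def ms_per_frame(fps):
    return 1000.0 / fps

print(f"Preset 13, 1080p: {ms_per_frame(290.54):.2f} ms/frame")  # about 3.4 ms
print(f"Preset 4, 4K:     {ms_per_frame(1.209):.0f} ms/frame")   # about 827 ms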
oneDNN 3.3 - Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): a: 5.96832 (min 4.97), b: 5.92005 (min 4.94), c: 5.93496 (min 5.03)
Cpuminer-Opt 23.5 - Algorithm: Magi (kH/s, more is better): a: 79.32, b: 78.74, c: 78.68
OSPRay Studio 0.13 - Camera: 2 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU (ms, fewer is better): a: 31575, b: 31507, c: 31324
OpenRadioss 2023.09.15 - Model: Rubber O-Ring Seal Installation (seconds, fewer is better): a: 482.80, b: 479.06, c: 480.70
Neural Magic DeepSparse 1.5 - Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): a: 36.73, b: 36.80, c: 37.01
Neural Magic DeepSparse 1.5 - Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream (items/sec, more is better): a: 54.41, b: 54.31, c: 53.99
SVT-AV1 1.7 - Encoder Mode: Preset 8 - Input: Bosphorus 4K (Frames Per Second, more is better): a: 14.86, b: 14.84, c: 14.76
Embree 4.3 - Binary: Pathtracer - Model: Asian Dragon Obj (Frames Per Second, more is better): a: 4.4332 (min 4.41, max 4.52), b: 4.4038 (min 4.38, max 4.47), c: 4.4308 (min 4.4, max 4.51)
DaCapo Benchmark 23.11 - Java Test: Apache Cassandra (msec, fewer is better): a: 7845, b: 7797, c: 7848
oneDNN 3.3 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, fewer is better): a: 7128.22 (min 7074.03), b: 7138.73 (min 7092.66), c: 7174.36 (min 7126.73)
DaCapo Benchmark 23.11 - Java Test: GraphChi (msec, fewer is better): a: 4418, b: 4400, c: 4428
FFmpeg 6.1 - Encoder: libx264 - Scenario: Video On Demand (FPS, more is better): a: 49.70, b: 50.01, c: 49.79
oneDNN 3.3 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, fewer is better): a: 8.49762 (min 8.21), b: 8.44523 (min 8.17), c: 8.47369 (min 8.23)
FFmpeg 6.1 - Encoder: libx265 - Scenario: Live (FPS, more is better): a: 79.05, b: 79.50, c: 79.15
DaCapo Benchmark 23.11 - Java Test: jMonkeyEngine (msec, fewer is better): a: 6914, b: 6893, c: 6876
Neural Magic DeepSparse 1.5 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): a: 37.08, b: 37.15, c: 37.24
Neural Magic DeepSparse 1.5 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (items/sec, more is better): a: 53.89, b: 53.77, c: 53.66
C-Blosc 2.11 - Test: blosclz noshuffle - Buffer Size: 128MB (MB/s, more is better): a: 6398.4, b: 6413.8, c: 6391.1
Embree 4.3 - Binary: Pathtracer - Model: Crown (Frames Per Second, more is better): a: 4.1966 (min 4.17, max 4.26), b: 4.2047 (min 4.18, max 4.26), c: 4.2112 (min 4.19, max 4.27)
oneDNN 3.3 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, fewer is better): a: 6.08399 (min 5.89), b: 6.09457 (min 5.85), c: 6.07427 (min 5.87)
oneDNN 3.3 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): a: 1.58930 (min 1.37), b: 1.59378 (min 1.39), c: 1.59090 (min 1.4)
easyWave r34 - Input: e2Asean Grid + BengkuluSept2007 Source - Time: 1200 (seconds, fewer is better): a: 189.03, b: 188.68, c: 188.52 [(CXX) g++ options: -O3 -fopenmp]
DaCapo Benchmark 23.11 - Java Test: Tradesoap (msec, fewer is better): a: 2943, b: 2939, c: 2947
C-Blosc 2.11 - Test: blosclz noshuffle - Buffer Size: 64MB (MB/s, more is better): a: 7456.9, b: 7472.9, c: 7452.8
oneDNN 3.3 - Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): a: 50.88 (min 50.58), b: 51.01 (min 50.58), c: 50.97 (min 50.58)
VVenC 1.9 - Video Input: Bosphorus 1080p - Video Preset: Faster (Frames Per Second, more is better): a: 13.20, b: 13.21, c: 13.17
oneDNN 3.3 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): a: 3671.69 (min 3629.79), b: 3680.74 (min 3643.32), c: 3680.01 (min 3636.2)
DaCapo Benchmark 23.11 - Java Test: Apache Kafka (msec, fewer is better): a: 5346, b: 5341, c: 5351
Embree 4.3 - Binary: Pathtracer - Model: Asian Dragon (Frames Per Second, more is better): a: 4.8695 (min 4.85, max 4.91), b: 4.8785 (min 4.86, max 4.92), c: 4.8776 (min 4.86, max 4.94)
oneDNN 3.3 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): a: 3703.86 (min 3673.82), b: 3697.25 (min 3666.72), c: 3699.85 (min 3668.59)
oneDNN 3.3 - Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): a: 35.72 (min 35.34), b: 35.78 (min 35.46), c: 35.73 (min 35.41)
oneDNN 3.3 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): a: 7189.12 (min 7157.19), b: 7178.37 (min 7134.3), c: 7178.94 (min 7128.67)
oneDNN 3.3 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, fewer is better): a: 3701.08 (min 3672.95), b: 3699.05 (min 3670.29), c: 3703.50 (min 3669.47)
oneDNN 3.3 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): a: 7179.51 (min 7137.59), b: 7178.23 (min 7137.21), c: 7176.24 (min 7139.04)
Cpuminer-Opt 23.5 - Algorithm: Triple SHA-256, Onecoin (kH/s, more is better): a: 9651.39, b: 9651.58, c: 9653.77
Intel Open Image Denoise 2.1 - Run: RTLightmap.hdr.4096x4096 - Device: CPU-Only (Images/Sec, more is better): a: 0.07, b: 0.07, c: 0.07
Intel Open Image Denoise 2.1 - Run: RT.ldr_alb_nrm.3840x2160 - Device: CPU-Only (Images/Sec, more is better): a: 0.15, b: 0.15, c: 0.15
Intel Open Image Denoise 2.1 - Run: RT.hdr_alb_nrm.3840x2160 - Device: CPU-Only (Images/Sec, more is better): a: 0.15, b: 0.15, c: 0.15
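Since every result above follows the same one-line pattern, the listing is straightforward to post-process, for example to flag tests where the three runs disagree by more than a few percent. A minimal sketch, assuming the "Name (unit, direction): a: x, b: y, c: z" format used in this listing and an arbitrary 5% spread threshold; neither the regex nor the threshold comes from any Phoronix Test Suite tooling.

# Minimal sketch: parse result lines in the format used above and flag
# tests whose run-to-run spread exceeds a threshold. The regex and the
# 5% threshold are assumptions for illustration only.
import re

LINE = re.compile(r"^(?P<name>.+?) \((?P<unit>[^)]+)\): a: (?P<a>[\d.]+).*?"
                  r"b: (?P<b>[\d.]+).*?c: (?P<c>[\d.]+)")

def flag_spread(lines, threshold=0.05):
    for line in lines:
        m = LINE.match(line)
        if not m:
            continue
        vals = [float(m.group(k)) for k in ("a", "b", "c")]
        spread = (max(vals) - min(vals)) / min(vals)
        if spread > threshold:
            yield m.group("name"), spread

example = ["Stress-NG 0.16.04 - Test: Atomic (Bogo Ops/s, more is better): "
           "a: 297.71, b: 298.85, c: 325.29"]
for name, spread in flag_spread(example):
    print(f"{name}: {spread:.1%} spread")  # about 9.3% spread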
Phoronix Test Suite v10.8.5