xeon okt

Intel Xeon E5-2609 v4 testing with an MSI X99A RAIDER (MS-7885) v5.0 (P.50 BIOS) and llvmpipe on Ubuntu 20.04 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2210269-NE-XEONOKT7916&rdt&grr.
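The result identifier in the URL above (2210269-NE-XEONOKT7916) can be used to reproduce this comparison locally. A minimal sketch, assuming the phoronix-test-suite client is installed and the OpenBenchmarking.org result is still publicly available:

    # Download this result file and run the same tests locally, merging the
    # new run into the downloaded comparison (standard Phoronix Test Suite flow):
    phoronix-test-suite benchmark 2210269-NE-XEONOKT7916

    # Render the downloaded result file as plain text for inspection:
    phoronix-test-suite result-file-to-text 2210269-NE-XEONOKT7916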

System details (identical for configurations A and B):

Processor: Intel Xeon E5-2609 v4 @ 1.70GHz (8 Cores)
Motherboard: MSI X99A RAIDER (MS-7885) v5.0 (P.50 BIOS)
Chipset: Intel Xeon E7 v4/Xeon
Memory: 16GB
Disk: 256GB CORSAIR FORCE LX
Graphics: llvmpipe
Audio: Realtek ALC892
Network: Intel I218-V
OS: Ubuntu 20.04
Kernel: 5.9.0-050900rc6daily20200926-generic (x86_64) 20200925
Desktop: GNOME Shell 3.36.2
Display Server: X Server 1.20.8
OpenGL: 3.3 Mesa 20.0.4 (LLVM 9.0.1 256 bits)
Compiler: GCC 9.3.0
File-System: ext4
Screen Resolution: 1024x768

Kernel Details: Transparent Huge Pages: madvise

Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Processor Details: Scaling Governor: intel_cpufreq ondemand - CPU Microcode: 0xb000038

Python Details: Python 2.7.18rc1 + Python 3.8.2

Security Details: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Mitigation of PTE Inversion; VMX: conditional cache flushes SMT disabled + mds: Mitigation of Clear buffers; SMT disabled + meltdown: Mitigation of PTI + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full generic retpoline IBPB: conditional IBRS_FW STIBP: disabled RSB filling + srbds: Not affected + tsx_async_abort: Mitigation of Clear buffers; SMT disabled
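The kernel and processor details above (Transparent Huge Pages policy, scaling governor, microcode) can be verified on a comparable system through standard Linux interfaces; a minimal sketch, assuming a recent kernel exposing these sysfs/procfs paths:

    # Transparent Huge Pages policy (reported above as "madvise"):
    cat /sys/kernel/mm/transparent_hugepage/enabled

    # CPU frequency scaling driver and governor (reported above as intel_cpufreq ondemand):
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor

    # Microcode revision (reported above as 0xb000038):
    grep -m1 microcode /proc/cpuinfo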

Results overview: side-by-side values for configurations A and B across all tests; the individual per-test results follow below.

Timed Linux Kernel Compilation

Build: allmodconfig

Timed Linux Kernel Compilation 5.18 (Seconds, Fewer Is Better): A: 3885.79, B: 3881.58

Timed Node.js Compilation

Time To Compile

Timed Node.js Compilation 18.8 (Seconds, Fewer Is Better): A: 2040.12, B: 2039.40

Timed Gem5 Compilation

Time To Compile

Timed Gem5 Compilation 21.2 (Seconds, Fewer Is Better): A: 1396.87, B: 1397.05

BRL-CAD

VGR Performance Metric

BRL-CAD 7.32.6 (VGR Performance Metric, More Is Better): A: 40981, B: 41465. 1. (CXX) g++ options: -std=c++11 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -pedantic -pthread -ldl -lm

JPEG XL libjxl

Input: JPEG - Quality: 100

JPEG XL libjxl 0.7 (MP/s, More Is Better): A: 0.27, B: 0.27. 1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -pthread -latomic

JPEG XL libjxl

Input: PNG - Quality: 100

JPEG XL libjxl 0.7 (MP/s, More Is Better): A: 0.27, B: 0.27. 1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -pthread -latomic

Timed CPython Compilation

Build Configuration: Released Build, PGO + LTO Optimized

Timed CPython Compilation 3.10.6 (Seconds, Fewer Is Better): A: 878.60, B: 877.31

libavif avifenc

Encoder Speed: 0

libavif avifenc 0.11 (Seconds, Fewer Is Better): A: 579.16, B: 588.84. 1. (CXX) g++ options: -O3 -fPIC -lm

Blender

Blend File: BMW27 - Compute: CPU-Only

Blender 3.3 (Seconds, Fewer Is Better): A: 479.30, B: 479.79

AOM AV1

Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4K

AOM AV1 3.5 (Frames Per Second, More Is Better): A: 0.05, B: 0.05. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

AOM AV1

Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K

AOM AV1 3.5 (Frames Per Second, More Is Better): A: 1.84, B: 1.85. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

GROMACS

Implementation: MPI CPU - Input: water_GMX50_bare

GROMACS 2022.1 (Ns Per Day, More Is Better): A: 0.456, B: 0.456. 1. (CXX) g++ options: -O3 -pthread

JPEG XL libjxl

Input: JPEG - Quality: 80

JPEG XL libjxl 0.7 (MP/s, More Is Better): A: 3.33, B: 3.33. 1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -pthread -latomic

JPEG XL libjxl

Input: PNG - Quality: 80

JPEG XL libjxl 0.7 (MP/s, More Is Better): A: 3.45, B: 3.45. 1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -pthread -latomic

ClickHouse

100M Rows Web Analytics Dataset, Third Run

ClickHouse 22.5.4.19 (Queries Per Minute, Geo Mean, More Is Better): A: 50.33 (MIN: 4.81 / MAX: 7500), B: 50.82 (MIN: 4.97 / MAX: 7500). 1. ClickHouse server version 22.5.4.19 (official build).

ClickHouse

100M Rows Web Analytics Dataset, Second Run

ClickHouse 22.5.4.19 (Queries Per Minute, Geo Mean, More Is Better): A: 49.86 (MIN: 5.13 / MAX: 5000), B: 49.68 (MIN: 4.93 / MAX: 7500). 1. ClickHouse server version 22.5.4.19 (official build).

ClickHouse

100M Rows Web Analytics Dataset, First Run / Cold Cache

ClickHouse 22.5.4.19 (Queries Per Minute, Geo Mean, More Is Better): A: 44.69 (MIN: 4.76 / MAX: 6666.67), B: 43.80 (MIN: 4.74 / MAX: 4615.38). 1. ClickHouse server version 22.5.4.19 (official build).

NCNN

Target: CPU - Model: FastestDet

NCNN 20220729 (ms, Fewer Is Better): A: 8.41 (MIN: 8.33 / MAX: 13.21), B: 8.44 (MIN: 8.39 / MAX: 9.63). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN

Target: CPU - Model: vision_transformer

NCNN 20220729 (ms, Fewer Is Better): A: 761.24 (MIN: 755.05 / MAX: 780.87), B: 760.24 (MIN: 753.97 / MAX: 772.4). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN

Target: CPU - Model: regnety_400m

NCNN 20220729 (ms, Fewer Is Better): A: 20.48 (MIN: 20.41 / MAX: 20.9), B: 20.59 (MIN: 20.52 / MAX: 21.93). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN

Target: CPU - Model: squeezenet_ssd

NCNN 20220729 (ms, Fewer Is Better): A: 21.59 (MIN: 21.41 / MAX: 42.91), B: 21.67 (MIN: 21.53 / MAX: 23.01). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN

Target: CPU - Model: yolov4-tiny

NCNN 20220729 (ms, Fewer Is Better): A: 35.17 (MIN: 34.61 / MAX: 36.08), B: 35.61 (MIN: 34.51 / MAX: 145.79). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN

Target: CPU - Model: resnet50

NCNN 20220729 (ms, Fewer Is Better): A: 27.45 (MIN: 27.29 / MAX: 29.43), B: 27.99 (MIN: 27.87 / MAX: 28.35). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN

Target: CPU - Model: alexnet

NCNN 20220729 (ms, Fewer Is Better): A: 9.69 (MIN: 9.64 / MAX: 9.83), B: 9.67 (MIN: 9.62 / MAX: 9.87). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN

Target: CPU - Model: resnet18

NCNN 20220729 (ms, Fewer Is Better): A: 11.76 (MIN: 11.68 / MAX: 12.14), B: 11.68 (MIN: 11.57 / MAX: 13.01). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN

Target: CPU - Model: vgg16

NCNN 20220729 (ms, Fewer Is Better): A: 55.89 (MIN: 55.71 / MAX: 57.17), B: 56.05 (MIN: 55.71 / MAX: 75.46). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN

Target: CPU - Model: googlenet

NCNN 20220729 (ms, Fewer Is Better): A: 17.16 (MIN: 17.05 / MAX: 18.6), B: 17.16 (MIN: 17.04 / MAX: 19). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN

Target: CPU - Model: blazeface

NCNN 20220729 (ms, Fewer Is Better): A: 2.07 (MIN: 2.04 / MAX: 2.17), B: 2.07 (MIN: 2.04 / MAX: 2.18). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN

Target: CPU - Model: efficientnet-b0

NCNN 20220729 (ms, Fewer Is Better): A: 13.53 (MIN: 13.48 / MAX: 13.64), B: 13.54 (MIN: 13.49 / MAX: 13.62). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN

Target: CPU - Model: mnasnet

NCNN 20220729 (ms, Fewer Is Better): A: 6.68 (MIN: 6.59 / MAX: 12.03), B: 6.67 (MIN: 6.61 / MAX: 7.73). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN

Target: CPU - Model: shufflenet-v2

NCNN 20220729 (ms, Fewer Is Better): A: 6.81 (MIN: 6.7 / MAX: 25.84), B: 6.77 (MIN: 6.73 / MAX: 6.87). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN

Target: CPU-v3-v3 - Model: mobilenet-v3

NCNN 20220729 (ms, Fewer Is Better): A: 6.23 (MIN: 6.18 / MAX: 6.3), B: 6.24 (MIN: 6.19 / MAX: 6.35). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN

Target: CPU-v2-v2 - Model: mobilenet-v2

NCNN 20220729 (ms, Fewer Is Better): A: 7.04 (MIN: 6.98 / MAX: 7.12), B: 7.06 (MIN: 7 / MAX: 7.4). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN

Target: CPU - Model: mobilenet

NCNN 20220729 (ms, Fewer Is Better): A: 23.87 (MIN: 23.26 / MAX: 24.39), B: 23.98 (MIN: 23.02 / MAX: 46.65). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

Timed Linux Kernel Compilation

Build: defconfig

Timed Linux Kernel Compilation 5.18 (Seconds, Fewer Is Better): A: 311.83, B: 311.55

Mobile Neural Network

Model: inception-v3

Mobile Neural Network 2.1 (ms, Fewer Is Better): A: 54.10 (MIN: 53.89 / MAX: 101.31), B: 53.99 (MIN: 53.83 / MAX: 73.18). 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network

Model: mobilenet-v1-1.0

Mobile Neural Network 2.1 (ms, Fewer Is Better): A: 5.773 (MIN: 5.74 / MAX: 7.01), B: 5.771 (MIN: 5.73 / MAX: 25.01). 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network

Model: MobileNetV2_224

Mobile Neural Network 2.1 (ms, Fewer Is Better): A: 5.300 (MIN: 5.27 / MAX: 6.53), B: 5.287 (MIN: 5.25 / MAX: 10.12). 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network

Model: SqueezeNetV1.0

Mobile Neural Network 2.1 (ms, Fewer Is Better): A: 7.646 (MIN: 7.61 / MAX: 10.41), B: 7.640 (MIN: 7.6 / MAX: 8.84). 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network

Model: resnet-v2-50

Mobile Neural Network 2.1 (ms, Fewer Is Better): A: 40.68 (MIN: 40.52 / MAX: 58.97), B: 40.49 (MIN: 40.38 / MAX: 59.76). 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network

Model: squeezenetv1.1

Mobile Neural Network 2.1 (ms, Fewer Is Better): A: 5.328 (MIN: 5.29 / MAX: 7.62), B: 5.337 (MIN: 5.31 / MAX: 7.55). 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network

Model: mobilenetV3

Mobile Neural Network 2.1 (ms, Fewer Is Better): A: 2.799 (MIN: 2.77 / MAX: 3.18), B: 2.811 (MIN: 2.79 / MAX: 3.18). 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network

Model: nasnet

Mobile Neural Network 2.1 (ms, Fewer Is Better): A: 21.52 (MIN: 21.31 / MAX: 40.96), B: 21.55 (MIN: 21.33 / MAX: 40.77). 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

JPEG XL libjxl

Input: JPEG - Quality: 90

JPEG XL libjxl 0.7 (MP/s, More Is Better): A: 3.24, B: 3.26. 1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -pthread -latomic

Timed Erlang/OTP Compilation

Time To Compile

Timed Erlang/OTP Compilation 25.0 (Seconds, Fewer Is Better): A: 293.82, B: 293.51

JPEG XL libjxl

Input: PNG - Quality: 90

JPEG XL libjxl 0.7 (MP/s, More Is Better): A: 3.35, B: 3.38. 1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -pthread -latomic

libavif avifenc

Encoder Speed: 2

libavif avifenc 0.11 (Seconds, Fewer Is Better): A: 270.48, B: 272.75. 1. (CXX) g++ options: -O3 -fPIC -lm

ASTC Encoder

Preset: Exhaustive

ASTC Encoder 4.0 (MT/s, More Is Better): A: 0.1856, B: 0.1857. 1. (CXX) g++ options: -O3 -flto -pthread

AOM AV1

Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K

AOM AV1 3.5 (Frames Per Second, More Is Better): A: 3.38, B: 3.36. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

Timed PHP Compilation

Time To Compile

Timed PHP Compilation 8.1.9 (Seconds, Fewer Is Better): A: 202.30, B: 202.53

JPEG XL Decoding libjxl

CPU Threads: 1

JPEG XL Decoding libjxl 0.7 (MP/s, More Is Better): A: 15.28, B: 15.45

Chia Blockchain VDF

Test: Square Plain C++

Chia Blockchain VDF 1.0.7 (IPS, More Is Better): A: 54200, B: 54100. 1. (CXX) g++ options: -flto -no-pie -lgmpxx -lgmp -lboost_system -pthread

Node.js V8 Web Tooling Benchmark

Node.js V8 Web Tooling Benchmark (runs/s, More Is Better): A: 5.04, B: 5.00

AOM AV1

Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p

AOM AV1 3.5 (Frames Per Second, More Is Better): A: 4.40, B: 4.41. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

ASTC Encoder

Preset: Thorough

ASTC Encoder 4.0 (MT/s, More Is Better): A: 1.8866, B: 1.8842. 1. (CXX) g++ options: -O3 -flto -pthread

AOM AV1

Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p

AOM AV1 3.5 (Frames Per Second, More Is Better): A: 0.15, B: 0.15. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

Timed MPlayer Compilation

Time To Compile

Timed MPlayer Compilation 1.5 (Seconds, Fewer Is Better): A: 119.90, B: 119.70

Kvazaar

Video Input: Bosphorus 4K - Video Preset: Very Fast

Kvazaar 2.1 (Frames Per Second, More Is Better): A: 5.12, B: 5.17. 1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Natron

Input: Spaceship

Natron 2.4.3 (FPS, More Is Better): A: 0.9, B: 0.9

Chia Blockchain VDF

Test: Square Assembly Optimized

Chia Blockchain VDF 1.0.7 (IPS, More Is Better): A: 77800, B: 78500. 1. (CXX) g++ options: -flto -no-pie -lgmpxx -lgmp -lboost_system -pthread

oneDNN

Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 2.7 (ms, Fewer Is Better): A: 7894.49 (MIN: 7862.58), B: 7935.85 (MIN: 7928.92). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU

oneDNN 2.7 (ms, Fewer Is Better): A: 7868.71 (MIN: 7860.85), B: 7890.48 (MIN: 7887.35). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU

oneDNN 2.7 (ms, Fewer Is Better): A: 7874.27 (MIN: 7868.63), B: 7866.79 (MIN: 7862.38). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

ASTC Encoder

Preset: Fast

ASTC Encoder 4.0 (MT/s, More Is Better): A: 40.45, B: 40.47. 1. (CXX) g++ options: -O3 -flto -pthread

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.1 (ms/batch, Fewer Is Better): A: 367.56, B: 368.57

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.1 (items/sec, More Is Better): A: 10.86, B: 10.83

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU

oneDNN 2.7 (ms, Fewer Is Better): A: 4341.02 (MIN: 4203.37), B: 4112.59 (MIN: 4104.03). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 2.7 (ms, Fewer Is Better): A: 4083.82 (MIN: 4076.89), B: 4081.60 (MIN: 4073.9). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU

oneDNN 2.7 (ms, Fewer Is Better): A: 4077.01 (MIN: 4074.41), B: 4098.73 (MIN: 4092.22). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.1 (ms/batch, Fewer Is Better): A: 1497.94, B: 1503.50

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.1 (items/sec, More Is Better): A: 2.6413, B: 2.6054

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.1 (ms/batch, Fewer Is Better): A: 1512.78, B: 1501.10

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.1 (items/sec, More Is Better): A: 2.6288, B: 2.6644

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.1 (ms/batch, Fewer Is Better): A: 342.54, B: 341.94

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.1 (items/sec, More Is Better): A: 11.67, B: 11.66

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.1 (ms/batch, Fewer Is Better): A: 100.71, B: 100.54

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.1 (items/sec, More Is Better): A: 9.9280, B: 9.9442

Kvazaar

Video Input: Bosphorus 4K - Video Preset: Ultra Fast

Kvazaar 2.1 (Frames Per Second, More Is Better): A: 8.97, B: 8.99. 1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

AOM AV1

Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p

AOM AV1 3.5 (Frames Per Second, More Is Better): A: 10.82, B: 10.73. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

JPEG XL Decoding libjxl

CPU Threads: All

JPEG XL Decoding libjxl 0.7 (MP/s, More Is Better): A: 93.25, B: 94.94

AOM AV1

Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K

AOM AV1 3.5 (Frames Per Second, More Is Better): A: 9.60, B: 9.81. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.1 (ms/batch, Fewer Is Better): A: 384.21, B: 383.45

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.1 (items/sec, More Is Better): A: 2.6027, B: 2.6078

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.1 (ms/batch, Fewer Is Better): A: 385.60, B: 385.84

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.1 (items/sec, More Is Better): A: 2.5933, B: 2.5917

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.1 (ms/batch, Fewer Is Better): A: 99.66, B: 99.92

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.1 (items/sec, More Is Better): A: 10.03, B: 10.01

GraphicsMagick

Operation: Noise-Gaussian

GraphicsMagick 1.3.38 (Iterations Per Minute, More Is Better): A: 78, B: 77. 1. (CC) gcc options: -fopenmp -O2 -pthread -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

GraphicsMagick

Operation: Enhanced

GraphicsMagick 1.3.38 (Iterations Per Minute, More Is Better): A: 71, B: 71. 1. (CC) gcc options: -fopenmp -O2 -pthread -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

GraphicsMagick

Operation: Sharpen

GraphicsMagick 1.3.38 (Iterations Per Minute, More Is Better): A: 43, B: 43. 1. (CC) gcc options: -fopenmp -O2 -pthread -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

GraphicsMagick

Operation: Swirl

GraphicsMagick 1.3.38 (Iterations Per Minute, More Is Better): A: 104, B: 104. 1. (CC) gcc options: -fopenmp -O2 -pthread -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

GraphicsMagick

Operation: Resizing

GraphicsMagick 1.3.38 (Iterations Per Minute, More Is Better): A: 312, B: 315. 1. (CC) gcc options: -fopenmp -O2 -pthread -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

GraphicsMagick

Operation: Rotate

GraphicsMagick 1.3.38 (Iterations Per Minute, More Is Better): A: 335, B: 345. 1. (CC) gcc options: -fopenmp -O2 -pthread -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

GraphicsMagick

Operation: HWB Color Space

GraphicsMagick 1.3.38 (Iterations Per Minute, More Is Better): A: 371, B: 384. 1. (CC) gcc options: -fopenmp -O2 -pthread -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

Timed CPython Compilation

Build Configuration: Default

Timed CPython Compilation 3.10.6 (Seconds, Fewer Is Better): A: 55.47, B: 55.34

FLAC Audio Encoding

WAV To FLAC

FLAC Audio Encoding 1.4 (Seconds, Fewer Is Better): A: 52.62, B: 52.85. 1. (CXX) g++ options: -O3 -fvisibility=hidden -logg -lm

7-Zip Compression

Test: Decompression Rating

7-Zip Compression 22.01 (MIPS, More Is Better): A: 15114, B: 15213. 1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

7-Zip Compression

Test: Compression Rating

7-Zip Compression 22.01 (MIPS, More Is Better): A: 22928, B: 22984. 1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.1 (ms/batch, Fewer Is Better): A: 172.69, B: 172.28

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.1 (items/sec, More Is Better): A: 23.11, B: 23.21

Neural Magic DeepSparse

Model: CV Detection,YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.1 (ms/batch, Fewer Is Better): A: 237.49, B: 237.72

Neural Magic DeepSparse

Model: CV Detection,YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.1 (items/sec, More Is Better): A: 16.83, B: 16.82

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.1 (ms/batch, Fewer Is Better): A: 50.13, B: 49.99

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.1 (items/sec, More Is Better): A: 19.94, B: 20.00

ASTC Encoder

Preset: Medium

ASTC Encoder 4.0 (MT/s, More Is Better): A: 14.80, B: 14.82. 1. (CXX) g++ options: -O3 -flto -pthread

Neural Magic DeepSparse

Model: CV Detection,YOLOv5s COCO - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.1 (ms/batch, Fewer Is Better): A: 61.22, B: 61.29

Neural Magic DeepSparse

Model: CV Detection,YOLOv5s COCO - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.1 (items/sec, More Is Better): A: 16.33, B: 16.31

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.1 (ms/batch, Fewer Is Better): A: 118.00, B: 118.29

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.1 (items/sec, More Is Better): A: 33.80, B: 33.75

libavif avifenc

Encoder Speed: 6, Lossless

libavif avifenc 0.11 (Seconds, Fewer Is Better): A: 40.03, B: 39.84. 1. (CXX) g++ options: -O3 -fPIC -lm

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.1 (ms/batch, Fewer Is Better): A: 32.73, B: 32.70

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.1 (items/sec, More Is Better): A: 30.54, B: 30.57

AOM AV1

Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K

AOM AV1 3.5 (Frames Per Second, More Is Better): A: 16.73, B: 16.63. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

Parallel BZIP2 Compression

FreeBSD-13.0-RELEASE-amd64-memstick.img Compression

Parallel BZIP2 Compression 1.1.13 (Seconds, Fewer Is Better): A: 32.31, B: 32.23. 1. (CXX) g++ options: -O2 -pthread -lbz2 -lpthread

AOM AV1

Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p

AOM AV1 3.5 (Frames Per Second, More Is Better): A: 21.44, B: 21.19. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

AOM AV1

Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K

AOM AV1 3.5 (Frames Per Second, More Is Better): A: 23.51, B: 23.41. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

libavif avifenc

Encoder Speed: 6

libavif avifenc 0.11 (Seconds, Fewer Is Better): A: 26.99, B: 27.16. 1. (CXX) g++ options: -O3 -fPIC -lm

AOM AV1

Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K

AOM AV1 3.5 (Frames Per Second, More Is Better): A: 24.34, B: 24.23. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

Kvazaar

Video Input: Bosphorus 1080p - Video Preset: Very Fast

Kvazaar 2.1 (Frames Per Second, More Is Better): A: 22.90, B: 22.94. 1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU

oneDNN 2.7 (ms, Fewer Is Better): A: 15.99 (MIN: 15.86), B: 15.97 (MIN: 15.88). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU

oneDNN 2.7 (ms, Fewer Is Better): A: 8.39234 (MIN: 8.31), B: 8.34261 (MIN: 8.3). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

libavif avifenc

Encoder Speed: 10, Lossless

libavif avifenc 0.11 (Seconds, Fewer Is Better): A: 18.16, B: 18.08. 1. (CXX) g++ options: -O3 -fPIC -lm

Kvazaar

Video Input: Bosphorus 1080p - Video Preset: Ultra Fast

Kvazaar 2.1 (Frames Per Second, More Is Better): A: 38.73, B: 39.08. 1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

oneDNN

Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU

oneDNN 2.7 (ms, Fewer Is Better): A: 8.12857 (MIN: 8.08), B: 8.17729 (MIN: 8.1). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU

oneDNN 2.7 (ms, Fewer Is Better): A: 5.66887 (MIN: 5.65), B: 5.66947 (MIN: 5.65). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

AOM AV1

Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p

AOM AV1 3.5 (Frames Per Second, More Is Better): A: 47.75, B: 48.10. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

oneDNN

Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU

oneDNN 2.7 (ms, Fewer Is Better): A: 4.34039 (MIN: 4.21), B: 4.26921 (MIN: 4.2). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU

oneDNN 2.7 (ms, Fewer Is Better): A: 3.50581 (MIN: 3.45), B: 3.52560 (MIN: 3.47). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

AOM AV1

Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p

AOM AV1 3.5 (Frames Per Second, More Is Better): A: 62.36, B: 62.74. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

oneDNN

Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU

oneDNN 2.7 (ms, Fewer Is Better): A: 8.17122 (MIN: 8.15), B: 8.05773 (MIN: 8.01). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

AOM AV1

Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p

AOM AV1 3.5 (Frames Per Second, More Is Better): A: 65.56, B: 66.17. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

oneDNN

Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU

oneDNN 2.7 (ms, Fewer Is Better): A: 3.84180 (MIN: 3.81), B: 3.85965 (MIN: 3.8). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

LAMMPS Molecular Dynamics Simulator

Model: Rhodopsin Protein

LAMMPS Molecular Dynamics Simulator 23Jun2022 (ns/day, More Is Better): A: 2.699, B: 2.713. 1. (CXX) g++ options: -O3 -pthread -lm -ldl

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU

oneDNN 2.7 (ms, Fewer Is Better): A: 14.17 (MIN: 13.87), B: 14.15 (MIN: 13.93). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU

oneDNN 2.7 (ms, Fewer Is Better): A: 17.12 (MIN: 17.04), B: 17.11 (MIN: 17.02). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Glibc Benchmarks

Benchmark: exp

Glibc Benchmarks (ns, Fewer Is Better): A: 27.79, B: 27.73. 1. (CC) gcc options: -pie -nostdlib -nostartfiles -lgcc -lgcc_s

Glibc Benchmarks

Benchmark: sin

Glibc Benchmarks (ns, Fewer Is Better): A: 88.68, B: 88.68. 1. (CC) gcc options: -pie -nostdlib -nostartfiles -lgcc -lgcc_s

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU

oneDNN 2.7 (ms, Fewer Is Better): A: 14.65 (MIN: 14.63), B: 14.67 (MIN: 14.65). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU

oneDNN 2.7 (ms, Fewer Is Better): A: 11.47 (MIN: 11.44), B: 11.45 (MIN: 11.43). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Glibc Benchmarks

Benchmark: sincos

Glibc Benchmarks (ns, Fewer Is Better): A: 54.12, B: 54.13. 1. (CC) gcc options: -pie -nostdlib -nostartfiles -lgcc -lgcc_s

Glibc Benchmarks

Benchmark: cos

Glibc Benchmarks (ns, Fewer Is Better): A: 102.65, B: 102.63. 1. (CC) gcc options: -pie -nostdlib -nostartfiles -lgcc -lgcc_s

Glibc Benchmarks

Benchmark: atanh

Glibc Benchmarks (ns, Fewer Is Better): A: 50.25, B: 50.23. 1. (CC) gcc options: -pie -nostdlib -nostartfiles -lgcc -lgcc_s

Glibc Benchmarks

Benchmark: asinh

Glibc Benchmarks (ns, Fewer Is Better): A: 41.69, B: 41.62. 1. (CC) gcc options: -pie -nostdlib -nostartfiles -lgcc -lgcc_s

Glibc Benchmarks

Benchmark: pthread_once

Glibc Benchmarks (ns, Fewer Is Better): A: 7.00595, B: 7.00562. 1. (CC) gcc options: -pie -nostdlib -nostartfiles -lgcc -lgcc_s

Glibc Benchmarks

Benchmark: ffsll

Glibc Benchmarks (ns, Fewer Is Better): A: 6.02400, B: 6.03178. 1. (CC) gcc options: -pie -nostdlib -nostartfiles -lgcc -lgcc_s

Glibc Benchmarks

Benchmark: tanh

Glibc Benchmarks (ns, Fewer Is Better): A: 46.75, B: 46.76. 1. (CC) gcc options: -pie -nostdlib -nostartfiles -lgcc -lgcc_s

Glibc Benchmarks

Benchmark: sqrt

Glibc Benchmarks (ns, Fewer Is Better): A: 8.03159, B: 8.03153. 1. (CC) gcc options: -pie -nostdlib -nostartfiles -lgcc -lgcc_s

Glibc Benchmarks

Benchmark: sinh

Glibc Benchmarks (ns, Fewer Is Better): A: 36.51, B: 36.51. 1. (CC) gcc options: -pie -nostdlib -nostartfiles -lgcc -lgcc_s

Glibc Benchmarks

Benchmark: modf

Glibc Benchmarks (ns, Fewer Is Better): A: 11.00, B: 11.00. 1. (CC) gcc options: -pie -nostdlib -nostartfiles -lgcc -lgcc_s

Glibc Benchmarks

Benchmark: log2

Glibc Benchmarks (ns, Fewer Is Better): A: 30.67, B: 30.66. 1. (CC) gcc options: -pie -nostdlib -nostartfiles -lgcc -lgcc_s

Glibc Benchmarks

Benchmark: ffs

Glibc Benchmarks (ns, Fewer Is Better): A: 6.02894, B: 6.03090. 1. (CC) gcc options: -pie -nostdlib -nostartfiles -lgcc -lgcc_s


Phoronix Test Suite v10.8.4