oktoberfest

Intel Core i5-12600K testing with an ASUS PRIME Z690-P WIFI D4 (0605 BIOS) and ASUS Intel ADL-S GT1 15GB on Ubuntu 22.04 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2310299-PTS-OKTOBERF29&sor&grt.

System configuration (identical for runs a and b):

Processor: Intel Core i5-12600K @ 6.30GHz (10 Cores / 16 Threads)
Motherboard: ASUS PRIME Z690-P WIFI D4 (0605 BIOS)
Chipset: Intel Device 7aa7
Memory: 16GB
Disk: 1000GB Western Digital WDS100T1X0E-00AFY0
Graphics: ASUS Intel ADL-S GT1 15GB (1450MHz)
Audio: Realtek ALC897
Monitor: ASUS MG28U
Network: Realtek RTL8125 2.5GbE + Intel Device 7af0
OS: Ubuntu 22.04
Kernel: 5.19.0-051900rc6daily20220716-generic (x86_64)
Desktop: GNOME Shell 42.1
Display Server: X Server 1.21.1.3 + Wayland
OpenGL: 4.6 Mesa 22.0.1
OpenCL: OpenCL 3.0
Vulkan: 1.2.204
Compiler: GCC 11.4.0
File-System: ext4
Screen Resolution: 3840x2160

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Disk Details: NONE / errors=remount-ro,relatime,rw / Block Size: 4096
Processor Details: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0x2c - Thermald 2.4.9
Java Details: OpenJDK Runtime Environment (build 11.0.20.1+1-post-Ubuntu-0ubuntu122.04)
Python Details: Python 3.10.12
Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Results overview: configurations a and b were compared side by side across the full test set (aom-av1, cassandra, apache, blender, brl-cad, build2, dav1d, duckdb, easywave, embree, espeak, hpcg, oidn, avifenc, libxsmm, liquid-dsp, memcached, ncnn, nekrs, deepsparse, nginx, onednn, openradioss, openvkl, encode-opus, ospray, ospray-studio, palabos, qmcpack, quantlib, sqlite, stress-ng, svt-av1, tensorflow, build-gcc, build-godot, vvenc, whisper-cpp, z3). Individual results are reported per test below.

AOM AV1

Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4K

AOM AV1 3.7, Frames Per Second, More Is Better. b: 0.24, a: 0.24. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

AOM AV1

Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K

AOM AV1 3.7, Frames Per Second, More Is Better. b: 6.77, a: 6.77. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

AOM AV1

Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K

AOM AV1 3.7, Frames Per Second, More Is Better. b: 71.58, a: 63.82. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

AOM AV1

Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K

AOM AV1 3.7, Frames Per Second, More Is Better. b: 14.26, a: 14.25. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

AOM AV1

Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K

AOM AV1 3.7, Frames Per Second, More Is Better. a: 68.67, b: 68.16. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

AOM AV1

Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K

AOM AV1 3.7, Frames Per Second, More Is Better. b: 79.90, a: 69.71. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

AOM AV1

Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K

AOM AV1 3.7, Frames Per Second, More Is Better. b: 81.38, a: 75.67. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

AOM AV1

Encoder Mode: Speed 11 Realtime - Input: Bosphorus 4K

AOM AV1 3.7, Frames Per Second, More Is Better. a: 73.19, b: 70.73. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

AOM AV1

Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p

AOM AV1 3.7, Frames Per Second, More Is Better. b: 0.73, a: 0.73. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

AOM AV1

Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p

AOM AV1 3.7, Frames Per Second, More Is Better. b: 15.99, a: 15.99. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

AOM AV1

Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p

AOM AV1 3.7, Frames Per Second, More Is Better. b: 166.68, a: 146.21. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

AOM AV1

Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p

AOM AV1 3.7, Frames Per Second, More Is Better. a: 46.78, b: 46.68. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

AOM AV1

Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p

AOM AV1 3.7, Frames Per Second, More Is Better. a: 183.46, b: 162.27. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

AOM AV1

Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p

AOM AV1 3.7, Frames Per Second, More Is Better. a: 199.96, b: 184.04. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

AOM AV1

Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p

AOM AV1 3.7, Frames Per Second, More Is Better. b: 188.01, a: 178.89. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

AOM AV1

Encoder Mode: Speed 11 Realtime - Input: Bosphorus 1080p

AOM AV1 3.7, Frames Per Second, More Is Better. a: 193.68, b: 190.21. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Apache Cassandra

Test: Writes

Apache Cassandra 4.1.3, Op/s, More Is Better. a: 123080, b: 122667.

Apache HTTP Server

Concurrent Requests: 100

Apache HTTP Server 2.4.56, Requests Per Second, More Is Better. a: 148966.44, b: 145590.45. 1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

Apache HTTP Server

Concurrent Requests: 200

Apache HTTP Server 2.4.56, Requests Per Second, More Is Better. a: 140860.12, b: 140213.08. 1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

Apache HTTP Server

Concurrent Requests: 500

Apache HTTP Server 2.4.56, Requests Per Second, More Is Better. b: 121041.48, a: 119690.62. 1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

Apache HTTP Server

Concurrent Requests: 1000

Apache HTTP Server 2.4.56, Requests Per Second, More Is Better. a: 120445.38, b: 119665.35. 1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

Blender

Blend File: BMW27 - Compute: CPU-Only

Blender 3.6, Seconds, Fewer Is Better. b: 112.12, a: 112.43.

Blender

Blend File: Classroom - Compute: CPU-Only

Blender 3.6, Seconds, Fewer Is Better. a: 316.24, b: 316.35.

Blender

Blend File: Fishy Cat - Compute: CPU-Only

Blender 3.6, Seconds, Fewer Is Better. a: 156.02, b: 156.07.

Blender

Blend File: Barbershop - Compute: CPU-Only

Blender 3.6, Seconds, Fewer Is Better. a: 1237.27, b: 1237.36.

Blender

Blend File: Pabellon Barcelona - Compute: CPU-Only

Blender 3.6, Seconds, Fewer Is Better. a: 385.70, b: 385.91.

BRL-CAD

VGR Performance Metric

BRL-CAD 7.36, VGR Performance Metric, More Is Better. a: 215292, b: 215193. 1. (CXX) g++ options: -std=c++14 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -ltcl8.6 -lregex_brl -lz_brl -lnetpbm -ldl -lm -ltk8.6

Build2

Time To Compile

Build2 0.15, Seconds, Fewer Is Better. a: 137.60.

dav1d

Video Input: Chimera 1080p

dav1d 1.2.1, FPS, More Is Better. a: 683.61, b: 675.83. 1. (CC) gcc options: -pthread -lm

dav1d

Video Input: Summer Nature 4K

dav1d 1.2.1, FPS, More Is Better. b: 239.22, a: 238.78. 1. (CC) gcc options: -pthread -lm

dav1d

Video Input: Summer Nature 1080p

dav1d 1.2.1, FPS, More Is Better. b: 962.35, a: 956.18. 1. (CC) gcc options: -pthread -lm

dav1d

Video Input: Chimera 1080p 10-bit

dav1d 1.2.1, FPS, More Is Better. a: 556.38, b: 555.73. 1. (CC) gcc options: -pthread -lm

DuckDB

Benchmark: IMDB

DuckDB 0.9.1, Seconds, Fewer Is Better. a: 115.41 (SE +/- 0.15, N = 3). 1. (CXX) g++ options: -O3 -rdynamic -lssl -lcrypto -ldl

DuckDB

Benchmark: TPC-H Parquet

DuckDB 0.9.1, Seconds, Fewer Is Better. a: 97.82 (SE +/- 0.18, N = 3). 1. (CXX) g++ options: -O3 -rdynamic -lssl -lcrypto -ldl

easyWave

Input: e2Asean Grid + BengkuluSept2007 Source - Time: 240

easyWave r34, Seconds, Fewer Is Better. a: 9.046, b: 9.060. 1. (CXX) g++ options: -O3 -fopenmp

easyWave

Input: e2Asean Grid + BengkuluSept2007 Source - Time: 1200

easyWave r34, Seconds, Fewer Is Better. a: 185.88, b: 186.00. 1. (CXX) g++ options: -O3 -fopenmp

Embree

Binary: Pathtracer - Model: Crown

Embree 4.3, Frames Per Second, More Is Better. a: 12.71 (MIN: 12.62 / MAX: 12.94), b: 12.62 (MIN: 12.52 / MAX: 12.78).

Embree

Binary: Pathtracer ISPC - Model: Crown

Embree 4.3, Frames Per Second, More Is Better. b: 13.54 (MIN: 13.42 / MAX: 13.73), a: 13.49 (MIN: 13.36 / MAX: 13.73).

Embree

Binary: Pathtracer - Model: Asian Dragon

Embree 4.3, Frames Per Second, More Is Better. a: 14.97 (MIN: 14.91 / MAX: 15.16), b: 14.91 (MIN: 14.84 / MAX: 15.18).

Embree

Binary: Pathtracer - Model: Asian Dragon Obj

Embree 4.3, Frames Per Second, More Is Better. a: 13.51 (MIN: 13.43 / MAX: 13.66), b: 13.48 (MIN: 13.41 / MAX: 13.7).

Embree

Binary: Pathtracer ISPC - Model: Asian Dragon

Embree 4.3, Frames Per Second, More Is Better. b: 16.69 (MIN: 16.6 / MAX: 16.96), a: 16.66 (MIN: 16.56 / MAX: 16.95).

Embree

Binary: Pathtracer ISPC - Model: Asian Dragon Obj

Embree 4.3, Frames Per Second, More Is Better. b: 14.57 (MIN: 14.49 / MAX: 14.75), a: 14.51 (MIN: 14.43 / MAX: 14.71).

eSpeak-NG Speech Engine

Text-To-Speech Synthesis

eSpeak-NG Speech Engine 1.51, Seconds, Fewer Is Better. a: 21.22, b: 21.29. 1. (CXX) g++ options: -O2

High Performance Conjugate Gradient

X Y Z: 104 104 104 - RT: 60

High Performance Conjugate Gradient 3.1, GFLOP/s, More Is Better. b: 6.96238, a: 6.95045. 1. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -lmpi_cxx -lmpi

Intel Open Image Denoise

Run: RT.ldr_alb_nrm.3840x2160 - Device: CPU-Only

Intel Open Image Denoise 2.1, Images / Sec, More Is Better. b: 0.34, a: 0.34.

libavif avifenc

Encoder Speed: 0

libavif avifenc 1.0, Seconds, Fewer Is Better. b: 126.77, a: 127.47. 1. (CXX) g++ options: -O3 -fPIC -lm

libavif avifenc

Encoder Speed: 2

libavif avifenc 1.0, Seconds, Fewer Is Better. b: 59.36, a: 60.82. 1. (CXX) g++ options: -O3 -fPIC -lm

libavif avifenc

Encoder Speed: 6

libavif avifenc 1.0, Seconds, Fewer Is Better. b: 6.274, a: 6.332. 1. (CXX) g++ options: -O3 -fPIC -lm

libavif avifenc

Encoder Speed: 6, Lossless

libavif avifenc 1.0, Seconds, Fewer Is Better. b: 9.139, a: 9.164. 1. (CXX) g++ options: -O3 -fPIC -lm

libavif avifenc

Encoder Speed: 10, Lossless

libavif avifenc 1.0, Seconds, Fewer Is Better. a: 4.552, b: 4.634. 1. (CXX) g++ options: -O3 -fPIC -lm

libxsmm

M N K: 128

libxsmm 2-1.17-3645, GFLOPS/s, More Is Better. b: 248.0, a: 247.5. 1. (CXX) g++ options: -dynamic -Bstatic -static-libgcc -lgomp -lm -lrt -ldl -lquadmath -lstdc++ -pthread -fPIC -std=c++14 -O2 -fopenmp -fopenmp-simd -funroll-loops -ftree-vectorize -fdata-sections -ffunction-sections -fvisibility=hidden -march=core-avx2

libxsmm

M N K: 32

libxsmm 2-1.17-3645, GFLOPS/s, More Is Better. b: 96.4, a: 96.3. 1. (CXX) g++ options: -dynamic -Bstatic -static-libgcc -lgomp -lm -lrt -ldl -lquadmath -lstdc++ -pthread -fPIC -std=c++14 -O2 -fopenmp -fopenmp-simd -funroll-loops -ftree-vectorize -fdata-sections -ffunction-sections -fvisibility=hidden -march=core-avx2

libxsmm

M N K: 64

libxsmm 2-1.17-3645, GFLOPS/s, More Is Better. a: 167.9, b: 167.0. 1. (CXX) g++ options: -dynamic -Bstatic -static-libgcc -lgomp -lm -lrt -ldl -lquadmath -lstdc++ -pthread -fPIC -std=c++14 -O2 -fopenmp -fopenmp-simd -funroll-loops -ftree-vectorize -fdata-sections -ffunction-sections -fvisibility=hidden -march=core-avx2

Liquid-DSP

Threads: 1 - Buffer Length: 256 - Filter Length: 32

Liquid-DSP 1.6, samples/s, More Is Better. b: 57881000, a: 57879000. 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP

Threads: 1 - Buffer Length: 256 - Filter Length: 57

Liquid-DSP 1.6, samples/s, More Is Better. b: 80378000, a: 78801000. 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP

Threads: 2 - Buffer Length: 256 - Filter Length: 32

Liquid-DSP 1.6, samples/s, More Is Better. a: 115700000, b: 115690000. 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP

Threads: 2 - Buffer Length: 256 - Filter Length: 57

Liquid-DSP 1.6, samples/s, More Is Better. b: 160510000, a: 159380000. 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP

Threads: 4 - Buffer Length: 256 - Filter Length: 32

Liquid-DSP 1.6, samples/s, More Is Better. a: 221860000, b: 221680000. 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP

Threads: 4 - Buffer Length: 256 - Filter Length: 57

Liquid-DSP 1.6, samples/s, More Is Better. a: 305790000, b: 303380000. 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP

Threads: 8 - Buffer Length: 256 - Filter Length: 32

Liquid-DSP 1.6, samples/s, More Is Better. a: 402980000, b: 400370000. 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP

Threads: 8 - Buffer Length: 256 - Filter Length: 57

Liquid-DSP 1.6, samples/s, More Is Better. a: 518790000, b: 515540000. 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP

Threads: 1 - Buffer Length: 256 - Filter Length: 512

Liquid-DSP 1.6, samples/s, More Is Better. a: 20784000, b: 20781000. 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP

Threads: 16 - Buffer Length: 256 - Filter Length: 32

Liquid-DSP 1.6, samples/s, More Is Better. a: 757790000, b: 750070000. 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP

Threads: 16 - Buffer Length: 256 - Filter Length: 57

Liquid-DSP 1.6, samples/s, More Is Better. a: 695800000, b: 676010000. 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP

Threads: 2 - Buffer Length: 256 - Filter Length: 512

Liquid-DSP 1.6, samples/s, More Is Better. a: 41548000, b: 41499000. 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP

Threads: 4 - Buffer Length: 256 - Filter Length: 512

Liquid-DSP 1.6, samples/s, More Is Better. b: 81117000, a: 80680000. 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP

Threads: 8 - Buffer Length: 256 - Filter Length: 512

Liquid-DSP 1.6, samples/s, More Is Better. b: 131280000, a: 130420000. 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP

Threads: 16 - Buffer Length: 256 - Filter Length: 512

Liquid-DSP 1.6, samples/s, More Is Better. b: 196230000, a: 185450000. 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Memcached

Set To Get Ratio: 1:10

Memcached 1.6.19, Ops/sec, More Is Better. a: 3279849.87, b: 3183290.59. 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Memcached

Set To Get Ratio: 1:100

Memcached 1.6.19, Ops/sec, More Is Better. b: 3059379.28, a: 3016454.90. 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

NCNN

Target: CPU - Model: mobilenet

NCNN 20230517, ms, Fewer Is Better. a: 10.24 (MIN: 10.16 / MAX: 10.54), b: 10.54 (MIN: 10.47 / MAX: 10.65). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU-v2-v2 - Model: mobilenet-v2

NCNN 20230517, ms, Fewer Is Better. a: 2.98 (MIN: 2.96 / MAX: 3.1), b: 3.34 (MIN: 3.33 / MAX: 3.56). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU-v3-v3 - Model: mobilenet-v3

NCNN 20230517, ms, Fewer Is Better. a: 2.38 (MIN: 2.36 / MAX: 2.6), b: 2.41 (MIN: 2.38 / MAX: 2.55). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: shufflenet-v2

NCNN 20230517, ms, Fewer Is Better. a: 2.24 (MIN: 2.23 / MAX: 2.28), b: 2.27 (MIN: 2.26 / MAX: 2.55). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: mnasnet

NCNN 20230517, ms, Fewer Is Better. a: 2.54 (MIN: 2.52 / MAX: 2.6), b: 2.60 (MIN: 2.58 / MAX: 2.73). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: efficientnet-b0

NCNN 20230517, ms, Fewer Is Better. a: 4.87 (MIN: 4.83 / MAX: 5), b: 4.91 (MIN: 4.86 / MAX: 5.12). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: blazeface

NCNN 20230517, ms, Fewer Is Better. a: 0.75 (MIN: 0.74 / MAX: 0.78), b: 0.76 (MIN: 0.75 / MAX: 0.77). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: googlenet

NCNN 20230517, ms, Fewer Is Better. b: 7.73 (MIN: 7.62 / MAX: 7.86), a: 7.77 (MIN: 7.62 / MAX: 7.91). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: vgg16

NCNN 20230517, ms, Fewer Is Better. b: 36.62 (MIN: 36.48 / MAX: 43.16), a: 37.31 (MIN: 37.22 / MAX: 37.65). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: resnet18

NCNN 20230517, ms, Fewer Is Better. b: 5.50 (MIN: 5.39 / MAX: 5.62), a: 5.52 (MIN: 5.35 / MAX: 5.68). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: alexnet

NCNN 20230517, ms, Fewer Is Better. a: 4.96 (MIN: 4.89 / MAX: 5.09), b: 5.14 (MIN: 5.08 / MAX: 5.22). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: resnet50

NCNN 20230517, ms, Fewer Is Better. a: 12.77 (MIN: 12.66 / MAX: 13.17), b: 13.15 (MIN: 12.98 / MAX: 13.35). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: yolov4-tiny

NCNN 20230517, ms, Fewer Is Better. a: 16.64 (MIN: 16.5 / MAX: 16.86), b: 17.59 (MIN: 17.43 / MAX: 23.14). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: squeezenet_ssd

NCNN 20230517, ms, Fewer Is Better. a: 6.86 (MIN: 6.78 / MAX: 6.97), b: 6.99 (MIN: 6.91 / MAX: 7.2). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: regnety_400m

NCNN 20230517, ms, Fewer Is Better. a: 5.98 (MIN: 5.94 / MAX: 6.11), b: 6.05 (MIN: 6.02 / MAX: 6.18). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: vision_transformer

NCNN 20230517, ms, Fewer Is Better. b: 95.33 (MIN: 94.07 / MAX: 152.19), a: 100.28 (MIN: 94 / MAX: 163.09). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: FastestDet

NCNN 20230517, ms, Fewer Is Better. a: 3.00 (MIN: 2.96 / MAX: 3.14), b: 3.09 (MIN: 3.08 / MAX: 3.18). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

nekRS

Input: Kershaw

nekRS 23.0, flops/rank, More Is Better. a: 3261960000, b: 3260840000. 1. (CXX) g++ options: -fopenmp -O2 -march=native -mtune=native -ftree-vectorize -rdynamic -lmpi_cxx -lmpi

nekRS

Input: TurboPipe Periodic

nekRS 23.0, flops/rank, More Is Better. b: 4079830000. 1. (CXX) g++ options: -fopenmp -O2 -march=native -mtune=native -ftree-vectorize -rdynamic -lmpi_cxx -lmpi

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5, items/sec, More Is Better. b: 8.4551, a: 8.4527.

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5, ms/batch, Fewer Is Better. a: 586.67, b: 589.15.

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5, items/sec, More Is Better. a: 7.7535, b: 7.7442.

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5, ms/batch, Fewer Is Better. a: 128.97, b: 129.12.

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5, items/sec, More Is Better. b: 223.39, a: 222.92.

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5, ms/batch, Fewer Is Better. b: 22.35, a: 22.41.

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5, items/sec, More Is Better. b: 89.12, a: 87.65.

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5, ms/batch, Fewer Is Better. b: 11.21, a: 11.40.

Neural Magic DeepSparse

Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5, items/sec, More Is Better. a: 100.41, b: 99.57.

Neural Magic DeepSparse

Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5, ms/batch, Fewer Is Better. a: 49.72, b: 50.17.

Neural Magic DeepSparse

Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5, items/sec, More Is Better. b: 56.44, a: 56.17.

Neural Magic DeepSparse

Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5, ms/batch, Fewer Is Better. b: 17.71, a: 17.80.

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5, items/sec, More Is Better. b: 30.60, a: 30.44.

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5, ms/batch, Fewer Is Better. b: 163.32, a: 164.22.

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5, items/sec, More Is Better. a: 21.19, b: 21.18.

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5, ms/batch, Fewer Is Better. a: 47.18, b: 47.20.

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5, items/sec, More Is Better. a: 109.14, b: 109.06.

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5, ms/batch, Fewer Is Better. a: 45.79, b: 45.81.

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5, items/sec, More Is Better. b: 79.37, a: 79.15.

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5, ms/batch, Fewer Is Better. b: 12.59, a: 12.63.

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5, items/sec, More Is Better. b: 713.32, a: 712.36.

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5, ms/batch, Fewer Is Better. b: 6.9947, a: 7.0052.

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5, items/sec, More Is Better. b: 355.47, a: 349.62.

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5, ms/batch, Fewer Is Better. b: 2.8079, a: 2.8555.

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5, items/sec, More Is Better. b: 52.50, a: 52.32.

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5, ms/batch, Fewer Is Better. b: 95.20, a: 95.48.

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5, items/sec, More Is Better. a: 45.31, b: 45.25.

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5, ms/batch, Fewer Is Better. a: 22.06, b: 22.09.

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5, items/sec, More Is Better. b: 9.679, a: 9.656.

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5, ms/batch, Fewer Is Better. b: 511.12, a: 511.33.

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5, items/sec, More Is Better. b: 8.1485, a: 8.1450.

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5, ms/batch, Fewer Is Better. b: 122.72, a: 122.77.

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5, items/sec, More Is Better. a: 108.12, b: 108.07.

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5, ms/batch, Fewer Is Better. a: 46.19, b: 46.21.

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5, items/sec, More Is Better. b: 79.04, a: 78.99.

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5, ms/batch, Fewer Is Better. b: 12.65, a: 12.66.

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5, items/sec, More Is Better. a: 50.97, b: 50.86.

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5, ms/batch, Fewer Is Better. a: 98.08, b: 98.29.

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5, items/sec, More Is Better. a: 45.48, b: 44.05.

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5, ms/batch, Fewer Is Better. a: 21.99, b: 22.70.

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5, items/sec, More Is Better. a: 69.72, b: 69.09.

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5, ms/batch, Fewer Is Better. a: 71.64, b: 72.24.

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5, items/sec, More Is Better. a: 55.42, b: 54.94.

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5, ms/batch, Fewer Is Better. a: 18.04, b: 18.20.

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5, items/sec, More Is Better. a: 10.44, b: 10.32.

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5, ms/batch, Fewer Is Better. a: 472.68, b: 479.61.

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5, items/sec, More Is Better. a: 9.7413, b: 9.6372.

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5, ms/batch, Fewer Is Better. a: 102.64, b: 103.75.

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5, items/sec, More Is Better. a: 114.52, b: 114.29.

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5, ms/batch, Fewer Is Better. a: 43.64, b: 43.66.

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5, items/sec, More Is Better. a: 50.78, b: 49.40.

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5, ms/batch, Fewer Is Better. a: 19.68, b: 20.24.

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5, items/sec, More Is Better. a: 34.61, b: 34.56.

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5, ms/batch, Fewer Is Better. a: 144.12, b: 144.37.

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5, items/sec, More Is Better. a: 28.27, b: 28.12.

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5, ms/batch, Fewer Is Better. a: 35.37, b: 35.56.

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5, items/sec, More Is Better. a: 8.4541, b: 8.3633.

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5, ms/batch, Fewer Is Better. a: 588.30, b: 593.64.

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5, items/sec, More Is Better. a: 7.7540, b: 7.7181.

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5, ms/batch, Fewer Is Better. a: 128.96, b: 129.56.

nginx

Connections: 100

nginx 1.23.2, Requests Per Second, More Is Better. a: 108258.46, b: 107751.55. 1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

nginx

Connections: 200

nginx 1.23.2, Requests Per Second, More Is Better. a: 102547.95, b: 102479.68. 1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

nginx

Connections: 500

nginx 1.23.2, Requests Per Second, More Is Better. a: 89266.33, b: 88913.24. 1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

nginx

Connections: 1000

nginx 1.23.2, Requests Per Second, More Is Better. a: 82318.98, b: 82032.62. 1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

oneDNN

Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU

oneDNN 3.3 - ms, Fewer Is Better: b: 3.89409 (MIN: 3.75); a: 3.89503 (MIN: 3.75)

oneDNN

Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU

oneDNN 3.3 - ms, Fewer Is Better: b: 10.69 (MIN: 10.6); a: 10.80 (MIN: 10.69)

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU

oneDNN 3.3 - ms, Fewer Is Better: b: 14.34 (MIN: 14.17); a: 14.35 (MIN: 14.15)

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU

oneDNN 3.3 - ms, Fewer Is Better: b: 9.23030 (MIN: 5.49); a: 9.24791 (MIN: 5.33)

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU

oneDNN 3.3 - ms, Fewer Is Better: b: 8.04648 (MIN: 8.02); a: 8.04899 (MIN: 8.02)

oneDNN

Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU

oneDNN 3.3 - ms, Fewer Is Better: b: 4270.30 (MIN: 4261.15); a: 5530.13 (MIN: 4272.6)

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU

oneDNN 3.3 - ms, Fewer Is Better: b: 2167.50 (MIN: 2155.94); a: 2636.78 (MIN: 2149.96)

OpenRadioss

Model: Bumper Beam

OpenRadioss 2023.09.15 - Seconds, Fewer Is Better: a: 222.32; b: 233.21

OpenRadioss

Model: Cell Phone Drop Test

OpenRadioss 2023.09.15 - Seconds, Fewer Is Better: b: 152.73; a: 157.04

OpenRadioss

Model: Bird Strike on Windshield

OpenRadioss 2023.09.15 - Seconds, Fewer Is Better: b: 342.50; a: 347.26

OpenRadioss

Model: Rubber O-Ring Seal Installation

OpenRadioss 2023.09.15 - Seconds, Fewer Is Better: b: 283.06; a: 283.82

OpenRadioss

Model: INIVOL and Fluid Structure Interaction Drop Container

OpenRadioss 2023.09.15 - Seconds, Fewer Is Better: b: 714.81; a: 717.69

OpenVKL

Benchmark: vklBenchmarkCPU ISPC

OpenVKL 2.0.0 - Items / Sec, More Is Better: b: 277 (MIN: 20 / MAX: 3808); a: 277 (MIN: 20 / MAX: 3813)

OpenVKL

Benchmark: vklBenchmarkCPU Scalar

OpenVKL 2.0.0 - Items / Sec, More Is Better: b: 130 (MIN: 10 / MAX: 2218); a: 130 (MIN: 10 / MAX: 2222)

Opus Codec Encoding

WAV To Opus Encode

Opus Codec Encoding 1.4 - Seconds, Fewer Is Better: a: 20.51; b: 20.54

OSPRay

Benchmark: particle_volume/ao/real_time

OSPRay 2.12 - Items Per Second, More Is Better: b: 5.76091; a: 5.74695

OSPRay

Benchmark: particle_volume/scivis/real_time

OSPRay 2.12 - Items Per Second, More Is Better: b: 5.70261; a: 5.68706

OSPRay

Benchmark: particle_volume/pathtracer/real_time

OSPRay 2.12 - Items Per Second, More Is Better: b: 140.59; a: 140.44

OSPRay

Benchmark: gravity_spheres_volume/dim_512/ao/real_time

OSPRay 2.12 - Items Per Second, More Is Better: a: 2.63693; b: 2.63451

OSPRay

Benchmark: gravity_spheres_volume/dim_512/scivis/real_time

OSPRay 2.12 - Items Per Second, More Is Better: a: 2.55887; b: 2.54893

OSPRay

Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time

OSPRay 2.12 - Items Per Second, More Is Better: a: 3.88095; b: 3.86334

OSPRay Studio

Camera: 1 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU

OSPRay Studio 0.13 - ms, Fewer Is Better: b: 9716; a: 9734

OSPRay Studio

Camera: 2 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU

OSPRay Studio 0.13 - ms, Fewer Is Better: b: 9901; a: 9910

OSPRay Studio

Camera: 3 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU

OSPRay Studio 0.13 - ms, Fewer Is Better: b: 11536; a: 11549

OSPRay Studio

Camera: 1 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU

OSPRay Studio 0.13 - ms, Fewer Is Better: a: 159663; b: 159770

OSPRay Studio

Camera: 1 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU

OSPRay Studio 0.13 - ms, Fewer Is Better: a: 316965; b: 317047

OSPRay Studio

Camera: 2 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU

OSPRay Studio 0.13 - ms, Fewer Is Better: a: 162070; b: 162754

OSPRay Studio

Camera: 2 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU

OSPRay Studio 0.13 - ms, Fewer Is Better: b: 322586; a: 322602

OSPRay Studio

Camera: 3 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU

OSPRay Studio 0.13 - ms, Fewer Is Better: b: 189045; a: 189279

OSPRay Studio

Camera: 3 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU

OSPRay Studio 0.13 - ms, Fewer Is Better: a: 375733; b: 375965

OSPRay Studio

Camera: 1 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU

OSPRay Studio 0.13 - ms, Fewer Is Better: a: 2456; b: 2457

OSPRay Studio

Camera: 2 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU

OSPRay Studio 0.13 - ms, Fewer Is Better: b: 2496; a: 2499

OSPRay Studio

Camera: 3 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU

OSPRay Studio 0.13 - ms, Fewer Is Better: a: 2912; b: 2916

OSPRay Studio

Camera: 1 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU

OSPRay Studio 0.13 - ms, Fewer Is Better: b: 42536; a: 42656

OSPRay Studio

Camera: 1 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU

OSPRay Studio 0.13 - ms, Fewer Is Better: a: 82003; b: 82116

OSPRay Studio

Camera: 2 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU

OSPRay Studio 0.13 - ms, Fewer Is Better: b: 43132; a: 43199

OSPRay Studio

Camera: 2 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU

OSPRay Studio 0.13 - ms, Fewer Is Better: a: 83367; b: 83406

OSPRay Studio

Camera: 3 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU

OSPRay Studio 0.13 - ms, Fewer Is Better: a: 49884; b: 49910

OSPRay Studio

Camera: 3 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU

OSPRay Studio 0.13 - ms, Fewer Is Better: a: 96823; b: 96957

Palabos

Grid Size: 100

Palabos 2.3 - Mega Site Updates Per Second, More Is Better: b: 51.31; a: 51.24

Palabos

Grid Size: 400

Palabos 2.3 - Mega Site Updates Per Second, More Is Better: a: 76.76; b: 76.59

QMCPACK

Input: H4_ae

QMCPACK 3.17.1 - Total Execution Time - Seconds, Fewer Is Better: a: 39.88; b: 40.51

QMCPACK

Input: Li2_STO_ae

QMCPACK 3.17.1 - Total Execution Time - Seconds, Fewer Is Better: b: 445.23; a: 445.38

QMCPACK

Input: LiH_ae_MSD

QMCPACK 3.17.1 - Total Execution Time - Seconds, Fewer Is Better: a: 201.74; b: 202.34

QMCPACK

Input: simple-H2O

QMCPACK 3.17.1 - Total Execution Time - Seconds, Fewer Is Better: a: 43.89; b: 44.36

QMCPACK

Input: O_ae_pyscf_UHF

QMCPACK 3.17.1 - Total Execution Time - Seconds, Fewer Is Better: b: 313.84; a: 314.45

QMCPACK

Input: FeCO6_b3lyp_gms

QMCPACK 3.17.1 - Total Execution Time - Seconds, Fewer Is Better: b: 338.60; a: 341.81

QuantLib

Configuration: Multi-Threaded

QuantLib 1.32 - MFLOPS, More Is Better: a: 40312.3; b: 39967.5

QuantLib

Configuration: Single-Threaded

QuantLib 1.32 - MFLOPS, More Is Better: b: 4573.9; a: 4562.6

SQLite

Threads / Copies: 1

SQLite 3.41.2 - Seconds, Fewer Is Better: b: 8.908; a: 8.943

SQLite

Threads / Copies: 2

SQLite 3.41.2 - Seconds, Fewer Is Better: b: 14.46; a: 14.49

SQLite

Threads / Copies: 4

SQLite 3.41.2 - Seconds, Fewer Is Better: b: 19.00; a: 19.01

Stress-NG

Test: Hash

Stress-NG 0.16.04 - Bogo Ops/s, More Is Better: b: 2304545.32; a: 2294409.16

Stress-NG

Test: Pipe

Stress-NG 0.16.04 - Bogo Ops/s, More Is Better: a: 7171062.72; b: 7160556.83

Stress-NG

Test: Poll

Stress-NG 0.16.04 - Bogo Ops/s, More Is Better: a: 1241130.30; b: 1217218.91

Stress-NG

Test: Zlib

Stress-NG 0.16.04 - Bogo Ops/s, More Is Better: a: 1130.39; b: 1129.89

Stress-NG

Test: Cloning

Stress-NG 0.16.04 - Bogo Ops/s, More Is Better: b: 865.39; a: 860.48

Stress-NG

Test: Pthread

Stress-NG 0.16.04 - Bogo Ops/s, More Is Better: b: 186386.61; a: 186367.10

Stress-NG

Test: AVL Tree

Stress-NG 0.16.04 - Bogo Ops/s, More Is Better: b: 82.10; a: 81.66

Stress-NG

Test: AVX-512 VNNI

Stress-NG 0.16.04 - Bogo Ops/s, More Is Better: b: 1229104.91; a: 1228899.23

Stress-NG

Test: Floating Point

Stress-NG 0.16.04 - Bogo Ops/s, More Is Better: b: 4412.33; a: 4377.45

Stress-NG

Test: Matrix 3D Math

Stress-NG 0.16.04 - Bogo Ops/s, More Is Better: b: 1418.21; a: 1401.51

Stress-NG

Test: Vector Shuffle

Stress-NG 0.16.04 - Bogo Ops/s, More Is Better: b: 9457.69; a: 9447.65

Stress-NG

Test: Mixed Scheduler

Stress-NG 0.16.04 - Bogo Ops/s, More Is Better: b: 10290.81; a: 10186.26

Stress-NG

Test: Wide Vector Math

Stress-NG 0.16.04 - Bogo Ops/s, More Is Better: a: 484455.75; b: 484394.85

Stress-NG

Test: Fused Multiply-Add

Stress-NG 0.16.04 - Bogo Ops/s, More Is Better: b: 16391410.53; a: 16384409.66

Stress-NG

Test: Vector Floating Point

Stress-NG 0.16.04 - Bogo Ops/s, More Is Better: a: 19538.35; b: 19468.51

SVT-AV1

Encoder Mode: Preset 4 - Input: Bosphorus 4K

SVT-AV1 1.7 - Frames Per Second, More Is Better: a: 3.475; b: 3.455

SVT-AV1

Encoder Mode: Preset 8 - Input: Bosphorus 4K

SVT-AV1 1.7 - Frames Per Second, More Is Better: a: 44.98; b: 44.94

SVT-AV1

Encoder Mode: Preset 12 - Input: Bosphorus 4K

SVT-AV1 1.7 - Frames Per Second, More Is Better: a: 109.88; b: 109.21

SVT-AV1

Encoder Mode: Preset 13 - Input: Bosphorus 4K

SVT-AV1 1.7 - Frames Per Second, More Is Better: b: 111.54; a: 111.13

SVT-AV1

Encoder Mode: Preset 4 - Input: Bosphorus 1080p

SVT-AV1 1.7 - Frames Per Second, More Is Better: b: 11.22; a: 11.20

SVT-AV1

Encoder Mode: Preset 8 - Input: Bosphorus 1080p

SVT-AV1 1.7 - Frames Per Second, More Is Better: b: 93.34; a: 92.68

SVT-AV1

Encoder Mode: Preset 12 - Input: Bosphorus 1080p

SVT-AV1 1.7 - Frames Per Second, More Is Better: a: 437.37; b: 427.53

SVT-AV1

Encoder Mode: Preset 13 - Input: Bosphorus 1080p

SVT-AV1 1.7 - Frames Per Second, More Is Better: a: 516.20; b: 499.07

TensorFlow

Device: CPU - Batch Size: 16 - Model: ResNet-50

TensorFlow 2.12 - images/sec, More Is Better: b: 17.09; a: 17.01

TensorFlow

Device: CPU - Batch Size: 32 - Model: ResNet-50

TensorFlow 2.12 - images/sec, More Is Better: b: 17.50; a: 17.49

TensorFlow

Device: CPU - Batch Size: 64 - Model: ResNet-50

TensorFlow 2.12 - images/sec, More Is Better: b: 17.95; a: 17.81

Timed GCC Compilation

Time To Compile

Timed GCC Compilation 13.2 - Seconds, Fewer Is Better: a: 918.83; b: 918.94

Timed Godot Game Engine Compilation

Time To Compile

Timed Godot Game Engine Compilation 4.0 - Seconds, Fewer Is Better: a: 323.86

VVenC

Video Input: Bosphorus 4K - Video Preset: Fast

VVenC 1.9 - Frames Per Second, More Is Better: a: 4.931; b: 4.912

VVenC

Video Input: Bosphorus 4K - Video Preset: Faster

VVenC 1.9 - Frames Per Second, More Is Better: b: 10.23; a: 10.18

VVenC

Video Input: Bosphorus 1080p - Video Preset: Fast

VVenC 1.9 - Frames Per Second, More Is Better: b: 15.24; a: 15.11

VVenC

Video Input: Bosphorus 1080p - Video Preset: Faster

VVenC 1.9 - Frames Per Second, More Is Better: a: 33.29; b: 33.09

Whisper.cpp

Model: ggml-base.en - Input: 2016 State of the Union

Whisper.cpp 1.4 - Seconds, Fewer Is Better: a: 244.73; b: 246.69

Whisper.cpp

Model: ggml-small.en - Input: 2016 State of the Union

Whisper.cpp 1.4 - Seconds, Fewer Is Better: b: 818.91; a: 820.20

Whisper.cpp

Model: ggml-medium.en - Input: 2016 State of the Union

Whisper.cpp 1.4 - Seconds, Fewer Is Better: a: 2723.05; b: 2736.77

Z3 Theorem Prover

SMT File: 1.smt2

Z3 Theorem Prover 4.12.1 - Seconds, Fewer Is Better: b: 19.12; a: 19.13

Z3 Theorem Prover

SMT File: 2.smt2

Z3 Theorem Prover 4.12.1 - Seconds, Fewer Is Better: a: 62.94; b: 63.06


Phoronix Test Suite v10.8.5