eee

2 x Intel Xeon Gold 5220R testing with a TYAN S7106 (V2.01.B40 BIOS) and ASPEED on Ubuntu 20.04 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2304025-NE-EEE49872408&sor&grw.
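Results such as these can typically be reproduced or compared against locally by passing the OpenBenchmarking.org result ID above to the Phoronix Test Suite; the invocation below is a sketch assuming a stock phoronix-test-suite installation with the needed test dependencies available.

    phoronix-test-suite benchmark 2304025-NE-EEE49872408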

Tested Configurations: a, b, bb (all runs on the hardware and software platform listed below)

Processor: 2 x Intel Xeon Gold 5220R @ 3.90GHz (36 Cores / 72 Threads)
Motherboard: TYAN S7106 (V2.01.B40 BIOS)
Chipset: Intel Sky Lake-E DMI3 Registers
Memory: 94GB
Disk: 500GB Samsung SSD 860
Graphics: ASPEED
Monitor: VE228
Network: 2 x Intel I210 + 2 x QLogic cLOM8214 1/10GbE
OS: Ubuntu 20.04
Kernel: 6.1.0-phx (x86_64)
Desktop: GNOME Shell 3.36.9
Display Server: X Server 1.20.13
Compiler: GCC 9.4.0
File-System: ext4
Screen Resolution: 1920x1080

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-Av3uEd/gcc-9-9.4.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0x5003302
Python Details: Python 2.7.18 + Python 3.8.10
Security Details: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + retbleed: Mitigation of Stuffing + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Mitigation of TSX disabled

Results Overview: side-by-side summary of all tests across configurations a, b, and bb; the individual results for each test are listed per benchmark below.

Google Draco

Model: Lion

Google Draco 1.5.6 - ms, Fewer Is Better - abb: 6694, 6713 - 1. (CXX) g++ options: -O3

Google Draco

Model: Church Facade

Google Draco 1.5.6 - ms, Fewer Is Better - bba: 9381, 9410 - 1. (CXX) g++ options: -O3

TensorFlow

Device: CPU - Batch Size: 16 - Model: AlexNet

TensorFlow 2.12 - images/sec, More Is Better - babb: 84.48, 83.20, 82.22

TensorFlow

Device: CPU - Batch Size: 32 - Model: AlexNet

TensorFlow 2.12 - images/sec, More Is Better - babb: 109.74, 109.53, 109.07

TensorFlow

Device: CPU - Batch Size: 64 - Model: AlexNet

TensorFlow 2.12 - images/sec, More Is Better - abbb: 146.93, 146.48, 145.41

SPECFEM3D

Model: Water-layered Halfspace

SPECFEM3D 4.0 - Seconds, Fewer Is Better - abbb: 60.24, 60.51, 61.37 - 1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

SPECFEM3D

Model: Homogeneous Halfspace

SPECFEM3D 4.0 - Seconds, Fewer Is Better - bbba: 31.07, 31.09, 31.38 - 1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

SPECFEM3D

Model: Tomographic Model

SPECFEM3D 4.0 - Seconds, Fewer Is Better - bbba: 25.01, 25.27, 25.35 - 1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

TensorFlow

Device: CPU - Batch Size: 256 - Model: AlexNet

TensorFlow 2.12 - images/sec, More Is Better - bbab: 213.32, 212.65, 212.50

TensorFlow

Device: CPU - Batch Size: 512 - Model: AlexNet

TensorFlow 2.12 - images/sec, More Is Better - babb: 227.85, 227.16, 227.00

TensorFlow

Device: CPU - Batch Size: 16 - Model: GoogLeNet

TensorFlow 2.12 - images/sec, More Is Better - bbba: 51.15, 50.96, 50.71

TensorFlow

Device: CPU - Batch Size: 16 - Model: ResNet-50

TensorFlow 2.12 - images/sec, More Is Better - abbb: 15.11, 15.10, 15.09

TensorFlow

Device: CPU - Batch Size: 32 - Model: GoogLeNet

TensorFlow 2.12 - images/sec, More Is Better - bbba: 64.04, 63.78, 43.32

TensorFlow

Device: CPU - Batch Size: 32 - Model: ResNet-50

TensorFlow 2.12 - images/sec, More Is Better - bbba: 18.31, 18.30, 18.25

TensorFlow

Device: CPU - Batch Size: 64 - Model: GoogLeNet

TensorFlow 2.12 - images/sec, More Is Better - bbab: 77.79, 77.76, 77.64

TensorFlow

Device: CPU - Batch Size: 64 - Model: ResNet-50

TensorFlow 2.12 - images/sec, More Is Better - bbba: 22.35, 22.27, 22.20

TensorFlow

Device: CPU - Batch Size: 256 - Model: GoogLeNet

TensorFlow 2.12 - images/sec, More Is Better - bbba: 104.62, 104.20, 103.29

TensorFlow

Device: CPU - Batch Size: 256 - Model: ResNet-50

TensorFlow 2.12 - images/sec, More Is Better - abb: 29.63, 29.43

TensorFlow

Device: CPU - Batch Size: 512 - Model: GoogLeNet

TensorFlow 2.12 - images/sec, More Is Better - abb: 109.76, 109.27

TensorFlow

Device: CPU - Batch Size: 512 - Model: ResNet-50

TensorFlow 2.12 - images/sec, More Is Better - abb: 31.10, 30.95

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better - bba: 18.71, 18.38

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better - bba: 961.13, 977.29

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better - abb: 8.6827, 8.6642

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better - abb: 115.16, 115.41

Neural Magic DeepSparse

Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better - abb: 403.05, 394.45

Neural Magic DeepSparse

Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better - abb: 44.62, 45.59

Neural Magic DeepSparse

Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better - abb: 112.54, 112.23

Neural Magic DeepSparse

Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better - abb: 8.8732, 8.8977

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better - bba: 99.75, 99.49

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better - bba: 180.28, 180.72

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better - abb: 42.65, 42.27

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better - abb: 23.43, 23.64

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better - abb: 139.56, 139.50

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better - abb: 128.78, 128.81

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better - abb: 54.33, 54.33

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better - abb: 18.39, 18.39

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better - bba: 352.04, 351.83

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better - bba: 51.09, 51.11

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better - abb: 112.51, 109.45

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better - abb: 8.8739, 9.1226

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better - bba: 179.57, 178.92

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better - bba: 100.15, 100.52

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better - bba: 66.09, 65.38

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better - bba: 15.12, 15.28

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better - bba: 40.86, 40.33

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better - bba: 440.41, 446.17

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better - bba: 19.72, 19.56

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better - bba: 50.70, 51.11

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better - abb: 91.41, 91.35

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better - abb: 196.56, 196.74

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better - bba: 34.93, 34.71

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better - bba: 28.62, 28.79

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better - bba: 18.71, 18.42

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better - bba: 961.97, 965.21

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better - bba: 8.6925, 8.6680

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better - bba: 115.03, 115.35

oneDNN

Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU

oneDNN 3.1 - ms, Fewer Is Better - bbba: 1.71780, 1.73257, 1.73354 (MIN: 1.64, 1.64, 1.65) - 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU

oneDNN 3.1 - ms, Fewer Is Better - bbba: 3.09364, 3.17173, 3.45599 (MIN: 1.81, 1.84, 1.83) - 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.1 - ms, Fewer Is Better - abbb: 1.63878, 1.69336, 1.74816 (MIN: 1.5, 1.56, 1.56) - 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.1 - ms, Fewer Is Better - bbba: 1.28042, 1.28253, 1.29978 (MIN: 1.07, 1.1, 1.08) - 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.1 - ms, Fewer Is Better - bbba: 5.68651, 5.68728, 5.69353 (MIN: 5.55, 5.54, 5.55) - 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.1 - ms, Fewer Is Better - bbab: 3.33073, 4.05730, 4.30617 (MIN: 2.9, 2.84, 2.98) - 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU

oneDNN 3.1 - ms, Fewer Is Better - abbb: 3.52970, 4.77498, 4.78799 (MIN: 2.77, 2.76, 2.75) - 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU

oneDNN 3.1 - ms, Fewer Is Better - bbab: 14.65, 14.75, 14.80 (MIN: 12.43, 12.24, 12.39) - 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU

oneDNN 3.1 - ms, Fewer Is Better - bbba: 2.73821, 2.73830, 2.74347 (MIN: 2.72, 2.72, 2.72) - 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.1 - ms, Fewer Is Better - bbab: 4.35862, 4.39463, 4.90144 (MIN: 2.47, 2.48, 2.4) - 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.1 - ms, Fewer Is Better - bbba: 0.562273, 0.562744, 0.563515 (MIN: 0.54, 0.54, 0.54) - 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.1 - ms, Fewer Is Better - bbba: 0.688332, 0.691037, 0.693934 (MIN: 0.68, 0.68, 0.69) - 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU

oneDNN 3.1 - ms, Fewer Is Better - bbab: 1374.19, 1384.14, 1386.62 (MIN: 1367.79, 1379.57, 1378.88) - 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU

oneDNN 3.1 - ms, Fewer Is Better - abbb: 786.00, 788.65, 789.00 (MIN: 779.97, 781.04, 779.2) - 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.1 - ms, Fewer Is Better - bbab: 1377.45, 1379.97, 1385.41 (MIN: 1371.04, 1367.35, 1377.53) - 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.1 - ms, Fewer Is Better - bbab: 6.35249, 6.35319, 6.35868 (MIN: 6.3, 6.3, 6.3) - 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.1 - ms, Fewer Is Better - abbb: 8.67071, 8.67211, 8.68912 (MIN: 8.58, 8.57, 8.58) - 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.1 - ms, Fewer Is Better - bbba: 9.51019, 9.51252, 9.56667 (MIN: 9.44, 9.44, 9.44) - 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.1 - ms, Fewer Is Better - bbba: 785.39, 787.69, 788.80 (MIN: 775.34, 778.68, 782.96) - 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.1 - ms, Fewer Is Better - abbb: 1374.37, 1380.03, 1381.33 (MIN: 1363.77, 1374.88, 1376.64) - 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.1 - ms, Fewer Is Better - abbb: 786.92, 789.81, 797.59 (MIN: 779.8, 783.36, 783.04) - 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Timed FFmpeg Compilation

Time To Compile

Timed FFmpeg Compilation 6.0 - Seconds, Fewer Is Better - bbba: 33.18, 33.27, 33.48

John The Ripper

Test: bcrypt

John The Ripper 2023.03.14 - Real C/S, More Is Better - bbba: 48912, 48864, 48821 - 1. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lgmp -pthread -lm -lrt -lz -ldl -lcrypt -lbz2

John The Ripper

Test: WPA PSK

John The Ripper 2023.03.14 - Real C/S, More Is Better - bbba: 214200, 213811, 212818 - 1. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lgmp -pthread -lm -lrt -lz -ldl -lcrypt -lbz2

John The Ripper

Test: Blowfish

John The Ripper 2023.03.14 - Real C/S, More Is Better - abbb: 48756, 48562, 48405 - 1. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lgmp -pthread -lm -lrt -lz -ldl -lcrypt -lbz2

John The Ripper

Test: HMAC-SHA512

John The Ripper 2023.03.14 - Real C/S, More Is Better - bbba: 98271000, 91970000, 91245000 - 1. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lgmp -pthread -lm -lrt -lz -ldl -lcrypt -lbz2

John The Ripper

Test: MD5

John The Ripper 2023.03.14 - Real C/S, More Is Better - bbba: 4719000, 4712000, 4691000 - 1. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lgmp -pthread -lm -lrt -lz -ldl -lcrypt -lbz2

Timed LLVM Compilation

Build System: Ninja

Timed LLVM Compilation 16.0 - Seconds, Fewer Is Better - bbba: 338.89, 339.91, 343.24

Timed LLVM Compilation

Build System: Unix Makefiles

Timed LLVM Compilation 16.0 - Seconds, Fewer Is Better - bbab: 419.85, 439.12, 439.28

Zstd Compression

Compression Level: 3 - Compression Speed

Zstd Compression 1.5.4 - MB/s, More Is Better - abbb: 1664.3, 1640.6, 1584.1 - 1. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression

Compression Level: 8 - Compression Speed

Zstd Compression 1.5.4 - MB/s, More Is Better - abbb: 579.4, 572.3, 545.9 - 1. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression

Compression Level: 8 - Decompression Speed

Zstd Compression 1.5.4 - MB/s, More Is Better - bbba: 1097.6, 1093.7, 1093.3 - 1. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression

Compression Level: 12 - Compression Speed

Zstd Compression 1.5.4 - MB/s, More Is Better - bbba: 153.9, 151.4, 146.2 - 1. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression

Compression Level: 12 - Decompression Speed

Zstd Compression 1.5.4 - MB/s, More Is Better - bbba: 996.5, 996.4, 987.6 - 1. (CC) gcc options: -O3 -pthread -lz -llzma

SPECFEM3D

Model: Layered Halfspace

SPECFEM3D 4.0 - Seconds, Fewer Is Better - abbb: 63.68, 64.46, 64.48 - 1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

Zstd Compression

Compression Level: 3 - Decompression Speed

Zstd Compression 1.5.4 - MB/s, More Is Better - abbb: 1102.7, 1095.8, 1095.1 - 1. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression

Compression Level: 19 - Compression Speed

Zstd Compression 1.5.4 - MB/s, More Is Better - bbba: 12.9, 12.8, 12.8 - 1. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression

Compression Level: 19 - Decompression Speed

Zstd Compression 1.5.4 - MB/s, More Is Better - bbba: 849.0, 846.7, 840.6 - 1. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression

Compression Level: 3, Long Mode - Compression Speed

Zstd Compression 1.5.4 - MB/s, More Is Better - abbb: 325.9, 310.5, 283.0 - 1. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression

Compression Level: 3, Long Mode - Decompression Speed

Zstd Compression 1.5.4 - MB/s, More Is Better - bbba: 1144.5, 1140.2, 1137.0 - 1. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression

Compression Level: 8, Long Mode - Compression Speed

Zstd Compression 1.5.4 - MB/s, More Is Better - babb: 289.9, 282.7, 279.4 - 1. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression

Compression Level: 8, Long Mode - Decompression Speed

Zstd Compression 1.5.4 - MB/s, More Is Better - abbb: 1105.7, 1100.7, 1097.9 - 1. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression

Compression Level: 19, Long Mode - Compression Speed

Zstd Compression 1.5.4 - MB/s, More Is Better - bbab: 6.79, 6.71, 6.45 - 1. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression

Compression Level: 19, Long Mode - Decompression Speed

Zstd Compression 1.5.4 - MB/s, More Is Better - bbba: 834.2, 833.2, 826.0 - 1. (CC) gcc options: -O3 -pthread -lz -llzma

SVT-AV1

Encoder Mode: Preset 4 - Input: Bosphorus 4K

SVT-AV1 1.4 - Frames Per Second, More Is Better - bbba: 2.647, 2.608, 2.550 - 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 8 - Input: Bosphorus 4K

SVT-AV1 1.4 - Frames Per Second, More Is Better - abbb: 36.41, 36.23, 35.89 - 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 12 - Input: Bosphorus 4K

SVT-AV1 1.4 - Frames Per Second, More Is Better - bbba: 117.74, 117.56, 110.28 - 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 13 - Input: Bosphorus 4K

SVT-AV1 1.4 - Frames Per Second, More Is Better - babb: 111.15, 110.64, 110.29 - 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 4 - Input: Bosphorus 1080p

SVT-AV1 1.4 - Frames Per Second, More Is Better - babb: 6.344, 6.223, 6.222 - 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 8 - Input: Bosphorus 1080p

SVT-AV1 1.4 - Frames Per Second, More Is Better - bbba: 86.95, 86.48, 85.66 - 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 12 - Input: Bosphorus 1080p

SVT-AV1 1.4 - Frames Per Second, More Is Better - babb: 235.27, 232.25, 228.89 - 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 13 - Input: Bosphorus 1080p

SVT-AV1 1.4 - Frames Per Second, More Is Better - bbba: 201.73, 201.44, 200.80 - 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Blender

Blend File: BMW27 - Compute: CPU-Only

Blender 3.5 - Seconds, Fewer Is Better - bba: 55.76, 56.07

Blender

Blend File: Classroom - Compute: CPU-Only

Blender 3.5 - Seconds, Fewer Is Better - bba: 159.27, 159.56

Blender

Blend File: Fishy Cat - Compute: CPU-Only

Blender 3.5 - Seconds, Fewer Is Better - abb: 74.47, 74.63

Blender

Blend File: Barbershop - Compute: CPU-Only

Blender 3.5 - Seconds, Fewer Is Better - abb: 579.52, 580.85

Blender

Blend File: Pabellon Barcelona - Compute: CPU-Only

Blender 3.5 - Seconds, Fewer Is Better - abb: 191.67, 192.11

FFmpeg

Encoder: libx264 - Scenario: Live

FFmpeg 6.0 - Seconds, Fewer Is Better - bbba: 32.80, 32.83, 33.20 - 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg

Encoder: libx264 - Scenario: Live

FFmpeg 6.0 - FPS, More Is Better - bbba: 153.96, 153.82, 152.11 - 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg

Encoder: libx265 - Scenario: Live

FFmpeg 6.0 - Seconds, Fewer Is Better - bbba: 115.77, 124.51, 125.37 - 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg

Encoder: libx265 - Scenario: Live

FFmpeg 6.0 - FPS, More Is Better - bbba: 43.62, 40.56, 40.28 - 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg

Encoder: libx264 - Scenario: Upload

FFmpeg 6.0 - Seconds, Fewer Is Better - bbab: 241.13, 241.76, 242.24 - 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg

Encoder: libx264 - Scenario: Upload

FFmpeg 6.0 - FPS, More Is Better - bbab: 10.47, 10.44, 10.42 - 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg

Encoder: libx265 - Scenario: Upload

FFmpeg 6.0 - Seconds, Fewer Is Better - bbab: 235.94, 237.24, 238.65 - 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg

Encoder: libx265 - Scenario: Upload

FFmpeg 6.0 - FPS, More Is Better - bbab: 10.70, 10.64, 10.58 - 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg

Encoder: libx264 - Scenario: Platform

FFmpeg 6.0 - Seconds, Fewer Is Better - bbab: 198.27, 198.35, 198.96 - 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg

Encoder: libx264 - Scenario: Platform

FFmpeg 6.0 - FPS, More Is Better - bbab: 38.21, 38.19, 38.07 - 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg

Encoder: libx265 - Scenario: Platform

FFmpeg 6.0 - Seconds, Fewer Is Better - bbba: 369.22, 369.54, 370.34 - 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg

Encoder: libx265 - Scenario: Platform

FFmpeg 6.0 - FPS, More Is Better - bbba: 20.52, 20.50, 20.45 - 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg

Encoder: libx264 - Scenario: Video On Demand

FFmpeg 6.0 - Seconds, Fewer Is Better - bbba: 197.90, 198.50, 199.18 - 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg

Encoder: libx264 - Scenario: Video On Demand

FFmpeg 6.0 - FPS, More Is Better - bbba: 38.28, 38.16, 38.03 - 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg

Encoder: libx265 - Scenario: Video On Demand

FFmpeg 6.0 - Seconds, Fewer Is Better - bbab: 366.29, 366.57, 367.61 - 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg

Encoder: libx265 - Scenario: Video On Demand

FFmpeg 6.0 - FPS, More Is Better - bbab: 20.68, 20.66, 20.61 - 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

uvg266

Video Input: Bosphorus 4K - Video Preset: Slow

uvg266 0.4.1 - Frames Per Second, More Is Better - bbba: 7.95, 7.91, 7.91

uvg266

Video Input: Bosphorus 4K - Video Preset: Medium

uvg266 0.4.1 - Frames Per Second, More Is Better - bbab: 8.85, 8.83, 8.82

uvg266

Video Input: Bosphorus 1080p - Video Preset: Slow

uvg266 0.4.1 - Frames Per Second, More Is Better - bbba: 18.47, 18.45, 18.30

uvg266

Video Input: Bosphorus 1080p - Video Preset: Medium

uvg266 0.4.1 - Frames Per Second, More Is Better - bbba: 20.22, 20.05, 20.04

uvg266

Video Input: Bosphorus 4K - Video Preset: Very Fast

uvg266 0.4.1 - Frames Per Second, More Is Better - bbba: 18.14, 18.11, 18.07

uvg266

Video Input: Bosphorus 4K - Video Preset: Super Fast

uvg266 0.4.1 - Frames Per Second, More Is Better - babb: 20.33, 20.29, 20.07

uvg266

Video Input: Bosphorus 4K - Video Preset: Ultra Fast

uvg266 0.4.1 - Frames Per Second, More Is Better - abbb: 21.80, 21.62, 21.48

uvg266

Video Input: Bosphorus 1080p - Video Preset: Very Fast

uvg266 0.4.1 - Frames Per Second, More Is Better - abbb: 42.93, 42.62, 42.32

uvg266

Video Input: Bosphorus 1080p - Video Preset: Super Fast

uvg266 0.4.1 - Frames Per Second, More Is Better - bbba: 45.23, 45.10, 44.64

uvg266

Video Input: Bosphorus 1080p - Video Preset: Ultra Fast

uvg266 0.4.1 - Frames Per Second, More Is Better - bbba: 48.40, 48.24, 48.17

VVenC

Video Input: Bosphorus 4K - Video Preset: Fast

VVenC 1.7 - Frames Per Second, More Is Better - abbb: 3.657, 3.634, 3.579 - 1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto -lpthread

VVenC

Video Input: Bosphorus 4K - Video Preset: Faster

VVenC 1.7 - Frames Per Second, More Is Better - babb: 6.026, 5.991, 5.970 - 1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto -lpthread

VVenC

Video Input: Bosphorus 1080p - Video Preset: Fast

VVenC 1.7 - Frames Per Second, More Is Better - bbba: 9.746, 9.641, 9.605 - 1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto -lpthread

VVenC

Video Input: Bosphorus 1080p - Video Preset: Faster

VVenC 1.7 - Frames Per Second, More Is Better - babb: 16.90, 16.73, 16.68 - 1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto -lpthread

Timed Godot Game Engine Compilation

Time To Compile

Timed Godot Game Engine Compilation 4.0 - Seconds, Fewer Is Better - bbab: 232.02, 232.64, 234.22

Embree

Binary: Pathtracer - Model: Crown

Embree 4.0.1 - Frames Per Second, More Is Better - bbab: 28.75, 28.64, 28.49 (MIN/MAX: 28.39/29.15, 28.27/29.08, 28.14/29)

Embree

Binary: Pathtracer ISPC - Model: Crown

Embree 4.0.1 - Frames Per Second, More Is Better - abbb: 29.71, 29.54, 29.50 (MIN/MAX: 29.3/30.17, 29.06/30.1, 29.05/30.07)

Embree

Binary: Pathtracer - Model: Asian Dragon

Embree 4.0.1 - Frames Per Second, More Is Better - bbba: 33.18, 33.17, 33.03 (MIN/MAX: 32.97/33.5, 32.95/33.5, 32.82/33.48)

Embree

Binary: Pathtracer - Model: Asian Dragon Obj

Embree 4.0.1 - Frames Per Second, More Is Better - abbb: 29.60, 29.49, 29.44 (MIN/MAX: 29.37/30.06, 29.24/29.77, 29.24/29.86)

Embree

Binary: Pathtracer ISPC - Model: Asian Dragon

Embree 4.0.1 - Frames Per Second, More Is Better - abbb: 37.88, 37.72, 37.71 (MIN/MAX: 37.64/38.3, 37.47/38.19, 37.4/38.13)

Embree

Binary: Pathtracer ISPC - Model: Asian Dragon Obj

Embree 4.0.1 - Frames Per Second, More Is Better - babb: 32.56, 32.50, 32.44 (MIN/MAX: 32.27/32.95, 32.25/32.9, 32.17/32.79)

Build2

Time To Compile

Build2 0.15 - Seconds, Fewer Is Better - babb: 99.15, 100.27, 100.75

Timed Node.js Compilation

Time To Compile

Timed Node.js Compilation 19.8.1 - Seconds, Fewer Is Better - bbba: 309.09, 309.31, 309.57

nginx

Connections: 100

nginx 1.23.2 - Requests Per Second, More Is Better - abb: 152180.63, 151733.73 - 1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

nginx

Connections: 200

nginx 1.23.2 - Requests Per Second, More Is Better - bba: 149820.05, 147630.66 - 1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

nginx

Connections: 500

nginx 1.23.2 - Requests Per Second, More Is Better - bba: 143874.15, 141761.59 - 1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

nginx

Connections: 1000

nginx 1.23.2 - Requests Per Second, More Is Better - bba: 139997.58, 139200.66 - 1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

OpenSSL

Algorithm: SHA256

OpenSSL 3.1 - byte/s, More Is Better - bbab: 9206384930, 9173311330, 9102032020 - 1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

OpenSSL

Algorithm: SHA512

OpenSSL 3.1 - byte/s, More Is Better - bbba: 10193373610, 10167598450, 10145725330 - 1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

OpenSSL

Algorithm: RSA4096

OpenSSL 3.1 - sign/s, More Is Better - bbab: 8109.8, 8029.4, 8026.7 - 1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

OpenSSL

Algorithm: RSA4096

OpenSSL 3.1 - verify/s, More Is Better - babb: 536533.8, 534883.8, 534630.5 - 1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

OpenSSL

Algorithm: ChaCha20

OpenSSL 3.1 - byte/s, More Is Better - abbb: 150176760350, 150074005090, 150059242840 - 1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

OpenSSL

Algorithm: AES-128-GCM

OpenSSL 3.1 - byte/s, More Is Better - bbba: 159781972650, 159160934620, 159151287770 - 1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

OpenSSL

Algorithm: AES-256-GCM

OpenSSL 3.1 - byte/s, More Is Better - bbba: 118596420950, 118499993460, 118250262800 - 1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

OpenSSL

Algorithm: ChaCha20-Poly1305

OpenSSL 3.1 - byte/s, More Is Better - abbb: 80516466550, 80513755000, 80512557330 - 1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

ClickHouse

100M Rows Hits Dataset, First Run / Cold Cache

ClickHouse 22.12.3.5 - Queries Per Minute, Geo Mean, More Is Better - abbb: 163.13, 157.77, 154.30 (MIN/MAX: 16.41/1200, 16.57/1621.62, 15.97/1578.95)

ClickHouse

100M Rows Hits Dataset, Second Run

ClickHouse 22.12.3.5 - Queries Per Minute, Geo Mean, More Is Better - babb: 211.36, 205.05, 200.24 (MIN/MAX: 18.65/2500, 18.52/1363.64, 18.21/1714.29)

ClickHouse

100M Rows Hits Dataset, Third Run

ClickHouse 22.12.3.5 - Queries Per Minute, Geo Mean, More Is Better - bbba: 211.12, 210.46, 206.05 (MIN/MAX: 18.35/1875, 18.39/1276.6, 18.75/1090.91)

Memcached

Set To Get Ratio: 1:5

Memcached 1.6.19 - Ops/sec, More Is Better - abbb: 2007905.01, 1937351.98, 1672007.69 - 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Memcached

Set To Get Ratio: 1:10

Memcached 1.6.19 - Ops/sec, More Is Better - abbb: 1719545.48, 1700246.18, 1478446.78 - 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Memcached

Set To Get Ratio: 1:100

Memcached 1.6.19 - Ops/sec, More Is Better - bbba: 1587757.54, 1550585.25, 1290869.25 - 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

RocksDB

Test: Random Fill

RocksDB 8.0 - Op/s, More Is Better - abb: 251487, 243680 - 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

RocksDB

Test: Random Read

RocksDB 8.0 - Op/s, More Is Better - abb: 115209715, 114830146 - 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

RocksDB

Test: Update Random

RocksDB 8.0 - Op/s, More Is Better - abb: 233375, 228703 - 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

RocksDB

Test: Sequential Fill

RocksDB 8.0 - Op/s, More Is Better - abb: 259650, 249987 - 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

RocksDB

Test: Random Fill Sync

RocksDB 8.0 - Op/s, More Is Better - bba: 7527, 6971 - 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

RocksDB

Test: Read While Writing

RocksDB 8.0 - Op/s, More Is Better - abb: 4910200, 4875480 - 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

RocksDB

Test: Read Random Write Random

RocksDB 8.0 - Op/s, More Is Better - bba: 2251517, 2228918 - 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

PostgreSQL

Scaling Factor: 1 - Clients: 50 - Mode: Read Only

PostgreSQL 15 - TPS, More Is Better - abbb: 1068707, 1067954, 1052848 - 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL

Scaling Factor: 1 - Clients: 50 - Mode: Read Only - Average Latency

PostgreSQL 15 - ms, Fewer Is Better - abbb: 0.047, 0.047, 0.047 - 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL

Scaling Factor: 1 - Clients: 50 - Mode: Read Write

PostgreSQL 15 - TPS, More Is Better - babb: 811, 809, 808 - 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL

Scaling Factor: 1 - Clients: 50 - Mode: Read Write - Average Latency

PostgreSQL 15 - ms, Fewer Is Better - babb: 61.66, 61.84, 61.91 - 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL

Scaling Factor: 100 - Clients: 50 - Mode: Read Only

PostgreSQL 15 - TPS, More Is Better - abbb: 994655, 993660, 991262 - 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL

Scaling Factor: 100 - Clients: 50 - Mode: Read Only - Average Latency

PostgreSQL 15 - ms, Fewer Is Better - abbb: 0.05, 0.05, 0.05 - 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL

Scaling Factor: 100 - Clients: 50 - Mode: Read Write

PostgreSQL 15 - TPS, More Is Better - abbb: 3946, 3610, 3031 - 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

SPECFEM3D

Model: Mount St. Helens

SPECFEM3D 4.0 - Seconds, Fewer Is Better - babb: 24.13, 24.15, 24.64 - 1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

PostgreSQL

Scaling Factor: 100 - Clients: 50 - Mode: Read Write - Average Latency

PostgreSQL 15 - ms, Fewer Is Better - abbb: 12.67, 13.85, 16.50 - 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm


Phoronix Test Suite v10.8.4