eee

2 x Intel Xeon Gold 5220R testing with a TYAN S7106 (V2.01.B40 BIOS) and ASPEED on Ubuntu 20.04 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2304029-NE-EEE72972408&rdt&grw.
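This public result can typically also be pulled down and compared against locally by passing its identifier to the Phoronix Test Suite, e.g. phoronix-test-suite benchmark 2304029-NE-EEE72972408, which offers to run the same test selection on the local system for a side-by-side comparison.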

System Details

Processor: 2 x Intel Xeon Gold 5220R @ 3.90GHz (36 Cores / 72 Threads)
Motherboard: TYAN S7106 (V2.01.B40 BIOS)
Chipset: Intel Sky Lake-E DMI3 Registers
Memory: 94GB
Disk: 500GB Samsung SSD 860
Graphics: ASPEED
Monitor: VE228
Network: 2 x Intel I210 + 2 x QLogic cLOM8214 1/10GbE
OS: Ubuntu 20.04
Kernel: 6.1.0-phx (x86_64)
Desktop: GNOME Shell 3.36.9
Display Server: X Server 1.20.13
Compiler: GCC 9.4.0
File-System: ext4
Screen Resolution: 1920x1080

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-Av3uEd/gcc-9-9.4.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0x5003302
Python Details: Python 2.7.18 + Python 3.8.10
Security Details: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + retbleed: Mitigation of Stuffing + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Mitigation of TSX disabled

Result Overview: per-test values for the three result identifiers (b, bb, a) covering Google Draco, TensorFlow, SPECFEM3D, Neural Magic DeepSparse, oneDNN, timed compilation (FFmpeg, LLVM, Godot, Build2, Node.js), Zstd compression, SVT-AV1 / uvg266 / VVenC / FFmpeg video encoding, Blender, Embree, John The Ripper, nginx, OpenSSL, ClickHouse, Memcached, RocksDB, and PostgreSQL pgbench. The individual test results follow below.
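Several of the composite figures below (for example the ClickHouse results) are reported as geometric means, and a geometric mean over normalized per-test scores is also the usual way to reduce a result file like this to a single number per run. A minimal Python sketch of that aggregation, using illustrative values rather than numbers taken from this result file:

import math

def geo_mean(values):
    # Geometric mean: exponent of the average of the natural logarithms.
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Hypothetical per-test scores for one run, each normalized against a baseline
# run (values above 1.0 mean faster than the baseline); not from this file.
normalized = [1.02, 0.98, 1.05, 1.00, 0.97]
print(round(geo_mean(normalized), 4))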

Google Draco

Model: Lion

Google Draco 1.5.6 - ms, Fewer Is Better - bb: 6713, a: 6694. 1. (CXX) g++ options: -O3

Google Draco

Model: Church Facade

Google Draco 1.5.6 - ms, Fewer Is Better - bb: 9381, a: 9410. 1. (CXX) g++ options: -O3

TensorFlow

Device: CPU - Batch Size: 16 - Model: AlexNet

TensorFlow 2.12 - images/sec, More Is Better - b: 84.48, bb: 82.22, a: 83.20

TensorFlow

Device: CPU - Batch Size: 32 - Model: AlexNet

TensorFlow 2.12 - images/sec, More Is Better - b: 109.74, bb: 109.07, a: 109.53

TensorFlow

Device: CPU - Batch Size: 64 - Model: AlexNet

TensorFlow 2.12 - images/sec, More Is Better - b: 146.48, bb: 145.41, a: 146.93

SPECFEM3D

Model: Water-layered Halfspace

SPECFEM3D 4.0 - Seconds, Fewer Is Better - b: 61.37, bb: 60.51, a: 60.24. 1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

SPECFEM3D

Model: Homogeneous Halfspace

SPECFEM3D 4.0 - Seconds, Fewer Is Better - b: 31.09, bb: 31.07, a: 31.38. 1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

SPECFEM3D

Model: Tomographic Model

SPECFEM3D 4.0 - Seconds, Fewer Is Better - b: 25.27, bb: 25.01, a: 25.35. 1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

TensorFlow

Device: CPU - Batch Size: 256 - Model: AlexNet

TensorFlow 2.12 - images/sec, More Is Better - b: 212.50, bb: 213.32, a: 212.65

TensorFlow

Device: CPU - Batch Size: 512 - Model: AlexNet

TensorFlow 2.12 - images/sec, More Is Better - b: 227.85, bb: 227.00, a: 227.16

TensorFlow

Device: CPU - Batch Size: 16 - Model: GoogLeNet

TensorFlow 2.12 - images/sec, More Is Better - b: 50.96, bb: 51.15, a: 50.71

TensorFlow

Device: CPU - Batch Size: 16 - Model: ResNet-50

TensorFlow 2.12 - images/sec, More Is Better - b: 15.10, bb: 15.09, a: 15.11

TensorFlow

Device: CPU - Batch Size: 32 - Model: GoogLeNet

TensorFlow 2.12 - images/sec, More Is Better - b: 64.04, bb: 63.78, a: 43.32

TensorFlow

Device: CPU - Batch Size: 32 - Model: ResNet-50

TensorFlow 2.12 - images/sec, More Is Better - b: 18.31, bb: 18.30, a: 18.25

TensorFlow

Device: CPU - Batch Size: 64 - Model: GoogLeNet

TensorFlow 2.12 - images/sec, More Is Better - b: 77.64, bb: 77.79, a: 77.76

TensorFlow

Device: CPU - Batch Size: 64 - Model: ResNet-50

TensorFlow 2.12 - images/sec, More Is Better - b: 22.27, bb: 22.35, a: 22.20

TensorFlow

Device: CPU - Batch Size: 256 - Model: GoogLeNet

TensorFlow 2.12 - images/sec, More Is Better - b: 104.62, bb: 104.20, a: 103.29

TensorFlow

Device: CPU - Batch Size: 256 - Model: ResNet-50

TensorFlow 2.12 - images/sec, More Is Better - bb: 29.43, a: 29.63

TensorFlow

Device: CPU - Batch Size: 512 - Model: GoogLeNet

TensorFlow 2.12 - images/sec, More Is Better - bb: 109.27, a: 109.76

TensorFlow

Device: CPU - Batch Size: 512 - Model: ResNet-50

TensorFlow 2.12 - images/sec, More Is Better - bb: 30.95, a: 31.10

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better - bb: 18.71, a: 18.38

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better - bb: 961.13, a: 977.29

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better - bb: 8.6642, a: 8.6827

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better - bb: 115.41, a: 115.16

Neural Magic DeepSparse

Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better - bb: 394.45, a: 403.05

Neural Magic DeepSparse

Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better - bb: 45.59, a: 44.62

Neural Magic DeepSparse

Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better - bb: 112.23, a: 112.54

Neural Magic DeepSparse

Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better - bb: 8.8977, a: 8.8732

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better - bb: 99.75, a: 99.49

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better - bb: 180.28, a: 180.72

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better - bb: 42.27, a: 42.65

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better - bb: 23.64, a: 23.43

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better - bb: 139.50, a: 139.56

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better - bb: 128.81, a: 128.78

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better - bb: 54.33, a: 54.33

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better - bb: 18.39, a: 18.39

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better - bb: 352.04, a: 351.83

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better - bb: 51.09, a: 51.11

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better - bb: 109.45, a: 112.51

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better - bb: 9.1226, a: 8.8739

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better - bb: 179.57, a: 178.92

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better - bb: 100.15, a: 100.52

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better - bb: 66.09, a: 65.38

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better - bb: 15.12, a: 15.28

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better - bb: 40.86, a: 40.33

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better - bb: 440.41, a: 446.17

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better - bb: 19.72, a: 19.56

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better - bb: 50.70, a: 51.11

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better - bb: 91.35, a: 91.41

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better - bb: 196.74, a: 196.56

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better - bb: 34.93, a: 34.71

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better - bb: 28.62, a: 28.79

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better - bb: 18.71, a: 18.42

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better - bb: 961.97, a: 965.21

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better - bb: 8.6925, a: 8.6680

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better - bb: 115.03, a: 115.35

oneDNN

Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU

oneDNN 3.1 - ms, Fewer Is Better - b: 1.71780 (MIN: 1.64), bb: 1.73257 (MIN: 1.64), a: 1.73354 (MIN: 1.65). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU

oneDNN 3.1 - ms, Fewer Is Better - b: 3.09364 (MIN: 1.81), bb: 3.17173 (MIN: 1.84), a: 3.45599 (MIN: 1.83). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.1 - ms, Fewer Is Better - b: 1.74816 (MIN: 1.56), bb: 1.69336 (MIN: 1.56), a: 1.63878 (MIN: 1.5). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.1 - ms, Fewer Is Better - b: 1.28042 (MIN: 1.07), bb: 1.28253 (MIN: 1.1), a: 1.29978 (MIN: 1.08). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.1 - ms, Fewer Is Better - b: 5.68728 (MIN: 5.54), bb: 5.68651 (MIN: 5.55), a: 5.69353 (MIN: 5.55). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.1 - ms, Fewer Is Better - b: 4.30617 (MIN: 2.98), bb: 3.33073 (MIN: 2.9), a: 4.05730 (MIN: 2.84). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU

oneDNN 3.1 - ms, Fewer Is Better - b: 4.78799 (MIN: 2.75), bb: 4.77498 (MIN: 2.76), a: 3.52970 (MIN: 2.77). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU

oneDNN 3.1 - ms, Fewer Is Better - b: 14.80 (MIN: 12.39), bb: 14.65 (MIN: 12.43), a: 14.75 (MIN: 12.24). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU

oneDNN 3.1 - ms, Fewer Is Better - b: 2.73821 (MIN: 2.72), bb: 2.73830 (MIN: 2.72), a: 2.74347 (MIN: 2.72). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.1 - ms, Fewer Is Better - b: 4.90144 (MIN: 2.4), bb: 4.35862 (MIN: 2.47), a: 4.39463 (MIN: 2.48). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.1 - ms, Fewer Is Better - b: 0.562744 (MIN: 0.54), bb: 0.562273 (MIN: 0.54), a: 0.563515 (MIN: 0.54). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.1 - ms, Fewer Is Better - b: 0.688332 (MIN: 0.68), bb: 0.691037 (MIN: 0.68), a: 0.693934 (MIN: 0.69). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU

oneDNN 3.1 - ms, Fewer Is Better - b: 1386.62 (MIN: 1378.88), bb: 1374.19 (MIN: 1367.79), a: 1384.14 (MIN: 1379.57). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU

oneDNN 3.1 - ms, Fewer Is Better - b: 788.65 (MIN: 781.04), bb: 789.00 (MIN: 779.2), a: 786.00 (MIN: 779.97). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.1 - ms, Fewer Is Better - b: 1385.41 (MIN: 1377.53), bb: 1377.45 (MIN: 1371.04), a: 1379.97 (MIN: 1367.35). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.1 - ms, Fewer Is Better - b: 6.35868 (MIN: 6.3), bb: 6.35249 (MIN: 6.3), a: 6.35319 (MIN: 6.3). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.1 - ms, Fewer Is Better - b: 8.68912 (MIN: 8.58), bb: 8.67211 (MIN: 8.57), a: 8.67071 (MIN: 8.58). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.1 - ms, Fewer Is Better - b: 9.51019 (MIN: 9.44), bb: 9.51252 (MIN: 9.44), a: 9.56667 (MIN: 9.44). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.1 - ms, Fewer Is Better - b: 785.39 (MIN: 775.34), bb: 787.69 (MIN: 778.68), a: 788.80 (MIN: 782.96). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.1 - ms, Fewer Is Better - b: 1381.33 (MIN: 1376.64), bb: 1380.03 (MIN: 1374.88), a: 1374.37 (MIN: 1363.77). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.1 - ms, Fewer Is Better - b: 797.59 (MIN: 783.04), bb: 789.81 (MIN: 783.36), a: 786.92 (MIN: 779.8). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Timed FFmpeg Compilation

Time To Compile

Timed FFmpeg Compilation 6.0 - Seconds, Fewer Is Better - b: 33.27, bb: 33.18, a: 33.48

John The Ripper

Test: bcrypt

John The Ripper 2023.03.14 - Real C/S, More Is Better - b: 48864, bb: 48912, a: 48821. 1. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lgmp -pthread -lm -lrt -lz -ldl -lcrypt -lbz2

John The Ripper

Test: WPA PSK

John The Ripper 2023.03.14 - Real C/S, More Is Better - b: 213811, bb: 214200, a: 212818. 1. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lgmp -pthread -lm -lrt -lz -ldl -lcrypt -lbz2

John The Ripper

Test: Blowfish

John The Ripper 2023.03.14 - Real C/S, More Is Better - b: 48405, bb: 48562, a: 48756. 1. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lgmp -pthread -lm -lrt -lz -ldl -lcrypt -lbz2

John The Ripper

Test: HMAC-SHA512

John The Ripper 2023.03.14 - Real C/S, More Is Better - b: 91970000, bb: 98271000, a: 91245000. 1. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lgmp -pthread -lm -lrt -lz -ldl -lcrypt -lbz2

John The Ripper

Test: MD5

John The Ripper 2023.03.14 - Real C/S, More Is Better - b: 4719000, bb: 4712000, a: 4691000. 1. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lgmp -pthread -lm -lrt -lz -ldl -lcrypt -lbz2

Timed LLVM Compilation

Build System: Ninja

Timed LLVM Compilation 16.0 - Seconds, Fewer Is Better - b: 339.91, bb: 338.89, a: 343.24

Timed LLVM Compilation

Build System: Unix Makefiles

Timed LLVM Compilation 16.0 - Seconds, Fewer Is Better - b: 439.28, bb: 419.85, a: 439.12

Zstd Compression

Compression Level: 3 - Compression Speed

Zstd Compression 1.5.4 - MB/s, More Is Better - b: 1640.6, bb: 1584.1, a: 1664.3. 1. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression

Compression Level: 8 - Compression Speed

Zstd Compression 1.5.4 - MB/s, More Is Better - b: 545.9, bb: 572.3, a: 579.4. 1. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression

Compression Level: 8 - Decompression Speed

Zstd Compression 1.5.4 - MB/s, More Is Better - b: 1093.7, bb: 1097.6, a: 1093.3. 1. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression

Compression Level: 12 - Compression Speed

Zstd Compression 1.5.4 - MB/s, More Is Better - b: 153.9, bb: 151.4, a: 146.2. 1. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression

Compression Level: 12 - Decompression Speed

Zstd Compression 1.5.4 - MB/s, More Is Better - b: 996.4, bb: 996.5, a: 987.6. 1. (CC) gcc options: -O3 -pthread -lz -llzma

SPECFEM3D

Model: Layered Halfspace

SPECFEM3D 4.0 - Seconds, Fewer Is Better - b: 64.46, bb: 64.48, a: 63.68. 1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

Zstd Compression

Compression Level: 3 - Decompression Speed

Zstd Compression 1.5.4 - MB/s, More Is Better - b: 1095.1, bb: 1095.8, a: 1102.7. 1. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression

Compression Level: 19 - Compression Speed

Zstd Compression 1.5.4 - MB/s, More Is Better - b: 12.8, bb: 12.9, a: 12.8. 1. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression

Compression Level: 19 - Decompression Speed

Zstd Compression 1.5.4 - MB/s, More Is Better - b: 846.7, bb: 849.0, a: 840.6. 1. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression

Compression Level: 3, Long Mode - Compression Speed

Zstd Compression 1.5.4 - MB/s, More Is Better - b: 283.0, bb: 310.5, a: 325.9. 1. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression

Compression Level: 3, Long Mode - Decompression Speed

Zstd Compression 1.5.4 - MB/s, More Is Better - b: 1144.5, bb: 1140.2, a: 1137.0. 1. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression

Compression Level: 8, Long Mode - Compression Speed

Zstd Compression 1.5.4 - MB/s, More Is Better - b: 289.9, bb: 279.4, a: 282.7. 1. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression

Compression Level: 8, Long Mode - Decompression Speed

Zstd Compression 1.5.4 - MB/s, More Is Better - b: 1097.9, bb: 1100.7, a: 1105.7. 1. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression

Compression Level: 19, Long Mode - Compression Speed

Zstd Compression 1.5.4 - MB/s, More Is Better - b: 6.45, bb: 6.79, a: 6.71. 1. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression

Compression Level: 19, Long Mode - Decompression Speed

Zstd Compression 1.5.4 - MB/s, More Is Better - b: 833.2, bb: 834.2, a: 826.0. 1. (CC) gcc options: -O3 -pthread -lz -llzma

SVT-AV1

Encoder Mode: Preset 4 - Input: Bosphorus 4K

SVT-AV1 1.4 - Frames Per Second, More Is Better - b: 2.608, bb: 2.647, a: 2.550. 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 8 - Input: Bosphorus 4K

SVT-AV1 1.4 - Frames Per Second, More Is Better - b: 36.23, bb: 35.89, a: 36.41. 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 12 - Input: Bosphorus 4K

SVT-AV1 1.4 - Frames Per Second, More Is Better - b: 117.74, bb: 117.56, a: 110.28. 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 13 - Input: Bosphorus 4K

SVT-AV1 1.4 - Frames Per Second, More Is Better - b: 111.15, bb: 110.29, a: 110.64. 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 4 - Input: Bosphorus 1080p

SVT-AV1 1.4 - Frames Per Second, More Is Better - b: 6.344, bb: 6.222, a: 6.223. 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 8 - Input: Bosphorus 1080p

SVT-AV1 1.4 - Frames Per Second, More Is Better - b: 86.95, bb: 86.48, a: 85.66. 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 12 - Input: Bosphorus 1080p

SVT-AV1 1.4 - Frames Per Second, More Is Better - b: 235.27, bb: 228.89, a: 232.25. 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 13 - Input: Bosphorus 1080p

SVT-AV1 1.4 - Frames Per Second, More Is Better - b: 201.73, bb: 201.44, a: 200.80. 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Blender

Blend File: BMW27 - Compute: CPU-Only

Blender 3.5 - Seconds, Fewer Is Better - bb: 55.76, a: 56.07

Blender

Blend File: Classroom - Compute: CPU-Only

Blender 3.5 - Seconds, Fewer Is Better - bb: 159.27, a: 159.56

Blender

Blend File: Fishy Cat - Compute: CPU-Only

Blender 3.5 - Seconds, Fewer Is Better - bb: 74.63, a: 74.47

Blender

Blend File: Barbershop - Compute: CPU-Only

Blender 3.5 - Seconds, Fewer Is Better - bb: 580.85, a: 579.52

Blender

Blend File: Pabellon Barcelona - Compute: CPU-Only

Blender 3.5 - Seconds, Fewer Is Better - bb: 192.11, a: 191.67

FFmpeg

Encoder: libx264 - Scenario: Live

FFmpeg 6.0 - Seconds, Fewer Is Better - b: 32.80, bb: 32.83, a: 33.20. 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg

Encoder: libx264 - Scenario: Live

FFmpeg 6.0 - FPS, More Is Better - b: 153.96, bb: 153.82, a: 152.11. 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
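For each FFmpeg scenario the seconds and FPS charts describe the same encode from two angles: the reported FPS multiplied by the encode time works out to an essentially constant frame count, e.g. 153.96 FPS x 32.80 s and 152.11 FPS x 33.20 s both come to roughly 5,050 frames for the Live input.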

FFmpeg

Encoder: libx265 - Scenario: Live

FFmpeg 6.0 - Seconds, Fewer Is Better - b: 115.77, bb: 124.51, a: 125.37. 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg

Encoder: libx265 - Scenario: Live

FFmpeg 6.0 - FPS, More Is Better - b: 43.62, bb: 40.56, a: 40.28. 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg

Encoder: libx264 - Scenario: Upload

FFmpeg 6.0 - Seconds, Fewer Is Better - b: 242.24, bb: 241.13, a: 241.76. 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg

Encoder: libx264 - Scenario: Upload

FFmpeg 6.0 - FPS, More Is Better - b: 10.42, bb: 10.47, a: 10.44. 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg

Encoder: libx265 - Scenario: Upload

FFmpeg 6.0 - Seconds, Fewer Is Better - b: 238.65, bb: 235.94, a: 237.24. 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg

Encoder: libx265 - Scenario: Upload

FFmpeg 6.0 - FPS, More Is Better - b: 10.58, bb: 10.70, a: 10.64. 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg

Encoder: libx264 - Scenario: Platform

FFmpeg 6.0 - Seconds, Fewer Is Better - b: 198.96, bb: 198.27, a: 198.35. 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg

Encoder: libx264 - Scenario: Platform

FFmpeg 6.0 - FPS, More Is Better - b: 38.07, bb: 38.21, a: 38.19. 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg

Encoder: libx265 - Scenario: Platform

FFmpeg 6.0 - Seconds, Fewer Is Better - b: 369.22, bb: 369.54, a: 370.34. 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg

Encoder: libx265 - Scenario: Platform

FFmpeg 6.0 - FPS, More Is Better - b: 20.52, bb: 20.50, a: 20.45. 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg

Encoder: libx264 - Scenario: Video On Demand

FFmpeg 6.0 - Seconds, Fewer Is Better - b: 198.50, bb: 197.90, a: 199.18. 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg

Encoder: libx264 - Scenario: Video On Demand

FFmpeg 6.0 - FPS, More Is Better - b: 38.16, bb: 38.28, a: 38.03. 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg

Encoder: libx265 - Scenario: Video On Demand

FFmpeg 6.0 - Seconds, Fewer Is Better - b: 367.61, bb: 366.29, a: 366.57. 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg

Encoder: libx265 - Scenario: Video On Demand

FFmpeg 6.0 - FPS, More Is Better - b: 20.61, bb: 20.68, a: 20.66. 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

uvg266

Video Input: Bosphorus 4K - Video Preset: Slow

uvg266 0.4.1 - Frames Per Second, More Is Better - b: 7.95, bb: 7.91, a: 7.91

uvg266

Video Input: Bosphorus 4K - Video Preset: Medium

uvg266 0.4.1 - Frames Per Second, More Is Better - b: 8.82, bb: 8.85, a: 8.83

uvg266

Video Input: Bosphorus 1080p - Video Preset: Slow

uvg266 0.4.1 - Frames Per Second, More Is Better - b: 18.45, bb: 18.47, a: 18.30

uvg266

Video Input: Bosphorus 1080p - Video Preset: Medium

uvg266 0.4.1 - Frames Per Second, More Is Better - b: 20.05, bb: 20.22, a: 20.04

uvg266

Video Input: Bosphorus 4K - Video Preset: Very Fast

uvg266 0.4.1 - Frames Per Second, More Is Better - b: 18.11, bb: 18.14, a: 18.07

uvg266

Video Input: Bosphorus 4K - Video Preset: Super Fast

uvg266 0.4.1 - Frames Per Second, More Is Better - b: 20.33, bb: 20.07, a: 20.29

uvg266

Video Input: Bosphorus 4K - Video Preset: Ultra Fast

uvg266 0.4.1 - Frames Per Second, More Is Better - b: 21.48, bb: 21.62, a: 21.80

uvg266

Video Input: Bosphorus 1080p - Video Preset: Very Fast

uvg266 0.4.1 - Frames Per Second, More Is Better - b: 42.62, bb: 42.32, a: 42.93

uvg266

Video Input: Bosphorus 1080p - Video Preset: Super Fast

uvg266 0.4.1 - Frames Per Second, More Is Better - b: 45.10, bb: 45.23, a: 44.64

uvg266

Video Input: Bosphorus 1080p - Video Preset: Ultra Fast

uvg266 0.4.1 - Frames Per Second, More Is Better - b: 48.24, bb: 48.40, a: 48.17

VVenC

Video Input: Bosphorus 4K - Video Preset: Fast

VVenC 1.7 - Frames Per Second, More Is Better - b: 3.579, bb: 3.634, a: 3.657. 1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto -lpthread

VVenC

Video Input: Bosphorus 4K - Video Preset: Faster

VVenC 1.7 - Frames Per Second, More Is Better - b: 6.026, bb: 5.970, a: 5.991. 1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto -lpthread

VVenC

Video Input: Bosphorus 1080p - Video Preset: Fast

VVenC 1.7 - Frames Per Second, More Is Better - b: 9.641, bb: 9.746, a: 9.605. 1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto -lpthread

VVenC

Video Input: Bosphorus 1080p - Video Preset: Faster

VVenC 1.7 - Frames Per Second, More Is Better - b: 16.90, bb: 16.68, a: 16.73. 1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto -lpthread

Timed Godot Game Engine Compilation

Time To Compile

Timed Godot Game Engine Compilation 4.0 - Seconds, Fewer Is Better - b: 234.22, bb: 232.02, a: 232.64

Embree

Binary: Pathtracer - Model: Crown

Embree 4.0.1 - Frames Per Second, More Is Better - b: 28.49 (MIN: 28.14 / MAX: 29), bb: 28.75 (MIN: 28.39 / MAX: 29.15), a: 28.64 (MIN: 28.27 / MAX: 29.08)

Embree

Binary: Pathtracer ISPC - Model: Crown

Embree 4.0.1 - Frames Per Second, More Is Better - b: 29.50 (MIN: 29.05 / MAX: 30.07), bb: 29.54 (MIN: 29.06 / MAX: 30.1), a: 29.71 (MIN: 29.3 / MAX: 30.17)

Embree

Binary: Pathtracer - Model: Asian Dragon

Embree 4.0.1 - Frames Per Second, More Is Better - b: 33.18 (MIN: 32.97 / MAX: 33.5), bb: 33.17 (MIN: 32.95 / MAX: 33.5), a: 33.03 (MIN: 32.82 / MAX: 33.48)

Embree

Binary: Pathtracer - Model: Asian Dragon Obj

Embree 4.0.1 - Frames Per Second, More Is Better - b: 29.44 (MIN: 29.24 / MAX: 29.86), bb: 29.49 (MIN: 29.24 / MAX: 29.77), a: 29.60 (MIN: 29.37 / MAX: 30.06)

Embree

Binary: Pathtracer ISPC - Model: Asian Dragon

Embree 4.0.1 - Frames Per Second, More Is Better - b: 37.71 (MIN: 37.4 / MAX: 38.13), bb: 37.72 (MIN: 37.47 / MAX: 38.19), a: 37.88 (MIN: 37.64 / MAX: 38.3)

Embree

Binary: Pathtracer ISPC - Model: Asian Dragon Obj

Embree 4.0.1 - Frames Per Second, More Is Better - b: 32.56 (MIN: 32.27 / MAX: 32.95), bb: 32.44 (MIN: 32.17 / MAX: 32.79), a: 32.50 (MIN: 32.25 / MAX: 32.9)

Build2

Time To Compile

Build2 0.15 - Seconds, Fewer Is Better - b: 99.15, bb: 100.75, a: 100.27

Timed Node.js Compilation

Time To Compile

Timed Node.js Compilation 19.8.1 - Seconds, Fewer Is Better - b: 309.31, bb: 309.09, a: 309.57

nginx

Connections: 100

nginx 1.23.2 - Requests Per Second, More Is Better - bb: 151733.73, a: 152180.63. 1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

nginx

Connections: 200

nginx 1.23.2 - Requests Per Second, More Is Better - bb: 149820.05, a: 147630.66. 1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

nginx

Connections: 500

nginx 1.23.2 - Requests Per Second, More Is Better - bb: 143874.15, a: 141761.59. 1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

nginx

Connections: 1000

nginx 1.23.2 - Requests Per Second, More Is Better - bb: 139997.58, a: 139200.66. 1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

OpenSSL

Algorithm: SHA256

OpenSSL 3.1 - byte/s, More Is Better - b: 9102032020, bb: 9206384930, a: 9173311330. 1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

OpenSSL

Algorithm: SHA512

OpenSSL 3.1 - byte/s, More Is Better - b: 10167598450, bb: 10193373610, a: 10145725330. 1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

OpenSSL

Algorithm: RSA4096

OpenSSL 3.1 - sign/s, More Is Better - b: 8026.7, bb: 8109.8, a: 8029.4. 1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

OpenSSL

Algorithm: RSA4096

OpenSSL 3.1 - verify/s, More Is Better - b: 536533.8, bb: 534630.5, a: 534883.8. 1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

OpenSSL

Algorithm: ChaCha20

OpenSSL 3.1 - byte/s, More Is Better - b: 150074005090, bb: 150059242840, a: 150176760350. 1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

OpenSSL

Algorithm: AES-128-GCM

OpenSSL 3.1 - byte/s, More Is Better - b: 159781972650, bb: 159160934620, a: 159151287770. 1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

OpenSSL

Algorithm: AES-256-GCM

OpenSSL 3.1 - byte/s, More Is Better - b: 118596420950, bb: 118499993460, a: 118250262800. 1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

OpenSSL

Algorithm: ChaCha20-Poly1305

OpenSSL 3.1 - byte/s, More Is Better - b: 80512557330, bb: 80513755000, a: 80516466550. 1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

ClickHouse

100M Rows Hits Dataset, First Run / Cold Cache

ClickHouse 22.12.3.5 - Queries Per Minute, Geo Mean, More Is Better - b: 157.77 (MIN: 16.57 / MAX: 1621.62), bb: 154.30 (MIN: 15.97 / MAX: 1578.95), a: 163.13 (MIN: 16.41 / MAX: 1200)

ClickHouse

100M Rows Hits Dataset, Second Run

ClickHouse 22.12.3.5 - Queries Per Minute, Geo Mean, More Is Better - b: 211.36 (MIN: 18.65 / MAX: 2500), bb: 200.24 (MIN: 18.21 / MAX: 1714.29), a: 205.05 (MIN: 18.52 / MAX: 1363.64)

ClickHouse

100M Rows Hits Dataset, Third Run

ClickHouse 22.12.3.5 - Queries Per Minute, Geo Mean, More Is Better - b: 210.46 (MIN: 18.39 / MAX: 1276.6), bb: 211.12 (MIN: 18.35 / MAX: 1875), a: 206.05 (MIN: 18.75 / MAX: 1090.91)

Memcached

Set To Get Ratio: 1:5

Memcached 1.6.19 - Ops/sec, More Is Better - b: 1672007.69, bb: 1937351.98, a: 2007905.01. 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Memcached

Set To Get Ratio: 1:10

Memcached 1.6.19 - Ops/sec, More Is Better - b: 1478446.78, bb: 1700246.18, a: 1719545.48. 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Memcached

Set To Get Ratio: 1:100

Memcached 1.6.19 - Ops/sec, More Is Better - b: 1550585.25, bb: 1587757.54, a: 1290869.25. 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

RocksDB

Test: Random Fill

RocksDB 8.0 - Op/s, More Is Better - bb: 243680, a: 251487. 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

RocksDB

Test: Random Read

RocksDB 8.0 - Op/s, More Is Better - bb: 114830146, a: 115209715. 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

RocksDB

Test: Update Random

RocksDB 8.0 - Op/s, More Is Better - bb: 228703, a: 233375. 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

RocksDB

Test: Sequential Fill

RocksDB 8.0 - Op/s, More Is Better - bb: 249987, a: 259650. 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

RocksDB

Test: Random Fill Sync

RocksDB 8.0 - Op/s, More Is Better - bb: 7527, a: 6971. 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

RocksDB

Test: Read While Writing

RocksDB 8.0 - Op/s, More Is Better - bb: 4875480, a: 4910200. 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

RocksDB

Test: Read Random Write Random

RocksDB 8.0 - Op/s, More Is Better - bb: 2251517, a: 2228918. 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

PostgreSQL

Scaling Factor: 1 - Clients: 50 - Mode: Read Only

PostgreSQL 15 - TPS, More Is Better - b: 1052848, bb: 1067954, a: 1068707. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL

Scaling Factor: 1 - Clients: 50 - Mode: Read Only - Average Latency

PostgreSQL 15 - ms, Fewer Is Better - b: 0.047, bb: 0.047, a: 0.047. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
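The latency figures follow directly from the throughput above: with 50 clients and roughly 1.05-1.07 million read-only TPS, the expected average latency is about 50 / 1,060,000 s, i.e. approximately 0.047 ms, matching the value reported for all three runs.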

PostgreSQL

Scaling Factor: 1 - Clients: 50 - Mode: Read Write

PostgreSQL 15 - TPS, More Is Better - b: 811, bb: 808, a: 809. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL

Scaling Factor: 1 - Clients: 50 - Mode: Read Write - Average Latency

PostgreSQL 15 - ms, Fewer Is Better - b: 61.66, bb: 61.91, a: 61.84. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL

Scaling Factor: 100 - Clients: 50 - Mode: Read Only

PostgreSQL 15 - TPS, More Is Better - b: 991262, bb: 993660, a: 994655. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL

Scaling Factor: 100 - Clients: 50 - Mode: Read Only - Average Latency

PostgreSQL 15 - ms, Fewer Is Better - b: 0.05, bb: 0.05, a: 0.05. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL

Scaling Factor: 100 - Clients: 50 - Mode: Read Write

PostgreSQL 15 - TPS, More Is Better - b: 3610, bb: 3031, a: 3946. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

SPECFEM3D

Model: Mount St. Helens

SPECFEM3D 4.0 - Seconds, Fewer Is Better - b: 24.13, bb: 24.64, a: 24.15. 1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

PostgreSQL

Scaling Factor: 100 - Clients: 50 - Mode: Read Write - Average Latency

PostgreSQL 15 - ms, Fewer Is Better - b: 13.85, bb: 16.50, a: 12.67. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm


Phoronix Test Suite v10.8.5