eee

2 x Intel Xeon Gold 5220R testing with a TYAN S7106 (V2.01.B40 BIOS) and ASPEED on Ubuntu 20.04 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2304029-NE-EEE72972408&rdt&grs.
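For anyone wanting to reproduce or extend this comparison, the Phoronix Test Suite can pull the result file directly from OpenBenchmarking.org using the identifier in the URL above. A minimal sketch, assuming a working local phoronix-test-suite install (exact interactive prompts may differ):

    phoronix-test-suite clone 2304029-NE-EEE72972408       # copy this result file locally
    phoronix-test-suite benchmark 2304029-NE-EEE72972408   # re-run the same tests and compare against this result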

Test system (bbba):

Processor: 2 x Intel Xeon Gold 5220R @ 3.90GHz (36 Cores / 72 Threads)
Motherboard: TYAN S7106 (V2.01.B40 BIOS)
Chipset: Intel Sky Lake-E DMI3 Registers
Memory: 94GB
Disk: 500GB Samsung SSD 860
Graphics: ASPEED
Monitor: VE228
Network: 2 x Intel I210 + 2 x QLogic cLOM8214 1/10GbE
OS: Ubuntu 20.04
Kernel: 6.1.0-phx (x86_64)
Desktop: GNOME Shell 3.36.9
Display Server: X Server 1.20.13
Compiler: GCC 9.4.0
File-System: ext4
Screen Resolution: 1920x1080

Kernel Details: Transparent Huge Pages: madvise

Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-Av3uEd/gcc-9-9.4.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Processor Details: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0x5003302

Python Details: Python 2.7.18 + Python 3.8.10

Security Details: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + retbleed: Mitigation of Stuffing + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Mitigation of TSX disabled

[Overview table: side-by-side results for every benchmark in this comparison, one row per test for the compared runs. The individual results follow test by test below.]

TensorFlow

Device: CPU - Batch Size: 32 - Model: GoogLeNet

images/sec, More Is Better - TensorFlow 2.12 - bbba: 64.04 / 63.78 / 43.32

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU

ms, Fewer Is Better - oneDNN 3.1 - bbba: 4.78799 / 4.77498 / 3.52970 (MIN: 2.75 / 2.76 / 2.77). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

PostgreSQL

Scaling Factor: 100 - Clients: 50 - Mode: Read Write - Average Latency

ms, Fewer Is Better - PostgreSQL 15 - bbba: 13.85 / 16.50 / 12.67. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL

Scaling Factor: 100 - Clients: 50 - Mode: Read Write

TPS, More Is Better - PostgreSQL 15 - bbba: 3610 / 3031 / 3946. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

oneDNN

Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU

ms, Fewer Is Better - oneDNN 3.1 - bbba: 4.30617 / 3.33073 / 4.05730 (MIN: 2.98 / 2.9 / 2.84). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Memcached

Set To Get Ratio: 1:100

Ops/sec, More Is Better - Memcached 1.6.19 - bbba: 1550585.25 / 1587757.54 / 1290869.25. 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Memcached

Set To Get Ratio: 1:5

Ops/sec, More Is Better - Memcached 1.6.19 - bbba: 1672007.69 / 1937351.98 / 2007905.01. 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Memcached

Set To Get Ratio: 1:10

Ops/sec, More Is Better - Memcached 1.6.19 - bbba: 1478446.78 / 1700246.18 / 1719545.48. 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Zstd Compression

Compression Level: 3, Long Mode - Compression Speed

MB/s, More Is Better - Zstd Compression 1.5.4 - bbba: 283.0 / 310.5 / 325.9. 1. (CC) gcc options: -O3 -pthread -lz -llzma

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU

ms, Fewer Is Better - oneDNN 3.1 - bbba: 4.90144 / 4.35862 / 4.39463 (MIN: 2.4 / 2.47 / 2.48). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU

ms, Fewer Is Better - oneDNN 3.1 - bbba: 3.09364 / 3.17173 / 3.45599 (MIN: 1.81 / 1.84 / 1.83). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

FFmpeg

Encoder: libx265 - Scenario: Live

Seconds, Fewer Is Better - FFmpeg 6.0 - bbba: 115.77 / 124.51 / 125.37. 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg

Encoder: libx265 - Scenario: Live

FPS, More Is Better - FFmpeg 6.0 - bbba: 43.62 / 40.56 / 40.28. 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

RocksDB

Test: Random Fill Sync

Op/s, More Is Better - RocksDB 8.0 - bba: 7527 / 6971. 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

John The Ripper

Test: HMAC-SHA512

Real C/S, More Is Better - John The Ripper 2023.03.14 - bbba: 91970000 / 98271000 / 91245000. 1. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lgmp -pthread -lm -lrt -lz -ldl -lcrypt -lbz2

SVT-AV1

Encoder Mode: Preset 12 - Input: Bosphorus 4K

Frames Per Second, More Is Better - SVT-AV1 1.4 - bbba: 117.74 / 117.56 / 110.28. 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

oneDNN

Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU

ms, Fewer Is Better - oneDNN 3.1 - bbba: 1.74816 / 1.69336 / 1.63878 (MIN: 1.56 / 1.56 / 1.5). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Zstd Compression

Compression Level: 8 - Compression Speed

MB/s, More Is Better - Zstd Compression 1.5.4 - bbba: 545.9 / 572.3 / 579.4. 1. (CC) gcc options: -O3 -pthread -lz -llzma

ClickHouse

100M Rows Hits Dataset, First Run / Cold Cache

Queries Per Minute, Geo Mean, More Is Better - ClickHouse 22.12.3.5 - bbba: 157.77 / 154.30 / 163.13 (MIN: 16.57 / MAX: 1621.62; MIN: 15.97 / MAX: 1578.95; MIN: 16.41 / MAX: 1200)

ClickHouse

100M Rows Hits Dataset, Second Run

Queries Per Minute, Geo Mean, More Is Better - ClickHouse 22.12.3.5 - bbba: 211.36 / 200.24 / 205.05 (MIN: 18.65 / MAX: 2500; MIN: 18.21 / MAX: 1714.29; MIN: 18.52 / MAX: 1363.64)

Zstd Compression

Compression Level: 19, Long Mode - Compression Speed

MB/s, More Is Better - Zstd Compression 1.5.4 - bbba: 6.45 / 6.79 / 6.71. 1. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression

Compression Level: 12 - Compression Speed

MB/s, More Is Better - Zstd Compression 1.5.4 - bbba: 153.9 / 151.4 / 146.2. 1. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression

Compression Level: 3 - Compression Speed

MB/s, More Is Better - Zstd Compression 1.5.4 - bbba: 1640.6 / 1584.1 / 1664.3. 1. (CC) gcc options: -O3 -pthread -lz -llzma

Timed LLVM Compilation

Build System: Unix Makefiles

Seconds, Fewer Is Better - Timed LLVM Compilation 16.0 - bbba: 439.28 / 419.85 / 439.12

RocksDB

Test: Sequential Fill

Op/s, More Is Better - RocksDB 8.0 - bba: 249987 / 259650. 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

SVT-AV1

Encoder Mode: Preset 4 - Input: Bosphorus 4K

Frames Per Second, More Is Better - SVT-AV1 1.4 - bbba: 2.608 / 2.647 / 2.550. 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Zstd Compression

Compression Level: 8, Long Mode - Compression Speed

MB/s, More Is Better - Zstd Compression 1.5.4 - bbba: 289.9 / 279.4 / 282.7. 1. (CC) gcc options: -O3 -pthread -lz -llzma

RocksDB

Test: Random Fill

Op/s, More Is Better - RocksDB 8.0 - bba: 243680 / 251487. 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream

ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.3.2 - bba: 9.1226 / 8.8739

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream

items/sec, More Is Better - Neural Magic DeepSparse 1.3.2 - bba: 109.45 / 112.51

SVT-AV1

Encoder Mode: Preset 12 - Input: Bosphorus 1080p

Frames Per Second, More Is Better - SVT-AV1 1.4 - bbba: 235.27 / 228.89 / 232.25. 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

TensorFlow

Device: CPU - Batch Size: 16 - Model: AlexNet

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 16 - Model: AlexNetbbba2040608010084.4882.2283.20

ClickHouse

100M Rows Hits Dataset, Third Run

OpenBenchmarking.orgQueries Per Minute, Geo Mean, More Is BetterClickHouse 22.12.3.5100M Rows Hits Dataset, Third Runbbba50100150200250210.46211.12206.05MIN: 18.39 / MAX: 1276.6MIN: 18.35 / MAX: 1875MIN: 18.75 / MAX: 1090.91

VVenC

Video Input: Bosphorus 4K - Video Preset: Fast

OpenBenchmarking.orgFrames Per Second, More Is BetterVVenC 1.7Video Input: Bosphorus 4K - Video Preset: Fastbbba0.82281.64562.46843.29124.1143.5793.6343.6571. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto -lpthread

Neural Magic DeepSparse

Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.3.2Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Streambba90180270360450394.45403.05

Neural Magic DeepSparse

Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.3.2Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Streambba102030405045.5944.62

SPECFEM3D

Model: Mount St. Helens

OpenBenchmarking.orgSeconds, Fewer Is BetterSPECFEM3D 4.0Model: Mount St. Helensbbba61218243024.1324.6424.151. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

RocksDB

Test: Update Random

OpenBenchmarking.orgOp/s, More Is BetterRocksDB 8.0Test: Update Randombba50K100K150K200K250K2287032333751. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

SVT-AV1

Encoder Mode: Preset 4 - Input: Bosphorus 1080p

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.4Encoder Mode: Preset 4 - Input: Bosphorus 1080pbbba2468106.3446.2226.2231. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SPECFEM3D

Model: Water-layered Halfspace

OpenBenchmarking.orgSeconds, Fewer Is BetterSPECFEM3D 4.0Model: Water-layered Halfspacebbba142842567061.3760.5160.241. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.3.2Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Streambba51015202518.7118.38

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.3.2Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Streambba2004006008001000961.13977.29

Build2

Time To Compile

OpenBenchmarking.orgSeconds, Fewer Is BetterBuild2 0.15Time To Compilebbba2040608010099.15100.75100.27

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.3.2Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Streambba51015202518.7118.42

oneDNN

Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.1Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPUbbba0.29250.5850.87751.171.46251.280421.282531.29978MIN: 1.07MIN: 1.1MIN: 1.081. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

SVT-AV1

Encoder Mode: Preset 8 - Input: Bosphorus 1080p

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.4Encoder Mode: Preset 8 - Input: Bosphorus 1080pbbba2040608010086.9586.4885.661. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

PostgreSQL

Scaling Factor: 1 - Clients: 50 - Mode: Read Only

OpenBenchmarking.orgTPS, More Is BetterPostgreSQL 15Scaling Factor: 1 - Clients: 50 - Mode: Read Onlybbba200K400K600K800K1000K1052848106795410687071. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

nginx

Connections: 500

OpenBenchmarking.orgRequests Per Second, More Is Betternginx 1.23.2Connections: 500bba30K60K90K120K150K143874.15141761.591. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

uvg266

Video Input: Bosphorus 4K - Video Preset: Ultra Fast

OpenBenchmarking.orgFrames Per Second, More Is Betteruvg266 0.4.1Video Input: Bosphorus 4K - Video Preset: Ultra Fastbbba51015202521.4821.6221.80

nginx

Connections: 200

OpenBenchmarking.orgRequests Per Second, More Is Betternginx 1.23.2Connections: 200bba30K60K90K120K150K149820.05147630.661. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

VVenC

Video Input: Bosphorus 1080p - Video Preset: Fast

OpenBenchmarking.orgFrames Per Second, More Is BetterVVenC 1.7Video Input: Bosphorus 1080p - Video Preset: Fastbbba36912159.6419.7469.6051. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto -lpthread

SVT-AV1

Encoder Mode: Preset 8 - Input: Bosphorus 4K

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.4Encoder Mode: Preset 8 - Input: Bosphorus 4Kbbba81624324036.2335.8936.411. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

uvg266

Video Input: Bosphorus 1080p - Video Preset: Very Fast

OpenBenchmarking.orgFrames Per Second, More Is Betteruvg266 0.4.1Video Input: Bosphorus 1080p - Video Preset: Very Fastbbba102030405042.6242.3242.93

SPECFEM3D

Model: Tomographic Model

OpenBenchmarking.orgSeconds, Fewer Is BetterSPECFEM3D 4.0Model: Tomographic Modelbbba61218243025.2725.0125.351. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.1Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPUbbba2004006008001000797.59789.81786.92MIN: 783.04MIN: 783.36MIN: 779.81. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

VVenC

Video Input: Bosphorus 1080p - Video Preset: Faster

OpenBenchmarking.orgFrames Per Second, More Is BetterVVenC 1.7Video Input: Bosphorus 1080p - Video Preset: Fasterbbba4812162016.9016.6816.731. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto -lpthread

uvg266

Video Input: Bosphorus 1080p - Video Preset: Super Fast

OpenBenchmarking.orgFrames Per Second, More Is Betteruvg266 0.4.1Video Input: Bosphorus 1080p - Video Preset: Super Fastbbba102030405045.1045.2344.64

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.3.2Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Streambba91827364540.8640.33

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.3.2Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Streambba100200300400500440.41446.17

uvg266

Video Input: Bosphorus 4K - Video Preset: Super Fast

OpenBenchmarking.orgFrames Per Second, More Is Betteruvg266 0.4.1Video Input: Bosphorus 4K - Video Preset: Super Fastbbba51015202520.3320.0720.29

TensorFlow

Device: CPU - Batch Size: 256 - Model: GoogLeNet

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 256 - Model: GoogLeNetbbba20406080100104.62104.20103.29

Timed LLVM Compilation

Build System: Ninja

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed LLVM Compilation 16.0Build System: Ninjabbba70140210280350339.91338.89343.24

SPECFEM3D

Model: Layered Halfspace

OpenBenchmarking.orgSeconds, Fewer Is BetterSPECFEM3D 4.0Model: Layered Halfspacebbba142842567064.4664.4863.681. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

FFmpeg

Encoder: libx264 - Scenario: Live

OpenBenchmarking.orgSeconds, Fewer Is BetterFFmpeg 6.0Encoder: libx264 - Scenario: Livebbba81624324032.8032.8333.201. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg

Encoder: libx264 - Scenario: Live

OpenBenchmarking.orgFPS, More Is BetterFFmpeg 6.0Encoder: libx264 - Scenario: Livebbba306090120150153.96153.82152.111. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg

Encoder: libx265 - Scenario: Upload

OpenBenchmarking.orgSeconds, Fewer Is BetterFFmpeg 6.0Encoder: libx265 - Scenario: Uploadbbba50100150200250238.65235.94237.241. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

OpenSSL

Algorithm: SHA256

OpenBenchmarking.orgbyte/s, More Is BetterOpenSSL 3.1Algorithm: SHA256bbba2000M4000M6000M8000M10000M9102032020920638493091733113301. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

FFmpeg

Encoder: libx265 - Scenario: Upload

OpenBenchmarking.orgFPS, More Is BetterFFmpeg 6.0Encoder: libx265 - Scenario: Uploadbbba369121510.5810.7010.641. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.3.2Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Streambba153045607566.0965.38

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.3.2Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Streambba4812162015.1215.28

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.1Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPUbbba4812162014.8014.6514.75MIN: 12.39MIN: 12.43MIN: 12.241. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

TensorFlow

Device: CPU - Batch Size: 64 - Model: AlexNet

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 64 - Model: AlexNetbbba306090120150146.48145.41146.93

OpenSSL

Algorithm: RSA4096

OpenBenchmarking.orgsign/s, More Is BetterOpenSSL 3.1Algorithm: RSA4096bbba2K4K6K8K10K8026.78109.88029.41. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

RocksDB

Test: Read Random Write Random

OpenBenchmarking.orgOp/s, More Is BetterRocksDB 8.0Test: Read Random Write Randombba500K1000K1500K2000K2500K225151722289181. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

SPECFEM3D

Model: Homogeneous Halfspace

OpenBenchmarking.orgSeconds, Fewer Is BetterSPECFEM3D 4.0Model: Homogeneous Halfspacebbba71421283531.0931.0731.381. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

Zstd Compression

Compression Level: 19 - Decompression Speed

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.4Compression Level: 19 - Decompression Speedbbba2004006008001000846.7849.0840.61. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression

Compression Level: 19, Long Mode - Decompression Speed

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.4Compression Level: 19, Long Mode - Decompression Speedbbba2004006008001000833.2834.2826.01. (CC) gcc options: -O3 -pthread -lz -llzma

Timed Godot Game Engine Compilation

Time To Compile

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Godot Game Engine Compilation 4.0Time To Compilebbba50100150200250234.22232.02232.64

VVenC

Video Input: Bosphorus 4K - Video Preset: Faster

OpenBenchmarking.orgFrames Per Second, More Is BetterVVenC 1.7Video Input: Bosphorus 4K - Video Preset: Fasterbbba2468106.0265.9705.9911. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto -lpthread

uvg266

Video Input: Bosphorus 1080p - Video Preset: Slow

OpenBenchmarking.orgFrames Per Second, More Is Betteruvg266 0.4.1Video Input: Bosphorus 1080p - Video Preset: Slowbbba51015202518.4518.4718.30

oneDNN

Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.1Harness: IP Shapes 1D - Data Type: f32 - Engine: CPUbbba0.390.781.171.561.951.717801.732571.73354MIN: 1.64MIN: 1.64MIN: 1.651. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Timed FFmpeg Compilation

Time To Compile

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed FFmpeg Compilation 6.0Time To Compilebbba81624324033.2733.1833.48

oneDNN

Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.1Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPUbbba300600900120015001386.621374.191384.14MIN: 1378.88MIN: 1367.79MIN: 1379.571. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.3.2Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Streambba61218243023.6423.43

Zstd Compression

Compression Level: 12 - Decompression Speed

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.4Compression Level: 12 - Decompression Speedbbba2004006008001000996.4996.5987.61. (CC) gcc options: -O3 -pthread -lz -llzma

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.3.2Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Streambba102030405042.2742.65

uvg266

Video Input: Bosphorus 1080p - Video Preset: Medium

OpenBenchmarking.orgFrames Per Second, More Is Betteruvg266 0.4.1Video Input: Bosphorus 1080p - Video Preset: Mediumbbba51015202520.0520.2220.04

Embree

Binary: Pathtracer - Model: Crown

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.0.1Binary: Pathtracer - Model: Crownbbba71421283528.4928.7528.64MIN: 28.14 / MAX: 29MIN: 28.39 / MAX: 29.15MIN: 28.27 / MAX: 29.08

TensorFlow

Device: CPU - Batch Size: 16 - Model: GoogLeNet

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 16 - Model: GoogLeNetbbba122436486050.9651.1550.71

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.3.2Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Streambba122436486050.7051.11

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.3.2Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Streambba51015202519.7219.56

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.1Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPUbbba0.15610.31220.46830.62440.78050.6883320.6910370.693934MIN: 0.68MIN: 0.68MIN: 0.691. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

SVT-AV1

Encoder Mode: Preset 13 - Input: Bosphorus 4K

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.4Encoder Mode: Preset 13 - Input: Bosphorus 4Kbbba20406080100111.15110.29110.641. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Zstd Compression

Compression Level: 19 - Compression Speed

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.4Compression Level: 19 - Compression Speedbbba369121512.812.912.81. (CC) gcc options: -O3 -pthread -lz -llzma

John The Ripper

Test: Blowfish

OpenBenchmarking.orgReal C/S, More Is BetterJohn The Ripper 2023.03.14Test: Blowfishbbba10K20K30K40K50K4840548562487561. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lgmp -pthread -lm -lrt -lz -ldl -lcrypt -lbz2

RocksDB

Test: Read While Writing

OpenBenchmarking.orgOp/s, More Is BetterRocksDB 8.0Test: Read While Writingbba1.1M2.2M3.3M4.4M5.5M487548049102001. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Zstd Compression

Compression Level: 8, Long Mode - Decompression Speed

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.4Compression Level: 8, Long Mode - Decompression Speedbbba20040060080010001097.91100.71105.71. (CC) gcc options: -O3 -pthread -lz -llzma

Embree

Binary: Pathtracer ISPC - Model: Crown

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.0.1Binary: Pathtracer ISPC - Model: Crownbbba71421283529.5029.5429.71MIN: 29.05 / MAX: 30.07MIN: 29.06 / MAX: 30.1MIN: 29.3 / MAX: 30.17

Zstd Compression

Compression Level: 3 - Decompression Speed

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.4Compression Level: 3 - Decompression Speedbbba20040060080010001095.11095.81102.71. (CC) gcc options: -O3 -pthread -lz -llzma

TensorFlow

Device: CPU - Batch Size: 256 - Model: ResNet-50

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 256 - Model: ResNet-50bba71421283529.4329.63

TensorFlow

Device: CPU - Batch Size: 64 - Model: ResNet-50

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 64 - Model: ResNet-50bbba51015202522.2722.3522.20

Zstd Compression

Compression Level: 3, Long Mode - Decompression Speed

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.4Compression Level: 3, Long Mode - Decompression Speedbbba20040060080010001144.51140.21137.01. (CC) gcc options: -O3 -pthread -lz -llzma

FFmpeg

Encoder: libx264 - Scenario: Video On Demand

OpenBenchmarking.orgFPS, More Is BetterFFmpeg 6.0Encoder: libx264 - Scenario: Video On Demandbbba91827364538.1638.2838.031. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

John The Ripper

Test: WPA PSK

OpenBenchmarking.orgReal C/S, More Is BetterJohn The Ripper 2023.03.14Test: WPA PSKbbba50K100K150K200K250K2138112142002128181. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lgmp -pthread -lm -lrt -lz -ldl -lcrypt -lbz2

FFmpeg

Encoder: libx264 - Scenario: Video On Demand

OpenBenchmarking.orgSeconds, Fewer Is BetterFFmpeg 6.0Encoder: libx264 - Scenario: Video On Demandbbba4080120160200198.50197.90199.181. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.3.2Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Streambba81624324034.9334.71

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.3.2Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Streambba71421283528.6228.79

TensorFlow

Device: CPU - Batch Size: 32 - Model: AlexNet

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 32 - Model: AlexNetbbba20406080100109.74109.07109.53

John The Ripper

Test: MD5

OpenBenchmarking.orgReal C/S, More Is BetterJohn The Ripper 2023.03.14Test: MD5bbba1000K2000K3000K4000K5000K4719000471200046910001. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lgmp -pthread -lm -lrt -lz -ldl -lcrypt -lbz2

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.1Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPUbbba36912159.510199.512529.56667MIN: 9.44MIN: 9.44MIN: 9.441. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.1Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPUbbba300600900120015001385.411377.451379.97MIN: 1377.53MIN: 1371.04MIN: 1367.351. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

nginx

Connections: 1000

OpenBenchmarking.orgRequests Per Second, More Is Betternginx 1.23.2Connections: 1000bba30K60K90K120K150K139997.58139200.661. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

Blender

Blend File: BMW27 - Compute: CPU-Only

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.5Blend File: BMW27 - Compute: CPU-Onlybba132639526555.7656.07

Embree

Binary: Pathtracer - Model: Asian Dragon Obj

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.0.1Binary: Pathtracer - Model: Asian Dragon Objbbba71421283529.4429.4929.60MIN: 29.24 / MAX: 29.86MIN: 29.24 / MAX: 29.77MIN: 29.37 / MAX: 30.06

oneDNN

Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.1Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPUbbba300600900120015001381.331380.031374.37MIN: 1376.64MIN: 1374.88MIN: 1363.771. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

uvg266

Video Input: Bosphorus 4K - Video Preset: Slow

OpenBenchmarking.orgFrames Per Second, More Is Betteruvg266 0.4.1Video Input: Bosphorus 4K - Video Preset: Slowbbba2468107.957.917.91

TensorFlow

Device: CPU - Batch Size: 512 - Model: ResNet-50

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 512 - Model: ResNet-50bba71421283530.9531.10

FFmpeg

Encoder: libx264 - Scenario: Upload

OpenBenchmarking.orgFPS, More Is BetterFFmpeg 6.0Encoder: libx264 - Scenario: Uploadbbba369121510.4210.4710.441. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

uvg266

Video Input: Bosphorus 1080p - Video Preset: Ultra Fast

OpenBenchmarking.orgFrames Per Second, More Is Betteruvg266 0.4.1Video Input: Bosphorus 1080p - Video Preset: Ultra Fastbbba112233445548.2448.4048.17

OpenSSL

Algorithm: SHA512

OpenBenchmarking.orgbyte/s, More Is BetterOpenSSL 3.1Algorithm: SHA512bbba2000M4000M6000M8000M10000M1016759845010193373610101457253301. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

SVT-AV1

Encoder Mode: Preset 13 - Input: Bosphorus 1080p

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.4Encoder Mode: Preset 13 - Input: Bosphorus 1080pbbba4080120160200201.73201.44200.801. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

FFmpeg

Encoder: libx264 - Scenario: Upload

OpenBenchmarking.orgSeconds, Fewer Is BetterFFmpeg 6.0Encoder: libx264 - Scenario: Uploadbbba50100150200250242.24241.13241.761. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

Embree

Binary: Pathtracer ISPC - Model: Asian Dragon

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.0.1Binary: Pathtracer ISPC - Model: Asian Dragonbbba91827364537.7137.7237.88MIN: 37.4 / MAX: 38.13MIN: 37.47 / MAX: 38.19MIN: 37.64 / MAX: 38.3

TensorFlow

Device: CPU - Batch Size: 512 - Model: GoogLeNet

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 512 - Model: GoogLeNetbba20406080100109.27109.76

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.1Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPUbbba2004006008001000785.39787.69788.80MIN: 775.34MIN: 778.68MIN: 782.961. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Embree

Binary: Pathtracer - Model: Asian Dragon

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.0.1Binary: Pathtracer - Model: Asian Dragonbbba81624324033.1833.1733.03MIN: 32.97 / MAX: 33.5MIN: 32.95 / MAX: 33.5MIN: 32.82 / MAX: 33.48

PostgreSQL

Scaling Factor: 1 - Clients: 50 - Mode: Read Write - Average Latency

OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL 15Scaling Factor: 1 - Clients: 50 - Mode: Read Write - Average Latencybbba142842567061.6661.9161.841. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

OpenSSL

Algorithm: AES-128-GCM

OpenBenchmarking.orgbyte/s, More Is BetterOpenSSL 3.1Algorithm: AES-128-GCMbbba30000M60000M90000M120000M150000M1597819726501591609346201591512877701. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

Zstd Compression

Compression Level: 8 - Decompression Speed

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.4Compression Level: 8 - Decompression Speedbbba20040060080010001093.71097.61093.31. (CC) gcc options: -O3 -pthread -lz -llzma

uvg266

Video Input: Bosphorus 4K - Video Preset: Very Fast

OpenBenchmarking.orgFrames Per Second, More Is Betteruvg266 0.4.1Video Input: Bosphorus 4K - Video Preset: Very Fastbbba4812162018.1118.1418.07

TensorFlow

Device: CPU - Batch Size: 256 - Model: AlexNet

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 256 - Model: AlexNetbbba50100150200250212.50213.32212.65

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.1Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPUbbba2004006008001000788.65789.00786.00MIN: 781.04MIN: 779.2MIN: 779.971. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

TensorFlow

Device: CPU - Batch Size: 512 - Model: AlexNet

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 512 - Model: AlexNetbbba50100150200250227.85227.00227.16

PostgreSQL

Scaling Factor: 1 - Clients: 50 - Mode: Read Write

OpenBenchmarking.orgTPS, More Is BetterPostgreSQL 15Scaling Factor: 1 - Clients: 50 - Mode: Read Writebbba20040060080010008118088091. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.3.2Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Streambba20406080100100.15100.52

FFmpeg

Encoder: libx264 - Scenario: Platform

OpenBenchmarking.orgFPS, More Is BetterFFmpeg 6.0Encoder: libx264 - Scenario: Platformbbba91827364538.0738.2138.191. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.3.2Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Streambba4080120160200179.57178.92

FFmpeg

Encoder: libx265 - Scenario: Video On Demand

OpenBenchmarking.orgSeconds, Fewer Is BetterFFmpeg 6.0Encoder: libx265 - Scenario: Video On Demandbbba80160240320400367.61366.29366.571. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

OpenSSL

Algorithm: RSA4096

OpenBenchmarking.orgverify/s, More Is BetterOpenSSL 3.1Algorithm: RSA4096bbba110K220K330K440K550K536533.8534630.5534883.81. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

FFmpeg

Encoder: libx264 - Scenario: Platform

OpenBenchmarking.orgSeconds, Fewer Is BetterFFmpeg 6.0Encoder: libx264 - Scenario: Platformbbba4080120160200198.96198.27198.351. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

Embree

Binary: Pathtracer ISPC - Model: Asian Dragon Obj

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.0.1Binary: Pathtracer ISPC - Model: Asian Dragon Objbbba81624324032.5632.4432.50MIN: 32.27 / MAX: 32.95MIN: 32.17 / MAX: 32.79MIN: 32.25 / MAX: 32.9

FFmpeg

Encoder: libx265 - Scenario: Platform

OpenBenchmarking.orgFPS, More Is BetterFFmpeg 6.0Encoder: libx265 - Scenario: Platformbbba51015202520.5220.5020.451. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

PostgreSQL

Scaling Factor: 100 - Clients: 50 - Mode: Read Only

OpenBenchmarking.orgTPS, More Is BetterPostgreSQL 15Scaling Factor: 100 - Clients: 50 - Mode: Read Onlybbba200K400K600K800K1000K9912629936609946551. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

uvg266

Video Input: Bosphorus 4K - Video Preset: Medium

OpenBenchmarking.orgFrames Per Second, More Is Betteruvg266 0.4.1Video Input: Bosphorus 4K - Video Preset: Mediumbbba2468108.828.858.83

FFmpeg

Encoder: libx265 - Scenario: Video On Demand

OpenBenchmarking.orgFPS, More Is BetterFFmpeg 6.0Encoder: libx265 - Scenario: Video On Demandbbba51015202520.6120.6820.661. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.3.2Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Streambba2004006008001000961.97965.21

RocksDB

Test: Random Read

OpenBenchmarking.orgOp/s, More Is BetterRocksDB 8.0Test: Random Readbba20M40M60M80M100M1148301461152097151. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

TensorFlow

Device: CPU - Batch Size: 32 - Model: ResNet-50

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 32 - Model: ResNet-50bbba51015202518.3118.3018.25

Google Draco

Model: Church Facade

OpenBenchmarking.orgms, Fewer Is BetterGoogle Draco 1.5.6Model: Church Facadebba2K4K6K8K10K938194101. (CXX) g++ options: -O3

FFmpeg

Encoder: libx265 - Scenario: Platform

OpenBenchmarking.orgSeconds, Fewer Is BetterFFmpeg 6.0Encoder: libx265 - Scenario: Platformbbba80160240320400369.22369.54370.341. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

nginx

Connections: 100

OpenBenchmarking.orgRequests Per Second, More Is Betternginx 1.23.2Connections: 100bba30K60K90K120K150K151733.73152180.631. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

OpenSSL

Algorithm: AES-256-GCM

OpenBenchmarking.orgbyte/s, More Is BetterOpenSSL 3.1Algorithm: AES-256-GCMbbba30000M60000M90000M120000M150000M1185964209501184999934601182502628001. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

Google Draco

Model: Lion

OpenBenchmarking.orgms, Fewer Is BetterGoogle Draco 1.5.6Model: Lionbba14002800420056007000671366941. (CXX) g++ options: -O3

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.3.2Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Streambba2468108.69258.6680

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.3.2Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Streambba306090120150115.03115.35

Neural Magic DeepSparse

Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.3.2Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Streambba2468108.89778.8732

Neural Magic DeepSparse

Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.3.2Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Streambba306090120150112.23112.54

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.3.2Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Streambba2040608010099.7599.49

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.3.2Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Streambba4080120160200180.28180.72

Blender

Blend File: Pabellon Barcelona - Compute: CPU-Only

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.5Blend File: Pabellon Barcelona - Compute: CPU-Onlybba4080120160200192.11191.67

Blender

Blend File: Barbershop - Compute: CPU-Only

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.5Blend File: Barbershop - Compute: CPU-Onlybba130260390520650580.85579.52

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.1Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPUbbba0.12680.25360.38040.50720.6340.5627440.5622730.563515MIN: 0.54MIN: 0.54MIN: 0.541. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Blender

Blend File: Fishy Cat - Compute: CPU-Only

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.5Blend File: Fishy Cat - Compute: CPU-Onlybba2040608010074.6374.47

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.3.2Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Streambba306090120150115.41115.16

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.3.2Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Streambba2468108.66428.6827

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.1Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPUbbba2468108.689128.672118.67071MIN: 8.58MIN: 8.57MIN: 8.581. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

TensorFlow

Device: CPU - Batch Size: 64 - Model: GoogLeNet

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 64 - Model: GoogLeNetbbba2040608010077.6477.7977.76

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.1Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPUbbba0.61731.23461.85192.46923.08652.738212.738302.74347MIN: 2.72MIN: 2.72MIN: 2.721. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

John The Ripper

Test: bcrypt

OpenBenchmarking.orgReal C/S, More Is BetterJohn The Ripper 2023.03.14Test: bcryptbbba10K20K30K40K50K4886448912488211. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lgmp -pthread -lm -lrt -lz -ldl -lcrypt -lbz2

Blender

Blend File: Classroom - Compute: CPU-Only

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.5Blend File: Classroom - Compute: CPU-Onlybba4080120160200159.27159.56

Timed Node.js Compilation

Time To Compile

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Node.js Compilation 19.8.1Time To Compilebbba70140210280350309.31309.09309.57

TensorFlow

Device: CPU - Batch Size: 16 - Model: ResNet-50

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 16 - Model: ResNet-50bbba4812162015.1015.0915.11

oneDNN

Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.1Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPUbbba1.2812.5623.8435.1246.4055.687285.686515.69353MIN: 5.54MIN: 5.55MIN: 5.551. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.1Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPUbbba2468106.358686.352496.35319MIN: 6.3MIN: 6.3MIN: 6.31. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.3.2Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Streambba4080120160200196.74196.56

OpenSSL

Algorithm: ChaCha20

OpenBenchmarking.orgbyte/s, More Is BetterOpenSSL 3.1Algorithm: ChaCha20bbba30000M60000M90000M120000M150000M1500740050901500592428401501767603501. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.3.2Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Streambba80160240320400352.04351.83

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.3.2Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Streambba2040608010091.3591.41

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.3.2Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Streambba122436486051.0951.11

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.3.2Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Streambba306090120150139.50139.56

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.3.2Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Streambba306090120150128.81128.78

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.3.2Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Streambba122436486054.3354.33

OpenSSL

Algorithm: ChaCha20-Poly1305

OpenBenchmarking.orgbyte/s, More Is BetterOpenSSL 3.1Algorithm: ChaCha20-Poly1305bbba20000M40000M60000M80000M100000M8051255733080513755000805164665501. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.3.2Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Streambba51015202518.3918.39

PostgreSQL

Scaling Factor: 100 - Clients: 50 - Mode: Read Only - Average Latency

OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL 15Scaling Factor: 100 - Clients: 50 - Mode: Read Only - Average Latencybbba0.01130.02260.03390.04520.05650.050.050.051. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL

Scaling Factor: 1 - Clients: 50 - Mode: Read Only - Average Latency

OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL 15Scaling Factor: 1 - Clients: 50 - Mode: Read Only - Average Latencybbba0.01060.02120.03180.04240.0530.0470.0470.0471. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
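As a rough sanity check on the pgbench figures, the reported average latency lines up with clients divided by throughput (assuming the usual pgbench relationship of average latency ≈ clients / TPS): with 50 clients and roughly 1,068,707 TPS in the scaling-factor-1 read-only run shown earlier, 50 / 1,068,707 ≈ 0.0000468 s ≈ 0.047 ms, matching the value above.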


Phoronix Test Suite v10.8.5