eee

2 x Intel Xeon Gold 5220R testing with a TYAN S7106 (V2.01.B40 BIOS) and ASPEED on Ubuntu 20.04 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2304029-NE-EEE72972408&grs&sro.
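The same comparison can be reproduced locally by pointing the Phoronix Test Suite at this public result ID. A minimal sketch, assuming the phoronix-test-suite CLI is installed and that this version accepts an OpenBenchmarking.org result ID as the benchmark argument (treat that call as an assumption, not a documented guarantee):

    # Sketch: re-run the tests behind this result and merge local numbers in.
    # Assumes the "phoronix-test-suite" binary is on PATH.
    import subprocess

    RESULT_ID = "2304029-NE-EEE72972408"

    def rerun_for_comparison(result_id: str) -> None:
        # Passing a public result ID to "benchmark" is expected to download the
        # result file and run the same tests for a side-by-side comparison.
        subprocess.run(["phoronix-test-suite", "benchmark", result_id], check=True)

    if __name__ == "__main__":
        rerun_for_comparison(RESULT_ID)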

Test System Details (runs a, b, and bb shared this configuration):

Processor: 2 x Intel Xeon Gold 5220R @ 3.90GHz (36 Cores / 72 Threads)
Motherboard: TYAN S7106 (V2.01.B40 BIOS)
Chipset: Intel Sky Lake-E DMI3 Registers
Memory: 94GB
Disk: 500GB Samsung SSD 860
Graphics: ASPEED
Monitor: VE228
Network: 2 x Intel I210 + 2 x QLogic cLOM8214 1/10GbE
OS: Ubuntu 20.04
Kernel: 6.1.0-phx (x86_64)
Desktop: GNOME Shell 3.36.9
Display Server: X Server 1.20.13
Compiler: GCC 9.4.0
File-System: ext4
Screen Resolution: 1920x1080

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-Av3uEd/gcc-9-9.4.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0x5003302
Python Details: Python 2.7.18 + Python 3.8.10
Security Details: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + retbleed: Mitigation of Stuffing + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Mitigation of TSX disabled

Result Overview: side-by-side summary table of every test result for runs a, b, and bb; the same values are broken out per test in the sections below.

TensorFlow

Device: CPU - Batch Size: 32 - Model: GoogLeNet

TensorFlow 2.12 - images/sec, More Is Better - a: 43.32, b: 64.04, bb: 63.78

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU

oneDNN 3.1 - ms, Fewer Is Better - a: 3.52970 (MIN: 2.77), b: 4.78799 (MIN: 2.75), bb: 4.77498 (MIN: 2.76)

PostgreSQL

Scaling Factor: 100 - Clients: 50 - Mode: Read Write - Average Latency

PostgreSQL 15 - ms, Fewer Is Better - a: 12.67, b: 13.85, bb: 16.50

PostgreSQL

Scaling Factor: 100 - Clients: 50 - Mode: Read Write

PostgreSQL 15 - TPS, More Is Better - a: 3946, b: 3610, bb: 3031
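The two pgbench charts above describe the same run from different angles: with 50 concurrent clients, the reported average latency is essentially clients / TPS. A quick cross-check in Python using the numbers reported above:

    # Cross-check: pgbench average latency vs. transactions per second,
    # Scaling Factor 100 / 50 clients / Read Write, reported values per run.
    clients = 50
    reported = {"a": (3946, 12.67), "b": (3610, 13.85), "bb": (3031, 16.50)}
    for run, (tps, latency_ms) in reported.items():
        derived_ms = clients / tps * 1000  # per-transaction latency, in ms
        print(f"{run}: derived {derived_ms:.2f} ms vs reported {latency_ms} ms")
    # The derived and reported figures agree: 12.67, 13.85, 16.50 ms.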

oneDNN

Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.1 - ms, Fewer Is Better - a: 4.05730 (MIN: 2.84), b: 4.30617 (MIN: 2.98), bb: 3.33073 (MIN: 2.9)

Memcached

Set To Get Ratio: 1:100

Memcached 1.6.19 - Ops/sec, More Is Better - a: 1290869.25, b: 1550585.25, bb: 1587757.54

Memcached

Set To Get Ratio: 1:5

Memcached 1.6.19 - Ops/sec, More Is Better - a: 2007905.01, b: 1672007.69, bb: 1937351.98

Memcached

Set To Get Ratio: 1:10

Memcached 1.6.19 - Ops/sec, More Is Better - a: 1719545.48, b: 1478446.78, bb: 1700246.18
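The "Set To Get Ratio" describes the write:read mix the load generator issues against memcached (1:100 means one SET per hundred GETs). The sketch below only illustrates what such a mix looks like; it is not the harness used for this result, and it assumes a local memcached on port 11211 plus the third-party pymemcache client:

    # Illustrative only: issue a 1:100 set/get mix and report ops/sec.
    import time
    from pymemcache.client.base import Client  # assumed dependency

    def run_mix(sets_per_cycle=1, gets_per_cycle=100, cycles=1000):
        client = Client(("localhost", 11211))   # assumed local memcached
        client.set("bench:key", b"x" * 100)
        ops = 0
        start = time.perf_counter()
        for _ in range(cycles):
            for _ in range(sets_per_cycle):
                client.set("bench:key", b"x" * 100)
                ops += 1
            for _ in range(gets_per_cycle):
                client.get("bench:key")
                ops += 1
        return ops / (time.perf_counter() - start)

    if __name__ == "__main__":
        print(f"{run_mix():.0f} ops/sec over a single unpipelined connection")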

Zstd Compression

Compression Level: 3, Long Mode - Compression Speed

Zstd Compression 1.5.4 - MB/s, More Is Better - a: 325.9, b: 283.0, bb: 310.5

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.1 - ms, Fewer Is Better - a: 4.39463 (MIN: 2.48), b: 4.90144 (MIN: 2.4), bb: 4.35862 (MIN: 2.47)

oneDNN

Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU

oneDNN 3.1 - ms, Fewer Is Better - a: 3.45599 (MIN: 1.83), b: 3.09364 (MIN: 1.81), bb: 3.17173 (MIN: 1.84)

FFmpeg

Encoder: libx265 - Scenario: Live

FFmpeg 6.0 - Seconds, Fewer Is Better - a: 125.37, b: 115.77, bb: 124.51

FFmpeg

Encoder: libx265 - Scenario: Live

FFmpeg 6.0 - FPS, More Is Better - a: 40.28, b: 43.62, bb: 40.56
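The Seconds and FPS charts for the libx265 Live scenario report the same encode two ways, so frames ≈ FPS × seconds should be roughly constant across runs. A small consistency check with the values above (the frame count itself is inferred, not reported):

    # Consistency check: fps * seconds should give the clip's frame count.
    runs = {"a": (125.37, 40.28), "b": (115.77, 43.62), "bb": (124.51, 40.56)}
    for name, (seconds, fps) in runs.items():
        print(f"{name}: ~{fps * seconds:.0f} frames encoded")
    # All three runs work out to roughly 5,050 frames of source video.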

RocksDB

Test: Random Fill Sync

RocksDB 8.0 - Op/s, More Is Better - a: 6971, bb: 7527

John The Ripper

Test: HMAC-SHA512

John The Ripper 2023.03.14 - Real C/S, More Is Better - a: 91245000, b: 91970000, bb: 98271000

SVT-AV1

Encoder Mode: Preset 12 - Input: Bosphorus 4K

SVT-AV1 1.4 - Frames Per Second, More Is Better - a: 110.28, b: 117.74, bb: 117.56

oneDNN

Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.1 - ms, Fewer Is Better - a: 1.63878 (MIN: 1.5), b: 1.74816 (MIN: 1.56), bb: 1.69336 (MIN: 1.56)

Zstd Compression

Compression Level: 8 - Compression Speed

Zstd Compression 1.5.4 - MB/s, More Is Better - a: 579.4, b: 545.9, bb: 572.3

ClickHouse

100M Rows Hits Dataset, First Run / Cold Cache

ClickHouse 22.12.3.5 - Queries Per Minute, Geo Mean, More Is Better - a: 163.13 (MIN: 16.41 / MAX: 1200), b: 157.77 (MIN: 16.57 / MAX: 1621.62), bb: 154.30 (MIN: 15.97 / MAX: 1578.95)
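The ClickHouse score is a geometric mean of the per-query rates, which keeps one very fast or very slow query from dominating the result. A minimal sketch of that aggregation; the per-query figures below are hypothetical placeholders, not values from this result:

    # Geometric mean of per-query "queries per minute" rates (placeholder data).
    import math

    def geo_mean(values):
        return math.exp(sum(math.log(v) for v in values) / len(values))

    per_query_qpm = [1200.0, 310.0, 45.0, 16.4]  # hypothetical per-query rates
    print(f"{geo_mean(per_query_qpm):.2f} queries per minute, geo mean")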

ClickHouse

100M Rows Hits Dataset, Second Run

ClickHouse 22.12.3.5 - Queries Per Minute, Geo Mean, More Is Better - a: 205.05 (MIN: 18.52 / MAX: 1363.64), b: 211.36 (MIN: 18.65 / MAX: 2500), bb: 200.24 (MIN: 18.21 / MAX: 1714.29)

Zstd Compression

Compression Level: 19, Long Mode - Compression Speed

Zstd Compression 1.5.4 - MB/s, More Is Better - a: 6.71, b: 6.45, bb: 6.79

Zstd Compression

Compression Level: 12 - Compression Speed

Zstd Compression 1.5.4 - MB/s, More Is Better - a: 146.2, b: 153.9, bb: 151.4

Zstd Compression

Compression Level: 3 - Compression Speed

Zstd Compression 1.5.4 - MB/s, More Is Better - a: 1664.3, b: 1640.6, bb: 1584.1

Timed LLVM Compilation

Build System: Unix Makefiles

Timed LLVM Compilation 16.0 - Seconds, Fewer Is Better - a: 439.12, b: 439.28, bb: 419.85

RocksDB

Test: Sequential Fill

RocksDB 8.0 - Op/s, More Is Better - a: 259650, bb: 249987

SVT-AV1

Encoder Mode: Preset 4 - Input: Bosphorus 4K

SVT-AV1 1.4 - Frames Per Second, More Is Better - a: 2.550, b: 2.608, bb: 2.647

Zstd Compression

Compression Level: 8, Long Mode - Compression Speed

Zstd Compression 1.5.4 - MB/s, More Is Better - a: 282.7, b: 289.9, bb: 279.4

RocksDB

Test: Random Fill

RocksDB 8.0 - Op/s, More Is Better - a: 251487, bb: 243680

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better - a: 8.8739, bb: 9.1226

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better - a: 112.51, bb: 109.45
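For the synchronous single-stream DeepSparse charts, ms/batch and items/sec are near-reciprocals (one item per batch), with the small gap versus the ideal 1000 / latency attributable to harness overhead. Checking the pair of ResNet-50 charts above:

    # Reciprocal check for the single-stream latency/throughput pair above.
    pairs = {"a": (8.8739, 112.51), "bb": (9.1226, 109.45)}
    for run, (ms_per_batch, items_per_sec) in pairs.items():
        ideal = 1000.0 / ms_per_batch
        print(f"{run}: ideal {ideal:.2f} items/sec vs reported {items_per_sec}")
    # a: ideal 112.69 vs reported 112.51; bb: ideal 109.62 vs reported 109.45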

SVT-AV1

Encoder Mode: Preset 12 - Input: Bosphorus 1080p

SVT-AV1 1.4 - Frames Per Second, More Is Better - a: 232.25, b: 235.27, bb: 228.89

TensorFlow

Device: CPU - Batch Size: 16 - Model: AlexNet

TensorFlow 2.12 - images/sec, More Is Better - a: 83.20, b: 84.48, bb: 82.22

ClickHouse

100M Rows Hits Dataset, Third Run

ClickHouse 22.12.3.5 - Queries Per Minute, Geo Mean, More Is Better - a: 206.05 (MIN: 18.75 / MAX: 1090.91), b: 210.46 (MIN: 18.39 / MAX: 1276.6), bb: 211.12 (MIN: 18.35 / MAX: 1875)

VVenC

Video Input: Bosphorus 4K - Video Preset: Fast

VVenC 1.7 - Frames Per Second, More Is Better - a: 3.657, b: 3.579, bb: 3.634

Neural Magic DeepSparse

Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better - a: 403.05, bb: 394.45

Neural Magic DeepSparse

Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better - a: 44.62, bb: 45.59

SPECFEM3D

Model: Mount St. Helens

SPECFEM3D 4.0 - Seconds, Fewer Is Better - a: 24.15, b: 24.13, bb: 24.64

RocksDB

Test: Update Random

RocksDB 8.0 - Op/s, More Is Better - a: 233375, bb: 228703

SVT-AV1

Encoder Mode: Preset 4 - Input: Bosphorus 1080p

SVT-AV1 1.4 - Frames Per Second, More Is Better - a: 6.223, b: 6.344, bb: 6.222

SPECFEM3D

Model: Water-layered Halfspace

SPECFEM3D 4.0 - Seconds, Fewer Is Better - a: 60.24, b: 61.37, bb: 60.51

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better - a: 18.38, bb: 18.71

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better - a: 977.29, bb: 961.13

Build2

Time To Compile

Build2 0.15 - Seconds, Fewer Is Better - a: 100.27, b: 99.15, bb: 100.75

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better - a: 18.42, bb: 18.71

oneDNN

Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.1 - ms, Fewer Is Better - a: 1.29978 (MIN: 1.08), b: 1.28042 (MIN: 1.07), bb: 1.28253 (MIN: 1.1)

SVT-AV1

Encoder Mode: Preset 8 - Input: Bosphorus 1080p

SVT-AV1 1.4 - Frames Per Second, More Is Better - a: 85.66, b: 86.95, bb: 86.48

PostgreSQL

Scaling Factor: 1 - Clients: 50 - Mode: Read Only

PostgreSQL 15 - TPS, More Is Better - a: 1068707, b: 1052848, bb: 1067954

nginx

Connections: 500

nginx 1.23.2 - Requests Per Second, More Is Better - a: 141761.59, bb: 143874.15

uvg266

Video Input: Bosphorus 4K - Video Preset: Ultra Fast

uvg266 0.4.1 - Frames Per Second, More Is Better - a: 21.80, b: 21.48, bb: 21.62

nginx

Connections: 200

nginx 1.23.2 - Requests Per Second, More Is Better - a: 147630.66, bb: 149820.05

VVenC

Video Input: Bosphorus 1080p - Video Preset: Fast

VVenC 1.7 - Frames Per Second, More Is Better - a: 9.605, b: 9.641, bb: 9.746

SVT-AV1

Encoder Mode: Preset 8 - Input: Bosphorus 4K

SVT-AV1 1.4 - Frames Per Second, More Is Better - a: 36.41, b: 36.23, bb: 35.89

uvg266

Video Input: Bosphorus 1080p - Video Preset: Very Fast

uvg266 0.4.1 - Frames Per Second, More Is Better - a: 42.93, b: 42.62, bb: 42.32

SPECFEM3D

Model: Tomographic Model

SPECFEM3D 4.0 - Seconds, Fewer Is Better - a: 25.35, b: 25.27, bb: 25.01

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.1 - ms, Fewer Is Better - a: 786.92 (MIN: 779.8), b: 797.59 (MIN: 783.04), bb: 789.81 (MIN: 783.36)

VVenC

Video Input: Bosphorus 1080p - Video Preset: Faster

VVenC 1.7 - Frames Per Second, More Is Better - a: 16.73, b: 16.90, bb: 16.68

uvg266

Video Input: Bosphorus 1080p - Video Preset: Super Fast

uvg266 0.4.1 - Frames Per Second, More Is Better - a: 44.64, b: 45.10, bb: 45.23

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better - a: 40.33, bb: 40.86

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better - a: 446.17, bb: 440.41

uvg266

Video Input: Bosphorus 4K - Video Preset: Super Fast

uvg266 0.4.1 - Frames Per Second, More Is Better - a: 20.29, b: 20.33, bb: 20.07

TensorFlow

Device: CPU - Batch Size: 256 - Model: GoogLeNet

TensorFlow 2.12 - images/sec, More Is Better - a: 103.29, b: 104.62, bb: 104.20

Timed LLVM Compilation

Build System: Ninja

Timed LLVM Compilation 16.0 - Seconds, Fewer Is Better - a: 343.24, b: 339.91, bb: 338.89

SPECFEM3D

Model: Layered Halfspace

SPECFEM3D 4.0 - Seconds, Fewer Is Better - a: 63.68, b: 64.46, bb: 64.48

FFmpeg

Encoder: libx264 - Scenario: Live

FFmpeg 6.0 - Seconds, Fewer Is Better - a: 33.20, b: 32.80, bb: 32.83

FFmpeg

Encoder: libx264 - Scenario: Live

FFmpeg 6.0 - FPS, More Is Better - a: 152.11, b: 153.96, bb: 153.82

FFmpeg

Encoder: libx265 - Scenario: Upload

FFmpeg 6.0 - Seconds, Fewer Is Better - a: 237.24, b: 238.65, bb: 235.94

OpenSSL

Algorithm: SHA256

OpenSSL 3.1 - byte/s, More Is Better - a: 9173311330, b: 9102032020, bb: 9206384930

FFmpeg

Encoder: libx265 - Scenario: Upload

FFmpeg 6.0 - FPS, More Is Better - a: 10.64, b: 10.58, bb: 10.70

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better - a: 65.38, bb: 66.09

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better - a: 15.28, bb: 15.12

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU

oneDNN 3.1 - ms, Fewer Is Better - a: 14.75 (MIN: 12.24), b: 14.80 (MIN: 12.39), bb: 14.65 (MIN: 12.43)

TensorFlow

Device: CPU - Batch Size: 64 - Model: AlexNet

TensorFlow 2.12 - images/sec, More Is Better - a: 146.93, b: 146.48, bb: 145.41

OpenSSL

Algorithm: RSA4096

OpenSSL 3.1 - sign/s, More Is Better - a: 8029.4, b: 8026.7, bb: 8109.8
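Roughly comparable RSA4096 sign/verify rates can be collected outside the Phoronix Test Suite with OpenSSL's own speed harness. A sketch via Python's subprocess, assuming the openssl 3.x CLI is installed; the -multi value of 72 simply mirrors this system's thread count, and whether it matches the exact invocation used by the test profile is an assumption:

    # Rough reproduction of the RSA4096 sign/verify rates with the openssl CLI.
    import subprocess

    subprocess.run(
        ["openssl", "speed", "-seconds", "10", "-multi", "72", "rsa4096"],
        check=True,
    )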

RocksDB

Test: Read Random Write Random

RocksDB 8.0 - Op/s, More Is Better - a: 2228918, bb: 2251517

SPECFEM3D

Model: Homogeneous Halfspace

SPECFEM3D 4.0 - Seconds, Fewer Is Better - a: 31.38, b: 31.09, bb: 31.07

Zstd Compression

Compression Level: 19 - Decompression Speed

Zstd Compression 1.5.4 - MB/s, More Is Better - a: 840.6, b: 846.7, bb: 849.0

Zstd Compression

Compression Level: 19, Long Mode - Decompression Speed

Zstd Compression 1.5.4 - MB/s, More Is Better - a: 826.0, b: 833.2, bb: 834.2

Timed Godot Game Engine Compilation

Time To Compile

Timed Godot Game Engine Compilation 4.0 - Seconds, Fewer Is Better - a: 232.64, b: 234.22, bb: 232.02

VVenC

Video Input: Bosphorus 4K - Video Preset: Faster

VVenC 1.7 - Frames Per Second, More Is Better - a: 5.991, b: 6.026, bb: 5.970

uvg266

Video Input: Bosphorus 1080p - Video Preset: Slow

uvg266 0.4.1 - Frames Per Second, More Is Better - a: 18.30, b: 18.45, bb: 18.47

oneDNN

Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU

oneDNN 3.1 - ms, Fewer Is Better - a: 1.73354 (MIN: 1.65), b: 1.71780 (MIN: 1.64), bb: 1.73257 (MIN: 1.64)

Timed FFmpeg Compilation

Time To Compile

Timed FFmpeg Compilation 6.0 - Seconds, Fewer Is Better - a: 33.48, b: 33.27, bb: 33.18

oneDNN

Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU

oneDNN 3.1 - ms, Fewer Is Better - a: 1384.14 (MIN: 1379.57), b: 1386.62 (MIN: 1378.88), bb: 1374.19 (MIN: 1367.79)

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better - a: 23.43, bb: 23.64

Zstd Compression

Compression Level: 12 - Decompression Speed

Zstd Compression 1.5.4 - MB/s, More Is Better - a: 987.6, b: 996.4, bb: 996.5

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better - a: 42.65, bb: 42.27

uvg266

Video Input: Bosphorus 1080p - Video Preset: Medium

uvg266 0.4.1 - Frames Per Second, More Is Better - a: 20.04, b: 20.05, bb: 20.22

Embree

Binary: Pathtracer - Model: Crown

Embree 4.0.1 - Frames Per Second, More Is Better - a: 28.64 (MIN: 28.27 / MAX: 29.08), b: 28.49 (MIN: 28.14 / MAX: 29), bb: 28.75 (MIN: 28.39 / MAX: 29.15)

TensorFlow

Device: CPU - Batch Size: 16 - Model: GoogLeNet

TensorFlow 2.12 - images/sec, More Is Better - a: 50.71, b: 50.96, bb: 51.15

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better - a: 51.11, bb: 50.70

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better - a: 19.56, bb: 19.72

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.1 - ms, Fewer Is Better - a: 0.693934 (MIN: 0.69), b: 0.688332 (MIN: 0.68), bb: 0.691037 (MIN: 0.68)

SVT-AV1

Encoder Mode: Preset 13 - Input: Bosphorus 4K

SVT-AV1 1.4 - Frames Per Second, More Is Better - a: 110.64, b: 111.15, bb: 110.29

Zstd Compression

Compression Level: 19 - Compression Speed

Zstd Compression 1.5.4 - MB/s, More Is Better - a: 12.8, b: 12.8, bb: 12.9

John The Ripper

Test: Blowfish

John The Ripper 2023.03.14 - Real C/S, More Is Better - a: 48756, b: 48405, bb: 48562

RocksDB

Test: Read While Writing

RocksDB 8.0 - Op/s, More Is Better - a: 4910200, bb: 4875480

Zstd Compression

Compression Level: 8, Long Mode - Decompression Speed

Zstd Compression 1.5.4 - MB/s, More Is Better - a: 1105.7, b: 1097.9, bb: 1100.7

Embree

Binary: Pathtracer ISPC - Model: Crown

Embree 4.0.1 - Frames Per Second, More Is Better - a: 29.71 (MIN: 29.3 / MAX: 30.17), b: 29.50 (MIN: 29.05 / MAX: 30.07), bb: 29.54 (MIN: 29.06 / MAX: 30.1)

Zstd Compression

Compression Level: 3 - Decompression Speed

Zstd Compression 1.5.4 - MB/s, More Is Better - a: 1102.7, b: 1095.1, bb: 1095.8

TensorFlow

Device: CPU - Batch Size: 256 - Model: ResNet-50

TensorFlow 2.12 - images/sec, More Is Better - a: 29.63, bb: 29.43

TensorFlow

Device: CPU - Batch Size: 64 - Model: ResNet-50

TensorFlow 2.12 - images/sec, More Is Better - a: 22.20, b: 22.27, bb: 22.35

Zstd Compression

Compression Level: 3, Long Mode - Decompression Speed

Zstd Compression 1.5.4 - MB/s, More Is Better - a: 1137.0, b: 1144.5, bb: 1140.2

FFmpeg

Encoder: libx264 - Scenario: Video On Demand

FFmpeg 6.0 - FPS, More Is Better - a: 38.03, b: 38.16, bb: 38.28

John The Ripper

Test: WPA PSK

John The Ripper 2023.03.14 - Real C/S, More Is Better - a: 212818, b: 213811, bb: 214200

FFmpeg

Encoder: libx264 - Scenario: Video On Demand

FFmpeg 6.0 - Seconds, Fewer Is Better - a: 199.18, b: 198.50, bb: 197.90

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better - a: 34.71, bb: 34.93

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better - a: 28.79, bb: 28.62

TensorFlow

Device: CPU - Batch Size: 32 - Model: AlexNet

TensorFlow 2.12 - images/sec, More Is Better - a: 109.53, b: 109.74, bb: 109.07

John The Ripper

Test: MD5

John The Ripper 2023.03.14 - Real C/S, More Is Better - a: 4691000, b: 4719000, bb: 4712000

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.1 - ms, Fewer Is Better - a: 9.56667 (MIN: 9.44), b: 9.51019 (MIN: 9.44), bb: 9.51252 (MIN: 9.44)

oneDNN

Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.1 - ms, Fewer Is Better - a: 1379.97 (MIN: 1367.35), b: 1385.41 (MIN: 1377.53), bb: 1377.45 (MIN: 1371.04)

nginx

Connections: 1000

nginx 1.23.2 - Requests Per Second, More Is Better - a: 139200.66, bb: 139997.58

Blender

Blend File: BMW27 - Compute: CPU-Only

Blender 3.5 - Seconds, Fewer Is Better - a: 56.07, bb: 55.76

Embree

Binary: Pathtracer - Model: Asian Dragon Obj

Embree 4.0.1 - Frames Per Second, More Is Better - a: 29.60 (MIN: 29.37 / MAX: 30.06), b: 29.44 (MIN: 29.24 / MAX: 29.86), bb: 29.49 (MIN: 29.24 / MAX: 29.77)

oneDNN

Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.1 - ms, Fewer Is Better - a: 1374.37 (MIN: 1363.77), b: 1381.33 (MIN: 1376.64), bb: 1380.03 (MIN: 1374.88)

uvg266

Video Input: Bosphorus 4K - Video Preset: Slow

uvg266 0.4.1 - Frames Per Second, More Is Better - a: 7.91, b: 7.95, bb: 7.91

TensorFlow

Device: CPU - Batch Size: 512 - Model: ResNet-50

TensorFlow 2.12 - images/sec, More Is Better - a: 31.10, bb: 30.95

FFmpeg

Encoder: libx264 - Scenario: Upload

FFmpeg 6.0 - FPS, More Is Better - a: 10.44, b: 10.42, bb: 10.47

uvg266

Video Input: Bosphorus 1080p - Video Preset: Ultra Fast

uvg266 0.4.1 - Frames Per Second, More Is Better - a: 48.17, b: 48.24, bb: 48.40

OpenSSL

Algorithm: SHA512

OpenSSL 3.1 - byte/s, More Is Better - a: 10145725330, b: 10167598450, bb: 10193373610

SVT-AV1

Encoder Mode: Preset 13 - Input: Bosphorus 1080p

SVT-AV1 1.4 - Frames Per Second, More Is Better - a: 200.80, b: 201.73, bb: 201.44

FFmpeg

Encoder: libx264 - Scenario: Upload

FFmpeg 6.0 - Seconds, Fewer Is Better - a: 241.76, b: 242.24, bb: 241.13

Embree

Binary: Pathtracer ISPC - Model: Asian Dragon

Embree 4.0.1 - Frames Per Second, More Is Better - a: 37.88 (MIN: 37.64 / MAX: 38.3), b: 37.71 (MIN: 37.4 / MAX: 38.13), bb: 37.72 (MIN: 37.47 / MAX: 38.19)

TensorFlow

Device: CPU - Batch Size: 512 - Model: GoogLeNet

TensorFlow 2.12 - images/sec, More Is Better - a: 109.76, bb: 109.27

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.1 - ms, Fewer Is Better - a: 788.80 (MIN: 782.96), b: 785.39 (MIN: 775.34), bb: 787.69 (MIN: 778.68)

Embree

Binary: Pathtracer - Model: Asian Dragon

Embree 4.0.1 - Frames Per Second, More Is Better - a: 33.03 (MIN: 32.82 / MAX: 33.48), b: 33.18 (MIN: 32.97 / MAX: 33.5), bb: 33.17 (MIN: 32.95 / MAX: 33.5)

PostgreSQL

Scaling Factor: 1 - Clients: 50 - Mode: Read Write - Average Latency

PostgreSQL 15 - ms, Fewer Is Better - a: 61.84, b: 61.66, bb: 61.91

OpenSSL

Algorithm: AES-128-GCM

OpenSSL 3.1 - byte/s, More Is Better - a: 159151287770, b: 159781972650, bb: 159160934620

Zstd Compression

Compression Level: 8 - Decompression Speed

Zstd Compression 1.5.4 - MB/s, More Is Better - a: 1093.3, b: 1093.7, bb: 1097.6

uvg266

Video Input: Bosphorus 4K - Video Preset: Very Fast

uvg266 0.4.1 - Frames Per Second, More Is Better - a: 18.07, b: 18.11, bb: 18.14

TensorFlow

Device: CPU - Batch Size: 256 - Model: AlexNet

TensorFlow 2.12 - images/sec, More Is Better - a: 212.65, b: 212.50, bb: 213.32

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU

oneDNN 3.1 - ms, Fewer Is Better - a: 786.00 (MIN: 779.97), b: 788.65 (MIN: 781.04), bb: 789.00 (MIN: 779.2)

TensorFlow

Device: CPU - Batch Size: 512 - Model: AlexNet

TensorFlow 2.12 - images/sec, More Is Better - a: 227.16, b: 227.85, bb: 227.00

PostgreSQL

Scaling Factor: 1 - Clients: 50 - Mode: Read Write

PostgreSQL 15 - TPS, More Is Better - a: 809, b: 811, bb: 808

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better - a: 100.52, bb: 100.15

FFmpeg

Encoder: libx264 - Scenario: Platform

FFmpeg 6.0 - FPS, More Is Better - a: 38.19, b: 38.07, bb: 38.21

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better - a: 178.92, bb: 179.57

FFmpeg

Encoder: libx265 - Scenario: Video On Demand

FFmpeg 6.0 - Seconds, Fewer Is Better - a: 366.57, b: 367.61, bb: 366.29

OpenSSL

Algorithm: RSA4096

OpenSSL 3.1 - verify/s, More Is Better - a: 534883.8, b: 536533.8, bb: 534630.5

FFmpeg

Encoder: libx264 - Scenario: Platform

FFmpeg 6.0 - Seconds, Fewer Is Better - a: 198.35, b: 198.96, bb: 198.27

Embree

Binary: Pathtracer ISPC - Model: Asian Dragon Obj

Embree 4.0.1 - Frames Per Second, More Is Better - a: 32.50 (MIN: 32.25 / MAX: 32.9), b: 32.56 (MIN: 32.27 / MAX: 32.95), bb: 32.44 (MIN: 32.17 / MAX: 32.79)

FFmpeg

Encoder: libx265 - Scenario: Platform

FFmpeg 6.0 - FPS, More Is Better - a: 20.45, b: 20.52, bb: 20.50

PostgreSQL

Scaling Factor: 100 - Clients: 50 - Mode: Read Only

PostgreSQL 15 - TPS, More Is Better - a: 994655, b: 991262, bb: 993660

uvg266

Video Input: Bosphorus 4K - Video Preset: Medium

uvg266 0.4.1 - Frames Per Second, More Is Better - a: 8.83, b: 8.82, bb: 8.85

FFmpeg

Encoder: libx265 - Scenario: Video On Demand

FFmpeg 6.0 - FPS, More Is Better - a: 20.66, b: 20.61, bb: 20.68

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better - a: 965.21, bb: 961.97

RocksDB

Test: Random Read

RocksDB 8.0 - Op/s, More Is Better - a: 115209715, bb: 114830146

TensorFlow

Device: CPU - Batch Size: 32 - Model: ResNet-50

TensorFlow 2.12 - images/sec, More Is Better - a: 18.25, b: 18.31, bb: 18.30

Google Draco

Model: Church Facade

Google Draco 1.5.6 - ms, Fewer Is Better - a: 9410, bb: 9381

FFmpeg

Encoder: libx265 - Scenario: Platform

FFmpeg 6.0 - Seconds, Fewer Is Better - a: 370.34, b: 369.22, bb: 369.54

nginx

Connections: 100

nginx 1.23.2 - Requests Per Second, More Is Better - a: 152180.63, bb: 151733.73

OpenSSL

Algorithm: AES-256-GCM

OpenSSL 3.1 - byte/s, More Is Better - a: 118250262800, b: 118596420950, bb: 118499993460

Google Draco

Model: Lion

Google Draco 1.5.6 - ms, Fewer Is Better - a: 6694, bb: 6713

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better - a: 8.6680, bb: 8.6925

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better - a: 115.35, bb: 115.03

Neural Magic DeepSparse

Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better - a: 8.8732, bb: 8.8977

Neural Magic DeepSparse

Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better - a: 112.54, bb: 112.23

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better - a: 99.49, bb: 99.75

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better - a: 180.72, bb: 180.28

Blender

Blend File: Pabellon Barcelona - Compute: CPU-Only

Blender 3.5 - Seconds, Fewer Is Better - a: 191.67, bb: 192.11

Blender

Blend File: Barbershop - Compute: CPU-Only

Blender 3.5 - Seconds, Fewer Is Better - a: 579.52, bb: 580.85

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.1 - ms, Fewer Is Better - a: 0.563515 (MIN: 0.54), b: 0.562744 (MIN: 0.54), bb: 0.562273 (MIN: 0.54)

Blender

Blend File: Fishy Cat - Compute: CPU-Only

Blender 3.5 - Seconds, Fewer Is Better - a: 74.47, bb: 74.63

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better - a: 115.16, bb: 115.41

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better - a: 8.6827, bb: 8.6642

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.1 - ms, Fewer Is Better - a: 8.67071 (MIN: 8.58), b: 8.68912 (MIN: 8.58), bb: 8.67211 (MIN: 8.57)

TensorFlow

Device: CPU - Batch Size: 64 - Model: GoogLeNet

TensorFlow 2.12 - images/sec, More Is Better - a: 77.76, b: 77.64, bb: 77.79

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU

oneDNN 3.1 - ms, Fewer Is Better - a: 2.74347 (MIN: 2.72), b: 2.73821 (MIN: 2.72), bb: 2.73830 (MIN: 2.72)

John The Ripper

Test: bcrypt

John The Ripper 2023.03.14 - Real C/S, More Is Better - a: 48821, b: 48864, bb: 48912

Blender

Blend File: Classroom - Compute: CPU-Only

Blender 3.5 - Seconds, Fewer Is Better - a: 159.56, bb: 159.27

Timed Node.js Compilation

Time To Compile

Timed Node.js Compilation 19.8.1 - Seconds, Fewer Is Better - a: 309.57, b: 309.31, bb: 309.09

TensorFlow

Device: CPU - Batch Size: 16 - Model: ResNet-50

TensorFlow 2.12 - images/sec, More Is Better - a: 15.11, b: 15.10, bb: 15.09

oneDNN

Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.1 - ms, Fewer Is Better - a: 5.69353 (MIN: 5.55), b: 5.68728 (MIN: 5.54), bb: 5.68651 (MIN: 5.55)

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.1 - ms, Fewer Is Better - a: 6.35319 (MIN: 6.3), b: 6.35868 (MIN: 6.3), bb: 6.35249 (MIN: 6.3)

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better - a: 196.56, bb: 196.74

OpenSSL

Algorithm: ChaCha20

OpenSSL 3.1 - byte/s, More Is Better - a: 150176760350, b: 150074005090, bb: 150059242840

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better - a: 351.83, bb: 352.04

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better - a: 91.41, bb: 91.35

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better - a: 51.11, bb: 51.09

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better - a: 139.56, bb: 139.50

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better - a: 128.78, bb: 128.81

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better - a: 54.33, bb: 54.33

OpenSSL

Algorithm: ChaCha20-Poly1305

OpenSSL 3.1 - byte/s, More Is Better - a: 80516466550, b: 80512557330, bb: 80513755000

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better - a: 18.39, bb: 18.39

PostgreSQL

Scaling Factor: 100 - Clients: 50 - Mode: Read Only - Average Latency

PostgreSQL 15 - ms, Fewer Is Better - a: 0.05, b: 0.05, bb: 0.05

PostgreSQL

Scaling Factor: 1 - Clients: 50 - Mode: Read Only - Average Latency

PostgreSQL 15 - ms, Fewer Is Better - a: 0.047, b: 0.047, bb: 0.047


Phoronix Test Suite v10.8.5