epyc 9654 AMD March

Tests for a future article. 2 x AMD EPYC 9654 96-Core testing with a AMD Titanite_4G (RTI1004D BIOS) and ASPEED on Ubuntu 23.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2303299-NE-EPYC9654A14
Jump To Table - Results

View

Do Not Show Noisy Results
Do Not Show Results With Incomplete Data
Do Not Show Results With Little Change/Spread
List Notable Results
Show Result Confidence Charts

Limit displaying results to tests within:

Timed Code Compilation 5 Tests
C/C++ Compiler Tests 10 Tests
CPU Massive 10 Tests
Creator Workloads 5 Tests
Cryptography 2 Tests
Database Test Suite 5 Tests
Encoding 2 Tests
Game Development 2 Tests
HPC - High Performance Computing 6 Tests
Common Kernel Benchmarks 4 Tests
Machine Learning 4 Tests
Multi-Core 13 Tests
OpenMPI Tests 2 Tests
Programmer / Developer System Benchmarks 6 Tests
Python Tests 7 Tests
Server 8 Tests
Server CPU Tests 6 Tests
Video Encoding 2 Tests

Statistics

Show Overall Harmonic Mean(s)
Show Overall Geometric Mean
Show Geometric Means Per-Suite/Category
Show Wins / Losses Counts (Pie Chart)
Normalize Results
Remove Outliers Before Calculating Averages

Graph Settings

Force Line Graphs Where Applicable
Convert To Scalar Where Applicable
Prefer Vertical Bar Graphs

Additional Graphs

Show Perf Per Core/Thread Calculation Graphs Where Applicable

Multi-Way Comparison

Condense Multi-Option Tests Into Single Result Graphs

Table

Show Detailed System Result Table

Run Management

Highlight
Result
Hide
Result
Result
Identifier
Performance Per
Dollar
Date
Run
  Test
  Duration
a
March 28 2023
  4 Hours, 39 Minutes
b
March 28 2023
  4 Hours, 39 Minutes
c
March 28 2023
  4 Hours, 39 Minutes
d
March 29 2023
  5 Hours, 7 Minutes
e
March 29 2023
  5 Hours, 14 Minutes
Invert Hiding All Results Option
  4 Hours, 52 Minutes

Only show results where is faster than
Only show results matching title/arguments (delimit multiple options with a comma):
Do not show results matching title/arguments (delimit multiple options with a comma):


epyc 9654 AMD MarchProcessorMotherboardChipsetMemoryDiskGraphicsMonitorNetworkOSKernelDesktopDisplay ServerVulkanCompilerFile-SystemScreen ResolutionabcdeAMD EPYC 9654 96-Core @ 3.71GHz (96 Cores / 192 Threads)AMD Titanite_4G (RTI1004D BIOS)AMD Device 14a4768GB800GB INTEL SSDPF21Q800GBASPEEDVGA HDMIBroadcom NetXtreme BCM5720 PCIeUbuntu 23.045.19.0-21-generic (x86_64)GNOME Shell 43.1X Server 1.21.1.41.3.224GCC 12.2.0ext41920x10802 x AMD EPYC 9654 96-Core @ 3.71GHz (192 Cores / 384 Threads)1520GBOpenBenchmarking.orgKernel Details- Transparent Huge Pages: madviseCompiler Details- --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-l0Aoyl/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-l0Aoyl/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v Processor Details- Scaling Governor: amd-pstate performance (Boost: Enabled) - CPU Microcode: 0xa101111Python Details- Python 3.10.9Security Details- itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected

abcdeResult OverviewPhoronix Test Suite100%131%162%192%MariaDBOpenSSLSPECFEM3DGROMACSJohn The RipperEmbreePostgreSQLTensorFlowONNX RuntimeApache HTTP ServerTimed Node.js CompilationnginxDarmstadt Automotive Parallel Heterogeneous SuiteRocksDBTimed FFmpeg CompilationMemcachedTimed LLVM CompilationNeural Magic DeepSparseClickHouseTimed Godot Game Engine CompilationBuild2Zstd CompressionFFmpegOpenCVGoogle Draco

epyc 9654 AMD Marchopencv: Corepgbench: 100 - 1000 - Read Write - Average Latencypgbench: 100 - 1000 - Read Writepgbench: 100 - 800 - Read Writepgbench: 100 - 800 - Read Write - Average Latencyopencv: Videoopencv: Object Detectionopencv: Image Processingmysqlslap: 512mysqlslap: 1024mysqlslap: 2048rocksdb: Rand Fill Synctensorflow: CPU - 16 - GoogLeNettensorflow: CPU - 16 - ResNet-50tensorflow: CPU - 32 - GoogLeNetopencv: DNN - Deep Neural Networkdeepsparse: NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Streamdeepsparse: NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Streamopenssl: ChaCha20openssl: RSA4096openssl: RSA4096openssl: AES-256-GCMopenssl: SHA256rocksdb: Rand Readopenssl: AES-128-GCMopenssl: ChaCha20-Poly1305openssl: SHA512mysqlslap: 4096deepsparse: CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Streamdeepsparse: NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Streamdeepsparse: NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Streamjohn-the-ripper: bcryptjohn-the-ripper: WPA PSKjohn-the-ripper: Blowfishtensorflow: CPU - 16 - AlexNetonnx: ResNet50 v1-12-int8 - CPU - Paralleldeepsparse: CV Detection, YOLOv5s COCO - Asynchronous Multi-Streamopencv: Graph APIspecfem3d: Water-layered Halfspacedeepsparse: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Streamdeepsparse: CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Streamtensorflow: CPU - 32 - AlexNetrocksdb: Read While Writingonnx: ArcFace ResNet-100 - CPU - Parallelonnx: CaffeNet 12-int8 - CPU - Parallelspecfem3d: Mount St. Helensdeepsparse: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Asynchronous Multi-Streamtensorflow: CPU - 32 - ResNet-50tensorflow: CPU - 64 - GoogLeNetjohn-the-ripper: MD5specfem3d: Tomographic Modelspecfem3d: Homogeneous Halfspacegromacs: MPI CPU - water_GMX50_bareonnx: ResNet50 v1-12-int8 - CPU - Standardonnx: ArcFace ResNet-100 - CPU - Standardembree: Pathtracer - Crownspecfem3d: Layered Halfspaceopencv: Features 2Drocksdb: Read Rand Write Randembree: Pathtracer - Asian Dragon Objembree: Pathtracer ISPC - Crownembree: Pathtracer - Asian Dragonembree: Pathtracer ISPC - Asian Dragon Objembree: Pathtracer ISPC - Asian Dragononnx: yolov4 - CPU - Paralleltensorflow: CPU - 64 - ResNet-50mysqlslap: 8192memcached: 1:5rocksdb: Seq Filldaphne: OpenMP - Points2Imagerocksdb: Rand Fillrocksdb: Update Randonnx: yolov4 - CPU - Standardapache: 500tensorflow: CPU - 64 - AlexNetonnx: GPT-2 - CPU - Parallelonnx: fcn-resnet101-11 - CPU - Parallelopencv: Stitchingpgbench: 1 - 1000 - Read Write - Average Latencypgbench: 1 - 1000 - Read Writeonnx: bertsquad-12 - CPU - Parallelpgbench: 1 - 800 - Read Writepgbench: 1 - 800 - Read Write - Average Latencyonnx: bertsquad-12 - CPU - Standardtensorflow: CPU - 512 - AlexNetonnx: CaffeNet 12-int8 - CPU - Standardbuild-llvm: Ninjabuild-nodejs: Time To Compileonnx: super-resolution-10 - CPU - Paralleldaphne: OpenMP - NDT Mappingonnx: GPT-2 - CPU - Standardonnx: Faster R-CNN R-50-FPN-int8 - CPU - Parallelnginx: 500tensorflow: CPU - 256 - GoogLeNetbuild-ffmpeg: Time To Compilepgbench: 100 - 800 - Read Onlypgbench: 100 - 800 - Read Only - Average Latencyapache: 200onnx: fcn-resnet101-11 - CPU - Standardclickhouse: 100M Rows Hits Dataset, Third Runclickhouse: 100M Rows Hits Dataset, Second Runpgbench: 100 - 1000 - Read Only - Average Latencypgbench: 100 - 1000 - Read Onlytensorflow: CPU - 256 - ResNet-50clickhouse: 100M Rows Hits Dataset, First Run / Cold Cachedeepsparse: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Asynchronous Multi-Streammemcached: 1:100build-godot: Time To Compileonnx: super-resolution-10 - CPU - Standarddaphne: OpenMP - Euclidean Clusterbuild-llvm: Unix Makefilestensorflow: CPU - 256 - AlexNetcompress-zstd: 8, Long Mode - Compression Speedjohn-the-ripper: HMAC-SHA512ffmpeg: libx265 - Liveffmpeg: libx265 - Livedeepsparse: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Streamdeepsparse: CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Streamonnx: Faster R-CNN R-50-FPN-int8 - CPU - Standarddeepsparse: NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Streamdeepsparse: NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Streamdeepsparse: CV Detection, YOLOv5s COCO - Asynchronous Multi-Streambuild2: Time To Compilepgbench: 1 - 800 - Read Only - Average Latencytensorflow: CPU - 512 - ResNet-50memcached: 1:10pgbench: 1 - 800 - Read Onlydeepsparse: NLP Text Classification, DistilBERT mnli - Synchronous Single-Streamdeepsparse: NLP Text Classification, DistilBERT mnli - Synchronous Single-Streamdeepsparse: NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Streamdeepsparse: CV Classification, ResNet-50 ImageNet - Synchronous Single-Streamdeepsparse: CV Classification, ResNet-50 ImageNet - Synchronous Single-Streamdeepsparse: NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Streamdeepsparse: NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Streamdeepsparse: NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Streamffmpeg: libx265 - Platformffmpeg: libx265 - Platformdeepsparse: NLP Text Classification, BERT base uncased SST2 - Synchronous Single-Streamdeepsparse: NLP Text Classification, BERT base uncased SST2 - Synchronous Single-Streamtensorflow: CPU - 512 - GoogLeNetdeepsparse: CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Streamcompress-zstd: 19, Long Mode - Compression Speeddeepsparse: NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Streamdraco: Church Facadedeepsparse: NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Streamdeepsparse: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Synchronous Single-Streamdeepsparse: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Synchronous Single-Streamffmpeg: libx264 - Uploadffmpeg: libx264 - Uploaddeepsparse: CV Detection, YOLOv5s COCO - Synchronous Single-Streamdeepsparse: CV Detection, YOLOv5s COCO - Synchronous Single-Streamcompress-zstd: 8 - Decompression Speeddeepsparse: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Synchronous Single-Streamdeepsparse: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Synchronous Single-Streamdeepsparse: CV Segmentation, 90% Pruned YOLACT Pruned - Synchronous Single-Streamdeepsparse: CV Segmentation, 90% Pruned YOLACT Pruned - Synchronous Single-Streamdraco: Lioncompress-zstd: 12 - Decompression Speedpgbench: 1 - 1000 - Read Only - Average Latencyffmpeg: libx265 - Video On Demandffmpeg: libx265 - Video On Demandpgbench: 1 - 1000 - Read Onlycompress-zstd: 19 - Decompression Speedffmpeg: libx264 - Video On Demandffmpeg: libx264 - Video On Demandcompress-zstd: 8, Long Mode - Decompression Speedffmpeg: libx264 - Platformffmpeg: libx264 - Platformdav1d: Summer Nature 4Knginx: 200compress-zstd: 8 - Compression Speedcompress-zstd: 12 - Compression Speedcompress-zstd: 19, Long Mode - Decompression Speedcompress-zstd: 19 - Compression Speedffmpeg: libx264 - Liveffmpeg: libx264 - Livedav1d: Summer Nature 1080pffmpeg: libx265 - Uploaddav1d: Chimera 1080pffmpeg: libx265 - Uploaddav1d: Chimera 1080p 10-bitonnx: Faster R-CNN R-50-FPN-int8 - CPU - Standardonnx: Faster R-CNN R-50-FPN-int8 - CPU - Parallelonnx: super-resolution-10 - CPU - Standardonnx: super-resolution-10 - CPU - Parallelonnx: ResNet50 v1-12-int8 - CPU - Standardonnx: ResNet50 v1-12-int8 - CPU - Parallelonnx: ArcFace ResNet-100 - CPU - Standardonnx: ArcFace ResNet-100 - CPU - Parallelonnx: fcn-resnet101-11 - CPU - Standardonnx: fcn-resnet101-11 - CPU - Parallelonnx: CaffeNet 12-int8 - CPU - Standardonnx: CaffeNet 12-int8 - CPU - Parallelonnx: bertsquad-12 - CPU - Standardonnx: bertsquad-12 - CPU - Parallelonnx: yolov4 - CPU - Standardonnx: yolov4 - CPU - Parallelonnx: GPT-2 - CPU - Standardonnx: GPT-2 - CPU - Parallelnginx: 100abcde6577218.461541695826213.7314199924950119907915912860445786142.3257.39241.432294443.019842.32715107456024601462850.135951.3780271471000129947980460432927777908982494280356999237630400289262906541003.5666316.6097624.9625163238653913163353355.72194.561440.462623049420.426995567400.507149.0497593.9993992424.1547599.4968.5492480831567.753380.72316.04155560008.69580670410.66113347611.248207.15629.4284104.244719.841829247718502792738106.5442110.7402121.1083113.1673132.71416.40254104.124463870015.666261318078.7691973936443566455306.61035173757.38856.98159.1091.253311909872619.50938212.17426761184.0829.36521375.44524.041125.877133.701111.698954.82128.48530.7765240111.29459.9712.81138338220.209143188.95.11448612.78602.500.2673741941146.79584.9830.56952851726.85107.66123.8791637.357959197217.1871276.22903.230917500037.07136.22119.5961321.474937.036328.021735.6789108.804663.1670.215163.853203112.63718995195.19955.119876.7052195.1285.122928.219235.4293151.3746132.59700419657.1311.268188.7046516.8747.77438.51109.105268721108.7024194.5375.138512.45202.82901313206.93914.82871580.9101.78899.820262.345316.024953211633.70.26757.02132.84208749637389721395.148.08157.5348413261619.8156.97975620848.25379.84257954.011225.5316.81329.817.423.17217.98807.1689.614916611657.528.18602.6426.99832.48918.071858.950954.826625.1381133.978341.3976195.521797.881.907371.66637106.77682.137151.274156.1837.780726.278086525618.322545796163512.983814324394119436898874839446883157.6557.85239.452375542.278342.09615066316030001462987.435946.3776495266470129484883100435267657910814515750356991460690400186415406931001.9837320.1841626.0654163353654104163241353.36194.057439.924720494520.448751145400.3873149.2181594.27910888224.5996563.2988.4335004941567.608881.8310.53156080008.69935470910.38690170711.237233.81232.4597105.182219.458733603737892785663107.1054111.2807121.2691113.4477132.70686.36316105.044383833862.6466204617717.9305457126409776447876.48661208703.78853.06159.9881.2581906872099.01147612.26925521449.38912.01251375.77552.793126.288132.796111.351949.71128.97930.6798237868.2454.6713.01238166660.21164665.514.89263606.58603.930.2683730123146.9582.4430.58272813977.52107.352113.0161637.00217.4511272.13900.730849200036.891181239136.89119.6426320.985137.394528.201835.4512108.801663.0140.21163.763163183.273803554195.77295.104576.5973193.39835.168628.248535.3934149.8129132.94701356356.9811.280788.5975515.1147.87368.561110.673867881108.9774199.04395.022212.42203.29206.61594.83651593101.39779.857761.926316.1331529616290.2757.18132.4837073151399.248.19157.2015764791625.7156.9848.26381.16258099.681217.1317.41336.117.423.15218.14806.0889.730906933656.3128.14603.0526.73932.59198.847688.978834.276225.1519130.805740.6488204.386794.9091.808551.7735283.244281.5006154.16157.157.750786.243376874318.477541206574012.1693717323509122137894873852451734158.7657.9239.482314442.106542.81735109592965101462827.135968.3779191711330130061831940434781404909186377320356961832960400024283906781005.7215317.4936625.2598163353653913163299354.96183.218439.023420709019.898906002399.6568149.1959591.781000015824.648598.4328.266046041569.255280.41295.75156080008.46316134610.66591381811.246207.34532.5181105.12419.778071377751802802641106.9305111.3003120.9085113.3183132.53156.3862105.294393858723.9566078317373.1255121456413386474426.96155185857.48857.87159.5961.171211916342357.90742412.17357111125.1049.352311378.85536.343126.228133.106112.031937.41126.83929.0291241662.73455.4513.16137858760.211165838.185.10242623.39592.330.2653776352146.99578.7630.54432821181.21107.075123.1291636.09214.0211276.62893.230962100036.98136.56119.8062321.152637.065728.094735.5865109.225363.2220.21163.663154822.343804637194.07885.149276.6635196.14785.096128.188435.4677150.7648132.87217589257.0111.25188.831518.5247.68618.381108.418768881112.3295197.08615.072112.45202.88206.67424.83531577.1100.93389.903462.293316.038652701641.70.27257.12132.62663324736724711393.848.27156.931613.8157.0048.25383.95255419.281220.5314.8133817.423.203361631217.64809.8689.78657.2228.13603.5126.976634.44528.120978.924274.822115.4567730.750140.5691195.983853.8161.863831.66912106.92382.1422143.643156.5837.881886.2591626706622.312448184686017.0721269477147733396162456165035729867.825.41120.063450285.022984.551610173521687902936562.972050.215527086621502586417946208634916501810537338580710753283550796155857705781953.9329620.41241209.92173153401263000315110184.36101.311840.731439016712.810395283754.4392281.229330.971695549313.2688326.9834.7096176912845.820845.4191.95272760005.0784476446.27687325418.413137.86819.7663172.276911.9169892281106971689260174.2633180.4766194.9019181.4535211.98034.0979768.593842575013.5243883313064.1537041234380234368614.84593141164.84588.27111.9620.9225952688692198.8964558.854235651415.16210.54651843.74417.11897.849106.0588.9539802.27126.73824.7447196034.9382.2910.85736434960.224.78422551.92536.530.2963381479131.65525.8633.64652571874.2697.311123.9321506.75199.2611347.52865.528615600039.276270815128.58126.9401339.577836.244929.451533.947113.90560.4740.22166.893068513.783638802188.20835.309679.1804191.64215.215929.231234.2025154.2824134.3755255156.3711.596686.1837530.5149.02738.341133.260567211127.5771196.61585.084212.70198.824978146203.70384.90531611.399.6510.030461.044716.3645521816360.2757.42131.93153836637009261406.148.76155.3595467611615.1155.27651261748.781217.7317.81334.717.323.087381477218.7389.6928.1527.587340.40838.0683711.23897.252629.8686350.588775.3612209.0181083.892.396873.055394.8158112.935206.355244.0177.888128.9211318254871.159140531702047.0021220213338631208233433632717416867.0724.48106.14783484.400984.277910170842181502937037.272086.615514666803202586006799908595555421804869906900710739693380795376658303511964.2605617.57811204.66013149281255000314188184.99102.184838.034538245410.759381397754.7591279.9601319.031434405413.307323.4314.6772018072819.959645.9177.73271690005.3224928036.23389283719.134139.61219.2692173.578612.577285281193681713878173.9641180.8434195.2731182.3542213.29954.0650568.532932550554.2743866711981.0459852514312174358404.70693142512.83597.46112.5050.8807612412291871.5585348.984415231529.1088.858441775.18482.69497.521104.35589.8277760.15102.87226.622194753.92377.5611.1932749200.2444.46687568.25536.900.2893461133132.39527.9533.94112595069.8698.009123.2821494.17200.021386.72943.229256900037.18135.83126.8569340.465435.532429.087134.372114.309360.3480.218171.183063463.553669550190.66215.241279.542189.00715.288628.950834.5339155.0272137.0455.2811.582586.2895521.8748.80748.461136.299367841135.447196.31695.091812.66199.473346601202.4644.93531599.2100.23649.971261.696116.190453001611.20.26956.39134.343716270138548.60155.8542040671606.4155.68443947648.661213.4315.21330.717.323.09218.7189.5928.1828.140337.55888.1109111.12947.162089.7836351.894475.1448223.8671135.382.071123.08913112.883111.298212.449245.9929.718248.8782OpenBenchmarking.org

OpenCV

This is a benchmark of the OpenCV (Computer Vision) library's built-in performance tests. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenCV 4.7Test: Coredecab60K120K180K240K300K2670661825486874365772652561. (CXX) g++ options: -fPIC -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -shared

PostgreSQL

This is a benchmark of PostgreSQL using the integrated pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL 15Scaling Factor: 100 - Clients: 1000 - Mode: Read Write - Average Latencyedcab163248648071.1622.3118.4818.4618.321. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

OpenBenchmarking.orgTPS, More Is BetterPostgreSQL 15Scaling Factor: 100 - Clients: 1000 - Mode: Read Writeedcab12K24K36K48K60K14053448185412054169545791. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

OpenBenchmarking.orgTPS, More Is BetterPostgreSQL 15Scaling Factor: 100 - Clients: 800 - Mode: Read Writeedabc14K28K42K56K70K17020468605826261635657401. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL 15Scaling Factor: 100 - Clients: 800 - Mode: Read Write - Average Latencyedabc112233445547.0017.0713.7312.9812.171. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

OpenCV

This is a benchmark of the OpenCV (Computer Vision) library's built-in performance tests. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenCV 4.7Test: Videodeabc30K60K90K120K150K1269471220214199938143371731. (CXX) g++ options: -fPIC -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -shared

OpenBenchmarking.orgms, Fewer Is BetterOpenCV 4.7Test: Object Detectiondeabc15K30K45K60K75K71477333862495024394235091. (CXX) g++ options: -fPIC -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -shared

OpenBenchmarking.orgms, Fewer Is BetterOpenCV 4.7Test: Image Processingdecab70K140K210K280K350K3339613120821221371199071194361. (CXX) g++ options: -fPIC -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -shared

MariaDB

This is a MariaDB MySQL database server benchmark making use of mysqlslap. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgQueries Per Second, More Is BetterMariaDB 11.0.1Clients: 512edcba20040060080010003346248948989151. (CXX) g++ options: -fPIC -pie -fstack-protector -O3 -shared -lrt -lpthread -lz -ldl -lm -lstdc++

OpenBenchmarking.orgQueries Per Second, More Is BetterMariaDB 11.0.1Clients: 1024edcba20040060080010003365618738749121. (CXX) g++ options: -fPIC -pie -fstack-protector -O3 -shared -lrt -lpthread -lz -ldl -lm -lstdc++

OpenBenchmarking.orgQueries Per Second, More Is BetterMariaDB 11.0.1Clients: 2048edbca20040060080010003276508398528601. (CXX) g++ options: -fPIC -pie -fstack-protector -O3 -shared -lrt -lpthread -lz -ldl -lm -lstdc++

RocksDB

This is a benchmark of Meta/Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgOp/s, More Is BetterRocksDB 8.0Test: Random Fill Syncedabc100K200K300K400K500K1741683572984457864468834517341. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 16 - Model: GoogLeNetedabc408012016020067.0767.80142.32157.65158.76

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 16 - Model: ResNet-50edabc132639526524.4825.4157.3957.8557.90

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 32 - Model: GoogLeNetedbca50100150200250106.10120.06239.45239.48241.43

OpenCV

This is a benchmark of the OpenCV (Computer Vision) library's built-in performance tests. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenCV 4.7Test: DNN - Deep Neural Networkedbca10K20K30K40K50K47834345022375523144229441. (CXX) g++ options: -fPIC -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -shared

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.3.2Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Streamcbaed2040608010042.1142.2843.0284.4085.02

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.3.2Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Streambaced2040608010042.1042.3342.8284.2884.55

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgbyte/s, More Is BetterOpenSSL 3.1Algorithm: ChaCha20baced200000M400000M600000M800000M1000000M506631603000510745602460510959296510101708421815010173521687901. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

OpenBenchmarking.orgverify/s, More Is BetterOpenSSL 3.1Algorithm: RSA4096cabde600K1200K1800K2400K3000K1462827.11462850.11462987.42936562.92937037.21. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

OpenBenchmarking.orgsign/s, More Is BetterOpenSSL 3.1Algorithm: RSA4096bacde15K30K45K60K75K35946.335951.335968.372050.272086.61. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

OpenBenchmarking.orgbyte/s, More Is BetterOpenSSL 3.1Algorithm: AES-256-GCMbcaed300000M600000M900000M1200000M1500000M776495266470779191711330780271471000155146668032015527086621501. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

OpenBenchmarking.orgbyte/s, More Is BetterOpenSSL 3.1Algorithm: SHA256baced60000M120000M180000M240000M300000M1294848831001299479804601300618319402586006799902586417946201. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

RocksDB

This is a benchmark of Meta/Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgOp/s, More Is BetterRocksDB 8.0Test: Random Readacbed200M400M600M800M1000M4329277774347814044352676578595555428634916501. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgbyte/s, More Is BetterOpenSSL 3.1Algorithm: AES-128-GCMacbed400000M800000M1200000M1600000M2000000M908982494280909186377320910814515750180486990690018105373385801. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

OpenBenchmarking.orgbyte/s, More Is BetterOpenSSL 3.1Algorithm: ChaCha20-Poly1305cbaed150000M300000M450000M600000M750000M3569618329603569914606903569992376307107396933807107532835501. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

OpenBenchmarking.orgbyte/s, More Is BetterOpenSSL 3.1Algorithm: SHA512cbaed20000M40000M60000M80000M100000M40002428390400186415404002892629079537665830796155857701. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

MariaDB

This is a MariaDB MySQL database server benchmark making use of mysqlslap. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgQueries Per Second, More Is BetterMariaDB 11.0.1Clients: 4096edacb1503004506007503515786546786931. (CXX) g++ options: -fPIC -pie -fstack-protector -O3 -shared -lrt -lpthread -lz -ldl -lm -lstdc++

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.3.2Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Streambacde4008001200160020001001.981003.571005.721953.931964.26

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.3.2Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Streamacbed130260390520650316.61317.49320.18617.58620.41

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.3.2Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Streamacbed30060090012001500624.96625.26626.071204.661209.92

John The Ripper

This is a benchmark of John The Ripper, which is a password cracker. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgReal C/S, More Is BetterJohn The Ripper 2023.03.14Test: bcryptabced70K140K210K280K350K1632381633531633533149283153401. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lgmp -lm -lrt -lz -ldl -lcrypt

OpenBenchmarking.orgReal C/S, More Is BetterJohn The Ripper 2023.03.14Test: WPA PSKacbed300K600K900K1200K1500K653913653913654104125500012630001. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lgmp -lm -lrt -lz -ldl -lcrypt

OpenBenchmarking.orgReal C/S, More Is BetterJohn The Ripper 2023.03.14Test: Blowfishbcaed70K140K210K280K350K1632411632991633533141883151101. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lgmp -lm -lrt -lz -ldl -lcrypt

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 16 - Model: AlexNetdebca80160240320400184.36184.99353.36354.96355.72

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as a open-source, cross-platform, high performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgInferences Per Second, More Is BetterONNX Runtime 1.14Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Paralleldecba4080120160200101.31102.18183.22194.06194.561. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.3.2Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Streamcbaed2004006008001000439.02439.92440.46838.03840.73

OpenCV

This is a benchmark of the OpenCV (Computer Vision) library's built-in performance tests. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenCV 4.7Test: Graph APIdeacb80K160K240K320K400K3901673824542304942070902049451. (CXX) g++ options: -fPIC -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -shared

SPECFEM3D

simulates acoustic (fluid), elastic (solid), coupled acoustic/elastic, poroelastic or seismic wave propagation in any type of conforming mesh of hexahedra. This test profile currently relies on CPU-based execution for SPECFEM3D and using a variety of their built-in examples/models for benchmarking. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterSPECFEM3D 4.0Model: Water-layered Halfspacebacde51015202520.4520.4319.9012.8110.761. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.3.2Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Streamcbade160320480640800399.66400.39400.51754.44754.76

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.3.2Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Streamacbed60120180240300149.05149.20149.22279.96281.23

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 32 - Model: AlexNetedcab130260390520650319.03330.97591.78593.90594.27

RocksDB

This is a benchmark of Meta/Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgOp/s, More Is BetterRocksDB 8.0Test: Read While Writingbaced4M8M12M16M20M910888299399241000015814344054169554931. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as a open-source, cross-platform, high performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgInferences Per Second, More Is BetterONNX Runtime 1.14Model: ArcFace ResNet-100 - Device: CPU - Executor: Paralleldeabc61218243013.2713.3124.1524.6024.651. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

OpenBenchmarking.orgInferences Per Second, More Is BetterONNX Runtime 1.14Model: CaffeNet 12-int8 - Device: CPU - Executor: Paralleledbca130260390520650323.43326.98563.30598.43599.501. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

SPECFEM3D

simulates acoustic (fluid), elastic (solid), coupled acoustic/elastic, poroelastic or seismic wave propagation in any type of conforming mesh of hexahedra. This test profile currently relies on CPU-based execution for SPECFEM3D and using a variety of their built-in examples/models for benchmarking. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterSPECFEM3D 4.0Model: Mount St. Helensabcde2468108.5492480838.4335004948.2660460404.7096176914.6772018071. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.3.2Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Streambaced60012001800240030001567.611567.751569.262819.962845.82

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 32 - Model: ResNet-50decab2040608010045.4045.9080.4180.7281.80

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 64 - Model: GoogLeNetedcba70140210280350177.73191.95295.75310.53316.04

John The Ripper

This is a benchmark of John The Ripper, which is a password cracker. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgReal C/S, More Is BetterJohn The Ripper 2023.03.14Test: MD5abced6M12M18M24M30M15556000156080001560800027169000272760001. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lgmp -lm -lrt -lz -ldl -lcrypt

SPECFEM3D

simulates acoustic (fluid), elastic (solid), coupled acoustic/elastic, poroelastic or seismic wave propagation in any type of conforming mesh of hexahedra. This test profile currently relies on CPU-based execution for SPECFEM3D and using a variety of their built-in examples/models for benchmarking. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterSPECFEM3D 4.0Model: Tomographic Modelbaced2468108.6993547098.6958067048.4631613465.3224928035.0784476441. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

OpenBenchmarking.orgSeconds, Fewer Is BetterSPECFEM3D 4.0Model: Homogeneous Halfspacecabde369121510.66591381810.66113347610.3869017076.2768732546.2338928371. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

GROMACS

The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing with the water_GMX50 data. This test profile allows selecting between CPU and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgNs Per Day, More Is BetterGROMACS 2023Implementation: MPI CPU - Input: water_GMX50_barebcade51015202511.2411.2511.2518.4119.131. (CXX) g++ options: -O3

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as a open-source, cross-platform, high performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgInferences Per Second, More Is BetterONNX Runtime 1.14Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standarddeacb50100150200250137.87139.61207.16207.35233.811. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

OpenBenchmarking.orgInferences Per Second, More Is BetterONNX Runtime 1.14Model: ArcFace ResNet-100 - Device: CPU - Executor: Standardedabc81624324019.2719.7729.4332.4632.521. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

Embree

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.0.1Binary: Pathtracer - Model: Crownacbde4080120160200104.24105.12105.18172.28173.58MIN: 102.16 / MAX: 107.49MIN: 102.85 / MAX: 108.38MIN: 103.34 / MAX: 107.96MIN: 167.64 / MAX: 181.13MIN: 168.69 / MAX: 180.7

SPECFEM3D

simulates acoustic (fluid), elastic (solid), coupled acoustic/elastic, poroelastic or seismic wave propagation in any type of conforming mesh of hexahedra. This test profile currently relies on CPU-based execution for SPECFEM3D and using a variety of their built-in examples/models for benchmarking. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterSPECFEM3D 4.0Model: Layered Halfspaceacbed51015202519.8419.7819.4612.5811.921. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

OpenCV

This is a benchmark of the OpenCV (Computer Vision) library's built-in performance tests. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenCV 4.7Test: Features 2Dedcba30K60K90K120K150K1193681106977518073789718501. (CXX) g++ options: -fPIC -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -shared

RocksDB

This is a benchmark of Meta/Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgOp/s, More Is BetterRocksDB 8.0Test: Read Random Write Randomdebac600K1200K1800K2400K3000K168926017138782785663279273828026411. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Embree

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.0.1Binary: Pathtracer - Model: Asian Dragon Objacbed4080120160200106.54106.93107.11173.96174.26MIN: 104.86 / MAX: 109.02MIN: 105.38 / MAX: 108.8MIN: 105.52 / MAX: 109.66MIN: 169.75 / MAX: 180.18MIN: 170.44 / MAX: 179.73

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.0.1Binary: Pathtracer ISPC - Model: Crownabcde4080120160200110.74111.28111.30180.48180.84MIN: 108.24 / MAX: 114.35MIN: 108.92 / MAX: 115.18MIN: 108.84 / MAX: 114.92MIN: 174.62 / MAX: 189.72MIN: 174.89 / MAX: 190.36

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.0.1Binary: Pathtracer - Model: Asian Dragoncabde4080120160200120.91121.11121.27194.90195.27MIN: 119.01 / MAX: 122.81MIN: 118.84 / MAX: 124.01MIN: 119.43 / MAX: 123.26MIN: 190.69 / MAX: 207.25MIN: 191.17 / MAX: 206.18

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.0.1Binary: Pathtracer ISPC - Model: Asian Dragon Objacbde4080120160200113.17113.32113.45181.45182.35MIN: 111.68 / MAX: 116.14MIN: 111.66 / MAX: 115.92MIN: 111.85 / MAX: 115.79MIN: 177.49 / MAX: 186.64MIN: 178.38 / MAX: 188.72

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.0.1Binary: Pathtracer ISPC - Model: Asian Dragoncbade50100150200250132.53132.71132.71211.98213.30MIN: 130.91 / MAX: 135.37MIN: 130.87 / MAX: 135.14MIN: 131.03 / MAX: 135.12MIN: 207.56 / MAX: 231.22MIN: 208.86 / MAX: 228.78

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as a open-source, cross-platform, high performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgInferences Per Second, More Is BetterONNX Runtime 1.14Model: yolov4 - Device: CPU - Executor: Paralleledbca2468104.065054.097976.363166.386206.402541. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 64 - Model: ResNet-50edabc2040608010068.5368.59104.12105.04105.29

MariaDB

This is a MariaDB MySQL database server benchmark making use of mysqlslap. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgQueries Per Second, More Is BetterMariaDB 11.0.1Clients: 8192edbca1002003004005002933844384394461. (CXX) g++ options: -fPIC -pie -fstack-protector -O3 -shared -lrt -lpthread -lz -ldl -lm -lstdc++

Memcached

Memcached is a high performance, distributed memory object caching system. This Memcached test profiles makes use of memtier_benchmark for excuting this CPU/memory-focused server benchmark. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgOps/sec, More Is BetterMemcached 1.6.19Set To Get Ratio: 1:5edbca800K1600K2400K3200K4000K2550554.272575013.523833862.643858723.953870015.601. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

RocksDB

This is a benchmark of Meta/Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgOp/s, More Is BetterRocksDB 8.0Test: Sequential Filledcba140K280K420K560K700K4386674388336607836620466626131. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Darmstadt Automotive Parallel Heterogeneous Suite

DAPHNE is the Darmstadt Automotive Parallel HeterogeNEous Benchmark Suite with OpenCL / CUDA / OpenMP test cases for these automotive benchmarks for evaluating programming models in context to vehicle autonomous driving capabilities. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTest Cases Per Minute, More Is BetterDarmstadt Automotive Parallel Heterogeneous Suite 2021.11.02Backend: OpenMP - Kernel: Points2Imageedcba4K8K12K16K20K11981.0513064.1517373.1317717.9318078.771. (CXX) g++ options: -O3 -std=c++11 -fopenmp

RocksDB

This is a benchmark of Meta/Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgOp/s, More Is BetterRocksDB 8.0Test: Random Filledbca140K280K420K560K700K4312174380236409776413386443561. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

OpenBenchmarking.orgOp/s, More Is BetterRocksDB 8.0Test: Update Randomedbac140K280K420K560K700K4358404368616447876455306474421. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as a open-source, cross-platform, high performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgInferences Per Second, More Is BetterONNX Runtime 1.14Model: yolov4 - Device: CPU - Executor: Standardedbac2468104.706934.845936.486616.610356.961551. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

Apache HTTP Server

This is a test of the Apache HTTPD web server. This Apache HTTPD web server benchmark test profile makes use of the wrk program for facilitating the HTTP requests over a fixed period time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRequests Per Second, More Is BetterApache HTTP Server 2.4.56Concurrent Requests: 500deacb40K80K120K160K200K141164.84142512.83173757.38185857.48208703.781. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 64 - Model: AlexNetdebac2004006008001000588.27597.46853.06856.98857.87

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as a open-source, cross-platform, high performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgInferences Per Second, More Is BetterONNX Runtime 1.14Model: GPT-2 - Device: CPU - Executor: Paralleldeacb4080120160200111.96112.51159.11159.60159.991. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

OpenBenchmarking.orgInferences Per Second, More Is BetterONNX Runtime 1.14Model: fcn-resnet101-11 - Device: CPU - Executor: Paralleledcab0.28310.56620.84931.13241.41550.8807610.9225951.1712101.2533101.2580001. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

OpenCV

This is a benchmark of the OpenCV (Computer Vision) library's built-in performance tests. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenCV 4.7Test: Stitchingdecab60K120K180K240K300K2688692412291916341909871906871. (CXX) g++ options: -fPIC -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -shared

PostgreSQL

This is a benchmark of PostgreSQL using the integrated pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL 15Scaling Factor: 1 - Clients: 1000 - Mode: Read Write - Average Latencyacdbe60012001800240030002619.512357.912198.902099.011871.561. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

OpenBenchmarking.orgTPS, More Is BetterPostgreSQL 15Scaling Factor: 1 - Clients: 1000 - Mode: Read Writeacdbe1202403604806003824244554765341. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as a open-source, cross-platform, high performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgInferences Per Second, More Is BetterONNX Runtime 1.14Model: bertsquad-12 - Device: CPU - Executor: Paralleldecab36912158.854238.9844112.1735012.1742012.269201. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

PostgreSQL

This is a benchmark of PostgreSQL using the integrated pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTPS, More Is BetterPostgreSQL 15Scaling Factor: 1 - Clients: 800 - Mode: Read Writeebdac1503004506007505235525656767111. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL 15Scaling Factor: 1 - Clients: 800 - Mode: Read Write - Average Latencyebdac300600900120015001529.111449.391415.161184.081125.101. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as a open-source, cross-platform, high performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgInferences Per Second, More Is BetterONNX Runtime 1.14Model: bertsquad-12 - Device: CPU - Executor: Standardecadb36912158.858449.352319.3652010.5465012.012501. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 512 - Model: AlexNetabced4008001200160020001375.441375.771378.851775.181843.74

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as a open-source, cross-platform, high performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgInferences Per Second, More Is BetterONNX Runtime 1.14Model: CaffeNet 12-int8 - Device: CPU - Executor: Standarddeacb120240360480600417.12482.69524.04536.34552.791. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

Timed LLVM Compilation

This test times how long it takes to compile/build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed LLVM Compilation 16.0Build System: Ninjabcade306090120150126.29126.23125.8897.8597.52

Timed Node.js Compilation

This test profile times how long it takes to build/compile Node.js itself from source. Node.js is a JavaScript run-time built from the Chrome V8 JavaScript engine while itself is written in C/C++. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Node.js Compilation 19.8.1Time To Compileacbde306090120150133.70133.11132.80106.05104.36

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as a open-source, cross-platform, high performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgInferences Per Second, More Is BetterONNX Runtime 1.14Model: super-resolution-10 - Device: CPU - Executor: Paralleldebac30609012015088.9589.83111.35111.70112.031. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

Darmstadt Automotive Parallel Heterogeneous Suite

DAPHNE is the Darmstadt Automotive Parallel HeterogeNEous Benchmark Suite with OpenCL / CUDA / OpenMP test cases for these automotive benchmarks for evaluating programming models in context to vehicle autonomous driving capabilities. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTest Cases Per Minute, More Is BetterDarmstadt Automotive Parallel Heterogeneous Suite 2021.11.02Backend: OpenMP - Kernel: NDT Mappingedcba2004006008001000760.15802.27937.41949.71954.821. (CXX) g++ options: -O3 -std=c++11 -fopenmp

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as a open-source, cross-platform, high performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgInferences Per Second, More Is BetterONNX Runtime 1.14Model: GPT-2 - Device: CPU - Executor: Standardedcab306090120150102.87126.74126.84128.49128.981. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

OpenBenchmarking.orgInferences Per Second, More Is BetterONNX Runtime 1.14Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Paralleldecba71421283524.7426.6229.0330.6830.781. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

nginx

This is a benchmark of the lightweight Nginx HTTP(S) web-server. This Nginx web server benchmark test profile makes use of the wrk program for facilitating the HTTP requests over a fixed period time with a configurable number of concurrent clients/connections. HTTPS with a self-signed OpenSSL certificate is used by this test for local benchmarking. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRequests Per Second, More Is Betternginx 1.23.2Connections: 500edbac50K100K150K200K250K194753.92196034.90237868.20240111.29241662.731. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 256 - Model: GoogLeNetedbca100200300400500377.56382.29454.67455.45459.97

Timed FFmpeg Compilation

This test times how long it takes to build the FFmpeg multimedia library. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed FFmpeg Compilation 6.0Time To Compilecbaed369121513.1613.0112.8111.1910.86

PostgreSQL

This is a benchmark of PostgreSQL using the integrated pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTPS, More Is BetterPostgreSQL 15Scaling Factor: 100 - Clients: 800 - Mode: Read Onlyedcba800K1600K2400K3200K4000K327492036434963785876381666638338221. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL 15Scaling Factor: 100 - Clients: 800 - Mode: Read Only - Average Latencyedcba0.05490.10980.16470.21960.27450.2440.2200.2110.2100.2091. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

Apache HTTP Server

This is a test of the Apache HTTPD web server. This Apache HTTPD web server benchmark test profile makes use of the wrk program for facilitating the HTTP requests over a fixed period time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRequests Per Second, More Is BetterApache HTTP Server 2.4.56Concurrent Requests: 200abc40K80K120K160K200K143188.90164665.51165838.181. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

Concurrent Requests: 200

d: The test quit with a non-zero exit status.

e: The test quit with a non-zero exit status.

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as a open-source, cross-platform, high performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgInferences Per Second, More Is BetterONNX Runtime 1.14Model: fcn-resnet101-11 - Device: CPU - Executor: Standardedbca1.15082.30163.45244.60325.7544.466874.784224.892635.102425.114481. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ClickHouse

ClickHouse is an open-source, high performance OLAP data management system. This test profile uses ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ / https://github.com/ClickHouse/ClickBench/tree/main/clickhouse with the 100 million rows web analytics dataset. The reported value is the query processing time using the geometric mean of all separate queries performed as an aggregate. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgQueries Per Minute, Geo Mean, More Is BetterClickHouse 22.12.3.5100M Rows Hits Dataset, Third Rundebac130260390520650551.92568.25606.58612.78623.39MIN: 87.98 / MAX: 6000MIN: 90.09 / MAX: 6666.67MIN: 58.71 / MAX: 5454.55MIN: 59.52 / MAX: 5454.55MIN: 57.97 / MAX: 7500

OpenBenchmarking.orgQueries Per Minute, Geo Mean, More Is BetterClickHouse 22.12.3.5100M Rows Hits Dataset, Second Rundecab130260390520650536.53536.90592.33602.50603.93MIN: 74.17 / MAX: 6666.67MIN: 75.09 / MAX: 6000MIN: 58.2 / MAX: 5000MIN: 58.14 / MAX: 6666.67MIN: 59 / MAX: 7500

PostgreSQL

This is a benchmark of PostgreSQL using the integrated pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL 15Scaling Factor: 100 - Clients: 1000 - Mode: Read Only - Average Latencydebac0.06660.13320.19980.26640.3330.2960.2890.2680.2670.2651. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

OpenBenchmarking.orgTPS, More Is BetterPostgreSQL 15Scaling Factor: 100 - Clients: 1000 - Mode: Read Onlydebac800K1600K2400K3200K4000K338147934611333730123374194137763521. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 256 - Model: ResNet-50deabc306090120150131.65132.39146.79146.90146.99

ClickHouse

ClickHouse is an open-source, high performance OLAP data management system. This test profile uses ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ / https://github.com/ClickHouse/ClickBench/tree/main/clickhouse with the 100 million rows web analytics dataset. The reported value is the query processing time using the geometric mean of all separate queries performed as an aggregate. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgQueries Per Minute, Geo Mean, More Is BetterClickHouse 22.12.3.5100M Rows Hits Dataset, First Run / Cold Cachedecba130260390520650525.86527.95578.76582.44584.98MIN: 60.3 / MAX: 5454.55MIN: 61.35 / MAX: 5454.55MIN: 57.75 / MAX: 6000MIN: 56.98 / MAX: 6000MIN: 58.37 / MAX: 6000

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.3.2Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Streamedbac81624324033.9433.6530.5830.5730.54

Memcached

Memcached is a high performance, distributed memory object caching system. This Memcached test profiles makes use of memtier_benchmark for excuting this CPU/memory-focused server benchmark. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgOps/sec, More Is BetterMemcached 1.6.19Set To Get Ratio: 1:100debca600K1200K1800K2400K3000K2571874.262595069.862813977.522821181.212851726.851. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Timed Godot Game Engine Compilation

This test times how long it takes to compile the Godot Game Engine. Godot is a popular, open-source, cross-platform 2D/3D game engine and is built using the SCons build system and targeting the X11 platform. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Godot Game Engine Compilation 4.0Time To Compileabced20406080100107.66107.35107.0898.0197.31

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as a open-source, cross-platform, high performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgInferences Per Second, More Is BetterONNX Runtime 1.14Model: super-resolution-10 - Device: CPU - Executor: Standardbcead306090120150113.02123.13123.28123.88123.931. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

Darmstadt Automotive Parallel Heterogeneous Suite

DAPHNE is the Darmstadt Automotive Parallel HeterogeNEous Benchmark Suite with OpenCL / CUDA / OpenMP test cases for these automotive benchmarks for evaluating programming models in context to vehicle autonomous driving capabilities. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTest Cases Per Minute, More Is BetterDarmstadt Automotive Parallel Heterogeneous Suite 2021.11.02Backend: OpenMP - Kernel: Euclidean Clusteredcba4008001200160020001494.171506.751636.091637.001637.361. (CXX) g++ options: -O3 -std=c++11 -fopenmp

Timed LLVM Compilation

This test times how long it takes to compile/build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed LLVM Compilation 16.0Build System: Unix Makefilesbaced50100150200250217.45217.19214.02200.02199.26

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 256 - Model: AlexNetbacde300600900120015001272.131276.221276.621347.521386.72

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.4Compression Level: 8, Long Mode - Compression Speeddcbae2004006008001000865.5893.2900.7903.2943.21. (CC) gcc options: -O3 -pthread -lz -llzma

John The Ripper

This is a benchmark of John The Ripper, which is a password cracker. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgReal C/S, More Is BetterJohn The Ripper 2023.03.14Test: HMAC-SHA512debac70M140M210M280M350M2861560002925690003084920003091750003096210001. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lgmp -lm -lrt -lz -ldl -lcrypt

FFmpeg

This is a benchmark of the FFmpeg multimedia framework. The FFmpeg test profile is making use of a modified version of vbench from Columbia University's Architecture and Design Lab (ARCADE) [http://arcade.cs.columbia.edu/vbench/] that is a benchmark for video-as-a-service workloads. The test profile offers the options of a range of vbench scenarios based on freely distributable video content and offers the options of using the x264 or x265 video encoders for transcoding. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterFFmpeg 6.0Encoder: libx265 - Scenario: Livedeacb91827364539.2837.1837.0736.9836.891. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

OpenBenchmarking.orgFPS, More Is BetterFFmpeg 6.0Encoder: libx265 - Scenario: Livedeacb306090120150128.58135.83136.22136.56136.891. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.3.2Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Streamdecba306090120150126.94126.86119.81119.64119.60

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.3.2Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Streamedacb70140210280350340.47339.58321.47321.15320.99

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as a open-source, cross-platform, high performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgInferences Per Second, More Is BetterONNX Runtime 1.14Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standardedacb91827364535.5336.2437.0437.0737.391. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.3.2Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Streamdebca71421283529.4529.0928.2028.0928.02

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.3.2Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Streamdebca81624324033.9534.3735.4535.5935.68

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.3.2Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Streamedcab306090120150114.31113.91109.23108.80108.80

Build2

This test profile measures the time to bootstrap/install the build2 C++ build toolchain from source. Build2 is a cross-platform build toolchain for C/C++ code and features Cargo-like features. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBuild2 0.15Time To Compilecabde142842567063.2263.1763.0160.4760.35

PostgreSQL

This is a benchmark of PostgreSQL using the integrated pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL 15Scaling Factor: 1 - Clients: 800 - Mode: Read Only - Average Latencydeacb0.04950.0990.14850.1980.24750.2200.2180.2150.2100.2101. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 512 - Model: ResNet-50cbade4080120160200163.66163.76163.85166.89171.18

Memcached

Memcached is a high performance, distributed memory object caching system. This Memcached test profiles makes use of memtier_benchmark for excuting this CPU/memory-focused server benchmark. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgOps/sec, More Is BetterMemcached 1.6.19Set To Get Ratio: 1:10edcba700K1400K2100K2800K3500K3063463.553068513.783154822.343163183.273203112.601. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

PostgreSQL

This is a benchmark of PostgreSQL using the integrated pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTPS, More Is BetterPostgreSQL 15Scaling Factor: 1 - Clients: 800 - Mode: Read Onlydeabc800K1600K2400K3200K4000K363880236695503718995380355438046371. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.3.2Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Streamdecab4080120160200188.21190.66194.08195.20195.77

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.3.2Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Streamdecab1.19472.38943.58414.77885.97355.30965.24125.14925.11985.1045

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.3.2Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Streamedacb2040608010079.5479.1876.7176.6676.60

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.3.2Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Streamedbac4080120160200189.01191.64193.40195.13196.15

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.3.2Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Streamedbac1.18992.37983.56974.75965.94955.28865.21595.16865.12295.0961

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.3.2Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Streamdebac71421283529.2328.9528.2528.2228.19

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.3.2Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Streamdebac81624324034.2034.5335.3935.4335.47

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.3.2Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Streamedacb306090120150155.03154.28151.37150.76149.81

FFmpeg

This is a benchmark of the FFmpeg multimedia framework. The FFmpeg test profile is making use of a modified version of vbench from Columbia University's Architecture and Design Lab (ARCADE) [http://arcade.cs.columbia.edu/vbench/] that is a benchmark for video-as-a-service workloads. The test profile offers the options of a range of vbench scenarios based on freely distributable video content and offers the options of using the x264 or x265 video encoders for transcoding. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterFFmpeg 6.0Encoder: libx265 - Scenario: Platformedbca306090120150137.04134.38132.95132.87132.601. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

OpenBenchmarking.orgFPS, More Is BetterFFmpeg 6.0Encoder: libx265 - Scenario: Platformedbca132639526555.2856.3756.9857.0157.131. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.3.2Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Streamdebac369121511.6011.5811.2811.2711.25

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.3.2Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Streamdebac2040608010086.1886.2988.6088.7088.83

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 512 - Model: GoogLeNetbaced110220330440550515.11516.87518.52521.87530.51

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.3.2Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Streamdebac112233445549.0348.8147.8747.7747.69

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.4Compression Level: 19, Long Mode - Compression Speeddceab2468108.348.388.468.508.561. (CC) gcc options: -O3 -pthread -lz -llzma

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.3.2Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Streamedbac20040060080010001136.301133.261110.671109.111108.42

Google Draco

Draco is a library developed by Google for compressing/decompressing 3D geometric meshes and point clouds. This test profile uses some Artec3D PLY models as the sample 3D model input formats for Draco compression/decompression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterGoogle Draco 1.5.6Model: Church Facadecabed15003000450060007500688868726788678467211. (CXX) g++ options: -O3

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.3.2Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Streamedcba20040060080010001135.451127.581112.331108.981108.70

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.3.2Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Streamaedcb4080120160200194.54196.32196.62197.09199.04

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.3.2Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Streamaedcb1.15622.31243.46864.62485.7815.13855.09185.08425.07215.0222

FFmpeg

This is a benchmark of the FFmpeg multimedia framework. The FFmpeg test profile is making use of a modified version of vbench from Columbia University's Architecture and Design Lab (ARCADE) [http://arcade.cs.columbia.edu/vbench/] that is a benchmark for video-as-a-service workloads. The test profile offers the options of a range of vbench scenarios based on freely distributable video content and offers the options of using the x264 or x265 video encoders for transcoding. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterFFmpeg 6.0Encoder: libx264 - Scenario: Uploadbaced369121512.4212.4512.4512.6612.701. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

OpenBenchmarking.orgSeconds, Fewer Is BetterFFmpeg 6.0Encoder: libx264 - Scenario: Uploadbcaed4080120160200203.29202.88202.83199.47198.821. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.3.2Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Streamedbca50100150200250202.46203.70206.62206.67206.94

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.3.2Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Streamedbca1.11042.22083.33124.44165.5524.93534.90534.83654.83534.8287

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.4Compression Level: 8 - Decompression Speedcabed300600900120015001577.11580.91593.01599.21611.31. (CC) gcc options: -O3 -pthread -lz -llzma

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.3.2Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Streamdecba2040608010099.65100.24100.93101.40101.79

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.3.2Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Streamdecba369121510.03049.97129.90349.85779.8202

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.3.2Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Streamdebca142842567061.0461.7061.9362.2962.35

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.3.2Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Streamdebca4812162016.3616.1916.1316.0416.02

Google Draco

Draco is a library developed by Google for compressing/decompressing 3D geometric meshes and point clouds. This test profile uses some Artec3D PLY models as the sample 3D model input formats for Draco compression/decompression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterGoogle Draco 1.5.6Model: Lionaebcd11002200330044005500532153005296527052181. (CXX) g++ options: -O3

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.4Compression Level: 12 - Decompression Speedebadc4008001200160020001611.21629.01633.71636.01641.71. (CC) gcc options: -O3 -pthread -lz -llzma

PostgreSQL

This is a benchmark of PostgreSQL using the integrated pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL 15Scaling Factor: 1 - Clients: 1000 - Mode: Read Only - Average Latencycdbea0.06120.12240.18360.24480.3060.2720.2700.2700.2690.2671. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

FFmpeg

This is a benchmark of the FFmpeg multimedia framework. The FFmpeg test profile is making use of a modified version of vbench from Columbia University's Architecture and Design Lab (ARCADE) [http://arcade.cs.columbia.edu/vbench/] that is a benchmark for video-as-a-service workloads. The test profile offers the options of a range of vbench scenarios based on freely distributable video content and offers the options of using the x264 or x265 video encoders for transcoding. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterFFmpeg 6.0Encoder: libx265 - Scenario: Video On Demandeacbd132639526556.3957.0257.1257.1857.421. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

OpenBenchmarking.orgSeconds, Fewer Is BetterFFmpeg 6.0Encoder: libx265 - Scenario: Video On Demandeacbd306090120150134.34132.84132.63132.48131.931. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

PostgreSQL

This is a benchmark of PostgreSQL using the integrated pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTPS, More Is BetterPostgreSQL 15Scaling Factor: 1 - Clients: 1000 - Mode: Read Onlycdbea800K1600K2400K3200K4000K367247137009263707315371627037389721. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.4Compression Level: 19 - Decompression Speedecabd300600900120015001385.01393.81395.11399.21406.11. (CC) gcc options: -O3 -pthread -lz -llzma

FFmpeg

This is a benchmark of the FFmpeg multimedia framework. The FFmpeg test profile is making use of a modified version of vbench from Columbia University's Architecture and Design Lab (ARCADE) [http://arcade.cs.columbia.edu/vbench/] that is a benchmark for video-as-a-service workloads. The test profile offers the options of a range of vbench scenarios based on freely distributable video content and offers the options of using the x264 or x265 video encoders for transcoding. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterFFmpeg 6.0Encoder: libx264 - Scenario: Video On Demandabced112233445548.0848.1948.2748.6048.761. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

OpenBenchmarking.orgSeconds, Fewer Is BetterFFmpeg 6.0Encoder: libx264 - Scenario: Video On Demandabced306090120150157.53157.20156.93155.85155.361. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.4Compression Level: 8, Long Mode - Decompression Speedecdab300600900120015001606.41613.81615.11619.81625.71. (CC) gcc options: -O3 -pthread -lz -llzma

FFmpeg

This is a benchmark of the FFmpeg multimedia framework. The FFmpeg test profile is making use of a modified version of vbench from Columbia University's Architecture and Design Lab (ARCADE) [http://arcade.cs.columbia.edu/vbench/] that is a benchmark for video-as-a-service workloads. The test profile offers the options of a range of vbench scenarios based on freely distributable video content and offers the options of using the x264 or x265 video encoders for transcoding. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterFFmpeg 6.0Encoder: libx264 - Scenario: Platformcbaed306090120150157.00156.98156.98155.68155.281. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

OpenBenchmarking.orgFPS, More Is BetterFFmpeg 6.0Encoder: libx264 - Scenario: Platformacbed112233445548.2548.2548.2648.6648.781. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is Betterdav1d 1.1Video Input: Summer Nature 4Kabc80160240320400379.84381.16383.951. (CC) gcc options: -pthread -lm

Video Input: Summer Nature 4K

d: The test quit with a non-zero exit status.

e: The test quit with a non-zero exit status.

nginx

This is a benchmark of the lightweight Nginx HTTP(S) web-server. This Nginx web server benchmark test profile makes use of the wrk program for facilitating the HTTP requests over a fixed period time with a configurable number of concurrent clients/connections. HTTPS with a self-signed OpenSSL certificate is used by this test for local benchmarking. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRequests Per Second, More Is Betternginx 1.23.2Connections: 200cab60K120K180K240K300K255419.28257954.01258099.681. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

Connections: 200

d: The test quit with a non-zero exit status.

e: The test quit with a non-zero exit status.

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.4Compression Level: 8 - Compression Speedebdca300600900120015001213.41217.11217.71220.51225.51. (CC) gcc options: -O3 -pthread -lz -llzma

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.4Compression Level: 12 - Compression Speedceabd70140210280350314.8315.2316.8317.4317.81. (CC) gcc options: -O3 -pthread -lz -llzma

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.4Compression Level: 19, Long Mode - Decompression Speedaedbc300600900120015001329.81330.71334.71336.11338.01. (CC) gcc options: -O3 -pthread -lz -llzma

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.4Compression Level: 19 - Compression Speeddeabc4812162017.317.317.417.417.41. (CC) gcc options: -O3 -pthread -lz -llzma

FFmpeg

This is a benchmark of the FFmpeg multimedia framework. The FFmpeg test profile is making use of a modified version of vbench from Columbia University's Architecture and Design Lab (ARCADE) [http://arcade.cs.columbia.edu/vbench/] that is a benchmark for video-as-a-service workloads. The test profile offers the options of a range of vbench scenarios based on freely distributable video content and offers the options of using the x264 or x265 video encoders for transcoding. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterFFmpeg 6.0Encoder: libx264 - Scenario: Livecabed61218243023.2023.1723.1523.0923.091. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

OpenBenchmarking.orgFPS, More Is BetterFFmpeg 6.0Encoder: libx264 - Scenario: Livecabed50100150200250217.64217.98218.14218.71218.731. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is Betterdav1d 1.1Video Input: Summer Nature 1080pbac2004006008001000806.08807.16809.861. (CC) gcc options: -pthread -lm

Video Input: Summer Nature 1080p

d: The test quit with a non-zero exit status.

e: The test quit with a non-zero exit status.

FFmpeg

This is a benchmark of the FFmpeg multimedia framework. The FFmpeg test profile is making use of a modified version of vbench from Columbia University's Architecture and Design Lab (ARCADE) [http://arcade.cs.columbia.edu/vbench/] that is a benchmark for video-as-a-service workloads. The test profile offers the options of a range of vbench scenarios based on freely distributable video content and offers the options of using the x264 or x265 video encoders for transcoding. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterFFmpeg 6.0Encoder: libx265 - Scenario: Uploadcbdae2040608010089.7889.7389.6989.6189.591. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is Betterdav1d 1.1Video Input: Chimera 1080pbca140280420560700656.31657.22657.501. (CC) gcc options: -pthread -lm

Video Input: Chimera 1080p

d: The test quit with a non-zero exit status.

e: The test quit with a non-zero exit status.

FFmpeg

This is a benchmark of the FFmpeg multimedia framework. The FFmpeg test profile is making use of a modified version of vbench from Columbia University's Architecture and Design Lab (ARCADE) [http://arcade.cs.columbia.edu/vbench/] that is a benchmark for video-as-a-service workloads. The test profile offers the options of a range of vbench scenarios based on freely distributable video content and offers the options of using the x264 or x265 video encoders for transcoding. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterFFmpeg 6.0Encoder: libx265 - Scenario: Uploadcbdae71421283528.1328.1428.1528.1828.181. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is Betterdav1d 1.1Video Input: Chimera 1080p 10-bitabc130260390520650602.64603.05603.511. (CC) gcc options: -pthread -lm

Video Input: Chimera 1080p 10-bit

d: The test quit with a non-zero exit status.

e: The test quit with a non-zero exit status.

Apache HTTP Server

This is a test of the Apache HTTPD web server. This Apache HTTPD web server benchmark test profile makes use of the wrk program for facilitating the HTTP requests over a fixed period time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.

Concurrent Requests: 1000

a: The test quit with a non-zero exit status.

c: The test quit with a non-zero exit status.

b: The test quit with a non-zero exit status.

d: The test quit with a non-zero exit status.

e: The test quit with a non-zero exit status.

Concurrent Requests: 100

a: The test quit with a non-zero exit status.

c: The test quit with a non-zero exit status.

b: The test quit with a non-zero exit status.

d: The test quit with a non-zero exit status.

e: The test quit with a non-zero exit status.

nginx

This is a benchmark of the lightweight Nginx HTTP(S) web-server. This Nginx web server benchmark test profile makes use of the wrk program for facilitating the HTTP requests over a fixed period time with a configurable number of concurrent clients/connections. HTTPS with a self-signed OpenSSL certificate is used by this test for local benchmarking. Learn more via the OpenBenchmarking.org test page.

Connections: 1000

a: The test quit with a non-zero exit status.

c: The test quit with a non-zero exit status.

b: The test quit with a non-zero exit status.

d: The test quit with a non-zero exit status.

e: The test quit with a non-zero exit status.

Connections: 100

a: The test quit with a non-zero exit status.

c: The test quit with a non-zero exit status.

b: The test quit with a non-zero exit status.

d: The test quit with a non-zero exit status.

e: The test quit with a non-zero exit status.

181 Results Shown

OpenCV
PostgreSQL:
  100 - 1000 - Read Write - Average Latency
  100 - 1000 - Read Write
  100 - 800 - Read Write
  100 - 800 - Read Write - Average Latency
OpenCV:
  Video
  Object Detection
  Image Processing
MariaDB:
  512
  1024
  2048
RocksDB
TensorFlow:
  CPU - 16 - GoogLeNet
  CPU - 16 - ResNet-50
  CPU - 32 - GoogLeNet
OpenCV
Neural Magic DeepSparse:
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream
OpenSSL:
  ChaCha20
  RSA4096
  RSA4096
  AES-256-GCM
  SHA256
RocksDB
OpenSSL:
  AES-128-GCM
  ChaCha20-Poly1305
  SHA512
MariaDB
Neural Magic DeepSparse:
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream
  NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream
John The Ripper:
  bcrypt
  WPA PSK
  Blowfish
TensorFlow
ONNX Runtime
Neural Magic DeepSparse
OpenCV
SPECFEM3D
Neural Magic DeepSparse:
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream
  CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream
TensorFlow
RocksDB
ONNX Runtime:
  ArcFace ResNet-100 - CPU - Parallel
  CaffeNet 12-int8 - CPU - Parallel
SPECFEM3D
Neural Magic DeepSparse
TensorFlow:
  CPU - 32 - ResNet-50
  CPU - 64 - GoogLeNet
John The Ripper
SPECFEM3D:
  Tomographic Model
  Homogeneous Halfspace
GROMACS
ONNX Runtime:
  ResNet50 v1-12-int8 - CPU - Standard
  ArcFace ResNet-100 - CPU - Standard
Embree
SPECFEM3D
OpenCV
RocksDB
Embree:
  Pathtracer - Asian Dragon Obj
  Pathtracer ISPC - Crown
  Pathtracer - Asian Dragon
  Pathtracer ISPC - Asian Dragon Obj
  Pathtracer ISPC - Asian Dragon
ONNX Runtime
TensorFlow
MariaDB
Memcached
RocksDB
Darmstadt Automotive Parallel Heterogeneous Suite
RocksDB:
  Rand Fill
  Update Rand
ONNX Runtime
Apache HTTP Server
TensorFlow
ONNX Runtime:
  GPT-2 - CPU - Parallel
  fcn-resnet101-11 - CPU - Parallel
OpenCV
PostgreSQL:
  1 - 1000 - Read Write - Average Latency
  1 - 1000 - Read Write
ONNX Runtime
PostgreSQL:
  1 - 800 - Read Write
  1 - 800 - Read Write - Average Latency
ONNX Runtime
TensorFlow
ONNX Runtime
Timed LLVM Compilation
Timed Node.js Compilation
ONNX Runtime
Darmstadt Automotive Parallel Heterogeneous Suite
ONNX Runtime:
  GPT-2 - CPU - Standard
  Faster R-CNN R-50-FPN-int8 - CPU - Parallel
nginx
TensorFlow
Timed FFmpeg Compilation
PostgreSQL:
  100 - 800 - Read Only
  100 - 800 - Read Only - Average Latency
Apache HTTP Server
ONNX Runtime
ClickHouse:
  100M Rows Hits Dataset, Third Run
  100M Rows Hits Dataset, Second Run
PostgreSQL:
  100 - 1000 - Read Only - Average Latency
  100 - 1000 - Read Only
TensorFlow
ClickHouse
Neural Magic DeepSparse
Memcached
Timed Godot Game Engine Compilation
ONNX Runtime
Darmstadt Automotive Parallel Heterogeneous Suite
Timed LLVM Compilation
TensorFlow
Zstd Compression
John The Ripper
FFmpeg:
  libx265 - Live:
    Seconds
    FPS
Neural Magic DeepSparse:
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream
  CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream
ONNX Runtime
Neural Magic DeepSparse:
  NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream:
    ms/batch
    items/sec
  CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream:
    ms/batch
Build2
PostgreSQL
TensorFlow
Memcached
PostgreSQL
Neural Magic DeepSparse:
  NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream:
    items/sec
    ms/batch
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream:
    ms/batch
  CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream:
    items/sec
    ms/batch
  NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream:
    ms/batch
    items/sec
  NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream:
    ms/batch
FFmpeg:
  libx265 - Platform:
    Seconds
    FPS
Neural Magic DeepSparse:
  NLP Text Classification, BERT base uncased SST2 - Synchronous Single-Stream:
    ms/batch
    items/sec
TensorFlow
Neural Magic DeepSparse
Zstd Compression
Neural Magic DeepSparse
Google Draco
Neural Magic DeepSparse:
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream
  NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Synchronous Single-Stream
  NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Synchronous Single-Stream
FFmpeg:
  libx264 - Upload:
    FPS
    Seconds
Neural Magic DeepSparse:
  CV Detection, YOLOv5s COCO - Synchronous Single-Stream:
    items/sec
    ms/batch
Zstd Compression
Neural Magic DeepSparse:
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Synchronous Single-Stream:
    items/sec
    ms/batch
  CV Segmentation, 90% Pruned YOLACT Pruned - Synchronous Single-Stream:
    items/sec
    ms/batch
Google Draco
Zstd Compression
PostgreSQL
FFmpeg:
  libx265 - Video On Demand:
    FPS
    Seconds
PostgreSQL
Zstd Compression
FFmpeg:
  libx264 - Video On Demand:
    FPS
    Seconds
Zstd Compression
FFmpeg:
  libx264 - Platform:
    Seconds
    FPS
dav1d
nginx
Zstd Compression:
  8 - Compression Speed
  12 - Compression Speed
  19, Long Mode - Decompression Speed
  19 - Compression Speed
FFmpeg:
  libx264 - Live:
    Seconds
    FPS
dav1d
FFmpeg
dav1d
FFmpeg
dav1d