Intel Optimized Power Mode Xeon Platinum Benchmarks

2 x Intel Xeon Platinum 8592+ testing by Michael Larabel for a future article.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2312153-NE-XEONEMRPO30

Test categories represented in this result file:

C++ Boost Tests 3 Tests
Timed Code Compilation 3 Tests
C/C++ Compiler Tests 9 Tests
Compression Tests 2 Tests
CPU Massive 12 Tests
Creator Workloads 9 Tests
Database Test Suite 4 Tests
Encoding 6 Tests
Game Development 2 Tests
HPC - High Performance Computing 6 Tests
Java Tests 2 Tests
Common Kernel Benchmarks 2 Tests
Machine Learning 3 Tests
Multi-Core 15 Tests
Intel oneAPI 2 Tests
OpenMPI Tests 3 Tests
Programmer / Developer System Benchmarks 4 Tests
Python Tests 7 Tests
Scientific Computing 2 Tests
Server 8 Tests
Server CPU Tests 9 Tests
Single-Threaded 2 Tests
Video Encoding 6 Tests

Result runs:

  Default: run December 14 2023, test duration 16 Hours, 25 Minutes
  Optimized Power Mode: run December 15 2023, test duration 1 Day, 52 Minutes


Intel Optimized Power Mode Xeon Platinum Benchmarks - OpenBenchmarking.org / Phoronix Test Suite

System Configuration:
  Processor: 2 x Intel Xeon Platinum 8592+ @ 3.90GHz (128 Cores / 256 Threads)
  Motherboard: Quanta Cloud S6Q-MB-MPS (3B05.TEL4P1 BIOS)
  Chipset: Intel Device 1bce
  Memory: 1008GB
  Disk: 3201GB Micron_7450_MTFDKCB3T2TFS
  Graphics: ASPEED
  Network: 2 x Intel X710 for 10GBASE-T
  OS: Ubuntu 23.10
  Kernel: 6.5.0-13-generic (x86_64)
  Compiler: GCC 13.2.0
  File-System: ext4
  Screen Resolution: 1920x1080

System Notes:
  - Transparent Huge Pages: madvise
  - GCC configured with: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
  - Scaling Governor: intel_pstate performance (EPP: performance)
  - CPU Microcode: 0x21000161
  - OpenJDK Runtime Environment (build 11.0.21+9-post-Ubuntu-0ubuntu123.10)
  - Python 3.11.6
  - Security mitigations: gather_data_sampling: Not affected; itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; retbleed: Not affected; spec_rstack_overflow: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Enhanced / Automatic IBRS, IBPB: conditional, RSB filling, PBRSB-eIBRS: SW sequence; srbds: Not affected; tsx_async_abort: Not affected

[Chart: "Default vs. Optimized Power Mode Comparison" - per-test percentage advantage of the Default run over the Optimized Power Mode run, spanning roughly 2.5% to 54%. The largest deltas include Memcached 1:10 (54.1%), PyTorch CPU ResNet-50 batch 256 (41.2%), PyTorch CPU ResNet-152 batch 256 (39.3%) and batch 64 (38.5%), nginx 500 connections (35.1%), PyTorch CPU ResNet-50 batch 64 (33.7%), and OpenFOAM motorBike execution time (33.3%), along with smaller deltas across the Apache Spark TPC-H queries and the remaining tests.]
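The percentage figures in the comparison above are simple ratios of the two runs' averages. A minimal sketch of that arithmetic (the Memcached 1:10 ops/sec values are taken from the Memcached result later in this file):

```python
def pct_advantage(better: float, worse: float) -> float:
    """Percentage by which `better` exceeds `worse`."""
    return (better / worse - 1.0) * 100.0

# Memcached 1:10 ops/sec averages from this result file
default_run = 3360511.12
optimized_power = 2180648.21

delta = pct_advantage(default_run, optimized_power)
print(f"Default is {delta:.1f}% faster")  # prints: Default is 54.1% faster
```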

[Table: condensed side-by-side listing of every result in this comparison for the Default and Optimized Power Mode runs; the individual results are broken out in the per-test sections below.]

PyTorch

PyTorch 2.1 - Device: CPU - Batch Size: 64 - Model: Efficientnet_v2_l (batches/sec, more is better)
  Optimized Power Mode: 2.18 (SE +/- 0.02, N = 3; runs 2.15 - 2.22; MIN 0.98 / MAX 3.41)
  Default: 2.62 (SE +/- 0.02, N = 3; runs 2.59 - 2.67; MIN 1.32 / MAX 3.9)

PyTorch 2.1 - Device: CPU - Batch Size: 256 - Model: Efficientnet_v2_l (batches/sec, more is better)
  Optimized Power Mode: 2.19 (SE +/- 0.02, N = 3; runs 2.15 - 2.23; MIN 0.98 / MAX 3.3)
  Default: 2.59 (SE +/- 0.00, N = 3; runs 2.58 - 2.59; MIN 1.19 / MAX 4)
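Each result in this file is an average over N runs with its standard error (SE), i.e. the sample standard deviation divided by sqrt(N). A minimal sketch; the three run values below are hypothetical, since the individual runs are not published here:

```python
from math import sqrt
from statistics import mean, stdev

def standard_error(samples):
    """Standard error of the mean: sample standard deviation / sqrt(N)."""
    return stdev(samples) / sqrt(len(samples))

# Hypothetical three-run sample (individual run values are not in this file)
runs = [2.59, 2.60, 2.67]
print(f"avg {mean(runs):.2f}, SE +/- {standard_error(runs):.2f}, N = {len(runs)}")
# prints: avg 2.62, SE +/- 0.03, N = 3
```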

Timed GCC Compilation

This test times how long it takes to build the GNU Compiler Collection (GCC) open-source compiler. Learn more via the OpenBenchmarking.org test page.

Timed GCC Compilation 13.2 - Time To Compile (Seconds, fewer is better)
  Optimized Power Mode: 759.22 (SE +/- 4.73, N = 3; runs 754.14 - 768.67)
  Default: 706.26 (SE +/- 2.37, N = 3; runs 701.9 - 710.04)

PostgreSQL

This is a benchmark of PostgreSQL using the integrated pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL 16 - Scaling Factor: 100 - Clients: 1000 - Mode: Read Only - Average Latency (ms, fewer is better)
  Optimized Power Mode: 1.141 (SE +/- 0.016, N = 12; runs 1.07 - 1.29)
  Default: 1.124 (SE +/- 0.022, N = 10; runs 1.01 - 1.24)
  Compiled with: (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

PostgreSQL 16 - Scaling Factor: 100 - Clients: 1000 - Mode: Read Only (TPS, more is better)
  Optimized Power Mode: 878468 (SE +/- 11877.48, N = 12; runs 775483.19 - 938747.21)
  Default: 893089 (SE +/- 17460.42, N = 10; runs 808478.15 - 992473.84)
  Compiled with: (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
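In a closed-loop pgbench run, the average-latency and TPS results are two views of the same measurement: with a fixed client count, latency is roughly clients / TPS (Little's law). A quick sanity check against the figures above:

```python
def expected_latency_ms(clients: int, tps: float) -> float:
    """Little's law for a closed-loop benchmark: latency = concurrency / throughput."""
    return clients / tps * 1000.0

# Read Only figures from this result file (1000 clients)
print(round(expected_latency_ms(1000, 893089), 3))  # ~1.12, vs. reported 1.124 ms (Default)
print(round(expected_latency_ms(1000, 878468), 3))  # ~1.138, vs. reported 1.141 ms (Optimized Power Mode)
```

The small gap against the reported latencies comes from client think/setup time that the simple formula ignores.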

Apache Spark TPC-H

This is a benchmark of Apache Spark using the TPC-H data set. Apache Spark is an open-source unified analytics engine for large-scale data processing and big data workloads. This test profile benchmarks Apache Spark in a single-system configuration using spark-submit, and makes use of https://github.com/ssavvides/tpch-spark/ for facilitating the TPC-H benchmark. Learn more via the OpenBenchmarking.org test page.

Apache Spark TPC-H 3.5 - Scale Factor: 10 (Seconds, fewer is better; Optimized Power Mode: N = 7 runs per query, Default: N = 3)

  Query  Optimized Power Mode  Default
  Q01    10.53 (SE 0.35)        8.84 (SE 0.24)
  Q02    10.79 (SE 0.19)        7.70 (SE 0.15)
  Q03    15.02 (SE 0.30)       12.59 (SE 0.05)
  Q04     9.29 (SE 0.19)        8.18 (SE 0.28)
  Q05    18.04 (SE 0.84)       12.94 (SE 0.17)
  Q06     3.25 (SE 0.19)        2.57 (SE 0.19)
  Q07    14.06 (SE 0.30)       11.47 (SE 0.65)
  Q08    15.72 (SE 0.45)       12.05 (SE 0.29)
  Q09    21.66 (SE 0.36)       17.56 (SE 1.37)
  Q10    14.06 (SE 0.34)       11.88 (SE 0.36)
  Q11     8.03 (SE 0.21)        6.00 (SE 0.21)
  Q12    10.06 (SE 0.34)        7.93 (SE 0.48)
  Q13     6.53 (SE 0.30)        5.64 (SE 0.08)
  Q14     6.55 (SE 0.15)        5.78 (SE 0.10)
  Q15     4.61 (SE 0.07)        4.25 (SE 0.05)
  Q16     5.66 (SE 0.11)        4.97 (SE 0.28)
  Q17    13.03 (SE 0.26)       12.60 (SE 0.23)
  Q18    14.20 (SE 0.53)       12.89 (SE 0.44)
  Q19     5.74 (SE 0.14)        5.09 (SE 0.17)
  Q20    10.24 (SE 0.14)        8.56 (SE 0.46)
  Q21    31.16 (SE 0.34)       26.48 (SE 0.40)
  Q22     4.97 (SE 0.13)        4.50 (SE 0.17)

  Geometric Mean Of All Queries: 9.77 (SE 0.09; query range 4.46 - 32.81) vs. 8.36 (SE 0.07; query range 4.15 - 27.27)
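The overall TPC-H figure is the geometric mean of the 22 per-query times, which damps the influence of any single outlier query compared to an arithmetic mean. A minimal sketch of that computation:

```python
from math import prod

def geomean(xs):
    """Geometric mean: the n-th root of the product of n values."""
    return prod(xs) ** (1.0 / len(xs))

# Toy illustration: the geometric mean of 2 and 8 is 4 (the arithmetic mean would be 5)
print(geomean([2.0, 8.0]))  # → 4.0
```

Applied to the Q01 - Q22 times, the same computation underlies the "Geometric Mean Of All Queries" result reported above.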

PyTorch

PyTorch 2.1 - Device: CPU - Batch Size: 64 - Model: ResNet-152 (batches/sec, more is better)
  Optimized Power Mode: 12.76 (SE +/- 0.12, N = 12; runs 12.18 - 13.37; MIN 5.75 / MAX 15.71)
  Default: 17.67 (SE +/- 0.23, N = 3; runs 17.29 - 18.08; MIN 10.89 / MAX 18.38)

Memcached

Memcached is a high-performance, distributed memory object caching system. This Memcached test profile makes use of memtier_benchmark for executing this CPU/memory-focused server benchmark. Learn more via the OpenBenchmarking.org test page.

Memcached 1.6.19 - Set To Get Ratio: 1:10 (Ops/sec, more is better)
  Optimized Power Mode: 2180648.21 (SE +/- 43551.88, N = 15; runs 1997443.47 - 2512967.14)
  Default: 3360511.12 (SE +/- 29533.59, N = 15; runs 3079708.96 - 3515921.44)
  Compiled with: (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

PyTorch

PyTorch 2.1 - Device: CPU - Batch Size: 256 - Model: ResNet-152 (batches/sec, more is better)
  Optimized Power Mode: 12.81 (SE +/- 0.17, N = 3; runs 12.62 - 13.16; MIN 6.6 / MAX 13.9)
  Default: 17.85 (SE +/- 0.13, N = 11; runs 17.48 - 18.9; MIN 7.02 / MAX 19.19)

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.

Blender 4.0 - Blend File: Barbershop - Compute: CPU-Only (Seconds, fewer is better)
  Optimized Power Mode: 129.94 (SE +/- 0.11, N = 3; runs 129.76 - 130.14)
  Default: 149.49 (SE +/- 6.68, N = 9; runs 129.51 - 183.04)

easyWave

The easyWave software simulates tsunami generation and propagation in the context of early warning systems. EasyWave supports OpenMP for CPU multi-threading; GPU ports also exist but are not currently incorporated into this test profile. The easyWave tsunami generation software is run with one of the example/reference input files for measuring the CPU execution time. Learn more via the OpenBenchmarking.org test page.

easyWave r34 - Input: e2Asean Grid + BengkuluSept2007 Source - Time: 2400 (Seconds, Fewer Is Better)
  Optimized Power Mode: 102.93 (SE +/- 1.07, N = 3; Min: 100.89 / Max: 104.49)
  Default: 89.98 (SE +/- 1.41, N = 15; Min: 83.82 / Max: 101.44)
  1. (CXX) g++ options: -O3 -fopenmp

DuckDB

DuckDB is an in-progress SQL OLAP database management system optimized for analytics and features a vectorized and parallel engine. Learn more via the OpenBenchmarking.org test page.

DuckDB 0.9.1 - Benchmark: TPC-H Parquet (Seconds, Fewer Is Better)
  Optimized Power Mode: 157.10 (SE +/- 0.38, N = 3; Min: 156.47 / Max: 157.78)
  Default: 148.55 (SE +/- 0.31, N = 3; Min: 147.97 / Max: 149.01)
  1. (CXX) g++ options: -O3 -rdynamic -lssl -lcrypto -ldl

DuckDB 0.9.1 - Benchmark: IMDB (Seconds, Fewer Is Better)
  Optimized Power Mode: 139.33 (SE +/- 0.43, N = 3; Min: 138.69 / Max: 140.15)
  Default: 121.97 (SE +/- 0.49, N = 3; Min: 121.06 / Max: 122.76)
  1. (CXX) g++ options: -O3 -rdynamic -lssl -lcrypto -ldl

Timed LLVM Compilation

This test times how long it takes to compile/build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.

Timed LLVM Compilation 16.0 - Build System: Unix Makefiles (Seconds, Fewer Is Better)
  Optimized Power Mode: 187.84 (SE +/- 0.58, N = 3; Min: 186.86 / Max: 188.87)
  Default: 177.43 (SE +/- 0.30, N = 3; Min: 176.85 / Max: 177.83)

FFmpeg

This is a benchmark of the FFmpeg multimedia framework. The FFmpeg test profile makes use of a modified version of vbench from Columbia University's Architecture and Design Lab (ARCADE) [http://arcade.cs.columbia.edu/vbench/], a benchmark for video-as-a-service workloads. The test profile offers a range of vbench scenarios based on freely distributable video content and the option of using the x264 or x265 video encoders for transcoding. Learn more via the OpenBenchmarking.org test page.

FFmpeg 6.1 - Encoder: libx265 - Scenario: Upload (FPS, More Is Better)
  Optimized Power Mode: 26.02 (SE +/- 0.06, N = 3; Min: 25.93 / Max: 26.12)
  Default: 27.00 (SE +/- 0.06, N = 3; Min: 26.89 / Max: 27.09)
  1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg 6.1 - Encoder: libx265 - Scenario: Video On Demand (FPS, More Is Better)
  Optimized Power Mode: 51.45 (SE +/- 0.10, N = 3; Min: 51.26 / Max: 51.61)
  Default: 53.63 (SE +/- 0.04, N = 3; Min: 53.56 / Max: 53.68)
  1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg 6.1 - Encoder: libx265 - Scenario: Platform (FPS, More Is Better)
  Optimized Power Mode: 51.31 (SE +/- 0.14, N = 3; Min: 51.04 / Max: 51.51)
  Default: 53.98 (SE +/- 0.11, N = 3; Min: 53.83 / Max: 54.19)
  1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
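The vbench scenario settings are baked into the test profile; a hedged sketch of a comparable standalone libx265 transcode (the input file, preset, and CRF value are placeholders, not the scenario's actual parameters):

```shell
# Transcode a source clip to H.265 with the libx265 encoder; input name,
# preset, and quality level are illustrative placeholders.
ffmpeg -i input.mkv -c:v libx265 -preset medium -crf 28 output.mkv
```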

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested, or alternatively an allmodconfig for building all possible kernel modules. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 6.1 - Build: allmodconfig (Seconds, Fewer Is Better)
  Optimized Power Mode: 151.28 (SE +/- 0.48, N = 3; Min: 150.49 / Max: 152.15)
  Default: 149.79 (SE +/- 0.51, N = 3; Min: 149.28 / Max: 150.81)
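Reproducing this timing by hand is roughly equivalent to configuring and timing a parallel build from a kernel source tree:

```shell
# From an extracted Linux kernel source tree: pick the configuration,
# then time a build across all CPU threads.
make defconfig          # or: make allmodconfig
time make -j"$(nproc)"
```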

PostgreSQL

This is a benchmark of PostgreSQL using the integrated pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL 16 - Scaling Factor: 100 - Clients: 1000 - Mode: Read Write - Average Latency (ms, Fewer Is Better)
  Optimized Power Mode: 16.75 (SE +/- 0.06, N = 3; Min: 16.68 / Max: 16.87)
  Default: 15.63 (SE +/- 0.02, N = 3; Min: 15.61 / Max: 15.67)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

PostgreSQL 16 - Scaling Factor: 100 - Clients: 1000 - Mode: Read Write (TPS, More Is Better)
  Optimized Power Mode: 59695 (SE +/- 214.64, N = 3; Min: 59271.88 / Max: 59968.38)
  Default: 63962 (SE +/- 70.87, N = 3; Min: 63821.38 / Max: 64045.98)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
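A hedged sketch of a comparable pgbench run against a running PostgreSQL server: initialize at scaling factor 100, then drive 1000 clients with pgbench's default TPC-B-like read/write transaction (the worker-thread count and duration are illustrative placeholders):

```shell
# Create and populate a benchmark database at scaling factor 100, then
# run the read/write workload with 1000 clients. -j (worker threads)
# and -T (seconds) are placeholder values.
createdb pgbench_test
pgbench -i -s 100 pgbench_test
pgbench -c 1000 -j 64 -T 60 pgbench_test
```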

QMCPACK

QMCPACK is an open-source, production-level many-body ab initio Quantum Monte Carlo (QMC) code for computing the electronic structure of atoms, molecules, and solids, with development supported by the U.S. Department of Energy. This benchmark makes use of MPI and runs the H2O example code. Learn more via the OpenBenchmarking.org test page.

QMCPACK 3.17.1 - Input: Li2_STO_ae (Total Execution Time - Seconds, Fewer Is Better)
  Optimized Power Mode: 92.67 (SE +/- 1.02, N = 5; Min: 91.11 / Max: 96.4)
  Default: 94.38 (SE +/- 0.40, N = 3; Min: 93.62 / Max: 94.99)
  1. (CXX) g++ options: -fopenmp -foffload=disable -finline-limit=1000 -fstrict-aliasing -funroll-all-loops -ffast-math -march=native -O3 -lm -ldl

easyWave

The easyWave software simulates tsunami generation and propagation in the context of early warning systems. EasyWave supports OpenMP for CPU multi-threading; GPU ports also exist but are not currently incorporated into this test profile. The easyWave tsunami generation software is run with one of the example/reference input files for measuring the CPU execution time. Learn more via the OpenBenchmarking.org test page.

easyWave r34 - Input: e2Asean Grid + BengkuluSept2007 Source - Time: 1200 (Seconds, Fewer Is Better)
  Optimized Power Mode: 42.56 (SE +/- 0.41, N = 15; Min: 40.92 / Max: 46.3)
  Default: 36.55 (SE +/- 0.28, N = 3; Min: 36.05 / Max: 37.01)
  1. (CXX) g++ options: -O3 -fopenmp

OpenRadioss

OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss and was open-sourced in 2022. This finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/ and https://github.com/OpenRadioss/ModelExchange/tree/main/Examples. This test currently uses a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2023.09.15 - Model: Chrysler Neon 1M (Seconds, Fewer Is Better)
  Optimized Power Mode: 86.30 (SE +/- 0.17, N = 3; Min: 85.99 / Max: 86.56)
  Default: 85.82 (SE +/- 0.14, N = 3; Min: 85.59 / Max: 86.06)

OpenRadioss 2023.09.15 - Model: Bird Strike on Windshield (Seconds, Fewer Is Better)
  Optimized Power Mode: 111.73 (SE +/- 0.28, N = 3; Min: 111.34 / Max: 112.28)
  Default: 111.23 (SE +/- 0.65, N = 3; Min: 110.57 / Max: 112.52)

OpenRadioss 2023.09.15 - Model: INIVOL and Fluid Structure Interaction Drop Container (Seconds, Fewer Is Better)
  Optimized Power Mode: 98.49 (SE +/- 0.08, N = 3; Min: 98.36 / Max: 98.62)
  Default: 97.87 (SE +/- 0.07, N = 3; Min: 97.75 / Max: 97.99)

Timed LLVM Compilation

This test times how long it takes to compile/build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.

Timed LLVM Compilation 16.0 - Build System: Ninja (Seconds, Fewer Is Better)
  Optimized Power Mode: 98.91 (SE +/- 0.20, N = 3; Min: 98.68 / Max: 99.3)
  Default: 96.52 (SE +/- 0.29, N = 3; Min: 96.08 / Max: 97.08)

VVenC

VVenC is the Fraunhofer Versatile Video Encoder, a fast and efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe). The vvenc software is published under the Clear BSD License. Learn more via the OpenBenchmarking.org test page.

VVenC 1.9 - Video Input: Bosphorus 4K - Video Preset: Fast (Frames Per Second, More Is Better)
  Optimized Power Mode: 6.340 (SE +/- 0.066, N = 3; Min: 6.22 / Max: 6.45)
  Default: 6.921 (SE +/- 0.067, N = 3; Min: 6.83 / Max: 7.05)
  1. (CXX) g++ options: -O3 -flto=auto -fno-fat-lto-objects

nginx

This is a benchmark of the lightweight Nginx HTTP(S) web server. This Nginx web server benchmark test profile makes use of the wrk program for facilitating HTTP requests over a fixed period of time with a configurable number of concurrent clients/connections. HTTPS with a self-signed OpenSSL certificate is used by this test for local benchmarking. Learn more via the OpenBenchmarking.org test page.

nginx 1.23.2 - Connections: 500 (Requests Per Second, More Is Better)
  Optimized Power Mode: 202507.42 (SE +/- 10.52, N = 3; Min: 202494.42 / Max: 202528.25)
  Default: 273684.71 (SE +/- 25.30, N = 3; Min: 273634.53 / Max: 273715.51)
  1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2
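A hedged sketch of a comparable wrk run matching the 500-connection scenario (the thread count, duration, and URL are illustrative placeholders; the URL assumes a local self-signed HTTPS vhost as described above):

```shell
# Hold 500 concurrent connections open against a local HTTPS server for
# 30 seconds and report request throughput plus latency percentiles.
# Thread count, duration, and port are placeholders.
wrk -t 64 -c 500 -d 30s --latency https://localhost:8443/
```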

Apache HTTP Server

This is a test of the Apache HTTPD web server. This Apache HTTPD web server benchmark test profile makes use of the wrk program for facilitating HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.

Apache HTTP Server 2.4.56 - Concurrent Requests: 500 (Requests Per Second, More Is Better)
  Optimized Power Mode: 80817.73 (SE +/- 184.86, N = 3; Min: 80494.85 / Max: 81135.15)
  Default: 90823.06 (SE +/- 265.12, N = 3; Min: 90495.89 / Max: 91348.02)
  1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

OpenRadioss

OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss and was open-sourced in 2022. This finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/ and https://github.com/OpenRadioss/ModelExchange/tree/main/Examples. This test currently uses a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2023.09.15 - Model: Rubber O-Ring Seal Installation (Seconds, Fewer Is Better)
  Optimized Power Mode: 89.37 (SE +/- 0.13, N = 3; Min: 89.11 / Max: 89.54)
  Default: 81.69 (SE +/- 0.03, N = 3; Min: 81.66 / Max: 81.75)

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

Liquid-DSP 1.6 - Threads: 64 - Buffer Length: 256 - Filter Length: 57 (samples/s, More Is Better)
  Optimized Power Mode: 2556573333 (SE +/- 20734086.19, N = 15; Min: 2447400000 / Max: 2686800000)
  Default: 2810166667 (SE +/- 24169149.30, N = 3; Min: 2785400000 / Max: 2858500000)
  1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

OpenRadioss

OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss and was open-sourced in 2022. This finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/ and https://github.com/OpenRadioss/ModelExchange/tree/main/Examples. This test currently uses a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2023.09.15 - Model: Bumper Beam (Seconds, Fewer Is Better)
  Optimized Power Mode: 82.73 (SE +/- 0.13, N = 3; Min: 82.48 / Max: 82.87)
  Default: 84.78 (SE +/- 0.18, N = 3; Min: 84.43 / Max: 85.04)

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.4 - Compression Level: 12 - Decompression Speed (MB/s, More Is Better)
  Optimized Power Mode: 1282.7 (SE +/- 1.74, N = 5; Min: 1278.9 / Max: 1288.4)
  Default: 1422.7 (SE +/- 1.98, N = 3; Min: 1419.6 / Max: 1426.4)
  1. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression 1.5.4 - Compression Level: 12 - Compression Speed (MB/s, More Is Better)
  Optimized Power Mode: 272.2 (SE +/- 2.98, N = 5; Min: 265.2 / Max: 282.9)
  Default: 333.6 (SE +/- 3.46, N = 3; Min: 328.3 / Max: 340.1)
  1. (CC) gcc options: -O3 -pthread -lz -llzma
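The zstd CLI's built-in benchmark mode (-b followed by a level) reports compression and decompression speed on a given file, much like this test profile does; a sketch, assuming a local copy of silesia.tar (the long-mode window size is an illustrative value):

```shell
# Benchmark compression/decompression speed of silesia.tar at a given
# level with zstd's -b benchmark mode.
zstd -b12 silesia.tar               # level 12
zstd -b19 silesia.tar               # level 19
zstd -b19 --long=27 silesia.tar     # level 19 with long mode enabled
```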

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.

Redis 7.0.4 - Test: SET - Parallel Connections: 500 (Requests Per Second, More Is Better)
  Optimized Power Mode: 2389636.67 (SE +/- 19286.75, N = 15; Min: 2252191.25 / Max: 2484516.5)
  Default: 2503177.67 (SE +/- 7546.74, N = 3; Min: 2488101.75 / Max: 2511346)
  1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
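A hedged sketch of a comparable run with Redis's bundled redis-benchmark tool against a local server (the request count is an illustrative placeholder):

```shell
# Drive the SET command over 500 parallel connections against a local
# Redis server; -n (total requests) is a placeholder value.
redis-benchmark -t set -c 500 -n 1000000
```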

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

Neural Magic DeepSparse 1.6 - Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  Optimized Power Mode: 392.45 (SE +/- 0.71, N = 3; Min: 391.04 / Max: 393.25)
  Default: 397.77 (SE +/- 3.53, N = 3; Min: 393.65 / Max: 404.81)

Neural Magic DeepSparse 1.6 - Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  Optimized Power Mode: 162.59 (SE +/- 0.33, N = 3; Min: 162.03 / Max: 163.18)
  Default: 160.41 (SE +/- 1.36, N = 3; Min: 157.74 / Max: 162.25)

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.4 - Compression Level: 19, Long Mode - Decompression Speed (MB/s, More Is Better)
  Optimized Power Mode: 1094.2 (SE +/- 2.17, N = 3; Min: 1091.5 / Max: 1098.5)
  Default: 1183.8 (SE +/- 16.60, N = 3; Min: 1150.6 / Max: 1200.6)
  1. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression 1.5.4 - Compression Level: 19, Long Mode - Compression Speed (MB/s, More Is Better)
  Optimized Power Mode: 8.56 (SE +/- 0.01, N = 3; Min: 8.55 / Max: 8.57)
  Default: 9.90 (SE +/- 0.02, N = 3; Min: 9.87 / Max: 9.94)
  1. (CC) gcc options: -O3 -pthread -lz -llzma

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested, or alternatively an allmodconfig for building all possible kernel modules. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 6.1 - Build: defconfig (Seconds, Fewer Is Better)
  Optimized Power Mode: 24.48 (SE +/- 0.18, N = 11; Min: 24.12 / Max: 26.19)
  Default: 23.76 (SE +/- 0.21, N = 8; Min: 23.38 / Max: 25.18)

nginx

This is a benchmark of the lightweight Nginx HTTP(S) web server. This Nginx web server benchmark test profile makes use of the wrk program for facilitating HTTP requests over a fixed period of time with a configurable number of concurrent clients/connections. HTTPS with a self-signed OpenSSL certificate is used by this test for local benchmarking. Learn more via the OpenBenchmarking.org test page.

nginx 1.23.2 - Connections: 1000 (Requests Per Second, More Is Better)
  Optimized Power Mode: 184861.99 (SE +/- 72.63, N = 2; Min: 184789.35 / Max: 184934.62)
  Default: 243384.19 (SE +/- 397.04, N = 3; Min: 242942.63 / Max: 244176.54)
  1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.4 - Compression Level: 19 - Decompression Speed (MB/s, More Is Better)
  Optimized Power Mode: 1044.6 (SE +/- 2.93, N = 3; Min: 1041.1 / Max: 1050.4)
  Default: 1173.5 (SE +/- 1.19, N = 3; Min: 1171.3 / Max: 1175.4)
  1. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression 1.5.4 - Compression Level: 19 - Compression Speed (MB/s, More Is Better)
  Optimized Power Mode: 16.2 (SE +/- 0.19, N = 3; Min: 15.8 / Max: 16.4)
  Default: 19.1 (SE +/- 0.06, N = 3; Min: 19 / Max: 19.2)
  1. (CC) gcc options: -O3 -pthread -lz -llzma

Xmrig

Xmrig is an open-source cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight and AstroBWT. This test profile is setup to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.

Xmrig 6.21 - Variant: GhostRider - Hash Count: 1M (H/s, More Is Better)
  Optimized Power Mode: 13832.4 (SE +/- 22.03, N = 3; Min: 13788.9 / Max: 13860.2)
  Default: 16522.5 (SE +/- 67.39, N = 3; Min: 16450.6 / Max: 16657.2)
  1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

PyTorch

PyTorch 2.1 - Device: CPU - Batch Size: 64 - Model: ResNet-50 (batches/sec, More Is Better)
  Optimized Power Mode: 32.83 (SE +/- 0.39, N = 3; Min: 32.34 / Max: 33.6; MIN: 15.85 / MAX: 36.48)
  Default: 43.91 (SE +/- 0.55, N = 4; Min: 42.3 / Max: 44.79; MIN: 17.29 / MAX: 46.23)

PyTorch 2.1 - Device: CPU - Batch Size: 256 - Model: ResNet-50 (batches/sec, More Is Better)
  Optimized Power Mode: 31.58 (SE +/- 0.02, N = 3; Min: 31.55 / Max: 31.63; MIN: 14.91 / MAX: 34.67)
  Default: 44.58 (SE +/- 0.51, N = 4; Min: 43.51 / Max: 45.91; MIN: 19.44 / MAX: 46.72)

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural networks, using its built-in benchmarking support and analyzing the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2023.2.dev - Model: Face Detection FP16-INT8 - Device: CPU (ms, Fewer Is Better)
  Optimized Power Mode: 237.36 (SE +/- 0.55, N = 3; Min: 236.27 / Max: 237.93; MIN: 160.6 / MAX: 286.17)
  Default: 236.70 (SE +/- 0.35, N = 3; Min: 236.15 / Max: 237.36; MIN: 157.14 / MAX: 268.19)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO 2023.2.dev - Model: Face Detection FP16-INT8 - Device: CPU (FPS, More Is Better)
  Optimized Power Mode: 538.20 (SE +/- 1.35, N = 3; Min: 536.78 / Max: 540.9)
  Default: 539.69 (SE +/- 0.65, N = 3; Min: 538.48 / Max: 540.71)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie
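OpenVINO ships a benchmark_app tool for measuring a model's throughput and latency on a chosen device, which is the kind of measurement reported here; a hedged sketch (the model filename and duration are illustrative placeholders):

```shell
# Measure throughput/latency of an OpenVINO IR model on the CPU for
# a fixed duration; the model path and -t value are placeholders.
benchmark_app -m face-detection-fp16-int8.xml -d CPU -t 20
```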

Memcached

Memcached is a high performance, distributed memory object caching system. This Memcached test profile makes use of memtier_benchmark for executing this CPU/memory-focused server benchmark. Learn more via the OpenBenchmarking.org test page.

Memcached 1.6.19 - Set To Get Ratio: 1:100 (Ops/sec, More Is Better)
  Optimized Power Mode: 3274425.32 (SE +/- 42725.42, N = 3; Min: 3191357.02 / Max: 3333312.49)
  Default: 3441439.71 (SE +/- 25091.90, N = 3; Min: 3407736.01 / Max: 3490491.81)
  1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

VVenC

VVenC is the Fraunhofer Versatile Video Encoder, a fast and efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe). The vvenc software is published under the Clear BSD License. Learn more via the OpenBenchmarking.org test page.

VVenC 1.9 - Video Input: Bosphorus 4K - Video Preset: Faster (Frames Per Second, More Is Better)
  Optimized Power Mode: 9.096 (SE +/- 0.072, N = 3; Min: 8.96 / Max: 9.19)
  Default: 10.836 (SE +/- 0.081, N = 3; Min: 10.68 / Max: 10.95)
  1. (CXX) g++ options: -O3 -flto=auto -fno-fat-lto-objects

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural networks, using its built-in benchmarking support and analyzing the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2023.2.dev - Model: Person Vehicle Bike Detection FP16 - Device: CPU (ms, Fewer Is Better)
  Optimized Power Mode: 12.55 (SE +/- 0.01, N = 3; Min: 12.54 / Max: 12.56; MIN: 10.77 / MAX: 42.91)
  Default: 12.51 (SE +/- 0.01, N = 3; Min: 12.5 / Max: 12.53; MIN: 10.84 / MAX: 42.36)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO 2023.2.dev - Model: Person Vehicle Bike Detection FP16 - Device: CPU (FPS, More Is Better)
  Optimized Power Mode: 10168.06 (SE +/- 6.61, N = 3; Min: 10157.98 / Max: 10180.52)
  Default: 10205.61 (SE +/- 7.22, N = 3; Min: 10191.24 / Max: 10213.94)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO 2023.2.dev - Model: Road Segmentation ADAS FP16-INT8 - Device: CPU (ms, Fewer Is Better)
  Optimized Power Mode: 53.45 (SE +/- 0.16, N = 3; Min: 53.15 / Max: 53.68; MIN: 45.01 / MAX: 111.52)
  Default: 53.51 (SE +/- 0.08, N = 3; Min: 53.38 / Max: 53.64; MIN: 43.94 / MAX: 101.68)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO 2023.2.dev - Model: Road Segmentation ADAS FP16-INT8 - Device: CPU (FPS, More Is Better)
  Optimized Power Mode: 2392.38 (SE +/- 7.16, N = 3; Min: 2381.08 / Max: 2405.65)
  Default: 2390.25 (SE +/- 3.42, N = 3; Min: 2384.38 / Max: 2396.24)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Person Detection FP16 - Device: CPUOptimized Power ModeDefault1020304050SE +/- 0.05, N = 3SE +/- 0.04, N = 342.5942.72MIN: 32.12 / MAX: 128.96MIN: 31.92 / MAX: 84.11. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie
OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Person Detection FP16 - Device: CPUOptimized Power ModeDefault918273645Min: 42.5 / Avg: 42.59 / Max: 42.68Min: 42.65 / Avg: 42.72 / Max: 42.791. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.2.devModel: Person Detection FP16 - Device: CPUOptimized Power ModeDefault160320480640800SE +/- 0.90, N = 3SE +/- 0.71, N = 3750.43748.101. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie
OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.2.devModel: Person Detection FP16 - Device: CPUOptimized Power ModeDefault130260390520650Min: 748.76 / Avg: 750.43 / Max: 751.85Min: 746.9 / Avg: 748.1 / Max: 749.351. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Vehicle Detection FP16-INT8 - Device: CPUOptimized Power ModeDefault48121620SE +/- 0.01, N = 3SE +/- 0.01, N = 314.3014.32MIN: 12.18 / MAX: 47.3MIN: 12.39 / MAX: 39.161. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie
OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Vehicle Detection FP16-INT8 - Device: CPUOptimized Power ModeDefault48121620Min: 14.29 / Avg: 14.3 / Max: 14.31Min: 14.3 / Avg: 14.32 / Max: 14.331. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.2.devModel: Vehicle Detection FP16-INT8 - Device: CPUOptimized Power ModeDefault2K4K6K8K10KSE +/- 4.32, N = 3SE +/- 5.30, N = 38943.888930.001. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie
OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.2.devModel: Vehicle Detection FP16-INT8 - Device: CPUOptimized Power ModeDefault16003200480064008000Min: 8935.41 / Avg: 8943.88 / Max: 8949.6Min: 8921.1 / Avg: 8930 / Max: 8939.441. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Machine Translation EN To DE FP16 - Device: CPUOptimized Power ModeDefault714212835SE +/- 0.08, N = 3SE +/- 0.23, N = 328.9928.85MIN: 22.06 / MAX: 222.83MIN: 21.11 / MAX: 222.121. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie
OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Machine Translation EN To DE FP16 - Device: CPUOptimized Power ModeDefault612182430Min: 28.85 / Avg: 28.99 / Max: 29.11Min: 28.39 / Avg: 28.85 / Max: 29.111. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.2.devModel: Machine Translation EN To DE FP16 - Device: CPUOptimized Power ModeDefault2004006008001000SE +/- 3.05, N = 3SE +/- 9.12, N = 31100.761106.321. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie
OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.2.devModel: Machine Translation EN To DE FP16 - Device: CPUOptimized Power ModeDefault2004006008001000Min: 1096.05 / Avg: 1100.76 / Max: 1106.48Min: 1096.2 / Avg: 1106.32 / Max: 1124.511. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Face Detection Retail FP16-INT8 - Device: CPUOptimized Power ModeDefault1.15882.31763.47644.63525.794SE +/- 0.00, N = 3SE +/- 0.01, N = 35.135.15MIN: 4.64 / MAX: 29.56MIN: 4.63 / MAX: 27.91. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie
OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Face Detection Retail FP16-INT8 - Device: CPUOptimized Power ModeDefault246810Min: 5.12 / Avg: 5.13 / Max: 5.13Min: 5.13 / Avg: 5.15 / Max: 5.161. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.2.devModel: Face Detection Retail FP16-INT8 - Device: CPUOptimized Power ModeDefault5K10K15K20K25KSE +/- 6.86, N = 3SE +/- 42.09, N = 324922.4024795.171. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie
OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.2.devModel: Face Detection Retail FP16-INT8 - Device: CPUOptimized Power ModeDefault4K8K12K16K20KMin: 24912.05 / Avg: 24922.4 / Max: 24935.37Min: 24737.97 / Avg: 24795.17 / Max: 24877.251. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie
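
Each figure above carries a standard error over its runs (e.g. "SE +/- 6.61, N = 3" for the Person Vehicle Bike Detection result). For N runs that is the sample standard deviation divided by sqrt(N). A minimal sketch of the calculation; the middle run value below is hypothetical, since the result file only records min/avg/max per run set:

```python
import math

def standard_error(samples):
    """Sample standard deviation (Bessel's correction) divided by sqrt(N)."""
    n = len(samples)
    mean = sum(samples) / n
    variance = sum((x - mean) ** 2 for x in samples) / (n - 1)
    return math.sqrt(variance) / math.sqrt(n)

# Min and Max are from the Person Vehicle Bike Detection result above;
# the middle sample is an illustrative stand-in for the unrecorded run.
runs = [10157.98, 10165.68, 10180.52]
print(round(standard_error(runs), 2))  # -> 6.61
```

These illustrative samples reproduce the reported 10168.06 average and 6.61 standard error, which is the arithmetic the result viewer performs per run set.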

QuantLib

QuantLib is an open-source library/framework for quantitative finance, covering modeling, trading, and risk management scenarios. It is written in C++ with Boost, and its built-in benchmark reports the QuantLib Benchmark Index score. Learn more via the OpenBenchmarking.org test page.

QuantLib 1.32 - Configuration: Multi-Threaded (MFLOPS, More Is Better)
    Optimized Power Mode: 391468.4 (SE +/- 52.43, N = 3; Min: 391379.6 / Max: 391561.1)
    Default: 388447.2 (SE +/- 127.57, N = 3; Min: 388279.3 / Max: 388697.5)
1. (CXX) g++ options: -O3 -march=native -fPIE -pie

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.
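
Throughput and latency in these OpenVINO results are tied together by Little's law: the number of in-flight inference requests is roughly throughput times latency. Applying it to the Road Segmentation ADAS FP16-INT8 figures earlier in this file suggests the benchmark keeps about 128 requests in flight; this is an inferred reading, as the actual stream/request configuration is not recorded in this result file:

```python
fps = 2392.38        # Road Segmentation ADAS FP16-INT8 throughput (FPS)
latency_ms = 53.45   # matching average latency (ms)

# Little's law: concurrency = throughput x latency
in_flight = fps * latency_ms / 1000
print(round(in_flight))  # -> 128
```

The Vehicle Detection FP16-INT8 pair (8943.88 FPS at 14.30 ms) yields the same ~128, which is consistent across the throughput-mode results here.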

OpenVINO 2023.2.dev - Model: Handwritten English Recognition FP16-INT8 - Device: CPU (ms, Fewer Is Better)
    Optimized Power Mode: 38.79 (SE +/- 0.08, N = 3; Min: 38.67 / Max: 38.94; MIN: 36.26 / MAX: 60.12)
    Default: 38.41 (SE +/- 0.08, N = 3; Min: 38.29 / Max: 38.57; MIN: 35.95 / MAX: 58.71)

OpenVINO 2023.2.dev - Model: Handwritten English Recognition FP16-INT8 - Device: CPU (FPS, More Is Better)
    Optimized Power Mode: 3297.81 (SE +/- 6.73, N = 3; Min: 3285.15 / Max: 3308.1)
    Default: 3330.19 (SE +/- 7.06, N = 3; Min: 3316.88 / Max: 3340.93)

OpenVINO 2023.2.dev - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (ms, Fewer Is Better)
    Optimized Power Mode: 0.41 (SE +/- 0.01, N = 3; Min: 0.4 / Max: 0.42; MIN: 0.19 / MAX: 15.84)
    Default: 0.40 (SE +/- 0.00, N = 3; Min: 0.39 / Max: 0.4; MIN: 0.18 / MAX: 14.47)

OpenVINO 2023.2.dev - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (FPS, More Is Better)
    Optimized Power Mode: 117369.55 (SE +/- 1361.46, N = 3; Min: 114910.81 / Max: 119612.18)
    Default: 121577.55 (SE +/- 775.18, N = 3; Min: 120668.78 / Max: 123119.74)

OpenVINO 2023.2.dev - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (ms, Fewer Is Better)
    Optimized Power Mode: 2.42 (SE +/- 0.00, N = 3; Min: 2.42 / Max: 2.42; MIN: 1.96 / MAX: 28.3)
    Default: 2.45 (SE +/- 0.01, N = 3; Min: 2.43 / Max: 2.46; MIN: 2.03 / MAX: 23.83)

OpenVINO 2023.2.dev - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (FPS, More Is Better)
    Optimized Power Mode: 49494.06 (SE +/- 229.19, N = 3; Min: 49048.51 / Max: 49810.11)
    Default: 48795.97 (SE +/- 490.25, N = 3; Min: 48250.28 / Max: 49774.3)

1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

FFmpeg

This is a benchmark of the FFmpeg multimedia framework. The FFmpeg test profile makes use of a modified version of vbench [http://arcade.cs.columbia.edu/vbench/], a benchmark for video-as-a-service workloads from Columbia University's Architecture and Design Lab (ARCADE). The test profile offers a range of vbench scenarios based on freely distributable video content and supports either the x264 or x265 video encoder for transcoding. Learn more via the OpenBenchmarking.org test page.

FFmpeg 6.1 - Encoder: libx265 - Scenario: Live (FPS, More Is Better)
    Optimized Power Mode: 118.24 (SE +/- 0.79, N = 3; Min: 116.69 / Max: 119.24)
    Default: 131.81 (SE +/- 1.03, N = 3; Min: 130.43 / Max: 133.84)
1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
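
The gap between the two runs is easiest to read as a relative speedup; from the averages above, the Default configuration transcodes roughly 11.5% faster here than Optimized Power Mode:

```python
opm_fps = 118.24      # Optimized Power Mode average from the result above
default_fps = 131.81  # Default average from the result above

speedup = default_fps / opm_fps - 1
print(f"Default is {speedup:.1%} faster")  # -> Default is 11.5% faster
```

This is one of the larger deltas in the file; most of the throughput-bound results above differ by well under 1%.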

Apache Hadoop

This is a benchmark of Apache Hadoop using its built-in name-node throughput benchmark (NNThroughputBenchmark). Learn more via the OpenBenchmarking.org test page.

Apache Hadoop 3.3.6 - Operation: Create - Threads: 100 - Files: 100000 (Ops per sec, More Is Better)
    Optimized Power Mode: 4814 (SE +/- 62.35, N = 3; Min: 4688.89 / Max: 4878.29)
    Default: 5490 (SE +/- 32.35, N = 3; Min: 5446.03 / Max: 5552.78)

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

Neural Magic DeepSparse 1.6 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
    Optimized Power Mode: 13.91 (SE +/- 0.01, N = 3; Min: 13.89 / Max: 13.92)
    Default: 13.95 (SE +/- 0.02, N = 3; Min: 13.92 / Max: 13.98)

Neural Magic DeepSparse 1.6 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
    Optimized Power Mode: 4590.34 (SE +/- 2.64, N = 3; Min: 4586.12 / Max: 4595.19)
    Default: 4580.63 (SE +/- 6.99, N = 3; Min: 4566.76 / Max: 4589.08)

Neural Magic DeepSparse 1.6 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
    Optimized Power Mode: 33.90 (SE +/- 0.01, N = 3; Min: 33.88 / Max: 33.92)
    Default: 33.64 (SE +/- 0.03, N = 3; Min: 33.58 / Max: 33.67)

Neural Magic DeepSparse 1.6 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
    Optimized Power Mode: 1883.66 (SE +/- 0.88, N = 3; Min: 1882.24 / Max: 1885.25)
    Default: 1899.22 (SE +/- 1.67, N = 3; Min: 1897.13 / Max: 1902.51)

Neural Magic DeepSparse 1.6 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
    Optimized Power Mode: 461.15 (SE +/- 0.42, N = 3; Min: 460.3 / Max: 461.59)
    Default: 463.09 (SE +/- 0.27, N = 3; Min: 462.64 / Max: 463.56)

Neural Magic DeepSparse 1.6 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
    Optimized Power Mode: 138.01 (SE +/- 0.12, N = 3; Min: 137.84 / Max: 138.24)
    Default: 137.34 (SE +/- 0.21, N = 3; Min: 136.99 / Max: 137.71)

Neural Magic DeepSparse 1.6 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
    Optimized Power Mode: 460.69 (SE +/- 0.70, N = 3; Min: 459.91 / Max: 462.09)
    Default: 462.99 (SE +/- 0.03, N = 3; Min: 462.93 / Max: 463.05)

Neural Magic DeepSparse 1.6 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
    Optimized Power Mode: 138.15 (SE +/- 0.30, N = 3; Min: 137.58 / Max: 138.56)
    Default: 137.35 (SE +/- 0.09, N = 3; Min: 137.19 / Max: 137.47)

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

Liquid-DSP 1.6 - Threads: 32 - Buffer Length: 256 - Filter Length: 512 (samples/s, More Is Better)
    Optimized Power Mode: 491426667 (SE +/- 3588947.54, N = 3; Min: 487100000 / Max: 498550000)
    Default: 508245000 (SE +/- 5123258.57, N = 6; Min: 497980000 / Max: 531760000)

Liquid-DSP 1.6 - Threads: 256 - Buffer Length: 256 - Filter Length: 32 (samples/s, More Is Better)
    Optimized Power Mode: 6301483333 (SE +/- 61606333.10, N = 6; Min: 6228500000 / Max: 6609100000)
    Default: 6185600000 (SE +/- 7150058.27, N = 3; Min: 6178400000 / Max: 6199900000)

1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.

7-Zip Compression 22.01 - Test: Decompression Rating (MIPS, More Is Better)
    Optimized Power Mode: 638136 (SE +/- 9574.07, N = 3; Min: 620072 / Max: 652669)
    Default: 637311 (SE +/- 4040.78, N = 3; Min: 633021 / Max: 645387)

7-Zip Compression 22.01 - Test: Compression Rating (MIPS, More Is Better)
    Optimized Power Mode: 654459 (SE +/- 1440.87, N = 3; Min: 651684 / Max: 656520)
    Default: 689516 (SE +/- 869.44, N = 3; Min: 687781 / Max: 690480)

1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC
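
When the result viewer's "Show Overall Geometric Mean" option is enabled, results are first normalized between runs and the per-test ratios are combined with a geometric mean. A minimal sketch of that calculation using the two 7-Zip ratings above, normalizing Optimized Power Mode against Default:

```python
import math

# Optimized Power Mode result divided by the Default result for each test
ratios = [
    638136 / 637311,  # 7-Zip Decompression Rating
    654459 / 689516,  # 7-Zip Compression Rating
]

geomean = math.prod(ratios) ** (1 / len(ratios))
print(round(geomean, 3))  # -> 0.975
```

On these two tests alone, Optimized Power Mode trails Default by about 2.5% overall; the geometric mean keeps a single large win or loss from dominating the composite.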

Neural Magic DeepSparse

Neural Magic DeepSparse 1.6 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
    Optimized Power Mode: 346.74 (SE +/- 0.05, N = 3; Min: 346.66 / Max: 346.81)
    Default: 350.90 (SE +/- 0.69, N = 3; Min: 349.65 / Max: 352.02)

Neural Magic DeepSparse 1.6 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
    Optimized Power Mode: 183.71 (SE +/- 0.02, N = 3; Min: 183.68 / Max: 183.73)
    Default: 182.16 (SE +/- 0.43, N = 3; Min: 181.4 / Max: 182.89)

Neural Magic DeepSparse 1.6 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
    Optimized Power Mode: 51.62 (SE +/- 0.07, N = 3; Min: 51.47 / Max: 51.7)
    Default: 51.70 (SE +/- 0.05, N = 3; Min: 51.6 / Max: 51.78)

Neural Magic DeepSparse 1.6 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
    Optimized Power Mode: 1237.03 (SE +/- 2.51, N = 3; Min: 1233.52 / Max: 1241.89)
    Default: 1235.41 (SE +/- 1.85, N = 3; Min: 1232.46 / Max: 1238.82)

uvg266

uvg266 is an open-source VVC/H.266 (Versatile Video Coding) encoder based on Kvazaar, developed by the Ultra Video Group at Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

uvg266 0.4.1 - Video Input: Bosphorus 4K - Video Preset: Very Fast (Frames Per Second, More Is Better)
    Optimized Power Mode: 49.10 (SE +/- 0.45, N = 15; Min: 47.45 / Max: 53.33)
    Default: 56.06 (SE +/- 0.07, N = 5; Min: 55.85 / Max: 56.23)

Neural Magic DeepSparse

Neural Magic DeepSparse 1.6 - Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
    Optimized Power Mode: 5.9385 (SE +/- 0.0065, N = 3; Min: 5.93 / Max: 5.95)
    Default: 5.6732 (SE +/- 0.0033, N = 3; Min: 5.67 / Max: 5.68)

Neural Magic DeepSparse 1.6 - Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
    Optimized Power Mode: 10738.14 (SE +/- 12.41, N = 3; Min: 10721.99 / Max: 10762.53)
    Default: 11244.79 (SE +/- 7.42, N = 3; Min: 11232.11 / Max: 11257.82)

Neural Magic DeepSparse 1.6 - Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
    Optimized Power Mode: 36.13 (SE +/- 0.14, N = 3; Min: 35.84 / Max: 36.28)
    Default: 35.93 (SE +/- 0.08, N = 3; Min: 35.8 / Max: 36.06)

Neural Magic DeepSparse 1.6 - Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
    Optimized Power Mode: 1768.91 (SE +/- 6.86, N = 3; Min: 1761.71 / Max: 1782.63)
    Default: 1778.59 (SE +/- 3.42, N = 3; Min: 1772.46 / Max: 1784.27)

Neural Magic DeepSparse 1.6 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
    Optimized Power Mode: 76.50 (SE +/- 0.04, N = 3; Min: 76.42 / Max: 76.55)
    Default: 76.54 (SE +/- 0.04, N = 3; Min: 76.46 / Max: 76.59)

Neural Magic DeepSparse 1.6 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
    Optimized Power Mode: 834.27 (SE +/- 0.60, N = 3; Min: 833.08 / Max: 835.01)
    Default: 834.18 (SE +/- 0.97, N = 3; Min: 832.62 / Max: 835.97)

Neural Magic DeepSparse 1.6 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
    Optimized Power Mode: 35.96 (SE +/- 0.07, N = 3; Min: 35.84 / Max: 36.09)
    Default: 35.67 (SE +/- 0.16, N = 3; Min: 35.39 / Max: 35.94)

Neural Magic DeepSparse 1.6 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
    Optimized Power Mode: 1776.53 (SE +/- 3.31, N = 3; Min: 1769.97 / Max: 1780.66)
    Default: 1791.27 (SE +/- 7.80, N = 3; Min: 1778.57 / Max: 1805.48)

Neural Magic DeepSparse 1.6 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
    Optimized Power Mode: 73.99 (SE +/- 0.03, N = 3; Min: 73.94 / Max: 74.02)
    Default: 74.07 (SE +/- 0.05, N = 3; Min: 73.98 / Max: 74.17)

Neural Magic DeepSparse 1.6 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
    Optimized Power Mode: 863.35 (SE +/- 0.10, N = 3; Min: 863.16 / Max: 863.5)
    Default: 862.14 (SE +/- 0.30, N = 3; Min: 861.66 / Max: 862.69)

VVenC

VVenC is the Fraunhofer Versatile Video Encoder, a fast and efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe). The vvenc software is published under the Clear BSD License. Learn more via the OpenBenchmarking.org test page.

VVenC 1.9 - Video Input: Bosphorus 1080p - Video Preset: Fast (Frames Per Second, More Is Better)
    Optimized Power Mode: 18.05 (SE +/- 0.11, N = 3; Min: 17.84 / Max: 18.16)
    Default: 19.92 (SE +/- 0.15, N = 3; Min: 19.74 / Max: 20.21)
1. (CXX) g++ options: -O3 -flto=auto -fno-fat-lto-objects

OpenRadioss

OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss, which was open-sourced in 2022. This solver is benchmarked with various example models available from https://www.openradioss.org/models/ and https://github.com/OpenRadioss/ModelExchange/tree/main/Examples. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2023.09.15 - Model: Cell Phone Drop Test (Seconds, Fewer Is Better)
    Optimized Power Mode: 25.37 (SE +/- 0.12, N = 3; Min: 25.17 / Max: 25.58)
    Default: 25.74 (SE +/- 0.14, N = 3; Min: 25.51 / Max: 26)

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.

Blender 4.0 - Blend File: Classroom - Compute: CPU-Only (Seconds, Fewer Is Better)
    Optimized Power Mode: 33.40 (SE +/- 0.07, N = 3; Min: 33.29 / Max: 33.52)
    Default: 34.94 (SE +/- 0.33, N = 3; Min: 34.6 / Max: 35.6)

Liquid-DSP

Liquid-DSP 1.6 - Threads: 256 - Buffer Length: 256 - Filter Length: 512 (samples/s, More Is Better)
    Optimized Power Mode: 2141000000 (SE +/- 1101514.11, N = 3; Min: 2139800000 / Max: 2143200000)
    Default: 2144566667 (SE +/- 2968913.01, N = 3; Min: 2141400000 / Max: 2150500000)
1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Apache HTTP Server

This is a test of the Apache HTTPD web server. This Apache HTTPD benchmark test profile makes use of the wrk program for facilitating the HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.

Apache HTTP Server 2.4.56 - Concurrent Requests: 1000 (Requests Per Second, More Is Better)
    Optimized Power Mode: 76315.65
    Default: 80826.71
1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

Liquid-DSP

Liquid-DSP 1.6 - Threads: 128 - Buffer Length: 256 - Filter Length: 512 (samples/s, More Is Better)
Optimized Power Mode: 1513466667 (SE +/- 13505101.92, N = 3; Min: 1491700000 / Max: 1538200000)
Default: 1473900000 (SE +/- 4762352.36, N = 3; Min: 1466700000 / Max: 1482900000)
1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP 1.6 - Threads: 64 - Buffer Length: 256 - Filter Length: 512 (samples/s, More Is Better)
Optimized Power Mode: 999663333 (SE +/- 5205466.14, N = 3; Min: 993420000 / Max: 1010000000)
Default: 1012336667 (SE +/- 10474795.68, N = 3; Min: 992410000 / Max: 1027900000)
1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP 1.6 - Threads: 256 - Buffer Length: 256 - Filter Length: 57 (samples/s, More Is Better)
Optimized Power Mode: 5656600000 (SE +/- 19835069.95, N = 3; Min: 5622600000 / Max: 5691300000)
Default: 5835500000 (SE +/- 11767044.38, N = 3; Min: 5815800000 / Max: 5856500000)
1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP 1.6 - Threads: 128 - Buffer Length: 256 - Filter Length: 57 (samples/s, More Is Better)
Optimized Power Mode: 4125000000 (SE +/- 34188302.09, N = 3; Min: 4061200000 / Max: 4178200000)
Default: 4329866667 (SE +/- 11478143.48, N = 3; Min: 4307100000 / Max: 4343800000)
1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP 1.6 - Threads: 128 - Buffer Length: 256 - Filter Length: 32 (samples/s, More Is Better)
Optimized Power Mode: 3745233333 (SE +/- 3883440.63, N = 3; Min: 3738000000 / Max: 3751300000)
Default: 3611866667 (SE +/- 1386041.53, N = 3; Min: 3609100000 / Max: 3613400000)
1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP 1.6 - Threads: 64 - Buffer Length: 256 - Filter Length: 32 (samples/s, More Is Better)
Optimized Power Mode: 2460866667 (SE +/- 13974301.81, N = 3; Min: 2446100000 / Max: 2488800000)
Default: 2476033333 (SE +/- 6835284.27, N = 3; Min: 2465600000 / Max: 2488900000)
1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP 1.6 - Threads: 32 - Buffer Length: 256 - Filter Length: 32 (samples/s, More Is Better)
Optimized Power Mode: 1204500000 (SE +/- 1193035.34, N = 3; Min: 1202200000 / Max: 1206200000)
Default: 1211766667 (SE +/- 1273228.62, N = 3; Min: 1209900000 / Max: 1214200000)
1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP 1.6 - Threads: 32 - Buffer Length: 256 - Filter Length: 57 (samples/s, More Is Better)
Optimized Power Mode: 1293333333 (SE +/- 16392511.84, N = 3; Min: 1262500000 / Max: 1318400000)
Default: 1429100000 (SE +/- 9950041.88, N = 3; Min: 1410600000 / Max: 1444700000)
1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

VVenC

VVenC is the Fraunhofer Versatile Video Encoder, a fast and efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe) and is published under the Clear BSD License. Learn more via the OpenBenchmarking.org test page.

VVenC 1.9 - Video Input: Bosphorus 1080p - Video Preset: Faster (Frames Per Second, More Is Better)
Optimized Power Mode: 28.93 (SE +/- 0.30, N = 4; Min: 28.35 / Max: 29.77)
Default: 32.09 (SE +/- 0.36, N = 3; Min: 31.47 / Max: 32.71)
1. (CXX) g++ options: -O3 -flto=auto -fno-fat-lto-objects

SVT-AV1

SVT-AV1 1.8 - Encoder Mode: Preset 4 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
Optimized Power Mode: 6.645 (SE +/- 0.006, N = 3; Min: 6.64 / Max: 6.66)
Default: 7.297 (SE +/- 0.098, N = 3; Min: 7.12 / Max: 7.46)
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.

Redis 7.0.4 - Test: SET - Parallel Connections: 50 (Requests Per Second, More Is Better)
Optimized Power Mode: 3050498.00 (SE +/- 1584.15, N = 3; Min: 3047348.5 / Max: 3052371.25)
Default: 3060186.42 (SE +/- 12726.60, N = 3; Min: 3034771.5 / Max: 3074102.25)
1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

uvg266

uvg266 is an open-source VVC/H.266 (Versatile Video Coding) encoder based on Kvazaar, developed by the Ultra Video Group at Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

uvg266 0.4.1 - Video Input: Bosphorus 4K - Video Preset: Slow (Frames Per Second, More Is Better)
Optimized Power Mode: 26.71 (SE +/- 0.08, N = 3; Min: 26.62 / Max: 26.86)
Default: 27.20 (SE +/- 0.08, N = 3; Min: 27.09 / Max: 27.36)

Redis

Redis 7.0.4 - Test: GET - Parallel Connections: 500 (Requests Per Second, More Is Better)
Optimized Power Mode: 3152359.75 (SE +/- 36270.68, N = 3; Min: 3080011 / Max: 3193109)
Default: 3240001.58 (SE +/- 482.98, N = 3; Min: 3239239.25 / Max: 3240896.5)
1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

QuantLib

QuantLib is an open-source library/framework around quantitative finance for modeling, trading and risk management scenarios. QuantLib is written in C++ with Boost, and its built-in benchmark reports the QuantLib Benchmark Index score. Learn more via the OpenBenchmarking.org test page.

QuantLib 1.32 - Configuration: Single-Threaded (MFLOPS, More Is Better)
Optimized Power Mode: 3641.4 (SE +/- 2.92, N = 3; Min: 3635.6 / Max: 3644.7)
Default: 3650.5 (SE +/- 1.44, N = 3; Min: 3647.7 / Max: 3652.5)
1. (CXX) g++ options: -O3 -march=native -fPIE -pie

Xmrig

Xmrig is an open-source cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight and AstroBWT. This test profile is set up to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.

Xmrig 6.21 - Variant: Wownero - Hash Count: 1M (H/s, More Is Better)
Optimized Power Mode: 75701.7 (SE +/- 24.14, N = 4; Min: 75648.7 / Max: 75757.6)
Default: 75002.3 (SE +/- 677.05, N = 4; Min: 72971.4 / Max: 75700.2)
1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Redis

Redis 7.0.4 - Test: GET - Parallel Connections: 50 (Requests Per Second, More Is Better)
Optimized Power Mode: 3811819.50 (SE +/- 1281.58, N = 3; Min: 3809882.5 / Max: 3814241.75)
Default: 4744838.90 (SE +/- 8167.04, N = 4; Min: 4723750 / Max: 4762411)
1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
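This Redis GET result shows one of the larger gaps between the two power modes. As a quick sketch of how that gap can be quantified from the averages reported above (the percentage is computed here, not taken from the result file):

```python
# Average requests/sec from the Redis GET, 50-connections result above.
optimized = 3811819.50
default_mode = 4744838.90

# Relative advantage of the Default run over Optimized Power Mode, in percent.
gap_pct = (default_mode / optimized - 1) * 100
print(f"Default is {gap_pct:.1f}% faster")  # prints "Default is 24.5% faster"
```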

OpenFOAM

OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.

OpenFOAM 10 - Input: drivaerFastback, Small Mesh Size - Execution Time (Seconds, Fewer Is Better)
Optimized Power Mode: 23.35
Default: 23.47
1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfiniteVolume -lmeshTools -lparallel -llagrangian -lregionModels -lgenericPatchFields -lOpenFOAM -ldl -lm

OpenFOAM 10 - Input: drivaerFastback, Small Mesh Size - Mesh Time (Seconds, Fewer Is Better)
Optimized Power Mode: 27.71
Default: 30.04
1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfiniteVolume -lmeshTools -lparallel -llagrangian -lregionModels -lgenericPatchFields -lOpenFOAM -ldl -lm

SVT-AV1

SVT-AV1 1.8 - Encoder Mode: Preset 13 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
Optimized Power Mode: 199.55 (SE +/- 1.56, N = 9; Min: 192.99 / Max: 210.29)
Default: 218.76 (SE +/- 1.12, N = 5; Min: 215.42 / Max: 222.01)
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

uvg266

uvg266 0.4.1 - Video Input: Bosphorus 4K - Video Preset: Medium (Frames Per Second, More Is Better)
Optimized Power Mode: 29.62 (SE +/- 0.19, N = 3; Min: 29.42 / Max: 29.99)
Default: 29.79 (SE +/- 0.08, N = 3; Min: 29.66 / Max: 29.92)

uvg266 0.4.1 - Video Input: Bosphorus 1080p - Video Preset: Super Fast (Frames Per Second, More Is Better)
Optimized Power Mode: 145.75 (SE +/- 1.05, N = 15; Min: 140.33 / Max: 151.79)
Default: 158.23 (SE +/- 1.01, N = 15; Min: 149.36 / Max: 164.54)

Xmrig

Xmrig 6.21 - Variant: CryptoNight-Femto UPX2 - Hash Count: 1M (H/s, More Is Better)
Optimized Power Mode: 69847.9 (SE +/- 443.65, N = 3; Min: 68960.8 / Max: 70308.7)
Default: 69677.8 (SE +/- 844.15, N = 4; Min: 67163.7 / Max: 70676.4)
1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Kvazaar

This is a test of Kvazaar, a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and was developed by the Ultra Video Group at Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.2 - Video Input: Bosphorus 4K - Video Preset: Slow (Frames Per Second, More Is Better)
Optimized Power Mode: 41.13 (SE +/- 0.06, N = 4; Min: 40.97 / Max: 41.23)
Default: 41.45 (SE +/- 0.06, N = 4; Min: 41.3 / Max: 41.57)
1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Xmrig

Xmrig 6.21 - Variant: KawPow - Hash Count: 1M (H/s, More Is Better)
Optimized Power Mode: 69196.7 (SE +/- 943.50, N = 3; Min: 67317.4 / Max: 70283.9)
Default: 70383.2 (SE +/- 82.14, N = 4; Min: 70180.4 / Max: 70561.7)
1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Xmrig 6.21 - Variant: Monero - Hash Count: 1M (H/s, More Is Better)
Optimized Power Mode: 70298.8 (SE +/- 41.44, N = 3; Min: 70229.7 / Max: 70373)
Default: 70469.8 (SE +/- 27.69, N = 4; Min: 70407.7 / Max: 70541.8)
1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Xmrig 6.21 - Variant: CryptoNight-Heavy - Hash Count: 1M (H/s, More Is Better)
Optimized Power Mode: 70338.6 (SE +/- 96.69, N = 3; Min: 70234.6 / Max: 70531.8)
Default: 70573.0 (SE +/- 49.63, N = 4; Min: 70467.2 / Max: 70686.4)
1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Kvazaar

Kvazaar 2.2 - Video Input: Bosphorus 4K - Video Preset: Medium (Frames Per Second, More Is Better)
Optimized Power Mode: 41.69 (SE +/- 0.05, N = 4; Min: 41.6 / Max: 41.83)
Default: 41.99 (SE +/- 0.06, N = 4; Min: 41.91 / Max: 42.18)
1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.

Blender 4.0 - Blend File: BMW27 - Compute: CPU-Only (Seconds, Fewer Is Better)
Optimized Power Mode: 12.67 (SE +/- 0.04, N = 4; Min: 12.61 / Max: 12.77)
Default: 12.89 (SE +/- 0.03, N = 4; Min: 12.81 / Max: 12.95)

SVT-AV1

SVT-AV1 1.8 - Encoder Mode: Preset 4 - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
Optimized Power Mode: 19.67 (SE +/- 0.11, N = 5; Min: 19.32 / Max: 19.92)
Default: 20.95 (SE +/- 0.16, N = 5; Min: 20.35 / Max: 21.32)
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

uvg266

uvg266 0.4.1 - Video Input: Bosphorus 4K - Video Preset: Super Fast (Frames Per Second, More Is Better)
Optimized Power Mode: 50.68 (SE +/- 0.48, N = 4; Min: 49.52 / Max: 51.87)
Default: 57.87 (SE +/- 0.15, N = 5; Min: 57.46 / Max: 58.2)

uvg266 0.4.1 - Video Input: Bosphorus 4K - Video Preset: Ultra Fast (Frames Per Second, More Is Better)
Optimized Power Mode: 50.36 (SE +/- 0.21, N = 4; Min: 49.76 / Max: 50.75)
Default: 59.59 (SE +/- 0.37, N = 5; Min: 58.36 / Max: 60.54)

Kvazaar

Kvazaar 2.2 - Video Input: Bosphorus 4K - Video Preset: Ultra Fast (Frames Per Second, More Is Better)
Optimized Power Mode: 70.87 (SE +/- 0.70, N = 6; Min: 68.39 / Max: 72.57)
Default: 77.53 (SE +/- 0.23, N = 6; Min: 76.84 / Max: 78.09)
1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

SVT-AV1

SVT-AV1 1.8 - Encoder Mode: Preset 8 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
Optimized Power Mode: 61.91 (SE +/- 0.56, N = 4; Min: 60.44 / Max: 63.13)
Default: 69.47 (SE +/- 0.39, N = 4; Min: 68.72 / Max: 70.55)
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1 1.8 - Encoder Mode: Preset 12 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
Optimized Power Mode: 198.74 (SE +/- 2.06, N = 5; Min: 195 / Max: 206.71)
Default: 218.45 (SE +/- 0.90, N = 6; Min: 214.86 / Max: 221.08)
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

uvg266

uvg266 0.4.1 - Video Input: Bosphorus 1080p - Video Preset: Ultra Fast (Frames Per Second, More Is Better)
Optimized Power Mode: 153.17 (SE +/- 1.12, N = 15; Min: 147.04 / Max: 161.05)
Default: 166.47 (SE +/- 1.12, N = 9; Min: 159.93 / Max: 170.71)

uvg266 0.4.1 - Video Input: Bosphorus 1080p - Video Preset: Slow (Frames Per Second, More Is Better)
Optimized Power Mode: 78.93 (SE +/- 0.19, N = 6; Min: 78.21 / Max: 79.43)
Default: 80.30 (SE +/- 0.10, N = 6; Min: 79.99 / Max: 80.55)

Kvazaar

Kvazaar 2.2 - Video Input: Bosphorus 4K - Video Preset: Very Fast (Frames Per Second, More Is Better)
Optimized Power Mode: 64.97 (SE +/- 0.44, N = 5; Min: 63.8 / Max: 66)
Default: 69.36 (SE +/- 0.26, N = 5; Min: 68.83 / Max: 70.29)
1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

SVT-VP9

This is a test of SVT-VP9, the Intel Open Visual Cloud Scalable Video Technology CPU-based multi-threaded video encoder for the VP9 format, using a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Tuning: VMAF Optimized - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
Optimized Power Mode: 493.26 (SE +/- 9.88, N = 15; Min: 356.57 / Max: 513.71)
Default: 542.98 (SE +/- 2.30, N = 9; Min: 533.41 / Max: 554.7)
1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

Kvazaar

Kvazaar 2.2 - Video Input: Bosphorus 4K - Video Preset: Super Fast (Frames Per Second, More Is Better)
Optimized Power Mode: 67.21 (SE +/- 0.12, N = 5; Min: 66.93 / Max: 67.63)
Default: 71.05 (SE +/- 0.25, N = 5; Min: 70.53 / Max: 71.89)
1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Intel Open Image Denoise

Intel Open Image Denoise 2.1 - Run: RT.ldr_alb_nrm.3840x2160 - Device: CPU-Only (Images / Sec, More Is Better)
Optimized Power Mode: 4.72 (SE +/- 0.01, N = 6; Min: 4.67 / Max: 4.76)
Default: 4.53 (SE +/- 0.01, N = 6; Min: 4.48 / Max: 4.55)

Y-Cruncher

Y-Cruncher 0.8.2 - Pi Digits To Calculate: 1B (Seconds, Fewer Is Better)
Optimized Power Mode: 4.697 (SE +/- 0.009, N = 5; Min: 4.68 / Max: 4.72)
Default: 5.199 (SE +/- 0.018, N = 5; Min: 5.16 / Max: 5.25)

SVT-AV1

SVT-AV1 1.8 - Encoder Mode: Preset 8 - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
Optimized Power Mode: 126.70 (SE +/- 0.77, N = 6; Min: 124.68 / Max: 129.02)
Default: 142.27 (SE +/- 0.36, N = 7; Min: 140.83 / Max: 143.5)
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

uvg266

uvg266 0.4.1 - Video Input: Bosphorus 1080p - Video Preset: Medium (Frames Per Second, More Is Better)
Optimized Power Mode: 86.78 (SE +/- 0.25, N = 6; Min: 85.6 / Max: 87.35)
Default: 88.46 (SE +/- 0.08, N = 6; Min: 88.21 / Max: 88.75)

Kvazaar

Kvazaar 2.2 - Video Input: Bosphorus 1080p - Video Preset: Slow (Frames Per Second, More Is Better)
Optimized Power Mode: 125.18 (SE +/- 0.35, N = 8; Min: 124.01 / Max: 126.67)
Default: 127.25 (SE +/- 0.38, N = 8; Min: 125.92 / Max: 128.9)
1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

uvg266

uvg266 0.4.1 - Video Input: Bosphorus 1080p - Video Preset: Very Fast (Frames Per Second, More Is Better)
Optimized Power Mode: 147.08 (SE +/- 1.07, N = 11; Min: 141.96 / Max: 153.57)
Default: 160.63 (SE +/- 0.74, N = 8; Min: 158.38 / Max: 165.07)

SVT-AV1

SVT-AV1 1.8 - Encoder Mode: Preset 12 - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
Optimized Power Mode: 439.48 (SE +/- 4.01, N = 15; Min: 416.15 / Max: 472.01)
Default: 502.71 (SE +/- 3.53, N = 9; Min: 484.02 / Max: 517.77)
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Kvazaar

Kvazaar 2.2 - Video Input: Bosphorus 1080p - Video Preset: Medium (Frames Per Second, More Is Better)
Optimized Power Mode: 130.64 (SE +/- 0.58, N = 8; Min: 127.72 / Max: 132.64)
Default: 131.14 (SE +/- 0.46, N = 8; Min: 129.53 / Max: 133.21)
1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

SVT-VP9

SVT-VP9 0.3 - Tuning: Visual Quality Optimized - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  Optimized Power Mode: 433.53 (SE +/- 2.86, N = 8; Min: 422.26 / Max: 442.47)
  Default: 463.27 (SE +/- 1.93, N = 8; Min: 456.28 / Max: 473.46)
  1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

SVT-VP9 0.3 - Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  Optimized Power Mode: 502.05 (SE +/- 3.72, N = 8; Min: 481.52 / Max: 520.31)
  Default: 542.17 (SE +/- 4.34, N = 9; Min: 530.7 / Max: 573.68)
  1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

SVT-AV1

SVT-AV1 1.8 - Encoder Mode: Preset 13 - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  Optimized Power Mode: 574.60 (SE +/- 3.16, N = 10; Min: 560.66 / Max: 590.69)
  Default: 628.76 (SE +/- 4.69, N = 11; Min: 608.76 / Max: 657.49)
  1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Kvazaar

Kvazaar 2.2 - Video Input: Bosphorus 1080p - Video Preset: Super Fast (Frames Per Second, More Is Better)
  Optimized Power Mode: 251.33 (SE +/- 1.69, N = 10; Min: 240.04 / Max: 256.84)
  Default: 263.88 (SE +/- 1.09, N = 10; Min: 259.16 / Max: 272.3)
  1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Kvazaar 2.2 - Video Input: Bosphorus 1080p - Video Preset: Ultra Fast (Frames Per Second, More Is Better)
  Optimized Power Mode: 263.96 (SE +/- 1.93, N = 10; Min: 251.12 / Max: 273.61)
  Default: 282.05 (SE +/- 2.07, N = 11; Min: 265.24 / Max: 293.06)
  1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Kvazaar 2.2 - Video Input: Bosphorus 1080p - Video Preset: Very Fast (Frames Per Second, More Is Better)
  Optimized Power Mode: 256.19 (SE +/- 1.04, N = 10; Min: 251.65 / Max: 261.96)
  Default: 268.49 (SE +/- 1.16, N = 10; Min: 263.87 / Max: 273.99)
  1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt
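The SE figures attached to each result above are standard errors of the mean across the recorded runs. Given raw per-run samples, they can be reproduced as follows (a sketch using hypothetical sample data; the result file only reports the summary statistics, not the individual runs):

```python
import math
import statistics

# Hypothetical per-run FPS samples for illustration only; this result
# file reports just SE, N, and Min/Avg/Max, not the raw runs.
runs = [251.6, 254.8, 256.2, 257.1, 258.0, 255.3, 256.9, 254.1, 257.7, 260.2]

avg = statistics.mean(runs)
# Standard error of the mean: sample standard deviation / sqrt(N).
se = statistics.stdev(runs) / math.sqrt(len(runs))
print(f"Avg: {avg:.2f}, SE +/- {se:.2f}, N = {len(runs)}")
```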

OpenFOAM

OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.

OpenFOAM 10 - Input: motorBike - Execution Time (Seconds, Fewer Is Better)
  Optimized Power Mode: 5.50897
  Default: 4.13288
  1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfiniteVolume -lmeshTools -lparallel -llagrangian -lregionModels -lgenericPatchFields -lOpenFOAM -ldl -lm

SVT-VP9

Tuning: VMAF Optimized - Input: Bosphorus 4K

Default: The test quit with a non-zero exit status.

Optimized Power Mode: The test quit with a non-zero exit status.

Tuning: PSNR/SSIM Optimized - Input: Bosphorus 4K

Default: The test quit with a non-zero exit status.

Optimized Power Mode: The test quit with a non-zero exit status.

Tuning: Visual Quality Optimized - Input: Bosphorus 4K

Default: The test quit with a non-zero exit status.

Optimized Power Mode: The test quit with a non-zero exit status.

CPU Power Consumption Monitor

CPU Power Consumption Monitor - Phoronix Test Suite System Monitoring (Watts)
  Optimized Power Mode: Min: 88.73 / Avg: 366.23 / Max: 802.53
  Default: Min: 101.15 / Avg: 445.93 / Max: 802.3
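The headline trade-off of this comparison falls out of the two monitored averages above: across the full benchmark run, Optimized Power Mode cut average CPU package power by roughly 18% (a quick calculation from the values in this file):

```python
# Average CPU power draw over the full benchmark run, in Watts
# (values from this result file's power consumption monitor).
default_avg_w = 445.93
opm_avg_w = 366.23

saving = (default_avg_w - opm_avg_w) / default_avg_w * 100
print(f"Optimized Power Mode averaged {saving:.1f}% lower CPU power draw")
```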

CPU Peak Freq (Highest CPU Core Frequency) Monitor

CPU Peak Freq (Highest CPU Core Frequency) Monitor - Phoronix Test Suite System Monitoring (Megahertz)
  Optimized Power Mode: Min: 800 / Avg: 3437 / Max: 5154
  Default: Min: 500 / Avg: 3374.53 / Max: 5474

181 Results Shown

PyTorch:
  CPU - 64 - Efficientnet_v2_l
  CPU - 256 - Efficientnet_v2_l
Timed GCC Compilation
PostgreSQL:
  100 - 1000 - Read Only - Average Latency
  100 - 1000 - Read Only
Apache Spark TPC-H:
  10 - Q22
  10 - Q21
  10 - Q20
  10 - Q19
  10 - Q18
  10 - Q17
  10 - Q16
  10 - Q15
  10 - Q14
  10 - Q13
  10 - Q12
  10 - Q11
  10 - Q10
  10 - Q09
  10 - Q08
  10 - Q07
  10 - Q06
  10 - Q05
  10 - Q04
  10 - Q03
  10 - Q02
  10 - Q01
  10 - Geometric Mean Of All Queries
PyTorch
Memcached
PyTorch
Blender
easyWave
DuckDB:
  TPC-H Parquet
  IMDB
Timed LLVM Compilation
FFmpeg:
  libx265 - Upload
  libx265 - Video On Demand
  libx265 - Platform
Timed Linux Kernel Compilation
PostgreSQL:
  100 - 1000 - Read Write - Average Latency
  100 - 1000 - Read Write
QMCPACK
easyWave
OpenRadioss:
  Chrysler Neon 1M
  Bird Strike on Windshield
  INIVOL and Fluid Structure Interaction Drop Container
Timed LLVM Compilation
VVenC
nginx
Apache HTTP Server
OpenRadioss
Liquid-DSP
OpenRadioss
Zstd Compression:
  12 - Decompression Speed
  12 - Compression Speed
Redis
Neural Magic DeepSparse:
  BERT-Large, NLP Question Answering - Asynchronous Multi-Stream:
    ms/batch
    items/sec
Zstd Compression:
  19, Long Mode - Decompression Speed
  19, Long Mode - Compression Speed
Timed Linux Kernel Compilation
nginx
Zstd Compression:
  19 - Decompression Speed
  19 - Compression Speed
Xmrig
PyTorch:
  CPU - 64 - ResNet-50
  CPU - 256 - ResNet-50
OpenVINO:
  Face Detection FP16-INT8 - CPU:
    ms
    FPS
Memcached
VVenC
OpenVINO:
  Person Vehicle Bike Detection FP16 - CPU:
    ms
    FPS
  Road Segmentation ADAS FP16-INT8 - CPU:
    ms
    FPS
  Person Detection FP16 - CPU:
    ms
    FPS
  Vehicle Detection FP16-INT8 - CPU:
    ms
    FPS
  Machine Translation EN To DE FP16 - CPU:
    ms
    FPS
  Face Detection Retail FP16-INT8 - CPU:
    ms
    FPS
QuantLib
OpenVINO:
  Handwritten English Recognition FP16-INT8 - CPU:
    ms
    FPS
  Age Gender Recognition Retail 0013 FP16-INT8 - CPU:
    ms
    FPS
  Weld Porosity Detection FP16-INT8 - CPU:
    ms
    FPS
FFmpeg
Apache Hadoop
Neural Magic DeepSparse:
  NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
Liquid-DSP:
  32 - 256 - 512
  256 - 256 - 32
7-Zip Compression:
  Decompression Rating
  Compression Rating
Neural Magic DeepSparse:
  CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream:
    ms/batch
    items/sec
uvg266
Neural Magic DeepSparse:
  ResNet-50, Sparse INT8 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  ResNet-50, Baseline - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
VVenC
OpenRadioss
Blender
Liquid-DSP
Apache HTTP Server
Liquid-DSP:
  128 - 256 - 512
  64 - 256 - 512
  256 - 256 - 57
  128 - 256 - 57
  128 - 256 - 32
  64 - 256 - 32
  32 - 256 - 32
  32 - 256 - 57
VVenC
SVT-AV1
Redis
uvg266
Redis
QuantLib
Xmrig
Redis
OpenFOAM:
  drivaerFastback, Small Mesh Size - Execution Time
  drivaerFastback, Small Mesh Size - Mesh Time
SVT-AV1
uvg266:
  Bosphorus 4K - Medium
  Bosphorus 1080p - Super Fast
Xmrig
Kvazaar
Xmrig:
  KawPow - 1M
  Monero - 1M
  CryptoNight-Heavy - 1M
Kvazaar
Blender
SVT-AV1
uvg266:
  Bosphorus 4K - Super Fast
  Bosphorus 4K - Ultra Fast
Kvazaar
SVT-AV1:
  Preset 8 - Bosphorus 4K
  Preset 12 - Bosphorus 4K
uvg266:
  Bosphorus 1080p - Ultra Fast
  Bosphorus 1080p - Slow
Kvazaar
SVT-VP9
Kvazaar
Intel Open Image Denoise
Y-Cruncher
SVT-AV1
uvg266
Kvazaar
uvg266
SVT-AV1
Kvazaar
SVT-VP9:
  Visual Quality Optimized - Bosphorus 1080p
  PSNR/SSIM Optimized - Bosphorus 1080p
SVT-AV1
Kvazaar:
  Bosphorus 1080p - Super Fast
  Bosphorus 1080p - Ultra Fast
  Bosphorus 1080p - Very Fast
OpenFOAM
CPU Power Consumption Monitor
CPU Peak Freq (Highest CPU Core Frequency) Monitor