1280p October

Intel Core i7-1280P testing with an MSI MS-14C6 (E14C6IMS.115 BIOS) motherboard and MSI Intel ADL GT2 14GB graphics on Ubuntu 22.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2210278-NE-1280POCTO70
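
If you prefer to script that comparison rather than type it interactively, a minimal Python sketch (assuming the phoronix-test-suite client is installed and on your PATH) is:

import subprocess

# Run the same comparison command shown above; the result ID identifies this result file.
subprocess.run(
    ["phoronix-test-suite", "benchmark", "2210278-NE-1280POCTO70"],
    check=True,  # raise CalledProcessError if the suite exits with a non-zero status
)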

Test categories covered: CPU Massive (3 tests), Creator Workloads (5 tests), Cryptocurrency Benchmarks / CPU Mining (2 tests), Cryptography (2 tests), Encoding (2 tests), HPC - High Performance Computing (3 tests), Imaging (3 tests), Machine Learning (2 tests), Multi-Core (3 tests), Python Tests (2 tests).


Run Management

  Result Identifier   Date Run          Test Duration
  A                   October 27 2022   3 Hours, 18 Minutes
  B                   October 27 2022   3 Hours, 25 Minutes
  Average                               3 Hours, 22 Minutes


1280p October Benchmarks - OpenBenchmarking.org - Phoronix Test Suite

Processor: Intel Core i7-1280P @ 4.80GHz (14 Cores / 20 Threads)
Motherboard: MSI MS-14C6 (E14C6IMS.115 BIOS)
Chipset: Intel Alder Lake PCH
Memory: 16GB
Disk: 1024GB Micron_3400_MTFDKBA1T0TFH
Graphics: MSI Intel ADL GT2 14GB (1450MHz)
Audio: Realtek ALC274
Network: Intel Alder Lake-P PCH CNVi WiFi
OS: Ubuntu 22.04
Kernel: 5.15.0-43-generic (x86_64)
Desktop: KDE Plasma 5.24.4
Display Server: X Server 1.21.1.3
OpenGL: 4.6 Mesa 22.0.5
Vulkan: 1.3.204
Compiler: GCC 11.3.0
File-System: ext4
Screen Resolution: 1920x1080

System Logs
- Transparent Huge Pages: madvise
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: intel_pstate powersave (EPP: balance_performance)
- CPU Microcode: 0x41c
- Thermald 2.4.9
- OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu122.04)
- Python 3.10.6
- Security: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
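
For a rough cross-check of your own machine against the table above, a minimal Python sketch using only the standard library (it reports far less detail than the Phoronix Test Suite hardware/software probe) is:

import os
import platform

# A few fields comparable to the system table above.
print("Kernel:      ", platform.release(), platform.machine())
print("OS:          ", platform.platform())
print("Python:      ", platform.python_version())
print("Logical CPUs:", os.cpu_count())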

A vs. B Comparison (OpenBenchmarking.org / Phoronix Test Suite): per-test percentage differences between the two runs. The largest spreads come from the Apache HBase configurations, led by Rows: 1000000 - Random Read - Clients: 4 (roughly 133-135% on throughput and latency) and Rows: 10000 - Async Random Write - Clients: 1 (roughly 51%), tailing off through the remaining Apache HBase results and the Neural Magic DeepSparse, OpenRadioss, Cpuminer-Opt, and libavif avifenc entries at a few percent each.
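
The percentages in that chart are simple relative differences between the two runs. As an illustration of the arithmetic (not a reproduction of any particular bar), using the CV Detection, YOLOv5s COCO asynchronous multi-stream latencies charted further below:

# ms/batch values for run A and run B, taken from the DeepSparse graphs below.
a_ms_per_batch = 274.72
b_ms_per_batch = 259.38

# Relative difference of the slower run over the faster one.
delta_pct = (a_ms_per_batch - b_ms_per_batch) / b_ms_per_batch * 100
print(f"A is {delta_pct:.1f}% slower than B on this result")  # roughly 5.9%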

1280p October: consolidated results table listing runs A and B across all 144 results (Apache HBase, Neural Magic DeepSparse, OpenRadioss, Cpuminer-Opt, libavif avifenc, JPEG XL, QuadRay, Xmrig, TensorFlow, and FLAC audio encoding); the individual results are charted per test below.

Apache HBase

This is a benchmark of the Apache HBase non-relational distributed database system inspired by Google's Bigtable. Learn more via the OpenBenchmarking.org test page.
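
Each HBase configuration below is reported twice: rows per second (higher is better) and average latency in microseconds (lower is better). A small sketch comparing the two runs on the Rows: 1000000 - Test: Random Read - Clients: 4 result that follows:

# Values read from the Random Read (1,000,000 rows, 4 clients) graphs below.
a = {"rows_per_sec": 57264, "avg_latency_us": 69}
b = {"rows_per_sec": 24555, "avg_latency_us": 162}

print(f"Throughput ratio A/B: {a['rows_per_sec'] / b['rows_per_sec']:.2f}x")     # ~2.33x
print(f"Latency ratio B/A:    {b['avg_latency_us'] / a['avg_latency_us']:.2f}x")  # ~2.35x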

OpenBenchmarking.orgMicroseconds - Average Latency, Fewer Is BetterApache HBase 2.5.0Rows: 1000000 - Test: Random Read - Clients: 4BA408012016020016269

OpenBenchmarking.orgRows Per Second, More Is BetterApache HBase 2.5.0Rows: 1000000 - Test: Random Read - Clients: 4BA12K24K36K48K60K2455557264

OpenBenchmarking.orgMicroseconds - Average Latency, Fewer Is BetterApache HBase 2.5.0Rows: 10000 - Test: Async Random Write - Clients: 1BA110220330440550330500

OpenBenchmarking.orgRows Per Second, More Is BetterApache HBase 2.5.0Rows: 10000 - Test: Async Random Write - Clients: 1BA600120018002400300029611968

OpenBenchmarking.orgMicroseconds - Average Latency, Fewer Is BetterApache HBase 2.5.0Rows: 1000000 - Test: Increment - Clients: 1BA2040608010010775

OpenBenchmarking.orgRows Per Second, More Is BetterApache HBase 2.5.0Rows: 1000000 - Test: Increment - Clients: 1BA3K6K9K12K15K929613112

OpenBenchmarking.orgMicroseconds - Average Latency, Fewer Is BetterApache HBase 2.5.0Rows: 1000000 - Test: Random Read - Clients: 1BA16324864807254

OpenBenchmarking.orgRows Per Second, More Is BetterApache HBase 2.5.0Rows: 1000000 - Test: Random Read - Clients: 1BA4K8K12K16K20K1383118161

OpenBenchmarking.orgMicroseconds - Average Latency, Fewer Is BetterApache HBase 2.5.0Rows: 10000 - Test: Random Write - Clients: 4BA14284256705164

OpenBenchmarking.orgRows Per Second, More Is BetterApache HBase 2.5.0Rows: 10000 - Test: Random Write - Clients: 4BA16K32K48K64K80K7270659181

OpenBenchmarking.orgMicroseconds - Average Latency, Fewer Is BetterApache HBase 2.5.0Rows: 10000 - Test: Random Read - Clients: 4BA50100150200250224188

OpenBenchmarking.orgRows Per Second, More Is BetterApache HBase 2.5.0Rows: 1000000 - Test: Sequential Read - Clients: 4BA11K22K33K44K55K4285950972

OpenBenchmarking.orgRows Per Second, More Is BetterApache HBase 2.5.0Rows: 10000 - Test: Random Read - Clients: 4BA4K8K12K16K20K1728920447

OpenBenchmarking.orgMicroseconds - Average Latency, Fewer Is BetterApache HBase 2.5.0Rows: 1000000 - Test: Sequential Read - Clients: 4BA204060801009278

OpenBenchmarking.orgMicroseconds - Average Latency, Fewer Is BetterApache HBase 2.5.0Rows: 10000 - Test: Sequential Read - Clients: 1BA50100150200250201232

OpenBenchmarking.orgRows Per Second, More Is BetterApache HBase 2.5.0Rows: 10000 - Test: Sequential Read - Clients: 1BA1000200030004000500048314195

OpenBenchmarking.orgMicroseconds - Average Latency, Fewer Is BetterApache HBase 2.5.0Rows: 10000 - Test: Random Read - Clients: 1BA60120180240300263229

OpenBenchmarking.orgRows Per Second, More Is BetterApache HBase 2.5.0Rows: 10000 - Test: Random Read - Clients: 1BA900180027003600450037044219

OpenBenchmarking.orgMicroseconds - Average Latency, Fewer Is BetterApache HBase 2.5.0Rows: 1000000 - Test: Random Write - Clients: 4BA8162432403235

OpenBenchmarking.orgMicroseconds - Average Latency, Fewer Is BetterApache HBase 2.5.0Rows: 10000 - Test: Increment - Clients: 1BA90180270360450398368

OpenBenchmarking.orgRows Per Second, More Is BetterApache HBase 2.5.0Rows: 10000 - Test: Sequential Write - Clients: 1BA6K12K18K24K30K2666724691

OpenBenchmarking.orgRows Per Second, More Is BetterApache HBase 2.5.0Rows: 10000 - Test: Increment - Clients: 1BA600120018002400300024552650

OpenBenchmarking.orgRows Per Second, More Is BetterApache HBase 2.5.0Rows: 10000 - Test: Sequential Read - Clients: 4BA4K8K12K16K20K1904817814

OpenBenchmarking.orgMicroseconds - Average Latency, Fewer Is BetterApache HBase 2.5.0Rows: 10000 - Test: Sequential Read - Clients: 4BA50100150200250203217

OpenBenchmarking.orgRows Per Second, More Is BetterApache HBase 2.5.0Rows: 1000000 - Test: Async Random Read - Clients: 1BA4K8K12K16K20K1605717087

OpenBenchmarking.orgMicroseconds - Average Latency, Fewer Is BetterApache HBase 2.5.0Rows: 1000000 - Test: Increment - Clients: 4BA306090120150143152

OpenBenchmarking.orgRows Per Second, More Is BetterApache HBase 2.5.0Rows: 1000000 - Test: Increment - Clients: 4BA6K12K18K24K30K2763326025

Neural Magic DeepSparse

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: CV Detection,YOLOv5s COCO - Scenario: Asynchronous Multi-StreamBA60120180240300259.38274.72

Apache HBase

This is a benchmark of the Apache HBase non-relational distributed database system inspired by Google's Bigtable. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMicroseconds - Average Latency, Fewer Is BetterApache HBase 2.5.0Rows: 10000 - Test: Sequential Write - Clients: 1BA8162432403436

OpenBenchmarking.orgRows Per Second, More Is BetterApache HBase 2.5.0Rows: 10000 - Test: Sequential Write - Clients: 4BA15K30K45K60K75K6569169330

Neural Magic DeepSparse

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: CV Detection,YOLOv5s COCO - Scenario: Asynchronous Multi-StreamBA61218243026.7425.34

Apache HBase

This is a benchmark of the Apache HBase non-relational distributed database system inspired by Google's Bigtable. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMicroseconds - Average Latency, Fewer Is BetterApache HBase 2.5.0Rows: 1000000 - Test: Async Random Read - Clients: 1BA14284256706158

OpenBenchmarking.orgMicroseconds - Average Latency, Fewer Is BetterApache HBase 2.5.0Rows: 1000000 - Test: Sequential Read - Clients: 1BA14284256706158

OpenBenchmarking.orgRows Per Second, More Is BetterApache HBase 2.5.0Rows: 1000000 - Test: Sequential Read - Clients: 1BA4K8K12K16K20K1614416965

OpenBenchmarking.orgRows Per Second, More Is BetterApache HBase 2.5.0Rows: 1000000 - Test: Sequential Write - Clients: 4BA14K28K42K56K70K6626563061

OpenBenchmarking.orgRows Per Second, More Is BetterApache HBase 2.5.0Rows: 1000000 - Test: Async Random Write - Clients: 4BA4K8K12K16K20K1590516704

OpenBenchmarking.orgMicroseconds - Average Latency, Fewer Is BetterApache HBase 2.5.0Rows: 1000000 - Test: Async Random Write - Clients: 4BA50100150200250250239

OpenBenchmarking.orgRows Per Second, More Is BetterApache HBase 2.5.0Rows: 1000000 - Test: Random Write - Clients: 4BA20K40K60K80K100K7731280850

Neural Magic DeepSparse

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-StreamBA91827364537.1635.69

Apache HBase

This is a benchmark of the Apache HBase non-relational distributed database system inspired by Google's Bigtable. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMicroseconds - Average Latency, Fewer Is BetterApache HBase 2.5.0Rows: 10000 - Test: Async Random Read - Clients: 1BA50100150200250234225

OpenRadioss

OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. It is based on Altair Radioss and was open-sourced in 2022. The solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test currently uses a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterOpenRadioss 2022.10.13Model: Bumper BeamBA80160240320400337.54350.53

Apache HBase

This is a benchmark of the Apache HBase non-relational distributed database system inspired by Google's Bigtable. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMicroseconds - Average Latency, Fewer Is BetterApache HBase 2.5.0Rows: 10000 - Test: Sequential Write - Clients: 4BA13263952655654

Neural Magic DeepSparse

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-StreamBA4080120160200187.90194.74

OpenRadioss

OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. It is based on Altair Radioss and was open-sourced in 2022. The solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test currently uses a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterOpenRadioss 2022.10.13Model: Cell Phone Drop TestBA60120180240300259.53268.34

Apache HBase

This is a benchmark of the Apache HBase non-relational distributed database system inspired by Google's Bigtable. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRows Per Second, More Is BetterApache HBase 2.5.0Rows: 10000 - Test: Async Random Read - Clients: 1BA900180027003600450041414281

OpenBenchmarking.orgRows Per Second, More Is BetterApache HBase 2.5.0Rows: 10000 - Test: Random Write - Clients: 1BA6K12K18K24K30K2645525707

OpenBenchmarking.orgMicroseconds - Average Latency, Fewer Is BetterApache HBase 2.5.0Rows: 10000 - Test: Random Write - Clients: 1BA8162432403536

OpenBenchmarking.orgRows Per Second, More Is BetterApache HBase 2.5.0Rows: 1000000 - Test: Sequential Write - Clients: 1BA20K40K60K80K100K103993101153

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the processor across a wide variety of cryptocurrencies. The benchmark reports the hash speed for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.
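
The kH/s figures are thousands of hashes computed per second. A trivial conversion sketch using run A's Myriad-Groestl result from the graph below:

# Myriad-Groestl hash rate for run A (kH/s), from the graph below.
khs = 7904.25
hashes_per_second = khs * 1_000
print(f"{hashes_per_second:,.0f} H/s  (~{hashes_per_second * 3600:,.0f} hashes per hour)")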

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: Myriad-GroestlBA2K4K6K8K10K7709.447904.251. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Apache HBase

This is a benchmark of the Apache HBase non-relational distributed database system inspired by Google's Bigtable. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRows Per Second, More Is BetterApache HBase 2.5.0Rows: 10000 - Test: Async Random Write - Clients: 4BA2K4K6K8K10K1159911327

OpenBenchmarking.orgMicroseconds - Average Latency, Fewer Is BetterApache HBase 2.5.0Rows: 10000 - Test: Async Random Write - Clients: 4BA70140210280350336344

Neural Magic DeepSparse

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-StreamBA4008001200160020001714.861753.05

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-StreamBA0.89911.79822.69733.59644.49553.99583.9097

OpenRadioss

OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. It is based on Altair Radioss and was open-sourced in 2022. The solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test currently uses a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterOpenRadioss 2022.10.13Model: Bird Strike on WindshieldBA130260390520650571.29582.79

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.11Encoder Speed: 10, LosslessBA1.31672.63343.95015.26686.58355.7385.8521. (CXX) g++ options: -O3 -fPIC -lm

JPEG XL Decoding libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is suited for JPEG XL decode performance testing to a PNG output file; the pts/jpegxl test is for encode performance. The JPEG XL encoding/decoding is done using the libjxl codebase. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL Decoding libjxl 0.7CPU Threads: 1BA112233445546.8447.73
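
Since decode throughput is reported in megapixels per second, the time to decode an image is pixels / (MP/s x 10^6). A sketch using run A's single-thread figure above and an assumed 1920x1080 input (the image size is purely illustrative, not the test profile's actual sample):

# Assumed image size, for illustration only.
width, height = 1920, 1080
megapixels = width * height / 1e6          # ~2.07 MP

mp_per_sec = 47.73                         # run A, CPU Threads: 1 (graph above)
decode_time_ms = megapixels / mp_per_sec * 1000
print(f"~{decode_time_ms:.1f} ms to decode a {width}x{height} image at {mp_per_sec} MP/s")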

QuadRay

VectorChief's QuadRay is a real-time ray-tracing engine written to support SIMD across ARM, MIPS, PPC, and x86/x86_64 processors. QuadRay supports SSE/SSE2/SSE4 and AVX/AVX2/AVX-512 usage on Intel/AMD CPUs. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterQuadRay 2022.05.25Scene: 5 - Resolution: 1080pBA0.26550.5310.79651.0621.32751.161.181. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

Neural Magic DeepSparse

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-StreamBA71421283531.2231.66

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-StreamBA71421283532.0331.58
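
For the synchronous single-stream scenarios the two DeepSparse metrics are effectively reciprocals of each other: items/sec is roughly 1000 / (ms/batch). A quick check against the DistilBERT mnli single-stream numbers above:

# DistilBERT mnli synchronous single-stream results from the two graphs above.
ms_per_batch = {"A": 31.66, "B": 31.22}
items_per_sec = {"A": 31.58, "B": 32.03}

for run, ms in ms_per_batch.items():
    print(f"{run}: 1000 / {ms} = {1000 / ms:.2f} items/sec (reported: {items_per_sec[run]})")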

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL libjxl 0.7Input: PNG - Quality: 100BA0.1620.3240.4860.6480.810.720.711. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

Xmrig

Xmrig is an open-source cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight and AstroBWT. This test profile is set up to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgH/s, More Is BetterXmrig 6.18.1Variant: Wownero - Hash Count: 1MBA100020003000400050004454.14515.41. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Neural Magic DeepSparse

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-StreamBA4812162014.6614.83

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-StreamBA153045607568.2267.42

JPEG XL Decoding libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is suited for JPEG XL decode performance testing to a PNG output file; the pts/jpegxl test is for encode performance. The JPEG XL encoding/decoding is done using the libjxl codebase. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL Decoding libjxl 0.7CPU Threads: AllBA50100150200250227.60230.27

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.11Encoder Speed: 0BA50100150200250212.10209.811. (CXX) g++ options: -O3 -fPIC -lm

Apache HBase

This is a benchmark of the Apache HBase non-relational distributed database system inspired by Google's Bigtable. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRows Per Second, More Is BetterApache HBase 2.5.0Rows: 1000000 - Test: Async Random Write - Clients: 1BA2K4K6K8K10K80248110

OpenBenchmarking.orgMicroseconds - Average Latency, Fewer Is BetterApache HBase 2.5.0Rows: 10000 - Test: Async Random Read - Clients: 4BA4080120160200200198

Xmrig

Xmrig is an open-source cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight and AstroBWT. This test profile is set up to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgH/s, More Is BetterXmrig 6.18.1Variant: Monero - Hash Count: 1MBA80016002400320040003511.53546.71. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Neural Magic DeepSparse

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-StreamBA4812162015.0614.91

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-StreamBA153045607566.4167.07

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-StreamBA60120180240300258.63261.04

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-StreamBA0.86991.73982.60973.47964.34953.86643.8307

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the processor across a wide variety of cryptocurrencies. The benchmark reports the hash speed for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: MagiBA60120180240300293.87291.221. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

QuadRay

VectorChief's QuadRay is a real-time ray-tracing engine written to support SIMD across ARM, MIPS, PPC, and x86/x86_64 processors. QuadRay supports SSE/SSE2/SSE4 and AVX/AVX2/AVX-512 usage on Intel/AMD CPUs. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterQuadRay 2022.05.25Scene: 2 - Resolution: 4KBA0.27680.55360.83041.10721.3841.221.231. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

Apache HBase

This is a benchmark of the Apache HBase non-relational distributed database system inspired by Google's Bigtable. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMicroseconds - Average Latency, Fewer Is BetterApache HBase 2.5.0Rows: 1000000 - Test: Async Random Write - Clients: 1BA306090120150124123

QuadRay

VectorChief's QuadRay is a real-time ray-tracing engine written to support SIMD across ARM, MIPS, PPC, and x86/x86_64 processors. QuadRay supports SSE/SSE2/SSE4 and AVX/AVX2/AVX-512 usage on Intel/AMD CPUs. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterQuadRay 2022.05.25Scene: 1 - Resolution: 1080pBA4812162016.3216.451. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

Neural Magic DeepSparse

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-StreamBA4008001200160020001649.631662.03

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-StreamBA90180270360450405.18408.20

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.11Encoder Speed: 6BA2468107.9017.8431. (CXX) g++ options: -O3 -fPIC -lm

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL libjxl 0.7Input: PNG - Quality: 80BA2468108.188.241. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries. Learn more via the OpenBenchmarking.org test page.
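
For reference, a hedged sketch of how such a run is typically launched outside the test suite; the script path and flag names are assumptions based on the upstream tensorflow/benchmarks repository and should be verified against your checkout:

import subprocess

# Approximate standalone invocation of the reference benchmark used by this profile.
# Flags (--device, --model, --batch_size, --num_batches) are assumed from tf_cnn_benchmarks; verify locally.
subprocess.run(
    [
        "python", "tf_cnn_benchmarks.py",
        "--device=cpu",
        "--model=alexnet",
        "--batch_size=16",
        "--num_batches=100",
    ],
    check=True,
)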

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.10Device: CPU - Batch Size: 16 - Model: AlexNetBA2040608010074.9074.37

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.10Device: CPU - Batch Size: 64 - Model: ResNet-50BA369121513.5113.60

Neural Magic DeepSparse

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-StreamBA60120180240300257.49259.12

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-StreamBA0.87381.74762.62143.49524.3693.88363.8591

Apache HBase

This is a benchmark of the Apache HBase non-relational distributed database system inspired by Google's Bigtable. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRows Per Second, More Is BetterApache HBase 2.5.0Rows: 1000000 - Test: Random Write - Clients: 1BA16K32K48K64K80K7508175552

OpenBenchmarking.orgMicroseconds - Average Latency, Fewer Is BetterApache HBase 2.5.0Rows: 10000 - Test: Increment - Clients: 4BA70140210280350322324

FLAC Audio Encoding

This test times how long it takes to encode a sample WAV file to FLAC audio format ten times using the --best preset settings. Learn more via the OpenBenchmarking.org test page.
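
A minimal Python sketch of the same idea (ten --best encodes of one WAV file, wall-clock timed); the input path is a placeholder, and the test profile's exact invocation may differ:

import subprocess
import time

wav = "sample.wav"  # placeholder input; the test profile ships its own sample WAV

start = time.perf_counter()
for i in range(10):
    # --best selects the highest compression preset; -f overwrites any existing output
    subprocess.run(["flac", "--best", "-f", "-o", f"/tmp/out{i}.flac", wav], check=True)
print(f"10 encodes took {time.perf_counter() - start:.2f} s")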

OpenBenchmarking.orgSeconds, Fewer Is BetterFLAC Audio Encoding 1.4WAV To FLACBA4812162016.5516.651. (CXX) g++ options: -O3 -fvisibility=hidden -logg -lm

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.10Device: CPU - Batch Size: 32 - Model: ResNet-50BA369121513.2613.34

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the processor across a wide variety of cryptocurrencies. The benchmark reports the hash speed for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: DeepcoinBA120024003600480060005399.005429.971. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: GarlicoinBA300600900120015001273.471280.741. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: Triple SHA-256, OnecoinBA20K40K60K80K100K1155601161701. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

OpenRadioss

OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. It is based on Altair Radioss and was open-sourced in 2022. The solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test currently uses a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterOpenRadioss 2022.10.13Model: INIVOL and Fluid Structure Interaction Drop ContainerBA300600900120015001177.521171.56

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL libjxl 0.7Input: JPEG - Quality: 80BA2468108.088.121. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL libjxl 0.7Input: PNG - Quality: 90BA2468108.098.131. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

Apache HBase

This is a benchmark of the Apache HBase non-relational distributed database system inspired by Google's Bigtable. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRows Per Second, More Is BetterApache HBase 2.5.0Rows: 10000 - Test: Increment - Clients: 4BA3K6K9K12K15K1206712009

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the processor across a wide variety of cryptocurrencies. The benchmark reports the hash speed for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: x25xBA70140210280350305.31303.961. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.10Device: CPU - Batch Size: 32 - Model: AlexNetBA2040608010081.4081.75

Apache HBase

This is a benchmark of the Apache HBase non-relational distributed database system inspired by Google's Bigtable. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRows Per Second, More Is BetterApache HBase 2.5.0Rows: 10000 - Test: Async Random Read - Clients: 4BA4K8K12K16K20K1924319162

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.11Encoder Speed: 6, LosslessBA369121511.5311.491. (CXX) g++ options: -O3 -fPIC -lm

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.10Device: CPU - Batch Size: 16 - Model: ResNet-50BA369121512.8912.84

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the processor across a wide variety of cryptocurrencies. The benchmark reports the hash speed for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: RingcoinBA300600900120015001502.061507.251. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Neural Magic DeepSparse

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-StreamBA102030405043.7343.88

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-StreamBA51015202522.8622.78

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.10Device: CPU - Batch Size: 256 - Model: GoogLeNetBA91827364539.6839.81

Apache HBase

This is a benchmark of the Apache HBase non-relational distributed database system inspired by Google's Bigtable. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRows Per Second, More Is BetterApache HBase 2.5.0Rows: 1000000 - Test: Async Random Read - Clients: 4BA6K12K18K24K30K2731227224

Neural Magic DeepSparse

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-StreamBA4812162017.0516.99

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: CV Detection,YOLOv5s COCO - Scenario: Synchronous Single-StreamBA102030405042.9342.80

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.10Device: CPU - Batch Size: 16 - Model: GoogLeNetBA91827364537.6137.73

Neural Magic DeepSparse

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: CV Detection,YOLOv5s COCO - Scenario: Synchronous Single-StreamBA61218243023.2923.36

QuadRay

VectorChief's QuadRay is a real-time ray-tracing engine written to support SIMD across ARM, MIPS, PPC, and x86/x86_64 processors. QuadRay supports SSE/SSE2/SSE4 and AVX/AVX2/AVX-512 usage on Intel/AMD CPUs. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterQuadRay 2022.05.25Scene: 3 - Resolution: 1080pBA0.8731.7462.6193.4924.3653.883.871. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL libjxl 0.7Input: JPEG - Quality: 90BA2468107.937.951. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.10Device: CPU - Batch Size: 64 - Model: GoogLeNetBA91827364538.2138.30

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.10Device: CPU - Batch Size: 256 - Model: AlexNetBA2040608010093.4693.65

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the processor across a wide variety of cryptocurrencies. The benchmark reports the hash speed for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: Quad SHA-256, PyriteBA12K24K36K48K60K54960550501. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: Blake-2 SBA50K100K150K200K250K2264102267801. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: scryptBA20406080100102.45102.291. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Neural Magic DeepSparse

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-StreamBA90180270360450396.88396.33

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-StreamBA122436486052.8452.91

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the processor across a wide variety of cryptocurrencies. The benchmark reports the hash speed for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: SkeincoinBA10K20K30K40K50K44480444301. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.11Encoder Speed: 2BA2040608010090.7490.821. (CXX) g++ options: -O3 -fPIC -lm

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.10Device: CPU - Batch Size: 32 - Model: GoogLeNetBA91827364537.1737.14

Neural Magic DeepSparse

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-StreamBA0.90271.80542.70813.61084.51354.01224.0091

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.10Device: CPU - Batch Size: 64 - Model: AlexNetBA2040608010086.9986.94

Neural Magic DeepSparse

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-StreamBA306090120150131.74131.68

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-StreamBA4812162017.5917.59

OpenRadioss

OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. It is based on Altair Radioss and was open-sourced in 2022. The solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test currently uses a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterOpenRadioss 2022.10.13Model: Rubber O-Ring Seal InstallationBA100200300400500455.55455.47

Apache HBase

This is a benchmark of the Apache HBase non-relational distributed database system inspired by Google's Bigtable. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMicroseconds - Average Latency, Fewer Is BetterApache HBase 2.5.0Rows: 1000000 - Test: Scan - Clients: 4B369121511

OpenBenchmarking.orgMicroseconds - Average Latency, Fewer Is BetterApache HBase 2.5.0Rows: 1000000 - Test: Scan - Clients: 1B4812162017

OpenBenchmarking.orgMicroseconds - Average Latency, Fewer Is BetterApache HBase 2.5.0Rows: 10000 - Test: Scan - Clients: 4B102030405046

OpenBenchmarking.orgMicroseconds - Average Latency, Fewer Is BetterApache HBase 2.5.0Rows: 10000 - Test: Scan - Clients: 1B153045607566

OpenBenchmarking.orgMicroseconds - Average Latency, Fewer Is BetterApache HBase 2.5.0Rows: 1000000 - Test: Async Random Read - Clients: 4BA306090120150146146

OpenBenchmarking.orgMicroseconds - Average Latency, Fewer Is BetterApache HBase 2.5.0Rows: 1000000 - Test: Sequential Write - Clients: 4BA13263952655959

OpenBenchmarking.orgMicroseconds - Average Latency, Fewer Is BetterApache HBase 2.5.0Rows: 1000000 - Test: Sequential Write - Clients: 1BA369121599

OpenBenchmarking.orgMicroseconds - Average Latency, Fewer Is BetterApache HBase 2.5.0Rows: 1000000 - Test: Random Write - Clients: 1BA36912151313

OpenBenchmarking.orgRows Per Second, More Is BetterApache HBase 2.5.0Rows: 1000000 - Test: Scan - Clients: 4B80K160K240K320K400K358633

Rows: 1000000 - Test: Scan - Clients: 4

A: The test run did not produce a result.

OpenBenchmarking.orgRows Per Second, More Is BetterApache HBase 2.5.0Rows: 1000000 - Test: Scan - Clients: 1B12K24K36K48K60K57202

Rows: 1000000 - Test: Scan - Clients: 1

A: The test run did not produce a result.

OpenBenchmarking.orgRows Per Second, More Is BetterApache HBase 2.5.0Rows: 10000 - Test: Scan - Clients: 4B16K32K48K64K80K72964

Rows: 10000 - Test: Scan - Clients: 4

A: The test run did not produce a result.

OpenBenchmarking.orgRows Per Second, More Is BetterApache HBase 2.5.0Rows: 10000 - Test: Scan - Clients: 1B3K6K9K12K15K13459

Rows: 10000 - Test: Scan - Clients: 1

A: The test run did not produce a result.

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the processor across a wide variety of cryptocurrencies. The benchmark reports the hash speed for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: LBC, LBRY CreditsBA3K6K9K12K15K15260152601. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

QuadRay

VectorChief's QuadRay is a real-time ray-tracing engine written to support SIMD across ARM, MIPS, PPC, and x86/x86_64 processors. QuadRay supports SSE/SSE2/SSE4 and AVX/AVX2/AVX-512 usage on Intel/AMD CPUs. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterQuadRay 2022.05.25Scene: 2 - Resolution: 1080pBA1.0532.1063.1594.2125.2654.684.681. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

OpenBenchmarking.orgFPS, More Is BetterQuadRay 2022.05.25Scene: 5 - Resolution: 4KBA0.06750.1350.20250.270.33750.30.31. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

OpenBenchmarking.orgFPS, More Is BetterQuadRay 2022.05.25Scene: 3 - Resolution: 4KBA0.22730.45460.68190.90921.13651.011.011. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

OpenBenchmarking.orgFPS, More Is BetterQuadRay 2022.05.25Scene: 1 - Resolution: 4KBA0.99451.9892.98353.9784.97254.424.421. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL libjxl 0.7Input: JPEG - Quality: 100BA0.15750.3150.47250.630.78750.70.71. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries. Learn more via the OpenBenchmarking.org test page.

Device: CPU - Batch Size: 256 - Model: ResNet-50

A: The test quit with a non-zero exit status. E: Fatal Python error: Segmentation fault

B: The test quit with a non-zero exit status. E: Fatal Python error: Segmentation fault

144 Results Shown

Apache HBase:
  1000000 - Rand Read - 4:
    Microseconds - Average Latency
    Rows Per Second
  10000 - Async Rand Write - 1:
    Microseconds - Average Latency
    Rows Per Second
  1000000 - Increment - 1:
    Microseconds - Average Latency
    Rows Per Second
  1000000 - Rand Read - 1:
    Microseconds - Average Latency
    Rows Per Second
  10000 - Rand Write - 4:
    Microseconds - Average Latency
    Rows Per Second
  10000 - Rand Read - 4:
    Microseconds - Average Latency
  1000000 - Seq Read - 4:
    Rows Per Second
  10000 - Rand Read - 4:
    Rows Per Second
  1000000 - Seq Read - 4:
    Microseconds - Average Latency
  10000 - Seq Read - 1:
    Microseconds - Average Latency
    Rows Per Second
  10000 - Rand Read - 1:
    Microseconds - Average Latency
    Rows Per Second
  1000000 - Rand Write - 4:
    Microseconds - Average Latency
  10000 - Increment - 1:
    Microseconds - Average Latency
  10000 - Seq Write - 1:
    Rows Per Second
  10000 - Increment - 1:
    Rows Per Second
  10000 - Seq Read - 4:
    Rows Per Second
    Microseconds - Average Latency
  1000000 - Async Rand Read - 1:
    Rows Per Second
  1000000 - Increment - 4:
    Microseconds - Average Latency
    Rows Per Second
Neural Magic DeepSparse
Apache HBase:
  10000 - Seq Write - 1
  10000 - Seq Write - 4
Neural Magic DeepSparse
Apache HBase:
  1000000 - Async Rand Read - 1
  1000000 - Seq Read - 1
  1000000 - Seq Read - 1
  1000000 - Seq Write - 4
  1000000 - Async Rand Write - 4
  1000000 - Async Rand Write - 4
  1000000 - Rand Write - 4
Neural Magic DeepSparse
Apache HBase
OpenRadioss
Apache HBase
Neural Magic DeepSparse
OpenRadioss
Apache HBase:
  10000 - Async Rand Read - 1
  10000 - Rand Write - 1
  10000 - Rand Write - 1
  1000000 - Seq Write - 1
Cpuminer-Opt
Apache HBase:
  10000 - Async Rand Write - 4:
    Rows Per Second
    Microseconds - Average Latency
Neural Magic DeepSparse:
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
OpenRadioss
libavif avifenc
JPEG XL Decoding libjxl
QuadRay
Neural Magic DeepSparse:
  NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream:
    ms/batch
    items/sec
JPEG XL libjxl
Xmrig
Neural Magic DeepSparse:
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Synchronous Single-Stream:
    items/sec
    ms/batch
JPEG XL Decoding libjxl
libavif avifenc
Apache HBase:
  1000000 - Async Rand Write - 1
  10000 - Async Rand Read - 4
Xmrig
Neural Magic DeepSparse:
  NLP Text Classification, BERT base uncased SST2 - Synchronous Single-Stream:
    items/sec
    ms/batch
  NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream:
    ms/batch
    items/sec
Cpuminer-Opt
QuadRay
Apache HBase
QuadRay
Neural Magic DeepSparse:
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream
  NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream
libavif avifenc
JPEG XL libjxl
TensorFlow:
  CPU - 16 - AlexNet
  CPU - 64 - ResNet-50
Neural Magic DeepSparse:
  NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream:
    ms/batch
    items/sec
Apache HBase:
  1000000 - Rand Write - 1
  10000 - Increment - 4
FLAC Audio Encoding
TensorFlow
Cpuminer-Opt:
  Deepcoin
  Garlicoin
  Triple SHA-256, Onecoin
OpenRadioss
JPEG XL libjxl:
  JPEG - 80
  PNG - 90
Apache HBase
Cpuminer-Opt
TensorFlow
Apache HBase
libavif avifenc
TensorFlow
Cpuminer-Opt
Neural Magic DeepSparse:
  CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream:
    items/sec
    ms/batch
TensorFlow
Apache HBase
Neural Magic DeepSparse:
  NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream
  CV Detection,YOLOv5s COCO - Synchronous Single-Stream
TensorFlow
Neural Magic DeepSparse
QuadRay
JPEG XL libjxl
TensorFlow:
  CPU - 64 - GoogLeNet
  CPU - 256 - AlexNet
Cpuminer-Opt:
  Quad SHA-256, Pyrite
  Blake-2 S
  scrypt
Neural Magic DeepSparse:
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream
Cpuminer-Opt
libavif avifenc
TensorFlow
Neural Magic DeepSparse
TensorFlow
Neural Magic DeepSparse:
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream
OpenRadioss
Apache HBase:
  1000000 - Scan - 4
  1000000 - Scan - 1
  10000 - Scan - 4
  10000 - Scan - 1
  1000000 - Async Rand Read - 4
  1000000 - Seq Write - 4
  1000000 - Seq Write - 1
  1000000 - Rand Write - 1
  1000000 - Scan - 4
  1000000 - Scan - 1
  10000 - Scan - 4
  10000 - Scan - 1
Cpuminer-Opt
QuadRay:
  2 - 1080p
  5 - 4K
  3 - 4K
  1 - 4K
JPEG XL libjxl