1280p October

Intel Core i7-1280P testing with an MSI MS-14C6 (E14C6IMS.115 BIOS) motherboard and MSI Intel ADL GT2 14GB graphics on Ubuntu 22.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2210278-NE-1280POCTO70
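For scripted or CI use the same comparison can be launched non-interactively; a minimal Python sketch, assuming phoronix-test-suite is installed and on PATH and that batch defaults have already been configured with phoronix-test-suite batch-setup:

    import subprocess

    # Result file ID from this page; benchmarking against it pulls down the
    # baseline data and appends a local run for comparison.
    RESULT_ID = "2210278-NE-1280POCTO70"

    # "batch-benchmark" skips the interactive prompts of plain "benchmark"
    # once `phoronix-test-suite batch-setup` has been run.
    subprocess.run(["phoronix-test-suite", "batch-benchmark", RESULT_ID], check=True)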

Tests in this result file fall under the following categories:

CPU Massive 3 Tests
Creator Workloads 5 Tests
Cryptocurrency Benchmarks, CPU Mining Tests 2 Tests
Cryptography 2 Tests
Encoding 2 Tests
HPC - High Performance Computing 3 Tests
Imaging 3 Tests
Machine Learning 2 Tests
Multi-Core 3 Tests
Python Tests 2 Tests


Run Management

  A: October 27 2022 - Run Test Duration: 3 Hours, 18 Minutes
  B: October 27 2022 - Run Test Duration: 3 Hours, 25 Minutes
  Average Run Test Duration: 3 Hours, 22 Minutes


1280p October Benchmarks - OpenBenchmarking.org - Phoronix Test Suite

Processor: Intel Core i7-1280P @ 4.80GHz (14 Cores / 20 Threads)
Motherboard: MSI MS-14C6 (E14C6IMS.115 BIOS)
Chipset: Intel Alder Lake PCH
Memory: 16GB
Disk: 1024GB Micron_3400_MTFDKBA1T0TFH
Graphics: MSI Intel ADL GT2 14GB (1450MHz)
Audio: Realtek ALC274
Network: Intel Alder Lake-P PCH CNVi WiFi
OS: Ubuntu 22.04
Kernel: 5.15.0-43-generic (x86_64)
Desktop: KDE Plasma 5.24.4
Display Server: X Server 1.21.1.3
OpenGL: 4.6 Mesa 22.0.5
Vulkan: 1.3.204
Compiler: GCC 11.3.0
File-System: ext4
Screen Resolution: 1920x1080

System Logs:
- Transparent Huge Pages: madvise
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0x41c - Thermald 2.4.9
- OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu122.04)
- Python 3.10.6
- itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected

A vs. B Comparison (Phoronix Test Suite): chart of the per-test percentage differences between runs A and B. The largest spreads are in the Apache HBase results (up to ~134.8% for Rows: 1000000 - Random Read - Clients: 4 and ~51.5% for Rows: 10000 - Async Random Write - Clients: 1), with the remaining Apache HBase, Neural Magic DeepSparse, OpenRadioss, Cpuminer-Opt, and libavif avifenc deltas tapering down to roughly 2%; the individual values appear with each result below.
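The percentages in the comparison chart are the relative spread between the two runs, expressed against the smaller of the two values; checking the first Apache HBase result below (69 vs. 162 microseconds average latency) reproduces the listed ~134.8% figure. A small sketch of that arithmetic:

    # Relative spread between runs A and B, expressed against the smaller value.
    def percent_delta(a: float, b: float) -> float:
        low, high = sorted((a, b))
        return (high / low - 1.0) * 100.0

    print(round(percent_delta(69, 162), 1))       # 134.8 (average latency)
    print(round(percent_delta(57264, 24555), 1))  # 133.2 (rows per second)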

1280p October - result overview table (OpenBenchmarking.org / Phoronix Test Suite): the per-test values for runs A and B summarized here are listed individually with each result below.

Apache HBase

This is a benchmark of the Apache HBase non-relational distributed database system inspired by Google's Bigtable. Learn more via the OpenBenchmarking.org test page.
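The Rows/Test/Clients parameters in the headings below correspond to HBase's bundled PerformanceEvaluation workloads; a minimal sketch of how such a run could be reproduced by hand (an assumption for illustration, not necessarily the exact invocation the Phoronix test profile uses):

    import subprocess

    # Hypothetical wrapper around `hbase pe` (PerformanceEvaluation); --nomapred
    # runs the requested number of clients as local threads rather than as a
    # MapReduce job.
    def run_hbase_pe(test: str, rows: int, clients: int) -> None:
        subprocess.run(
            ["hbase", "pe", "--nomapred", f"--rows={rows}", test, str(clients)],
            check=True,
        )

    # e.g. the "Rows: 1000000 - Test: Random Read - Clients: 4" case
    run_hbase_pe("randomRead", 1_000_000, 4)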

Apache HBase 2.5.0 - Rows: 1000000 - Test: Random Read - Clients: 4 (Microseconds - Average Latency, Fewer Is Better): A: 69, B: 162

Apache HBase 2.5.0 - Rows: 1000000 - Test: Random Read - Clients: 4 (Rows Per Second, More Is Better): A: 57264, B: 24555

Apache HBase 2.5.0 - Rows: 10000 - Test: Async Random Write - Clients: 1 (Microseconds - Average Latency, Fewer Is Better): A: 500, B: 330

Apache HBase 2.5.0 - Rows: 10000 - Test: Async Random Write - Clients: 1 (Rows Per Second, More Is Better): A: 1968, B: 2961

Apache HBase 2.5.0 - Rows: 1000000 - Test: Increment - Clients: 1 (Microseconds - Average Latency, Fewer Is Better): A: 75, B: 107

Apache HBase 2.5.0 - Rows: 1000000 - Test: Increment - Clients: 1 (Rows Per Second, More Is Better): A: 13112, B: 9296

Apache HBase 2.5.0 - Rows: 1000000 - Test: Random Read - Clients: 1 (Microseconds - Average Latency, Fewer Is Better): A: 54, B: 72

Apache HBase 2.5.0 - Rows: 1000000 - Test: Random Read - Clients: 1 (Rows Per Second, More Is Better): A: 18161, B: 13831

Apache HBase 2.5.0 - Rows: 10000 - Test: Random Write - Clients: 4 (Microseconds - Average Latency, Fewer Is Better): A: 64, B: 51

Apache HBase 2.5.0 - Rows: 10000 - Test: Random Write - Clients: 4 (Rows Per Second, More Is Better): A: 59181, B: 72706

Apache HBase 2.5.0 - Rows: 10000 - Test: Random Read - Clients: 4 (Microseconds - Average Latency, Fewer Is Better): A: 188, B: 224

Apache HBase 2.5.0 - Rows: 1000000 - Test: Sequential Read - Clients: 4 (Rows Per Second, More Is Better): A: 50972, B: 42859

Apache HBase 2.5.0 - Rows: 10000 - Test: Random Read - Clients: 4 (Rows Per Second, More Is Better): A: 20447, B: 17289

Apache HBase 2.5.0 - Rows: 1000000 - Test: Sequential Read - Clients: 4 (Microseconds - Average Latency, Fewer Is Better): A: 78, B: 92

Apache HBase 2.5.0 - Rows: 10000 - Test: Sequential Read - Clients: 1 (Microseconds - Average Latency, Fewer Is Better): A: 232, B: 201

Apache HBase 2.5.0 - Rows: 10000 - Test: Sequential Read - Clients: 1 (Rows Per Second, More Is Better): A: 4195, B: 4831

Apache HBase 2.5.0 - Rows: 10000 - Test: Random Read - Clients: 1 (Microseconds - Average Latency, Fewer Is Better): A: 229, B: 263

Apache HBase 2.5.0 - Rows: 10000 - Test: Random Read - Clients: 1 (Rows Per Second, More Is Better): A: 4219, B: 3704

Apache HBase 2.5.0 - Rows: 1000000 - Test: Random Write - Clients: 4 (Microseconds - Average Latency, Fewer Is Better): A: 35, B: 32

Apache HBase 2.5.0 - Rows: 10000 - Test: Increment - Clients: 1 (Microseconds - Average Latency, Fewer Is Better): A: 368, B: 398

Apache HBase 2.5.0 - Rows: 10000 - Test: Sequential Write - Clients: 1 (Rows Per Second, More Is Better): A: 24691, B: 26667

Apache HBase 2.5.0 - Rows: 10000 - Test: Increment - Clients: 1 (Rows Per Second, More Is Better): A: 2650, B: 2455

Apache HBase 2.5.0 - Rows: 10000 - Test: Sequential Read - Clients: 4 (Rows Per Second, More Is Better): A: 17814, B: 19048

Apache HBase 2.5.0 - Rows: 10000 - Test: Sequential Read - Clients: 4 (Microseconds - Average Latency, Fewer Is Better): A: 217, B: 203

Apache HBase 2.5.0 - Rows: 1000000 - Test: Async Random Read - Clients: 1 (Rows Per Second, More Is Better): A: 17087, B: 16057

Apache HBase 2.5.0 - Rows: 1000000 - Test: Increment - Clients: 4 (Microseconds - Average Latency, Fewer Is Better): A: 152, B: 143

Apache HBase 2.5.0 - Rows: 1000000 - Test: Increment - Clients: 4 (Rows Per Second, More Is Better): A: 26025, B: 27633

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): A: 274.72, B: 259.38

Apache HBase

This is a benchmark of the Apache HBase non-relational distributed database system inspired by Google's Bigtable. Learn more via the OpenBenchmarking.org test page.

Apache HBase 2.5.0 - Rows: 10000 - Test: Sequential Write - Clients: 1 (Microseconds - Average Latency, Fewer Is Better): A: 36, B: 34

Apache HBase 2.5.0 - Rows: 10000 - Test: Sequential Write - Clients: 4 (Rows Per Second, More Is Better): A: 69330, B: 65691

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): A: 25.34, B: 26.74

Apache HBase

This is a benchmark of the Apache HBase non-relational distributed database system inspired by Google's Bigtable. Learn more via the OpenBenchmarking.org test page.

Apache HBase 2.5.0 - Rows: 1000000 - Test: Async Random Read - Clients: 1 (Microseconds - Average Latency, Fewer Is Better): A: 58, B: 61

Apache HBase 2.5.0 - Rows: 1000000 - Test: Sequential Read - Clients: 1 (Microseconds - Average Latency, Fewer Is Better): A: 58, B: 61

Apache HBase 2.5.0 - Rows: 1000000 - Test: Sequential Read - Clients: 1 (Rows Per Second, More Is Better): A: 16965, B: 16144

Apache HBase 2.5.0 - Rows: 1000000 - Test: Sequential Write - Clients: 4 (Rows Per Second, More Is Better): A: 63061, B: 66265

Apache HBase 2.5.0 - Rows: 1000000 - Test: Async Random Write - Clients: 4 (Rows Per Second, More Is Better): A: 16704, B: 15905

Apache HBase 2.5.0 - Rows: 1000000 - Test: Async Random Write - Clients: 4 (Microseconds - Average Latency, Fewer Is Better): A: 239, B: 250

Apache HBase 2.5.0 - Rows: 1000000 - Test: Random Write - Clients: 4 (Rows Per Second, More Is Better): A: 80850, B: 77312

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): A: 35.69, B: 37.16

Apache HBase

This is a benchmark of the Apache HBase non-relational distributed database system inspired by Google's Bigtable. Learn more via the OpenBenchmarking.org test page.

Apache HBase 2.5.0 - Rows: 10000 - Test: Async Random Read - Clients: 1 (Microseconds - Average Latency, Fewer Is Better): A: 225, B: 234

OpenRadioss

OpenRadioss is an open-source AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss and was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2022.10.13 - Model: Bumper Beam (Seconds, Fewer Is Better): A: 350.53, B: 337.54

Apache HBase

This is a benchmark of the Apache HBase non-relational distributed database system inspired by Google's Bigtable. Learn more via the OpenBenchmarking.org test page.

Apache HBase 2.5.0 - Rows: 10000 - Test: Sequential Write - Clients: 4 (Microseconds - Average Latency, Fewer Is Better): A: 54, B: 56

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): A: 194.74, B: 187.90

OpenRadioss

OpenRadioss is an open-source AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss and was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2022.10.13 - Model: Cell Phone Drop Test (Seconds, Fewer Is Better): A: 268.34, B: 259.53

Apache HBase

This is a benchmark of the Apache HBase non-relational distributed database system inspired by Google's Bigtable. Learn more via the OpenBenchmarking.org test page.

Apache HBase 2.5.0 - Rows: 10000 - Test: Async Random Read - Clients: 1 (Rows Per Second, More Is Better): A: 4281, B: 4141

Apache HBase 2.5.0 - Rows: 10000 - Test: Random Write - Clients: 1 (Rows Per Second, More Is Better): A: 25707, B: 26455

Apache HBase 2.5.0 - Rows: 10000 - Test: Random Write - Clients: 1 (Microseconds - Average Latency, Fewer Is Better): A: 36, B: 35

Apache HBase 2.5.0 - Rows: 1000000 - Test: Sequential Write - Clients: 1 (Rows Per Second, More Is Better): A: 101153, B: 103993

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.
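Cpuminer-Opt can be exercised without a pool via an offline benchmark mode; a minimal sketch (the flag names follow the cpuminer-multi lineage and are an assumption here, not taken from the test profile itself):

    import subprocess

    # Hypothetical offline run: --benchmark mines against dummy work, -a selects
    # the algorithm (scrypt is one of the algorithms shown in this result file),
    # and --time-limit bounds the run length in seconds.
    subprocess.run(
        ["cpuminer", "-a", "scrypt", "--benchmark", "--time-limit=60"],
        check=True,
    )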

Cpuminer-Opt 3.20.3 - Algorithm: Myriad-Groestl (kH/s, More Is Better): A: 7904.25, B: 7709.44. 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Apache HBase

This is a benchmark of the Apache HBase non-relational distributed database system inspired by Google's Bigtable. Learn more via the OpenBenchmarking.org test page.

Apache HBase 2.5.0 - Rows: 10000 - Test: Async Random Write - Clients: 4 (Rows Per Second, More Is Better): A: 11327, B: 11599

Apache HBase 2.5.0 - Rows: 10000 - Test: Async Random Write - Clients: 4 (Microseconds - Average Latency, Fewer Is Better): A: 344, B: 336

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): A: 1753.05, B: 1714.86

Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): A: 3.9097, B: 3.9958

OpenRadioss

OpenRadioss is an open-source AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss and was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2022.10.13 - Model: Bird Strike on Windshield (Seconds, Fewer Is Better): A: 582.79, B: 571.29

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.
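The Encoder Speed / Lossless labels below map onto avifenc's speed and lossless switches; a minimal sketch with placeholder file names (an assumption about the profile's exact arguments):

    import subprocess

    # Hypothetical avifenc call for the "Encoder Speed: 10, Lossless" configuration:
    # -s picks the 0-10 speed preset (10 = fastest) and -l requests lossless encoding.
    subprocess.run(["avifenc", "-s", "10", "-l", "input.jpg", "output.avif"], check=True)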

libavif avifenc 0.11 - Encoder Speed: 10, Lossless (Seconds, Fewer Is Better): A: 5.852, B: 5.738. 1. (CXX) g++ options: -O3 -fPIC -lm

JPEG XL Decoding libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is suited for JPEG XL decode performance testing to a PNG output file; the pts/jpexl test is for encode performance. The JPEG XL encoding/decoding is done using the libjxl codebase. Learn more via the OpenBenchmarking.org test page.

JPEG XL Decoding libjxl 0.7 - CPU Threads: 1 (MP/s, More Is Better): A: 47.73, B: 46.84

QuadRay

VectorChief's QuadRay is a real-time ray-tracing engine written to support SIMD across ARM, MIPS, PPC, and x86/x86_64 processors. QuadRay supports SSE/SSE2/SSE4 and AVX/AVX2/AVX-512 usage on Intel/AMD CPUs. Learn more via the OpenBenchmarking.org test page.

QuadRay 2022.05.25 - Scene: 5 - Resolution: 1080p (FPS, More Is Better): A: 1.18, B: 1.16. 1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): A: 31.66, B: 31.22

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (items/sec, More Is Better): A: 31.58, B: 32.03
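For the synchronous single-stream DeepSparse scenarios the two reported metrics are close to reciprocals of each other (one stream, effectively one item per batch), which is a handy consistency check when reading these charts; the asynchronous multi-stream numbers do not follow this rule because multiple streams complete batches concurrently.

    # Run A's DistilBERT mnli synchronous single-stream numbers from above:
    # 31.66 ms/batch should correspond to roughly 1000 / 31.66 items/sec.
    ms_per_batch = 31.66
    print(f"{1000.0 / ms_per_batch:.2f} items/sec")  # ~31.59, vs. the reported 31.58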

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.
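The Input / Quality labels on the encode results correspond to the reference encoder's input file and quality setting; a minimal cjxl sketch with placeholder file names (an assumption about the profile's exact arguments):

    import subprocess

    # Hypothetical libjxl reference-encoder call for an "Input: PNG - Quality: 90" style run;
    # -q sets the quality level on a 0-100 scale.
    subprocess.run(["cjxl", "-q", "90", "input.png", "output.jxl"], check=True)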

JPEG XL libjxl 0.7 - Input: PNG - Quality: 100 (MP/s, More Is Better): A: 0.71, B: 0.72. 1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

Xmrig

Xmrig is an open-source cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight and AstroBWT. This test profile is set up to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.

Xmrig 6.18.1 - Variant: Wownero - Hash Count: 1M (H/s, More Is Better): A: 4515.4, B: 4454.1. 1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream (items/sec, More Is Better): A: 14.83, B: 14.66

Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): A: 67.42, B: 68.22

JPEG XL Decoding libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is suited for JPEG XL decode performance testing to a PNG output file; the pts/jpexl test is for encode performance. The JPEG XL encoding/decoding is done using the libjxl codebase. Learn more via the OpenBenchmarking.org test page.

JPEG XL Decoding libjxl 0.7 - CPU Threads: All (MP/s, More Is Better): A: 230.27, B: 227.60

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.11 - Encoder Speed: 0 (Seconds, Fewer Is Better): A: 209.81, B: 212.10. 1. (CXX) g++ options: -O3 -fPIC -lm

Apache HBase

This is a benchmark of the Apache HBase non-relational distributed database system inspired by Google's Bigtable. Learn more via the OpenBenchmarking.org test page.

Apache HBase 2.5.0 - Rows: 1000000 - Test: Async Random Write - Clients: 1 (Rows Per Second, More Is Better): A: 8110, B: 8024

Apache HBase 2.5.0 - Rows: 10000 - Test: Async Random Read - Clients: 4 (Microseconds - Average Latency, Fewer Is Better): A: 198, B: 200

Xmrig

Xmrig is an open-source cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight and AstroBWT. This test profile is set up to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.

Xmrig 6.18.1 - Variant: Monero - Hash Count: 1M (H/s, More Is Better): A: 3546.7, B: 3511.5. 1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream (items/sec, More Is Better): A: 14.91, B: 15.06

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): A: 67.07, B: 66.41

Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): A: 261.04, B: 258.63

Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (items/sec, More Is Better): A: 3.8307, B: 3.8664

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

Cpuminer-Opt 3.20.3 - Algorithm: Magi (kH/s, More Is Better): A: 291.22, B: 293.87. 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

QuadRay

VectorChief's QuadRay is a real-time ray-tracing engine written to support SIMD across ARM, MIPS, PPC, and x86/x86_64 processors. QuadRay supports SSE/SSE2/SSE4 and AVX/AVX2/AVX-512 usage on Intel/AMD CPUs. Learn more via the OpenBenchmarking.org test page.

QuadRay 2022.05.25 - Scene: 2 - Resolution: 4K (FPS, More Is Better): A: 1.23, B: 1.22. 1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

Apache HBase

This is a benchmark of the Apache HBase non-relational distributed database system inspired by Google's Bigtable. Learn more via the OpenBenchmarking.org test page.

Apache HBase 2.5.0 - Rows: 1000000 - Test: Async Random Write - Clients: 1 (Microseconds - Average Latency, Fewer Is Better): A: 123, B: 124

QuadRay

VectorChief's QuadRay is a real-time ray-tracing engine written to support SIMD across ARM, MIPS, PPC, and x86/x86_64 processors. QuadRay supports SSE/SSE2/SSE4 and AVX/AVX2/AVX-512 usage on Intel/AMD CPUs. Learn more via the OpenBenchmarking.org test page.

QuadRay 2022.05.25 - Scene: 1 - Resolution: 1080p (FPS, More Is Better): A: 16.45, B: 16.32. 1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): A: 1662.03, B: 1649.63

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): A: 408.20, B: 405.18

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.11 - Encoder Speed: 6 (Seconds, Fewer Is Better): A: 7.843, B: 7.901. 1. (CXX) g++ options: -O3 -fPIC -lm

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

JPEG XL libjxl 0.7 - Input: PNG - Quality: 80 (MP/s, More Is Better): A: 8.24, B: 8.18. 1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.
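The Device / Batch Size / Model labels on the TensorFlow results map onto tf_cnn_benchmarks.py flags; a minimal sketch of one configuration (the exact arguments used by the test profile are an assumption):

    import subprocess

    # Hypothetical invocation of the reference benchmark script for the
    # "Device: CPU - Batch Size: 64 - Model: ResNet-50" configuration.
    subprocess.run(
        [
            "python", "tf_cnn_benchmarks.py",
            "--device=cpu",
            "--data_format=NHWC",  # CPU runs generally want NHWC rather than NCHW
            "--model=resnet50",
            "--batch_size=64",
            "--num_batches=100",
        ],
        check=True,
    )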

TensorFlow 2.10 - Device: CPU - Batch Size: 16 - Model: AlexNet (images/sec, More Is Better): A: 74.37, B: 74.90

TensorFlow 2.10 - Device: CPU - Batch Size: 64 - Model: ResNet-50 (images/sec, More Is Better): A: 13.60, B: 13.51

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): A: 259.12, B: 257.49

Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (items/sec, More Is Better): A: 3.8591, B: 3.8836

Apache HBase

This is a benchmark of the Apache HBase non-relational distributed database system inspired by Google's Bigtable. Learn more via the OpenBenchmarking.org test page.

Apache HBase 2.5.0 - Rows: 1000000 - Test: Random Write - Clients: 1 (Rows Per Second, More Is Better): A: 75552, B: 75081

Apache HBase 2.5.0 - Rows: 10000 - Test: Increment - Clients: 4 (Microseconds - Average Latency, Fewer Is Better): A: 324, B: 322

FLAC Audio Encoding

This test times how long it takes to encode a sample WAV file to FLAC audio format ten times using the --best preset settings. Learn more via the OpenBenchmarking.org test page.
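A minimal sketch of what that amounts to, timing ten back-to-back encodes of a sample WAV with the --best preset (the file name and the loop are illustrative assumptions, not the profile's own script):

    import subprocess
    import time

    WAV = "sample.wav"  # placeholder input file

    start = time.perf_counter()
    for i in range(10):
        # --best is flac's highest-compression preset; -f overwrites any existing output.
        subprocess.run(["flac", "--best", "-f", "-o", f"out_{i}.flac", WAV], check=True)
    print(f"{time.perf_counter() - start:.2f} s for ten encodes")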

FLAC Audio Encoding 1.4 - WAV To FLAC (Seconds, Fewer Is Better): A: 16.65, B: 16.55. 1. (CXX) g++ options: -O3 -fvisibility=hidden -logg -lm

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 32 - Model: ResNet-50 (images/sec, More Is Better): A: 13.34, B: 13.26

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

Cpuminer-Opt 3.20.3 - Algorithm: Deepcoin (kH/s, More Is Better): A: 5429.97, B: 5399.00. 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Cpuminer-Opt 3.20.3 - Algorithm: Garlicoin (kH/s, More Is Better): A: 1280.74, B: 1273.47. 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Cpuminer-Opt 3.20.3 - Algorithm: Triple SHA-256, Onecoin (kH/s, More Is Better): A: 116170, B: 115560. 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

OpenRadioss

OpenRadioss is an open-source AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss and was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2022.10.13 - Model: INIVOL and Fluid Structure Interaction Drop Container (Seconds, Fewer Is Better): A: 1171.56, B: 1177.52

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

JPEG XL libjxl 0.7 - Input: JPEG - Quality: 80 (MP/s, More Is Better): A: 8.12, B: 8.08. 1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

JPEG XL libjxl 0.7 - Input: PNG - Quality: 90 (MP/s, More Is Better): A: 8.13, B: 8.09. 1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

Apache HBase

This is a benchmark of the Apache HBase non-relational distributed database system inspired by Google's Bigtable. Learn more via the OpenBenchmarking.org test page.

Apache HBase 2.5.0 - Rows: 10000 - Test: Increment - Clients: 4 (Rows Per Second, More Is Better): A: 12009, B: 12067

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

Cpuminer-Opt 3.20.3 - Algorithm: x25x (kH/s, More Is Better): A: 303.96, B: 305.31. 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 32 - Model: AlexNet (images/sec, More Is Better): A: 81.75, B: 81.40

Apache HBase

This is a benchmark of the Apache HBase non-relational distributed database system inspired by Google's Bigtable. Learn more via the OpenBenchmarking.org test page.

Apache HBase 2.5.0 - Rows: 10000 - Test: Async Random Read - Clients: 4 (Rows Per Second, More Is Better): A: 19162, B: 19243

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.11 - Encoder Speed: 6, Lossless (Seconds, Fewer Is Better): A: 11.49, B: 11.53. 1. (CXX) g++ options: -O3 -fPIC -lm

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 16 - Model: ResNet-50 (images/sec, More Is Better): A: 12.84, B: 12.89

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

Cpuminer-Opt 3.20.3 - Algorithm: Ringcoin (kH/s, More Is Better): A: 1507.25, B: 1502.06. 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (items/sec, More Is Better): A: 43.88, B: 43.73

Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): A: 22.78, B: 22.86

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 256 - Model: GoogLeNet (images/sec, More Is Better): A: 39.81, B: 39.68

Apache HBase

This is a benchmark of the Apache HBase non-relational distributed database system inspired by Google's Bigtable. Learn more via the OpenBenchmarking.org test page.

Apache HBase 2.5.0 - Rows: 1000000 - Test: Async Random Read - Clients: 4 (Rows Per Second, More Is Better): A: 27224, B: 27312

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): A: 16.99, B: 17.05

Neural Magic DeepSparse 1.1 - Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): A: 42.80, B: 42.93

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 16 - Model: GoogLeNet (images/sec, More Is Better): A: 37.73, B: 37.61

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream (items/sec, More Is Better): A: 23.36, B: 23.29

QuadRay

VectorChief's QuadRay is a real-time ray-tracing engine written to support SIMD across ARM, MIPS, PPC, and x86/x86_64 processors. QuadRay supports SSE/SSE2/SSE4 and AVX/AVX2/AVX-512 usage on Intel/AMD CPUs. Learn more via the OpenBenchmarking.org test page.

QuadRay 2022.05.25 - Scene: 3 - Resolution: 1080p (FPS, More Is Better): A: 3.87, B: 3.88. 1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

JPEG XL libjxl 0.7 - Input: JPEG - Quality: 90 (MP/s, More Is Better): A: 7.95, B: 7.93. 1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 64 - Model: GoogLeNet (images/sec, More Is Better): A: 38.30, B: 38.21

TensorFlow 2.10 - Device: CPU - Batch Size: 256 - Model: AlexNet (images/sec, More Is Better): A: 93.65, B: 93.46

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

Cpuminer-Opt 3.20.3 - Algorithm: Quad SHA-256, Pyrite (kH/s, More Is Better): A: 55050, B: 54960. 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Cpuminer-Opt 3.20.3 - Algorithm: Blake-2 S (kH/s, More Is Better): A: 226780, B: 226410. 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Cpuminer-Opt 3.20.3 - Algorithm: scrypt (kH/s, More Is Better): A: 102.29, B: 102.45. 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): A: 396.33, B: 396.88

Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): A: 52.91, B: 52.84

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

Cpuminer-Opt 3.20.3 - Algorithm: Skeincoin (kH/s, More Is Better): A: 44430, B: 44480. 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.11 - Encoder Speed: 2 (Seconds, Fewer Is Better): A: 90.82, B: 90.74. 1. (CXX) g++ options: -O3 -fPIC -lm

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 32 - Model: GoogLeNet (images/sec, More Is Better): A: 37.14, B: 37.17

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): A: 4.0091, B: 4.0122

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 64 - Model: AlexNet (images/sec, More Is Better): A: 86.94, B: 86.99

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): A: 131.68, B: 131.74

Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): A: 17.59, B: 17.59

OpenRadioss

OpenRadioss is an open-source AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss and was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2022.10.13 - Model: Rubber O-Ring Seal Installation (Seconds, Fewer Is Better): A: 455.47, B: 455.55

Apache HBase

This is a benchmark of the Apache HBase non-relational distributed database system inspired by Google's Bigtable. Learn more via the OpenBenchmarking.org test page.

Apache HBase 2.5.0 - Rows: 1000000 - Test: Scan - Clients: 4 (Microseconds - Average Latency, Fewer Is Better): B: 11 (no value reported for A)

Apache HBase 2.5.0 - Rows: 1000000 - Test: Scan - Clients: 1 (Microseconds - Average Latency, Fewer Is Better): B: 17 (no value reported for A)

Apache HBase 2.5.0 - Rows: 10000 - Test: Scan - Clients: 4 (Microseconds - Average Latency, Fewer Is Better): B: 46 (no value reported for A)

Apache HBase 2.5.0 - Rows: 10000 - Test: Scan - Clients: 1 (Microseconds - Average Latency, Fewer Is Better): B: 66 (no value reported for A)

Apache HBase 2.5.0 - Rows: 1000000 - Test: Async Random Read - Clients: 4 (Microseconds - Average Latency, Fewer Is Better): A: 146, B: 146

Apache HBase 2.5.0 - Rows: 1000000 - Test: Sequential Write - Clients: 4 (Microseconds - Average Latency, Fewer Is Better): A: 59, B: 59

Apache HBase 2.5.0 - Rows: 1000000 - Test: Sequential Write - Clients: 1 (Microseconds - Average Latency, Fewer Is Better): A: 9, B: 9

Apache HBase 2.5.0 - Rows: 1000000 - Test: Random Write - Clients: 1 (Microseconds - Average Latency, Fewer Is Better): A: 13, B: 13

Apache HBase 2.5.0 - Rows: 1000000 - Test: Scan - Clients: 4 (Rows Per Second, More Is Better): B: 358633

Rows: 1000000 - Test: Scan - Clients: 4

A: The test run did not produce a result.

Apache HBase 2.5.0 - Rows: 1000000 - Test: Scan - Clients: 1 (Rows Per Second, More Is Better): B: 57202

Rows: 1000000 - Test: Scan - Clients: 1

A: The test run did not produce a result.

Apache HBase 2.5.0 - Rows: 10000 - Test: Scan - Clients: 4 (Rows Per Second, More Is Better): B: 72964

Rows: 10000 - Test: Scan - Clients: 4

A: The test run did not produce a result.

Apache HBase 2.5.0 - Rows: 10000 - Test: Scan - Clients: 1 (Rows Per Second, More Is Better): B: 13459

Rows: 10000 - Test: Scan - Clients: 1

A: The test run did not produce a result.

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

Cpuminer-Opt 3.20.3 - Algorithm: LBC, LBRY Credits (kH/s, More Is Better): A: 15260, B: 15260. 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

QuadRay

VectorChief's QuadRay is a real-time ray-tracing engine written to support SIMD across ARM, MIPS, PPC, and x86/x86_64 processors. QuadRay supports SSE/SSE2/SSE4 and AVX/AVX2/AVX-512 usage on Intel/AMD CPUs. Learn more via the OpenBenchmarking.org test page.

QuadRay 2022.05.25 - Scene: 2 - Resolution: 1080p (FPS, More Is Better): A: 4.68, B: 4.68. 1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

QuadRay 2022.05.25 - Scene: 5 - Resolution: 4K (FPS, More Is Better): A: 0.3, B: 0.3. 1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

QuadRay 2022.05.25 - Scene: 3 - Resolution: 4K (FPS, More Is Better): A: 1.01, B: 1.01. 1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

QuadRay 2022.05.25 - Scene: 1 - Resolution: 4K (FPS, More Is Better): A: 4.42, B: 4.42. 1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

JPEG XL libjxl 0.7 - Input: JPEG - Quality: 100 (MP/s, More Is Better): A: 0.7, B: 0.7. 1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

Device: CPU - Batch Size: 256 - Model: ResNet-50

A: The test quit with a non-zero exit status. E: Fatal Python error: Segmentation fault

B: The test quit with a non-zero exit status. E: Fatal Python error: Segmentation fault

144 Results Shown

Apache HBase:
  1000000 - Rand Read - 4:
    Microseconds - Average Latency
    Rows Per Second
  10000 - Async Rand Write - 1:
    Microseconds - Average Latency
    Rows Per Second
  1000000 - Increment - 1:
    Microseconds - Average Latency
    Rows Per Second
  1000000 - Rand Read - 1:
    Microseconds - Average Latency
    Rows Per Second
  10000 - Rand Write - 4:
    Microseconds - Average Latency
    Rows Per Second
  10000 - Rand Read - 4:
    Microseconds - Average Latency
  1000000 - Seq Read - 4:
    Rows Per Second
  10000 - Rand Read - 4:
    Rows Per Second
  1000000 - Seq Read - 4:
    Microseconds - Average Latency
  10000 - Seq Read - 1:
    Microseconds - Average Latency
    Rows Per Second
  10000 - Rand Read - 1:
    Microseconds - Average Latency
    Rows Per Second
  1000000 - Rand Write - 4:
    Microseconds - Average Latency
  10000 - Increment - 1:
    Microseconds - Average Latency
  10000 - Seq Write - 1:
    Rows Per Second
  10000 - Increment - 1:
    Rows Per Second
  10000 - Seq Read - 4:
    Rows Per Second
    Microseconds - Average Latency
  1000000 - Async Rand Read - 1:
    Rows Per Second
  1000000 - Increment - 4:
    Microseconds - Average Latency
    Rows Per Second
Neural Magic DeepSparse
Apache HBase:
  10000 - Seq Write - 1
  10000 - Seq Write - 4
Neural Magic DeepSparse
Apache HBase:
  1000000 - Async Rand Read - 1
  1000000 - Seq Read - 1
  1000000 - Seq Read - 1
  1000000 - Seq Write - 4
  1000000 - Async Rand Write - 4
  1000000 - Async Rand Write - 4
  1000000 - Rand Write - 4
Neural Magic DeepSparse
Apache HBase
OpenRadioss
Apache HBase
Neural Magic DeepSparse
OpenRadioss
Apache HBase:
  10000 - Async Rand Read - 1
  10000 - Rand Write - 1
  10000 - Rand Write - 1
  1000000 - Seq Write - 1
Cpuminer-Opt
Apache HBase:
  10000 - Async Rand Write - 4:
    Rows Per Second
    Microseconds - Average Latency
Neural Magic DeepSparse:
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
OpenRadioss
libavif avifenc
JPEG XL Decoding libjxl
QuadRay
Neural Magic DeepSparse:
  NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream:
    ms/batch
    items/sec
JPEG XL libjxl
Xmrig
Neural Magic DeepSparse:
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Synchronous Single-Stream:
    items/sec
    ms/batch
JPEG XL Decoding libjxl
libavif avifenc
Apache HBase:
  1000000 - Async Rand Write - 1
  10000 - Async Rand Read - 4
Xmrig
Neural Magic DeepSparse:
  NLP Text Classification, BERT base uncased SST2 - Synchronous Single-Stream:
    items/sec
    ms/batch
  NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream:
    ms/batch
    items/sec
Cpuminer-Opt
QuadRay
Apache HBase
QuadRay
Neural Magic DeepSparse:
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream
  NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream
libavif avifenc
JPEG XL libjxl
TensorFlow:
  CPU - 16 - AlexNet
  CPU - 64 - ResNet-50
Neural Magic DeepSparse:
  NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream:
    ms/batch
    items/sec
Apache HBase:
  1000000 - Rand Write - 1
  10000 - Increment - 4
FLAC Audio Encoding
TensorFlow
Cpuminer-Opt:
  Deepcoin
  Garlicoin
  Triple SHA-256, Onecoin
OpenRadioss
JPEG XL libjxl:
  JPEG - 80
  PNG - 90
Apache HBase
Cpuminer-Opt
TensorFlow
Apache HBase
libavif avifenc
TensorFlow
Cpuminer-Opt
Neural Magic DeepSparse:
  CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream:
    items/sec
    ms/batch
TensorFlow
Apache HBase
Neural Magic DeepSparse:
  NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream
  CV Detection,YOLOv5s COCO - Synchronous Single-Stream
TensorFlow
Neural Magic DeepSparse
QuadRay
JPEG XL libjxl
TensorFlow:
  CPU - 64 - GoogLeNet
  CPU - 256 - AlexNet
Cpuminer-Opt:
  Quad SHA-256, Pyrite
  Blake-2 S
  scrypt
Neural Magic DeepSparse:
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream
Cpuminer-Opt
libavif avifenc
TensorFlow
Neural Magic DeepSparse
TensorFlow
Neural Magic DeepSparse:
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream
OpenRadioss
Apache HBase:
  1000000 - Scan - 4
  1000000 - Scan - 1
  10000 - Scan - 4
  10000 - Scan - 1
  1000000 - Async Rand Read - 4
  1000000 - Seq Write - 4
  1000000 - Seq Write - 1
  1000000 - Rand Write - 1
  1000000 - Scan - 4
  1000000 - Scan - 1
  10000 - Scan - 4
  10000 - Scan - 1
Cpuminer-Opt
QuadRay:
  2 - 1080p
  5 - 4K
  3 - 4K
  1 - 4K
JPEG XL libjxl