eMAG

ARMv8 Cortex-A72 testing with a SolidRun CEX7 (EDK II BIOS) and MSI NVIDIA GeForce GT 1030 on Fedora 33 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2102210-FI-2012272NE18

Tests in this result file span the following categories: Audio Encoding (2 tests), AV1 (2 tests), Chess Test Suite (3 tests), C/C++ Compiler Tests (5 tests), CPU Massive (7 tests), Creator Workloads (7 tests), Encoding (5 tests), HPC - High Performance Computing (3 tests), Machine Learning (2 tests), Multi-Core (8 tests), Programmer / Developer System Benchmarks (2 tests), Server CPU Tests (5 tests), Single-Threaded (2 tests), and Video Encoding (3 tests).

Run Management

Result Identifier | Date Run | Test Duration
1 | December 26 2020 | 6 Hours, 4 Minutes
2 | December 27 2020 | 6 Hours, 55 Minutes
3 | December 27 2020 | 4 Hours, 25 Minutes
4 | December 27 2020 | 1 Hour, 21 Minutes
HoneyComb LX2K | February 20 2021 | 5 Hours, 39 Minutes

Average test duration: 4 Hours, 53 Minutes



System Details

1, 2, 3, 4:
  Processor: Ampere eMAG ARMv8 @ 3.00GHz (32 Cores)
  Motherboard: AmpereComputing OSPREY (4.8.19 BIOS)
  Chipset: Applied Micro Circuits X-Gene
  Memory: 126GB
  Disk: 256GB Samsung SSD 860
  Graphics: ASPEED
  Monitor: VE228
  Network: Intel I210
  OS: Ubuntu 20.04
  Kernel: 5.7.0-050700-generic (aarch64)
  Desktop: GNOME Shell 3.36.3
  Display Server: X Server 1.20.8
  Display Driver: modesetting 1.20.8
  Compiler: GCC 9.3.0
  File-System: ext4
  Screen Resolution: 1920x1080

HoneyComb LX2K:
  Processor: ARMv8 Cortex-A72 (16 Cores)
  Motherboard: SolidRun CEX7 (EDK II BIOS)
  Memory: 32GB
  Disk: 128GB Generic + 8GB SL08G + 63GB DF4064
  Graphics: MSI NVIDIA GeForce GT 1030
  Audio: NVIDIA GP108 HD Audio
  OS: Fedora 33
  Kernel: 5.10.10-00042-gbfa806f5daa5-dirty (aarch64)
  Display Server: X Server 1.20.10
  Compiler: GCC 10.2.1 20201125 + Clang 11.0.0 + CUDA 11.2
  File-System: btrfs

Compiler Details
  1, 2, 3, 4: --build=aarch64-linux-gnu --disable-libquadmath --disable-libquadmath-support --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --program-prefix=aarch64-linux-gnu- --target=aarch64-linux-gnu --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-target-system-zlib=auto -v
  HoneyComb LX2K: --build=aarch64-redhat-linux --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-initfini-array --enable-languages=c,c++,fortran,objc,obj-c++,ada,go,lto --enable-multilib --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-gcc-major-version-only --with-isl --with-linker-hash-style=gnu

Processor Details
  1, 2, 3, 4: Scaling Governor: cppc_cpufreq ondemand

Python Details
  1, 2, 3, 4: Python 3.8.2
  HoneyComb LX2K: Python 3.9.1

Security Details
  1, 2, 3, 4: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Mitigation of PTI + spec_store_bypass: Vulnerable + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Vulnerable + tsx_async_abort: Not affected
  HoneyComb LX2K: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Not affected + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Mitigation of Branch predictor hardening + srbds: Not affected + tsx_async_abort: Not affected

Logarithmic Result Overview (Phoronix Test Suite): identifiers 1, 2, 3, 4, and HoneyComb LX2K compared across CLOMP, Timed MAFFT Alignment, oneDNN, simdjson, and TSCP.

Result summary for the eMAG comparison (identifiers 1, 2, 3, 4, and HoneyComb LX2K), covering: CLOMP (Static OMP Speedup), Timed MAFFT Alignment (Multiple Sequence Alignment - LSU RNA), simdjson (Kostya, LargeRandom, PartialTweets, DistinctUserID), TSCP (AI Chess Performance), oneDNN 2.0 (IP Shapes 1D/3D, Convolution Batch Shapes Auto, Deconvolution Batch shapes_1d/_3d, Recurrent Neural Network Training/Inference, and Matrix Multiply Batch Shapes Transformer across f32, u8s8f32, and bf16bf16bf16 on CPU), rav1e (Speed 1, 5, 6, 10), x264 (H.264 Video Encoding), CoreMark, Stockfish, asmFish, libavif avifenc (Encoder Speed 0, 2, 8, 10), Numpy Benchmark, Timed Eigen Compilation, Monkey Audio Encoding, Opus Codec Encoding, and eSpeak-NG Speech Engine. The individual results follow below.

CLOMP

CLOMP is the C version of the Livermore OpenMP benchmark developed to measure OpenMP overheads and other performance impacts due to threading in order to influence future system designs. This particular test profile configuration is currently set to look at the OpenMP static schedule speed-up across all available CPU cores using the recommended test configuration. Learn more via the OpenBenchmarking.org test page.
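
To make the "static OMP schedule" wording concrete, here is a minimal sketch of the idea being measured: the same loop timed serially and under a statically scheduled OpenMP parallel-for. It is illustrative only (loop size and workload are arbitrary) and is not the CLOMP source.

    // Illustrative sketch, not CLOMP itself: compare a serial loop against the
    // same loop under a static OpenMP schedule and report the speedup.
    #include <omp.h>
    #include <cstdio>
    #include <vector>

    int main() {
        const int n = 1 << 20;
        std::vector<double> a(n, 1.0), b(n, 2.0);

        double t0 = omp_get_wtime();
        for (int i = 0; i < n; ++i) a[i] += 0.5 * b[i];   // serial baseline
        double serial = omp_get_wtime() - t0;

        t0 = omp_get_wtime();
        #pragma omp parallel for schedule(static)          // static OMP schedule
        for (int i = 0; i < n; ++i) a[i] += 0.5 * b[i];
        double parallel = omp_get_wtime() - t0;

        std::printf("static OMP speedup: %.2f\n", serial / parallel);
        return 0;
    }

Built with something like g++ -O3 -fopenmp, this mirrors in miniature how the benchmark reports a speedup figure rather than a raw runtime.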

CLOMP 1.2 - Static OMP Speedup (Speedup, More Is Better)
  1: 7.1 (SE +/- 0.06, N = 3; Min 7 / Avg 7.1 / Max 7.2)
  2: 7.2 (SE +/- 0.03, N = 3; Min 7.2 / Avg 7.23 / Max 7.3)
  3: 7.0 (SE +/- 0.06, N = 3; Min 6.9 / Avg 7 / Max 7.1)
  4: 7.2 (SE +/- 0.07, N = 3; Min 7.1 / Avg 7.23 / Max 7.3)
  HoneyComb LX2K: 1.6 (SE +/- 0.03, N = 3; Min 1.6 / Avg 1.63 / Max 1.7)
  1. (CC) gcc options: -fopenmp -O3 -lm

Timed MAFFT Alignment

This test performs an alignment of 100 pyruvate decarboxylase sequences. Learn more via the OpenBenchmarking.org test page.

Timed MAFFT Alignment 7.471 - Multiple Sequence Alignment - LSU RNA (Seconds, Fewer Is Better)
  1: 35.64 (SE +/- 0.31, N = 3; Min 35.16 / Avg 35.64 / Max 36.23)
  2: 35.34 (SE +/- 0.11, N = 3; Min 35.23 / Avg 35.34 / Max 35.57)
  3: 36.36 (SE +/- 0.16, N = 3; Min 36.04 / Avg 36.36 / Max 36.53)
  4: 36.00 (SE +/- 0.49, N = 3; Min 35.07 / Avg 36 / Max 36.71)
  HoneyComb LX2K: 25.17 (SE +/- 0.40, N = 3; Min 24.37 / Avg 25.17 / Max 25.67)
  1. (CC) gcc options: -std=c99 -O3 -lm -lpthread

simdjson

This is a benchmark of simdjson, a high-performance JSON parser. simdjson aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.
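
For context on what the parser under test looks like from application code, here is a minimal sketch against the simdjson DOM API of the 0.7 era; the inline JSON document is made up for the example, and exact calls can differ between simdjson versions.

    // Minimal simdjson DOM sketch (error-code style, no exceptions required).
    #include "simdjson.h"
    #include <cstdint>
    #include <iostream>
    #include <string_view>

    int main() {
        using namespace simdjson;            // brings the _padded literal into scope
        dom::parser parser;
        dom::element doc;

        // Hypothetical inline document standing in for a real JSON file.
        auto error = parser.parse(R"({"name":"eMAG","cores":32})"_padded).get(doc);
        if (error) { std::cerr << error << std::endl; return 1; }

        std::string_view name;
        std::uint64_t cores = 0;
        if (doc["name"].get(name) || doc["cores"].get(cores)) {
            std::cerr << "unexpected document shape" << std::endl;
            return 1;
        }
        std::cout << name << " has " << cores << " cores" << std::endl;
        return 0;
    }

The benchmark scenarios below (Kostya, LargeRandom, PartialTweets, DistinctUserID) exercise the parser on much larger documents and report throughput in GB/s.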

simdjson 0.7.1 - Throughput Test: Kostya (GB/s, More Is Better)
  1: 0.48 (SE +/- 0.00, N = 3; Min 0.48 / Avg 0.48 / Max 0.48; -O3)
  2: 0.48 (SE +/- 0.00, N = 3; Min 0.48 / Avg 0.48 / Max 0.48; -O3)
  3: 0.48 (SE +/- 0.00, N = 3; Min 0.48 / Avg 0.48 / Max 0.48; -O3)
  4: 0.48 (SE +/- 0.00, N = 3; Min 0.48 / Avg 0.48 / Max 0.48; -O3)
  HoneyComb LX2K: 0.61 (SE +/- 0.00, N = 3; Min 0.61 / Avg 0.61 / Max 0.61; -O2)
  1. (CXX) g++ options: -pthread

simdjson 0.7.1 - Throughput Test: LargeRandom (GB/s, More Is Better)
  1: 0.23 (SE +/- 0.00, N = 3; Min 0.23 / Avg 0.23 / Max 0.23; -O3)
  2: 0.23 (SE +/- 0.00, N = 3; Min 0.23 / Avg 0.23 / Max 0.23; -O3)
  3: 0.23 (SE +/- 0.00, N = 3; Min 0.23 / Avg 0.23 / Max 0.23; -O3)
  4: 0.23 (SE +/- 0.00, N = 3; Min 0.23 / Avg 0.23 / Max 0.23; -O3)
  HoneyComb LX2K: 0.28 (SE +/- 0.00, N = 3; Min 0.28 / Avg 0.28 / Max 0.28; -O2)
  1. (CXX) g++ options: -pthread

simdjson 0.7.1 - Throughput Test: PartialTweets (GB/s, More Is Better)
  1: 0.55 (SE +/- 0.00, N = 3; Min 0.55 / Avg 0.55 / Max 0.55; -O3)
  2: 0.55 (SE +/- 0.00, N = 3; Min 0.55 / Avg 0.55 / Max 0.55; -O3)
  3: 0.55 (SE +/- 0.00, N = 3; Min 0.55 / Avg 0.55 / Max 0.55; -O3)
  4: 0.55 (SE +/- 0.00, N = 3; Min 0.55 / Avg 0.55 / Max 0.55; -O3)
  HoneyComb LX2K: 0.70 (SE +/- 0.00, N = 3; Min 0.7 / Avg 0.7 / Max 0.7; -O2)
  1. (CXX) g++ options: -pthread

simdjson 0.7.1 - Throughput Test: DistinctUserID (GB/s, More Is Better)
  1: 0.56 (SE +/- 0.00, N = 3; Min 0.56 / Avg 0.56 / Max 0.57; -O3)
  2: 0.56 (SE +/- 0.00, N = 3; Min 0.56 / Avg 0.56 / Max 0.56; -O3)
  3: 0.56 (SE +/- 0.00, N = 3; Min 0.56 / Avg 0.56 / Max 0.57; -O3)
  4: 0.57 (SE +/- 0.00, N = 3; Min 0.56 / Avg 0.57 / Max 0.57; -O3)
  HoneyComb LX2K: 0.71 (SE +/- 0.00, N = 3; Min 0.71 / Avg 0.71 / Max 0.71; -O2)
  1. (CXX) g++ options: -pthread

TSCP

This is a performance test of TSCP, Tom Kerrigan's Simple Chess Program, which has a built-in performance benchmark. Learn more via the OpenBenchmarking.org test page.

TSCP 1.81 - AI Chess Performance (Nodes Per Second, More Is Better)
  1: 515903 (SE +/- 327.56, N = 5; Min 514745 / Avg 515903.4 / Max 516677)
  2: 515903 (SE +/- 328.25, N = 5; Min 515227 / Avg 515903.2 / Max 517162)
  3: 515710 (SE +/- 264.73, N = 5; Min 515227 / Avg 515709.8 / Max 516677)
  4: 515709
  HoneyComb LX2K: 510928 (SE +/- 616.26, N = 5; Min 508566 / Avg 510928.2 / Max 511875)
  1. (CC) gcc options: -O3 -march=native

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.
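
For reference, the C++ API that benchdnn ultimately drives looks roughly like the following sketch, which creates a CPU engine and stream and runs a single ReLU primitive. This is illustrative of the oneDNN 2.x API only (tensor shape and values are arbitrary) and is not what the test profile executes.

    // Minimal oneDNN 2.x sketch: one in-place ReLU on a small f32 tensor.
    #include "dnnl.hpp"
    #include <vector>

    int main() {
        dnnl::engine eng(dnnl::engine::kind::cpu, 0);
        dnnl::stream strm(eng);

        // A 1x16 f32 tensor in "nc" layout, initialized to -1 so ReLU has work to do.
        dnnl::memory::dims dims = {1, 16};
        auto md = dnnl::memory::desc(dims, dnnl::memory::data_type::f32,
                                     dnnl::memory::format_tag::nc);
        std::vector<float> data(16, -1.0f);
        auto mem = dnnl::memory(md, eng, data.data());

        // Describe, instantiate, and execute a forward-inference ReLU in place.
        auto relu_d = dnnl::eltwise_forward::desc(dnnl::prop_kind::forward_inference,
                                                  dnnl::algorithm::eltwise_relu, md, 0.f);
        auto relu_pd = dnnl::eltwise_forward::primitive_desc(relu_d, eng);
        dnnl::eltwise_forward(relu_pd).execute(strm, {{DNNL_ARG_SRC, mem},
                                                      {DNNL_ARG_DST, mem}});
        strm.wait();
        return 0;
    }

Linking is typically just -ldnnl; the harness, data-type, and engine combinations in the results below select different primitives and precisions through benchdnn rather than hand-written code like this.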

oneDNN 2.0 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  1: 29.01 (SE +/- 1.27, N = 15; Min 22.33 / Avg 29.01 / Max 36.43; MIN: 15.13)
  2: 33.22 (SE +/- 1.03, N = 15; Min 24.42 / Avg 33.22 / Max 36.86; MIN: 15.35)
  3: 32.28 (SE +/- 1.36, N = 15; Min 23.22 / Avg 32.28 / Max 37.25; MIN: 15.14)
  4: 30.89 (SE +/- 1.62, N = 12; Min 23.09 / Avg 30.89 / Max 36.81; MIN: 15.09)
  HoneyComb LX2K: 53.32 (SE +/- 1.13, N = 3; Min 51.12 / Avg 53.32 / Max 54.86; -O2; MIN: 50.83)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -mcpu=native -fPIC -pie -lpthread

oneDNN 2.0 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  1: 23.68 (SE +/- 0.67, N = 15; Min 20.05 / Avg 23.68 / Max 29.54; MIN: 17.03)
  2: 22.80 (SE +/- 0.68, N = 12; Min 19.76 / Avg 22.8 / Max 27.15; MIN: 17.06)
  3: 22.85 (SE +/- 0.69, N = 15; Min 19.26 / Avg 22.85 / Max 28.32; MIN: 17.02)
  4: 24.45 (SE +/- 1.03, N = 15; Min 19.16 / Avg 24.45 / Max 33.92; MIN: 17.03)
  HoneyComb LX2K: 21.51 (SE +/- 0.02, N = 3; Min 21.49 / Avg 21.51 / Max 21.55; -O2; MIN: 21.16)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -mcpu=native -fPIC -pie -lpthread

oneDNN 2.0 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
  1: 183.92 (SE +/- 1.68, N = 3; Min 181.73 / Avg 183.92 / Max 187.22; MIN: 127.82)
  2: 184.88 (SE +/- 2.81, N = 3; Min 179.6 / Avg 184.88 / Max 189.16; MIN: 135.46)
  3: 183.39 (SE +/- 1.41, N = 3; Min 180.58 / Avg 183.39 / Max 184.97; MIN: 133.71)
  4: 183.14 (SE +/- 0.18, N = 3; Min 182.78 / Avg 183.14 / Max 183.32; MIN: 130.48)
  HoneyComb LX2K: 129.05 (SE +/- 0.47, N = 3; Min 128.45 / Avg 129.05 / Max 129.97; -O2; MIN: 125.46)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -mcpu=native -fPIC -pie -lpthread

oneDNN 2.0 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
  1: 418.33 (SE +/- 2.55, N = 3; Min 413.81 / Avg 418.33 / Max 422.63; MIN: 379.92)
  2: 421.22 (SE +/- 0.17, N = 3; Min 420.93 / Avg 421.22 / Max 421.53; MIN: 376.78)
  3: 424.64 (SE +/- 1.42, N = 3; Min 421.85 / Avg 424.64 / Max 426.49; MIN: 380.56)
  4: 419.39 (SE +/- 1.34, N = 3; Min 417.18 / Avg 419.39 / Max 421.81; MIN: 371.23)
  HoneyComb LX2K: 97.14 (SE +/- 0.09, N = 3; Min 96.99 / Avg 97.14 / Max 97.3; -O2; MIN: 96.3)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -mcpu=native -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  1: 93.47 (SE +/- 7.37, N = 15; Min 47.44 / Avg 93.47 / Max 119.35; MIN: 30.81)
  2: 98.28 (SE +/- 7.70, N = 12; Min 31.33 / Avg 98.28 / Max 121.87; MIN: 30.76)
  3: 76.35 (SE +/- 6.62, N = 15; Min 39.59 / Avg 76.35 / Max 115.17; MIN: 30.78)
  4: 84.37 (SE +/- 8.19, N = 15; Min 32.81 / Avg 84.37 / Max 121.06; MIN: 30.78)
  HoneyComb LX2K: 122.32 (SE +/- 0.02, N = 3; Min 122.29 / Avg 122.32 / Max 122.36; -O2; MIN: 121.03)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -mcpu=native -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  1: 191.84 (SE +/- 15.29, N = 12; Min 134.35 / Avg 191.84 / Max 256.86; MIN: 115.28)
  2: 194.62 (SE +/- 13.95, N = 15; Min 127.31 / Avg 194.62 / Max 255.27; MIN: 115.45)
  3: 173.54 (SE +/- 12.91, N = 15; Min 131.13 / Avg 173.54 / Max 259.58; MIN: 116.21)
  4: 173.05 (SE +/- 12.10, N = 15; Min 132.43 / Avg 173.05 / Max 261.05; MIN: 116)
  HoneyComb LX2K: 620.76 (SE +/- 10.39, N = 3; Min 600.49 / Avg 620.76 / Max 634.83; -O2; MIN: 597.77)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -mcpu=native -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  1: 60.37 (SE +/- 6.33, N = 15; Min 29.23 / Avg 60.37 / Max 87.83; MIN: 26.98)
  2: 63.42 (SE +/- 7.31, N = 15; Min 27.02 / Avg 63.42 / Max 89.94; MIN: 26.98)
  3: 55.94 (SE +/- 7.43, N = 12; Min 27.02 / Avg 55.94 / Max 90.26; MIN: 26.97)
  4: 59.78 (SE +/- 6.08, N = 15; Min 32.53 / Avg 59.78 / Max 88.3; MIN: 26.97)
  HoneyComb LX2K: 118.21 (SE +/- 2.32, N = 3; Min 115.72 / Avg 118.21 / Max 122.84; -O2; MIN: 114.1)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -mcpu=native -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
  1: 113.02 (SE +/- 0.34, N = 3; Min 112.35 / Avg 113.02 / Max 113.38; MIN: 98.24)
  2: 112.31 (SE +/- 0.05, N = 3; Min 112.26 / Avg 112.31 / Max 112.41; MIN: 103.46)
  3: 112.55 (SE +/- 0.11, N = 3; Min 112.34 / Avg 112.55 / Max 112.71; MIN: 102.72)
  4: 112.92 (SE +/- 0.27, N = 3; Min 112.5 / Avg 112.92 / Max 113.42; MIN: 104.76)
  HoneyComb LX2K: 192.12 (SE +/- 0.26, N = 3; Min 191.82 / Avg 192.12 / Max 192.65; -O2; MIN: 190.32)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -mcpu=native -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
  1: 184.80 (SE +/- 0.92, N = 3; Min 183.54 / Avg 184.8 / Max 186.58; MIN: 121.06)
  2: 185.97 (SE +/- 1.31, N = 3; Min 183.82 / Avg 185.97 / Max 188.35; MIN: 114.9)
  3: 185.16 (SE +/- 1.64, N = 3; Min 183.14 / Avg 185.16 / Max 188.4; MIN: 116.12)
  4: 183.15 (SE +/- 2.05, N = 3; Min 179.42 / Avg 183.15 / Max 186.47; MIN: 111)
  HoneyComb LX2K: 220.11 (SE +/- 0.78, N = 3; Min 218.55 / Avg 220.11 / Max 220.96; -O2; MIN: 217.2)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -mcpu=native -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
  1: 114.07 (SE +/- 0.92, N = 3; Min 112.31 / Avg 114.07 / Max 115.42; MIN: 93.84)
  2: 113.51 (SE +/- 0.48, N = 3; Min 112.58 / Avg 113.51 / Max 114.19; MIN: 91.33)
  3: 113.11 (SE +/- 1.41, N = 3; Min 111.68 / Avg 113.11 / Max 115.94; MIN: 92.39)
  4: 112.61 (SE +/- 1.75, N = 3; Min 109.4 / Avg 112.61 / Max 115.41; MIN: 90.72)
  HoneyComb LX2K: 196.13 (SE +/- 2.17, N = 3; Min 192.34 / Avg 196.13 / Max 199.84; -O2; MIN: 179.49)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -mcpu=native -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  1: 31838.2 (SE +/- 1283.37, N = 9; Min 28315 / Avg 31838.16 / Max 41666.1; MIN: 23896.8)
  2: 30600.5 (SE +/- 724.02, N = 10; Min 27903.5 / Avg 30600.53 / Max 33846.5; MIN: 23693.3)
  3: 31978.2 (SE +/- 171.48, N = 3; Min 31661.6 / Avg 31978.2 / Max 32250.7; MIN: 25646)
  4: 30770.0 (SE +/- 405.43, N = 12; Min 28665 / Avg 30770 / Max 33455.4; MIN: 24555.1)
  HoneyComb LX2K: 74820.5 (SE +/- 258.40, N = 3; Min 74305.8 / Avg 74820.5 / Max 75118.1; -O2; MIN: 74172.1)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -mcpu=native -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  1: 17700.4 (SE +/- 343.55, N = 11; Min 15835.1 / Avg 17700.4 / Max 20087; MIN: 13035)
  2: 17313.5 (SE +/- 395.99, N = 12; Min 15023.9 / Avg 17313.53 / Max 19405.4; MIN: 12871.9)
  3: 16864.3 (SE +/- 324.82, N = 12; Min 15806.1 / Avg 16864.34 / Max 18991.7; MIN: 12759.4)
  HoneyComb LX2K: 38360.1 (SE +/- 219.25, N = 3; Min 37926.8 / Avg 38360.13 / Max 38634.9; -O2; MIN: 37645.2)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -mcpu=native -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
  1: 30113.8 (SE +/- 165.59, N = 3; Min 29847.1 / Avg 30113.8 / Max 30417.2; MIN: 24215.3)
  2: 30971.0 (SE +/- 500.69, N = 12; Min 27739.1 / Avg 30970.97 / Max 33470.1; MIN: 23442.3)
  3: 30446.5 (SE +/- 588.64, N = 12; Min 27483.5 / Avg 30446.53 / Max 34427.6; MIN: 23892.7)
  HoneyComb LX2K: 74589.3 (SE +/- 201.98, N = 3; Min 74269.3 / Avg 74589.27 / Max 74962.8; -O2; MIN: 74079.3)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -mcpu=native -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
  1: 16255.9 (SE +/- 188.21, N = 3; Min 15907.6 / Avg 16255.9 / Max 16553.7; MIN: 13038.5)
  2: 16777.0 (SE +/- 307.63, N = 12; Min 15394.7 / Avg 16777.03 / Max 19122.5; MIN: 12597.8)
  3: 16436.8 (SE +/- 378.61, N = 9; Min 14374.2 / Avg 16436.76 / Max 17944.2; MIN: 13044.4)
  HoneyComb LX2K: 38077.3 (SE +/- 169.67, N = 3; Min 37807.9 / Avg 38077.33 / Max 38390.7; -O2; MIN: 37721.4)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -mcpu=native -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  1: 19.82 (SE +/- 1.04, N = 15; Min 12.18 / Avg 19.82 / Max 24.56; MIN: 8.13)
  2: 21.01 (SE +/- 0.93, N = 15; Min 15.38 / Avg 21.01 / Max 26.1; MIN: 8.13)
  3: 20.48 (SE +/- 0.93, N = 15; Min 15.73 / Avg 20.48 / Max 25.28; MIN: 8.14)
  HoneyComb LX2K: 26.15 (SE +/- 0.00, N = 3; Min 26.14 / Avg 26.15 / Max 26.15; -O2; MIN: 25.91)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -mcpu=native -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better)
  1: 29827.8 (SE +/- 438.76, N = 12; Min 27611.7 / Avg 29827.75 / Max 31991.6; MIN: 23657)
  2: 31520.3 (SE +/- 470.00, N = 3; Min 30936.7 / Avg 31520.33 / Max 32450.3; MIN: 24350.2)
  3: 30673.7 (SE +/- 360.00, N = 3; Min 30046.4 / Avg 30673.7 / Max 31293.4; MIN: 24396.5)
  HoneyComb LX2K: 74962.7 (SE +/- 348.88, N = 3; Min 74342.1 / Avg 74962.67 / Max 75549.2; -O2; MIN: 74064.5)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -mcpu=native -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better)
  1: 17140.1 (SE +/- 244.89, N = 3; Min 16718.1 / Avg 17140.1 / Max 17566.4; MIN: 13604.9)
  2: 16556.7 (SE +/- 327.46, N = 12; Min 15193.1 / Avg 16556.68 / Max 18858.9; MIN: 12882.3)
  3: 17065.5 (SE +/- 506.17, N = 9; Min 15491.3 / Avg 17065.53 / Max 20329.5; MIN: 13244.1)
  HoneyComb LX2K: 38288.2 (SE +/- 226.16, N = 3; Min 37836 / Avg 38288.23 / Max 38521.7; -O2; MIN: 37745.7)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -mcpu=native -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
  1: 36.61 (SE +/- 1.48, N = 15; Min 25.62 / Avg 36.61 / Max 41.46; MIN: 19.74)
  2: 38.37 (SE +/- 0.94, N = 15; Min 27.96 / Avg 38.37 / Max 41.9; MIN: 19.74)
  3: 38.49 (SE +/- 0.82, N = 15; Min 32.92 / Avg 38.49 / Max 41.45; MIN: 19.79)
  HoneyComb LX2K: 46.01 (SE +/- 0.15, N = 3; Min 45.74 / Avg 46.01 / Max 46.27; -O2; MIN: 45.37)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -mcpu=native -fPIC -pie -lpthread

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4 Alpha - Speed: 1 (Frames Per Second, More Is Better)
  1: 0.084 (SE +/- 0.000, N = 3; Min 0.08 / Avg 0.08 / Max 0.08)
  2: 0.084 (SE +/- 0.000, N = 3; Min 0.08 / Avg 0.08 / Max 0.08)
  3: 0.084 (SE +/- 0.000, N = 3; Min 0.08 / Avg 0.08 / Max 0.08)
  HoneyComb LX2K: 0.081 (SE +/- 0.000, N = 3; Min 0.08 / Avg 0.08 / Max 0.08)

rav1e 0.4 Alpha - Speed: 5 (Frames Per Second, More Is Better)
  1: 0.188 (SE +/- 0.000, N = 3; Min 0.19 / Avg 0.19 / Max 0.19)
  2: 0.187 (SE +/- 0.000, N = 3; Min 0.19 / Avg 0.19 / Max 0.19)
  3: 0.187 (SE +/- 0.000, N = 3; Min 0.19 / Avg 0.19 / Max 0.19)
  HoneyComb LX2K: 0.186 (SE +/- 0.000, N = 3; Min 0.19 / Avg 0.19 / Max 0.19)

rav1e 0.4 Alpha - Speed: 6 (Frames Per Second, More Is Better)
  1: 0.223 (SE +/- 0.000, N = 3; Min 0.22 / Avg 0.22 / Max 0.22)
  2: 0.221 (SE +/- 0.000, N = 3; Min 0.22 / Avg 0.22 / Max 0.22)
  3: 0.222 (SE +/- 0.000, N = 3; Min 0.22 / Avg 0.22 / Max 0.22)
  HoneyComb LX2K: 0.224 (SE +/- 0.000, N = 3; Min 0.22 / Avg 0.22 / Max 0.22)

rav1e 0.4 Alpha - Speed: 10 (Frames Per Second, More Is Better)
  1: 0.425 (SE +/- 0.000, N = 3; Min 0.43 / Avg 0.43 / Max 0.43)
  2: 0.419 (SE +/- 0.001, N = 3; Min 0.42 / Avg 0.42 / Max 0.42)
  3: 0.420 (SE +/- 0.000, N = 3; Min 0.42 / Avg 0.42 / Max 0.42)
  HoneyComb LX2K: 0.445 (SE +/- 0.001, N = 3; Min 0.44 / Avg 0.45 / Max 0.45)

x264

This is a simple test of the x264 encoder run on the CPU (OpenCL support disabled) with a sample video file. Learn more via the OpenBenchmarking.org test page.

x264 2019-12-17 - H.264 Video Encoding (Frames Per Second, More Is Better)
  1: 32.13 (SE +/- 0.38, N = 5; Min 30.76 / Avg 32.13 / Max 32.88)
  2: 32.67 (SE +/- 0.30, N = 3; Min 32.32 / Avg 32.67 / Max 33.27)
  3: 32.78 (SE +/- 0.51, N = 3; Min 31.85 / Avg 32.78 / Max 33.6)
  HoneyComb LX2K: 33.56 (SE +/- 0.06, N = 3; Min 33.45 / Avg 33.56 / Max 33.63)
  Additional linker note reported for three of the four results: -lavformat -lavcodec -lavutil -lswscale
  1. (CC) gcc options: -ldl -lm -lpthread

Coremark

This is a test of the EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.

Coremark 1.0 - CoreMark Size 666 - Iterations Per Second (Iterations/Sec, More Is Better)
  1: 385397.37 (SE +/- 656.44, N = 3; Min 384084.5 / Avg 385397.37 / Max 386053.81)
  2: 385035.21 (SE +/- 798.90, N = 3; Min 383440.18 / Avg 385035.21 / Max 385914.13)
  3: 385080.51 (SE +/- 664.33, N = 3; Min 383785.08 / Avg 385080.51 / Max 385983.96)
  HoneyComb LX2K: 193571.81 (SE +/- 23.74, N = 3; Min 193528.88 / Avg 193571.81 / Max 193610.84)
  1. (CC) gcc options: -O2 -lrt" -lrt

Stockfish

This is a test of Stockfish, an advanced C++11 chess benchmark that can scale up to 128 CPU cores. Learn more via the OpenBenchmarking.org test page.

Stockfish 12 - Total Time (Nodes Per Second, More Is Better)
  1: 15691469 (SE +/- 125793.16, N = 3; Min 15447324 / Avg 15691469.33 / Max 15866140)
  2: 15417127 (SE +/- 184845.46, N = 6; Min 14832073 / Avg 15417127.33 / Max 16203593)
  3: 15617607 (SE +/- 177988.27, N = 15; Min 14728243 / Avg 15617607.13 / Max 17027764)
  HoneyComb LX2K: 9514457 (SE +/- 173726.98, N = 3; Min 9295755 / Avg 9514457.33 / Max 9857625)
  1. (CXX) g++ options: -lpthread -fno-exceptions -std=c++17 -pedantic -O3 -flto -flto=jobserver

asmFish

This is a test of asmFish, an advanced chess benchmark written in Assembly. Learn more via the OpenBenchmarking.org test page.

asmFish 2018-07-23 - 1024 Hash Memory, 26 Depth (Nodes/second, More Is Better)
  1: 33037962 (SE +/- 299925.52, N = 3; Min 32608036 / Avg 33037962.33 / Max 33615194)
  2: 33135767 (SE +/- 393577.81, N = 3; Min 32438964 / Avg 33135766.67 / Max 33801280)
  HoneyComb LX2K: 16459128 (SE +/- 201778.29, N = 3; Min 16196192 / Avg 16459127.67 / Max 16855722)

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.7.3 - Encoder Speed: 0 (Seconds, Fewer Is Better)
  1: 404.74 (SE +/- 1.37, N = 3; Min 402.89 / Avg 404.74 / Max 407.4)
  2: 403.85 (SE +/- 0.42, N = 3; Min 403.19 / Avg 403.85 / Max 404.62)
  HoneyComb LX2K: 485.91 (SE +/- 0.81, N = 3; Min 484.96 / Avg 485.91 / Max 487.51)
  1. (CXX) g++ options: -O3 -fPIC

libavif avifenc 0.7.3 - Encoder Speed: 2 (Seconds, Fewer Is Better)
  1: 250.17 (SE +/- 0.06, N = 3; Min 250.04 / Avg 250.17 / Max 250.26)
  2: 250.71 (SE +/- 0.40, N = 3; Min 249.93 / Avg 250.7 / Max 251.29)
  HoneyComb LX2K: 310.03 (SE +/- 0.10, N = 3; Min 309.85 / Avg 310.03 / Max 310.16)
  1. (CXX) g++ options: -O3 -fPIC

libavif avifenc 0.7.3 - Encoder Speed: 8 (Seconds, Fewer Is Better)
  1: 22.30 (SE +/- 0.05, N = 3; Min 22.21 / Avg 22.3 / Max 22.38)
  2: 22.33 (SE +/- 0.02, N = 3; Min 22.29 / Avg 22.33 / Max 22.36)
  HoneyComb LX2K: 24.46 (SE +/- 0.03, N = 3; Min 24.42 / Avg 24.46 / Max 24.51)
  1. (CXX) g++ options: -O3 -fPIC

libavif avifenc 0.7.3 - Encoder Speed: 10 (Seconds, Fewer Is Better)
  1: 21.88 (SE +/- 0.02, N = 3; Min 21.85 / Avg 21.88 / Max 21.93)
  2: 21.83 (SE +/- 0.01, N = 3; Min 21.81 / Avg 21.83 / Max 21.86)
  HoneyComb LX2K: 23.18 (SE +/- 0.04, N = 3; Min 23.1 / Avg 23.18 / Max 23.24)
  1. (CXX) g++ options: -O3 -fPIC

Numpy Benchmark

This is a test of general NumPy performance. Learn more via the OpenBenchmarking.org test page.

Numpy Benchmark (Score, More Is Better)
  1: 91.65 (SE +/- 0.30, N = 3; Min 91.08 / Avg 91.65 / Max 92.11)
  2: 91.86 (SE +/- 0.27, N = 3; Min 91.32 / Avg 91.86 / Max 92.18)
  HoneyComb LX2K: 97.09 (SE +/- 0.62, N = 3; Min 95.85 / Avg 97.09 / Max 97.77)

Timed Eigen Compilation

This test times how long it takes to build all Eigen examples. The Eigen examples are compiled serially. Eigen is a C++ template library for linear algebra. Learn more via the OpenBenchmarking.org test page.
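
Eigen is a header-only template library, so the build being timed is largely an exercise in template instantiation. As a small, illustrative example of the kind of code involved (not part of the test profile), this solves a 2x2 linear system:

    // Tiny Eigen usage sketch: solve Ax = b for a symmetric positive-definite A.
    #include <Eigen/Dense>
    #include <iostream>

    int main() {
        Eigen::Matrix2d A;
        A << 2, 1,
             1, 3;
        Eigen::Vector2d b(1, 2);
        Eigen::Vector2d x = A.ldlt().solve(b);   // LDLT (Cholesky-style) solve
        std::cout << x.transpose() << std::endl;
        return 0;
    }

Compiling it only needs the Eigen headers on the include path; no library is linked.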

Timed Eigen Compilation 3.3.9 - Time To Compile (Seconds, Fewer Is Better)
  1: 357.47 (SE +/- 1.89, N = 3; Min 353.74 / Avg 357.47 / Max 359.82)
  2: 357.14 (SE +/- 0.65, N = 3; Min 356.03 / Avg 357.14 / Max 358.27)
  HoneyComb LX2K: 315.90 (SE +/- 0.80, N = 3; Min 314.57 / Avg 315.9 / Max 317.34)

Monkey Audio Encoding

This test times how long it takes to encode a sample WAV file to Monkey's Audio APE format. Learn more via the OpenBenchmarking.org test page.

Monkey Audio Encoding 3.99.6 - WAV To APE (Seconds, Fewer Is Better)
  1: 96.13 (SE +/- 0.05, N = 5; Min 96.02 / Avg 96.13 / Max 96.29)
  2: 39.92 (SE +/- 0.06, N = 5; Min 39.82 / Avg 39.92 / Max 40.17)
  HoneyComb LX2K: 37.00 (SE +/- 0.59, N = 5; Min 35.78 / Avg 37 / Max 39.11)
  1. (CXX) g++ options: -O3 -pedantic -rdynamic -lrt

Opus Codec Encoding

Opus is an open audio codec. Opus is a lossy audio compression format designed primarily for interactive real-time applications over the Internet. This test uses Opus-Tools and measures the time required to encode a WAV file to Opus. Learn more via the OpenBenchmarking.org test page.
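
As a pointer to the encoder API behind the Opus-Tools encoder this test times, here is a minimal sketch using libopus directly. It encodes a single 20 ms frame of silence rather than a real WAV file; the buffer sizes are arbitrary.

    // Minimal libopus sketch: create an encoder and encode one 20 ms stereo frame.
    #include <opus/opus.h>
    #include <cstdio>
    #include <vector>

    int main() {
        int err = 0;
        OpusEncoder *enc = opus_encoder_create(48000, 2, OPUS_APPLICATION_AUDIO, &err);
        if (err != OPUS_OK) { std::fprintf(stderr, "%s\n", opus_strerror(err)); return 1; }

        const int frame_size = 960;                     // 20 ms at 48 kHz
        std::vector<opus_int16> pcm(frame_size * 2, 0); // interleaved stereo silence
        std::vector<unsigned char> packet(4000);        // generous output buffer

        opus_int32 bytes = opus_encode(enc, pcm.data(), frame_size, packet.data(),
                                       static_cast<opus_int32>(packet.size()));
        if (bytes < 0) std::fprintf(stderr, "%s\n", opus_strerror(bytes));
        else           std::printf("encoded %d bytes\n", static_cast<int>(bytes));

        opus_encoder_destroy(enc);
        return 0;
    }

A real WAV-to-Opus conversion, as measured here, loops this call over the whole file and wraps the packets in an Ogg container, which is what the opus-tools encoder handles.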

Opus Codec Encoding 1.3.1 - WAV To Opus Encode (Seconds, Fewer Is Better)
  1: 48.31 (SE +/- 0.03, N = 5; Min 48.24 / Avg 48.31 / Max 48.4)
  2: 48.27 (SE +/- 0.01, N = 5; Min 48.24 / Avg 48.27 / Max 48.31)
  HoneyComb LX2K: 37.74 (SE +/- 0.02, N = 5; Min 37.69 / Avg 37.73 / Max 37.78)
  1. (CXX) g++ options: -fvisibility=hidden -logg -lm

eSpeak-NG Speech Engine

This test times how long it takes the eSpeak speech synthesizer to read Project Gutenberg's The Outline of Science and output to a WAV file. This test profile is now tracking the eSpeak-NG version of eSpeak. Learn more via the OpenBenchmarking.org test page.

eSpeak-NG Speech Engine 20200907 - Text-To-Speech Synthesis (Seconds, Fewer Is Better)
  1: 87.83 (SE +/- 0.12, N = 4; Min 87.69 / Avg 87.83 / Max 88.19)
  2: 84.90 (SE +/- 0.21, N = 4; Min 84.47 / Avg 84.9 / Max 85.43)
  HoneyComb LX2K: 84.43 (SE +/- 4.47, N = 4; Min 76.34 / Avg 84.43 / Max 96.52)
  Additional linker note reported for one of the three results: -lpthread -lm
  1. (CC) gcc options: -O2 -std=c99