pi one

ARMv8 Cortex-A72 testing with a BCM2835 Raspberry Pi 400 Rev 1.0 and V3D 4.2 4GB on Debian 10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2103178-HA-PIONE661279

Run Management

Result: 1
Date: March 15 2021
Test Duration: 12 Hours, 35 Minutes


Pi One Benchmarks - System Details

Processor: ARMv8 Cortex-A72 @ 1.80GHz (4 Cores)
Motherboard: BCM2835 Raspberry Pi 400 Rev 1.0
Memory: 4096MB
Disk: 32GB GB1QT
Graphics: V3D 4.2 4GB
Monitor: DELL P2210H
OS: Debian 10
Kernel: 5.4.51-v8+ (aarch64)
Desktop: LXDE 0.10.0
Display Server: X Server 1.20.4
OpenGL: 2.1 Mesa 19.3.2
Compiler: GCC 8.3.0
File-System: ext4
Screen Resolution: 1920x1080

System Logs:
- Kernel parameters: snd_bcm2835.enable_compat_alsa=0 snd_bcm2835.enable_hdmi=1 snd_bcm2835.enable_headphones=1
- Compiler configuration: --build=aarch64-linux-gnu --disable-libphobos --disable-libquadmath --disable-libquadmath-support --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++ --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --program-prefix=aarch64-linux-gnu- --target=aarch64-linux-gnu --with-default-libstdcxx-abi=new --with-gcc-major-version-only -v
- Scaling Governor: cpufreq-dt ondemand
- Security: itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; spec_store_bypass: Vulnerable; spectre_v1: Mitigation of __user pointer sanitization; spectre_v2: Vulnerable; srbds: Not affected; tsx_async_abort: Not affected

Result Summary (Result 1)

oneDNN (ms, fewer is better):
  IP Shapes 1D - f32 - CPU: 181.417
  IP Shapes 3D - f32 - CPU: 141.975
  IP Shapes 1D - u8s8f32 - CPU: 482.051
  IP Shapes 3D - u8s8f32 - CPU: 759.059
  Convolution Batch Shapes Auto - f32 - CPU: 506.454
  Deconvolution Batch shapes_1d - f32 - CPU: 3779.35
  Deconvolution Batch shapes_3d - f32 - CPU: 441.805
  Convolution Batch Shapes Auto - u8s8f32 - CPU: 1003.975
  Deconvolution Batch shapes_1d - u8s8f32 - CPU: 907.737
  Deconvolution Batch shapes_3d - u8s8f32 - CPU: 632.594
  Recurrent Neural Network Training - f32 - CPU: 237960
  Recurrent Neural Network Inference - f32 - CPU: 125895
  Recurrent Neural Network Training - u8s8f32 - CPU: 238247
  Recurrent Neural Network Inference - u8s8f32 - CPU: 130575
  Matrix Multiply Batch Shapes Transformer - f32 - CPU: 80.7102
  Recurrent Neural Network Training - bf16bf16bf16 - CPU: 242197
  Recurrent Neural Network Inference - bf16bf16bf16 - CPU: 129988
  Matrix Multiply Batch Shapes Transformer - u8s8f32 - CPU: 153.400

Sysbench:
  RAM / Memory (MiB/sec): 7065.55
  CPU (Events Per Second): 7139.32

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, using its built-in benchdnn functionality. The result is the total "perf" time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.
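Each oneDNN result below is reported as an average across N runs together with a standard error ("SE +/-") and an observed minimum ("MIN"). A minimal sketch of how those summary statistics are derived from individual run times (the three run values here are hypothetical, chosen near the 181.42 ms IP Shapes 1D result):

```python
import math
import statistics

def summarize(run_times_ms):
    """Summarize benchmark runs the way the result graphs do:
    average, standard error of the mean, and observed minimum."""
    n = len(run_times_ms)
    avg = statistics.mean(run_times_ms)
    se = statistics.stdev(run_times_ms) / math.sqrt(n)  # the "SE +/-" value
    return avg, se, min(run_times_ms), n

# Hypothetical three runs near the IP Shapes 1D f32 result above
avg, se, lo, n = summarize([180.8, 181.3, 182.2])
print(f"{avg:.2f} (SE +/- {se:.2f}, N = {n}, MIN: {lo})")
# → 181.43 (SE +/- 0.41, N = 3, MIN: 180.8)
```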

oneDNN 2.1.2 - all results in ms, fewer is better. All oneDNN tests built with (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -mcpu=native -fPIC -pie -lpthread -ldl

Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU: 181.42 (SE +/- 0.63, N = 3, MIN: 179.04)
Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU: 141.98 (SE +/- 0.21, N = 3, MIN: 139.83)
Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU: 482.05 (SE +/- 2.89, N = 3, MIN: 462.21)
Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU: 759.06 (SE +/- 1.79, N = 3, MIN: 749.07)
Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU: 506.45 (SE +/- 0.62, N = 3, MIN: 503.2)
Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU: 3779.35 (SE +/- 23.68, N = 3, MIN: 3676.83)
Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU: 441.81 (SE +/- 2.63, N = 3, MIN: 435.47)
Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU: 1003.98 (SE +/- 7.65, N = 3, MIN: 984.99)
Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU: 907.74 (SE +/- 11.03, N = 3, MIN: 874.83)
Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU: 632.59 (SE +/- 3.46, N = 3, MIN: 614.91)
Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU: 237960 (SE +/- 436.52, N = 3, MIN: 235788)
Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU: 125895 (SE +/- 1513.69, N = 3, MIN: 123533)
Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU: 238247 (SE +/- 3008.48, N = 9, MIN: 231947)
Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU: 130575 (SE +/- 1296.47, N = 5, MIN: 127671)
Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU: 80.71 (SE +/- 0.20, N = 3, MIN: 79.64)
Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU: 242197 (SE +/- 3796.29, N = 9, MIN: 231788)
Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU: 129988 (SE +/- 1352.35, N = 9, MIN: 123578)
Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU: 153.40 (SE +/- 1.57, N = 15, MIN: 145.19)

Sysbench

This is a benchmark of Sysbench with the built-in CPU and memory sub-tests. Sysbench is a scriptable multi-threaded benchmark tool based on LuaJIT. Learn more via the OpenBenchmarking.org test page.
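The two sysbench figures report different units: MiB/sec for the memory sub-test and events per second for the CPU sub-test. A minimal sketch of how those headline numbers fall out of raw counters (the function names and the sample counters are illustrative, not the sysbench API; the inputs are chosen to land near the results above):

```python
def cpu_events_per_second(total_events: int, elapsed_s: float) -> float:
    # CPU score: completed benchmark events divided by elapsed wall time
    return total_events / elapsed_s

def memory_mib_per_second(total_bytes: int, elapsed_s: float) -> float:
    # Memory score: bytes transferred, scaled to MiB, per second
    return total_bytes / (1024 * 1024) / elapsed_s

# Illustrative counters over a hypothetical 10-second run
print(round(cpu_events_per_second(71_393, 10.0), 1))  # → 7139.3
print(round(memory_mib_per_second(74_090_000_000, 10.0), 1))
```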

Sysbench 1.0.20 - built with (CC) gcc options: -pthread -O2 -funroll-loops -rdynamic -ldl -laio -lm

Test: RAM / Memory (MiB/sec, more is better): 7065.55 (SE +/- 62.33, N = 3)
Test: CPU (Events Per Second, more is better): 7139.32 (SE +/- 0.76, N = 3)