Core i7 10700T Garage

Intel Core i7-10700T testing on Ubuntu 20.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2102203-HA-COREI710791
Test categories represented in this result file:

BLAS (Basic Linear Algebra Sub-Routine): 2 Tests
C++ Boost: 2 Tests
CPU Massive: 5 Tests
Creator Workloads: 5 Tests
Fortran: 4 Tests
HPC - High Performance Computing: 11 Tests
Imaging: 2 Tests
LAPACK (Linear Algebra Pack): 2 Tests
Machine Learning: 5 Tests
MPI Benchmarks: 4 Tests
Multi-Core: 4 Tests
OpenMPI: 6 Tests
Python: 2 Tests
Scientific Computing: 3 Tests
Server CPU: 2 Tests


Runs in this result file:

Run 1: February 17 2021 (test duration: 13 Hours, 11 Minutes)
Run 2: February 18 2021 (test duration: 13 Hours, 10 Minutes)
Run 3: February 19 2021 (test duration: 13 Hours, 27 Minutes)
Average test duration: 13 Hours, 16 Minutes


Core i7 10700T Garage - System Details (runs 1, 2, and 3):

Processor: Intel Core i7-10700T @ 4.50GHz (8 Cores / 16 Threads)
Motherboard: Logic Supply RXM-181 (Z01-0002A026 BIOS)
Chipset: Intel Comet Lake PCH
Memory: 2 x 16384 MB DDR4-2667MT/s M4S0-AGS1O5IK
Disk: 256GB TS256GMTS800
Graphics: (1200MHz)
Audio: Realtek ALC233
Network: Intel I219-LM + Intel I210
OS: Ubuntu 20.10
Kernel: 5.8.0-43-generic (x86_64)
Desktop: GNOME Shell 3.38.2
Display Server: X Server 1.20.9
Vulkan: 1.2.145
Compiler: GCC 10.2.0
File-System: ext4

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: intel_pstate powersave - CPU Microcode: 0xe0 - Thermald 2.3
Python Details: Python 3.8.6
Security Details: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Result Overview (Phoronix Test Suite; runs 1-3, relative performance spanning roughly 100% to 104%) across: Quantum ESPRESSO, OpenFOAM, NCNN, Mobile Neural Network, Etcpak, NAS Parallel Benchmarks, ONNX Runtime, Cpuminer-Opt, HPC Challenge, QuantLib, TensorFlow Lite, Google SynthMark, JPEG XL Decoding, TNN, High Performance Conjugate Gradient, Ngspice, JPEG XL, Stream-Dynamic, lzbench, ASKAP, and toyBrot Fractal Generator.

[Detailed per-test result table for runs 1, 2, and 3 across all benchmarks in this file; the individual test results follow in the graphs below.]

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and offers a selection of the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.
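As a rough illustration of how a "Total Mop/s" figure is derived, the rate is just operation count divided by wall time. The loop below is a stand-in workload for the sketch, not the actual NPB EP kernel (which generates Gaussian random deviates in Fortran/MPI):

```python
import time

def total_mops(num_operations: int, seconds: float) -> float:
    """NPB-style rate metric: millions of operations per second."""
    return num_operations / (seconds * 1e6)

# Stand-in, embarrassingly-parallel-style workload of n operations.
n = 1_000_000
start = time.perf_counter()
acc = 0
for i in range(n):
    acc += i * i
elapsed = time.perf_counter() - start
rate = total_mops(n, elapsed)
```

Only the accounting carries over; pure Python throughput is not comparable to the compiled benchmark.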

NAS Parallel Benchmarks 3.4, Test / Class: EP.D (Total Mop/s; more is better):
1: 825.86 (SE +/- 2.34, N = 3; min 823.1 / max 830.52)
2: 831.34 (SE +/- 3.14, N = 3; min 827.23 / max 837.52)
3: 797.77 (SE +/- 2.04, N = 3; min 794.41 / max 801.44)
1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -ldl -levent -levent_pthreads -lutil -lm -lrt -lz 2. Open MPI 4.0.3

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor across a wide variety of cryptocurrencies. The benchmark reports the hash speed for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.
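A hash-rate figure is simply hashes computed per unit of time. A minimal sketch of that measurement using the standard library's BLAKE2s (related to the "Blake-2 S" algorithm benchmarked below); cpuminer-opt itself uses heavily optimized vectorized C, so absolute numbers are not comparable:

```python
import hashlib
import time

def measure_khs(num_hashes: int = 50_000) -> float:
    """Hash a varying 80-byte, block-header-sized buffer and report kH/s."""
    start = time.perf_counter()
    for nonce in range(num_hashes):
        data = nonce.to_bytes(4, "little") + b"\x00" * 76
        hashlib.blake2s(data).digest()
    elapsed = time.perf_counter() - start
    return num_hashes / elapsed / 1000.0

khs = measure_khs()
```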

Cpuminer-Opt 3.15.5, Algorithm: Ringcoin (kH/s; more is better):
1: 1013.02 (SE +/- 10.49, N = 3; min 999.19 / max 1033.59)
2: 1012.53 (SE +/- 4.73, N = 3; min 1007.59 / max 1021.99)
3: 988.10 (SE +/- 11.18, N = 4; min 962.44 / max 1008.34)
1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

OpenFOAM

OpenFOAM is the leading free, open source software for computational fluid dynamics (CFD). Learn more via the OpenBenchmarking.org test page.

OpenFOAM 8, Input: Motorbike 30M (Seconds; fewer is better):
1: 255.81 (SE +/- 3.06, N = 3; min 250.3 / max 260.89)
2: 250.68 (SE +/- 0.93, N = 3; min 249.72 / max 252.54)
3: 250.45 (SE +/- 0.86, N = 3; min 249.44 / max 252.16)
1. (CXX) g++ options: -std=c++11 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm

Quantum ESPRESSO

Quantum ESPRESSO is an integrated suite of open-source computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials. Learn more via the OpenBenchmarking.org test page.

Quantum ESPRESSO 6.7, Input: AUSURF112 (Seconds; fewer is better):
1: 3980 (SE +/- 20.00, N = 3; min 3960 / max 4020)
2: 3900
3: 3940 (SE +/- 20.00, N = 3; min 3900 / max 3960)
1. (F9X) gfortran options: -lopenblas -lFoX_dom -lFoX_sax -lFoX_wxml -lFoX_common -lFoX_utils -lFoX_fsys -lfftw3 -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -ldl -levent -levent_pthreads -lutil -lm -lrt -lz

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU - Model: resnet50 (ms; fewer is better):
1: 35.70 (SE +/- 0.05, N = 3; min 35.64 / max 35.8; reported MIN 34.38 / MAX 46.02)
2: 35.47 (SE +/- 0.07, N = 3; min 35.39 / max 35.61; reported MIN 34.41 / MAX 48.44)
3: 36.16 (SE +/- 0.39, N = 3; min 35.75 / max 36.94; reported MIN 34.35 / MAX 38.49)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.1.1, Model: inception-v3 (ms; fewer is better):
1: 52.38 (SE +/- 0.24, N = 3; min 52.03 / max 52.84; reported MIN 51.02 / MAX 87.05)
2: 51.41 (SE +/- 0.13, N = 3; min 51.23 / max 51.67; reported MIN 50.38 / MAX 65.63)
3: 51.64 (SE +/- 0.14, N = 3; min 51.47 / max 51.93; reported MIN 50.38 / MAX 152.21)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Cpuminer-Opt


Cpuminer-Opt 3.15.5, Algorithm: Blake-2 S (kH/s; more is better):
1: 428473 (SE +/- 3759.02, N = 15; min 399500 / max 447060)
2: 431310 (SE +/- 3459.56, N = 3; min 425110 / max 437070)
3: 423407 (SE +/- 5917.84, N = 15; min 380670 / max 458120)
1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet," focused on providing open-source, very fast ETC and S3 texture compression support. Learn more via the OpenBenchmarking.org test page.
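For context on what the DXT1 configuration measures: DXT1 stores each 4x4 pixel block as two RGB565 endpoint colors plus 2-bit per-pixel indices. A simplified sketch of the endpoint step; the min/max heuristic here is an illustrative assumption, not etcpak's algorithm (real encoders search endpoints and indices far more carefully):

```python
def to_rgb565(r: int, g: int, b: int) -> int:
    """Quantize 8-bit RGB to the 16-bit RGB565 format DXT1 endpoints use."""
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

def block_endpoints(block):
    """Choose per-channel min/max colors of a 4x4 block as the two endpoints.
    Heuristic for illustration only."""
    lo = tuple(min(px[ch] for px in block) for ch in range(3))
    hi = tuple(max(px[ch] for px in block) for ch in range(3))
    return to_rgb565(*lo), to_rgb565(*hi)

# A flat 4x4 block quantizes to two identical endpoints.
endpoints = block_endpoints([(200, 40, 40)] * 16)
```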

Etcpak 0.7, Configuration: DXT1 (Mpx/s; more is better):
1: 1147.32 (SE +/- 1.42, N = 3; min 1144.49 / max 1148.88)
2: 1128.89 (SE +/- 13.79, N = 4; min 1087.65 / max 1144.71)
3: 1148.80 (SE +/- 0.60, N = 3; min 1148 / max 1149.98)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread

HPC Challenge

HPC Challenge (HPCC) is a cluster-focused benchmark consisting of the HPL Linpack TPP benchmark, DGEMM, STREAM, PTRANS, RandomAccess, FFT, and communication bandwidth and latency. This HPC Challenge test profile attempts to ship with standard yet versatile configuration/input files though they can be modified. Learn more via the OpenBenchmarking.org test page.
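The STREAM component of HPCC measures sustainable memory bandwidth with simple kernels such as the triad a[i] = b[i] + s*c[i]. A minimal sketch of that kernel and its GB/s accounting (three 8-byte arrays touched per element); pure Python is orders of magnitude slower than the compiled benchmark, so only the arithmetic of the metric carries over:

```python
import time
from array import array

def stream_triad(n: int = 200_000, scalar: float = 3.0) -> float:
    """Run the STREAM triad kernel once and return effective GB/s."""
    b = array("d", (float(i) for i in range(n)))
    c = array("d", (1.0,) * n)
    a = array("d", bytes(8 * n))  # n zero-initialized doubles
    start = time.perf_counter()
    for i in range(n):
        a[i] = b[i] + scalar * c[i]
    elapsed = time.perf_counter() - start
    assert a[n - 1] == b[n - 1] + scalar * c[n - 1]  # sanity check on the kernel
    return 3 * 8 * n / elapsed / 1e9  # bytes moved: read b, read c, write a

gbs = stream_triad()
```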

HPC Challenge 1.5.0, Test / Class: Random Ring Latency (usecs; fewer is better):
1: 0.39282 (SE +/- 0.00046, N = 3)
2: 0.38956 (SE +/- 0.00047, N = 3)
3: 0.38605 (SE +/- 0.00374, N = 3)
1. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops 2. ATLAS + Open MPI 4.0.3

HPC Challenge 1.5.0, Test / Class: Max Ping Pong Bandwidth (MB/s; more is better):
1: 13427.35 (SE +/- 237.99, N = 3; min 13025.37 / max 13849.07)
2: 13642.75 (SE +/- 60.56, N = 3; min 13528.46 / max 13734.61)
3: 13546.42 (SE +/- 37.98, N = 3; min 13482.45 / max 13613.87)
1. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops 2. ATLAS + Open MPI 4.0.3

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.
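The "Inferences Per Minute" metric reported below is completed inferences divided by elapsed time, scaled to one minute. A sketch of that measurement with a stand-in model function; this does not use the onnxruntime API, and `fake_infer` is a hypothetical placeholder:

```python
import time

def inferences_per_minute(infer, runs: int = 200) -> float:
    """Time repeated calls to an inference callable and report IPM."""
    start = time.perf_counter()
    for _ in range(runs):
        infer()
    elapsed = time.perf_counter() - start
    return runs / elapsed * 60.0

# Stand-in "model": a trivial dot product.
weights = [0.5] * 64
def fake_infer():
    x = [1.0] * 64
    return sum(w * v for w, v in zip(weights, x))

ipm = inferences_per_minute(fake_infer)
```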

ONNX Runtime 1.6, Model: shufflenet-v2-10 - Device: OpenMP CPU (Inferences Per Minute; more is better):
1: 12519 (SE +/- 46.61, N = 3; min 12455.5 / max 12609.5)
2: 12530 (SE +/- 57.64, N = 3; min 12465.5 / max 12645)
3: 12336 (SE +/- 31.67, N = 3; min 12279 / max 12388.5)
1. (CXX) g++ options: -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt

NCNN


NCNN 20201218, Target: CPU - Model: yolov4-tiny (ms; fewer is better):
1: 33.93 (SE +/- 0.04, N = 3; min 33.86 / max 33.99; reported MIN 33.29 / MAX 35.31)
2: 33.86 (SE +/- 0.06, N = 3; min 33.75 / max 33.92; reported MIN 33.38 / MAX 40.78)
3: 34.37 (SE +/- 0.33, N = 3; min 33.99 / max 35.03; reported MIN 33.49 / MAX 36.64)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Cpuminer-Opt


Cpuminer-Opt 3.15.5, Algorithm: Magi (kH/s; more is better):
1: 140.66 (SE +/- 0.58, N = 3; min 139.85 / max 141.79)
2: 140.95 (SE +/- 0.45, N = 3; min 140.06 / max 141.5)
3: 138.89 (SE +/- 0.98, N = 3; min 137.6 / max 140.82)
1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23, Model: NASNet Mobile (Microseconds; fewer is better):
1: 296004 (SE +/- 1888.41, N = 3; min 293001 / max 299489)
2: 294322 (SE +/- 2471.13, N = 3; min 289440 / max 297428)
3: 298494 (SE +/- 3551.25, N = 3; min 291477 / max 302954)

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve); some previous ASKAP benchmarks are also included for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.
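The Hogbom CLEAN algorithm behind tHogbomClean iteratively locates the brightest residual pixel and subtracts a gain-scaled copy of the point spread function (PSF) centred there. A 1-D toy sketch of that loop; the real benchmark operates on 2-D images in optimized C++:

```python
def hogbom_clean(dirty, psf, gain=0.1, niter=50, threshold=0.0):
    """Minimal 1-D Hogbom CLEAN: peel off gain-scaled PSF copies at the
    residual peak, accumulating them into a model image."""
    residual = list(dirty)
    model = [0.0] * len(dirty)
    half = len(psf) // 2
    for _ in range(niter):
        peak = max(range(len(residual)), key=lambda i: abs(residual[i]))
        val = residual[peak]
        if abs(val) <= threshold:
            break
        model[peak] += gain * val
        for j, p in enumerate(psf):
            k = peak + j - half
            if 0 <= k < len(residual):
                residual[k] -= gain * val * p
    return model, residual
```

With a dirty image that is exactly one PSF-shaped source, the residual shrinks geometrically while the model accumulates the source flux.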

ASKAP 1.0, Test: tConvolve OpenMP - Gridding (Million Grid Points Per Second; more is better):
1: 1360.77 (SE +/- 2.32, N = 3; min 1358.45 / max 1365.42)
2: 1342.48 (SE +/- 2.25, N = 3; min 1337.97 / max 1344.73)
3: 1360.77 (SE +/- 2.32, N = 3; min 1358.45 / max 1365.42)
1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

Mobile Neural Network


Mobile Neural Network 1.1.1, Model: SqueezeNetV1.0 (ms; fewer is better):
1: 8.393 (SE +/- 0.032, N = 3; min 8.33 / max 8.43; reported MIN 7.82 / MAX 21.65)
2: 8.288 (SE +/- 0.017, N = 3; min 8.26 / max 8.31; reported MIN 7.81 / MAX 10.97)
3: 8.328 (SE +/- 0.029, N = 3; min 8.27 / max 8.37; reported MIN 7.76 / MAX 22.54)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

HPC Challenge


HPC Challenge 1.5.0, Test / Class: Random Ring Bandwidth (GB/s; more is better):
1: 1.82500 (SE +/- 0.00583, N = 3)
2: 1.81229 (SE +/- 0.01081, N = 3)
3: 1.83458 (SE +/- 0.00756, N = 3)
1. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops 2. ATLAS + Open MPI 4.0.3

Cpuminer-Opt


Cpuminer-Opt 3.15.5, Algorithm: Deepcoin (kH/s; more is better):
1: 7603.79 (SE +/- 21.23, N = 3; min 7579.36 / max 7646.08)
2: 7553.85 (SE +/- 2.98, N = 3; min 7547.96 / max 7557.63)
3: 7645.87 (SE +/- 60.86, N = 14; min 7486.8 / max 8342.97)
1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

JPEG XL

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.

JPEG XL 0.3.1, Input: JPEG - Encode Speed: 7 (MP/s; more is better):
1: 53.13 (SE +/- 0.46, N = 3; min 52.23 / max 53.69)
2: 53.70 (SE +/- 0.04, N = 3; min 53.63 / max 53.77)
3: 53.70 (SE +/- 0.08, N = 3; min 53.56 / max 53.85)
1. (CXX) g++ options: -funwind-tables -O3 -O2 -pthread -fPIE -pie -ldl

toyBrot Fractal Generator

ToyBrot is a Mandelbrot fractal generator supporting C++ threads/tasks, OpenMP, Intel Threaded Building Blocks (TBB), and other targets. Learn more via the OpenBenchmarking.org test page.
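The Mandelbrot workload that all toyBrot backends parallelize is the per-pixel escape-time iteration z -> z^2 + c. A single-threaded Python sketch of that algorithm; toyBrot itself is C++, and the image size and bounds here are illustrative:

```python
def escape_iterations(c: complex, max_iter: int = 256) -> int:
    """Escape-time count for the Mandelbrot map z -> z*z + c."""
    z = 0j
    for n in range(max_iter):
        if abs(z) > 2.0:
            return n
        z = z * z + c
    return max_iter

def render(width: int = 40, height: int = 20, max_iter: int = 64):
    """Iterate every pixel over re in [-2, 1], im in [-1.2, 1.2]."""
    rows = []
    for y in range(height):
        im = -1.2 + 2.4 * y / (height - 1)
        rows.append([escape_iterations(complex(-2 + 3 * x / (width - 1), im),
                                       max_iter)
                     for x in range(width)])
    return rows
```

Each pixel is independent, which is why the workload maps cleanly onto C++ threads/tasks, OpenMP, and TBB.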

toyBrot Fractal Generator 2020-11-18, Implementation: OpenMP (ms; fewer is better):
1: 72458 (SE +/- 483.65, N = 3; min 71497 / max 73034)
2: 71700 (SE +/- 433.05, N = 3; min 70834 / max 72156)
3: 72247 (SE +/- 545.83, N = 3; min 71158 / max 72861)
1. (CXX) g++ options: -O3 -lpthread

ONNX Runtime


ONNX Runtime 1.6, Model: bertsquad-10 - Device: OpenMP CPU (Inferences Per Minute; more is better):
1: 388 (SE +/- 1.36, N = 3; min 386 / max 390.5)
2: 390 (SE +/- 1.69, N = 3; min 388 / max 393.5)
3: 386 (SE +/- 1.17, N = 3; min 384 / max 388)
1. (CXX) g++ options: -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt

toyBrot Fractal Generator


toyBrot Fractal Generator 2020-11-18, Implementation: TBB (ms; fewer is better):
1: 71646 (SE +/- 628.40, N = 3; min 70396 / max 72384)
2: 71108 (SE +/- 759.96, N = 4; min 68923 / max 72449)
3: 71824 (SE +/- 835.11, N = 4; min 69413 / max 73194)
1. (CXX) g++ options: -O3 -lpthread

NCNN


NCNN 20201218, Target: CPU - Model: mobilenet (ms; fewer is better):
1: 25.49 (SE +/- 0.05, N = 3; min 25.41 / max 25.57; reported MIN 24.55 / MAX 26.52)
2: 25.31 (SE +/- 0.06, N = 3; min 25.2 / max 25.41; reported MIN 24.48 / MAX 27.34)
3: 25.56 (SE +/- 0.06, N = 3; min 25.44 / max 25.65; reported MIN 24.69 / MAX 26.85)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU-v2-v2 - Model: mobilenet-v2 (ms; fewer is better):
1: 6.21 (SE +/- 0.00, N = 3; min 6.2 / max 6.21; reported MIN 6.04 / MAX 7.07)
2: 6.19 (SE +/- 0.05, N = 3; min 6.09 / max 6.24; reported MIN 5.93 / MAX 7.07)
3: 6.15 (SE +/- 0.04, N = 3; min 6.08 / max 6.2; reported MIN 5.92 / MAX 7.13)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NAS Parallel Benchmarks


NAS Parallel Benchmarks 3.4, Test / Class: FT.C (Total Mop/s; more is better):
1: 11013.57 (SE +/- 63.58, N = 3; min 10913.86 / max 11131.77)
2: 11070.29 (SE +/- 65.81, N = 3; min 10950.03 / max 11176.75)
3: 11118.96 (SE +/- 76.04, N = 3; min 11016.69 / max 11267.56)
1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -ldl -levent -levent_pthreads -lutil -lm -lrt -lz 2. Open MPI 4.0.3

HPC Challenge


HPC Challenge 1.5.0, Test / Class: EP-DGEMM (GFLOPS; more is better):
1: 6.67608 (SE +/- 0.02863, N = 3; min 6.63 / max 6.73)
2: 6.73621 (SE +/- 0.03149, N = 3; min 6.7 / max 6.8)
3: 6.73862 (SE +/- 0.02214, N = 3; min 6.71 / max 6.78)
1. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops 2. ATLAS + Open MPI 4.0.3

NAS Parallel Benchmarks


NAS Parallel Benchmarks 3.4, Test / Class: IS.D (Total Mop/s; more is better):
1: 774.42 (SE +/- 4.53, N = 3; min 768.18 / max 783.22)
2: 779.75 (SE +/- 5.84, N = 3; min 773.2 / max 791.41)
3: 780.60 (SE +/- 4.42, N = 3; min 774.78 / max 789.27)
1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -ldl -levent -levent_pthreads -lutil -lm -lrt -lz 2. Open MPI 4.0.3

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU - Model: squeezenet_ssd (ms, fewer is better)
  Run 1: 26.63 (SE +/- 0.08, N = 3; Min: 26.53 / Avg: 26.63 / Max: 26.8) [MIN: 26.18 / MAX: 27.78]
  Run 2: 26.42 (SE +/- 0.11, N = 3; Min: 26.2 / Avg: 26.42 / Max: 26.58) [MIN: 25.47 / MAX: 29.3]
  Run 3: 26.60 (SE +/- 0.06, N = 3; Min: 26.51 / Avg: 26.6 / Max: 26.71) [MIN: 25.65 / MAX: 36.55]
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NAS Parallel Benchmarks

NAS Parallel Benchmarks 3.4, Test / Class: MG.C (Total Mop/s, more is better)
  Run 1: 10209.22 (SE +/- 35.04, N = 3; Min: 10145.79 / Avg: 10209.22 / Max: 10266.73)
  Run 2: 10272.06 (SE +/- 12.11, N = 3; Min: 10250.84 / Avg: 10272.06 / Max: 10292.79)
  Run 3: 10288.49 (SE +/- 21.97, N = 3; Min: 10248.97 / Avg: 10288.49 / Max: 10324.9)
  1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -ldl -levent -levent_pthreads -lutil -lm -lrt -lz
  2. Open MPI 4.0.3

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve); some earlier ASKAP benchmarks are also included for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

ASKAP 1.0, Test: tConvolve MPI - Gridding (Mpix/sec, more is better)
  Run 1: 1929.38 (SE +/- 0.00, N = 3; Min: 1929.38 / Avg: 1929.38 / Max: 1929.38)
  Run 2: 1943.74 (SE +/- 8.31, N = 3; Min: 1929.38 / Avg: 1943.74 / Max: 1958.18)
  Run 3: 1934.21 (SE +/- 9.46, N = 3; Min: 1915.3 / Avg: 1934.21 / Max: 1943.67)
  1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

NCNN

NCNN 20201218, Target: CPU - Model: resnet18 (ms, fewer is better)
  Run 1: 17.89 (SE +/- 0.01, N = 3; Min: 17.87 / Avg: 17.89 / Max: 17.92) [MIN: 16.81 / MAX: 19.27]
  Run 2: 17.76 (SE +/- 0.01, N = 3; Min: 17.74 / Avg: 17.76 / Max: 17.78) [MIN: 16.74 / MAX: 19.77]
  Run 3: 17.88 (SE +/- 0.02, N = 3; Min: 17.85 / Avg: 17.88 / Max: 17.91) [MIN: 16.81 / MAX: 18.71]
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

JPEG XL

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities, with JPEG XL offering better image quality and compression than legacy JPEG. This test profile currently focuses on multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.

JPEG XL 0.3.1, Input: JPEG - Encode Speed: 5 (MP/s, more is better)
  Run 1: 54.17 (SE +/- 0.38, N = 3; Min: 53.74 / Avg: 54.17 / Max: 54.93)
  Run 2: 53.81 (SE +/- 0.03, N = 3; Min: 53.77 / Avg: 53.81 / Max: 53.86)
  Run 3: 53.79 (SE +/- 0.19, N = 3; Min: 53.41 / Avg: 53.79 / Max: 54.03)
  1. (CXX) g++ options: -funwind-tables -O3 -O2 -pthread -fPIE -pie -ldl

JPEG XL 0.3.1, Input: PNG - Encode Speed: 5 (MP/s, more is better)
  Run 1: 54.15 (SE +/- 0.62, N = 3; Min: 53.53 / Avg: 54.15 / Max: 55.39)
  Run 2: 54.52 (SE +/- 0.63, N = 3; Min: 53.84 / Avg: 54.52 / Max: 55.78)
  Run 3: 54.49 (SE +/- 0.61, N = 3; Min: 53.8 / Avg: 54.49 / Max: 55.7)
  1. (CXX) g++ options: -funwind-tables -O3 -O2 -pthread -fPIE -pie -ldl

HPC Challenge

HPC Challenge 1.5.0, Test / Class: G-Ptrans (GB/s, more is better)
  Run 1: 2.38656 (SE +/- 0.00306, N = 3; Min: 2.38 / Avg: 2.39 / Max: 2.39)
  Run 2: 2.40233 (SE +/- 0.00413, N = 3; Min: 2.4 / Avg: 2.4 / Max: 2.41)
  Run 3: 2.40251 (SE +/- 0.00438, N = 3; Min: 2.4 / Avg: 2.4 / Max: 2.41)
  1. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops
  2. ATLAS + Open MPI 4.0.3

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.6, Model: super-resolution-10 - Device: OpenMP CPU (Inferences Per Minute, more is better)
  Run 1: 2894 (SE +/- 9.77, N = 3; Min: 2874.5 / Avg: 2893.67 / Max: 2906.5)
  Run 2: 2903 (SE +/- 4.06, N = 3; Min: 2895.5 / Avg: 2902.83 / Max: 2909.5)
  Run 3: 2884 (SE +/- 5.73, N = 3; Min: 2876.5 / Avg: 2884.33 / Max: 2895.5)
  1. (CXX) g++ options: -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile measures the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23, Model: Inception V4 (Microseconds, fewer is better)
  Run 1: 5140843 (SE +/- 25589.42, N = 3; Min: 5095180 / Avg: 5140843.33 / Max: 5183690)
  Run 2: 5107473 (SE +/- 15416.80, N = 3; Min: 5079580 / Avg: 5107473.33 / Max: 5132800)
  Run 3: 5124043 (SE +/- 11793.57, N = 3; Min: 5100550 / Avg: 5124043.33 / Max: 5137610)
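The TensorFlow Lite figures are average inference times in microseconds. A minimal sketch of that style of measurement, using a dummy arithmetic workload in place of a real model (the dummy function and iteration count are assumptions, not part of the test profile):

```python
import time

def average_inference_us(run_once, iterations=10):
    """Time a callable over several iterations and report the mean in microseconds."""
    start = time.perf_counter()
    for _ in range(iterations):
        run_once()
    elapsed = time.perf_counter() - start
    return elapsed / iterations * 1e6

# Stand-in for model inference: a small arithmetic loop.
def dummy_inference():
    sum(i * i for i in range(1000))

print(f"{average_inference_us(dummy_inference):.1f} us per inference")
```

Averaging over many iterations, as done here, is what keeps the per-run standard errors in the tables above small relative to the absolute times.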

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.1.1, Model: resnet-v2-50 (ms, fewer is better)
  Run 1: 44.76 (SE +/- 0.08, N = 3; Min: 44.6 / Avg: 44.76 / Max: 44.88) [MIN: 43.91 / MAX: 55.53]
  Run 2: 44.69 (SE +/- 0.08, N = 3; Min: 44.57 / Avg: 44.69 / Max: 44.84) [MIN: 43.82 / MAX: 157.25]
  Run 3: 44.47 (SE +/- 0.07, N = 3; Min: 44.34 / Avg: 44.47 / Max: 44.59) [MIN: 43.53 / MAX: 59.42]
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

toyBrot Fractal Generator

ToyBrot is a Mandelbrot fractal generator supporting C++ threads/tasks, OpenMP, Intel Threaded Building Blocks (TBB), and other targets. Learn more via the OpenBenchmarking.org test page.

toyBrot Fractal Generator 2020-11-18, Implementation: C++ Tasks (ms, fewer is better)
  Run 1: 72124 (SE +/- 435.66, N = 3; Min: 71256 / Avg: 72124.33 / Max: 72621)
  Run 2: 71793 (SE +/- 498.50, N = 3; Min: 70806 / Avg: 71792.67 / Max: 72410)
  Run 3: 72225 (SE +/- 332.79, N = 3; Min: 71564 / Avg: 72225.33 / Max: 72621)
  1. (CXX) g++ options: -O3 -lpthread
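toyBrot's workload is the classic Mandelbrot escape-time iteration, parallelized across pixels. A minimal single-point sketch of that iteration in Python (toyBrot itself is C++; the pixel loop and task dispatch are omitted here):

```python
def mandelbrot_iters(c, max_iter=256):
    """Count iterations until z = z^2 + c escapes |z| > 2 (or hits max_iter)."""
    z = 0j
    for i in range(max_iter):
        if abs(z) > 2.0:
            return i
        z = z * z + c
    return max_iter

# Points inside the set never escape; points far outside escape quickly.
print(mandelbrot_iters(0j))      # 256 (in the set)
print(mandelbrot_iters(2 + 0j))  # 2 (escapes after two iterations)
```

The C++ Tasks and C++ Threads results above differ only in how rows of pixels are handed to workers; the per-pixel arithmetic is the same.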

HPC Challenge

HPC Challenge 1.5.0, Test / Class: G-HPL (GFLOPS, more is better)
  Run 1: 41.10 (SE +/- 0.14, N = 3; Min: 40.88 / Avg: 41.1 / Max: 41.37)
  Run 2: 41.33 (SE +/- 0.13, N = 3; Min: 41.09 / Avg: 41.33 / Max: 41.53)
  Run 3: 41.25 (SE +/- 0.14, N = 3; Min: 41.1 / Avg: 41.25 / Max: 41.52)
  1. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops
  2. ATLAS + Open MPI 4.0.3
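G-HPL solves a dense linear system by LU factorization; the GFLOPS figure follows from the conventional operation count of 2/3·N^3 + 2·N^2 floating-point operations divided by wall time. A sketch of that accounting (the problem size and runtime below are hypothetical, not taken from this result file):

```python
def hpl_gflops(n, seconds):
    """HPL rate from the conventional LU factorization flop count."""
    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2
    return flops / seconds / 1e9

# Hypothetical N and time chosen to land near the ~41 GFLOPS seen above.
print(round(hpl_gflops(20000, 130.0), 2))  # 41.03
```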

NCNN

NCNN 20201218, Target: CPU - Model: alexnet (ms, fewer is better)
  Run 1: 15.03 (SE +/- 0.00, N = 3; Min: 15.02 / Avg: 15.03 / Max: 15.03) [MIN: 14.2 / MAX: 24.48]
  Run 2: 14.95 (SE +/- 0.01, N = 3; Min: 14.94 / Avg: 14.95 / Max: 14.96) [MIN: 14.17 / MAX: 25.38]
  Run 3: 14.99 (SE +/- 0.01, N = 3; Min: 14.97 / Avg: 14.99 / Max: 15.01) [MIN: 14.15 / MAX: 15.66]
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

toyBrot Fractal Generator

toyBrot Fractal Generator 2020-11-18, Implementation: C++ Threads (ms, fewer is better)
  Run 1: 72348 (SE +/- 476.52, N = 3; Min: 71395 / Avg: 72348 / Max: 72832)
  Run 2: 71971 (SE +/- 560.86, N = 3; Min: 70855 / Avg: 71971 / Max: 72627)
  Run 3: 72269 (SE +/- 439.23, N = 3; Min: 71391 / Avg: 72268.67 / Max: 72740)
  1. (CXX) g++ options: -O3 -lpthread

QuantLib

QuantLib is an open-source library/framework for quantitative finance, covering modeling, trading, and risk management scenarios. QuantLib is written in C++ with Boost, and its built-in benchmark reports the QuantLib Benchmark Index score. Learn more via the OpenBenchmarking.org test page.

QuantLib 1.21 (MFLOPS, more is better)
  Run 1: 2092.7 (SE +/- 22.63, N = 4; Min: 2025.2 / Avg: 2092.65 / Max: 2119.7)
  Run 2: 2086.8 (SE +/- 29.40, N = 3; Min: 2028 / Avg: 2086.8 / Max: 2116.5)
  Run 3: 2082.0 (SE +/- 26.47, N = 3; Min: 2029.2 / Avg: 2082.03 / Max: 2111.4)
  1. (CXX) g++ options: -O3 -march=native -rdynamic

JPEG XL Decoding

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities, with JPEG XL offering better image quality and compression than legacy JPEG. This test profile is suited for JPEG XL decode performance testing to a PNG output file; the pts/jpexl test profile covers encode performance. Learn more via the OpenBenchmarking.org test page.

JPEG XL Decoding 0.3.1, CPU Threads: All (MP/s, more is better)
  Run 1: 155.96 (SE +/- 1.95, N = 3; Min: 153.84 / Avg: 155.96 / Max: 159.86)
  Run 2: 156.71 (SE +/- 2.16, N = 3; Min: 154.36 / Avg: 156.71 / Max: 161.02)
  Run 3: 156.67 (SE +/- 1.88, N = 3; Min: 154.55 / Avg: 156.67 / Max: 160.41)

HPC Challenge

HPC Challenge 1.5.0, Test / Class: G-Random Access (GUP/s, more is better)
  Run 1: 0.02931 (SE +/- 0.00064, N = 3; Min: 0.03 / Avg: 0.03 / Max: 0.03)
  Run 2: 0.02945 (SE +/- 0.00012, N = 3; Min: 0.03 / Avg: 0.03 / Max: 0.03)
  Run 3: 0.02942 (SE +/- 0.00015, N = 3; Min: 0.03 / Avg: 0.03 / Max: 0.03)
  1. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops
  2. ATLAS + Open MPI 4.0.3
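RandomAccess measures giga-updates per second (GUP/s): XOR-updates at pseudo-random locations in a large table, which stresses memory latency rather than bandwidth. A toy sketch of the update loop (this is not the official HPCC kernel; the xorshift generator here merely stands in for HPCC's random stream):

```python
def random_access_updates(table_bits=20, updates=1 << 18):
    """XOR pseudo-random 64-bit values into a 2^table_bits entry table."""
    size = 1 << table_bits
    table = list(range(size))
    x = 1
    for _ in range(updates):
        # xorshift64 step standing in for HPCC's random-number generator.
        x ^= (x << 13) & 0xFFFFFFFFFFFFFFFF
        x ^= x >> 7
        x ^= (x << 17) & 0xFFFFFFFFFFFFFFFF
        table[x & (size - 1)] ^= x
    return table

table = random_access_updates()
print(len(table))  # 1048576
```

GUP/s would then be updates / seconds / 1e9; at ~0.029 GUP/s, this system performs roughly 29 million such updates per second.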

lzbench

lzbench is an in-memory benchmark of various compressors. The file used for compression is a Linux kernel source tree tarball. Learn more via the OpenBenchmarking.org test page.

lzbench 1.8, Test: Libdeflate 1 - Process: Compression (MB/s, more is better)
  Run 1: 218 (SE +/- 0.33, N = 3; Min: 217 / Avg: 217.67 / Max: 218)
  Run 2: 217
  Run 3: 217
  1. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3
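lzbench reports in-memory compressor throughput in MB/s of input processed. A minimal sketch of that measurement, using Python's bundled zlib as a stand-in for libdeflate (the payload and level are assumptions for illustration):

```python
import time
import zlib

def compression_mbps(data, level=1):
    """Compress a buffer once; return (compressed size, input MB per second)."""
    start = time.perf_counter()
    compressed = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    return len(compressed), len(data) / elapsed / 1e6

payload = b"the quick brown fox jumps over the lazy dog " * 50000
size, mbps = compression_mbps(payload)
print(size < len(payload), mbps > 0)  # True True
```

lzbench itself compresses a Linux kernel source tarball entirely in memory, so the figure reflects the codec rather than disk speed.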

ASKAP

ASKAP 1.0, Test: tConvolve MPI - Degridding (Mpix/sec, more is better)
  Run 1: 1776.96 (SE +/- 4.02, N = 3; Min: 1772.94 / Avg: 1776.96 / Max: 1785)
  Run 2: 1781.04 (SE +/- 8.10, N = 3; Min: 1772.94 / Avg: 1781.04 / Max: 1797.23)
  Run 3: 1772.99 (SE +/- 6.92, N = 3; Min: 1761.04 / Avg: 1772.99 / Max: 1785)
  1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

NAS Parallel Benchmarks

NAS Parallel Benchmarks 3.4, Test / Class: LU.C (Total Mop/s, more is better)
  Run 1: 21579.62 (SE +/- 52.44, N = 3; Min: 21500.23 / Avg: 21579.62 / Max: 21678.68)
  Run 2: 21621.52 (SE +/- 67.23, N = 3; Min: 21541.45 / Avg: 21621.52 / Max: 21755.1)
  Run 3: 21676.33 (SE +/- 43.98, N = 3; Min: 21606.85 / Avg: 21676.33 / Max: 21757.78)
  1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -ldl -levent -levent_pthreads -lutil -lm -lrt -lz
  2. Open MPI 4.0.3

Ngspice

Ngspice is an open-source SPICE circuit simulator. Ngspice was originally based on the Berkeley SPICE electronic circuit simulator. Ngspice supports basic threading using OpenMP. This test profile is making use of the ISCAS 85 benchmark circuits. Learn more via the OpenBenchmarking.org test page.

Ngspice 34, Circuit: C7552 (Seconds, fewer is better)
  Run 1: 115.05 (SE +/- 0.02, N = 3; Min: 115.01 / Avg: 115.05 / Max: 115.07)
  Run 2: 114.97 (SE +/- 0.34, N = 3; Min: 114.56 / Avg: 114.97 / Max: 115.65)
  Run 3: 114.54 (SE +/- 0.08, N = 3; Min: 114.4 / Avg: 114.54 / Max: 114.68)
  1. (CC) gcc options: -O0 -fopenmp -lm -lstdc++ -lfftw3 -lXaw -lXmu -lXt -lXext -lX11 -lSM -lICE

HPC Challenge

HPC Challenge 1.5.0, Test / Class: G-Ffte (GFLOPS, more is better)
  Run 1: 4.18057 (SE +/- 0.00966, N = 3; Min: 4.17 / Avg: 4.18 / Max: 4.2)
  Run 2: 4.19489 (SE +/- 0.00931, N = 3; Min: 4.18 / Avg: 4.19 / Max: 4.21)
  Run 3: 4.17646 (SE +/- 0.00949, N = 3; Min: 4.16 / Avg: 4.18 / Max: 4.19)
  1. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops
  2. ATLAS + Open MPI 4.0.3

TensorFlow Lite

TensorFlow Lite 2020-08-23, Model: Mobilenet Float (Microseconds, fewer is better)
  Run 1: 233956 (SE +/- 1981.56, N = 3; Min: 230001 / Avg: 233956 / Max: 236153)
  Run 2: 233898 (SE +/- 1520.01, N = 3; Min: 230858 / Avg: 233898 / Max: 235426)
  Run 3: 234901 (SE +/- 1572.66, N = 3; Min: 231757 / Avg: 234901.33 / Max: 236542)

NCNN

NCNN 20201218, Target: CPU - Model: vgg16 (ms, fewer is better)
  Run 1: 66.40 (SE +/- 0.01, N = 3; Min: 66.38 / Avg: 66.4 / Max: 66.42) [MIN: 66.04 / MAX: 67.71]
  Run 2: 66.30 (SE +/- 0.05, N = 3; Min: 66.21 / Avg: 66.3 / Max: 66.36) [MIN: 65.68 / MAX: 76.72]
  Run 3: 66.57 (SE +/- 0.15, N = 3; Min: 66.39 / Avg: 66.57 / Max: 66.86) [MIN: 65.98 / MAX: 75.8]
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Google SynthMark

SynthMark is a cross-platform tool for benchmarking CPU performance under a variety of real-time audio workloads. It uses a polyphonic synthesizer model to provide standardized tests for latency, jitter, and computational throughput. Learn more via the OpenBenchmarking.org test page.

Google SynthMark 20201109, Test: VoiceMark_100 (Voices, more is better)
  Run 1: 587.11 (SE +/- 0.32, N = 3; Min: 586.49 / Avg: 587.11 / Max: 587.58)
  Run 2: 589.39 (SE +/- 1.09, N = 3; Min: 587.2 / Avg: 589.39 / Max: 590.53)
  Run 3: 588.96 (SE +/- 1.28, N = 3; Min: 586.46 / Avg: 588.96 / Max: 590.66)
  1. (CXX) g++ options: -lm -lpthread -std=c++11 -Ofast

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.2.3, Target: CPU - Model: MobileNet v2 (ms, fewer is better)
  Run 1: 367.08 (SE +/- 0.55, N = 3; Min: 366.13 / Avg: 367.08 / Max: 368.02) [MIN: 365.67 / MAX: 406.12]
  Run 2: 366.80 (SE +/- 0.07, N = 3; Min: 366.67 / Avg: 366.8 / Max: 366.9) [MIN: 366.28 / MAX: 370.1]
  Run 3: 365.72 (SE +/- 0.07, N = 3; Min: 365.58 / Avg: 365.72 / Max: 365.84) [MIN: 365.1 / MAX: 369.99]
  1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl

ONNX Runtime

ONNX Runtime 1.6, Model: yolov4 - Device: OpenMP CPU (Inferences Per Minute, more is better)
  Run 1: 273 (SE +/- 2.09, N = 3; Min: 270.5 / Avg: 272.83 / Max: 277)
  Run 2: 273 (SE +/- 1.67, N = 3; Min: 271 / Avg: 272.67 / Max: 276)
  Run 3: 272 (SE +/- 2.25, N = 3; Min: 269 / Avg: 271.5 / Max: 276)
  1. (CXX) g++ options: -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt

JPEG XL Decoding

JPEG XL Decoding 0.3.1, CPU Threads: 1 (MP/s, more is better)
  Run 1: 36.22 (SE +/- 0.01, N = 3; Min: 36.19 / Avg: 36.22 / Max: 36.23)
  Run 2: 36.28 (SE +/- 0.04, N = 3; Min: 36.21 / Avg: 36.28 / Max: 36.35)
  Run 3: 36.33 (SE +/- 0.01, N = 3; Min: 36.31 / Avg: 36.33 / Max: 36.35)

lzbench

lzbench 1.8, Test: Brotli 2 - Process: Decompression (MB/s, more is better)
  Run 1: 671 (SE +/- 0.33, N = 3; Min: 670 / Avg: 670.67 / Max: 671)
  Run 2: 669
  Run 3: 670 (SE +/- 0.67, N = 3; Min: 669 / Avg: 670.33 / Max: 671)
  1. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

NAS Parallel Benchmarks

NAS Parallel Benchmarks 3.4, Test / Class: CG.C (Total Mop/s, more is better)
  Run 1: 4801.12 (SE +/- 47.86, N = 3; Min: 4743.38 / Avg: 4801.12 / Max: 4896.1)
  Run 2: 4789.42 (SE +/- 13.02, N = 3; Min: 4775.98 / Avg: 4789.42 / Max: 4815.46)
  Run 3: 4803.45 (SE +/- 14.38, N = 3; Min: 4786.85 / Avg: 4803.45 / Max: 4832.08)
  1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -ldl -levent -levent_pthreads -lutil -lm -lrt -lz
  2. Open MPI 4.0.3

TensorFlow Lite

TensorFlow Lite 2020-08-23, Model: Mobilenet Quant (Microseconds, fewer is better)
  Run 1: 233087 (SE +/- 1815.17, N = 3; Min: 229461 / Avg: 233086.67 / Max: 235059)
  Run 2: 232730 (SE +/- 1644.95, N = 3; Min: 229447 / Avg: 232730.33 / Max: 234552)
  Run 3: 233384 (SE +/- 1815.46, N = 3; Min: 229755 / Avg: 233383.67 / Max: 235309)

High Performance Conjugate Gradient

HPCG is the High Performance Conjugate Gradient benchmark, a newer scientific benchmark from Sandia National Labs intended for supercomputer testing with modern, real-world workloads compared to HPCC. Learn more via the OpenBenchmarking.org test page.

High Performance Conjugate Gradient 3.1 (GFLOP/s, more is better)
  Run 1: 4.13935 (SE +/- 0.00143, N = 3; Min: 4.14 / Avg: 4.14 / Max: 4.14)
  Run 2: 4.15078 (SE +/- 0.00178, N = 3; Min: 4.15 / Avg: 4.15 / Max: 4.15)
  Run 3: 4.15057 (SE +/- 0.00095, N = 3; Min: 4.15 / Avg: 4.15 / Max: 4.15)
  1. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -pthread -lmpi_cxx -lmpi
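HPCG's core is the conjugate gradient iteration over a large sparse system. Below is a bare, unpreconditioned CG sketch on a tiny dense symmetric positive-definite system, purely to illustrate the iteration; HPCG itself uses a 27-point stencil with a multigrid preconditioner, which this does not attempt to reproduce:

```python
def conjugate_gradient(A, b, iters=25):
    """Plain CG for symmetric positive-definite A (dense lists, no preconditioner)."""
    n = len(b)
    matvec = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    dot = lambda u, v: sum(x * y for x, y in zip(u, v))
    x = [0.0] * n
    r = b[:]          # residual of the zero initial guess
    p = r[:]
    rr = dot(r, r)
    for _ in range(iters):
        Ap = matvec(p)
        alpha = rr / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rr_new = dot(r, r)
        if rr_new < 1e-20:
            break
        p = [ri + (rr_new / rr) * pi for ri, pi in zip(r, p)]
        rr = rr_new
    return x

# Classic 2x2 example: exact solution is (1/11, 7/11).
x = conjugate_gradient([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
print([round(v, 4) for v in x])  # [0.0909, 0.6364]
```

In exact arithmetic CG converges in at most n iterations for an n-by-n system, which is why the 2x2 example finishes after two steps.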

OpenFOAM

OpenFOAM is the leading free, open source software for computational fluid dynamics (CFD). Learn more via the OpenBenchmarking.org test page.

OpenFOAM 8, Input: Motorbike 60M (Seconds, fewer is better)
  Run 1: 1313.98 (SE +/- 1.46, N = 3; Min: 1311.11 / Avg: 1313.98 / Max: 1315.86)
  Run 2: 1310.40 (SE +/- 3.94, N = 3; Min: 1302.64 / Avg: 1310.4 / Max: 1315.43)
  Run 3: 1310.37 (SE +/- 4.01, N = 3; Min: 1302.36 / Avg: 1310.37 / Max: 1314.67)
  1. (CXX) g++ options: -std=c++11 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm

ASKAP

ASKAP 1.0, Test: tConvolve OpenMP - Degridding (Million Grid Points Per Second, more is better)
  Run 1: 2118.78 (SE +/- 5.64, N = 3; Min: 2113.14 / Avg: 2118.78 / Max: 2130.05)
  Run 2: 2124.41 (SE +/- 5.64, N = 3; Min: 2113.14 / Avg: 2124.41 / Max: 2130.05)
  Run 3: 2124.41 (SE +/- 5.64, N = 3; Min: 2113.14 / Avg: 2124.41 / Max: 2130.05)
  1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

lzbench

lzbench 1.8, Test: Brotli 0 - Process: Compression (MB/s, more is better)
  Run 1: 421
  Run 2: 421
  Run 3: 420
  1. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

Stream-Dynamic

This is an open-source, AMD-modified copy of the STREAM memory benchmark geared towards running the RAM benchmark on systems with the AMD Optimizing C/C++ Compiler (AOCC), along with other by-default optimizations aiming for an easy and standardized deployment. This test profile will attempt to fall back to GCC / Clang on systems lacking AOCC; otherwise there is the existing "stream" test profile. Learn more via the OpenBenchmarking.org test page.

Stream-Dynamic 1.0 - Scale (MB/s, more is better)
  Run 1: 23515.39 (SE +/- 11.57, N = 3; Min: 23502.29 / Avg: 23515.39 / Max: 23538.45)
  Run 2: 23520.91 (SE +/- 15.27, N = 3; Min: 23503.78 / Avg: 23520.91 / Max: 23551.36)
  Run 3: 23570.49 (SE +/- 26.03, N = 3; Min: 23522.52 / Avg: 23570.49 / Max: 23612)
  1. (CXX) g++ options: -Ofast -mcmodel=large -mavx2 -ffp-contract=fast -march=native -fopenmp

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the processor with a wide variety of cryptocurrencies. The benchmark reports the CPU mining hash speed for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

Cpuminer-Opt 3.15.5, Algorithm: x25x (kH/s, more is better)
  Run 1: 159.02 (SE +/- 0.77, N = 3; Min: 157.51 / Avg: 159.02 / Max: 160.05)
  Run 2: 158.66 (SE +/- 0.92, N = 3; Min: 157.65 / Avg: 158.66 / Max: 160.49)
  Run 3: 159.00 (SE +/- 0.39, N = 3; Min: 158.31 / Avg: 159 / Max: 159.65)
  1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

lzbench

lzbench 1.8, Test: Zstd 1 - Process: Compression (MB/s, more is better)
  Run 1: 458 (SE +/- 0.33, N = 3; Min: 457 / Avg: 457.67 / Max: 458)
  Run 2: 458
  Run 3: 457 (SE +/- 0.88, N = 3; Min: 455 / Avg: 456.67 / Max: 458)
  1. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

TensorFlow Lite

TensorFlow Lite 2020-08-23, Model: Inception ResNet V2 (Microseconds, fewer is better)
  Run 1: 4616837 (SE +/- 14007.88, N = 3; Min: 4588880 / Avg: 4616836.67 / Max: 4632390)
  Run 2: 4607500 (SE +/- 11817.84, N = 3; Min: 4583930 / Avg: 4607500 / Max: 4620810)
  Run 3: 4607170 (SE +/- 11723.44, N = 3; Min: 4584330 / Avg: 4607170 / Max: 4623180)

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet," focused on providing open-source, very fast ETC and S3 texture compression support. Learn more via the OpenBenchmarking.org test page.

Etcpak 0.7, Configuration: ETC1 (Mpx/s, more is better)
  Run 1: 285.59  (SE +/- 0.76, N = 3; min 284.81 / max 287.12)
  Run 2: 286.16  (SE +/- 0.62, N = 3; min 285.03 / max 287.18)
  Run 3: 286.18  (SE +/- 0.65, N = 3; min 285.03 / max 287.30)
  (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread

Stream-Dynamic

This is an open-source, AMD-modified copy of the STREAM memory benchmark geared towards running the RAM benchmark on systems with the AMD Optimizing C/C++ Compiler (AOCC), among other default optimizations aimed at easy and standardized deployment. This test profile will attempt to fall back to GCC / Clang on systems lacking AOCC; otherwise there is the existing "stream" test profile. Learn more via the OpenBenchmarking.org test page.
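For reference, the Add and Triad figures below correspond to two of the four classic STREAM kernels. This Python sketch only illustrates the per-element arithmetic each reported figure measures; the actual benchmark runs these kernels over large arrays in compiled, OpenMP-parallel C:

```python
def stream_kernels(a, b, c, scalar=3.0):
    """The four classic STREAM kernels, applied element-wise."""
    c = list(a)                                 # Copy:  c[i] = a[i]
    b = [scalar * x for x in c]                 # Scale: b[i] = s * c[i]
    c = [x + y for x, y in zip(a, b)]           # Add:   c[i] = a[i] + b[i]
    a = [x + scalar * y for x, y in zip(b, c)]  # Triad: a[i] = b[i] + s * c[i]
    return a, b, c

a, b, c = stream_kernels([1.0, 2.0], [0.0, 0.0], [0.0, 0.0])
```

Add and Triad each touch three 8-byte doubles per element (two reads, one write), which is why their MB/s figures land so close together in the results below.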

Stream-Dynamic 1.0, Add (MB/s, more is better)
  Run 1: 26716.60  (SE +/- 51.12, N = 3; min 26663.78 / max 26818.83)
  Run 2: 26664.09  (SE +/- 3.13, N = 3; min 26659.81 / max 26670.18)
  Run 3: 26664.64  (SE +/- 0.68, N = 3; min 26663.30 / max 26665.51)
  (CXX) g++ options: -Ofast -mcmodel=large -mavx2 -ffp-contract=fast -march=native -fopenmp

Stream-Dynamic 1.0, Triad (MB/s, more is better)
  Run 1: 26676.72  (SE +/- 50.97, N = 3; min 26622.05 / max 26778.56)
  Run 2: 26628.33  (SE +/- 0.62, N = 3; min 26627.11 / max 26629.14)
  Run 3: 26631.60  (SE +/- 0.14, N = 3; min 26631.38 / max 26631.87)
  (CXX) g++ options: -Ofast -mcmodel=large -mavx2 -ffp-contract=fast -march=native -fopenmp

lzbench

lzbench is an in-memory benchmark of various compressors. The file used for compression is a Linux kernel source tree tarball. Learn more via the OpenBenchmarking.org test page.

lzbench 1.8, Test: Brotli 0 - Process: Decompression (MB/s, more is better)
  Run 1: 576   Run 2: 577   Run 3: 577
  Observed spread: min 576 / avg 576.67 / max 578 (SE +/- 0.67, N = 3)
  (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

Stream-Dynamic

This is an open-source, AMD-modified copy of the STREAM memory benchmark geared towards running the RAM benchmark on systems with the AMD Optimizing C/C++ Compiler (AOCC), among other default optimizations aimed at easy and standardized deployment. This test profile will attempt to fall back to GCC / Clang on systems lacking AOCC; otherwise there is the existing "stream" test profile. Learn more via the OpenBenchmarking.org test page.

Stream-Dynamic 1.0, Copy (MB/s, more is better)
  Run 1: 23494.02  (SE +/- 10.91, N = 3; min 23482.24 / max 23515.82)
  Run 2: 23515.21  (SE +/- 32.13, N = 3; min 23480.50 / max 23579.39)
  Run 3: 23532.01  (SE +/- 25.41, N = 3; min 23482.80 / max 23567.59)
  (CXX) g++ options: -Ofast -mcmodel=large -mavx2 -ffp-contract=fast -march=native -fopenmp

Ngspice

Ngspice is an open-source SPICE circuit simulator. Ngspice was originally based on the Berkeley SPICE electronic circuit simulator. Ngspice supports basic threading using OpenMP. This test profile is making use of the ISCAS 85 benchmark circuits. Learn more via the OpenBenchmarking.org test page.

Ngspice 34, Circuit: C2670 (Seconds, fewer is better)
  Run 1: 138.56  (SE +/- 0.24, N = 3; min 138.08 / max 138.86)
  Run 2: 138.45  (SE +/- 0.14, N = 3; min 138.18 / max 138.68)
  Run 3: 138.37  (SE +/- 0.12, N = 3; min 138.24 / max 138.61)
  (CC) gcc options: -O0 -fopenmp -lm -lstdc++ -lfftw3 -lXaw -lXmu -lXt -lXext -lX11 -lSM -lICE

JPEG XL

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities, with JPEG XL offering better image quality and compression than legacy JPEG. This test profile is currently focused on multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.
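The MP/s figures report megapixels encoded per second, so a rough wall-clock estimate for a given image follows directly. A small illustrative sketch; the 4000x3000 image size is an assumed example input, paired with this system's ~7.52 MP/s PNG / speed-7 result:

```python
def encode_seconds(width, height, mp_per_s):
    """Estimated encode time from a megapixels-per-second throughput figure."""
    return (width * height) / 1e6 / mp_per_s

# A 12-megapixel image at ~7.52 MP/s takes roughly 1.6 seconds to encode.
t = encode_seconds(4000, 3000, 7.52)
```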

JPEG XL 0.3.1, Input: PNG - Encode Speed: 7 (MP/s, more is better)
  Run 1: 7.51  (SE +/- 0.00, N = 3; min 7.51 / max 7.51)
  Run 2: 7.52  (SE +/- 0.00, N = 3; min 7.52 / max 7.53)
  Run 3: 7.52  (SE +/- 0.01, N = 3; min 7.51 / max 7.53)
  (CXX) g++ options: -funwind-tables -O3 -O2 -pthread -fPIE -pie -ldl

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve), with some previous ASKAP benchmarks also included for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

ASKAP 1.0, Test: Hogbom Clean OpenMP (Iterations Per Second, more is better)
  Run 1: 194.18  (SE +/- 0.22, N = 3; min 193.80 / max 194.55)
  Run 2: 194.43  (SE +/- 0.13, N = 3; min 194.18 / max 194.55)
  Run 3: 194.18  (SE +/- 0.38, N = 3; min 193.42 / max 194.55)
  (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

JPEG XL

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities, with JPEG XL offering better image quality and compression than legacy JPEG. This test profile is currently focused on multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.

JPEG XL 0.3.1, Input: JPEG - Encode Speed: 8 (MP/s, more is better)
  Run 1: 23.27  (SE +/- 0.01, N = 3; min 23.26 / max 23.28)
  Run 2: 23.27  (SE +/- 0.02, N = 3; min 23.24 / max 23.32)
  Run 3: 23.30  (SE +/- 0.03, N = 3; min 23.27 / max 23.35)
  (CXX) g++ options: -funwind-tables -O3 -O2 -pthread -fPIE -pie -ldl

lzbench

lzbench is an in-memory benchmark of various compressors. The file used for compression is a Linux kernel source tree tarball. Learn more via the OpenBenchmarking.org test page.

lzbench 1.8, Test: Zstd 1 - Process: Decompression (MB/s, more is better)
  Run 1: 1568  (SE +/- 0.33, N = 3; min 1567 / max 1568)
  Run 2: 1566  (SE +/- 1.20, N = 3; min 1564 / max 1568)
  Run 3: 1567  (SE +/- 0.58, N = 3; min 1566 / max 1568)
  (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile measures the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23, Model: SqueezeNet (Microseconds, fewer is better)
  Run 1: 347977  (SE +/- 4007.01, N = 3; min 339963 / max 351997)
  Run 2: 347577  (SE +/- 3968.42, N = 3; min 339646 / max 351806)
  Run 3: 347804  (SE +/- 4024.08, N = 3; min 339757 / max 351928)

HPC Challenge

HPC Challenge (HPCC) is a cluster-focused benchmark consisting of the HPL Linpack TPP benchmark, DGEMM, STREAM, PTRANS, RandomAccess, FFT, and communication bandwidth and latency. This HPC Challenge test profile attempts to ship with standard yet versatile configuration/input files though they can be modified. Learn more via the OpenBenchmarking.org test page.

HPC Challenge 1.5.0, Test / Class: EP-STREAM Triad (GB/s, more is better)
  Run 1: 3.13248  (SE +/- 0.00220, N = 3; min 3.13 / max 3.14)
  Run 2: 3.13059  (SE +/- 0.00218, N = 3; min 3.13 / max 3.13)
  Run 3: 3.12889  (SE +/- 0.00311, N = 3; min 3.12 / max 3.14)
  (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops; ATLAS + Open MPI 4.0.3

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the processor across a variety of cryptocurrencies. The benchmark reports the hash speed of CPU mining for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

Cpuminer-Opt 3.15.5, Algorithm: Myriad-Groestl (kH/s, more is better)
  Run 1: 10840  (SE +/- 40.00, N = 3; min 10760 / max 10880)
  Run 2: 10840  (SE +/- 92.92, N = 3; min 10690 / max 11010)
  Run 3: 10847  (SE +/- 47.02, N = 3; min 10790 / max 10940)
  (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.2.3, Target: CPU - Model: SqueezeNet v1.1 (ms, fewer is better)
  Run 1: 359.55  (SE +/- 0.10, N = 3; min 359.41 / max 359.75; sample MIN: 359.27 / MAX: 360.29)
  Run 2: 359.44  (SE +/- 0.11, N = 3; min 359.26 / max 359.63; sample MIN: 359.12 / MAX: 360.16)
  Run 3: 359.35  (SE +/- 0.04, N = 3; min 359.27 / max 359.42; sample MIN: 359.15 / MAX: 359.99)
  (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet," focused on providing open-source, very fast ETC and S3 texture compression support. Learn more via the OpenBenchmarking.org test page.

Etcpak 0.7, Configuration: ETC2 (Mpx/s, more is better)
  Run 1: 158.50  (SE +/- 0.04, N = 3; min 158.44 / max 158.57)
  Run 2: 158.52  (SE +/- 0.05, N = 3; min 158.46 / max 158.61)
  Run 3: 158.45  (SE +/- 0.07, N = 3; min 158.30 / max 158.52)
  (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve), with some previous ASKAP benchmarks also included for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

ASKAP 1.0, Test: tConvolve MT - Degridding (Million Grid Points Per Second, more is better)
  Run 1: 1776.52  (SE +/- 0.85, N = 3; min 1775.04 / max 1778.00)
  Run 2: 1776.28  (SE +/- 1.62, N = 3; min 1774.30 / max 1779.49)
  Run 3: 1777.01  (SE +/- 0.65, N = 3; min 1775.78 / max 1778.00)
  (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

ASKAP 1.0, Test: tConvolve MT - Gridding (Million Grid Points Per Second, more is better)
  Run 1: 1198.57  (SE +/- 0.74, N = 3; min 1197.67 / max 1200.03)
  Run 2: 1198.45  (SE +/- 0.30, N = 3; min 1198.00 / max 1199.01)
  Run 3: 1198.45  (SE +/- 0.23, N = 3; min 1198.00 / max 1198.68)
  (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.6, Model: fcn-resnet101-11 - Device: OpenMP CPU (Inferences Per Minute, more is better)
  Run 1: 43  (avg 42.67, SE +/- 0.17, N = 3; min 42.5 / max 43)
  Run 2: 43  (avg 43.00, SE +/- 0.29, N = 3; min 42.5 / max 43.5)
  Run 3: 43  (avg 42.50, SE +/- 0.29, N = 3; min 42 / max 43)
  (CXX) g++ options: -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt

JPEG XL

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities, with JPEG XL offering better image quality and compression than legacy JPEG. This test profile is currently focused on multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.

JPEG XL 0.3.1, Input: PNG - Encode Speed: 8 (MP/s, more is better)
  Run 1: 0.68  (SE +/- 0.00, N = 3; min 0.68 / max 0.68)
  Run 2: 0.68  (SE +/- 0.00, N = 3; min 0.68 / max 0.68)
  Run 3: 0.68  (SE +/- 0.00, N = 3; min 0.68 / max 0.68)
  (CXX) g++ options: -funwind-tables -O3 -O2 -pthread -fPIE -pie -ldl

lzbench

lzbench is an in-memory benchmark of various compressors. The file used for compression is a Linux kernel source tree tarball. Learn more via the OpenBenchmarking.org test page.

lzbench 1.8, Test: Brotli 2 - Process: Compression (MB/s, more is better)
  Run 1: 169   Run 2: 169   Run 3: 169
  (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

lzbench 1.8, Test: Crush 0 - Process: Decompression (MB/s, more is better)
  Run 1: 456   Run 2: 456   Run 3: 456
  (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

lzbench 1.8, Test: Crush 0 - Process: Compression (MB/s, more is better)
  Run 1: 99   Run 2: 99   Run 3: 99
  Observed spread: min 98 / avg 98.67 / max 100 (SE +/- 0.67, N = 3); min 98 / avg 99.33 / max 100 (SE +/- 0.67, N = 3)
  (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

lzbench 1.8, Test: Zstd 8 - Process: Decompression (MB/s, more is better)
  Run 1: 1688   Run 2: 1688   Run 3: 1688
  Observed spread: min 1686 / avg 1688 / max 1689 (SE +/- 1.00, N = 3); min 1685 / avg 1688 / max 1690 (SE +/- 1.53, N = 3)
  (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

lzbench 1.8, Test: Zstd 8 - Process: Compression (MB/s, more is better)
  Run 1: 80   Run 2: 80   Run 3: 80
  (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

lzbench 1.8, Test: XZ 0 - Process: Decompression (MB/s, more is better)
  Run 1: 102   Run 2: 102   Run 3: 102
  (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

lzbench 1.8, Test: XZ 0 - Process: Compression (MB/s, more is better)
  Run 1: 37   Run 2: 37   Run 3: 37
  (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU - Model: regnety_400m (ms, fewer is better)
  Run 1: 18.36  (SE +/- 0.11, N = 3; min 18.15 / max 18.48; sample MIN: 17.80 / MAX: 21.90)
  Run 2: 17.50  (SE +/- 0.99, N = 3; min 15.53 / max 18.50; sample MIN: 15.46 / MAX: 27.80)
  Run 3: 17.59  (SE +/- 0.88, N = 3; min 15.82 / max 18.50; sample MIN: 15.75 / MAX: 19.81)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: googlenet (ms, fewer is better)
  Run 1: 19.22  (SE +/- 0.06, N = 3; min 19.16 / max 19.34; sample MIN: 18.26 / MAX: 20.45)
  Run 2: 18.20  (SE +/- 0.67, N = 3; min 16.87 / max 18.88; sample MIN: 15.25 / MAX: 19.90)
  Run 3: 18.34  (SE +/- 0.81, N = 3; min 16.71 / max 19.21; sample MIN: 15.43 / MAX: 20.20)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: blazeface (ms, fewer is better)
  Run 1: 2.58  (SE +/- 0.01, N = 3; min 2.57 / max 2.59; sample MIN: 2.45 / MAX: 3.35)
  Run 2: 2.41  (SE +/- 0.18, N = 3; min 2.06 / max 2.59; sample MIN: 2.04 / MAX: 2.96)
  Run 3: 2.43  (SE +/- 0.16, N = 3; min 2.11 / max 2.61; sample MIN: 2.07 / MAX: 2.72)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: efficientnet-b0 (ms, fewer is better)
  Run 1: 9.40  (SE +/- 0.20, N = 3; min 9.01 / max 9.65; sample MIN: 7.66 / MAX: 10.49)
  Run 2: 8.96  (SE +/- 0.64, N = 3; min 7.68 / max 9.67; sample MIN: 7.64 / MAX: 10.32)
  Run 3: 8.87  (SE +/- 0.59, N = 3; min 7.69 / max 9.47; sample MIN: 7.63 / MAX: 10.39)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: mnasnet (ms, fewer is better)
  Run 1: 5.61  (SE +/- 0.62, N = 2; min 4.99 / max 6.22; sample MIN: 4.96 / MAX: 6.53)
  Run 2: 5.86  (SE +/- 0.47, N = 3; min 4.92 / max 6.37; sample MIN: 4.89 / MAX: 16.77)
  Run 3: 5.78  (SE +/- 0.44, N = 3; min 4.90 / max 6.25; sample MIN: 4.87 / MAX: 6.69)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: shufflenet-v2 (ms, fewer is better)
  Run 1: 7.36  (SE +/- 0.64, N = 3; min 6.07 / max 8.03; sample MIN: 6.04 / MAX: 8.86)
  Run 2: 7.29  (SE +/- 0.63, N = 3; min 6.04 / max 7.93; sample MIN: 6.02 / MAX: 8.80)
  Run 3: 7.37  (SE +/- 0.66, N = 3; min 6.05 / max 8.05; sample MIN: 6.01 / MAX: 9.08)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better)
  Run 1: 5.56  (SE +/- 0.26, N = 3; min 5.05 / max 5.82; sample MIN: 4.85 / MAX: 7.19)
  Run 2: 5.51  (SE +/- 0.29, N = 3; min 4.94 / max 5.80; sample MIN: 4.85 / MAX: 13.28)
  Run 3: 5.57  (SE +/- 0.32, N = 3; min 4.93 / max 5.93; sample MIN: 4.86 / MAX: 15.57)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.1.1, Model: mobilenet-v1-1.0 (ms, fewer is better)
  Run 1: 2.903  (SE +/- 0.109, N = 3; min 2.69 / max 3.02; sample MIN: 2.61 / MAX: 4.98)
  Run 2: 2.873  (SE +/- 0.105, N = 3; min 2.67 / max 3.00; sample MIN: 2.61 / MAX: 4.73)
  Run 3: 2.874  (SE +/- 0.102, N = 3; min 2.67 / max 3.00; sample MIN: 2.59 / MAX: 6.24)
  (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 1.1.1, Model: MobileNetV2_224 (ms, fewer is better)
  Run 1: 4.267  (SE +/- 0.227, N = 3; min 3.81 / max 4.51; sample MIN: 2.86 / MAX: 18.37)
  Run 2: 4.263  (SE +/- 0.175, N = 3; min 3.91 / max 4.44; sample MIN: 2.83 / MAX: 18.23)
  Run 3: 4.253  (SE +/- 0.236, N = 3; min 3.78 / max 4.51; sample MIN: 2.81 / MAX: 6.59)
  (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the processor across a variety of cryptocurrencies. The benchmark reports the hash speed of CPU mining for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

Cpuminer-Opt 3.15.5, Algorithm: Triple SHA-256, Onecoin (kH/s, more is better)
  Run 1: 63713  (SE +/- 343.33, N = 3; min 63370 / max 64400)
  Run 2: 69831  (SE +/- 3320.23, N = 15; min 62020 / max 105570)
  Run 3: 63090  (SE +/- 555.37, N = 3; min 62360 / max 64180)
  (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Cpuminer-Opt 3.15.5, Algorithm: Quad SHA-256, Pyrite (kH/s, more is better)
  Run 1: 47283  (SE +/- 243.20, N = 3; min 46910 / max 47740)
  Run 2: 46807  (SE +/- 130.94, N = 3; min 46550 / max 46980)
  Run 3: 48327  (SE +/- 1312.55, N = 15; min 46390 / max 66620)
  (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Cpuminer-Opt 3.15.5, Algorithm: LBC, LBRY Credits (kH/s, more is better)
  Run 1: 23503  (SE +/- 31.80, N = 3; min 23440 / max 23540)
  Run 2: 23540  (SE +/- 51.32, N = 3; min 23440 / max 23610)
  Run 3: 24530  (SE +/- 624.18, N = 15; min 23430 / max 31070)
  (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Cpuminer-Opt 3.15.5, Algorithm: Skeincoin (kH/s, more is better)
  Run 1: 33400  (SE +/- 66.58, N = 3; min 33270 / max 33490)
  Run 2: 33380  (SE +/- 56.86, N = 3; min 33270 / max 33460)
  Run 3: 33187  (SE +/- 2454.29, N = 12; min 10460 / max 49000)
  (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Cpuminer-Opt 3.15.5, Algorithm: Garlicoin (kH/s, more is better)
  Run 1: 1256.67  (SE +/- 44.21, N = 15; min 1171.59 / max 1873.13)
  Run 2: 1211.83  (SE +/- 8.01, N = 15; min 1162.27 / max 1284.27)
  Run 3: 1215.14  (SE +/- 15.32, N = 3; min 1184.73 / max 1233.61)
  (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and offers a selection of the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4, Test / Class: EP.C (Total Mop/s, more is better)
  Run 1: 901.18  (SE +/- 15.92, N = 15; min 831.87 / max 1074.20)
  Run 2: 926.89  (SE +/- 9.33, N = 15; min 873.75 / max 1035.72)
  Run 3: 929.91  (SE +/- 9.99, N = 15; min 851.71 / max 1033.90)
  (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -ldl -levent -levent_pthreads -lutil -lm -lrt -lz; Open MPI 4.0.3

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet," focused on providing open-source, very fast ETC and S3 texture compression support. Learn more via the OpenBenchmarking.org test page.

Etcpak 0.7, Configuration: ETC1 + Dithering (Mpx/s, more is better)
  Run 1: 266.47  (SE +/- 1.29, N = 3; min 263.88 / max 267.83)
  Run 2: 263.01  (SE +/- 4.66, N = 12; min 211.82 / max 268.07)
  Run 3: 267.31  (SE +/- 0.39, N = 3; min 266.53 / max 267.73)
  (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread

108 Results Shown

NAS Parallel Benchmarks
Cpuminer-Opt
OpenFOAM
Quantum ESPRESSO
NCNN
Mobile Neural Network
Cpuminer-Opt
Etcpak
HPC Challenge:
  Rand Ring Latency
  Max Ping Pong Bandwidth
ONNX Runtime
NCNN
Cpuminer-Opt
TensorFlow Lite
ASKAP
Mobile Neural Network
HPC Challenge
Cpuminer-Opt
JPEG XL
toyBrot Fractal Generator
ONNX Runtime
toyBrot Fractal Generator
NCNN:
  CPU - mobilenet
  CPU-v2-v2 - mobilenet-v2
NAS Parallel Benchmarks
HPC Challenge
NAS Parallel Benchmarks
NCNN
NAS Parallel Benchmarks
ASKAP
NCNN
JPEG XL:
  JPEG - 5
  PNG - 5
HPC Challenge
ONNX Runtime
TensorFlow Lite
Mobile Neural Network
toyBrot Fractal Generator
HPC Challenge
NCNN
toyBrot Fractal Generator
QuantLib
JPEG XL Decoding
HPC Challenge
lzbench
ASKAP
NAS Parallel Benchmarks
Ngspice
HPC Challenge
TensorFlow Lite
NCNN
Google SynthMark
TNN
ONNX Runtime
JPEG XL Decoding
lzbench
NAS Parallel Benchmarks
TensorFlow Lite
High Performance Conjugate Gradient
OpenFOAM
ASKAP
lzbench
Stream-Dynamic
Cpuminer-Opt
lzbench
TensorFlow Lite
Etcpak
Stream-Dynamic:
  - Add
  - Triad
lzbench
Stream-Dynamic
Ngspice
JPEG XL
ASKAP
JPEG XL
lzbench
TensorFlow Lite
HPC Challenge
Cpuminer-Opt
TNN
Etcpak
ASKAP:
  tConvolve MT - Degridding
  tConvolve MT - Gridding
ONNX Runtime
JPEG XL
lzbench:
  Brotli 2 - Compression
  Crush 0 - Decompression
  Crush 0 - Compression
  Zstd 8 - Decompression
  Zstd 8 - Compression
  XZ 0 - Decompression
  XZ 0 - Compression
NCNN:
  CPU - regnety_400m
  CPU - googlenet
  CPU - blazeface
  CPU - efficientnet-b0
  CPU - mnasnet
  CPU - shufflenet-v2
  CPU-v3-v3 - mobilenet-v3
Mobile Neural Network:
  mobilenet-v1-1.0
  MobileNetV2_224
Cpuminer-Opt:
  Triple SHA-256, Onecoin
  Quad SHA-256, Pyrite
  LBC, LBRY Credits
  Skeincoin
  Garlicoin
NAS Parallel Benchmarks
Etcpak