OnLogic Helix 500 Linux benchmarks

OnLogic Helix 500 benchmarks by Michael Larabel.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2102172-PTS-2102044H93

Tests in this result file fall within the following categories:

AV1: 2 tests
C++ Boost Tests: 2 tests
C/C++ Compiler Tests: 7 tests
CPU Massive: 9 tests
Creator Workloads: 11 tests
Cryptography: 2 tests
Database Test Suite: 3 tests
Encoding: 2 tests
Finance: 2 tests
Fortran Tests: 3 tests
Game Development: 2 tests
HPC - High Performance Computing: 14 tests
Imaging: 3 tests
Common Kernel Benchmarks: 2 tests
Linear Algebra: 2 tests
Machine Learning: 6 tests
Molecular Dynamics: 3 tests
MPI Benchmarks: 5 tests
Multi-Core: 9 tests
NVIDIA GPU Compute: 5 tests
Intel oneAPI: 2 tests
OpenMPI Tests: 7 tests
Programmer / Developer System Benchmarks: 4 tests
Python Tests: 3 tests
Scientific Computing: 6 tests
Server: 4 tests
Server CPU Tests: 4 tests
Single-Threaded: 3 tests
Speech: 2 tests
Telephony: 2 tests
Texture Compression: 2 tests
Video Encoding: 2 tests
Vulkan Compute: 3 tests

Test runs:

OnLogic Helix 500  - Date Run: January 27 2021  - Test Duration: 6 Hours, 41 Minutes
OnLogic Karbon 700 - Date Run: February 01 2021 - Test Duration: 7 Hours, 41 Minutes
Average Test Duration: 7 Hours, 11 Minutes


System details:

OnLogic Helix 500:
    Processor: Intel Core i7-10700T @ 4.50GHz (8 Cores / 16 Threads)
    Motherboard: Logic Supply RXM-181 (Z01-0002A026 BIOS)
    Chipset: Intel Comet Lake PCH
    Memory: 32GB
    Disk: 256GB TS256GMTS800
    Graphics: Intel UHD 630 3GB (1200MHz)
    Audio: Realtek ALC233
    Monitor: DELL P2415Q
    Network: Intel I219-LM + Intel I210
    OS: Ubuntu 20.10
    Kernel: 5.8.0-41-generic (x86_64)
    Desktop: GNOME Shell 3.38.2
    Display Server: X Server 1.20.9
    Display Driver: modesetting 1.20.9
    OpenGL: 4.6 Mesa 20.2.6
    Vulkan: 1.2.145
    Compiler: GCC 10.2.0
    File-System: ext4
    Screen Resolution: 1920x1080

OnLogic Karbon 700 (components differing from the Helix 500; all other components shared):
    Processor: Intel Xeon E-2278GEL @ 3.90GHz (8 Cores / 16 Threads)
    Motherboard: Logic Supply RXM-181 (Z01-0001A027 BIOS)
    Chipset: Intel Cannon Lake PCH
    Memory: 16GB
    Disk: 512GB TS512GMTE510T
    Graphics: Intel UHD P630 3GB (1150MHz)
    Network: Intel I219-LM + 2 x Intel I210
    Display Driver: intel

Kernel Details: Transparent Huge Pages: madvise

Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Disk Details:
    OnLogic Helix 500: MQ-DEADLINE / errors=remount-ro,relatime,rw / Block Size: 4096
    OnLogic Karbon 700: NONE / errors=remount-ro,relatime,rw / Block Size: 4096

Processor Details:
    OnLogic Helix 500: Scaling Governor: intel_pstate powersave - CPU Microcode: 0xe0 - Thermald 2.3
    OnLogic Karbon 700: Scaling Governor: intel_pstate powersave - CPU Microcode: 0xde - Thermald 2.3

Java Details: OpenJDK Runtime Environment (build 11.0.9.1+1-Ubuntu-0ubuntu1.20.10)

Python Details: Python 3.8.6

Security Details:
    OnLogic Helix 500: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
    OnLogic Karbon 700: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Mitigation of TSX disabled + tsx_async_abort: Mitigation of TSX disabled

OnLogic Helix 500 vs. OnLogic Karbon 700 Comparison: overview chart of per-test percentage differences (chart omitted). The largest deltas favor the Helix 500 in CLOMP Static OMP Speedup (103.4%), oneDNN IP Shapes 3D - f32 - CPU (88.5%), ASKAP tConvolve OpenMP - Gridding (87%), oneDNN Matrix Multiply Batch Shapes Transformer - f32 - CPU (81.5%), oneDNN Convolution Batch Shapes Auto - f32 - CPU (80.7%), HPC Challenge Random Ring Bandwidth (79.9%), OpenFOAM Motorbike 30M (76.2%), and ASKAP Hogbom Clean OpenMP (76%), with smaller differences across the remaining tests.

Side-by-side results table for both systems (all tests; the same values are repeated in the individual result sections below).

CLOMP

CLOMP is the C version of the Livermore OpenMP benchmark developed to measure OpenMP overheads and other performance impacts due to threading in order to influence future system designs. This particular test profile configuration is currently set to look at the OpenMP static schedule speed-up across all available CPU cores using the recommended test configuration. Learn more via the OpenBenchmarking.org test page.
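For orientation, below is a minimal, hypothetical sketch of the OpenMP static-schedule construct whose speed-up CLOMP reports; the array size and per-element arithmetic are arbitrary stand-ins, not CLOMP's actual workload.

    // Minimal illustration (not CLOMP itself) of a statically scheduled OpenMP loop.
    // Build with: g++ -O3 -fopenmp omp_static.cpp
    #include <omp.h>
    #include <vector>
    #include <cstdio>

    int main() {
        const long n = 1 << 24;                 // arbitrary problem size
        std::vector<double> a(n, 1.0), b(n, 2.0);

        double t0 = omp_get_wtime();
        #pragma omp parallel for schedule(static)
        for (long i = 0; i < n; ++i)
            a[i] = 0.5 * a[i] + b[i];           // trivial per-element work
        double t1 = omp_get_wtime();

        std::printf("static-schedule loop took %.3f s on %d threads\n",
                    t1 - t0, omp_get_max_threads());
        return 0;
    }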

CLOMP 1.2 - Static OMP Speedup (Speedup, More Is Better)
    OnLogic Helix 500:  5.9  (SE +/- 0.06, N = 15)
    OnLogic Karbon 700: 2.9  (SE +/- 0.02, N = 13)
    1. (CC) gcc options: -fopenmp -O3 -lm

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.
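As a hypothetical point of reference, a minimal sketch of driving oneDNN directly from C++ is shown below, assuming the oneDNN 2.x API; the tensor shape and data are arbitrary, and the benchmark itself exercises the library through benchdnn rather than code like this.

    // Run a single ReLU eltwise primitive on the CPU engine (oneDNN 2.x API assumed).
    #include <dnnl.hpp>
    #include <vector>

    int main() {
        using namespace dnnl;
        engine eng(engine::kind::cpu, 0);
        stream s(eng);

        // 1x3x8x8 f32 tensor in NCHW layout backed by a user buffer
        memory::desc md({1, 3, 8, 8}, memory::data_type::f32, memory::format_tag::nchw);
        std::vector<float> data(1 * 3 * 8 * 8, -1.0f);
        memory mem(md, eng, data.data());

        // In-place forward ReLU
        eltwise_forward::desc relu_d(prop_kind::forward_inference,
                                     algorithm::eltwise_relu, md, 0.f);
        eltwise_forward::primitive_desc relu_pd(relu_d, eng);
        eltwise_forward(relu_pd).execute(s, {{DNNL_ARG_SRC, mem}, {DNNL_ARG_DST, mem}});
        s.wait();
        return 0;
    }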

oneDNN 2.0 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
    OnLogic Helix 500:  8.81906 (SE +/- 0.05726, N = 5; MIN: 8.46)
    OnLogic Karbon 700: 16.6276 (SE +/- 0.05826, N = 3; MIN: 16.31)
    1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve), with some previous ASKAP benchmarks also included for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

ASKAP 1.0 - Test: tConvolve OpenMP - Gridding (Million Grid Points Per Second, More Is Better)
    OnLogic Helix 500:  1272.45 (SE +/- 2.92, N = 4)
    OnLogic Karbon 700: 680.39 (SE +/- 1.53, N = 3)
    1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
    OnLogic Helix 500:  3.79616 (SE +/- 0.01932, N = 4; MIN: 3.65)
    OnLogic Karbon 700: 6.88996 (SE +/- 0.02873, N = 3; MIN: 6.76)
    1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
    OnLogic Helix 500:  17.67 (SE +/- 0.02, N = 6; MIN: 17.51)
    OnLogic Karbon 700: 31.93 (SE +/- 0.04, N = 3; MIN: 31.78)
    1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

HPC Challenge

HPC Challenge (HPCC) is a cluster-focused benchmark consisting of the HPL Linpack TPP benchmark, DGEMM, STREAM, PTRANS, RandomAccess, FFT, and communication bandwidth and latency. This HPC Challenge test profile attempts to ship with standard yet versatile configuration/input files though they can be modified. Learn more via the OpenBenchmarking.org test page.
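As a rough, hypothetical illustration of what the Random Ring Bandwidth figure measures, here is a minimal MPI ring-exchange sketch; this is not HPCC's code, and the message size, repetition count, and natural ring ordering are simplifications (HPCC uses a randomly ordered ring across all ranks).

    // Each rank exchanges a buffer with its ring neighbours and rank 0 reports bandwidth.
    #include <mpi.h>
    #include <vector>
    #include <cstdio>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const int bytes = 1 << 20;                       // 1 MiB per message
        const int reps = 100;
        std::vector<char> sendbuf(bytes, 1), recvbuf(bytes);
        int right = (rank + 1) % size;
        int left  = (rank - 1 + size) % size;

        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < reps; ++i)
            MPI_Sendrecv(sendbuf.data(), bytes, MPI_CHAR, right, 0,
                         recvbuf.data(), bytes, MPI_CHAR, left, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        double t1 = MPI_Wtime();

        if (rank == 0)
            std::printf("~%.2f GB/s sent per rank around the ring\n",
                        reps * (double)bytes / (t1 - t0) / 1e9);
        MPI_Finalize();
        return 0;
    }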

HPC Challenge 1.5.0 - Test / Class: Random Ring Bandwidth (GB/s, More Is Better)
    OnLogic Helix 500:  1.70442 (SE +/- 0.01667, N = 3)
    OnLogic Karbon 700: 0.94729 (SE +/- 0.00336, N = 3)
    1. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops
    2. ATLAS + Open MPI 4.0.3

OpenFOAM

OpenFOAM is the leading free, open source software for computational fluid dynamics (CFD). Learn more via the OpenBenchmarking.org test page.

OpenFOAM 8 - Input: Motorbike 30M (Seconds, Fewer Is Better)
    OnLogic Helix 500:  263.49 (SE +/- 0.75, N = 3)
    OnLogic Karbon 700: 464.34 (SE +/- 0.51, N = 3)
    1. (CXX) g++ options: -std=c++11 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve), with some previous ASKAP benchmarks also included for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

ASKAP 1.0 - Test: Hogbom Clean OpenMP (Iterations Per Second, More Is Better)
    OnLogic Helix 500:  184.39 (SE +/- 0.11, N = 3)
    OnLogic Karbon 700: 104.79 (SE +/- 0.26, N = 3)
    1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

Algebraic Multi-Grid Benchmark

AMG is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. The driver provided with AMG builds linear systems for various 3-dimensional problems. Learn more via the OpenBenchmarking.org test page.

Algebraic Multi-Grid Benchmark 1.2 (Figure Of Merit, More Is Better)
    OnLogic Helix 500:  213962927 (SE +/- 1557444.52, N = 11)
    OnLogic Karbon 700: 122416800 (SE +/- 15952.12, N = 3)
    1. (CC) gcc options: -lparcsr_ls -lparcsr_mv -lseq_mv -lIJ_mv -lkrylov -lHYPRE_utilities -lm -fopenmp -pthread -lmpi

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting among the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: MG.C (Total Mop/s, More Is Better)
    OnLogic Helix 500:  9747.53 (SE +/- 38.18, N = 3)
    OnLogic Karbon 700: 5678.15 (SE +/- 11.62, N = 3)
    1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -ldl -levent -levent_pthreads -lutil -lm -lrt -lz
    2. Open MPI 4.0.3

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent that is optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.
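As a hypothetical point of reference, a minimal sketch of loading and running a model through NCNN's C++ API is shown below; the .param/.bin file names and the "data"/"output" blob names are placeholders, not ones used by this test profile.

    // Load a model, feed a dummy input, and extract one output blob.
    #include "net.h"      // ncnn
    #include <cstdio>

    int main() {
        ncnn::Net net;
        if (net.load_param("model.param") != 0) return 1;
        if (net.load_model("model.bin") != 0) return 1;

        ncnn::Mat in(224, 224, 3);          // width, height, channels
        in.fill(0.5f);                      // dummy input data

        ncnn::Extractor ex = net.create_extractor();
        ex.input("data", in);
        ncnn::Mat out;
        ex.extract("output", out);

        std::printf("output blob: %d x %d x %d\n", out.w, out.h, out.c);
        return 0;
    }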

NCNN 20201218 - Target: CPU - Model: vgg16 (ms, Fewer Is Better)
    OnLogic Helix 500:  68.42 (SE +/- 0.03, N = 3; MIN: 67.97 / MAX: 79.66)
    OnLogic Karbon 700: 116.92 (SE +/- 0.12, N = 3; MIN: 116.11 / MAX: 126.04)
    1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

VKMark

VKMark is a collection of Vulkan tests/benchmarks. Learn more via the OpenBenchmarking.org test page.

VKMark 2020-05-21 - Resolution: 1920 x 1080 (VKMark Score, More Is Better)
    OnLogic Helix 500:  833 (SE +/- 1.86, N = 3)
    OnLogic Karbon 700: 513 (SE +/- 1.20, N = 3)
    1. (CXX) g++ options: -pthread -ldl -pipe -std=c++14 -MD -MQ -MF

GLmark2

This is a test of Linaro's glmark2 port, currently using the X11 OpenGL 2.0 target. GLmark2 is a basic OpenGL benchmark. Learn more via the OpenBenchmarking.org test page.

GLmark2 2020.04 - Resolution: 1920 x 1080 (Score, More Is Better)
    OnLogic Helix 500:  849
    OnLogic Karbon 700: 525

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve), with some previous ASKAP benchmarks also included for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

ASKAP 1.0 - Test: tConvolve MPI - Gridding (Mpix/sec, More Is Better)
    OnLogic Helix 500:  1833.78 (SE +/- 19.29, N = 6)
    OnLogic Karbon 700: 1167.94 (SE +/- 1.74, N = 3)
    1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting among the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: CG.C (Total Mop/s, More Is Better)
    OnLogic Helix 500:  4511.01 (SE +/- 26.29, N = 3)
    OnLogic Karbon 700: 2903.31 (SE +/- 1.39, N = 3)
    1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -ldl -levent -levent_pthreads -lutil -lm -lrt -lz
    2. Open MPI 4.0.3

NAS Parallel Benchmarks 3.4 - Test / Class: FT.C (Total Mop/s, More Is Better)
    OnLogic Helix 500:  11016.62 (SE +/- 104.27, N = 3)
    OnLogic Karbon 700: 7382.96 (SE +/- 9.54, N = 3)
    1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -ldl -levent -levent_pthreads -lutil -lm -lrt -lz
    2. Open MPI 4.0.3

DeepSpeech

Mozilla DeepSpeech is a speech-to-text engine powered by TensorFlow for machine learning and derived from Baidu's Deep Speech research paper. This test profile times the speech-to-text process for a roughly three minute audio recording. Learn more via the OpenBenchmarking.org test page.

DeepSpeech 0.6 - Acceleration: CPU (Seconds, Fewer Is Better)
    OnLogic Helix 500:  78.40 (SE +/- 0.42, N = 3)
    OnLogic Karbon 700: 115.73 (SE +/- 0.55, N = 3)

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent that is optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: alexnet (ms, Fewer Is Better)
    OnLogic Helix 500:  15.31 (SE +/- 0.07, N = 3; MIN: 14.48 / MAX: 16.21)
    OnLogic Karbon 700: 21.08 (SE +/- 0.04, N = 3; MIN: 20.8 / MAX: 22.36)
    1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting among the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: LU.C (Total Mop/s, More Is Better)
    OnLogic Helix 500:  21135.32 (SE +/- 251.61, N = 3)
    OnLogic Karbon 700: 15483.74 (SE +/- 18.47, N = 3)
    1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -ldl -levent -levent_pthreads -lutil -lm -lrt -lz
    2. Open MPI 4.0.3

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent that is optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: resnet18 (ms, Fewer Is Better)
    OnLogic Helix 500:  18.17 (SE +/- 0.02, N = 3; MIN: 17.2 / MAX: 20.93)
    OnLogic Karbon 700: 24.18 (SE +/- 0.02, N = 3; MIN: 23.7 / MAX: 25.29)
    1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve), with some previous ASKAP benchmarks also included for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

ASKAP 1.0 - Test: tConvolve MPI - Degridding (Mpix/sec, More Is Better)
    OnLogic Helix 500:  1679.21 (SE +/- 16.46, N = 6)
    OnLogic Karbon 700: 1288.36 (SE +/- 2.11, N = 3)
    1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.8.1 - Video Input: Summer Nature 4K (FPS, More Is Better)
    OnLogic Helix 500:  128.88 (SE +/- 1.19, N = 7; MIN: 107.81 / MAX: 171.68)
    OnLogic Karbon 700: 102.30 (SE +/- 0.24, N = 3; MIN: 94.6 / MAX: 107.25)
    1. (CC) gcc options: -pthread

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

AI Benchmark Alpha 0.1.2 - Device Training Score (Score, More Is Better)
    OnLogic Helix 500:  857
    OnLogic Karbon 700: 684

GROMACS

This test runs the GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package on the CPU with the water_GMX50 data. Learn more via the OpenBenchmarking.org test page.

GROMACS 2020.3 - Water Benchmark (Ns Per Day, More Is Better)
    OnLogic Helix 500:  0.640 (SE +/- 0.002, N = 3)
    OnLogic Karbon 700: 0.518 (SE +/- 0.003, N = 3)
    1. (CXX) g++ options: -O3 -pthread -lrt -lpthread -lm

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.
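For context, a minimal sketch of simdjson's DOM API follows; it assumes exceptions are enabled (the library default), and the inline document is a stand-in for the benchmark's real JSON inputs.

    // Parse a tiny inline JSON document and read one field.
    #include "simdjson.h"
    #include <iostream>

    int main() {
        using namespace simdjson;
        dom::parser parser;
        padded_string json = R"({"answer": 42})"_padded;
        dom::element doc = parser.parse(json);   // throws on parse error
        int64_t answer = doc["answer"];          // field lookup + conversion
        std::cout << "answer = " << answer << "\n";
        return 0;
    }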

simdjson 0.7.1 - Throughput Test: Kostya (GB/s, More Is Better)
    OnLogic Helix 500:  0.59 (SE +/- 0.00, N = 3)
    OnLogic Karbon 700: 0.48 (SE +/- 0.00, N = 3)
    1. (CXX) g++ options: -O3 -pthread

RealSR-NCNN

RealSR-NCNN is an NCNN neural network implementation of the RealSR project, accelerated using the Vulkan API. RealSR is Real-World Super Resolution via Kernel Estimation and Noise Injection. NCNN is a high-performance neural network inference framework developed by Tencent that is optimized for mobile and other platforms. This test profile times how long it takes to increase the resolution of a sample image by a scale of 4x with Vulkan. Learn more via the OpenBenchmarking.org test page.

RealSR-NCNN 20200818 - Scale: 4x - TAA: Yes (Seconds, Fewer Is Better)
    OnLogic Helix 500:  1441.63 (SE +/- 0.34, N = 3)
    OnLogic Karbon 700: 1761.93 (SE +/- 0.10, N = 3)

RealSR-NCNN 20200818 - Scale: 4x - TAA: No (Seconds, Fewer Is Better)
    OnLogic Helix 500:  183.00 (SE +/- 0.05, N = 3)
    OnLogic Karbon 700: 223.17 (SE +/- 0.03, N = 3)

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.8.1 - Video Input: Chimera 1080p (FPS, More Is Better)
    OnLogic Helix 500:  553.32 (SE +/- 7.54, N = 3; MIN: 347.49 / MAX: 860.7)
    OnLogic Karbon 700: 456.52 (SE +/- 0.40, N = 3; MIN: 348.97 / MAX: 682.68)
    1. (CC) gcc options: -pthread

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent that is optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, Fewer Is Better)
    OnLogic Helix 500:  6.47 (SE +/- 0.09, N = 3; MIN: 5.97 / MAX: 8.53)
    OnLogic Karbon 700: 7.79 (SE +/- 0.04, N = 3; MIN: 7.54 / MAX: 9.66)
    1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU - Model: resnet50 (ms, Fewer Is Better)
    OnLogic Helix 500:  36.39 (SE +/- 0.18, N = 3; MIN: 34.97 / MAX: 119.89)
    OnLogic Karbon 700: 43.37 (SE +/- 0.06, N = 3; MIN: 41.18 / MAX: 52.55)
    1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU - Model: yolov4-tiny (ms, Fewer Is Better)
    OnLogic Helix 500:  35.02 (SE +/- 0.38, N = 3; MIN: 33.91 / MAX: 37.21)
    OnLogic Karbon 700: 40.74 (SE +/- 0.05, N = 3; MIN: 40.38 / MAX: 41.82)
    1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.
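For orientation, a minimal, hypothetical sketch of the ONNX Runtime C++ API is shown below; "model.onnx" is a placeholder path, as the test profile pulls its models from the ONNX Zoo and drives them through its own harness.

    // Open an inference session and report the model's input/output counts.
    #include <onnxruntime_cxx_api.h>
    #include <cstdio>

    int main() {
        Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "example");
        Ort::SessionOptions opts;
        opts.SetIntraOpNumThreads(8);            // arbitrary thread count
        Ort::Session session(env, "model.onnx", opts);

        std::printf("model has %zu input(s) and %zu output(s)\n",
                    session.GetInputCount(), session.GetOutputCount());
        return 0;
    }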

ONNX Runtime 1.6 - Model: super-resolution-10 - Device: OpenMP CPU (Inferences Per Minute, More Is Better)
    OnLogic Helix 500:  2839 (SE +/- 26.92, N = 3)
    OnLogic Karbon 700: 3293 (SE +/- 5.97, N = 3)
    1. (CXX) g++ options: -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent that is optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: googlenet (ms, Fewer Is Better)
    OnLogic Helix 500:  19.18 (SE +/- 0.01, N = 3; MIN: 18.66 / MAX: 21.99)
    OnLogic Karbon 700: 22.06 (SE +/- 0.31, N = 3; MIN: 21.09 / MAX: 23.11)
    1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

LevelDB

LevelDB is a key-value storage library developed by Google that supports making use of Snappy for data compression and has other modern features. Learn more via the OpenBenchmarking.org test page.
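For context, a minimal LevelDB usage sketch follows; the database path and keys are arbitrary placeholders rather than anything the db_bench-based test profile uses.

    // Open a database, then write, read, and delete one key.
    #include <leveldb/db.h>
    #include <cassert>
    #include <string>

    int main() {
        leveldb::DB* db = nullptr;
        leveldb::Options options;
        options.create_if_missing = true;
        leveldb::Status status = leveldb::DB::Open(options, "/tmp/leveldb-example", &db);
        assert(status.ok());

        status = db->Put(leveldb::WriteOptions(), "key1", "value1");
        std::string value;
        if (status.ok()) status = db->Get(leveldb::ReadOptions(), "key1", &value);
        if (status.ok()) status = db->Delete(leveldb::WriteOptions(), "key1");

        delete db;
        return status.ok() ? 0 : 1;
    }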

LevelDB 1.22 - Benchmark: Seek Random (Microseconds Per Op, Fewer Is Better)
    OnLogic Helix 500:  12.59 (SE +/- 0.14, N = 15)
    OnLogic Karbon 700: 14.42 (SE +/- 0.24, N = 3)
    1. (CXX) g++ options: -O3 -lsnappy -lpthread

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.6 - Model: shufflenet-v2-10 - Device: OpenMP CPU (Inferences Per Minute, More Is Better)
    OnLogic Helix 500:  12412 (SE +/- 77.45, N = 3)
    OnLogic Karbon 700: 14173 (SE +/- 50.48, N = 3)
    1. (CXX) g++ options: -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.8.1 - Video Input: Summer Nature 1080p (FPS, More Is Better)
    OnLogic Helix 500:  496.82 (SE +/- 2.57, N = 3; MIN: 435.8 / MAX: 560.09)
    OnLogic Karbon 700: 438.06 (SE +/- 0.21, N = 3; MIN: 375.03 / MAX: 472.98)
    1. (CC) gcc options: -pthread

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

AI Benchmark Alpha 0.1.2 - Device AI Score (Score, More Is Better)
    OnLogic Helix 500:  1608
    OnLogic Karbon 700: 1424

Waifu2x-NCNN Vulkan

Waifu2x-NCNN is an NCNN neural network implementation of the Waifu2x converter project, accelerated using the Vulkan API. NCNN is a high-performance neural network inference framework developed by Tencent that is optimized for mobile and other platforms. This test profile times how long it takes to increase the resolution of a sample image with Vulkan. Learn more via the OpenBenchmarking.org test page.

Waifu2x-NCNN Vulkan 20200818 - Scale: 2x - Denoise: 3 - TAA: Yes (Seconds, Fewer Is Better)
    OnLogic Helix 500:  70.53 (SE +/- 0.41, N = 3)
    OnLogic Karbon 700: 79.26 (SE +/- 0.12, N = 3)

Waifu2x-NCNN Vulkan 20200818 - Scale: 2x - Denoise: 3 - TAA: No (Seconds, Fewer Is Better)
    OnLogic Helix 500:  9.693 (SE +/- 0.005, N = 5)
    OnLogic Karbon 700: 10.765 (SE +/- 0.005, N = 3)

FinanceBench

FinanceBench is a collection of financial program benchmarks with support for benchmarking on the GPU via OpenCL and CPU benchmarking with OpenMP. The FinanceBench test cases are focused on the Black-Scholes-Merton Process with Analytic European Option engine, the QMC (Sobol) Monte-Carlo method (Equity Option Example), Bonds (fixed-rate bond with flat forward curve), and Repo (securities repurchase agreement). FinanceBench was originally written by the Cavazos Lab at the University of Delaware. Learn more via the OpenBenchmarking.org test page.
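To illustrate the kind of kernel the option-pricing cases evaluate in bulk, here is a closed-form Black-Scholes-Merton call price in C++; this is not FinanceBench's own code, and the sample inputs are arbitrary.

    #include <cmath>
    #include <cstdio>

    // Standard normal cumulative distribution function
    double norm_cdf(double x) { return 0.5 * std::erfc(-x / std::sqrt(2.0)); }

    // S: spot, K: strike, r: risk-free rate, sigma: volatility, T: years to expiry
    double bs_call(double S, double K, double r, double sigma, double T) {
        double d1 = (std::log(S / K) + (r + 0.5 * sigma * sigma) * T) / (sigma * std::sqrt(T));
        double d2 = d1 - sigma * std::sqrt(T);
        return S * norm_cdf(d1) - K * std::exp(-r * T) * norm_cdf(d2);
    }

    int main() {
        std::printf("at-the-money call price: %.4f\n", bs_call(100.0, 100.0, 0.05, 0.2, 1.0));
        return 0;
    }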

FinanceBench 2016-07-25 - Benchmark: Bonds OpenMP (ms, Fewer Is Better)
    OnLogic Helix 500:  78159.80 (SE +/- 778.31, N = 3)
    OnLogic Karbon 700: 70481.51 (SE +/- 857.94, N = 3)
    1. (CXX) g++ options: -O3 -march=native -fopenmp

FinanceBench 2016-07-25 - Benchmark: Repo OpenMP (ms, Fewer Is Better)
    OnLogic Helix 500:  55140.29 (SE +/- 595.41, N = 3)
    OnLogic Karbon 700: 49773.42 (SE +/- 173.73, N = 3)
    1. (CXX) g++ options: -O3 -march=native -fopenmp

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 29Oct2020 - Model: Rhodopsin Protein (ns/day, More Is Better)
    OnLogic Helix 500:  5.421 (SE +/- 0.046, N = 15)
    OnLogic Karbon 700: 4.896 (SE +/- 0.056, N = 15)
    1. (CXX) g++ options: -O3 -pthread -lm

LevelDB

LevelDB is a key-value storage library developed by Google that supports making use of Snappy for data compression and has other modern features. Learn more via the OpenBenchmarking.org test page.

LevelDB 1.22 - Benchmark: Hot Read (Microseconds Per Op, Fewer Is Better)
    OnLogic Helix 500:  9.381 (SE +/- 0.122, N = 15)
    OnLogic Karbon 700: 10.365 (SE +/- 0.127, N = 15)
    1. (CXX) g++ options: -O3 -lsnappy -lpthread

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 0.7.1 - Throughput Test: DistinctUserID (GB/s, More Is Better)
    OnLogic Helix 500:  0.67 (SE +/- 0.00, N = 3)
    OnLogic Karbon 700: 0.61 (SE +/- 0.00, N = 3)
    1. (CXX) g++ options: -O3 -pthread

LevelDB

LevelDB is a key-value storage library developed by Google that supports making use of Snappy for data compression and has other modern features. Learn more via the OpenBenchmarking.org test page.

LevelDB 1.22 - Benchmark: Random Delete (Microseconds Per Op, Fewer Is Better)
    OnLogic Helix 500:  53.39 (SE +/- 0.40, N = 3)
    OnLogic Karbon 700: 48.85 (SE +/- 0.24, N = 3)
    1. (CXX) g++ options: -O3 -lsnappy -lpthread

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 0.7.1 - Throughput Test: PartialTweets (GB/s, More Is Better)
    OnLogic Helix 500:  0.65 (SE +/- 0.00, N = 3)
    OnLogic Karbon 700: 0.60 (SE +/- 0.00, N = 3)
    1. (CXX) g++ options: -O3 -pthread

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent that is optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: efficientnet-b0 (ms, Fewer Is Better)
    OnLogic Helix 500:  9.41 (SE +/- 0.17, N = 3; MIN: 7.55 / MAX: 10.38)
    OnLogic Karbon 700: 10.06 (SE +/- 0.03, N = 3; MIN: 9.87 / MAX: 10.9)
    1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0 - Binary: Pathtracer ISPC - Model: Asian Dragon Obj (Frames Per Second, More Is Better)
    OnLogic Helix 500:  8.4878 (SE +/- 0.0868, N = 3; MIN: 8.11 / MAX: 11.76)
    OnLogic Karbon 700: 7.9607 (SE +/- 0.0163, N = 3; MIN: 7.76 / MAX: 8.7)

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent that is optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: squeezenet_ssd (ms, Fewer Is Better)
    OnLogic Helix 500:  26.80 (SE +/- 0.11, N = 3; MIN: 26.13 / MAX: 29.38)
    OnLogic Karbon 700: 28.56 (SE +/- 0.02, N = 3; MIN: 28.2 / MAX: 29.61)
    1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Timed HMMer Search

This test searches through the Pfam database of profile hidden Markov models. The search finds the domain structure of the Drosophila Sevenless protein. Learn more via the OpenBenchmarking.org test page.

Timed HMMer Search 3.3.1 - Pfam Database Search (Seconds, Fewer Is Better)
    OnLogic Helix 500:  123.57 (SE +/- 0.04, N = 3)
    OnLogic Karbon 700: 131.32 (SE +/- 0.03, N = 3)
    1. (CC) gcc options: -O3 -pthread -lhmmer -leasel -lm

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent that is optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: mobilenet (ms, Fewer Is Better)
    OnLogic Helix 500:  25.95 (SE +/- 0.18, N = 3; MIN: 25.19 / MAX: 27.05)
    OnLogic Karbon 700: 27.52 (SE +/- 0.02, N = 3; MIN: 27.18 / MAX: 28.66)
    1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Unpacking Firefox

This simple test profile measures how long it takes to extract the .tar.xz source package of the Mozilla Firefox Web Browser. Learn more via the OpenBenchmarking.org test page.

Unpacking Firefox 84.0 - Extracting: firefox-84.0.source.tar.xz (Seconds, Fewer Is Better)
    OnLogic Helix 500:  20.72 (SE +/- 0.05, N = 4)
    OnLogic Karbon 700: 21.95 (SE +/- 0.25, N = 20)

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet," focused on providing open-source, very fast ETC and S3 texture compression support. Learn more via the OpenBenchmarking.org test page.

Etcpak 0.7 - Configuration: DXT1 (Mpx/s, More Is Better)
    OnLogic Helix 500:  1146.09 (SE +/- 1.09, N = 8)
    OnLogic Karbon 700: 1211.45 (SE +/- 0.44, N = 3)
    1. (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0 - Binary: Pathtracer ISPC - Model: Asian Dragon (Frames Per Second, More Is Better)
    OnLogic Helix 500:  9.4193 (SE +/- 0.1025, N = 5; MIN: 9.02 / MAX: 13.36)
    OnLogic Karbon 700: 8.9213 (SE +/- 0.0458, N = 3; MIN: 8.6 / MAX: 9.81)

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
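For context, a minimal, hypothetical libwebp lossless-encode sketch is shown below; the tiny grey image is a placeholder for the benchmark's 6000x4000 JPEG input, which the test profile feeds through the cwebp utility instead.

    // Encode a small RGB buffer losslessly with the libwebp simple API.
    #include <webp/encode.h>
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    int main() {
        const int width = 64, height = 64;
        std::vector<uint8_t> rgb(width * height * 3, 128);   // flat grey RGB pixels

        uint8_t* output = nullptr;
        size_t size = WebPEncodeLosslessRGB(rgb.data(), width, height, width * 3, &output);
        if (size == 0) return 1;

        std::printf("encoded %zu bytes of lossless WebP\n", size);
        WebPFree(output);
        return 0;
    }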

WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless, Highest Compression (Encode Time - Seconds, Fewer Is Better)
    OnLogic Helix 500:  41.12 (SE +/- 0.02, N = 3)
    OnLogic Karbon 700: 43.41 (SE +/- 0.02, N = 3)
    1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

InfluxDB

This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.

InfluxDB 1.8.2 - Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, More Is Better)
    OnLogic Helix 500:  1118524.7 (SE +/- 5258.21, N = 3)
    OnLogic Karbon 700: 1061687.9 (SE +/- 18170.95, N = 3)

LibRaw

LibRaw is a RAW image decoder for digital camera photos. This test profile runs LibRaw's post-processing benchmark. Learn more via the OpenBenchmarking.org test page.

LibRaw 0.20 - Post-Processing Benchmark (Mpix/sec, More Is Better)
    OnLogic Helix 500:  28.35 (SE +/- 0.29, N = 3)
    OnLogic Karbon 700: 26.91 (SE +/- 0.06, N = 3)
    1. (CXX) g++ options: -O2 -fopenmp -ljpeg -lz -lm

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet," focused on providing open-source, very fast ETC and S3 texture compression support. Learn more via the OpenBenchmarking.org test page.

Etcpak 0.7 - Configuration: ETC1 + Dithering (Mpx/s, More Is Better)
    OnLogic Helix 500:  266.70 (SE +/- 1.05, N = 3)
    OnLogic Karbon 700: 280.58 (SE +/- 0.14, N = 3)
    1. (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development, ultimately intended as the successor to WebP. WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading support compared to WebP. Learn more via the OpenBenchmarking.org test page.

WebP2 Image Encode 20210126 - Encode Settings: Quality 100, Compression Effort 5 (Seconds, Fewer Is Better)
    OnLogic Helix 500:  20.39 (SE +/- 0.23, N = 12)
    OnLogic Karbon 700: 21.40 (SE +/- 0.21, N = 9)
    -lOpenGL -lGLX -lGLU -lglut -lXmu -lXi
    1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -rdynamic -lpthread -ljpeg

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.6 - Model: bertsquad-10 - Device: OpenMP CPU (Inferences Per Minute, More Is Better)
    OnLogic Helix 500:  382 (SE +/- 2.50, N = 3)
    OnLogic Karbon 700: 400 (SE +/- 1.04, N = 3)
    1. (CXX) g++ options: -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt

ONNX Runtime 1.6 - Model: fcn-resnet101-11 - Device: OpenMP CPU (Inferences Per Minute, More Is Better)
    OnLogic Helix 500:  43 (SE +/- 0.33, N = 3)
    OnLogic Karbon 700: 45 (SE +/- 0.17, N = 3)
    1. (CXX) g++ options: -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet," focused on providing open-source, very fast ETC and S3 texture compression support. Learn more via the OpenBenchmarking.org test page.

Etcpak 0.7 - Configuration: ETC1 (Mpx/s, More Is Better)
    OnLogic Helix 500:  285.57 (SE +/- 0.46, N = 3)
    OnLogic Karbon 700: 298.35 (SE +/- 0.67, N = 3)
    1. (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread

QuantLib

QuantLib is an open-source library/framework around quantitative finance for modeling, trading, and risk management scenarios. QuantLib is written in C++ with Boost, and its built-in benchmark reports the QuantLib Benchmark Index score. Learn more via the OpenBenchmarking.org test page.

QuantLib 1.21 (MFLOPS, More Is Better)
    OnLogic Helix 500:  2089.1 (SE +/- 28.73, N = 3)
    OnLogic Karbon 700: 2181.6 (SE +/- 17.49, N = 3)
    1. (CXX) g++ options: -O3 -march=native -rdynamic

Google SynthMark

SynthMark is a cross platform tool for benchmarking CPU performance under a variety of real-time audio workloads. It uses a polyphonic synthesizer model to provide standardized tests for latency, jitter and computational throughput. Learn more via the OpenBenchmarking.org test page.

Google SynthMark 20201109 - Test: VoiceMark_100 (Voices, More Is Better)
    OnLogic Helix 500:  590.24 (SE +/- 0.05, N = 3)
    OnLogic Karbon 700: 616.10 (SE +/- 0.63, N = 3)
    1. (CXX) g++ options: -lm -lpthread -std=c++11 -Ofast

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.0 - Preset: Fast (Seconds, Fewer Is Better)
    OnLogic Helix 500:  6.83 (SE +/- 0.05, N = 6)
    OnLogic Karbon 700: 7.12 (SE +/- 0.01, N = 3)
    1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet," focused on providing open-source, very fast ETC and S3 texture compression support. Learn more via the OpenBenchmarking.org test page.

Etcpak 0.7 - Configuration: ETC2 (Mpx/s, More Is Better)
    OnLogic Helix 500:  158.32 (SE +/- 0.20, N = 3)
    OnLogic Karbon 700: 165.03 (SE +/- 0.19, N = 3)
    1. (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.6 - Model: yolov4 - Device: OpenMP CPU (Inferences Per Minute, More Is Better)
    OnLogic Helix 500:  272 (SE +/- 1.17, N = 3)
    OnLogic Karbon 700: 263 (SE +/- 1.09, N = 3)
    1. (CXX) g++ options: -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt

SQLite Speedtest

This is a benchmark of SQLite's speedtest1 benchmark program with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.
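For context, a minimal sqlite3 C API sketch follows; speedtest1 generates a far larger synthetic workload, so this only shows the shape of the API being exercised.

    // Create an in-memory database, insert two rows, and count them.
    #include <sqlite3.h>
    #include <cstdio>

    int main() {
        sqlite3* db = nullptr;
        if (sqlite3_open(":memory:", &db) != SQLITE_OK) return 1;

        char* err = nullptr;
        sqlite3_exec(db,
                     "CREATE TABLE t(id INTEGER PRIMARY KEY, v TEXT);"
                     "INSERT INTO t(v) VALUES ('hello'), ('world');",
                     nullptr, nullptr, &err);
        if (err) { std::fprintf(stderr, "%s\n", err); sqlite3_free(err); }

        sqlite3_exec(db, "SELECT count(*) FROM t;",
                     [](void*, int, char** vals, char**) -> int {
                         std::printf("rows: %s\n", vals[0]);
                         return 0;
                     },
                     nullptr, nullptr);

        sqlite3_close(db);
        return 0;
    }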

SQLite Speedtest 3.30 - Timed Time - Size 1,000 (Seconds, Fewer Is Better)
    OnLogic Helix 500:  63.36 (SE +/- 0.06, N = 3)
    OnLogic Karbon 700: 61.28 (SE +/- 0.07, N = 3)
    1. (CC) gcc options: -O2 -ldl -lz -lpthread

lzbench

lzbench is an in-memory benchmark of various compressors. The file used for compression is a Linux kernel source tree tarball. Learn more via the OpenBenchmarking.org test page.
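For context, a minimal Zstandard one-shot compress/decompress sketch is shown below; buffer sizes and the sample data are arbitrary, whereas lzbench itself compresses a Linux kernel source tarball through each compressor's own API.

    #include <zstd.h>
    #include <cstdio>
    #include <string>
    #include <vector>

    int main() {
        std::string input(1 << 20, 'a');                       // trivially compressible data
        size_t bound = ZSTD_compressBound(input.size());
        std::vector<char> compressed(bound);

        // Level 1 corresponds to the "Zstd 1" configurations, level 8 to "Zstd 8".
        size_t csize = ZSTD_compress(compressed.data(), bound,
                                     input.data(), input.size(), 1);
        if (ZSTD_isError(csize)) return 1;

        std::vector<char> restored(input.size());
        size_t dsize = ZSTD_decompress(restored.data(), restored.size(),
                                       compressed.data(), csize);
        if (ZSTD_isError(dsize)) return 1;

        std::printf("compressed %zu -> %zu bytes\n", input.size(), csize);
        return 0;
    }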

lzbench 1.8 - Test: Zstd 8 - Process: Decompression (MB/s, More Is Better)
    OnLogic Helix 500:  1683 (SE +/- 4.67, N = 3)
    OnLogic Karbon 700: 1738 (SE +/- 5.51, N = 3)
    1. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

lzbench 1.8 - Test: Zstd 1 - Process: Decompression (MB/s, More Is Better)
    OnLogic Helix 500:  1563
    OnLogic Karbon 700: 1611
    SE +/- 0.58, N = 3 (reported for one of the two runs)
    1. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless (Encode Time - Seconds, Fewer Is Better)
    OnLogic Helix 500:  19.70 (SE +/- 0.01, N = 3)
    OnLogic Karbon 700: 20.25 (SE +/- 0.00, N = 3)
    1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 0.7.1 - Throughput Test: LargeRandom (GB/s, More Is Better)
    OnLogic Helix 500:  0.38 (SE +/- 0.00, N = 3)
    OnLogic Karbon 700: 0.37 (SE +/- 0.00, N = 3)
    1. (CXX) g++ options: -O3 -pthread

lzbench

lzbench is an in-memory benchmark of various compressors. The file used for compression is a Linux kernel source tree tarball. Learn more via the OpenBenchmarking.org test page.

lzbench 1.8 - Test: Brotli 0 - Process: Decompression (MB/s, More Is Better)
    OnLogic Helix 500:  576
    OnLogic Karbon 700: 562
    SE +/- 0.33, N = 3 (reported for one of the two runs)
    1. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

GnuPG

This test times how long it takes to encrypt a sample file using GnuPG. Learn more via the OpenBenchmarking.org test page.

GnuPG 2.2.27 - 2.7GB Sample File Encryption (Seconds, Fewer Is Better)
    OnLogic Helix 500:  73.93 (SE +/- 0.81, N = 4)
    OnLogic Karbon 700: 72.71 (SE +/- 0.66, N = 3)
    1. (CC) gcc options: -O2

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

AI Benchmark Alpha 0.1.2 - Device Inference Score (Score, More Is Better)
    OnLogic Helix 500:  751
    OnLogic Karbon 700: 740

lzbench

lzbench is an in-memory benchmark of various compressors. The file used for compression is a Linux kernel source tree tarball. Learn more via the OpenBenchmarking.org test page.

lzbench 1.8 - Test: Crush 0 - Process: Decompression (MB/s, More Is Better)
    OnLogic Helix 500:  456
    OnLogic Karbon 700: 450
    1. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

lzbench 1.8 - Test: Zstd 1 - Process: Compression (MB/s, More Is Better)
    OnLogic Helix 500:  459
    OnLogic Karbon 700: 453
    1. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

lzbench 1.8 - Test: Zstd 8 - Process: Compression (MB/s, More Is Better)
    OnLogic Helix 500:  80
    OnLogic Karbon 700: 81
    1. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4 - Speed: 10 (Frames Per Second, More Is Better)
    OnLogic Helix 500:  3.189 (SE +/- 0.014, N = 3)
    OnLogic Karbon 700: 3.164 (SE +/- 0.007, N = 3)

TNN

TNN is an open-source deep learning reasoning framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.2.3 - Target: CPU - Model: MobileNet v2 (ms, Fewer Is Better)
    OnLogic Helix 500:  366.52 (SE +/- 0.41, N = 3; MIN: 365.5 / MAX: 368.07)
    OnLogic Karbon 700: 368.05 (SE +/- 0.39, N = 3; MIN: 367.15 / MAX: 370.37)
    1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl

lzbench

lzbench is an in-memory benchmark of various compressors. The file used for compression is a Linux kernel source tree tarball. Learn more via the OpenBenchmarking.org test page.

lzbench 1.8 - Test: Brotli 0 - Process: Compression (MB/s, More Is Better)
    OnLogic Helix 500:  421
    OnLogic Karbon 700: 420
    SE +/- 0.58, N = 3 (reported for one of the two runs)
    1. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

CPU Temperature Monitor

CPU Temperature Monitor (Celsius) - Phoronix Test Suite System Monitoring
    OnLogic Helix 500: Min: 45 / Avg: 77.92 / Max: 96

LevelDB

LevelDB is a key-value storage library developed by Google that supports making use of Snappy for data compression and has other modern features. Learn more via the OpenBenchmarking.org test page.

LevelDB 1.22 - Benchmark: Random Fill (MB/s, More Is Better)
    OnLogic Helix 500:  33.6 (SE +/- 0.32, N = 7)
    OnLogic Karbon 700: 27.6 (SE +/- 1.32, N = 15)
    1. (CXX) g++ options: -O3 -lsnappy -lpthread

CPU Temperature Monitor during LevelDB 1.22 (Celsius)
    OnLogic Helix 500: Min: 65.0 / Avg: 74.1 / Max: 79.0

LevelDB 1.22 - Benchmark: Sequential Fill (Microseconds Per Op, Fewer Is Better)
    OnLogic Helix 500:  53.90 (SE +/- 0.68, N = 3)
    OnLogic Karbon 700: 53.44 (SE +/- 0.87, N = 15)
    1. (CXX) g++ options: -O3 -lsnappy -lpthread

LevelDB 1.22 - Benchmark: Sequential Fill (MB/s, More Is Better)
    OnLogic Helix 500:  32.9 (SE +/- 0.41, N = 3)
    OnLogic Karbon 700: 33.2 (SE +/- 0.52, N = 15)
    1. (CXX) g++ options: -O3 -lsnappy -lpthread

Cpuminer-Opt

CPU Temperature Monitor during Cpuminer-Opt 3.15.5 (Celsius)
    OnLogic Helix 500: Min: 61.0 / Avg: 71.2 / Max: 83.0

Cpuminer-Opt 3.15.5 - Algorithm: Triple SHA-256, Onecoin (kH/s, More Is Better)
    OnLogic Helix 500:  63503 (SE +/- 293.11, N = 3)
    OnLogic Karbon 700: 61289 (SE +/- 1646.47, N = 15)
    1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp