10900k june

Intel Core i9-10900K testing with a Gigabyte Z490 AORUS MASTER (F20d BIOS) and Gigabyte Intel UHD 630 CML GT2 3GB on Ubuntu 20.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2106300-IB-10900KJUN95
Test categories represented in this result file:
  AV1: 2 Tests
  Timed Code Compilation: 2 Tests
  C/C++ Compiler Tests: 7 Tests
  Compression Tests: 2 Tests
  CPU Massive: 6 Tests
  Creator Workloads: 7 Tests
  Encoding: 3 Tests
  Game Development: 2 Tests
  HPC - High Performance Computing: 5 Tests
  Machine Learning: 3 Tests
  MPI Benchmarks: 2 Tests
  Multi-Core: 10 Tests
  NVIDIA GPU Compute: 2 Tests
  Intel oneAPI: 2 Tests
  OpenMPI Tests: 2 Tests
  Programmer / Developer System Benchmarks: 4 Tests
  Server CPU Tests: 4 Tests
  Video Encoding: 3 Tests

Test runs:
  Run 1 - June 29 2021 - Test Duration: 1 Hour, 35 Minutes
  Run 2 - June 29 2021 - Test Duration: 4 Hours, 54 Minutes
  Run 3 - June 30 2021 - Test Duration: 4 Hours, 30 Minutes
  Average Test Duration: 3 Hours, 40 Minutes


10900k june - system configuration (identical for runs 1, 2, and 3):
  Processor: Intel Core i9-10900K @ 5.30GHz (10 Cores / 20 Threads)
  Motherboard: Gigabyte Z490 AORUS MASTER (F20d BIOS)
  Chipset: Intel Comet Lake PCH
  Memory: 16GB
  Disk: Samsung SSD 970 EVO 500GB
  Graphics: Gigabyte Intel UHD 630 CML GT2 3GB (1200MHz)
  Audio: Realtek ALC1220
  Monitor: G237HL
  Network: Intel Device 15f3 + Intel Wi-Fi 6 AX201
  OS: Ubuntu 20.04
  Kernel: 5.9.0-050900daily20201012-generic (x86_64)
  Desktop: GNOME Shell 3.36.4
  Display Server: X Server 1.20.9
  OpenGL: 4.6 Mesa 20.0.8
  OpenCL: OpenCL 2.1
  Vulkan: 1.2.131
  Compiler: GCC 9.3.0
  File-System: ext4
  Screen Resolution: 1920x1080

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-HskZEa/gcc-9-9.3.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: intel_pstate powersave - CPU Microcode: 0xe2
Security Details: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Result Overview (Phoronix Test Suite): the relative performance of runs 1-3 spanned roughly 100% to 104% across the following test suites: srsRAN, NAS Parallel Benchmarks, Mobile Neural Network, C-Blosc, NCNN, Zstd Compression, Timed GDB GNU Debugger Compilation, VP9 libvpx Encoding, ASTC Encoder, SVT-AV1, Timed FFmpeg Compilation, TNN, BRL-CAD, dav1d, Embree, GROMACS, Intel Open Image Denoise.
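Overview percentages like these are conventionally built from geometric means of per-test ratios, so that each test contributes equally regardless of its unit or magnitude. A minimal sketch of that aggregation, using hypothetical normalized scores rather than values from this result file:

```python
from statistics import geometric_mean

# Hypothetical per-test scores for one run, each expressed as a ratio
# to a 100% baseline run (1.00 = identical performance).
relative_scores = [1.00, 1.03, 0.98, 1.04]

# The geometric mean weights each ratio equally, so a single outlier
# test cannot dominate the summary the way an arithmetic mean would.
suite_summary = geometric_mean(relative_scores) * 100
print(f"suite relative performance: {suite_summary:.1f}%")
```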

Condensed results table: the per-test values for runs 1, 2, and 3 across every test in this file; the individual results are broken out test by test in the sections below.

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting among the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: MG.C (Total Mop/s, more is better)
  Run 1: 11137.19
  Run 2: 11121.48  (SE +/- 5.16, N = 3; Min 11111.28 / Max 11127.99)
  Run 3: 10054.56  (SE +/- 119.21, N = 6; Min 9466.71 / Max 10223.02)
  1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi  2. Open MPI 4.0.3
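The SE +/- annotations report the standard error of the mean across a run's repeated trials. Under the usual definition (sample standard deviation divided by the square root of N), it can be computed as below; the trial values here are hypothetical, not the raw samples behind this file:

```python
from math import sqrt
from statistics import mean, stdev

def standard_error(samples):
    """Standard error of the mean: sample stdev / sqrt(N)."""
    return stdev(samples) / sqrt(len(samples))

# Hypothetical Mop/s trials for one run of an NPB test.
trials = [9466.71, 10223.02, 10100.50]
print(f"Avg: {mean(trials):.2f}  SE +/- {standard_error(trials):.2f}  N = {len(trials)}")
```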

NAS Parallel Benchmarks 3.4 - Test / Class: EP.D (Total Mop/s, more is better)
  Run 1: 1838.96
  Run 2: 1833.03  (SE +/- 2.47, N = 3; Min 1828.4 / Max 1836.85)
  Run 3: 1669.82  (SE +/- 28.88, N = 3; Min 1624.03 / Max 1723.22)
  1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi  2. Open MPI 4.0.3

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). Formerly known as srsLTE, it can be used to build your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 21.04 - Test: OFDM_Test (Samples / Second, more is better)
  Run 1: 127100000
  Run 2: 122966667  (SE +/- 1530068.99, N = 3; Min 120000000 / Max 125100000)
  Run 3: 133566667  (SE +/- 437162.57, N = 3; Min 132700000 / Max 134100000)
  1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -rdynamic -lpthread -lbladeRF -lm -lfftw3f -lmbedcrypto

Zstd Compression

This test measures how quickly a sample file (a FreeBSD disk image, FreeBSD-12.2-RELEASE-amd64-memstick.img) can be compressed and decompressed using Zstd at various compression levels / settings. Learn more via the OpenBenchmarking.org test page.
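The MB/s figures below are throughput: input bytes divided by wall-clock time. A minimal sketch of that measurement using Python's bundled zlib as a stand-in (zstd itself is not in the standard library, and zlib level 8 only loosely mirrors the Zstd level of the same name):

```python
import time
import zlib

# Synthetic compressible payload standing in for the FreeBSD disk image.
payload = b"phoronix-test-suite " * 500_000  # ~10 MB

start = time.perf_counter()
compressed = zlib.compress(payload, level=8)
elapsed = time.perf_counter() - start

# Throughput is input bytes per second, reported in MB/s.
mb_per_s = len(payload) / elapsed / 1e6
print(f"compressed {len(payload)} -> {len(compressed)} bytes at {mb_per_s:.1f} MB/s")

# Round-trip check: decompression must restore the original bytes.
assert zlib.decompress(compressed) == payload
```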

Zstd Compression 1.5.0 - Compression Level: 8 - Compression Speed (MB/s, more is better)
  Run 1: 177.6
  Run 2: 189.0  (SE +/- 3.24, N = 3; Min 183.9 / Max 195)
  Run 3: 186.3  (SE +/- 3.13, N = 3; Min 181.2 / Max 192)
  1. (CC) gcc options: -O3 -pthread -lz -llzma

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.
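The NCNN and MNN results below are per-inference latencies in milliseconds, each reported with an average plus observed MIN/MAX iteration times. A generic sketch of how such a latency distribution is gathered; the workload here is a placeholder function, not NCNN's actual API:

```python
import time

def run_inference():
    # Placeholder workload standing in for one neural-network forward pass.
    return sum(i * i for i in range(50_000))

# Time repeated iterations and record each latency in milliseconds.
latencies_ms = []
for _ in range(10):
    start = time.perf_counter()
    run_inference()
    latencies_ms.append((time.perf_counter() - start) * 1000)

avg = sum(latencies_ms) / len(latencies_ms)
print(f"avg {avg:.2f} ms  MIN: {min(latencies_ms):.2f}  MAX: {max(latencies_ms):.2f}")
```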

NCNN 20210525 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better)
  Run 1: 4.30  (iteration MIN 4.13 / MAX 4.55)
  Run 2: 4.43  (SE +/- 0.04, N = 3; run Min 4.35 / Max 4.48; iteration MIN 4.09 / MAX 6.38)
  Run 3: 4.55  (SE +/- 0.10, N = 3; run Min 4.42 / Max 4.75; iteration MIN 4.3 / MAX 5.62)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20210525 - Target: CPU - Model: shufflenet-v2 (ms, fewer is better)
  Run 1: 3.15  (iteration MIN 3.02 / MAX 3.29)
  Run 2: 3.32  (SE +/- 0.03, N = 3; run Min 3.26 / Max 3.37; iteration MIN 3.09 / MAX 4.08)
  Run 3: 3.30  (SE +/- 0.05, N = 3; run Min 3.22 / Max 3.38; iteration MIN 3.05 / MAX 4)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NAS Parallel Benchmarks


NAS Parallel Benchmarks 3.4 - Test / Class: FT.C (Total Mop/s, more is better)
  Run 1: 12947.66
  Run 2: 13116.82  (SE +/- 12.29, N = 3; Min 13092.64 / Max 13132.72)
  Run 3: 12454.59  (SE +/- 108.08, N = 3; Min 12242.43 / Max 12596.49)
  1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi  2. Open MPI 4.0.3

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.2 - Model: inception-v3 (ms, fewer is better)
  Run 1: 29.91  (iteration MIN 29.05 / MAX 39.9)
  Run 2: 29.32  (SE +/- 0.17, N = 3; run Min 29.09 / Max 29.64; iteration MIN 28.88 / MAX 40.74)
  Run 3: 28.53  (SE +/- 0.23, N = 3; run Min 28.18 / Max 28.96; iteration MIN 28.11 / MAX 41.18)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

srsRAN


srsRAN 21.04 - Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM (UE Mb/s, more is better)
  Run 1: 138.4
  Run 2: 136.9  (SE +/- 0.44, N = 3; Min 136 / Max 137.4)
  Run 3: 143.3  (SE +/- 0.19, N = 3; Min 142.9 / Max 143.5)
  1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -rdynamic -lpthread -lbladeRF -lm -lfftw3f -lmbedcrypto

srsRAN 21.04 - Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM (UE Mb/s, more is better)
  Run 1: 292.3
  Run 2: 288.8  (SE +/- 0.15, N = 3; Min 288.6 / Max 289.1)
  Run 3: 301.8  (SE +/- 0.72, N = 3; Min 300.4 / Max 302.8)
  1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -rdynamic -lpthread -lbladeRF -lm -lfftw3f -lmbedcrypto

NAS Parallel Benchmarks


NAS Parallel Benchmarks 3.4 - Test / Class: SP.B (Total Mop/s, more is better)
  Run 1: 5354.13
  Run 2: 5290.88  (SE +/- 3.99, N = 3; Min 5284.11 / Max 5297.93)
  Run 3: 5126.92  (SE +/- 85.04, N = 3; Min 4956.84 / Max 5212.48)
  1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi  2. Open MPI 4.0.3

srsRAN


srsRAN 21.04 - Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM (eNb Mb/s, more is better)
  Run 1: 423.4
  Run 2: 413.2  (SE +/- 1.36, N = 3; Min 410.6 / Max 415.2)
  Run 3: 431.5  (SE +/- 3.74, N = 3; Min 424 / Max 435.5)
  1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -rdynamic -lpthread -lbladeRF -lm -lfftw3f -lmbedcrypto

srsRAN 21.04 - Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM (eNb Mb/s, more is better)
  Run 1: 418.5
  Run 2: 414.1  (SE +/- 1.10, N = 3; Min 412.3 / Max 416.1)
  Run 3: 431.9  (SE +/- 0.59, N = 3; Min 431 / Max 433)
  1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -rdynamic -lpthread -lbladeRF -lm -lfftw3f -lmbedcrypto

srsRAN 21.04 - Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM (eNb Mb/s, more is better)
  Run 1: 461.2
  Run 2: 457.5  (SE +/- 0.93, N = 3; Min 455.8 / Max 459)
  Run 3: 476.5  (SE +/- 0.84, N = 3; Min 475 / Max 477.9)
  1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -rdynamic -lpthread -lbladeRF -lm -lfftw3f -lmbedcrypto

srsRAN 21.04 - Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM (UE Mb/s, more is better)
  Run 1: 244.1
  Run 2: 238.6  (SE +/- 0.52, N = 3; Min 237.7 / Max 239.5)
  Run 3: 248.1  (SE +/- 2.18, N = 3; Min 244 / Max 251.4)
  1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -rdynamic -lpthread -lbladeRF -lm -lfftw3f -lmbedcrypto

Zstd Compression


Zstd Compression 1.5.0 - Compression Level: 8 - Decompression Speed (MB/s, more is better)
  Run 1: 4157.2
  Run 2: 4318.1  (SE +/- 34.82, N = 3; Min 4265.3 / Max 4383.8)
  Run 3: 4243.6  (SE +/- 22.34, N = 3; Min 4200.4 / Max 4275.1)
  1. (CC) gcc options: -O3 -pthread -lz -llzma

srsRAN


srsRAN 21.04 - Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM (UE Mb/s, more is better)
  Run 1: 68.0
  Run 2: 67.8  (SE +/- 0.13, N = 3; Min 67.5 / Max 67.9)
  Run 3: 70.4  (SE +/- 0.06, N = 3; Min 70.3 / Max 70.5)
  1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -rdynamic -lpthread -lbladeRF -lm -lfftw3f -lmbedcrypto

NAS Parallel Benchmarks


NAS Parallel Benchmarks 3.4 - Test / Class: LU.C (Total Mop/s, more is better)
  Run 1: 26904.82
  Run 2: 26820.57  (SE +/- 44.76, N = 3; Min 26731.14 / Max 26868.59)
  Run 3: 25911.23  (SE +/- 23.04, N = 3; Min 25866.48 / Max 25943.1)
  1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi  2. Open MPI 4.0.3

Mobile Neural Network


Mobile Neural Network 1.2 - Model: squeezenetv1.1 (ms, fewer is better)
  Run 1: 3.083  (iteration MIN 3.01 / MAX 3.14)
  Run 2: 3.173  (SE +/- 0.050, N = 3; run Min 3.1 / Max 3.27; iteration MIN 2.97 / MAX 5.58)
  Run 3: 3.060  (SE +/- 0.018, N = 3; run Min 3.02 / Max 3.08; iteration MIN 2.98 / MAX 3.82)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Zstd Compression


Zstd Compression 1.5.0 - Compression Level: 8, Long Mode - Compression Speed (MB/s, more is better)
  Run 1: 528.8
  Run 2: 546.5  (SE +/- 7.74, N = 4; Min 524.1 / Max 559.6)
  Run 3: 527.3  (SE +/- 5.98, N = 7; Min 507 / Max 543.3)
  1. (CC) gcc options: -O3 -pthread -lz -llzma

srsRAN


srsRAN 21.04 - Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM (eNb Mb/s, more is better)
  Run 1: 461.1
  Run 2: 456.3  (SE +/- 0.73, N = 3; Min 455 / Max 457.5)
  Run 3: 472.3  (SE +/- 2.21, N = 3; Min 468.2 / Max 475.8)
  1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -rdynamic -lpthread -lbladeRF -lm -lfftw3f -lmbedcrypto

Zstd Compression


Zstd Compression 1.5.0 - Compression Level: 3 - Compression Speed (MB/s, more is better)
  Run 1: 2270.9
  Run 2: 2347.7  (SE +/- 12.19, N = 3; Min 2331.2 / Max 2371.5)
  Run 3: 2306.9  (SE +/- 18.18, N = 3; Min 2279.9 / Max 2341.5)
  1. (CC) gcc options: -O3 -pthread -lz -llzma

srsRAN


srsRAN 21.04 - Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM (eNb Mb/s, more is better)
  Run 1: 142.2
  Run 2: 142.1  (SE +/- 0.12, N = 3; Min 141.9 / Max 142.3)
  Run 3: 146.9  (SE +/- 0.20, N = 3; Min 146.5 / Max 147.1)
  1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -rdynamic -lpthread -lbladeRF -lm -lfftw3f -lmbedcrypto

srsRAN 21.04 - Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM (UE Mb/s, more is better)
  Run 1: 151.7
  Run 2: 150.5  (SE +/- 0.51, N = 3; Min 149.5 / Max 151.2)
  Run 3: 155.5  (SE +/- 0.96, N = 3; Min 153.6 / Max 156.7)
  1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -rdynamic -lpthread -lbladeRF -lm -lfftw3f -lmbedcrypto

srsRAN 21.04 - Test: 5G PHY_DL_NR Test 270 PRB SISO 256-QAM (UE Mb/s, more is better)
  Run 1: 94.3
  Run 2: 93.2  (SE +/- 0.12, N = 3; Min 93 / Max 93.4)
  Run 3: 96.2  (SE +/- 0.70, N = 3; Min 94.8 / Max 97)
  1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -rdynamic -lpthread -lbladeRF -lm -lfftw3f -lmbedcrypto

Mobile Neural Network


Mobile Neural Network 1.2 - Model: SqueezeNetV1.0 (ms, fewer is better)
  Run 1: 4.543  (iteration MIN 4.41 / MAX 5.05)
  Run 2: 4.552  (SE +/- 0.020, N = 3; run Min 4.52 / Max 4.59; iteration MIN 4.41 / MAX 6.47)
  Run 3: 4.411  (SE +/- 0.044, N = 3; run Min 4.35 / Max 4.5; iteration MIN 4.29 / MAX 5.37)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Zstd Compression


Zstd Compression 1.5.0 - Compression Level: 3 - Decompression Speed (MB/s, more is better)
  Run 1: 4136.7
  Run 2: 4263.3  (SE +/- 3.55, N = 3; Min 4257 / Max 4269.3)
  Run 3: 4248.7  (SE +/- 15.19, N = 3; Min 4219.5 / Max 4270.5)
  1. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression 1.5.0 - Compression Level: 19 - Decompression Speed (MB/s, more is better)
  Run 1: 3870.4
  Run 2: 3812.8  (SE +/- 24.61, N = 3; Min 3783.7 / Max 3861.7)
  Run 3: 3758.1  (SE +/- 7.49, N = 3; Min 3743.3 / Max 3767.3)
  1. (CC) gcc options: -O3 -pthread -lz -llzma

srsRAN


srsRAN 21.04 - Test: 5G PHY_DL_NR Test 270 PRB SISO 256-QAM (eNb Mb/s, more is better)
  Run 1: 158.3
  Run 2: 156.4  (SE +/- 0.12, N = 3; Min 156.2 / Max 156.6)
  Run 3: 161.0  (SE +/- 0.81, N = 3; Min 159.4 / Max 162.1)
  1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -rdynamic -lpthread -lbladeRF -lm -lfftw3f -lmbedcrypto

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20210525 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better):
  Run 2: 3.48 (SE +/- 0.02, N = 3; Min 3.45 / Max 3.53) MIN: 3.32 / MAX: 4.74
  Run 3: 3.47 (SE +/- 0.02, N = 3; Min 3.44 / Max 3.51) MIN: 3.32 / MAX: 4.33
  Run 1: 3.39 MIN: 3.27 / MAX: 3.56
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20210525 - Target: CPU - Model: efficientnet-b0 (ms, fewer is better):
  Run 3: 5.47 (SE +/- 0.03, N = 3; Min 5.41 / Max 5.52) MIN: 5.26 / MAX: 6.64
  Run 2: 5.47 (SE +/- 0.05, N = 3; Min 5.36 / Max 5.53) MIN: 5.23 / MAX: 6.69
  Run 1: 5.33 MIN: 5.2 / MAX: 5.46
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20210525 - Target: CPU - Model: regnety_400m (ms, fewer is better):
  Run 2: 8.64 (SE +/- 0.01, N = 3; Min 8.62 / Max 8.66) MIN: 8.31 / MAX: 9.23
  Run 3: 8.53 (SE +/- 0.05, N = 3; Min 8.44 / Max 8.58) MIN: 8.23 / MAX: 9.86
  Run 1: 8.43 MIN: 8.22 / MAX: 8.97
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

Zstd Compression


Zstd Compression 1.5.0 - Compression Level: 19, Long Mode - Compression Speed (MB/s, more is better):
  Run 3: 29.0 (SE +/- 0.27, N = 3; Min 28.5 / Max 29.3)
  Run 2: 29.3 (SE +/- 0.09, N = 3; Min 29.2 / Max 29.5)
  Run 1: 29.7
  1. (CC) gcc options: -O3 -pthread -lz -llzma

C-Blosc

C-Blosc is a simple, fast, compressed, and persistent data store library for C. Learn more via the OpenBenchmarking.org test page.

C-Blosc 2.0 - Compressor: blosclz (MB/s, more is better):
  Run 1: 19606.3
  Run 2: 19796.8 (SE +/- 32.57, N = 3; Min 19736.9 / Max 19848.9)
  Run 3: 20063.7 (SE +/- 10.14, N = 3; Min 20047.4 / Max 20082.3)
  1. (CC) gcc options: -std=gnu99 -O3 -pthread -lrt -lm

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.2 - Model: mobilenetV3 (ms, fewer is better):
  Run 2: 1.637 (SE +/- 0.016, N = 3; Min 1.62 / Max 1.67) MIN: 1.58 / MAX: 2.42
  Run 3: 1.629 (SE +/- 0.006, N = 3; Min 1.62 / Max 1.64) MIN: 1.57 / MAX: 1.75
  Run 1: 1.601 MIN: 1.56 / MAX: 2.29
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

NCNN


NCNN 20210525 - Target: CPU - Model: mnasnet (ms, fewer is better):
  Run 3: 3.41 (SE +/- 0.02, N = 3; Min 3.36 / Max 3.44) MIN: 3.2 / MAX: 4.67
  Run 2: 3.37 (SE +/- 0.05, N = 3; Min 3.28 / Max 3.44) MIN: 3.17 / MAX: 4.84
  Run 1: 3.34 MIN: 3.3 / MAX: 3.7
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NAS Parallel Benchmarks

NPB (NAS Parallel Benchmarks) is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting among the different NPB tests/problems and varying the problem sizes. Learn more via the OpenBenchmarking.org test page.
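The MPI flavour of NPB is typically built per kernel and per problem class and then launched through mpirun; a sketch of the usual invocation (paths and the rank count are illustrative, not taken from this result file):

```shell
# Inside the NPB3.4-MPI source tree: build the BT kernel at problem
# class C, then run it across 16 MPI ranks.
make bt CLASS=C
mpirun -np 16 bin/bt.C.x
```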

NAS Parallel Benchmarks 3.4 - Test / Class: SP.C (Total Mop/s, more is better):
  Run 1: 4949.70
  Run 3: 5031.12 (SE +/- 1.40, N = 3; Min 5029.19 / Max 5033.85)
  Run 2: 5047.56 (SE +/- 5.95, N = 3; Min 5035.75 / Max 5054.66)
  1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
  2. Open MPI 4.0.3

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.
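This profile drives Google's vpxenc tool directly; a hedged sketch of a comparable command line (the input filename is illustrative, and the exact flags used by the test profile may differ):

```shell
# VP9 encode of a 1080p Y4M clip at --cpu-used=5, mirroring the
# "Speed 5" configuration, with the i9-10900K's 20 threads.
vpxenc --codec=vp9 --good --cpu-used=5 --threads=20 \
    -o bosphorus_1080p.webm bosphorus_1080p.y4m
```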

VP9 libvpx Encoding 1.10.0 - Speed: Speed 5 - Input: Bosphorus 1080p (Frames Per Second, more is better):
  Run 3: 30.39 (SE +/- 0.03, N = 3; Min 30.35 / Max 30.44)
  Run 1: 30.42
  Run 2: 30.99 (SE +/- 0.03, N = 3; Min 30.92 / Max 31.03)
  1. (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=gnu++11

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs, with support for instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree can also make use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 3.13 - Binary: Pathtracer ISPC - Model: Asian Dragon (Frames Per Second, more is better):
  Run 1: 18.20 MIN: 18.05 / MAX: 18.61
  Run 3: 18.33 (SE +/- 0.01, N = 3; Min 18.3 / Max 18.35) MIN: 18.19 / MAX: 18.67
  Run 2: 18.53 (SE +/- 0.13, N = 3; Min 18.29 / Max 18.73) MIN: 18.17 / MAX: 19.14

Mobile Neural Network


Mobile Neural Network 1.2 - Model: mobilenet-v1-1.0 (ms, fewer is better):
  Run 2: 3.359 (SE +/- 0.033, N = 3; Min 3.32 / Max 3.42) MIN: 3.26 / MAX: 3.84
  Run 1: 3.318 MIN: 3.27 / MAX: 4.82
  Run 3: 3.305 (SE +/- 0.011, N = 3; Min 3.29 / Max 3.33) MIN: 3.24 / MAX: 15.08
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Zstd Compression


Zstd Compression 1.5.0 - Compression Level: 19, Long Mode - Decompression Speed (MB/s, more is better):
  Run 2: 3823.6 (SE +/- 10.65, N = 3; Min 3802.8 / Max 3838)
  Run 1: 3835.5
  Run 3: 3884.6 (SE +/- 36.64, N = 3; Min 3811.3 / Max 3922.5)
  1. (CC) gcc options: -O3 -pthread -lz -llzma

NAS Parallel Benchmarks


NAS Parallel Benchmarks 3.4 - Test / Class: EP.C (Total Mop/s, more is better):
  Run 3: 1786.57 (SE +/- 10.88, N = 3; Min 1767.4 / Max 1805.09)
  Run 2: 1790.82 (SE +/- 28.41, N = 3; Min 1734.09 / Max 1822.01)
  Run 1: 1812.54
  1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
  2. Open MPI 4.0.3

ASTC Encoder

ASTC Encoder (astcenc) targets the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test covering both compression and decompression. Learn more via the OpenBenchmarking.org test page.
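astcenc is invoked with a color profile, input/output files, a block size, and a quality preset; a sketch of a comparable command (filenames and the block size are illustrative, not taken from this result file):

```shell
# Compress a PNG to ASTC using 6x6 blocks at the -medium preset,
# matching the "Preset: Medium" configuration of this profile.
astcenc -cl input.png output.astc 6x6 -medium
```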

ASTC Encoder 3.0 - Preset: Medium (Seconds, fewer is better):
  Run 2: 4.0720 (SE +/- 0.0030, N = 3; Min 4.07 / Max 4.08)
  Run 1: 4.0492
  Run 3: 4.0145 (SE +/- 0.0314, N = 3; Min 3.96 / Max 4.07)
  1. (CXX) g++ options: -O3 -flto -pthread

Embree


Embree 3.13 - Binary: Pathtracer - Model: Crown (Frames Per Second, more is better):
  Run 3: 14.14 (SE +/- 0.07, N = 3; Min 14.06 / Max 14.29) MIN: 13.94 / MAX: 14.5
  Run 2: 14.26 (SE +/- 0.11, N = 3; Min 14.11 / Max 14.47) MIN: 13.99 / MAX: 14.72
  Run 1: 14.34 MIN: 14.23 / MAX: 14.6

NCNN


NCNN 20210525 - Target: CPU - Model: blazeface (ms, fewer is better):
  Run 3: 1.44 (SE +/- 0.02, N = 3; Min 1.41 / Max 1.46) MIN: 1.37 / MAX: 2.73
  Run 2: 1.44 (SE +/- 0.02, N = 3; Min 1.4 / Max 1.46) MIN: 1.36 / MAX: 1.68
  Run 1: 1.42 MIN: 1.37 / MAX: 1.64
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NAS Parallel Benchmarks


NAS Parallel Benchmarks 3.4 - Test / Class: CG.C (Total Mop/s, more is better):
  Run 1: 5554.07
  Run 2: 5558.74 (SE +/- 12.02, N = 3; Min 5540.55 / Max 5581.44)
  Run 3: 5627.28 (SE +/- 4.50, N = 3; Min 5622.57 / Max 5636.28)
  1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
  2. Open MPI 4.0.3

Zstd Compression


Zstd Compression 1.5.0 - Compression Level: 3, Long Mode - Decompression Speed (MB/s, more is better):
  Run 2: 4498.5 (SE +/- 12.77, N = 3; Min 4473.1 / Max 4513.3)
  Run 3: 4546.2 (SE +/- 6.37, N = 3; Min 4533.5 / Max 4553)
  Run 1: 4555.9
  1. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression 1.5.0 - Compression Level: 19 - Compression Speed (MB/s, more is better):
  Run 1: 32.9
  Run 2: 33.0 (SE +/- 0.30, N = 3; Min 32.4 / Max 33.3)
  Run 3: 33.3 (SE +/- 0.00, N = 3; Min 33.3 / Max 33.3)
  1. (CC) gcc options: -O3 -pthread -lz -llzma

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of its Open Visual Cloud / Scalable Video Technology (SVT) effort; development has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based, multi-threaded encoder for the AV1 video format, and this test encodes a sample YUV video file. Learn more via the OpenBenchmarking.org test page.
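A hedged sketch of the reference encoder application's command line (the input filename is illustrative, and the test profile's exact arguments may differ):

```shell
# Preset 8 encode of a 1080p Y4M source to an IVF AV1 bitstream.
SvtAv1EncApp --preset 8 -i bosphorus_1080p.y4m -b output.ivf
```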

SVT-AV1 0.8.7 - Encoder Mode: Preset 8 - Input: Bosphorus 1080p (Frames Per Second, more is better):
  Run 2: 69.23 (SE +/- 0.16, N = 3; Min 68.99 / Max 69.54)
  Run 1: 69.44
  Run 3: 70.03 (SE +/- 0.14, N = 3; Min 69.81 / Max 70.28)
  1. (CXX) g++ options: -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq -pie

Mobile Neural Network


Mobile Neural Network 1.2 - Model: resnet-v2-50 (ms, fewer is better):
  Run 2: 29.75 (SE +/- 0.23, N = 3; Min 29.33 / Max 30.11) MIN: 29.2 / MAX: 40.31
  Run 3: 29.49 (SE +/- 0.27, N = 3; Min 29.12 / Max 30.01) MIN: 28.92 / MAX: 40.86
  Run 1: 29.42 MIN: 28.55 / MAX: 42.19
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: DenseNet (ms, fewer is better):
  Run 2: 2882.36 (SE +/- 30.42, N = 12; Min 2833.52 / Max 3214.95) MIN: 2814.73 / MAX: 5048.47
  Run 1: 2870.07 MIN: 2851.69 / MAX: 2906.41
  Run 3: 2851.86 (SE +/- 1.97, N = 3; Min 2849.74 / Max 2855.79) MIN: 2826.92 / MAX: 2907.55
  1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

VP9 libvpx Encoding


VP9 libvpx Encoding 1.10.0 - Speed: Speed 5 - Input: Bosphorus 4K (Frames Per Second, more is better):
  Run 3: 15.24 (SE +/- 0.11, N = 3; Min 15.07 / Max 15.45)
  Run 1: 15.28
  Run 2: 15.40 (SE +/- 0.17, N = 3; Min 15.07 / Max 15.66)
  1. (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=gnu++11

Embree


Embree 3.13 - Binary: Pathtracer - Model: Asian Dragon (Frames Per Second, more is better):
  Run 2: 15.80 (SE +/- 0.08, N = 3; Min 15.72 / Max 15.96) MIN: 15.63 / MAX: 16.19
  Run 1: 15.93 MIN: 15.85 / MAX: 16.15
  Run 3: 15.97 (SE +/- 0.17, N = 3; Min 15.76 / Max 16.3) MIN: 15.66 / MAX: 16.58

VP9 libvpx Encoding


VP9 libvpx Encoding 1.10.0 - Speed: Speed 0 - Input: Bosphorus 4K (Frames Per Second, more is better):
  Run 1: 6.32
  Run 2: 6.38 (SE +/- 0.00, N = 3; Min 6.38 / Max 6.39)
  Run 3: 6.38 (SE +/- 0.03, N = 3; Min 6.33 / Max 6.44)
  1. (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=gnu++11

Timed GDB GNU Debugger Compilation

This test times how long it takes to build the GNU Debugger (GDB) in a default configuration. Learn more via the OpenBenchmarking.org test page.
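The timed compilation follows the stock autotools flow; a minimal sketch (the parallel-job count should match the machine, here the i9-10900K's 20 threads):

```shell
# Unpack the GDB 10.2 sources, configure with defaults, and time a
# parallel build.
tar xf gdb-10.2.tar.xz && cd gdb-10.2
./configure
time make -j20
```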

Timed GDB GNU Debugger Compilation 10.2 - Time To Compile (Seconds, fewer is better):
  Run 1: 56.60
  Run 3: 56.26 (SE +/- 0.16, N = 3; Min 56.05 / Max 56.58)
  Run 2: 56.06 (SE +/- 0.09, N = 3; Min 55.94 / Max 56.24)

Zstd Compression


Zstd Compression 1.5.0 - Compression Level: 3, Long Mode - Compression Speed (MB/s, more is better):
  Run 2: 1307.7 (SE +/- 4.76, N = 3; Min 1300 / Max 1316.4)
  Run 3: 1311.5 (SE +/- 4.36, N = 3; Min 1303.7 / Max 1318.8)
  Run 1: 1319.6
  1. (CC) gcc options: -O3 -pthread -lz -llzma

SVT-AV1


SVT-AV1 0.8.7 - Encoder Mode: Preset 8 - Input: Bosphorus 4K (Frames Per Second, more is better):
  Run 3: 18.22 (SE +/- 0.02, N = 3; Min 18.2 / Max 18.26)
  Run 2: 18.25 (SE +/- 0.05, N = 3; Min 18.15 / Max 18.33)
  Run 1: 18.38
  1. (CXX) g++ options: -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq -pie

NCNN


NCNN 20210525 - Target: CPU - Model: googlenet (ms, fewer is better):
  Run 2: 12.72 (SE +/- 0.03, N = 3; Min 12.67 / Max 12.77) MIN: 12.43 / MAX: 12.91
  Run 3: 12.63 (SE +/- 0.02, N = 3; Min 12.59 / Max 12.65) MIN: 12.28 / MAX: 12.8
  Run 1: 12.61 MIN: 12.52 / MAX: 12.69
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.
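Decode throughput is typically measured by decoding as fast as possible while discarding the output; a sketch of such an invocation (the input filename and the --muxer flag usage are assumptions about the dav1d CLI, not taken from this result file):

```shell
# Decode an AV1 .ivf file at full speed, discarding the frames via
# the null muxer.
dav1d -i chimera_1080p.ivf --muxer null -o /dev/null
```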

dav1d 0.9.0 - Video Input: Chimera 1080p (FPS, more is better):
  Run 3: 813.38 (SE +/- 1.09, N = 3; Min 811.38 / Max 815.12) MIN: 630.35 / MAX: 1137.18
  Run 2: 819.80 (SE +/- 1.49, N = 3; Min 817.41 / Max 822.53) MIN: 635.09 / MAX: 1147.17
  Run 1: 820.43 MIN: 635.4 / MAX: 1129.6
  1. (CC) gcc options: -pthread -lm

TNN


TNN 0.3 - Target: CPU - Model: MobileNet v2 (ms, fewer is better):
  Run 1: 286.23 MIN: 285.92 / MAX: 286.65
  Run 3: 285.36 (SE +/- 2.10, N = 3; Min 283.07 / Max 289.56) MIN: 282.61 / MAX: 296.95
  Run 2: 284.09 (SE +/- 0.57, N = 3; Min 283.03 / Max 284.99) MIN: 282.55 / MAX: 286.38
  1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

NAS Parallel Benchmarks


NAS Parallel Benchmarks 3.4 - Test / Class: BT.C (Total Mop/s, more is better):
  Run 2: 26239.81 (SE +/- 4.43, N = 3; Min 26231.78 / Max 26247.07)
  Run 1: 26301.99
  Run 3: 26422.40 (SE +/- 17.15, N = 3; Min 26389.54 / Max 26447.34)
  1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
  2. Open MPI 4.0.3

SVT-AV1


SVT-AV1 0.8.7 - Encoder Mode: Preset 4 - Input: Bosphorus 1080p (Frames Per Second, more is better):
  Run 3: 5.396 (SE +/- 0.024, N = 3; Min 5.35 / Max 5.42)
  Run 2: 5.401 (SE +/- 0.034, N = 3; Min 5.35 / Max 5.46)
  Run 1: 5.431
  1. (CXX) g++ options: -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq -pie

Embree


Embree 3.13 - Binary: Pathtracer ISPC - Model: Crown (Frames Per Second, more is better):
  Run 3: 15.92 (SE +/- 0.04, N = 3; Min 15.85 / Max 15.99) MIN: 15.73 / MAX: 16.29
  Run 2: 15.95 (SE +/- 0.03, N = 3; Min 15.9 / Max 15.99) MIN: 15.78 / MAX: 16.39
  Run 1: 16.01 MIN: 15.87 / MAX: 16.33

NCNN


NCNN 20210525 - Target: CPU - Model: vgg16 (ms, fewer is better):
  Run 3: 62.43 (SE +/- 0.02, N = 3; Min 62.41 / Max 62.47) MIN: 62.27 / MAX: 62.95
  Run 2: 62.08 (SE +/- 0.02, N = 3; Min 62.04 / Max 62.12) MIN: 61.89 / MAX: 64.13
  Run 1: 62.06 MIN: 61.88 / MAX: 62.6
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

Embree


Embree 3.13 - Binary: Pathtracer ISPC - Model: Asian Dragon Obj (Frames Per Second, more is better):
  Run 2: 16.31 (SE +/- 0.01, N = 3; Min 16.3 / Max 16.33) MIN: 16.18 / MAX: 16.71
  Run 3: 16.34 (SE +/- 0.01, N = 3; Min 16.33 / Max 16.35) MIN: 16.2 / MAX: 16.73
  Run 1: 16.41 MIN: 16.26 / MAX: 16.8

NCNN


NCNN 20210525 - Target: CPU - Model: mobilenet (ms, fewer is better):
  Run 3: 15.78 (SE +/- 0.04, N = 3; Min 15.71 / Max 15.86) MIN: 15.25 / MAX: 25.89
  Run 2: 15.77 (SE +/- 0.02, N = 3; Min 15.75 / Max 15.8) MIN: 15.35 / MAX: 16.37
  Run 1: 15.69 MIN: 15.36 / MAX: 17.55
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20210525 - Target: CPU - Model: yolov4-tiny (ms, fewer is better):
  Run 2: 23.22 (SE +/- 0.03, N = 3; Min 23.17 / Max 23.26) MIN: 22.94 / MAX: 24.13
  Run 3: 23.19 (SE +/- 0.02, N = 3; Min 23.16 / Max 23.23) MIN: 22.9 / MAX: 26.04
  Run 1: 23.09 MIN: 22.93 / MAX: 23.33
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

Embree


Embree 3.13 - Binary: Pathtracer - Model: Asian Dragon Obj (Frames Per Second, more is better):
  Run 1: 14.74 MIN: 14.68 / MAX: 14.92
  Run 2: 14.80 (SE +/- 0.02, N = 3; Min 14.76 / Max 14.83) MIN: 14.63 / MAX: 15.07
  Run 3: 14.82 (SE +/- 0.02, N = 3; Min 14.79 / Max 14.85) MIN: 14.71 / MAX: 15.06

ASTC Encoder


ASTC Encoder 3.0 - Preset: Thorough (Seconds, fewer is better):
  Run 1: 9.3677
  Run 2: 9.3489 (SE +/- 0.0103, N = 3; Min 9.33 / Max 9.36)
  Run 3: 9.3285 (SE +/- 0.0027, N = 3; Min 9.32 / Max 9.33)
  1. (CXX) g++ options: -O3 -flto -pthread

dav1d


dav1d 0.9.0 - Video Input: Summer Nature 1080p (FPS, more is better):
  Run 3: 767.77 (SE +/- 1.29, N = 3; Min 765.53 / Max 770.01) MIN: 632.05 / MAX: 833.44
  Run 1: 770.57 MIN: 661.07 / MAX: 835.71
  Run 2: 770.74 (SE +/- 0.13, N = 3; Min 770.49 / Max 770.92) MIN: 651.67 / MAX: 836.2
  1. (CC) gcc options: -pthread -lm

Timed FFmpeg Compilation

This test times how long it takes to build the FFmpeg multimedia library. Learn more via the OpenBenchmarking.org test page.

Timed FFmpeg Compilation 4.4 - Time To Compile (Seconds, fewer is better):
  Run 1: 48.51
  Run 3: 48.41 (SE +/- 0.08, N = 3; Min 48.24 / Max 48.53)
  Run 2: 48.33 (SE +/- 0.05, N = 3; Min 48.27 / Max 48.42)

dav1d


dav1d 0.9.0 - Video Input: Chimera 1080p 10-bit (FPS, more is better):
  Run 2: 490.72 (SE +/- 1.02, N = 3; Min 488.69 / Max 491.86) MIN: 392.28 / MAX: 749.63
  Run 1: 491.13 MIN: 392.46 / MAX: 737.75
  Run 3: 492.52 (SE +/- 0.13, N = 3; Min 492.27 / Max 492.65) MIN: 392.25 / MAX: 787.53
  1. (CC) gcc options: -pthread -lm

NCNN


NCNN 20210525, Target: CPU - Model: alexnet (ms, Fewer Is Better)
  1: 12.34 (MIN: 12.25 / MAX: 12.43)
  2: 12.33 (SE +/- 0.02, N = 3; Min: 12.31 / Avg: 12.33 / Max: 12.38; MIN: 12.21 / MAX: 20.83)
  3: 12.30 (SE +/- 0.01, N = 3; Min: 12.29 / Avg: 12.3 / Max: 12.32; MIN: 12.19 / MAX: 13.7)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread
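The NCNN results are per-inference latency in milliseconds, so lower is better. For intuition, a latency converts to an implied single-stream throughput as sketched below (using the ~12.33 ms alexnet result above as the input; the conversion itself is generic, not an NCNN API):

```python
def inferences_per_second(latency_ms):
    """Single-stream throughput implied by a per-inference latency."""
    return 1000.0 / latency_ms

print(round(inferences_per_second(12.33), 1))  # 81.1
```

This is only the serial rate; batched or multi-stream inference would scale differently.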

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format; this test encodes a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 0.8.7, Encoder Mode: Preset 4 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  2: 1.629 (SE +/- 0.004, N = 3; Min: 1.62 / Avg: 1.63 / Max: 1.63)
  1: 1.631
  3: 1.634 (SE +/- 0.003, N = 3; Min: 1.63 / Avg: 1.63 / Max: 1.64)
  1. (CXX) g++ options: -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq -pie

BRL-CAD

BRL-CAD is a cross-platform, open-source solid modeling system with built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.

BRL-CAD 7.32.2, VGR Performance Metric (More Is Better)
  3: 187146
  2: 187547
  1: 187701
  1. (CXX) g++ options: -std=c++11 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -pedantic -pthread -ldl -lm

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3, Target: CPU - Model: SqueezeNet v1.1 (ms, Fewer Is Better)
  1: 262.42 (MIN: 261.96 / MAX: 268.6)
  3: 261.97 (SE +/- 0.07, N = 3; Min: 261.87 / Avg: 261.97 / Max: 262.1; MIN: 261.39 / MAX: 268.58)
  2: 261.69 (SE +/- 0.04, N = 3; Min: 261.62 / Avg: 261.69 / Max: 261.77; MIN: 261.37 / MAX: 268.05)
  1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.9.0, Video Input: Summer Nature 4K (FPS, More Is Better)
  1: 190.74 (MIN: 167.81 / MAX: 197.99)
  2: 191.09 (SE +/- 0.10, N = 3; Min: 190.9 / Avg: 191.09 / Max: 191.22; MIN: 165.76 / MAX: 198.45)
  3: 191.20 (SE +/- 0.08, N = 3; Min: 191.1 / Avg: 191.2 / Max: 191.35; MIN: 167.42 / MAX: 198.75)
  1. (CC) gcc options: -pthread -lm

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20210525, Target: CPU - Model: resnet18 (ms, Fewer Is Better)
  1: 13.71 (MIN: 13.51 / MAX: 14.03)
  3: 13.68 (SE +/- 0.02, N = 3; Min: 13.66 / Avg: 13.68 / Max: 13.71; MIN: 13.49 / MAX: 13.88)
  2: 13.68 (SE +/- 0.02, N = 3; Min: 13.64 / Avg: 13.68 / Max: 13.71; MIN: 13.49 / MAX: 14.06)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0, Compression Level: 8, Long Mode - Decompression Speed (MB/s, More Is Better)
  3: 4643.8 (SE +/- 7.50, N = 7; Min: 4612.7 / Avg: 4643.8 / Max: 4671.4)
  1: 4653.3
  2: 4653.7 (SE +/- 3.98, N = 3; Min: 4648.1 / Avg: 4653.7 / Max: 4661.4)
  1. (CC) gcc options: -O3 -pthread -lz -llzma
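The Zstd decompression results above are throughput in MB/s, i.e. decompressed output per second of wall-clock time. A minimal sketch of the metric (the byte count and elapsed time below are illustrative assumptions, not measurements from this run):

```python
def throughput_mb_s(decompressed_bytes, elapsed_seconds):
    """Throughput as megabytes (10**6 bytes) of output per second."""
    return decompressed_bytes / elapsed_seconds / 1e6

# Illustrative: emitting ~1 GB of decompressed data in 0.215 s lands in
# the same range as the ~4650 MB/s results reported above.
print(round(throughput_mb_s(1_000_000_000, 0.215), 1))  # 4651.2
```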

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3, Target: CPU - Model: SqueezeNet v2 (ms, Fewer Is Better)
  3: 58.73 (SE +/- 0.04, N = 3; Min: 58.7 / Avg: 58.73 / Max: 58.81; MIN: 58.61 / MAX: 59.06)
  2: 58.72 (SE +/- 0.02, N = 3; Min: 58.69 / Avg: 58.72 / Max: 58.75; MIN: 58.6 / MAX: 60.74)
  1: 58.66 (MIN: 58.6 / MAX: 58.91)
  1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

GROMACS

The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing with the water_GMX50 data. This test profile allows selecting between CPU and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.

GROMACS 2021.2, Implementation: MPI CPU - Input: water_GMX50_bare (Ns Per Day, More Is Better)
  1: 0.963
  3: 0.963 (SE +/- 0.000, N = 3; Min: 0.96 / Avg: 0.96 / Max: 0.96)
  2: 0.964 (SE +/- 0.001, N = 3; Min: 0.96 / Avg: 0.96 / Max: 0.97)
  1. (CXX) g++ options: -O3 -pthread
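GROMACS reports throughput as nanoseconds of simulated time per day of wall-clock time. The conversion from a run's step count, integration timestep, and wall time is sketched below; the specific numbers (5,000 steps, a 2 fs timestep, 897 s) are illustrative assumptions chosen to land near this system's ~0.963 ns/day, not values taken from this run:

```python
def ns_per_day(n_steps, dt_ps, wall_seconds):
    """Simulated nanoseconds per wall-clock day (the GROMACS ns/day metric)."""
    simulated_ns = n_steps * dt_ps / 1000.0   # picoseconds -> nanoseconds
    return simulated_ns / (wall_seconds / 86400.0)

# 5,000 steps at a 0.002 ps (2 fs) timestep simulate 0.01 ns of physics;
# doing that in 897 s of wall time works out to roughly 0.963 ns/day.
print(round(ns_per_day(5000, 0.002, 897), 3))  # 0.963
```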

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20210525, Target: CPU - Model: resnet50 (ms, Fewer Is Better)
  3: 22.52 (SE +/- 0.06, N = 3; Min: 22.41 / Avg: 22.52 / Max: 22.6; MIN: 22.24 / MAX: 23.04)
  2: 22.50 (SE +/- 0.03, N = 3; Min: 22.47 / Avg: 22.5 / Max: 22.55; MIN: 21.98 / MAX: 22.91)
  1: 22.50 (MIN: 22.25 / MAX: 22.96)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.

VP9 libvpx Encoding 1.10.0, Speed: Speed 0 - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  1: 14.18
  2: 14.19 (SE +/- 0.02, N = 3; Min: 14.17 / Avg: 14.19 / Max: 14.22)
  3: 14.19 (SE +/- 0.03, N = 3; Min: 14.12 / Avg: 14.19 / Max: 14.23)
  1. (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=gnu++11

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile exercises both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 3.0, Preset: Exhaustive (Seconds, Fewer Is Better)
  2: 50.24 (SE +/- 0.02, N = 3; Min: 50.21 / Avg: 50.24 / Max: 50.25)
  3: 50.21 (SE +/- 0.02, N = 3; Min: 50.18 / Avg: 50.21 / Max: 50.24)
  1: 50.20
  1. (CXX) g++ options: -O3 -flto -pthread

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20210525, Target: CPU - Model: squeezenet_ssd (ms, Fewer Is Better)
  3: 17.57 (SE +/- 0.03, N = 3; Min: 17.53 / Avg: 17.57 / Max: 17.62; MIN: 17.31 / MAX: 18.24)
  2: 17.57 (SE +/- 0.02, N = 3; Min: 17.54 / Avg: 17.57 / Max: 17.6; MIN: 17.36 / MAX: 18.94)
  1: 17.57 (MIN: 17.33 / MAX: 27.34)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

Intel Open Image Denoise

Open Image Denoise is a denoising library for ray-tracing and part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

Intel Open Image Denoise 1.4.0, Run: RTLightmap.hdr.4096x4096 (Images / Sec, More Is Better)
  1: 0.23
  2: 0.23 (SE +/- 0.00, N = 3; Min: 0.23 / Avg: 0.23 / Max: 0.23)
  3: 0.23 (SE +/- 0.00, N = 3; Min: 0.23 / Avg: 0.23 / Max: 0.23)

Intel Open Image Denoise 1.4.0, Run: RT.ldr_alb_nrm.3840x2160 (Images / Sec, More Is Better)
  1: 0.46
  2: 0.46 (SE +/- 0.00, N = 3; Min: 0.46 / Avg: 0.46 / Max: 0.46)
  3: 0.46 (SE +/- 0.00, N = 3; Min: 0.46 / Avg: 0.46 / Max: 0.46)

Intel Open Image Denoise 1.4.0, Run: RT.hdr_alb_nrm.3840x2160 (Images / Sec, More Is Better)
  1: 0.46
  2: 0.46 (SE +/- 0.00, N = 3; Min: 0.46 / Avg: 0.46 / Max: 0.46)
  3: 0.46 (SE +/- 0.00, N = 3; Min: 0.45 / Avg: 0.46 / Max: 0.46)

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.2, Model: MobileNetV2_224 (ms, Fewer Is Better)
  1: 3.151 (MIN: 2.69 / MAX: 5.79)
  2: 2.909 (SE +/- 0.139, N = 3; Min: 2.77 / Avg: 2.91 / Max: 3.19; MIN: 2.7 / MAX: 4.09)
  3: 2.783 (SE +/- 0.007, N = 3; Min: 2.77 / Avg: 2.78 / Max: 2.79; MIN: 2.67 / MAX: 3.57)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

89 Results Shown

NAS Parallel Benchmarks:
  MG.C
  EP.D
srsRAN
Zstd Compression
NCNN:
  CPU-v2-v2 - mobilenet-v2
  CPU - shufflenet-v2
NAS Parallel Benchmarks
Mobile Neural Network
srsRAN:
  4G PHY_DL_Test 100 PRB MIMO 64-QAM
  4G PHY_DL_Test 100 PRB SISO 256-QAM
NAS Parallel Benchmarks
srsRAN:
  4G PHY_DL_Test 100 PRB SISO 64-QAM
  4G PHY_DL_Test 100 PRB MIMO 64-QAM
  4G PHY_DL_Test 100 PRB SISO 256-QAM
  4G PHY_DL_Test 100 PRB SISO 64-QAM
Zstd Compression
srsRAN
NAS Parallel Benchmarks
Mobile Neural Network
Zstd Compression
srsRAN
Zstd Compression
srsRAN:
  5G PHY_DL_NR Test 52 PRB SISO 64-QAM
  4G PHY_DL_Test 100 PRB MIMO 256-QAM
  5G PHY_DL_NR Test 270 PRB SISO 256-QAM
Mobile Neural Network
Zstd Compression:
  3 - Decompression Speed
  19 - Decompression Speed
srsRAN
NCNN:
  CPU-v3-v3 - mobilenet-v3
  CPU - efficientnet-b0
  CPU - regnety_400m
Zstd Compression
C-Blosc
Mobile Neural Network
NCNN
NAS Parallel Benchmarks
VP9 libvpx Encoding
Embree
Mobile Neural Network
Zstd Compression
NAS Parallel Benchmarks
ASTC Encoder
Embree
NCNN
NAS Parallel Benchmarks
Zstd Compression:
  3, Long Mode - Decompression Speed
  19 - Compression Speed
SVT-AV1
Mobile Neural Network
TNN
VP9 libvpx Encoding
Embree
VP9 libvpx Encoding
Timed GDB GNU Debugger Compilation
Zstd Compression
SVT-AV1
NCNN
dav1d
TNN
NAS Parallel Benchmarks
SVT-AV1
Embree
NCNN
Embree
NCNN:
  CPU - mobilenet
  CPU - yolov4-tiny
Embree
ASTC Encoder
dav1d
Timed FFmpeg Compilation
dav1d
NCNN
SVT-AV1
BRL-CAD
TNN
dav1d
NCNN
Zstd Compression
TNN
GROMACS
NCNN
VP9 libvpx Encoding
ASTC Encoder
NCNN
Intel Open Image Denoise:
  RTLightmap.hdr.4096x4096
  RT.ldr_alb_nrm.3840x2160
  RT.hdr_alb_nrm.3840x2160
Mobile Neural Network