10900k june

Intel Core i9-10900K testing with a Gigabyte Z490 AORUS MASTER (F20d BIOS) and Gigabyte Intel UHD 630 CML GT2 3GB on Ubuntu 20.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2106300-IB-10900KJUN95

This result file spans the following test categories:

AV1: 2 tests
Timed Code Compilation: 2 tests
C/C++ Compiler Tests: 7 tests
Compression Tests: 2 tests
CPU Massive: 6 tests
Creator Workloads: 7 tests
Encoding: 3 tests
Game Development: 2 tests
HPC - High Performance Computing: 5 tests
Machine Learning: 3 tests
MPI Benchmarks: 2 tests
Multi-Core: 10 tests
NVIDIA GPU Compute: 2 tests
Intel oneAPI: 2 tests
OpenMPI Tests: 2 tests
Programmer / Developer System Benchmarks: 4 tests
Server CPU Tests: 4 tests
Video Encoding: 3 tests


Run Management

Run 1: June 29 - Test Duration: 1 Hour, 35 Minutes
Run 2: June 29 - Test Duration: 4 Hours, 54 Minutes
Run 3: June 30 - Test Duration: 4 Hours, 30 Minutes
Average Test Duration: 3 Hours, 40 Minutes

10900k june - System Configuration (identical across runs 1, 2, and 3)

Processor: Intel Core i9-10900K @ 5.30GHz (10 Cores / 20 Threads)
Motherboard: Gigabyte Z490 AORUS MASTER (F20d BIOS)
Chipset: Intel Comet Lake PCH
Memory: 16GB
Disk: Samsung SSD 970 EVO 500GB
Graphics: Gigabyte Intel UHD 630 CML GT2 3GB (1200MHz)
Audio: Realtek ALC1220
Monitor: G237HL
Network: Intel Device 15f3 + Intel Wi-Fi 6 AX201
OS: Ubuntu 20.04
Kernel: 5.9.0-050900daily20201012-generic (x86_64)
Desktop: GNOME Shell 3.36.4
Display Server: X Server 1.20.9
OpenGL: 4.6 Mesa 20.0.8
OpenCL: OpenCL 2.1
Vulkan: 1.2.131
Compiler: GCC 9.3.0
File-System: ext4
Screen Resolution: 1920x1080

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-HskZEa/gcc-9-9.3.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: intel_pstate powersave - CPU Microcode: 0xe2
Security Details: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Result Overview (Phoronix Test Suite 10.6.1): the geometric means of the three runs fall within roughly 100-104% of one another across the component suites: srsRAN, NAS Parallel Benchmarks, Mobile Neural Network, C-Blosc, NCNN, Zstd Compression, Timed GDB GNU Debugger Compilation, VP9 libvpx Encoding, ASTC Encoder, SVT-AV1, Timed FFmpeg Compilation, TNN, BRL-CAD, dav1d, Embree, GROMACS, and Intel Open Image Denoise.
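Overview roll-ups like this are conventionally computed as a geometric mean of normalized per-test scores, which keeps a single outlier test from dominating the aggregate. A minimal sketch of that aggregation (the input values below are illustrative, not taken from this result file):

```python
from math import prod

def geometric_mean(values):
    # n-th root of the product of n values; less sensitive to a single
    # outlier than the arithmetic mean, which is why benchmark
    # roll-ups commonly use it.
    return prod(values) ** (1 / len(values))

# Hypothetical per-test scores for one run, each expressed as a
# percentage of the slowest run on that test.
normalized = [104.0, 101.5, 102.8]
print(round(geometric_mean(normalized), 1))  # → 102.8
```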

(Condensed side-by-side results table omitted: it lists every test's values for runs 1, 2, and 3 in a flattened form. The per-test results that follow present these values individually, with error statistics.)

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting among the different NPB tests/problems and problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4, Test / Class: MG.C (Total Mop/s; more is better)
Run 1: 11137.19
Run 2: 11121.48 (SE +/- 5.16, N = 3; Min 11111.28 / Max 11127.99)
Run 3: 10054.56 (SE +/- 119.21, N = 6; Min 9466.71 / Max 10223.02)
1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
2. Open MPI 4.0.3
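The "SE +/-" figures in these results are standard errors of the mean over N trials. As a sketch, the three MG.C samples behind run 2 can be reconstructed from the reported Min/Avg/Max (the middle sample is inferred as 3 * avg - min - max, not read from the raw logs), and recomputing the standard error reproduces the reported 5.16:

```python
from math import sqrt
from statistics import mean, stdev

# Run 2 of MG.C reported Min 11111.28 / Avg 11121.48 / Max 11127.99, N = 3;
# with three samples, the middle one follows from 3 * avg - min - max.
samples = [11111.28, 3 * 11121.48 - 11111.28 - 11127.99, 11127.99]
se = stdev(samples) / sqrt(len(samples))  # standard error of the mean
print(round(mean(samples), 2), round(se, 2))  # → 11121.48 5.16
```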

NAS Parallel Benchmarks 3.4, Test / Class: EP.D (Total Mop/s; more is better)
Run 1: 1838.96
Run 2: 1833.03 (SE +/- 2.47, N = 3; Min 1828.4 / Max 1836.85)
Run 3: 1669.82 (SE +/- 28.88, N = 3; Min 1624.03 / Max 1723.22)
1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
2. Open MPI 4.0.3

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 21.04, Test: OFDM_Test (Samples / Second; more is better)
Run 1: 127100000
Run 2: 122966667 (SE +/- 1530068.99, N = 3; Min 120000000 / Max 125100000)
Run 3: 133566667 (SE +/- 437162.57, N = 3; Min 132700000 / Max 134100000)
1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -rdynamic -lpthread -lbladeRF -lm -lfftw3f -lmbedcrypto

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0, Compression Level: 8 - Compression Speed (MB/s; more is better)
Run 1: 177.6
Run 2: 189.0 (SE +/- 3.24, N = 3; Min 183.9 / Max 195)
Run 3: 186.3 (SE +/- 3.13, N = 3; Min 181.2 / Max 192)
1. (CC) gcc options: -O3 -pthread -lz -llzma
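With the fastest and slowest runs this close, the gap is easiest to read as a percentage. A toy helper, with the values copied from the result above:

```python
def percent_faster(fast, slow):
    # How much faster `fast` is than `slow`, in percent.
    return (fast / slow - 1) * 100

# Zstd level 8 compression speed: run 2 (189.0 MB/s) vs run 1 (177.6 MB/s).
print(round(percent_faster(189.0, 177.6), 1))  # → 6.4
```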

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20210525, Target: CPU-v2-v2 - Model: mobilenet-v2 (ms; fewer is better)
Run 1: 4.30 (MIN: 4.13 / MAX: 4.55)
Run 2: 4.43 (SE +/- 0.04, N = 3; Min 4.35 / Max 4.48; MIN: 4.09 / MAX: 6.38)
Run 3: 4.55 (SE +/- 0.10, N = 3; Min 4.42 / Max 4.75; MIN: 4.3 / MAX: 5.62)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread
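NCNN results are latencies, so fewer is better; to put runs on a common "higher is better" scale, each time can be scored against the best (lowest) one. A sketch using the mobilenet-v2 times above:

```python
def normalize_lower_is_better(times):
    # Score each latency against the best (lowest) one: best run = 100%.
    best = min(times)
    return [round(best / t * 100, 1) for t in times]

# NCNN mobilenet-v2 times in ms for runs 1-3.
print(normalize_lower_is_better([4.30, 4.43, 4.55]))  # → [100.0, 97.1, 94.5]
```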

NCNN 20210525, Target: CPU - Model: shufflenet-v2 (ms; fewer is better)
Run 1: 3.15 (MIN: 3.02 / MAX: 3.29)
Run 2: 3.32 (SE +/- 0.03, N = 3; Min 3.26 / Max 3.37; MIN: 3.09 / MAX: 4.08)
Run 3: 3.30 (SE +/- 0.05, N = 3; Min 3.22 / Max 3.38; MIN: 3.05 / MAX: 4)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NAS Parallel Benchmarks


NAS Parallel Benchmarks 3.4, Test / Class: FT.C (Total Mop/s; more is better)
Run 1: 12947.66
Run 2: 13116.82 (SE +/- 12.29, N = 3; Min 13092.64 / Max 13132.72)
Run 3: 12454.59 (SE +/- 108.08, N = 3; Min 12242.43 / Max 12596.49)
1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
2. Open MPI 4.0.3

Mobile Neural Network

MNN, the Mobile Neural Network, is a highly efficient and lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.2, Model: inception-v3 (ms; fewer is better)
Run 1: 29.91 (MIN: 29.05 / MAX: 39.9)
Run 2: 29.32 (SE +/- 0.17, N = 3; Min 29.09 / Max 29.64; MIN: 28.88 / MAX: 40.74)
Run 3: 28.53 (SE +/- 0.23, N = 3; Min 28.18 / Max 28.96; MIN: 28.11 / MAX: 41.18)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

srsRAN


srsRAN 21.04, Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM (UE Mb/s; more is better)
Run 1: 138.4
Run 2: 136.9 (SE +/- 0.44, N = 3; Min 136 / Max 137.4)
Run 3: 143.3 (SE +/- 0.19, N = 3; Min 142.9 / Max 143.5)
1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -rdynamic -lpthread -lbladeRF -lm -lfftw3f -lmbedcrypto

srsRAN 21.04, Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM (UE Mb/s; more is better)
Run 1: 292.3
Run 2: 288.8 (SE +/- 0.15, N = 3; Min 288.6 / Max 289.1)
Run 3: 301.8 (SE +/- 0.72, N = 3; Min 300.4 / Max 302.8)
1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -rdynamic -lpthread -lbladeRF -lm -lfftw3f -lmbedcrypto

NAS Parallel Benchmarks


NAS Parallel Benchmarks 3.4, Test / Class: SP.B (Total Mop/s; more is better)
Run 1: 5354.13
Run 2: 5290.88 (SE +/- 3.99, N = 3; Min 5284.11 / Max 5297.93)
Run 3: 5126.92 (SE +/- 85.04, N = 3; Min 4956.84 / Max 5212.48)
1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
2. Open MPI 4.0.3

srsRAN


srsRAN 21.04, Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM (eNb Mb/s; more is better)
Run 1: 423.4
Run 2: 413.2 (SE +/- 1.36, N = 3; Min 410.6 / Max 415.2)
Run 3: 431.5 (SE +/- 3.74, N = 3; Min 424 / Max 435.5)
1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -rdynamic -lpthread -lbladeRF -lm -lfftw3f -lmbedcrypto

srsRAN 21.04, Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM (eNb Mb/s; more is better)
Run 1: 418.5
Run 2: 414.1 (SE +/- 1.10, N = 3; Min 412.3 / Max 416.1)
Run 3: 431.9 (SE +/- 0.59, N = 3; Min 431 / Max 433)
1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -rdynamic -lpthread -lbladeRF -lm -lfftw3f -lmbedcrypto

srsRAN 21.04, Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM (eNb Mb/s; more is better)
Run 1: 461.2
Run 2: 457.5 (SE +/- 0.93, N = 3; Min 455.8 / Max 459)
Run 3: 476.5 (SE +/- 0.84, N = 3; Min 475 / Max 477.9)
1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -rdynamic -lpthread -lbladeRF -lm -lfftw3f -lmbedcrypto

srsRAN 21.04, Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM (UE Mb/s; more is better)
Run 1: 244.1
Run 2: 238.6 (SE +/- 0.52, N = 3; Min 237.7 / Max 239.5)
Run 3: 248.1 (SE +/- 2.18, N = 3; Min 244 / Max 251.4)
1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -rdynamic -lpthread -lbladeRF -lm -lfftw3f -lmbedcrypto

Zstd Compression


Zstd Compression 1.5.0, Compression Level: 8 - Decompression Speed (MB/s; more is better)
Run 1: 4157.2
Run 2: 4318.1 (SE +/- 34.82, N = 3; Min 4265.3 / Max 4383.8)
Run 3: 4243.6 (SE +/- 22.34, N = 3; Min 4200.4 / Max 4275.1)
1. (CC) gcc options: -O3 -pthread -lz -llzma

srsRAN


srsRAN 21.04, Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM (UE Mb/s; more is better)
Run 1: 68.0
Run 2: 67.8 (SE +/- 0.13, N = 3; Min 67.5 / Max 67.9)
Run 3: 70.4 (SE +/- 0.06, N = 3; Min 70.3 / Max 70.5)
1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -rdynamic -lpthread -lbladeRF -lm -lfftw3f -lmbedcrypto

NAS Parallel Benchmarks


NAS Parallel Benchmarks 3.4, Test / Class: LU.C (Total Mop/s; more is better)
Run 1: 26904.82
Run 2: 26820.57 (SE +/- 44.76, N = 3; Min 26731.14 / Max 26868.59)
Run 3: 25911.23 (SE +/- 23.04, N = 3; Min 25866.48 / Max 25943.1)
1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
2. Open MPI 4.0.3

Mobile Neural Network


Mobile Neural Network 1.2, Model: squeezenetv1.1 (ms; fewer is better)
Run 1: 3.083 (MIN: 3.01 / MAX: 3.14)
Run 2: 3.173 (SE +/- 0.050, N = 3; Min 3.1 / Max 3.27; MIN: 2.97 / MAX: 5.58)
Run 3: 3.060 (SE +/- 0.018, N = 3; Min 3.02 / Max 3.08; MIN: 2.98 / MAX: 3.82)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Zstd Compression


Zstd Compression 1.5.0, Compression Level: 8, Long Mode - Compression Speed (MB/s; more is better)
Run 1: 528.8
Run 2: 546.5 (SE +/- 7.74, N = 4; Min 524.1 / Max 559.6)
Run 3: 527.3 (SE +/- 5.98, N = 7; Min 507 / Max 543.3)
1. (CC) gcc options: -O3 -pthread -lz -llzma

srsRAN


srsRAN 21.04, Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM (eNb Mb/s; more is better)
Run 1: 461.1
Run 2: 456.3 (SE +/- 0.73, N = 3; Min 455 / Max 457.5)
Run 3: 472.3 (SE +/- 2.21, N = 3; Min 468.2 / Max 475.8)
1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -rdynamic -lpthread -lbladeRF -lm -lfftw3f -lmbedcrypto

Zstd Compression


Zstd Compression 1.5.0, Compression Level: 3 - Compression Speed (MB/s; more is better)
Run 1: 2270.9
Run 2: 2347.7 (SE +/- 12.19, N = 3; Min 2331.2 / Max 2371.5)
Run 3: 2306.9 (SE +/- 18.18, N = 3; Min 2279.9 / Max 2341.5)
1. (CC) gcc options: -O3 -pthread -lz -llzma

srsRAN


srsRAN 21.04, Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM (eNb Mb/s; more is better)
Run 1: 142.2
Run 2: 142.1 (SE +/- 0.12, N = 3; Min 141.9 / Max 142.3)
Run 3: 146.9 (SE +/- 0.20, N = 3; Min 146.5 / Max 147.1)
1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -rdynamic -lpthread -lbladeRF -lm -lfftw3f -lmbedcrypto

srsRAN 21.04, Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM (UE Mb/s; more is better)
Run 1: 151.7
Run 2: 150.5 (SE +/- 0.51, N = 3; Min 149.5 / Max 151.2)
Run 3: 155.5 (SE +/- 0.96, N = 3; Min 153.6 / Max 156.7)
1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -rdynamic -lpthread -lbladeRF -lm -lfftw3f -lmbedcrypto

srsRAN 21.04, Test: 5G PHY_DL_NR Test 270 PRB SISO 256-QAM (UE Mb/s; more is better)
Run 1: 94.3
Run 2: 93.2 (SE +/- 0.12, N = 3; Min 93 / Max 93.4)
Run 3: 96.2 (SE +/- 0.70, N = 3; Min 94.8 / Max 97)
1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -rdynamic -lpthread -lbladeRF -lm -lfftw3f -lmbedcrypto

Mobile Neural Network


Mobile Neural Network 1.2, Model: SqueezeNetV1.0 (ms; fewer is better)
Run 1: 4.543 (MIN: 4.41 / MAX: 5.05)
Run 2: 4.552 (SE +/- 0.020, N = 3; Min 4.52 / Max 4.59; MIN: 4.41 / MAX: 6.47)
Run 3: 4.411 (SE +/- 0.044, N = 3; Min 4.35 / Max 4.5; MIN: 4.29 / MAX: 5.37)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Zstd Compression


Zstd Compression 1.5.0, Compression Level: 3 - Decompression Speed (MB/s; more is better)
Run 1: 4136.7
Run 2: 4263.3 (SE +/- 3.55, N = 3; Min 4257 / Max 4269.3)
Run 3: 4248.7 (SE +/- 15.19, N = 3; Min 4219.5 / Max 4270.5)
1. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression 1.5.0, Compression Level: 19 - Decompression Speed (MB/s; more is better)
Run 1: 3870.4
Run 2: 3812.8 (SE +/- 24.61, N = 3; Min 3783.7 / Max 3861.7)
Run 3: 3758.1 (SE +/- 7.49, N = 3; Min 3743.3 / Max 3767.3)
1. (CC) gcc options: -O3 -pthread -lz -llzma

srsRAN


srsRAN 21.04, Test: 5G PHY_DL_NR Test 270 PRB SISO 256-QAM (eNb Mb/s; more is better)
Run 1: 158.3
Run 2: 156.4 (SE +/- 0.12, N = 3; Min 156.2 / Max 156.6)
Run 3: 161.0 (SE +/- 0.81, N = 3; Min 159.4 / Max 162.1)
1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -rdynamic -lpthread -lbladeRF -lm -lfftw3f -lmbedcrypto

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20210525 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better)
  Run 1: 3.39   MIN: 3.27 / MAX: 3.56
  Run 3: 3.47   SE +/- 0.02, N = 3   MIN: 3.32 / MAX: 4.33   Min: 3.44 / Avg: 3.47 / Max: 3.51
  Run 2: 3.48   SE +/- 0.02, N = 3   MIN: 3.32 / MAX: 4.74   Min: 3.45 / Avg: 3.48 / Max: 3.53
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20210525 - Target: CPU - Model: efficientnet-b0 (ms, fewer is better)
  Run 1: 5.33   MIN: 5.2 / MAX: 5.46
  Run 2: 5.47   SE +/- 0.05, N = 3   MIN: 5.23 / MAX: 6.69   Min: 5.36 / Avg: 5.47 / Max: 5.53
  Run 3: 5.47   SE +/- 0.03, N = 3   MIN: 5.26 / MAX: 6.64   Min: 5.41 / Avg: 5.47 / Max: 5.52
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20210525 - Target: CPU - Model: regnety_400m (ms, fewer is better)
  Run 1: 8.43   MIN: 8.22 / MAX: 8.97
  Run 3: 8.53   SE +/- 0.05, N = 3   MIN: 8.23 / MAX: 9.86   Min: 8.44 / Avg: 8.53 / Max: 8.58
  Run 2: 8.64   SE +/- 0.01, N = 3   MIN: 8.31 / MAX: 9.23   Min: 8.62 / Avg: 8.64 / Max: 8.66
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

Zstd Compression


Zstd Compression 1.5.0 - Compression Level: 19, Long Mode - Compression Speed (MB/s, more is better)
  Run 1: 29.7
  Run 2: 29.3   SE +/- 0.09, N = 3   Min: 29.2 / Avg: 29.33 / Max: 29.5
  Run 3: 29.0   SE +/- 0.27, N = 3   Min: 28.5 / Avg: 29.03 / Max: 29.3
  1. (CC) gcc options: -O3 -pthread -lz -llzma

C-Blosc

C-Blosc is a simple, fast, compressed, and persistent data store library for C. Learn more via the OpenBenchmarking.org test page.

C-Blosc 2.0 - Compressor: blosclz (MB/s, more is better)
  Run 3: 20063.7   SE +/- 10.14, N = 3   Min: 20047.4 / Avg: 20063.67 / Max: 20082.3
  Run 2: 19796.8   SE +/- 32.57, N = 3   Min: 19736.9 / Avg: 19796.8 / Max: 19848.9
  Run 1: 19606.3
  1. (CC) gcc options: -std=gnu99 -O3 -pthread -lrt -lm

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.2 - Model: mobilenetV3 (ms, fewer is better)
  Run 1: 1.601   MIN: 1.56 / MAX: 2.29
  Run 3: 1.629   SE +/- 0.006, N = 3   MIN: 1.57 / MAX: 1.75   Min: 1.62 / Avg: 1.63 / Max: 1.64
  Run 2: 1.637   SE +/- 0.016, N = 3   MIN: 1.58 / MAX: 2.42   Min: 1.62 / Avg: 1.64 / Max: 1.67
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

NCNN


NCNN 20210525 - Target: CPU - Model: mnasnet (ms, fewer is better)
  Run 1: 3.34   MIN: 3.3 / MAX: 3.7
  Run 2: 3.37   SE +/- 0.05, N = 3   MIN: 3.17 / MAX: 4.84   Min: 3.28 / Avg: 3.37 / Max: 3.44
  Run 3: 3.41   SE +/- 0.02, N = 3   MIN: 3.2 / MAX: 4.67    Min: 3.36 / Avg: 3.41 / Max: 3.44
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NAS Parallel Benchmarks

NPB (NAS Parallel Benchmarks) is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and offers a selection of NPB tests/problems at varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: SP.C (Total Mop/s, more is better)
  Run 2: 5047.56   SE +/- 5.95, N = 3   Min: 5035.75 / Avg: 5047.56 / Max: 5054.66
  Run 3: 5031.12   SE +/- 1.40, N = 3   Min: 5029.19 / Avg: 5031.12 / Max: 5033.85
  Run 1: 4949.70
  1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
  2. Open MPI 4.0.3

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.

VP9 libvpx Encoding 1.10.0 - Speed: Speed 5 - Input: Bosphorus 1080p (Frames Per Second, more is better)
  Run 2: 30.99   SE +/- 0.03, N = 3   Min: 30.92 / Avg: 30.99 / Max: 31.03
  Run 1: 30.42
  Run 3: 30.39   SE +/- 0.03, N = 3   Min: 30.35 / Avg: 30.39 / Max: 30.44
  1. (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=gnu++11

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 3.13 - Binary: Pathtracer ISPC - Model: Asian Dragon (Frames Per Second, more is better)
  Run 2: 18.53   SE +/- 0.13, N = 3   MIN: 18.17 / MAX: 19.14   Min: 18.29 / Avg: 18.53 / Max: 18.73
  Run 3: 18.33   SE +/- 0.01, N = 3   MIN: 18.19 / MAX: 18.67   Min: 18.3 / Avg: 18.33 / Max: 18.35
  Run 1: 18.20   MIN: 18.05 / MAX: 18.61

Mobile Neural Network


Mobile Neural Network 1.2 - Model: mobilenet-v1-1.0 (ms, fewer is better)
  Run 3: 3.305   SE +/- 0.011, N = 3   MIN: 3.24 / MAX: 15.08   Min: 3.29 / Avg: 3.31 / Max: 3.33
  Run 1: 3.318   MIN: 3.27 / MAX: 4.82
  Run 2: 3.359   SE +/- 0.033, N = 3   MIN: 3.26 / MAX: 3.84    Min: 3.32 / Avg: 3.36 / Max: 3.42
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Zstd Compression


Zstd Compression 1.5.0 - Compression Level: 19, Long Mode - Decompression Speed (MB/s, more is better)
  Run 3: 3884.6   SE +/- 36.64, N = 3   Min: 3811.3 / Avg: 3884.57 / Max: 3922.5
  Run 1: 3835.5
  Run 2: 3823.6   SE +/- 10.65, N = 3   Min: 3802.8 / Avg: 3823.6 / Max: 3838
  1. (CC) gcc options: -O3 -pthread -lz -llzma

NAS Parallel Benchmarks


NAS Parallel Benchmarks 3.4 - Test / Class: EP.C (Total Mop/s, more is better)
  Run 1: 1812.54
  Run 2: 1790.82   SE +/- 28.41, N = 3   Min: 1734.09 / Avg: 1790.82 / Max: 1822.01
  Run 3: 1786.57   SE +/- 10.88, N = 3   Min: 1767.4 / Avg: 1786.57 / Max: 1805.09
  1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
  2. Open MPI 4.0.3

ASTC Encoder

ASTC Encoder (astcenc) targets the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile exercises both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 3.0 - Preset: Medium (Seconds, fewer is better)
  Run 3: 4.0145   SE +/- 0.0314, N = 3   Min: 3.96 / Avg: 4.01 / Max: 4.07
  Run 1: 4.0492
  Run 2: 4.0720   SE +/- 0.0030, N = 3   Min: 4.07 / Avg: 4.07 / Max: 4.08
  1. (CXX) g++ options: -O3 -flto -pthread

Embree


Embree 3.13 - Binary: Pathtracer - Model: Crown (Frames Per Second, more is better)
  Run 1: 14.34   MIN: 14.23 / MAX: 14.6
  Run 2: 14.26   SE +/- 0.11, N = 3   MIN: 13.99 / MAX: 14.72   Min: 14.11 / Avg: 14.26 / Max: 14.47
  Run 3: 14.14   SE +/- 0.07, N = 3   MIN: 13.94 / MAX: 14.5    Min: 14.06 / Avg: 14.14 / Max: 14.29

NCNN


NCNN 20210525 - Target: CPU - Model: blazeface (ms, fewer is better)
  Run 1: 1.42   MIN: 1.37 / MAX: 1.64
  Run 2: 1.44   SE +/- 0.02, N = 3   MIN: 1.36 / MAX: 1.68   Min: 1.4 / Avg: 1.44 / Max: 1.46
  Run 3: 1.44   SE +/- 0.02, N = 3   MIN: 1.37 / MAX: 2.73   Min: 1.41 / Avg: 1.44 / Max: 1.46
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NAS Parallel Benchmarks


NAS Parallel Benchmarks 3.4 - Test / Class: CG.C (Total Mop/s, more is better)
  Run 3: 5627.28   SE +/- 4.50, N = 3    Min: 5622.57 / Avg: 5627.28 / Max: 5636.28
  Run 2: 5558.74   SE +/- 12.02, N = 3   Min: 5540.55 / Avg: 5558.74 / Max: 5581.44
  Run 1: 5554.07
  1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
  2. Open MPI 4.0.3

Zstd Compression


Zstd Compression 1.5.0 - Compression Level: 3, Long Mode - Decompression Speed (MB/s, more is better)
  Run 1: 4555.9
  Run 3: 4546.2   SE +/- 6.37, N = 3    Min: 4533.5 / Avg: 4546.23 / Max: 4553
  Run 2: 4498.5   SE +/- 12.77, N = 3   Min: 4473.1 / Avg: 4498.53 / Max: 4513.3
  1. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression 1.5.0 - Compression Level: 19 - Compression Speed (MB/s, more is better)
  Run 3: 33.3   SE +/- 0.00, N = 3   Min: 33.3 / Avg: 33.3 / Max: 33.3
  Run 2: 33.0   SE +/- 0.30, N = 3   Min: 32.4 / Avg: 33 / Max: 33.3
  Run 1: 32.9
  1. (CC) gcc options: -O3 -pthread -lz -llzma

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based, multi-threaded video encoder for the AV1 video format; this test encodes a sample YUV video file. Learn more via the OpenBenchmarking.org test page.
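As a sketch of how such an encode is typically invoked from the command line (the `SvtAv1EncApp` binary name and flags are the reference SVT-AV1 CLI; the input file name here is hypothetical, so the command only executes when the encoder and input are actually present):

```python
import os
import shutil
import subprocess

# Hypothetical input name; --preset selects the speed/quality trade-off.
cmd = ["SvtAv1EncApp", "--preset", "8",
       "-i", "Bosphorus_1920x1080.y4m", "-b", "output.ivf"]
print(" ".join(cmd))

# Run only when the encoder is installed and the input file exists.
if shutil.which(cmd[0]) and os.path.exists(cmd[4]):
    subprocess.run(cmd, check=True)
```

Lower preset numbers are slower but higher quality, which is why the Preset 4 results later in this report run far below the Preset 8 frame rates.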

SVT-AV1 0.8.7 - Encoder Mode: Preset 8 - Input: Bosphorus 1080p (Frames Per Second, more is better)
  Run 3: 70.03   SE +/- 0.14, N = 3   Min: 69.81 / Avg: 70.03 / Max: 70.28
  Run 1: 69.44
  Run 2: 69.23   SE +/- 0.16, N = 3   Min: 68.99 / Avg: 69.23 / Max: 69.54
  1. (CXX) g++ options: -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq -pie

Mobile Neural Network


Mobile Neural Network 1.2 - Model: resnet-v2-50 (ms, fewer is better)
  Run 1: 29.42   MIN: 28.55 / MAX: 42.19
  Run 3: 29.49   SE +/- 0.27, N = 3   MIN: 28.92 / MAX: 40.86   Min: 29.12 / Avg: 29.49 / Max: 30.01
  Run 2: 29.75   SE +/- 0.23, N = 3   MIN: 29.2 / MAX: 40.31    Min: 29.33 / Avg: 29.75 / Max: 30.11
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: DenseNet (ms, fewer is better)
  Run 3: 2851.86   SE +/- 1.97, N = 3     MIN: 2826.92 / MAX: 2907.55   Min: 2849.74 / Avg: 2851.86 / Max: 2855.79
  Run 1: 2870.07   MIN: 2851.69 / MAX: 2906.41
  Run 2: 2882.36   SE +/- 30.42, N = 12   MIN: 2814.73 / MAX: 5048.47   Min: 2833.52 / Avg: 2882.36 / Max: 3214.95
  1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

VP9 libvpx Encoding


VP9 libvpx Encoding 1.10.0 - Speed: Speed 5 - Input: Bosphorus 4K (Frames Per Second, more is better)
  Run 2: 15.40   SE +/- 0.17, N = 3   Min: 15.07 / Avg: 15.4 / Max: 15.66
  Run 1: 15.28
  Run 3: 15.24   SE +/- 0.11, N = 3   Min: 15.07 / Avg: 15.24 / Max: 15.45
  1. (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=gnu++11

Embree


Embree 3.13 - Binary: Pathtracer - Model: Asian Dragon (Frames Per Second, more is better)
  Run 3: 15.97   SE +/- 0.17, N = 3   MIN: 15.66 / MAX: 16.58   Min: 15.76 / Avg: 15.97 / Max: 16.3
  Run 1: 15.93   MIN: 15.85 / MAX: 16.15
  Run 2: 15.80   SE +/- 0.08, N = 3   MIN: 15.63 / MAX: 16.19   Min: 15.72 / Avg: 15.8 / Max: 15.96

VP9 libvpx Encoding


VP9 libvpx Encoding 1.10.0 - Speed: Speed 0 - Input: Bosphorus 4K (Frames Per Second, more is better)
  Run 3: 6.38   SE +/- 0.03, N = 3   Min: 6.33 / Avg: 6.38 / Max: 6.44
  Run 2: 6.38   SE +/- 0.00, N = 3   Min: 6.38 / Avg: 6.38 / Max: 6.39
  Run 1: 6.32
  1. (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=gnu++11

Timed GDB GNU Debugger Compilation

This test times how long it takes to build the GNU Debugger (GDB) in a default configuration. Learn more via the OpenBenchmarking.org test page.

Timed GDB GNU Debugger Compilation 10.2 - Time To Compile (Seconds, fewer is better)
  Run 2: 56.06   SE +/- 0.09, N = 3   Min: 55.94 / Avg: 56.06 / Max: 56.24
  Run 3: 56.26   SE +/- 0.16, N = 3   Min: 56.05 / Avg: 56.26 / Max: 56.58
  Run 1: 56.60

Zstd Compression


Zstd Compression 1.5.0 - Compression Level: 3, Long Mode - Compression Speed (MB/s, more is better)
  Run 1: 1319.6
  Run 3: 1311.5   SE +/- 4.36, N = 3   Min: 1303.7 / Avg: 1311.47 / Max: 1318.8
  Run 2: 1307.7   SE +/- 4.76, N = 3   Min: 1300 / Avg: 1307.7 / Max: 1316.4
  1. (CC) gcc options: -O3 -pthread -lz -llzma

SVT-AV1


SVT-AV1 0.8.7 - Encoder Mode: Preset 8 - Input: Bosphorus 4K (Frames Per Second, more is better)
  Run 1: 18.38
  Run 2: 18.25   SE +/- 0.05, N = 3   Min: 18.15 / Avg: 18.25 / Max: 18.33
  Run 3: 18.22   SE +/- 0.02, N = 3   Min: 18.2 / Avg: 18.22 / Max: 18.26
  1. (CXX) g++ options: -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq -pie

NCNN


NCNN 20210525 - Target: CPU - Model: googlenet (ms, fewer is better)
  Run 1: 12.61   MIN: 12.52 / MAX: 12.69
  Run 3: 12.63   SE +/- 0.02, N = 3   MIN: 12.28 / MAX: 12.8    Min: 12.59 / Avg: 12.63 / Max: 12.65
  Run 2: 12.72   SE +/- 0.03, N = 3   MIN: 12.43 / MAX: 12.91   Min: 12.67 / Avg: 12.72 / Max: 12.77
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.9.0 - Video Input: Chimera 1080p (FPS, more is better)
  Run 1: 820.43   MIN: 635.4 / MAX: 1129.6
  Run 2: 819.80   SE +/- 1.49, N = 3   MIN: 635.09 / MAX: 1147.17   Min: 817.41 / Avg: 819.8 / Max: 822.53
  Run 3: 813.38   SE +/- 1.09, N = 3   MIN: 630.35 / MAX: 1137.18   Min: 811.38 / Avg: 813.38 / Max: 815.12
  1. (CC) gcc options: -pthread -lm

TNN


TNN 0.3 - Target: CPU - Model: MobileNet v2 (ms, fewer is better)
  Run 2: 284.09   SE +/- 0.57, N = 3   MIN: 282.55 / MAX: 286.38   Min: 283.03 / Avg: 284.09 / Max: 284.99
  Run 3: 285.36   SE +/- 2.10, N = 3   MIN: 282.61 / MAX: 296.95   Min: 283.07 / Avg: 285.36 / Max: 289.56
  Run 1: 286.23   MIN: 285.92 / MAX: 286.65
  1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

NAS Parallel Benchmarks


NAS Parallel Benchmarks 3.4 - Test / Class: BT.C (Total Mop/s, more is better)
  Run 3: 26422.40   SE +/- 17.15, N = 3   Min: 26389.54 / Avg: 26422.4 / Max: 26447.34
  Run 1: 26301.99
  Run 2: 26239.81   SE +/- 4.43, N = 3    Min: 26231.78 / Avg: 26239.81 / Max: 26247.07
  1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
  2. Open MPI 4.0.3
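With this many individual tests, result viewers aggregate by normalizing each result against a baseline run and combining the ratios with a geometric mean, which keeps any single test from dominating the overall score. A minimal sketch, using a few higher-is-better values taken from the results in this report purely as sample inputs:

```python
import statistics

# Sample higher-is-better scores for a baseline run and a comparison run.
baseline = {"zstd_l3_decomp": 4136.7, "npb_bt_c": 26239.81, "embree_dragon": 15.80}
compare  = {"zstd_l3_decomp": 4263.3, "npb_bt_c": 26422.40, "embree_dragon": 15.97}

# Normalize each test against the baseline, then take the geometric mean.
ratios = [compare[t] / baseline[t] for t in baseline]
overall = statistics.geometric_mean(ratios)
print(f"overall: {overall:.3f}x vs baseline")  # overall: 1.016x vs baseline
```

For lower-is-better metrics (e.g. the ms latencies above), the ratio would be inverted before averaging so that larger always means faster.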

SVT-AV1


SVT-AV1 0.8.7 - Encoder Mode: Preset 4 - Input: Bosphorus 1080p (Frames Per Second, more is better)
  Run 1: 5.431
  Run 2: 5.401   SE +/- 0.034, N = 3   Min: 5.35 / Avg: 5.4 / Max: 5.46
  Run 3: 5.396   SE +/- 0.024, N = 3   Min: 5.35 / Avg: 5.4 / Max: 5.42
  1. (CXX) g++ options: -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq -pie

Embree


Embree 3.13 - Binary: Pathtracer ISPC - Model: Crown (Frames Per Second, more is better)
  Run 1: 16.01   MIN: 15.87 / MAX: 16.33
  Run 2: 15.95   SE +/- 0.03, N = 3   MIN: 15.78 / MAX: 16.39   Min: 15.9 / Avg: 15.95 / Max: 15.99
  Run 3: 15.92   SE +/- 0.04, N = 3   MIN: 15.73 / MAX: 16.29   Min: 15.85 / Avg: 15.92 / Max: 15.99

NCNN


NCNN 20210525 - Target: CPU - Model: vgg16 (ms, fewer is better)
  Run 1: 62.06   MIN: 61.88 / MAX: 62.6
  Run 2: 62.08   SE +/- 0.02, N = 3   MIN: 61.89 / MAX: 64.13   Min: 62.04 / Avg: 62.08 / Max: 62.12
  Run 3: 62.43   SE +/- 0.02, N = 3   MIN: 62.27 / MAX: 62.95   Min: 62.41 / Avg: 62.43 / Max: 62.47
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

Embree


Embree 3.13 - Binary: Pathtracer ISPC - Model: Asian Dragon Obj (Frames Per Second, more is better)
  Run 1: 16.41   MIN: 16.26 / MAX: 16.8
  Run 3: 16.34   SE +/- 0.01, N = 3   MIN: 16.2 / MAX: 16.73    Min: 16.33 / Avg: 16.34 / Max: 16.35
  Run 2: 16.31   SE +/- 0.01, N = 3   MIN: 16.18 / MAX: 16.71   Min: 16.3 / Avg: 16.31 / Max: 16.33

NCNN


NCNN 20210525 - Target: CPU - Model: mobilenet (ms, fewer is better)
  Run 1: 15.69   MIN: 15.36 / MAX: 17.55
  Run 2: 15.77   SE +/- 0.02, N = 3   MIN: 15.35 / MAX: 16.37   Min: 15.75 / Avg: 15.77 / Max: 15.8
  Run 3: 15.78   SE +/- 0.04, N = 3   MIN: 15.25 / MAX: 25.89   Min: 15.71 / Avg: 15.78 / Max: 15.86
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20210525 - Target: CPU - Model: yolov4-tiny (ms, fewer is better)
  Run 1: 23.09   MIN: 22.93 / MAX: 23.33
  Run 3: 23.19   SE +/- 0.02, N = 3   MIN: 22.9 / MAX: 26.04    Min: 23.16 / Avg: 23.19 / Max: 23.23
  Run 2: 23.22   SE +/- 0.03, N = 3   MIN: 22.94 / MAX: 24.13   Min: 23.17 / Avg: 23.22 / Max: 23.26
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

Embree


Embree 3.13 - Binary: Pathtracer - Model: Asian Dragon Obj (Frames Per Second, more is better)
  Run 3: 14.82   SE +/- 0.02, N = 3   MIN: 14.71 / MAX: 15.06   Min: 14.79 / Avg: 14.82 / Max: 14.85
  Run 2: 14.80   SE +/- 0.02, N = 3   MIN: 14.63 / MAX: 15.07   Min: 14.76 / Avg: 14.8 / Max: 14.83
  Run 1: 14.74   MIN: 14.68 / MAX: 14.92

ASTC Encoder


ASTC Encoder 3.0 - Preset: Thorough (Seconds, fewer is better)
  Run 3: 9.3285   SE +/- 0.0027, N = 3   Min: 9.32 / Avg: 9.33 / Max: 9.33
  Run 2: 9.3489   SE +/- 0.0103, N = 3   Min: 9.33 / Avg: 9.35 / Max: 9.36
  Run 1: 9.3677
  1. (CXX) g++ options: -O3 -flto -pthread

dav1d


dav1d 0.9.0 - Video Input: Summer Nature 1080p (FPS, more is better)
  Run 2: 770.74   SE +/- 0.13, N = 3   MIN: 651.67 / MAX: 836.2    Min: 770.49 / Avg: 770.74 / Max: 770.92
  Run 1: 770.57   MIN: 661.07 / MAX: 835.71
  Run 3: 767.77   SE +/- 1.29, N = 3   MIN: 632.05 / MAX: 833.44   Min: 765.53 / Avg: 767.77 / Max: 770.01
  1. (CC) gcc options: -pthread -lm

Timed FFmpeg Compilation

This test times how long it takes to build the FFmpeg multimedia library. Learn more via the OpenBenchmarking.org test page.

Timed FFmpeg Compilation 4.4 - Time To Compile (Seconds, fewer is better)
  Run 2: 48.33   SE +/- 0.05, N = 3   Min: 48.27 / Avg: 48.33 / Max: 48.42
  Run 3: 48.41   SE +/- 0.08, N = 3   Min: 48.24 / Avg: 48.4 / Max: 48.53
  Run 1: 48.51

dav1d


dav1d 0.9.0 - Video Input: Chimera 1080p 10-bit (FPS, more is better)
  Run 3: 492.52   SE +/- 0.13, N = 3   MIN: 392.25 / MAX: 787.53   Min: 492.27 / Avg: 492.52 / Max: 492.65
  Run 1: 491.13   MIN: 392.46 / MAX: 737.75
  Run 2: 490.72   SE +/- 1.02, N = 3   MIN: 392.28 / MAX: 749.63   Min: 488.69 / Avg: 490.72 / Max: 491.86
  1. (CC) gcc options: -pthread -lm

NCNN


NCNN 20210525 - Target: CPU - Model: alexnet (ms, fewer is better)
  Run 3: 12.30 (SE +/- 0.01, N = 3; trial min/avg/max 12.29 / 12.30 / 12.32; observed range 12.19 - 13.7)
  Run 2: 12.33 (SE +/- 0.02, N = 3; trial min/avg/max 12.31 / 12.33 / 12.38; observed range 12.21 - 20.83)
  Run 1: 12.34 (observed range 12.25 - 12.43)
  Compiler: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of its Open Visual Cloud / Scalable Video Technology (SVT) effort; development has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based, multi-threaded encoder for the AV1 video format, tested here against a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 0.8.7 - Encoder Mode: Preset 4 - Input: Bosphorus 4K (Frames Per Second, more is better)
  Run 3: 1.634 (SE +/- 0.003, N = 3; trial min/avg/max 1.63 / 1.63 / 1.64)
  Run 1: 1.631
  Run 2: 1.629 (SE +/- 0.004, N = 3; trial min/avg/max 1.62 / 1.63 / 1.63)
  Compiler: (CXX) g++ options: -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq -pie

BRL-CAD

BRL-CAD is a cross-platform, open-source solid modeling system with built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.

BRL-CAD 7.32.2 - VGR Performance Metric (more is better)
  Run 1: 187701
  Run 2: 187547
  Run 3: 187146
  Compiler: (CXX) g++ options: -std=c++11 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -pedantic -pthread -ldl -lm
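The three BRL-CAD runs land within a fraction of a percent of each other, the kind of spread the viewer's "Do Not Show Results With Little Change/Spread" option filters out. A quick sketch of that spread, using the values above:

```python
# BRL-CAD VGR results for runs 1, 2, 3 (from the table above)
results = [187701, 187547, 187146]

# Relative spread between the best and worst run, in percent
spread_pct = (max(results) - min(results)) / min(results) * 100
print(round(spread_pct, 2))  # roughly 0.3%, well within run-to-run noise
```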

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: SqueezeNet v1.1 (ms, fewer is better)
  Run 2: 261.69 (SE +/- 0.04, N = 3; trial min/avg/max 261.62 / 261.69 / 261.77; observed range 261.37 - 268.05)
  Run 3: 261.97 (SE +/- 0.07, N = 3; trial min/avg/max 261.87 / 261.97 / 262.10; observed range 261.39 - 268.58)
  Run 1: 262.42 (observed range 261.96 - 268.6)
  Compiler: (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.9.0 - Video Input: Summer Nature 4K (FPS, more is better)
  Run 3: 191.20 (SE +/- 0.08, N = 3; trial min/avg/max 191.10 / 191.20 / 191.35; observed range 167.42 - 198.75)
  Run 2: 191.09 (SE +/- 0.10, N = 3; trial min/avg/max 190.90 / 191.09 / 191.22; observed range 165.76 - 198.45)
  Run 1: 190.74 (observed range 167.81 - 197.99)
  Compiler: (CC) gcc options: -pthread -lm

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20210525 - Target: CPU - Model: resnet18 (ms, fewer is better)
  Run 2: 13.68 (SE +/- 0.02, N = 3; trial min/avg/max 13.64 / 13.68 / 13.71; observed range 13.49 - 14.06)
  Run 3: 13.68 (SE +/- 0.02, N = 3; trial min/avg/max 13.66 / 13.68 / 13.71; observed range 13.49 - 13.88)
  Run 1: 13.71 (observed range 13.51 - 14.03)
  Compiler: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0 - Compression Level: 8, Long Mode - Decompression Speed (MB/s, more is better)
  Run 2: 4653.7 (SE +/- 3.98, N = 3; trial min/avg/max 4648.1 / 4653.7 / 4661.4)
  Run 1: 4653.3
  Run 3: 4643.8 (SE +/- 7.50, N = 7; trial min/avg/max 4612.7 / 4643.8 / 4671.4)
  Compiler: (CC) gcc options: -O3 -pthread -lz -llzma

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: SqueezeNet v2 (ms, fewer is better)
  Run 1: 58.66 (observed range 58.6 - 58.91)
  Run 2: 58.72 (SE +/- 0.02, N = 3; trial min/avg/max 58.69 / 58.72 / 58.75; observed range 58.6 - 60.74)
  Run 3: 58.73 (SE +/- 0.04, N = 3; trial min/avg/max 58.70 / 58.73 / 58.81; observed range 58.61 - 59.06)
  Compiler: (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

GROMACS

This is a benchmark of the GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package using the water_GMX50 data set. This test profile allows selecting between CPU- and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.

GROMACS 2021.2 - Implementation: MPI CPU - Input: water_GMX50_bare (Ns Per Day, more is better)
  Run 2: 0.964 (SE +/- 0.001, N = 3; trial min/avg/max 0.96 / 0.96 / 0.97)
  Run 3: 0.963 (SE +/- 0.000, N = 3; trial min/avg/max 0.96 / 0.96 / 0.96)
  Run 1: 0.963
  Compiler: (CXX) g++ options: -O3 -pthread
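GROMACS reports throughput in simulated nanoseconds per day of wall-clock time, so a result near 0.96 ns/day means one simulated nanosecond of the water_GMX50 system costs about a day of compute on this CPU; a quick conversion using the best figure above:

```python
ns_per_day = 0.964  # best GROMACS result above, in simulated ns per day

# Wall-clock hours needed per simulated nanosecond
hours_per_ns = 24 / ns_per_day
print(round(hours_per_ns, 1))  # about 24.9 hours per simulated ns
```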

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20210525 - Target: CPU - Model: resnet50 (ms, fewer is better)
  Run 1: 22.50 (observed range 22.25 - 22.96)
  Run 2: 22.50 (SE +/- 0.03, N = 3; trial min/avg/max 22.47 / 22.50 / 22.55; observed range 21.98 - 22.91)
  Run 3: 22.52 (SE +/- 0.06, N = 3; trial min/avg/max 22.41 / 22.52 / 22.60; observed range 22.24 - 23.04)
  Compiler: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.

VP9 libvpx Encoding 1.10.0 - Speed: Speed 0 - Input: Bosphorus 1080p (Frames Per Second, more is better)
  Run 3: 14.19 (SE +/- 0.03, N = 3; trial min/avg/max 14.12 / 14.19 / 14.23)
  Run 2: 14.19 (SE +/- 0.02, N = 3; trial min/avg/max 14.17 / 14.19 / 14.22)
  Run 1: 14.18
  Compiler: (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=gnu++11

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile exercises both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 3.0 - Preset: Exhaustive (Seconds, fewer is better)
  Run 1: 50.20
  Run 3: 50.21 (SE +/- 0.02, N = 3; trial min/avg/max 50.18 / 50.21 / 50.24)
  Run 2: 50.24 (SE +/- 0.02, N = 3; trial min/avg/max 50.21 / 50.24 / 50.25)
  Compiler: (CXX) g++ options: -O3 -flto -pthread

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20210525 - Target: CPU - Model: squeezenet_ssd (ms, fewer is better)
  Run 1: 17.57 (observed range 17.33 - 27.34)
  Run 2: 17.57 (SE +/- 0.02, N = 3; trial min/avg/max 17.54 / 17.57 / 17.60; observed range 17.36 - 18.94)
  Run 3: 17.57 (SE +/- 0.03, N = 3; trial min/avg/max 17.53 / 17.57 / 17.62; observed range 17.31 - 18.24)
  Compiler: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

Intel Open Image Denoise

Open Image Denoise is a denoising library for ray tracing and part of the Intel oneAPI Rendering Toolkit. Learn more via the OpenBenchmarking.org test page.

Intel Open Image Denoise 1.4.0 - Run: RTLightmap.hdr.4096x4096 (Images / Sec, more is better)
  Runs 3, 2, 1: 0.23 / 0.23 / 0.23 (SE +/- 0.00, N = 3 for runs 3 and 2; all trials at 0.23)

Intel Open Image Denoise 1.4.0 - Run: RT.ldr_alb_nrm.3840x2160 (Images / Sec, more is better)
  Runs 3, 2, 1: 0.46 / 0.46 / 0.46 (SE +/- 0.00, N = 3 for runs 3 and 2; all trials at 0.46)

Intel Open Image Denoise 1.4.0 - Run: RT.hdr_alb_nrm.3840x2160 (Images / Sec, more is better)
  Runs 3, 2, 1: 0.46 / 0.46 / 0.46 (SE +/- 0.00, N = 3 for runs 3 and 2; trial min/avg/max 0.45 / 0.46 / 0.46 and 0.46 / 0.46 / 0.46)

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.2 - Model: MobileNetV2_224 (ms, fewer is better)
  Run 3: 2.783 (SE +/- 0.007, N = 3; trial min/avg/max 2.77 / 2.78 / 2.79; observed range 2.67 - 3.57)
  Run 2: 2.909 (SE +/- 0.139, N = 3; trial min/avg/max 2.77 / 2.91 / 3.19; observed range 2.7 - 4.09)
  Run 1: 3.151 (observed range 2.69 - 5.79)
  Compiler: (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl