Ryzen 9 5950X Clang 12 vs. GCC 11 Benchmarks

GCC 11.1 versus LLVM Clang 12 on AMD Ryzen 9 5950X. Benchmarks by Michael Larabel for a future article.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2105198-IB-11900KCOM08

Test categories represented in this result file:

Audio Encoding (3 tests)
Bioinformatics (3 tests)
C/C++ Compiler Tests (12 tests)
CPU Massive (12 tests)
Creator Workloads (12 tests)
Encoding (6 tests)
HPC - High Performance Computing (6 tests)
Imaging (3 tests)
Machine Learning (2 tests)
MPI Benchmarks (2 tests)
Multi-Core (8 tests)
OpenMPI Tests (3 tests)
Renderers (2 tests)
Scientific Computing (4 tests)
Server CPU Tests (6 tests)
Single-Threaded (3 tests)
Video Encoding (3 tests)
Common Workstation Benchmarks (2 tests)


Result Identifiers

Run                                  Date           Test Duration
GCC 11.1: -O2                        May 18 2021    4 Hours, 27 Minutes
GCC 11.1: -O3 -march=native          May 18 2021    4 Hours, 36 Minutes
GCC 11.1: -O3 -march=native -flto    May 18 2021    1 Hour, 47 Minutes
Clang 12: -O2                        May 17 2021    1 Hour, 43 Minutes
Clang 12: -O3 -march=native          May 17 2021    1 Hour, 42 Minutes
Clang 12: -O3 -march=native -flto    May 18 2021    1 Hour, 32 Minutes


Ryzen 9 5950X Clang 12 vs. GCC 11 Benchmarks — OpenBenchmarking.org / Phoronix Test Suite

Processor: AMD Ryzen 9 5950X 16-Core @ 3.40GHz (16 Cores / 32 Threads)
Motherboard: ASUS ROG CROSSHAIR VIII HERO (WI-FI) (3302 BIOS)
Chipset: AMD Starship/Matisse
Memory: 32GB
Disk: 500GB Western Digital WDS500G3X0C-00SJG0
Graphics: AMD Radeon RX 5600 OEM/5600 XT / 5700/5700 8GB (2100/875MHz)
Audio: AMD Navi 10 HDMI Audio
Monitor: ASUS MG28U
Network: Realtek RTL8125 2.5GbE + Intel I211 + Intel Wi-Fi 6 AX200
OS: Fedora 34
Kernel: 5.11.20-300.fc34.x86_64 (x86_64)
Desktop: GNOME Shell 40.1
Display Server: X Server + Wayland
OpenGL: 4.6 Mesa 21.0.3 (LLVM 12.0.0)
Compilers: GCC 11.1.1 20210428 + Clang 12.0.0
File-System: btrfs
Screen Resolution: 3840x2160

System Logs:
- Transparent Huge Pages: madvise
- GCC 11.1: -O2: CXXFLAGS=-O2 CFLAGS=-O2
- GCC 11.1: -O3 -march=native: CXXFLAGS="-O3 -march=native" CFLAGS="-O3 -march=native"
- GCC 11.1: -O3 -march=native -flto: CXXFLAGS="-O3 -march=native -flto" CFLAGS="-O3 -march=native -flto"
- Clang 12: -O2: CXXFLAGS=-O2 CFLAGS=-O2
- Clang 12: -O3 -march=native: CXXFLAGS="-O3 -march=native" CFLAGS="-O3 -march=native"
- Clang 12: -O3 -march=native -flto: CXXFLAGS="-O3 -march=native -flto" CFLAGS="-O3 -march=native -flto"
- GCC runs built from a compiler configured with: --build=x86_64-redhat-linux --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-initfini-array --enable-languages=c,c++,fortran,objc,obj-c++,ada,go,d,lto --enable-multilib --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=i686 --with-gcc-major-version-only --with-linker-hash-style=gnu --with-tune=generic --without-cuda-driver
- Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled)
- CPU Microcode: 0xa201009
- Security: SELinux enabled; itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Full AMD retpoline, IBPB: conditional, IBRS_FW, STIBP: always-on, RSB filling; srbds: Not affected; tsx_async_abort: Not affected

Overview: the full side-by-side results table covers all six builds across C-Ray, GraphicsMagick (Sharpen, Resizing, Rotate, Enhanced), NCNN, TNN, ASTC Encoder, FLAC / LAME MP3 / Opus audio encoding, AOBench, CoreMark, WebP, x265, SVT-HEVC, SVT-VP9, liquid-dsp, HMMER, MrBayes, PJSIP, LAMMPS, SQLite Speedtest, TJBench, and Himeno. Per-test results follow below.

C-Ray

This is a test of C-Ray, a simple raytracer designed to test floating-point CPU performance. The test is multi-threaded (16 threads per core); this run renders a 4K image with 16 rays per pixel for anti-aliasing. Learn more via the OpenBenchmarking.org test page.

C-Ray 1.1, Total Time - 4K, 16 Rays Per Pixel (Seconds, Fewer Is Better; averages of 3+ runs)

                          GCC 11.1   Clang 12
-O2                          60.93      49.28
-O3 -march=native            25.41      45.01
-O3 -march=native -flto      25.48      44.74

(CC) gcc options: -lm -lpthread, plus each run's optimization flags
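Reading the table above as ratios makes the spread clearer; a quick sketch in plain Python, with the values copied from the averages above:

```python
# C-Ray average times in seconds (fewer is better), copied from the table above.
gcc = {"-O2": 60.93, "-O3 -march=native": 25.41, "-O3 -march=native -flto": 25.48}
clang = {"-O2": 49.28, "-O3 -march=native": 45.01, "-O3 -march=native -flto": 44.74}

for flags in gcc:
    # A ratio above 1.0 means Clang needed more time than GCC at the same flags.
    print(f"{flags}: Clang/GCC time ratio = {clang[flags] / gcc[flags]:.2f}")
```

At plain -O2 Clang leads, but with -march=native GCC pulls far ahead on C-Ray (roughly 1.77x faster than Clang at the same flags).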

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33, Operation: Sharpen (Iterations Per Minute, More Is Better; averages of 3 runs)

                          GCC 11.1   Clang 12
-O2                            226        206
-O3 -march=native              370        235
-O3 -march=native -flto        382        237

(CC) gcc options: -fopenmp -pthread -ljpeg -lz -lm -lpthread, plus each run's optimization flags
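The gain from CPU tuning differs sharply by compiler here; a small sketch computing the -O2 to -O3 -march=native improvement from the averages above:

```python
# GraphicsMagick Sharpen throughput in iterations/minute (more is better),
# copied from the table above.
gcc = {"-O2": 226, "-O3 -march=native": 370, "-O3 -march=native -flto": 382}
clang = {"-O2": 206, "-O3 -march=native": 235, "-O3 -march=native -flto": 237}

for name, runs in (("GCC 11.1", gcc), ("Clang 12", clang)):
    # Percentage improvement over the same compiler's -O2 baseline.
    gain = (runs["-O3 -march=native"] / runs["-O2"] - 1) * 100
    print(f"{name}: -O3 -march=native is {gain:.0f}% faster than -O2")
```

GCC gains about 64% from -O3 -march=native on this OpenMP workload versus roughly 14% for Clang.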

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU - Model: blazeface (ms, Fewer Is Better; averages of 3+ runs)

                          GCC 11.1   Clang 12
-O2                           1.85       1.54
-O3 -march=native             1.80       1.52
-O3 -march=native -flto       2.55       1.46

(CXX) g++ options: -rdynamic -lpthread, plus each run's optimization flags

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.2.3, Target: CPU - Model: MobileNet v2 (ms, Fewer Is Better; averages of 3 runs)

                          GCC 11.1   Clang 12
-O2                         219.13     271.03
-O3 -march=native           227.77     355.93
-O3 -march=native -flto     215.34     261.84

(CXX) g++ options: -pthread -fvisibility=hidden -rdynamic -ldl, plus each run's optimization flags

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU - Model: shufflenet-v2 (ms, Fewer Is Better; averages of 3+ runs)

                          GCC 11.1   Clang 12
-O2                           4.33       3.81
-O3 -march=native             4.39       3.74
-O3 -march=native -flto       6.01       3.69

(CXX) g++ options: -rdynamic -lpthread, plus each run's optimization flags

NCNN 20201218, Target: CPU - Model: regnety_400m (ms, Fewer Is Better; averages of 3+ runs)

                          GCC 11.1   Clang 12
-O2                          17.24      12.93
-O3 -march=native            17.18      12.59
-O3 -march=native -flto      18.53      12.14

(CXX) g++ options: -rdynamic -lpthread, plus each run's optimization flags

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test covering both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.4, Preset: Medium (Seconds, Fewer Is Better; averages of 3 runs)

                          GCC 11.1   Clang 12
-O2                         4.6296     3.3367
-O3 -march=native           4.6277     3.3179
-O3 -march=native -flto     4.6995     3.3728

(CXX) g++ options: -pthread, plus each run's optimization flags
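astcenc is one of the workloads where Clang leads at every flag level; a quick sketch of Clang's margin, using the averages above:

```python
# ASTC Encoder "Medium" preset times in seconds (fewer is better), from the table above.
flags = ["-O2", "-O3 -march=native", "-O3 -march=native -flto"]
gcc   = [4.6296, 4.6277, 4.6995]
clang = [3.3367, 3.3179, 3.3728]

for f, g, c in zip(flags, gcc, clang):
    # Clang's margin expressed as a percentage of the GCC time.
    print(f"{f}: Clang is {100 * (g - c) / g:.0f}% faster")
```

The margin is a steady 28% regardless of optimization flags, suggesting the difference comes from code generation rather than from -march=native vectorization.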

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, Fewer Is Better; averages of 3+ runs)

                          GCC 11.1   Clang 12
-O2                           4.15       3.36
-O3 -march=native             3.84       3.07
-O3 -march=native -flto       3.86       2.93

(CXX) g++ options: -rdynamic -lpthread, plus each run's optimization flags

FLAC Audio Encoding

This test times how long it takes to encode a sample WAV file to FLAC format five times. Learn more via the OpenBenchmarking.org test page.

FLAC Audio Encoding 1.3.2, WAV To FLAC (Seconds, Fewer Is Better; averages of 5 runs)

                          GCC 11.1   Clang 12
-O2                          5.874      7.687
-O3 -march=native            6.237      5.666
-O3 -march=native -flto      6.195      5.756

(CXX) g++ options: -logg -lm, plus each run's optimization flags

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, Fewer Is Better; averages of 3+ runs)

                          GCC 11.1   Clang 12
-O2                           4.54       3.56
-O3 -march=native             4.43       3.57
-O3 -march=native -flto       4.46       3.42

(CXX) g++ options: -rdynamic -lpthread, plus each run's optimization flags

AOBench

AOBench is a lightweight ambient occlusion renderer, written in C. The test profile uses a render size of 2048 x 2048. Learn more via the OpenBenchmarking.org test page.

AOBench, Size: 2048 x 2048 - Total Time (Seconds, Fewer Is Better; averages of 3 runs)

                          GCC 11.1   Clang 12
-O2                          31.73      33.78
-O3 -march=native            25.48      30.23
-O3 -march=native -flto      26.31      29.73

(CC) gcc options: -lm, plus each run's optimization flags

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU - Model: mnasnet (ms, Fewer Is Better; averages of 3+ runs)

                          GCC 11.1   Clang 12
-O2                           3.97       3.26
-O3 -march=native             3.95       3.26
-O3 -march=native -flto       3.92       3.08

(CXX) g++ options: -rdynamic -lpthread, plus each run's optimization flags

LAME MP3 Encoding

LAME is an MP3 encoder licensed under the LGPL. This test measures the time required to encode a WAV file to MP3 format. Learn more via the OpenBenchmarking.org test page.

LAME MP3 Encoding 3.100, WAV To MP3 (Seconds, Fewer Is Better; averages of 3+ runs)

                          GCC 11.1   Clang 12
-O2                          6.802      6.565
-O3 -march=native            5.503      6.098
-O3 -march=native -flto      5.403      5.810

(CC) gcc options: -pipe -lm, plus each run's optimization flags

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU - Model: efficientnet-b0 (ms, Fewer Is Better; averages of 3+ runs)

                          GCC 11.1   Clang 12
-O2                           5.41       4.52
-O3 -march=native             5.37       4.51
-O3 -march=native -flto       5.36       4.30

(CXX) g++ options: -rdynamic -lpthread, plus each run's optimization flags

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test covering both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.4, Preset: Thorough (Seconds, Fewer Is Better; averages of 3 runs)

                          GCC 11.1   Clang 12
-O2                         7.6841     9.3950
-O3 -march=native           7.5718     9.3549
-O3 -march=native -flto     7.5299     9.4432

(CXX) g++ options: -pthread, plus each run's optimization flags

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33, Operation: Resizing (Iterations Per Minute, More Is Better; averages of 3 runs)

                          GCC 11.1   Clang 12
-O2                           1803       1693
-O3 -march=native             2120       1767
-O3 -march=native -flto       2085       1762

(CC) gcc options: -fopenmp -pthread -ljpeg -lz -lm -lpthread, plus each run's optimization flags

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU - Model: mobilenet (ms, Fewer Is Better; averages of 3+ runs)

                          GCC 11.1   Clang 12
-O2                          12.95      11.58
-O3 -march=native            12.70      11.46
-O3 -march=native -flto      13.74      11.05

(CXX) g++ options: -rdynamic -lpthread, plus each run's optimization flags

Opus Codec Encoding

Opus is an open, lossy audio compression format designed primarily for interactive real-time applications over the Internet. This test uses Opus-Tools and measures the time required to encode a WAV file to Opus. Learn more via the OpenBenchmarking.org test page.

Opus Codec Encoding 1.3.1, WAV To Opus Encode (Seconds, Fewer Is Better; averages of 5 runs)

                          GCC 11.1   Clang 12
-O2                          6.515      5.874
-O3 -march=native            5.387      5.508
-O3 -march=native -flto      5.475      5.558

(CXX) g++ options: -logg -lm, plus each run's optimization flags

Coremark

This is a test of the EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.

Coremark 1.0, CoreMark Size 666 - Iterations Per Second (Iterations/Sec, More Is Better; averages of 3 runs)

                          GCC 11.1     Clang 12
-O2                      830811.53    712758.36
-O3 -march=native        808580.86    722694.01
-O3 -march=native -flto  849671.96    714933.86

(CC) gcc options: -lrt, plus each run's optimization flags
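CoreMark shows GCC leading across all three flag sets; a quick sketch computing the margins from the averages above:

```python
# CoreMark iterations per second (more is better), copied from the table above.
gcc = {"-O2": 830811.53, "-O3 -march=native": 808580.86,
       "-O3 -march=native -flto": 849671.96}
clang = {"-O2": 712758.36, "-O3 -march=native": 722694.01,
         "-O3 -march=native -flto": 714933.86}

for flags in gcc:
    # GCC's lead expressed as a percentage over the Clang score.
    lead = (gcc[flags] / clang[flags] - 1) * 100
    print(f"{flags}: GCC leads Clang by {lead:.1f}%")
```

GCC's lead ranges from roughly 12% at -O3 -march=native to nearly 19% once LTO is added, since LTO helps GCC here while leaving Clang's score essentially flat.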

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU - Model: squeezenet_ssd (ms, Fewer Is Better; averages of 3+ runs)

                          GCC 11.1   Clang 12
-O2                          14.61      12.61
-O3 -march=native            13.93      12.35
-O3 -march=native -flto      14.50      12.43

(CXX) g++ options: -rdynamic -lpthread, plus each run's optimization flags

NCNN 20201218, Target: CPU - Model: resnet50 (ms, Fewer Is Better; averages of 3+ runs)

                          GCC 11.1   Clang 12
-O2                          26.18      22.99
-O3 -march=native            25.33      23.25
-O3 -march=native -flto      25.84      22.90

(CXX) g++ options: -rdynamic -lpthread, plus each run's optimization flags

NCNN 20201218, Target: CPU - Model: googlenet (ms, Fewer Is Better; averages of 3+ runs)

                          GCC 11.1   Clang 12
-O2                          13.15      11.88
-O3 -march=native            12.85      11.77
-O3 -march=native -flto      13.26      11.62

(CXX) g++ options: -rdynamic -lpthread, plus each run's optimization flags

NCNN 20201218, Target: CPU - Model: yolov4-tiny (ms, Fewer Is Better; averages of 3+ runs)

                          GCC 11.1   Clang 12
-O2                          21.06      21.97
-O3 -march=native            21.00      21.73
-O3 -march=native -flto      23.94      21.58

(CXX) g++ options: -rdynamic -lpthread, plus each run's optimization flags

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1, Encode Settings: Quality 100, Highest Compression (Encode Time - Seconds, Fewer Is Better; averages of 3+ runs)

                          GCC 11.1   Clang 12
-O2                          5.355      4.778
-O3 -march=native            5.200      4.703
-O3 -march=native -flto      5.184      4.722

(CC) gcc options: -fvisibility=hidden -pthread -lm -ljpeg, plus each run's optimization flags

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33, Operation: Rotate (Iterations Per Minute, More Is Better; averages of 3 runs)

                          GCC 11.1   Clang 12
-O2                            985        921
-O3 -march=native             1036        981
-O3 -march=native -flto        964        961

(CC) gcc options: -fopenmp -pthread -ljpeg -lz -lm -lpthread, plus each run's optimization flags

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile exercises both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.4 - Preset: Exhaustive
(Seconds, Fewer Is Better)

                             GCC 11.1    Clang 12
  -O2                          57.14       52.20
  -O3 -march=native            56.64       51.82
  -O3 -march=native -flto      56.19       51.90

Compiler options: -O3 -march=native -flto -O2 -pthread

GraphicsMagick


GraphicsMagick 1.3.33 - Operation: Enhanced
(Iterations Per Minute, More Is Better)

                             GCC 11.1    Clang 12
  -O2                             422         411
  -O3 -march=native               449         452
  -O3 -march=native -flto         449         451

Compiler options: -fopenmp -O3 -march=native -flto -pthread -ljpeg -lz -lm -lpthread

x265

This is a simple test of the x265 encoder run on the CPU, measuring H.265 video encode performance with 1080p and 4K input options. Learn more via the OpenBenchmarking.org test page.

x265 3.4 - Video Input: Bosphorus 4K
(Frames Per Second, More Is Better)

                             GCC 11.1    Clang 12
  -O2                          26.24       27.77
  -O3 -march=native            25.91       27.75
  -O3 -march=native -flto      26.17       28.47

Compiler options: -O3 -march=native -flto -O2 -rdynamic -lpthread -lrt -ldl

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 7 - Input: Bosphorus 1080p
(Frames Per Second, More Is Better)

                             GCC 11.1    Clang 12
  -O2                         217.32      225.15
  -O3 -march=native           221.71      229.27
  -O3 -march=native -flto     227.04      236.05

Compiler options: -O3 -march=native -flto -fPIE -fPIC -O2 -pie -rdynamic -lpthread -lrt

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.
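Liquid-DSP itself is a C library; as a rough illustration of the kind of kernel its filter benchmark stresses, here is a minimal direct-form FIR filter sketch in plain Python. The function name and taps are illustrative only, not Liquid-DSP's actual API:

```python
def fir_filter(taps, samples):
    """Direct-form FIR filter: y[n] = sum_k taps[k] * x[n - k].

    A 57-tap version of this inner loop, executed across hundreds of
    millions of samples per second, is essentially the work being timed.
    """
    history = [0.0] * len(taps)  # delay line, most recent sample first
    out = []
    for x in samples:
        history = [x] + history[:-1]  # shift the new sample in
        out.append(sum(t * h for t, h in zip(taps, history)))
    return out

# A two-tap moving average smoothing a step input.
print(fir_filter([0.5, 0.5], [1.0, 1.0, 1.0]))  # → [0.5, 1.0, 1.0]
```

The real library vectorizes this loop with SIMD, which is why -march=native matters so much for this test.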

Liquid-DSP 2021.01.31 - Threads: 8 - Buffer Length: 256 - Filter Length: 57
(samples/s, More Is Better)

                             GCC 11.1         Clang 12
  -O2                        594,850,000      576,660,000
  -O3 -march=native          611,560,000      576,050,000
  -O3 -march=native -flto    619,393,333      587,800,000

Compiler options: -O3 -march=native -flto -pthread -lm -lc -lliquid

WebP Image Encode


WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless
(Encode Time - Seconds, Fewer Is Better)

                             GCC 11.1    Clang 12
  -O2                          13.18       13.65
  -O3 -march=native            13.55       14.11
  -O3 -march=native -flto      13.25       13.64

Compiler options: -fvisibility=hidden -O3 -march=native -flto -pthread -lm -ljpeg

Timed HMMer Search

This test searches through the Pfam database of profile hidden Markov models. The search finds the domain structure of the Drosophila Sevenless protein. Learn more via the OpenBenchmarking.org test page.

Timed HMMer Search 3.3.2 - Pfam Database Search
(Seconds, Fewer Is Better)

                             GCC 11.1    Clang 12
  -O2                          99.72       96.09
  -O3 -march=native            97.74       95.21
  -O3 -march=native -flto      97.71       93.31

Compiler options: -O3 -march=native -flto -pthread -lhmmer -leasel -lm -lmpi

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: resnet18
(ms, Fewer Is Better)

                             GCC 11.1    Clang 12
  -O2                          14.50       14.05
  -O3 -march=native            14.33       13.90
  -O3 -march=native -flto      14.80       13.86

Compiler options: -O3 -march=native -flto -O2 -rdynamic -lpthread

Timed MrBayes Analysis

This test performs a Bayesian analysis of a set of primate genome sequences in order to estimate their phylogeny. Learn more via the OpenBenchmarking.org test page.

Timed MrBayes Analysis 3.2.7 - Primate Phylogeny Analysis
(Seconds, Fewer Is Better)

                             GCC 11.1    Clang 12
  -O2                          96.79       93.53
  -O3 -march=native            92.97       90.79
  -O3 -march=native -flto      93.34       92.41

Compiler options: -mmmx -msse -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msse4a -msha -maes -mavx -mfma -mavx2 -mrdrnd -mbmi -mbmi2 -madx -O3 -std=c99 -pedantic -march=native -flto -lm

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.2.3 - Target: CPU - Model: SqueezeNet v1.1
(ms, Fewer Is Better)

                             GCC 11.1    Clang 12
  -O2                         208.78      208.56
  -O3 -march=native           214.28      215.78
  -O3 -march=native -flto     202.57      207.93

Compiler options: -O3 -march=native -flto -pthread -fvisibility=hidden -O2 -rdynamic -ldl

WebP Image Encode


WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless, Highest Compression
(Encode Time - Seconds, Fewer Is Better)

                             GCC 11.1    Clang 12
  -O2                          27.42       29.12
  -O3 -march=native            28.33       28.64
  -O3 -march=native -flto      27.95       27.40

Compiler options: -fvisibility=hidden -O3 -march=native -flto -pthread -lm -ljpeg

PJSIP

PJSIP is a free and open source multimedia communication library written in C implementing standards-based protocols such as SIP, SDP, RTP, STUN, TURN, and ICE. It combines the SIP signaling protocol with a rich multimedia framework and NAT traversal functionality in a high-level API that is portable and suitable for almost any type of system, from desktops and embedded systems to mobile handsets. This test profile makes use of pjsip-perf with both the client and server on the same system. More details on the PJSIP benchmark at https://www.pjsip.org/high-performance-sip.htm Learn more via the OpenBenchmarking.org test page.

PJSIP 2.11 - Method: INVITE
(Responses Per Second, More Is Better)

  Only five average results were reported across the six compiler
  configurations: 4815, 4575, 4671, 4708, and 4616 responses per second;
  the per-configuration mapping is not recoverable from the flattened
  chart data.

Compiler options: -lstdc++ -lssl -lcrypto -lm -lrt -lpthread -O3 -march=native

Liquid-DSP


Liquid-DSP 2021.01.31 - Threads: 16 - Buffer Length: 256 - Filter Length: 57
(samples/s, More Is Better)

                             GCC 11.1           Clang 12
  -O2                        1,041,200,000      1,056,400,000
  -O3 -march=native          1,085,000,000      1,061,900,000
  -O3 -march=native -flto    1,093,266,667      1,073,466,667

Compiler options: -O3 -march=native -flto -pthread -lm -lc -lliquid

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 29Oct2020 - Model: Rhodopsin Protein
(ns/day, More Is Better)

                             GCC 11.1    Clang 12
  -O2                          12.86       13.09
  -O3 -march=native            12.75       13.30
  -O3 -march=native -flto      12.81       13.34

Compiler options: -O3 -march=native -flto -O2 -pthread -lm

SQLite Speedtest

This is a benchmark of SQLite's speedtest1 benchmark program with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.
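speedtest1 is SQLite's own C benchmark driver; the basic idea (time a batch of statements, then verify the result) can be sketched with Python's built-in sqlite3 module. The mini_speedtest name and the single-table insert workload are illustrative only and far simpler than the real program's mix of statement types:

```python
import sqlite3
import time

def mini_speedtest(n_rows=1000):
    """Time a bulk insert into an in-memory SQLite database."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v TEXT)")
    start = time.perf_counter()
    with con:  # one transaction for the whole batch
        con.executemany("INSERT INTO t (v) VALUES (?)",
                        ((f"row-{i}",) for i in range(n_rows)))
    elapsed = time.perf_counter() - start
    count = con.execute("SELECT COUNT(*) FROM t").fetchone()[0]
    con.close()
    return count, elapsed

count, elapsed = mini_speedtest()
print(count)  # → 1000
```

Since SQLite is a single C amalgamation compiled into the test binary, this workload is dominated by how well the compiler optimizes that one translation unit.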

SQLite Speedtest 3.30 - Timed Time - Size 1,000
(Seconds, Fewer Is Better)

                             GCC 11.1    Clang 12
  -O2                          47.40       47.79
  -O3 -march=native            46.92       47.63
  -O3 -march=native -flto      46.48       48.58

Compiler options: -O3 -march=native -flto -ldl -lz -lpthread

PJSIP


PJSIP 2.11 - Method: OPTIONS, Stateless
(Responses Per Second, More Is Better)

  Only five average results were reported across the six compiler
  configurations: 230599, 221107, 221759, 222572, and 222581 responses
  per second; the per-configuration mapping is not recoverable from the
  flattened chart data.

Compiler options: -lstdc++ -lssl -lcrypto -lm -lrt -lpthread -O3 -march=native

libjpeg-turbo tjbench

tjbench is a JPEG decompression/compression benchmark that is part of libjpeg-turbo, a JPEG image codec library optimized for SIMD instructions on modern CPU architectures. Learn more via the OpenBenchmarking.org test page.

libjpeg-turbo tjbench 2.1.0 - Test: Decompression Throughput
(Megapixels/sec, More Is Better)

                             GCC 11.1    Clang 12
  -O2                         277.50      267.55
  -O3 -march=native           268.11      272.51
  -O3 -march=native -flto     270.91      266.23

Compiler options: -O3 -march=native -flto -rdynamic -lm

NCNN


NCNN 20201218 - Target: CPU - Model: vgg16
(ms, Fewer Is Better)

                             GCC 11.1    Clang 12
  -O2                          56.82       57.26
  -O3 -march=native            57.61       56.93
  -O3 -march=native -flto      57.84       56.11

Compiler options: -O3 -march=native -flto -O2 -rdynamic -lpthread

SVT-HEVC


SVT-HEVC 1.5.0 - Tuning: 10 - Input: Bosphorus 1080p
(Frames Per Second, More Is Better)

                             GCC 11.1    Clang 12
  -O2                         365.79      368.55
  -O3 -march=native           369.54      375.95
  -O3 -march=native -flto     368.55      375.08

Compiler options: -O3 -march=native -flto -fPIE -fPIC -O2 -pie -rdynamic -lpthread -lrt

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Tuning: Visual Quality Optimized - Input: Bosphorus 1080p
(Frames Per Second, More Is Better)

                             GCC 11.1    Clang 12
  -O2                         222.63      225.57
  -O3 -march=native           223.11      227.73
  -O3 -march=native -flto     222.33      227.66

Compiler options: -O3 -fcommon -march=native -flto -fPIE -fPIC -fvisibility=hidden -O2 -pie -rdynamic -lpthread -lrt -lm

NCNN


NCNN 20201218 - Target: CPU - Model: alexnet
(ms, Fewer Is Better)

                             GCC 11.1    Clang 12
  -O2                          11.16       11.09
  -O3 -march=native            10.99       11.16
  -O3 -march=native -flto      11.13       11.25

Compiler options: -O3 -march=native -flto -O2 -rdynamic -lpthread

SVT-VP9


SVT-VP9 0.3 - Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080p
(Frames Per Second, More Is Better)

                             GCC 11.1    Clang 12
  -O2                         235.27      235.64
  -O3 -march=native           235.59      239.88
  -O3 -march=native -flto     234.70      238.13

Compiler options: -O3 -fcommon -march=native -flto -fPIE -fPIC -fvisibility=hidden -O2 -pie -rdynamic -lpthread -lrt -lm

SVT-VP9 0.3 - Tuning: VMAF Optimized - Input: Bosphorus 1080p
(Frames Per Second, More Is Better)

                             GCC 11.1    Clang 12
  -O2                         230.53      231.35
  -O3 -march=native           231.16      234.71
  -O3 -march=native -flto     230.43      232.66

Compiler options: -O3 -fcommon -march=native -flto -fPIE -fPIC -fvisibility=hidden -O2 -pie -rdynamic -lpthread -lrt -lm

PJSIP


PJSIP 2.11 - Method: OPTIONS, Stateful
(Responses Per Second, More Is Better)

  Only five average results were reported across the six compiler
  configurations: 7860, 7942, 7982, 7958, and 7916 responses per second;
  the per-configuration mapping is not recoverable from the flattened
  chart data.

Compiler options: -lstdc++ -lssl -lcrypto -lm -lrt -lpthread -O3 -march=native

Himeno Benchmark

The Himeno benchmark is a linear solver for the pressure Poisson equation using a point-Jacobi method. Learn more via the OpenBenchmarking.org test page.
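Himeno's kernel is a 3D stencil sweep, but the point-Jacobi idea it builds on can be shown in one dimension: every unknown is repeatedly replaced by the average of its neighbours plus a source term, using only the previous iteration's values. A minimal sketch (1D rather than Himeno's 3D grid; the function name is illustrative):

```python
def jacobi_poisson_1d(f, n, iters):
    """Point-Jacobi iteration for -u'' = f on (0, 1) with u(0)=0, u(1)=1.

    f holds the source term sampled at the n interior grid points.
    """
    h = 1.0 / (n + 1)
    u = [0.0] * (n + 2)
    u[-1] = 1.0  # right boundary condition
    for _ in range(iters):
        new = u[:]
        for i in range(1, n + 1):
            # Each point is updated from its neighbours' *old* values.
            new[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i - 1])
        u = new
    return u

# With f = 0 the iterates converge to the straight line u(x) = x,
# so the midpoint of a 9-point interior grid approaches 0.5.
u = jacobi_poisson_1d([0.0] * 9, 9, 2000)
print(round(u[5], 6))
```

Because every update is a fixed pattern of neighbouring loads and a few multiply-adds, this kind of kernel is highly sensitive to vectorization, which is consistent with -march=native mattering here.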

Himeno Benchmark 3.0 - Poisson Pressure Solver
(MFLOPS, More Is Better)

                             GCC 11.1    Clang 12
  -O2                         5120.94     4877.31
  -O3 -march=native           5314.73     5192.19
  -O3 -march=native -flto     5445.64     5289.92

Compiler options: -O3 -march=native -flto -mavx2

Geometric Mean Of All Test Results

Result Composite - Ryzen 9 5950X Clang 12 vs. GCC 11 Benchmarks
(Geometric Mean, More Is Better)

                             GCC 11.1    Clang 12
  -O2                          50.71       52.51
  -O3 -march=native            53.51       53.78
  -O3 -march=native -flto      52.51       54.73
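The composite is a geometric mean, used so that no single test can dominate the aggregate the way it could under an arithmetic mean. A sketch of the computation (assuming each per-test result has already been normalized so that higher is better; the function name is illustrative):

```python
import math

def geometric_mean(values):
    """Geometric mean via the log average, which avoids overflow
    when multiplying many results together."""
    if not values or any(v <= 0 for v in values):
        raise ValueError("requires positive values")
    return math.exp(sum(math.log(v) for v in values) / len(values))

# A 4x win on one test and a 4x loss on another cancel out exactly,
# the property that makes this mean suitable for aggregating ratios.
print(geometric_mean([4.0, 0.25]))  # → 1.0
```

This is why a compiler that is slightly faster on most tests but much slower on one does not automatically lose the composite.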