Ryzen 9 5950X Clang 12 vs. GCC 11 Benchmarks

GCC 11.1 versus LLVM Clang 12 on AMD Ryzen 9 5950X. Benchmarks by Michael Larabel for a future article.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2105198-IB-11900KCOM08
The result file spans the following test categories: Audio Encoding (3 tests), Bioinformatics (3), C/C++ Compiler Tests (12), CPU Massive (12), Creator Workloads (12), Encoding (6), HPC - High Performance Computing (6), Imaging (3), Machine Learning (2), MPI Benchmarks (2), Multi-Core (8), OpenMPI Tests (3), Renderers (2), Scientific Computing (4), Server CPU Tests (6), Single-Threaded (3), Video Encoding (3), and Common Workstation Benchmarks (2 tests).


Run Management

Result Identifier                   Date         Test Duration
GCC 11.1: -O2                       May 18 2021  4 Hours, 27 Minutes
GCC 11.1: -O3 -march=native         May 18 2021  4 Hours, 36 Minutes
GCC 11.1: -O3 -march=native -flto   May 18 2021  1 Hour, 47 Minutes
Clang 12: -O2                       May 17 2021  1 Hour, 43 Minutes
Clang 12: -O3 -march=native         May 17 2021  1 Hour, 42 Minutes
Clang 12: -O3 -march=native -flto   May 18 2021  1 Hour, 32 Minutes



System Information:

  Processor: AMD Ryzen 9 5950X 16-Core @ 3.40GHz (16 Cores / 32 Threads)
  Motherboard: ASUS ROG CROSSHAIR VIII HERO (WI-FI) (3302 BIOS)
  Chipset: AMD Starship/Matisse
  Memory: 32GB
  Disk: 500GB Western Digital WDS500G3X0C-00SJG0
  Graphics: AMD Radeon RX 5600 OEM/5600 XT / 5700/5700 8GB (2100/875MHz)
  Audio: AMD Navi 10 HDMI Audio
  Monitor: ASUS MG28U
  Network: Realtek RTL8125 2.5GbE + Intel I211 + Intel Wi-Fi 6 AX200
  OS: Fedora 34
  Kernel: 5.11.20-300.fc34.x86_64 (x86_64)
  Desktop: GNOME Shell 40.1
  Display Server: X Server + Wayland
  OpenGL: 4.6 Mesa 21.0.3 (LLVM 12.0.0)
  Compilers: GCC 11.1.1 20210428 + Clang 12.0.0
  File-System: btrfs
  Screen Resolution: 3840x2160

System Logs / Notes:

- Transparent Huge Pages: madvise
- GCC 11.1: -O2: CXXFLAGS=-O2 CFLAGS=-O2
- GCC 11.1: -O3 -march=native: CXXFLAGS="-O3 -march=native" CFLAGS="-O3 -march=native"
- GCC 11.1: -O3 -march=native -flto: CXXFLAGS="-O3 -march=native -flto" CFLAGS="-O3 -march=native -flto"
- Clang 12: -O2: CXXFLAGS=-O2 CFLAGS=-O2
- Clang 12: -O3 -march=native: CXXFLAGS="-O3 -march=native" CFLAGS="-O3 -march=native"
- Clang 12: -O3 -march=native -flto: CXXFLAGS="-O3 -march=native -flto" CFLAGS="-O3 -march=native -flto"
- GCC builds configured with: --build=x86_64-redhat-linux --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-initfini-array --enable-languages=c,c++,fortran,objc,obj-c++,ada,go,d,lto --enable-multilib --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=i686 --with-gcc-major-version-only --with-linker-hash-style=gnu --with-tune=generic --without-cuda-driver
- Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled)
- CPU Microcode: 0xa201009
- Security: SELinux + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional IBRS_FW STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Results overview: the full side-by-side table of all results (C-Ray, GraphicsMagick, NCNN, TNN, ASTC Encoder, FLAC / MP3 / Opus audio encoding, AOBench, WebP, x265, SVT-HEVC, SVT-VP9, Liquid-DSP, HMMER, MrBayes, PJSIP, LAMMPS, SQLite Speedtest, TJBench, CoreMark, and Himeno) across the six compiler configurations is available in the OpenBenchmarking.org result file; selected individual results follow below.
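Since the overall geometric mean is not reproduced in this excerpt, here is a minimal sketch of how such a cross-test summary is computed from individual results. The two data points are quoted from results reported in this article (C-Ray total time at -O3 -march=native -flto, lower is better; CoreMark iterations per second at the same flags, higher is better); the code itself is illustrative, not part of the Phoronix Test Suite.

```python
from math import prod

# (test, gcc_value, clang_value, higher_is_better) -- values quoted from this article
results = [
    ("C-Ray 4K, 16 rays (s)", 25.48,     44.74,     False),
    ("CoreMark (iters/s)",    849671.96, 714933.86, True),
]

# Express each test as a GCC-over-Clang performance ratio (> 1.0 means GCC is faster).
ratios = [
    (gcc / clang) if higher else (clang / gcc)
    for _, gcc, clang, higher in results
]

# Geometric mean: the standard way to average ratios across unrelated benchmarks,
# since it is insensitive to which compiler is used as the baseline.
geomean = prod(ratios) ** (1 / len(ratios))
for (name, *_), r in zip(results, ratios):
    print(f"{name}: {r:.2f}x")
print(f"Geometric mean: {geomean:.2f}x")
```

On these two tests the sketch reports GCC ahead by roughly 1.4x in geometric mean; a real overall score would of course fold in every test in the result file.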

C-Ray

This is a test of C-Ray, a simple raytracer designed to test floating-point CPU performance. The test is multi-threaded (16 threads per core); this run uses the 4K, 16 rays per pixel configuration for anti-aliasing. Learn more via the OpenBenchmarking.org test page.

C-Ray 1.1 - Total Time - 4K, 16 Rays Per Pixel (Seconds, Fewer Is Better)

Compiler   Flags                     Avg     SE     N    Min     Max
Clang 12   -O2                       49.28   0.16   3    48.97   49.51
GCC 11.1   -O2                       60.93   0.18   3    60.58   61.17
Clang 12   -O3 -march=native         45.01   0.10   3    44.82   45.16
GCC 11.1   -O3 -march=native         25.41   0.08   3    25.26   25.53
Clang 12   -O3 -march=native -flto   44.74   0.17   3    44.45   45.03
GCC 11.1   -O3 -march=native -flto   25.48   0.08   3    25.33   25.61

1. (CC) gcc options: -lm -lpthread -O3 -march=native -flto
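The SE figures reported throughout are the standard error of the mean over the N runs of each configuration. As a minimal sketch of how they relate to the Min/Avg/Max spreads, the following recomputes the GCC 11.1 -O2 C-Ray row; note that the middle run value is invented to form a three-run sample, and only the min, max, and average come from the result above.

```python
from statistics import mean, stdev

# Three hypothetical run times consistent with Min 60.58 / Avg 60.93 / Max 61.17;
# the middle value (61.04) is assumed, not taken from the result file.
runs = [60.58, 61.04, 61.17]

avg = mean(runs)                       # 60.93, matching the reported average
se = stdev(runs) / len(runs) ** 0.5    # standard error of the mean ("SE +/- ...")
print(f"Avg: {avg:.2f}, SE: {se:.2f}")
```

With this assumed sample the script reports an SE of about 0.18, in line with the "SE +/- 0.18, N = 3" annotation above.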

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33 - Operation: Sharpen (Iterations Per Minute, More Is Better)

Compiler   Flags                     Avg   SE     N    Min   Max
Clang 12   -O2                       206   0.58   3    205   207
GCC 11.1   -O2                       226   0.33   3    226   227
Clang 12   -O3 -march=native         235   0.33   3    235   236
GCC 11.1   -O3 -march=native         370   1.20   3    368   372
Clang 12   -O3 -march=native -flto   237   0.67   3    236   238
GCC 11.1   -O3 -march=native -flto   382   0.88   3    381   384

1. (CC) gcc options: -fopenmp -O3 -march=native -flto -pthread -ljpeg -lz -lm -lpthread

NCNN

NCNN is a high-performance neural network inference framework, optimized for mobile and other platforms, developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: blazeface (ms, Fewer Is Better)

Compiler   Flags                     Avg    SE     N    Min    Max
Clang 12   -O2                       1.54   0.04   3    1.49   1.61
GCC 11.1   -O2                       1.85   0.01   15   1.81   1.99
Clang 12   -O3 -march=native         1.52   0.04   3    1.44   1.58
GCC 11.1   -O3 -march=native         1.80   0.01   15   1.77   1.87
Clang 12   -O3 -march=native -flto   1.46   0.05   3    1.36   1.52
GCC 11.1   -O3 -march=native -flto   2.55   0.01   3    2.53   2.56

1. (CXX) g++ options: -O3 -march=native -flto -O2 -rdynamic -lpthread

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.2.3 - Target: CPU - Model: MobileNet v2 (ms, Fewer Is Better)

Compiler   Flags                     Avg      SE     N    Min      Max
Clang 12   -O2                       271.03   0.41   3    270.29   271.72
GCC 11.1   -O2                       219.13   1.57   3    216.80   222.13
Clang 12   -O3 -march=native         355.93   0.48   3    355.00   356.62
GCC 11.1   -O3 -march=native         227.77   0.86   3    226.54   229.43
Clang 12   -O3 -march=native -flto   261.84   0.55   3    260.77   262.60
GCC 11.1   -O3 -march=native -flto   215.34   1.27   3    212.93   217.24

1. (CXX) g++ options: -O3 -march=native -flto -pthread -fvisibility=hidden -O2 -rdynamic -ldl

NCNN


NCNN 20201218 - Target: CPU - Model: shufflenet-v2 (ms, Fewer Is Better)

Compiler   Flags                     Avg    SE     N    Min    Max
Clang 12   -O2                       3.81   0.05   3    3.74   3.91
GCC 11.1   -O2                       4.33   0.01   14   4.29   4.36
Clang 12   -O3 -march=native         3.74   0.05   3    3.67   3.83
GCC 11.1   -O3 -march=native         4.39   0.15   15   4.20   6.45
Clang 12   -O3 -march=native -flto   3.69   0.02   3    3.66   3.73
GCC 11.1   -O3 -march=native -flto   6.01   0.02   3    5.96   6.03

1. (CXX) g++ options: -O3 -march=native -flto -O2 -rdynamic -lpthread

NCNN 20201218 - Target: CPU - Model: regnety_400m (ms, Fewer Is Better)

Compiler   Flags                     Avg     SE     N    Min     Max
Clang 12   -O2                       12.93   0.20   3    12.53   13.17
GCC 11.1   -O2                       17.24   0.06   15   16.99   17.86
Clang 12   -O3 -march=native         12.59   0.16   3    12.26   12.79
GCC 11.1   -O3 -march=native         17.18   0.05   15   17.05   17.66
Clang 12   -O3 -march=native -flto   12.14   0.14   3    11.97   12.42
GCC 11.1   -O3 -march=native -flto   18.53   0.09   3    18.36   18.65

1. (CXX) g++ options: -O3 -march=native -flto -O2 -rdynamic -lpthread

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.4 - Preset: Medium (Seconds, Fewer Is Better)

Compiler   Flags                     Avg      SE       N    Min    Max
Clang 12   -O2                       3.3367   0.0156   3    3.31   3.36
GCC 11.1   -O2                       4.6296   0.0337   3    4.59   4.70
Clang 12   -O3 -march=native         3.3179   0.0243   3    3.29   3.37
GCC 11.1   -O3 -march=native         4.6277   0.0292   3    4.57   4.67
Clang 12   -O3 -march=native -flto   3.3728   0.0302   3    3.31   3.41
GCC 11.1   -O3 -march=native -flto   4.6995   0.0045   3    4.69   4.71

1. (CXX) g++ options: -O3 -march=native -flto -O2 -pthread

NCNN


NCNN 20201218 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, Fewer Is Better)

Compiler   Flags                     Avg    SE     N    Min    Max
Clang 12   -O2                       3.36   0.07   3    3.29   3.50
GCC 11.1   -O2                       4.15   0.03   15   4.06   4.42
Clang 12   -O3 -march=native         3.07   0.05   3    3.00   3.16
GCC 11.1   -O3 -march=native         3.84   0.02   15   3.79   4.05
Clang 12   -O3 -march=native -flto   2.93   0.01   3    2.92   2.94
GCC 11.1   -O3 -march=native -flto   3.86   0.03   3    3.81   3.91

1. (CXX) g++ options: -O3 -march=native -flto -O2 -rdynamic -lpthread

FLAC Audio Encoding

This test times how long it takes to encode a sample WAV file to FLAC format five times. Learn more via the OpenBenchmarking.org test page.

FLAC Audio Encoding 1.3.2 - WAV To FLAC (Seconds, Fewer Is Better)

Compiler   Flags                     Avg     SE      N    Min    Max
Clang 12   -O2                       7.687   0.024   5    7.61   7.74
GCC 11.1   -O2                       5.874   0.045   5    5.79   5.99
Clang 12   -O3 -march=native         5.666   0.005   5    5.65   5.68
GCC 11.1   -O3 -march=native         6.237   0.020   5    6.16   6.27
Clang 12   -O3 -march=native -flto   5.756   0.030   5    5.69   5.83
GCC 11.1   -O3 -march=native -flto   6.195   0.036   5    6.09   6.28

1. (CXX) g++ options: -O3 -march=native -flto -logg -lm

NCNN


NCNN 20201218 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, Fewer Is Better)

Compiler   Flags                     Avg    SE     N    Min    Max
Clang 12   -O2                       3.56   0.11   3    3.42   3.77
GCC 11.1   -O2                       4.54   0.04   15   4.45   5.09
Clang 12   -O3 -march=native         3.57   0.04   3    3.49   3.64
GCC 11.1   -O3 -march=native         4.43   0.01   15   4.39   4.51
Clang 12   -O3 -march=native -flto   3.42   0.08   3    3.27   3.51
GCC 11.1   -O3 -march=native -flto   4.46   0.01   3    4.44   4.49

1. (CXX) g++ options: -O3 -march=native -flto -O2 -rdynamic -lpthread

AOBench

AOBench is a lightweight ambient occlusion renderer, written in C. The test profile uses a render size of 2048 x 2048. Learn more via the OpenBenchmarking.org test page.

AOBench - Size: 2048 x 2048 - Total Time (Seconds, Fewer Is Better)

Compiler   Flags                     Avg     SE     N    Min     Max
Clang 12   -O2                       33.78   0.04   3    33.72   33.85
GCC 11.1   -O2                       31.73   0.14   3    31.54   32.00
Clang 12   -O3 -march=native         30.23   0.43   3    29.42   30.88
GCC 11.1   -O3 -march=native         25.48   0.02   3    25.45   25.53
Clang 12   -O3 -march=native -flto   29.73   0.09   3    29.62   29.91
GCC 11.1   -O3 -march=native -flto   26.31   0.01   3    26.29   26.33

1. (CC) gcc options: -lm -O3 -march=native -flto

NCNN


NCNN 20201218 - Target: CPU - Model: mnasnet (ms, Fewer Is Better)

Compiler   Flags                     Avg    SE     N    Min    Max
Clang 12   -O2                       3.26   0.08   3    3.18   3.42
GCC 11.1   -O2                       3.97   0.04   15   3.88   4.48
Clang 12   -O3 -march=native         3.26   0.05   3    3.18   3.36
GCC 11.1   -O3 -march=native         3.95   0.04   15   3.85   4.35
Clang 12   -O3 -march=native -flto   3.08   0.04   3    3.02   3.16
GCC 11.1   -O3 -march=native -flto   3.92   0.02   3    3.88   3.95

1. (CXX) g++ options: -O3 -march=native -flto -O2 -rdynamic -lpthread

LAME MP3 Encoding

LAME is an MP3 encoder licensed under the LGPL. This test measures the time required to encode a WAV file to MP3 format. Learn more via the OpenBenchmarking.org test page.

LAME MP3 Encoding 3.100 - WAV To MP3 (Seconds, Fewer Is Better)

Compiler   Flags                     Avg     SE      N    Min    Max
Clang 12   -O2                       6.565   0.068   4    6.38   6.67
GCC 11.1   -O2                       6.802   0.084   3    6.64   6.93
Clang 12   -O3 -march=native         6.098   0.045   3    6.05   6.19
GCC 11.1   -O3 -march=native         5.503   0.071   3    5.37   5.61
Clang 12   -O3 -march=native -flto   5.810   0.056   3    5.70   5.89
GCC 11.1   -O3 -march=native -flto   5.403   0.010   3    5.39   5.42

1. (CC) gcc options: -O3 -pipe -march=native -flto -lm

NCNN


NCNN 20201218 - Target: CPU - Model: efficientnet-b0 (ms, Fewer Is Better)

Compiler   Flags                     Avg    SE     N    Min    Max
Clang 12   -O2                       4.52   0.06   3    4.41   4.63
GCC 11.1   -O2                       5.41   0.05   15   5.29   6.01
Clang 12   -O3 -march=native         4.51   0.01   3    4.49   4.53
GCC 11.1   -O3 -march=native         5.37   0.03   15   5.29   5.68
Clang 12   -O3 -march=native -flto   4.30   0.03   3    4.25   4.36
GCC 11.1   -O3 -march=native -flto   5.36   0.02   3    5.32   5.39

1. (CXX) g++ options: -O3 -march=native -flto -O2 -rdynamic -lpthread

ASTC Encoder


ASTC Encoder 2.4 - Preset: Thorough (Seconds, Fewer Is Better)

Compiler   Flags                     Avg      SE       N    Min    Max
Clang 12   -O2                       9.3950   0.0279   3    9.36   9.45
GCC 11.1   -O2                       7.6841   0.0109   3    7.67   7.70
Clang 12   -O3 -march=native         9.3549   0.0407   3    9.28   9.41
GCC 11.1   -O3 -march=native         7.5718   0.0351   3    7.50   7.61
Clang 12   -O3 -march=native -flto   9.4432   0.0114   3    9.42   9.46
GCC 11.1   -O3 -march=native -flto   7.5299   0.0109   3    7.51   7.55

1. (CXX) g++ options: -O3 -march=native -flto -O2 -pthread

GraphicsMagick


GraphicsMagick 1.3.33 - Operation: Resizing (Iterations Per Minute, More Is Better)

Compiler   Flags                     Avg    SE     N    Min    Max
Clang 12   -O2                       1693   4.48   3    1687   1702
GCC 11.1   -O2                       1803   0.88   3    1802   1805
Clang 12   -O3 -march=native         1767   3.61   3    1760   1772
GCC 11.1   -O3 -march=native         2120   1.53   3    2117   2122
Clang 12   -O3 -march=native -flto   1762   2.85   3    1759   1768
GCC 11.1   -O3 -march=native -flto   2085   3.71   3    2080   2092

1. (CC) gcc options: -fopenmp -O3 -march=native -flto -pthread -ljpeg -lz -lm -lpthread

NCNN


NCNN 20201218 - Target: CPU - Model: mobilenet (ms, Fewer Is Better)

Compiler   Flags                     Avg     SE     N    Min     Max
Clang 12   -O2                       11.58   0.03   3    11.53   11.64
GCC 11.1   -O2                       12.95   0.11   15   12.28   13.68
Clang 12   -O3 -march=native         11.46   0.11   3    11.34   11.68
GCC 11.1   -O3 -march=native         12.70   0.13   15   12.05   14.02
Clang 12   -O3 -march=native -flto   11.05   0.12   3    10.82   11.18
GCC 11.1   -O3 -march=native -flto   13.74   0.17   3    13.51   14.07

1. (CXX) g++ options: -O3 -march=native -flto -O2 -rdynamic -lpthread

Opus Codec Encoding

Opus is an open, lossy audio codec designed primarily for interactive real-time applications over the Internet. This test uses Opus-Tools and measures the time required to encode a WAV file to Opus. Learn more via the OpenBenchmarking.org test page.

Opus Codec Encoding 1.3.1 - WAV To Opus Encode (Seconds, Fewer Is Better)

Compiler   Flags                     Avg     SE      N    Min    Max
Clang 12   -O2                       5.874   0.052   5    5.68   5.98
GCC 11.1   -O2                       6.515   0.035   5    6.43   6.60
Clang 12   -O3 -march=native         5.508   0.045   5    5.35   5.60
GCC 11.1   -O3 -march=native         5.387   0.048   5    5.25   5.48
Clang 12   -O3 -march=native -flto   5.558   0.016   5    5.52   5.60
GCC 11.1   -O3 -march=native -flto   5.475   0.023   5    5.42   5.53

1. (CXX) g++ options: -O3 -march=native -flto -logg -lm

Coremark

This is a test of the EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.

Coremark 1.0 - CoreMark Size 666 - Iterations Per Second (Iterations/Sec, More Is Better)

Compiler   Flags                     Avg         SE        N    Min         Max
Clang 12   -O2                       712758.36   789.22    3    711532.76   714232.57
GCC 11.1   -O2                       830811.53   989.68    3    829015.54   832430.09
Clang 12   -O3 -march=native         722694.01   845.83    3    721451.92   724309.64
GCC 11.1   -O3 -march=native         808580.86   2230.96   3    805707.09   812973.71
Clang 12   -O3 -march=native -flto   714933.86   581.76    3    714338.86   716097.27
GCC 11.1   -O3 -march=native -flto   849671.96   836.68    3    848019.08   850724.45

1. (CC) gcc options: -O2 -O3 -march=native -flto -lrt

NCNN


NCNN 20201218 - Target: CPU - Model: squeezenet_ssd (ms, Fewer Is Better)

Compiler   Flags                     Avg     SE     N    Min     Max
Clang 12   -O2                       12.61   0.31   3    12.27   13.22
GCC 11.1   -O2                       14.61   0.10   15   13.63   15.03
Clang 12   -O3 -march=native         12.35   0.20   3    12.13   12.75
GCC 11.1   -O3 -march=native         13.93   0.09   15   13.16   14.57
Clang 12   -O3 -march=native -flto   12.43   0.20   3    12.05   12.73
GCC 11.1   -O3 -march=native -flto   14.50   0.06   3    14.38   14.58

1. (CXX) g++ options: -O3 -march=native -flto -O2 -rdynamic -lpthread

NCNN 20201218 - Target: CPU - Model: resnet50 (ms, Fewer Is Better)

Compiler   Flags                     Avg     SE     N    Min     Max
Clang 12   -O2                       22.99   0.11   3    22.79   23.18
GCC 11.1   -O2                       26.18   0.18   15   24.98   27.12
Clang 12   -O3 -march=native         23.25   0.06   3    23.14   23.34
GCC 11.1   -O3 -march=native         25.33   0.10   15   24.76   25.86
Clang 12   -O3 -march=native -flto   22.90   0.09   3    22.79   23.07
GCC 11.1   -O3 -march=native -flto   25.84   0.21   3    25.42   26.10

1. (CXX) g++ options: -O3 -march=native -flto -O2 -rdynamic -lpthread

NCNN 20201218 - Target: CPU - Model: googlenet (ms, Fewer Is Better)

Compiler   Flags                     Avg     SE     N    Min     Max
Clang 12   -O2                       11.88   0.13   3    11.65   12.10
GCC 11.1   -O2                       13.15   0.22   15   12.43   14.57
Clang 12   -O3 -march=native         11.77   0.16   3    11.54   12.07
GCC 11.1   -O3 -march=native         12.85   0.12   15   12.38   13.81
Clang 12   -O3 -march=native -flto   11.62   0.09   3    11.45   11.77
GCC 11.1   -O3 -march=native -flto   13.26   0.44   3    12.70   14.12

1. (CXX) g++ options: -O3 -march=native -flto -O2 -rdynamic -lpthread

NCNN 20201218 - Target: CPU - Model: yolov4-tiny (ms, Fewer Is Better)

Compiler   Flags                     Avg     SE     N    Min     Max
Clang 12   -O2                       21.97   0.18   3    21.72   22.32
GCC 11.1   -O2                       21.06   0.11   15   20.37   21.65
Clang 12   -O3 -march=native         21.73   0.12   3    21.50   21.90
GCC 11.1   -O3 -march=native         21.00   0.14   15   20.34   22.04
Clang 12   -O3 -march=native -flto   21.58   0.10   3    21.43   21.78
GCC 11.1   -O3 -march=native -flto   23.94   0.41   3    23.12   24.40

1. (CXX) g++ options: -O3 -march=native -flto -O2 -rdynamic -lpthread

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100, Highest Compression (Encode Time - Seconds, Fewer Is Better)

Compiler   Flags                     Avg     SE      N    Min    Max
Clang 12   -O2                       4.778   0.059   4    4.70   4.95
GCC 11.1   -O2                       5.355   0.021   3    5.32   5.39
Clang 12   -O3 -march=native         4.703   0.026   3    4.67   4.75
GCC 11.1   -O3 -march=native         5.200   0.023   3    5.18   5.25
Clang 12   -O3 -march=native -flto   4.722   0.027   3    4.67   4.76
GCC 11.1   -O3 -march=native -flto   5.184   0.036   3    5.15   5.26

1. (CC) gcc options: -fvisibility=hidden -O3 -march=native -flto -pthread -lm -ljpeg

GraphicsMagick


GraphicsMagick 1.3.33 - Operation: Rotate (Iterations Per Minute, More Is Better)

Compiler   Flags                     Avg    SE      N    Min    Max
Clang 12   -O2                       921    5.55    3    911    930
GCC 11.1   -O2                       985    11.70   3    969    1008
Clang 12   -O3 -march=native         981    0.58    3    980    982
GCC 11.1   -O3 -march=native         1036   5.57    3    1029   1047
Clang 12   -O3 -march=native -flto   961    4.37    3    956    970
GCC 11.1   -O3 -march=native -flto   964    3.48    3    958    970

1. (CC) gcc options: -fopenmp -O3 -march=native -flto -pthread -ljpeg -lz -lm -lpthread

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile exercises both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.4 - Preset: Exhaustive
Seconds, Fewer Is Better

    Flags                      Clang 12              GCC 11.1
    -O2                        52.20 ± 0.11 (N=3)    57.14 ± 0.09 (N=3)
    -O3 -march=native          51.82 ± 0.11 (N=3)    56.64 ± 0.08 (N=3)
    -O3 -march=native -flto    51.90 ± 0.07 (N=3)    56.19 ± 0.10 (N=3)

Compiler options: -O3 -march=native -flto -O2 -pthread

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33 - Operation: Enhanced
Iterations Per Minute, More Is Better

    Flags                      Clang 12            GCC 11.1
    -O2                        411 ± 0.33 (N=3)    422 ± 0.33 (N=3)
    -O3 -march=native          452 ± 0.33 (N=3)    449 ± 0.33 (N=3)
    -O3 -march=native -flto    451                 449

(No standard error was reported for the two -flto results in this export.)

Compiler options: -fopenmp -O3 -march=native -flto -pthread -ljpeg -lz -lm -lpthread

x265

This is a simple test of the x265 encoder running on the CPU, with 1080p and 4K input options, to measure H.265 video encode performance. Learn more via the OpenBenchmarking.org test page.

x265 3.4 - Video Input: Bosphorus 4K
Frames Per Second, More Is Better

    Flags                      Clang 12              GCC 11.1
    -O2                        27.77 ± 0.12 (N=3)    26.24 ± 0.06 (N=3)
    -O3 -march=native          27.75 ± 0.04 (N=3)    25.91 ± 0.04 (N=3)
    -O3 -march=native -flto    28.47 ± 0.10 (N=3)    26.17 ± 0.09 (N=3)

Compiler options: -O3 -march=native -flto -O2 -rdynamic -lpthread -lrt -ldl

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 7 - Input: Bosphorus 1080p
Frames Per Second, More Is Better

    Flags                      Clang 12               GCC 11.1
    -O2                        225.15 ± 0.95 (N=3)    217.32 ± 0.49 (N=3)
    -O3 -march=native          229.27 ± 0.32 (N=3)    221.71 ± 0.83 (N=3)
    -O3 -march=native -flto    236.05 ± 1.15 (N=3)    227.04 ± 0.03 (N=3)

Compiler options: -O3 -march=native -flto -fPIE -fPIC -O2 -pie -rdynamic -lpthread -lrt

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

Liquid-DSP 2021.01.31 - Threads: 8 - Buffer Length: 256 - Filter Length: 57
samples/s, More Is Better

    Flags                      Clang 12                         GCC 11.1
    -O2                        576,660,000 ± 1,824,399 (N=3)    594,850,000 ± 185,203 (N=3)
    -O3 -march=native          576,050,000 ± 4,072,559 (N=3)    611,560,000 ± 4,455,270 (N=3)
    -O3 -march=native -flto    587,800,000 ± 1,855,020 (N=3)    619,393,333 ± 1,343,755 (N=3)

Compiler options: -O3 -march=native -flto -pthread -lm -lc -lliquid

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility, using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless
Encode Time in Seconds, Fewer Is Better

    Flags                      Clang 12              GCC 11.1
    -O2                        13.65 ± 0.19 (N=3)    13.18 ± 0.04 (N=3)
    -O3 -march=native          14.11 ± 0.05 (N=3)    13.55 ± 0.06 (N=3)
    -O3 -march=native -flto    13.64 ± 0.15 (N=4)    13.25 ± 0.17 (N=3)

Compiler options: -fvisibility=hidden -O3 -march=native -flto -pthread -lm -ljpeg

Timed HMMer Search

This test searches through the Pfam database of profile hidden Markov models. The search finds the domain structure of the Drosophila Sevenless protein. Learn more via the OpenBenchmarking.org test page.

Timed HMMer Search 3.3.2 - Pfam Database Search
Seconds, Fewer Is Better

    Flags                      Clang 12              GCC 11.1
    -O2                        96.09 ± 0.08 (N=3)    99.72 ± 0.04 (N=3)
    -O3 -march=native          95.21 ± 0.11 (N=3)    97.74 ± 0.04 (N=3)
    -O3 -march=native -flto    93.31 ± 0.08 (N=3)    97.71 ± 0.08 (N=3)

Compiler options: -O3 -march=native -flto -pthread -lhmmer -leasel -lm -lmpi

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: resnet18
ms, Fewer Is Better

    Flags                      Clang 12              GCC 11.1
    -O2                        14.05 ± 0.23 (N=3)    14.50 ± 0.11 (N=15)
    -O3 -march=native          13.90 ± 0.13 (N=3)    14.33 ± 0.09 (N=15)
    -O3 -march=native -flto    13.86 ± 0.13 (N=3)    14.80 ± 0.22 (N=3)

Compiler options: -O3 -march=native -flto -O2 -rdynamic -lpthread

Timed MrBayes Analysis

This test performs a Bayesian analysis of a set of primate genome sequences to estimate their phylogeny. Learn more via the OpenBenchmarking.org test page.

Timed MrBayes Analysis 3.2.7 - Primate Phylogeny Analysis
Seconds, Fewer Is Better

    Flags                      Clang 12              GCC 11.1
    -O2                        93.53 ± 0.21 (N=3)    96.79 ± 0.13 (N=3)
    -O3 -march=native          90.79 ± 0.18 (N=3)    92.97 ± 0.29 (N=3)
    -O3 -march=native -flto    92.41 ± 0.19 (N=3)    93.34 ± 0.13 (N=3)

Compiler options: -mmmx -msse -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msse4a -msha -maes -mavx -mfma -mavx2 -mrdrnd -mbmi -mbmi2 -madx -O3 -std=c99 -pedantic -march=native -flto -lm

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.2.3 - Target: CPU - Model: SqueezeNet v1.1
ms, Fewer Is Better

    Flags                      Clang 12               GCC 11.1
    -O2                        208.56 ± 0.23 (N=3)    208.78 ± 0.08 (N=3)
    -O3 -march=native          215.78 ± 0.69 (N=3)    214.28 ± 0.50 (N=3)
    -O3 -march=native -flto    207.93 ± 0.41 (N=3)    202.57 ± 0.11 (N=3)

Compiler options: -O3 -march=native -flto -pthread -fvisibility=hidden -O2 -rdynamic -ldl

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility, using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless, Highest Compression
Encode Time in Seconds, Fewer Is Better

    Flags                      Clang 12              GCC 11.1
    -O2                        29.12 ± 0.13 (N=3)    27.42 ± 0.03 (N=3)
    -O3 -march=native          28.64 ± 0.05 (N=3)    28.33 ± 0.02 (N=3)
    -O3 -march=native -flto    27.40 ± 0.02 (N=3)    27.95 ± 0.07 (N=3)

Compiler options: -fvisibility=hidden -O3 -march=native -flto -pthread -lm -ljpeg

PJSIP

PJSIP is a free and open source multimedia communication library written in C that implements standards-based protocols such as SIP, SDP, RTP, STUN, TURN, and ICE. It combines the SIP signaling protocol with a rich multimedia framework and NAT traversal functionality into a high-level API that is portable and suitable for almost any type of system, from desktops and embedded systems to mobile handsets. This test profile uses pjsip-perf with both the client and server on the same system. More details on the PJSIP benchmark at https://www.pjsip.org/high-performance-sip.htm Learn more via the OpenBenchmarking.org test page.

PJSIP 2.11 - Method: INVITE
Responses Per Second, More Is Better

Only five results survive in this export for the six configurations, and their mapping to the individual flag sets is not recoverable: 4575 (± 42.78, N=3), 4815 (± 16.33, N=3), 4708 (± 66.36, N=3), 4671 (± 32.67, N=15), 4616 (± 65.77, N=3).

Compiler options: -lstdc++ -lssl -lcrypto -lm -lrt -lpthread -O3 -march=native

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

Liquid-DSP 2021.01.31 - Threads: 16 - Buffer Length: 256 - Filter Length: 57
samples/s, More Is Better

    Flags                      Clang 12                           GCC 11.1
    -O2                        1,056,400,000 ± 2,690,725 (N=3)    1,041,200,000 ± 4,650,806 (N=3)
    -O3 -march=native          1,061,900,000 ± 493,288 (N=3)      1,085,000,000 ± 1,252,996 (N=3)
    -O3 -march=native -flto    1,073,466,667 ± 3,219,903 (N=3)    1,093,266,667 ± 783,865 (N=3)

Compiler options: -O3 -march=native -flto -pthread -lm -lc -lliquid

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 29Oct2020 - Model: Rhodopsin Protein
ns/day, More Is Better

    Flags                      Clang 12              GCC 11.1
    -O2                        13.09 ± 0.08 (N=3)    12.86 ± 0.11 (N=8)
    -O3 -march=native          13.30 ± 0.02 (N=3)    12.75 ± 0.14 (N=4)
    -O3 -march=native -flto    13.34 ± 0.01 (N=3)    12.81 ± 0.09 (N=15)

Compiler options: -O3 -march=native -flto -O2 -pthread -lm

SQLite Speedtest

This is a benchmark of SQLite's speedtest1 program with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.

SQLite Speedtest 3.30 - Timed Time - Size 1,000
Seconds, Fewer Is Better

    Flags                      Clang 12              GCC 11.1
    -O2                        47.79 ± 0.13 (N=3)    47.40 ± 0.02 (N=3)
    -O3 -march=native          47.63 ± 0.50 (N=3)    46.92 ± 0.16 (N=3)
    -O3 -march=native -flto    48.58 ± 0.18 (N=3)    46.48 ± 0.54 (N=3)

Compiler options: -O3 -march=native -flto -ldl -lz -lpthread

PJSIP

PJSIP is a free and open source multimedia communication library written in C that implements standards-based protocols such as SIP, SDP, RTP, STUN, TURN, and ICE. It combines the SIP signaling protocol with a rich multimedia framework and NAT traversal functionality into a high-level API that is portable and suitable for almost any type of system, from desktops and embedded systems to mobile handsets. This test profile uses pjsip-perf with both the client and server on the same system. More details on the PJSIP benchmark at https://www.pjsip.org/high-performance-sip.htm Learn more via the OpenBenchmarking.org test page.

PJSIP 2.11 - Method: OPTIONS, Stateless
Responses Per Second, More Is Better

Only five results survive in this export for the six configurations, and their mapping to the individual flag sets is not recoverable: 221107 (± 1827.48, N=3), 230599 (± 586.86, N=3), 222572 (± 962.04, N=3), 221759 (± 1749.48, N=3), 222581 (± 705.50, N=3).

Compiler options: -lstdc++ -lssl -lcrypto -lm -lrt -lpthread -O3 -march=native

libjpeg-turbo tjbench

tjbench is a JPEG decompression/compression benchmark that is part of libjpeg-turbo, a JPEG image codec library optimized for SIMD instructions on modern CPU architectures. Learn more via the OpenBenchmarking.org test page.

libjpeg-turbo tjbench 2.1.0 - Test: Decompression Throughput
Megapixels/sec, More Is Better

    Flags                      Clang 12               GCC 11.1
    -O2                        267.55 ± 1.65 (N=3)    277.50 ± 1.92 (N=3)
    -O3 -march=native          272.51 ± 0.59 (N=3)    268.11 ± 1.27 (N=3)
    -O3 -march=native -flto    266.23 ± 0.54 (N=3)    270.91 ± 0.45 (N=3)

Compiler options: -O3 -march=native -flto -rdynamic -lm

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: vgg16
ms, Fewer Is Better

    Flags                      Clang 12              GCC 11.1
    -O2                        57.26 ± 0.08 (N=3)    56.82 ± 0.08 (N=15)
    -O3 -march=native          56.93 ± 0.53 (N=3)    57.61 ± 0.08 (N=15)
    -O3 -march=native -flto    56.11 ± 0.45 (N=3)    57.84 ± 0.23 (N=3)

Compiler options: -O3 -march=native -flto -O2 -rdynamic -lpthread

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 10 - Input: Bosphorus 1080p
Frames Per Second, More Is Better

    Flags                      Clang 12               GCC 11.1
    -O2                        368.55 ± 0.23 (N=3)    365.79 ± 1.24 (N=3)
    -O3 -march=native          375.95 ± 1.66 (N=3)    369.54 ± 1.07 (N=3)
    -O3 -march=native -flto    375.08 ± 0.90 (N=3)    368.55 ± 0.57 (N=3)

Compiler options: -O3 -march=native -flto -fPIE -fPIC -O2 -pie -rdynamic -lpthread -lrt

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Tuning: Visual Quality Optimized - Input: Bosphorus 1080p
Frames Per Second, More Is Better

    Flags                      Clang 12               GCC 11.1
    -O2                        225.57 ± 1.28 (N=3)    222.63 ± 0.82 (N=3)
    -O3 -march=native          227.73 ± 0.75 (N=3)    223.11 ± 0.87 (N=3)
    -O3 -march=native -flto    227.66 ± 0.89 (N=3)    222.33 ± 0.97 (N=3)

Compiler options: -O3 -fcommon -march=native -flto -fPIE -fPIC -fvisibility=hidden -O2 -pie -rdynamic -lpthread -lrt -lm

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: alexnet
ms, Fewer Is Better

    Flags                      Clang 12              GCC 11.1
    -O2                        11.09 ± 0.03 (N=3)    11.16 ± 0.06 (N=15)
    -O3 -march=native          11.16 ± 0.05 (N=3)    10.99 ± 0.04 (N=15)
    -O3 -march=native -flto    11.25 ± 0.10 (N=3)    11.13 ± 0.06 (N=3)

Compiler options: -O3 -march=native -flto -O2 -rdynamic -lpthread

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080p
Frames Per Second, More Is Better

    Flags                      Clang 12               GCC 11.1
    -O2                        235.64 ± 0.18 (N=3)    235.27 ± 0.37 (N=3)
    -O3 -march=native          239.88 ± 0.40 (N=3)    235.59 ± 0.23 (N=3)
    -O3 -march=native -flto    238.13 ± 0.48 (N=3)    234.70 ± 0.33 (N=3)

Compiler options: -O3 -fcommon -march=native -flto -fPIE -fPIC -fvisibility=hidden -O2 -pie -rdynamic -lpthread -lrt -lm

SVT-VP9 0.3 - Tuning: VMAF Optimized - Input: Bosphorus 1080p
Frames Per Second, More Is Better

    Flags                      Clang 12               GCC 11.1
    -O2                        231.35 ± 0.33 (N=3)    230.53 ± 0.09 (N=3)
    -O3 -march=native          234.71 ± 0.32 (N=3)    231.16 ± 0.46 (N=3)
    -O3 -march=native -flto    232.66 ± 0.70 (N=3)    230.43 ± 0.18 (N=3)

Compiler options: -O3 -fcommon -march=native -flto -fPIE -fPIC -fvisibility=hidden -O2 -pie -rdynamic -lpthread -lrt -lm

PJSIP

PJSIP is a free and open source multimedia communication library written in C that implements standards-based protocols such as SIP, SDP, RTP, STUN, TURN, and ICE. It combines the SIP signaling protocol with a rich multimedia framework and NAT traversal functionality into a high-level API that is portable and suitable for almost any type of system, from desktops and embedded systems to mobile handsets. This test profile uses pjsip-perf with both the client and server on the same system. More details on the PJSIP benchmark at https://www.pjsip.org/high-performance-sip.htm Learn more via the OpenBenchmarking.org test page.

PJSIP 2.11 - Method: OPTIONS, Stateful
Responses Per Second, More Is Better

Only five results survive in this export for the six configurations, and their mapping to the individual flag sets is not recoverable: 7942 (± 27.78, N=3), 7860 (± 72.34, N=3), 7958 (± 47.54, N=3), 7982 (± 21.79, N=3), 7916 (± 56.75, N=3).

Compiler options: -lstdc++ -lssl -lcrypto -lm -lrt -lpthread -O3 -march=native

Himeno Benchmark

The Himeno benchmark is a linear solver for the pressure Poisson equation using a point-Jacobi method. Learn more via the OpenBenchmarking.org test page.

Himeno Benchmark 3.0 - Poisson Pressure Solver
MFLOPS, More Is Better

    Flags                      Clang 12                 GCC 11.1
    -O2                        4877.31 ± 62.44 (N=15)   5120.94 ± 83.12 (N=15)
    -O3 -march=native          5192.19 ± 66.92 (N=15)   5314.73 ± 90.91 (N=15)
    -O3 -march=native -flto    5289.92 ± 96.57 (N=13)   5445.64 ± 68.97 (N=15)

Compiler options: -O3 -march=native -flto -mavx2
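The point-Jacobi scheme mentioned above recomputes each unknown from the previous iterate's values. A toy sketch on a 2x2 diagonally dominant system (Himeno itself applies this to a 3D pressure grid; the matrix and right-hand side here are illustrative only):

```python
# Point-Jacobi iteration: x_i = (b_i - sum_{j != i} A[i][j] * x_j) / A[i][i],
# where every x_j on the right comes from the previous iterate.
A = [[4.0, 1.0], [1.0, 3.0]]  # diagonally dominant, so Jacobi converges
b = [6.0, 7.0]                # exact solution of A x = b is x = [1, 2]
x = [0.0, 0.0]

for _ in range(50):
    x = [(b[i] - sum(A[i][j] * x[j] for j in range(2) if j != i)) / A[i][i]
         for i in range(2)]

print([round(v, 6) for v in x])  # converges to [1.0, 2.0]
```

Because each sweep only reads the previous iterate, all updates are independent, which is what makes the method attractive for the memory-bandwidth-bound parallel solves Himeno measures.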

Geometric Mean Of All Test Results

Geometric Mean, More Is Better - Result Composite, Ryzen 9 5950X Clang 12 vs. GCC 11 Benchmarks

    Flags                      Clang 12   GCC 11.1
    -O2                        52.51      50.71
    -O3 -march=native          53.78      53.51
    -O3 -march=native -flto    54.73      52.51
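The composite above is a geometric mean, which aggregates multiplicatively so that no single test dominates the overall score. A minimal sketch (the sample values are illustrative, not taken from this result file):

```python
# Geometric mean: exp of the mean of the logs, equivalently the
# n-th root of the product of the values.
import math

def geometric_mean(values):
    return math.exp(sum(math.log(v) for v in values) / len(values))

print(geometric_mean([2.0, 8.0, 4.0]))  # cube root of 64
```

Using logs rather than a running product keeps the computation stable even across dozens of results spanning several orders of magnitude.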