Ryzen 9 5950X AOCC 3.0 Compiler Benchmarking

Benchmarks for a future article.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2103167-PTS-RYZEN95988
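
For reference, assuming the Phoronix Test Suite is already installed, the comparison is a single command:

  # Download this result file and run the same tests locally for a side-by-side comparison
  phoronix-test-suite benchmark 2103167-PTS-RYZEN95988
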
Test categories represented in this comparison: Audio Encoding (3 tests), AV1 (4), C++ Boost Tests (3), Chess Test Suite (2), Timed Code Compilation (2), C/C++ Compiler Tests (16), Compression Tests (2), CPU Massive (16), Creator Workloads (28), Cryptography (2), Database Test Suite (2), Encoding (10), Game Development (4), HPC - High Performance Computing (8), Imaging (7), Machine Learning (6), Multi-Core (17), OpenMPI Tests (2), Programmer / Developer System Benchmarks (5), Python Tests (4), Raytracing (3), Renderers (4), Scientific Computing (2), Software Defined Radio (2), Server (3), Server CPU Tests (12), Single-Threaded (2), Speech (2), Telephony (2), Texture Compression (3), Video Encoding (7), Common Workstation Benchmarks (2).

Test runs compared (Result Identifier - Date Run - Test Duration):
  GCC 10.2       - March 14 2021 - 14 Hours, 26 Minutes
  LLVM Clang 12  - March 15 2021 - 11 Hours, 8 Minutes
  AMD AOCC 2.3   - March 14 2021 - 10 Hours, 49 Minutes
  AMD AOCC 3.0   - March 15 2021 - 10 Hours, 54 Minutes
  Average test duration per run: 11 Hours, 49 Minutes

System details (Phoronix Test Suite / OpenBenchmarking.org):
  Processor:          AMD Ryzen 9 5950X 16-Core @ 3.40GHz (16 Cores / 32 Threads)
  Motherboard:        ASUS ROG CROSSHAIR VIII HERO (WI-FI) (3204 BIOS)
  Chipset:            AMD Starship/Matisse
  Memory:             32GB
  Disk:               2000GB Corsair Force MP600 + 2000GB
  Graphics:           AMD NAVY_FLOUNDER 12GB (2855/1000MHz)
  Audio:              AMD Device ab28
  Monitor:            ASUS MG28U
  Network:            Realtek RTL8125 2.5GbE + Intel I211 + Intel Wi-Fi 6 AX200
  OS:                 Ubuntu 20.10
  Kernel:             5.11.6-051106-generic (x86_64)
  Desktop:            GNOME Shell 3.38.2
  Display Server:     X Server 1.20.9
  OpenGL:             4.6 Mesa 21.1.0-devel (git-684f97d 2021-03-12 groovy-oibaf-ppa) (LLVM 11.0.1)
  Vulkan:             1.2.168
  Compilers:          GCC 10.2.0, Clang 12.0.0-++rc3-1~exp1~oibaf~g, Clang 11.0.0, Clang 12.0.0
  File-System:        ext4
  Screen Resolution:  3840x2160

System logs / notes:
  - Transparent Huge Pages: madvise
  - CXXFLAGS="-O3 -march=native" CFLAGS="-O3 -march=native"
  - GCC 10.2 configure: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
  - AMD AOCC 2.3: Optimized build with assertions; Default target: x86_64-unknown-linux-gnu; Host CPU: (unknown)
  - AMD AOCC 3.0: Optimized build with assertions; Default target: x86_64-unknown-linux-gnu; Host CPU: (unknown)
  - Scaling Governor: acpi-cpufreq performance (Boost: Enabled)
  - CPU Microcode: 0xa201009
  - Python 3.8.6
  - Security: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional IBRS_FW STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected
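
The -O3 -march=native CFLAGS/CXXFLAGS above were exported globally for every run. As a rough sketch of how a given run's toolchain could be selected before invoking the Phoronix Test Suite (the AOCC install path and compiler binary names are assumptions; the result file records only the flags):

  # Example for an AOCC run; the GCC run would instead export CC=gcc CXX=g++
  export PATH=/opt/AMD/aocc-compiler-3.0.0/bin:$PATH        # assumed AOCC install prefix
  export CC=clang CXX=clang++
  export CFLAGS="-O3 -march=native" CXXFLAGS="-O3 -march=native"
  phoronix-test-suite benchmark 2103167-PTS-RYZEN95988      # or any other test/suite identifier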

Logarithmic Result Overview (GCC 10.2 vs. LLVM Clang 12 vs. AMD AOCC 2.3 vs. AMD AOCC 3.0) covering: Sysbench, Timed LLVM Compilation, C-Ray, LibRaw, Etcpak, Ogg Audio Encoding, Google SynthMark, GraphicsMagick, SVT-AV1, TSCP, QuantLib, NCNN, TNN, JPEG XL Decoding, POV-Ray, Zstd Compression, SVT-VP9, libavif avifenc, JPEG XL, ONNX Runtime, ASTC Encoder, WebP2 Image Encode, WebP Image Encode, Basis Universal, dav1d, LZ4 Compression, Tachyon, x265, simdjson, Timed MrBayes Analysis, RNNoise, Ngspice, Redis, Gcrypt Library, Liquid-DSP, Crypto++, x264, WavPack Audio Encoding, GNU Radio, and Timed Godot Game Engine Compilation.

Detailed results table: side-by-side results for every test configuration in this comparison across GCC 10.2, LLVM Clang 12, AMD AOCC 2.3, and AMD AOCC 3.0. The individual test results follow below.

Sysbench

This is a benchmark of Sysbench with the built-in CPU and memory sub-tests. Sysbench is a scriptable multi-threaded benchmark tool based on LuaJIT. Learn more via the OpenBenchmarking.org test page.
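
Outside of the test suite, a comparable standalone run looks roughly like the following; the thread count here matches the 5950X and is not necessarily the exact argument set the test profile passes:

  # Multi-threaded run of sysbench's CPU sub-test
  sysbench cpu --threads=32 run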

Sysbench 1.0.20 - Test: CPU (Events Per Second; more is better)
  AMD AOCC 2.3:  210804861.92
  AMD AOCC 3.0:  210533984.51
  GCC 10.2:      91743.72
  LLVM Clang 12: 2445437.63
  Flags: (CC) gcc options: -pthread -O2 -funroll-loops -O3 -march=native -rdynamic -ldl -laio -lm

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet", focused on providing open-source, very fast ETC and S3 texture compression support. Learn more via the OpenBenchmarking.org test page.

Etcpak 0.7 - Configuration: DXT1 (Mpx/s; more is better)
  AMD AOCC 2.3:  2986.71
  AMD AOCC 3.0:  3583.01
  GCC 10.2:      1546.30
  LLVM Clang 12: 3669.19
  Flags: (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler. Learn more via the OpenBenchmarking.org test page.
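
A manual equivalent is a standard Release build of the LLVM tree; a generic sketch (the test profile's exact CMake options are not shown in this result file):

  # Configure and build LLVM with whichever compiler CC/CXX points at
  cmake -S llvm -B build -G Ninja -DCMAKE_BUILD_TYPE=Release
  ninja -C build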

Timed LLVM Compilation 10.0 - Time To Compile (Seconds; fewer is better)
  AMD AOCC 2.3:  576.19
  AMD AOCC 3.0:  610.96
  GCC 10.2:      370.57
  LLVM Clang 12: 302.70

C-Ray

This is a test of C-Ray, a simple raytracer designed to test floating-point CPU performance. This test is multi-threaded (16 threads per core); the configuration used here shoots 16 rays per pixel for anti-aliasing and generates a 4K image. Learn more via the OpenBenchmarking.org test page.

C-Ray 1.1 - Total Time - 4K, 16 Rays Per Pixel (Seconds; fewer is better)
  AMD AOCC 2.3:  44.53
  AMD AOCC 3.0:  44.33
  GCC 10.2:      25.09
  LLVM Clang 12: 44.89
  Flags: (CC) gcc options: -lm -lpthread -O3 -march=native

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33 - Operation: Sharpen (Iterations Per Minute; more is better)
  AMD AOCC 2.3:  241
  AMD AOCC 3.0:  240
  GCC 10.2:      375
  LLVM Clang 12: 237
  Flags: (CC) gcc options: -fopenmp -O3 -march=native -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

LibRaw

LibRaw is a RAW image decoder for digital camera photos. This test profile runs LibRaw's post-processing benchmark. Learn more via the OpenBenchmarking.org test page.

LibRaw 0.20 - Post-Processing Benchmark (Mpix/sec; more is better)
  AMD AOCC 2.3:  50.37
  AMD AOCC 3.0:  52.68
  GCC 10.2:      78.66
  LLVM Clang 12: 54.14
  Flags: (CXX) g++ options: -O3 -march=native -fopenmp -ljpeg -lz -lm

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: regnety_400m (ms; fewer is better)
  AMD AOCC 2.3:  12.19
  AMD AOCC 3.0:  12.30
  GCC 10.2:      17.61
  LLVM Clang 12: 17.06
  Flags: (CXX) g++ options: -O3 -march=native -rdynamic -lpthread

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33 - Operation: HWB Color Space (Iterations Per Minute; more is better)
  AMD AOCC 2.3:  848
  AMD AOCC 3.0:  805
  GCC 10.2:      1115
  LLVM Clang 12: 844
  Flags: (CC) gcc options: -fopenmp -O3 -march=native -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test covering both compression and decompression. Learn more via the OpenBenchmarking.org test page.
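
A standalone compression run at the presets used below looks roughly like this; the binary name and input file are placeholders (astcenc 2.x ships architecture-specific binaries such as astcenc-avx2):

  # Compress an LDR PNG to 6x6 ASTC blocks at the two presets benchmarked here
  astcenc-avx2 -cl input.png output.astc 6x6 -medium
  astcenc-avx2 -cl input.png output.astc 6x6 -thorough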

ASTC Encoder 2.4 - Preset: Thorough (Seconds; fewer is better)
  AMD AOCC 2.3:  9.2012
  AMD AOCC 3.0:  9.3493
  GCC 10.2:      6.9922
  LLVM Clang 12: 9.4996
  Flags: (CXX) g++ options: -O3 -march=native -flto -pthread

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet", focused on providing open-source, very fast ETC and S3 texture compression support. Learn more via the OpenBenchmarking.org test page.

Etcpak 0.7 - Configuration: ETC1 (Mpx/s; more is better)
  AMD AOCC 2.3:  285.29
  AMD AOCC 3.0:  286.93
  GCC 10.2:      386.56
  LLVM Clang 12: 383.47
  Flags: (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread

SVT-AV1

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 CPU-based multi-threaded video encoder for the AV1 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 0.8 - Encoder Mode: Enc Mode 8 - Input: 1080p (Frames Per Second; more is better)
  AMD AOCC 2.3:  65.75
  AMD AOCC 3.0:  64.28
  GCC 10.2:      51.77
  LLVM Clang 12: 65.31
  Flags: (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33 - Operation: Resizing (Iterations Per Minute; more is better)
  AMD AOCC 2.3:  1824
  AMD AOCC 3.0:  1720
  GCC 10.2:      2165
  LLVM Clang 12: 1789
  Flags: (CC) gcc options: -fopenmp -O3 -march=native -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms; fewer is better)
  AMD AOCC 2.3:  3.52
  AMD AOCC 3.0:  3.53
  GCC 10.2:      4.43
  LLVM Clang 12: 3.79
  Flags: (CXX) g++ options: -O3 -march=native -rdynamic -lpthread

NCNN 20201218 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms; fewer is better)
  AMD AOCC 2.3:  3.06
  AMD AOCC 3.0:  3.07
  GCC 10.2:      3.85
  LLVM Clang 12: 3.33
  Flags: (CXX) g++ options: -O3 -march=native -rdynamic -lpthread

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.2.3 - Target: CPU - Model: MobileNet v2 (ms; fewer is better)
  AMD AOCC 2.3:  252.45
  AMD AOCC 3.0:  260.66
  GCC 10.2:      216.28
  LLVM Clang 12: 270.79
  Flags: (CXX) g++ options: -O3 -march=native -pthread -fvisibility=hidden -rdynamic -ldl

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: mnasnet (ms; fewer is better)
  AMD AOCC 2.3:  3.16
  AMD AOCC 3.0:  3.18
  GCC 10.2:      3.93
  LLVM Clang 12: 3.45
  Flags: (CXX) g++ options: -O3 -march=native -rdynamic -lpthread

Ogg Audio Encoding

This test times how long it takes to encode a sample WAV file to Ogg format using the reference Xiph.org tools/libraries. Learn more via the OpenBenchmarking.org test page.
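
The underlying operation is a plain oggenc invocation along these lines (file names are placeholders):

  # Encode a WAV file to Ogg Vorbis with the reference encoder
  oggenc -o sample.ogg sample.wav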

Ogg Audio Encoding 1.3.4 - WAV To Ogg (Seconds; fewer is better)
  AMD AOCC 2.3:  16.56
  AMD AOCC 3.0:  16.54
  GCC 10.2:      13.58
  LLVM Clang 12: 13.37
  Flags: (CC) gcc options: -O2 -ffast-math -fsigned-char -O3 -march=native

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test covering both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.4 - Preset: Medium (Seconds; fewer is better)
  AMD AOCC 2.3:  3.2899
  AMD AOCC 3.0:  3.4040
  GCC 10.2:      4.0524
  LLVM Clang 12: 3.5076
  Flags: (CXX) g++ options: -O3 -march=native -flto -pthread

Google SynthMark

SynthMark is a cross platform tool for benchmarking CPU performance under a variety of real-time audio workloads. It uses a polyphonic synthesizer model to provide standardized tests for latency, jitter and computational throughput. Learn more via the OpenBenchmarking.org test page.

Google SynthMark 20201109 - Test: VoiceMark_100 (Voices; more is better)
  AMD AOCC 2.3:  807.37
  AMD AOCC 3.0:  789.22
  GCC 10.2:      966.30
  LLVM Clang 12: 795.81
  Flags: (CXX) g++ options: -lm -lpthread -std=c++11 -Ofast

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
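
The settings benchmarked below map onto ordinary zstd command lines, roughly as follows; these are illustrative and not necessarily the exact arguments the test profile passes:

  # Level 3 with long-distance matching, using all hardware threads
  zstd -3 --long -T0 FreeBSD-12.2-RELEASE-amd64-memstick.img -o image-l3.zst
  # Level 19 with long-distance matching
  zstd -19 --long -T0 FreeBSD-12.2-RELEASE-amd64-memstick.img -o image-l19.zst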

Zstd Compression 1.4.9 - Compression Level: 3, Long Mode - Compression Speed (MB/s; more is better)
  AMD AOCC 2.3:  1166.4
  AMD AOCC 3.0:  1186.0
  GCC 10.2:      1425.9
  LLVM Clang 12: 1191.6
  Flags: (CC) gcc options: -O3 -march=native -pthread -lz -llzma

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33 - Operation: Rotate (Iterations Per Minute; more is better)
  AMD AOCC 2.3:  928
  AMD AOCC 3.0:  867
  GCC 10.2:      1056
  LLVM Clang 12: 1016
  Flags: (CC) gcc options: -fopenmp -O3 -march=native -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: efficientnet-b0 (ms; fewer is better)
  AMD AOCC 2.3:  4.53
  AMD AOCC 3.0:  4.50
  GCC 10.2:      5.32
  LLVM Clang 12: 4.80
  Flags: (CXX) g++ options: -O3 -march=native -rdynamic -lpthread

TSCP

This is a performance test of TSCP, Tom Kerrigan's Simple Chess Program, which has a built-in performance benchmark. Learn more via the OpenBenchmarking.org test page.

TSCP 1.81 - AI Chess Performance (Nodes Per Second; more is better)
  AMD AOCC 2.3:  2314225
  AMD AOCC 3.0:  2283512
  GCC 10.2:      1965773
  LLVM Clang 12: 2148154
  Flags: (CC) gcc options: -O3 -march=native

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: blazeface (ms; fewer is better)
  AMD AOCC 2.3:  1.57
  AMD AOCC 3.0:  1.56
  GCC 10.2:      1.83
  LLVM Clang 12: 1.73
  Flags: (CXX) g++ options: -O3 -march=native -rdynamic -lpthread

QuantLib

QuantLib is an open-source library/framework around quantitative finance for modeling, trading, and risk management scenarios. QuantLib is written in C++ with Boost, and its built-in benchmark reports the QuantLib Benchmark Index score. Learn more via the OpenBenchmarking.org test page.

QuantLib 1.21 (MFLOPS; more is better)
  AMD AOCC 2.3:  3710.4
  AMD AOCC 3.0:  3646.4
  GCC 10.2:      3196.9
  LLVM Clang 12: 3538.5
  Flags: (CXX) g++ options: -O3 -march=native -rdynamic

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33 - Operation: Noise-Gaussian (Iterations Per Minute; more is better)
  AMD AOCC 2.3:  402
  AMD AOCC 3.0:  392
  GCC 10.2:      454
  LLVM Clang 12: 398
  Flags: (CC) gcc options: -fopenmp -O3 -march=native -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet", focused on providing open-source, very fast ETC and S3 texture compression support. Learn more via the OpenBenchmarking.org test page.

Etcpak 0.7 - Configuration: ETC2 (Mpx/s; more is better)
  AMD AOCC 2.3:  236.47
  AMD AOCC 3.0:  242.03
  GCC 10.2:      245.04
  LLVM Clang 12: 272.99
  Flags: (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

Liquid-DSP 2021.01.31 - Threads: 32 - Buffer Length: 256 - Filter Length: 57 (samples/s; more is better)
  AMD AOCC 2.3:  1332333333
  AMD AOCC 3.0:  1334900000
  GCC 10.2:      1164966667
  LLVM Clang 12: 1335233333
  Flags: (CC) gcc options: -O3 -march=native -pthread -lm -lc -lliquid

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.6 - Model: shufflenet-v2-10 - Device: OpenMP CPU (Inferences Per Minute; more is better)
  AMD AOCC 2.3:  17105
  AMD AOCC 3.0:  15474
  GCC 10.2:      15049
  LLVM Clang 12: 14972
  Flags: (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -ldl -lrt

JPEG XL Decoding

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities, with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is suited for JPEG XL decode performance testing to a PNG output file, while the pts/jpegxl test profile covers encode performance. Learn more via the OpenBenchmarking.org test page.

JPEG XL Decoding 0.3.3 - CPU Threads: 1 (MP/s; more is better)
  AMD AOCC 2.3:  64.34
  AMD AOCC 3.0:  59.92
  GCC 10.2:      56.53
  LLVM Clang 12: 62.27

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
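
The "Quality 100, Highest Compression" setting corresponds to a cwebp invocation roughly like the following (file names are placeholders):

  # Quality 100 with the slowest/strongest compression method
  cwebp -q 100 -m 6 sample.jpg -o sample.webp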

WebP Image Encode 1.1 - Encode Settings: Quality 100, Highest Compression (Encode Time - Seconds; fewer is better)
  AMD AOCC 2.3:  4.609
  AMD AOCC 3.0:  4.937
  GCC 10.2:      5.242
  LLVM Clang 12: 4.674
  Flags: (CC) gcc options: -fvisibility=hidden -O3 -march=native -pthread -lm -ljpeg -lpng16 -ltiff

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) run on the CPU with a sample 1080p video file. Learn more via the OpenBenchmarking.org test page.
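
Speed 0 two-pass encoding with libaom's aomenc looks roughly like the following; the input file is a placeholder and this is not necessarily the exact argument set the test profile uses:

  # Two-pass AV1 encode at the slowest (highest quality) speed setting
  aomenc --passes=2 --fpf=first-pass.log --cpu-used=0 -o output.webm input-1080p.y4m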

AOM AV1 2.1-rc - Encoder Mode: Speed 0 Two-Pass (Frames Per Second; more is better)
  GCC 10.2:      0.37
  LLVM Clang 12: 0.42
  Flags: (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: mobilenet (ms; fewer is better)
  AMD AOCC 2.3:  10.96
  AMD AOCC 3.0:  11.28
  GCC 10.2:      12.42
  LLVM Clang 12: 11.53
  Flags: (CXX) g++ options: -O3 -march=native -rdynamic -lpthread

SVT-AV1

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 CPU-based multi-threaded video encoder for the AV1 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 0.8 - Encoder Mode: Enc Mode 4 - Input: 1080p (Frames Per Second; more is better)
  AMD AOCC 2.3:  6.859
  AMD AOCC 3.0:  6.917
  GCC 10.2:      6.137
  LLVM Clang 12: 6.823
  Flags: (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.
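
The "Encoder Speed: 6, Lossless" configuration maps onto an avifenc command roughly like this (file names are placeholders):

  # Lossless AVIF encode at speed 6
  avifenc -s 6 --lossless input.jpg output.avif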

libavif avifenc 0.9.0 - Encoder Speed: 6, Lossless (Seconds; fewer is better)
  AMD AOCC 2.3:  27.62
  AMD AOCC 3.0:  27.91
  GCC 10.2:      30.98
  LLVM Clang 12: 27.85
  Flags: (CXX) g++ options: -O3 -fPIC -lm

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: shufflenet-v2 (ms; fewer is better)
  AMD AOCC 2.3:  3.79
  AMD AOCC 3.0:  3.87
  GCC 10.2:      4.23
  LLVM Clang 12: 4.04
  Flags: (CXX) g++ options: -O3 -march=native -rdynamic -lpthread

JPEG XL Decoding

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities, with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is suited for JPEG XL decode performance testing to a PNG output file, while the pts/jpegxl test profile covers encode performance. Learn more via the OpenBenchmarking.org test page.

JPEG XL Decoding 0.3.3 - CPU Threads: All (MP/s; more is better)
  AMD AOCC 2.3:  213.67
  AMD AOCC 3.0:  191.91
  GCC 10.2:      210.99
  LLVM Clang 12: 196.34

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: squeezenet_ssd (ms; fewer is better)
  AMD AOCC 2.3:  12.74
  AMD AOCC 3.0:  12.40
  GCC 10.2:      13.77
  LLVM Clang 12: 12.50
  Flags: (CXX) g++ options: -O3 -march=native -rdynamic -lpthread

JPEG XL

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.

JPEG XL 0.3.3 - Input: JPEG - Encode Speed: 8 (MP/s; more is better)
  AMD AOCC 2.3:  35.87
  AMD AOCC 3.0:  34.34
  GCC 10.2:      38.13
  LLVM Clang 12: 36.44
  Flags: (CXX) g++ options: -O3 -march=native -funwind-tables -O2 -pthread -fPIE -pie -ldl

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.
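
A standalone equivalent of the level 9 compression run is simply (file name is a placeholder):

  # Compress at the highest standard level
  lz4 -9 ubuntu.iso ubuntu.iso.lz4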

LZ4 Compression 1.9.3 - Compression Level: 9 - Compression Speed (MB/s; more is better)
  AMD AOCC 2.3:  68.80
  AMD AOCC 3.0:  68.40
  GCC 10.2:      71.13
  LLVM Clang 12: 64.43
  Flags: (CC) gcc options: -O3

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: resnet50 (ms; fewer is better)
  AMD AOCC 2.3:  23.31
  AMD AOCC 3.0:  23.29
  GCC 10.2:      25.67
  LLVM Clang 12: 23.54
  Flags: (CXX) g++ options: -O3 -march=native -rdynamic -lpthread

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.9 - Compression Level: 19, Long Mode - Decompression Speed (MB/s; more is better)
  AMD AOCC 2.3:  4024.7
  AMD AOCC 3.0:  3978.2
  GCC 10.2:      4350.9
  LLVM Clang 12: 3957.9
  Flags: (CC) gcc options: -O3 -march=native -pthread -lz -llzma

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 0.8.2 - Throughput Test: LargeRandom (GB/s; more is better)
  AMD AOCC 2.3:  1.11
  AMD AOCC 3.0:  1.12
  GCC 10.2:      1.22
  LLVM Clang 12: 1.14
  Flags: (CXX) g++ options: -O3 -march=native -pthread

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.9 - Compression Level: 8, Long Mode - Compression Speed (MB/s; more is better)
  AMD AOCC 2.3:  1025.4
  AMD AOCC 3.0:  1023.5
  GCC 10.2:      1122.6
  LLVM Clang 12: 1034.5
  Flags: (CC) gcc options: -O3 -march=native -pthread -lz -llzma

JPEG XL

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.

JPEG XL 0.3.3 - Input: PNG - Encode Speed: 8 (MP/s; more is better)
  AMD AOCC 2.3:  1.04
  AMD AOCC 3.0:  1.04
  GCC 10.2:      1.14
  LLVM Clang 12: 1.06
  Flags: (CXX) g++ options: -O3 -march=native -funwind-tables -O2 -pthread -fPIE -pie -ldl

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 0.8.2 - Throughput Test: PartialTweets (GB/s; more is better)
  AMD AOCC 2.3:  5.92
  AMD AOCC 3.0:  6.04
  GCC 10.2:      5.64
  LLVM Clang 12: 6.18
  Flags: (CXX) g++ options: -O3 -march=native -pthread

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.
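
The UASTC settings tested below correspond roughly to basisu command lines such as (input file is a placeholder):

  # UASTC encode at level 0 (fastest of the UASTC quality levels)
  basisu -uastc -uastc_level 0 input.png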

Basis Universal 1.13 - Settings: UASTC Level 0 (Seconds; fewer is better)
  AMD AOCC 2.3:  5.453
  AMD AOCC 3.0:  5.639
  GCC 10.2:      5.157
  LLVM Clang 12: 5.522
  Flags: (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 0.8.2 - Throughput Test: DistinctUserID (GB/s; more is better)
  AMD AOCC 2.3:  6.12
  AMD AOCC 3.0:  6.23
  GCC 10.2:      5.73
  LLVM Clang 12: 6.26
  Flags: (CXX) g++ options: -O3 -march=native -pthread

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.
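
For context, a minimal sketch of loading a model through the ONNX Runtime C++ API is shown below. The model path and thread count are placeholder assumptions, and the PTS harness drives onnxruntime's own test tooling rather than this code.

    // Minimal ONNX Runtime session setup (C++ API); link against libonnxruntime.
    #include <onnxruntime_cxx_api.h>
    #include <iostream>

    int main() {
        Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "bench");
        Ort::SessionOptions opts;
        opts.SetIntraOpNumThreads(16);   // match the 16-core CPU in this comparison (assumption)
        opts.SetGraphOptimizationLevel(GraphOptimizationLevel::ORT_ENABLE_ALL);
        // "yolov4.onnx" is a placeholder path for a model from the ONNX Zoo.
        Ort::Session session(env, "yolov4.onnx", opts);
        std::cout << "model loaded with " << session.GetInputCount() << " input(s)\n";
        // A real benchmark would now build Ort::Value input tensors and call session.Run() in a loop.
        return 0;
    }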

ONNX Runtime 1.6, Model: yolov4 - Device: OpenMP CPU (Inferences Per Minute, more is better): AMD AOCC 2.3: 456, AMD AOCC 3.0: 465, GCC 10.2: 433, LLVM Clang 12: 426

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.
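As a hedged sketch of the kind of kernel being measured, the snippet below runs a complex FIR filter through liquid-dsp's C API with the same filter and buffer lengths as this result; the coefficients and input samples are synthetic placeholders, not the test's actual signal.

    // Minimal liquid-dsp FIR filter pass (C API used from C++); link with -lliquid -lm.
    #include <liquid/liquid.h>
    #include <complex>
    #include <vector>

    int main() {
        const unsigned int h_len = 57;     // filter length, as in this test profile
        const unsigned int buf_len = 256;  // buffer length, as in this test profile
        std::vector<float> h(h_len, 1.0f / h_len);              // placeholder coefficients
        firfilt_crcf q = firfilt_crcf_create(h.data(), h_len);  // complex in/out, real taps

        std::vector<std::complex<float>> x(buf_len, {1.0f, 0.0f}), y(buf_len);
        for (unsigned int i = 0; i < buf_len; i++) {
            firfilt_crcf_push(q, x[i]);      // push one input sample
            firfilt_crcf_execute(q, &y[i]);  // compute one output sample
        }
        firfilt_crcf_destroy(q);
        return 0;
    }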

Liquid-DSP 2021.01.31, Threads: 1 - Buffer Length: 256 - Filter Length: 57 (samples/s, more is better): AMD AOCC 2.3: 75031667, AMD AOCC 3.0: 78734000, GCC 10.2: 81844000, LLVM Clang 12: 77794333

POV-Ray

This is a test of POV-Ray, the Persistence of Vision Raytracer. POV-Ray is used to create 3D graphics using ray-tracing. Learn more via the OpenBenchmarking.org test page.

POV-Ray 3.7.0.7, Trace Time (Seconds, fewer is better): AMD AOCC 2.3: 22.54, AMD AOCC 3.0: 22.54, GCC 10.2: 24.09, LLVM Clang 12: 22.12

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as the eventual successor to WebP. Compared to WebP, WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading. Learn more via the OpenBenchmarking.org test page.

WebP2 Image Encode 20210126, Encode Settings: Quality 75, Compression Effort 7 (Seconds, fewer is better): AMD AOCC 2.3: 106.38, AMD AOCC 3.0: 105.72, GCC 10.2: 111.80, LLVM Clang 12: 103.01

WebP2 Image Encode 20210126, Encode Settings: Quality 95, Compression Effort 7 (Seconds, fewer is better): AMD AOCC 2.3: 193.60, AMD AOCC 3.0: 193.43, GCC 10.2: 203.81, LLVM Clang 12: 188.31

WebP2 Image Encode 20210126, Encode Settings: Quality 100, Compression Effort 5 (Seconds, fewer is better): AMD AOCC 2.3: 6.288, AMD AOCC 3.0: 6.364, GCC 10.2: 6.414, LLVM Clang 12: 6.789

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.9.0, Encoder Speed: 2 (Seconds, fewer is better): AMD AOCC 2.3: 21.83, AMD AOCC 3.0: 22.01, GCC 10.2: 23.54, LLVM Clang 12: 22.07

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.
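
The test itself drives the gm command-line tool; purely as an assumption-laden sketch, the equivalent swirl operation through the Magick++ bindings that GraphicsMagick ships might look like this (file names are placeholders, build flags typically come from GraphicsMagick++-config):

    // Swirl a JPEG with Magick++.
    #include <Magick++.h>

    int main(int argc, char **argv) {
        (void)argc;
        Magick::InitializeMagick(argv[0]);
        Magick::Image image;
        image.read("sample.jpg");   // placeholder for the 6000x4000 JPEG input
        image.swirl(90.0);          // the same kind of operation the "Swirl" result measures
        image.write("swirled.jpg");
        return 0;
    }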

GraphicsMagick 1.3.33, Operation: Swirl (Iterations Per Minute, more is better): AMD AOCC 2.3: 1131, AMD AOCC 3.0: 1083, GCC 10.2: 1166, LLVM Clang 12: 1108

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.13, Settings: ETC1S (Seconds, fewer is better): AMD AOCC 2.3: 21.22, AMD AOCC 3.0: 21.37, GCC 10.2: 19.90, LLVM Clang 12: 21.38

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.9.0, Encoder Speed: 6 (Seconds, fewer is better): AMD AOCC 2.3: 8.384, AMD AOCC 3.0: 8.309, GCC 10.2: 8.927, LLVM Clang 12: 8.342

JPEG XL

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.

JPEG XL 0.3.3, Input: JPEG - Encode Speed: 5 (MP/s, more is better): AMD AOCC 2.3: 89.51, AMD AOCC 3.0: 83.54, GCC 10.2: 87.35, LLVM Clang 12: 85.67

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
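
For reference, a minimal one-shot round trip through the zstd library API at level 8 (one of the levels benchmarked here) could look like the following sketch; the payload is a synthetic placeholder rather than the FreeBSD image used by the test.

    // One-shot Zstandard compress/decompress at level 8; link with -lzstd.
    #include <zstd.h>
    #include <string>
    #include <vector>
    #include <iostream>

    int main() {
        std::string input(1 << 20, 'x');                      // placeholder payload, 1 MiB
        size_t bound = ZSTD_compressBound(input.size());
        std::vector<char> compressed(bound);
        size_t csize = ZSTD_compress(compressed.data(), bound,
                                     input.data(), input.size(), 8 /* compression level */);
        if (ZSTD_isError(csize)) { std::cerr << ZSTD_getErrorName(csize) << "\n"; return 1; }

        std::vector<char> restored(input.size());
        size_t dsize = ZSTD_decompress(restored.data(), restored.size(),
                                       compressed.data(), csize);
        std::cout << input.size() << " -> " << csize << " -> " << dsize << " bytes\n";
        return 0;
    }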

Zstd Compression 1.4.9, Compression Level: 8 - Compression Speed (MB/s, more is better): AMD AOCC 2.3: 1117.6, AMD AOCC 3.0: 1096.3, GCC 10.2: 1057.4, LLVM Clang 12: 1043.2

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.9.0, Encoder Speed: 0 (Seconds, fewer is better): AMD AOCC 2.3: 40.75, AMD AOCC 3.0: 41.03, GCC 10.2: 43.62, LLVM Clang 12: 41.08

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.
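
As a rough sketch of what an NCNN inference pass looks like, the snippet below loads a model and runs one extraction; the model files and blob names ("data", "prob") are placeholders that depend on the exported network, and the benchmark itself uses NCNN's bundled benchncnn tool.

    // Minimal NCNN inference pass (C++ API); link against libncnn.
    #include <ncnn/net.h>   // installed header path; in-tree builds use just "net.h"
    #include <cstdio>

    int main() {
        ncnn::Net net;
        net.load_param("googlenet.param");   // placeholder model files
        net.load_model("googlenet.bin");

        ncnn::Mat in(224, 224, 3);           // placeholder input blob
        in.fill(0.5f);

        ncnn::Extractor ex = net.create_extractor();
        ex.input("data", in);                // blob names depend on the exported model
        ncnn::Mat out;
        ex.extract("prob", out);
        printf("output blob has %d elements\n", out.w * out.h * out.c);
        return 0;
    }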

NCNN 20201218, Target: CPU - Model: googlenet (ms, fewer is better): AMD AOCC 2.3: 11.94, AMD AOCC 3.0: 11.96, GCC 10.2: 12.76, LLVM Clang 12: 12.53

JPEG XL

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.

JPEG XL 0.3.3, Input: JPEG - Encode Speed: 7 (MP/s, more is better): AMD AOCC 2.3: 89.32, AMD AOCC 3.0: 83.63, GCC 10.2: 87.07, LLVM Clang 12: 85.81

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) run on the CPU with a sample 1080p video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.1-rc, Encoder Mode: Speed 6 Realtime (Frames Per Second, more is better): GCC 10.2: 35.13, LLVM Clang 12: 37.50

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.8.2, Video Input: Summer Nature 4K (FPS, more is better): AMD AOCC 2.3: 244.15, AMD AOCC 3.0: 229.03, GCC 10.2: 243.69, LLVM Clang 12: 244.37

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
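
The test drives the cwebp utility; as a minimal sketch of the underlying library call, a lossy encode through libwebp's simple API might look like this (the image buffer is synthetic rather than the sample JPEG).

    // Minimal lossy WebP encode with libwebp's simple API; link with -lwebp.
    #include <webp/encode.h>
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    int main() {
        const int width = 640, height = 480;                // placeholder image dimensions
        std::vector<uint8_t> rgb(width * height * 3, 128);  // synthetic gray image
        uint8_t *output = nullptr;
        // Quality 75 roughly corresponds to cwebp's default lossy setting.
        size_t size = WebPEncodeRGB(rgb.data(), width, height, width * 3, 75.0f, &output);
        if (size == 0) return 1;
        printf("encoded %zu bytes of WebP data\n", size);
        WebPFree(output);
        return 0;
    }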

WebP Image Encode 1.1, Encode Settings: Default (Encode Time - Seconds, fewer is better): AMD AOCC 2.3: 0.979, AMD AOCC 3.0: 1.007, GCC 10.2: 1.042, LLVM Clang 12: 0.977

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as the eventual successor to WebP. Compared to WebP, WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading. Learn more via the OpenBenchmarking.org test page.

WebP2 Image Encode 20210126, Encode Settings: Default (Seconds, fewer is better): AMD AOCC 2.3: 2.144, AMD AOCC 3.0: 2.165, GCC 10.2: 2.274, LLVM Clang 12: 2.134

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.1, Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080p (Frames Per Second, more is better): AMD AOCC 2.3: 238.11, AMD AOCC 3.0: 225.17, GCC 10.2: 235.04, LLVM Clang 12: 223.50

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.9, Compression Level: 3, Long Mode - Decompression Speed (MB/s, more is better): AMD AOCC 2.3: 4586.2, AMD AOCC 3.0: 4543.8, GCC 10.2: 4737.1, LLVM Clang 12: 4456.6

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.
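
The request rates below come from benchmarking a local redis-server; as a small illustration, the same LPUSH command can be issued through the hiredis client library as sketched here (host, port, key name, and iteration count are placeholder assumptions).

    // Issue the LPUSH command measured below through hiredis; link with -lhiredis.
    #include <hiredis/hiredis.h>
    #include <cstdio>

    int main() {
        redisContext *c = redisConnect("127.0.0.1", 6379);   // assumes a local redis-server
        if (c == nullptr || c->err) { fprintf(stderr, "connection failed\n"); return 1; }

        for (int i = 0; i < 1000; i++) {
            redisReply *reply = (redisReply *)redisCommand(c, "LPUSH mylist element:%d", i);
            if (reply) freeReplyObject(reply);
        }
        redisFree(c);
        return 0;
    }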

Redis 6.0.9, Test: LPUSH (Requests Per Second, more is better): AMD AOCC 2.3: 2351340.56, AMD AOCC 3.0: 2345671.03, GCC 10.2: 2222217.52, LLVM Clang 12: 2212779.00

Redis 6.0.9, Test: LPOP (Requests Per Second, more is better): AMD AOCC 2.3: 3589202.59, AMD AOCC 3.0: 3766645.92, GCC 10.2: 3549910.50, LLVM Clang 12: 3649832.58

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.9, Compression Level: 8 - Decompression Speed (MB/s, more is better): AMD AOCC 2.3: 4468.2, AMD AOCC 3.0: 4463.1, GCC 10.2: 4617.1, LLVM Clang 12: 4352.4

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.
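
For reference, a minimal LZ4 round trip through the library API looks roughly like the sketch below; the payload is a synthetic placeholder rather than the Ubuntu ISO the test compresses.

    // One-shot LZ4 compress/decompress with the default (fast) compressor; link with -llz4.
    #include <lz4.h>
    #include <cstdio>
    #include <string>
    #include <vector>

    int main() {
        std::string input(1 << 20, 'x');                      // placeholder payload, 1 MiB
        int bound = LZ4_compressBound((int)input.size());
        std::vector<char> compressed(bound);
        int csize = LZ4_compress_default(input.data(), compressed.data(),
                                         (int)input.size(), bound);
        if (csize <= 0) return 1;

        std::vector<char> restored(input.size());
        int dsize = LZ4_decompress_safe(compressed.data(), restored.data(),
                                        csize, (int)restored.size());
        printf("%zu -> %d -> %d bytes\n", input.size(), csize, dsize);
        return 0;
    }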

LZ4 Compression 1.9.3, Compression Level: 3 - Compression Speed (MB/s, more is better): AMD AOCC 2.3: 72.21, AMD AOCC 3.0: 72.30, GCC 10.2: 72.36, LLVM Clang 12: 68.30

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) run on the CPU with a sample 1080p video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.1-rc, Encoder Mode: Speed 4 Two-Pass (Frames Per Second, more is better): GCC 10.2: 9.20, LLVM Clang 12: 9.74

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.6, Model: bertsquad-10 - Device: OpenMP CPU (Inferences Per Minute, more is better): AMD AOCC 2.3: 646, AMD AOCC 3.0: 649, GCC 10.2: 614, LLVM Clang 12: 634

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU - Model: yolov4-tiny (ms, fewer is better): AMD AOCC 2.3: 21.66, AMD AOCC 3.0: 21.93, GCC 10.2: 20.77, LLVM Clang 12: 21.70

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 0.8.2, Throughput Test: Kostya (GB/s, more is better): AMD AOCC 2.3: 3.53, AMD AOCC 3.0: 3.61, GCC 10.2: 3.72, LLVM Clang 12: 3.71

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as the eventual successor to WebP. Compared to WebP, WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading. Learn more via the OpenBenchmarking.org test page.

WebP2 Image Encode 20210126, Encode Settings: Quality 100, Lossless Compression (Seconds, fewer is better): AMD AOCC 2.3: 357.90, AMD AOCC 3.0: 356.82, GCC 10.2: 367.37, LLVM Clang 12: 349.02

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.1, Tuning: Visual Quality Optimized - Input: Bosphorus 1080p (Frames Per Second, more is better): AMD AOCC 2.3: 230.19, AMD AOCC 3.0: 221.59, GCC 10.2: 228.96, LLVM Clang 12: 219.12

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.6, Model: fcn-resnet101-11 - Device: OpenMP CPU (Inferences Per Minute, more is better): AMD AOCC 2.3: 103, AMD AOCC 3.0: 102, GCC 10.2: 99, LLVM Clang 12: 104

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33, Operation: Enhanced (Iterations Per Minute, more is better): AMD AOCC 2.3: 461, AMD AOCC 3.0: 452, GCC 10.2: 439, LLVM Clang 12: 457

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3, Compression Level: 1 - Decompression Speed (MB/s, more is better): AMD AOCC 2.3: 13595.3, AMD AOCC 3.0: 13144.8, GCC 10.2: 13771.1, LLVM Clang 12: 13305.3

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.

Redis 6.0.9, Test: SET (Requests Per Second, more is better): AMD AOCC 2.3: 2719539.83, AMD AOCC 3.0: 2719036.20, GCC 10.2: 2640316.17, LLVM Clang 12: 2762047.50

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test covering both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.4, Preset: Exhaustive (Seconds, fewer is better): AMD AOCC 2.3: 50.81, AMD AOCC 3.0: 51.45, GCC 10.2: 52.93, LLVM Clang 12: 51.66

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

Liquid-DSP 2021.01.31, Threads: 16 - Buffer Length: 256 - Filter Length: 57 (samples/s, more is better): AMD AOCC 2.3: 1067266667, AMD AOCC 3.0: 1086033333, GCC 10.2: 1111200000, LLVM Clang 12: 1067666667

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.
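
The numbers here come from oneDNN's bundled benchdnn harness; purely as a sketch of the C++ API it builds on, the snippet below creates a CPU engine, stream, and memory object. The tensor shape is an illustrative assumption, not the exact benchdnn problem definition.

    // Minimal oneDNN CPU engine/stream/memory setup (C++ API); link with -ldnnl.
    #include <dnnl.hpp>
    #include <iostream>
    #include <vector>

    int main() {
        dnnl::engine eng(dnnl::engine::kind::cpu, 0);
        dnnl::stream strm(eng);

        // Describe a 128x1024 f32 tensor, roughly the kind of shape an "IP Shapes" run feeds.
        dnnl::memory::desc md({128, 1024}, dnnl::memory::data_type::f32,
                              dnnl::memory::format_tag::nc);
        std::vector<float> data(128 * 1024, 1.0f);
        dnnl::memory mem(md, eng, data.data());

        std::cout << "memory object holds " << md.get_size() << " bytes\n";
        // benchdnn then builds inner-product/convolution primitives on top of objects like these.
        return 0;
    }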

oneDNN 2.1.2, Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms, fewer is better): AMD AOCC 3.0: 4.09663, GCC 10.2: 3.95979, LLVM Clang 12: 4.11930

x265

This is a simple test of the x265 encoder run on the CPU with 1080p and 4K options for H.265 video encode performance with x265. Learn more via the OpenBenchmarking.org test page.

x265 3.4, Video Input: Bosphorus 4K (Frames Per Second, more is better): AMD AOCC 2.3: 27.49, AMD AOCC 3.0: 26.96, GCC 10.2: 27.83, LLVM Clang 12: 28.02

Tachyon

This is a test of the threaded Tachyon, a parallel ray-tracing system, measuring the time to ray-trace a sample scene. Learn more via the OpenBenchmarking.org test page.

Tachyon 0.99b6, Total Time (Seconds, fewer is better): AMD AOCC 2.3: 46.13, AMD AOCC 3.0: 45.00, GCC 10.2: 44.39, LLVM Clang 12: 45.07

x265

This is a simple test of the x265 encoder run on the CPU with 1080p and 4K options for H.265 video encode performance with x265. Learn more via the OpenBenchmarking.org test page.

x265 3.4, Video Input: Bosphorus 1080p (Frames Per Second, more is better): AMD AOCC 2.3: 89.74, AMD AOCC 3.0: 88.70, GCC 10.2: 89.80, LLVM Clang 12: 92.16

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.2.3, Target: CPU - Model: SqueezeNet v1.1 (ms, fewer is better): AMD AOCC 2.3: 203.64, AMD AOCC 3.0: 204.60, GCC 10.2: 211.57, LLVM Clang 12: 206.36

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.

GNU Radio 3.8.1.0, Test: Hilbert Transform (MiB/s, more is better): AMD AOCC 2.3: 534.8, AMD AOCC 3.0: 523.6, GCC 10.2: 515.8, LLVM Clang 12: 522.8

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2, Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, fewer is better): AMD AOCC 3.0: 9.57194, GCC 10.2: 9.25967, LLVM Clang 12: 9.59442

SQLite Speedtest

This is a benchmark of SQLite's speedtest1 benchmark program with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.
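
speedtest1 is SQLite's own workload generator; as a minimal illustration of the C API it exercises, the sketch below opens an in-memory database and runs a few statements. The table layout and row count are placeholders, not the speedtest1 workload.

    // Tiny SQLite workload via the C API; link with -lsqlite3.
    #include <sqlite3.h>
    #include <cstdio>

    int main() {
        sqlite3 *db = nullptr;
        if (sqlite3_open(":memory:", &db) != SQLITE_OK) return 1;   // in-memory DB for illustration

        sqlite3_exec(db, "CREATE TABLE t(a INTEGER, b TEXT);", nullptr, nullptr, nullptr);
        for (int i = 0; i < 1000; i++) {
            char sql[128];
            snprintf(sql, sizeof sql, "INSERT INTO t VALUES(%d, 'row-%d');", i, i);
            sqlite3_exec(db, sql, nullptr, nullptr, nullptr);
        }
        sqlite3_exec(db, "SELECT count(*) FROM t;", nullptr, nullptr, nullptr);  // result discarded
        sqlite3_close(db);
        return 0;
    }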

SQLite Speedtest 3.30, Timed Time - Size 1,000 (Seconds, fewer is better): AMD AOCC 2.3: 44.11, GCC 10.2: 42.60, LLVM Clang 12: 43.26

Ngspice

Ngspice is an open-source SPICE circuit simulator. Ngspice was originally based on the Berkeley SPICE electronic circuit simulator. Ngspice supports basic threading using OpenMP. This test profile is making use of the ISCAS 85 benchmark circuits. Learn more via the OpenBenchmarking.org test page.

Ngspice 34, Circuit: C7552 (Seconds, fewer is better): AMD AOCC 2.3: 64.89, AMD AOCC 3.0: 64.91, GCC 10.2: 62.82, LLVM Clang 12: 64.52

Timed MrBayes Analysis

This test performs a Bayesian analysis of a set of primate genome sequences in order to estimate their phylogeny. Learn more via the OpenBenchmarking.org test page.

Timed MrBayes Analysis 3.2.7, Primate Phylogeny Analysis (Seconds, fewer is better): AMD AOCC 2.3: 59.07, AMD AOCC 3.0: 57.99, GCC 10.2: 59.87, LLVM Clang 12: 59.23

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3, Compression Level: 9 - Decompression Speed (MB/s, more is better): AMD AOCC 2.3: 13188.4, AMD AOCC 3.0: 12981.8, GCC 10.2: 13397.7, LLVM Clang 12: 13129.4

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.

Redis 6.0.9, Test: SADD (Requests Per Second, more is better): AMD AOCC 2.3: 2961165.67, AMD AOCC 3.0: 2948093.68, GCC 10.2: 3041527.37, LLVM Clang 12: 2954866.80

RNNoise

RNNoise is a recurrent neural network for audio noise reduction developed by Mozilla and Xiph.Org. This test profile is a single-threaded test measuring the time to denoise a sample 26 minute long 16-bit RAW audio file using this recurrent neural network noise suppression library. Learn more via the OpenBenchmarking.org test page.
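
As a hedged sketch of the library call being timed, the snippet below pushes synthetic 480-sample frames through RNNoise. Note that the rnnoise_create() signature has varied between releases (newer ones take an optional model pointer), so treat the exact call as an assumption.

    // Denoise synthetic audio frames with RNNoise (C API used from C++); link with -lrnnoise.
    #include <rnnoise.h>
    #include <vector>

    int main() {
        // NULL selects the built-in model in releases whose rnnoise_create() takes a model pointer.
        DenoiseState *st = rnnoise_create(NULL);

        const int frame_size = 480;   // 10 ms at 48 kHz, RNNoise's fixed frame size
        std::vector<float> in(frame_size, 0.0f), out(frame_size, 0.0f);
        for (int i = 0; i < 100; i++) {
            rnnoise_process_frame(st, out.data(), in.data());  // returns VAD probability, ignored here
        }
        rnnoise_destroy(st);
        return 0;
    }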

RNNoise 2020-06-28 (Seconds, fewer is better): AMD AOCC 2.3: 14.04, AMD AOCC 3.0: 14.47, GCC 10.2: 14.20, LLVM Clang 12: 14.33

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3, Compression Level: 3 - Decompression Speed (MB/s, more is better): AMD AOCC 2.3: 13212.6, AMD AOCC 3.0: 13010.7, GCC 10.2: 13400.1, LLVM Clang 12: 13082.4

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU - Model: alexnet (ms, fewer is better): AMD AOCC 2.3: 11.01, AMD AOCC 3.0: 11.11, GCC 10.2: 10.82, LLVM Clang 12: 11.14

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3, Compression Level: 1 - Compression Speed (MB/s, more is better): AMD AOCC 2.3: 12456.17, AMD AOCC 3.0: 12124.81, GCC 10.2: 12330.56, LLVM Clang 12: 12227.47

Gcrypt Library

Libgcrypt is a general purpose cryptographic library developed as part of the GnuPG project. This is a benchmark of libgcrypt's integrated benchmark, measuring the time to run the benchmark command with a cipher/mac/hash repetition count of 50 as a simple, high-level look at the overall crypto performance of the system under test. Learn more via the OpenBenchmarking.org test page.
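
The result below times libgcrypt's bundled benchmark command; as a minimal illustration of the library itself, a single SHA-256 hash through the libgcrypt API looks roughly like this (the message is a placeholder).

    // Hash a buffer with libgcrypt; link with -lgcrypt -lgpg-error.
    #include <gcrypt.h>
    #include <cstdio>

    int main() {
        // Initialize the library; the version string argument acts as a minimum-version check.
        if (!gcry_check_version(GCRYPT_VERSION)) return 1;
        gcry_control(GCRYCTL_INITIALIZATION_FINISHED, 0);

        const char msg[] = "benchmark payload";
        unsigned char digest[32];   // SHA-256 output size
        gcry_md_hash_buffer(GCRY_MD_SHA256, digest, msg, sizeof msg - 1);

        for (unsigned char b : digest) printf("%02x", b);
        printf("\n");
        return 0;
    }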

Gcrypt Library 1.9 (Seconds, fewer is better): AMD AOCC 2.3: 173.31, AMD AOCC 3.0: 175.69, GCC 10.2: 171.19, LLVM Clang 12: 172.90

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1, Encode Settings: Quality 100, Lossless (Encode Time - Seconds, fewer is better): AMD AOCC 2.3: 13.64, AMD AOCC 3.0: 13.84, GCC 10.2: 13.99, LLVM Clang 12: 13.91

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2, Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, fewer is better): AMD AOCC 3.0: 3.58364, GCC 10.2: 3.55467, LLVM Clang 12: 3.64485

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU - Model: vgg16 (ms, fewer is better): AMD AOCC 2.3: 58.11, AMD AOCC 3.0: 58.96, GCC 10.2: 57.89, LLVM Clang 12: 57.51

Crypto++

Crypto++ is a C++ class library of cryptographic algorithms. Learn more via the OpenBenchmarking.org test page.
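
The "Unkeyed Algorithms" group covers hashes and related primitives as driven by Crypto++'s own benchmark tooling; as a small illustration of the library API, a SHA-256 digest can be computed as sketched below (the message is a placeholder).

    // SHA-256 digest with Crypto++; link with -lcryptopp.
    #include <cryptopp/sha.h>
    #include <cryptopp/hex.h>
    #include <cryptopp/filters.h>
    #include <iostream>
    #include <string>

    int main() {
        const std::string message = "benchmark payload";
        std::string digest_hex;

        CryptoPP::SHA256 hash;
        // The pipeline hashes the message, hex-encodes the digest, and owns/frees the filters.
        CryptoPP::StringSource ss(message, true,
            new CryptoPP::HashFilter(hash,
                new CryptoPP::HexEncoder(
                    new CryptoPP::StringSink(digest_hex))));

        std::cout << digest_hex << "\n";
        return 0;
    }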

Crypto++ 8.2, Test: Unkeyed Algorithms (MiB/second, more is better): AMD AOCC 2.3: 550.46, AMD AOCC 3.0: 538.88, GCC 10.2: 545.91, LLVM Clang 12: 552.38

OpenFOAM

OpenFOAM is the leading free, open source software for computational fluid dynamics (CFD). Learn more via the OpenBenchmarking.org test page.

OpenFOAM 8, Input: Motorbike 30M (Seconds, fewer is better): GCC 10.2: 97.75, LLVM Clang 12: 100.16

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) run on the CPU with a sample 1080p video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.1-rc, Encoder Mode: Speed 8 Realtime (Frames Per Second, more is better): GCC 10.2: 121.13, LLVM Clang 12: 118.22

Ngspice

Ngspice is an open-source SPICE circuit simulator. Ngspice was originally based on the Berkeley SPICE electronic circuit simulator. Ngspice supports basic threading using OpenMP. This test profile is making use of the ISCAS 85 benchmark circuits. Learn more via the OpenBenchmarking.org test page.
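
For reference, a batch-mode run of one of those ISCAS 85 netlists can be timed with a few lines of Python; the netlist filename below is an assumption, and only ngspice's -b (batch) and -o (log file) options are relied on.

    # Hypothetical sketch: time a non-interactive ngspice run of the C2670 circuit.
    # Assumes ngspice is installed and a c2670.cir netlist is present.
    import subprocess
    import time

    start = time.time()
    subprocess.run(["ngspice", "-b", "-o", "c2670.log", "c2670.cir"], check=True)
    print(f"Simulation wall time: {time.time() - start:.1f} s")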

Ngspice 34 - Circuit: C2670 (Seconds, Fewer Is Better)
  AMD AOCC 2.3:  72.78  (SE +/- 0.06, N = 3; Min: 72.66 / Avg: 72.78 / Max: 72.89)
  AMD AOCC 3.0:  73.28  (SE +/- 0.13, N = 3; Min: 73.08 / Avg: 73.28 / Max: 73.52)
  GCC 10.2:      71.60  (SE +/- 0.21, N = 3; Min: 71.22 / Avg: 71.6 / Max: 71.94)
  LLVM Clang 12: 72.46  (SE +/- 0.12, N = 3; Min: 72.23 / Avg: 72.46 / Max: 72.65)
  1. (CC) gcc options: -O3 -march=native -fopenmp -lm -lstdc++ -lfftw3 -lXaw -lXmu -lXt -lXext -lX11 -lSM -lICE

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.
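
Those signal processing blocks are normally wired together from Python. The minimal flowgraph below is only a sketch of the idea behind the Signal Source (Cosine) style of throughput test (a cosine source feeding a sink), using an arbitrary sample rate and sample count rather than the test profile's actual settings.

    # Minimal GNU Radio flowgraph sketch: cosine signal source -> head -> null sink.
    # The sample rate, frequency, and sample count are illustrative values only.
    import time
    from gnuradio import gr, blocks, analog

    SAMP_RATE = 32_000_000          # illustrative value
    NSAMPLES = 10 * SAMP_RATE       # roughly 10 seconds' worth of complex samples

    class CosineThroughput(gr.top_block):
        def __init__(self):
            gr.top_block.__init__(self, "cosine throughput sketch")
            src = analog.sig_source_c(SAMP_RATE, analog.GR_COS_WAVE, 1000, 1.0)
            head = blocks.head(gr.sizeof_gr_complex, NSAMPLES)
            sink = blocks.null_sink(gr.sizeof_gr_complex)
            self.connect(src, head, sink)

    tb = CosineThroughput()
    start = time.time()
    tb.run()                        # returns once head() has passed NSAMPLES items
    elapsed = time.time() - start
    print(f"~{NSAMPLES * gr.sizeof_gr_complex / elapsed / 2**20:.0f} MiB/s")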

GNU Radio - Test: Signal Source (Cosine) (MiB/s, More Is Better)
  AMD AOCC 2.3:  4661.6  (SE +/- 22.36, N = 9; Min: 4529.8 / Avg: 4661.57 / Max: 4759.7)
  AMD AOCC 3.0:  4704.7  (SE +/- 60.32, N = 8; Min: 4308.2 / Avg: 4704.65 / Max: 4869.9)
  GCC 10.2:      4715.4  (SE +/- 16.39, N = 9; Min: 4613.4 / Avg: 4715.39 / Max: 4782.1)
  LLVM Clang 12: 4769.8  (SE +/- 10.26, N = 9; Min: 4709.8 / Avg: 4769.81 / Max: 4814.1)
  1. 3.8.1.0

x264

This is a simple test of the x264 encoder run on the CPU (OpenCL support disabled) with a sample video file. Learn more via the OpenBenchmarking.org test page.

x264 2019-12-17 - H.264 Video Encoding (Frames Per Second, More Is Better)
  AMD AOCC 2.3:  210.72  (SE +/- 1.86, N = 8; Min: 197.87 / Avg: 210.72 / Max: 213.95)
  AMD AOCC 3.0:  210.35  (SE +/- 1.75, N = 9; Min: 196.49 / Avg: 210.35 / Max: 213.57)
  GCC 10.2:      208.93  (SE +/- 1.66, N = 9; Min: 195.89 / Avg: 208.93 / Max: 211.72)
  LLVM Clang 12: 213.57  (SE +/- 1.62, N = 12; Min: 195.98 / Avg: 213.57 / Max: 216.26)
  Build note reported on three of the four builds: -mstack-alignment=64
  1. (CC) gcc options: -ldl -lavformat -lavcodec -lavutil -lswscale -m64 -lm -lpthread -O3 -ffast-math -march=native -std=gnu99 -fPIC -fomit-frame-pointer -fno-tree-vectorize

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
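
The cwebp invocation below is a sketch of what the "Quality 100, Lossless, Highest Compression" setting implies; the exact flags the test profile passes are not reproduced here, and mapping "Highest Compression" to -m 6 is an assumption, as is the sample filename.

    # Hypothetical sketch: time a lossless, quality-100 cwebp encode of a JPEG.
    # Assumes cwebp is on PATH and a sample.jpg exists in the working directory.
    import subprocess
    import time

    start = time.time()
    subprocess.run(
        ["cwebp", "-q", "100", "-lossless", "-m", "6", "sample.jpg", "-o", "sample.webp"],
        check=True,
    )
    print(f"Encode time: {time.time() - start:.2f} s")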

WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless, Highest Compression (Encode Time - Seconds, Fewer Is Better)
  AMD AOCC 2.3:  28.43  (SE +/- 0.06, N = 3; Min: 28.34 / Avg: 28.43 / Max: 28.53)
  AMD AOCC 3.0:  28.19  (SE +/- 0.04, N = 3; Min: 28.12 / Avg: 28.19 / Max: 28.23)
  GCC 10.2:      28.81  (SE +/- 0.08, N = 3; Min: 28.73 / Avg: 28.81 / Max: 28.97)
  LLVM Clang 12: 28.67  (SE +/- 0.03, N = 3; Min: 28.61 / Avg: 28.67 / Max: 28.72)
  1. (CC) gcc options: -fvisibility=hidden -O3 -march=native -pthread -lm -ljpeg -lpng16 -ltiff

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
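
The same levels can be exercised by hand with zstd's built-in benchmark mode; the snippet below is only a sketch (it assumes a zstd binary on PATH and the FreeBSD image in the working directory) and may not match the exact invocation the test profile uses.

    # Hypothetical sketch: run zstd's built-in benchmark (-b) at level 19,
    # with and without long-distance matching, over the sample disk image.
    import subprocess

    sample = "FreeBSD-12.2-RELEASE-amd64-memstick.img"
    subprocess.run(["zstd", "-b19", sample], check=True)             # level 19
    subprocess.run(["zstd", "-b19", "--long", sample], check=True)   # level 19, long mode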

Zstd Compression 1.4.9 - Compression Level: 19 - Compression Speed (MB/s, More Is Better)
  AMD AOCC 2.3:  50.7  (SE +/- 0.07, N = 3; Min: 50.6 / Avg: 50.67 / Max: 50.8)
  AMD AOCC 3.0:  50.5  (SE +/- 0.06, N = 3; Min: 50.4 / Avg: 50.5 / Max: 50.6)
  GCC 10.2:      51.6  (SE +/- 0.20, N = 3; Min: 51.2 / Avg: 51.6 / Max: 51.8)
  LLVM Clang 12: 51.3  (SE +/- 0.03, N = 3; Min: 51.3 / Avg: 51.33 / Max: 51.4)
  1. (CC) gcc options: -O3 -march=native -pthread -lz -llzma

JPEG XL

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities, offering better image quality and compression than legacy JPEG. This test profile is currently focused on multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.

JPEG XL 0.3.3 - Input: PNG - Encode Speed: 7 (MP/s, More Is Better)
  AMD AOCC 2.3:  11.31  (SE +/- 0.03, N = 3; Min: 11.28 / Avg: 11.31 / Max: 11.36)
  AMD AOCC 3.0:  11.17  (SE +/- 0.06, N = 3; Min: 11.1 / Avg: 11.17 / Max: 11.28)
  GCC 10.2:      11.20  (SE +/- 0.03, N = 3; Min: 11.14 / Avg: 11.2 / Max: 11.26)
  LLVM Clang 12: 11.41  (SE +/- 0.06, N = 3; Min: 11.33 / Avg: 11.41 / Max: 11.52)
  Build note reported on three of the four builds: -Xclang -mrelax-all
  1. (CXX) g++ options: -O3 -march=native -funwind-tables -O2 -pthread -fPIE -pie -ldl

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.

GNU Radio - Test: IIR Filter (MiB/s, More Is Better)
  AMD AOCC 2.3:  853.3  (SE +/- 2.72, N = 9; Min: 841.9 / Avg: 853.33 / Max: 869)
  AMD AOCC 3.0:  838.3  (SE +/- 2.72, N = 8; Min: 830.1 / Avg: 838.34 / Max: 851.2)
  GCC 10.2:      843.1  (SE +/- 1.32, N = 9; Min: 837.8 / Avg: 843.07 / Max: 850.5)
  LLVM Clang 12: 835.4  (SE +/- 1.22, N = 9; Min: 829.1 / Avg: 835.41 / Max: 840.7)
  1. 3.8.1.0

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100 (Encode Time - Seconds, Fewer Is Better)
  AMD AOCC 2.3:  1.673  (SE +/- 0.006, N = 3; Min: 1.66 / Avg: 1.67 / Max: 1.68)
  AMD AOCC 3.0:  1.669  (SE +/- 0.011, N = 3; Min: 1.65 / Avg: 1.67 / Max: 1.68)
  GCC 10.2:      1.652  (SE +/- 0.018, N = 4; Min: 1.6 / Avg: 1.65 / Max: 1.68)
  LLVM Clang 12: 1.638  (SE +/- 0.003, N = 3; Min: 1.63 / Avg: 1.64 / Max: 1.64)
  1. (CC) gcc options: -fvisibility=hidden -O3 -march=native -pthread -lm -ljpeg -lpng16 -ltiff

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.
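
Decode timing of this kind can be reproduced roughly with dav1d's command-line tool; the sketch below assumes a dav1d binary on PATH and an AV1 bitstream named summer_nature_1080p.ivf (a placeholder name), with the decoded output discarded.

    # Hypothetical sketch: time a dav1d decode of an AV1 bitstream to a null output.
    import subprocess
    import time

    start = time.time()
    subprocess.run(["dav1d", "-i", "summer_nature_1080p.ivf", "-o", "/dev/null"], check=True)
    print(f"Decode wall time: {time.time() - start:.2f} s")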

dav1d 0.8.2 - Video Input: Summer Nature 1080p (FPS, More Is Better)
  AMD AOCC 2.3:  976.93  (SE +/- 9.00, N = 3; Min: 958.96 / Avg: 976.93 / Max: 986.91)  MIN: 633.01 / MAX: 1069.88
  AMD AOCC 3.0:  959.29  (SE +/- 1.29, N = 3; Min: 956.73 / Avg: 959.29 / Max: 960.84)  MIN: 714.89 / MAX: 1039.77
  GCC 10.2:      971.79  (SE +/- 1.38, N = 3; Min: 969.08 / Avg: 971.79 / Max: 973.61)  -lm - MIN: 732.02 / MAX: 1055.82
  LLVM Clang 12: 979.34  (SE +/- 2.97, N = 3; Min: 973.61 / Avg: 979.34 / Max: 983.58)  MIN: 717.55 / MAX: 1062.34
  1. (CC) gcc options: -O3 -march=native -pthread

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) run on the CPU with a sample 1080p video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.1-rc - Encoder Mode: Speed 6 Two-Pass (Frames Per Second, More Is Better)
  GCC 10.2:      29.43  (SE +/- 0.26, N = 3; Min: 28.92 / Avg: 29.43 / Max: 29.69)
  LLVM Clang 12: 30.03  (SE +/- 0.13, N = 3; Min: 29.79 / Avg: 30.03 / Max: 30.24)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

Opus Codec Encoding

Opus is an open audio codec. Opus is a lossy audio compression format designed primarily for interactive real-time applications over the Internet. This test uses Opus-Tools and measures the time required to encode a WAV file to Opus. Learn more via the OpenBenchmarking.org test page.
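
In essence the measurement is just the wall time of an opusenc run; a minimal sketch (assuming opus-tools is installed and a sample.wav is available, with default encoder settings rather than the test profile's) looks like this:

    # Hypothetical sketch: time a WAV -> Opus encode with opusenc.
    import subprocess
    import time

    start = time.time()
    subprocess.run(["opusenc", "--quiet", "sample.wav", "sample.opus"], check=True)
    print(f"Encode time: {time.time() - start:.3f} s")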

Opus Codec Encoding 1.3.1 - WAV To Opus Encode (Seconds, Fewer Is Better)
  GCC 10.2:      5.484  (SE +/- 0.031, N = 5; Min: 5.41 / Avg: 5.48 / Max: 5.57)
  LLVM Clang 12: 5.589  (SE +/- 0.037, N = 5; Min: 5.47 / Avg: 5.59 / Max: 5.67)
  Build note reported on one of the two builds: -fvisibility=hidden
  1. (CXX) g++ options: -O3 -march=native -logg -lm

WavPack Audio Encoding

This test times how long it takes to encode a sample WAV file to WavPack format with very high quality settings. Learn more via the OpenBenchmarking.org test page.

WavPack Audio Encoding 5.3 - WAV To WavPack (Seconds, Fewer Is Better)
  AMD AOCC 2.3:  10.29  (SE +/- 0.03, N = 5; Min: 10.2 / Avg: 10.29 / Max: 10.37)
  AMD AOCC 3.0:  10.34  (SE +/- 0.01, N = 5; Min: 10.32 / Avg: 10.34 / Max: 10.38)
  GCC 10.2:      10.15  (SE +/- 0.10, N = 5; Min: 9.8 / Avg: 10.15 / Max: 10.3)
  LLVM Clang 12: 10.31  (SE +/- 0.11, N = 5; Min: 9.88 / Avg: 10.31 / Max: 10.5)
  1. (CXX) g++ options: -O3 -march=native -rdynamic

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.

GNU Radio - Test: FIR Filter (MiB/s, More Is Better)
  AMD AOCC 2.3:  1080.3  (SE +/- 4.68, N = 9; Min: 1056.3 / Avg: 1080.33 / Max: 1099)
  AMD AOCC 3.0:  1065.4  (SE +/- 3.65, N = 8; Min: 1042.6 / Avg: 1065.35 / Max: 1078.3)
  GCC 10.2:      1063.5  (SE +/- 2.09, N = 9; Min: 1053.3 / Avg: 1063.51 / Max: 1074.7)
  LLVM Clang 12: 1060.1  (SE +/- 3.22, N = 9; Min: 1037.1 / Avg: 1060.13 / Max: 1071.5)
  1. 3.8.1.0

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: resnet18 (ms, Fewer Is Better)
  AMD AOCC 2.3:  14.08  (SE +/- 0.21, N = 3; Min: 13.67 / Avg: 14.08 / Max: 14.34)  -lomp - MIN: 13.56 / MAX: 21.13
  AMD AOCC 3.0:  13.86  (SE +/- 0.11, N = 4; Min: 13.54 / Avg: 13.86 / Max: 14.04)  -lomp - MIN: 13.44 / MAX: 16.35
  GCC 10.2:      14.11  (SE +/- 0.05, N = 15; Min: 13.92 / Avg: 14.11 / Max: 14.57)  -lgomp - MIN: 13.84 / MAX: 23.15
  LLVM Clang 12: 13.91  (SE +/- 0.08, N = 3; Min: 13.78 / Avg: 13.91 / Max: 14.04)  -lomp - MIN: 13.66 / MAX: 14.54
  1. (CXX) g++ options: -O3 -march=native -rdynamic -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  AMD AOCC 3.0:  1760.57  (SE +/- 3.74, N = 3; Min: 1755.25 / Avg: 1760.57 / Max: 1767.78)  -fopenmp=libomp - MIN: 1745.87
  GCC 10.2:      1773.67  (SE +/- 5.00, N = 3; Min: 1763.67 / Avg: 1773.67 / Max: 1778.84)  -fopenmp - MIN: 1750.26
  LLVM Clang 12: 1792.27  (SE +/- 9.12, N = 3; Min: 1774.89 / Avg: 1792.27 / Max: 1805.74)  -fopenmp=libomp - MIN: 1766.32
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -pie -lpthread -ldl

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.9 - Compression Level: 8, Long Mode - Decompression Speed (MB/s, More Is Better)
  AMD AOCC 2.3: 4805.6
  GCC 10.2:     4886.2  (SE +/- 29.99, N = 3; Min: 4831.2 / Avg: 4886.23 / Max: 4934.4)
  1. (CC) gcc options: -O3 -march=native -pthread -lz -llzma

JPEG XL

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities, offering better image quality and compression than legacy JPEG. This test profile is currently focused on multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.

JPEG XL 0.3.3 - Input: PNG - Encode Speed: 5 (MP/s, More Is Better)
  AMD AOCC 2.3:  74.61  (SE +/- 0.11, N = 3; Min: 74.44 / Avg: 74.61 / Max: 74.82)
  AMD AOCC 3.0:  73.64  (SE +/- 0.15, N = 3; Min: 73.36 / Avg: 73.64 / Max: 73.84)
  GCC 10.2:      74.12  (SE +/- 0.03, N = 3; Min: 74.07 / Avg: 74.12 / Max: 74.18)
  LLVM Clang 12: 74.77  (SE +/- 0.04, N = 3; Min: 74.69 / Avg: 74.77 / Max: 74.83)
  Build note reported on three of the four builds: -Xclang -mrelax-all
  1. (CXX) g++ options: -O3 -march=native -funwind-tables -O2 -pthread -fPIE -pie -ldl

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.
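
A comparable encode can be kicked off by hand with the avifenc tool; the sketch below assumes avifenc is on PATH, uses a placeholder input name, and relies only on the --speed and --lossless options, which may not match the test profile's full argument list.

    # Hypothetical sketch: time a speed-10 lossless JPEG -> AVIF encode with avifenc.
    import subprocess
    import time

    start = time.time()
    subprocess.run(
        ["avifenc", "--speed", "10", "--lossless", "sample.jpg", "sample.avif"],
        check=True,
    )
    print(f"Encode time: {time.time() - start:.2f} s")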

libavif avifenc 0.9.0 - Encoder Speed: 10, Lossless (Seconds, Fewer Is Better)
  AMD AOCC 2.3:  4.832  (SE +/- 0.041, N = 3; Min: 4.77 / Avg: 4.83 / Max: 4.91)
  AMD AOCC 3.0:  4.837  (SE +/- 0.015, N = 3; Min: 4.81 / Avg: 4.84 / Max: 4.86)
  GCC 10.2:      4.875  (SE +/- 0.022, N = 3; Min: 4.84 / Avg: 4.88 / Max: 4.92)
  LLVM Clang 12: 4.807  (SE +/- 0.038, N = 3; Min: 4.76 / Avg: 4.81 / Max: 4.88)
  1. (CXX) g++ options: -O3 -fPIC -lm

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.9 - Compression Level: 19, Long Mode - Compression Speed (MB/s, More Is Better)
  AMD AOCC 2.3:  36.7  (SE +/- 0.03, N = 3; Min: 36.6 / Avg: 36.67 / Max: 36.7)
  AMD AOCC 3.0:  36.4  (SE +/- 0.00, N = 3; Min: 36.4 / Avg: 36.4 / Max: 36.4)
  GCC 10.2:      36.6  (SE +/- 0.03, N = 3; Min: 36.5 / Avg: 36.57 / Max: 36.6)
  LLVM Clang 12: 36.8  (SE +/- 0.03, N = 3; Min: 36.8 / Avg: 36.83 / Max: 36.9)
  1. (CC) gcc options: -O3 -march=native -pthread -lz -llzma

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.
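
For orientation, a UASTC conversion of a single PNG can be run by hand with the basisu tool; the sketch below assumes a basisu binary on PATH and relies on its -uastc and -uastc_level options (an assumption about the CLI, and not the test profile's exact invocation).

    # Hypothetical sketch: time a UASTC level-2 conversion of one PNG with basisu.
    import subprocess
    import time

    start = time.time()
    subprocess.run(["basisu", "-uastc", "-uastc_level", "2", "sample.png"], check=True)
    print(f"Conversion time: {time.time() - start:.2f} s")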

Basis Universal 1.13 - Settings: UASTC Level 2 (Seconds, Fewer Is Better)
  AMD AOCC 2.3:  15.98  (SE +/- 0.02, N = 3; Min: 15.95 / Avg: 15.98 / Max: 16.03)
  AMD AOCC 3.0:  16.06  (SE +/- 0.02, N = 3; Min: 16.01 / Avg: 16.06 / Max: 16.09)
  GCC 10.2:      15.90  (SE +/- 0.05, N = 3; Min: 15.82 / Avg: 15.9 / Max: 15.98)
  LLVM Clang 12: 15.91  (SE +/- 0.06, N = 3; Min: 15.82 / Avg: 15.91 / Max: 16.01)
  1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

Timed Godot Game Engine Compilation

This test times how long it takes to compile the Godot Game Engine. Godot is a popular, open-source, cross-platform 2D/3D game engine; it is built using the SCons build system, here targeting the X11 platform. Learn more via the OpenBenchmarking.org test page.

Timed Godot Game Engine Compilation 3.2.3 - Time To Compile (Seconds, Fewer Is Better)
  AMD AOCC 2.3:  79.94  (SE +/- 0.28, N = 3; Min: 79.47 / Avg: 79.94 / Max: 80.46)
  AMD AOCC 3.0:  79.87  (SE +/- 0.16, N = 3; Min: 79.55 / Avg: 79.87 / Max: 80.05)
  GCC 10.2:      79.52  (SE +/- 0.19, N = 3; Min: 79.22 / Avg: 79.52 / Max: 79.87)
  LLVM Clang 12: 80.21  (SE +/- 0.09, N = 3; Min: 80.07 / Avg: 80.21 / Max: 80.37)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  AMD AOCC 3.0:  0.643093  (SE +/- 0.004823, N = 3; Min: 0.64 / Avg: 0.64 / Max: 0.65)  -fopenmp=libomp - MIN: 0.61
  GCC 10.2:      0.638664  (SE +/- 0.000722, N = 3; Min: 0.64 / Avg: 0.64 / Max: 0.64)  -fopenmp - MIN: 0.61
  LLVM Clang 12: 0.641231  (SE +/- 0.000908, N = 3; Min: 0.64 / Avg: 0.64 / Max: 0.64)  -fopenmp=libomp - MIN: 0.61
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -pie -lpthread -ldl

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.9.0 - Encoder Speed: 10 (Seconds, Fewer Is Better)
  AMD AOCC 2.3:  2.933  (SE +/- 0.035, N = 3; Min: 2.88 / Avg: 2.93 / Max: 3)
  AMD AOCC 3.0:  2.941  (SE +/- 0.006, N = 3; Min: 2.93 / Avg: 2.94 / Max: 2.95)
  GCC 10.2:      2.934  (SE +/- 0.014, N = 3; Min: 2.91 / Avg: 2.93 / Max: 2.95)
  LLVM Clang 12: 2.952  (SE +/- 0.016, N = 3; Min: 2.93 / Avg: 2.95 / Max: 2.98)
  1. (CXX) g++ options: -O3 -fPIC -lm

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.

GNU Radio - Test: FM Deemphasis Filter (MiB/s, More Is Better)
  AMD AOCC 2.3:  1061.0  (SE +/- 15.16, N = 9; Min: 940.7 / Avg: 1060.96 / Max: 1084.2)
  AMD AOCC 3.0:  1055.8  (SE +/- 3.09, N = 8; Min: 1044.5 / Avg: 1055.79 / Max: 1073.4)
  GCC 10.2:      1055.0  (SE +/- 0.78, N = 9; Min: 1050.8 / Avg: 1055 / Max: 1057.9)
  LLVM Clang 12: 1054.9  (SE +/- 0.98, N = 9; Min: 1052.2 / Avg: 1054.92 / Max: 1061.4)
  1. 3.8.1.0

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  AMD AOCC 3.0:  17.36  (SE +/- 0.03, N = 3; Min: 17.33 / Avg: 17.36 / Max: 17.41)  -fopenmp=libomp - MIN: 16.83
  GCC 10.2:      17.29  (SE +/- 0.09, N = 3; Min: 17.18 / Avg: 17.29 / Max: 17.46)  -fopenmp - MIN: 16.58
  LLVM Clang 12: 17.34  (SE +/- 0.01, N = 3; Min: 17.32 / Avg: 17.34 / Max: 17.36)  -fopenmp=libomp - MIN: 16.81
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -pie -lpthread -ldl

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.13 - Settings: UASTC Level 3 (Seconds, Fewer Is Better)
  AMD AOCC 2.3:  28.15  (SE +/- 0.05, N = 3; Min: 28.07 / Avg: 28.15 / Max: 28.22)
  AMD AOCC 3.0:  28.17  (SE +/- 0.03, N = 3; Min: 28.13 / Avg: 28.17 / Max: 28.23)
  GCC 10.2:      28.13  (SE +/- 0.04, N = 3; Min: 28.05 / Avg: 28.13 / Max: 28.17)
  LLVM Clang 12: 28.22  (SE +/- 0.04, N = 3; Min: 28.15 / Avg: 28.22 / Max: 28.28)
  1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  AMD AOCC 3.0:  2760.94  (SE +/- 17.19, N = 3; Min: 2729.57 / Avg: 2760.94 / Max: 2788.82)  -fopenmp=libomp - MIN: 2717.59
  GCC 10.2:      2757.52  (SE +/- 2.01, N = 3; Min: 2754.84 / Avg: 2757.52 / Max: 2761.46)  -fopenmp - MIN: 2719.35
  LLVM Clang 12: 2757.75  (SE +/- 5.95, N = 3; Min: 2746.33 / Avg: 2757.75 / Max: 2766.33)  -fopenmp=libomp - MIN: 2734.73
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -pie -lpthread -ldl

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient and lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.1.3 (ms, Fewer Is Better) - GCC 10.2 only:
  Model: inception-v3:      32.34  (SE +/- 0.09, N = 3)   MIN: 31.33 / MAX: 42.61
  Model: mobilenet-v1-1.0:  2.351  (SE +/- 0.027, N = 3)  MIN: 2.27 / MAX: 7.49
  Model: MobileNetV2_224:   3.240  (SE +/- 0.049, N = 3)  MIN: 3.12 / MAX: 11.31
  Model: resnet-v2-50:      25.07  (SE +/- 0.02, N = 3)   MIN: 23.97 / MAX: 39.95
  Model: SqueezeNetV1.0:    5.081  (SE +/- 0.010, N = 3)  MIN: 4.92 / MAX: 14.74
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Smallpt

Smallpt is a C++ global illumination renderer written in less than 100 lines of code. Global illumination is done via unbiased Monte Carlo path tracing and there is multi-threading support via the OpenMP library. Learn more via the OpenBenchmarking.org test page.

Smallpt 1.0 - Global Illumination Renderer; 128 Samples (Seconds, Fewer Is Better)
  GCC 10.2: 4.674  (SE +/- 0.015, N = 3)
  1. (CXX) g++ options: -fopenmp -O3 -march=native

Crafty

This is a performance test of Crafty, an advanced open-source chess engine. Learn more via the OpenBenchmarking.org test page.

Crafty 25.2 - Elapsed Time (Nodes Per Second, More Is Better)
  GCC 10.2: 11731249  (SE +/- 26371.45, N = 3)
  1. (CC) gcc options: -pthread -lstdc++ -fprofile-use -lm

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.
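
The inferences-per-minute figure amounts to looping session.run() over a fixed input; the Python sketch below illustrates that with the ONNX Zoo super-resolution-10 model, where the model path and the 1x1x224x224 float32 input shape are assumptions about that model rather than details taken from the test profile.

    # Hypothetical sketch: measure CPU inference throughput with onnxruntime.
    import time
    import numpy as np
    import onnxruntime as ort

    sess = ort.InferenceSession("super-resolution-10.onnx")   # CPU execution in a CPU-only build
    input_name = sess.get_inputs()[0].name
    x = np.random.rand(1, 1, 224, 224).astype(np.float32)     # assumed input shape

    runs, start = 0, time.time()
    while time.time() - start < 10:        # sample for roughly 10 seconds
        sess.run(None, {input_name: x})
        runs += 1
    print(f"~{runs / (time.time() - start) * 60:.0f} inferences per minute")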

ONNX Runtime 1.6 - Model: super-resolution-10 - Device: OpenMP CPU (Inferences Per Minute, More Is Better)
  AMD AOCC 2.3:  6067  (SE +/- 34.74, N = 3; Min: 5997.5 / Avg: 6066.83 / Max: 6105.5)  -fopenmp=libomp
  AMD AOCC 3.0:  5976  (SE +/- 38.28, N = 3; Min: 5902 / Avg: 5976 / Max: 6030)  -fopenmp=libomp
  GCC 10.2:      6721  (SE +/- 215.50, N = 12; Min: 5800.5 / Avg: 6720.71 / Max: 8258)  -fopenmp
  LLVM Clang 12: 5937  (SE +/- 55.09, N = 12; Min: 5575 / Avg: 5936.88 / Max: 6199)  -fopenmp=libomp
  1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -ldl -lrt

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.
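
To make the GET metric concrete, the sketch below issues GET requests from a single redis-py client against a local server (localhost:6379 assumed); it only illustrates the operation being measured and is not how the test profile itself generates load.

    # Hypothetical sketch: rough single-client GET rate against a local Redis server.
    import time
    import redis

    r = redis.Redis(host="localhost", port=6379)
    r.set("benchmark:key", "x" * 64)      # seed one small value

    n = 100_000
    start = time.time()
    for _ in range(n):
        r.get("benchmark:key")
    elapsed = time.time() - start
    print(f"{n / elapsed:,.0f} GET requests per second (one client, no pipelining)")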

Redis 6.0.9 - Test: GET (Requests Per Second, More Is Better)
  AMD AOCC 2.3:  3658044.77  (SE +/- 58906.79, N = 15; Min: 3385240.25 / Avg: 3658044.77 / Max: 4223040.5)
  AMD AOCC 3.0:  3545388.92  (SE +/- 11517.61, N = 3; Min: 3522367 / Avg: 3545388.92 / Max: 3557578)
  GCC 10.2:      3470419.90  (SE +/- 36718.95, N = 15; Min: 3243625 / Avg: 3470419.9 / Max: 3859526)
  LLVM Clang 12: 3624414.37  (SE +/- 47796.61, N = 15; Min: 3406005.25 / Avg: 3624414.37 / Max: 4108502.75)
  1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3 -march=native

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  AMD AOCC 3.0:  2.46850  (SE +/- 0.00340, N = 3; Min: 2.46 / Avg: 2.47 / Max: 2.47)  -fopenmp=libomp - MIN: 2.36
  GCC 10.2:      4.46777  (SE +/- 0.30276, N = 15; Min: 3.19 / Avg: 4.47 / Max: 6.41)  -fopenmp - MIN: 2.86
  LLVM Clang 12: 2.46561  (SE +/- 0.00451, N = 3; Min: 2.46 / Avg: 2.47 / Max: 2.47)  -fopenmp=libomp - MIN: 2.33
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -pie -lpthread -ldl

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.

GNU Radio - Test: Five Back to Back FIR Filters (MiB/s, More Is Better)
  AMD AOCC 2.3:  931.6  (SE +/- 20.04, N = 9; Min: 820.7 / Avg: 931.62 / Max: 1007.9)
  AMD AOCC 3.0:  929.1  (SE +/- 20.61, N = 8; Min: 813.6 / Avg: 929.09 / Max: 996.7)
  GCC 10.2:      920.8  (SE +/- 19.67, N = 9; Min: 849.7 / Avg: 920.81 / Max: 1025.5)
  LLVM Clang 12: 911.2  (SE +/- 17.91, N = 9; Min: 801.6 / Avg: 911.16 / Max: 985.6)
  1. 3.8.1.0

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.9 - Compression Level: 19 - Decompression Speed (MB/s, More Is Better)
  AMD AOCC 2.3:  4097.3  (SE +/- 12.18, N = 3; Min: 4074.7 / Avg: 4097.27 / Max: 4116.5)
  AMD AOCC 3.0:  3608.8  (SE +/- 417.40, N = 3; Min: 2774.4 / Avg: 3608.8 / Max: 4048.4)
  GCC 10.2:      4251.7  (SE +/- 6.53, N = 3; Min: 4239 / Avg: 4251.7 / Max: 4260.7)
  LLVM Clang 12: 4000.9  (SE +/- 50.52, N = 3; Min: 3900.1 / Avg: 4000.93 / Max: 4056.9)
  1. (CC) gcc options: -O3 -march=native -pthread -lz -llzma

147 Results Shown

Sysbench
Etcpak
Timed LLVM Compilation
C-Ray
GraphicsMagick
LibRaw
NCNN
GraphicsMagick
ASTC Encoder
Etcpak
SVT-AV1
GraphicsMagick
NCNN:
  CPU-v2-v2 - mobilenet-v2
  CPU-v3-v3 - mobilenet-v3
TNN
NCNN
Ogg Audio Encoding
ASTC Encoder
Google SynthMark
Zstd Compression
GraphicsMagick
NCNN
TSCP
NCNN
QuantLib
GraphicsMagick
Etcpak
Liquid-DSP
ONNX Runtime
JPEG XL Decoding
WebP Image Encode
AOM AV1
NCNN
SVT-AV1
libavif avifenc
NCNN
JPEG XL Decoding
NCNN
JPEG XL
LZ4 Compression
NCNN
Zstd Compression
simdjson
Zstd Compression
JPEG XL
simdjson
Basis Universal
simdjson
ONNX Runtime
Liquid-DSP
POV-Ray
WebP2 Image Encode:
  Quality 75, Compression Effort 7
  Quality 95, Compression Effort 7
  Quality 100, Compression Effort 5
libavif avifenc
GraphicsMagick
Basis Universal
libavif avifenc
JPEG XL
Zstd Compression
libavif avifenc
NCNN
JPEG XL
AOM AV1
dav1d
WebP Image Encode
WebP2 Image Encode
SVT-VP9
Zstd Compression
Redis:
  LPUSH
  LPOP
Zstd Compression
LZ4 Compression
AOM AV1
ONNX Runtime
NCNN
simdjson
WebP2 Image Encode
SVT-VP9
ONNX Runtime
GraphicsMagick
LZ4 Compression
Redis
ASTC Encoder
Liquid-DSP
oneDNN
x265
Tachyon
x265
TNN
GNU Radio
oneDNN
SQLite Speedtest
Ngspice
Timed MrBayes Analysis
LZ4 Compression
Redis
RNNoise
LZ4 Compression
NCNN
LZ4 Compression
Gcrypt Library
WebP Image Encode
oneDNN
NCNN
Crypto++
OpenFOAM
AOM AV1
Ngspice
GNU Radio
x264
WebP Image Encode
Zstd Compression
JPEG XL
GNU Radio
WebP Image Encode
dav1d
AOM AV1
Opus Codec Encoding
WavPack Audio Encoding
GNU Radio
NCNN
oneDNN
Zstd Compression
JPEG XL
libavif avifenc
Zstd Compression
Basis Universal
Timed Godot Game Engine Compilation
oneDNN
libavif avifenc
GNU Radio
oneDNN
Basis Universal
oneDNN
Mobile Neural Network:
  inception-v3
  mobilenet-v1-1.0
  MobileNetV2_224
  resnet-v2-50
  SqueezeNetV1.0
Smallpt
Crafty
ONNX Runtime
Redis
oneDNN
GNU Radio
Zstd Compression