Ryzen 9 5950X AOCC 3.0 Compiler Benchmarking

Benchmarks for a future article.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2103167-PTS-RYZEN95988
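A minimal sketch of that comparison flow on a Debian/Ubuntu-style system (the install step and package name are assumptions about your distribution; only the benchmark command itself comes from this result file):

  sudo apt-get install phoronix-test-suite                # assumed packaging; install method varies by distribution
  phoronix-test-suite benchmark 2103167-PTS-RYZEN95988    # runs the same tests and compares against this result file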
Test categories represented in this result file:

Audio Encoding 3 Tests
AV1 4 Tests
C++ Boost Tests 3 Tests
Chess Test Suite 2 Tests
Timed Code Compilation 2 Tests
C/C++ Compiler Tests 16 Tests
Compression Tests 2 Tests
CPU Massive 16 Tests
Creator Workloads 28 Tests
Cryptography 2 Tests
Database Test Suite 2 Tests
Encoding 10 Tests
Game Development 4 Tests
HPC - High Performance Computing 8 Tests
Imaging 7 Tests
Machine Learning 6 Tests
Multi-Core 17 Tests
OpenMPI Tests 2 Tests
Programmer / Developer System Benchmarks 5 Tests
Python Tests 4 Tests
Raytracing 3 Tests
Renderers 4 Tests
Scientific Computing 2 Tests
Software Defined Radio 2 Tests
Server 3 Tests
Server CPU Tests 12 Tests
Single-Threaded 2 Tests
Speech 2 Tests
Telephony 2 Tests
Texture Compression 3 Tests
Video Encoding 7 Tests
Common Workstation Benchmarks 2 Tests

Run Management

Highlight
Result
Hide
Result
Result
Identifier
Performance Per
Dollar
Date
Run
  Test
  Duration
GCC 10.2
March 14 2021
  14 Hours, 26 Minutes
LLVM Clang 12
March 15 2021
  11 Hours, 8 Minutes
AMD AOCC 2.3
March 14 2021
  10 Hours, 49 Minutes
AMD AOCC 3.0
March 15 2021
  10 Hours, 54 Minutes
Invert Hiding All Results Option
  11 Hours, 49 Minutes



System Configuration

Processor: AMD Ryzen 9 5950X 16-Core @ 3.40GHz (16 Cores / 32 Threads)
Motherboard: ASUS ROG CROSSHAIR VIII HERO (WI-FI) (3204 BIOS)
Chipset: AMD Starship/Matisse
Memory: 32GB
Disk: 2000GB Corsair Force MP600 + 2000GB
Graphics: AMD NAVY_FLOUNDER 12GB (2855/1000MHz)
Audio: AMD Device ab28
Monitor: ASUS MG28U
Network: Realtek RTL8125 2.5GbE + Intel I211 + Intel Wi-Fi 6 AX200
OS: Ubuntu 20.10
Kernel: 5.11.6-051106-generic (x86_64)
Desktop: GNOME Shell 3.38.2
Display Server: X Server 1.20.9
OpenGL: 4.6 Mesa 21.1.0-devel (git-684f97d 2021-03-12 groovy-oibaf-ppa) (LLVM 11.0.1)
Vulkan: 1.2.168
Compilers: GCC 10.2.0 + Clang 12.0.0-++rc3-1~exp1~oibaf~g + Clang 11.0.0 + Clang 12.0.0
File-System: ext4
Screen Resolution: 3840x2160

System Notes
- Transparent Huge Pages: madvise
- CXXFLAGS="-O3 -march=native" CFLAGS="-O3 -march=native"
- GCC 10.2: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- AMD AOCC 2.3: Optimized build with assertions; Default target: x86_64-unknown-linux-gnu; Host CPU: (unknown)
- AMD AOCC 3.0: Optimized build with assertions; Default target: x86_64-unknown-linux-gnu; Host CPU: (unknown)
- Scaling Governor: acpi-cpufreq performance (Boost: Enabled)
- CPU Microcode: 0xa201009
- Python 3.8.6
- Security: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional IBRS_FW STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected
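All four compiler runs used CFLAGS/CXXFLAGS of "-O3 -march=native" per the notes above. As a rough, hypothetical sketch of how one of the AOCC runs could be reproduced, assuming AOCC is installed under the path shown and that the Phoronix Test Suite honors CC/CXX environment overrides (neither assumption comes from this result file):

  # Hypothetical AOCC install location; adjust to the actual installation
  source /opt/AMD/aocc-compiler-3.0.0/setenv_AOCC.sh
  export CC=clang CXX=clang++
  export CFLAGS="-O3 -march=native" CXXFLAGS="-O3 -march=native"
  phoronix-test-suite benchmark pts/build-llvm pts/c-ray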

Logarithmic Result Overview (GCC 10.2 vs. LLVM Clang 12 vs. AMD AOCC 2.3 vs. AMD AOCC 3.0): overview chart spanning Sysbench, Timed LLVM Compilation, C-Ray, LibRaw, Etcpak, Ogg Audio Encoding, Google SynthMark, GraphicsMagick, SVT-AV1, TSCP, QuantLib, NCNN, TNN, JPEG XL Decoding, POV-Ray, Zstd Compression, SVT-VP9, libavif avifenc, JPEG XL, ONNX Runtime, ASTC Encoder, WebP2 Image Encode, WebP Image Encode, Basis Universal, dav1d, LZ4 Compression, Tachyon, x265, simdjson, Timed MrBayes Analysis, RNNoise, Ngspice, Redis, Gcrypt Library, Liquid-DSP, Crypto++, x264, WavPack Audio Encoding, GNU Radio, and Timed Godot Game Engine Compilation; chart not reproduced here.

[Condensed per-test result table omitted; the individual test results follow below, with the complete data set available on OpenBenchmarking.org.]

Sysbench

This is a benchmark of Sysbench with the built-in CPU and memory sub-tests. Sysbench is a scriptable multi-threaded benchmark tool based on LuaJIT. Learn more via the OpenBenchmarking.org test page.
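The same built-in CPU workload (prime-number calculation) can also be run directly with the sysbench CLI; a minimal sketch, with the thread count and duration chosen here as assumptions rather than the exact arguments used by the test profile:

  sysbench cpu --threads=32 --time=30 run    # built-in prime-calculation CPU test across 32 threads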

Sysbench 1.0.20 - Test: CPU (Events Per Second, More Is Better)
  LLVM Clang 12: 2445437.63 (SE +/- 5355.39, N = 3)
  GCC 10.2: 91743.72 (SE +/- 115.96, N = 3)
  AMD AOCC 3.0: 210533984.51 (SE +/- 204338.54, N = 3)
  AMD AOCC 2.3: 210804861.92 (SE +/- 301436.60, N = 3)
  1. (CC) gcc options: -pthread -O2 -funroll-loops -O3 -march=native -rdynamic -ldl -laio -lm

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet," focused on providing open-source, very fast ETC and S3 texture compression support. Learn more via the OpenBenchmarking.org test page.

Etcpak 0.7 - Configuration: DXT1 (Mpx/s, More Is Better)
  LLVM Clang 12: 3669.19 (SE +/- 26.84, N = 3)
  GCC 10.2: 1546.30 (SE +/- 2.21, N = 3)
  AMD AOCC 3.0: 3583.01 (SE +/- 7.06, N = 3)
  AMD AOCC 2.3: 2986.71 (SE +/- 5.75, N = 3)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler. Learn more via the OpenBenchmarking.org test page.

Timed LLVM Compilation 10.0 - Time To Compile (Seconds, Fewer Is Better)
  LLVM Clang 12: 302.70 (SE +/- 1.23, N = 3)
  GCC 10.2: 370.57 (SE +/- 2.79, N = 3)
  AMD AOCC 3.0: 610.96 (SE +/- 5.16, N = 3)
  AMD AOCC 2.3: 576.19 (SE +/- 5.39, N = 3)

C-Ray

This is a test of C-Ray, a simple raytracer designed to test floating-point CPU performance. The test is multi-threaded (16 threads per core), shoots 8 rays per pixel for anti-aliasing, and generates a 1600 x 1200 image. Learn more via the OpenBenchmarking.org test page.

C-Ray 1.1 - Total Time - 4K, 16 Rays Per Pixel (Seconds, Fewer Is Better)
  LLVM Clang 12: 44.89 (SE +/- 0.08, N = 3)
  GCC 10.2: 25.09 (SE +/- 0.07, N = 3)
  AMD AOCC 3.0: 44.33 (SE +/- 0.08, N = 3)
  AMD AOCC 2.3: 44.53 (SE +/- 0.06, N = 3)
  1. (CC) gcc options: -lm -lpthread -O3 -march=native

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33 - Operation: Sharpen (Iterations Per Minute, More Is Better)
  LLVM Clang 12: 237 (SE +/- 0.58, N = 3)
  GCC 10.2: 375 (SE +/- 1.00, N = 3)
  AMD AOCC 3.0: 240 (SE +/- 0.67, N = 3)
  AMD AOCC 2.3: 241 (SE +/- 0.58, N = 3)
  1. (CC) gcc options: -fopenmp -O3 -march=native -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

LibRaw

LibRaw is a RAW image decoder for digital camera photos. This test profile runs LibRaw's post-processing benchmark. Learn more via the OpenBenchmarking.org test page.

LibRaw 0.20 - Post-Processing Benchmark (Mpix/sec, More Is Better)
  LLVM Clang 12: 54.14 (SE +/- 0.11, N = 3)
  GCC 10.2: 78.66 (SE +/- 0.16, N = 3)
  AMD AOCC 3.0: 52.68 (SE +/- 0.09, N = 3)
  AMD AOCC 2.3: 50.37 (SE +/- 0.08, N = 3)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -ljpeg -lz -lm

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: regnety_400m (ms, Fewer Is Better)
  LLVM Clang 12: 17.06 (SE +/- 0.06, N = 3) [-lomp - MIN: 16.85 / MAX: 20.53]
  GCC 10.2: 17.61 (SE +/- 0.06, N = 15) [-lgomp - MIN: 16.94 / MAX: 25.97]
  AMD AOCC 3.0: 12.30 (SE +/- 0.09, N = 3) [-lomp - MIN: 11.96 / MAX: 17.61]
  AMD AOCC 2.3: 12.19 (SE +/- 0.11, N = 3) [-lomp - MIN: 11.89 / MAX: 13.6]
  1. (CXX) g++ options: -O3 -march=native -rdynamic -lpthread

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33 - Operation: HWB Color Space (Iterations Per Minute, More Is Better)
  LLVM Clang 12: 844 (SE +/- 1.00, N = 3)
  GCC 10.2: 1115 (SE +/- 1.33, N = 3)
  AMD AOCC 3.0: 805 (SE +/- 3.06, N = 3)
  AMD AOCC 2.3: 848
  1. (CC) gcc options: -fopenmp -O3 -march=native -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.
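For reference, astcenc 2.x is a command-line tool and the Medium/Thorough presets below map onto its quality switches; a rough sketch (binary name, file names, and the 6x6 block size are placeholders, not the exact test-profile arguments):

  astcenc -cl photo.png photo.astc 6x6 -medium      # compress an LDR image with the Medium preset
  astcenc -cl photo.png photo.astc 6x6 -thorough    # same image with the slower Thorough preset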

ASTC Encoder 2.4 - Preset: Thorough (Seconds, Fewer Is Better)
  LLVM Clang 12: 9.4996 (SE +/- 0.0075, N = 3)
  GCC 10.2: 6.9922 (SE +/- 0.0057, N = 3)
  AMD AOCC 3.0: 9.3493 (SE +/- 0.0148, N = 3)
  AMD AOCC 2.3: 9.2012 (SE +/- 0.0090, N = 3)
  1. (CXX) g++ options: -O3 -march=native -flto -pthread

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet," focused on providing open-source, very fast ETC and S3 texture compression support. Learn more via the OpenBenchmarking.org test page.

Etcpak 0.7 - Configuration: ETC1 (Mpx/s, More Is Better)
  LLVM Clang 12: 383.47 (SE +/- 0.14, N = 3)
  GCC 10.2: 386.56 (SE +/- 0.37, N = 3)
  AMD AOCC 3.0: 286.93 (SE +/- 0.06, N = 3)
  AMD AOCC 2.3: 285.29 (SE +/- 1.16, N = 3)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread

SVT-AV1

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 CPU-based multi-threaded video encoder for the AV1 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 0.8 - Encoder Mode: Enc Mode 8 - Input: 1080p (Frames Per Second, More Is Better)
  LLVM Clang 12: 65.31 (SE +/- 0.13, N = 3)
  GCC 10.2: 51.77 (SE +/- 0.24, N = 3)
  AMD AOCC 3.0: 64.28 (SE +/- 0.66, N = 3)
  AMD AOCC 2.3: 65.75 (SE +/- 0.13, N = 3)
  1. (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33 - Operation: Resizing (Iterations Per Minute, More Is Better)
  LLVM Clang 12: 1789 (SE +/- 1.15, N = 3)
  GCC 10.2: 2165 (SE +/- 1.45, N = 3)
  AMD AOCC 3.0: 1720 (SE +/- 2.65, N = 3)
  AMD AOCC 2.3: 1824
  1. (CC) gcc options: -fopenmp -O3 -march=native -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, Fewer Is Better)
  LLVM Clang 12: 3.79 (SE +/- 0.04, N = 3) [-lomp - MIN: 3.63 / MAX: 5.2]
  GCC 10.2: 4.43 (SE +/- 0.01, N = 15) [-lgomp - MIN: 4.19 / MAX: 11.09]
  AMD AOCC 3.0: 3.53 (SE +/- 0.06, N = 4) [-lomp - MIN: 3.27 / MAX: 4.75]
  AMD AOCC 2.3: 3.52 (SE +/- 0.03, N = 3) [-lomp - MIN: 3.34 / MAX: 4.84]
  1. (CXX) g++ options: -O3 -march=native -rdynamic -lpthread

NCNN 20201218 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, Fewer Is Better)
  LLVM Clang 12: 3.33 (SE +/- 0.05, N = 3) [-lomp - MIN: 3.19 / MAX: 5.6]
  GCC 10.2: 3.85 (SE +/- 0.02, N = 15) [-lgomp - MIN: 3.74 / MAX: 10.85]
  AMD AOCC 3.0: 3.07 (SE +/- 0.07, N = 4) [-lomp - MIN: 2.9 / MAX: 4.41]
  AMD AOCC 2.3: 3.06 (SE +/- 0.03, N = 3) [-lomp - MIN: 2.98 / MAX: 4.3]
  1. (CXX) g++ options: -O3 -march=native -rdynamic -lpthread

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.2.3 - Target: CPU - Model: MobileNet v2 (ms, Fewer Is Better)
  LLVM Clang 12: 270.79 (SE +/- 0.58, N = 3) [-fopenmp=libomp - MIN: 268.42 / MAX: 272.22]
  GCC 10.2: 216.28 (SE +/- 0.56, N = 3) [-fopenmp - MIN: 215.1 / MAX: 218.26]
  AMD AOCC 3.0: 260.66 (SE +/- 0.69, N = 3) [-fopenmp=libomp - MIN: 257.51 / MAX: 262.88]
  AMD AOCC 2.3: 252.45 (SE +/- 0.35, N = 3) [-fopenmp=libomp - MIN: 250.25 / MAX: 255.53]
  1. (CXX) g++ options: -O3 -march=native -pthread -fvisibility=hidden -rdynamic -ldl

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: mnasnet (ms, Fewer Is Better)
  LLVM Clang 12: 3.45 (SE +/- 0.02, N = 3) [-lomp - MIN: 3.37 / MAX: 4.6]
  GCC 10.2: 3.93 (SE +/- 0.02, N = 15) [-lgomp - MIN: 3.71 / MAX: 6.06]
  AMD AOCC 3.0: 3.18 (SE +/- 0.03, N = 4) [-lomp - MIN: 3.04 / MAX: 4.48]
  AMD AOCC 2.3: 3.16 (SE +/- 0.03, N = 3) [-lomp - MIN: 3.06 / MAX: 4.05]
  1. (CXX) g++ options: -O3 -march=native -rdynamic -lpthread

Ogg Audio Encoding

This test times how long it takes to encode a sample WAV file to Ogg format using the reference Xiph.org tools/libraries. Learn more via the OpenBenchmarking.org test page.
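The encode being timed is the kind of operation performed by oggenc from the Xiph.org vorbis-tools; a minimal sketch (the quality level and file names are placeholders and may not match the test profile exactly):

  oggenc -q 5 sample.wav -o sample.ogg    # encode a WAV file to Ogg Vorbis at quality 5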

Ogg Audio Encoding 1.3.4 - WAV To Ogg (Seconds, Fewer Is Better)
  LLVM Clang 12: 13.37 (SE +/- 0.12, N = 3)
  GCC 10.2: 13.58 (SE +/- 0.04, N = 3)
  AMD AOCC 3.0: 16.54 (SE +/- 0.09, N = 3)
  AMD AOCC 2.3: 16.56 (SE +/- 0.08, N = 3)
  1. (CC) gcc options: -O2 -ffast-math -fsigned-char -O3 -march=native

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.4 - Preset: Medium (Seconds, Fewer Is Better)
  LLVM Clang 12: 3.5076 (SE +/- 0.0018, N = 3)
  GCC 10.2: 4.0524 (SE +/- 0.0178, N = 3)
  AMD AOCC 3.0: 3.4040 (SE +/- 0.0017, N = 3)
  AMD AOCC 2.3: 3.2899 (SE +/- 0.0273, N = 3)
  1. (CXX) g++ options: -O3 -march=native -flto -pthread

Google SynthMark

SynthMark is a cross-platform tool for benchmarking CPU performance under a variety of real-time audio workloads. It uses a polyphonic synthesizer model to provide standardized tests for latency, jitter, and computational throughput. Learn more via the OpenBenchmarking.org test page.

Google SynthMark 20201109 - Test: VoiceMark_100 (Voices, More Is Better)
  LLVM Clang 12: 795.81 (SE +/- 4.01, N = 3)
  GCC 10.2: 966.30 (SE +/- 1.26, N = 3)
  AMD AOCC 3.0: 789.22 (SE +/- 5.41, N = 3)
  AMD AOCC 2.3: 807.37 (SE +/- 5.04, N = 3)
  1. (CXX) g++ options: -lm -lpthread -std=c++11 -Ofast

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
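For orientation, the compression levels and long mode referenced below correspond to standard zstd command-line switches; a rough sketch (output file names are placeholders, and -T0 simply uses all available threads):

  zstd -8 --long -T0 FreeBSD-12.2-RELEASE-amd64-memstick.img -o disk.img.zst    # level 8 with long-distance matching
  zstd -d disk.img.zst -o disk.img                                              # decompression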

Zstd Compression 1.4.9 - Compression Level: 3, Long Mode - Compression Speed (MB/s, More Is Better)
  LLVM Clang 12: 1191.6 (SE +/- 1.19, N = 3)
  GCC 10.2: 1425.9 (SE +/- 2.43, N = 3)
  AMD AOCC 3.0: 1186.0 (SE +/- 2.80, N = 3)
  AMD AOCC 2.3: 1166.4 (SE +/- 4.71, N = 3)
  1. (CC) gcc options: -O3 -march=native -pthread -lz -llzma

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33 - Operation: Rotate (Iterations Per Minute, More Is Better)
  LLVM Clang 12: 1016 (SE +/- 1.86, N = 3)
  GCC 10.2: 1056 (SE +/- 3.51, N = 3)
  AMD AOCC 3.0: 867 (SE +/- 2.03, N = 3)
  AMD AOCC 2.3: 928 (SE +/- 8.67, N = 3)
  1. (CC) gcc options: -fopenmp -O3 -march=native -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: efficientnet-b0 (ms, Fewer Is Better)
  LLVM Clang 12: 4.80 (SE +/- 0.02, N = 3) [-lomp - MIN: 4.71 / MAX: 6.61]
  GCC 10.2: 5.32 (SE +/- 0.02, N = 15) [-lgomp - MIN: 5.15 / MAX: 13.83]
  AMD AOCC 3.0: 4.50 (SE +/- 0.03, N = 4) [-lomp - MIN: 4.34 / MAX: 5.7]
  AMD AOCC 2.3: 4.53 (SE +/- 0.07, N = 3) [-lomp - MIN: 4.35 / MAX: 6.86]
  1. (CXX) g++ options: -O3 -march=native -rdynamic -lpthread

TSCP

This is a performance test of TSCP, Tom Kerrigan's Simple Chess Program, which has a built-in performance benchmark. Learn more via the OpenBenchmarking.org test page.

TSCP 1.81 - AI Chess Performance (Nodes Per Second, More Is Better)
  LLVM Clang 12: 2148154 (SE +/- 4267.44, N = 5)
  GCC 10.2: 1965773 (SE +/- 7442.75, N = 5)
  AMD AOCC 3.0: 2283512 (SE +/- 3546.09, N = 5)
  AMD AOCC 2.3: 2314225 (SE +/- 4348.49, N = 5)
  1. (CC) gcc options: -O3 -march=native

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: blazeface (ms, Fewer Is Better)
  LLVM Clang 12: 1.73 (SE +/- 0.02, N = 3) [-lomp - MIN: 1.68 / MAX: 1.79]
  GCC 10.2: 1.83 (SE +/- 0.01, N = 15) [-lgomp - MIN: 1.77 / MAX: 3.9]
  AMD AOCC 3.0: 1.56 (SE +/- 0.02, N = 4) [-lomp - MIN: 1.46 / MAX: 6.9]
  AMD AOCC 2.3: 1.57 (SE +/- 0.01, N = 3) [-lomp - MIN: 1.54 / MAX: 1.75]
  1. (CXX) g++ options: -O3 -march=native -rdynamic -lpthread

QuantLib

QuantLib is an open-source library/framework for quantitative finance, covering modeling, trading, and risk management scenarios. QuantLib is written in C++ with Boost, and its built-in benchmark reports a QuantLib Benchmark Index score. Learn more via the OpenBenchmarking.org test page.

QuantLib 1.21 (MFLOPS, More Is Better)
  LLVM Clang 12: 3538.5 (SE +/- 49.56, N = 3)
  GCC 10.2: 3196.9 (SE +/- 33.41, N = 5)
  AMD AOCC 3.0: 3646.4 (SE +/- 27.64, N = 10)
  AMD AOCC 2.3: 3710.4 (SE +/- 28.46, N = 10)
  1. (CXX) g++ options: -O3 -march=native -rdynamic

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33 - Operation: Noise-Gaussian (Iterations Per Minute, More Is Better)
  LLVM Clang 12: 398
  GCC 10.2: 454 (SE +/- 0.67, N = 3)
  AMD AOCC 3.0: 392 (SE +/- 0.33, N = 3)
  AMD AOCC 2.3: 402 (SE +/- 0.33, N = 3)
  1. (CC) gcc options: -fopenmp -O3 -march=native -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet," focused on providing open-source, very fast ETC and S3 texture compression support. Learn more via the OpenBenchmarking.org test page.

Etcpak 0.7 - Configuration: ETC2 (Mpx/s, More Is Better)
  LLVM Clang 12: 272.99 (SE +/- 0.09, N = 3)
  GCC 10.2: 245.04 (SE +/- 1.65, N = 3)
  AMD AOCC 3.0: 242.03 (SE +/- 0.45, N = 3)
  AMD AOCC 2.3: 236.47 (SE +/- 2.43, N = 3)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

Liquid-DSP 2021.01.31 - Threads: 32 - Buffer Length: 256 - Filter Length: 57 (samples/s, More Is Better)
  LLVM Clang 12: 1335233333 (SE +/- 240370.09, N = 3)
  GCC 10.2: 1164966667 (SE +/- 497772.82, N = 3)
  AMD AOCC 3.0: 1334900000 (SE +/- 1422439.22, N = 3)
  AMD AOCC 2.3: 1332333333 (SE +/- 1125956.38, N = 3)
  1. (CC) gcc options: -O3 -march=native -pthread -lm -lc -lliquid

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.6 - Model: shufflenet-v2-10 - Device: OpenMP CPU (Inferences Per Minute, More Is Better)
  LLVM Clang 12: 14972 (SE +/- 123.46, N = 12) [-fopenmp=libomp]
  GCC 10.2: 15049 (SE +/- 134.84, N = 3) [-fopenmp]
  AMD AOCC 3.0: 15474 (SE +/- 177.70, N = 3) [-fopenmp=libomp]
  AMD AOCC 2.3: 17105 (SE +/- 193.85, N = 4) [-fopenmp=libomp]
  1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -ldl -lrt

JPEG XL Decoding

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities, with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is suited for JPEG XL decode performance testing to a PNG output file; the pts/jpegxl test covers encode performance. Learn more via the OpenBenchmarking.org test page.

JPEG XL Decoding 0.3.3 - CPU Threads: 1 (MP/s, More Is Better)
  LLVM Clang 12: 62.27 (SE +/- 0.04, N = 3)
  GCC 10.2: 56.53 (SE +/- 0.05, N = 3)
  AMD AOCC 3.0: 59.92 (SE +/- 0.03, N = 3)
  AMD AOCC 2.3: 64.34 (SE +/- 0.11, N = 3)

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
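The "Quality 100, Highest Compression" style settings below roughly correspond to cwebp's quality and method switches; a minimal sketch (file names are placeholders, and the exact flags used by the test profile are an assumption):

  cwebp -q 100 -m 6 sample.jpg -o sample.webp    # quality 100 with the slowest, highest-compression method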

WebP Image Encode 1.1 - Encode Settings: Quality 100, Highest Compression (Encode Time - Seconds, Fewer Is Better)
  LLVM Clang 12: 4.674 (SE +/- 0.055, N = 3)
  GCC 10.2: 5.242 (SE +/- 0.018, N = 3)
  AMD AOCC 3.0: 4.937 (SE +/- 0.020, N = 3)
  AMD AOCC 2.3: 4.609 (SE +/- 0.016, N = 3)
  1. (CC) gcc options: -fvisibility=hidden -O3 -march=native -pthread -lm -ljpeg -lpng16 -ltiff

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) run on the CPU with a sample 1080p video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.1-rc - Encoder Mode: Speed 0 Two-Pass (Frames Per Second, More Is Better)
  LLVM Clang 12: 0.42 (SE +/- 0.00, N = 3)
  GCC 10.2: 0.37 (SE +/- 0.00, N = 3)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: mobilenet (ms, Fewer Is Better)
  LLVM Clang 12: 11.53 (SE +/- 0.05, N = 3) [-lomp - MIN: 11.09 / MAX: 12.2]
  GCC 10.2: 12.42 (SE +/- 0.16, N = 15) [-lgomp - MIN: 11.7 / MAX: 20.08]
  AMD AOCC 3.0: 11.28 (SE +/- 0.12, N = 4) [-lomp - MIN: 10.61 / MAX: 20.99]
  AMD AOCC 2.3: 10.96 (SE +/- 0.09, N = 3) [-lomp - MIN: 10.51 / MAX: 16.79]
  1. (CXX) g++ options: -O3 -march=native -rdynamic -lpthread

SVT-AV1

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 CPU-based multi-threaded video encoder for the AV1 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 0.8 - Encoder Mode: Enc Mode 4 - Input: 1080p (Frames Per Second, More Is Better)
  LLVM Clang 12: 6.823 (SE +/- 0.033, N = 3)
  GCC 10.2: 6.137 (SE +/- 0.014, N = 3)
  AMD AOCC 3.0: 6.917 (SE +/- 0.055, N = 3)
  AMD AOCC 2.3: 6.859 (SE +/- 0.004, N = 3)
  1. (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.9.0 - Encoder Speed: 6, Lossless (Seconds, Fewer Is Better)
  LLVM Clang 12: 27.85 (SE +/- 0.11, N = 3)
  GCC 10.2: 30.98 (SE +/- 0.06, N = 3)
  AMD AOCC 3.0: 27.91 (SE +/- 0.03, N = 3)
  AMD AOCC 2.3: 27.62 (SE +/- 0.10, N = 3)
  1. (CXX) g++ options: -O3 -fPIC -lm

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: shufflenet-v2 (ms, Fewer Is Better)
  LLVM Clang 12: 4.04 (SE +/- 0.05, N = 3) [-lomp - MIN: 3.88 / MAX: 5.03]
  GCC 10.2: 4.23 (SE +/- 0.01, N = 15) [-lgomp - MIN: 4.15 / MAX: 9.05]
  AMD AOCC 3.0: 3.87 (SE +/- 0.06, N = 4) [-lomp - MIN: 3.67 / MAX: 12.94]
  AMD AOCC 2.3: 3.79 (SE +/- 0.05, N = 3) [-lomp - MIN: 3.64 / MAX: 4.86]
  1. (CXX) g++ options: -O3 -march=native -rdynamic -lpthread

JPEG XL Decoding

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities, with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is suited for JPEG XL decode performance testing to a PNG output file; the pts/jpegxl test covers encode performance. Learn more via the OpenBenchmarking.org test page.

JPEG XL Decoding 0.3.3 - CPU Threads: All (MP/s, More Is Better)
  LLVM Clang 12: 196.34 (SE +/- 0.05, N = 3)
  GCC 10.2: 210.99 (SE +/- 0.29, N = 3)
  AMD AOCC 3.0: 191.91 (SE +/- 0.21, N = 3)
  AMD AOCC 2.3: 213.67 (SE +/- 0.40, N = 3)

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: squeezenet_ssd (ms, Fewer Is Better)
  LLVM Clang 12: 12.50 (SE +/- 0.13, N = 3) [-lomp - MIN: 12.23 / MAX: 16.6]
  GCC 10.2: 13.77 (SE +/- 0.06, N = 15) [-lgomp - MIN: 13.25 / MAX: 23.45]
  AMD AOCC 3.0: 12.40 (SE +/- 0.24, N = 4) [-lomp - MIN: 11.72 / MAX: 15.69]
  AMD AOCC 2.3: 12.74 (SE +/- 0.10, N = 3) [-lomp - MIN: 12 / MAX: 19.89]
  1. (CXX) g++ options: -O3 -march=native -rdynamic -lpthread

JPEG XL

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.

JPEG XL 0.3.3 - Input: JPEG - Encode Speed: 8 (MP/s, More Is Better)
  LLVM Clang 12: 36.44 (SE +/- 0.06, N = 3) [-Xclang -mrelax-all]
  GCC 10.2: 38.13 (SE +/- 0.02, N = 3)
  AMD AOCC 3.0: 34.34 (SE +/- 0.05, N = 3) [-Xclang -mrelax-all]
  AMD AOCC 2.3: 35.87 (SE +/- 0.09, N = 3) [-Xclang -mrelax-all]
  1. (CXX) g++ options: -O3 -march=native -funwind-tables -O2 -pthread -fPIE -pie -ldl

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.
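The LZ4 compression levels below map onto the lz4 command-line levels; a minimal sketch (file names are placeholders):

  lz4 -9 ubuntu.iso ubuntu.iso.lz4     # level 9 (high) compression
  lz4 -d ubuntu.iso.lz4 restored.iso   # decompression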

LZ4 Compression 1.9.3 - Compression Level: 9 - Compression Speed (MB/s, More Is Better)
  LLVM Clang 12: 64.43 (SE +/- 0.14, N = 3)
  GCC 10.2: 71.13 (SE +/- 0.68, N = 6)
  AMD AOCC 3.0: 68.40 (SE +/- 0.68, N = 6)
  AMD AOCC 2.3: 68.80 (SE +/- 0.51, N = 3)
  1. (CC) gcc options: -O3

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: resnet50 (ms, Fewer Is Better)
  LLVM Clang 12: 23.54 (SE +/- 0.20, N = 3) [-lomp - MIN: 22.92 / MAX: 26.57]
  GCC 10.2: 25.67 (SE +/- 0.21, N = 15) [-lgomp - MIN: 24.52 / MAX: 35.96]
  AMD AOCC 3.0: 23.29 (SE +/- 0.20, N = 4) [-lomp - MIN: 22.43 / MAX: 25.17]
  AMD AOCC 2.3: 23.31 (SE +/- 0.09, N = 3) [-lomp - MIN: 22.75 / MAX: 33.51]
  1. (CXX) g++ options: -O3 -march=native -rdynamic -lpthread

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.9 - Compression Level: 19, Long Mode - Decompression Speed (MB/s, More Is Better)
  LLVM Clang 12: 3957.9 (SE +/- 25.47, N = 3)
  GCC 10.2: 4350.9 (SE +/- 72.38, N = 3)
  AMD AOCC 3.0: 3978.2 (SE +/- 39.89, N = 3)
  AMD AOCC 2.3: 4024.7 (SE +/- 16.46, N = 3)
  1. (CC) gcc options: -O3 -march=native -pthread -lz -llzma

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 0.8.2 - Throughput Test: LargeRandom (GB/s, More Is Better)
  LLVM Clang 12: 1.14 (SE +/- 0.01, N = 3)
  GCC 10.2: 1.22 (SE +/- 0.01, N = 3)
  AMD AOCC 3.0: 1.12 (SE +/- 0.01, N = 3)
  AMD AOCC 2.3: 1.11 (SE +/- 0.01, N = 3)
  1. (CXX) g++ options: -O3 -march=native -pthread

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.9 - Compression Level: 8, Long Mode - Compression Speed (MB/s, More Is Better)
  LLVM Clang 12: 1039.1 (SE +/- 5.17, N = 3)
  GCC 10.2: 1122.6 (SE +/- 2.15, N = 3)
  AMD AOCC 3.0: 1024.5 (SE +/- 5.57, N = 3)
  AMD AOCC 2.3: 1026.1 (SE +/- 4.53, N = 3)
  1. (CC) gcc options: -O3 -march=native -pthread -lz -llzma

JPEG XL

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.

JPEG XL 0.3.3 - Input: PNG - Encode Speed: 8 (MP/s, More Is Better)
  LLVM Clang 12: 1.06 (SE +/- 0.01, N = 3) [-Xclang -mrelax-all]
  GCC 10.2: 1.14 (SE +/- 0.00, N = 3)
  AMD AOCC 3.0: 1.04 (SE +/- 0.00, N = 3) [-Xclang -mrelax-all]
  AMD AOCC 2.3: 1.04 (SE +/- 0.01, N = 3) [-Xclang -mrelax-all]
  1. (CXX) g++ options: -O3 -march=native -funwind-tables -O2 -pthread -fPIE -pie -ldl

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 0.8.2 - Throughput Test: PartialTweets (GB/s, More Is Better)
  LLVM Clang 12: 6.18 (SE +/- 0.03, N = 3)
  GCC 10.2: 5.64 (SE +/- 0.05, N = 3)
  AMD AOCC 3.0: 6.04 (SE +/- 0.01, N = 3)
  AMD AOCC 2.3: 5.92 (SE +/- 0.03, N = 3)
  1. (CXX) g++ options: -O3 -march=native -pthread

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.13 - Settings: UASTC Level 0 (Seconds, Fewer Is Better)
  LLVM Clang 12: 5.522 (SE +/- 0.015, N = 3)
  GCC 10.2: 5.157 (SE +/- 0.023, N = 3)
  AMD AOCC 3.0: 5.639 (SE +/- 0.015, N = 3)
  AMD AOCC 2.3: 5.453 (SE +/- 0.009, N = 3)
  1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 0.8.2 - Throughput Test: DistinctUserID (GB/s, More Is Better)
  LLVM Clang 12: 6.26 (SE +/- 0.04, N = 3)
  GCC 10.2: 5.73 (SE +/- 0.02, N = 3)
  AMD AOCC 3.0: 6.23 (SE +/- 0.02, N = 3)
  AMD AOCC 2.3: 6.12 (SE +/- 0.06, N = 3)
  1. (CXX) g++ options: -O3 -march=native -pthread

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.
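
For context, a minimal sketch of setting up an inference session with the ONNX Runtime C++ API on Linux; the model path, thread count, and optimization level are illustrative placeholders, not the exact configuration used by this test profile.

    #include <onnxruntime_cxx_api.h>

    int main() {
        // The environment holds process-wide logging state.
        Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "bench");
        Ort::SessionOptions options;
        options.SetIntraOpNumThreads(16);  // e.g. one thread per physical core
        options.SetGraphOptimizationLevel(GraphOptimizationLevel::ORT_ENABLE_ALL);
        // Load the model; inference would then go through session.Run(...).
        Ort::Session session(env, "yolov4.onnx", options);
        return 0;
    }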

OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.6Model: yolov4 - Device: OpenMP CPULLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.3100200300400500SE +/- 2.13, N = 3SE +/- 1.96, N = 3SE +/- 1.36, N = 3SE +/- 2.95, N = 3426433465456-fopenmp=libomp-fopenmp-fopenmp=libomp-fopenmp=libomp1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -ldl -lrt
OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.6Model: yolov4 - Device: OpenMP CPULLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.380160240320400Min: 423.5 / Avg: 426.33 / Max: 430.5Min: 429 / Avg: 432.83 / Max: 435.5Min: 462 / Avg: 464.67 / Max: 466.5Min: 450.5 / Avg: 456.33 / Max: 4601. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -ldl -lrt

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 2021.01.31Threads: 1 - Buffer Length: 256 - Filter Length: 57LLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.320M40M60M80M100MSE +/- 171803.51, N = 3SE +/- 828458.69, N = 5SE +/- 601612.28, N = 3SE +/- 78876.13, N = 3777943338184400078734000750316671. (CC) gcc options: -O3 -march=native -pthread -lm -lc -lliquid
OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 2021.01.31Threads: 1 - Buffer Length: 256 - Filter Length: 57LLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.314M28M42M56M70MMin: 77601000 / Avg: 77794333.33 / Max: 78137000Min: 78871000 / Avg: 81844000 / Max: 83565000Min: 77542000 / Avg: 78734000 / Max: 79472000Min: 74875000 / Avg: 75031666.67 / Max: 751260001. (CC) gcc options: -O3 -march=native -pthread -lm -lc -lliquid

POV-Ray

This is a test of POV-Ray, the Persistence of Vision Raytracer. POV-Ray is used to create 3D graphics using ray-tracing. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterPOV-Ray 3.7.0.7Trace TimeLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.3612182430SE +/- 0.04, N = 3SE +/- 0.09, N = 3SE +/- 0.02, N = 3SE +/- 0.01, N = 322.1224.0922.5422.541. (CXX) g++ options: -pipe -O3 -ffast-math -march=native -pthread -lSDL -lXpm -lSM -lICE -lX11 -lIlmImf -lIlmImf-2_5 -lImath-2_5 -lHalf-2_5 -lIex-2_5 -lIexMath-2_5 -lIlmThread-2_5 -lIlmThread -ltiff -ljpeg -lpng -lz -lrt -lm -lboost_thread -lboost_system
OpenBenchmarking.orgSeconds, Fewer Is BetterPOV-Ray 3.7.0.7Trace TimeLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.3612182430Min: 22.06 / Avg: 22.12 / Max: 22.19Min: 23.96 / Avg: 24.09 / Max: 24.27Min: 22.5 / Avg: 22.54 / Max: 22.58Min: 22.52 / Avg: 22.54 / Max: 22.551. (CXX) g++ options: -pipe -O3 -ffast-math -march=native -pthread -lSDL -lXpm -lSM -lICE -lX11 -lIlmImf -lIlmImf-2_5 -lImath-2_5 -lHalf-2_5 -lIex-2_5 -lIexMath-2_5 -lIlmThread-2_5 -lIlmThread -ltiff -ljpeg -lpng -lz -lrt -lm -lboost_thread -lboost_system

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as the eventual successor to WebP. Compared to WebP, WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterWebP2 Image Encode 20210126Encode Settings: Quality 75, Compression Effort 7LLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.3306090120150SE +/- 0.95, N = 3SE +/- 1.06, N = 3SE +/- 0.88, N = 3SE +/- 0.42, N = 3103.01111.80105.72106.381. (CXX) g++ options: -O3 -march=native -fno-rtti -rdynamic -ljpeg -lgif -lwebp -lwebpdemux -lpthread
OpenBenchmarking.orgSeconds, Fewer Is BetterWebP2 Image Encode 20210126Encode Settings: Quality 75, Compression Effort 7LLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.320406080100Min: 101.43 / Avg: 103.01 / Max: 104.72Min: 110.28 / Avg: 111.8 / Max: 113.84Min: 104.4 / Avg: 105.72 / Max: 107.38Min: 105.72 / Avg: 106.38 / Max: 107.171. (CXX) g++ options: -O3 -march=native -fno-rtti -rdynamic -ljpeg -lgif -lwebp -lwebpdemux -lpthread

OpenBenchmarking.orgSeconds, Fewer Is BetterWebP2 Image Encode 20210126Encode Settings: Quality 95, Compression Effort 7LLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.34080120160200SE +/- 0.09, N = 3SE +/- 0.04, N = 3SE +/- 0.56, N = 3SE +/- 0.03, N = 3188.31203.81193.43193.601. (CXX) g++ options: -O3 -march=native -fno-rtti -rdynamic -ljpeg -lgif -lwebp -lwebpdemux -lpthread
OpenBenchmarking.orgSeconds, Fewer Is BetterWebP2 Image Encode 20210126Encode Settings: Quality 95, Compression Effort 7LLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.34080120160200Min: 188.15 / Avg: 188.31 / Max: 188.45Min: 203.76 / Avg: 203.81 / Max: 203.88Min: 192.52 / Avg: 193.43 / Max: 194.45Min: 193.54 / Avg: 193.6 / Max: 193.641. (CXX) g++ options: -O3 -march=native -fno-rtti -rdynamic -ljpeg -lgif -lwebp -lwebpdemux -lpthread

OpenBenchmarking.orgSeconds, Fewer Is BetterWebP2 Image Encode 20210126Encode Settings: Quality 100, Compression Effort 5LLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.3246810SE +/- 0.010, N = 3SE +/- 0.011, N = 3SE +/- 0.010, N = 3SE +/- 0.015, N = 36.7896.4146.3646.2881. (CXX) g++ options: -O3 -march=native -fno-rtti -rdynamic -ljpeg -lgif -lwebp -lwebpdemux -lpthread
OpenBenchmarking.orgSeconds, Fewer Is BetterWebP2 Image Encode 20210126Encode Settings: Quality 100, Compression Effort 5LLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.33691215Min: 6.77 / Avg: 6.79 / Max: 6.81Min: 6.4 / Avg: 6.41 / Max: 6.43Min: 6.35 / Avg: 6.36 / Max: 6.37Min: 6.27 / Avg: 6.29 / Max: 6.321. (CXX) g++ options: -O3 -march=native -fno-rtti -rdynamic -ljpeg -lgif -lwebp -lwebpdemux -lpthread

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.9.0Encoder Speed: 2LLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.3612182430SE +/- 0.04, N = 3SE +/- 0.02, N = 3SE +/- 0.09, N = 3SE +/- 0.10, N = 322.0723.5422.0121.831. (CXX) g++ options: -O3 -fPIC -lm
OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.9.0Encoder Speed: 2LLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.3510152025Min: 22.01 / Avg: 22.07 / Max: 22.14Min: 23.5 / Avg: 23.54 / Max: 23.58Min: 21.84 / Avg: 22.01 / Max: 22.16Min: 21.63 / Avg: 21.83 / Max: 21.961. (CXX) g++ options: -O3 -fPIC -lm

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.
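
The per-pixel operations being timed here parallelize over image rows with OpenMP. The sketch below is not GraphicsMagick code, only an illustration of that pattern: a hypothetical single-channel brighten pass, compiled with -fopenmp as the test's own flags show.

    #include <cstdint>
    #include <vector>

    // Each row is independent, so rows are distributed across the cores.
    void brighten(std::vector<uint8_t>& pixels, int width, int height, int delta) {
        #pragma omp parallel for schedule(static)
        for (int y = 0; y < height; ++y) {
            for (int x = 0; x < width; ++x) {
                int v = pixels[y * width + x] + delta;
                pixels[y * width + x] = static_cast<uint8_t>(v > 255 ? 255 : v);
            }
        }
    }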

OpenBenchmarking.orgIterations Per Minute, More Is BetterGraphicsMagick 1.3.33Operation: SwirlLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.330060090012001500SE +/- 4.48, N = 3SE +/- 3.67, N = 3SE +/- 3.84, N = 3SE +/- 3.71, N = 311081166108311311. (CC) gcc options: -fopenmp -O3 -march=native -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
OpenBenchmarking.orgIterations Per Minute, More Is BetterGraphicsMagick 1.3.33Operation: SwirlLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.32004006008001000Min: 1102 / Avg: 1108.33 / Max: 1117Min: 1162 / Avg: 1165.67 / Max: 1173Min: 1077 / Avg: 1082.67 / Max: 1090Min: 1126 / Avg: 1130.67 / Max: 11381. (CC) gcc options: -fopenmp -O3 -march=native -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBasis Universal 1.13Settings: ETC1SLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.3510152025SE +/- 0.07, N = 3SE +/- 0.04, N = 3SE +/- 0.21, N = 3SE +/- 0.18, N = 321.3819.9021.3721.221. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread
OpenBenchmarking.orgSeconds, Fewer Is BetterBasis Universal 1.13Settings: ETC1SLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.3510152025Min: 21.27 / Avg: 21.38 / Max: 21.5Min: 19.83 / Avg: 19.9 / Max: 19.96Min: 20.95 / Avg: 21.37 / Max: 21.6Min: 20.88 / Avg: 21.22 / Max: 21.511. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.9.0Encoder Speed: 6LLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.3246810SE +/- 0.055, N = 3SE +/- 0.048, N = 3SE +/- 0.012, N = 3SE +/- 0.027, N = 38.3428.9278.3098.3841. (CXX) g++ options: -O3 -fPIC -lm
OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.9.0Encoder Speed: 6LLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.33691215Min: 8.27 / Avg: 8.34 / Max: 8.45Min: 8.85 / Avg: 8.93 / Max: 9.02Min: 8.29 / Avg: 8.31 / Max: 8.33Min: 8.35 / Avg: 8.38 / Max: 8.441. (CXX) g++ options: -O3 -fPIC -lm

JPEG XL

The JPEG XL Image Coding System is designed to provide next-generation image capabilities, with JPEG XL offering better image quality and compression than legacy JPEG. This test profile is currently focused on multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL 0.3.3Input: JPEG - Encode Speed: 5LLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.320406080100SE +/- 0.24, N = 3SE +/- 0.14, N = 3SE +/- 0.09, N = 3SE +/- 0.25, N = 385.6787.3583.5489.51-Xclang -mrelax-all-Xclang -mrelax-all-Xclang -mrelax-all1. (CXX) g++ options: -O3 -march=native -funwind-tables -O2 -pthread -fPIE -pie -ldl
OpenBenchmarking.orgMP/s, More Is BetterJPEG XL 0.3.3Input: JPEG - Encode Speed: 5LLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.320406080100Min: 85.22 / Avg: 85.67 / Max: 86.03Min: 87.09 / Avg: 87.35 / Max: 87.58Min: 83.37 / Avg: 83.54 / Max: 83.67Min: 89 / Avg: 89.51 / Max: 89.781. (CXX) g++ options: -O3 -march=native -funwind-tables -O2 -pthread -fPIE -pie -ldl

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
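
As a rough sketch of what the compression side of this test exercises, here is one-shot compression of an in-memory buffer with the zstd simple API at level 8, one of the levels run here; buffer handling is illustrative, and the actual test drives the zstd benchmark tooling over the sample disk image.

    #include <vector>
    #include <zstd.h>

    std::vector<char> compress_level8(const std::vector<char>& src) {
        // Worst-case compressed size for this input.
        std::vector<char> dst(ZSTD_compressBound(src.size()));
        size_t written = ZSTD_compress(dst.data(), dst.size(),
                                       src.data(), src.size(), /*level=*/8);
        if (ZSTD_isError(written)) dst.clear();
        else dst.resize(written);
        return dst;
    }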

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.4.9Compression Level: 8 - Compression SpeedLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.32004006008001000SE +/- 11.95, N = 4SE +/- 3.93, N = 3SE +/- 9.53, N = 15SE +/- 9.29, N = 31078.11057.41096.31117.61. (CC) gcc options: -O3 -march=native -pthread -lz -llzma
OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.4.9Compression Level: 8 - Compression SpeedLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.32004006008001000Min: 1054.2 / Avg: 1078.13 / Max: 1108.8Min: 1050.6 / Avg: 1057.37 / Max: 1064.2Min: 1043.6 / Avg: 1096.27 / Max: 1176.3Min: 1099.4 / Avg: 1117.63 / Max: 1129.81. (CC) gcc options: -O3 -march=native -pthread -lz -llzma

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.9.0Encoder Speed: 0LLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.31020304050SE +/- 0.18, N = 3SE +/- 0.21, N = 3SE +/- 0.05, N = 3SE +/- 0.15, N = 341.0843.6241.0340.751. (CXX) g++ options: -O3 -fPIC -lm
OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.9.0Encoder Speed: 0LLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.3918273645Min: 40.87 / Avg: 41.08 / Max: 41.45Min: 43.31 / Avg: 43.62 / Max: 44Min: 40.92 / Avg: 41.03 / Max: 41.09Min: 40.58 / Avg: 40.75 / Max: 41.051. (CXX) g++ options: -O3 -fPIC -lm

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.
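
A minimal sketch of ncnn's inference API; the googlenet.param/googlenet.bin file names and the data/prob blob names are typical examples rather than the exact assets this test profile ships.

    #include <net.h>   // ncnn

    int classify(const ncnn::Mat& in, ncnn::Mat& out) {
        ncnn::Net net;
        if (net.load_param("googlenet.param") != 0) return -1;
        if (net.load_model("googlenet.bin") != 0) return -1;
        ncnn::Extractor ex = net.create_extractor();
        ex.input("data", in);            // feed the preprocessed input blob
        return ex.extract("prob", out);  // run inference up to the output blob
    }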

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: googlenetLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.33691215SE +/- 0.12, N = 3SE +/- 0.06, N = 15SE +/- 0.13, N = 4SE +/- 0.08, N = 312.5312.7611.9611.94-lomp - MIN: 12.12 / MAX: 17.12-lgomp - MIN: 12.19 / MAX: 19.36-lomp - MIN: 11.46 / MAX: 13.27-lomp - MIN: 11.62 / MAX: 12.421. (CXX) g++ options: -O3 -march=native -rdynamic -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: googlenetLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.348121620Min: 12.4 / Avg: 12.53 / Max: 12.77Min: 12.5 / Avg: 12.76 / Max: 13.05Min: 11.63 / Avg: 11.96 / Max: 12.23Min: 11.84 / Avg: 11.94 / Max: 12.11. (CXX) g++ options: -O3 -march=native -rdynamic -lpthread

JPEG XL

The JPEG XL Image Coding System is designed to provide next-generation image capabilities, with JPEG XL offering better image quality and compression than legacy JPEG. This test profile is currently focused on multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL 0.3.3Input: JPEG - Encode Speed: 7LLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.320406080100SE +/- 0.20, N = 3SE +/- 0.19, N = 3SE +/- 0.01, N = 3SE +/- 0.25, N = 385.8187.0783.6389.32-Xclang -mrelax-all-Xclang -mrelax-all-Xclang -mrelax-all1. (CXX) g++ options: -O3 -march=native -funwind-tables -O2 -pthread -fPIE -pie -ldl
OpenBenchmarking.orgMP/s, More Is BetterJPEG XL 0.3.3Input: JPEG - Encode Speed: 7LLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.320406080100Min: 85.55 / Avg: 85.81 / Max: 86.19Min: 86.7 / Avg: 87.07 / Max: 87.34Min: 83.61 / Avg: 83.63 / Max: 83.65Min: 88.91 / Avg: 89.32 / Max: 89.781. (CXX) g++ options: -O3 -march=native -funwind-tables -O2 -pthread -fPIE -pie -ldl

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) run on the CPU with a sample 1080p video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 2.1-rcEncoder Mode: Speed 6 RealtimeLLVM Clang 12GCC 10.2918273645SE +/- 0.31, N = 3SE +/- 0.16, N = 337.5035.131. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 2.1-rcEncoder Mode: Speed 6 RealtimeLLVM Clang 12GCC 10.2816243240Min: 37.1 / Avg: 37.5 / Max: 38.1Min: 34.81 / Avg: 35.13 / Max: 35.351. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is Betterdav1d 0.8.2Video Input: Summer Nature 4KLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.350100150200250SE +/- 0.04, N = 3SE +/- 0.47, N = 3SE +/- 0.43, N = 3SE +/- 0.32, N = 3244.37243.69229.03244.15MIN: 182.08 / MAX: 252.22-lm - MIN: 181.29 / MAX: 252.3MIN: 171.52 / MAX: 237.17MIN: 180.82 / MAX: 252.961. (CC) gcc options: -O3 -march=native -pthread
OpenBenchmarking.orgFPS, More Is Betterdav1d 0.8.2Video Input: Summer Nature 4KLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.34080120160200Min: 244.31 / Avg: 244.37 / Max: 244.45Min: 242.82 / Avg: 243.69 / Max: 244.45Min: 228.5 / Avg: 229.03 / Max: 229.88Min: 243.51 / Avg: 244.15 / Max: 244.521. (CC) gcc options: -O3 -march=native -pthread

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
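
A minimal sketch of the one-shot libwebp call behind a cwebp-style lossy encode; the quality factor of 75 corresponds to cwebp's default setting, and the buffer handling here is illustrative only.

    #include <cstddef>
    #include <cstdint>
    #include <webp/encode.h>

    size_t encode_rgb(const uint8_t* rgb, int width, int height) {
        uint8_t* output = nullptr;
        size_t size = WebPEncodeRGB(rgb, width, height,
                                    /*stride=*/width * 3,
                                    /*quality=*/75.0f, &output);
        // ... write `output` to disk here ...
        WebPFree(output);
        return size;  // 0 on failure
    }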

OpenBenchmarking.orgEncode Time - Seconds, Fewer Is BetterWebP Image Encode 1.1Encode Settings: DefaultLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.30.23450.4690.70350.9381.1725SE +/- 0.014, N = 3SE +/- 0.008, N = 3SE +/- 0.005, N = 3SE +/- 0.006, N = 30.9771.0421.0070.9791. (CC) gcc options: -fvisibility=hidden -O3 -march=native -pthread -lm -ljpeg -lpng16 -ltiff
OpenBenchmarking.orgEncode Time - Seconds, Fewer Is BetterWebP Image Encode 1.1Encode Settings: DefaultLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.3246810Min: 0.95 / Avg: 0.98 / Max: 1Min: 1.03 / Avg: 1.04 / Max: 1.06Min: 1 / Avg: 1.01 / Max: 1.02Min: 0.97 / Avg: 0.98 / Max: 0.991. (CC) gcc options: -fvisibility=hidden -O3 -march=native -pthread -lm -ljpeg -lpng16 -ltiff

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as the eventual successor to WebP. Compared to WebP, WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterWebP2 Image Encode 20210126Encode Settings: DefaultLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.30.51171.02341.53512.04682.5585SE +/- 0.011, N = 3SE +/- 0.005, N = 3SE +/- 0.025, N = 3SE +/- 0.024, N = 32.1342.2742.1652.1441. (CXX) g++ options: -O3 -march=native -fno-rtti -rdynamic -ljpeg -lgif -lwebp -lwebpdemux -lpthread
OpenBenchmarking.orgSeconds, Fewer Is BetterWebP2 Image Encode 20210126Encode Settings: DefaultLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.3246810Min: 2.11 / Avg: 2.13 / Max: 2.15Min: 2.27 / Avg: 2.27 / Max: 2.28Min: 2.12 / Avg: 2.16 / Max: 2.21Min: 2.1 / Avg: 2.14 / Max: 2.181. (CXX) g++ options: -O3 -march=native -fno-rtti -rdynamic -ljpeg -lgif -lwebp -lwebpdemux -lpthread

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-VP9 0.1Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080pLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.350100150200250SE +/- 2.40, N = 12SE +/- 2.40, N = 12SE +/- 2.47, N = 12SE +/- 2.24, N = 13223.50235.04225.17238.111. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm
OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-VP9 0.1Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080pLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.34080120160200Min: 197.17 / Avg: 223.5 / Max: 226.93Min: 208.84 / Avg: 235.04 / Max: 238.66Min: 198.09 / Avg: 225.17 / Max: 229.1Min: 211.64 / Avg: 238.11 / Max: 242.031. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.4.9Compression Level: 3, Long Mode - Decompression SpeedLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.310002000300040005000SE +/- 2.17, N = 3SE +/- 46.74, N = 3SE +/- 33.94, N = 3SE +/- 31.63, N = 34456.64737.14543.84586.21. (CC) gcc options: -O3 -march=native -pthread -lz -llzma
OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.4.9Compression Level: 3, Long Mode - Decompression SpeedLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.38001600240032004000Min: 4452.4 / Avg: 4456.57 / Max: 4459.7Min: 4651.7 / Avg: 4737.13 / Max: 4812.7Min: 4496.2 / Avg: 4543.8 / Max: 4609.5Min: 4529 / Avg: 4586.2 / Max: 4638.21. (CC) gcc options: -O3 -march=native -pthread -lz -llzma

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.
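
The test itself drives the server with redis-benchmark, but the commands being measured (LPUSH, LPOP, SADD, SET, GET) are ordinary Redis commands; a single LPUSH issued from C++ via the hiredis client looks roughly like this sketch, with the host, port, key, and value as placeholders.

    #include <cstdio>
    #include <hiredis/hiredis.h>

    int main() {
        redisContext* c = redisConnect("127.0.0.1", 6379);
        if (c == nullptr || c->err) return 1;
        // The same command the LPUSH test issues in a tight loop.
        redisReply* reply = static_cast<redisReply*>(
            redisCommand(c, "LPUSH mylist %s", "element"));
        if (reply) {
            std::printf("list length is now %lld\n", reply->integer);
            freeReplyObject(reply);
        }
        redisFree(c);
        return 0;
    }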

OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: LPUSHLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.3500K1000K1500K2000K2500KSE +/- 30760.12, N = 3SE +/- 23396.73, N = 15SE +/- 35143.82, N = 15SE +/- 27675.97, N = 42212779.002222217.522345671.032351340.561. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3 -march=native
OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: LPUSHLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.3400K800K1200K1600K2000KMin: 2175805 / Avg: 2212779 / Max: 2273848.25Min: 2111533.75 / Avg: 2222217.52 / Max: 2387805Min: 2089870.88 / Avg: 2345671.03 / Max: 2580653.5Min: 2271178.75 / Avg: 2351340.56 / Max: 2394085.751. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3 -march=native

OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: LPOPLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.3800K1600K2400K3200K4000KSE +/- 31635.54, N = 3SE +/- 26197.04, N = 3SE +/- 45854.80, N = 15SE +/- 30792.29, N = 83649832.583549910.503766645.923589202.591. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3 -march=native
OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: LPOPLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.3700K1400K2100K2800K3500KMin: 3608805.5 / Avg: 3649832.58 / Max: 3712059.5Min: 3505149.75 / Avg: 3549910.5 / Max: 3595875Min: 3628470.25 / Avg: 3766645.92 / Max: 4351610Min: 3432930.75 / Avg: 3589202.59 / Max: 3710670.251. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3 -march=native

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.4.9Compression Level: 8 - Decompression SpeedLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.310002000300040005000SE +/- 37.90, N = 2SE +/- 8.16, N = 11SE +/- 26.73, N = 34352.44617.14463.14468.21. (CC) gcc options: -O3 -march=native -pthread -lz -llzma
OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.4.9Compression Level: 8 - Decompression SpeedLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.38001600240032004000Min: 4314.5 / Avg: 4352.4 / Max: 4390.3Min: 4401.9 / Avg: 4463.13 / Max: 4493.6Min: 4427.4 / Avg: 4468.17 / Max: 4518.51. (CC) gcc options: -O3 -march=native -pthread -lz -llzma

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.
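
A small sketch of the in-memory round trip that the lower compression levels exercise, using LZ4's default (fast) compressor; the HC path used by the higher levels and the file I/O of the actual test are omitted.

    #include <vector>
    #include <lz4.h>

    bool roundtrip(const std::vector<char>& src) {
        std::vector<char> compressed(LZ4_compressBound(static_cast<int>(src.size())));
        int csize = LZ4_compress_default(src.data(), compressed.data(),
                                         static_cast<int>(src.size()),
                                         static_cast<int>(compressed.size()));
        if (csize <= 0) return false;
        std::vector<char> restored(src.size());
        int dsize = LZ4_decompress_safe(compressed.data(), restored.data(),
                                        csize, static_cast<int>(restored.size()));
        // Decompressed size should match the original input size.
        return dsize == static_cast<int>(src.size());
    }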

OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 3 - Compression SpeedLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.31632486480SE +/- 0.19, N = 3SE +/- 0.86, N = 3SE +/- 0.21, N = 3SE +/- 0.77, N = 568.3072.3672.3072.211. (CC) gcc options: -O3
OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 3 - Compression SpeedLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.31428425670Min: 68.01 / Avg: 68.3 / Max: 68.66Min: 71.16 / Avg: 72.36 / Max: 74.02Min: 72.08 / Avg: 72.3 / Max: 72.72Min: 70.48 / Avg: 72.21 / Max: 74.661. (CC) gcc options: -O3

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) run on the CPU with a sample 1080p video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 2.1-rcEncoder Mode: Speed 4 Two-PassLLVM Clang 12GCC 10.23691215SE +/- 0.05, N = 3SE +/- 0.02, N = 39.749.201. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 2.1-rcEncoder Mode: Speed 4 Two-PassLLVM Clang 12GCC 10.23691215Min: 9.64 / Avg: 9.74 / Max: 9.79Min: 9.16 / Avg: 9.2 / Max: 9.231. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.6Model: bertsquad-10 - Device: OpenMP CPULLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.3140280420560700SE +/- 5.80, N = 3SE +/- 6.71, N = 3SE +/- 5.59, N = 12SE +/- 6.07, N = 12634614649646-fopenmp=libomp-fopenmp-fopenmp=libomp-fopenmp=libomp1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -ldl -lrt
OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.6Model: bertsquad-10 - Device: OpenMP CPULLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.3110220330440550Min: 623.5 / Avg: 634 / Max: 643.5Min: 605.5 / Avg: 614.33 / Max: 627.5Min: 620.5 / Avg: 648.96 / Max: 677.5Min: 616.5 / Avg: 645.79 / Max: 6771. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -ldl -lrt

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: yolov4-tinyLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.3510152025SE +/- 0.13, N = 3SE +/- 0.17, N = 15SE +/- 0.11, N = 4SE +/- 0.14, N = 321.7020.7721.9321.66-lomp - MIN: 21.21 / MAX: 30.18-lgomp - MIN: 19.69 / MAX: 43.19-lomp - MIN: 21.28 / MAX: 24.81-lomp - MIN: 21.17 / MAX: 27.181. (CXX) g++ options: -O3 -march=native -rdynamic -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: yolov4-tinyLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.3510152025Min: 21.54 / Avg: 21.7 / Max: 21.95Min: 19.89 / Avg: 20.77 / Max: 21.52Min: 21.6 / Avg: 21.93 / Max: 22.07Min: 21.51 / Avg: 21.66 / Max: 21.941. (CXX) g++ options: -O3 -march=native -rdynamic -lpthread

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgGB/s, More Is Bettersimdjson 0.8.2Throughput Test: KostyaLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.30.8371.6742.5113.3484.185SE +/- 0.03, N = 3SE +/- 0.03, N = 3SE +/- 0.01, N = 3SE +/- 0.03, N = 33.713.723.613.531. (CXX) g++ options: -O3 -march=native -pthread
OpenBenchmarking.orgGB/s, More Is Bettersimdjson 0.8.2Throughput Test: KostyaLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.3246810Min: 3.66 / Avg: 3.71 / Max: 3.76Min: 3.69 / Avg: 3.72 / Max: 3.78Min: 3.59 / Avg: 3.61 / Max: 3.62Min: 3.49 / Avg: 3.53 / Max: 3.581. (CXX) g++ options: -O3 -march=native -pthread

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as the eventual successor to WebP. Compared to WebP, WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterWebP2 Image Encode 20210126Encode Settings: Quality 100, Lossless CompressionLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.380160240320400SE +/- 0.58, N = 3SE +/- 0.42, N = 3SE +/- 1.38, N = 3SE +/- 1.28, N = 3349.02367.37356.82357.901. (CXX) g++ options: -O3 -march=native -fno-rtti -rdynamic -ljpeg -lgif -lwebp -lwebpdemux -lpthread
OpenBenchmarking.orgSeconds, Fewer Is BetterWebP2 Image Encode 20210126Encode Settings: Quality 100, Lossless CompressionLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.370140210280350Min: 348.09 / Avg: 349.02 / Max: 350.08Min: 366.56 / Avg: 367.37 / Max: 367.97Min: 354.06 / Avg: 356.82 / Max: 358.32Min: 355.35 / Avg: 357.9 / Max: 359.181. (CXX) g++ options: -O3 -march=native -fno-rtti -rdynamic -ljpeg -lgif -lwebp -lwebpdemux -lpthread

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-VP9 0.1Tuning: Visual Quality Optimized - Input: Bosphorus 1080pLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.350100150200250SE +/- 0.82, N = 3SE +/- 0.68, N = 3SE +/- 0.39, N = 3SE +/- 0.90, N = 3219.12228.96221.59230.191. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm
OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-VP9 0.1Tuning: Visual Quality Optimized - Input: Bosphorus 1080pLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.34080120160200Min: 217.47 / Avg: 219.12 / Max: 220.02Min: 227.62 / Avg: 228.96 / Max: 229.8Min: 220.83 / Avg: 221.59 / Max: 222.14Min: 228.75 / Avg: 230.19 / Max: 231.841. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.6Model: fcn-resnet101-11 - Device: OpenMP CPULLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.320406080100SE +/- 0.29, N = 3SE +/- 0.17, N = 3SE +/- 0.44, N = 3SE +/- 0.44, N = 310499102103-fopenmp=libomp-fopenmp-fopenmp=libomp-fopenmp=libomp1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -ldl -lrt
OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.6Model: fcn-resnet101-11 - Device: OpenMP CPULLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.320406080100Min: 103 / Avg: 103.5 / Max: 104Min: 99 / Avg: 99.33 / Max: 99.5Min: 101.5 / Avg: 102.33 / Max: 103Min: 102 / Avg: 102.67 / Max: 103.51. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -ldl -lrt

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgIterations Per Minute, More Is BetterGraphicsMagick 1.3.33Operation: EnhancedLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.3100200300400500SE +/- 0.33, N = 3SE +/- 0.33, N = 3SE +/- 0.33, N = 34574394524611. (CC) gcc options: -fopenmp -O3 -march=native -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
OpenBenchmarking.orgIterations Per Minute, More Is BetterGraphicsMagick 1.3.33Operation: EnhancedLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.380160240320400Min: 457 / Avg: 457.33 / Max: 458Min: 439 / Avg: 439.33 / Max: 440Min: 461 / Avg: 461.33 / Max: 4621. (CC) gcc options: -fopenmp -O3 -march=native -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 1 - Decompression SpeedLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.33K6K9K12K15KSE +/- 93.71, N = 3SE +/- 38.97, N = 3SE +/- 106.53, N = 3SE +/- 59.50, N = 313305.313771.113144.813595.31. (CC) gcc options: -O3
OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 1 - Decompression SpeedLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.32K4K6K8K10KMin: 13118.5 / Avg: 13305.27 / Max: 13412.2Min: 13703.6 / Avg: 13771.07 / Max: 13838.6Min: 12943.5 / Avg: 13144.8 / Max: 13305.9Min: 13476.9 / Avg: 13595.3 / Max: 13664.81. (CC) gcc options: -O3

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: SETLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.3600K1200K1800K2400K3000KSE +/- 28596.87, N = 3SE +/- 26145.63, N = 15SE +/- 14014.88, N = 3SE +/- 23132.25, N = 32762047.502640316.172719036.202719539.831. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3 -march=native
OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: SETLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.3500K1000K1500K2000K2500KMin: 2704939 / Avg: 2762047.5 / Max: 2793305Min: 2484480 / Avg: 2640316.17 / Max: 2891872.75Min: 2698335.5 / Avg: 2719036.17 / Max: 2745753Min: 2675986 / Avg: 2719539.83 / Max: 2754829.751. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3 -march=native

ASTC Encoder

ASTC Encoder (astcenc) is an encoder for the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterASTC Encoder 2.4Preset: ExhaustiveLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.31224364860SE +/- 0.07, N = 3SE +/- 0.09, N = 3SE +/- 0.09, N = 3SE +/- 0.07, N = 351.6652.9351.4550.811. (CXX) g++ options: -O3 -march=native -flto -pthread
OpenBenchmarking.orgSeconds, Fewer Is BetterASTC Encoder 2.4Preset: ExhaustiveLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.31122334455Min: 51.51 / Avg: 51.66 / Max: 51.74Min: 52.75 / Avg: 52.93 / Max: 53.02Min: 51.28 / Avg: 51.45 / Max: 51.59Min: 50.67 / Avg: 50.81 / Max: 50.921. (CXX) g++ options: -O3 -march=native -flto -pthread

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 2021.01.31Threads: 16 - Buffer Length: 256 - Filter Length: 57LLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.3200M400M600M800M1000MSE +/- 3699249.17, N = 3SE +/- 5768882.04, N = 3SE +/- 3939684.14, N = 3SE +/- 3628743.28, N = 310676666671111200000108603333310672666671. (CC) gcc options: -O3 -march=native -pthread -lm -lc -lliquid
OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 2021.01.31Threads: 16 - Buffer Length: 256 - Filter Length: 57LLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.3200M400M600M800M1000MMin: 1062400000 / Avg: 1067666666.67 / Max: 1074800000Min: 1100000000 / Avg: 1111200000 / Max: 1119200000Min: 1081000000 / Avg: 1086033333.33 / Max: 1093800000Min: 1061400000 / Avg: 1067266666.67 / Max: 10739000001. (CC) gcc options: -O3 -march=native -pthread -lm -lc -lliquid

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.
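
benchdnn drives full primitives (inner product, convolution, deconvolution and so on); the sketch below only shows the basic oneDNN C++ objects such a run is built from, with the tensor shape chosen arbitrarily for illustration.

    #include <dnnl.hpp>

    int main() {
        dnnl::engine eng(dnnl::engine::kind::cpu, 0);  // CPU engine
        dnnl::stream strm(eng);                        // execution stream
        // Describe a small 2D f32 tensor in NC layout and allocate it.
        dnnl::memory::desc md({/*batch=*/32, /*channels=*/128},
                              dnnl::memory::data_type::f32,
                              dnnl::memory::format_tag::nc);
        dnnl::memory mem(md, eng);
        (void)strm; (void)mem;
        return 0;
    }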

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.1.2Harness: IP Shapes 1D - Data Type: f32 - Engine: CPULLVM Clang 12GCC 10.2AMD AOCC 3.00.92681.85362.78043.70724.634SE +/- 0.01294, N = 3SE +/- 0.00506, N = 3SE +/- 0.01273, N = 34.119303.959794.09663-fopenmp=libomp - MIN: 3.88-fopenmp - MIN: 3.76-fopenmp=libomp - MIN: 3.91. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.1.2Harness: IP Shapes 1D - Data Type: f32 - Engine: CPULLVM Clang 12GCC 10.2AMD AOCC 3.0246810Min: 4.09 / Avg: 4.12 / Max: 4.14Min: 3.95 / Avg: 3.96 / Max: 3.97Min: 4.07 / Avg: 4.1 / Max: 4.121. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -pie -lpthread -ldl

x265

This is a simple test of the x265 encoder run on the CPU with 1080p and 4K options for H.265 video encode performance with x265. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is Betterx265 3.4Video Input: Bosphorus 4KLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.3714212835SE +/- 0.09, N = 3SE +/- 0.16, N = 3SE +/- 0.12, N = 3SE +/- 0.08, N = 328.0227.8326.9627.491. (CXX) g++ options: -O3 -march=native -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.orgFrames Per Second, More Is Betterx265 3.4Video Input: Bosphorus 4KLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.3612182430Min: 27.84 / Avg: 28.02 / Max: 28.13Min: 27.52 / Avg: 27.83 / Max: 28.04Min: 26.79 / Avg: 26.96 / Max: 27.19Min: 27.39 / Avg: 27.49 / Max: 27.651. (CXX) g++ options: -O3 -march=native -rdynamic -lpthread -lrt -ldl -lnuma

Tachyon

This is a test of the threaded Tachyon, a parallel ray-tracing system, measuring the time to ray-trace a sample scene. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTachyon 0.99b6Total TimeLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.31020304050SE +/- 0.09, N = 3SE +/- 0.13, N = 3SE +/- 0.14, N = 3SE +/- 0.20, N = 345.0744.3945.0046.131. (CC) gcc options: -m64 -O3 -fomit-frame-pointer -ffast-math -ltachyon -lm -lpthread
OpenBenchmarking.orgSeconds, Fewer Is BetterTachyon 0.99b6Total TimeLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.3918273645Min: 44.93 / Avg: 45.07 / Max: 45.23Min: 44.14 / Avg: 44.39 / Max: 44.53Min: 44.77 / Avg: 45 / Max: 45.27Min: 45.81 / Avg: 46.13 / Max: 46.51. (CC) gcc options: -m64 -O3 -fomit-frame-pointer -ffast-math -ltachyon -lm -lpthread

x265

This is a simple test of the x265 encoder run on the CPU with 1080p and 4K options for H.265 video encode performance with x265. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is Betterx265 3.4Video Input: Bosphorus 1080pLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.320406080100SE +/- 0.13, N = 3SE +/- 0.19, N = 3SE +/- 0.35, N = 3SE +/- 0.28, N = 392.1689.8088.7089.741. (CXX) g++ options: -O3 -march=native -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.orgFrames Per Second, More Is Betterx265 3.4Video Input: Bosphorus 1080pLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.320406080100Min: 91.91 / Avg: 92.16 / Max: 92.34Min: 89.44 / Avg: 89.8 / Max: 90.11Min: 88.15 / Avg: 88.7 / Max: 89.34Min: 89.33 / Avg: 89.74 / Max: 90.281. (CXX) g++ options: -O3 -march=native -rdynamic -lpthread -lrt -ldl -lnuma

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterTNN 0.2.3Target: CPU - Model: SqueezeNet v1.1LLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.350100150200250SE +/- 1.02, N = 3SE +/- 0.57, N = 3SE +/- 0.48, N = 3SE +/- 0.85, N = 3206.36211.57204.60203.64-fopenmp=libomp - MIN: 204.24 / MAX: 209.24-fopenmp - MIN: 206.88 / MAX: 212.83-fopenmp=libomp - MIN: 203.72 / MAX: 206.33-fopenmp=libomp - MIN: 201.91 / MAX: 206.131. (CXX) g++ options: -O3 -march=native -pthread -fvisibility=hidden -rdynamic -ldl
OpenBenchmarking.orgms, Fewer Is BetterTNN 0.2.3Target: CPU - Model: SqueezeNet v1.1LLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.34080120160200Min: 204.43 / Avg: 206.36 / Max: 207.89Min: 210.5 / Avg: 211.57 / Max: 212.44Min: 203.89 / Avg: 204.6 / Max: 205.52Min: 202.24 / Avg: 203.64 / Max: 205.161. (CXX) g++ options: -O3 -march=native -pthread -fvisibility=hidden -rdynamic -ldl

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMiB/s, More Is BetterGNU RadioTest: Hilbert TransformLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.3120240360480600SE +/- 0.58, N = 9SE +/- 0.63, N = 9SE +/- 1.23, N = 8SE +/- 1.95, N = 9522.8515.8523.6534.81. 3.8.1.0
OpenBenchmarking.orgMiB/s, More Is BetterGNU RadioTest: Hilbert TransformLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.390180270360450Min: 520.7 / Avg: 522.82 / Max: 525.8Min: 511.7 / Avg: 515.79 / Max: 518.3Min: 520.2 / Avg: 523.56 / Max: 530.2Min: 523.8 / Avg: 534.82 / Max: 542.71. 3.8.1.0

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.1.2Harness: IP Shapes 3D - Data Type: f32 - Engine: CPULLVM Clang 12GCC 10.2AMD AOCC 3.03691215SE +/- 0.01452, N = 3SE +/- 0.01340, N = 3SE +/- 0.01936, N = 39.594429.259679.57194-fopenmp=libomp - MIN: 9.47-fopenmp - MIN: 9.1-fopenmp=libomp - MIN: 9.461. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.1.2Harness: IP Shapes 3D - Data Type: f32 - Engine: CPULLVM Clang 12GCC 10.2AMD AOCC 3.03691215Min: 9.57 / Avg: 9.59 / Max: 9.62Min: 9.24 / Avg: 9.26 / Max: 9.29Min: 9.54 / Avg: 9.57 / Max: 9.611. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -pie -lpthread -ldl

SQLite Speedtest

This is a benchmark of SQLite's speedtest1 benchmark program with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.
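
speedtest1 generates a large synthetic SQL workload; the sketch below just shows the basic sqlite3 C API calls such a workload is built from, with a throwaway in-memory database and trivial statements as placeholders.

    #include <cstdio>
    #include <sqlite3.h>

    int main() {
        sqlite3* db = nullptr;
        if (sqlite3_open(":memory:", &db) != SQLITE_OK) return 1;
        char* err = nullptr;
        int rc = sqlite3_exec(db,
            "CREATE TABLE t(a INTEGER PRIMARY KEY, b TEXT);"
            "INSERT INTO t(b) VALUES('hello'),('world');",
            nullptr, nullptr, &err);
        if (rc != SQLITE_OK) { std::printf("error: %s\n", err); sqlite3_free(err); }
        sqlite3_close(db);
        return rc == SQLITE_OK ? 0 : 1;
    }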

OpenBenchmarking.orgSeconds, Fewer Is BetterSQLite Speedtest 3.30Timed Time - Size 1,000LLVM Clang 12GCC 10.2AMD AOCC 2.31020304050SE +/- 0.13, N = 3SE +/- 0.13, N = 3SE +/- 0.03, N = 343.2642.6044.111. (CC) gcc options: -O3 -march=native -ldl -lz -lpthread
OpenBenchmarking.orgSeconds, Fewer Is BetterSQLite Speedtest 3.30Timed Time - Size 1,000LLVM Clang 12GCC 10.2AMD AOCC 2.3918273645Min: 43.04 / Avg: 43.26 / Max: 43.5Min: 42.35 / Avg: 42.6 / Max: 42.76Min: 44.06 / Avg: 44.11 / Max: 44.181. (CC) gcc options: -O3 -march=native -ldl -lz -lpthread

Ngspice

Ngspice is an open-source SPICE circuit simulator. Ngspice was originally based on the Berkeley SPICE electronic circuit simulator. Ngspice supports basic threading using OpenMP. This test profile is making use of the ISCAS 85 benchmark circuits. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterNgspice 34Circuit: C7552LLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.31428425670SE +/- 0.20, N = 3SE +/- 0.15, N = 3SE +/- 0.08, N = 3SE +/- 0.54, N = 364.5262.8264.9164.891. (CC) gcc options: -O3 -march=native -fopenmp -lm -lstdc++ -lfftw3 -lXaw -lXmu -lXt -lXext -lX11 -lSM -lICE
OpenBenchmarking.orgSeconds, Fewer Is BetterNgspice 34Circuit: C7552LLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.31326395265Min: 64.19 / Avg: 64.52 / Max: 64.89Min: 62.53 / Avg: 62.82 / Max: 63.03Min: 64.76 / Avg: 64.91 / Max: 65.02Min: 63.81 / Avg: 64.89 / Max: 65.521. (CC) gcc options: -O3 -march=native -fopenmp -lm -lstdc++ -lfftw3 -lXaw -lXmu -lXt -lXext -lX11 -lSM -lICE

Timed MrBayes Analysis

This test performs a Bayesian analysis of a set of primate genome sequences in order to estimate their phylogeny. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed MrBayes Analysis 3.2.7Primate Phylogeny AnalysisLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.31326395265SE +/- 0.81, N = 3SE +/- 0.15, N = 3SE +/- 0.10, N = 3SE +/- 0.10, N = 359.2359.8757.9959.07-mabm1. (CC) gcc options: -mmmx -msse -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msse4a -msha -maes -mavx -mfma -mavx2 -mrdrnd -mbmi -mbmi2 -madx -O3 -std=c99 -pedantic -march=native -lm -lreadline
OpenBenchmarking.orgSeconds, Fewer Is BetterTimed MrBayes Analysis 3.2.7Primate Phylogeny AnalysisLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.31224364860Min: 58.23 / Avg: 59.23 / Max: 60.84Min: 59.6 / Avg: 59.87 / Max: 60.14Min: 57.85 / Avg: 57.99 / Max: 58.17Min: 58.86 / Avg: 59.07 / Max: 59.181. (CC) gcc options: -mmmx -msse -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msse4a -msha -maes -mavx -mfma -mavx2 -mrdrnd -mbmi -mbmi2 -madx -O3 -std=c99 -pedantic -march=native -lm -lreadline

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 9 - Decompression SpeedLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.33K6K9K12K15KSE +/- 21.95, N = 3SE +/- 35.65, N = 6SE +/- 24.92, N = 6SE +/- 53.22, N = 313129.413397.712981.813188.41. (CC) gcc options: -O3
OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 9 - Decompression SpeedLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.32K4K6K8K10KMin: 13087.3 / Avg: 13129.37 / Max: 13161.3Min: 13226.5 / Avg: 13397.68 / Max: 13476.2Min: 12926.3 / Avg: 12981.82 / Max: 13095.1Min: 13082.7 / Avg: 13188.37 / Max: 13252.31. (CC) gcc options: -O3

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: SADDLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.3700K1400K2100K2800K3500KSE +/- 29853.73, N = 3SE +/- 39730.96, N = 15SE +/- 27502.49, N = 15SE +/- 40118.44, N = 32954866.803041527.372948093.682961165.671. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3 -march=native
OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: SADDLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.3500K1000K1500K2000K2500KMin: 2898625 / Avg: 2954866.83 / Max: 3000348Min: 2876879 / Avg: 3041527.37 / Max: 3313474Min: 2676676.5 / Avg: 2948093.68 / Max: 3173677Min: 2887678.75 / Avg: 2961165.67 / Max: 3025805.751. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3 -march=native

RNNoise

RNNoise is a recurrent neural network for audio noise reduction developed by Mozilla and Xiph.Org. This test profile is a single-threaded test measuring the time to denoise a sample 26 minute long 16-bit RAW audio file using this recurrent neural network noise suppression library. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterRNNoise 2020-06-28LLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.348121620SE +/- 0.17, N = 3SE +/- 0.04, N = 3SE +/- 0.16, N = 3SE +/- 0.17, N = 314.3314.2014.4714.041. (CC) gcc options: -O3 -march=native -pedantic -fvisibility=hidden
OpenBenchmarking.orgSeconds, Fewer Is BetterRNNoise 2020-06-28LLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.348121620Min: 14.02 / Avg: 14.33 / Max: 14.6Min: 14.12 / Avg: 14.2 / Max: 14.26Min: 14.19 / Avg: 14.47 / Max: 14.74Min: 13.81 / Avg: 14.04 / Max: 14.371. (CC) gcc options: -O3 -march=native -pedantic -fvisibility=hidden

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 3 - Decompression SpeedLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.33K6K9K12K15KSE +/- 46.92, N = 3SE +/- 48.22, N = 3SE +/- 30.75, N = 3SE +/- 15.17, N = 513082.413400.113010.713212.61. (CC) gcc options: -O3
OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 3 - Decompression SpeedLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.32K4K6K8K10KMin: 12988.6 / Avg: 13082.37 / Max: 13132.4Min: 13308.6 / Avg: 13400.07 / Max: 13472.3Min: 12953.9 / Avg: 13010.7 / Max: 13059.5Min: 13164.7 / Avg: 13212.56 / Max: 132441. (CC) gcc options: -O3

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: alexnetLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.33691215SE +/- 0.04, N = 3SE +/- 0.09, N = 15SE +/- 0.04, N = 4SE +/- 0.00, N = 211.1410.8211.1111.01-lomp - MIN: 10.96 / MAX: 13.28-lgomp - MIN: 10.41 / MAX: 17.59-lomp - MIN: 10.92 / MAX: 15.76-lomp - MIN: 10.84 / MAX: 12.261. (CXX) g++ options: -O3 -march=native -rdynamic -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: alexnetLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.33691215Min: 11.07 / Avg: 11.14 / Max: 11.19Min: 10.49 / Avg: 10.82 / Max: 11.51Min: 11.01 / Avg: 11.11 / Max: 11.22Min: 11 / Avg: 11.01 / Max: 11.011. (CXX) g++ options: -O3 -march=native -rdynamic -lpthread

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 1 - Compression SpeedLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.33K6K9K12K15KSE +/- 66.76, N = 3SE +/- 76.55, N = 3SE +/- 49.14, N = 3SE +/- 57.72, N = 312227.4712330.5612124.8112456.171. (CC) gcc options: -O3
OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 1 - Compression SpeedLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.32K4K6K8K10KMin: 12149.33 / Avg: 12227.47 / Max: 12360.31Min: 12196.23 / Avg: 12330.56 / Max: 12461.34Min: 12038.25 / Avg: 12124.81 / Max: 12208.4Min: 12343.85 / Avg: 12456.17 / Max: 12535.441. (CC) gcc options: -O3

Gcrypt Library

Libgcrypt is a general-purpose cryptographic library developed as part of the GnuPG project. This test runs libgcrypt's integrated benchmark command with the cipher/MAC/hash repetition count set to 50, as a simple, high-level look at the overall crypto performance of the system under test. Learn more via the OpenBenchmarking.org test page.
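
The integrated benchmark cycles through libgcrypt's ciphers, MACs, and hashes; a single SHA-256 digest via the convenience API, as sketched below, is one of the primitives it times. The message is a placeholder, and a real application would also perform the usual gcry_control initialization.

    #include <cstdio>
    #include <gcrypt.h>

    int main() {
        gcry_check_version(nullptr);   // initialize the library
        const char msg[] = "hello, world";
        unsigned char digest[32];      // SHA-256 output size
        gcry_md_hash_buffer(GCRY_MD_SHA256, digest, msg, sizeof msg - 1);
        for (unsigned char b : digest) std::printf("%02x", b);
        std::printf("\n");
        return 0;
    }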

OpenBenchmarking.orgSeconds, Fewer Is BetterGcrypt Library 1.9LLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.34080120160200SE +/- 1.10, N = 3SE +/- 0.29, N = 3SE +/- 0.17, N = 3SE +/- 1.67, N = 3172.90171.19175.69173.311. (CC) gcc options: -O3 -march=native -fvisibility=hidden -lgpg-error
OpenBenchmarking.orgSeconds, Fewer Is BetterGcrypt Library 1.9LLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.3306090120150Min: 170.87 / Avg: 172.9 / Max: 174.63Min: 170.68 / Avg: 171.19 / Max: 171.67Min: 175.35 / Avg: 175.69 / Max: 175.87Min: 170.29 / Avg: 173.31 / Max: 176.041. (CC) gcc options: -O3 -march=native -fvisibility=hidden -lgpg-error

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgEncode Time - Seconds, Fewer Is BetterWebP Image Encode 1.1Encode Settings: Quality 100, LosslessLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.348121620SE +/- 0.12, N = 3SE +/- 0.11, N = 3SE +/- 0.05, N = 3SE +/- 0.11, N = 313.9113.9913.8413.641. (CC) gcc options: -fvisibility=hidden -O3 -march=native -pthread -lm -ljpeg -lpng16 -ltiff
OpenBenchmarking.orgEncode Time - Seconds, Fewer Is BetterWebP Image Encode 1.1Encode Settings: Quality 100, LosslessLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.348121620Min: 13.77 / Avg: 13.91 / Max: 14.14Min: 13.84 / Avg: 13.99 / Max: 14.21Min: 13.78 / Avg: 13.84 / Max: 13.93Min: 13.42 / Avg: 13.64 / Max: 13.771. (CC) gcc options: -fvisibility=hidden -O3 -march=native -pthread -lm -ljpeg -lpng16 -ltiff

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.1.2Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPULLVM Clang 12GCC 10.2AMD AOCC 3.00.82011.64022.46033.28044.1005SE +/- 0.01444, N = 3SE +/- 0.00753, N = 3SE +/- 0.00604, N = 33.644853.554673.58364-fopenmp=libomp - MIN: 3.5-fopenmp - MIN: 3.46-fopenmp=libomp - MIN: 3.441. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.1.2Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPULLVM Clang 12GCC 10.2AMD AOCC 3.0246810Min: 3.62 / Avg: 3.64 / Max: 3.67Min: 3.54 / Avg: 3.55 / Max: 3.57Min: 3.58 / Avg: 3.58 / Max: 3.61. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -pie -lpthread -ldl

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: vgg16LLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.31326395265SE +/- 0.17, N = 3SE +/- 0.12, N = 15SE +/- 0.19, N = 4SE +/- 0.01, N = 357.5157.8958.9658.11-lomp - MIN: 56.17 / MAX: 62.53-lgomp - MIN: 55.89 / MAX: 80.86-lomp - MIN: 57.64 / MAX: 66.68-lomp - MIN: 56.81 / MAX: 67.531. (CXX) g++ options: -O3 -march=native -rdynamic -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: vgg16LLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.31224364860Min: 57.27 / Avg: 57.51 / Max: 57.85Min: 56.59 / Avg: 57.89 / Max: 58.72Min: 58.58 / Avg: 58.96 / Max: 59.47Min: 58.08 / Avg: 58.11 / Max: 58.131. (CXX) g++ options: -O3 -march=native -rdynamic -lpthread

Crypto++

Crypto++ is a C++ class library of cryptographic algorithms. Learn more via the OpenBenchmarking.org test page.
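
The unkeyed algorithms measured here are largely hash functions; a single SHA-256 digest with the Crypto++ API looks roughly like the following sketch (the message contents are placeholders; link with -lcryptopp).

    #include <cstdio>
    #include <cryptopp/sha.h>

    int main() {
        const unsigned char msg[] = "hello, world";
        unsigned char digest[CryptoPP::SHA256::DIGESTSIZE];
        // One-shot digest over the message buffer.
        CryptoPP::SHA256().CalculateDigest(digest, msg, sizeof msg - 1);
        for (unsigned char b : digest) std::printf("%02x", b);
        std::printf("\n");
        return 0;
    }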

OpenBenchmarking.orgMiB/second, More Is BetterCrypto++ 8.2Test: Unkeyed AlgorithmsLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.3120240360480600SE +/- 1.73, N = 3SE +/- 3.29, N = 15SE +/- 2.13, N = 3SE +/- 1.69, N = 3552.38545.91538.88550.461. (CXX) g++ options: -O3 -march=native -fPIC -pthread -pipe
OpenBenchmarking.orgMiB/second, More Is BetterCrypto++ 8.2Test: Unkeyed AlgorithmsLLVM Clang 12GCC 10.2AMD AOCC 3.0AMD AOCC 2.3100200300400500Min: 550.61 / Avg: 552.38 / Max: 555.85Min: 504.59 / Avg: 545.91 / Max: 564.12Min: 534.64 / Avg: 538.88 / Max: 541.35Min: 547.13 / Avg: 550.46 / Max: 552.681. (CXX) g++ options: -O3 -march=native -fPIC -pthread -pipe

OpenFOAM

OpenFOAM is the leading free, open source software for computational fluid dynamics (CFD). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterOpenFOAM 8Input: Motorbike 30MLLVM Clang 12GCC 10.220406080100SE +/- 0.06, N = 3SE +/- 0.08, N = 3100.1697.751. (CXX) g++ options: -std=c++11 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm
OpenBenchmarking.orgSeconds, Fewer Is BetterOpenFOAM 8Input: Motorbike 30MLLVM Clang 12GCC 10.220406080100Min: 100.08 / Avg: 100.16 / Max: 100.27Min: 97.6 / Avg: 97.75 / Max: 97.891. (CXX) g++ options: -std=c++11 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) run on the CPU with a sample 1080p video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 2.1-rcEncoder Mode: Speed 8 RealtimeLLVM Clang 12GCC 10.2306090120150SE +/- 1.02, N = 3SE +/- 0.75, N = 3118.22121.131. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 2.1-rcEncoder Mode: Speed 8 RealtimeLLVM Clang 12GCC 10.220406080100Min: 117.16 / Avg: 118.22 / Max: 120.26Min: 120.06 / Avg: 121.13 / Max: 122.571. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

Ngspice

Ngspice is an open-source SPICE circuit simulator. Ngspice was originally based on the Berkeley SPICE electronic circuit simulator. Ngspice supports basic threading using OpenMP. This test profile is making use of the ISCAS 85 benchmark circuits. Learn more via the OpenBenchmarking.org test page.

Ngspice 34 - Circuit: C2670 (Seconds, fewer is better):
    LLVM Clang 12: 72.46 (SE +/- 0.12, N = 3; runs 72.23 to 72.65)
    GCC 10.2: 71.60 (SE +/- 0.21, N = 3; runs 71.22 to 71.94)
    AMD AOCC 3.0: 73.28 (SE +/- 0.13, N = 3; runs 73.08 to 73.52)
    AMD AOCC 2.3: 72.78 (SE +/- 0.06, N = 3; runs 72.66 to 72.89)
    (CC) gcc options: -O3 -march=native -fopenmp -lm -lstdc++ -lfftw3 -lXaw -lXmu -lXt -lXext -lX11 -lSM -lICE

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.

GNU Radio - Test: Signal Source (Cosine) (MiB/s, more is better):
    LLVM Clang 12: 4769.8 (SE +/- 10.26, N = 9; runs 4709.8 to 4814.1)
    GCC 10.2: 4715.4 (SE +/- 16.39, N = 9; runs 4613.4 to 4782.1)
    AMD AOCC 3.0: 4704.7 (SE +/- 60.32, N = 8; runs 4308.2 to 4869.9)
    AMD AOCC 2.3: 4661.6 (SE +/- 22.36, N = 9; runs 4529.8 to 4759.7)
    Reported GNU Radio version: 3.8.1.0

x264

This is a simple test of the x264 encoder run on the CPU (OpenCL support disabled) with a sample video file. Learn more via the OpenBenchmarking.org test page.
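The benchmark drives the x264 command-line tool, but the libx264 encoder setup it ultimately exercises looks roughly like the sketch below; the preset, profile, and stream geometry are illustrative assumptions, not the exact settings used by the test profile.

    #include <x264.h>

    int main() {
        x264_param_t param;
        // start from a named preset's defaults, then set the stream geometry
        if (x264_param_default_preset(&param, "medium", NULL) < 0)
            return 1;
        param.i_width   = 1920;   // assumed 1080p sample
        param.i_height  = 1080;
        param.i_fps_num = 30;
        param.i_fps_den = 1;
        x264_param_apply_profile(&param, "high");
        x264_t *enc = x264_encoder_open(&param);
        if (!enc)
            return 1;
        // frames would be submitted with x264_encoder_encode() here
        x264_encoder_close(enc);
        return 0;
    }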

x264 2019-12-17 - H.264 Video Encoding (Frames Per Second, more is better):
    LLVM Clang 12: 213.57 (SE +/- 1.62, N = 12; runs 195.98 to 216.26)
    GCC 10.2: 208.93 (SE +/- 1.66, N = 9; runs 195.89 to 211.72)
    AMD AOCC 3.0: 210.35 (SE +/- 1.75, N = 9; runs 196.49 to 213.57)
    AMD AOCC 2.3: 210.72 (SE +/- 1.86, N = 8; runs 197.87 to 213.95)
    Three of the four builds additionally report -mstack-alignment=64.
    (CC) gcc options: -ldl -lavformat -lavcodec -lavutil -lswscale -m64 -lm -lpthread -O3 -ffast-math -march=native -std=gnu99 -fPIC -fomit-frame-pointer -fno-tree-vectorize

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
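The test invokes the cwebp utility; an equivalent lossless encode through the underlying libwebp API might look like the sketch below, with a small dummy buffer standing in for the 6000x4000 source image.

    #include <webp/encode.h>
    #include <vector>
    #include <cstdint>
    #include <cstdio>

    int main() {
        const int width = 64, height = 64;
        std::vector<uint8_t> rgb(width * height * 3, 128);   // dummy grey image
        uint8_t *output = nullptr;
        // lossless RGB encode, analogous to `cwebp -lossless`
        size_t size = WebPEncodeLosslessRGB(rgb.data(), width, height, width * 3, &output);
        if (size == 0)
            return 1;
        std::printf("encoded %zu bytes\n", size);
        WebPFree(output);
        return 0;
    }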

WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless, Highest Compression (Encode Time - Seconds, fewer is better):
    LLVM Clang 12: 28.67 (SE +/- 0.03, N = 3; runs 28.61 to 28.72)
    GCC 10.2: 28.81 (SE +/- 0.08, N = 3; runs 28.73 to 28.97)
    AMD AOCC 3.0: 28.19 (SE +/- 0.04, N = 3; runs 28.12 to 28.23)
    AMD AOCC 2.3: 28.43 (SE +/- 0.06, N = 3; runs 28.34 to 28.53)
    (CC) gcc options: -fvisibility=hidden -O3 -march=native -pthread -lm -ljpeg -lpng16 -ltiff

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
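The test profile calls the zstd command-line tool; the same level-19 path through the libzstd single-shot API can be sketched as follows, with a dummy buffer standing in for the FreeBSD disk image.

    #include <zstd.h>
    #include <vector>
    #include <cstdio>

    int main() {
        std::vector<char> input(1 << 20, 'A');                  // stand-in for the sample file
        size_t bound = ZSTD_compressBound(input.size());
        std::vector<char> compressed(bound);
        size_t written = ZSTD_compress(compressed.data(), bound,
                                       input.data(), input.size(), 19);   // compression level 19
        if (ZSTD_isError(written)) {
            std::fprintf(stderr, "zstd error: %s\n", ZSTD_getErrorName(written));
            return 1;
        }
        std::printf("level 19: %zu -> %zu bytes\n", input.size(), written);
        return 0;
    }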

Zstd Compression 1.4.9 - Compression Level: 19 - Compression Speed (MB/s, more is better):
    LLVM Clang 12: 51.3 (SE +/- 0.03, N = 3; runs 51.3 to 51.4)
    GCC 10.2: 51.6 (SE +/- 0.20, N = 3; runs 51.2 to 51.8)
    AMD AOCC 3.0: 50.5 (SE +/- 0.06, N = 3; runs 50.4 to 50.6)
    AMD AOCC 2.3: 50.7 (SE +/- 0.07, N = 3; runs 50.6 to 50.8)
    (CC) gcc options: -O3 -march=native -pthread -lz -llzma

JPEG XL

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities, with JPEG XL offering better image quality and compression than legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.

JPEG XL 0.3.3 - Input: PNG - Encode Speed: 7 (MP/s, more is better):
    LLVM Clang 12: 11.41 (SE +/- 0.06, N = 3; runs 11.33 to 11.52)
    GCC 10.2: 11.20 (SE +/- 0.03, N = 3; runs 11.14 to 11.26)
    AMD AOCC 3.0: 11.17 (SE +/- 0.06, N = 3; runs 11.1 to 11.28)
    AMD AOCC 2.3: 11.31 (SE +/- 0.03, N = 3; runs 11.28 to 11.36)
    Three of the four builds additionally report -Xclang -mrelax-all.
    (CXX) g++ options: -O3 -march=native -funwind-tables -O2 -pthread -fPIE -pie -ldl

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.

GNU Radio - Test: IIR Filter (MiB/s, more is better):
    LLVM Clang 12: 835.4 (SE +/- 1.22, N = 9; runs 829.1 to 840.7)
    GCC 10.2: 843.1 (SE +/- 1.32, N = 9; runs 837.8 to 850.5)
    AMD AOCC 3.0: 838.3 (SE +/- 2.72, N = 8; runs 830.1 to 851.2)
    AMD AOCC 2.3: 853.3 (SE +/- 2.72, N = 9; runs 841.9 to 869)
    Reported GNU Radio version: 3.8.1.0

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100 (Encode Time - Seconds, fewer is better):
    LLVM Clang 12: 1.638 (SE +/- 0.003, N = 3; runs 1.63 to 1.64)
    GCC 10.2: 1.652 (SE +/- 0.018, N = 4; runs 1.6 to 1.68)
    AMD AOCC 3.0: 1.669 (SE +/- 0.011, N = 3; runs 1.65 to 1.68)
    AMD AOCC 2.3: 1.673 (SE +/- 0.006, N = 3; runs 1.66 to 1.68)
    (CC) gcc options: -fvisibility=hidden -O3 -march=native -pthread -lm -ljpeg -lpng16 -ltiff

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.8.2 - Video Input: Summer Nature 1080p (FPS, more is better):
    LLVM Clang 12: 979.34 (SE +/- 2.97, N = 3; runs 973.61 to 983.58; observed MIN 717.55 / MAX 1062.34)
    GCC 10.2: 971.79 (SE +/- 1.38, N = 3; runs 969.08 to 973.61; observed MIN 732.02 / MAX 1055.82) [-lm]
    AMD AOCC 3.0: 959.29 (SE +/- 1.29, N = 3; runs 956.73 to 960.84; observed MIN 714.89 / MAX 1039.77)
    AMD AOCC 2.3: 976.93 (SE +/- 9.00, N = 3; runs 958.96 to 986.91; observed MIN 633.01 / MAX 1069.88)
    (CC) gcc options: -O3 -march=native -pthread

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) run on the CPU with a sample 1080p video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.1-rc - Encoder Mode: Speed 6 Two-Pass (Frames Per Second, more is better):
    LLVM Clang 12: 30.03 (SE +/- 0.13, N = 3; runs 29.79 to 30.24)
    GCC 10.2: 29.43 (SE +/- 0.26, N = 3; runs 28.92 to 29.69)
    (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

Opus Codec Encoding

Opus is an open audio codec. Opus is a lossy audio compression format designed primarily for interactive real-time applications over the Internet. This test uses Opus-Tools and measures the time required to encode a WAV file to Opus. Learn more via the OpenBenchmarking.org test page.
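Opus-Tools wraps libopus; the core encoder calls it makes per 20 ms frame look roughly like the following sketch (the 48 kHz stereo format and 96 kbps bitrate are illustrative assumptions, not the test profile's exact settings).

    #include <opus/opus.h>
    #include <vector>
    #include <cstdio>

    int main() {
        int err = 0;
        // 48 kHz stereo encoder, as opusenc would create for a typical WAV input
        OpusEncoder *enc = opus_encoder_create(48000, 2, OPUS_APPLICATION_AUDIO, &err);
        if (err != OPUS_OK)
            return 1;
        opus_encoder_ctl(enc, OPUS_SET_BITRATE(96000));

        std::vector<opus_int16> pcm(960 * 2, 0);          // one 20 ms frame of silence (960 samples/channel)
        std::vector<unsigned char> packet(4000);
        opus_int32 bytes = opus_encode(enc, pcm.data(), 960,
                                       packet.data(), (opus_int32)packet.size());
        std::printf("encoded frame: %d bytes\n", bytes);
        opus_encoder_destroy(enc);
        return 0;
    }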

Opus Codec Encoding 1.3.1 - WAV To Opus Encode (Seconds, fewer is better):
    LLVM Clang 12: 5.589 (SE +/- 0.037, N = 5; runs 5.47 to 5.67)
    GCC 10.2: 5.484 (SE +/- 0.031, N = 5; runs 5.41 to 5.57)
    One of the two builds additionally reports -fvisibility=hidden.
    (CXX) g++ options: -O3 -march=native -logg -lm

WavPack Audio Encoding

This test times how long it takes to encode a sample WAV file to WavPack format with very high quality settings. Learn more via the OpenBenchmarking.org test page.

WavPack Audio Encoding 5.3 - WAV To WavPack (Seconds, fewer is better):
    LLVM Clang 12: 10.31 (SE +/- 0.11, N = 5; runs 9.88 to 10.5)
    GCC 10.2: 10.15 (SE +/- 0.10, N = 5; runs 9.8 to 10.3)
    AMD AOCC 3.0: 10.34 (SE +/- 0.01, N = 5; runs 10.32 to 10.38)
    AMD AOCC 2.3: 10.29 (SE +/- 0.03, N = 5; runs 10.2 to 10.37)
    (CXX) g++ options: -O3 -march=native -rdynamic

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.

GNU Radio - Test: FIR Filter (MiB/s, more is better):
    LLVM Clang 12: 1060.1 (SE +/- 3.22, N = 9; runs 1037.1 to 1071.5)
    GCC 10.2: 1063.5 (SE +/- 2.09, N = 9; runs 1053.3 to 1074.7)
    AMD AOCC 3.0: 1065.4 (SE +/- 3.65, N = 8; runs 1042.6 to 1078.3)
    AMD AOCC 2.3: 1080.3 (SE +/- 4.68, N = 9; runs 1056.3 to 1099)
    Reported GNU Radio version: 3.8.1.0

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: resnet18 (ms, fewer is better):
    LLVM Clang 12: 13.91 (SE +/- 0.08, N = 3; runs 13.78 to 14.04; observed MIN 13.66 / MAX 14.54) [-lomp]
    GCC 10.2: 14.11 (SE +/- 0.05, N = 15; runs 13.92 to 14.57; observed MIN 13.84 / MAX 23.15) [-lgomp]
    AMD AOCC 3.0: 13.86 (SE +/- 0.11, N = 4; runs 13.54 to 14.04; observed MIN 13.44 / MAX 16.35) [-lomp]
    AMD AOCC 2.3: 14.08 (SE +/- 0.21, N = 3; runs 13.67 to 14.34; observed MIN 13.56 / MAX 21.13) [-lomp]
    (CXX) g++ options: -O3 -march=native -rdynamic -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.
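benchdnn drives these primitives internally; for orientation only, creating the CPU engine, stream, and an f32 memory object with the oneDNN C++ API looks like this minimal sketch (the tensor shape is an arbitrary example, not one of the benchdnn harness shapes).

    #include <oneapi/dnnl/dnnl.hpp>
    #include <cstdio>

    int main() {
        // CPU engine and execution stream, the engine kind exercised by this test
        dnnl::engine eng(dnnl::engine::kind::cpu, 0);
        dnnl::stream strm(eng);

        // an f32 NCHW tensor descriptor and its backing memory object
        dnnl::memory::desc md({1, 3, 224, 224},
                              dnnl::memory::data_type::f32,
                              dnnl::memory::format_tag::nchw);
        dnnl::memory mem(md, eng);
        std::printf("tensor bytes: %zu\n", md.get_size());
        return 0;
    }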

oneDNN 2.1.2 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, fewer is better):
    LLVM Clang 12: 1792.27 (SE +/- 9.12, N = 3; runs 1774.89 to 1805.74; MIN 1766.32) [-fopenmp=libomp]
    GCC 10.2: 1773.67 (SE +/- 5.00, N = 3; runs 1763.67 to 1778.84; MIN 1750.26) [-fopenmp]
    AMD AOCC 3.0: 1760.57 (SE +/- 3.74, N = 3; runs 1755.25 to 1767.78; MIN 1745.87) [-fopenmp=libomp]
    (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -pie -lpthread -ldl

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.9 - Compression Level: 8, Long Mode - Decompression Speed (MB/s, more is better):
    GCC 10.2: 4886.2 (SE +/- 29.99, N = 3; runs 4831.2 to 4934.4)
    AMD AOCC 2.3: 4805.6
    (CC) gcc options: -O3 -march=native -pthread -lz -llzma

JPEG XL

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities, with JPEG XL offering better image quality and compression than legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.

JPEG XL 0.3.3 - Input: PNG - Encode Speed: 5 (MP/s, more is better):
    LLVM Clang 12: 74.77 (SE +/- 0.04, N = 3; runs 74.69 to 74.83)
    GCC 10.2: 74.12 (SE +/- 0.03, N = 3; runs 74.07 to 74.18)
    AMD AOCC 3.0: 73.64 (SE +/- 0.15, N = 3; runs 73.36 to 73.84)
    AMD AOCC 2.3: 74.61 (SE +/- 0.11, N = 3; runs 74.44 to 74.82)
    Three of the four builds additionally report -Xclang -mrelax-all.
    (CXX) g++ options: -O3 -march=native -funwind-tables -O2 -pthread -fPIE -pie -ldl

libavif avifenc

This test of the AOMedia libavif library times the encoding of a JPEG image to the AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.9.0 - Encoder Speed: 10, Lossless (Seconds, fewer is better):
    LLVM Clang 12: 4.807 (SE +/- 0.038, N = 3; runs 4.76 to 4.88)
    GCC 10.2: 4.875 (SE +/- 0.022, N = 3; runs 4.84 to 4.92)
    AMD AOCC 3.0: 4.837 (SE +/- 0.015, N = 3; runs 4.81 to 4.86)
    AMD AOCC 2.3: 4.832 (SE +/- 0.041, N = 3; runs 4.77 to 4.91)
    (CXX) g++ options: -O3 -fPIC -lm

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.9 - Compression Level: 19, Long Mode - Compression Speed (MB/s, more is better):
    LLVM Clang 12: 36.8 (SE +/- 0.03, N = 3; runs 36.8 to 36.9)
    GCC 10.2: 36.6 (SE +/- 0.03, N = 3; runs 36.5 to 36.6)
    AMD AOCC 3.0: 36.4 (SE +/- 0.00, N = 3; runs 36.4 to 36.4)
    AMD AOCC 2.3: 36.7 (SE +/- 0.03, N = 3; runs 36.6 to 36.7)
    (CC) gcc options: -O3 -march=native -pthread -lz -llzma

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.13 - Settings: UASTC Level 2 (Seconds, fewer is better):
    LLVM Clang 12: 15.91 (SE +/- 0.06, N = 3; runs 15.82 to 16.01)
    GCC 10.2: 15.90 (SE +/- 0.05, N = 3; runs 15.82 to 15.98)
    AMD AOCC 3.0: 16.06 (SE +/- 0.02, N = 3; runs 16.01 to 16.09)
    AMD AOCC 2.3: 15.98 (SE +/- 0.02, N = 3; runs 15.95 to 16.03)
    (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

Timed Godot Game Engine Compilation

This test times how long it takes to compile the Godot Game Engine. Godot is a popular, open-source, cross-platform 2D/3D game engine; this build uses the SCons build system and targets the X11 platform. Learn more via the OpenBenchmarking.org test page.

Timed Godot Game Engine Compilation 3.2.3 - Time To Compile (Seconds, fewer is better):
    LLVM Clang 12: 80.21 (SE +/- 0.09, N = 3; runs 80.07 to 80.37)
    GCC 10.2: 79.52 (SE +/- 0.19, N = 3; runs 79.22 to 79.87)
    AMD AOCC 3.0: 79.87 (SE +/- 0.16, N = 3; runs 79.55 to 80.05)
    AMD AOCC 2.3: 79.94 (SE +/- 0.28, N = 3; runs 79.47 to 80.46)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms, fewer is better):
    LLVM Clang 12: 0.641231 (SE +/- 0.000908, N = 3; runs 0.64 to 0.64; MIN 0.61) [-fopenmp=libomp]
    GCC 10.2: 0.638664 (SE +/- 0.000722, N = 3; runs 0.64 to 0.64; MIN 0.61) [-fopenmp]
    AMD AOCC 3.0: 0.643093 (SE +/- 0.004823, N = 3; runs 0.64 to 0.65; MIN 0.61) [-fopenmp=libomp]
    (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -pie -lpthread -ldl

libavif avifenc

This test of the AOMedia libavif library times the encoding of a JPEG image to the AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.9.0 - Encoder Speed: 10 (Seconds, fewer is better):
    LLVM Clang 12: 2.952 (SE +/- 0.016, N = 3; runs 2.93 to 2.98)
    GCC 10.2: 2.934 (SE +/- 0.014, N = 3; runs 2.91 to 2.95)
    AMD AOCC 3.0: 2.941 (SE +/- 0.006, N = 3; runs 2.93 to 2.95)
    AMD AOCC 2.3: 2.933 (SE +/- 0.035, N = 3; runs 2.88 to 3)
    (CXX) g++ options: -O3 -fPIC -lm

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.

GNU Radio - Test: FM Deemphasis Filter (MiB/s, more is better):
    LLVM Clang 12: 1054.9 (SE +/- 0.98, N = 9; runs 1052.2 to 1061.4)
    GCC 10.2: 1055.0 (SE +/- 0.78, N = 9; runs 1050.8 to 1057.9)
    AMD AOCC 3.0: 1055.8 (SE +/- 3.09, N = 8; runs 1044.5 to 1073.4)
    AMD AOCC 2.3: 1061.0 (SE +/- 15.16, N = 9; runs 940.7 to 1084.2)
    Reported GNU Radio version: 3.8.1.0

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, fewer is better):
    LLVM Clang 12: 17.34 (SE +/- 0.01, N = 3; runs 17.32 to 17.36; MIN 16.81) [-fopenmp=libomp]
    GCC 10.2: 17.29 (SE +/- 0.09, N = 3; runs 17.18 to 17.46; MIN 16.58) [-fopenmp]
    AMD AOCC 3.0: 17.36 (SE +/- 0.03, N = 3; runs 17.33 to 17.41; MIN 16.83) [-fopenmp=libomp]
    (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -pie -lpthread -ldl

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.13 - Settings: UASTC Level 3 (Seconds, fewer is better):
    LLVM Clang 12: 28.22 (SE +/- 0.04, N = 3; runs 28.15 to 28.28)
    GCC 10.2: 28.13 (SE +/- 0.04, N = 3; runs 28.05 to 28.17)
    AMD AOCC 3.0: 28.17 (SE +/- 0.03, N = 3; runs 28.13 to 28.23)
    AMD AOCC 2.3: 28.15 (SE +/- 0.05, N = 3; runs 28.07 to 28.22)
    (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, fewer is better):
    LLVM Clang 12: 2757.75 (SE +/- 5.95, N = 3; runs 2746.33 to 2766.33; MIN 2734.73) [-fopenmp=libomp]
    GCC 10.2: 2757.52 (SE +/- 2.01, N = 3; runs 2754.84 to 2761.46; MIN 2719.35) [-fopenmp]
    AMD AOCC 3.0: 2760.94 (SE +/- 17.19, N = 3; runs 2729.57 to 2788.82; MIN 2717.59) [-fopenmp=libomp]
    (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -pie -lpthread -ldl

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.1.3 (ms, fewer is better) - GCC 10.2 only:
    Model: inception-v3: 32.34 (SE +/- 0.09, N = 3; MIN 31.33 / MAX 42.61)
    Model: mobilenet-v1-1.0: 2.351 (SE +/- 0.027, N = 3; MIN 2.27 / MAX 7.49)
    Model: MobileNetV2_224: 3.240 (SE +/- 0.049, N = 3; MIN 3.12 / MAX 11.31)
    Model: resnet-v2-50: 25.07 (SE +/- 0.02, N = 3; MIN 23.97 / MAX 39.95)
    Model: SqueezeNetV1.0: 5.081 (SE +/- 0.010, N = 3; MIN 4.92 / MAX 14.74)
    (CXX) g++ options: -O3 -march=native -std=c++11 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Smallpt

Smallpt is a C++ global illumination renderer written in less than 100 lines of code. Global illumination is done via unbiased Monte Carlo path tracing and there is multi-threading support via the OpenMP library. Learn more via the OpenBenchmarking.org test page.
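Smallpt's OpenMP usage is a single parallel-for over image rows with per-pixel Monte Carlo sampling. The same pattern, reduced to an OpenMP Monte Carlo estimate of pi with a tiny inline LCG, is sketched below purely to illustrate the parallelization style; it is not smallpt itself.

    #include <cstdio>

    // OpenMP parallel-for with a reduction, mirroring smallpt's worksharing pattern
    int main() {
        const long samples = 10000000;
        long hits = 0;
        #pragma omp parallel for reduction(+:hits)
        for (long i = 0; i < samples; ++i) {
            unsigned int s = 1234u + (unsigned int)i;          // per-iteration seed
            s = s * 1664525u + 1013904223u;                     // simple LCG step
            double x = (s >> 8) / 16777216.0;                   // uniform in [0, 1)
            s = s * 1664525u + 1013904223u;
            double y = (s >> 8) / 16777216.0;
            if (x * x + y * y <= 1.0)
                hits += 1;
        }
        std::printf("pi ~= %f\n", 4.0 * hits / samples);
        return 0;
    }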

Smallpt 1.0 - Global Illumination Renderer; 128 Samples (Seconds, fewer is better):
    GCC 10.2: 4.674 (SE +/- 0.015, N = 3)
    (CXX) g++ options: -fopenmp -O3 -march=native

Crafty

This is a performance test of Crafty, an advanced open-source chess engine. Learn more via the OpenBenchmarking.org test page.

Crafty 25.2 - Elapsed Time (Nodes Per Second, more is better):
    GCC 10.2: 11731249 (SE +/- 26371.45, N = 3)
    (CC) gcc options: -pthread -lstdc++ -fprofile-use -lm

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.
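For orientation, loading the super-resolution-10 model with the ONNX Runtime C++ API looks roughly like the sketch below; the thread count and local model path are assumptions, and the actual test profile drives its own harness rather than this code.

    #include <onnxruntime_cxx_api.h>
    #include <cstdio>

    int main() {
        // assumes a local copy of super-resolution-10.onnx from the ONNX Zoo
        Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "pts-sketch");
        Ort::SessionOptions opts;
        opts.SetIntraOpNumThreads(32);      // assumption: match the 32 hardware threads
        Ort::Session session(env, "super-resolution-10.onnx", opts);
        std::printf("model inputs: %zu\n", session.GetInputCount());
        return 0;
    }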

ONNX Runtime 1.6 - Model: super-resolution-10 - Device: OpenMP CPU (Inferences Per Minute, more is better):
    LLVM Clang 12: 5937 (SE +/- 55.09, N = 12; runs 5575 to 6199) [-fopenmp=libomp]
    GCC 10.2: 6721 (SE +/- 215.50, N = 12; runs 5800.5 to 8258) [-fopenmp]
    AMD AOCC 3.0: 5976 (SE +/- 38.28, N = 3; runs 5902 to 6030) [-fopenmp=libomp]
    AMD AOCC 2.3: 6067 (SE +/- 34.74, N = 3; runs 5997.5 to 6105.5) [-fopenmp=libomp]
    (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -ldl -lrt

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.
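The PTS profile drives redis-benchmark against a local server; a minimal client sketch using hiredis, shown only to illustrate the GET/SET operations being measured and assuming a redis-server listening on localhost:6379, follows.

    #include <hiredis/hiredis.h>
    #include <cstdio>

    int main() {
        // assumes a redis-server on 127.0.0.1:6379
        redisContext *c = redisConnect("127.0.0.1", 6379);
        if (c == nullptr || c->err)
            return 1;
        redisReply *r = (redisReply *)redisCommand(c, "SET %s %s", "bench:key", "value");
        freeReplyObject(r);
        r = (redisReply *)redisCommand(c, "GET %s", "bench:key");
        std::printf("GET -> %s\n", r->str);
        freeReplyObject(r);
        redisFree(c);
        return 0;
    }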

Redis 6.0.9 - Test: GET (Requests Per Second, more is better):
    LLVM Clang 12: 3624414.37 (SE +/- 47796.61, N = 15; runs 3406005.25 to 4108502.75)
    GCC 10.2: 3470419.90 (SE +/- 36718.95, N = 15; runs 3243625 to 3859526)
    AMD AOCC 3.0: 3545388.92 (SE +/- 11517.61, N = 3; runs 3522367 to 3557578)
    AMD AOCC 2.3: 3658044.77 (SE +/- 58906.79, N = 15; runs 3385240.25 to 4223040.5)
    (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3 -march=native

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, fewer is better):
    LLVM Clang 12: 2.46561 (SE +/- 0.00451, N = 3; runs 2.46 to 2.47; MIN 2.33) [-fopenmp=libomp]
    GCC 10.2: 4.46777 (SE +/- 0.30276, N = 15; runs 3.19 to 6.41; MIN 2.86) [-fopenmp]
    AMD AOCC 3.0: 2.46850 (SE +/- 0.00340, N = 3; runs 2.46 to 2.47; MIN 2.36) [-fopenmp=libomp]
    (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -pie -lpthread -ldl

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.

GNU Radio - Test: Five Back to Back FIR Filters (MiB/s, more is better):
    LLVM Clang 12: 911.2 (SE +/- 17.91, N = 9; runs 801.6 to 985.6)
    GCC 10.2: 920.8 (SE +/- 19.67, N = 9; runs 849.7 to 1025.5)
    AMD AOCC 3.0: 929.1 (SE +/- 20.61, N = 8; runs 813.6 to 996.7)
    AMD AOCC 2.3: 931.6 (SE +/- 20.04, N = 9; runs 820.7 to 1007.9)
    Reported GNU Radio version: 3.8.1.0

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.9 - Compression Level: 19 - Decompression Speed (MB/s, more is better):
    LLVM Clang 12: 4000.9 (SE +/- 50.52, N = 3; runs 3900.1 to 4056.9)
    GCC 10.2: 4251.7 (SE +/- 6.53, N = 3; runs 4239 to 4260.7)
    AMD AOCC 3.0: 3608.8 (SE +/- 417.40, N = 3; runs 2774.4 to 4048.4)
    AMD AOCC 2.3: 4097.3 (SE +/- 12.18, N = 3; runs 4074.7 to 4116.5)
    (CC) gcc options: -O3 -march=native -pthread -lz -llzma

147 Results Shown

Sysbench
Etcpak
Timed LLVM Compilation
C-Ray
GraphicsMagick
LibRaw
NCNN
GraphicsMagick
ASTC Encoder
Etcpak
SVT-AV1
GraphicsMagick
NCNN:
  CPU-v2-v2 - mobilenet-v2
  CPU-v3-v3 - mobilenet-v3
TNN
NCNN
Ogg Audio Encoding
ASTC Encoder
Google SynthMark
Zstd Compression
GraphicsMagick
NCNN
TSCP
NCNN
QuantLib
GraphicsMagick
Etcpak
Liquid-DSP
ONNX Runtime
JPEG XL Decoding
WebP Image Encode
AOM AV1
NCNN
SVT-AV1
libavif avifenc
NCNN
JPEG XL Decoding
NCNN
JPEG XL
LZ4 Compression
NCNN
Zstd Compression
simdjson
Zstd Compression
JPEG XL
simdjson
Basis Universal
simdjson
ONNX Runtime
Liquid-DSP
POV-Ray
WebP2 Image Encode:
  Quality 75, Compression Effort 7
  Quality 95, Compression Effort 7
  Quality 100, Compression Effort 5
libavif avifenc
GraphicsMagick
Basis Universal
libavif avifenc
JPEG XL
Zstd Compression
libavif avifenc
NCNN
JPEG XL
AOM AV1
dav1d
WebP Image Encode
WebP2 Image Encode
SVT-VP9
Zstd Compression
Redis:
  LPUSH
  LPOP
Zstd Compression
LZ4 Compression
AOM AV1
ONNX Runtime
NCNN
simdjson
WebP2 Image Encode
SVT-VP9
ONNX Runtime
GraphicsMagick
LZ4 Compression
Redis
ASTC Encoder
Liquid-DSP
oneDNN
x265
Tachyon
x265
TNN
GNU Radio
oneDNN
SQLite Speedtest
Ngspice
Timed MrBayes Analysis
LZ4 Compression
Redis
RNNoise
LZ4 Compression
NCNN
LZ4 Compression
Gcrypt Library
WebP Image Encode
oneDNN
NCNN
Crypto++
OpenFOAM
AOM AV1
Ngspice
GNU Radio
x264
WebP Image Encode
Zstd Compression
JPEG XL
GNU Radio
WebP Image Encode
dav1d
AOM AV1
Opus Codec Encoding
WavPack Audio Encoding
GNU Radio
NCNN
oneDNN
Zstd Compression
JPEG XL
libavif avifenc
Zstd Compression
Basis Universal
Timed Godot Game Engine Compilation
oneDNN
libavif avifenc
GNU Radio
oneDNN
Basis Universal
oneDNN
Mobile Neural Network:
  inception-v3
  mobilenet-v1-1.0
  MobileNetV2_224
  resnet-v2-50
  SqueezeNetV1.0
Smallpt
Crafty
ONNX Runtime
Redis
oneDNN
GNU Radio
Zstd Compression