Ryzen 9 5950X AOCC 3.0 Compiler Benchmarking

Benchmarks for a future article.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2103167-PTS-RYZEN95988
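
For anyone reproducing this locally, a minimal sketch of the invocation (this is the standard Phoronix Test Suite workflow; the suite will prompt for which of the contained tests to install and run):

  # Fetch this result file, run the same tests, and merge your numbers for comparison
  phoronix-test-suite benchmark 2103167-PTS-RYZEN95988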

Test Runs

  Identifier      Run Date        Test Duration
  GCC 10.2        March 14 2021   14 Hours, 26 Minutes
  LLVM Clang 12   March 15 2021   11 Hours, 8 Minutes
  AMD AOCC 2.3    March 14 2021   10 Hours, 49 Minutes
  AMD AOCC 3.0    March 15 2021   10 Hours, 54 Minutes
  Average                         11 Hours, 49 Minutes



Ryzen 9 5950X AOCC 3.0 Compiler Benchmarking - Phoronix Test Suite / OpenBenchmarking.org

System Details
  Processor: AMD Ryzen 9 5950X 16-Core @ 3.40GHz (16 Cores / 32 Threads)
  Motherboard: ASUS ROG CROSSHAIR VIII HERO (WI-FI) (3204 BIOS)
  Chipset: AMD Starship/Matisse
  Memory: 32GB
  Disk: 2000GB Corsair Force MP600 + 2000GB
  Graphics: AMD NAVY_FLOUNDER 12GB (2855/1000MHz)
  Audio: AMD Device ab28
  Monitor: ASUS MG28U
  Network: Realtek RTL8125 2.5GbE + Intel I211 + Intel Wi-Fi 6 AX200
  OS: Ubuntu 20.10
  Kernel: 5.11.6-051106-generic (x86_64)
  Desktop: GNOME Shell 3.38.2
  Display Server: X Server 1.20.9
  OpenGL: 4.6 Mesa 21.1.0-devel (git-684f97d 2021-03-12 groovy-oibaf-ppa) (LLVM 11.0.1)
  Vulkan: 1.2.168
  Compilers: GCC 10.2.0, Clang 12.0.0-++rc3-1~exp1~oibaf~g, Clang 11.0.0, Clang 12.0.0
  File-System: ext4
  Screen Resolution: 3840x2160

System Logs
  - Transparent Huge Pages: madvise
  - CXXFLAGS="-O3 -march=native" CFLAGS="-O3 -march=native"
  - GCC 10.2 configure options: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
  - AMD AOCC 2.3: Optimized build with assertions; Default target: x86_64-unknown-linux-gnu; Host CPU: (unknown)
  - AMD AOCC 3.0: Optimized build with assertions; Default target: x86_64-unknown-linux-gnu; Host CPU: (unknown)
  - Scaling Governor: acpi-cpufreq performance (Boost: Enabled)
  - CPU Microcode: 0xa201009
  - Python 3.8.6
  - Security: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional IBRS_FW STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected
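
For reference, a minimal sketch of the build environment implied by the notes above. The exported flags are the ones recorded in the system log; selecting AOCC's Clang-based compilers via CC/CXX is an assumption about the workflow, and the install prefix is a placeholder that varies by AOCC release:

  # Flags applied to the test builds in this comparison
  export CFLAGS="-O3 -march=native"
  export CXXFLAGS="-O3 -march=native"

  # Example of pointing a build at AOCC's Clang-based compilers (placeholder path)
  export CC=/opt/AMD/aocc-compiler-3.0.0/bin/clang
  export CXX=/opt/AMD/aocc-compiler-3.0.0/bin/clang++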

Logarithmic Result Overview (Phoronix Test Suite chart) comparing GCC 10.2, LLVM Clang 12, AMD AOCC 2.3, and AMD AOCC 3.0 across: Sysbench, Timed LLVM Compilation, C-Ray, LibRaw, Etcpak, Ogg Audio Encoding, Google SynthMark, GraphicsMagick, SVT-AV1, TSCP, QuantLib, NCNN, TNN, JPEG XL Decoding, POV-Ray, Zstd Compression, SVT-VP9, libavif avifenc, JPEG XL, ONNX Runtime, ASTC Encoder, WebP2 Image Encode, WebP Image Encode, Basis Universal, dav1d, LZ4 Compression, Tachyon, x265, simdjson, Timed MrBayes Analysis, RNNoise, Ngspice, Redis, Gcrypt Library, Liquid-DSP, Crypto++, x264, WavPack Audio Encoding, GNU Radio, and Timed Godot Game Engine Compilation.

The condensed per-test result table for GCC 10.2, LLVM Clang 12, AMD AOCC 2.3, and AMD AOCC 3.0 is available with the full result file on OpenBenchmarking.org; the individual results are broken out per test below.

Sysbench

This is a benchmark of Sysbench with the built-in CPU and memory sub-tests. Sysbench is a scriptable multi-threaded benchmark tool based on LuaJIT. Learn more via the OpenBenchmarking.org test page.

Sysbench 1.0.20, Test: CPU (Events Per Second, More Is Better): AMD AOCC 2.3: 210804861.92; AMD AOCC 3.0: 210533984.51; LLVM Clang 12: 2445437.63; GCC 10.2: 91743.72
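
Sysbench's CPU workload can also be run standalone from the command line; a minimal sketch (thread count and run time are illustrative, not necessarily the options the test profile uses):

  # CPU sub-test on all hardware threads for 10 seconds
  sysbench cpu --threads=$(nproc) --time=10 run

  # The memory sub-test mentioned above is invoked the same way
  sysbench memory --threads=$(nproc) run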

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet", focused on providing open-source, very fast ETC and S3 texture compression support. Learn more via the OpenBenchmarking.org test page.

Etcpak 0.7, Configuration: DXT1 (Mpx/s, More Is Better): LLVM Clang 12: 3669.19; AMD AOCC 3.0: 3583.01; AMD AOCC 2.3: 2986.71; GCC 10.2: 1546.30

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler. Learn more via the OpenBenchmarking.org test page.

Timed LLVM Compilation 10.0, Time To Compile (Seconds, Fewer Is Better): LLVM Clang 12: 302.70; GCC 10.2: 370.57; AMD AOCC 2.3: 576.19; AMD AOCC 3.0: 610.96
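
The workload here is an ordinary CMake/Ninja build of the LLVM source tree; a rough sketch of the kind of invocation being timed (compiler paths and CMake options are illustrative assumptions, not the exact test-profile configuration):

  # Configure LLVM with a chosen compiler, then time a parallel build
  cmake -G Ninja -DCMAKE_BUILD_TYPE=Release \
        -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++ ../llvm
  time ninja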

C-Ray

This is a test of C-Ray, a simple raytracer designed to test floating-point CPU performance. The test is multi-threaded (16 threads per core); as configured here it shoots 16 rays per pixel for anti-aliasing and renders at 4K resolution. Learn more via the OpenBenchmarking.org test page.

C-Ray 1.1, Total Time - 4K, 16 Rays Per Pixel (Seconds, Fewer Is Better): GCC 10.2: 25.09; AMD AOCC 3.0: 44.33; AMD AOCC 2.3: 44.53; LLVM Clang 12: 44.89

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33, Operation: Sharpen (Iterations Per Minute, More Is Better): GCC 10.2: 375; AMD AOCC 2.3: 241; AMD AOCC 3.0: 240; LLVM Clang 12: 237
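
GraphicsMagick ships a benchmark sub-command that repeats an operation and reports the iteration rate; a sketch along these lines, with the operation arguments being an assumption rather than the exact ones the test profile passes:

  # Repeat a sharpen operation on the sample JPEG for roughly 60 seconds
  gm benchmark -duration 60 convert sample-photo.jpg -sharpen 0x1 null: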

LibRaw

LibRaw is a RAW image decoder for digital camera photos. This test profile runs LibRaw's post-processing benchmark. Learn more via the OpenBenchmarking.org test page.

LibRaw 0.20, Post-Processing Benchmark (Mpix/sec, More Is Better): GCC 10.2: 78.66; LLVM Clang 12: 54.14; AMD AOCC 3.0: 52.68; AMD AOCC 2.3: 50.37

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU - Model: regnety_400m (ms, Fewer Is Better): AMD AOCC 2.3: 12.19; AMD AOCC 3.0: 12.30; LLVM Clang 12: 17.06; GCC 10.2: 17.61

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33, Operation: HWB Color Space (Iterations Per Minute, More Is Better): GCC 10.2: 1115; AMD AOCC 2.3: 848; LLVM Clang 12: 844; AMD AOCC 3.0: 805

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.4, Preset: Thorough (Seconds, Fewer Is Better): GCC 10.2: 6.9922; AMD AOCC 2.3: 9.2012; AMD AOCC 3.0: 9.3493; LLVM Clang 12: 9.4996
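
astcenc is a command-line tool; a minimal sketch of a thorough-preset compression plus a decode pass (block size and filenames are illustrative):

  # Compress an LDR image at a 6x6 block size with the -thorough preset
  astcenc -cl input.png output.astc 6x6 -thorough

  # Decompress back to PNG to exercise the decode path
  astcenc -dl output.astc roundtrip.png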

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet", focused on providing open-source, very fast ETC and S3 texture compression support. Learn more via the OpenBenchmarking.org test page.

Etcpak 0.7, Configuration: ETC1 (Mpx/s, More Is Better): GCC 10.2: 386.56; LLVM Clang 12: 383.47; AMD AOCC 3.0: 286.93; AMD AOCC 2.3: 285.29

SVT-AV1

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 CPU-based multi-threaded video encoder for the AV1 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 0.8, Encoder Mode: Enc Mode 8 - Input: 1080p (Frames Per Second, More Is Better): AMD AOCC 2.3: 65.75; LLVM Clang 12: 65.31; AMD AOCC 3.0: 64.28; GCC 10.2: 51.77

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33, Operation: Resizing (Iterations Per Minute, More Is Better): GCC 10.2: 2165; AMD AOCC 2.3: 1824; LLVM Clang 12: 1789; AMD AOCC 3.0: 1720

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, Fewer Is Better): AMD AOCC 2.3: 3.52; AMD AOCC 3.0: 3.53; LLVM Clang 12: 3.79; GCC 10.2: 4.43

NCNN 20201218, Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, Fewer Is Better): AMD AOCC 2.3: 3.06; AMD AOCC 3.0: 3.07; LLVM Clang 12: 3.33; GCC 10.2: 3.85

TNN

TNN is an open-source deep learning reasoning framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.2.3, Target: CPU - Model: MobileNet v2 (ms, Fewer Is Better): GCC 10.2: 216.28; AMD AOCC 2.3: 252.45; AMD AOCC 3.0: 260.66; LLVM Clang 12: 270.79

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU - Model: mnasnet (ms, Fewer Is Better): AMD AOCC 2.3: 3.16; AMD AOCC 3.0: 3.18; LLVM Clang 12: 3.45; GCC 10.2: 3.93

Ogg Audio Encoding

This test times how long it takes to encode a sample WAV file to Ogg format using the reference Xiph.org tools/libraries. Learn more via the OpenBenchmarking.org test page.

Ogg Audio Encoding 1.3.4, WAV To Ogg (Seconds, Fewer Is Better): LLVM Clang 12: 13.37; GCC 10.2: 13.58; AMD AOCC 3.0: 16.54; AMD AOCC 2.3: 16.56

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.4, Preset: Medium (Seconds, Fewer Is Better): AMD AOCC 2.3: 3.2899; AMD AOCC 3.0: 3.4040; LLVM Clang 12: 3.5076; GCC 10.2: 4.0524

Google SynthMark

SynthMark is a cross platform tool for benchmarking CPU performance under a variety of real-time audio workloads. It uses a polyphonic synthesizer model to provide standardized tests for latency, jitter and computational throughput. Learn more via the OpenBenchmarking.org test page.

Google SynthMark 20201109, Test: VoiceMark_100 (Voices, More Is Better): GCC 10.2: 966.30; AMD AOCC 2.3: 807.37; LLVM Clang 12: 795.81; AMD AOCC 3.0: 789.22

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.9, Compression Level: 3, Long Mode - Compression Speed (MB/s, More Is Better): GCC 10.2: 1425.9; LLVM Clang 12: 1191.6; AMD AOCC 3.0: 1186.0; AMD AOCC 2.3: 1166.4
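
These settings map directly onto the zstd CLI; a sketch using the same FreeBSD disk image named above (thread count is illustrative):

  # Level 3 with long-distance matching, all threads, keep the source file
  zstd -3 --long -T0 -k FreeBSD-12.2-RELEASE-amd64-memstick.img -o memstick.img.zst

  # zstd also has a built-in benchmark mode for a given level
  zstd -b19 --long FreeBSD-12.2-RELEASE-amd64-memstick.img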

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33, Operation: Rotate (Iterations Per Minute, More Is Better): GCC 10.2: 1056; LLVM Clang 12: 1016; AMD AOCC 2.3: 928; AMD AOCC 3.0: 867

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU - Model: efficientnet-b0 (ms, Fewer Is Better): AMD AOCC 3.0: 4.50; AMD AOCC 2.3: 4.53; LLVM Clang 12: 4.80; GCC 10.2: 5.32

TSCP

This is a performance test of TSCP, Tom Kerrigan's Simple Chess Program, which has a built-in performance benchmark. Learn more via the OpenBenchmarking.org test page.

TSCP 1.81, AI Chess Performance (Nodes Per Second, More Is Better): AMD AOCC 2.3: 2314225; AMD AOCC 3.0: 2283512; LLVM Clang 12: 2148154; GCC 10.2: 1965773

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU - Model: blazeface (ms, Fewer Is Better): AMD AOCC 3.0: 1.56; AMD AOCC 2.3: 1.57; LLVM Clang 12: 1.73; GCC 10.2: 1.83

QuantLib

QuantLib is an open-source library/framework for quantitative finance, covering modeling, trading, and risk management scenarios. QuantLib is written in C++ with Boost, and its built-in benchmark reports the QuantLib Benchmark Index score. Learn more via the OpenBenchmarking.org test page.

QuantLib 1.21 (MFLOPS, More Is Better): AMD AOCC 2.3: 3710.4; AMD AOCC 3.0: 3646.4; LLVM Clang 12: 3538.5; GCC 10.2: 3196.9

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33, Operation: Noise-Gaussian (Iterations Per Minute, More Is Better): GCC 10.2: 454; AMD AOCC 2.3: 402; LLVM Clang 12: 398; AMD AOCC 3.0: 392

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet", focused on providing open-source, very fast ETC and S3 texture compression support. Learn more via the OpenBenchmarking.org test page.

Etcpak 0.7, Configuration: ETC2 (Mpx/s, More Is Better): LLVM Clang 12: 272.99; GCC 10.2: 245.04; AMD AOCC 3.0: 242.03; AMD AOCC 2.3: 236.47

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

Liquid-DSP 2021.01.31, Threads: 32 - Buffer Length: 256 - Filter Length: 57 (samples/s, More Is Better): LLVM Clang 12: 1335233333; AMD AOCC 3.0: 1334900000; AMD AOCC 2.3: 1332333333; GCC 10.2: 1164966667

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.6, Model: shufflenet-v2-10 - Device: OpenMP CPU (Inferences Per Minute, More Is Better): AMD AOCC 2.3: 17105; AMD AOCC 3.0: 15474; GCC 10.2: 15049; LLVM Clang 12: 14972

JPEG XL Decoding

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile measures JPEG XL decode performance to a PNG output file; the pts/jpegxl test profile covers encode performance. Learn more via the OpenBenchmarking.org test page.

JPEG XL Decoding 0.3.3, CPU Threads: 1 (MP/s, More Is Better): AMD AOCC 2.3: 64.34; LLVM Clang 12: 62.27; AMD AOCC 3.0: 59.92; GCC 10.2: 56.53

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1, Encode Settings: Quality 100, Highest Compression (Encode Time - Seconds, Fewer Is Better): AMD AOCC 2.3: 4.609; LLVM Clang 12: 4.674; AMD AOCC 3.0: 4.937; GCC 10.2: 5.242
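
The encode settings in this chart correspond to ordinary cwebp options; a sketch (filenames are placeholders, and -m 6 is the slowest, highest-compression method):

  # Quality 100 at the highest-compression method, multi-threaded
  cwebp -q 100 -m 6 -mt sample-photo.jpg -o sample-photo.webp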

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) run on the CPU with a sample 1080p video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.1-rc, Encoder Mode: Speed 0 Two-Pass (Frames Per Second, More Is Better): LLVM Clang 12: 0.42; GCC 10.2: 0.37

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU - Model: mobilenet (ms, Fewer Is Better): AMD AOCC 2.3: 10.96; AMD AOCC 3.0: 11.28; LLVM Clang 12: 11.53; GCC 10.2: 12.42

SVT-AV1

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 CPU-based multi-threaded video encoder for the AV1 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 0.8, Encoder Mode: Enc Mode 4 - Input: 1080p (Frames Per Second, More Is Better): AMD AOCC 3.0: 6.917; AMD AOCC 2.3: 6.859; LLVM Clang 12: 6.823; GCC 10.2: 6.137

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.9.0, Encoder Speed: 6, Lossless (Seconds, Fewer Is Better): AMD AOCC 2.3: 27.62; LLVM Clang 12: 27.85; AMD AOCC 3.0: 27.91; GCC 10.2: 30.98

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU - Model: shufflenet-v2 (ms, Fewer Is Better): AMD AOCC 2.3: 3.79; AMD AOCC 3.0: 3.87; LLVM Clang 12: 4.04; GCC 10.2: 4.23

JPEG XL Decoding

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile measures JPEG XL decode performance to a PNG output file; the pts/jpegxl test profile covers encode performance. Learn more via the OpenBenchmarking.org test page.

JPEG XL Decoding 0.3.3, CPU Threads: All (MP/s, More Is Better): AMD AOCC 2.3: 213.67; GCC 10.2: 210.99; LLVM Clang 12: 196.34; AMD AOCC 3.0: 191.91

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU - Model: squeezenet_ssd (ms, Fewer Is Better): AMD AOCC 3.0: 12.40; LLVM Clang 12: 12.50; AMD AOCC 2.3: 12.74; GCC 10.2: 13.77

JPEG XL

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.

JPEG XL 0.3.3, Input: JPEG - Encode Speed: 8 (MP/s, More Is Better): GCC 10.2: 38.13; LLVM Clang 12: 36.44; AMD AOCC 2.3: 35.87; AMD AOCC 3.0: 34.34
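
JPEG XL encoding is driven by the cjxl tool from libjxl; a sketch of an encode-speed-8 style run (the flag spelling is an assumption for the 0.3.x tooling, where the effort/speed option has appeared as -s/--speed and, in later releases, as -e/--effort):

  # Encode a JPEG source at speed/effort setting 8 (higher settings are slower)
  cjxl input.jpg output.jxl -s 8

  # Decode back to PNG, as in the JPEG XL Decoding test above
  djxl output.jxl decoded.png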

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3, Compression Level: 9 - Compression Speed (MB/s, More Is Better): GCC 10.2: 71.13; AMD AOCC 2.3: 68.80; AMD AOCC 3.0: 68.40; LLVM Clang 12: 64.43
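
The lz4 CLI includes an in-memory benchmark mode spanning compression levels; a sketch (the Ubuntu ISO filename is a placeholder):

  # Benchmark levels 1 through 9 on the sample file
  lz4 -b1 -e9 ubuntu-20.10-desktop-amd64.iso

  # Or simply compress at level 9, keeping the source file
  lz4 -9 -k ubuntu-20.10-desktop-amd64.iso ubuntu.iso.lz4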

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU - Model: resnet50 (ms, Fewer Is Better): AMD AOCC 3.0: 23.29; AMD AOCC 2.3: 23.31; LLVM Clang 12: 23.54; GCC 10.2: 25.67

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.9, Compression Level: 19, Long Mode - Decompression Speed (MB/s, More Is Better): GCC 10.2: 4350.9; AMD AOCC 2.3: 4024.7; AMD AOCC 3.0: 3978.2; LLVM Clang 12: 3957.9

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 0.8.2, Throughput Test: LargeRandom (GB/s, More Is Better): GCC 10.2: 1.22; LLVM Clang 12: 1.14; AMD AOCC 3.0: 1.12; AMD AOCC 2.3: 1.11

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.9, Compression Level: 8, Long Mode - Compression Speed (MB/s, More Is Better): GCC 10.2: 1122.6; LLVM Clang 12: 1039.1; AMD AOCC 2.3: 1026.1; AMD AOCC 3.0: 1024.5

JPEG XL

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.

JPEG XL 0.3.3, Input: PNG - Encode Speed: 8 (MP/s, More Is Better): GCC 10.2: 1.14; LLVM Clang 12: 1.06; AMD AOCC 3.0: 1.04; AMD AOCC 2.3: 1.04

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 0.8.2, Throughput Test: PartialTweets (GB/s, More Is Better): LLVM Clang 12: 6.18; AMD AOCC 3.0: 6.04; AMD AOCC 2.3: 5.92; GCC 10.2: 5.64

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.13, Settings: UASTC Level 0 (Seconds, Fewer Is Better): GCC 10.2: 5.157; AMD AOCC 2.3: 5.453; LLVM Clang 12: 5.522; AMD AOCC 3.0: 5.639

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 0.8.2, Throughput Test: DistinctUserID (GB/s, More Is Better): LLVM Clang 12: 6.26; AMD AOCC 3.0: 6.23; AMD AOCC 2.3: 6.12; GCC 10.2: 5.73

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.6, Model: yolov4 - Device: OpenMP CPU (Inferences Per Minute, More Is Better): AMD AOCC 3.0: 465; AMD AOCC 2.3: 456; GCC 10.2: 433; LLVM Clang 12: 426

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

Liquid-DSP 2021.01.31, Threads: 1 - Buffer Length: 256 - Filter Length: 57 (samples/s, More Is Better): GCC 10.2: 81844000; AMD AOCC 3.0: 78734000; LLVM Clang 12: 77794333; AMD AOCC 2.3: 75031667

POV-Ray

This is a test of POV-Ray, the Persistence of Vision Raytracer. POV-Ray is used to create 3D graphics using ray-tracing. Learn more via the OpenBenchmarking.org test page.

POV-Ray 3.7.0.7, Trace Time (Seconds, Fewer Is Better): LLVM Clang 12: 22.12; AMD AOCC 2.3: 22.54; AMD AOCC 3.0: 22.54; GCC 10.2: 24.09
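
POV-Ray 3.7 bundles a standard benchmark scene that can be run directly; a sketch (the +WT thread-count option is an assumption; by default all detected cores are used):

  # Run the built-in POV-Ray benchmark scene across 32 worker threads
  povray -benchmark +WT32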

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility and using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as the eventual successor to WebP. WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading support compared to WebP. Learn more via the OpenBenchmarking.org test page.

WebP2 Image Encode 20210126, Encode Settings: Quality 75, Compression Effort 7 (Seconds, Fewer Is Better): LLVM Clang 12: 103.01; AMD AOCC 3.0: 105.72; AMD AOCC 2.3: 106.38; GCC 10.2: 111.80

WebP2 Image Encode 20210126, Encode Settings: Quality 95, Compression Effort 7 (Seconds, Fewer Is Better): LLVM Clang 12: 188.31; AMD AOCC 3.0: 193.43; AMD AOCC 2.3: 193.60; GCC 10.2: 203.81

WebP2 Image Encode 20210126, Encode Settings: Quality 100, Compression Effort 5 (Seconds, Fewer Is Better): AMD AOCC 2.3: 6.288; AMD AOCC 3.0: 6.364; GCC 10.2: 6.414; LLVM Clang 12: 6.789

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.9.0, Encoder Speed: 2 (Seconds, Fewer Is Better): AMD AOCC 2.3: 21.83; AMD AOCC 3.0: 22.01; LLVM Clang 12: 22.07; GCC 10.2: 23.54

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33, Operation: Swirl (Iterations Per Minute, More Is Better): GCC 10.2: 1166; AMD AOCC 2.3: 1131; LLVM Clang 12: 1108; AMD AOCC 3.0: 1083

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.13, Settings: ETC1S (Seconds, Fewer Is Better): GCC 10.2: 19.90; AMD AOCC 2.3: 21.22; AMD AOCC 3.0: 21.37; LLVM Clang 12: 21.38

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.9.0, Encoder Speed: 6 (Seconds, Fewer Is Better): AMD AOCC 3.0: 8.309; LLVM Clang 12: 8.342; AMD AOCC 2.3: 8.384; GCC 10.2: 8.927

JPEG XL

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.

JPEG XL 0.3.3, Input: JPEG - Encode Speed: 5 (MP/s, More Is Better): AMD AOCC 2.3: 89.51; GCC 10.2: 87.35; LLVM Clang 12: 85.67; AMD AOCC 3.0: 83.54

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.9, Compression Level: 8 - Compression Speed (MB/s, More Is Better): AMD AOCC 2.3: 1117.6; AMD AOCC 3.0: 1096.3; LLVM Clang 12: 1078.1; GCC 10.2: 1057.4

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.9.0, Encoder Speed: 0 (Seconds, Fewer Is Better): AMD AOCC 2.3: 40.75; AMD AOCC 3.0: 41.03; LLVM Clang 12: 41.08; GCC 10.2: 43.62

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU - Model: googlenet (ms, Fewer Is Better): AMD AOCC 2.3: 11.94; AMD AOCC 3.0: 11.96; LLVM Clang 12: 12.53; GCC 10.2: 12.76

JPEG XL

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.

JPEG XL 0.3.3, Input: JPEG - Encode Speed: 7 (MP/s, More Is Better): AMD AOCC 2.3: 89.32; GCC 10.2: 87.07; LLVM Clang 12: 85.81; AMD AOCC 3.0: 83.63

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) run on the CPU with a sample 1080p video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.1-rc, Encoder Mode: Speed 6 Realtime (Frames Per Second, More Is Better): LLVM Clang 12: 37.50; GCC 10.2: 35.13

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.8.2, Video Input: Summer Nature 4K (FPS, More Is Better): LLVM Clang 12: 244.37; AMD AOCC 2.3: 244.15; GCC 10.2: 243.69; AMD AOCC 3.0: 229.03

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1, Encode Settings: Default (Encode Time - Seconds, Fewer Is Better): LLVM Clang 12: 0.977; AMD AOCC 2.3: 0.979; AMD AOCC 3.0: 1.007; GCC 10.2: 1.042

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility and using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as the eventual successor to WebP. WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading support compared to WebP. Learn more via the OpenBenchmarking.org test page.

WebP2 Image Encode 20210126, Encode Settings: Default (Seconds, Fewer Is Better): LLVM Clang 12: 2.134; AMD AOCC 2.3: 2.144; AMD AOCC 3.0: 2.165; GCC 10.2: 2.274

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.1, Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080p (Frames Per Second, More Is Better): AMD AOCC 2.3: 238.11; GCC 10.2: 235.04; AMD AOCC 3.0: 225.17; LLVM Clang 12: 223.50

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.9, Compression Level: 3, Long Mode - Decompression Speed (MB/s, More Is Better): GCC 10.2: 4737.1; AMD AOCC 2.3: 4586.2; AMD AOCC 3.0: 4543.8; LLVM Clang 12: 4456.6

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.

Redis 6.0.9, Test: LPUSH (Requests Per Second, More Is Better): AMD AOCC 2.3: 2351340.56; AMD AOCC 3.0: 2345671.03; GCC 10.2: 2222217.52; LLVM Clang 12: 2212779.00

Redis 6.0.9, Test: LPOP (Requests Per Second, More Is Better): AMD AOCC 3.0: 3766645.92; LLVM Clang 12: 3649832.58; AMD AOCC 2.3: 3589202.59; GCC 10.2: 3549910.50
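
These figures come from Redis's bundled redis-benchmark client run against a local server; a sketch covering the same commands (request count is illustrative, not the exact option set the test profile passes):

  # Benchmark the LPUSH, LPOP, SET and GET commands with quiet summary output
  redis-benchmark -t lpush,lpop,set,get -n 1000000 -q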

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.9, Compression Level: 8 - Decompression Speed (MB/s, More Is Better): GCC 10.2: 4617.1; AMD AOCC 2.3: 4468.2; AMD AOCC 3.0: 4463.1; LLVM Clang 12: 4352.4

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3, Compression Level: 3 - Compression Speed (MB/s, More Is Better): GCC 10.2: 72.36; AMD AOCC 3.0: 72.30; AMD AOCC 2.3: 72.21; LLVM Clang 12: 68.30

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) run on the CPU with a sample 1080p video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.1-rc, Encoder Mode: Speed 4 Two-Pass (Frames Per Second, More Is Better): LLVM Clang 12: 9.74; GCC 10.2: 9.20

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.6, Model: bertsquad-10 - Device: OpenMP CPU (Inferences Per Minute, More Is Better): AMD AOCC 3.0: 649; AMD AOCC 2.3: 646; LLVM Clang 12: 634; GCC 10.2: 614

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU - Model: yolov4-tiny (ms, Fewer Is Better): GCC 10.2: 20.77; AMD AOCC 2.3: 21.66; LLVM Clang 12: 21.70; AMD AOCC 3.0: 21.93

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 0.8.2, Throughput Test: Kostya (GB/s, More Is Better): GCC 10.2: 3.72; LLVM Clang 12: 3.71; AMD AOCC 3.0: 3.61; AMD AOCC 2.3: 3.53

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility and using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as the eventual successor to WebP. WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading support compared to WebP. Learn more via the OpenBenchmarking.org test page.

WebP2 Image Encode 20210126, Encode Settings: Quality 100, Lossless Compression (Seconds, Fewer Is Better): LLVM Clang 12: 349.02; AMD AOCC 3.0: 356.82; AMD AOCC 2.3: 357.90; GCC 10.2: 367.37

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.1 - Tuning: Visual Quality Optimized - Input: Bosphorus 1080p (Frames Per Second, more is better)
  AMD AOCC 2.3: 230.19 (SE +/- 0.90, N = 3)
  GCC 10.2: 228.96 (SE +/- 0.68, N = 3)
  AMD AOCC 3.0: 221.59 (SE +/- 0.39, N = 3)
  LLVM Clang 12: 219.12 (SE +/- 0.82, N = 3)
  (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.6 - Model: fcn-resnet101-11 - Device: OpenMP CPU (Inferences Per Minute, more is better)
  LLVM Clang 12: 104 (SE +/- 0.29, N = 3; -fopenmp=libomp)
  AMD AOCC 2.3: 103 (SE +/- 0.44, N = 3; -fopenmp=libomp)
  AMD AOCC 3.0: 102 (SE +/- 0.44, N = 3; -fopenmp=libomp)
  GCC 10.2: 99 (SE +/- 0.17, N = 3; -fopenmp)
  (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -ldl -lrt

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33 - Operation: Enhanced (Iterations Per Minute, more is better)
  AMD AOCC 2.3: 461
  LLVM Clang 12: 457
  AMD AOCC 3.0: 452
  GCC 10.2: 439
  SE +/- 0.33, N = 3 reported for three of the four builds
  (CC) gcc options: -fopenmp -O3 -march=native -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.
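
The benchmark streams a large file through the library at different levels; as a hedged, minimal illustration of the underlying one-shot C API (not the actual benchmark driver), an in-memory round trip looks roughly like this:

    #include <lz4.h>
    #include <cassert>
    #include <string>
    #include <vector>

    int main() {
        std::string src(1 << 20, 'A');                         // 1 MiB of compressible sample data
        int bound = LZ4_compressBound((int)src.size());        // worst-case compressed size
        std::vector<char> compressed(bound);
        int csize = LZ4_compress_default(src.data(), compressed.data(),
                                         (int)src.size(), bound);
        assert(csize > 0);

        std::vector<char> restored(src.size());
        int dsize = LZ4_decompress_safe(compressed.data(), restored.data(),
                                        csize, (int)restored.size());
        assert(dsize == (int)src.size());                      // round trip must match
        return 0;
    }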

LZ4 Compression 1.9.3 - Compression Level: 1 - Decompression Speed (MB/s, more is better)
  GCC 10.2: 13771.1 (SE +/- 38.97, N = 3)
  AMD AOCC 2.3: 13595.3 (SE +/- 59.50, N = 3)
  LLVM Clang 12: 13305.3 (SE +/- 93.71, N = 3)
  AMD AOCC 3.0: 13144.8 (SE +/- 106.53, N = 3)
  (CC) gcc options: -O3

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.
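
The test exercises a Redis server built with each compiler. As a hedged sketch of what a single SET/GET round trip looks like from a client's perspective, here is a minimal hiredis example; the host, port, and key names are illustrative.

    #include <hiredis/hiredis.h>
    #include <cstdio>

    int main() {
        redisContext *c = redisConnect("127.0.0.1", 6379);      // illustrative endpoint
        if (c == nullptr || c->err) { std::fprintf(stderr, "connect failed\n"); return 1; }

        redisReply *reply = (redisReply *)redisCommand(c, "SET %s %s", "key:1", "value");
        freeReplyObject(reply);

        reply = (redisReply *)redisCommand(c, "GET %s", "key:1");
        if (reply && reply->type == REDIS_REPLY_STRING)
            std::printf("GET -> %s\n", reply->str);
        freeReplyObject(reply);

        redisFree(c);
        return 0;
    }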

Redis 6.0.9 - Test: SET (Requests Per Second, more is better)
  LLVM Clang 12: 2762047.50 (SE +/- 28596.87, N = 3)
  AMD AOCC 2.3: 2719539.83 (SE +/- 23132.25, N = 3)
  AMD AOCC 3.0: 2719036.20 (SE +/- 14014.88, N = 3)
  GCC 10.2: 2640316.17 (SE +/- 26145.63, N = 15)
  (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3 -march=native

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.4 - Preset: Exhaustive (Seconds, fewer is better)
  AMD AOCC 2.3: 50.81 (SE +/- 0.07, N = 3)
  AMD AOCC 3.0: 51.45 (SE +/- 0.09, N = 3)
  LLVM Clang 12: 51.66 (SE +/- 0.07, N = 3)
  GCC 10.2: 52.93 (SE +/- 0.09, N = 3)
  (CXX) g++ options: -O3 -march=native -flto -pthread

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

Liquid-DSP 2021.01.31 - Threads: 16 - Buffer Length: 256 - Filter Length: 57 (samples/s, more is better)
  GCC 10.2: 1111200000 (SE +/- 5768882.04, N = 3)
  AMD AOCC 3.0: 1086033333 (SE +/- 3939684.14, N = 3)
  LLVM Clang 12: 1067666667 (SE +/- 3699249.17, N = 3)
  AMD AOCC 2.3: 1067266667 (SE +/- 3628743.28, N = 3)
  (CC) gcc options: -O3 -march=native -pthread -lm -lc -lliquid

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms, fewer is better)
  GCC 10.2: 3.95979 (SE +/- 0.00506, N = 3; -fopenmp; MIN: 3.76)
  AMD AOCC 3.0: 4.09663 (SE +/- 0.01273, N = 3; -fopenmp=libomp; MIN: 3.9)
  LLVM Clang 12: 4.11930 (SE +/- 0.01294, N = 3; -fopenmp=libomp; MIN: 3.88)
  (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -pie -lpthread -ldl

x265

This is a simple test of the x265 encoder run on the CPU with 1080p and 4K options for H.265 video encode performance with x265. Learn more via the OpenBenchmarking.org test page.

x265 3.4 - Video Input: Bosphorus 4K (Frames Per Second, more is better)
  LLVM Clang 12: 28.02 (SE +/- 0.09, N = 3)
  GCC 10.2: 27.83 (SE +/- 0.16, N = 3)
  AMD AOCC 2.3: 27.49 (SE +/- 0.08, N = 3)
  AMD AOCC 3.0: 26.96 (SE +/- 0.12, N = 3)
  (CXX) g++ options: -O3 -march=native -rdynamic -lpthread -lrt -ldl -lnuma

Tachyon

This is a test of the threaded Tachyon, a parallel ray-tracing system, measuring the time to ray-trace a sample scene. Learn more via the OpenBenchmarking.org test page.

Tachyon 0.99b6 - Total Time (Seconds, fewer is better)
  GCC 10.2: 44.39 (SE +/- 0.13, N = 3)
  AMD AOCC 3.0: 45.00 (SE +/- 0.14, N = 3)
  LLVM Clang 12: 45.07 (SE +/- 0.09, N = 3)
  AMD AOCC 2.3: 46.13 (SE +/- 0.20, N = 3)
  (CC) gcc options: -m64 -O3 -fomit-frame-pointer -ffast-math -ltachyon -lm -lpthread

x265

This is a simple test of the x265 encoder run on the CPU with 1080p and 4K options for H.265 video encode performance with x265. Learn more via the OpenBenchmarking.org test page.

x265 3.4 - Video Input: Bosphorus 1080p (Frames Per Second, more is better)
  LLVM Clang 12: 92.16 (SE +/- 0.13, N = 3)
  GCC 10.2: 89.80 (SE +/- 0.19, N = 3)
  AMD AOCC 2.3: 89.74 (SE +/- 0.28, N = 3)
  AMD AOCC 3.0: 88.70 (SE +/- 0.35, N = 3)
  (CXX) g++ options: -O3 -march=native -rdynamic -lpthread -lrt -ldl -lnuma

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.2.3 - Target: CPU - Model: SqueezeNet v1.1 (ms, fewer is better)
  AMD AOCC 2.3: 203.64 (SE +/- 0.85, N = 3; -fopenmp=libomp; MIN: 201.91 / MAX: 206.13)
  AMD AOCC 3.0: 204.60 (SE +/- 0.48, N = 3; -fopenmp=libomp; MIN: 203.72 / MAX: 206.33)
  LLVM Clang 12: 206.36 (SE +/- 1.02, N = 3; -fopenmp=libomp; MIN: 204.24 / MAX: 209.24)
  GCC 10.2: 211.57 (SE +/- 0.57, N = 3; -fopenmp; MIN: 206.88 / MAX: 212.83)
  (CXX) g++ options: -O3 -march=native -pthread -fvisibility=hidden -rdynamic -ldl

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.

GNU Radio - Test: Hilbert Transform (MiB/s, more is better)
  AMD AOCC 2.3: 534.8 (SE +/- 1.95, N = 9)
  AMD AOCC 3.0: 523.6 (SE +/- 1.23, N = 8)
  LLVM Clang 12: 522.8 (SE +/- 0.58, N = 9)
  GCC 10.2: 515.8 (SE +/- 0.63, N = 9)
  GNU Radio version: 3.8.1.0

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, fewer is better)
  GCC 10.2: 9.25967 (SE +/- 0.01340, N = 3; -fopenmp; MIN: 9.1)
  AMD AOCC 3.0: 9.57194 (SE +/- 0.01936, N = 3; -fopenmp=libomp; MIN: 9.46)
  LLVM Clang 12: 9.59442 (SE +/- 0.01452, N = 3; -fopenmp=libomp; MIN: 9.47)
  (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -pie -lpthread -ldl

SQLite Speedtest

This is a benchmark of SQLite's speedtest1 benchmark program with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.
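
speedtest1 exercises a broad mix of SQLite operations; the hedged sketch below only illustrates the general shape of such work (bulk inserts inside a transaction via the C API) with made-up table and column names, not the actual speedtest1 workload.

    #include <sqlite3.h>
    #include <cstdio>

    int main() {
        sqlite3 *db = nullptr;
        if (sqlite3_open(":memory:", &db) != SQLITE_OK) return 1;

        char *err = nullptr;
        sqlite3_exec(db, "CREATE TABLE t1(a INTEGER, b TEXT);", nullptr, nullptr, &err);
        sqlite3_exec(db, "BEGIN;", nullptr, nullptr, &err);      // batch the inserts in one transaction
        for (int i = 0; i < 100000; ++i) {
            char sql[128];
            std::snprintf(sql, sizeof(sql),
                          "INSERT INTO t1 VALUES(%d, 'row-%d');", i, i);
            sqlite3_exec(db, sql, nullptr, nullptr, &err);
        }
        sqlite3_exec(db, "COMMIT;", nullptr, nullptr, &err);
        sqlite3_close(db);
        return 0;
    }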

SQLite Speedtest 3.30 - Timed Time - Size 1,000 (Seconds, fewer is better)
  GCC 10.2: 42.60 (SE +/- 0.13, N = 3)
  LLVM Clang 12: 43.26 (SE +/- 0.13, N = 3)
  AMD AOCC 2.3: 44.11 (SE +/- 0.03, N = 3)
  (CC) gcc options: -O3 -march=native -ldl -lz -lpthread

Ngspice

Ngspice is an open-source SPICE circuit simulator. Ngspice was originally based on the Berkeley SPICE electronic circuit simulator. Ngspice supports basic threading using OpenMP. This test profile is making use of the ISCAS 85 benchmark circuits. Learn more via the OpenBenchmarking.org test page.

Ngspice 34 - Circuit: C7552 (Seconds, fewer is better)
  GCC 10.2: 62.82 (SE +/- 0.15, N = 3)
  LLVM Clang 12: 64.52 (SE +/- 0.20, N = 3)
  AMD AOCC 2.3: 64.89 (SE +/- 0.54, N = 3)
  AMD AOCC 3.0: 64.91 (SE +/- 0.08, N = 3)
  (CC) gcc options: -O3 -march=native -fopenmp -lm -lstdc++ -lfftw3 -lXaw -lXmu -lXt -lXext -lX11 -lSM -lICE

Timed MrBayes Analysis

This test performs a bayesian analysis of a set of primate genome sequences in order to estimate their phylogeny. Learn more via the OpenBenchmarking.org test page.

Timed MrBayes Analysis 3.2.7 - Primate Phylogeny Analysis (Seconds, fewer is better)
  AMD AOCC 3.0: 57.99 (SE +/- 0.10, N = 3)
  AMD AOCC 2.3: 59.07 (SE +/- 0.10, N = 3)
  LLVM Clang 12: 59.23 (SE +/- 0.81, N = 3)
  GCC 10.2: 59.87 (SE +/- 0.15, N = 3)
  -mabm reported for one of the builds
  (CC) gcc options: -mmmx -msse -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msse4a -msha -maes -mavx -mfma -mavx2 -mrdrnd -mbmi -mbmi2 -madx -O3 -std=c99 -pedantic -march=native -lm -lreadline

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 - Compression Level: 9 - Decompression Speed (MB/s, more is better)
  GCC 10.2: 13397.7 (SE +/- 35.65, N = 6)
  AMD AOCC 2.3: 13188.4 (SE +/- 53.22, N = 3)
  LLVM Clang 12: 13129.4 (SE +/- 21.95, N = 3)
  AMD AOCC 3.0: 12981.8 (SE +/- 24.92, N = 6)
  (CC) gcc options: -O3

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.

Redis 6.0.9 - Test: SADD (Requests Per Second, more is better)
  GCC 10.2: 3041527.37 (SE +/- 39730.96, N = 15)
  AMD AOCC 2.3: 2961165.67 (SE +/- 40118.44, N = 3)
  LLVM Clang 12: 2954866.80 (SE +/- 29853.73, N = 3)
  AMD AOCC 3.0: 2948093.68 (SE +/- 27502.49, N = 15)
  (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3 -march=native

RNNoise

RNNoise is a recurrent neural network for audio noise reduction developed by Mozilla and Xiph.Org. This test profile is a single-threaded test measuring the time to denoise a sample 26 minute long 16-bit RAW audio file using this recurrent neural network noise suppression library. Learn more via the OpenBenchmarking.org test page.

RNNoise 2020-06-28 (Seconds, fewer is better)
  AMD AOCC 2.3: 14.04 (SE +/- 0.17, N = 3)
  GCC 10.2: 14.20 (SE +/- 0.04, N = 3)
  LLVM Clang 12: 14.33 (SE +/- 0.17, N = 3)
  AMD AOCC 3.0: 14.47 (SE +/- 0.16, N = 3)
  (CC) gcc options: -O3 -march=native -pedantic -fvisibility=hidden

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 - Compression Level: 3 - Decompression Speed (MB/s, more is better)
  GCC 10.2: 13400.1 (SE +/- 48.22, N = 3)
  AMD AOCC 2.3: 13212.6 (SE +/- 15.17, N = 5)
  LLVM Clang 12: 13082.4 (SE +/- 46.92, N = 3)
  AMD AOCC 3.0: 13010.7 (SE +/- 30.75, N = 3)
  (CC) gcc options: -O3

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: alexnet (ms, fewer is better)
  GCC 10.2: 10.82 (SE +/- 0.09, N = 15; -lgomp; MIN: 10.41 / MAX: 17.59)
  AMD AOCC 2.3: 11.01 (SE +/- 0.00, N = 2; -lomp; MIN: 10.84 / MAX: 12.26)
  AMD AOCC 3.0: 11.11 (SE +/- 0.04, N = 4; -lomp; MIN: 10.92 / MAX: 15.76)
  LLVM Clang 12: 11.14 (SE +/- 0.04, N = 3; -lomp; MIN: 10.96 / MAX: 13.28)
  (CXX) g++ options: -O3 -march=native -rdynamic -lpthread

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 - Compression Level: 1 - Compression Speed (MB/s, more is better)
  AMD AOCC 2.3: 12456.17 (SE +/- 57.72, N = 3)
  GCC 10.2: 12330.56 (SE +/- 76.55, N = 3)
  LLVM Clang 12: 12227.47 (SE +/- 66.76, N = 3)
  AMD AOCC 3.0: 12124.81 (SE +/- 49.14, N = 3)
  (CC) gcc options: -O3

Gcrypt Library

Libgcrypt is a general purpose cryptographic library developed as part of the GnuPG project. This is a benchmark of libgcrypt's integrated benchmark, measuring the time to run the benchmark command with a cipher/mac/hash repetition count set to 50 for a simple, high-level look at the overall crypto performance of the system under test. Learn more via the OpenBenchmarking.org test page.
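
As a hedged illustration of the kind of primitive the libgcrypt benchmark loops over, the snippet below computes a single SHA-256 digest with gcry_md_hash_buffer; the message contents are illustrative.

    #include <gcrypt.h>
    #include <cstdio>
    #include <cstring>

    int main() {
        if (!gcry_check_version(nullptr)) return 1;           // initialize the library
        gcry_control(GCRYCTL_INITIALIZATION_FINISHED, 0);

        const char *msg = "benchmark payload";                // illustrative input
        unsigned char digest[32];                             // SHA-256 output size
        gcry_md_hash_buffer(GCRY_MD_SHA256, digest, msg, std::strlen(msg));

        for (unsigned char b : digest) std::printf("%02x", b);
        std::printf("\n");
        return 0;
    }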

Gcrypt Library 1.9 (Seconds, fewer is better)
  GCC 10.2: 171.19 (SE +/- 0.29, N = 3)
  LLVM Clang 12: 172.90 (SE +/- 1.10, N = 3)
  AMD AOCC 2.3: 173.31 (SE +/- 1.67, N = 3)
  AMD AOCC 3.0: 175.69 (SE +/- 0.17, N = 3)
  (CC) gcc options: -O3 -march=native -fvisibility=hidden -lgpg-error

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
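
For reference, a minimal sketch of libwebp's one-shot encode API is shown below; cwebp itself layers JPEG decoding and the quality/lossless settings this profile varies on top of this, and the image dimensions and quality factor here are illustrative.

    #include <webp/encode.h>
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    int main() {
        const int width = 640, height = 480;
        std::vector<uint8_t> rgb(width * height * 3, 127);    // flat gray stand-in image

        uint8_t *output = nullptr;
        size_t size = WebPEncodeRGB(rgb.data(), width, height,
                                    width * 3 /* stride */, 90.0f /* quality */, &output);
        if (size == 0) return 1;
        std::printf("encoded %zu bytes\n", size);
        WebPFree(output);                                      // release the encoder-allocated buffer
        return 0;
    }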

WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless (Encode Time in Seconds, fewer is better)
  AMD AOCC 2.3: 13.64 (SE +/- 0.11, N = 3)
  AMD AOCC 3.0: 13.84 (SE +/- 0.05, N = 3)
  LLVM Clang 12: 13.91 (SE +/- 0.12, N = 3)
  GCC 10.2: 13.99 (SE +/- 0.11, N = 3)
  (CC) gcc options: -fvisibility=hidden -O3 -march=native -pthread -lm -ljpeg -lpng16 -ltiff

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, fewer is better)
  GCC 10.2: 3.55467 (SE +/- 0.00753, N = 3; -fopenmp; MIN: 3.46)
  AMD AOCC 3.0: 3.58364 (SE +/- 0.00604, N = 3; -fopenmp=libomp; MIN: 3.44)
  LLVM Clang 12: 3.64485 (SE +/- 0.01444, N = 3; -fopenmp=libomp; MIN: 3.5)
  (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -pie -lpthread -ldl

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: vgg16 (ms, fewer is better)
  LLVM Clang 12: 57.51 (SE +/- 0.17, N = 3; -lomp; MIN: 56.17 / MAX: 62.53)
  GCC 10.2: 57.89 (SE +/- 0.12, N = 15; -lgomp; MIN: 55.89 / MAX: 80.86)
  AMD AOCC 2.3: 58.11 (SE +/- 0.01, N = 3; -lomp; MIN: 56.81 / MAX: 67.53)
  AMD AOCC 3.0: 58.96 (SE +/- 0.19, N = 4; -lomp; MIN: 57.64 / MAX: 66.68)
  (CXX) g++ options: -O3 -march=native -rdynamic -lpthread

Crypto++

Crypto++ is a C++ class library of cryptographic algorithms. Learn more via the OpenBenchmarking.org test page.
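
As a hedged example of one "unkeyed algorithm" covered by this test, the snippet below hashes a message with SHA-256 through Crypto++'s pipeline classes; the message text is illustrative.

    #include <cryptopp/sha.h>
    #include <cryptopp/hex.h>
    #include <cryptopp/filters.h>
    #include <iostream>
    #include <string>

    int main() {
        using namespace CryptoPP;
        const std::string message = "benchmark payload";   // illustrative input
        std::string digestHex;

        SHA256 hash;
        // The pipeline hashes the message and hex-encodes the digest into digestHex;
        // each filter takes ownership of the next stage in the chain.
        StringSource ss(message, true,
            new HashFilter(hash,
                new HexEncoder(
                    new StringSink(digestHex))));

        std::cout << "SHA-256: " << digestHex << std::endl;
        return 0;
    }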

Crypto++ 8.2 - Test: Unkeyed Algorithms (MiB/second, more is better)
  LLVM Clang 12: 552.38 (SE +/- 1.73, N = 3)
  AMD AOCC 2.3: 550.46 (SE +/- 1.69, N = 3)
  GCC 10.2: 545.91 (SE +/- 3.29, N = 15)
  AMD AOCC 3.0: 538.88 (SE +/- 2.13, N = 3)
  (CXX) g++ options: -O3 -march=native -fPIC -pthread -pipe

OpenFOAM

OpenFOAM is the leading free, open source software for computational fluid dynamics (CFD). Learn more via the OpenBenchmarking.org test page.

OpenFOAM 8 - Input: Motorbike 30M (Seconds, fewer is better)
  GCC 10.2: 97.75 (SE +/- 0.08, N = 3)
  LLVM Clang 12: 100.16 (SE +/- 0.06, N = 3)
  (CXX) g++ options: -std=c++11 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) run on the CPU with a sample 1080p video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.1-rc - Encoder Mode: Speed 8 Realtime (Frames Per Second, more is better)
  GCC 10.2: 121.13 (SE +/- 0.75, N = 3)
  LLVM Clang 12: 118.22 (SE +/- 1.02, N = 3)
  (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

Ngspice

Ngspice is an open-source SPICE circuit simulator. Ngspice was originally based on the Berkeley SPICE electronic circuit simulator. Ngspice supports basic threading using OpenMP. This test profile is making use of the ISCAS 85 benchmark circuits. Learn more via the OpenBenchmarking.org test page.

Ngspice 34 - Circuit: C2670 (Seconds, fewer is better)
  GCC 10.2: 71.60 (SE +/- 0.21, N = 3)
  LLVM Clang 12: 72.46 (SE +/- 0.12, N = 3)
  AMD AOCC 2.3: 72.78 (SE +/- 0.06, N = 3)
  AMD AOCC 3.0: 73.28 (SE +/- 0.13, N = 3)
  (CC) gcc options: -O3 -march=native -fopenmp -lm -lstdc++ -lfftw3 -lXaw -lXmu -lXt -lXext -lX11 -lSM -lICE

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.

GNU Radio - Test: Signal Source (Cosine) (MiB/s, more is better)
  LLVM Clang 12: 4769.8 (SE +/- 10.26, N = 9)
  GCC 10.2: 4715.4 (SE +/- 16.39, N = 9)
  AMD AOCC 3.0: 4704.7 (SE +/- 60.32, N = 8)
  AMD AOCC 2.3: 4661.6 (SE +/- 22.36, N = 9)
  GNU Radio version: 3.8.1.0

x264

This is a simple test of the x264 encoder run on the CPU (OpenCL support disabled) with a sample video file. Learn more via the OpenBenchmarking.org test page.

x264 2019-12-17 - H.264 Video Encoding (Frames Per Second, more is better)
  LLVM Clang 12: 213.57 (SE +/- 1.62, N = 12)
  AMD AOCC 2.3: 210.72 (SE +/- 1.86, N = 8)
  AMD AOCC 3.0: 210.35 (SE +/- 1.75, N = 9)
  GCC 10.2: 208.93 (SE +/- 1.66, N = 9)
  -mstack-alignment=64 reported for the Clang-based builds
  (CC) gcc options: -ldl -lavformat -lavcodec -lavutil -lswscale -m64 -lm -lpthread -O3 -ffast-math -march=native -std=gnu99 -fPIC -fomit-frame-pointer -fno-tree-vectorize

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless, Highest Compression (Encode Time in Seconds, fewer is better)
  AMD AOCC 3.0: 28.19 (SE +/- 0.04, N = 3)
  AMD AOCC 2.3: 28.43 (SE +/- 0.06, N = 3)
  LLVM Clang 12: 28.67 (SE +/- 0.03, N = 3)
  GCC 10.2: 28.81 (SE +/- 0.08, N = 3)
  (CC) gcc options: -fvisibility=hidden -O3 -march=native -pthread -lm -ljpeg -lpng16 -ltiff

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
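
As a hedged, minimal illustration of the Zstandard simple API (the benchmark itself works on a FreeBSD disk image at the listed levels and settings), an in-memory round trip at level 19 looks roughly like this:

    #include <zstd.h>
    #include <cassert>
    #include <string>
    #include <vector>

    int main() {
        std::string src(1 << 20, 'z');                               // 1 MiB sample buffer
        size_t bound = ZSTD_compressBound(src.size());               // worst-case compressed size
        std::vector<char> compressed(bound);

        size_t csize = ZSTD_compress(compressed.data(), bound,
                                     src.data(), src.size(), 19);    // level 19, as in this result
        assert(!ZSTD_isError(csize));

        std::vector<char> restored(src.size());
        size_t dsize = ZSTD_decompress(restored.data(), restored.size(),
                                       compressed.data(), csize);
        assert(!ZSTD_isError(dsize) && dsize == src.size());          // round trip must match
        return 0;
    }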

Zstd Compression 1.4.9 - Compression Level: 19 - Compression Speed (MB/s, more is better)
  GCC 10.2: 51.6 (SE +/- 0.20, N = 3)
  LLVM Clang 12: 51.3 (SE +/- 0.03, N = 3)
  AMD AOCC 2.3: 50.7 (SE +/- 0.07, N = 3)
  AMD AOCC 3.0: 50.5 (SE +/- 0.06, N = 3)
  (CC) gcc options: -O3 -march=native -pthread -lz -llzma

JPEG XL

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.

JPEG XL 0.3.3 - Input: PNG - Encode Speed: 7 (MP/s, more is better)
  LLVM Clang 12: 11.41 (SE +/- 0.06, N = 3)
  AMD AOCC 2.3: 11.31 (SE +/- 0.03, N = 3)
  GCC 10.2: 11.20 (SE +/- 0.03, N = 3)
  AMD AOCC 3.0: 11.17 (SE +/- 0.06, N = 3)
  -Xclang -mrelax-all reported for the Clang-based builds
  (CXX) g++ options: -O3 -march=native -funwind-tables -O2 -pthread -fPIE -pie -ldl

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.

GNU Radio - Test: IIR Filter (MiB/s, more is better)
  AMD AOCC 2.3: 853.3 (SE +/- 2.72, N = 9)
  GCC 10.2: 843.1 (SE +/- 1.32, N = 9)
  AMD AOCC 3.0: 838.3 (SE +/- 2.72, N = 8)
  LLVM Clang 12: 835.4 (SE +/- 1.22, N = 9)
  GNU Radio version: 3.8.1.0

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100 (Encode Time in Seconds, fewer is better)
  LLVM Clang 12: 1.638 (SE +/- 0.003, N = 3)
  GCC 10.2: 1.652 (SE +/- 0.018, N = 4)
  AMD AOCC 3.0: 1.669 (SE +/- 0.011, N = 3)
  AMD AOCC 2.3: 1.673 (SE +/- 0.006, N = 3)
  (CC) gcc options: -fvisibility=hidden -O3 -march=native -pthread -lm -ljpeg -lpng16 -ltiff

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.8.2 - Video Input: Summer Nature 1080p (FPS, more is better)
  LLVM Clang 12: 979.34 (SE +/- 2.97, N = 3; MIN: 717.55 / MAX: 1062.34)
  AMD AOCC 2.3: 976.93 (SE +/- 9.00, N = 3; MIN: 633.01 / MAX: 1069.88)
  GCC 10.2: 971.79 (SE +/- 1.38, N = 3; -lm; MIN: 732.02 / MAX: 1055.82)
  AMD AOCC 3.0: 959.29 (SE +/- 1.29, N = 3; MIN: 714.89 / MAX: 1039.77)
  (CC) gcc options: -O3 -march=native -pthread

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) run on the CPU with a sample 1080p video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.1-rc - Encoder Mode: Speed 6 Two-Pass (Frames Per Second, more is better)
  LLVM Clang 12: 30.03 (SE +/- 0.13, N = 3)
  GCC 10.2: 29.43 (SE +/- 0.26, N = 3)
  (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

Opus Codec Encoding

Opus is an open audio codec. Opus is a lossy audio compression format designed primarily for interactive real-time applications over the Internet. This test uses Opus-Tools and measures the time required to encode a WAV file to Opus. Learn more via the OpenBenchmarking.org test page.

Opus Codec Encoding 1.3.1 - WAV To Opus Encode (Seconds, fewer is better)
  GCC 10.2: 5.484 (SE +/- 0.031, N = 5)
  LLVM Clang 12: 5.589 (SE +/- 0.037, N = 5)
  -fvisibility=hidden reported for one of the builds
  (CXX) g++ options: -O3 -march=native -logg -lm

WavPack Audio Encoding

This test times how long it takes to encode a sample WAV file to WavPack format with very high quality settings. Learn more via the OpenBenchmarking.org test page.

WavPack Audio Encoding 5.3 - WAV To WavPack (Seconds, fewer is better)
  GCC 10.2: 10.15 (SE +/- 0.10, N = 5)
  AMD AOCC 2.3: 10.29 (SE +/- 0.03, N = 5)
  LLVM Clang 12: 10.31 (SE +/- 0.11, N = 5)
  AMD AOCC 3.0: 10.34 (SE +/- 0.01, N = 5)
  (CXX) g++ options: -O3 -march=native -rdynamic

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.

GNU Radio - Test: FIR Filter (MiB/s, more is better)
  AMD AOCC 2.3: 1080.3 (SE +/- 4.68, N = 9)
  AMD AOCC 3.0: 1065.4 (SE +/- 3.65, N = 8)
  GCC 10.2: 1063.5 (SE +/- 2.09, N = 9)
  LLVM Clang 12: 1060.1 (SE +/- 3.22, N = 9)
  GNU Radio version: 3.8.1.0

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: resnet18 (ms, fewer is better)
  AMD AOCC 3.0: 13.86 (SE +/- 0.11, N = 4; -lomp; MIN: 13.44 / MAX: 16.35)
  LLVM Clang 12: 13.91 (SE +/- 0.08, N = 3; -lomp; MIN: 13.66 / MAX: 14.54)
  AMD AOCC 2.3: 14.08 (SE +/- 0.21, N = 3; -lomp; MIN: 13.56 / MAX: 21.13)
  GCC 10.2: 14.11 (SE +/- 0.05, N = 15; -lgomp; MIN: 13.84 / MAX: 23.15)
  (CXX) g++ options: -O3 -march=native -rdynamic -lpthread

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, fewer is better)
  AMD AOCC 3.0: 1760.57 (SE +/- 3.74, N = 3; -fopenmp=libomp; MIN: 1745.87)
  GCC 10.2: 1773.67 (SE +/- 5.00, N = 3; -fopenmp; MIN: 1750.26)
  LLVM Clang 12: 1792.27 (SE +/- 9.12, N = 3; -fopenmp=libomp; MIN: 1766.32)
  (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -pie -lpthread -ldl

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.9 - Compression Level: 8, Long Mode - Decompression Speed (MB/s, more is better)
  GCC 10.2: 4886.2
  AMD AOCC 2.3: 4805.6
  SE +/- 29.99, N = 3 reported for one of the two results
  (CC) gcc options: -O3 -march=native -pthread -lz -llzma

JPEG XL

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.

JPEG XL 0.3.3 - Input: PNG - Encode Speed: 5 (MP/s, more is better)
  LLVM Clang 12: 74.77 (SE +/- 0.04, N = 3)
  AMD AOCC 2.3: 74.61 (SE +/- 0.11, N = 3)
  GCC 10.2: 74.12 (SE +/- 0.03, N = 3)
  AMD AOCC 3.0: 73.64 (SE +/- 0.15, N = 3)
  -Xclang -mrelax-all reported for the Clang-based builds
  (CXX) g++ options: -O3 -march=native -funwind-tables -O2 -pthread -fPIE -pie -ldl

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.9.0 - Encoder Speed: 10, Lossless (Seconds, fewer is better)
  LLVM Clang 12: 4.807 (SE +/- 0.038, N = 3)
  AMD AOCC 2.3: 4.832 (SE +/- 0.041, N = 3)
  AMD AOCC 3.0: 4.837 (SE +/- 0.015, N = 3)
  GCC 10.2: 4.875 (SE +/- 0.022, N = 3)
  (CXX) g++ options: -O3 -fPIC -lm

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.9 - Compression Level: 19, Long Mode - Compression Speed (MB/s, more is better)
  LLVM Clang 12: 36.8 (SE +/- 0.03, N = 3)
  AMD AOCC 2.3: 36.7 (SE +/- 0.03, N = 3)
  GCC 10.2: 36.6 (SE +/- 0.03, N = 3)
  AMD AOCC 3.0: 36.4 (SE +/- 0.00, N = 3)
  (CC) gcc options: -O3 -march=native -pthread -lz -llzma

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.13 - Settings: UASTC Level 2 (Seconds, fewer is better)
  GCC 10.2: 15.90 (SE +/- 0.05, N = 3)
  LLVM Clang 12: 15.91 (SE +/- 0.06, N = 3)
  AMD AOCC 2.3: 15.98 (SE +/- 0.02, N = 3)
  AMD AOCC 3.0: 16.06 (SE +/- 0.02, N = 3)
  (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

Timed Godot Game Engine Compilation

This test times how long it takes to compile the Godot Game Engine. Godot is a popular, open-source, cross-platform 2D/3D game engine and is built using the SCons build system and targeting the X11 platform. Learn more via the OpenBenchmarking.org test page.

Timed Godot Game Engine Compilation 3.2.3 - Time To Compile (Seconds, fewer is better)
  GCC 10.2: 79.52 (SE +/- 0.19, N = 3)
  AMD AOCC 3.0: 79.87 (SE +/- 0.16, N = 3)
  AMD AOCC 2.3: 79.94 (SE +/- 0.28, N = 3)
  LLVM Clang 12: 80.21 (SE +/- 0.09, N = 3)

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms, fewer is better)
  GCC 10.2: 0.638664 (SE +/- 0.000722, N = 3; -fopenmp; MIN: 0.61)
  LLVM Clang 12: 0.641231 (SE +/- 0.000908, N = 3; -fopenmp=libomp; MIN: 0.61)
  AMD AOCC 3.0: 0.643093 (SE +/- 0.004823, N = 3; -fopenmp=libomp; MIN: 0.61)
  (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -pie -lpthread -ldl

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.9.0 - Encoder Speed: 10 (Seconds, fewer is better)
  AMD AOCC 2.3: 2.933 (SE +/- 0.035, N = 3)
  GCC 10.2: 2.934 (SE +/- 0.014, N = 3)
  AMD AOCC 3.0: 2.941 (SE +/- 0.006, N = 3)
  LLVM Clang 12: 2.952 (SE +/- 0.016, N = 3)
  (CXX) g++ options: -O3 -fPIC -lm

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.

GNU Radio - Test: FM Deemphasis Filter (MiB/s, more is better)
  AMD AOCC 2.3: 1061.0 (SE +/- 15.16, N = 9)
  AMD AOCC 3.0: 1055.8 (SE +/- 3.09, N = 8)
  GCC 10.2: 1055.0 (SE +/- 0.78, N = 9)
  LLVM Clang 12: 1054.9 (SE +/- 0.98, N = 9)
  GNU Radio version: 3.8.1.0

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, fewer is better)
  GCC 10.2: 17.29 (SE +/- 0.09, N = 3; -fopenmp; MIN: 16.58)
  LLVM Clang 12: 17.34 (SE +/- 0.01, N = 3; -fopenmp=libomp; MIN: 16.81)
  AMD AOCC 3.0: 17.36 (SE +/- 0.03, N = 3; -fopenmp=libomp; MIN: 16.83)
  (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -pie -lpthread -ldl

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.13 - Settings: UASTC Level 3 (Seconds, fewer is better)
  GCC 10.2: 28.13 (SE +/- 0.04, N = 3)
  AMD AOCC 2.3: 28.15 (SE +/- 0.05, N = 3)
  AMD AOCC 3.0: 28.17 (SE +/- 0.03, N = 3)
  LLVM Clang 12: 28.22 (SE +/- 0.04, N = 3)
  (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, fewer is better)
  GCC 10.2: 2757.52 (SE +/- 2.01, N = 3; -fopenmp; MIN: 2719.35)
  LLVM Clang 12: 2757.75 (SE +/- 5.95, N = 3; -fopenmp=libomp; MIN: 2734.73)
  AMD AOCC 3.0: 2760.94 (SE +/- 17.19, N = 3; -fopenmp=libomp; MIN: 2717.59)
  (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -pie -lpthread -ldl

Mobile Neural Network

MNN is the Mobile Neural Network as a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.1.3 - GCC 10.2 only (ms, fewer is better)
  Model: inception-v3: 32.34 (SE +/- 0.09, N = 3; MIN: 31.33 / MAX: 42.61)
  Model: mobilenet-v1-1.0: 2.351 (SE +/- 0.027, N = 3; MIN: 2.27 / MAX: 7.49)
  Model: MobileNetV2_224: 3.240 (SE +/- 0.049, N = 3; MIN: 3.12 / MAX: 11.31)
  Model: resnet-v2-50: 25.07 (SE +/- 0.02, N = 3; MIN: 23.97 / MAX: 39.95)
  Model: SqueezeNetV1.0: 5.081 (SE +/- 0.010, N = 3; MIN: 4.92 / MAX: 14.74)
  (CXX) g++ options: -O3 -march=native -std=c++11 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Smallpt

Smallpt is a C++ global illumination renderer written in less than 100 lines of code. Global illumination is done via unbiased Monte Carlo path tracing and there is multi-threading support via the OpenMP library. Learn more via the OpenBenchmarking.org test page.

Smallpt 1.0 - Global Illumination Renderer; 128 Samples (Seconds, fewer is better)
  GCC 10.2: 4.674 (SE +/- 0.015, N = 3)
  (CXX) g++ options: -fopenmp -O3 -march=native

Crafty

This is a performance test of Crafty, an advanced open-source chess engine. Learn more via the OpenBenchmarking.org test page.

Crafty 25.2 - Elapsed Time (Nodes Per Second, more is better)
  GCC 10.2: 11731249 (SE +/- 26371.45, N = 3)
  (CC) gcc options: -pthread -lstdc++ -fprofile-use -lm

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.6 - Model: super-resolution-10 - Device: OpenMP CPU (Inferences Per Minute, more is better)
  GCC 10.2: 6721 (SE +/- 215.50, N = 12; -fopenmp)
  AMD AOCC 2.3: 6067 (SE +/- 34.74, N = 3; -fopenmp=libomp)
  AMD AOCC 3.0: 5976 (SE +/- 38.28, N = 3; -fopenmp=libomp)
  LLVM Clang 12: 5937 (SE +/- 55.09, N = 12; -fopenmp=libomp)
  (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -ldl -lrt

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.

Redis 6.0.9 - Test: GET (Requests Per Second, more is better)
  AMD AOCC 2.3: 3658044.77 (SE +/- 58906.79, N = 15)
  LLVM Clang 12: 3624414.37 (SE +/- 47796.61, N = 15)
  AMD AOCC 3.0: 3545388.92 (SE +/- 11517.61, N = 3)
  GCC 10.2: 3470419.90 (SE +/- 36718.95, N = 15)
  (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3 -march=native

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, fewer is better)
  LLVM Clang 12: 2.46561 (SE +/- 0.00451, N = 3; -fopenmp=libomp; MIN: 2.33)
  AMD AOCC 3.0: 2.46850 (SE +/- 0.00340, N = 3; -fopenmp=libomp; MIN: 2.36)
  GCC 10.2: 4.46777 (SE +/- 0.30276, N = 15; -fopenmp; MIN: 2.86)
  (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -pie -lpthread -ldl

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.

GNU Radio - Test: Five Back to Back FIR Filters (MiB/s, more is better)
  AMD AOCC 2.3: 931.6 (SE +/- 20.04, N = 9)
  AMD AOCC 3.0: 929.1 (SE +/- 20.61, N = 8)
  GCC 10.2: 920.8 (SE +/- 19.67, N = 9)
  LLVM Clang 12: 911.2 (SE +/- 17.91, N = 9)
  GNU Radio version: 3.8.1.0

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.9 - Compression Level: 19 - Decompression Speed (MB/s, more is better)
  GCC 10.2: 4251.7 (SE +/- 6.53, N = 3)
  AMD AOCC 2.3: 4097.3 (SE +/- 12.18, N = 3)
  LLVM Clang 12: 4000.9 (SE +/- 50.52, N = 3)
  AMD AOCC 3.0: 3608.8 (SE +/- 417.40, N = 3)
  (CC) gcc options: -O3 -march=native -pthread -lz -llzma

147 Results Shown

Sysbench
Etcpak
Timed LLVM Compilation
C-Ray
GraphicsMagick
LibRaw
NCNN
GraphicsMagick
ASTC Encoder
Etcpak
SVT-AV1
GraphicsMagick
NCNN:
  CPU-v2-v2 - mobilenet-v2
  CPU-v3-v3 - mobilenet-v3
TNN
NCNN
Ogg Audio Encoding
ASTC Encoder
Google SynthMark
Zstd Compression
GraphicsMagick
NCNN
TSCP
NCNN
QuantLib
GraphicsMagick
Etcpak
Liquid-DSP
ONNX Runtime
JPEG XL Decoding
WebP Image Encode
AOM AV1
NCNN
SVT-AV1
libavif avifenc
NCNN
JPEG XL Decoding
NCNN
JPEG XL
LZ4 Compression
NCNN
Zstd Compression
simdjson
Zstd Compression
JPEG XL
simdjson
Basis Universal
simdjson
ONNX Runtime
Liquid-DSP
POV-Ray
WebP2 Image Encode:
  Quality 75, Compression Effort 7
  Quality 95, Compression Effort 7
  Quality 100, Compression Effort 5
libavif avifenc
GraphicsMagick
Basis Universal
libavif avifenc
JPEG XL
Zstd Compression
libavif avifenc
NCNN
JPEG XL
AOM AV1
dav1d
WebP Image Encode
WebP2 Image Encode
SVT-VP9
Zstd Compression
Redis:
  LPUSH
  LPOP
Zstd Compression
LZ4 Compression
AOM AV1
ONNX Runtime
NCNN
simdjson
WebP2 Image Encode
SVT-VP9
ONNX Runtime
GraphicsMagick
LZ4 Compression
Redis
ASTC Encoder
Liquid-DSP
oneDNN
x265
Tachyon
x265
TNN
GNU Radio
oneDNN
SQLite Speedtest
Ngspice
Timed MrBayes Analysis
LZ4 Compression
Redis
RNNoise
LZ4 Compression
NCNN
LZ4 Compression
Gcrypt Library
WebP Image Encode
oneDNN
NCNN
Crypto++
OpenFOAM
AOM AV1
Ngspice
GNU Radio
x264
WebP Image Encode
Zstd Compression
JPEG XL
GNU Radio
WebP Image Encode
dav1d
AOM AV1
Opus Codec Encoding
WavPack Audio Encoding
GNU Radio
NCNN
oneDNN
Zstd Compression
JPEG XL
libavif avifenc
Zstd Compression
Basis Universal
Timed Godot Game Engine Compilation
oneDNN
libavif avifenc
GNU Radio
oneDNN
Basis Universal
oneDNN
Mobile Neural Network:
  inception-v3
  mobilenet-v1-1.0
  MobileNetV2_224
  resnet-v2-50
  SqueezeNetV1.0
Smallpt
Crafty
ONNX Runtime
Redis
oneDNN
GNU Radio
Zstd Compression