Ryzen 9 5950X AOCC 3.0 Compiler Benchmarking

Benchmarks for a future article.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2103167-PTS-RYZEN95988
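A minimal sketch of such a comparison run on a Debian/Ubuntu system (the package-manager install step is an assumption; any supported Phoronix Test Suite install works the same way):

    sudo apt-get install phoronix-test-suite
    phoronix-test-suite benchmark 2103167-PTS-RYZEN95988   # re-runs these tests and merges your results into this comparison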
Run Management

Highlight
Result
Toggle/Hide
Result
Result
Identifier
Performance Per
Dollar
Date
Run
  Test
  Duration
GCC 10.2
March 14 2021
  14 Hours, 26 Minutes
LLVM Clang 12
March 15 2021
  11 Hours, 8 Minutes
AMD AOCC 2.3
March 14 2021
  10 Hours, 49 Minutes
AMD AOCC 3.0
March 15 2021
  10 Hours, 54 Minutes
Invert Behavior (Only Show Selected Data)
  11 Hours, 49 Minutes



Ryzen 9 5950X AOCC 3.0 Compiler Benchmarking - OpenBenchmarking.org - Phoronix Test Suite

Processor: AMD Ryzen 9 5950X 16-Core @ 3.40GHz (16 Cores / 32 Threads)
Motherboard: ASUS ROG CROSSHAIR VIII HERO (WI-FI) (3204 BIOS)
Chipset: AMD Starship/Matisse
Memory: 32GB
Disk: 2000GB Corsair Force MP600 + 2000GB
Graphics: AMD NAVY_FLOUNDER 12GB (2855/1000MHz)
Audio: AMD Device ab28
Monitor: ASUS MG28U
Network: Realtek RTL8125 2.5GbE + Intel I211 + Intel Wi-Fi 6 AX200
OS: Ubuntu 20.10
Kernel: 5.11.6-051106-generic (x86_64)
Desktop: GNOME Shell 3.38.2
Display Server: X Server 1.20.9
OpenGL: 4.6 Mesa 21.1.0-devel (git-684f97d 2021-03-12 groovy-oibaf-ppa) (LLVM 11.0.1)
Vulkan: 1.2.168
Compilers: GCC 10.2.0 + Clang 11.0.0 + Clang 12.0.0-++rc3-1~exp1~oibaf~g + Clang 12.0.0
File-System: ext4
Screen Resolution: 3840x2160

Ryzen 9 5950X AOCC 3.0 Compiler Benchmarking Performance - System Logs
- Transparent Huge Pages: madvise
- CXXFLAGS="-O3 -march=native" CFLAGS="-O3 -march=native"
- GCC 10.2: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- AMD AOCC 2.3: Optimized build with assertions; Default target: x86_64-unknown-linux-gnu; Host CPU: (unknown)
- AMD AOCC 3.0: Optimized build with assertions; Default target: x86_64-unknown-linux-gnu; Host CPU: (unknown)
- Scaling Governor: acpi-cpufreq performance (Boost: Enabled)
- CPU Microcode: 0xa201009
- Python 3.8.6
- Security: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional IBRS_FW STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected
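The CXXFLAGS/CFLAGS noted above were applied to every compiler under test. As an illustration only (the install prefix and the configure-based build flow are assumptions, not the exact steps each test profile performs), a source build driven by AOCC or Clang with these flags would look roughly like:

    # AOCC ships its own clang/clang++; the prefix below is an assumed install location
    export PATH=/opt/AMD/aocc-compiler/bin:$PATH
    CC=clang CXX=clang++ \
    CFLAGS="-O3 -march=native" CXXFLAGS="-O3 -march=native" \
    ./configure && make -j32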

Logarithmic Result Overview (Phoronix Test Suite) - GCC 10.2 vs. AMD AOCC 2.3 vs. LLVM Clang 12 vs. AMD AOCC 3.0, covering: Sysbench, Timed LLVM Compilation, C-Ray, LibRaw, Etcpak, Ogg Audio Encoding, Google SynthMark, GraphicsMagick, SVT-AV1, TSCP, QuantLib, NCNN, TNN, JPEG XL Decoding, POV-Ray, Zstd Compression, SVT-VP9, libavif avifenc, JPEG XL, ONNX Runtime, ASTC Encoder, WebP2 Image Encode, WebP Image Encode, Basis Universal, dav1d, LZ4 Compression, Tachyon, x265, simdjson, Timed MrBayes Analysis, RNNoise, Ngspice, Redis, Gcrypt Library, Liquid-DSP, Crypto++, x264, WavPack Audio Encoding, GNU Radio, and Timed Godot Game Engine Compilation.

Detailed System Result Table: full per-test results for GCC 10.2, AMD AOCC 2.3, LLVM Clang 12, and AMD AOCC 3.0 (see the individual result graphs below and the complete condensed table on the OpenBenchmarking.org result page).

Sysbench

This is a benchmark of Sysbench with the built-in CPU and memory sub-tests. Sysbench is a scriptable multi-threaded benchmark tool based on LuaJIT. Learn more via the OpenBenchmarking.org test page.
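For reference, the CPU sub-test can also be run by hand once sysbench is installed; a minimal sketch (thread count chosen here to match the 5950X's 32 threads, other options left at defaults):

    sysbench cpu --threads=32 run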

OpenBenchmarking.org - Sysbench 1.0.20 - Test: CPU - Events Per Second, More Is Better
AMD AOCC 3.0: 210533984.51 (SE +/- 204338.54, N = 3)
LLVM Clang 12: 2445437.63 (SE +/- 5355.39, N = 3)
AMD AOCC 2.3: 210804861.92 (SE +/- 301436.60, N = 3)
GCC 10.2: 91743.72 (SE +/- 115.96, N = 3)
1. (CC) gcc options: -pthread -O2 -funroll-loops -O3 -march=native -rdynamic -ldl -laio -lm

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet", focused on providing open-source, very fast ETC and S3 texture compression support. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Etcpak 0.7 - Configuration: DXT1 - Mpx/s, More Is Better
AMD AOCC 3.0: 3583.01 (SE +/- 7.06, N = 3)
LLVM Clang 12: 3669.19 (SE +/- 26.84, N = 3)
AMD AOCC 2.3: 2986.71 (SE +/- 5.75, N = 3)
GCC 10.2: 1546.30 (SE +/- 2.21, N = 3)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler. Learn more via the OpenBenchmarking.org test page.
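The test profile drives the build itself; a hand-rolled equivalent would be an out-of-tree Release build of LLVM, roughly as below (the generator choice and job count are assumptions):

    cmake -G Ninja -DCMAKE_BUILD_TYPE=Release \
          -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++ ../llvm
    ninja -j 32    # this compile step is what gets timed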

OpenBenchmarking.org - Timed LLVM Compilation 10.0 - Time To Compile - Seconds, Fewer Is Better
AMD AOCC 3.0: 610.96 (SE +/- 5.16, N = 3)
LLVM Clang 12: 302.70 (SE +/- 1.23, N = 3)
AMD AOCC 2.3: 576.19 (SE +/- 5.39, N = 3)
GCC 10.2: 370.57 (SE +/- 2.79, N = 3)

C-Ray

This is a test of C-Ray, a simple raytracer designed to test the floating-point CPU performance. This test is multi-threaded (16 threads per core), will shoot 8 rays per pixel for anti-aliasing, and will generate a 1600 x 1200 image. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - C-Ray 1.1 - Total Time - 4K, 16 Rays Per Pixel - Seconds, Fewer Is Better
AMD AOCC 3.0: 44.33 (SE +/- 0.08, N = 3)
LLVM Clang 12: 44.89 (SE +/- 0.08, N = 3)
AMD AOCC 2.3: 44.53 (SE +/- 0.06, N = 3)
GCC 10.2: 25.09 (SE +/- 0.07, N = 3)
1. (CC) gcc options: -lm -lpthread -O3 -march=native

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.
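The individual operations map onto ordinary gm convert invocations; a rough sketch with a hypothetical input file (the exact radii and parameters used by the test profile are not shown here):

    gm convert sample.jpg -sharpen 0x2 sharpened.jpg
    gm convert sample.jpg -resize 50% resized.jpg
    gm convert sample.jpg -rotate 90 rotated.jpg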

OpenBenchmarking.org - GraphicsMagick 1.3.33 - Operation: Sharpen - Iterations Per Minute, More Is Better
AMD AOCC 3.0: 240 (SE +/- 0.67, N = 3)
LLVM Clang 12: 237 (SE +/- 0.58, N = 3)
AMD AOCC 2.3: 241 (SE +/- 0.58, N = 3)
GCC 10.2: 375 (SE +/- 1.00, N = 3)
1. (CC) gcc options: -fopenmp -O3 -march=native -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

LibRaw

LibRaw is a RAW image decoder for digital camera photos. This test profile runs LibRaw's post-processing benchmark. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - LibRaw 0.20 - Post-Processing Benchmark - Mpix/sec, More Is Better
AMD AOCC 3.0: 52.68 (SE +/- 0.09, N = 3)
LLVM Clang 12: 54.14 (SE +/- 0.11, N = 3)
AMD AOCC 2.3: 50.37 (SE +/- 0.08, N = 3)
GCC 10.2: 78.66 (SE +/- 0.16, N = 3)
1. (CXX) g++ options: -O3 -march=native -fopenmp -ljpeg -lz -lm

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.
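NCNN ships a benchmark driver, benchncnn, which is roughly what this profile exercises; a sketch of a manual CPU-only run (argument order per the upstream README: loop count, threads, powersave mode, GPU device with -1 meaning none; exact options may differ by ncnn version):

    ./benchncnn 8 16 0 -1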

OpenBenchmarking.org - NCNN 20201218 - Target: CPU - Model: regnety_400m - ms, Fewer Is Better
AMD AOCC 3.0: 12.30 (SE +/- 0.09, N = 3; -lomp; MIN: 11.96 / MAX: 17.61)
LLVM Clang 12: 17.06 (SE +/- 0.06, N = 3; -lomp; MIN: 16.85 / MAX: 20.53)
AMD AOCC 2.3: 12.19 (SE +/- 0.11, N = 3; -lomp; MIN: 11.89 / MAX: 13.6)
GCC 10.2: 17.61 (SE +/- 0.06, N = 15; -lgomp; MIN: 16.94 / MAX: 25.97)
1. (CXX) g++ options: -O3 -march=native -rdynamic -lpthread

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - GraphicsMagick 1.3.33 - Operation: HWB Color Space - Iterations Per Minute, More Is Better
AMD AOCC 3.0: 805
LLVM Clang 12: 844
AMD AOCC 2.3: 848
GCC 10.2: 1115
Standard errors reported for three of the four runs: SE +/- 3.06, N = 3; SE +/- 1.00, N = 3; SE +/- 1.33, N = 3
1. (CC) gcc options: -fopenmp -O3 -march=native -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.
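astcenc is a command-line tool; a hand-run compression roughly equivalent to the Medium preset would look like the following (the input file and the 6x6 block size are placeholders, not necessarily what the test profile uses):

    astcenc -cl input.png output.astc 6x6 -medium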

OpenBenchmarking.org - ASTC Encoder 2.4 - Preset: Thorough - Seconds, Fewer Is Better
AMD AOCC 3.0: 9.3493 (SE +/- 0.0148, N = 3)
LLVM Clang 12: 9.4996 (SE +/- 0.0075, N = 3)
AMD AOCC 2.3: 9.2012 (SE +/- 0.0090, N = 3)
GCC 10.2: 6.9922 (SE +/- 0.0057, N = 3)
1. (CXX) g++ options: -O3 -march=native -flto -pthread

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet", focused on providing open-source, very fast ETC and S3 texture compression support. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Etcpak 0.7 - Configuration: ETC1 - Mpx/s, More Is Better
AMD AOCC 3.0: 286.93 (SE +/- 0.06, N = 3)
LLVM Clang 12: 383.47 (SE +/- 0.14, N = 3)
AMD AOCC 2.3: 285.29 (SE +/- 1.16, N = 3)
GCC 10.2: 386.56 (SE +/- 0.37, N = 3)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread

SVT-AV1

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 CPU-based multi-threaded video encoder for the AV1 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - SVT-AV1 0.8 - Encoder Mode: Enc Mode 8 - Input: 1080p - Frames Per Second, More Is Better
AMD AOCC 3.0: 64.28 (SE +/- 0.66, N = 3)
LLVM Clang 12: 65.31 (SE +/- 0.13, N = 3)
AMD AOCC 2.3: 65.75 (SE +/- 0.13, N = 3)
GCC 10.2: 51.77 (SE +/- 0.24, N = 3)
1. (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - GraphicsMagick 1.3.33 - Operation: Resizing - Iterations Per Minute, More Is Better
AMD AOCC 3.0: 1720
LLVM Clang 12: 1789
AMD AOCC 2.3: 1824
GCC 10.2: 2165
Standard errors reported for three of the four runs: SE +/- 2.65, N = 3; SE +/- 1.15, N = 3; SE +/- 1.45, N = 3
1. (CC) gcc options: -fopenmp -O3 -march=native -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - NCNN 20201218 - Target: CPU-v2-v2 - Model: mobilenet-v2 - ms, Fewer Is Better
AMD AOCC 3.0: 3.53 (SE +/- 0.06, N = 4; -lomp; MIN: 3.27 / MAX: 4.75)
LLVM Clang 12: 3.79 (SE +/- 0.04, N = 3; -lomp; MIN: 3.63 / MAX: 5.2)
AMD AOCC 2.3: 3.52 (SE +/- 0.03, N = 3; -lomp; MIN: 3.34 / MAX: 4.84)
GCC 10.2: 4.43 (SE +/- 0.01, N = 15; -lgomp; MIN: 4.19 / MAX: 11.09)
1. (CXX) g++ options: -O3 -march=native -rdynamic -lpthread

OpenBenchmarking.org - NCNN 20201218 - Target: CPU-v3-v3 - Model: mobilenet-v3 - ms, Fewer Is Better
AMD AOCC 3.0: 3.07 (SE +/- 0.07, N = 4; -lomp; MIN: 2.9 / MAX: 4.41)
LLVM Clang 12: 3.33 (SE +/- 0.05, N = 3; -lomp; MIN: 3.19 / MAX: 5.6)
AMD AOCC 2.3: 3.06 (SE +/- 0.03, N = 3; -lomp; MIN: 2.98 / MAX: 4.3)
GCC 10.2: 3.85 (SE +/- 0.02, N = 15; -lgomp; MIN: 3.74 / MAX: 10.85)
1. (CXX) g++ options: -O3 -march=native -rdynamic -lpthread

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - TNN 0.2.3 - Target: CPU - Model: MobileNet v2 - ms, Fewer Is Better
AMD AOCC 3.0: 260.66 (SE +/- 0.69, N = 3; -fopenmp=libomp; MIN: 257.51 / MAX: 262.88)
LLVM Clang 12: 270.79 (SE +/- 0.58, N = 3; -fopenmp=libomp; MIN: 268.42 / MAX: 272.22)
AMD AOCC 2.3: 252.45 (SE +/- 0.35, N = 3; -fopenmp=libomp; MIN: 250.25 / MAX: 255.53)
GCC 10.2: 216.28 (SE +/- 0.56, N = 3; -fopenmp; MIN: 215.1 / MAX: 218.26)
1. (CXX) g++ options: -O3 -march=native -pthread -fvisibility=hidden -rdynamic -ldl

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - NCNN 20201218 - Target: CPU - Model: mnasnet - ms, Fewer Is Better
AMD AOCC 3.0: 3.18 (SE +/- 0.03, N = 4; -lomp; MIN: 3.04 / MAX: 4.48)
LLVM Clang 12: 3.45 (SE +/- 0.02, N = 3; -lomp; MIN: 3.37 / MAX: 4.6)
AMD AOCC 2.3: 3.16 (SE +/- 0.03, N = 3; -lomp; MIN: 3.06 / MAX: 4.05)
GCC 10.2: 3.93 (SE +/- 0.02, N = 15; -lgomp; MIN: 3.71 / MAX: 6.06)
1. (CXX) g++ options: -O3 -march=native -rdynamic -lpthread

Ogg Audio Encoding

This test times how long it takes to encode a sample WAV file to Ogg format using the reference Xiph.org tools/libraries. Learn more via the OpenBenchmarking.org test page.
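The reference encoder is the oggenc utility from the Xiph.org vorbis-tools; encoding a WAV by hand looks roughly like the following (the quality level is an arbitrary example, not necessarily what the profile uses):

    oggenc -q 5 -o output.ogg input.wav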

OpenBenchmarking.org - Ogg Audio Encoding 1.3.4 - WAV To Ogg - Seconds, Fewer Is Better
AMD AOCC 3.0: 16.54 (SE +/- 0.09, N = 3)
LLVM Clang 12: 13.37 (SE +/- 0.12, N = 3)
AMD AOCC 2.3: 16.56 (SE +/- 0.08, N = 3)
GCC 10.2: 13.58 (SE +/- 0.04, N = 3)
1. (CC) gcc options: -O2 -ffast-math -fsigned-char -O3 -march=native

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - ASTC Encoder 2.4 - Preset: Medium - Seconds, Fewer Is Better
AMD AOCC 3.0: 3.4040 (SE +/- 0.0017, N = 3)
LLVM Clang 12: 3.5076 (SE +/- 0.0018, N = 3)
AMD AOCC 2.3: 3.2899 (SE +/- 0.0273, N = 3)
GCC 10.2: 4.0524 (SE +/- 0.0178, N = 3)
1. (CXX) g++ options: -O3 -march=native -flto -pthread

Google SynthMark

SynthMark is a cross platform tool for benchmarking CPU performance under a variety of real-time audio workloads. It uses a polyphonic synthesizer model to provide standardized tests for latency, jitter and computational throughput. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Google SynthMark 20201109 - Test: VoiceMark_100 - Voices, More Is Better
AMD AOCC 3.0: 789.22 (SE +/- 5.41, N = 3)
LLVM Clang 12: 795.81 (SE +/- 4.01, N = 3)
AMD AOCC 2.3: 807.37 (SE +/- 5.04, N = 3)
GCC 10.2: 966.30 (SE +/- 1.26, N = 3)
1. (CXX) g++ options: -lm -lpthread -std=c++11 -Ofast

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
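The compression levels and long mode in the result titles correspond to ordinary zstd switches; a manual equivalent of the level 3 long-mode case would be roughly (the file name and thread count are placeholders):

    zstd -3 --long -T0 FreeBSD-12.2-RELEASE-amd64-memstick.img        # compression
    zstd -d --long FreeBSD-12.2-RELEASE-amd64-memstick.img.zst        # decompression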

OpenBenchmarking.org - Zstd Compression 1.4.9 - Compression Level: 3, Long Mode - Compression Speed - MB/s, More Is Better
AMD AOCC 3.0: 1186.0 (SE +/- 2.80, N = 3)
LLVM Clang 12: 1191.6 (SE +/- 1.19, N = 3)
AMD AOCC 2.3: 1166.4 (SE +/- 4.71, N = 3)
GCC 10.2: 1425.9 (SE +/- 2.43, N = 3)
1. (CC) gcc options: -O3 -march=native -pthread -lz -llzma

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - GraphicsMagick 1.3.33 - Operation: Rotate - Iterations Per Minute, More Is Better
AMD AOCC 3.0: 867 (SE +/- 2.03, N = 3)
LLVM Clang 12: 1016 (SE +/- 1.86, N = 3)
AMD AOCC 2.3: 928 (SE +/- 8.67, N = 3)
GCC 10.2: 1056 (SE +/- 3.51, N = 3)
1. (CC) gcc options: -fopenmp -O3 -march=native -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - NCNN 20201218 - Target: CPU - Model: efficientnet-b0 - ms, Fewer Is Better
AMD AOCC 3.0: 4.50 (SE +/- 0.03, N = 4; -lomp; MIN: 4.34 / MAX: 5.7)
LLVM Clang 12: 4.80 (SE +/- 0.02, N = 3; -lomp; MIN: 4.71 / MAX: 6.61)
AMD AOCC 2.3: 4.53 (SE +/- 0.07, N = 3; -lomp; MIN: 4.35 / MAX: 6.86)
GCC 10.2: 5.32 (SE +/- 0.02, N = 15; -lgomp; MIN: 5.15 / MAX: 13.83)
1. (CXX) g++ options: -O3 -march=native -rdynamic -lpthread

TSCP

This is a performance test of TSCP, Tom Kerrigan's Simple Chess Program, which has a built-in performance benchmark. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - TSCP 1.81 - AI Chess Performance - Nodes Per Second, More Is Better
AMD AOCC 3.0: 2283512 (SE +/- 3546.09, N = 5)
LLVM Clang 12: 2148154 (SE +/- 4267.44, N = 5)
AMD AOCC 2.3: 2314225 (SE +/- 4348.49, N = 5)
GCC 10.2: 1965773 (SE +/- 7442.75, N = 5)
1. (CC) gcc options: -O3 -march=native

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - NCNN 20201218 - Target: CPU - Model: blazeface - ms, Fewer Is Better
AMD AOCC 3.0: 1.56 (SE +/- 0.02, N = 4; -lomp; MIN: 1.46 / MAX: 6.9)
LLVM Clang 12: 1.73 (SE +/- 0.02, N = 3; -lomp; MIN: 1.68 / MAX: 1.79)
AMD AOCC 2.3: 1.57 (SE +/- 0.01, N = 3; -lomp; MIN: 1.54 / MAX: 1.75)
GCC 10.2: 1.83 (SE +/- 0.01, N = 15; -lgomp; MIN: 1.77 / MAX: 3.9)
1. (CXX) g++ options: -O3 -march=native -rdynamic -lpthread

QuantLib

QuantLib is an open-source library/framework around quantitative finance for modeling, trading and risk management scenarios. QuantLib is written in C++ with Boost, and its built-in benchmark reports the QuantLib Benchmark Index score. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - QuantLib 1.21 - MFLOPS, More Is Better
AMD AOCC 3.0: 3646.4 (SE +/- 27.64, N = 10)
LLVM Clang 12: 3538.5 (SE +/- 49.56, N = 3)
AMD AOCC 2.3: 3710.4 (SE +/- 28.46, N = 10)
GCC 10.2: 3196.9 (SE +/- 33.41, N = 5)
1. (CXX) g++ options: -O3 -march=native -rdynamic

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - GraphicsMagick 1.3.33 - Operation: Noise-Gaussian - Iterations Per Minute, More Is Better
AMD AOCC 3.0: 392
LLVM Clang 12: 398
AMD AOCC 2.3: 402
GCC 10.2: 454
Standard errors reported for three of the four runs: SE +/- 0.33, N = 3; SE +/- 0.33, N = 3; SE +/- 0.67, N = 3
1. (CC) gcc options: -fopenmp -O3 -march=native -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet", focused on providing open-source, very fast ETC and S3 texture compression support. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Etcpak 0.7 - Configuration: ETC2 - Mpx/s, More Is Better
AMD AOCC 3.0: 242.03 (SE +/- 0.45, N = 3)
LLVM Clang 12: 272.99 (SE +/- 0.09, N = 3)
AMD AOCC 2.3: 236.47 (SE +/- 2.43, N = 3)
GCC 10.2: 245.04 (SE +/- 1.65, N = 3)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Liquid-DSP 2021.01.31 - Threads: 32 - Buffer Length: 256 - Filter Length: 57 - samples/s, More Is Better
AMD AOCC 3.0: 1334900000 (SE +/- 1422439.22, N = 3)
LLVM Clang 12: 1335233333 (SE +/- 240370.09, N = 3)
AMD AOCC 2.3: 1332333333 (SE +/- 1125956.38, N = 3)
GCC 10.2: 1164966667 (SE +/- 497772.82, N = 3)
1. (CC) gcc options: -O3 -march=native -pthread -lm -lc -lliquid

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - ONNX Runtime 1.6 - Model: shufflenet-v2-10 - Device: OpenMP CPU - Inferences Per Minute, More Is Better
AMD AOCC 3.0: 15474 (SE +/- 177.70, N = 3; -fopenmp=libomp)
LLVM Clang 12: 14972 (SE +/- 123.46, N = 12; -fopenmp=libomp)
AMD AOCC 2.3: 17105 (SE +/- 193.85, N = 4; -fopenmp=libomp)
GCC 10.2: 15049 (SE +/- 134.84, N = 3; -fopenmp)
1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -ldl -lrt

JPEG XL Decoding

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is suited for JPEG XL decode performance testing to a PNG output file; the pts/jpegxl test profile covers encode performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - JPEG XL Decoding 0.3.3 - CPU Threads: 1 - MP/s, More Is Better
AMD AOCC 3.0: 59.92 (SE +/- 0.03, N = 3)
LLVM Clang 12: 62.27 (SE +/- 0.04, N = 3)
AMD AOCC 2.3: 64.34 (SE +/- 0.11, N = 3)
GCC 10.2: 56.53 (SE +/- 0.05, N = 3)

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
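cwebp exposes the quality and compression-effort settings directly; the Quality 100, Highest Compression configuration corresponds roughly to the following (file names are placeholders):

    cwebp -q 100 -m 6 sample.jpg -o sample.webp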

OpenBenchmarking.org - WebP Image Encode 1.1 - Encode Settings: Quality 100, Highest Compression - Encode Time - Seconds, Fewer Is Better
AMD AOCC 3.0: 4.937 (SE +/- 0.020, N = 3)
LLVM Clang 12: 4.674 (SE +/- 0.055, N = 3)
AMD AOCC 2.3: 4.609 (SE +/- 0.016, N = 3)
GCC 10.2: 5.242 (SE +/- 0.018, N = 3)
1. (CC) gcc options: -fvisibility=hidden -O3 -march=native -pthread -lm -ljpeg -lpng16 -ltiff

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) run on the CPU with a sample 1080p video file. Learn more via the OpenBenchmarking.org test page.
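The encoder mode names map onto aomenc's --cpu-used speed levels; a simplified one-pass invocation would look roughly like this (the two-pass configurations additionally use --passes=2; file names are placeholders):

    aomenc --cpu-used=6 --ivf -o output.ivf input_1080p.y4m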

OpenBenchmarking.org - AOM AV1 2.1-rc - Encoder Mode: Speed 0 Two-Pass - Frames Per Second, More Is Better
LLVM Clang 12: 0.42 (SE +/- 0.00, N = 3)
GCC 10.2: 0.37 (SE +/- 0.00, N = 3)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - NCNN 20201218 - Target: CPU - Model: mobilenet - ms, Fewer Is Better
AMD AOCC 3.0: 11.28 (SE +/- 0.12, N = 4; -lomp; MIN: 10.61 / MAX: 20.99)
LLVM Clang 12: 11.53 (SE +/- 0.05, N = 3; -lomp; MIN: 11.09 / MAX: 12.2)
AMD AOCC 2.3: 10.96 (SE +/- 0.09, N = 3; -lomp; MIN: 10.51 / MAX: 16.79)
GCC 10.2: 12.42 (SE +/- 0.16, N = 15; -lgomp; MIN: 11.7 / MAX: 20.08)
1. (CXX) g++ options: -O3 -march=native -rdynamic -lpthread

SVT-AV1

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 CPU-based multi-threaded video encoder for the AV1 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - SVT-AV1 0.8 - Encoder Mode: Enc Mode 4 - Input: 1080p - Frames Per Second, More Is Better
AMD AOCC 3.0: 6.917 (SE +/- 0.055, N = 3)
LLVM Clang 12: 6.823 (SE +/- 0.033, N = 3)
AMD AOCC 2.3: 6.859 (SE +/- 0.004, N = 3)
GCC 10.2: 6.137 (SE +/- 0.014, N = 3)
1. (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.
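The Encoder Speed values in the results are avifenc's -s/--speed setting; a manual run equivalent to Speed 6 would look roughly like the following (the lossless variants additionally pass avifenc's lossless option; file names are placeholders):

    avifenc -s 6 input.jpg output.avif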

OpenBenchmarking.org - libavif avifenc 0.9.0 - Encoder Speed: 6, Lossless - Seconds, Fewer Is Better
AMD AOCC 3.0: 27.91 (SE +/- 0.03, N = 3)
LLVM Clang 12: 27.85 (SE +/- 0.11, N = 3)
AMD AOCC 2.3: 27.62 (SE +/- 0.10, N = 3)
GCC 10.2: 30.98 (SE +/- 0.06, N = 3)
1. (CXX) g++ options: -O3 -fPIC -lm

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - NCNN 20201218 - Target: CPU - Model: shufflenet-v2 - ms, Fewer Is Better
AMD AOCC 3.0: 3.87 (SE +/- 0.06, N = 4; -lomp; MIN: 3.67 / MAX: 12.94)
LLVM Clang 12: 4.04 (SE +/- 0.05, N = 3; -lomp; MIN: 3.88 / MAX: 5.03)
AMD AOCC 2.3: 3.79 (SE +/- 0.05, N = 3; -lomp; MIN: 3.64 / MAX: 4.86)
GCC 10.2: 4.23 (SE +/- 0.01, N = 15; -lgomp; MIN: 4.15 / MAX: 9.05)
1. (CXX) g++ options: -O3 -march=native -rdynamic -lpthread

JPEG XL Decoding

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is suited for JPEG XL decode performance testing to a PNG output file; the pts/jpegxl test profile covers encode performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - JPEG XL Decoding 0.3.3 - CPU Threads: All - MP/s, More Is Better
AMD AOCC 3.0: 191.91 (SE +/- 0.21, N = 3)
LLVM Clang 12: 196.34 (SE +/- 0.05, N = 3)
AMD AOCC 2.3: 213.67 (SE +/- 0.40, N = 3)
GCC 10.2: 210.99 (SE +/- 0.29, N = 3)

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - NCNN 20201218 - Target: CPU - Model: squeezenet_ssd - ms, Fewer Is Better
AMD AOCC 3.0: 12.40 (SE +/- 0.24, N = 4; -lomp; MIN: 11.72 / MAX: 15.69)
LLVM Clang 12: 12.50 (SE +/- 0.13, N = 3; -lomp; MIN: 12.23 / MAX: 16.6)
AMD AOCC 2.3: 12.74 (SE +/- 0.10, N = 3; -lomp; MIN: 12 / MAX: 19.89)
GCC 10.2: 13.77 (SE +/- 0.06, N = 15; -lgomp; MIN: 13.25 / MAX: 23.45)
1. (CXX) g++ options: -O3 -march=native -rdynamic -lpthread

JPEG XL

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - JPEG XL 0.3.3 - Input: JPEG - Encode Speed: 8 - MP/s, More Is Better
AMD AOCC 3.0: 34.34 (SE +/- 0.05, N = 3; -Xclang -mrelax-all)
LLVM Clang 12: 36.44 (SE +/- 0.06, N = 3; -Xclang -mrelax-all)
AMD AOCC 2.3: 35.87 (SE +/- 0.09, N = 3; -Xclang -mrelax-all)
GCC 10.2: 38.13 (SE +/- 0.02, N = 3)
1. (CXX) g++ options: -O3 -march=native -funwind-tables -O2 -pthread -fPIE -pie -ldl

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.
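The compression levels map onto the standard lz4 CLI switches; a rough manual equivalent (the ISO name is a placeholder, -k keeps the input file):

    lz4 -9 -k ubuntu.iso ubuntu.iso.lz4    # level 9 compression
    lz4 -d ubuntu.iso.lz4 restored.iso     # decompression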

OpenBenchmarking.org - LZ4 Compression 1.9.3 - Compression Level: 9 - Compression Speed - MB/s, More Is Better
AMD AOCC 3.0: 68.40 (SE +/- 0.68, N = 6)
LLVM Clang 12: 64.43 (SE +/- 0.14, N = 3)
AMD AOCC 2.3: 68.80 (SE +/- 0.51, N = 3)
GCC 10.2: 71.13 (SE +/- 0.68, N = 6)
1. (CC) gcc options: -O3

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - NCNN 20201218 - Target: CPU - Model: resnet50 - ms, Fewer Is Better
AMD AOCC 3.0: 23.29 (SE +/- 0.20, N = 4; -lomp; MIN: 22.43 / MAX: 25.17)
LLVM Clang 12: 23.54 (SE +/- 0.20, N = 3; -lomp; MIN: 22.92 / MAX: 26.57)
AMD AOCC 2.3: 23.31 (SE +/- 0.09, N = 3; -lomp; MIN: 22.75 / MAX: 33.51)
GCC 10.2: 25.67 (SE +/- 0.21, N = 15; -lgomp; MIN: 24.52 / MAX: 35.96)
1. (CXX) g++ options: -O3 -march=native -rdynamic -lpthread

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Zstd Compression 1.4.9 - Compression Level: 19, Long Mode - Decompression Speed - MB/s, More Is Better
AMD AOCC 3.0: 3978.2 (SE +/- 39.89, N = 3)
LLVM Clang 12: 3957.9 (SE +/- 25.47, N = 3)
AMD AOCC 2.3: 4024.7 (SE +/- 16.46, N = 3)
GCC 10.2: 4350.9 (SE +/- 72.38, N = 3)
1. (CC) gcc options: -O3 -march=native -pthread -lz -llzma

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - simdjson 0.8.2 - Throughput Test: LargeRandom - GB/s, More Is Better
AMD AOCC 3.0: 1.12 (SE +/- 0.01, N = 3)
LLVM Clang 12: 1.14 (SE +/- 0.01, N = 3)
AMD AOCC 2.3: 1.11 (SE +/- 0.01, N = 3)
GCC 10.2: 1.22 (SE +/- 0.01, N = 3)
1. (CXX) g++ options: -O3 -march=native -pthread

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Zstd Compression 1.4.9 - Compression Level: 8, Long Mode - Compression Speed - MB/s, More Is Better
AMD AOCC 3.0: 1024.5 (SE +/- 5.57, N = 3)
LLVM Clang 12: 1034.5 (SE +/- 2.69, N = 3)
AMD AOCC 2.3: 1025.4 (SE +/- 6.07, N = 3)
GCC 10.2: 1122.6 (SE +/- 2.15, N = 3)
1. (CC) gcc options: -O3 -march=native -pthread -lz -llzma

JPEG XL

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - JPEG XL 0.3.3 - Input: PNG - Encode Speed: 8 - MP/s, More Is Better
AMD AOCC 3.0: 1.04 (SE +/- 0.00, N = 3; -Xclang -mrelax-all)
LLVM Clang 12: 1.06 (SE +/- 0.01, N = 3; -Xclang -mrelax-all)
AMD AOCC 2.3: 1.04 (SE +/- 0.01, N = 3; -Xclang -mrelax-all)
GCC 10.2: 1.14 (SE +/- 0.00, N = 3)
1. (CXX) g++ options: -O3 -march=native -funwind-tables -O2 -pthread -fPIE -pie -ldl

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - simdjson 0.8.2 - Throughput Test: PartialTweets - GB/s, More Is Better
AMD AOCC 3.0: 6.04 (SE +/- 0.01, N = 3)
LLVM Clang 12: 6.18 (SE +/- 0.03, N = 3)
AMD AOCC 2.3: 5.92 (SE +/- 0.03, N = 3)
GCC 10.2: 5.64 (SE +/- 0.05, N = 3)
1. (CXX) g++ options: -O3 -march=native -pthread

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Basis Universal 1.13 - Settings: UASTC Level 0 - Seconds, Fewer Is Better
AMD AOCC 3.0: 5.639 (SE +/- 0.015, N = 3)
LLVM Clang 12: 5.522 (SE +/- 0.015, N = 3)
AMD AOCC 2.3: 5.453 (SE +/- 0.009, N = 3)
GCC 10.2: 5.157 (SE +/- 0.023, N = 3)
1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - simdjson 0.8.2 - Throughput Test: DistinctUserID - GB/s, More Is Better
AMD AOCC 3.0: 6.23 (SE +/- 0.02, N = 3)
LLVM Clang 12: 6.26 (SE +/- 0.04, N = 3)
AMD AOCC 2.3: 6.12 (SE +/- 0.06, N = 3)
GCC 10.2: 5.73 (SE +/- 0.02, N = 3)
1. (CXX) g++ options: -O3 -march=native -pthread

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - ONNX Runtime 1.6 - Model: yolov4 - Device: OpenMP CPU - Inferences Per Minute, More Is Better
AMD AOCC 3.0: 465 (SE +/- 1.36, N = 3; -fopenmp=libomp)
LLVM Clang 12: 426 (SE +/- 2.13, N = 3; -fopenmp=libomp)
AMD AOCC 2.3: 456 (SE +/- 2.95, N = 3; -fopenmp=libomp)
GCC 10.2: 433 (SE +/- 1.96, N = 3; -fopenmp)
1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -ldl -lrt

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Liquid-DSP 2021.01.31 - Threads: 1 - Buffer Length: 256 - Filter Length: 57 - samples/s, More Is Better
AMD AOCC 3.0: 78734000 (SE +/- 601612.28, N = 3)
LLVM Clang 12: 77794333 (SE +/- 171803.51, N = 3)
AMD AOCC 2.3: 75031667 (SE +/- 78876.13, N = 3)
GCC 10.2: 81844000 (SE +/- 828458.69, N = 5)
1. (CC) gcc options: -O3 -march=native -pthread -lm -lc -lliquid

POV-Ray

This is a test of POV-Ray, the Persistence of Vision Raytracer. POV-Ray is used to create 3D graphics using ray-tracing. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - POV-Ray 3.7.0.7 - Trace Time - Seconds, Fewer Is Better
AMD AOCC 3.0: 22.54 (SE +/- 0.02, N = 3)
LLVM Clang 12: 22.12 (SE +/- 0.04, N = 3)
AMD AOCC 2.3: 22.54 (SE +/- 0.01, N = 3)
GCC 10.2: 24.09 (SE +/- 0.09, N = 3)
1. (CXX) g++ options: -pipe -O3 -ffast-math -march=native -pthread -lSDL -lXpm -lSM -lICE -lX11 -lIlmImf -lIlmImf-2_5 -lImath-2_5 -lHalf-2_5 -lIex-2_5 -lIexMath-2_5 -lIlmThread-2_5 -lIlmThread -ltiff -ljpeg -lpng -lz -lrt -lm -lboost_thread -lboost_system

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility and using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as ultimately the successor to WebP. WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading support compared to WebP. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - WebP2 Image Encode 20210126 - Encode Settings: Quality 75, Compression Effort 7 - Seconds, Fewer Is Better
AMD AOCC 3.0: 105.72 (SE +/- 0.88, N = 3)
LLVM Clang 12: 103.01 (SE +/- 0.95, N = 3)
AMD AOCC 2.3: 106.38 (SE +/- 0.42, N = 3)
GCC 10.2: 111.80 (SE +/- 1.06, N = 3)
1. (CXX) g++ options: -O3 -march=native -fno-rtti -rdynamic -ljpeg -lgif -lwebp -lwebpdemux -lpthread

OpenBenchmarking.org - WebP2 Image Encode 20210126 - Encode Settings: Quality 95, Compression Effort 7 - Seconds, Fewer Is Better
AMD AOCC 3.0: 193.43 (SE +/- 0.56, N = 3)
LLVM Clang 12: 188.31 (SE +/- 0.09, N = 3)
AMD AOCC 2.3: 193.60 (SE +/- 0.03, N = 3)
GCC 10.2: 203.81 (SE +/- 0.04, N = 3)
1. (CXX) g++ options: -O3 -march=native -fno-rtti -rdynamic -ljpeg -lgif -lwebp -lwebpdemux -lpthread

OpenBenchmarking.org - WebP2 Image Encode 20210126 - Encode Settings: Quality 100, Compression Effort 5 - Seconds, Fewer Is Better
AMD AOCC 3.0: 6.364 (SE +/- 0.010, N = 3)
LLVM Clang 12: 6.789 (SE +/- 0.010, N = 3)
AMD AOCC 2.3: 6.288 (SE +/- 0.015, N = 3)
GCC 10.2: 6.414 (SE +/- 0.011, N = 3)
1. (CXX) g++ options: -O3 -march=native -fno-rtti -rdynamic -ljpeg -lgif -lwebp -lwebpdemux -lpthread

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - libavif avifenc 0.9.0 - Encoder Speed: 2 - Seconds, Fewer Is Better
AMD AOCC 3.0: 22.01 (SE +/- 0.09, N = 3)
LLVM Clang 12: 22.07 (SE +/- 0.04, N = 3)
AMD AOCC 2.3: 21.83 (SE +/- 0.10, N = 3)
GCC 10.2: 23.54 (SE +/- 0.02, N = 3)
1. (CXX) g++ options: -O3 -fPIC -lm

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - GraphicsMagick 1.3.33 - Operation: Swirl - Iterations Per Minute, More Is Better
AMD AOCC 3.0: 1083 (SE +/- 3.84, N = 3)
LLVM Clang 12: 1108 (SE +/- 4.48, N = 3)
AMD AOCC 2.3: 1131 (SE +/- 3.71, N = 3)
GCC 10.2: 1166 (SE +/- 3.67, N = 3)
1. (CC) gcc options: -fopenmp -O3 -march=native -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Basis Universal 1.13 - Settings: ETC1S - Seconds, Fewer Is Better
AMD AOCC 3.0: 21.37 (SE +/- 0.21, N = 3)
LLVM Clang 12: 21.38 (SE +/- 0.07, N = 3)
AMD AOCC 2.3: 21.22 (SE +/- 0.18, N = 3)
GCC 10.2: 19.90 (SE +/- 0.04, N = 3)
1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - libavif avifenc 0.9.0 - Encoder Speed: 6 - Seconds, Fewer Is Better
AMD AOCC 3.0: 8.309 (SE +/- 0.012, N = 3)
LLVM Clang 12: 8.342 (SE +/- 0.055, N = 3)
AMD AOCC 2.3: 8.384 (SE +/- 0.027, N = 3)
GCC 10.2: 8.927 (SE +/- 0.048, N = 3)
1. (CXX) g++ options: -O3 -fPIC -lm

JPEG XL

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - JPEG XL 0.3.3 - Input: JPEG - Encode Speed: 5 - MP/s, More Is Better
AMD AOCC 3.0: 83.54 (SE +/- 0.09, N = 3; -Xclang -mrelax-all)
LLVM Clang 12: 85.67 (SE +/- 0.24, N = 3; -Xclang -mrelax-all)
AMD AOCC 2.3: 89.51 (SE +/- 0.25, N = 3; -Xclang -mrelax-all)
GCC 10.2: 87.35 (SE +/- 0.14, N = 3)
1. (CXX) g++ options: -O3 -march=native -funwind-tables -O2 -pthread -fPIE -pie -ldl

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Zstd Compression 1.4.9 - Compression Level: 8 - Compression Speed - MB/s, More Is Better
AMD AOCC 3.0: 1096.3 (SE +/- 9.53, N = 15)
LLVM Clang 12: 1078.1 (SE +/- 11.95, N = 4)
AMD AOCC 2.3: 1117.6 (SE +/- 9.29, N = 3)
GCC 10.2: 1057.4 (SE +/- 3.93, N = 3)
1. (CC) gcc options: -O3 -march=native -pthread -lz -llzma

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - libavif avifenc 0.9.0 - Encoder Speed: 0 - Seconds, Fewer Is Better
AMD AOCC 3.0: 41.03 (SE +/- 0.05, N = 3)
LLVM Clang 12: 41.08 (SE +/- 0.18, N = 3)
AMD AOCC 2.3: 40.75 (SE +/- 0.15, N = 3)
GCC 10.2: 43.62 (SE +/- 0.21, N = 3)
1. (CXX) g++ options: -O3 -fPIC -lm

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - NCNN 20201218 - Target: CPU - Model: googlenet - ms, Fewer Is Better
AMD AOCC 3.0: 11.96 (SE +/- 0.13, N = 4; -lomp; MIN: 11.46 / MAX: 13.27)
LLVM Clang 12: 12.53 (SE +/- 0.12, N = 3; -lomp; MIN: 12.12 / MAX: 17.12)
AMD AOCC 2.3: 11.94 (SE +/- 0.08, N = 3; -lomp; MIN: 11.62 / MAX: 12.42)
GCC 10.2: 12.76 (SE +/- 0.06, N = 15; -lgomp; MIN: 12.19 / MAX: 19.36)
1. (CXX) g++ options: -O3 -march=native -rdynamic -lpthread

JPEG XL

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - JPEG XL 0.3.3 - Input: JPEG - Encode Speed: 7 - MP/s, More Is Better
AMD AOCC 3.0: 83.63 (SE +/- 0.01, N = 3; -Xclang -mrelax-all)
LLVM Clang 12: 85.81 (SE +/- 0.20, N = 3; -Xclang -mrelax-all)
AMD AOCC 2.3: 89.32 (SE +/- 0.25, N = 3; -Xclang -mrelax-all)
GCC 10.2: 87.07 (SE +/- 0.19, N = 3)
1. (CXX) g++ options: -O3 -march=native -funwind-tables -O2 -pthread -fPIE -pie -ldl

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) run on the CPU with a sample 1080p video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - AOM AV1 2.1-rc - Encoder Mode: Speed 6 Realtime - Frames Per Second, More Is Better
LLVM Clang 12: 37.50 (SE +/- 0.31, N = 3)
GCC 10.2: 35.13 (SE +/- 0.16, N = 3)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.
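dav1d is driven from the command line; decoding a sample clip by hand looks roughly like the following (the input name is a placeholder, and the test profile's exact output handling may differ):

    dav1d -i summer_nature_4k.ivf -o decoded.y4m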

OpenBenchmarking.org - dav1d 0.8.2 - Video Input: Summer Nature 4K - FPS, More Is Better
AMD AOCC 3.0: 229.03 (SE +/- 0.43, N = 3; MIN: 171.52 / MAX: 237.17)
LLVM Clang 12: 244.37 (SE +/- 0.04, N = 3; MIN: 182.08 / MAX: 252.22)
AMD AOCC 2.3: 244.15 (SE +/- 0.32, N = 3; MIN: 180.82 / MAX: 252.96)
GCC 10.2: 243.69 (SE +/- 0.47, N = 3; -lm; MIN: 181.29 / MAX: 252.3)
1. (CC) gcc options: -O3 -march=native -pthread

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - WebP Image Encode 1.1 - Encode Settings: Default - Encode Time - Seconds, Fewer Is Better
AMD AOCC 3.0: 1.007 (SE +/- 0.005, N = 3)
LLVM Clang 12: 0.977 (SE +/- 0.014, N = 3)
AMD AOCC 2.3: 0.979 (SE +/- 0.006, N = 3)
GCC 10.2: 1.042 (SE +/- 0.008, N = 3)
1. (CC) gcc options: -fvisibility=hidden -O3 -march=native -pthread -lm -ljpeg -lpng16 -ltiff

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility and using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as ultimately the successor to WebP. WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading support compared to WebP. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - WebP2 Image Encode 20210126 - Encode Settings: Default - Seconds, Fewer Is Better
AMD AOCC 3.0: 2.165 (SE +/- 0.025, N = 3)
LLVM Clang 12: 2.134 (SE +/- 0.011, N = 3)
AMD AOCC 2.3: 2.144 (SE +/- 0.024, N = 3)
GCC 10.2: 2.274 (SE +/- 0.005, N = 3)
1. (CXX) g++ options: -O3 -march=native -fno-rtti -rdynamic -ljpeg -lgif -lwebp -lwebpdemux -lpthread

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - SVT-VP9 0.1 - Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080p - Frames Per Second, More Is Better
AMD AOCC 3.0: 225.17 (SE +/- 2.47, N = 12)
LLVM Clang 12: 223.50 (SE +/- 2.40, N = 12)
AMD AOCC 2.3: 238.11 (SE +/- 2.24, N = 13)
GCC 10.2: 235.04 (SE +/- 2.40, N = 12)
1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Zstd Compression 1.4.9 - Compression Level: 3, Long Mode - Decompression Speed - MB/s, More Is Better
AMD AOCC 3.0: 4543.8 (SE +/- 33.94, N = 3)
LLVM Clang 12: 4456.6 (SE +/- 2.17, N = 3)
AMD AOCC 2.3: 4586.2 (SE +/- 31.63, N = 3)
GCC 10.2: 4737.1 (SE +/- 46.74, N = 3)
1. (CC) gcc options: -O3 -march=native -pthread -lz -llzma

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.
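The LPUSH/LPOP/SET/SADD/GET figures come from Redis' bundled benchmark client; against a locally running redis-server, a roughly equivalent manual run would be the following (the request count is an arbitrary example):

    redis-benchmark -t lpush,lpop,set,sadd,get -n 1000000 -q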

OpenBenchmarking.org - Redis 6.0.9 - Test: LPUSH - Requests Per Second, More Is Better
AMD AOCC 3.0: 2345671.03 (SE +/- 35143.82, N = 15)
LLVM Clang 12: 2212779.00 (SE +/- 30760.12, N = 3)
AMD AOCC 2.3: 2351340.56 (SE +/- 27675.97, N = 4)
GCC 10.2: 2222217.52 (SE +/- 23396.73, N = 15)
1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3 -march=native

OpenBenchmarking.org - Redis 6.0.9 - Test: LPOP - Requests Per Second, More Is Better
AMD AOCC 3.0: 3766645.92 (SE +/- 45854.80, N = 15)
LLVM Clang 12: 3649832.58 (SE +/- 31635.54, N = 3)
AMD AOCC 2.3: 3589202.59 (SE +/- 30792.29, N = 8)
GCC 10.2: 3549910.50 (SE +/- 26197.04, N = 3)
1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3 -march=native

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Zstd Compression 1.4.9 - Compression Level: 8 - Decompression Speed - MB/s, More Is Better
AMD AOCC 3.0: 4463.1
LLVM Clang 12: 4352.4
AMD AOCC 2.3: 4468.2
GCC 10.2: 4617.1
Standard errors reported for three of the four runs: SE +/- 8.16, N = 11; SE +/- 37.90, N = 2; SE +/- 26.73, N = 3
1. (CC) gcc options: -O3 -march=native -pthread -lz -llzma

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - LZ4 Compression 1.9.3 - Compression Level: 3 - Compression Speed - MB/s, More Is Better
AMD AOCC 3.0: 72.30 (SE +/- 0.21, N = 3)
LLVM Clang 12: 68.30 (SE +/- 0.19, N = 3)
AMD AOCC 2.3: 72.21 (SE +/- 0.77, N = 5)
GCC 10.2: 72.36 (SE +/- 0.86, N = 3)
1. (CC) gcc options: -O3

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) run on the CPU with a sample 1080p video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - AOM AV1 2.1-rc - Encoder Mode: Speed 4 Two-Pass - Frames Per Second, More Is Better
LLVM Clang 12: 9.74 (SE +/- 0.05, N = 3)
GCC 10.2: 9.20 (SE +/- 0.02, N = 3)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - ONNX Runtime 1.6 - Model: bertsquad-10 - Device: OpenMP CPU - Inferences Per Minute, More Is Better
AMD AOCC 3.0: 649 (SE +/- 5.59, N = 12; -fopenmp=libomp)
LLVM Clang 12: 634 (SE +/- 5.80, N = 3; -fopenmp=libomp)
AMD AOCC 2.3: 646 (SE +/- 6.07, N = 12; -fopenmp=libomp)
GCC 10.2: 614 (SE +/- 6.71, N = 3; -fopenmp)
1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -ldl -lrt

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - NCNN 20201218 - Target: CPU - Model: yolov4-tiny - ms, Fewer Is Better
AMD AOCC 3.0: 21.93 (SE +/- 0.11, N = 4; -lomp; MIN: 21.28 / MAX: 24.81)
LLVM Clang 12: 21.70 (SE +/- 0.13, N = 3; -lomp; MIN: 21.21 / MAX: 30.18)
AMD AOCC 2.3: 21.66 (SE +/- 0.14, N = 3; -lomp; MIN: 21.17 / MAX: 27.18)
GCC 10.2: 20.77 (SE +/- 0.17, N = 15; -lgomp; MIN: 19.69 / MAX: 43.19)
1. (CXX) g++ options: -O3 -march=native -rdynamic -lpthread

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - simdjson 0.8.2 - Throughput Test: Kostya - GB/s, More Is Better
AMD AOCC 3.0: 3.61 (SE +/- 0.01, N = 3)
LLVM Clang 12: 3.71 (SE +/- 0.03, N = 3)
AMD AOCC 2.3: 3.53 (SE +/- 0.03, N = 3)
GCC 10.2: 3.72 (SE +/- 0.03, N = 3)
1. (CXX) g++ options: -O3 -march=native -pthread

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility and using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as ultimately the successor to WebP. WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading support compared to WebP. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - WebP2 Image Encode 20210126 - Encode Settings: Quality 100, Lossless Compression - Seconds, Fewer Is Better
AMD AOCC 3.0: 356.82 (SE +/- 1.38, N = 3)
LLVM Clang 12: 349.02 (SE +/- 0.58, N = 3)
AMD AOCC 2.3: 357.90 (SE +/- 1.28, N = 3)
GCC 10.2: 367.37 (SE +/- 0.42, N = 3)
1. (CXX) g++ options: -O3 -march=native -fno-rtti -rdynamic -ljpeg -lgif -lwebp -lwebpdemux -lpthread

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.1 - Tuning: Visual Quality Optimized - Input: Bosphorus 1080p (frames per second, more is better):
AMD AOCC 3.0: 221.59 (SE +/- 0.39, N = 3)
LLVM Clang 12: 219.12 (SE +/- 0.82, N = 3)
AMD AOCC 2.3: 230.19 (SE +/- 0.90, N = 3)
GCC 10.2: 228.96 (SE +/- 0.68, N = 3)
1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inference and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.6 - Model: fcn-resnet101-11 - Device: OpenMP CPU (inferences per minute, more is better):
AMD AOCC 3.0: 102 (SE +/- 0.44, N = 3; -fopenmp=libomp)
LLVM Clang 12: 104 (SE +/- 0.29, N = 3; -fopenmp=libomp)
AMD AOCC 2.3: 103 (SE +/- 0.44, N = 3; -fopenmp=libomp)
GCC 10.2: 99 (SE +/- 0.17, N = 3; -fopenmp)
1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -ldl -lrt

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33 - Operation: Enhanced (iterations per minute, more is better):
AMD AOCC 3.0: 452
LLVM Clang 12: 457
AMD AOCC 2.3: 461
GCC 10.2: 439
(SE +/- 0.33, N = 3 reported for three of the four configurations)
1. (CC) gcc options: -fopenmp -O3 -march=native -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 - Compression Level: 1 - Decompression Speed (MB/s, more is better):
AMD AOCC 3.0: 13144.8 (SE +/- 106.53, N = 3)
LLVM Clang 12: 13305.3 (SE +/- 93.71, N = 3)
AMD AOCC 2.3: 13595.3 (SE +/- 59.50, N = 3)
GCC 10.2: 13771.1 (SE +/- 38.97, N = 3)
1. (CC) gcc options: -O3
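
A minimal round-trip through the LZ4 block API gives a sense of the code paths being compiled here; the test profile itself measures the sample Ubuntu ISO at the listed compression levels, so this is illustration only.

    // LZ4 compress/decompress round-trip (sketch only).
    #include <cstdio>
    #include <vector>
    #include <lz4.h>

    int main() {
        const char src[] = "the quick brown fox jumps over the lazy dog";
        const int srcSize = (int)sizeof(src);
        std::vector<char> dst(LZ4_compressBound(srcSize));   // worst-case output size
        int csize = LZ4_compress_default(src, dst.data(), srcSize, (int)dst.size());
        std::vector<char> back(srcSize);
        int dsize = LZ4_decompress_safe(dst.data(), back.data(), csize, srcSize);
        std::printf("compressed %d -> %d bytes, round-trip %s\n",
                    srcSize, csize, dsize == srcSize ? "ok" : "failed");
        return 0;
    }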

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.

Redis 6.0.9 - Test: SET (requests per second, more is better):
AMD AOCC 3.0: 2719036.20 (SE +/- 14014.88, N = 3)
LLVM Clang 12: 2762047.50 (SE +/- 28596.87, N = 3)
AMD AOCC 2.3: 2719539.83 (SE +/- 23132.25, N = 3)
GCC 10.2: 2640316.17 (SE +/- 26145.63, N = 15)
1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3 -march=native
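
The profile hammers the server with SET-style requests. Purely as an illustration of the round-trip being measured (an assumption for illustration, not part of the test profile), a minimal client using the hiredis C API looks like this:

    // hiredis SET/GET round-trip against a local Redis server (sketch only).
    #include <cstdio>
    #include <hiredis/hiredis.h>

    int main() {
        redisContext *c = redisConnect("127.0.0.1", 6379);   // assumes a local server
        if (!c || c->err) { std::fprintf(stderr, "connect failed\n"); return 1; }
        redisReply *r = (redisReply *)redisCommand(c, "SET %s %s", "key:1", "value");
        freeReplyObject(r);
        r = (redisReply *)redisCommand(c, "GET %s", "key:1");
        std::printf("GET -> %s\n", r->str);
        freeReplyObject(r);
        redisFree(c);
        return 0;
    }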

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.4 - Preset: Exhaustive (seconds, fewer is better):
AMD AOCC 3.0: 51.45 (SE +/- 0.09, N = 3)
LLVM Clang 12: 51.66 (SE +/- 0.07, N = 3)
AMD AOCC 2.3: 50.81 (SE +/- 0.07, N = 3)
GCC 10.2: 52.93 (SE +/- 0.09, N = 3)
1. (CXX) g++ options: -O3 -march=native -flto -pthread

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

Liquid-DSP 2021.01.31 - Threads: 16 - Buffer Length: 256 - Filter Length: 57 (samples/s, more is better):
AMD AOCC 3.0: 1086033333 (SE +/- 3939684.14, N = 3)
LLVM Clang 12: 1067666667 (SE +/- 3699249.17, N = 3)
AMD AOCC 2.3: 1067266667 (SE +/- 3628743.28, N = 3)
GCC 10.2: 1111200000 (SE +/- 5768882.04, N = 3)
1. (CC) gcc options: -O3 -march=native -pthread -lm -lc -lliquid

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms, fewer is better):
AMD AOCC 3.0: 4.09663 (SE +/- 0.01273, N = 3; -fopenmp=libomp; MIN: 3.9)
LLVM Clang 12: 4.11930 (SE +/- 0.01294, N = 3; -fopenmp=libomp; MIN: 3.88)
GCC 10.2: 3.95979 (SE +/- 0.00506, N = 3; -fopenmp; MIN: 3.76)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -pie -lpthread -ldl

x265

This is a simple test of the x265 encoder run on the CPU with 1080p and 4K options for H.265 video encode performance. Learn more via the OpenBenchmarking.org test page.

x265 3.4 - Video Input: Bosphorus 4K (frames per second, more is better):
AMD AOCC 3.0: 26.96 (SE +/- 0.12, N = 3)
LLVM Clang 12: 28.02 (SE +/- 0.09, N = 3)
AMD AOCC 2.3: 27.49 (SE +/- 0.08, N = 3)
GCC 10.2: 27.83 (SE +/- 0.16, N = 3)
1. (CXX) g++ options: -O3 -march=native -rdynamic -lpthread -lrt -ldl -lnuma

Tachyon

This is a test of the threaded Tachyon, a parallel ray-tracing system, measuring the time to ray-trace a sample scene. Learn more via the OpenBenchmarking.org test page.

Tachyon 0.99b6 - Total Time (seconds, fewer is better):
AMD AOCC 3.0: 45.00 (SE +/- 0.14, N = 3)
LLVM Clang 12: 45.07 (SE +/- 0.09, N = 3)
AMD AOCC 2.3: 46.13 (SE +/- 0.20, N = 3)
GCC 10.2: 44.39 (SE +/- 0.13, N = 3)
1. (CC) gcc options: -m64 -O3 -fomit-frame-pointer -ffast-math -ltachyon -lm -lpthread

x265

This is a simple test of the x265 encoder run on the CPU with 1080p and 4K options for H.265 video encode performance. Learn more via the OpenBenchmarking.org test page.

x265 3.4 - Video Input: Bosphorus 1080p (frames per second, more is better):
AMD AOCC 3.0: 88.70 (SE +/- 0.35, N = 3)
LLVM Clang 12: 92.16 (SE +/- 0.13, N = 3)
AMD AOCC 2.3: 89.74 (SE +/- 0.28, N = 3)
GCC 10.2: 89.80 (SE +/- 0.19, N = 3)
1. (CXX) g++ options: -O3 -march=native -rdynamic -lpthread -lrt -ldl -lnuma

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.2.3 - Target: CPU - Model: SqueezeNet v1.1 (ms, fewer is better):
AMD AOCC 3.0: 204.60 (SE +/- 0.48, N = 3; -fopenmp=libomp; MIN: 203.72 / MAX: 206.33)
LLVM Clang 12: 206.36 (SE +/- 1.02, N = 3; -fopenmp=libomp; MIN: 204.24 / MAX: 209.24)
AMD AOCC 2.3: 203.64 (SE +/- 0.85, N = 3; -fopenmp=libomp; MIN: 201.91 / MAX: 206.13)
GCC 10.2: 211.57 (SE +/- 0.57, N = 3; -fopenmp; MIN: 206.88 / MAX: 212.83)
1. (CXX) g++ options: -O3 -march=native -pthread -fvisibility=hidden -rdynamic -ldl

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.

GNU Radio - Test: Hilbert Transform (MiB/s, more is better):
AMD AOCC 3.0: 523.6 (SE +/- 1.23, N = 8)
LLVM Clang 12: 522.8 (SE +/- 0.58, N = 9)
AMD AOCC 2.3: 534.8 (SE +/- 1.95, N = 9)
GCC 10.2: 515.8 (SE +/- 0.63, N = 9)
1. 3.8.1.0

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, fewer is better):
AMD AOCC 3.0: 9.57194 (SE +/- 0.01936, N = 3; -fopenmp=libomp; MIN: 9.46)
LLVM Clang 12: 9.59442 (SE +/- 0.01452, N = 3; -fopenmp=libomp; MIN: 9.47)
GCC 10.2: 9.25967 (SE +/- 0.01340, N = 3; -fopenmp; MIN: 9.1)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -pie -lpthread -ldl

SQLite Speedtest

This is a benchmark of SQLite's speedtest1 benchmark program with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.

SQLite Speedtest 3.30 - Timed Time - Size 1,000 (seconds, fewer is better):
LLVM Clang 12: 43.26 (SE +/- 0.13, N = 3)
AMD AOCC 2.3: 44.11 (SE +/- 0.03, N = 3)
GCC 10.2: 42.60 (SE +/- 0.13, N = 3)
1. (CC) gcc options: -O3 -march=native -ldl -lz -lpthread
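
speedtest1 generates and times a large batch of SQL statements. For a sense of the API surface under test (not speedtest1 itself), a minimal sketch against the public SQLite C API using an in-memory database:

    // Create, insert, and query through the SQLite C API (sketch only).
    #include <cstdio>
    #include <sqlite3.h>

    int main() {
        sqlite3 *db = nullptr;
        if (sqlite3_open(":memory:", &db) != SQLITE_OK) return 1;
        sqlite3_exec(db, "CREATE TABLE t(a INTEGER, b TEXT);", nullptr, nullptr, nullptr);
        sqlite3_exec(db, "INSERT INTO t VALUES (1,'x'), (2,'y');", nullptr, nullptr, nullptr);
        sqlite3_exec(db, "SELECT count(*) FROM t;",
                     [](void *, int, char **vals, char **) {
                         std::printf("rows: %s\n", vals[0]);   // prints the count column
                         return 0;
                     },
                     nullptr, nullptr);
        sqlite3_close(db);
        return 0;
    }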

Ngspice

Ngspice is an open-source SPICE circuit simulator. Ngspice was originally based on the Berkeley SPICE electronic circuit simulator. Ngspice supports basic threading using OpenMP. This test profile is making use of the ISCAS 85 benchmark circuits. Learn more via the OpenBenchmarking.org test page.

Ngspice 34 - Circuit: C7552 (seconds, fewer is better):
AMD AOCC 3.0: 64.91 (SE +/- 0.08, N = 3)
LLVM Clang 12: 64.52 (SE +/- 0.20, N = 3)
AMD AOCC 2.3: 64.89 (SE +/- 0.54, N = 3)
GCC 10.2: 62.82 (SE +/- 0.15, N = 3)
1. (CC) gcc options: -O3 -march=native -fopenmp -lm -lstdc++ -lfftw3 -lXaw -lXmu -lXt -lXext -lX11 -lSM -lICE

Timed MrBayes Analysis

This test performs a Bayesian analysis of a set of primate genome sequences in order to estimate their phylogeny. Learn more via the OpenBenchmarking.org test page.

Timed MrBayes Analysis 3.2.7 - Primate Phylogeny Analysis (seconds, fewer is better):
AMD AOCC 3.0: 57.99 (SE +/- 0.10, N = 3)
LLVM Clang 12: 59.23 (SE +/- 0.81, N = 3)
AMD AOCC 2.3: 59.07 (SE +/- 0.10, N = 3)
GCC 10.2: 59.87 (SE +/- 0.15, N = 3)
(additional flag reported for one configuration: -mabm)
1. (CC) gcc options: -mmmx -msse -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msse4a -msha -maes -mavx -mfma -mavx2 -mrdrnd -mbmi -mbmi2 -madx -O3 -std=c99 -pedantic -march=native -lm -lreadline

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 - Compression Level: 9 - Decompression Speed (MB/s, more is better):
AMD AOCC 3.0: 12981.8 (SE +/- 24.92, N = 6)
LLVM Clang 12: 13129.4 (SE +/- 21.95, N = 3)
AMD AOCC 2.3: 13188.4 (SE +/- 53.22, N = 3)
GCC 10.2: 13397.7 (SE +/- 35.65, N = 6)
1. (CC) gcc options: -O3

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.

Redis 6.0.9 - Test: SADD (requests per second, more is better):
AMD AOCC 3.0: 2948093.68 (SE +/- 27502.49, N = 15)
LLVM Clang 12: 2954866.80 (SE +/- 29853.73, N = 3)
AMD AOCC 2.3: 2961165.67 (SE +/- 40118.44, N = 3)
GCC 10.2: 3041527.37 (SE +/- 39730.96, N = 15)
1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3 -march=native

RNNoise

RNNoise is a recurrent neural network for audio noise reduction developed by Mozilla and Xiph.Org. This test profile is a single-threaded test measuring the time to denoise a sample 26-minute-long 16-bit RAW audio file using this recurrent neural network noise suppression library. Learn more via the OpenBenchmarking.org test page.

RNNoise 2020-06-28 (seconds, fewer is better):
AMD AOCC 3.0: 14.47 (SE +/- 0.16, N = 3)
LLVM Clang 12: 14.33 (SE +/- 0.17, N = 3)
AMD AOCC 2.3: 14.04 (SE +/- 0.17, N = 3)
GCC 10.2: 14.20 (SE +/- 0.04, N = 3)
1. (CC) gcc options: -O3 -march=native -pedantic -fvisibility=hidden

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 - Compression Level: 3 - Decompression Speed (MB/s, more is better):
AMD AOCC 3.0: 13010.7 (SE +/- 30.75, N = 3)
LLVM Clang 12: 13082.4 (SE +/- 46.92, N = 3)
AMD AOCC 2.3: 13212.6 (SE +/- 15.17, N = 5)
GCC 10.2: 13400.1 (SE +/- 48.22, N = 3)
1. (CC) gcc options: -O3

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: alexnet (ms, fewer is better):
AMD AOCC 3.0: 11.11 (SE +/- 0.04, N = 4; -lomp; MIN: 10.92 / MAX: 15.76)
LLVM Clang 12: 11.14 (SE +/- 0.04, N = 3; -lomp; MIN: 10.96 / MAX: 13.28)
AMD AOCC 2.3: 11.01 (SE +/- 0.00, N = 2; -lomp; MIN: 10.84 / MAX: 12.26)
GCC 10.2: 10.82 (SE +/- 0.09, N = 15; -lgomp; MIN: 10.41 / MAX: 17.59)
1. (CXX) g++ options: -O3 -march=native -rdynamic -lpthread

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 - Compression Level: 1 - Compression Speed (MB/s, more is better):
AMD AOCC 3.0: 12124.81 (SE +/- 49.14, N = 3)
LLVM Clang 12: 12227.47 (SE +/- 66.76, N = 3)
AMD AOCC 2.3: 12456.17 (SE +/- 57.72, N = 3)
GCC 10.2: 12330.56 (SE +/- 76.55, N = 3)
1. (CC) gcc options: -O3

Gcrypt Library

Libgcrypt is a general-purpose cryptographic library developed as part of the GnuPG project. This benchmark runs libgcrypt's integrated benchmark command with the cipher/MAC/hash repetition count set to 50, as a simple, high-level look at the overall crypto performance of the system under test. Learn more via the OpenBenchmarking.org test page.

Gcrypt Library 1.9 (seconds, fewer is better):
AMD AOCC 3.0: 175.69 (SE +/- 0.17, N = 3)
LLVM Clang 12: 172.90 (SE +/- 1.10, N = 3)
AMD AOCC 2.3: 173.31 (SE +/- 1.67, N = 3)
GCC 10.2: 171.19 (SE +/- 0.29, N = 3)
1. (CC) gcc options: -O3 -march=native -fvisibility=hidden -lgpg-error
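
The libgcrypt benchmark loops over cipher, MAC, and hash primitives. As a small illustration of the kind of call being exercised (not the benchmark command itself), a one-shot SHA-256 through the public API:

    // One-shot SHA-256 with libgcrypt (sketch only).
    #include <cstdio>
    #include <gcrypt.h>

    int main() {
        gcry_check_version(nullptr);                       // library initialization
        const char msg[] = "abc";
        unsigned char digest[32];                          // SHA-256 output size
        gcry_md_hash_buffer(GCRY_MD_SHA256, digest, msg, 3);
        for (unsigned char b : digest) std::printf("%02x", b);
        std::printf("\n");
        return 0;
    }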

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless (encode time in seconds, fewer is better):
AMD AOCC 3.0: 13.84 (SE +/- 0.05, N = 3)
LLVM Clang 12: 13.91 (SE +/- 0.12, N = 3)
AMD AOCC 2.3: 13.64 (SE +/- 0.11, N = 3)
GCC 10.2: 13.99 (SE +/- 0.11, N = 3)
1. (CC) gcc options: -fvisibility=hidden -O3 -march=native -pthread -lm -ljpeg -lpng16 -ltiff
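
The lossless setting here corresponds roughly to libwebp's simple lossless encode entry point; the sketch below feeds a flat gray buffer instead of the sample JPEG, purely for illustration.

    // Lossless RGB encode via libwebp's simple API (sketch only).
    #include <cstdint>
    #include <cstdio>
    #include <vector>
    #include <webp/encode.h>

    int main() {
        const int w = 64, h = 64;
        std::vector<uint8_t> rgb(w * h * 3, 128);          // placeholder flat gray image
        uint8_t *out = nullptr;
        size_t n = WebPEncodeLosslessRGB(rgb.data(), w, h, w * 3, &out);
        std::printf("encoded %zu bytes\n", n);
        WebPFree(out);
        return 0;
    }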

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, fewer is better):
AMD AOCC 3.0: 3.58364 (SE +/- 0.00604, N = 3; -fopenmp=libomp; MIN: 3.44)
LLVM Clang 12: 3.64485 (SE +/- 0.01444, N = 3; -fopenmp=libomp; MIN: 3.5)
GCC 10.2: 3.55467 (SE +/- 0.00753, N = 3; -fopenmp; MIN: 3.46)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -pie -lpthread -ldl

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: vgg16 (ms, fewer is better):
AMD AOCC 3.0: 58.96 (SE +/- 0.19, N = 4; -lomp; MIN: 57.64 / MAX: 66.68)
LLVM Clang 12: 57.51 (SE +/- 0.17, N = 3; -lomp; MIN: 56.17 / MAX: 62.53)
AMD AOCC 2.3: 58.11 (SE +/- 0.01, N = 3; -lomp; MIN: 56.81 / MAX: 67.53)
GCC 10.2: 57.89 (SE +/- 0.12, N = 15; -lgomp; MIN: 55.89 / MAX: 80.86)
1. (CXX) g++ options: -O3 -march=native -rdynamic -lpthread

Crypto++

Crypto++ is a C++ class library of cryptographic algorithms. Learn more via the OpenBenchmarking.org test page.

Crypto++ 8.2 - Test: Unkeyed Algorithms (MiB/second, more is better):
AMD AOCC 3.0: 538.88 (SE +/- 2.13, N = 3)
LLVM Clang 12: 552.38 (SE +/- 1.73, N = 3)
AMD AOCC 2.3: 550.46 (SE +/- 1.69, N = 3)
GCC 10.2: 545.91 (SE +/- 3.29, N = 15)
1. (CXX) g++ options: -O3 -march=native -fPIC -pthread -pipe
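
The unkeyed-algorithms group is dominated by hash and checksum functions. A typical Crypto++ hashing pipeline, shown only to illustrate the kind of template-heavy C++ the compilers are optimizing here:

    // SHA-256 digest of a string via a Crypto++ pipeline (sketch only).
    #include <iostream>
    #include <string>
    #include <cryptopp/sha.h>
    #include <cryptopp/hex.h>
    #include <cryptopp/filters.h>

    int main() {
        std::string digest;
        CryptoPP::SHA256 hash;
        CryptoPP::StringSource("message to hash", true,
            new CryptoPP::HashFilter(hash,
                new CryptoPP::HexEncoder(
                    new CryptoPP::StringSink(digest))));   // pipeline owns and frees the filters
        std::cout << digest << "\n";
        return 0;
    }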

OpenFOAM

OpenFOAM is the leading free, open source software for computational fluid dynamics (CFD). Learn more via the OpenBenchmarking.org test page.

OpenFOAM 8 - Input: Motorbike 30M (seconds, fewer is better):
LLVM Clang 12: 100.16 (SE +/- 0.06, N = 3)
GCC 10.2: 97.75 (SE +/- 0.08, N = 3)
1. (CXX) g++ options: -std=c++11 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) run on the CPU with a sample 1080p video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.1-rc - Encoder Mode: Speed 8 Realtime (frames per second, more is better):
LLVM Clang 12: 118.22 (SE +/- 1.02, N = 3)
GCC 10.2: 121.13 (SE +/- 0.75, N = 3)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

Ngspice

Ngspice is an open-source SPICE circuit simulator. Ngspice was originally based on the Berkeley SPICE electronic circuit simulator. Ngspice supports basic threading using OpenMP. This test profile is making use of the ISCAS 85 benchmark circuits. Learn more via the OpenBenchmarking.org test page.

Ngspice 34 - Circuit: C2670 (seconds, fewer is better):
AMD AOCC 3.0: 73.28 (SE +/- 0.13, N = 3)
LLVM Clang 12: 72.46 (SE +/- 0.12, N = 3)
AMD AOCC 2.3: 72.78 (SE +/- 0.06, N = 3)
GCC 10.2: 71.60 (SE +/- 0.21, N = 3)
1. (CC) gcc options: -O3 -march=native -fopenmp -lm -lstdc++ -lfftw3 -lXaw -lXmu -lXt -lXext -lX11 -lSM -lICE

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.

GNU Radio - Test: Signal Source (Cosine) (MiB/s, more is better):
AMD AOCC 3.0: 4704.7 (SE +/- 60.32, N = 8)
LLVM Clang 12: 4769.8 (SE +/- 10.26, N = 9)
AMD AOCC 2.3: 4661.6 (SE +/- 22.36, N = 9)
GCC 10.2: 4715.4 (SE +/- 16.39, N = 9)
1. 3.8.1.0

x264

This is a simple test of the x264 encoder run on the CPU (OpenCL support disabled) with a sample video file. Learn more via the OpenBenchmarking.org test page.

x264 2019-12-17 - H.264 Video Encoding (frames per second, more is better):
AMD AOCC 3.0: 210.35 (SE +/- 1.75, N = 9; -mstack-alignment=64)
LLVM Clang 12: 213.57 (SE +/- 1.62, N = 12; -mstack-alignment=64)
AMD AOCC 2.3: 210.72 (SE +/- 1.86, N = 8; -mstack-alignment=64)
GCC 10.2: 208.93 (SE +/- 1.66, N = 9)
1. (CC) gcc options: -ldl -lavformat -lavcodec -lavutil -lswscale -m64 -lm -lpthread -O3 -ffast-math -march=native -std=gnu99 -fPIC -fomit-frame-pointer -fno-tree-vectorize

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless, Highest Compression (encode time in seconds, fewer is better):
AMD AOCC 3.0: 28.19 (SE +/- 0.04, N = 3)
LLVM Clang 12: 28.67 (SE +/- 0.03, N = 3)
AMD AOCC 2.3: 28.43 (SE +/- 0.06, N = 3)
GCC 10.2: 28.81 (SE +/- 0.08, N = 3)
1. (CC) gcc options: -fvisibility=hidden -O3 -march=native -pthread -lm -ljpeg -lpng16 -ltiff

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.9 - Compression Level: 19 - Compression Speed (MB/s, more is better):
AMD AOCC 3.0: 50.5 (SE +/- 0.06, N = 3)
LLVM Clang 12: 51.3 (SE +/- 0.03, N = 3)
AMD AOCC 2.3: 50.7 (SE +/- 0.07, N = 3)
GCC 10.2: 51.6 (SE +/- 0.20, N = 3)
1. (CC) gcc options: -O3 -march=native -pthread -lz -llzma
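
For reference, level 19 maps onto zstd's simple one-shot API as sketched below. The test profile measures compression of the sample disk image at these levels rather than hand-written code like this, and the "long mode" runs additionally enable long-distance matching through the advanced API.

    // One-shot zstd compress/decompress at level 19 (sketch only).
    #include <cstdio>
    #include <vector>
    #include <zstd.h>

    int main() {
        const char src[] = "sample payload for a level-19 compression call";
        size_t bound = ZSTD_compressBound(sizeof(src));
        std::vector<char> dst(bound);
        size_t csize = ZSTD_compress(dst.data(), bound, src, sizeof(src), 19);
        if (ZSTD_isError(csize)) { std::fprintf(stderr, "%s\n", ZSTD_getErrorName(csize)); return 1; }
        std::vector<char> back(sizeof(src));
        size_t dsize = ZSTD_decompress(back.data(), back.size(), dst.data(), csize);
        std::printf("%zu -> %zu -> %zu bytes\n", sizeof(src), csize, dsize);
        return 0;
    }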

JPEG XL

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.

JPEG XL 0.3.3 - Input: PNG - Encode Speed: 7 (MP/s, more is better):
AMD AOCC 3.0: 11.17 (SE +/- 0.06, N = 3; -Xclang -mrelax-all)
LLVM Clang 12: 11.41 (SE +/- 0.06, N = 3; -Xclang -mrelax-all)
AMD AOCC 2.3: 11.31 (SE +/- 0.03, N = 3; -Xclang -mrelax-all)
GCC 10.2: 11.20 (SE +/- 0.03, N = 3)
1. (CXX) g++ options: -O3 -march=native -funwind-tables -O2 -pthread -fPIE -pie -ldl

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.

GNU Radio - Test: IIR Filter (MiB/s, more is better):
AMD AOCC 3.0: 838.3 (SE +/- 2.72, N = 8)
LLVM Clang 12: 835.4 (SE +/- 1.22, N = 9)
AMD AOCC 2.3: 853.3 (SE +/- 2.72, N = 9)
GCC 10.2: 843.1 (SE +/- 1.32, N = 9)
1. 3.8.1.0

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100 (encode time in seconds, fewer is better):
AMD AOCC 3.0: 1.669 (SE +/- 0.011, N = 3)
LLVM Clang 12: 1.638 (SE +/- 0.003, N = 3)
AMD AOCC 2.3: 1.673 (SE +/- 0.006, N = 3)
GCC 10.2: 1.652 (SE +/- 0.018, N = 4)
1. (CC) gcc options: -fvisibility=hidden -O3 -march=native -pthread -lm -ljpeg -lpng16 -ltiff

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.8.2 - Video Input: Summer Nature 1080p (FPS, more is better):
AMD AOCC 3.0: 959.29 (SE +/- 1.29, N = 3; MIN: 714.89 / MAX: 1039.77)
LLVM Clang 12: 979.34 (SE +/- 2.97, N = 3; MIN: 717.55 / MAX: 1062.34)
AMD AOCC 2.3: 976.93 (SE +/- 9.00, N = 3; MIN: 633.01 / MAX: 1069.88)
GCC 10.2: 971.79 (SE +/- 1.38, N = 3; -lm; MIN: 732.02 / MAX: 1055.82)
1. (CC) gcc options: -O3 -march=native -pthread

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) run on the CPU with a sample 1080p video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.1-rc - Encoder Mode: Speed 6 Two-Pass (frames per second, more is better):
LLVM Clang 12: 30.03 (SE +/- 0.13, N = 3)
GCC 10.2: 29.43 (SE +/- 0.26, N = 3)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

Opus Codec Encoding

Opus is an open audio codec. Opus is a lossy audio compression format designed primarily for interactive real-time applications over the Internet. This test uses Opus-Tools and measures the time required to encode a WAV file to Opus. Learn more via the OpenBenchmarking.org test page.

Opus Codec Encoding 1.3.1 - WAV To Opus Encode (seconds, fewer is better):
LLVM Clang 12: 5.589 (SE +/- 0.037, N = 5)
GCC 10.2: 5.484 (SE +/- 0.031, N = 5)
(additional flag reported for one configuration: -fvisibility=hidden)
1. (CXX) g++ options: -O3 -march=native -logg -lm
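
The test drives the opus-tools encoder over a WAV file; underneath, encoding reduces to per-frame calls into libopus, roughly as sketched here with one 20 ms stereo frame of silence (illustration only).

    // Encode one 20 ms stereo frame with libopus (sketch only).
    #include <cstdio>
    #include <vector>
    #include <opus/opus.h>

    int main() {
        int err = 0;
        OpusEncoder *enc = opus_encoder_create(48000, 2, OPUS_APPLICATION_AUDIO, &err);
        if (err != OPUS_OK) return 1;
        std::vector<opus_int16> pcm(960 * 2, 0);           // 960 samples/channel = 20 ms at 48 kHz
        std::vector<unsigned char> packet(4000);
        opus_int32 n = opus_encode(enc, pcm.data(), 960, packet.data(), (opus_int32)packet.size());
        std::printf("encoded %d bytes\n", (int)n);
        opus_encoder_destroy(enc);
        return 0;
    }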

WavPack Audio Encoding

This test times how long it takes to encode a sample WAV file to WavPack format with very high quality settings. Learn more via the OpenBenchmarking.org test page.

WavPack Audio Encoding 5.3 - WAV To WavPack (seconds, fewer is better):
AMD AOCC 3.0: 10.34 (SE +/- 0.01, N = 5)
LLVM Clang 12: 10.31 (SE +/- 0.11, N = 5)
AMD AOCC 2.3: 10.29 (SE +/- 0.03, N = 5)
GCC 10.2: 10.15 (SE +/- 0.10, N = 5)
1. (CXX) g++ options: -O3 -march=native -rdynamic

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.

GNU Radio - Test: FIR Filter (MiB/s, more is better):
AMD AOCC 3.0: 1065.4 (SE +/- 3.65, N = 8)
LLVM Clang 12: 1060.1 (SE +/- 3.22, N = 9)
AMD AOCC 2.3: 1080.3 (SE +/- 4.68, N = 9)
GCC 10.2: 1063.5 (SE +/- 2.09, N = 9)
1. 3.8.1.0

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: resnet18 (ms, fewer is better):
AMD AOCC 3.0: 13.86 (SE +/- 0.11, N = 4; -lomp; MIN: 13.44 / MAX: 16.35)
LLVM Clang 12: 13.91 (SE +/- 0.08, N = 3; -lomp; MIN: 13.66 / MAX: 14.54)
AMD AOCC 2.3: 14.08 (SE +/- 0.21, N = 3; -lomp; MIN: 13.56 / MAX: 21.13)
GCC 10.2: 14.11 (SE +/- 0.05, N = 15; -lgomp; MIN: 13.84 / MAX: 23.15)
1. (CXX) g++ options: -O3 -march=native -rdynamic -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, fewer is better):
AMD AOCC 3.0: 1760.57 (SE +/- 3.74, N = 3; -fopenmp=libomp; MIN: 1745.87)
LLVM Clang 12: 1792.27 (SE +/- 9.12, N = 3; -fopenmp=libomp; MIN: 1766.32)
GCC 10.2: 1773.67 (SE +/- 5.00, N = 3; -fopenmp; MIN: 1750.26)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -pie -lpthread -ldl

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.9 - Compression Level: 8, Long Mode - Decompression Speed (MB/s, more is better):
AMD AOCC 2.3: 4805.6
GCC 10.2: 4886.2
(SE +/- 29.99, N = 3 reported for one of the two configurations)
1. (CC) gcc options: -O3 -march=native -pthread -lz -llzma

JPEG XL

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.

JPEG XL 0.3.3 - Input: PNG - Encode Speed: 5 (MP/s, more is better):
AMD AOCC 3.0: 73.64 (SE +/- 0.15, N = 3; -Xclang -mrelax-all)
LLVM Clang 12: 74.77 (SE +/- 0.04, N = 3; -Xclang -mrelax-all)
AMD AOCC 2.3: 74.61 (SE +/- 0.11, N = 3; -Xclang -mrelax-all)
GCC 10.2: 74.12 (SE +/- 0.03, N = 3)
1. (CXX) g++ options: -O3 -march=native -funwind-tables -O2 -pthread -fPIE -pie -ldl

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.9.0 - Encoder Speed: 10, Lossless (seconds, fewer is better):
AMD AOCC 3.0: 4.837 (SE +/- 0.015, N = 3)
LLVM Clang 12: 4.807 (SE +/- 0.038, N = 3)
AMD AOCC 2.3: 4.832 (SE +/- 0.041, N = 3)
GCC 10.2: 4.875 (SE +/- 0.022, N = 3)
1. (CXX) g++ options: -O3 -fPIC -lm

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.9 - Compression Level: 19, Long Mode - Compression Speed (MB/s, more is better):
AMD AOCC 3.0: 36.4 (SE +/- 0.00, N = 3)
LLVM Clang 12: 36.8 (SE +/- 0.03, N = 3)
AMD AOCC 2.3: 36.7 (SE +/- 0.03, N = 3)
GCC 10.2: 36.6 (SE +/- 0.03, N = 3)
1. (CC) gcc options: -O3 -march=native -pthread -lz -llzma

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.13 - Settings: UASTC Level 2 (seconds, fewer is better):
AMD AOCC 3.0: 16.06 (SE +/- 0.02, N = 3)
LLVM Clang 12: 15.91 (SE +/- 0.06, N = 3)
AMD AOCC 2.3: 15.98 (SE +/- 0.02, N = 3)
GCC 10.2: 15.90 (SE +/- 0.05, N = 3)
1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

Timed Godot Game Engine Compilation

This test times how long it takes to compile the Godot Game Engine. Godot is a popular, open-source, cross-platform 2D/3D game engine and is built using the SCons build system and targeting the X11 platform. Learn more via the OpenBenchmarking.org test page.

Timed Godot Game Engine Compilation 3.2.3 - Time To Compile (seconds, fewer is better):
AMD AOCC 3.0: 79.87 (SE +/- 0.16, N = 3)
LLVM Clang 12: 80.21 (SE +/- 0.09, N = 3)
AMD AOCC 2.3: 79.94 (SE +/- 0.28, N = 3)
GCC 10.2: 79.52 (SE +/- 0.19, N = 3)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms, fewer is better):
AMD AOCC 3.0: 0.643093 (SE +/- 0.004823, N = 3; -fopenmp=libomp; MIN: 0.61)
LLVM Clang 12: 0.641231 (SE +/- 0.000908, N = 3; -fopenmp=libomp; MIN: 0.61)
GCC 10.2: 0.638664 (SE +/- 0.000722, N = 3; -fopenmp; MIN: 0.61)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -pie -lpthread -ldl

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.9.0 - Encoder Speed: 10 (seconds, fewer is better):
AMD AOCC 3.0: 2.941 (SE +/- 0.006, N = 3)
LLVM Clang 12: 2.952 (SE +/- 0.016, N = 3)
AMD AOCC 2.3: 2.933 (SE +/- 0.035, N = 3)
GCC 10.2: 2.934 (SE +/- 0.014, N = 3)
1. (CXX) g++ options: -O3 -fPIC -lm

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.

GNU Radio - Test: FM Deemphasis Filter (MiB/s, more is better):
AMD AOCC 3.0: 1055.8 (SE +/- 3.09, N = 8)
LLVM Clang 12: 1054.9 (SE +/- 0.98, N = 9)
AMD AOCC 2.3: 1061.0 (SE +/- 15.16, N = 9)
GCC 10.2: 1055.0 (SE +/- 0.78, N = 9)
1. 3.8.1.0

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, fewer is better):
AMD AOCC 3.0: 17.36 (SE +/- 0.03, N = 3; -fopenmp=libomp; MIN: 16.83)
LLVM Clang 12: 17.34 (SE +/- 0.01, N = 3; -fopenmp=libomp; MIN: 16.81)
GCC 10.2: 17.29 (SE +/- 0.09, N = 3; -fopenmp; MIN: 16.58)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -pie -lpthread -ldl

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.13 - Settings: UASTC Level 3 (seconds, fewer is better):
AMD AOCC 3.0: 28.17 (SE +/- 0.03, N = 3)
LLVM Clang 12: 28.22 (SE +/- 0.04, N = 3)
AMD AOCC 2.3: 28.15 (SE +/- 0.05, N = 3)
GCC 10.2: 28.13 (SE +/- 0.04, N = 3)
1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, fewer is better):
AMD AOCC 3.0: 2760.94 (SE +/- 17.19, N = 3; -fopenmp=libomp; MIN: 2717.59)
LLVM Clang 12: 2757.75 (SE +/- 5.95, N = 3; -fopenmp=libomp; MIN: 2734.73)
GCC 10.2: 2757.52 (SE +/- 2.01, N = 3; -fopenmp; MIN: 2719.35)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -pie -lpthread -ldl

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.1.3 - Model: inception-v3 (ms, fewer is better):
GCC 10.2: 32.34 (SE +/- 0.09, N = 3; MIN: 31.33 / MAX: 42.61)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 1.1.3 - Model: mobilenet-v1-1.0 (ms, fewer is better):
GCC 10.2: 2.351 (SE +/- 0.027, N = 3; MIN: 2.27 / MAX: 7.49)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 1.1.3 - Model: MobileNetV2_224 (ms, fewer is better):
GCC 10.2: 3.240 (SE +/- 0.049, N = 3; MIN: 3.12 / MAX: 11.31)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 1.1.3 - Model: resnet-v2-50 (ms, fewer is better):
GCC 10.2: 25.07 (SE +/- 0.02, N = 3; MIN: 23.97 / MAX: 39.95)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 1.1.3 - Model: SqueezeNetV1.0 (ms, fewer is better):
GCC 10.2: 5.081 (SE +/- 0.010, N = 3; MIN: 4.92 / MAX: 14.74)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Smallpt

Smallpt is a C++ global illumination renderer written in less than 100 lines of code. Global illumination is done via unbiased Monte Carlo path tracing and there is multi-threading support via the OpenMP library. Learn more via the OpenBenchmarking.org test page.

Smallpt 1.0 - Global Illumination Renderer; 128 Samples (seconds, fewer is better):
GCC 10.2: 4.674 (SE +/- 0.015, N = 3)
1. (CXX) g++ options: -fopenmp -O3 -march=native

Crafty

This is a performance test of Crafty, an advanced open-source chess engine. Learn more via the OpenBenchmarking.org test page.

Crafty 25.2 - Elapsed Time (nodes per second, more is better):
GCC 10.2: 11731249 (SE +/- 26371.45, N = 3)
1. (CC) gcc options: -pthread -lstdc++ -fprofile-use -lm

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inference and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.6 - Model: super-resolution-10 - Device: OpenMP CPU (inferences per minute, more is better):
AMD AOCC 3.0: 5976 (SE +/- 38.28, N = 3; -fopenmp=libomp)
LLVM Clang 12: 5937 (SE +/- 55.09, N = 12; -fopenmp=libomp)
AMD AOCC 2.3: 6067 (SE +/- 34.74, N = 3; -fopenmp=libomp)
GCC 10.2: 6721 (SE +/- 215.50, N = 12; -fopenmp)
1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -ldl -lrt

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.

Redis 6.0.9 - Test: GET (requests per second, more is better):
AMD AOCC 3.0: 3545388.92 (SE +/- 11517.61, N = 3)
LLVM Clang 12: 3624414.37 (SE +/- 47796.61, N = 15)
AMD AOCC 2.3: 3658044.77 (SE +/- 58906.79, N = 15)
GCC 10.2: 3470419.90 (SE +/- 36718.95, N = 15)
1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3 -march=native

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, fewer is better):
AMD AOCC 3.0: 2.46850 (SE +/- 0.00340, N = 3; -fopenmp=libomp; MIN: 2.36)
LLVM Clang 12: 2.46561 (SE +/- 0.00451, N = 3; -fopenmp=libomp; MIN: 2.33)
GCC 10.2: 4.46777 (SE +/- 0.30276, N = 15; -fopenmp; MIN: 2.86)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -pie -lpthread -ldl

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.

GNU Radio - Test: Five Back to Back FIR Filters (MiB/s, more is better):
AMD AOCC 3.0: 929.1 (SE +/- 20.61, N = 8)
LLVM Clang 12: 911.2 (SE +/- 17.91, N = 9)
AMD AOCC 2.3: 931.6 (SE +/- 20.04, N = 9)
GCC 10.2: 920.8 (SE +/- 19.67, N = 9)
1. 3.8.1.0

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.9 - Compression Level: 19 - Decompression Speed (MB/s, more is better):
AMD AOCC 3.0: 3608.8 (SE +/- 417.40, N = 3)
LLVM Clang 12: 4000.9 (SE +/- 50.52, N = 3)
AMD AOCC 2.3: 4097.3 (SE +/- 12.18, N = 3)
GCC 10.2: 4251.7 (SE +/- 6.53, N = 3)
1. (CC) gcc options: -O3 -march=native -pthread -lz -llzma

147 Results Shown

Sysbench
Etcpak
Timed LLVM Compilation
C-Ray
GraphicsMagick
LibRaw
NCNN
GraphicsMagick
ASTC Encoder
Etcpak
SVT-AV1
GraphicsMagick
NCNN:
  CPU-v2-v2 - mobilenet-v2
  CPU-v3-v3 - mobilenet-v3
TNN
NCNN
Ogg Audio Encoding
ASTC Encoder
Google SynthMark
Zstd Compression
GraphicsMagick
NCNN
TSCP
NCNN
QuantLib
GraphicsMagick
Etcpak
Liquid-DSP
ONNX Runtime
JPEG XL Decoding
WebP Image Encode
AOM AV1
NCNN
SVT-AV1
libavif avifenc
NCNN
JPEG XL Decoding
NCNN
JPEG XL
LZ4 Compression
NCNN
Zstd Compression
simdjson
Zstd Compression
JPEG XL
simdjson
Basis Universal
simdjson
ONNX Runtime
Liquid-DSP
POV-Ray
WebP2 Image Encode:
  Quality 75, Compression Effort 7
  Quality 95, Compression Effort 7
  Quality 100, Compression Effort 5
libavif avifenc
GraphicsMagick
Basis Universal
libavif avifenc
JPEG XL
Zstd Compression
libavif avifenc
NCNN
JPEG XL
AOM AV1
dav1d
WebP Image Encode
WebP2 Image Encode
SVT-VP9
Zstd Compression
Redis:
  LPUSH
  LPOP
Zstd Compression
LZ4 Compression
AOM AV1
ONNX Runtime
NCNN
simdjson
WebP2 Image Encode
SVT-VP9
ONNX Runtime
GraphicsMagick
LZ4 Compression
Redis
ASTC Encoder
Liquid-DSP
oneDNN
x265
Tachyon
x265
TNN
GNU Radio
oneDNN
SQLite Speedtest
Ngspice
Timed MrBayes Analysis
LZ4 Compression
Redis
RNNoise
LZ4 Compression
NCNN
LZ4 Compression
Gcrypt Library
WebP Image Encode
oneDNN
NCNN
Crypto++
OpenFOAM
AOM AV1
Ngspice
GNU Radio
x264
WebP Image Encode
Zstd Compression
JPEG XL
GNU Radio
WebP Image Encode
dav1d
AOM AV1
Opus Codec Encoding
WavPack Audio Encoding
GNU Radio
NCNN
oneDNN
Zstd Compression
JPEG XL
libavif avifenc
Zstd Compression
Basis Universal
Timed Godot Game Engine Compilation
oneDNN
libavif avifenc
GNU Radio
oneDNN
Basis Universal
oneDNN
Mobile Neural Network:
  inception-v3
  mobilenet-v1-1.0
  MobileNetV2_224
  resnet-v2-50
  SqueezeNetV1.0
Smallpt
Crafty
ONNX Runtime
Redis
oneDNN
GNU Radio
Zstd Compression